Spring Cloud Contract, Completing the Consumer Driven Contracts Story

A new release of Spring Cloud, Hoxton, is gathering steam. Part of the Milestone 2 release is a feature I have been looking forward to for Spring Cloud Contract (SCC): the ability to generate stubs directly from contracts. In this article we are going to take a look at what this means and how to use this new feature.

Consumer Driven Contracts in a Polyglot World

Last year SCC implemented functionality that allowed clients to write contracts without having to be familiar with the Java tech stack. This was done by supporting a YAML DSL for defining contracts and providing two Docker images: one that consumes contracts and validates that a service is meeting them, and another that consumes stubs so it can act as a mock of the service the contracts were written for. For more information on this, check out this blog article by Marcin Grzejszczak.
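
For a sense of what that YAML DSL looks like, here is a minimal sketch of the general shape of a contract; the endpoint and body values are illustrative rather than taken from a real project:

request:
  method: GET
  url: /api/v1/produce
response:
  status: 200
  headers:
    Content-Type: application/json
  body:
    name: "apple"
    quantity: 10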

A common approach in many Java shops is to have a backend written in Java (i.e. the producer service), but the frontend UI (i.e. the consumer) written in JavaScript. By removing the reliance on Java-specific technologies when writing contracts, writing contracts becomes more accessible to frontend developers. Allowing the consumers to write the contracts enables a workflow called Consumer Driven Contract Development. The consumers write a contract, telling a producer what they need in a format that can be validated programmatically, and then a producer developer can write, or update, the producer service to meet the requirements of the contract.

Consumer Driven Contract Development is a desirable development paradigm because it is a “pull” rather than “push” concept. Instead of producers “guessing” what a consumer might need, you allow the consumers to define the behavior they need from the producer. This helps producers avoid building APIs consumers don’t need, or building APIs that are difficult for consumers to, well, consume.

We Keep Waiting on the Service to Change

While SCC’s changes allowed consumers to write contracts with technologies they were more familiar with, the consumer driven contract story was still a bit hollow. SCC had a Docker image that could act as a mock of the service; however, it relied upon the availability of stubs. These stubs were only generated after the contracts were validated against the producer service, confirming the producer service fulfilled the contracts.

A developer building a consumer could write a contract, but until the producer service actually implemented the behavior defined in the contract, the consumer wouldn’t be able to use the mock service. This could stall development on the consumer side for days or even longer depending on the workload and priorities of the developers of the producer service. If we are going to have consumer driven contracts, we need to allow consumers to be able to develop independently of producers. The contracts are still there, and since both parties are developing based on the contract, they should still arrive at the same point.

Running a Mock Service from Contracts

With the Spring Cloud Hoxton M2 release, SCC added the functionality to run a mock service directly from contracts. I have been presenting this year about Spring Cloud Contract and how it can be used to do Collaborative Contract Driven Development; here is my presentation from Spring I/O in Barcelona back in May. I’m happy that the ability to run the mock service directly from contracts now better completes the story in my presentation.

I have created a GitHub project for demonstrating how to use Spring Cloud Contract. The project has repos for the producer, the client (consumer), and the contracts. If you clone the client repo, you can see SCC’s new functionality in action by executing the run-produce-service-mock-no-stubs.sh script. The script contains the following:

docker run --rm \
-e "STUBRUNNER_IDS=com.ibm.developer:produce-service:+:stubs:9876" \
-e "STUBRUNNER_REPOSITORY_ROOT=git://https://github.com/wkorando/produce-contracts.git" \
-e "STUBRUNNER_STUBS_MODE=REMOTE" \
-e "STUBRUNNER_GENERATE_STUBS=TRUE" \
-p "8083:8083" \
-p "9876:9876" \
springcloud/spring-cloud-contract-stub-runner:2.2.0.BUILD-SNAPSHOT

This script is running a Docker image and passing in some environment variables. Let’s take a look at some of the key values being passed in:

  • STUBRUNNER_REPOSITORY_ROOT – Tells the Docker image where the contracts reside
  • STUBRUNNER_STUBS_MODE – Tells the Docker image the contracts are in a remote location (in this case a GitHub repo)
  • STUBRUNNER_GENERATE_STUBS – The newly added variable that tells the Docker image to generate stubs directly from the contracts we sent
  • -p "9876:9876" – Tells the Docker image to have the mock service running on port 9876

Executing the script will have a mock service running on http://localhost:9876. If you look at the documentation that we generate from the contracts, you can see some of what the mock service is capable of doing. If you call the url http://localhost:9876/api/v1/produce you will get back a JSON list of produce items. You can also see how this works when working with a real UI by cloning this repository and running npm run start:contract.
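
For example, once the mock is up you can exercise that endpoint from the command line; the exact payload returned depends on what the contracts define:

curl http://localhost:9876/api/v1/produce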

Conclusion

The GA release of Spring Cloud Hoxton should be available later this fall. The ability to generate stubs straight from contracts is a significant step forward in completing the Consumer Driven Contracts story. It’s a feature I have been looking forward to since I first started with Spring Cloud Contract a few years back. If you are attending Spring One Platform next week and are interested in learning more about contract driven development, be sure to swing by my presentation on Thursday, Collaborative Contract Driven Development.

Keeping Dependencies Up-to-Date with Automated Testing

One of my big passions in software development is automated testing, which can be seen by how often I write on the subject. This passion stems from having spent years working with organizations that struggled to deliver to production in a timely manner, and from the anxiety I would experience during deploys. Too often when a release was pushed to production I would soon hear back that a bug had made it to production, and the commit logs would show “bkorando” for when the bug was introduced. I would feel ashamed that a change I made caused a problem in production, and felt that it was because I wasn’t being careful enough.

Of course I wasn’t the only developer accidentally introducing bugs into production, and the organizations I worked with, and many others, would respond to this problem by introducing new processes. Processes like: having a manager sign off on a release, having a tester manually run through a test script, or requiring extensive justification for why a change should be made. These additional processes did little to resolve the issue of bugs getting to production; they did, however, result in two things:

  1. Production pushes became more painful, which led to them happening less often
  2. Developers were encouraged to limit the scope of their changes to reduce risk

In this article we are going to look at how relying on manual processes hinders an organization’s ability to keep its dependencies up-to-date, why this is a problem, and a use case where an automated test catches a subtle breaking change caused by upgrading a dependency.

Coding On the Shoulders of Giants

As modern software developers, we owe a lot to the developers that came before us. Margaret Hamilton pioneered error handling and recovery techniques while developing the software for the Apollo flight control computer. If you are performing distributed transactions, Hector Garcia-Molina and Kenneth Salem paved the way with the development of the SAGA pattern for rolling back distributed transactions when errors inevitably occur.

Our foremothers and forefathers have allowed our field to advance by pushing the boundaries of what was possible and introducing new concepts and patterns. Often, though, the way we benefit from these pioneering advances is through the use of libraries and frameworks that implement or provide abstractions into those concepts and patterns. While we labor all day building and maintaining applications for our organizations, the reality is the code we write is only a small fraction of the code actually running in production.

Graph depicting the code we write resting at the peak of a pyramid of code (dependencies) it runs on

Just like the applications we maintain, the libraries and frameworks we depend upon are constantly changing: new features are being added, bugs fixed, performance improved, security holes plugged. If you want the hair on the back of your neck to stand up, I would recommend checking out shodan.io, which can be a great resource for showing not only the vulnerabilities of a website, but how to exploit them!

Frozen in Time

Photo of a mosquito in amber by George Poinar, Jr.

In the introduction, I talked about how additional processes would encourage developers to minimize their changes. An easy way of keeping a change set small is by not updating an application’s dependencies. I have quite frequently worked on applications whose dependencies were years old! At times working on these applications made me feel like a paleontologist, examining applications frozen in amber from a time long ago.

From the perspective of an organization that depends upon manual processes, the decision to freeze dependencies is understandable. Updating a dependency, especially a framework dependency like Spring Boot, could impact an entire application. If an entire application might be impacted by a change, then it would need a full regression test, and when that is manual, it is a time consuming and expensive effort. However, the attempt to resolve one issue, preventing bugs from being introduced into production, created another: deploying applications with significantly out-of-date dependencies, which might contain critical security vulnerabilities (CVEs).

Focusing on the Right Issues

Manual regression testing, along with being very time consuming, isn’t a very good way to verify the correctness of an application. Issues with manual regression testing include:

  1. Difficult to impossible to recreate some scenarios (e.g. system failure cases)
  2. Test cases not being executed correctly leading to false positives or false negatives
  3. Test cases not being executed at all

Automated testing can address these issues, as automated tests: execute much more rapidly, require very little manpower to execute, are much more granular and flexible, and are auditable to ensure they are being executed and testing the intended behavior.

While it is nice to sing the praises of automated testing in the abstract, it would be good to see an example of how automated tests can catch a non-trivial, non-obvious bug caused by updating a dependency. As luck would have it, I just so happen to have an example.

Catching an Upgrade Issue

When I was working on a recent article, wiring multiple datasources, I ran into a bit of a frustrating issue. To distill what happened to the essential points: while checking my work against existing articles on the same subject, I tried implementing a version of a solution another author demonstrated. However, this solution didn’t work for me, even though the code looked the same. The gif below demonstrates the issue I ran into, as well as how to write a test that would catch it:

Code can be found on my GitHub. Note the code from the gif above is in branches off of master.

A Perfect and Imperfect Use Case

Almost any use case is going to run into relevancy problems. The problem with this use case, in showing the value of automated testing when upgrading dependencies, is that the failure shows up at startup time. Upgrading to Spring Boot 2.x without updating the properties to use *.jdbc-url causes the application to fail at startup, as it is not able to properly configure an EntityManager. So the effect of this change would be fairly obvious.
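
For reference, the breaking change boils down to a property rename when manually configuring a datasource; the app.datasource prefix here is a hypothetical example rather than a property from the article’s project:

# Spring Boot 1.x binding for a manually configured datasource
app.datasource.url=jdbc:h2:mem:appdb

# Spring Boot 2.x binds the same configuration through HikariCP, which expects jdbc-url
app.datasource.jdbc-url=jdbc:h2:mem:appdb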

On the other hand, this is a perfect use case for demonstrating the usefulness of automated testing for several reasons. Let’s quickly step through some of them:

  1. The test is easy to write – The test case, as can be seen in the above gif (and in the sketch following this list), is pretty simple to write. Recreating the failure scenario is easy and can be done with an in-memory database, so the test is highly portable.
  2. The test covers other concerns – The test isn’t strictly about recreating a narrow error case, i.e. one you’d have to know about before writing the test case. Database communication is important and is something that would require a test regardless. If a dependency upgrade introduced other subtle breaking changes in that area, a test case like this one might catch them as well.
  3. An automated test suite can find multiple issues simultaneously – This issue was found at startup. In a more substantial application it is likely there would be other issues arising from a major version upgrade of a framework dependency like Spring Boot. Even if all the issues surfaced at startup, which wouldn’t always be the case, it would be a grueling process to sequentially fix each issue as it comes up, restarting the application every time. A test suite can catch multiple issues in a single execution, allowing them to be fixed simultaneously.
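
To make the first point concrete, here is a minimal sketch of such a test. It assumes an in-memory database on the test classpath; the class names are illustrative rather than taken from the article’s project:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;

@SpringBootTest
public class ITDataSourceConfiguration {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // If the *.jdbc-url property is missing after a Spring Boot 2.x upgrade,
    // the application context fails to start and this test fails immediately.
    @Test
    public void testDataSourceIsUsable() {
        Integer result = jdbcTemplate.queryForObject("SELECT 1", Integer.class);
        assertThat(result).isEqualTo(1);
    }
}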

Resolving issues that come up when upgrading underlying dependencies can be a difficult process, even with a good automated test suite. It took me a while to track down the solution to my issue, which would have been the case regardless of how the issue was discovered.

Safe to Fail

As I began summarizing this article, I was listening to a recent episode of A Bootiful Podcast, and this quote really stuck out to me (lightly edited):

You become safe to fail, you accept that no matter what you do there will be bugs… you’re just able to find them quickly and push the fix quickly

@olga_maciaszek

An important takeaway is that no automated test suite is going to entirely eliminate the risk of a bug making it to production. What an automated test suite does give you, though, is a safety net and the ability to rapidly respond to bugs when they do occur. It gives you confidence that it will generally find bugs in your code, and that when a bug inevitably slips through, you can respond to it rapidly and with confidence you will not be introducing yet another bug into production. This protection applies to changes made within your code, as well as to changes in the dependencies your code depends upon.

Conclusion

Awareness of the importance of automation, and particularly automated testing, has been slowly increasing in the software development industry in recent years. I am pleasantly surprised when I ask during presentations how many people work at organizations that do continuous delivery; usually between 25-50% of the room raise their hands.

Still, that leaves half or more of organizations that haven’t yet turned the corner on automation. I often find the best way to encourage change at organizations is to lay out the cost of staying with the status quo, and how the proposed changes resolve these problems. If you’re at an organization that still relies heavily on manual processes, and you have been wanting to change this, hopefully this article helps build your case for your organization to start embracing automation and automated testing.

Testcontainers, Bringing Sanity to Integration Testing

Writing and maintaining integration tests can be a difficult and frustrating experience, filled with a veritable minefield of things that could go wrong. For integration tests that connect to a remote resource you have issues of the resource being down, datasets being changed or deleted, or heavy load causing tests to run slowly. For integration tests that connect to a local resource you have the initial install and configuration of the resource on your local machine, and the overhead of keeping your local instance in-sync with what the production instance looks like; otherwise you might run into this situation:

Source: Minesweeper – The Movie

No application operates in isolation. Applications, even “monoliths”, depend on remote resources, be they databases, logging services, caches, or other applications, to function. Just like the application we are maintaining will change over time as business needs and client demands change, so too will the resources it depends on. This necessitates a need to continually verify that our application can communicate with its dependent resources.

So to maintain development velocity while having confidence our applications will function properly in production, we need to write automated integration tests. However, we need our integration tests to be:

  • Reliable – Test failures should only happen because a change occurred in either our application or the resource, not because the resource is down or misconfigured.
  • Portable – The tests should be able to run anywhere with minimal setup.
  • Accurate – The resource being used in the integration test should be an accurate representation of what exists in production.

How do we accomplish these requirements?

Introducing Testcontainers

Testcontainers is a Java library that integrates with JUnit to provide support for starting up and tearing down a Docker container within the lifecycle of a test or test class. The project was started about four years ago, and I first learned about it back in 2017 when I was putting together a Pluralsight video on automated testing.

I have noticed an uptick in interest in Testcontainers in my Twitter timeline recently, and it doesn’t seem long ago that Testcontainers passed the 1K stars mark on their GitHub repo, which now sits at 2.2K. If you haven’t started familiarizing yourself with Testcontainers, now would definitely be a good time.

This rapid increase in popularity is likely the result of Testcontainers being easy to use, and of the flexibility of Docker containers, which allows Testcontainers to address a lot of integration testing use cases. In this article we are going to look at two approaches to using Testcontainers for running an integration test against a database. The code examples use JUnit 5; if you want to get familiar with JUnit 5, I have written a lot about it, and you should also check out the JUnit 5 user docs.

Launching a Testcontainer via JDBC URL

In this example we will be writing an integration test for connecting to a PostgreSQL database, though Testcontainers offers support for a number of other databases. The first step is bringing in the appropriate dependencies. For this example we only need to add the PostgreSQL Testcontainers dependency to our Maven build file (which in turn brings in the Testcontainers JDBC and core libraries).


<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.11.1</version>
    <scope>test</scope>
</dependency>

Full maven build file for this project can be found here.

With the appropriate dependencies imported, let’s look at how to use Testcontainers to write a database integration test.

Full class, including imports, here.


@SpringJUnitConfig
@ContextConfiguration(classes = { StormTrackerApplication.class }, initializers = ITStormRepo.Initializer.class)
@TestPropertySource("classpath:application.properties")
@TestMethodOrder(OrderAnnotation.class)
public class ITStormRepo {

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {
            TestPropertyValues.of("spring.datasource.url=jdbc:tc:postgresql:11.2://arbitrary/arbitrary", //
                    "spring.datasource.username=test", //
                    "spring.datasource.password=test", //
                    "spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver")//
                    .applyTo(applicationContext);
        }
    }

    @Autowired
    private StormRepo repo;

    @Test
    @Order(1)
    public void testReadFromStormsTable() {
        assertThat(repo.count()).isEqualTo(2);
    }

    @Test
    public void testWriteToStormsTable() {
        Storm savedStorm = repo.save(new Storm("03-17-2019", "03-20-2019", "South Atlantic", "Knoxville, Tennesee",
                "Tropical Depression", 3));
        assertThat(savedStorm.getId()).isEqualTo(12);
    }
}

There is quite a bit going on here, so let’s break down what is happening in this class into more easily digestible bites.

@TestPropertySource("classpath:application.properties")

This isn’t really related to using Testcontainers, but since ApplicationContextInitializer (javadoc) isn’t super well known, yet can be really helpful when writing automated tests, I wanted to take a moment to show how to make it easier to work with when used in test classes.

Here I am telling the test class to bring in the properties defined in /src/test/resources/application.properties (source). By bringing in the properties defined in application.properties, instead of having to define every property needed for connecting to the Testcontainers database, only the properties that differ for the tests in this class need to be overwritten. This reduces maintenance needs and helps with overall test accuracy, as it is easier to keep a single properties file in-sync with what production looks like.

public static class Initializer implements ApplicationContextInitializer {
   @Override
   public void initialize(ConfigurableApplicationContext applicationContext) {
      TestPropertyValues.of("spring.datasource.url=jdbc:tc:postgresql:11.2://arbitrary/arbitrary", //
      "spring.datasource.username=arbitrary", //
      "spring.datasource.password=arbitrary", //
      "spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver")//
      .applyTo(applicationContext);
   }
}

Within Initializer four properties are being defined (overwritten), and a few of them have somewhat odd looking values, so let’s take a closer look. When initializing Testcontainers via the JDBC URL, Testcontainers will set the username, password, hostname, and database name to whatever values you pass it. Strictly speaking, spring.datasource.username and password don’t need to be included, as they are defined in application.properties. For spring.datasource.url, the JDBC URL must start with jdbc:tc:. The 11.2 refers to the specific image tag of postgres to be used; this is optional and defaults to 9.6.8 if left out. Lastly, spring.datasource.driver-class-name must be set to org.testcontainers.jdbc.ContainerDatabaseDriver. ContainerDatabaseDriver is Testcontainers’ “hook” into this test class. After starting up the container, ContainerDatabaseDriver will be substituted with the standard database driver, in this case org.postgresql.Driver. While in this example I am using the base postgres image, you can use a custom image, so long as the database within the container is postgres (or the type of database you have brought in a dependency for).

The rest of the test class is comparatively simple and straightforward: simple reads and writes are performed to ensure fields are being properly mapped and the generated id matches the expected pattern.

Using Testcontainers as a Class Field

Above we looked at how to use Testcontainers via the JDBC URL hook. This can be great when your use case is pretty simple; however, the complexities of applications in the real world often mean a need for greater control and customization of behavior.

The first step would be to bring in the Testcontainers junit-jupiter library.
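
Assuming the same version as the postgresql dependency above, that addition to the Maven build file would look something like this:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.11.1</version>
    <scope>test</scope>
</dependency>

With the dependency in place, the test class looks like this: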


@Testcontainers
@SpringJUnitConfig
@ContextConfiguration(classes = {
        StormTrackerApplication.class }, initializers = ITStormRepoAlternate.Initializer.class)
@TestPropertySource("classpath:application.properties")
@TestMethodOrder(OrderAnnotation.class)
public class ITStormRepoAlternate {

    @Container
    private static PostgreSQLContainer container = new PostgreSQLContainer("postgres:11.2");// Can be an arbitrary image name and tag

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {
            TestPropertyValues.of("spring.datasource.url=" + container.getJdbcUrl(),
                    "spring.datasource.username=" + container.getUsername(),
                    "spring.datasource.password=" + container.getPassword())
                    .applyTo(applicationContext);
        }
    }

    @Autowired
    private StormRepo repo;

    @Test
    @Order(1)
    public void testReadFromStormsTable() {
        assertThat(repo.count()).isEqualTo(2);
    }

    @Test
    public void testWriteToStormsTable() {
        Storm savedStorm = repo.save(new Storm("03-17-2019", "03-20-2019", "South Atlantic", "Knoxville, Tennesee",
                "Tropical Depression", 3));
        assertThat(savedStorm.getId()).isEqualTo(12);
    }
}

There are a lot of similarities with the previous code example, so let’s focus only on the differences.

At the top of the test class is the @Testcontainers annotation. This brings the Testcontainers extension into the class, which scans for fields annotated with @Container, in this case PostgreSQLContainer container. A @Container field can be either static or an instance field. Static containers are started only once and are shared between test methods; instance containers are started and stopped for each test method.

@Container
private static PostgreSQLContainer container = new PostgreSQLContainer("storm_tracker_db:latest");

Here the container that will be used in this test class is defined. As with the JDBC URL method, you are not required to use a base postgresql image; in this case the custom image “storm_tracker_db” is being used (the Dockerfile for this image is here). As long as the database within the container is postgres, you are fine. While not much additional customization is being done to the container in this class, Testcontainers does offer a number of options, such as executing commands, setting a volume mapping, or accessing container logs, among others. Be sure to check the documentation under features and modules for what is available, as well as the javadoc (v1.11.1).
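
As a hedged sketch of a couple of those options, the following could replace the container field in the test class above and add an extra test method; the init.sql script name and the storms table query are assumptions for illustration, not code from the article’s project:

@Container
private static PostgreSQLContainer<?> container = new PostgreSQLContainer<>("postgres:11.2")
        .withInitScript("init.sql"); // run a classpath SQL script once the database is up

@Test
public void testContainerInternals() throws Exception {
    // execute a command inside the running container and inspect its output
    org.testcontainers.containers.Container.ExecResult result = container.execInContainer(
            "psql", "-U", container.getUsername(), "-d", container.getDatabaseName(),
            "-c", "SELECT count(*) FROM storms;");
    assertThat(result.getStdout()).contains("2");
}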

These additional features provided when using a Testcontainer as a class field allow for flexibility in putting the container within a specific state for a test, easily switching the datasets to be used in a test, or being able to view the internals of container to verify expected behavior.

An additional benefit of using a Testcontainer as a class field is the ability to reference values from the container in use. In Initializer I am using container to populate the JDBC URL (container.getJdbcUrl()), username, and password properties for the Spring test application context. By default when using PostgreSQLContainer the username and password are both “test”, so we don’t really need to pull these values from the container; however, the JDBC URL is dynamic. Being able to pull values from a container and pass them into the application context for a Spring test helps to increase flexibility when using Testcontainers. Without this, you might have to use pre-defined ports, IPs, or other values, which might run into trouble when the tests are being executed on a build server.
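
The same idea extends beyond databases. A hypothetical sketch with a generic container shows reading the dynamically mapped port back from the container rather than hard-coding it:

// expose the container port; the host port is assigned dynamically at startup
GenericContainer<?> redis = new GenericContainer<>("redis:5.0")
        .withExposedPorts(6379);
redis.start();
// the mapped host port is only known at runtime, so ask the container for it
String address = redis.getContainerIpAddress() + ":" + redis.getMappedPort(6379);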

Conclusions

I’m excited to see how much Testcontainers has grown, both as a project and in interest from the community, since I first started using it. I have often struggled when writing integration tests, having to deal with either flickering tests or the overhead of installing and maintaining a local resource. Neither is a pleasant experience. Testcontainers has brought sanity to the difficult task of writing integration tests.

The code used in this article can be found here.

How to Test Logging in Java Part Two: Parallel Boogaloo

In the first article in this series we looked at a simple way of capturing and verifying logging statements using a static list appender. The static list appender method works great if you are executing your test suite in a single thread, but isn’t practical when executing tests in parallel due to all the logging statements being written to a single list.

Whether or not a test suite can be executed in parallel should not depend upon whether there are test cases verifying logging statements. We shouldn’t need to choose between a fast executing test suite and an application that produces accurate logging statements. So in this article we are going to look at a couple of different methodologies for testing logging statements in a multi-threaded environment. We will also preview some of the new parallel testing features coming in JUnit 5.3, which should be released sometime in August.

Using a Thread Safe Appender

Parallel test execution works by spinning up multiple threads and using those threads to execute tests simultaneously. This causes havoc when using a single list to capture logging statements across the entire JVM, but a way to get around this is to create a new list in each thread. To accomplish this we will make use of Java’s ThreadLocal functionality. Here is an example of an appender using ThreadLocal to capture logging statements:


public class ThreadSafeAppender extends AppenderBase<ILoggingEvent> {

    static ThreadLocal<List<ILoggingEvent>> threadLocal = new ThreadLocal<>();

    @Override
    public void append(ILoggingEvent e) {
        List<ILoggingEvent> events = threadLocal.get();
        if (events == null) {
            events = new ArrayList<>();
            threadLocal.set(events);
        }
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return threadLocal.get();
    }

    public static void clearEvents() {
        threadLocal.remove();
    }
}

In the code above we have the static field threadLocal, which holds a List of ILoggingEvents. When append() is called, the appender retrieves the List for the current thread. If no list is present, a new List is initialized and added to the ThreadLocal. Finally the logging statement is added to the list.

getEvents() will retrieve the list of logging statements for the current thread and clearEvents() will remove the logging statements for the current thread. This class is pretty similar to StaticAppender, just with a few tweaks.

Configuration will look more or less the same, just we will be referencing the ThreadSafeAppender instead of StaticAppender:


<configuration>
    <appender name="threadsafe-appender" class="com.bk.logging.ThreadSafeAppender" />
    <root level="trace">
        <appender-ref ref="threadsafe-appender" />
    </root>
</configuration>

The next step is configuring Maven to execute the test suite in parallel. The updated surefire configuration looks like this:


<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.0</version>
                <configuration>
                    <properties>
                        <configurationParameters>
                            junit.jupiter.execution.parallel.enabled=true
                            junit.jupiter.execution.parallel.config.dynamic.factor=1
                        </configurationParameters>
                    </properties>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

There are several different options for executing a test suite in parallel in JUnit 5.3; in this example we are using the dynamic behavior. Dynamic behavior with a factor of “1” means that for each core available on the computer executing the test suite, JUnit creates a thread to execute tests with. So if a computer has two cores, JUnit will create two threads. There are also fixed and custom strategies, and you can read more about the parallel test support in JUnit 5.3 here.
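
For example, switching from dynamic to the fixed strategy pins the thread count regardless of the executing machine. A sketch of the equivalent configuration parameters:

junit.jupiter.execution.parallel.enabled=true
junit.jupiter.execution.parallel.config.strategy=fixed
junit.jupiter.execution.parallel.config.fixed.parallelism=4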

We will be reusing the same scenario from the last article: printing out two log statements, one written in the same thread as the test case being executed, and a second written from a separately spun-up thread. Here it is again:


public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for future to complete
        } while (!future.isDone());
    }
}

This scenario is designed to demonstrate the major shortcoming of the thread safe method, which will be covered in more detail in a moment.

For the test class, we will essentially set it up to run the same scenario twice, with only a small difference in the logging output:


public class TestThreadSafeAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        ThreadSafeAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! A");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! B");
    }
}

Looking at the test cases, I am only checking for a single logging statement, despite two being written by the code under test. This is the major shortcoming of the ThreadSafeAppender: a new list is created for each thread, not for each test case. So if the code under test creates a new thread itself, any logging statement written within that new thread goes to a separate list that the test case won’t have (easy) access to.

Typically this isn’t going to be a problem. First, writing code that spins up and executes code in a separate thread isn’t that common a need. Second, following the single responsibility principle, code that is being executed in a separate thread should be in its own class, which can then be tested independently.

That said, there might still be times where this does happen. Maybe like in the scenario above the code being executed in a separate thread is so trivial that it might not warrant a separate class, or maybe you inherited a code base that didn’t follow the single responsibility principle and you want to be able to write some tests that cover that area before you refactor it. Luckily there is a solution, which I will cover below, but first I need to talk about issues of executing tests in parallel that will apply to both methodologies.

The Joys of Concurrency

If you copied and pasted the above code into a project, it would work when executed within your preferred IDE (I personally tested in Eclipse); however, that is only being executed using a single thread. If you tried running mvn test, you would probably get some test failures and text like the below printed to the console:

SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay

This is really only a problem that occurs when executing tests in parallel and wouldn’t really impact a deployed application. Unfortunately it will lead to “flickering tests” (tests that don’t pass or fail in a consistent manner), so this issue must be addressed.

First, to describe the problem: tests are being executed before logback has initialized. This is because logback is being initialized in a separate thread and the threads executing the tests do not know this. SLF4J recognizes this, captures the log statements, and then replays them. SLF4J isn’t doing anything wrong, in fact it is being very helpful; unfortunately for us, though, the replay isn’t happening until after the assert phase of some test cases, so they end up failing.

The good news is there is a work-around, and it is pretty easy to implement. What we need is a way to check if logback is ready, and to not let a test class execute until that check passes. Here is some code that performs that check:


public class LogbackInitializerExtension implements BeforeAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        pauseTillLogbackReady();
    }

    private static void pauseTillLogbackReady() {
        while (!isLogbackReady());
    }

    private static boolean isLogbackReady() {
        return LoggerFactory.getILoggerFactory() instanceof LoggerContext;
    }
}

Before logback is ready, calling getILoggerFactory() will return a default SLF4J implementation of ILoggerFactory. So the while loop is designed to block execution of the test class until getILoggerFactory() returns the logback implementation of ILoggerFactory, LoggerContext.

For convenience, I implemented this behavior using JUnit 5’s new extension model, so that I can simply add @ExtendWith(LogbackInitializerExtension.class) at the top of a test class that will be checking log statements. Otherwise I would have to add a @BeforeAll method implementing or calling pauseTillLogbackReady() in every test class that is testing log statements. Don’t worry, unlike @RunWith in JUnit 4, in JUnit 5 a class can have multiple extensions.

I can further simplify this process by creating a custom annotation, so instead of @ExtendWith(LogbackInitializerExtension.class), which might be confusing to remember, I can just add @LogbackInitializer to the top of a test class. Creating a custom annotation with JUnit 5 is pretty easy:


@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@ExtendWith(LogbackInitializerExtension.class)
public @interface LogbackInitializer {
}

Figuring this all out was admittedly a bit of a headache, but resolving the problems that crop up when executing tests in parallel doesn’t require complex configuration or reliance on what I would call “winking” solutions, such as using wait timers to delay test case execution. Such solutions can lead to the flickering test failures described earlier in the article, or just unnecessarily slow down the execution of your test suite, which defeats the whole purpose of executing the test suite in parallel in the first place!

Using a Local Appender

The “Thread Safe” method is the one I would recommend for testing logging statements when executing tests in parallel, primarily because of its ease of configuration. However, as explained above, you might have inherited a poorly written code base or don’t want to separate your code out. If that is the case, then you might want to try what I call the “Local Appender” method.

I call this the “Local Appender” method because you will be setting up a new appender for every test case where you will be verifying logging statements (i.e. an appender local to each test case). The advantage is that this appender will capture logging statements executed in threads separate from the test case, resolving the major disadvantage of the Thread Safe method, but at the cost of additional configuration overhead, which will be covered below.

The first step is implementing the appender:


public class LocalAppender extends AppenderBase<ILoggingEvent> {

    private List<ILoggingEvent> events = new ArrayList<>();

    public LocalAppender() {
        start();
    }

    public static LocalAppender initialize(String... loggers) {
        LocalAppender localAppender = new LocalAppender();
        for (String loggerName : loggers) {
            localAppender.setContext((LoggerContext) LoggerFactory.getILoggerFactory());
            Logger logger = (Logger) LoggerFactory.getLogger(loggerName);
            logger.addAppender(localAppender);
        }
        return localAppender;
    }

    public void cleanup() {
        this.stop();
        this.clearEvents();
    }

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public List<ILoggingEvent> getEvents() {
        return events;
    }

    public void clearEvents() {
        events.clear();
    }
}

The bare essential elements of the LocalAppender are pretty similar to the StaticAppender from the previous article; the only differences are that the List is an instance member instead of a static one, and that in the constructor we are starting the appender.

The other differences aren’t strictly necessary but are present for convenience. initialize() is a factory method for creating new LocalAppenders; it takes in logger coordinates and configures an instance of LocalAppender to only capture log statements from those loggers. The second method, cleanup(), as the name suggests, cleans up the appender by stopping it and clearing out all logging statements. This frees up system resources, but really isn’t that important unless you are using a lot of LocalAppenders.

Let’s see what using the LocalAppender might look like in a test class:


@LogbackInitializer
public class TestLocalAppender {

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderA() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service A");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service A", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderB() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service B");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service B", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void justAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void yetAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }
}

So there’s a good amount going on here. Let’s walk through it step by step.

At the top I have @LogbackInitializer. As mentioned above, this is present to ensure that logback is initialized before the test cases begin executing.

Next you might notice all the test cases are marked with @ResourceLock. This is one of the new parallel features being added in JUnit 5.3. The JUnit team realized that the desire to execute tests in parallel isn’t entirely a binary decision. Most tests might be able to execute fine in parallel, but some tests might be sharing a resource, in this case logging, and so might have some special conditions around how they can be executed.

@ResourceLock takes two arguments: the resource key (which uses the value field), and mode, which takes a value of either READ or READ_WRITE (defaulting to READ_WRITE). Test cases marked READ can be executed at the same time as other test cases using the same resource marked READ, but will not be executed at the same time as a test case marked READ_WRITE that is also using the same resource. Finally, @ResourceLock is applied across the entire test suite. So if we were to add @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE) to a test case in one of the other test classes in the project, it wouldn’t execute at the same time as either testLocalAppenderA() or testLocalAppenderB().

Note: There is no execution order guarantee when using @ResourceLock, so a test case marked READ might be executed before or after a test case marked READ_WRITE.

Within the test cases themselves I am using the initialize() factory method I mentioned earlier, and this gets at the advantage of the LocalAppender. With LocalAppender I can easily configure it to only capture logs from a very narrow area, typically the specific class (or classes) I am testing with that test case. With this fine-grained control over which logging statements to capture, and using features like @ResourceLock, we can guarantee only the expected logging statements are being captured.

Finally, justAnotherTest() and yetAnotherTest() are dummy test cases I added to further demonstrate the @ResourceLock functionality. If those annotations are removed, then testLocalAppenderA() or testLocalAppenderB() might begin to fail because of the additional logging statements being written during their execution. I would recommend trying it out!

Conclusion

If you have a very large test suite, executing tests in parallel can be a good option for cutting down build times, particularly if you have a build server with a lot of resources. Using the common method of capturing and verifying logging statements, as described in the first article, would mean either having to ignore the occasional failing test, or configuring your build to skip those tests when the test suite is being executed in parallel (if you look at the full project I link below, I demonstrate how this can be done). Luckily, with the methods covered in this article, we can execute our test suite in parallel and verify our logging statements too.

I also hope this article has excited you about the upcoming JUnit 5.3 release. JUnit 5.3 should be out sometime in August, and I will have an article covering some of its key new features, as well as some changes in closely associated projects, published shortly after JUnit 5.3’s release. Until then, happy testing!

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements

How to Test Logging in Java

Whether you are building microservices or monoliths, accurate and robust logging is a critical element in any application. Logging can give insight on how an application is being used and provide developers and operations with detailed information on why an error occurred. It is important then, like any other feature in an application, that we use automated tests to verify that logging statements are accurate and do not cause errors themselves.

In this two part series we will look at some strategies for capturing and verifying logging statements. In part one we will look at a simple and easy to implement strategy for when you are executing your test suite sequentially. In part two we will focus on a couple of strategies for when you are executing your test suite in parallel.

The Scenario

For demonstrating how to capture and assert logging statements we have the very useful and accurately named LogProducingService:


public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for future to complete
        } while (!future.isDone());
    }
}

The LogProducingService has a single method, writeSomeLoggingStatements(), which produces a pair of logging statements: one statement written within the same thread as that of the calling test case, and another statement written in a separate thread (this will be more important in part two).

Implementing a Custom Appender

In normal operation an application will typically use either some form of file appender to write statements to system logs or, in more recent trends, an appender that sends statements to a message queue such as Kafka. There are some other options, but none are practical for verifying that the logging statements being produced are valid. So to handle this we will need to implement our own appender, like this one:


public class StaticAppender extends AppenderBase<ILoggingEvent> {

    static List<ILoggingEvent> events = new ArrayList<>();

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return events;
    }

    public static void clearEvents() {
        events.clear();
    }
}

In the above class I am extending AppenderBase and instead of printing the logging statements out to a file or sending them to a message queue I am adding them to a List. To retrieve the logging statements for asserting in a test case there is getEvents(), and to clear the list out between test cases I added clearEvents().

I named this class StaticAppender because the list I am storing the logging statements in, as well as the methods to retrieve and clear that list, are all static. This is necessary as I want to be able to globally configure my test suite using logback-test.xml; and the instance of StaticAppender that logback creates won’t be easily available to my test code. By declaring the aforementioned fields and methods static, I have easy access to the logging statements my application is producing from within my test cases.

After implementing the appender, we need to configure logback to use it:


<configuration>
    <appender name="static-appender" class="com.bk.logging.StaticAppender" />
    <root level="trace">
        <appender-ref ref="static-appender" />
    </root>
</configuration>

As mentioned above, I am configuring logback for my test suite using logback-test.xml, which I placed in the src/test/resources directory of the project. I did this because, by order of precedence, logback will load logback-test.xml first if it is present on the classpath. So even though I have logback.xml defined in src/main/resources, when executing my test suite I will be using the StaticAppender instead of the ConsoleAppender defined in logback.xml. This is important, as we don’t want to modify production code so that it is aware it is running in a test scenario versus actually being deployed; that is a big no-no.

Finally the test case would look something like this:


public class TestStaticAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        StaticAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! A",
                "This message is in a separate thread");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! B",
                "This message is in a separate thread");
    }
}

I am a huge fan of AssertJ, which I am making use of here to quickly extract out the message field from a list of ILoggingEvent and assert its value. If you want to learn more about using AssertJ in your automated testing, I wrote a blog article on that as well.

Parallel Concerns

When executing a test suite in a single thread, as would often be the case, using a static appender should work fine, as long as you are cleaning up logging statements between test cases like I am doing in clearLoggingStatements(). However, if you have a large test suite and want to execute tests in parallel to speed up your build times, then you will run into issues with the StaticAppender. While multiple test cases are being executed simultaneously, all their logging statements are being written to a single List. Your test cases won’t be able to know which log statements were written by the code they are executing versus that of other test cases. This could also create issues of logging statements being deleted when clearEvents() is called mid test execution.

Luckily, the issues around executing tests in parallel can be overcome with relative ease, which I will cover in the next article in this series.

Conclusion

Capturing and verifying logging statements is actually a pretty simple task, as seen in this article. No additional libraries beyond your logging libraries are needed, nor is complex configuration.

While testing logging output might never be the highest priority for a development shop, it is useful to know that testing logging output is easy and practical. Writing some tests for the logging statements your application is producing might save you some frustration next time you have a production issue and you are able to look through accurate and detailed logs as you track down the cause of the problem.

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements