What’s New in JUnit 5.3

The JUnit team continues to make steady progress refining and improving JUnit 5. In this article we will take a look at some of the new features and changes in JUnit 5.3, which was released on September 3rd. We will also look at some recent changes and improvements to libraries and tools that are often used in conjunction with JUnit.

Parallel Test Execution

I covered parallel test execution in a previous blog article. If you want a technical, in-depth look at the parallel execution features in JUnit 5.3, give that article a read. Here is a high-level overview of parallel test execution in JUnit 5.3:

  • Parallel test execution is enabled and configured through the build file (e.g. pom.xml, build.gradle).
  • Parallel test execution behavior can be modified at the method and class level with the @Execution and @ResourceLock annotations.

Be sure to check out the user guide for more detailed information on configuring and using parallel test execution in JUnit 5.
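As an aside, the JUnit Platform can also pick up configuration parameters from a junit-platform.properties file on the test classpath, rather than the build file. A minimal sketch of the parallel settings in that form:

```properties
# Enable parallel execution and size the thread pool from available cores
junit.jupiter.execution.parallel.enabled = true
junit.jupiter.execution.parallel.config.strategy = dynamic
junit.jupiter.execution.parallel.config.dynamic.factor = 1
```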

AssertThrows Enhancements

Note: This feature has been reverted in 5.3.1

The assertThrows assertion has been updated to display the value returned from a method, if any, when an exception is not thrown. This can be helpful when debugging a test case that has started failing. Here is a screenshot of what this display looks like in Eclipse. In this example an empty List is being returned instead of the expected exception.

Capturing Console Output

With JUnit 5.3 it is possible to capture System.out and System.err output using a TestExecutionListener. Capturing console output isn’t a frequent need, but it can be useful on legacy code bases where statements are frequently written to the console and you would like to see that output in a report.

Capturing console output requires two steps:

1. Implementing a TestExecutionListener

2. Configuring surefire to start capturing console output.

Here is an example of each:

TestExecutionListener

public class ConsoleOutputListener implements TestExecutionListener {

    @Override
    public void reportingEntryPublished(TestIdentifier testIdentifier, ReportEntry entry) {
        entry.getKeyValuePairs().values().stream()
                .forEach(c -> System.out.println("Captured output: " + c));
        TestExecutionListener.super.reportingEntryPublished(testIdentifier, entry);
    }
}

Surefire Configuration

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.0</version>
    <configuration>
        <properties>
            <configurationParameters>
                junit.platform.output.capture.stdout=true
                junit.platform.output.capture.stderr=true
            </configurationParameters>
        </properties>
    </configuration>
</plugin>

Library and Tool Updates

As mentioned in the introduction, we will also take a look at updates to some other projects often used with JUnit. Most of these changes are unrelated to (i.e. not dependent upon) JUnit 5.3, but I wanted to cover them here as they happened relatively recently.

Surefire Native Support

Back in June the Maven Surefire team released version 2.22.0 of their plugin. With this release the surefire (and failsafe 2.22.0) plugins offer native support for JUnit 5. This is nice because, prior to the 2.22.0 release, using JUnit 5 required a lot of additional configuration, which was frankly a bit of an eyesore. Prior to 2.22.0, running JUnit 5 with surefire required this configuration:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.21.0</version>
    <dependencies>
        <dependency>
            <groupId>org.junit.platform</groupId>
            <artifactId>junit-platform-surefire-provider</artifactId>
            <version>${junit-platform.version}</version>
        </dependency>
        <!-- Vintage engine only needed if JUnit 3/4 tests present -->
        <dependency>
            <groupId>org.junit.vintage</groupId>
            <artifactId>junit-vintage-engine</artifactId>
            <version>${junit-jupiter.version}</version>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>${junit-jupiter.version}</version>
        </dependency>
    </dependencies>
</plugin>

With 2.22.0 the default surefire configuration can look like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.0</version>
</plugin>

Note: For Spring Boot projects, you need only add a maven-surefire-plugin.version entry to your pom’s properties.
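A sketch of what that property block might look like (the version shown is the 2.22.0 release discussed in this article):

```xml
<properties>
    <maven-surefire-plugin.version>2.22.0</maven-surefire-plugin.version>
</properties>
```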

Note 2: JUnit 5.3 has a bug that prevents tests from being found when the reuseForks tag is set to false. Read more here.

Filtering Changes with Native Support

However, native support brings changes to how tag filtering behaves. When using the JUnit surefire provider with 2.21.0, filtering by tags looked like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.21.0</version>
    <configuration>
        <properties>
            <excludeTags>excludeTag</excludeTags>
            <includeTags>includeTag</includeTags>
        </properties>
    </configuration>
    <!-- dependency stuff goes here -->
</plugin>

However, this does not work when running maven-surefire-plugin 2.22.0. To tell surefire which tags to filter on, you must use the groups tag, for which tests to execute, and the excludedGroups tag, for which tests should not be executed. Here is the 2.22.0 equivalent of the above:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.0</version>
    <configuration>
        <groups>includeTag</groups>
        <excludedGroups>excludeTag</excludedGroups>
    </configuration>
</plugin>

Enhanced Mockito Dependency Injection Support

As the tweet from @sam_brannen states, you can now directly inject Mockito mocks via constructor or method parameters. This can help make tests more declarative about what they depend upon, and a little cleaner. These changes were added in Mockito 2.21.0¹; here is an example of those changes in action:

@ExtendWith(MockitoExtension.class)
public class TestMockitoInjection {

    private BoringService service;

    public TestMockitoInjection(@Mock BoringService service) {
        this.service = service;
    }

    @Test
    public void testConstructorInjectedValue() {
        when(service.returnNumber()).thenReturn(2);
        assertEquals(2, service.returnNumber());
    }

    @Test
    public void testMethodInjection(@Mock BoringService service) {
        when(service.returnNumber()).thenReturn(3);
        assertEquals(3, service.returnNumber());
    }

    public class BoringService {
        public int returnNumber() {
            return 1;
        }
    }
}

AssertJ Soft Assertions and opentest4J

I have covered AssertJ in a previous blog post. With 3.11.1, AssertJ will now use the MultipleFailuresError from the opentest4j library, if it’s available. opentest4j is a library created by the JUnit 5 team to help make the testing experience more consistent across projects, for both users and library developers.

If you are not familiar with soft assertions: they are a way of asserting multiple values in a single test where, instead of the test failing on the first assertion, all the failed assertions are collected and reported at the end of the test. This can help avoid the irritating loop of a test failing, fixing the failure, running the test again, and hitting another failure. Here’s a look at soft assertions in AssertJ:

public class TestAssertJSoftAssertions {

    @Test
    public void testSoftAssertions() {
        Person person = new Person("John", "Doe");
        SoftAssertions softly = new SoftAssertions();
        softly.assertThat(person.getFName()).isEqualTo("Anonymous");
        softly.assertThat(person.getLName()).isEqualTo("Person");
        softly.assertAll();
    }

    public class Person {
        private String fName;
        private String lName;

        public Person(String fName, String lName) {
            this.fName = fName;
            this.lName = lName;
        }

        public String getFName() {
            return fName;
        }

        public String getLName() {
            return lName;
        }
    }
}

Running this test will return both assertion failures, instead of just reporting that the first assert failed.
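Under the hood the idea is simple: run each check, catch the failure instead of letting it propagate, and report everything at the end. Here is a minimal, library-free sketch of that pattern (the SoftCheck class and its method names are invented for illustration; they are not AssertJ or opentest4j API):

```java
import java.util.ArrayList;
import java.util.List;

// Collects assertion failures instead of failing fast, then reports them together.
public class SoftCheck {

    private final List<String> failures = new ArrayList<>();

    // Record a failure rather than throwing immediately.
    public void check(boolean condition, String description) {
        if (!condition) {
            failures.add(description);
        }
    }

    // Mirrors the spirit of SoftAssertions.assertAll(): throw once, listing every failure.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " check(s) failed: " + failures);
        }
    }

    public List<String> failures() {
        return failures;
    }
}
```

AssertJ’s real implementation does considerably more (proxying assertion objects, capturing stack traces), but the collect-then-report shape is the same.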

Conclusion

The JUnit team continues to make great progress on JUnit 5, adding new features, improving existing ones, and fixing bugs as they are reported. This is only a highlight of some of the new features and changes in JUnit 5.3. For a full list of all the changes and improvements, be sure to check out the release notes, as well as the JUnit 5 user guide on how to use the features.

1: Technically the changes were added in 2.20.3, but that version is only available from Mockito’s maven repository, not Maven Central.

Dynamic Testing in JUnit 5: A Practical Guide

JUnit 5 introduced several novel features, one of which is dynamic testing. I have been presenting on JUnit 5 fairly regularly, but have had trouble explaining dynamic testing. I would demonstrate it with a simple example, but had a hard time providing a use case for it. That was the case until somewhat recently, when I saw this question on Stack Overflow.

When learning a new technology, I have always benefitted from following a guide that includes a practical example, as it helps me better visualize how I might use a technology to solve challenges I’m facing at my job. I’m sure I’m not the only person who learns this way, so in this article we will learn how to write dynamic tests, look at a practical example of a dynamic test, and learn about their role in our automated test suites.

Building a Factory for Tests

In JUnit 5 the first step when writing a dynamic test is annotating a method with @TestFactory. Unlike a method annotated with @Test, a method annotated with @TestFactory is a method for creating tests. This is conceptually different from the normal way of writing tests, which is explicitly writing out the test within the method body, i.e. writing a test statically. This difference in mindset is why it can be difficult to figure out when to use dynamic tests.

To ease into the process of learning dynamic tests, let’s first take a look at a really simple one:

@TestFactory
public List<DynamicTest> exampleTestFactory() {
    return Arrays.asList(
            dynamicTest("Dynamic square " + 2, () -> assertEquals(4, 2 * 2)),
            dynamicTest("Dynamic true " + true, () -> assertTrue(true)));
}

We already covered @TestFactory above; let’s take a look at the other elements of a dynamic test.

A test factory must return either a Collection, Iterable, Iterator, or Stream of DynamicNode types. DynamicNode is an abstract class with two implementations: DynamicTest and DynamicContainer. We will cover DynamicContainer later; DynamicTest is the executable test composed in a @TestFactory method that will be executed by JUnit.

For building DynamicTests the JUnit team has provided some factory methods. In the above example I am using dynamicTest(String displayName, Executable executable). displayName is the name that will show up in test reports¹, and executable is a method reference or lambda of the code that will actually be executed in the test.
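Stripped of the JUnit types, a test factory is just code that pairs display names with executables. The pattern can be sketched with plain Runnables, no JUnit required (the NamedTest class and squareTests method are invented names for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A name/executable pair, analogous in spirit to JUnit's DynamicTest.
public class NamedTest {

    final String displayName;
    final Runnable executable;

    NamedTest(String displayName, Runnable executable) {
        this.displayName = displayName;
        this.executable = executable;
    }

    // Generate one "test" per input value, the way a @TestFactory method would.
    public static List<NamedTest> squareTests(int... inputs) {
        List<NamedTest> tests = new ArrayList<>();
        for (int n : inputs) {
            int expected = n * n;
            tests.add(new NamedTest("Dynamic square " + n, () -> {
                if (n * n != expected) {
                    throw new AssertionError("square of " + n + " was wrong");
                }
            }));
        }
        return tests;
    }
}
```

JUnit's dynamicTest factory method plays the role of the NamedTest constructor here; the engine then iterates the returned collection and runs each executable as its own test.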

Writing dynamic tests, as can be seen above, is pretty simple. However, the above example doesn’t provide much guidance on how dynamic tests might be used to ensure our applications are properly covered by automated tests. Let’s take a look at a more practical application of a dynamic test.

Writing Dynamic Tests to Reduce Toil

It’s not uncommon to need to write a service that communicates with multiple clients, with those clients communicating in different data formats. To demonstrate the power and utility of dynamic tests, we are going to walk through writing a dynamic test that scans an application for controllers, checks if those controllers have a POST endpoint, and then verifies the POST endpoint can consume both XML and JSON.

Writing out tests to ensure every POST (and PUT) endpoint in a RESTful API accepts all required content types can be tedious and error prone. It is easy to imagine a scenario where a developer doesn’t properly configure an endpoint and then either doesn’t write the tests that validate which content types the endpoint accepts, or makes a copy/paste error (doesn’t update the test to call the correct endpoint). There is a good possibility such a bug won’t be caught until it reaches production. Let’s look at how a dynamic test can make this process easier: checking all POST endpoints to verify they accept required content types and, as new POST endpoints are added, automatically checking them as well.

Below is a dynamic test that accomplishes this; let’s walk through what this code is doing:

public class TestAcceptedMediaTypes {

    private static final MediaType[] SUPPORTED_MEDIA_TYPES = new MediaType[] {
            MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML };

    @TestFactory
    Collection<DynamicTest> testAcceptedMediaTypes() throws Exception {
        // Create a classpath scanner to find all controller classes in the project
        ClassPathBeanDefinitionScanner scanner = new ClassPathBeanDefinitionScanner(
                new DefaultListableBeanFactory(), false);
        scanner.addIncludeFilter(new AnnotationTypeFilter(RestController.class));
        Set<BeanDefinition> beanDefinitons = scanner.findCandidateComponents("com.bk.hotel");

        Set<Object> controllers = new HashSet<>();
        Set<Class<?>> controllersClasses = new HashSet<>();
        // Instantiate controller classes, mocking their constructor dependencies
        for (BeanDefinition beanDefiniton : beanDefinitons) {
            String className = beanDefiniton.getBeanClassName();
            Class<?> controllerClazz = ClassLoader.getSystemClassLoader().loadClass(className);
            controllersClasses.add(controllerClazz);
            Constructor<?> constructor = controllerClazz.getDeclaredConstructors()[0];
            Object[] arguments = new Object[constructor.getParameterTypes().length];
            int i = 0;
            for (Class<?> parameterType : constructor.getParameterTypes()) {
                arguments[i] = mock(parameterType);
                i++;
            }
            controllers.add(constructor.newInstance(arguments));
        }

        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(controllers.toArray()).build();
        List<DynamicTest> generatedTests = new ArrayList<>();
        // Check if the controller has a POST endpoint and call it with all the different
        // media types, failing if a 415 (unsupported media type) is returned
        for (Class<?> controllerClazz : controllersClasses) {
            RequestMapping mapping = controllerClazz.getAnnotationsByType(RequestMapping.class)[0];
            StringBuilder builder = new StringBuilder();
            builder.append(mapping.value()[0]);
            for (Method method : controllerClazz.getMethods()) {
                if (method.isAnnotationPresent(PostMapping.class)) {
                    for (MediaType mediaType : SUPPORTED_MEDIA_TYPES) {
                        generatedTests.add(dynamicTest(builder.toString() + " " + mediaType,
                                () -> mockMvc.perform(post(builder.toString()).accept(mediaType).contentType(mediaType))
                                        .andExpect(status().is(IsNot.not(415)))));
                    }
                }
            }
        }
        return generatedTests;
    }
}


The dynamic test can be broken down into roughly three steps:

  1. Scan the classpath for controller classes
  2. Mock out controller dependencies
  3. Build the tests to call the POST endpoints

To handle the first step we will make use of Spring’s ClassPathBeanDefinitionScanner. The classpath scanner is configured to look for any class annotated with @RestController and to scan from the root package of the project. This should find all controllers within the project; ClassPathBeanDefinitionScanner can easily be configured to narrow or widen its search criteria as needed in your project.

Once all the controllers in the project are found, we must figure out any dependencies the controllers require and create mocks, in this case dummies, of those dependencies. While this task could be accomplished if field injection were being used, it is made easier by using constructor injection. Using constructor injection instead of field injection helps to define the required elements a class must have in order to be in a usable state. It’s for this reason that Pivotal engineers and developer advocates recommend using constructor injection.
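The mock-instantiation step boils down to: grab the first declared constructor, build one argument per parameter, and call newInstance. Here is a library-free sketch of that reflection dance, using plain placeholder objects where the real test uses mock(parameterType) (the ConstructorReflection and Greeter names are invented for illustration):

```java
import java.lang.reflect.Constructor;

public class ConstructorReflection {

    // A stand-in for a controller with a constructor-injected dependency.
    public static class Greeter {
        private final Object dependency;

        public Greeter(Object dependency) {
            this.dependency = dependency;
        }

        public boolean hasDependency() {
            return dependency != null;
        }
    }

    // Instantiate via reflection, supplying one placeholder argument per
    // constructor parameter (the real test supplies mock(parameterType) instead).
    public static Object instantiate(Class<?> clazz) {
        try {
            Constructor<?> constructor = clazz.getDeclaredConstructors()[0];
            Object[] arguments = new Object[constructor.getParameterCount()];
            for (int i = 0; i < arguments.length; i++) {
                arguments[i] = new Object(); // placeholder "mock"
            }
            return constructor.newInstance(arguments);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not instantiate " + clazz, e);
        }
    }
}
```

Note this sketch only works when every constructor parameter can accept a plain Object; Mockito's mock(parameterType) is what makes the real version work for arbitrary dependency types.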

For the third step, we need to create the tests themselves. The first part of this step is building the URL that MockMvc will call. The code checks whether a @RequestMapping has been defined at the class level, which provides the base of the URL for the controller. Next we check if any methods in the controller have been annotated with @PostMapping. We are assuming proper RESTful practices, so the POST endpoint should reside at the base of the controller URL.

Finally we use mockMvc to execute the POST call, and what we check for is that a 415 (media type not accepted) is not returned. By checking that a 415 is not returned, we make the process of writing the dynamic test easier, as we are able to use dummies for our controller dependencies. If we were to check for a 200 (OK), this would not only add a lot of overhead, as the mock dependencies would need behavior, but would also distract from the purpose of the test: we are checking that POST endpoints accept required content types, not testing any of the code within the endpoints themselves. Applying the single responsibility principle to tests as well makes writing tests easier and automated test suites cleaner.

Executing the above code will return four hopefully passing tests that look like this:

This isn’t too bad of a test report, but as a RESTful API increases in complexity it might become a little difficult to read. Let’s look at how DynamicContainers can be used to make the test report a little easier to read and scale better as new controllers and POST endpoints are added.

public class TestAcceptedMediaTypes {

    private static final MediaType[] SUPPORTED_MEDIA_TYPES = new MediaType[] {
            MediaType.APPLICATION_JSON_UTF8, MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML };

    @TestFactory
    Collection<DynamicContainer> testAcceptedMediaTypes() throws Exception {
        List<DynamicContainer> dynamicContainers = new ArrayList<>();
        // Check if the controller has a POST endpoint and call it with all the different
        // media types, failing if a 415 (unsupported media type) is returned
        for (Class<?> controllerClazz : controllersClasses) {
            RequestMapping mapping = controllerClazz.getAnnotationsByType(RequestMapping.class)[0];
            StringBuilder builder = new StringBuilder();
            builder.append(mapping.value()[0]);
            for (Method method : controllerClazz.getMethods()) {
                if (method.isAnnotationPresent(PostMapping.class)) {
                    List<DynamicTest> dynamicTests = new ArrayList<>();
                    for (MediaType mediaType : SUPPORTED_MEDIA_TYPES) {
                        dynamicTests.add(dynamicTest(mediaType.toString(),
                                () -> mockMvc.perform(post(builder.toString()).contentType(mediaType))
                                        .andExpect(status().is(IsNot.not(415)))));
                    }
                    dynamicContainers.add(dynamicContainer(builder.toString(), dynamicTests));
                }
            }
        }
        return dynamicContainers;
    }
}

I cut out some of the code above for brevity, but what has changed is that the method signature now returns a Collection of DynamicContainers instead of DynamicTests. Next we have a variable, dynamicContainers, which is a List of DynamicContainers. Within the for loops where the tests are created there is now a second List, dynamicTests, and we add the tests we build to that list. Finally we add dynamicTests to dynamicContainers.

With those changes made, when we run the test again the report we get should look something like this:

Now the test for each content type is nested under the base URL of the controller. This will help keep the report readable as new controllers and POST endpoints are added.

When to Use Dynamic Testing

The benefit dynamic testing offers over statically writing tests doesn’t come in the form of time saved, but from avoiding toil and saving mental capacity. In the example above of verifying that POST endpoints can consume JSON and XML, it would be laborious to write out such a set of tests each time a new POST endpoint is added. In practice this task would often be accomplished by copying and pasting the tests that were used to verify another endpoint. As mentioned earlier, it’s pretty easy to imagine a developer forgetting to update those tests to call the correct endpoint.

This leads to what I think is the biggest tell (or smell, if you like) for when to use dynamic tests: copying and pasting tests from other test classes. Other common testing needs, for example security and database communication, often have a series of standard tests: if this role is present allow, if not deny; if all the required fields are present update the table, if not throw an error. Developers often save time by copying, pasting, and updating tests, but it would be better to write a single dynamic test that automatically generates new tests verifying common behavior as new endpoints, tables, security roles, etc. are added.

Conclusion

It’s exciting to see so many new possibilities that come from testing with JUnit 5. In an earlier article I covered how test interfaces introduced in JUnit 5 could be used to make testing easier while also encouraging developers to follow architectural patterns. As the JUnit team continues adding new features to JUnit 5, I’m sure we will find new and exciting ways to write tests that weren’t possible before.

The code in this article can be found here:

Simple dynamic test: https://github.com/wkorando/junit-5-simple-demonstrator

Practical dynamic test: https://github.com/wkorando/WelcomeToJunit5

  1. There is currently a bug in surefire where the display name of dynamic tests is not being used.

How to Test Logging in Java Part Two: Parallel Boogaloo

In the first article in this series we looked at a simple way of capturing and verifying logging statements using a static list appender. The static list appender method works great if you are executing your test suite in a single thread, but isn’t practical when executing tests in parallel due to all the logging statements being written to a single list.

Whether or not a test suite can be executed in parallel should not depend upon whether there are test cases verifying logging statements. We shouldn’t need to choose between a fast-executing test suite and an application that produces accurate logging statements. So in this article we are going to look at a couple of different methodologies for testing logging statements in a multi-threaded environment. We will also preview some of the new parallel testing features coming in JUnit 5.3, which should be released sometime in August.

Using a Thread Safe Appender

Parallel test execution works by spinning up multiple threads and using those threads to execute tests simultaneously. This causes havoc when using a single list to capture logging statements across the entire JVM, but a way around this is to create a new list in each thread. To accomplish this we will make use of Java’s ThreadLocal. Here is an example of an appender using ThreadLocal to capture logging statements:

public class ThreadSafeAppender extends AppenderBase<ILoggingEvent> {

    static ThreadLocal<List<ILoggingEvent>> threadLocal = new ThreadLocal<>();

    @Override
    public void append(ILoggingEvent e) {
        List<ILoggingEvent> events = threadLocal.get();
        if (events == null) {
            events = new ArrayList<>();
            threadLocal.set(events);
        }
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return threadLocal.get();
    }

    public static void clearEvents() {
        threadLocal.remove();
    }
}

In the code above we have the static field threadLocal, which holds a List of ILoggingEvents. When append() is called, the appender retrieves the List for the current thread. If no list is present, a new List is initialized and added to the ThreadLocal. Finally the logging statement is added to the list.

getEvents() retrieves the list of logging statements for the current thread and clearEvents() removes them. This class is pretty similar to StaticAppender, just with a few tweaks.
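To see why this is thread safe, here is a minimal, logback-free demonstration that each thread gets its own list from the same ThreadLocal field (the ThreadLocalDemo class and its method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadLocalDemo {

    // Each thread that touches EVENTS gets its own independent list.
    static final ThreadLocal<List<String>> EVENTS = ThreadLocal.withInitial(ArrayList::new);

    // Record an event in the calling thread's private list.
    static void record(String event) {
        EVENTS.get().add(event);
    }

    // Run a task in a fresh thread and return the list that thread built up.
    static List<String> runInNewThread(Runnable task) {
        List<List<String>> result = new ArrayList<>();
        Thread t = new Thread(() -> {
            task.run();
            result.add(EVENTS.get());
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result.get(0);
    }
}
```

(This sketch uses ThreadLocal.withInitial for brevity; the appender above does the equivalent null-check-then-set by hand.)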

Configuration will look more or less the same; we just reference the ThreadSafeAppender instead of StaticAppender:

<configuration>
    <appender name="threadsafe-appender" class="com.bk.logging.ThreadSafeAppender" />
    <root level="trace">
        <appender-ref ref="threadsafe-appender" />
    </root>
</configuration>


The next step is configuring Maven to execute the test suite in parallel. The updated surefire configuration looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.0</version>
                <configuration>
                    <properties>
                        <configurationParameters>
                            junit.jupiter.execution.parallel.enabled=true
                            junit.jupiter.execution.parallel.config.dynamic.factor=1
                        </configurationParameters>
                    </properties>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>


There are several different options for executing a test suite in parallel in JUnit 5.3; in this example we are using the dynamic strategy. Dynamic behavior with a factor of “1” means that for each core available on the computer executing the test suite, a thread is created to execute tests. So if a computer has two cores, JUnit will create two threads to execute tests with. There are also fixed and custom strategies, and you can read more about the parallel test support in JUnit 5.3 here.
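The arithmetic behind the dynamic strategy is simply threads = available cores × factor. A trivial sketch (the ParallelismMath class is an invented name for illustration, not JUnit API):

```java
public class ParallelismMath {

    // Worker threads under the dynamic strategy: available cores times the configured factor.
    static int dynamicThreads(int availableCores, int factor) {
        return availableCores * factor;
    }

    // In practice the core count comes from the JVM:
    static int coresOnThisMachine() {
        return Runtime.getRuntime().availableProcessors();
    }
}
```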

We will be reusing the same scenario from the last article of printing out two log statements: one in the same thread as the test case being executed, while also spinning up a separate thread to write a second log message. Here it is again:

public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for the future to complete
        } while (!future.isDone());
    }
}

This scenario is designed to demonstrate the major shortcoming of the thread safe method, which will be covered in more detail in a moment.

For the test class we will essentially set it up to run the same scenario twice, with only a small difference in the logging output:

public class TestThreadSafeAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        ThreadSafeAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! A");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! B");
    }
}

Looking at the test cases, I am only checking for a single logging statement despite two being written by the code under test. This is the major shortcoming of the ThreadSafeAppender: a new list is created for each thread, not for each test case. So if the code under test creates a new thread itself, any logging statement within that new thread is written to a separate list that the test case won’t have (easy) access to.

Typically this isn’t going to be a problem. First, writing code that spins up and executes code in a separate thread isn’t that common a need. Second, following the single responsibility principle, code that is executed in a separate thread should be in its own class, which can then be tested independently.

That said, there might still be times where this does happen. Maybe, like in the scenario above, the code being executed in a separate thread is so trivial that it might not warrant a separate class, or maybe you inherited a code base that didn’t follow the single responsibility principle and you want to write some tests covering that area before you refactor it. Luckily there is a solution, which I will cover below, but first I need to talk about issues with executing tests in parallel that apply to both methodologies.

The Joys of Concurrency

If you copied and pasted the above code into a project, it would work when executed within your preferred IDE (I personally tested in Eclipse); however, that is execution with only a single thread. If you tried running mvn test, you would probably get some test failures and text like the following printed to the console:

SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay

This is really only a problem that occurs when executing tests in parallel and wouldn’t really impact a deployed application. Unfortunately it will lead to “flickering” tests (tests that don’t pass or fail in a consistent manner), so this issue must be addressed.

First, to describe the problem: tests are being executed before logback has initialized. This is because logback is being initialized in a separate thread, and the threads executing the tests do not know this. SLF4J recognizes this, captures the log statements, and then replays them. SLF4J isn’t doing anything wrong, in fact it is being very helpful; unfortunately for us, the replay doesn’t happen until after the assert phase of some test cases, so they end up failing.

The good news is there is a work-around, and it is pretty easy to implement. We need a way to check if logback is ready, and to not let a test class execute until that check passes. Here is some code performing that check:

public class LogbackInitializerExtension implements BeforeAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        pauseTillLogbackReady();
    }

    private static void pauseTillLogbackReady() {
        while (!isLogbackReady());
    }

    private static boolean isLogbackReady() {
        return LoggerFactory.getILoggerFactory() instanceof LoggerContext;
    }
}

Before logback is ready, calling getILoggerFactory() will return a default SLF4J implementation of ILoggerFactory. So the while loop is designed to block execution of the test class until getILoggerFactory() returns the logback implementation of ILoggerFactory, LoggerContext.
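As an aside, this poll-until-ready idea generalizes to a small reusable helper. A sketch with a timeout guard so a condition that never becomes true can’t spin forever (the WaitUntil class is an invented name for illustration; the extension above deliberately omits the timeout):

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {

    // Spin until the condition is true or the timeout elapses; returns whether it passed.
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                return false;
            }
        }
        return true;
    }
}
```

With a helper like this, isLogbackReady() could be passed in as the condition, and a test run would fail fast with a clear signal instead of hanging if logback never initializes.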

For convenience, I implemented this behavior using JUnit 5’s new extension model, so that I can simply add @ExtendWith(LogbackInitializerExtension.class) at the top of a test class that will be checking log statements. Otherwise I would have to add a @BeforeAll method implementing or calling pauseTillLogbackReady() in every test class that is testing log statements. Don’t worry, unlike RunWith in JUnit 4, in JUnit 5 a class can have multiple extensions.

I can further simplify this process by creating a custom annotation, so instead of @ExtendWith(LogbackInitializerExtension.class), which might be confusing to remember, I can just add @LogbackInitializer to the top of a test class. Creating a custom annotation with JUnit 5 is pretty easy:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@ExtendWith(LogbackInitializerExtension.class)
public @interface LogbackInitializer {
}

Figuring this all out was admittedly a bit of a headache, but resolving the problems that crop up when executing tests in parallel doesn't require complex configuration or reliance on what I would call "winking" solutions, such as using wait timers to delay test case execution. Such solutions can lead to the flickering test failures described earlier in the article, or just unnecessarily slow down the execution of your test suite, which defeats the whole purpose of executing the test suite in parallel in the first place!

Using a Local Appender

The "Thread Safe" method is the one I would recommend for testing logging statements when executing tests in parallel, primarily because of its ease of configuration. However, as explained above, you might have inherited a poorly written code base or might not want to separate your code out. If that is the case, then you might want to try what I call the "Local Appender" method.

I call this the “Local Appender” method because you will be setting up a new appender for every test case where you will be verifying logging statements (i.e. an appender local to each test case). The advantage is this appender will capture logging statements executed in threads separate from the test case, resolving the major disadvantage of the Thread Safe method, but at cost of additional configuration overhead, which will be covered below.

The first step is implementing the appender:

public class LocalAppender extends AppenderBase<ILoggingEvent> {

    private List<ILoggingEvent> events = new ArrayList<>();

    public LocalAppender() {
        start();
    }

    public static LocalAppender initialize(String... loggers) {
        LocalAppender localAppender = new LocalAppender();
        localAppender.setContext((LoggerContext) LoggerFactory.getILoggerFactory());
        for (String loggerName : loggers) {
            Logger logger = (Logger) LoggerFactory.getLogger(loggerName);
            logger.addAppender(localAppender);
        }
        return localAppender;
    }

    public void cleanup() {
        this.stop();
        this.clearEvents();
    }

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public List<ILoggingEvent> getEvents() {
        return events;
    }

    public void clearEvents() {
        events.clear();
    }
}


The bare essential elements of the LocalAppender are pretty similar to the StaticAppender from the previous article; the only differences are that the List is an instance member instead of a static one, and that the constructor starts the appender.

The other differences aren't strictly necessary but are present for convenience. initialize() is a factory method for creating new LocalAppenders; it takes in logger coordinates and configures an instance of LocalAppender to only capture log statements from those loggers. The second method, cleanup(), as the name suggests, cleans up the appender by stopping it and clearing out all captured logging statements. This frees up system resources, but really isn't that important unless you are using a lot of LocalAppenders.

Let’s see what using the LocalAppender might look like in a test class:

@LogbackInitializer
public class TestLocalAppender {

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderA() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service A");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service A", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderB() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service B");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service B", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void justAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void yetAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }
}

So there’s a good amount going on here. Let’s walk through it step by step.

At the top I have @LogbackInitializer. As mentioned above, this is present to ensure that logback is initialized before the test cases begin executing.

Next you might notice all the test cases are marked with @ResourceLock. This is one of the new parallel features being added in JUnit 5.3. The JUnit team realized that the desire to execute tests in parallel isn’t entirely a binary decision. Most tests might be able to execute fine in parallel, but some tests might be sharing a resource, in this case logging, and so might have some special conditions around how they can be executed.

@ResourceLock takes two arguments: the resource key (which uses the value field), and mode, which takes a value of either READ or READ_WRITE (defaulting to READ_WRITE). Test cases marked READ can be executed at the same time as other test cases using the same resource marked READ, but will not be executed at the same time as a test case marked READ_WRITE that is also using the same resource. Finally, @ResourceLock is applied across the entire test suite. So if we were to add @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE) to a test case in one of the other test classes in the project, it wouldn't execute at the same time as either testLocalAppenderA() or testLocalAppenderB().

Note: There is no execution order guarantee when using @ResourceLock, so a test case marked READ might be executed before or after a test case marked READ_WRITE.
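The READ/READ_WRITE semantics described above mirror those of a standard read-write lock, so they can be illustrated with plain Java and no JUnit at all. This sketch uses java.util.concurrent's ReentrantReadWriteLock as an analogy, not JUnit's actual implementation:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ResourceLockSemantics {

    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Two READ holders can share the resource at the same time...
        lock.readLock().lock();
        boolean secondReaderAdmitted = lock.readLock().tryLock();
        System.out.println("two readers allowed: " + secondReaderAdmitted); // true

        // ...but a READ_WRITE holder is excluded while any reader holds it.
        boolean writerAdmitted = lock.writeLock().tryLock();
        System.out.println("writer allowed alongside readers: " + writerAdmitted); // false
    }
}
```

Substitute "test case marked READ" for "reader" and "test case marked READ_WRITE" for "writer" and you have the scheduling rule JUnit applies.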

Within the test cases themselves I am using the initialize() factory method I mentioned earlier, and this gets at the advantage of the LocalAppender: I can easily configure it to only capture logs from a very narrow area, typically the specific class (or classes) I am testing with that test case. With this fine-grained control over which logging statements to capture, and using features like @ResourceLock, we can guarantee only the expected logging statements are being captured.

Finally justAnotherTest() and yetAnotherTest() are dummy test cases I added to further demonstrate the @ResourceLock functionality. If those annotations are removed then testLocalAppenderA() or testLocalAppenderB() might begin to fail because of the additional logging statements being written during their test execution. I would recommend trying it out!

Conclusion

If you have a very large test suite, executing tests in parallel can be a good option for cutting down build times, particularly if you have a build server with a lot of resources. Using the common method of capturing and verifying logging statements, like described in the first article, would mean either having to ignore the occasional failing test, or configuring your build to ignore those tests when the test suite is being executed in parallel (if you look at the full project I link below, I demonstrate how this can be done). Luckily though with the methods covered in this article we can execute our test suite in parallel and verify our logging statements too.

I also hope this article excited you about the upcoming JUnit 5.3 release. JUnit 5.3 should be out sometime in August, and I will have an article covering some of the key new features in JUnit 5.3, as well as some changes in closely associated projects. That article will be published shortly after JUnit 5.3's release. Until then, happy testing!

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements

How to Test Logging in Java

Whether you are building microservices or monoliths, accurate and robust logging is a critical element in any application. Logging can give insight on how an application is being used and provide developers and operations with detailed information on why an error occurred. It is important then, like any other feature in an application, that we use automated tests to verify that logging statements are accurate and do not cause errors themselves.

In this two part series we will look at some strategies for capturing and verifying logging statements. In part one we will look at a simple and easy to implement strategy for when you are executing your test suite sequentially. In part two we will focus on a couple of strategies for when you are executing your test suite in parallel.

The Scenario

For demonstrating how to capture and assert logging statements we have the very useful and accurately named LogProducingService:

public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for future to complete
        } while (!future.isDone());
    }
}

The LogProducingService has a single method, writeSomeLoggingStatements(), which produces a pair of logging statements. One statement written within the same thread as that of the calling test case and another statement written in a separate thread (this will be more important in part two).
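The do/while loop above simply blocks until the background statement has been written. An equivalent, and arguably more idiomatic, way to wait is Future.get(), which blocks without spinning. Here is a sketch of that variant, with a StringBuilder standing in for the logger call (the names here are hypothetical, not from the project):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WaitForFuture {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        StringBuilder sideEffect = new StringBuilder();

        // Stand-in for LOGGER.info(...) executing on a separate thread.
        Future<?> future = executor.submit(() -> sideEffect.append("This message is in a separate thread"));

        // Blocks the calling thread until the task completes, instead of busy-waiting.
        future.get();
        System.out.println(sideEffect); // prints the message written by the other thread
        executor.shutdown();
    }
}
```

Either way, the point is the same: the method does not return until both logging statements have been produced, so the test can assert on both.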

Implementing a Custom Appender

In normal operation an application will typically be using either some form of a file appender to write statements to system logs, or in more recent trends, using an appender that sends statements to a message queue such as Kafka. There are some other options, but none are practical for verifying if the logging statements being produced are valid. So to handle this we will need to implement our own appender like this one:

public class StaticAppender extends AppenderBase<ILoggingEvent> {

    static List<ILoggingEvent> events = new ArrayList<>();

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return events;
    }

    public static void clearEvents() {
        events.clear();
    }
}


In the above class I am extending AppenderBase and instead of printing the logging statements out to a file or sending them to a message queue I am adding them to a List. To retrieve the logging statements for asserting in a test case there is getEvents(), and to clear the list out between test cases I added clearEvents().

I named this class StaticAppender because the list I am storing the logging statements in, as well as the methods to retrieve and clear that list, are all static. This is necessary because I want to globally configure my test suite using logback-test.xml, and the instance of StaticAppender that logback creates won't be easily available to my test code. By declaring the aforementioned fields and methods static, I have easy access to the logging statements my application is producing from within my test cases.
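The effect of making that storage static can be sketched without logback at all. In this minimal, hypothetical example, every instance the framework creates writes into the same list, which test code can then read through the class itself:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class StaticCapture {

    // Shared across every instance, so code that only has access to the class
    // (like a test case) still sees events recorded by instances created
    // elsewhere (like an appender instantiated by logback).
    static List<String> events = Collections.synchronizedList(new ArrayList<>());

    void append(String event) {
        events.add(event);
    }

    public static void main(String[] args) {
        // Two separate instances, as a framework might create...
        new StaticCapture().append("first");
        new StaticCapture().append("second");

        // ...yet both events are visible through the class itself.
        System.out.println(StaticCapture.events); // prints [first, second]
    }
}
```

This is exactly the property StaticAppender relies on, and, as the next article discusses, also the source of its trouble under parallel execution.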

After implementing the appender, we need to configure logback to use it:

<configuration>
<appender name="static-appender" class="com.bk.logging.StaticAppender" />
<root level="trace">
<appender-ref ref="static-appender" />
</root>
</configuration>


As mentioned above, I am configuring logback for my test suite using logback-test.xml, which I placed in the src/test/resources directory of the project. I did this because, by order of precedence, logback will load logback-test.xml first if it is present on the classpath. So even though I have logback.xml defined in src/main/resources, when executing my test suite I will be using the StaticAppender instead of the ConsoleAppender defined in logback.xml. This is important, as we don't want to modify production code so that it is aware it is running in a test scenario versus actually being deployed; that is a big no-no.
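For reference, the production-side logback.xml mentioned above might look something like this. This is a minimal sketch: the appender name and pattern are assumptions for illustration, not taken from the project:

```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="console" />
  </root>
</configuration>
```

Because logback-test.xml wins when both files are on the classpath, this file governs only the deployed application, and the test suite silently swaps in the StaticAppender.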

Finally the test case would look something like this:

public class TestStaticAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        StaticAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! A",
                "This message is in a separate thread");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! B",
                "This message is in a separate thread");
    }
}

I am a huge fan of AssertJ, which I am making use of here to quickly extract out the message field from a list of ILoggingEvent and assert its value. If you want to learn more about using AssertJ in your automated testing, I wrote a blog article on that as well.

Parallel Concerns

When executing a test suite in a single thread, as is often the case, using a static appender should work fine, as long as you are clearing out logging statements between test cases like I am doing in clearLoggingStatements(). However, if you have a large test suite and want to execute tests in parallel to speed up your build times, you will run into issues with the StaticAppender. While multiple test cases are being executed simultaneously, all their logging statements are written to a single List, so a test case has no way of knowing which log statements were produced by the code it is exercising versus the code exercised by other test cases. This could also lead to logging statements being deleted when clearEvents() is called mid test execution.
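That interference can be sketched in plain Java, with no logback involved. Here, two "test cases" running on separate threads both record into the shared static list, so neither can tell its own events apart from the other's (the class and names are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ParallelInterference {

    // One list shared by every "test case", as with StaticAppender.
    static List<String> events = Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws InterruptedException {
        Thread testA = new Thread(() -> events.add("log from test A"));
        Thread testB = new Thread(() -> events.add("log from test B"));
        testA.start();
        testB.start();
        testA.join();
        testB.join();

        // Both test cases' statements land in the same list: an assertion in
        // test A that expects *only* its own statement would fail here.
        System.out.println(events.size()); // prints 2
    }
}
```

An assertThat(...).containsOnly(...) written against this shared list would intermittently fail depending on thread scheduling, which is exactly the flickering failure mode parallel execution exposes.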

Luckily, the issues around executing tests in parallel can be overcome with relative ease, which I will cover in the next article in this series.

Conclusion

Capturing and verifying logging statements is actually a pretty simple task as seen in this article. No additional libraries, beyond your logging libraries, are needed, nor is complex configuration.

While testing logging output might never be the highest priority for a development shop, it is useful to know that testing logging output is easy and practical. Writing some tests for the logging statements your application is producing might save you some frustration next time you have a production issue and you are able to look through accurate and detailed logs as you track down the cause of the problem.

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements