Why You Should Start Injecting Mocks as Method Arguments

One of the big improvements that came in JUnit 5 was support for dependency injection via constructors and methods. Since the release of JUnit 5 in September 2017, third-party libraries like Mockito have started providing native support for constructor and method injection. In this article we will take a quick look at how to use constructor and method injection with Mockito, and then look at why you should start injecting mocks as method arguments in your automated tests.

How to Inject a Mock as an Argument

Starting with version 2.21.0 (the current version being 2.25.0), Mockito has provided support for injecting mocks as both constructor and method arguments. Let's look at how you can start using dependency injection with Mockito.

You will need to annotate your test class with @ExtendWith(MockitoExtension.class). Then, for any argument you would like Mockito to provide a mock for, simply annotate the argument with @Mock. Here is an example of Mockito dependency injection in action:

@ExtendWith(MockitoExtension.class)
public class TestMockitoInjection {

    private BoringService service;

    public TestMockitoInjection(@Mock BoringService service) {
        this.service = service;
    }

    @Test
    public void testConstructorInjectedValue() {
        when(service.returnNumber()).thenReturn(2);
        assertEquals(2, service.returnNumber());
    }

    @Test
    public void testMethodInjection(@Mock BoringService service) {
        when(service.returnNumber()).thenReturn(3);
        assertEquals(3, service.returnNumber());
    }

    public class BoringService {
        public int returnNumber() {
            return 1;
        }
    }
}

Pretty simple and straightforward. Let's now look at why you should start using method injection of mocks.

The Case for Injecting Mocks as Method Arguments

There are three major benefits that come from automated testing: speed, repeatability, and auditability. The first two are well-understood benefits of automated testing; auditability, if not less well understood, is definitely less often discussed. Auditability, within the context of automated testing, refers to being able to see what code has been tested and the intent of each test.

Code coverage can be achieved without spending much time thinking about how other people (developers, test engineers, business analysts, etc.) might use automated tests to understand, i.e. audit, the system the tests are covering. Tests with names like testSuccess, testFail, and testFail2 can be executed just fine, but do little to communicate their intent. For an automated test suite to be properly auditable, test names need to clearly convey the behavior being tested. While a test with a name like testRollbackAddUserAddressValidationError is a bit of a mouthful, it pretty clearly describes the scenario the test covers.

While testRollbackAddUserAddressValidationError() conveys intent, understanding the scope of the test, i.e. which dependencies the code under test interacts with, would require inspecting the code within the test case itself. However, we can begin to communicate scope by injecting mocks as method arguments. Doing that with the above test gives us testRollbackAddUserAddressValidationError(@Mock UserDao userDao). Now, just from reading the signature of the test case, we can determine that the scope of the test also includes interacting with the UserDao class.

When executing tests as a group, we can better see the benefits of injecting mocks as method arguments. Below is an example of running two test classes performing the same set of tests, but one is using mocks at the class level, while the other is using method injection. From the JUnit report alone, we can understand that UserService depends upon the UserDao and AddressDao classes.

[Screenshot: JUnit report for the two test classes]

Note: Another new feature in JUnit 5 is nested tests, which are being used here.

Conclusion

Injecting mocks as method arguments isn't game changing, but it can make tests easier to read, and thus audit, by communicating the scope of the test in its signature. While there will be instances where passing a mock in as a method argument isn't practical, those should generally be rare*, so hopefully this article encourages you to use method injection when you are working with mocks.

The code used in this article can be found in this gist, and also this repo.

* Complex mock setup should be seen as a smell that the mock, the test, or the code under test (or all three) has too many responsibilities.

Handling and Verifying Exceptions in JUnit 5

JUnit 5 offers a number of improvements over JUnit 4. In this article we will take a quick look at how exceptions are handled and verified in JUnit 4, and then see how the new assertThrows() in JUnit 5 improves the usability and readability when catching and verifying exceptions.

Handling and Verifying Exceptions in JUnit 4

In JUnit 4 there are two primary ways of handling exceptions. The most commonly used is the expected field in @Test. The alternative is using a @Rule with ExpectedException. Below are examples of both:

public class TestHandleExceptionsJUnit4 {

    @Rule
    public ExpectedException expectedException = ExpectedException.none();

    @Test(expected = SpecificException.class)
    public void testSpecificExceptionHandling() throws SpecificException {
        onlyThrowsExceptions();
    }

    @Test(expected = Exception.class) // Passes because Exception is a super type of SpecificException
    public void testExceptionHandling() throws SpecificException {
        onlyThrowsExceptions();
    }

    @Test(expected = SpecificException.class)
    public void testExceptionHandlingVerifyExceptionFields() throws SpecificException {
        try {
            onlyThrowsExceptions();
        } catch (SpecificException e) {
            assertEquals("An exception was thrown!", e.getMessage());
            throw e;
        }
    }

    @Test
    public void testUseExpectedException() throws SpecificException {
        expectedException.expect(SpecificException.class);
        expectedException.expectMessage("An exception was thrown!");
        onlyThrowsExceptions();
    }

    @Test
    public void testUseExpectedExceptionWithSuperType() throws SpecificException {
        expectedException.expect(Exception.class); // Passes because Exception is a super type of SpecificException
        expectedException.expectMessage("An exception was thrown!");
        onlyThrowsExceptions();
    }

    public void onlyThrowsExceptions() throws SpecificException {
        throw new SpecificException("An exception was thrown!");
    }

    public class SpecificException extends Exception {
        public SpecificException(String message) {
            super(message);
        }
    }
}

While both methods are capable of catching and verifying exceptions, each has issues that impact its usability and readability. Let's step through some of these issues with expected and ExpectedException.

When using expected, not only are you putting some of the assertion behavior into the definition of the test case, but verifying fields within the thrown exception is also a bit clunky. To verify the fields of an exception you'd have to add a try/catch within the test case, perform the additional assertions within the catch block, and then rethrow the caught exception.

When using ExpectedException, you have to initially declare it with none(), i.e. no exception expected, which is a bit confusing. Within a test case you define the expected behavior before invoking the method under test. This is similar to how you would set up a mock, but it isn't intuitive, as a thrown exception is a "returned" value, not a dependency nor something internal to the code under test.

These oddities significantly impact the usability and readability of test cases in JUnit 4 that verify exception behavior. The latter is by no means a trivial problem, as being easy to read is one of, if not the, most important characteristics of test code. It is not surprising, then, that exception handling behavior was heavily rewritten in JUnit 5.

Introducing assertThrows()

In JUnit 5, the above two methods of handling and verifying exceptions have been rolled into the much more straightforward and easier to use assertThrows(). assertThrows() requires two arguments, a Class<T> and an Executable, and can also take an optional third argument of either a String or a Supplier<String>, used to provide a custom error message if the assertion fails. assertThrows() returns the thrown exception, which allows for further inspection and verification of its fields.

Below is an example of assertThrows() in action:

public class TestHandleExceptionsJUnit5 {

    @Test
    public void testExceptionHandling() {
        Exception e = assertThrows(SpecificException.class, () -> onlyThrowsExceptions());
        assertEquals("An exception was thrown!", e.getMessage());
    }

    @Test
    public void testExceptionHandlingFailWrongExceptionType() {
        assertThrows(Exception.class, () -> doesntThrowExceptions(), "Wrong exception type thrown!");
    }

    @Test
    public void testExceptionHandlingFailNoExceptionThrown() {
        assertThrows(SpecificException.class, () -> doesntThrowExceptions(), "An exception wasn't thrown!");
    }

    public void onlyThrowsExceptions() throws SpecificException {
        throw new SpecificException("An exception was thrown!");
    }

    public void doesntThrowExceptions() {
        // do nothing
    }

    public class SpecificException extends Exception {
        public SpecificException(String message) {
            super(message);
        }
    }
}

As can be seen above, assertThrows() is much cleaner and easier to use than either method in JUnit 4. Let's take a closer look at assertThrows() and some of its more subtle improvements.

The second argument, the Executable, is where JUnit 5's requirement of Java 8 starts to show its benefits. Executable is a functional interface, which allows, with the use of a lambda, directly executing the code under test within the declaration of assertThrows(). This not only makes it easier to check whether an exception is thrown, but also allows assertThrows() to return the thrown exception so additional verification can be done.
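To see how a functional interface enables this pattern, below is a minimal, framework-free sketch of how an assertThrows-style helper could be written in plain Java. This is only an illustration of the pattern, not JUnit 5's actual implementation; the names AssertThrowsSketch and assertThrowsSketch are stand-ins.

```java
// A sketch of an assertThrows-style helper built on a functional interface.
// Not JUnit 5's real implementation; purely illustrative.
public class AssertThrowsSketch {

    // Like JUnit 5's Executable: a block of code under test that may throw anything
    @FunctionalInterface
    public interface Executable {
        void execute() throws Throwable;
    }

    // Runs the executable, fails if nothing (or the wrong type) is thrown,
    // and returns the caught exception so callers can inspect its fields
    public static <T extends Throwable> T assertThrowsSketch(Class<T> expectedType, Executable executable) {
        try {
            executable.execute();
        } catch (Throwable actual) {
            if (expectedType.isInstance(actual)) {
                return expectedType.cast(actual);
            }
            throw new AssertionError("Expected " + expectedType.getName()
                    + " but caught " + actual.getClass().getName());
        }
        throw new AssertionError("Expected " + expectedType.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // The lambda runs inside the helper; the thrown exception is handed back
        IllegalStateException e = assertThrowsSketch(IllegalStateException.class,
                () -> { throw new IllegalStateException("An exception was thrown!"); });
        System.out.println(e.getMessage());
    }
}
```

Because the helper returns the caught exception, follow-up assertions on its message or other fields read naturally after the call, which is exactly the usability win assertThrows() delivers.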

Conclusion

assertThrows() offers significant improvements to usability and readability when verifying exception behavior for code under test. This is consistent with many of the changes made in JUnit 5, which have made the writing and reading of tests easier. If you haven't yet made the switch to JUnit 5, I hope seeing the improvements in exception handling and verification helps build the case for making the switch.

The code used in this article can be found here: https://github.com/wkorando/junit-5-simple-demonstrator.

EDIT: An earlier version of this blog said that assertThrows()​ doesn’t support exception subtypes, that is incorrect.

What’s New in JUnit 5.4

It’s a new year and with that comes another release of the JUnit 5 framework! In this article we will look at some of the big new features released in JUnit 5.4.

Ordering Test Case Execution

I have personally been looking forward to this feature for some time now. While unit tests by definition should be isolated from one another, JUnit covers a space larger than "just" unit testing. In my case, I have wanted to be able to explicitly define test execution order to resolve an issue in an integration test scenario in a project demonstrating JUnit 5.

The goal of the integration test is to validate that the application can communicate with a Postgres database. In the test class, which is making use of TestContainers, three behaviors are verified: reading from, mapping, and writing to the database. For reading from the database, a simple count of the number of records is used, which would obviously be impacted by writing a new record to the database. While tests in JUnit 5 are executed in a consistent order, it is "intentionally nonobvious" how that order is determined. With JUnit 5.4, we can finally define an explicit test execution order.

Let’s take a look at how to order test cases in a class (full class here):

@ContextConfiguration(classes = { HotelApplication.class }, initializers = ITCustomerJUnit5Repo.Initializer.class)
@DirtiesContext(classMode = ClassMode.BEFORE_EACH_TEST_METHOD)
@TestMethodOrder(OrderAnnotation.class)
public class ITCustomerJUnit5Repo {

    // Removed code, for clarity

    @Test
    @Order(1)
    public void testCountNumberOfCustomersInDB() {
        assertEquals(2, repo.count());
    }

    @Test
    public void testRetrieveCustomerFromDatabase() {
        Customer customer = repo.findAll().iterator().next();
        assertEquals("John", customer.getFirstName());
        assertEquals("Doe", customer.getLastName());
        assertEquals("Middle", customer.getMiddleName());
        assertEquals("", customer.getSuffix());
    }

    @Test
    public void testAddCustomerToDB() throws ParseException {
        Customer customer = new Customer.CustomerBuilder().firstName("BoJack").middleName("Horse").lastName("Horseman")
                .suffix("Sr.").build();
        repo.save(customer);
        assertEquals(3, repo.count());
    }
}

To enable ordering of test cases in a class, the class must be annotated with the @TestMethodOrder extension, and an ordering type of Alphanumeric, OrderAnnotation, or Random must be provided.

  • Alphanumeric orders test execution based on the method name* of the test case.
  • OrderAnnotation allows for a custom-defined execution order using @Order, as shown above.
  • Random orders test cases pseudo-randomly; the random seed can be defined by setting the property junit.jupiter.execution.order.random.seed in your build file.
  • You can also create your own custom method orderer by implementing the interface org.junit.jupiter.api.MethodOrderer.

*A test case’s @DisplayName, if defined, will not be used to determine ordering.
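To get a feel for what an orderer decides, here is a framework-free sketch of alphanumeric ordering using plain reflection. JUnit's actual MethodOrderer API works differently (it receives method descriptors to sort in place); this just illustrates sorting test methods by name, and SampleTests is a made-up class for the demonstration:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrates the idea behind Alphanumeric ordering: collect a class's test
// methods and sort them by name to determine execution order.
public class AlphanumericOrderSketch {

    public static List<String> orderedTestNames(Class<?> testClass) {
        return Arrays.stream(testClass.getDeclaredMethods())
                .map(Method::getName)
                .sorted(Comparator.naturalOrder())
                .collect(Collectors.toList());
    }

    // A stand-in test class; declaration order differs from alphabetical order
    static class SampleTests {
        void testB() {}
        void testA() {}
        void testC() {}
    }

    public static void main(String[] args) {
        // Prints the names in the order an alphanumeric orderer would run them
        System.out.println(orderedTestNames(SampleTests.class));
    }
}
```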

Order Only the Tests that Matter

When using OrderAnnotation, you should note (and this can be seen in the code example above) that you don't have to define an execution order for every test case in a class. In the example above only one test, testCountNumberOfCustomersInDB, has an explicit execution order, as it is the only test case that will be impacted by a change in state. By default, JUnit will execute any tests without a defined execution order after all tests that do have one. If you have multiple unordered tests, as is the case above, they will be executed in the default deterministic, but "nonobvious", execution order that JUnit 5 typically uses.

This design decision is helpful not only for the obvious reason of requiring less work, but also because it helps prevent polluting tests with superfluous information. Adding an execution order to a test that does not need it can lead to confusion: if a test begins to fail, a developer or test automation specialist might spend time fiddling with execution order when the cause of the failure is unrelated to it. Leaving a test without a defined execution order states that the test is not impacted by state change. In short, omitting @Order on test cases that do not require it should be actively encouraged.

Extension Ordering

The new ordering functionality isn't limited to just ordering the execution of test cases. You can also order how programmatically registered extensions, i.e. extensions registered with @RegisterExtension, are executed. This can be useful when tests have complex setup/teardown behavior spanning separate domains, for example testing how a cache and a database are used together.

While extensions, like test cases, by default execute in a consistent but "intentionally nonobvious" order, with @Order an explicit and consistent extension execution order can be defined. In the below example a simple extension is defined which prints out the value passed into its constructor:

public class TestExtensionOrdering {

    @RegisterExtension
    @Order(3)
    static ExampleJUnit5Extension extensionA = new ExampleJUnit5Extension("A");

    @RegisterExtension
    @Order(2)
    static ExampleJUnit5Extension extensionB = new ExampleJUnit5Extension("B");

    @RegisterExtension
    @Order(1)
    static ExampleJUnit5Extension extensionC = new ExampleJUnit5Extension("C");

    @Test
    public void testCaseA() {
        // Do nothing
    }

    @Test
    public void testCaseB() {
        // Do nothing
    }

    public static class ExampleJUnit5Extension
            implements BeforeAllCallback, AfterAllCallback, BeforeEachCallback, AfterEachCallback {

        private String value;

        public ExampleJUnit5Extension(String value) {
            this.value = value;
        }

        @Override
        public void beforeAll(ExtensionContext context) throws Exception {
            System.out.println("Executing beforeAll with value:" + value);
        }

        @Override
        public void afterAll(ExtensionContext context) throws Exception {
            System.out.println("Executing afterAll with value:" + value);
        }

        @Override
        public void afterEach(ExtensionContext context) throws Exception {
            System.out.println("Executing afterEach with value:" + value);
        }

        @Override
        public void beforeEach(ExtensionContext context) throws Exception {
            System.out.println("Executing beforeEach with value:" + value);
        }
    }
}

Here is the console output from executing the above test class:

Executing beforeAll with value:C
Executing beforeAll with value:B
Executing beforeAll with value:A
Executing beforeEach with value:C
Executing beforeEach with value:B
Executing beforeEach with value:A
Executing afterEach with value:A
Executing afterEach with value:B
Executing afterEach with value:C
Executing beforeEach with value:C
Executing beforeEach with value:B
Executing beforeEach with value:A
Executing afterEach with value:A
Executing afterEach with value:B
Executing afterEach with value:C
Executing afterAll with value:A
Executing afterAll with value:B
Executing afterAll with value:C

Aggregate Artifact

A frequent question/concern I have heard when presenting on JUnit 5 is the large number of dependencies required to use it. With the 5.4 release the JUnit team now provides the junit-jupiter aggregate artifact, which bundles junit-jupiter-api, junit-jupiter-params, and junit-jupiter-engine, so this one artifact should cover most needs when using JUnit 5. This change should help slim down the Maven and Gradle files of projects using JUnit 5, as well as make JUnit 5 easier to use in general. Below shows the "slimming" effect of the new aggregate artifact:

<!-- New aggregate dependency -->
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
</dependency>

<!-- Old dependencies -->
<!--
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <scope>test</scope>
</dependency>
-->

TempDir

@TempDir originally began its life as part of the JUnit Pioneer third-party library. With the release of 5.4, @TempDir has been added as a native feature of the JUnit framework. @TempDir makes validating file I/O behavior easier by handling the setup and teardown of a temporary directory within the lifecycle of a test class. @TempDir can be injected in two ways, as a method argument or as a class field, and must be used with either a Path or File type; it cannot be injected as a constructor argument. Let's take a look at @TempDir in action:

@TestMethodOrder(OrderAnnotation.class)
public class TestTempDir {

    @TempDir
    static Path classTempDir;

    @TempDir
    static File classTempDirAsFile;

    @Test
    @Order(1)
    public void useAsClassValue() throws IOException {
        File file = classTempDir.resolve("temp.txt").toFile();
        FileUtils.write(file, "A", StandardCharsets.ISO_8859_1, true);
        assertEquals("A", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }

    @Test
    @Order(2)
    public void useAsClassValuePart2() throws IOException {
        File file = classTempDir.resolve("temp.txt").toFile();
        FileUtils.write(file, "B", StandardCharsets.ISO_8859_1, true);
        assertEquals("AB", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }

    @Test
    @Order(3)
    public void injectAsMethodValue(@TempDir Path argumentTempDir) throws IOException {
        File file = argumentTempDir.resolve("temp.txt").toFile();
        FileUtils.write(file, "C", StandardCharsets.ISO_8859_1, true);
        assertEquals("ABC", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }

    @Test
    @Order(4)
    public void injectAsMethodValuePart2(@TempDir Path argumentTempDir) throws IOException {
        File file = argumentTempDir.resolve("temp.txt").toFile();
        FileUtils.write(file, "D", StandardCharsets.ISO_8859_1, true);
        assertEquals("ABCD", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }

    @Test
    @Order(5)
    public void useAsClassFileValue() throws IOException {
        File file = new File(classTempDirAsFile, "temp.txt");
        FileUtils.write(file, "E", StandardCharsets.ISO_8859_1, true);
        assertEquals("ABCDE", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }

    @Test
    @Order(6)
    public void injectAsMethodFileValue(@TempDir File tempFile) throws IOException {
        File file = new File(tempFile, "temp.txt");
        FileUtils.write(file, "F", StandardCharsets.ISO_8859_1, true);
        assertEquals("ABCDEF", FileUtils.readFileToString(file, StandardCharsets.ISO_8859_1));
    }
}


Note: The same directory is shared across a test class even if you inject a @TempDir in multiple locations.

TestKit

TestKit was added in 5.4 as a way to perform meta-analysis on a test suite. TestKit can be used to check the number of executed, passed, failed, and skipped tests, as well as a few other behaviors. Let's take a look at how you can check for tests being skipped when executing a test suite.

public class TestKitExample {

    @Test
    void failIfTestsAreSkipped() {
        Events testEvents = EngineTestKit
                .engine("junit-jupiter")
                .selectors(selectClass(TestKitSubject.class))
                .execute()
                .tests();
        testEvents.assertStatistics(stats -> stats.skipped(1));
    }
}


public class TestKitSubject {

    @Test
    public void fakeRunningTest() {
    }

    @Test
    @Disabled
    public void fakeDisabledTest() {
    }
}


To use TestKit you will need to add the junit-platform-testkit dependency to your build file.
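For reference, in a Maven project that dependency might look like the following (the version shown assumes the JUnit Platform 1.4 release that accompanies JUnit 5.4; adjust as needed):

```xml
<dependency>
    <groupId>org.junit.platform</groupId>
    <artifactId>junit-platform-testkit</artifactId>
    <version>1.4.0</version>
    <scope>test</scope>
</dependency>
```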

But That’s not All…

Another new feature added in 5.4 is the new display name generator. Lee Turner has already written a great article on it, so rather than re-explaining it in this article, check out his: https://leeturner.me/blog/2019/02/building-a-camel-case-junit5-displaynamegenerator.html

This is only a highlight of some of the new features in JUnit 5.4. To view all the new features, changes, and bug fixes, check out the release notes the JUnit team maintains: https://junit.org/junit5/docs/current/release-notes/

Also be sure to check out the JUnit 5 user guides for examples on how to use all the features in JUnit 5: https://junit.org/junit5/docs/current/user-guide/index.html

Conclusion

I have been continually impressed by the JUnit team’s steady work improving the JUnit 5 framework. In a little under a year and a half we have now seen four minor releases. As someone who has come to deeply appreciate and advocate for automated testing over the past couple of years, I am happy to see the JUnit team aggressively adding new features to JUnit 5 and taking in feedback from the community and other testing frameworks like Spock, TestNG, and others.

To view the code used in this article check out my project github page here: https://github.com/wkorando/WelcomeToJunit5

How to Test Logging in Java Part Two: Parallel Boogaloo

In the first article in this series we looked at a simple way of capturing and verifying logging statements using a static list appender. The static list appender method works great if you are executing your test suite in a single thread, but isn't practical when executing tests in parallel, as all logging statements get written to a single list.

Whether or not a test suite can be executed in parallel should not depend upon whether there are test cases verifying logging statements. We shouldn't have to choose between a fast-executing test suite and an application that produces accurate logging statements. So in this article we are going to look at a couple of different methodologies for testing logging statements in a multi-threaded environment. We will also preview some of the new parallel testing features coming in JUnit 5.3, which should be released sometime in August.

Using a Thread Safe Appender

Parallel test execution works by spinning up multiple threads and using those threads to execute tests simultaneously. This causes havoc when using a single list to capture logging statements across the entire JVM, but a way around this is to create a new list in each thread. To accomplish this we will make use of Java's ThreadLocal. Here is an example of an appender using ThreadLocal to capture logging statements:

public class ThreadSafeAppender extends AppenderBase<ILoggingEvent> {

    static ThreadLocal<List<ILoggingEvent>> threadLocal = new ThreadLocal<>();

    @Override
    public void append(ILoggingEvent e) {
        List<ILoggingEvent> events = threadLocal.get();
        if (events == null) {
            events = new ArrayList<>();
            threadLocal.set(events);
        }
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return threadLocal.get();
    }

    public static void clearEvents() {
        threadLocal.remove();
    }
}

In the code above we have the static field threadLocal, which holds a List of ILoggingEvent. When append() is called, the appender retrieves the List for the current thread. If no list is present, a new List is initialized and added to the ThreadLocal. Finally, the logging statement is added to the list.

getEvents() retrieves the list of logging statements for the current thread, and clearEvents() removes them. This class is pretty similar to StaticAppender, just with a few tweaks.
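The per-thread isolation this appender relies on can be demonstrated with plain Java, independent of logback. Below is a small sketch where strings stand in for ILoggingEvent and the class name is made up for the demonstration:

```java
import java.util.ArrayList;
import java.util.List;

// A framework-free sketch of the per-thread capture idea behind ThreadSafeAppender:
// each thread lazily gets its own list, so "events" recorded by concurrent
// threads never mix.
public class ThreadLocalCaptureDemo {

    static final ThreadLocal<List<String>> events = ThreadLocal.withInitial(ArrayList::new);

    static void append(String event) {
        events.get().add(event); // goes to the current thread's private list
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> append("from worker thread"));
        append("from main thread");
        worker.start();
        worker.join();
        // Only the main thread's event is visible here; the worker's event
        // lives in the worker thread's own list.
        System.out.println(events.get());
    }
}
```

Running main prints only the main thread's event, which is exactly why tests running on different threads won't see each other's logging statements.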

Configuration will look more or less the same; we just reference the ThreadSafeAppender instead of the StaticAppender:

<configuration>
    <appender name="threadsafe-appender" class="com.bk.logging.ThreadSafeAppender" />
    <root level="trace">
        <appender-ref ref="threadsafe-appender" />
    </root>
</configuration>


The next step is configuring Maven to execute the test suite in parallel. The updated surefire configuration looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.0</version>
                <configuration>
                    <properties>
                        <configurationParameters>
                            junit.jupiter.execution.parallel.enabled=true
                            junit.jupiter.execution.parallel.config.dynamic.factor=1
                        </configurationParameters>
                    </properties>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>


There are several different options for executing a test suite in parallel in JUnit 5.3; in this example we are using the dynamic behavior. Dynamic behavior with a factor of "1" means that for each core available on the computer executing the test suite, JUnit creates a thread to execute tests with. So if a computer has two cores, JUnit will create two threads. There are also fixed and custom strategies, and you can read more about the parallel test support in JUnit 5.3 here.
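The thread count the dynamic strategy arrives at can be sketched as a one-line computation. The property name in the comment is JUnit's; the class and method here are made up purely to illustrate the arithmetic:

```java
// Illustrates how the "dynamic" parallelism strategy sizes its pool:
// the configured factor multiplied by the number of available cores.
public class DynamicFactorDemo {

    public static int parallelism(int factor) {
        return factor * Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        // With junit.jupiter.execution.parallel.config.dynamic.factor=1,
        // a machine with N cores gets N test threads.
        System.out.println(parallelism(1));
    }
}
```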

We will be reusing the same scenario from the last article: printing one log statement in the thread of the test case being executed, and spinning up a separate thread to write a second log message. Here it is again:

public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for future to complete
        } while (!future.isDone());
    }
}

This scenario is designed to demonstrate the major shortcoming of the thread safe method, which will be covered in more detail in a moment.

For the test class we will essentially run the same scenario twice, with only a small difference in the logging output:

public class TestThreadSafeAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        ThreadSafeAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! A");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(ThreadSafeAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! B");
    }
}

Looking at the test cases, I am only checking for a single logging statement despite two being written by the code under test. This is the major shortcoming of the ThreadSafeAppender: a new list is created for each thread, not for each test case. So if the code under test creates a new thread itself, any logging statement within that new thread will be written to a separate list that the test case won't have (easy) access to.

Typically this isn't going to be a problem. First, writing code that spins up and executes code in a separate thread isn't that common a need. Second, following the single responsibility principle, code that is executed in a separate thread should be in its own class, which can then be tested independently.

That said, there might still be times when this does happen. Maybe, like in the scenario above, the code being executed in a separate thread is so trivial that it doesn't warrant a separate class, or maybe you inherited a code base that didn't follow the single responsibility principle and you want to write some tests covering that area before you refactor it. Luckily there is a solution, which I will cover below, but first I need to talk about issues with executing tests in parallel that apply to both methodologies.

The Joys of Concurrency

If you copied and pasted the above code into a project, it would work when executed within your preferred IDE (I personally tested in Eclipse); however, that is only execution in a single thread. If you tried running mvn test, you would probably get some test failures, along with text like the below printed to the console:

SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay

This is really only a problem that occurs when executing tests in parallel and wouldn't really impact a deployed application. Unfortunately, it leads to "flickering tests" (tests that don't pass or fail in a consistent manner), so the issue must be addressed.

First, to describe the problem: tests are being executed before logback has initialized. This is because logback is initialized in a separate thread, and the threads executing the tests do not know this. SLF4J recognizes this, captures the log statements, and then replays them. SLF4J isn't doing anything wrong, in fact it is being very helpful, but unfortunately for us the replay doesn't happen until after the assert phase of some test cases, so they end up failing.

The good news is there is a work-around, and it is pretty easy to implement. We need a way to check whether logback is ready, and to prevent a test class from executing until that check passes. Here is some code that performs that check:

public class LogbackInitializerExtension implements BeforeAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        pauseTillLogbackReady();
    }

    // Spin until logback has replaced SLF4J's default ILoggerFactory
    private static void pauseTillLogbackReady() {
        while (!isLogbackReady());
    }

    private static boolean isLogbackReady() {
        return LoggerFactory.getILoggerFactory() instanceof LoggerContext;
    }
}

Before logback is ready, calling getILoggerFactory() returns a default SLF4J implementation of ILoggerFactory. So the while loop blocks execution of the test class until getILoggerFactory() returns the logback implementation of ILoggerFactory, LoggerContext.

For convenience, I implemented this behavior using JUnit 5's new extension model, so that I can simply add @ExtendWith(LogbackInitializerExtension.class) at the top of any test class that will be checking log statements. Otherwise I would have to add a @BeforeAll method implementing or calling pauseTillLogbackReady() in every test class that tests log statements. Don't worry: unlike @RunWith in JUnit 4, in JUnit 5 a class can have multiple extensions.

I can further simplify this process by creating a custom annotation, so instead of @ExtendWith(LogbackInitializerExtension.class), which can be hard to remember, I can just add @LogbackInitializer to the top of a test class. Creating a custom annotation with JUnit 5 is pretty easy:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@ExtendWith(LogbackInitializerExtension.class)
public @interface LogbackInitializer {
}

Figuring this all out was admittedly a bit of a headache, but resolving these problems that crop up when executing tests in parallel doesn’t require complex configuration or relying on what I would call “winking” solutions, such as using wait timers to delay test case execution. Such solutions can lead to the flickering test failures described earlier in the article, or just unnecessarily slow down the execution of your test suite, which defeats the whole purpose of executing the test suite in parallel in the first place!

Using a Local Appender

The "Thread Safe" method is the one I would recommend for testing logging statements when executing tests in parallel, primarily because of its ease of configuration. However, as explained above, you might have inherited a poorly written code base or might not want to separate your code out. If that is the case, then you might want to try what I call the "Local Appender" method.

I call this the "Local Appender" method because you will be setting up a new appender for every test case in which you will be verifying logging statements (i.e. an appender local to each test case). The advantage is that this appender will capture logging statements executed in threads separate from the test case, resolving the major disadvantage of the Thread Safe method, but at the cost of additional configuration overhead, which is covered below.

The first step is implementing the appender:

public class LocalAppender extends AppenderBase<ILoggingEvent> {

    private List<ILoggingEvent> events = new ArrayList<>();

    public LocalAppender() {
        start();
    }

    // Factory method: creates an appender that only captures the named loggers
    public static LocalAppender initialize(String... loggers) {
        LocalAppender localAppender = new LocalAppender();
        localAppender.setContext((LoggerContext) LoggerFactory.getILoggerFactory());
        for (String loggerName : loggers) {
            Logger logger = (Logger) LoggerFactory.getLogger(loggerName);
            logger.addAppender(localAppender);
        }
        return localAppender;
    }

    public void cleanup() {
        this.stop();
        this.clearEvents();
    }

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public List<ILoggingEvent> getEvents() {
        return events;
    }

    public void clearEvents() {
        events.clear();
    }
}


The bare essentials of the LocalAppender are pretty similar to the StaticAppender from the previous article; the only differences are that the List is an instance member instead of a static one, and that the constructor starts the appender.

The other differences aren't strictly necessary but are present for convenience. initialize() is a factory method for creating new LocalAppenders; it takes in logger coordinates and configures an instance of LocalAppender to only capture log statements from those loggers. The second method, cleanup(), as the name suggests, cleans up the appender by stopping it and clearing out all its logging statements. This frees up system resources, though that isn't very important unless you are using a lot of LocalAppenders.

Let’s see what using the LocalAppender might look like in a test class:

@LogbackInitializer
public class TestLocalAppender {

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderA() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service A");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service A", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE)
    public void testLocalAppenderB() {
        OtherLogProducingService service = new OtherLogProducingService();
        LocalAppender localAppender = LocalAppender.initialize("com.bk.logging.OtherLogProducingService");
        service.writeSomeLoggingStatements("Other logging service B");
        assertThat(localAppender.getEvents()).extracting("message")
                .containsOnly("Let's assert some logs! Other logging service B", "This message is in a separate thread");
        localAppender.cleanup();
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void justAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }

    @Test
    @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ)
    public void yetAnotherTest() {
        OtherLogProducingService service = new OtherLogProducingService();
        service.writeSomeLoggingStatements("Local appender");
        // Executing just to add some logs
    }
}

So there’s a good amount going on here. Let’s walk through it step by step.

At the top I have @LogbackInitializer. Like mentioned above this is present to ensure that logback is initialized before the test cases begin executing.

Next you might notice all the test cases are marked with @ResourceLock. This is one of the new parallel features being added in JUnit 5.3. The JUnit team realized that the desire to execute tests in parallel isn’t entirely a binary decision. Most tests might be able to execute fine in parallel, but some tests might be sharing a resource, in this case logging, and so might have some special conditions around how they can be executed.

@ResourceLock takes two arguments: the resource key (the value field) and mode, which takes a value of either READ or READ_WRITE (defaulting to READ_WRITE). Test cases marked READ can be executed at the same time as other test cases using the same resource marked READ, but will not be executed at the same time as a test case marked READ_WRITE that is using the same resource. Finally, @ResourceLock is applied across the entire test suite. So if we were to add @ResourceLock(value = "LOGGING", mode = ResourceAccessMode.READ_WRITE) to a test case in one of the other test classes in the project, it wouldn't execute at the same time as either testLocalAppenderA() or testLocalAppenderB().

Note: There is no execution order guarantee when using @ResourceLock, so a test case marked READ might be executed before or after a test case marked READ_WRITE.

Within the test cases themselves I am using the initialize() factory method mentioned earlier, which gets at the advantage of the LocalAppender: I can easily configure it to capture logs only from a very narrow area, typically the specific class (or classes) the test case is exercising. With this fine-grained control over which logging statements are captured, and with features like @ResourceLock, we can guarantee only the expected logging statements are being captured.

Finally justAnotherTest() and yetAnotherTest() are dummy test cases I added to further demonstrate the @ResourceLock functionality. If those annotations are removed then testLocalAppenderA() or testLocalAppenderB() might begin to fail because of the additional logging statements being written during their test execution. I would recommend trying it out!

Conclusion

If you have a very large test suite, executing tests in parallel can be a good option for cutting down build times, particularly if you have a build server with a lot of resources. Using the common method of capturing and verifying logging statements, as described in the first article, would mean either having to ignore the occasional failing test, or configuring your build to skip those tests when the test suite is executed in parallel (the full project linked below demonstrates how this can be done). Luckily, with the methods covered in this article, we can execute our test suite in parallel and verify our logging statements too.

I also hope this article got you excited about the upcoming JUnit 5.3 release. JUnit 5.3 should be out sometime in August, and shortly after its release I will publish an article covering some of its key new features, as well as some changes in closely associated projects. Until then, happy testing!

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements

How to Test Logging in Java

Whether you are building microservices or monoliths, accurate and robust logging is a critical element in any application. Logging can give insight on how an application is being used and provide developers and operations with detailed information on why an error occurred. It is important then, like any other feature in an application, that we use automated tests to verify that logging statements are accurate and do not cause errors themselves.

In this two part series we will look at some strategies for capturing and verifying logging statements. In part one we will look at a simple and easy to implement strategy for when you are executing your test suite sequentially. In part two we will focus on a couple of strategies for when you are executing your test suite in parallel.

The Scenario

For demonstrating how to capture and assert logging statements we have the very useful and accurately named LogProducingService:

public class LogProducingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogProducingService.class);

    public void writeSomeLoggingStatements(String message) {
        LOGGER.info("Let's assert some logs! " + message);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> LOGGER.info("This message is in a separate thread"));
        do {
            // wait for future to complete
        } while (!future.isDone());
    }
}

The LogProducingService has a single method, writeSomeLoggingStatements(), which produces a pair of logging statements: one written in the same thread as the calling test case, and another written in a separate thread (this will be more important in part two).
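As an aside, the do/while spin above keeps the example minimal; the same hand-off can also be written by blocking on Future.get(), which waits without spinning. Here is a stdlib-only sketch (FutureWaitDemo is a hypothetical name, not code from the article's project):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureWaitDemo {

    // Runs the work on a background thread and blocks until it finishes
    public static String logInBackground(String message) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<String> future = executor.submit(() -> "logged: " + message);
            return future.get(); // blocks until the task completes, no spin loop needed
        } finally {
            executor.shutdown();
        }
    }
}
```

Whether you spin or block, the point for testing is the same: the statement produced on the background thread has been written before the method returns.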

Implementing a Custom Appender

In normal operation, an application will typically use either some form of file appender to write statements to system logs or, following more recent trends, an appender that sends statements to a message queue such as Kafka. There are some other options, but none are practical for verifying that the logging statements being produced are valid. To handle this we will need to implement our own appender, like this one:

public class StaticAppender extends AppenderBase<ILoggingEvent> {

    static List<ILoggingEvent> events = new ArrayList<>();

    @Override
    public void append(ILoggingEvent e) {
        events.add(e);
    }

    public static List<ILoggingEvent> getEvents() {
        return events;
    }

    public static void clearEvents() {
        events.clear();
    }
}


In the above class I am extending AppenderBase, and instead of printing the logging statements to a file or sending them to a message queue, I am adding them to a List. To retrieve the logging statements for asserting in a test case there is getEvents(), and to clear the list between test cases I added clearEvents().

I named this class StaticAppender because the list I am storing the logging statements in, as well as the methods to retrieve and clear that list, are all static. This is necessary because I want to be able to globally configure my test suite using logback-test.xml, and the instance of StaticAppender that logback creates won't be easily available to my test code. By declaring the aforementioned fields and methods static, I have easy access to the logging statements my application is producing from within my test cases.

After implementing the appender, we need to configure logback to use it:

<configuration>
    <appender name="static-appender" class="com.bk.logging.StaticAppender" />
    <root level="trace">
        <appender-ref ref="static-appender" />
    </root>
</configuration>


As mentioned above, I am configuring logback for my test suite using logback-test.xml, placed in the src/test/resources directory of the project. By order of precedence, logback loads logback-test.xml first if it is present on the classpath. So even though I have logback.xml defined in src/main/resources, when executing my test suite I will be using the StaticAppender instead of the ConsoleAppender defined in logback.xml. This is important, as we don't want to modify production code to make it aware of whether it is running in a test scenario versus actually deployed; that is a big no-no.

Finally the test case would look something like this:

public class TestStaticAppender {

    @BeforeEach
    public void clearLoggingStatements() {
        StaticAppender.clearEvents();
    }

    @Test
    public void testAssertingLoggingStatementsA() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("A");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! A",
                "This message is in a separate thread");
    }

    @Test
    public void testAssertingLoggingStatementsB() {
        LogProducingService service = new LogProducingService();
        service.writeSomeLoggingStatements("B");
        assertThat(StaticAppender.getEvents()).extracting("message").containsOnly("Let's assert some logs! B",
                "This message is in a separate thread");
    }
}

I am a huge fan of AssertJ, which I am making use of here to quickly extract out the message field from a list of ILoggingEvent and assert its value. If you want to learn more about using AssertJ in your automated testing, I wrote a blog article on that as well.
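For readers not using AssertJ, extracting("message") is essentially a map step over the captured events. The sketch below shows the plain-Java equivalent; ExtractingDemo and its LogEvent record are hypothetical stand-ins for illustration, not part of the project:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ExtractingDemo {

    // Minimal stand-in for logback's ILoggingEvent, just enough to show the idea
    public record LogEvent(String message) { }

    // Equivalent of AssertJ's extracting("message"): pull one field from each event
    public static List<String> extractMessages(List<LogEvent> events) {
        return events.stream().map(LogEvent::message).collect(Collectors.toList());
    }
}
```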

Parallel Concerns

When executing a test suite in a single thread, as is often the case, using a static appender should work fine, as long as you clean up logging statements between test cases as I am doing in clearLoggingStatements(). However, if you have a large test suite and want to execute tests in parallel to speed up your build times, you will run into issues with the StaticAppender. While multiple test cases are being executed simultaneously, all their logging statements are written to a single List, so a test case has no way of knowing which log statements were written by the code it is executing versus the code other test cases are executing. It could also lead to logging statements being deleted mid-execution when clearEvents() is called.
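The clash can be demonstrated with plain JDK classes. In this sketch (SharedAppenderDemo is a hypothetical stand-in for the static appender, not code from the project), two simulated test cases write their logging statements into one shared static list at the same time:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class SharedAppenderDemo {

    // Stands in for StaticAppender's static event list
    public static final List<String> EVENTS = Collections.synchronizedList(new ArrayList<>());

    static void fakeTestCase(String label, CountDownLatch start) throws InterruptedException {
        start.await(); // both "tests" begin together, as a parallel runner would run them
        EVENTS.add("Let's assert some logs! " + label);
        EVENTS.add("This message is in a separate thread");
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1);
        Thread a = new Thread(() -> { try { fakeTestCase("A", start); } catch (InterruptedException ignored) { } });
        Thread b = new Thread(() -> { try { fakeTestCase("B", start); } catch (InterruptedException ignored) { } });
        a.start();
        b.start();
        start.countDown();
        a.join();
        b.join();
        // All four statements land in the one shared list, so a containsOnly-style
        // assertion scoped to either "test" would fail.
        System.out.println(EVENTS.size()); // prints 4
    }
}
```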

Luckily, the issues around executing tests in parallel can be overcome with relative ease, which I will cover in the next article in this series.

Conclusion

Capturing and verifying logging statements is actually a pretty simple task as seen in this article. No additional libraries, beyond your logging libraries, are needed, nor is complex configuration.

While testing logging output might never be the highest priority for a development shop, it is useful to know that testing logging output is easy and practical. Writing some tests for the logging statements your application is producing might save you some frustration next time you have a production issue and you are able to look through accurate and detailed logs as you track down the cause of the problem.

The code for this article can be found here: https://github.com/wkorando/assert-logging-statements