Blog

5 Reasons Why You Should Consider Switching to Eclipse OpenJ9

Often when the Java Virtual Machine is discussed in a presentation or article, it is described in monolithic terms, i.e. “the JVM”. This can give the mistaken impression that there is only a single JVM implementation to choose from. In reality there are several. Awareness that multiple JVMs are available is becoming more common thanks to the highly publicized GraalVM and Amazon’s Corretto. There is, however, another JVM option that has been around for some time: Eclipse OpenJ9.

Eclipse OpenJ9 is an open source JVM implementation that IBM supports. OpenJ9 was open sourced from IBM’s J9 JVM back in 2017 and is available for download from adoptopenjdk.net. In this article we will look at five reasons why you should consider switching to OpenJ9 for running your Java workloads in production.

#1 Significant Reduction in Memory Usage

(Image: openj9-memory-consumption)

The biggest and most straightforward advantage OpenJ9 offers is that it uses dramatically less memory than the other publicly available JVMs. The above is a screenshot of a Grafana dashboard from a demo I put together comparing the performance of OpenJ9 to OpenJDK Hotspot, Amazon Corretto, and GraalVM. The demo, executing a simple Spring Boot Batch application, isn’t a specialized use case that plays to OpenJ9’s strengths, yet the results are very much in line with the roughly 40% drop in memory consumption the IBM Runtimes team has seen when testing OpenJ9, as well as what users have reported back.

OpenJ9 is able to accomplish this dramatic reduction in memory usage because of its heritage as a JVM designed to run on smartphones in the early 2000s. Running applications in a mobile environment demands being conscious of resource usage, as memory, CPU, and battery life were particularly scarce on the devices of that era. This legacy of resource conservation lives on in OpenJ9, and while there is no shortage of resources in a cloud environment, there is a price tag attached to using them. A JVM that uses fewer resources can mean organizations spend less each month hosting their Java applications on cloud platforms.

#2 Class Sharing

One of my favorite features of OpenJ9 is the ability to do class sharing. Class sharing is a feature that is key for the cloud native world where new instances of an application will be started up and shut down as demand changes or a new and better version is made available.

In a world where applications are frequently started up, the time it takes until an application is available to start servicing requests (startup) and the time it takes until it reaches peak performance (ramp up) become more of a concern. Startup can be a resource-intensive time for an application, as the JVM compiles code and performs optimizations.

A lot of the actions performed by the JVM during the startup and ramp-up phases will be the same every time. As developers we are taught to reuse code where possible, and the lesson of reuse applies here as well. Class sharing works by allowing a JVM to write compiled code and optimizations to a common location that can then be used by other JVMs. Two great design choices in how this feature is implemented are that there is no leader/follower concept and that a JVM does not require a shared cache to be present. This removes points of failure: a JVM unexpectedly stopping, or a shared cache being deleted, will not cause other JVMs to crash or fail at startup.

Class sharing can reduce CPU utilization, as compilations and optimizations are CPU intensive, in addition to the aforementioned improvements around startup and ramp up. For more information on class sharing check out this article. If you are interested in using class sharing in a (Docker) containerized environment, check out this article I wrote that looks at a couple of strategies for implementing class sharing in containers.
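As a rough sketch of what enabling class sharing looks like on the command line (the cache name, directory, and size below are arbitrary values, not recommendations):

# Start the application with class sharing enabled; -Xscmx sets the maximum shared cache size
java -Xshareclasses:name=demo_cache,cacheDir=/opt/shareclasses -Xscmx300M -jar demo-app.jar

# Print statistics about what has been stored in the shared cache
java -Xshareclasses:name=demo_cache,cacheDir=/opt/shareclasses,printStats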

#3 OpenJ9 is Free to Use

OpenJ9 is licensed under the Eclipse Public License 2.0 and Apache License 2.0 licenses, which means you can run your Java workloads in production on OpenJ9 without paying a penny in licensing costs. With the shockwaves and confusion still reverberating around the Java world from Oracle’s recent changes to how it licenses the commercial OracleJDK, having options for running Java where you can have total confidence that you don’t have a potential lawsuit looming in the future over unpaid licensing costs can be very comforting.

For more information on OpenJ9’s license, you can view it here in OpenJ9’s GitHub repository.

#4 Commercial Support is Available

Free to use is great if you are a startup running on a limited budget or in the proof-of-concept phase (such as proving the concept of using OpenJ9 😉). However, if your organization’s assets are valued in the millions, billions, or even trillions, saving thousands a month in licensing costs doesn’t make much sense if it means giving up the peace of mind that commercial support offers. To that end, IBM offers commercial support for OpenJ9. You can read more about the support IBM provides and its cost here.

#5 Switching is Really Easy

So what does it take to switch to OpenJ9?

Refactoring your code to use OpenJ9 concepts?

Recompiling and redeploying all your Java applications to run in an OpenJ9 environment?

The answer is neither. Java is, in effect, an API defined by the Java Language Specification (JLS) — here is the JLS for Java 12. OpenJ9 is built to be compatible with the Java API as specified in the JLS, and so is capable of running any Java code without any special requirements. The demo I referenced in my first point is actually running the exact same Java artifact in all the containers. If compatibility with existing Java artifacts wasn’t enough, OpenJ9 even offers migration support for Hotspot JVM arguments.
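In a containerized setup, switching often amounts to changing the base image while shipping the exact same artifact. A minimal sketch (the image tags are assumptions; check your registry for current OpenJ9 builds):

# Previously: FROM adoptopenjdk:11-jre-hotspot
FROM adoptopenjdk:11-jre-openj9
COPY target/demo-app.jar /app/demo-app.jar
ENTRYPOINT ["java", "-jar", "/app/demo-app.jar"]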

But if that isn’t enough for you, check out these testimonials from real users of OpenJ9:

(Image: OpenJ9 user testimonials)

Conclusion

As a developer, my experience with and interactions with JVMs have mostly been limited to executing java -jar. From my perspective the JVM was just something to run my Java applications on; I didn’t really care how. I immediately became interested in OpenJ9, however, because it had such readily apparent and accessible advantages over other JVMs. This article only scratches the surface of what OpenJ9 has to offer, and our IBM Runtimes team is hard at work improving and adding new features to OpenJ9.

If you would like to know more about OpenJ9 be sure to check out the very helpful user docs: https://www.eclipse.org/openj9/docs/

Wiring Multiple Datasources in a Spring Boot and Spring Data JPA Application


Having to wire a Spring Boot application to talk to multiple datasources is a requirement you come across every once in a while. The good news is this can be done reasonably easily within Spring Boot, and in this article we will walk through how to accomplish it.

Wiring Multiple Datasources Step-by-Step

Pre-Requisites

To run the demo application the following tools are needed:

  • Docker
  • Java

There are also some shell scripts available for building, running, testing, and tearing down the application for *nix OS users.

Application Structure

The example application used in this article has two domain models, Doctor and Clinic, each of which is persisted to its own separate datastore. Overall the application is a very standard implementation of Spring Boot and Spring Data. Because there is already a lot of great documentation on how to implement such an application, those steps will be skipped. However, for clarity, here is what the overall structure of the application looks like:

com.developer.ibm.multidatasource\
   clinic\
      Clinic
      ClinicsController
      ClinicsDatasourceConfiguration
      ClinicsRepo
   doctor\
      Doctor
      DoctorsController
      DoctorsDatasourceConfiguration
      DoctorsRepo
   MultiDatasourceApplication

The full application can be seen here: https://github.com/wkorando/multi-datasources-spring-boot

Configuring Spring Data

The first step is to define the @Configuration classes that add the DataSource, PlatformTransactionManager, and LocalContainerEntityManagerFactoryBean beans to the application context, which will be used by Spring Data when communicating with the databases. Both configuration classes look essentially identical, with one exception which will be covered in detail below. Let’s step through some of the key elements in these configuration classes:

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "clinicEntityManagerFactory", transactionManagerRef = "clinicTransactionManager")
public class ClinicsDatasourceConfiguration {

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "clinics.datasource")
    public DataSource clinicsDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    PlatformTransactionManager clinicTransactionManager(
            @Qualifier("clinicEntityManagerFactory") LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory) {
        return new JpaTransactionManager(clinicEntityManagerFactory.getObject());
    }

    @Bean
    LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory(
            @Qualifier("clinicsDataSource") DataSource clinicsDatasource, EntityManagerFactoryBuilder builder) {
        return builder.dataSource(clinicsDatasource).packages(Clinic.class).build();
    }
}

@Primary

Outside of name differences, the @Primary added to clinicsDataSource is the only functional difference between ClinicsDatasourceConfiguration and DoctorsDatasourceConfiguration. Adding @Primary to clinicsDataSource is necessary because some of the autoconfiguration behavior within Spring Data depends upon a single DataSource being available in the application context. In our situation there will be two DataSources available, so adding @Primary to one gives Spring the information it needs to choose between them. For this application’s purposes, making clinicsDataSource the primary DataSource was an arbitrary decision; however, deciding which DataSource should be the primary one might be worth thinking about depending on the requirements and behavior of your application.

@ConfigurationProperties

@ConfigurationProperties automatically takes the jdbc-url, password, and username properties prefixed with clinics.datasource available in the environment, in this case defined in application.properties (source), and maps them to the DataSource being created in clinicsDataSource. If this feels too “magical”, DataSourceBuilder (javadoc) also has standard builder methods, among them url(String), password(String), and username(String).

Using @ConfigurationProperties helps keep the behavior, from a developer’s perspective, more consistent with how a Spring Boot application would work if it had only a single DataSource. @ConfigurationProperties can also be useful in other scenarios; here is an example of using @ConfigurationProperties to map to fields within a configuration class, from an earlier version of the example project.
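As a rough sketch of that field-mapping approach (illustrative only, not the exact class from the linked project):

@Configuration
@ConfigurationProperties(prefix = "clinics.datasource")
public class ClinicsDatasourceProperties {

    // Bound from clinics.datasource.jdbc-url, .username, and .password via the setters below
    private String jdbcUrl;
    private String username;
    private String password;

    @Bean
    @Primary
    public DataSource clinicsDataSource() {
        return DataSourceBuilder.create()
                .url(jdbcUrl)
                .username(username)
                .password(password)
                .build();
    }

    public void setJdbcUrl(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
    public void setUsername(String username) { this.username = username; }
    public void setPassword(String password) { this.password = password; }
}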

@Qualifier

The arguments of the @Bean methods clinicTransactionManager and clinicEntityManagerFactory are annotated with @Qualifier. Like @Primary, @Qualifier tells Spring which bean to use when there are multiple instances of a class available in the application context. By default, the name of a bean is the name of the method that created it.

Defining the Properties

Next we need to provide Spring with the values to connect to both our databases. In this example we are connecting to containerized instances of MySQL and Postgres. We will go into a little more detail on this properties file below:

clinics.datasource.jdbc-url=jdbc:mysql://localhost:3306/clinics-db
clinics.datasource.username=root
clinics.datasource.password=secret

doctors.datasource.jdbc-url=jdbc:postgresql://localhost:5432/doctors-db
doctors.datasource.username=postgres
doctors.datasource.password=secret

spring.jpa.open-in-view=false

jdbc-url

When defining a datasource using the spring.datasource properties, the url property would be used. However, with Spring Boot 2, HikariDataSource became the default DataSource implementation, so to use @ConfigurationProperties as in the @Configuration classes above, the property needs to be jdbc-url.

More info can be found here, as well as a workaround if you’d prefer to keep using url.

(I plan on going into more depth about this in a future article, as this change caused me a lot of pain while putting together the example application)
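One commonly used workaround, sketched here from memory rather than taken from the linked discussion, is to bind the properties to Spring Boot’s DataSourceProperties first and build the DataSource from it, which lets you keep the plain url property:

@Bean
@Primary
@ConfigurationProperties(prefix = "clinics.datasource")
public DataSourceProperties clinicsDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
public DataSource clinicsDataSource(
        @Qualifier("clinicsDataSourceProperties") DataSourceProperties clinicsDataSourceProperties) {
    // initializeDataSourceBuilder() carries url/username/password over to the HikariDataSource
    return clinicsDataSourceProperties.initializeDataSourceBuilder().build();
}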

spring.jpa.open-in-view

This is ultimately an optional addition. By default Spring Boot sets this property to true, which is arguably an anti-pattern; you can read more about that here. The reason it is relevant in an article about configuring multiple datasources is that when spring.jpa.open-in-view is set to true, Spring MVC will look for a single instance of PlatformTransactionManager and LocalContainerEntityManagerFactoryBean in the application context, which is a problem when two of each are defined and neither is marked as primary.

This could have been resolved instead by adding @Primary to one of the @Bean definitions of both PlatformTransactionManager and LocalContainerEntityManagerFactoryBean, as was done with clinicsDataSource. However, disabling spring.jpa.open-in-view should generally be done anyway, so that is the better resolution.
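For completeness, that alternative would have looked something like this (only the clinic side shown):

@Bean
@Primary
PlatformTransactionManager clinicTransactionManager(
        @Qualifier("clinicEntityManagerFactory") LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory) {
    return new JpaTransactionManager(clinicEntityManagerFactory.getObject());
}

@Bean
@Primary
LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory(
        @Qualifier("clinicsDataSource") DataSource clinicsDatasource, EntityManagerFactoryBuilder builder) {
    return builder.dataSource(clinicsDatasource).packages(Clinic.class).build();
}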

Running the Application

There are several scripts available for running the demo application, added for convenience and experimentation.

  • build.sh – builds the docker images and Java artifact
  • run.sh – starts up the Docker containers and Spring Boot application (note: there is a 15 second sleep between starting the containers and starting the app, to give the containers time to start up)
  • requests.sh – curl commands for POSTing and GETting to the Spring Boot application
  • kill.sh – stops and removes the Docker containers

The application by default runs at http://localhost:8080 with GET/POST endpoints residing at /clinics and /doctors.

Conclusion

With a little work, a Spring Boot application can be set up to handle multiple datasources while still having a pretty familiar overall look and feel.

The code used in this article can be found here: https://github.com/wkorando/multi-datasources-spring-boot

An Extended Discussion on Customization in JUnit 5

Inspiration often comes in twos for me. While reviewing a recent blog article, What’s New in JUnit 5.4, a colleague suggested I go into more depth on the usage of extensions in JUnit 5. Then in my Twitter timeline I saw this from one of the core committers of the JUnit 5 framework:

Source: Twitter.

Later on in the thread, it turned out that what the “hack” was trying to do could have been accomplished by creating a custom extension that’s publicly available.

The above tells me two things: there is a need for a deep dive on the JUnit 5 extension model, and a need to explain the extendability aspect of the JUnit 5 framework. When I say extendability, I’m specifically referring to the quality of being able to build on top of the existing framework that the JUnit team has provided in JUnit 5. Whereas hacks have often been the modus operandi for getting around the limits of frameworks (whether those limits were intentional or not!), the JUnit team went to great lengths to make JUnit 5 extendable, and we’ll see in this series how to take advantage of that quality.

JUnit 5’s extension model and extensibility are by no means trivial subjects, so to make them more digestible, this will be a three-part blog series. The subjects of the articles will be:

  1. Introduction to and using the JUnit 5 extension model
  2. The JUnit 5 extension lifecycle and building a custom extension
  3. Understanding and using extensibility in JUnit 5

In this article, we will take a high-level overview of the extension model from the perspective of a user of extensions; we’ll learn why the extension model was introduced, how it improves upon what was in JUnit 4, the different ways to register an extension, and how to define the order of extension execution.

The JUnit 5 Extension Model

JUnit 5 was a fundamental re-write and re-design of the JUnit framework. Some areas largely remained the same, though with a few enhancements, like assertions. Other areas were completely overhauled, including runners (@RunWith), MethodRule (@Rule), and TestRule (@ClassRule), which were rolled into the new extension model.

The benefits of this overhaul can be experienced in a number of ways. A pretty obvious one is that you can now declare multiple extensions at the class level, whereas before you could only declare a single @RunWith:

@ExtendWith(SpringExtension.class)
@ExtendWith(MockitoExtension.class)
public class TestSomeStuff {
   ...
}

A bit more subtle, parameterized tests and normal tests can now co-exist in the same class:

public class ParameterizedAndNormalTestsLivingTogether{

   @Test
   public void aNormalTest(){
      ...
   }

   @ParameterizedTest
   @ValueSource(strings = { "val1", "val2" })
   public void aParameterizedTest(String val) {
   ...
   }
}

Note: @ParameterizedTest is built using the extension model

If you haven’t run into the constraints imposed by the previous Runner and Rule architecture, I can assure you it’s quite the painful experience when you do! So being able to register multiple extensions in the same test class, or to locate a parameterized test and a normal test in the same test class, are reasons to celebrate. But this only scratches the surface of the extension model, so let’s go deeper.

Registering Extensions

There are three different ways to register an extension in JUnit 5: declaratively, programmatically, and automatically. Each way of registering an extension comes with specific rules, constraints, and benefits. Let’s step through them and understand when and why you might prefer one method over the others.

Declaratively Registering Extensions

Extensions can be registered declaratively with an annotation at the class, method, or test interface level, and even with a composed annotation (which will be covered in-depth in the article on extendability). The code samples above are examples of registering extensions declaratively.

Declarative registration is probably the easiest way of registering an extension, and it can be made even easier with a composed annotation. For example, it is easier to remember how to write @ParameterizedTest when you want to declare a parameterized test than @ExtendWith(ParameterizedTestExtension.class).
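For example, a team could hide several declarative registrations behind a composed annotation of its own (a sketch with made-up names):

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(SpringExtension.class)
@ExtendWith(MockitoExtension.class)
public @interface SpringMockitoTest {
}

// Registers both extensions with a single, easier to remember annotation
@SpringMockitoTest
public class TestSomeStuff {
}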

Because you are using an annotation to register the extension, all the constraints of annotations apply, such as only being able to pass static values to the extension. A test class also cannot easily reference extensions that have been registered declaratively.

Programmatically Registering Extensions

Extensions can be registered programmatically with @RegisterExtension. There are a few rules regarding programmatically registered extensions. First, the extension field cannot be private. Second, the extension field cannot be null at the time of evaluation. Finally, an extension can be either a static or an instance field. A static extension has access to the BeforeAll, AfterAll, and TestInstancePostProcessor steps of the extension lifecycle.

Registering a programmatic extension would look like this:

@RegisterExtension
SomeExtension extension = new SomeExtension();

Test classes have a much greater degree of freedom when interacting with a programmatically registered extension as they are just another field within the test class. This can be great for retrieving values out of an extension to verify expected behavior, passing values into the extension to manipulate its state at runtime, as well as other uses.
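As a quick sketch of that kind of interaction (SomeExtension and its methods are hypothetical):

public class ProgrammaticRegistrationTest {

   @RegisterExtension
   SomeExtension extension = new SomeExtension();

   @Test
   public void canInteractWithTheExtension() {
      extension.setServerPort(8090);                   // manipulate the extension's state at runtime
      assertEquals(8090, extension.getServerPort());   // read values back out to verify expected behavior
   }
}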

Automatically Registering Extensions

The final way to register an extension is with the Java Service Loader. The Java Service Loader can best be described as arcane; at least, I generally get blank stares or looks of confusion when I bring it up. Though like many arcane things, it can be very powerful for both good and ill!

The Java Service Loader can be used to automatically register extensions within a test suite. This can be helpful as it allows certain behaviors to happen automatically when executing tests. The flip side is that, depending on the type of work occurring within the extension, this could have a non-trivial impact on your test suite’s runtime. It could also interfere in a non-obvious way with how a test behaves (the person executing the test might not realize the extension is being executed because it wasn’t registered locally). So, to quote Uncle Ben:

“Remember, with great power comes great responsibility.”

Registering an Automatic Extension

Registering an automatic extension is a more involved process than the other two ways, let’s quickly walk through the steps:

  1. Create a folder named META-INF on the base of your classpath
  2. Create a folder named services under META-INF
  3. Create a file named org.junit.jupiter.api.extension.Extension under services
  4. In org.junit.jupiter.api.extension.Extension add the fully qualified name of the extension you want registered, for example: my.really.cool.Extension
  5. Pass in -Djunit.jupiter.extensions.autodetection.enabled=true as a JVM argument (how to do this will vary based on your IDE)
    1. Configure your build file to automatically pass in the above argument. Here is an example using Surefire in maven:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <properties>
            <configurationParameters>
               junit.jupiter.extensions.autodetection.enabled=true
            </configurationParameters>
        </properties>
    </configuration>
</plugin>

You can see a full example of the above here. Note META-INF is located under /src/test/resources.

Doing these steps every time you want to use an automatic extension in a project is a bit involved; in the article on extendability we’ll take a look at how to make automatic extensions more practical to work with.

Ordering Extension Execution

As of JUnit 5.4 there are two ways to order how extensions are executed during the test cycle. Ordering extension execution can be useful in the realm of higher-level tests, that is, tests above the unit test level. Integration tests, functional tests, feature tests, and other similar tests might require complex setup and tear-down steps.

Even for test code, it is still important to follow principles like single responsibility. If, for example, you have a feature test that verifies the behavior of your application when it interacts with a database and a cache, it would be better to locate the logic for setting up and tearing down the database in one extension and the similar behavior for the cache in a separate extension, instead of putting all that behavior in a single extension. This allows for greater reusability of each extension as well as making them easier to comprehend.

For all ways of ordering extension execution, the order of execution is inverted for “after steps”. So if you have three extensions named A, B, and C, each implementing the BeforeEach and AfterEach behavior, then going into a test method the execution order would be A -> B -> C, while the execution order leaving the test method would be C -> B -> A.

Order of Declaration

When registering an extension declaratively, the order of declaration within the test class is the order in which the extensions are registered and executed. Take the below example:

@ExtendWith(FirstExtension.class)
@ExtendWith(SecondExtension.class)
public class TestExtensionExecutionOrdering {

   @Test
   @ExtendWith(ThirdExtension.class)
   public void testExtensions(){
   }
}

When executing the test method testExtensions() the execution order going in would be FirstExtension -> SecondExtension -> ThirdExtension and going out of testExtensions() it would be ThirdExtension -> SecondExtension -> FirstExtension.

I haven’t personally used this feature a whole lot. I have a lot of confidence that, from a framework perspective, this feature behaves as designed. What I worry about, however, is the extent to which this feature would be understood by most developers and test engineers. In my experience, the order in which annotations are declared in a class or on a method is not something that developers and test engineers often think about or interact with. If this concern is ever surfaced, it’s often for stylistic reasons, for example that the shortest annotation should be declared first.

The good news is that, through the enhancements to extendability that I mentioned in the introduction to this article, a custom annotation could be created and shared that includes the declaration of multiple extensions in their proper order. We will take a deeper look at custom annotations, and other examples of extensibility, later in this series.

Ordering Programmatically Registered Extensions

Ordering extension registration and execution by order of declaration has been a feature of JUnit 5 since its initial release. With JUnit 5.4, programmatically registered extensions can also be executed in a manually defined order (programmatically registered extensions have always been executed in a consistent order, but one that is “intentionally non-obvious”).

To define the execution order of a programmatically registered extension, the @Order(n) annotation needs to be added to the declaration of the extension field. You do not need to add an annotation at the class level to enable this behavior, like you would for ordering test methods. And, as when ordering test method execution, you do not need to order every extension. Extensions that do not have a defined execution order are executed after all extensions that do, following the “consistent, but intentionally non-obvious” process mentioned above. So in the below example:

public class TestClass{
   @RegisterExtension
   @Order(1)
   BaseExtension baseExtension = new BaseExtension();

   @RegisterExtension
   @Order(2)
   SecondaryExtension secondaryExtension = new SecondaryExtension();

   @RegisterExtension
   AuxillaryExtension auxillaryExtension = new AuxillaryExtension();

   @Test
   public void testExtensions(){
   }
}

BaseExtension is executed first, SecondaryExtension second, and AuxillaryExtension, along with any other unordered extensions, after that.

Also note that programmatically registered extensions will be executed after all extensions that have been registered declaratively and automatically. So a programmatically registered extension annotated with @Order(1) may not be the first extension to be executed when running the test. Keep that in mind!

Conclusion

The new extensions model added a lot of much needed (and appreciated!) flexibility when it replaced the runner and rules architecture from JUnit 4. In the next article in the series we will take an in-depth look at the lifecycle of an extension and build our own custom extension!

The code used in this article, and series, can be found here.

Testcontainers, Bringing Sanity to Integration Testing

Writing and maintaining integration tests can be a difficult and frustrating experience, filled with a veritable minefield of things that could go wrong. For integration tests that connect to a remote resource you have the issues of the resource being down, datasets being changed or deleted, or heavy load causing tests to run slowly. For integration tests that connect to a local resource you have the initial install and configuration of the resource on your local machine and the overhead of keeping your local instance in sync with what the production instance looks like. Otherwise, you might run into this situation:

(Image source: Minesweeper – The Movie)

No application operates in isolation. Applications, even “monoliths”, depend on remote resources, be they databases, logging services, caches, or other applications, to function. Just like the application we are maintaining will change over time as business needs and client demands change, so too will the resources it depends on. This necessitates continually verifying that our application can communicate with its dependent resources.

So to maintain development velocity while having confidence that our applications will function properly in production, we need to write automated integration tests. However, we need our integration tests to be:

  • Reliable – Test failures should only happen because a change occurred in either our application or the resource, not because the resource is down or misconfigured.
  • Portable – The tests should be able to run anywhere with minimal setup.
  • Accurate – The resource being used in the integration test should be an accurate representation of what exists in production.

How do we accomplish these requirements?

Introducing Testcontainers

Testcontainers is a Java library that integrates with JUnit to provide support for starting up and tearing down a Docker container within the lifecycle of a test or test class. Testcontainers is a project that was started about four years ago, and I first learned about it back in 2017 when I was putting together a Pluralsight video on automated testing.

I have noticed an uptick in interest in Testcontainers in my Twitter timeline recently, and it doesn’t seem long ago that Testcontainers passed the 1K stars mark on their GitHub repo, which now sits at 2.2K. If you haven’t started familiarizing yourself with Testcontainers, now would definitely be a good time.

This rapid increase in popularity is likely the result of Testcontainers being easy to use and the flexibility of Docker containers, which allows Testcontainers to address a lot of integration testing use cases. In this article we are going to look at two approaches to using Testcontainers for running an integration test against a database. The code examples will be using JUnit 5; if you want to get familiar with JUnit 5, I have written a lot about it, and you should also check out the JUnit 5 user docs.

Launching a Testcontainer via JDBC URL

In this example we will be writing an integration test for connecting to a PostgreSQL database; Testcontainers also offers support for a number of other databases. The first step is bringing in the appropriate dependencies. For this example we only need to add the PostgreSQL Testcontainers dependency to our Maven build file (which in turn brings in the Testcontainers JDBC and core libraries).

Full maven build file for this project can be found here.
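For reference, the dependency looks roughly like this (the version shown is simply the one current around the time of writing):

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.11.1</version>
    <scope>test</scope>
</dependency>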

With the appropriate dependencies imported, let’s look at how to use Testcontainers to write a database integration test.

Full class, including imports, here.

There is quite a bit going on, so let’s break down what is happening in this class into more easily digestible bites.

@TestPropertySource("classpath:application.properties")

This isn’t really related to using Testcontainers, but since ApplicationContextInitializer (javadoc) isn’t super well known, yet can be really helpful when writing automated tests, I wanted to take a moment to show how to make it easier to work with in test classes.

Here I am telling the test class to bring in the properties defined in /src/test/resources/application.properties (source). By bringing in the properties defined in application.properties, instead of having to define every property needed for connecting to the Testcontainers database, only the properties that are different for the tests in this class need to be overwritten. This reduces maintenance needs and helps with overall test accuracy, as it is easier to keep a single properties file in sync with what production looks like.

public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
   @Override
   public void initialize(ConfigurableApplicationContext applicationContext) {
      TestPropertyValues.of("spring.datasource.url=jdbc:tc:postgresql:11.2://arbitrary/arbitrary", //
      "spring.datasource.username=arbitrary", //
      "spring.datasource.password=arbitrary", //
      "spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver")//
      .applyTo(applicationContext);
   }
}

Within Initializer, four properties are being defined (overwritten), and a few of them have somewhat odd looking values, so let’s take a closer look. When initializing Testcontainers via the JDBC URL, Testcontainers will set the username, password, hostname, and database name to whatever values you pass it. Strictly speaking, spring.datasource.username and password don’t need to be included, as they are already defined in application.properties. For spring.datasource.url, the JDBC URL must start with jdbc:tc:. The 11.2 refers to the specific image tag of postgres to be used; this is optional and defaults to 9.6.8 if left out. Lastly, spring.datasource.driver-class-name must be set to org.testcontainers.jdbc.ContainerDatabaseDriver.

ContainerDatabaseDriver is Testcontainers’ “hook” into this test class. After starting up the container, ContainerDatabaseDriver will be substituted with the standard database driver, in this case org.postgresql.Driver. While I am using the base postgres image in this example, you can use a custom image, so long as the database within the container is postgres (or the type of database you have brought in a dependency for).

The rest of the test class is comparatively simple and straightforward. Simple reads and writes are being performed to ensure fields are being properly mapped and the generated id matches the expected pattern.

Using Testcontainers as a Class Field

Above we looked at how to use Testcontainers via the JDBC URL hook. This can be great when your use case is pretty simple; however, the complexities of applications in the real world often mean a need for greater control and customization of behavior.

The first step is to bring in the Testcontainers junit-jupiter library.
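Again, roughly (version assumed):

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.11.1</version>
    <scope>test</scope>
</dependency>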

There are a lot of similarities with the previous code example, so let’s focus only on the differences.

At the top of the test class is the @Testcontainers annotation. This brings the Testcontainers extension into the class, which scans for fields annotated with @Container, in this case the PostgreSQLContainer field container. A @Container field can be either static or an instance field. Static containers are started only once and are shared between test methods; instance containers are started and stopped for each test method.

@Container
private static PostgreSQLContainer container = new PostgreSQLContainer("storm_tracker_db:latest");

Here the container that will be used in this test class is defined. Like with the JDBC URL method, you are not required to use a base postgresql image; in this case the custom image “storm_tracker_db” is being used (the Dockerfile for this image is here). As long as the database within the container is postgres, you are fine. Not much additional customization is being done to the container in this class, but Testcontainers does offer a number of options, such as executing commands, setting a volume mapping, or accessing container logs, among others. Be sure to check the documentation under features and modules for what is available, as well as the javadoc (v1.11.1).

These additional features, available when using a Testcontainer as a class field, allow for flexibility in putting the container into a specific state for a test, easily switching the datasets to be used in a test, or viewing the internals of the container to verify expected behavior.

An additional benefit of using a Testcontainer as a class field is the ability to reference values from the container in use. In Initializer I am using container to populate the JDBC URL (container.getJdbcUrl()), username, and password properties for the Spring test application context. By default when using PostgreSQLContainer the username and password are both “test”, so we don’t really need to pull these values from the container, but the JDBC URL is dynamic. Being able to pull values from a container and pass them into the application context for a Spring test helps to increase flexibility when using Testcontainers. Without this, you might have to use pre-defined ports, IPs, or other values, which might run into trouble when the tests are being executed on a build server.
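A sketch of what that Initializer looks like when the values come from the container field (mirroring the structure of the first example):

static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
   @Override
   public void initialize(ConfigurableApplicationContext applicationContext) {
      TestPropertyValues.of(
            "spring.datasource.url=" + container.getJdbcUrl(),       // dynamic, Docker maps a random host port
            "spring.datasource.username=" + container.getUsername(), // "test" by default
            "spring.datasource.password=" + container.getPassword()) // "test" by default
      .applyTo(applicationContext);
   }
}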

Conclusions

I’m excited to see how much Testcontainers has grown, both as a project and in interest from the community, since I first started using it. I have often struggled when writing integration tests, having to deal with either flickering tests or the overhead of installing and maintaining a local resource. Neither is a pleasant experience. Testcontainers brought sanity to the difficult task of writing integration tests.

The code used in this article can be found here.

Why You Should Start Injecting Mocks as Method Arguments

One of the big improvements that came with JUnit 5 was support for dependency injection via constructors and methods. Since the release of JUnit 5 in September 2017, third-party libraries, like mockito, have started providing native support for constructor and method injection. In this article we will take a quick look at how to use constructor and method injection with mockito, and then look at why you should start injecting mocks as method arguments in your automated tests.

How to Inject a Mock as an Argument

Starting with 2.21.0 (the current version is 2.25.0), mockito has provided support for injecting mocks as both constructor and method arguments. Let’s look at how you can start using dependency injection with mockito.

You will need to annotate your test class with @ExtendWith(MockitoExtension.class). Then for any argument you would like mockito to provide a mock for, you simply annotate the argument with @Mock. Here is an example of mockito dependency injection in action:
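(A minimal sketch of what that looks like; UserDao is a hypothetical collaborator defined inline just to keep the example self-contained, and the assertions exercise only the mock itself.)

@ExtendWith(MockitoExtension.class)
class UserServiceTest {

   // Hypothetical collaborator, standing in for whatever your code under test depends on
   interface UserDao {
      void addUser(String name, String address);
   }

   @Test
   void testRollbackAddUserAddressValidationError(@Mock UserDao userDao) {
      // MockitoExtension creates the mock and injects it as a method argument
      doThrow(new IllegalArgumentException("invalid address"))
            .when(userDao).addUser("Jane", "not-a-real-address");

      assertThrows(IllegalArgumentException.class,
            () -> userDao.addUser("Jane", "not-a-real-address"));
      verify(userDao).addUser("Jane", "not-a-real-address");
   }
}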

Pretty simple and straightforward. Let’s now look at why you should start using method injection of mocks.

The Case for Injecting Mocks as Method Arguments

There are three major benefits that come from automated testing: speed, repeatability, and auditability. The first two are pretty well understood benefits of automated testing; auditability, however, is, if not less well understood, definitely less often discussed. Auditability, within the context of automated testing, refers to the quality of being able to see what code has been tested and the intent of the test.

Code coverage can be achieved without spending much time thinking about how other people (developers, test engineers, business analysts, etc.) might use automated tests to understand (i.e. audit) the system the tests are covering. Tests with names like testSuccess, testFail, and testFail2 can be executed just fine, but do little to communicate their intent. For an automated test suite to be properly auditable, test names need to clearly convey the intent of the behavior being tested. While a test with a name of testRollbackAddUserAddressValidationError is a bit of a mouthful, it pretty clearly describes what scenario the test is covering.

While testRollbackAddUserAddressValidationError() conveys intent, understanding the scope of the test, that is, which dependencies the code under test interacts with, would require inspecting the code within the test case itself. However, we can begin to communicate scope by injecting mocks as method arguments. If we were to do that with the above test we would have testRollbackAddUserAddressValidationError(@Mock UserDao userDao). Now, just from reading the signature of the test case, we can determine that the scope of the test also includes interacting with the UserDao class.

When executing tests as a group, we can better see the benefits of injecting mocks as method arguments. Below is an example of running two test classes performing the same set of tests, one using mocks at the class level and the other using method injection. From the JUnit report alone, we can understand that UserService depends upon the UserDao and AddressDao classes.

(Image: JUnit report for the two test classes)

Note: Another new feature in JUnit 5 are nested tests, which is being used here.

Conclusion

Injecting mocks as method arguments isn’t game changing, but it can help make tests easier to read, and thus audit, by communicating the scope of the test in its signature. While there will be instances where passing a mock in as a method argument isn’t practical, that should generally be rare*, so hopefully this article encourages you to use method injection when you are working with mocks.

The code used in this article can be found in this gist, and also this repo.

* Complex mock setup should be seen as a smell that either (or all) the mock, the test, or the code under test has too many responsibilities