Keeping Dependencies Up-to-Date with Automated Testing

One of my big passions in software development is automated testing, which can be seen in how often I write on the subject. This passion stems from having spent years working with organizations that struggled to deliver to production in a timely manner, and from the anxiety I would experience during deploys. Too often when a release was pushed to production I would soon hear back that a bug had made it to production, and the commit logs would show “bkorando” for when the bug was introduced. I would feel ashamed that a change I made caused a problem in production, and felt that it was because I wasn’t being careful enough.

Of course I wasn’t the only developer accidentally introducing bugs into production, and the organizations I worked with, like many others, would respond to this problem by introducing new processes: having a manager sign off on a release, having a tester manually run through a test script, or requiring extensive justification for why a change should be made. These additional processes did little to resolve the issue of bugs getting to production; they did however result in two things:

  1. Production pushes became more painful, which led to them happening less often
  2. Developers were encouraged to limit the scope of their changes to reduce risk

In this article we are going to look at how relying on manual processes hinders an organization’s ability to keep its dependencies up-to-date, why this is a problem, and a use case of how an automated test can catch a subtle breaking change introduced by upgrading a dependency.

Coding On the Shoulders of Giants

As modern software developers, we owe a lot to the developers that came before us. Margaret Hamilton pioneered error handling and recovery techniques while developing the software for the Apollo flight control computer. If you are performing distributed transactions, Hector Garcia-Molina and Kenneth Salem paved the way with the SAGA pattern, which describes how to roll back distributed transactions when errors inevitably occur.

Our foremothers and forefathers have allowed our field to advance by pushing on the boundaries of what was possible and introducing new concepts and patterns. Often, though, the way we benefit from these pioneering advances is through libraries and frameworks that implement or provide abstractions over those concepts and patterns. While we labor all day building and maintaining applications for our organizations, the reality is that the code we write is only a small fraction of the code actually running in production.

Graph depicting the code we write resting at the peak of a pyramid of code (dependencies) it runs on

Just like the applications we maintain, the libraries and frameworks we depend upon are constantly changing. New features are being added, bugs fixed, performance improved, security holes plugged. If you want the hair on the back of your neck to stand up, I would recommend checking out shodan.io, which can be a great resource for showing not only the vulnerabilities of a website, but how to exploit them!

Frozen in Time

Mosquito preserved in amber. Photo by George Poinar, Jr.

In the introduction, I talked about how additional processes encouraged developers to minimize their changes. An easy way of keeping a change set small is by not updating an application’s dependencies. I have quite frequently worked on applications whose dependencies were years old! At times working on these applications made me feel like a paleontologist, examining applications frozen in amber from a time long ago.

From the perspective of an organization that depends upon manual processes, the decision to freeze dependencies is understandable. Updating a dependency, especially a framework dependency like Spring Boot, could impact an entire application. If an entire application might be impacted by a change, then it needs a full regression test, and when that is manual, it is a time consuming and expensive effort. However, in attempting to resolve one issue (preventing bugs from being introduced into production), this creates another: deploying applications with significantly out-of-date dependencies, which might contain critical security vulnerabilities (CVEs).

Focusing on the Right Issues

Manual regression testing, along with being very time consuming, isn’t a very good way to verify the correctness of an application. Issues with manual regression testing include:

  1. Difficult to impossible to recreate some scenarios (e.g. system failure cases)
  2. Test cases not being executed correctly leading to false positives or false negatives
  3. Test cases not being executed at all

Automated testing can address these issues: automated tests are executed much more rapidly, require very little manpower to execute, are much more granular and flexible, and are auditable to ensure they are being executed and are testing the intended behavior.

While it is nice to sing the praises of automated testing in the abstract, it would be good to see an example of how it can catch a non-trivial, non-obvious bug that occurred from updating a dependency. As luck would have it, I just so happen to have one.

Catching an Upgrade Issue

When I was working on a recent article, wiring multiple datasources, I ran into a bit of a frustrating issue. To distill what happened to the essential points: while checking my work against existing articles on the same subject, I tried implementing a version of a solution another author had demonstrated. However, this solution didn’t work for me, even though the code looked the same. The gif below demonstrates the issue I ran into, as well as how to write a test that would catch it:

Code can be found on my GitHub. Note the code from the gif above is in branches off master.
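
If you would rather not squint at the gif, here is a minimal sketch of the kind of test that catches this class of problem. It is not the exact test from the repository; it simply loads the Spring context (assuming the test datasource properties point at in-memory databases) and checks that a repository was wired, which is enough to fail fast when the EntityManager cannot be configured:

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import com.developer.ibm.multidatasource.clinic.ClinicsRepo;

@ExtendWith(SpringExtension.class)
@SpringBootTest
class ApplicationStartupTest {

    // One of the Spring Data repositories from the example project
    @Autowired
    private ClinicsRepo clinicsRepo;

    @Test
    void contextLoadsAndRepositoriesAreWired() {
        // If a datasource is misconfigured (e.g. url instead of jdbc-url on Spring Boot 2.x),
        // the application context fails to start and this test fails immediately.
        assertNotNull(clinicsRepo);
    }
}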

A Perfect and Imperfect Use Case

Almost any use case runs into questions of relevancy. The problem with this use case, in showing the value of automated testing when upgrading dependencies, is that the failure shows up at startup time. Upgrading to Spring Boot 2.x without updating the properties to use *.jdbc-url causes the application to fail at startup, as it is not able to properly configure an EntityManager. So the effect of this change would be fairly obvious.

On the other hand, this is a perfect use case for demonstrating the usefulness of automated testing for several reasons. Let’s quickly step through some of them:

  1. The test is easy to write – The test case, as can be seen in the above gif, is pretty simple to write. Recreating the failure scenario is easy and can be done with an in-memory database, so the test is highly portable.
  2. The test covers other concerns – The test isn’t strictly about recreating a narrow error case, which you would have to know about before writing the test. Database communication is important and is something that would require a test regardless. If there were other subtle breaking changes in that area resulting from a dependency upgrade, they might be caught by a test case like this one as well.
  3. An automated test suite can find multiple issues simultaneously – This issue was found at startup. In a more substantial application it is likely there would be other issues that come from doing a major version upgrade of a framework dependency like Spring Boot. Even if all the issues surfaced at startup, which wouldn’t always be the case, it would be a grueling process to fix each issue sequentially as it comes up, restarting the application every time. A test suite can catch multiple issues in a single execution, allowing them to be fixed simultaneously.

Resolving the issues that come up when upgrading underlying dependencies can be a difficult process even with a good automated test suite. It took me a while to track down the solution to my issue, which would have been the case regardless of how the issue was discovered.

Safe to Fail

To begin wrapping up this article: I was listening to a recent episode of A Bootiful Podcast and this quote really stuck out to me (lightly edited):

You become safe to fail, you accept that no matter what you do there will be bugs… you’re just able to find them quickly and push the fix quickly

@olga_maciaszek

An important takeaway is that no automated test suite is going to entirely eliminate the risk of a bug making it to production. What an automated test suite does give you, though, is a safety net and the ability to rapidly respond to bugs when they do occur. It gives you confidence that it will generally find bugs in your code, and when a bug inevitably slips through, you can respond to it rapidly and with confidence that you will not be introducing yet another bug into production. This protection applies to changes made within your code, as well as to changes in the dependencies your code relies upon.

Conclusion

Awareness of the importance of automation, and particularly automated testing, has been slowly increasing in the software development industry over recent years. I am pleasantly surprised when I ask during presentations how many people are working at organizations that do continuous delivery; usually between 25% and 50% of the room raise their hands.

Still, that leaves half or more of organizations that haven’t yet turned the corner on automation. I often find the best way to encourage change at an organization is to lay out the cost of staying with the status quo and how the proposed changes resolve those problems. If you’re at an organization that still relies heavily on manual processes, and you have been wanting to change this, hopefully this article helps build your case for your organization to start embracing automation and automated testing.

JPA or SQL in a Spring Boot Application? Why Not Both?

JPA, the Java Persistence API, was designed with the goal of making database interaction easier for Java developers. When used with a library like Spring Data JPA, getting basic database communication set up can be accomplished in mere minutes.

Spring Data JPA works great when performing relatively simple queries against one or two tables, but can become unwieldy once you start moving beyond this basic level of complexity. In real-world applications querying against three, four, or more tables at a time, along with performing other complex behavior within a query, is often necessary. Databases are, unsurprisingly, optimized around querying, filtering, and parsing large amounts of data, to say nothing of the transport time between the database and an application. So even if the logic of a complex query could be broken down and implemented with JPA, performance requirements might necessitate keeping that behavior as a single query.

Spring Data JPA makes handling a lot of mundane database activity a lot easier, but runs into some pain with more complex behavior. SQL can handle really complex behavior, but requires writing a decent amount of boilerplate code for even simple selects and inserts. Surely there must be a way to leverage the strengths of both JPA and SQL behind a consistent interface?

Custom Interfaces, Consistent Behavior

Spring Data JPA offers the ability to create custom repository interfaces and implementations and combine them with the default Repository interfaces Spring Data JPA offers, creating a composite repository. This allows applications to interact with the database through a single consistent interface. Let’s step through creating our own composite repository and then reference it in our application.

The first step is to define a new interface and method signature within. Here I am creating the ‘CustomBlogRepo’ interface:

public interface CustomBlogRepo {
    Iterable<Blog> findBlogsByContent(String content);
}

Next we need to implement the interface. In this example I am using JdbcTemplate to handle querying, but there aren’t any constraints on how you implement database behavior in a custom repository class.

public class CustomBlogRepoImpl implements CustomBlogRepo {

    private JdbcTemplate jdbcTemplate;

    protected CustomBlogRepoImpl(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public Iterable<Blog> findBlogsByContent(String content) {
        return jdbcTemplate.query("SELECT * FROM Blogs WHERE body LIKE ?", new Object[] { "%" + content + "%" },
                new BlogRowMapper());
    }

    class BlogRowMapper implements RowMapper<Blog> {
        @Override
        public Blog mapRow(ResultSet rs, int rowNum) throws SQLException {
            return new Blog(rs.getLong("id"), rs.getString("title"), rs.getString("body"));
        }
    }
}

Note: The above isn’t a great use case for SQL; Spring Data JPA is easily capable of implementing a LIKE query.
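
For comparison, a derived query method along these lines, added to a Spring Data repository interface, would give you the same LIKE behavior without any SQL (a sketch; the exact method name depends on the Blog entity’s field names):

// Spring Data JPA derives the query from the method name,
// roughly: SELECT b FROM Blog b WHERE b.body LIKE %:content%
Iterable<Blog> findByBodyContaining(String content);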

A couple of other important points: by default, Spring Data JPA will scan for classes that match the name of your custom interface, CustomBlogRepo, with the postfix “Impl”, which is why the implementation class is named CustomBlogRepoImpl. This behavior can be customized if needed. Additionally, the implementation class must be located in a package where Spring Data JPA is scanning for repositories. This too is configurable, as can be seen in the previous link.

Finally we will need to extend the appropriate Repository interface with the custom interface we created. Here is an example of what this would look like:

public interface BlogRepo extends CrudRepository<Blog, Long>, CustomBlogRepo {

}

With this, we can use both the standard Spring Data JPA methods and the more complex database queries implemented in CustomBlogRepoImpl, all through the single repository BlogRepo. We can see this in action in BlogController:

@RestController
@RequestMapping("/api/v1/blogs")
public class BlogController {

    private BlogRepo repo;

    protected BlogController(BlogRepo repo) {
        this.repo = repo;
    }

    @GetMapping
    public ResponseEntity<Iterable<Blog>> getAllBlogs() {
        return ResponseEntity.ok(repo.findAll());
    }

    @GetMapping("/search-by/content/{content}")
    public ResponseEntity<Iterable<Blog>> findBlogsByContent(@PathVariable String content) {
        return ResponseEntity.ok(repo.findBlogsByContent(content));
    }
}

Conclusion

Being able to reference either JPA or more complex SQL queries from a single consistent repository interface makes it a lot easier and more consistent to work with JPA and SQL within your Spring Data JPA projects. With it you can largely sidestep questions of whether you should use JPA or SQL in your next project; the answer now can simply be: why not both?

The code used in this article can be found on my GitHub.

5 Reasons Why You Should Consider Switching to Eclipse OpenJ9

Often when the Java Virtual Machine is discussed in a presentation or article, it is described in very monolithic terms, i.e. “the JVM”. This can give the mistaken impression that there is only a single JVM implementation to choose from. In reality there are actually several JVM implementations to choose from. Knowledge that there are multiple JVMs available is becoming more common with the highly publicized GraalVM and Amazon’s Corretto. There is however another JVM option that has been around for some time, Eclipse OpenJ9.

Eclipse OpenJ9 is an open source JVM implementation that IBM supports. OpenJ9 was open sourced from IBM’s J9 JVM back in 2017 and is available for download from adoptopenjdk.net. In this article we will look at five reasons why you should consider switching to OpenJ9 for running your Java workloads in production.

#1 Significant Reduction in Memory Usage

openj9-memory-copnsumption

The biggest and most straightforward advantage OpenJ9 offers is that it uses dramatically less memory compared to the other publicly available JVMs. The above is a screenshot of a Grafana dashboard from a demo I put together comparing the performance of OpenJ9 to OpenJDK HotSpot, Amazon Corretto, and GraalVM. The demo, executing a simple Spring Boot Batch application, isn’t a specialized use case that plays to OpenJ9’s strengths, but the results are very much in line with the roughly 40% drop in memory consumption the IBM Runtimes team has seen when testing OpenJ9, as well as what users have reported back.

OpenJ9 is able to accomplish this dramatic reduction in memory usage because of its heritage as a JVM designed to run on smartphones in the early 2000s. Running applications in a mobile environment demands being more conscious of resource usage, as memory, CPU, and battery life were particularly scarce on mobile devices of that era. This legacy of resource conservation lives on in OpenJ9, and while there is no shortage of resources in a cloud environment, there is a price tag attached to using them. A JVM that uses fewer resources could mean organizations spending less each month hosting their Java applications on cloud platforms.

#2 Class Sharing

One of my favorite features of OpenJ9 is the ability to do class sharing. Class sharing is a feature that is key for the cloud native world where new instances of an application will be started up and shut down as demand changes or a new and better version is made available.

In a world where applications are frequently started up, the time it takes until an application is available to start servicing requests, and the time it takes until it reaches peak performance (ramp up), become more of a concern. Startup can be a resource-intensive time for an application as well, as the JVM compiles code and performs optimizations.

A lot of the actions performed by the JVM during the startup and ramp-up phases will be the same every time. As developers we are taught to reuse code where possible, and the lesson of reuse applies here as well. Class sharing works by allowing a JVM to write compiled code and optimizations to a common location (the shared class cache) that can then be used by other JVMs. Two great design choices in how this feature is implemented are that there is neither a leader/follower concept nor a requirement for the shared cache to be present. This removes points of failure: a JVM unexpectedly stopping, or the shared cache being deleted, will not cause other JVMs to crash or fail at startup.

Class sharing can reduce CPU utilization, as compilations and optimizations are CPU intensive, in addition to the aforementioned improved startup and ramp-up performance. For more information on class sharing check out this article. If you are interested in using class sharing in a (Docker) containerized environment, check out this article I wrote that looks at a couple of strategies for implementing class sharing in containers.
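
Enabling class sharing is done with a single JVM argument; a minimal sketch, where the cache name and cache directory are arbitrary examples:

java -Xshareclasses:name=myapp,cacheDir=/opt/shareclasses -jar myapp.jar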

#3 OpenJ9 is Free to Use

OpenJ9 is licensed under the Eclipse Public License 2.0 and Apache License 2.0, which means you can run your Java workloads in production on OpenJ9 without paying a penny in licensing costs. With the shockwaves and confusion still reverberating around the Java world from Oracle’s recent changes to how it licenses the commercial Oracle JDK, having an option for running Java where you can have total confidence that there isn’t a potential lawsuit looming over unpaid licensing costs can be very comforting.

For more information on OpenJ9’s license, you can view it in OpenJ9’s GitHub repository.

#4 Commercial Support is Available

Free to use is great if you are a startup running on a limited budget or in the proof of concept phase (such as proving the concept of using OpenJ9 😉). However if your organization’s assets are valued in the millions, billions, or even trillions, saving thousands a month in licensing costs doesn’t make much sense if you are giving up the peace of mind that commercial support offers. To that end, IBM offers commercial support for OpenJ9. You can read more about the support IBM provides, and its cost, here.

#5 Switching is Really Easy

So what does it take to switch to OpenJ9?

Refactoring your code to use OpenJ9 concepts?

Recompiling and redeploying all your Java applications to run in an OpenJ9 environment?

The answer is neither. Java is actually an API that is defined by the Java Language Specification (JLS). OpenJ9 is built to be compatible with the Java API as specified in the JLS (here is the JLS for Java 12), and so it is capable of running any Java code without any special requirements. The demo I referenced in my first point is actually running the exact same Java artifact in all the containers. If compatibility with existing Java artifacts wasn’t enough, OpenJ9 even offers migration support for HotSpot JVM arguments.

But if that isn’t enough for you, check out these testimonials from real users of OpenJ9:

openj9-testimonial.png

Conclusion

As a developer, my experience and interactions with JVMs have mostly been limited to executing java -jar. From my perspective the JVM was just something to run my Java applications on; I didn’t really care how. I became immediately interested in OpenJ9, however, because it has such readily apparent and accessible advantages over other JVMs. This article only scratches the surface of what OpenJ9 has to offer, and our IBM Runtimes team is hard at work improving and adding new features to OpenJ9.

If you would like to know more about OpenJ9 be sure to check out the very helpful user docs: https://www.eclipse.org/openj9/docs/

Wiring Multiple Datasources in a Spring Boot and Spring Data JPA Application

Having to wire a Spring Boot application to talk to multiple datasources is a requirement you come across every once in a while. The good news is this can be done reasonably easily with Spring Boot, and in this article we will walk through how to accomplish it.

Wiring Multiple Datasources Step-by-Step

Pre-Requisites

To run the demo application the following tools are needed:

  • Docker
  • Java

There are also some shell scripts available for building, running, testing, and tearing down the application for *nix OS users.

Application Structure

The example application used in this article has two domain models, Doctor and Clinic, each of which is persisted to its own separate datastore. The application overall is a very standard implementation of Spring Boot and Spring Data. Because there is already a lot of great documentation on how to implement such an application, those steps will be skipped. However for clarity, here is what the overall structure of the application looks like:

com.developer.ibm.multidatasource\
   clinic\
      Clinic
      ClinicsController
      ClinicsDatasourceConfiguration
      ClinicsRepo
   doctor\
      Doctor
      DoctorsController
      DoctorsDatasourceConfiguration
      DoctorsRepo
   MultiDatasourceApplication

The full application can be seen here: https://github.com/wkorando/multi-datasources-spring-boot

Configuring Spring Data

The first step is to define the @Configuration classes that add the DataSource, PlatformTransactionManager, and LocalContainerEntityManagerFactoryBean beans to the application context, which will be used by Spring Data when communicating with the databases. Both configuration classes look essentially identical, with one exception that will be covered in detail below. Let’s step through some of the key elements in these configuration classes:

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "clinicEntityManagerFactory", transactionManagerRef = "clinicTransactionManager")
public class ClinicsDatasourceConfiguration {

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "clinics.datasource")
    public DataSource clinicsDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    PlatformTransactionManager clinicTransactionManager(
            @Qualifier("clinicEntityManagerFactory") LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory) {
        return new JpaTransactionManager(clinicEntityManagerFactory.getObject());
    }

    @Bean
    LocalContainerEntityManagerFactoryBean clinicEntityManagerFactory(
            @Qualifier("clinicsDataSource") DataSource clinicsDatasource, EntityManagerFactoryBuilder builder) {
        return builder.dataSource(clinicsDatasource).packages(Clinic.class).build();
    }
}

@Primary

Outside of name differences, the @Primary added to clinicsDataSource is the only functional difference between ClinicsDatasourceConfiguration and DoctorsDatasourceConfiguration. Adding @Primary to clinicsDataSource is necessary because some of the autoconfiguration behavior within Spring Data depends upon a DataSource being available in the application context. In our situation, however, there will be two DataSources available in the application context, so adding @Primary to one gives Spring the information it needs to choose between the beans. For this application’s purposes, making clinicsDataSource the primary DataSource was an arbitrary decision, but deciding which DataSource should be the primary one might be worth thinking about depending on the requirements and behavior of your application.

@ConfigurationProperties

@ConfigurationProperties automatically maps the jdbc-url, username, and password properties prefixed with clinics.datasource that are available in the environment, in this case defined in application.properties (source), to the DataSource being created in clinicsDataSource. If this feels too “magical”, DataSourceBuilder (javadoc) also has standard builder methods available, among others url(String), username(String), and password(String).

Using @ConfigurationProperties helps to keep the behavior, from a developer’s perspective, more consistent with how a Spring Boot application would work if it had only a single DataSource. @ConfigurationProperties could also be useful in other scenarios. Here is an example of using @ConfigurationProperties to map to fields within a configuration class from an earlier version of the example project.
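
For illustration, here is a sketch of the more explicit alternative, injecting the same properties with @Value and calling the builder methods directly (the property names are assumed to match the ones in application.properties):

@Bean
@Primary
public DataSource clinicsDataSource(
        @Value("${clinics.datasource.jdbc-url}") String url,
        @Value("${clinics.datasource.username}") String username,
        @Value("${clinics.datasource.password}") String password) {
    // Same result as the @ConfigurationProperties version, just more verbose and explicit
    return DataSourceBuilder.create()
            .url(url)
            .username(username)
            .password(password)
            .build();
}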

@Qualifier

The arguments of the @Bean methods clinicTransactionManager and clinicEntityManagerFactory are annotated with @Qualifier. Like @Primary, @Qualifier tells Spring which instance of a class to use when there are multiple available in the application context. By default the name of a bean is the same as the name of the method that created it.

Defining the Properties

Next we need to provide Spring with the values to connect to both our databases. In this example we are connecting to containerized instances of MySQL and Postgres. We will go into a little more detail on this properties file below:

clinics.datasource.jdbc-url=jdbc:mysql://localhost:3306/clinics-db
clinics.datasource.username=root
clinics.datasource.password=secret

doctors.datasource.jdbc-url=jdbc:postgresql://localhost:5432/doctors-db
doctors.datasource.username=postgres
doctors.datasource.password=secret

spring.jpa.open-in-view=false

jdbc-url

When defining a datasource using the spring.datasource properties, the url property would be used. However, with Spring Boot 2, HikariDataSource became the standard DataSource implementation, so to use @ConfigurationProperties as in the @Configuration classes above, the property needs to be jdbc-url.

More info can be found here, as well as a workaround if you’d prefer to keep using url.

(I plan on going into more depth about this in a future article, as this change caused me a lot of pain while putting together the example application)
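
For reference, one commonly used workaround (a sketch, not necessarily the exact one referenced above) is to bind the properties, including a standard url property, to a DataSourceProperties object and build the DataSource from that:

@Bean
@Primary
@ConfigurationProperties(prefix = "clinics.datasource")
public DataSourceProperties clinicsDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
public DataSource clinicsDataSource(
        @Qualifier("clinicsDataSourceProperties") DataSourceProperties properties) {
    // DataSourceProperties understands the standard "url" property and
    // hands everything off to a DataSourceBuilder under the covers
    return properties.initializeDataSourceBuilder().build();
}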

spring.jpa.open-in-view

This is ultimately an optional addition. By default Spring Boot sets this property to true, which is arguably an anti-pattern; you can read more about that here. The reason this is relevant in an article about configuring multiple datasources is that when spring.jpa.open-in-view is set to true, Spring MVC will look for an instance of PlatformTransactionManager and LocalContainerEntityManagerFactoryBean in the application context.

This could alternatively have been resolved by adding @Primary to one of the @Bean definitions of PlatformTransactionManager and LocalContainerEntityManagerFactoryBean, as was done with clinicsDataSource. However, disabling spring.jpa.open-in-view should generally be done anyway, so that is the better resolution.

Running the Application

There are several scripts available for running the demo application, added for convenience and experimentation.

  • build.sh – builds the docker images and Java artifact
  • run.sh – starts up the Docker containers and Spring Boot application (note: there is a 15 second sleep between starting the containers and starting the app, to give the containers time to startup)
  • requests.sh – curl commands for POSTing and GETting to the Spring Boot application
  • kill.sh – stops and removes the Docker containers

The application by default runs at http://localhost:8080 with GET/POST endpoints residing at /clinics and /doctors.
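
For example, once run.sh has started everything up, the GET endpoints can be hit like this (requests.sh in the repository contains the full set of example requests, including POSTs):

curl http://localhost:8080/clinics
curl http://localhost:8080/doctors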

Conclusion

With a little work, a Spring Boot application can be set up to handle multiple datasources and still have an overall pretty familiar look and feel to it.

The code used in this article can be found here: https://github.com/wkorando/multi-datasources-spring-boot

An Extended Discussion on Customization in JUnit 5

Inspiration often comes in twos for me. While reviewing a recent blog article, What's New in JUnit 5.4, a colleague suggested I go into more depth on the usage of extensions in JUnit 5. Then in my Twitter timeline I saw this from one of the core committers of the JUnit 5 framework:

Screen Shot 2019-03-27 at 5.34.07 PM

Source: Twitter.

Later on in the thread, it turns out what was being attempted with the “hack” could have been accomplished by using a publicly available custom extension.

The above tells me two things: there is a need for a deep dive on the JUnit 5 extension model, and a need to explain the extensibility aspect of the JUnit 5 framework. When I talk about extensibility, I’m specifically referring to the quality of being able to build on top of the existing framework that the JUnit team has provided in JUnit 5. Whereas hacks have often been the modus operandi for getting around the limits of frameworks (whether those limits were intentional or not!), the JUnit team went to great lengths to make JUnit 5 extensible, and we’ll see in this series how to take advantage of that quality.

JUnit 5’s extension model and extensibility are by no means trivial subjects, so to make them more digestible, this will be a three-part blog series. The subjects of the articles will be:

  1. Introduction to and using the JUnit 5 extension model
  2. The JUnit 5 extension lifecycle and building a custom extension
  3. Understanding and using extensibility in JUnit 5

In this article, we will take a high-level overview of the extension model from the perspective of a user of extensions; we’ll learn why the extension model was introduced, how it improves upon what was in JUnit 4, the different ways to register an extension, and how to define the order of extension execution.

The JUnit 5 Extension Model

JUnit 5 was a fundamental re-write and re-design of the JUnit framework. Some areas largely remained the same, though with a few enhancements, like assertions. Other areas were completely overhauled, including the runner (@RunWith), MethodRule (@Rule), and TestRule (@ClassRule) mechanisms, which were rolled into the new extension model.

The benefits of this overhaul can be experienced in a number of ways. A pretty obvious one is that you can now declare multiple extensions at the class level, whereas before you could only declare a single @RunWith:

@ExtendWith(SpringExtension.class)
@ExtendWith(MockitoExtension.class)
public class TestSomeStuff {
    ...
}

A bit more subtle, parameterized tests and normal tests can now co-exist in the same class:

public class ParameterizedAndNormalTestsLivingTogether{

   @Test
   public void aNormalTest(){
      ...
   }

   @ParameterizedTest
   @ValueSource(strings = { "val1", "val2" })
   public void aParameterizedTest(String val) {
   ...
   }
}

Note: @ParameterizedTest is built using the extension model

If you haven’t run into the constraints imposed by the previous Runner and Rule architecture, I can assure you it’s quite the painful experience when you do! So being able to register multiple extensions in the same test class or locate a parameterized test and normal test in the same test class are reasons to celebrate. But this is only just scratching the surface of the extension model, so let’s start going deeper.

Registering Extensions

There are three different ways to register an extension in JUnit 5: declaratively, programmatically, and automatically. Each way of registering an extension comes with specific rules, constraints, and benefits. Let’s step through the different ways to register extensions and understand when and why you might prefer one method over the others.

Declaratively Registering Extensions

Extensions can be registered declaratively with an annotation at the class, method, or test interface level, and even with a composed annotation (which will be covered in depth in the article on extensibility). The code samples above are examples of registering extensions declaratively.

Declaratively registering extensions is probably the easiest way of registering an extension, and it can be made even easier with a composed annotation. For example, it is easier to remember to write @ParameterizedTest when you want to declare a parameterized test than @ExtendWith(ParameterizedTestExtension.class).

Because you are using an annotation to register the extension, all the constraints that come with annotations apply, such as only being able to pass compile-time constant values to the extension. Also, a test class cannot easily reference extensions that have been registered declaratively.

Programmatically Registering Extensions

Extensions can be registered programmatically with @RegisterExtension. There are a few rules regarding programmatically registered extensions. First, the extension field cannot be private. Second, the extension field cannot be null at the time of evaluation. Finally, an extension can be either a static or an instance field. A static extension has access to the BeforeAll, AfterAll, and TestInstancePostProcessor steps of the extension lifecycle.

Registering a programmatic extension would look like this:

@RegisterExtension
SomeExtension extension = new SomeExtension();

Test classes have a much greater degree of freedom when interacting with a programmatically registered extension, as it is just another field within the test class. This can be great for retrieving values out of an extension to verify expected behavior, passing values into the extension to manipulate its state at runtime, and other uses.
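
As an illustration, here is a sketch using a hypothetical StartTimeExtension; the test reads a value directly off the programmatically registered extension:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.RegisterExtension;

class StartTimeExtensionTest {

    // Hypothetical extension that records when the current test started
    static class StartTimeExtension implements BeforeEachCallback {

        private long testStartTime;

        @Override
        public void beforeEach(ExtensionContext context) {
            testStartTime = System.currentTimeMillis();
        }

        long testStartTime() {
            return testStartTime;
        }
    }

    @RegisterExtension
    StartTimeExtension startTime = new StartTimeExtension();

    @Test
    void canReadStateOffTheExtension() {
        // Because the extension is just a field, the test can query its state directly
        assertTrue(startTime.testStartTime() > 0);
    }
}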

Automatically Registering Extensions

The final way to register an extension is with the Java Service Loader. The Java Service Loader can best be described as arcane; at least, I generally get blank stares or looks of confusion when I bring it up. Though like many arcane things, it can be very powerful, for both good and ill!

The Java Service Loader can be used to automatically register extensions within a test suite. This can be helpful as it allows certain behaviors to happen automatically when executing tests. The flip side is that, depending on the type of work occurring within the extension, this could have a non-trivial impact on your test suite’s runtime; it could also interfere in non-obvious ways with how a test behaves (the person executing the test might not realize the extension is being executed, because it wasn’t registered locally). So, to quote Uncle Ben:

remember-with-great-power-comes-great-responsibility

Registering an Automatic Extension

Registering an automatic extension is a more involved process than the other two ways, so let’s quickly walk through the steps:

  1. Create a folder named META-INF at the base of your classpath
  2. Create a folder named services under META-INF
  3. Create a file named org.junit.jupiter.api.extension.Extension under services
  4. In org.junit.jupiter.api.extension.Extension add the fully qualified name of the extension you want registered, for example: my.really.cool.Extension
  5. Pass in -Djunit.jupiter.extensions.autodetection.enabled=true as a JVM argument (how to do this will vary based on your IDE), or configure your build file to automatically pass in the above argument. Here is an example using the Surefire plugin in Maven:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <properties>
            <configurationParameters>
               junit.jupiter.extensions.autodetection.enabled=true
            </configurationParameters>
        </properties>
    </configuration>
</plugin>

You can see a full example of the above here. Note META-INF is located under /src/test/resources.

Doing these steps every time you want to use an automatic extension in a project is a bit involved; in the article on extensibility we’ll take a look at how to make automatic extensions more practical to work with.

Ordering Extension Execution

As of JUnit 5.4 there are two ways to order how extensions are registered and executed during the test cycle. Ordering extension execution can be useful in the realm of higher-level tests, that is, tests above the unit test level. Integration tests, functional tests, feature tests, and other similar tests might require complex setup and tear down steps.

Even in test code, it is still important to follow principles like single responsibility. If, for example, you have a feature test that verifies the behavior of your application when it interacts with a database and a cache, it would be better to locate the logic for setting up and tearing down the database in one extension and the similar behavior for the cache in a separate extension, instead of putting all that behavior in a single extension. This allows for greater reusability of each extension, as well as making them easier to comprehend.

For all ways of ordering extension execution, the order is inverted for the “after” steps. So if you have three extensions named A, B, and C, each implementing BeforeEach and AfterEach behavior, then going into a test method the execution order would be A -> B -> C, while the execution order leaving the test method would be C -> B -> A.

Order of Declaration

When registering an extension declaratively, the order of declaration within the test class is the order in which the extensions are registered and executed. Take the below example:

@ExtendWith(FirstExtension.class)
@ExtendWith(SecondExtension.class)
public class TestExtensionExecutionOrdering {

   @Test
   @ExtendWith(ThirdExtension.class)
   public void testExtensions(){
   }
}

When executing the test method testExtensions() the execution order going in would be FirstExtension -> SecondExtension -> ThirdExtension and going out of testExtensions() it would be ThirdExtension -> SecondExtension -> FirstExtension.

I haven’t personally used this feature a whole lot. I have a lot of confidence that, from a framework perspective, this feature behaves as designed. What I worry about, however, is the extent to which this feature would be understood by most developers and test engineers. In my experience, the order in which annotations are declared on a class or method is not something that developers and test engineers often think about or interact with. If this concern is ever surfaced, it’s often for stylistic reasons, for example, that the shortest annotation should be declared first.

The good news is that, through the enhancements to extensibility I mentioned in the introduction to this article, a custom annotation can be created and shared that includes the declaration of multiple extensions in their proper order, as sketched below. We will take a deeper look at custom annotations, and other examples of extensibility, later in this series.
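
To sketch what that might look like, here is a composed annotation (using the FirstExtension and SecondExtension names from the earlier example) that bakes in both the extensions and their order:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.jupiter.api.extension.ExtendWith;

// Tests annotated with @FirstThenSecond always get FirstExtension registered before SecondExtension
@Target({ ElementType.TYPE, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(FirstExtension.class)
@ExtendWith(SecondExtension.class)
public @interface FirstThenSecond {
}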

Ordering Programmatically Registered Extensions

Ordering extension registration and execution by order of declaration has been a feature of JUnit 5 since its initial release. With JUnit 5.4, programmatically registered extensions can also be executed in a manually defined order (programmatically registered extensions have always been executed in a consistent order, but one that is “intentionally non-obvious”).

To define the execution order of a programmatically registered extension, the annotation @Order(n) needs to be added to the declaration of the extension field. You do not need to add an annotation at the class level, like you would for ordering test methods, to enable this behavior. However, as when ordering test method execution, you do not need to order every extension. Extensions that do not have a defined execution order are executed after all extensions that do, following the “consistent, but intentionally non-obvious” process mentioned above. So in the below example:

public class TestClass{
   @RegisterExtension
   @Order(1)
   BaseExtension baseExtension = new BaseExtension();

   @RegisterExtension
   @Order(2)
   SecondaryExtension secondaryExtension = new SecondaryExtension();

   @RegisterExtension
   AuxillaryExtension auxillaryExtension = new AuxillaryExtension();

   @Test
   public void testExtensions(){
   }
}

BaseExtension is executed first, SecondaryExtension second, and AuxillaryExtension, along with any other unordered extensions, after that.

Also note that programmatically registered extensions will be executed after all extensions that have been registered declaratively and automatically. So a programmatically registered extension annotated with @Order(1) may not be the first extension to be executed when running the test, so keep that in mind!

Conclusion

The new extension model added a lot of much-needed (and appreciated!) flexibility when it replaced the runner and rules architecture from JUnit 4. In the next article in the series we will take an in-depth look at the lifecycle of an extension and build our own custom extension!

The code used in this article, and series, can be found here.