Reflections on One Year as a Developer Advocate: Content

This month marks my one-year anniversary with IBM, which is also my first full year as a developer advocate. So I wanted to take a moment to reflect on the lessons learned and share the wisdom gained from that first year. This will be a three-part series focusing on different aspects of being a developer advocate: content creation, travel, and being an advocate for developers.

I hope this series offers a glimpse of what it’s like to be a developer advocate, both the highs and the lows. A word of caution, though: this is my experience as a developer advocate, so what I say shouldn’t be taken as universal truth. Take it as one opinionated view of being a developer advocate, and be sure to take in other views as well.

Content by Developers for Developers

Content creation is easily where I have spent most of my time as a developer advocate. Content can come in many forms: blog articles, podcasts, presentations, workshops, proof of concepts/working examples, and so on. What content you create will vary depending upon your own strengths and preferences, as well as the needs of your organization. No form of content is inherently better than another, though having a mix is good so as to appeal to and support the needs of a larger audience.

Content creation can be an extremely time consuming and at times mentally taxing process. When your content has a technical aspect, there can be an extensive problem-solving and investigative component as you actually build the technical resource. That part is probably not surprising to any developer; what is surprising is that the creative process, the actual creation of the content itself (i.e. the writing of an article, the recording of a video, etc.), can be just as time consuming. I, at least, spend a large amount of time writing, re-writing, and editing my content to make sure it flows easily and is clear in its message.

Because content creation can be a time consuming process, finding ways to reuse content is key. A blog article can become a presentation, or vice versa; a demo can be expanded into a workshop. Taking existing content and transforming it into different forms can not only save you a lot of time, but also make that content more accessible and usable to a larger audience. A presentation has an important human aspect that some find engaging, but not everyone can attend a presentation, so turning your presentation into a blog article not only draws more value out of the work you already did, but also makes your content accessible to a wider audience.

Be sure to give yourself realistic timelines and set aside enough time for content creation. A big change that comes with being a developer advocate is the frequency of hard deadlines. If you are producing content for an event (conference, meetup, client meeting, etc.), your content has to be ready by the agreed-upon time. There will inevitably be times when you are frantically finishing a presentation moments before going on stage, but it is best to avoid this by setting realistic deadlines beforehand.

Tip of the Spear, Edge of the Knife

As a developer advocate, you will often be working with the cutting edge of technology. A large portion of the content creation covered above will be explaining to developers how to consume your organization’s services and products. This means a couple of things:

  1. You will often be operating in an information vacuum, one you are working to fill! As you run into issues, reach out to the members of your organization who work on the product you are writing about. This can not only help you in the problem-solving process, but also be an opportunity to give the teams at your organization early feedback on how a product or service might be improved.
  2. Because APIs and offerings are not yet in their final form, you might often have to go back and re-verify that everything described in your content still works the same way. It has happened several times now that a step or process has changed between the time I wrote an article and the time it was published. There is also the curation of released content. Organizations will add new features, deprecate features, and change features to match the needs of their users. As your organization goes through this process, it is important for you to update any content you have created that might be affected.

Be Clear

I couldn’t agree more with Amara on this point, clear documentation is incredibly valuable. I cannot count the number of times I have read documentation where crucial steps were skipped or elements within a sample piece of code/configuration were glossed over. Being clear, complete, but also concise can really elevate your content.

The best advice I can give for creating content is to appreciate that your audience is not going to be nearly as familiar with the technology you are describing as you are (after all, that’s the reason they are consuming your content!). As developers we often experience this with users interacting with our applications in ways we don’t predict. Just as it is beneficial for a developer to keep in mind that many users will only rarely use their application, it is beneficial for a developer advocate to keep a similar mindset when creating content.

Be Cautious of Overpromising

More than anything else, the biggest problem I have had in my transition to developer advocacy is overpromising when it comes to content. While I was working as a staff software engineer, deadlines were somewhat fungible. As a developer advocate, hard deadlines are very common.

Hard deadlines for completing a presentation, workshop, or demo for a conference, user group, or client visit can lead to a lot of stress, anxiety, and late nights. Sometimes this happens because a small task ends up being much larger than expected; as developers we have all run into that seemingly innocent ticket that consumes an entire week. Because of the frequency of hard deadlines, be mindful of not over-committing yourself. Burning yourself out with constant stress and late nights, or giving a failed presentation because you weren’t able to complete it, helps neither you nor your organization.

Helping Yourself as You Help Others

While much of this article to this point has warned of the dangers and difficulties of content creation, it is also among the most rewarding aspects of being a developer advocate. Receiving a thank you from someone because they found your presentation, article, or code example helpful is incredibly rewarding. I also cannot count the number of times I have gone back to reference an article or proof of concept I created when working on something only tangentially related. For me at least, it reinforces that the content I am creating has value.

As a developer advocate I have also been pushed into learning many more aspects of the software development pipeline. I have been a Java developer for over a decade, but until joining IBM, I had only a very vague understanding of the JVM. While I wouldn’t label myself a JVM expert, because of my work with Eclipse OpenJ9 I have a much deeper understanding of JVMs and how they operate. Another area I often didn’t interact with much as a developer was operations and platform technologies. While learning about technologies like Docker and Kubernetes has at times been a frustrating experience this past year, these will be valuable skills going forward.

The learning process, while at times difficult and stressful, is also satisfying and incredibly rewarding. Because I am working with so many different technologies and practices, I never feel like I am in a rut, which is a pleasant contrast from my experiences as a developer.


Content creation is a time consuming but rewarding process. As a developer advocate, there is a good chance many of your highs and lows will come from content creation. The most important points to remember are to set realistic estimates and avoid overpromising; do that and you will hopefully avoid a good chunk of the stress and anxiety I experienced in my first year as a developer advocate.

Finding the Benefits of Impostor Syndrome

I am a frequent listener to the podcast Software Engineering Daily. On a recent episode, “Facebook Engineering Culture”, Kent Beck shared his experiences from his time at Facebook. It’s a great episode and I would highly encourage giving it a listen, as there are a lot of really interesting nuggets of information. One subject I found really interesting was when Mr. Beck shared how he initially struggled when first joining Facebook and how he dealt with a feeling common in the software engineering world: impostor syndrome.

We Are All Impostors

Kent Beck was an early advocate for Test-Driven Development and Extreme Programming. He has written a number of popular books on the subjects and has been an influential voice in the software engineering world at-large for several decades now. With such a reputation Mr. Beck is not someone you would think would have to worry about feeling like an impostor.

In 2011 Mr. Beck joined Facebook. With decades in the industry already, he thought he had “seen it all”. At Facebook, new engineers go through a multi-week “bootcamp” as a way to introduce them to Facebook’s engineering culture. It was during bootcamp that Mr. Beck realized he had in fact not seen it all. Facebook’s engineering culture deviated significantly from what he’d thought would be successful: few automated tests were being written; when he decided to hold a class on TDD, no one attended; and he felt he was one of the worst C++ programmers at Facebook, which was reflected in a bad review he received.

Mr. Beck had built his career and reputation on processes that put emphasis on writing automated tests, yet Facebook was finding success without following these practices. When Mr. Beck tried to use TDD practices at Facebook, he struggled. Upon realizing he was not performing well at Facebook, he got a feeling familiar to so many other developers: he was an impostor.

Programming Isn’t Just About Slinging Code

Slowly the software engineering world has been moving away from measuring the worth of a developer exclusively by how well they can “sling code”. A funny example of this was when tech Twitter was ablaze for a few weeks during the summer of 2019 with “10x engineer” memes. While it’s good that there is a greater appreciation that skills outside technical ones are necessary to be a successful developer, often when we experience impostor syndrome it’s because of concerns about our technical know-how.

This is where Mr. Beck found himself at Facebook. Despite his decades of experience and accomplishments, Mr. Beck realized that he wasn’t going to make it at Facebook just by “slinging code”. So Mr. Beck did some self-evaluation and found that he would be of better service by helping to coach up other engineers:

It was clear that just trying to sling code was not my differential advantage there. I started coaching. I had done a fair amount of one-on-one coaching with engineers before. There were no coaches at Facebook. I could see that there were engineers with tons of unrealized potential.

Because Facebook was solving unprecedented problems, there was no way they could hire somebody to solve them. A big bunch of the technical horsepower had to be generated in-house. I started this coaching program called good to great and began working with engineers one on one. Ended up coaching maybe a 150 or 200 engineers.

Personally, the program matched up other senior engineers with junior engineers who got more coaching. My students were demonstrably faster at getting promotions. They were twice as likely to get promoted in the year following coaching than their peers who didn’t get coached, all other things being as much equal as possible.

When we are experiencing impostor syndrome, it is important to remember that even if we lack the technical know-how or skill in one area, there are other skills we have that can benefit our organizations. For Mr. Beck it was his skill as a coach that helped him be successful; for you it might be skills and ability in some other area. If you are dealing with feeling like an impostor, take a moment for self-evaluation to see what other skills you could utilize to be successful.

Improvise, Adapt, Overcome

While Kent Beck was improvising by utilizing his coaching skills, a key decision he made was to re-evaluate his approach to programming. As mentioned above, Mr. Beck helped popularize Test-Driven Development. He wasn’t someone who casually wrote a unit test here or there; he literally wrote books on automated testing. However, Mr. Beck wasn’t going to be successful by following these methodologies at Facebook. He was going to have to adapt:

I deliberately chose to forget everything I knew about software engineering. I just said, “I’m going to try and be a programmer and I’m going to watch what people do. I’m just going to copy what they do. If somebody says this is too diff since that one, it will be too diffs. If somebody says you need tests for this, I’ll write tests. If they say you don’t need tests for that. Why are you writing tests? Then I won’t write tests, even if I think that’s my – that’s the natural thing to do.”

When dealing with the feeling of being an impostor, Mr. Beck didn’t become overwhelmed by it, nor did he suppress it. Instead he embraced it and then overcame it, using it as an opportunity to grow as a software engineer.


As someone who has also at times felt like an impostor, it is good to know that even someone as accomplished as Kent Beck has experienced the same feeling. When encountering the feeling that you are an impostor, what is important is to remember that you have other skills you can use to be successful, and to take the opportunity to learn new ways of being successful so you can grow as an engineer and a person.

Quotes taken from show transcript.

Three Books Every Developer Should Read

A lot goes into growing your career and knowledge as a developer. There are many ways to learn. The most common would be the experience gained from the day-to-day work of being a developer. A bit more self-directed would be building small proof of concepts with a cool new tool or framework, reading blog articles, watching videos, or listening to podcasts. All are great ways to learn, and each provides its own unique benefits. However, these forms are typically more concerned with answering the what or the how. Books are another popular form of learning, and where they differ from the other forms is their focus on the why.

Books, by focusing on the why, help to explain the underlying principles and rationale behind a practice or pattern. It is one thing, for example, to know that it is a good practice to wrap all calls to an outside service in a circuit breaker. It’s another to understand the goals and benefits behind the practice. This deeper level of understanding can help a developer make the move from junior to senior in more than title.
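To make the circuit breaker example concrete, here is a minimal sketch of the pattern's core idea (the class and method names are my own invention, not from any particular library): after enough consecutive failures, stop calling the remote service and fail fast with a fallback.

```java
import java.util.function.Supplier;

// A minimal circuit breaker sketch, illustrative only. After `threshold`
// consecutive failures the breaker "opens" and returns the fallback without
// touching the remote service, giving the service time to recover.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback; // open: fail fast instead of waiting on a sick service
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```

In practice you would reach for a library such as Resilience4j rather than rolling your own, and a real implementation would also add a timed “half-open” state that periodically retries the service; but even this sketch shows the *why*: protecting callers from a failing dependency rather than letting failures pile up.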

In this article I’m going to offer recommendations on three books I have read that have really helped me in my career. These books cover a wide range of subject areas: system/application design, reliability, and delivery. While the code examples in a book may be written in a language different from what you are familiar with, the principles are technology agnostic. So whether you are a C#, Java, or JavaScript developer, frontend or backend, all of these books should still be relevant and beneficial.

Domain Driven Design


Author: Eric Evans, Amazon

If you have wondered where the term “Domain Driven Design” came from, it is from this 2003 book. The concept of Domain Driven Design (DDD) has been talked about, blogged about, and tweeted about a lot since the book’s initial release. The quality of the commentary can vary, but it’s difficult to properly condense a 560-page book into a 1000-2000 word blog article or an hour-long talk.

Despite being written all the way back in 2003, its lessons are still as relevant as ever. A concept particularly important today would be bounded contexts. If you are considering switching to a microservice-style architecture, finding the edges of a context is critical, or else you might end up replacing very cheap method calls inside a monolithic application with very expensive HTTP calls between services.

Favorite Lesson in the Book

Ubiquitous Language – Early in my career I would frequently see code with very generic names: Processor, a, validate, and so on. Figuring out what the code was doing or how it related to business needs was difficult. Ubiquitous Language means writing code using the same nouns and verbs that your team uses to describe business concepts. So instead of Processor you might have CustomAccountProcessor, appointment instead of a, and validateMailingAddress instead of validate. While the concept of “self-documenting code” is a myth, writing code in this way helps to lower the learning curve for new developers, or even for yourself months later when returning to a project or module.
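As a tiny illustrative sketch (both classes here are invented for the example), compare the two naming styles side by side:

```java
// Generic names: the reader learns nothing about the business
class Processor {
    boolean validate(String a) {
        return a != null && !a.isEmpty();
    }
}

// Ubiquitous Language: the same logic, named with the team's business vocabulary
class CustomAccountProcessor {
    boolean validateMailingAddress(String mailingAddress) {
        return mailingAddress != null && !mailingAddress.isEmpty();
    }
}
```

Both classes do exactly the same thing, but only the second tells a new developer, at a glance, that this code validates the mailing address of a custom account.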

Release It!


Author: Michael Nygard, Amazon

Michael Nygard’s 2007 book covers how to design applications and systems to be fault tolerant and provide a good experience to clients when problems inevitably occur. The book covers several areas that can affect the performance of an application and/or system. The book is written in an anti-pattern/good pattern format, first covering a widely followed anti-pattern, the problems from following that anti-pattern, and then a good pattern that addresses the same need, but creates an application or system that is more stable, responsive, and able to recover from outages.

The edition of Release It! I have read is the first, which as mentioned was released in 2007, but a second edition was released early last year. The Amazon link above is to the second edition, which is the edition I would recommend reading; its reviews suggest it is just as well written as the first.

Favorite Lesson in the Book

Limiting Cascading Failures – No matter how careful the planning, how plentiful the resources, or how good the design practices followed, failures will happen. Finding the cause of a failure is something to worry about later; what is important in the moment is returning to a normal state as quickly as possible. The steps for getting back to a normal state will vary, but it will always be faster and easier if you design systems to limit the scope of a failure. If a database goes down, it will always be harder to return to a normal state if the application(s) that depend upon that database also crash and/or need to be restarted.

Several organizations I have been with didn’t see it as much of an issue if a downstream service also crashed or needed to be restarted, the thought being it wouldn’t be usable anyway. This always complicated the process of returning to a normal state and additionally impacted deployments, as it imposed a required deployment order. Designing systems to limit cascading failures makes recovery from failures faster and can have the added benefit of making deployments easier as well.

Continuous Delivery


Authors: Jez Humble and David Farley, Amazon

On the subject of deployments, we have Continuous Delivery. Not that I am playing favorites with my book recommendations, but Continuous Delivery has had the most profound impact on my career and how I view software development. Its lessons really resonated with me, as the book covered the pain points I was experiencing as a developer at the time and offered practical, sensible solutions that addressed them.

Continuous Delivery is one of the major reasons why I got really interested in automated testing, as automated testing is the foundation on which continuous delivery (the practice) is built. Generally speaking, my opinion is that if more organizations got continuous delivery right, it would address a lot of the other problems they frequently deal with.

Favorite Lesson in the Book

Auditability and reproducibility – The major theme of Continuous Delivery is automating the delivery pipeline to production. When the subject of automation is discussed, its benefits are often described as increased speed and reduced cost from replacing slow, expensive manual labor with processes executed by a machine (script). These are certainly significant benefits, but, while subtle, the biggest benefits of automation in my opinion are auditability and reproducibility, which are also covered in Continuous Delivery.

Automated processes by their very nature are auditable and reproducible. Want to know what an automated test is doing? Look at the code of the automated test. Want to see what happened during a deployment? Look at the logging from the system that executed the deployment script. Automated tests can be repeated, and deployment scripts rerun, to narrow in on and investigate a potential issue. Not to overhype automation too much, as it’s not a panacea, but its benefits over manual processes are difficult to overstate.
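As a trivial sketch of what this means in practice (the function and its steps are invented for illustration), even a few lines of deployment script are simultaneously documentation of what a deploy does and a rerunnable record of each run:

```shell
# deploy() is auditable (the steps are readable right here in the script)
# and reproducible (running it again performs exactly the same steps);
# its echo output forms a log trail for every execution
deploy() {
  version="$1"
  echo "deploying version ${version}"
  echo "running smoke tests against version ${version}"
}
```

Contrast that with a manual checklist: the only record of what actually happened is whatever the person doing the deploy remembers to write down.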


There are many great books on software development available; this is only a small selection that I have found particularly helpful. I frequently find myself thinking back to lessons learned from these books and referencing them in presentations and blog articles. Taking the time to read books is really important for all developers. Whether it’s the books listed in this article or others, I hope you set aside some time to pop open a book. While the benefits may not always be immediate and often require a relatively high investment of time, books can provide long-term returns in ways other types of learning may not always be able to.

What’s New in JUnit 5.5


A new minor version of JUnit 5, 5.5, was released on June 30th. That means it’s time for yet another article looking at what’s new in this release of JUnit 5! The JUnit team keeps up their blistering pace of a new minor release roughly every four months, and while this release might not quite have a “killer” new feature like previous releases have had, the aggregate of all the tweaks and enhancements makes this a pretty big release, far larger than I will be able to cover here. Let’s dive in and take a look at some of the key new features and changes introduced in JUnit 5.5.

Declarative Timeouts

Ideally automated tests should execute quickly; however, some tests (integration, feature, end-to-end, etc.) might require complicated setup and/or interact with remote resources. This can occasionally lead to situations where a test runs excessively long. To address this concern, @Timeout was introduced to provide declarative timeout support for test cases and lifecycle methods. Here is a code example showing some of the ways @Timeout can be used; I’ll go into more detail on its usage below:

@Timeout(3) // Sets timeout limit for each test in class to three seconds
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class TimeoutTest {
    static int testCounter = 0;

    @BeforeAll
    @Timeout(2) // If this timeout is exceeded, all tests are failed
    public static void classSetupWithTimeout() throws InterruptedException {
        // …complex setup code
    }
    @BeforeEach
    @Timeout(2) // If timeout is exceeded current test is failed, but next test will be attempted
    public void methodSetupWithTimeout() throws InterruptedException {
        Thread.sleep(1500 * testCounter);
    }
    @Test
    @Order(1)
    @Timeout(unit = TimeUnit.MILLISECONDS, value = 500L) // Default unit is seconds, but other options available
    public void testTestCaseTimeout() throws InterruptedException {
        // …code that runs longer than the 500 ms limit
    }
    @Test
    @Order(2)
    public void testExceedsClassTimeLimit() throws InterruptedException {
        // …code that runs longer than the class-level three-second limit
    }
    @Test
    @Order(3)
    public void timeoutTest1() {
        testCounter = testCounter + 1;
    }
    @Test
    @Order(4) // Will fail due to timeout
    public void timeoutTest2() {
        testCounter = testCounter + 1;
    }
    @Test
    @Order(5) // Will fail due to timeout, but still attempted
    public void timeoutTest3() {
        testCounter = testCounter + 1;
    }
}

@Timeout isn’t limited to being placed on test cases themselves. It can also be placed at the type level, where it provides a default timeout for all test cases declared in the class; this can be overridden by adding a @Timeout to an individual test case. @Timeout can also be added to the lifecycle methods @BeforeAll, @BeforeEach, @AfterEach, and @AfterAll.

@Timeout also provides flexibility in setting the unit for timeout length. By default @Timeout uses seconds for its unit of measurement, however @Timeout also has the unit field which can take a value of TimeUnit. Hopefully you’ll never need to declare a TimeUnit value larger than seconds, but having the flexibility to use milliseconds, microseconds, or even nanoseconds, can be helpful when testing race conditions and following the principle of “fail fast” even in the automated testing world (even a unit as small as a second can start to add up in a large enough test suite).

Timeouts can also be declared as system properties. As with @Timeout, the default unit is seconds, but this can be changed by appending a unit label to the value:
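From the JUnit 5 user guide (worth double-checking against the docs for your version), the supported unit labels are ns, μs, ms, s, m, h, and d, appended after the numeric value. For example:

```properties
# junit-platform.properties: a 150-millisecond default timeout for all tests
junit.jupiter.execution.timeout.default = 150 ms
```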


More specific properties override less specific ones (i.e. setting timeout.beforeall would override the value of timeout.lifecycle). This provides a way of setting sensible defaults for how long tests should be allowed to run, which can be easily overridden if needed. Below is the full list of timeout system properties:
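Reconstructing from the 5.5 user guide (verify against the official documentation, as these names may have shifted between versions), the timeout configuration properties are:

```properties
junit.jupiter.execution.timeout.default
junit.jupiter.execution.timeout.testable.method.default
junit.jupiter.execution.timeout.test.method.default
junit.jupiter.execution.timeout.test.template.method.default
junit.jupiter.execution.timeout.test.factory.method.default
junit.jupiter.execution.timeout.lifecycle.method.default
junit.jupiter.execution.timeout.beforeall.method.default
junit.jupiter.execution.timeout.beforeeach.method.default
junit.jupiter.execution.timeout.aftereach.method.default
junit.jupiter.execution.timeout.afterall.method.default
```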


@RegisterExtension Updates

@RegisterExtension was added in JUnit 5.1 to support registering extensions programmatically. Since 5.1, @RegisterExtension has mostly remained unchanged, save for the ability to declare extension execution order, which was added in 5.4. With JUnit 5.5, @RegisterExtension has had a number of small but helpful changes made to it; let’s take a look at them.

@RegisterExtension Gets Loud 📣

There are several constraints on how @RegisterExtension can be used. Two of them are that the field annotated with @RegisterExtension must not be private and that the assigned value must implement the Extension interface. With JUnit 5.5, violating these constraints will cause an exception to be thrown, whereas previously fields that violated these constraints would be silently ignored. The below gif demonstrates the change in behavior.

@RegisterExtension Gets Secretive

In previous releases of JUnit, the declared type of a @RegisterExtension field also had to implement the Extension interface. The JUnit team has relaxed this constraint so that only the assigned value of the field must implement Extension. This is helpful for extension designers who want to hide fields that users of an extension should not be changing, and generally makes for an API that is cleaner and easier to use.

TestExecutionListener Fails More Quietly

While JUnit has made misuse of @RegisterExtension “noisier”, with JUnit 5.5 exceptions thrown by TestExecutionListeners have gotten quieter. In JUnit 5.4 and before, an exception thrown by a TestExecutionListener caused the entire test run to terminate. This behavior isn’t really desirable: listeners exist within the world of reporting, so they aren’t necessarily “critical”, yet one throwing an exception would terminate a whole test run.

With JUnit 5.5, when a TestExecutionListener throws an exception, the stacktrace is instead printed to the console at the WARN level. This allows the test suite execution to continue, while ensuring the information about the error is neither lost nor buried.

MethodOrder Random Seed Print Out

In JUnit 5.4, method ordering was added. One of the default implementations of method ordering is random method ordering. Random method ordering can be useful for checking that tests don’t have any unintended relationships between them. By running tests in a random order, you can validate that testCaseD doesn’t depend upon, and isn’t impacted by, behavior occurring in testCaseC.

Note: This is generally only referring to unit tests. It can be appropriate for other types of automated tests to have dependencies upon one another.

If a seed isn’t supplied via the junit.jupiter.execution.order.random.seed property, JUnit will generate a seed for you. With JUnit 5.5, when this happens JUnit will also print a log statement displaying the seed value used. This allows a test run to be recreated if a problem is discovered. The below gif demonstrates the behavior:
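For example, to replay a run using a seed value that JUnit logged (the value 99 here is arbitrary):

```properties
# junit-platform.properties
junit.jupiter.execution.order.random.seed = 99
```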

Execute Classes in Parallel

The JUnit team has made further improvements to their support for parallel test execution. In JUnit 5.5, the configuration parameter junit.jupiter.execution.parallel.mode.classes.default has been added, which allows you to define how parallel test execution should behave at the class level. As with junit.jupiter.execution.parallel.mode.default, the two accepted values are SAME_THREAD and CONCURRENT. Additionally, junit.jupiter.execution.parallel.mode.classes.default defaults to the value that junit.jupiter.execution.parallel.mode.default is set to, which is SAME_THREAD by default.

The goal of this change is to make it easier to configure parallel test execution in your test suite. This behavior previously could have been implemented by manually adding @Execution(MODE) to the declaration of every class; the problem with that is right there in the description: it required manual work. The addition of junit.jupiter.execution.parallel.mode.classes.default allows a default behavior to be applied to an entire test suite from a single location and overridden where needed. Below is a gif showing how a test suite would execute using the different settings:
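As a sketch, one common combination might look like this in a junit-platform.properties file (assuming parallel execution is enabled for the suite):

```properties
# Run top-level test classes concurrently, while the methods within
# each class stay on the same thread
junit.jupiter.execution.parallel.enabled = true
junit.jupiter.execution.parallel.mode.default = same_thread
junit.jupiter.execution.parallel.mode.classes.default = concurrent
```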

Default Test Discovery Implementations

A core characteristic of JUnit 5 is its extensibility. Between the needs of library developers and the needs of individual organizations, the JUnit framework cannot reasonably natively support every use case. To address this concern, the JUnit team has worked to make JUnit 5 a foundation that is exceptionally easy to build off of. This characteristic continues with the addition of default implementations for test discovery for both Jupiter and Vintage JUnit tests.

The default test discovery implementations will make it easier to write custom TestEngine implementations. Generally this isn’t something most developers will need to do, but if you have an unusual use case or a need not quite met by existing TestEngines, then having default discovery implementations will make writing your own TestEngine a little easier.

I would recommend checking out the associated GitHub issues behind this change for more in-depth information: 1739 and 1798.

Deprecation of EnabledIf/DisabledIf

Introduced with the release of JUnit 5 was the ability to conditionally disable tests. This feature has gradually been enhanced over time with the JUnit team providing several sensible defaults like disabling (or enabling) a test based upon OS, Java version, and by system property.

However, with JUnit 5.5 the JUnit team is deprecating @EnabledIf and @DisabledIf for eventual removal (currently slated for 5.6). @EnabledIf/@DisabledIf provided a mechanism for writing a script (JavaScript, Groovy, et al.) to evaluate whether a test should be executed. Ultimately though, this method provided little benefit over writing an implementation of ExecutionCondition, while having the drawbacks of creating a maintenance headache, as a script would have to be copied and pasted to be reused, and of being slower to execute.

Even setting aside the above concerns, the expiration date for script-based conditions was nigh. Nashorn, Java’s JavaScript engine, was deprecated in Java 11 and is scheduled for removal, and script-based conditions don’t play nicely with the module path. If you are using @EnabledIf or @DisabledIf in your projects, it would be a good idea to start migrating away from them now.
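As a migration path, a script-based condition can usually be rewritten as a small ExecutionCondition implementation. Below is a minimal sketch; the DisabledOnStagingCondition name and the env system property are hypothetical examples, not part of JUnit:

```java
import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical replacement for a script-based condition such as
// @DisabledIf("systemProperty.get('env') == 'staging'")
class DisabledOnStagingCondition implements ExecutionCondition {

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        if ("staging".equals(System.getProperty("env"))) {
            return ConditionEvaluationResult.disabled("Disabled in staging environment");
        }
        return ConditionEvaluationResult.enabled("Not running in staging environment");
    }
}
```

The condition can then be applied with @ExtendWith(DisabledOnStagingCondition.class) on a test class or method, or wrapped in a composed annotation if it needs to be reused across a codebase.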


Because there were so many changes in this release, I was only able to cover a small portion of them (smaller than even normal). To see all the changes included in this release, be sure to check out the release notes. And as always, be sure to check out the JUnit user guides for more in-depth information on how to write automated tests with JUnit 5.

To view the code used in this article, check out my GitHub repo on JUnit 5.

Keeping Dependencies Up-to-Date with Automated Testing

One of my big passions in software development is automated testing, which can be seen by how often I write on the subject. This passion stems from having spent years working with organizations that struggled to deliver to production in a timely manner, and from the anxiety I would experience during deploys. Too often when a release was pushed to production, I would soon hear back that a bug had made it to production, and the commit logs would show “bkorando” on the change that introduced it. I would feel ashamed that a change I made caused a problem in production, and felt it was because I wasn’t being careful enough.

Of course I wasn’t the only developer accidentally introducing bugs into production and the organizations I worked with, and many others, would respond to this problem by introducing new processes. Processes like; having manager sign off on a release, a tester manually run through a test script, or requiring extensive justification for why a change should be made. These additional processes would do little to resolve the issue of bugs getting to production, they did however result in two things:

  1. Production pushes became more painful, which led to them happening less often
  2. Developers were encouraged to limit the scope of their changes to reduce risk

In this article we are going to look at how relying on manual processes hinders an organization’s ability to keep its dependencies up-to-date, why this is a problem, and a use case in which an automated test catches a subtle breaking change that occurred from upgrading a dependency.

Coding On the Shoulders of Giants

As modern software developers, we owe a lot to the developers who came before us. Margaret Hamilton pioneered error handling and recovery techniques while developing the software for the Apollo flight control computer. If you are performing distributed transactions, Hector Garcia-Molina and Kenneth Salem paved the way with the saga pattern, which describes how to roll back distributed transactions when errors inevitably occur.

Our foremothers and forefathers have allowed our field to advance by pushing the boundaries of what was possible and introducing new concepts and patterns. Often, though, the way we benefit from these pioneering advances is through libraries and frameworks that implement or provide abstractions over those concepts and patterns. While we labor all day building and maintaining applications for our organizations, the reality is that the code we write is only a small fraction of the code actually running in production.

Graph depicting the code we write resting at the peak of a pyramid of code (dependencies) it runs on

Just like the applications we maintain, the libraries and frameworks we depend upon are constantly changing. New features are being added, bugs fixed, performance improved, security holes plugged. If you want the hair on the back of your neck to stand up, I would recommend checking out , which can be a great resource for showing not only the vulnerabilities of a website, but how to exploit them!

Frozen in Time

Photo by George Poinar, Jr.

In the introduction, I talked about how additional processes would encourage developers to minimize their changes. An easy way of keeping a change set small is not updating an application’s dependencies. I have quite frequently worked on applications whose dependencies were years old! At times, working on these applications made me feel like a paleontologist, examining applications frozen in amber from a time long ago.

From the perspective of an organization that depends upon manual processes, the decision to freeze dependencies is understandable. Updating a dependency, especially a framework dependency like Spring Boot, could impact an entire application. If an entire application might be impacted by a change, then it needs a full regression test, and when that is manual, it is a time-consuming and expensive effort. However, in attempting to resolve one issue, preventing bugs from being introduced into production, this created another: deploying applications with significantly out-of-date dependencies, which might contain critical security vulnerabilities (CVEs).

Focusing on the Right Issues


Manual regression testing, along with being very time consuming, isn’t a very good way to verify the correctness of an application. Issues with manual regression testing include:

  1. Difficult to impossible to recreate some scenarios (e.g. system failure cases)
  2. Test cases not being executed correctly leading to false positives or false negatives
  3. Test cases not being executed at all

Automated testing can address these issues, as automated tests: are executed much more rapidly, require very little manpower to execute, are much more granular and flexible, and are auditable to ensure they are being executed and are testing the intended behavior.

While it is nice to sing the praises of automated testing in the abstract, it would be good to see an example of how it can catch a non-trivial, non-obvious bug that occurred from updating a dependency. As luck would have it, I just so happen to have one.

Catching an Upgrade Bug

When I was working on a recent article on wiring multiple datasources, I ran into a bit of a frustrating issue. To distill what happened to the essential points: while checking my work against existing articles on the same subject, I tried implementing a version of a solution another author demonstrated. However, this solution didn’t work for me, even though the code looked the same. The gif below demonstrates the issue I ran into, as well as how to write a test that would catch it:

Code can be found on my GitHub. Note that the code from the gif above is in branches off master.

A Perfect and Imperfect Use Case

Most any use case is going to run into relevancy problems. The problem with this use case, in showing the value of automated testing when upgrading dependencies, is that this issue would show up at startup time. Upgrading to Spring Boot 2.x without updating the properties to use *.jdbc-url causes the application to fail at startup, as it is not able to properly configure an EntityManager. So the effect of this change would be fairly obvious.
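For context, the property change in question looks something like the following sketch. The app.datasource.orders prefix is a hypothetical example of a second datasource bound via @ConfigurationProperties onto a HikariDataSource, whose URL property is jdbcUrl rather than url:

```properties
# Spring Boot 1.x style - this key no longer binds in this setup:
# app.datasource.orders.url = jdbc:h2:mem:orders

# Spring Boot 2.x, with HikariCP as the default connection pool:
app.datasource.orders.jdbc-url = jdbc:h2:mem:orders
app.datasource.orders.username = sa
app.datasource.orders.password =
```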

On the other hand, this is a perfect use case for demonstrating the usefulness of automated testing for several reasons. Let’s quickly step through some of them:

  1. The test is easy to write – The test case, as can be seen in the gif above, is pretty simple to write. Recreating the failure scenario is easy and can be done with an in-memory database, so the test is highly portable.
  2. The test covers other concerns – The test isn’t strictly about recreating a narrow error case (i.e. one you’d have to know about before writing the test). Database communication is important and is something that would require a test regardless. If there were other subtle breaking changes in that area resulting from a dependency upgrade, they might be caught by a test like this one as well.
  3. An automated test suite can find multiple issues simultaneously – This issue was found at startup. In a more substantial application, it is likely there would be other issues that come from doing a major version upgrade of a framework dependency like Spring Boot. Even if all the issues surfaced at startup, which wouldn’t always be the case, it would be a grueling process to sequentially fix each issue as it comes up, restarting the application every time. A test suite could catch multiple issues in a single execution, allowing them to be fixed together.
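To make the idea concrete, here is a rough sketch of the kind of test described above, assuming an in-memory database is on the test classpath. The Widget and WidgetRepository types are hypothetical stand-ins for real application types:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical entity standing in for a real application type.
@Entity
class Widget {
    @Id
    @GeneratedValue
    Long id;
    String name;

    Widget() { }

    Widget(String name) {
        this.name = name;
    }
}

// Hypothetical Spring Data repository.
interface WidgetRepository extends JpaRepository<Widget, Long> { }

// @SpringBootTest starts the full application context, so a misconfigured
// DataSource or EntityManager fails this test immediately: the same
// failure that would otherwise only surface when the application starts.
@SpringBootTest
class DatabaseSmokeTest {

    @Autowired
    WidgetRepository widgetRepository;

    @Test
    void canPersistAndReadBack() {
        widgetRepository.save(new Widget("test"));
        assertEquals(1, widgetRepository.count());
    }
}
```

A single run of a suite containing tests like this surfaces every datasource misconfiguration at once, rather than one restart at a time.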

Resolving issues that come up when upgrading underlying dependencies, even with a good automated test suite, can be a difficult process. It took me a while to track down the solution to my issue, which would have been the case regardless of how the issue was discovered.

Safe to Fail

As I began summarizing this article, I was listening to a recent episode of A Bootiful Podcast, and this quote really stuck out to me (lightly edited):

You become safe to fail, you accept that no matter what you do there will be bugs… you’re just able to find them quickly and push the fix quickly


An important takeaway is that no automated test suite is going to entirely eliminate the risk of a bug making it to production. What an automated test suite does give you, though, is a safety net and the ability to respond rapidly to bugs when they do occur. It gives you confidence that, generally, it will find bugs in your code, and that when a bug inevitably slips through, you can respond to it rapidly and with confidence that you are not introducing yet another bug into production. This protection applies to changes made within your code, as well as to changes in the dependencies your code runs on.


Awareness of the importance of automation, and particularly automated testing, has been slowly increasing in the software development industry over recent years. I am pleasantly surprised when I ask during presentations how many people work at organizations that do continuous delivery; usually between 25% and 50% of the room raise their hands.

Still, that leaves half or more of organizations that haven’t yet turned the corner on automation. I often find the best way to encourage change at an organization is to lay out the cost of staying with the status quo and how the proposed changes resolve those problems. If you’re at an organization that still relies heavily on manual processes, and you have been wanting to change this, hopefully this article helps you build the case for your organization to start embracing automation and automated testing.