Three Books Every Developer Should Read

A lot goes into growing your career and knowledge as a developer. There are many ways to learn. The most common is experience from the day-to-day work of being a developer. A bit more self-directed are building small proofs of concept with a cool new tool or framework, reading blog articles, watching videos, and listening to podcasts. All are great ways to learn, and each provides its own unique benefits. However, these forms are typically more concerned with answering the what or the how. Books are another popular form of learning, and where they differ from the other forms is their focus on the why.

Books, by focusing on the why, help to explain the underlying principles and rationale behind a practice or pattern. It is one thing, for example, to know that it is a good practice to wrap all calls to an outside service in a circuit breaker. It’s another to understand the goals and benefits behind the practice. This deeper level of understanding can help a developer make the move from junior to senior in more than title.
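The circuit-breaker practice mentioned above can be sketched in a few lines. This is a hypothetical, minimal version (the class and method names are invented for the example; a real project would typically reach for a library such as Resilience4j): after a configured number of consecutive failures it stops calling the troubled service and returns a fallback instead.

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch. After `threshold` consecutive failures
// the breaker "opens" and fails fast instead of calling the outside service.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= threshold;
    }

    public <T> T call(Supplier<T> serviceCall, T fallback) {
        if (isOpen()) {
            return fallback; // fail fast: give the struggling service room to recover
        }
        try {
            T result = serviceCall.get();
            consecutiveFailures = 0; // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```

Understanding the why behind the pattern — callers fail fast, and the troubled service is shielded from a pile-up of requests — also tells you what this sketch leaves out: a full implementation would add a timeout before retrying (the "half-open" state).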

In this article I'm going to offer recommendations on three books I have read that have really helped me in my career. These books cover a wide range of subject areas: system/application design, reliability, and delivery. While the code examples in the books may be written in a language different from the one you are familiar with, the principles are technology agnostic. So whether you are a C#, Java, or JavaScript developer, working on the frontend or the backend, all of these books should be relevant and beneficial.

Domain Driven Design


Author: Eric Evans, Amazon

If you have wondered where the term “Domain Driven Design” came from, it is from this 2003 book. The concept of Domain Driven Design (DDD) has been talked about, blogged about, and tweeted about a lot since the book’s initial release. The quality of the commentary can vary, but it’s difficult to properly condense a 560-page book into a 1,000-2,000 word blog article or hour-long talk.

Despite being written all the way back in 2003, its lessons are still as relevant as ever. A concept particularly important today is bounded contexts. If you are considering switching to a microservice-style architecture, finding the edges of a context is critical, or else you might end up replacing very cheap method calls inside a monolithic application with very expensive HTTP calls between services.

Favorite Lesson in the Book

Ubiquitous Language – Early in my career I would frequently see code with very generic names: Processor, a, validate, and so on. Figuring out what the code was doing, or how it related to business needs, was difficult. Ubiquitous Language describes writing code using the same nouns and verbs that your team uses to describe business concepts. So instead of Processor you might have CustomerAccountProcessor, appointment instead of a, and validateMailingAddress instead of validate. While the concept of “self-documenting code” is a myth, writing code in this way helps to lower the learning curve for new developers, or even for yourself when returning to a project or module months later.
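As a hypothetical before/after illustration, assuming an appointment-scheduling domain (the class, method, and validation rule below are invented for the example):

```java
// Before: boolean validate(String a) { ... } tells a reader nothing about the business.
//
// After: names drawn from the team's ubiquitous language make the rule visible.
public class AppointmentValidator {
    /** A mailing address is usable when it has a street and a 5-digit postal code. */
    public boolean validateMailingAddress(String street, String postalCode) {
        return street != null && !street.isBlank()
                && postalCode != null && postalCode.matches("\\d{5}");
    }
}
```

Nothing about the logic changed between the two versions; only the vocabulary did, and with it how quickly a new team member can connect the code to the business concept it models.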

Release It!


Author: Michael Nygard, Amazon

Michael Nygard’s 2007 book covers how to design applications and systems to be fault tolerant and to provide a good experience to clients when problems inevitably occur. The book covers several areas that can affect the performance of an application and/or system. It is written in an anti-pattern/pattern format: first a widely followed anti-pattern is covered, along with the problems that come from following it, and then a good pattern that addresses the same need while creating an application or system that is more stable, responsive, and able to recover from outages.

The edition I have read is the first, released as mentioned in 2007, but a second edition was released early last year. The Amazon link above is to the second edition, which is the edition I would recommend reading. Reviews of the second edition suggest it is just as well written as the first.

Favorite Lesson in the Book

Limiting Cascading Failures – No matter how careful the planning, how plentiful the resources, or how good the design practices followed, failures will happen. Finding the cause of a failure is something to worry about later; what is important in the moment is returning to a normal state as quickly as possible. The steps to getting back to a normal state will vary, but it will always be faster and easier if you design systems to limit the scope of a failure. If a database goes down, it will always be harder to return to a normal state if the application(s) that depend upon that database also crash and/or need to be restarted.

Several organizations I have been with didn’t see much of an issue if a downstream service also crashed or needed to be restarted, the thought being it wouldn’t be usable anyway. This always complicated the process of returning to a normal state, and it additionally impacted deployments, as there was a required deployment order. Designing systems to limit cascading failures makes recovery from failures faster and can have the added benefit of making deployments easier as well.
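A minimal sketch of the idea, with hypothetical names: the service catches the dependency failure at its boundary and degrades, rather than letting the failure cascade into a crash that requires a restart.

```java
import java.util.Optional;

// Sketch: contain a database failure at the service boundary so the
// application stays up and recovers the moment the database does.
public class AccountService {
    /** Stands in for a database-backed repository. */
    public interface AccountRepository {
        String findName(long accountId);
    }

    private final AccountRepository repository;

    public AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    /** Returns the account name, or empty while the database is unavailable. */
    public Optional<String> accountName(long accountId) {
        try {
            return Optional.ofNullable(repository.findName(accountId));
        } catch (RuntimeException databaseUnavailable) {
            return Optional.empty(); // degrade instead of cascading the failure
        }
    }
}
```

Because the failure stops here, returning to a normal state after a database outage requires no deployment-order gymnastics; the dependent application never went down.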

Continuous Delivery


Authors: Jez Humble and David Farley, Amazon

On the subject of deployments, we have Continuous Delivery. Not that I am playing favorites with my book recommendations, but Continuous Delivery has had the most profound impact on my career and how I view software development. Its lessons really resonated with me, as the book covered the pain points I was experiencing as a developer at the time and offered practical, sensible solutions that addressed them.

Continuous Delivery is one of the major reasons why I got really interested in automated testing, as automated testing is the foundation on which continuous delivery (the practice) is built. Generally speaking, my opinion is that if more organizations got continuous delivery right, it would address a lot of the other problems they frequently deal with.

Favorite Lesson in the Book

Auditability and Reproducibility – The major theme in Continuous Delivery is automating the delivery pipeline to production. When the subject of automation is discussed, its benefits are often described as increased speed and reduced cost from replacing slow, expensive manual labor with processes executed by a machine (script). These are certainly significant benefits, but, while subtle, the biggest benefits of automation in my opinion are that it is auditable and reproducible, qualities which are also covered in Continuous Delivery.

Automated processes by their very nature are auditable and reproducible. Want to know what an automated test is doing? Look at the code for the automated test. Want to see what happened during a deployment? Look at the logging from the system that executed the deployment script. Automated tests can be repeated, and deployment scripts rerun, to narrow in on and investigate a potential issue. Not to overhype automation too much, as it’s not a panacea, but its benefits over manual processes are difficult to overstate.

Conclusion

There are many great books on software development available; this is only a small selection that I have found particularly helpful. I frequently find myself thinking back to lessons learned from these books and referencing them in presentations and blog articles. Taking the time to read books is really important for all developers. Whether it’s the books listed in this article or others, I hope you set aside some time to pop open a book. While the benefits may not always be immediate, and often require a relatively high investment of time, books can provide long-term returns in ways other forms of learning may not.

Keeping Dependencies Up-to-Date with Automated Testing

One of my big passions in software development is automated testing, as can be seen by how often I write on the subject. This passion stems from having spent years working with organizations that struggled to deliver to production in a timely manner, and from the anxiety I would experience during deploys. Too often, when a release was pushed to production, I would soon hear back that a bug had made it to production, and the commit logs would show “bkorando” for when the bug was introduced. I would feel ashamed that a change I made caused a problem in production, and felt it was because I wasn’t being careful enough.

Of course I wasn’t the only developer accidentally introducing bugs into production, and the organizations I worked with, like many others, would respond to this problem by introducing new processes: having a manager sign off on a release, having a tester manually run through a test script, or requiring extensive justification for why a change should be made. These additional processes did little to resolve the issue of bugs getting to production. They did, however, result in two things:

  1. Production pushes became more painful, which led to them happening less often
  2. Developers were encouraged to limit the scope of their changes to reduce risk

In this article we are going to look at how relying on manual processes hinders an organization’s ability to keep its dependencies up-to-date, why this is a problem, and a use case of how an automated test can catch a subtle breaking change introduced by upgrading a dependency.

Coding On the Shoulders of Giants

As modern software developers, we owe a lot to the developers who came before us. Margaret Hamilton pioneered error handling and recovery techniques while developing the software for the Apollo flight control computer. If you are performing distributed transactions, Hector Garcia-Molina and Kenneth Salem paved the way with the Saga pattern, which describes how to roll back distributed transactions when errors inevitably occur.

Our foremothers and forefathers have allowed our field to advance by pushing the boundaries of what was possible and introducing new concepts and patterns. Often, though, the way we benefit from these pioneering advances is through libraries and frameworks that implement, or provide abstractions over, those concepts and patterns. While we labor all day building and maintaining applications for our organizations, the reality is that the code we write is only a small fraction of the code actually running in production.

Graph depicting the code we write resting at the peak of a pyramid of code (dependencies) it runs on

Just like the applications we maintain, the libraries and frameworks we depend upon are constantly changing. New features are being added, bugs fixed, performance improved, security holes plugged. If you want the hair on the back of your neck to stand up, I recommend checking out shodan.io, which can be a great resource for showing not only the vulnerabilities of a website, but how to exploit them!

Frozen in Time

Photo by George Poinar, Jr.

In the introduction, I talked about how additional processes would encourage developers to minimize their changes. An easy way of keeping a change set small is by not updating an application’s dependencies. I have quite frequently worked on applications whose dependencies were years old! At times, working on these applications made me feel like a paleontologist, examining applications frozen in amber from a time long ago.

From the perspective of an organization that depends upon manual processes, the decision to freeze dependencies is understandable. Updating a dependency, especially a framework dependency like Spring Boot, could impact an entire application. If an entire application might be impacted by a change, then it needs a full regression test, and when that testing is manual, it is a time-consuming and expensive effort. However, in the attempt to resolve one issue (preventing bugs from being introduced into production), this created another: deploying applications with significantly out-of-date dependencies, which might contain critical security vulnerabilities (CVEs).

Focusing on the Right Issues


Manual regression testing, along with being very time-consuming, isn’t a very good way to verify the correctness of an application. Issues with manual regression testing include:

  1. Difficult to impossible to recreate some scenarios (e.g. system failure cases)
  2. Test cases not being executed correctly leading to false positives or false negatives
  3. Test cases not being executed at all

Automated testing can address these issues: automated tests execute much more rapidly, require very little manpower to run, are much more granular and flexible, and are auditable, so you can ensure they are being executed and are testing the intended behavior.

While it is nice to sing the praises of automated testing in the abstract, it would be good to see an example of how it can catch a non-trivial, non-obvious bug that occurred from updating a dependency. As luck would have it, I just so happen to have an example.

Catching an Upgrade Issue

When I was working on a recent article on wiring multiple datasources, I ran into a bit of a frustrating issue. To distill what happened to the essential points: while checking my work against existing articles on the same subject, I tried implementing a version of a solution another author had demonstrated. However, this solution didn’t work for me, even though the code looked the same. The gif below demonstrates the issue I ran into, as well as how to write a test that would catch it:

The code can be found on my GitHub. Note that the code from the gif above is in branches off master.

A Perfect and Imperfect Use Case

Almost any use case is going to have limits to its relevancy. The problem with this use case, in showing the value of automated testing when upgrading dependencies, is that the failure shows up at startup time. Upgrading to Spring Boot 2.x without updating the properties to use *.jdbc-url causes the application to fail at startup, as it is not able to properly configure an EntityManager. So the effect of this change would be fairly obvious.
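For reference, the property change in question looks roughly like this (a sketch; the `app.datasource` prefix and the H2 URL are hypothetical, and your own configuration prefix may differ):

```properties
# Spring Boot 1.x: a secondary datasource bound from configuration
# properties could be configured with the standard url property
app.datasource.url=jdbc:h2:mem:appdb

# Spring Boot 2.x: with HikariCP as the default connection pool, the same
# binding expects jdbc-url instead
app.datasource.jdbc-url=jdbc:h2:mem:appdb
```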

On the other hand, this is a perfect use case for demonstrating the usefulness of automated testing for several reasons. Let’s quickly step through some of them:

  1. The test is easy to write – The test case, as can be seen in the gif above, is pretty simple to write. Recreating the failure scenario is easy and can be done with an in-memory database, so the test is highly portable.
  2. The test covers other concerns – The test isn’t strictly about recreating a narrow error case (i.e., one you’d have to know about before writing the test). Database communication is important and is something that would require a test regardless. If a dependency upgrade introduced other subtle breaking changes in that area, they might be caught by a test case like this one as well.
  3. An automated test suite can find multiple issues simultaneously – This issue was found at startup. In a more substantial application, it is likely there would be other issues from doing a major version upgrade of a framework dependency like Spring Boot. Even if all the issues appeared at startup, which wouldn’t always be the case, it would be a grueling process to fix each issue sequentially as it comes up, restarting the application every time. A test suite could catch multiple issues in a single execution, allowing them to be fixed simultaneously.

Even with a good automated test suite, resolving the issues that come up when upgrading underlying dependencies can be a difficult process. It took me a while to track down the solution to my issue, which would have been the case regardless of how the issue was discovered.

Safe to Fail

To begin summarizing this article, I was listening to a recent episode of A Bootiful Podcast and this quote really stuck out to me (lightly edited):

You become safe to fail, you accept that no matter what you do there will be bugs… you’re just able to find them quickly and push the fix quickly

@olga_maciaszek

An important takeaway is that no automated test suite will entirely eliminate the risk of a bug making it to production. What an automated test suite does give you, though, is a safety net and the ability to respond rapidly to bugs when they do occur. It gives you confidence that it will generally find bugs in your code, and that when a bug inevitably slips through, you can respond to it quickly without introducing yet another bug into production. This protection applies to changes made within your code, as well as to changes in the dependencies your code relies upon.

Conclusion

Awareness of the importance of automation, and particularly automated testing, has been slowly increasing in the software development industry in recent years. I am pleasantly surprised when I ask during presentations how many people work at organizations that practice continuous delivery; usually between 25% and 50% of the room raise their hands.

Still, that leaves half or more of organizations that haven’t yet turned the corner on automation. I often find the best way to encourage change at an organization is to lay out the cost of staying with the status quo and how the proposed changes resolve these problems. If you’re at an organization that still relies heavily on manual processes, and you have been wanting to change this, hopefully this article helps build your case for embracing automation and automated testing.