Transforming Data Transformation in Java with Local Records

Data transformation is a common task for Java developers. Whether moving data from one datastore to another, such as in an ETL batch process, or retrieving data and sending it to the presentation layer, some amount of data transformation is often required.

While not difficult, performing even simple transforms in Java often means writing a lot of code. Developers need to create one or more classes with all the associated member fields, accessor methods, and implementations of hashCode(), equals(), toString(), and so on. In this article we are going to look at how the introduction of Records in Java 16 can make data transformation a lot easier.

Java Records

Records, after two rounds as a preview feature in Java 14 and 15, become a permanent feature in Java 16. Records are a new feature designed to address concerns relating to the definition of data carrier classes in Java, including the proper implementation of hashCode() and equals(), the handling of immutable data, and the serialization, de-serialization, and initialization of data carrier classes. In this article we will focus on how Records make the definition of a data carrier class more concise.

First, let’s understand the issues with defining a data carrier class in Java before Java 16. Before Records, creating a data carrier class would commonly look something like this:

public class Person {

	private long id;
	private String firstName;
	private String lastName;

	public Person() {
	}

	public Person(long id, String firstName, String lastName) { = id;
		this.firstName = firstName;
		this.lastName = lastName;
	}

	public long getId() {
		return id;
	}

	public String getFirstName() {
		return firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setId(long id) { = id;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	@Override
	public int hashCode() {
		final int prime = 31;
		int result = 1;
		result = prime * result + ((firstName == null) ? 0 : firstName.hashCode());
		result = prime * result + (int) (id ^ (id >>> 32));
		result = prime * result + ((lastName == null) ? 0 : lastName.hashCode());
		return result;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj)
			return true;
		if (obj == null)
			return false;
		if (getClass() != obj.getClass())
			return false;
		Person other = (Person) obj;
		if (firstName == null) {
			if (other.firstName != null)
				return false;
		} else if (!firstName.equals(other.firstName))
			return false;
		if (id !=
			return false;
		if (lastName == null) {
			if (other.lastName != null)
				return false;
		} else if (!lastName.equals(other.lastName))
			return false;
		return true;
	}

	@Override
	public String toString() {
		return "Person [id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + "]";
	}
}


Despite the only business-meaningful part of the code being the three member fields (id, firstName, lastName), it takes nearly 80 lines to define the class so it can be properly used in a Java application (which often requires being compatible with popular frameworks). With Records, the above can instead be declared in a single line:

public record Person(long id, String firstName, String lastName){}

Note: Records are shallowly immutable, which is a bit different from how many data carriers are currently defined in Java projects. Like the initial Person example, such classes typically include Getters and Setters to be compatible with frameworks. Frameworks are being updated to be compatible with Java 16, and thus Records, and you can view their status here:

When the above code is run through the Java compiler, the following is produced:

public final class com.bk.example.Person extends java.lang.Record {
  private final long id;
  private final java.lang.String firstName;
  private final java.lang.String lastName;
  public com.bk.example.Person(long, java.lang.String, java.lang.String);
  public final java.lang.String toString();
  public final int hashCode();
  public final boolean equals(java.lang.Object);
  public long id();
  public java.lang.String firstName();
  public java.lang.String lastName();
}

The benefit of being able to define a data carrier class in a line or two of code, where previously it would have taken dozens of lines, is pretty straightforward. Records provide a number of other benefits as well; if you would like to learn more about them, I would highly recommend these two episodes of the podcast Inside Java: “Record Classes” with Gavin Bierman and “Records Serialization” with Julia Boes and Chris Hegarty.

Note: If you are wondering why the compiled code doesn’t have “Getters” and “Setters”, nomenclature that comes from the JavaBeans standard, be sure to read this article that covers some of the design considerations around Records.
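The “shallowly immutable” point from the note above can be made concrete with a short sketch. The Team record and its fields here are my own illustration, not part of the article’s example code:

```java
import java.util.ArrayList;
import java.util.List;

// A record's fields cannot be reassigned, but a mutable object stored in a
// field can still be mutated through its own methods ("shallow" immutability)
record Team(String name, List<String> members) {}

class ShallowImmutabilityDemo {
	public static void main(String[] args) {
		Team team = new Team("Avengers", new ArrayList<>(List.of("Tony")));
		// = "X-Men";      // would not compile: record fields are final
		team.members().add("Bruce"); // compiles and runs: the List itself is mutable
		System.out.println(team);    // Team[name=Avengers, members=[Tony, Bruce]]
	}
}
```

If you need deep immutability, a common approach is to defensively copy mutable components in the record's constructor, for example wrapping the list with List.copyOf().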

Local Records

Like a normal Java class, a Java Record can also be defined within the body of a method. However, whereas defining a data carrier class within a method body was often impractical due to its impact on the readability of the code, Records impose no such liability. Let’s see how Records can be leveraged to make light data transformation easier.

In this simple code example I am printing to the console a list of Persons retrieved from a repository:

public class PersonService {

	private PersonRepo repo = new PersonRepo();

	public void printPersons() {
		repo.getAllPersons().stream().forEach(p -> System.out.println(p.toString()));
	}
}

The above code when executed would print out the following:

Person[id=1, firstName=Tony, lastName=Stark]
Person[id=2, firstName=Bruce, lastName=Banner]
Person[id=3, firstName=Sam, lastName=Wilson]
Person[id=4, firstName=Monica, lastName=Rambeau]
Person[id=5, firstName=Wanda, lastName=Maximoff]

When presenting information to users, it’s often not desirable to include meta information like id. Users typically wouldn’t be interested in such information, and I might want to keep it private for reasons ranging from security to having more flexibility when changing metadata values.

With Records I can easily define a new data carrier class in the body of printPersons() that handles the light transformation of stripping out the id field, as seen here:

public void printPersons() {
	record PersonView(String firstName, String lastName) {}
	repo.getAllPersons().stream().map(p -> new PersonView(p.firstName(), p.lastName())).forEach(pv -> System.out.println(pv.toString()));
}

Running the above code will produce the following output:

PersonView[firstName=Tony, lastName=Stark]
PersonView[firstName=Bruce, lastName=Banner]
PersonView[firstName=Sam, lastName=Wilson]
PersonView[firstName=Monica, lastName=Rambeau]
PersonView[firstName=Wanda, lastName=Maximoff]

This is good, but sometimes during data transformation there might be additional behavior we need, beyond simply moving around or stripping out fields. Luckily the default behavior of a Record class can also be easily overridden. If, for example, I wanted to provide even cleaner output, I can simply override the default toString() method with my own, like in this example:

public void printPersons() {
	record PersonView(String firstName, String lastName) {
		public String toString() {
			return firstName + " " + lastName;
		}
	}
	repo.getAllPersons().stream().map(p -> new PersonView(p.firstName(), p.lastName())).forEach(pv -> System.out.println(pv.toString()));
}

Which prints this to console:

Tony Stark
Bruce Banner
Sam Wilson
Monica Rambeau
Wanda Maximoff

Records allow for plenty of flexibility: equals(), hashCode(), and the accessor methods of a Record class can all be overridden if needed. Custom behavior can also be added to the constructor, such as checking for null values; the Java compiler will ensure all member fields of a Record are assigned a value and, if not, add an assignment operation automatically. Custom methods can be added as well. For example, a toJson() method could be defined for formatting the contents of a Record as a JSON message.
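As a sketch of that flexibility, here is a record with constructor validation and a custom method. The null check and the hand-rolled toJson() format are illustrative choices of mine, not part of the article’s example code:

```java
record Person(long id, String firstName, String lastName) {
	// Compact constructor: validation runs before the implicit field assignments
	Person {
		if (firstName == null || lastName == null) {
			throw new IllegalArgumentException("names must not be null");
		}
	}

	// A custom method, here a hand-rolled JSON rendering for illustration
	public String toJson() {
		return String.format("{\"id\": %d, \"firstName\": \"%s\", \"lastName\": \"%s\"}",
				id, firstName, lastName);
	}
}
```

Note the compact constructor has no parameter list; the compiler still assigns the record components after the validation block runs.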


Java 16 is scheduled to go GA on March 16th, and with it Records fully arrive in the Java ecosystem. In this article we saw how Records provide some significant ergonomic benefits to Java developers when performing the common task of transforming data.

If you would like to check out Java Records for yourself, along with the other new features coming in Java 16, you can download the Java 16 JDK here:

You can find the example code used in this article, along with instructions on how to run it, on my GitHub profile:

Three Podcasts to Get You Started on Kubernetes

Over the past couple of years I have become a big fan of podcasts. They are a great way to help pass the time while working out, doing chores around the house, or traveling, or to serve as general background filler.

Podcasts can be entertaining, like How Did This Get Made, which gives hilarious reviews and commentary on bombastic movies; informative, like the New York Times’ The Daily; or they can even help you learn a new language. I listen to a number of tech podcasts as well. For my role as a developer advocate, tech podcasts help keep me informed on trends and practices in the industry and even go in-depth on a specific technology or practice.

Kubernetes has become an incredibly popular topic in the years since it was introduced. However, in my experience it can be difficult to know where to get started with Kubernetes, as it covers a lot of different concerns. In this article we will look at three podcast episodes, each of which takes a different approach to covering Kubernetes: first a high-level overview of why Kubernetes was created and what problems it’s trying to address, second an episode that gets hands-on and technical with some of the core concepts of Kubernetes, and finally a podcast that gives a look at what it’s like to run Kubernetes in production. With that, let’s get started.


Getting a High Level Understanding of Kubernetes


My first podcast recommendation is my friend Josh Long‘s podcast A Bootiful Podcast. In an April 2020 episode Josh interviewed Joe Beda, one of the co-creators of Kubernetes.

This episode of A Bootiful Podcast does a great job of laying out why Kubernetes exists and the goals of the project. In the episode Josh and Joe talk about the history of Kubernetes, originally developed by Google and built off of lessons learned from developing Borg, which is the cluster manager Google developed and still largely uses internally.

If you have been hearing about Kubernetes and wondering if it makes sense to introduce at your organization, this would be a great place to start. You can listen to this episode of A Bootiful Podcast here:

Kubernetes Nuts & Bolts


If you want a more hands-on understanding of Kubernetes, then you should definitely check out Java Pub House. In an April 2020 episode, Freddy Guime and Bob Paulin dig into the core concepts and features of Kubernetes.

I have long enjoyed Java Pub House because Freddy and Bob do a great job of breaking down complex technical concepts into an easy-to-understand format. Their episode on Kubernetes was no different, as they cover containers, pods, services, networking within a Kubernetes cluster, and other key concepts. While I have been doing a crash course on Kubernetes over about the past two years, I still learned a number of new things from listening to this episode. So even if you already have some hands-on experience with Kubernetes, this episode of Java Pub House is still worth a listen.

You can listen to Java Pub House‘s episode on Kubernetes here on Apple Podcasts:

Kubernetes in the Wild


Understanding the high-level overview and history of Kubernetes is great, and getting hands-on can help you set up a proof-of-concept Kubernetes cluster, but nothing can match the day-to-day lived experience of running Kubernetes in production. That is what we get in a February 2020 episode of Software Defined Talk, in which Michael Coté interviewed Charles Lowell about his experience of working with Kubernetes in production.

I found this episode deeply interesting. As mentioned above, I have been learning about Kubernetes for about the past two years, and while I can see its potential, my practical experience of running Kubernetes on production workloads is limited. So hearing Charles talk about his experience of using Kubernetes to run production workloads was helpful. Coté does a great job of keeping the interview grounded for people who are still learning about Kubernetes themselves.

If you want to listen to this episode of Software Defined Talk you can check it out at this link on Apple Podcasts:


Podcasts can be a great way to learn about current trends and practices in software development. Technologies like Kubernetes are very complex and at times difficult to understand because they are trying to address many different problems: load balancing, ease of deployment, networking, etc. These three podcast episodes do a great job of explaining Kubernetes at three different levels. I would highly recommend checking out not only these episodes but also subscribing to the respective podcasts, as each has a number of great and informative episodes.

Capturing Desktop and Zoom Audio in OBS on macOS

Because of the COVID-19 pandemic, many in-person events like meetups and conferences have gone digital. We have done that with the Kansas City Java Users Group I help organize, and I have seen similar trends from fellow user group and event organizers.

In the push to go digital, many have turned to two popular tools, OBS and Zoom, for handling livestreams. For macOS users it is surprisingly difficult to capture desktop and Zoom audio. This article is a step-by-step guide for capturing desktop audio, and also audio from a Zoom call, in OBS.

Prerequisites and Priors

This article assumes you have OBS and Zoom already installed on your system. Additionally, I have asked some colleagues to run through these steps, and they have done so successfully, but their computers and mine are set up similarly to this:


Capturing Desktop Audio on macOS

Capturing desktop audio isn’t a built-in feature of macOS, so you will need to download a third-party tool to do this. I’m following the advice from this YouTube video. Here are the steps.

  1. Install iShowU Audio Capture. You should be presented with a screen that looks like the one below; for macOS Mojave or later, click the button to the right, then click the download button on the following page.
  2. When installing iShowU Audio Capture you will need to grant it permissions (read below if you are not prompted during install):
    Note: If you are not prompted during install, then hit “command + space” to bring up spotlight and search for “Security & Privacy”, you should see an image similar to the above.
    For more info check Shinywhitebox’s documentation here.
  3. Once the installation is complete, restart your computer

Configuring Output Device

With IShowUAudio installed, the next step is to configure an audio device. This will allow us to capture desktop audio, while also hearing that audio as well.

  1. Hit “command + space” to bring up the system spotlight and search for “Audio MIDI”
  2. Click the little “+” in the bottom left hand corner of the dialog box and select “Create Multi-Output Device”
  3. You will be presented with another dialog box that looks something like below; note the arrows and run through the steps after the picture
    1. Click the “master device” dropdown in the top center of the dialog and select “iShowU Audio Capture”
    2. Select “iShowU Audio Capture” as one of the audio devices
    3. Select any other appropriate output devices from this list for your setup; they will be your “pass through” so you can hear what OBS is capturing.
      Note: If you are planning on using a bluetooth headset and its built-in microphone, read the “Known Problems” section

      1. Uncheck “Drift Correction” for all devices if selected
      2. (Optional) Click on the “Multi-Output Device” label in the device list on the left hand side of the dialog box and give it a more memorable name; I used: OBS Audio Capture
  4. Once you have run through the above steps your screen should look something like this:
  5. Hit “command + space” to bring up spotlight and search for “Sound”
  6. Click “Output”
  7. Select the Multi-Output Device we just created (i.e. “OBS Audio Capture”)

Configure OBS to Capture Desktop Audio

We will now need to go into OBS to configure it to use the audio device we just setup to capture audio. With OBS open run through the following steps:

  1. Click on “Settings” in the lower left hand side of the screen
  2. Click on “Audio” in the dialog pop-up
  3. Click on the dropdown for Mic/Auxiliary Audio 2 (or some other free option) and select “iShowU Audio Capture”
  4. Click the “+” button under Sources for a scene
  5. Select “Audio Input Capture”
  6. Give a descriptive name for the audio source (e.g. “Desktop Audio”)
  7. Select “iShowU Audio Capture” as the device
  8. You should now be capturing desktop audio; try playing a video or music to make sure sound is being captured (the sound bar should move)
    Note: See the “Known Problems” section below if you are not hearing any sound; if nothing is being captured, run through the previous two sections again to make sure you did everything right

Configure Zoom to Allow OBS to Capture Its Audio

You will need to run through a couple of steps to capture audio from a Zoom call. If you haven’t already, start Zoom.

  1. Make sure Zoom is your active program and open its preferences, top left of the desktop
  2. Select “Audio” on the right side of the dialog box
  3. Open the speaker dropdown and select the multi-output device created earlier (e.g. OBS Audio Capture)
  4. Click “Test Speaker” and verify in OBS that audio is being captured

Known Problems

Below are some common problems/issues I have run into:

  • OBS is capturing audio, but I’m not hearing anything – If OBS is capturing audio but you can’t hear anything coming from your computer, one of a few things could be the issue:
    1. Make sure the “pass through” audio device you configured during the “multi-output device” section is what you are using to listen to audio
    2. Make sure your computer output is going to the “multi-output device” you set up earlier (command + space, search for “Sound”, and select Output)
  • My computer audio is messed up when not using OBS – When not using OBS, you probably want to use a different audio setup, e.g. your computer’s speakers. Open up Sound in system preferences (command + space) and under “Output” select the preferred device for output
  • Bluetooth headsets – Using the integrated microphone on bluetooth headsets seems to create a feedback loop. This won’t affect the audio being captured by OBS, but it is distracting/disorienting.


Livestreaming is a new world for many of us as we deal with the ramifications of quarantines and social distancing brought on in response to COVID-19. Hopefully this guide addresses an issue I quickly ran into when trying to use OBS on macOS. Please leave a comment or reach out to me on Twitter if you have any questions or feedback about this article.

Sources for configuring iShowU Audio Capture and the Multi-Output Device:



Spring Bootcamp – The REST of It

In the second edition of the Spring Bootcamp series, we will continue exploring how to build a web service following REST principles. In the first article we created a few simple GET endpoints; in this article we build out an API that uses the rest of the major HTTP verbs: POST, PUT, and DELETE.

In this article we will also look at using exceptions to control application flow, why you should use constructors for dependency injection, and also get a better understanding of model view controller application architecture and the benefits of following it.


Speaking Proper REST

I touched on REST briefly in my previous article. Let’s continue exploring REST in this article, by covering some of its key concepts.

Nouns and Verbs

Two key concepts within REST are “nouns” and “verbs”. Within REST, nouns refer to the resources a web service has domain over. Examples could be orders, accounts, customers, or, in the case of the code example for this article, User.

Verbs within the context of REST refer to HTTP methods. There are nine HTTP methods in total, but five that actually relate to acting on a noun: GET, POST, PUT, PATCH, and DELETE.

GET – Operation for retrieving a resource.

POST – Operation for creating a resource.

PUT – Operation for updating a resource.

DELETE – Operation for deleting a resource.

PATCH – Operation for partially updating a resource.

REST Endpoint Semantics

The “nouns” and “verbs” create very specific semantics around how a REST API should look. The “nouns” form the URL of endpoint(s), with the “verb” being the HTTP method. The API for the User service we will be creating will look like this:

GET: /api/v1/Users: Returns all Users

GET: /api/v1/Users/{id}: Retrieve a specific User

POST: /api/v1/Users: Create a new User

PUT: /api/v1/Users/{id}: Update a User

DELETE: /api/v1/Users/{id}: Delete a User

Following a properly RESTful pattern allows for a discoverable API and a consistent experience for clients/users who are familiar with REST.

Note: As covered in the previous article, the /api/v1 portion of the endpoint is part of general good API practices, not related to REST.

Safe and Idempotent

When creating a REST API it is also important to keep in mind the concepts of safe and idempotent. Safe means a request will not change the state of the resource. Idempotent means running the same request one or more times will provide the same result. Below is a chart laying out how the five HTTP methods relate to these two concepts:

GET – Safe: Yes; Idempotent: Yes

POST – Safe: No; Idempotent: No

PUT – Safe: No; Idempotent: Yes

DELETE – Safe: No; Idempotent: Yes

PATCH – Safe: No; Idempotent: No

These are the expected behaviors when using these HTTP methods, and it is important that a service implementing a RESTful API follows these expectations. If executing a GET operation leads to a state change for a resource, this will almost certainly result in unexpected behavior for both the owner of the service and the client(s). Similarly, a PUT operation that gives different results each time it is executed will also be problematic.

Safe for the Resource, Not the System

A final key point on this: safe and idempotent relate only to the resource being acted on. State changes can still occur within the service, for example collecting metrics and logging activity about a request. An easy way to conceptualize this is viewing a video on YouTube. YouTube will want to collect metrics about what you are viewing, but your viewing a video shouldn’t change the contents (i.e. the state) of the video itself.
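To make the idempotency distinction concrete, here is a toy in-memory sketch. The UserStore class and its methods are hypothetical, not part of this article’s User service:

```java
import java.util.HashMap;
import java.util.Map;

// Toy store illustrating idempotency: a POST-style create vs a PUT-style upsert
class UserStore {
	private final Map<Long, String> users = new HashMap<>();
	private long nextId = 1;

	// POST-like: NOT idempotent, every call creates a brand new resource
	long create(String name) {
		long id = nextId++;
		users.put(id, name);
		return id;
	}

	// PUT-like: idempotent, repeating the call leaves the store in the same state
	void update(long id, String name) {
		users.put(id, name);
	}

	int size() {
		return users.size();
	}
}
```

Calling create("Tony") twice leaves two entries in the store, while calling update(1, "Tony") twice leaves exactly the same state as calling it once.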

Writing Proper REST

With a better understanding of some of the key REST concepts, let’s see what they look like in practice. Above we covered five of the HTTP methods but, as mentioned in the intro, we will be implementing only four of them (GET, POST, PUT, and DELETE), as they map closely to the Create, Read, Update, Delete (CRUD) concepts, which will be covered in more detail in a future article on persisting to a database.

In the first article we used @GetMapping to create GET endpoints. Spring similarly offers @PostMapping, @PutMapping, and @DeleteMapping for creating the related types of endpoints. Below is the code for a UserController which defines endpoints for retrieving all users, findAll(), looking up a specific user by id, findUser(), creating a new user, createUser(), updating an existing user, updateUser(), and deleting a user, deleteUser():

@RestController
@RequestMapping("/api/v1/users")
public class UserController {
	private final UserService service;
	public UserController(UserService service) {
		this.service = service;
	}
	@GetMapping
	public List<User> findAll() {
		return service.findAll();
	}
	@GetMapping("/{userId}")
	public User findUser(@PathVariable long userId) {
		return service.findUser(userId);
	}
	@PostMapping
	public ResponseEntity<User> createUser(@RequestBody User user) {
		User createdUser = service.createUser(user);
		return ResponseEntity.created(URI.create(String.format("/api/v1/users/%d", createdUser.getId())))
				.body(createdUser);
	}
	@PutMapping("/{userId}")
	public User updateUser(@PathVariable long userId, @RequestBody User user) {
		return service.updateUser(userId, user);
	}
	@DeleteMapping("/{userId}")
	public ResponseEntity<Void> deleteUser(@PathVariable long userId) {
		service.deleteUser(userId);
		return ResponseEntity.ok().build();
	}
	@ExceptionHandler(ClientException.class)
	public ResponseEntity<String> clientError(ClientException e) {
		return ResponseEntity.badRequest().body(e.getMessage());
	}
	@ExceptionHandler(NotFoundException.class)
	public ResponseEntity<String> resourceNotFound(NotFoundException e) {
		return ResponseEntity.notFound().build();
	}
}


Behind the controller is the UserService, which handles the actual business logic, limited as it is, for the web service. In this example, for “persistence” I am simply using an ArrayList. Note the usage of exceptions in the service class, which I will touch on in more detail below.

public class UserService {
	private List<User> users = new ArrayList<>();
	private static final Random ID_GENERATOR = new Random();
	public User findUser(long userId) {
		for (User user : users) {
			if (user.getId().equals(Long.valueOf(userId))) {
				return user;
			}
		}
		throw new NotFoundException();
		//throw new ClientException(String.format("User id: %d not found!", userId));
	}
	public User createUser(User user) {
		user.setId(ID_GENERATOR.nextLong());
		users.add(user);
		return user;
	}
	public User updateUser(long userId, User user) {
		// User equals looks only at the id field which is why this works despite
		// looking weird
		if (users.contains(user)) {
			users.set(users.indexOf(user), user);
			return user;
		}
		throw new ClientException(String.format("User id: %d not found!", user.getId()));
	}
	public void deleteUser(long userId) {
		Optional<User> foundUser = -> u.getId() == userId).findFirst();
		if (foundUser.isPresent()) {
			users.remove(foundUser.get());
			return;
		}
		throw new ClientException(String.format("User id: %d not found!", userId));
	}
	public List<User> findAll() {
		return users;
	}
}


The code is available on my GitHub repo and you can run it locally to see how it works.

Understanding the Benefits of Model, View, Controller and Separation of Concerns

Model, View, Controller (MVC) is a popular architecture to follow when building a web service, at least in theory. I’ve seen, and have built, web services where the lines between the model, view, and controller became decidedly blurred. Let’s review the MVC architectural pattern, where developers often go wrong when implementing MVC, and why following MVC architecture matters when building a web service.

MVC Explained

MVC is an architectural pattern that separates a project based on three distinct concerns:

Controller – This is the interface the user/client interacts with to use the service. In the above code this would be represented by the UserController class.

Model – The model is the real “meat” of a service. This is where any business processing, persistence, etc. occurs. This is represented by the UserService class.

View – The view is what the user sees. When building a REST API this is largely handled invisibly by Spring, which by default converts returned objects to JSON.

The Wikipedia article on MVC provides a visualization of the above:


Why Good (MVC) Architecture Matters

As the lines between MVC start to blur it can become difficult for a developer to know where to implement new requirements, which sometimes can lead to requirements being accidentally, or even intentionally, implemented in multiple areas. As these issues build up, it can become increasingly difficult to test and maintain an application.

While the User service we built in this article is very simple, the UserController does represent the level of concern a controller should contain even in a more complex service. The controller should primarily be concerned with passing values to a service layer and interpreting the return from the service layer to present back to the user. Inspecting and manipulating the values in a request is a smell that you might be deviating from MVC in a meaningful way.

We will be exploring automated testing in the next article, where we will see the practical benefits of following good architectural practices.

Exceptional Control

Early in my career I was often strongly advised against using exceptions for control flow. Exceptions should be reserved for exceptional conditions: unexpected nulls, failure to connect to a downstream service, incorrect value types, etc. Errors relating to business reasons should be handled with normal application flows: if/else statements, setting a flag value, and so on.

There are reasons to be cautious when using exceptions to handle application flow; generating a stacktrace, which happens when throwing an exception, is expensive. However, using exceptions for control flow can also make code architecturally cleaner.
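As an aside, if you do throw exceptions on hot paths, one well-known mitigation for the stack trace cost is to skip capturing it entirely. A sketch, not something the article’s code does:

```java
// A "flow control" exception that skips the expensive stack trace capture.
// Since Java 7, Throwable also offers a protected constructor with a
// writableStackTrace flag that achieves the same thing.
class ResourceNotFoundException extends RuntimeException {
	@Override
	public synchronized Throwable fillInStackTrace() {
		return this; // skip the expensive stack walk
	}
}
```

The trade-off is that such an exception carries no stack trace at all, so it should only be used where the throw site is unambiguous from the exception type and message.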

In UserService, instead of setting a hasError field on User, I throw an exception when a validation check fails, in this case when a client sends a user id that doesn’t match any existing user. I then make use of Spring’s @ExceptionHandler functionality to generate an appropriate response for the client. As seen in UserController, multiple methods can be annotated with @ExceptionHandler, each handling a different exception. This allows for a clean way of handling different error responses:

@ExceptionHandler(ClientException.class)
public ResponseEntity<String> clientError(ClientException e) {
	return ResponseEntity.badRequest().body(e.getMessage());
}

@ExceptionHandler(NotFoundException.class)
public ResponseEntity<String> resourceNotFound(NotFoundException e) {
	return ResponseEntity.notFound().build();
}

To Return 404 or 400 When a Resource Doesn’t Exist?

In UserService I implement two ways of handling what is the same problem: a client sending an id for a user that doesn’t exist. Going by proper REST guidelines, a 404 should be returned. The potential issue is that a 404 can be ambiguous: was it returned because the desired resource doesn’t exist, or because the wrong endpoint is being used?

As mentioned, by REST guidelines the correct choice is clear, a 404, but it may not be the right answer in every use case. The important thing is to clearly document the expected behavior when looking up a non-existent resource and to be consistent across your service(s).

Constructor vs Field Dependency Injection

For a long time many Spring developers had a habit of using field dependency injection. If you were to go into many older Spring projects, including many I wrote myself, you’d see classes that looked something like this:

public class ClassA {

	@Autowired
	private ServiceA serviceA;

	@Autowired
	private ServiceB serviceB;

	//the rest of the class
}

In the above code snippet, the members serviceA and serviceB are supplied via field dependency injection. Configuring dependency injection this way is problematic for two major reasons:

  1. It makes testing more difficult – In order to test the above class you must instantiate the Spring application context, which slows down test execution and generally increases test complexity.
  2. It can make it difficult to know a class’ dependencies – Injecting via the constructor creates a kind of contract defining a class’ dependencies. Field injection does not create such a contract, which can lead to tests breaking in confusing ways, or code breaking for difficult-to-understand reasons, when a new field requiring dependency injection is added.

A common critique of Spring is that it’s too “magic”, and a lot of this magic traces back to a reliance on field injection in Spring’s earlier days. To address this, along with updating documentation and code examples to encourage constructor dependency injection, starting in Spring Framework 4.3 (Spring Boot 1.5), if a class has only a single constructor, Spring will automatically use that constructor for dependency injection. This removes the need to annotate that constructor with @Autowired, and is the Spring team subtly indicating the preferred way of handling dependency injection.
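The testing benefit is easiest to see in a small, Spring-free sketch. The names below (ServiceA, ServiceB, GreetingService) are hypothetical stand-ins: because the dependencies arrive through the single constructor, a unit test can pass in stubs directly, with no application context needed, while Spring would autowire that same constructor at runtime.

```java
// Hypothetical collaborator interfaces, one abstract method each so a
// test can stub them with lambdas.
interface ServiceA { String greet(); }
interface ServiceB { String name(); }

// A class using constructor injection. With a single constructor,
// Spring 4.3+ injects ServiceA and ServiceB automatically, no @Autowired.
class GreetingService {
	private final ServiceA serviceA;
	private final ServiceB serviceB;

	GreetingService(ServiceA serviceA, ServiceB serviceB) {
		this.serviceA = serviceA;
		this.serviceB = serviceB;
	}

	String greeting() {
		return serviceA.greet() + ", " + serviceB.name() + "!";
	}
}

public class ConstructorInjectionDemo {
	public static void main(String[] args) {
		// A "test" needs no Spring at all: just pass stub implementations.
		GreetingService service =
				new GreetingService(() -> "Hello", () -> "World");
		System.out.println(service.greeting()); // Hello, World!
	}
}
```

Note also that the fields can now be final, something field injection cannot offer.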


In the first two articles of this series we learned some good practices for building a RESTful API using Spring Boot. In the next article we will take our first steps into the world of automated testing, one of my favorite subjects!

The code in this article is available on my GitHub repo.

Spring Bootcamp – GETting Started

I was recently listening to the Arrested DevOps podcast; in the episode on Making DevOps Beginner Friendly, guest Laura Santamaria talked about the importance of creating learning paths. A learning path, as the name suggests, is a series of articles or guides that walk someone through how to use a technology or practice. Learning paths differ from normal blog articles, like the ones I have often written, which cover how to accomplish a very specific goal in isolation.

In the decade I have been working with the Spring Framework in general, and the five years I have specifically worked with Spring Boot, I have learned a lot: what to do, what not to do, and in some cases the why behind those answers. With many people working from home in response to the COVID-19 outbreak, it seems an opportune time to go back to the basics.

In this series we will do a slow burn through Spring Boot. Each article will be structured around the steps to complete a common task, but will take the time to explain what exactly the code is doing, what is happening in the background, as well as some of the why/best practices behind the tasks. The goal isn’t necessarily to break new ground in what Spring Boot can do, but to build a more well-rounded understanding of Spring Boot.

In this first article of the series we will initialize a new Spring Boot project and create a couple of simple HTTP GET endpoints. So with that…


Initializing a Spring Boot Project


When starting a new Spring Boot project, one of the best places to go is the Spring Initializr. It provides an interface for defining a project’s metadata as well as the ability to easily bring in many commonly used dependencies that should all be compatible with one another. Below demonstrates how to quickly initialize a project:

Note: If you are following along with this article you should bring in the spring-web and spring-boot-devtools dependencies.

Building Web APIs with Spring Boot

For many Java developers, a big part of their day is spent building and maintaining applications that service a Web API. Spring Boot makes building and maintaining these applications really easy, which is a big reason why it has become so popular in the Java world.

After importing a new Spring Boot application into your preferred IDE, we can have an accessible endpoint with just these few lines of code:

@RestController
@RequestMapping("/api/v1/hello")
public class HelloSpringController {

	@GetMapping
	public String helloWorld() {
		return "Hello World!";
	}
}

Once added, starting the Spring Boot application should result in “Hello World!” being printed when you go to: http://localhost:8080/api/v1/hello.

Let’s look at the key elements from the above:

@RestController: This annotation marks to Spring that this class is a web controller, a class that serves as the interface to the Web for interacting with the internal application.

@RequestMapping("/api/v1/hello"): This annotation allows a developer to define the base path for the entire controller. All endpoints defined in this controller will be pre-fixed with /api/v1/hello.

@GetMapping: This annotation defines that the method helloWorld can be accessed as a HTTP GET.

With “Hello World” working, the second task when working with a new language or framework is to take in some user input to create a message. With a GET endpoint there are three ways of accepting input from a client: via the URL path, as query parameters, and as request headers. Let’s look at how to reference values from each below.

Retrieve Values from the URL Path

@GetMapping("/{message}")
public String helloMessage(@PathVariable String message) {
	return String.format("Hello, %s!", message);
}


To retrieve values from the URL path, in the @GetMapping you will need to define a variable in enclosing braces, like above with {message}. In the arguments of the method, an argument must be annotated with @PathVariable; if the name of the argument is the same as the variable in the definition of @GetMapping, Spring will automatically map it. @PathVariable has three fields:

name: Allows for manually mapping a URL path variable to a method argument.

required: Boolean for if the path value is required. Defaults to true.

value: alias for name.

Retrieve Values from the URL Query

public String helloQueryMessage(@RequestParam String firstName, @RequestParam String lastName) {
	return String.format("Hello %s %s!", firstName, lastName);
}


Values can easily be retrieved from the query portion of a URL, the section of the URL after the “?”, e.g.: ?firstName=Billy&lastName=Korando. Spring by default will attempt to map query variables to the names of arguments in the method, so in the example URL query firstName and lastName will automatically map to the arguments firstName and lastName. @RequestParam has four fields:

name: Allows for manually mapping a query value to a method argument.

required: Boolean for if the query value is required. Defaults to true. An HTTP 400 is returned if a required value is not provided.

defaultValue: A default value for when the parameter is not provided. Will set required to false.

value: alias for name.

Retrieve Values from the Request Header

public String welcomeUser(@RequestHeader String user) {
	return String.format("Welcome %s!", user);
}


Retrieving values from a request header works very similarly to retrieving them from the URL query. Like with @RequestParam, @RequestHeader will automatically map the method argument name to the name of a header value. @RequestHeader has four fields:

name: Allows for manually mapping a header value to a method argument.

required: Boolean for if the header value is required. Defaults to true. An HTTP 400 is returned if a required value is not provided.

defaultValue: A default value for when the parameter is not provided. Will set required to false.

value: alias for name.

String.format() or String Concatenation

When building a String in Java, many developers use concatenation like this:

"A message with a variable: " + var1 + " and another variable: " + var2 + " and more...";

Constructing a String this way can become difficult to read, and also a formatting nightmare as the code is constantly changed because of slightly different formatting rules. When building a String it can be useful to consider using String.format() instead, as demonstrated above. Readability can be a bit easier, and there are a number of pre-defined ways for printing things like dates available. For more information on how to use String.format() check out the official Javadoc: 8, 11, 14
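To make the comparison concrete, here is a small runnable sketch (the variable names and values are just for illustration) building the same message both ways:

```java
public class FormatDemo {
	public static void main(String[] args) {
		String var1 = "alpha";
		int var2 = 42;

		// Concatenation: spaces and plus signs are easy to misplace as this grows
		String concat = "A message with a variable: " + var1
				+ " and another variable: " + var2;

		// String.format(): the template reads as one piece of text
		String formatted = String.format(
				"A message with a variable: %s and another variable: %d",
				var1, var2);

		System.out.println(concat.equals(formatted)); // true
	}
}
```

The format string keeps the fixed text in one place, so reordering or adding variables means editing the template rather than rebalancing a chain of plus signs.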

Convention over Configuration

I am a longtime Spring user; my first experience with Spring was in 2010, using then Spring 2.5. While Spring was a significant improvement over frameworks I had used prior, initializing a new Spring project was still a difficult and time-consuming process. Getting a static endpoint running, as we have done in this article, could take hours, even days, if starting truly from scratch.

We were able to accomplish in minutes with Spring Boot what took hours before, because Spring Boot uses a pattern called convention over configuration. In short, Spring Boot has a number of default opinions, such as using an embedded Apache Tomcat server running on port 8080. Many of these opinions however can easily be changed. If we needed to run our Spring Boot application on a different port, we can just set server.port. Using a different application server can be as easy as making a couple small changes to our build file.
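For example, overriding the default port is a one-line entry in src/main/resources/ (9090 here is just an illustrative value):

```properties
# Override Spring Boot's default embedded-server port of 8080
server.port=9090
```

No other code changes are needed; on the next start the embedded server listens on the new port.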

Convention over configuration allows developers to focus on key business concerns, because in many cases using embedded Apache Tomcat and running on port 8080 is enough, especially in an increasingly containerized world. Understanding Spring Boot’s default opinions, and how to change them, will be a key element throughout this series, because there are definitely right and wrong ways of changing them.

Restart Revolution

As an application is being built there is often a need to rapidly iterate. This means rebuilding and restarting an application frequently. While the steps to rebuild and restart an application aren’t difficult, performing them can disrupt your “flow”. To address this the Spring team developed Spring Boot devtools. Spring Boot devtools provides two key features: automatic restart and, with browser extensions, live reload. Here is Spring Boot devtools in action:

To use Spring Boot devtools in your project, you will need to add it as a dependency in your build file like this:
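The exact snippet depends on your build tool; assuming a Gradle build with the Spring Boot plugin, it would look something like this (for Maven, the equivalent is a spring-boot-devtools dependency marked optional):

```groovy
dependencies {
	// developmentOnly keeps devtools out of production builds and fat JARs
	developmentOnly("org.springframework.boot:spring-boot-devtools")
}
```

Using the developmentOnly (or optional) scope matters: it prevents devtools from being transitively applied to other projects that depend on yours.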


Be sure to check out the user guides for more information on how to use Spring Boot devtools, including how to include or exclude additional files, use it within a production system, use it on a remote system, and more.

Proper REST and API Best Practices

Like the code in a project itself, the usability and longterm maintainability of an API depends significantly on how well it is designed. Let’s review a few ways to improve the design of an API.

Version Your API

Probably the first, and also one of the easiest, ways to improve the design of an API is to include the API version in the URL. In the examples above this was done with v1. Versioning an API allows it to more easily evolve over time as business and client needs change. When breaking changes are introduced, they can be included in a new version of the API, e.g. v2, which is much easier for clients to migrate to than forcing a hard and complicated switch if the same API endpoints are used.

Follow REST When Practical

Representational State Transfer, or REST, has become a popular architecture to follow when designing Web based APIs. REST is built upon the HTTP protocol, and while there are legitimate critiques that it might not always work well in every business case, there are a few good elements to follow, such as using the appropriate HTTP verb for the behavior of an endpoint: a GET for retrieving data, a POST for creating new resources, and a DELETE for when a resource should be deleted.

Additionally, proper usage of HTTP status codes can be helpful as well: an HTTP 200 should be returned only when a request is successful, a 400 should be returned, along with an appropriate message, when the client sends invalid or bad data, and a 404 is appropriate when a client requests a non-existent resource.

Fully following all of REST might not be possible or practical in all use cases, but following some of the key elements above can greatly improve the usability and maintainability of an API.


Spring Boot has been a revelation for the Spring developer community, allowing developers to quickly build new applications while focusing on designing business-valuable features for their organizations. As touched on in this article, there is also a lot of subtlety to using Spring Boot. Spring Boot is easy to get started with, but can take a lot to “master”; even after five years I am still learning new things all the time. In this series we will continue to explore how to use Spring Boot to its full potential.

The code examples used in this article can be found in my GitHub.