
An opinionated overview of the strengths of some of the best deployment tools available today!

Introduction

Since the invention of agriculture roughly 12,000 years ago, humans have faced the challenge of sustainability. With the population of our planet having reached 8 billion and still growing, the engineers and designers of every field are being pushed to their limits to cope with the ever-increasing demand for services. As one can imagine, this demand is also reflected in the digital applications that we use every day.

Similarly to how mundane systems in our cities, such as sewage or water pipes, may become clogged under heavy usage, the same phenomenon can occur on the Web. Continuing with the plumbing analogy, one can imagine the internet as a series of tubes winding throughout the “virtual globe”. Some of these tubes are very large and can supply thousands of homes with internet access, while others can supply just one. That is the concept of a connection’s “bandwidth”.

However, a tube’s diameter is not the only factor that influences the flow of water: the tubes of the internet also feature twists, turns and spirals, which make it more difficult for information to pass through. Instead of a concrete physical turn that creates pressure, though, the “turns” in the internet’s tubes are applications. Instead of the tube continuing straight (a plain data transfer), the tube twists and turns (the processing of information, API calls, or other time-consuming procedures).

As such, the developers of today’s applications are faced with this challenge: create applications that handle information as efficiently as possible and require minimal bandwidth to function. Furthermore, in case of an error, these applications should be easy to swap out for something else. That surely sounds like a daunting task, so how is the industry managing to keep the lights on?

Step 1: Dockerization

The first item on the checklist that an application has to fulfil is to be disposable and replaceable as a unit; meaning that if one day a developer discovers a terrible bug in the application, they should be able to quickly deploy a new version without much fuss, effectively “replacing” the whole system with one that does not suffer from the same issue. The second item is the ability to replicate the environment of the system’s runtime easily. Let’s take the example of a banking loan system built on the Spring Framework in Kotlin, and list a few of its external dependencies:

  1. The JVM version installed on the machine that runs the application.
  2. The firewall rules of the machine that runs the application.
  3. The banking database, which is likely distributed and not always reachable at the same domain.
  4. The payment systems, which behave much like the database in this respect.

These are merely a few examples; the real dependencies encompass a much wider scope than the ones in the list above. Assuming we want a horizontally scalable application - meaning that our loan app will run on many machines, in parallel - these dependencies make our job as developers quite difficult. After all, we need to set up all of these requirements on many machines, and create different versions of the code that call the respective banking systems for each specific application instance.

In the past, developers had to do all of that manually, or at least in a very laborious way. Today, Docker is a fantastic solution to this problem. Docker packages the application into what it calls a container: a lightweight, isolated environment that behaves much like a dedicated virtual machine, but shares the host’s kernel, and whose sole purpose is to run the application in question. This container can be configured in many different ways to cater to the business needs of the application, through two specific files:

The Dockerfile

This file is a list of steps, written in the Dockerfile language, which describes how the application should be built and run inside the container. In simple terms, it tells Docker how the container should function internally. Let’s check out an example Dockerfile that will build and run our banking application:

# ---- BUILD - These rules specify how our application is built ----
# Use the Amazon Corretto 17-alpine image as the base image for the build stage.
FROM amazoncorretto:17-alpine AS build
# Set the working directory to /app.
WORKDIR /app
# Copy the entire project directory to the container's /app directory.
COPY . .
# Set the logging level environment variable to DEBUG.
ENV APP_LOGGING_LEVEL=DEBUG
# Build the project using the Gradle wrapper.
RUN ./gradlew build
# ---- RUN - These rules specify how our application is run ----
# Use the Amazon Corretto 17-alpine image as the base image for the run stage.
FROM amazoncorretto:17-alpine AS run
# Set the JAR_FILE argument to the location of the built JAR file.
ARG JAR_FILE=build/libs/banking-loan-app-1.0.0.jar
# Set the working directory to /banking-loan-app.
WORKDIR /banking-loan-app
# Copy the JAR file from the build stage to the /banking-loan-app directory.
COPY --from=build /app/${JAR_FILE} banking-loan-app.jar
# Set the DEBUG_API_OPT environment variable from the DEBUG_OPT build argument.
ARG DEBUG_OPT
ENV DEBUG_API_OPT=$DEBUG_OPT
# Start the Java application using the JAR file in the /banking-loan-app
# directory and any options passed through DEBUG_API_OPT.
CMD java $DEBUG_API_OPT -jar /banking-loan-app/banking-loan-app.jar

ℹ️ Translation: To “englishize” the Dockerfile: we are telling Docker to build the application in the container’s /app directory, and then to run the resulting JAR from /banking-loan-app. Running this Dockerfile yields an Image - a ready-to-use, built application that can be distributed and run.
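For completeness, here is how one might build an Image from that Dockerfile - a minimal sketch, assuming Docker is installed and the command runs from the project root; the tag and the debug option are illustrative:

# Build the image and tag it so we can refer to it later.
docker build -t banking-loan-app:1.0.0 .
# Or pass the DEBUG_OPT build argument declared in the run stage, e.g. to
# enable remote JVM debugging on port 5005.
docker build -t banking-loan-app:1.0.0 \
  --build-arg DEBUG_OPT="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005" .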

The Docker-Compose File

While the Dockerfile focuses on the internal rules of the container, the Docker-Compose file focuses on how the container communicates with the host system. Now that we have an image, we have to specify how to run it, so let’s explore the Docker-Compose file, written in YAML:

version: "3.8"
# Define a network named banking-loan-net using the bridge driver.
# The bridge driver creates a private network on the host machine
# that all attached containers share.
networks:
  banking-loan-net:
    driver: bridge
# Define the services to be used in the application.
services:
  # Define the banking-loan-backend service.
  banking-loan-backend:
    container_name: banking-loan-backend  # Set the container name to banking-loan-backend.
    build: .  # Build the image using the Dockerfile in the current directory.
    hostname: banking-loan-backend  # Set the hostname to banking-loan-backend.
    networks:
      - banking-loan-net  # Connect the service to the banking-loan-net network.
    expose:
      - 8080  # Make port 8080 reachable by other containers on the network.
    ports:
      - "8080:8080"  # Bind port 8080 of the container to port 8080 on the host.
    env_file:
      - .env  # Use the environment variables defined in the .env file.

ℹ️ Translation: To sum up this Docker-Compose file: here we declare how our container (listed under “services”) communicates with the host machine, namely through port 8080. With this configuration, the host machine can reach the container only through port 8080; every other port stays closed. This narrows the attack surface considerably, as our application can run a security filter on that single port and protect itself from unauthorised access.

Through this configuration we have addressed the first two dependency problems: the Java version and the firewall rules. For the last two, another trick up Docker’s sleeve is its ability to seamlessly configure containers on a case-by-case basis. In the last two lines of our Docker-Compose file, we tell Docker to use the environment variables declared in the .env file (short for environment file). This means that, for each host machine we deploy this application on, we can swap the database & payment system domains by editing a single text file. As such, we only need to deploy one parameterised version of the application, which reads the .env file and configures itself from it.
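As an illustration - the variable names here are invented for this example, although Spring Boot does map environment variables such as SPRING_DATASOURCE_URL onto its configuration automatically - a .env file for one particular deployment might look like this:

# .env - one per host machine; keep real credentials out of version control.
SPRING_DATASOURCE_URL=jdbc:postgresql://db.internal.example-bank.com:5432/loans
PAYMENT_SYSTEM_URL=https://payments.internal.example-bank.com
APP_LOGGING_LEVEL=INFO

With this file next to the Docker-Compose file, docker compose up --build -d builds the image and starts the container with those variables injected.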

Step 2: Kubernetes…ization?

With the Docker workhorse out of the way, we have successfully turned our banking loan application into a black box that we can adjust and replace on demand. So we release this application into a production environment and start accepting requests. Within minutes, the application crashes under the extreme load of loan requests.

The banking loan application crashed because, although we successfully streamlined the deployment & running process, we only ran one single instance of the application. For a proof of concept or a test environment that may be enough, but for a bank it is nowhere near sufficient. As such, one must run not just one, but multiple Docker containers at the same time. This is where Kubernetes comes in.

Kubernetes is a container orchestration platform, which means it can accomplish just what we want: running multiple containers in parallel - grouped into units that Kubernetes calls pods - while distributing incoming requests among them through a load balancer. It accomplishes this by splitting the job of deployment control and actual runtime between a Control Plane and Worker Nodes. The Control Plane decides where and when our loan application’s pods should run, while the Worker Nodes actually run the Dockerized application described above. The interesting part is that none of these components has to be on the same machine: the Control Plane can be in the UK, Worker Node #1 in mainland Europe, Worker Node #2 in the US, and so on. This combination of the Control Plane and the Worker Nodes is called a Kubernetes Cluster.
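Like Docker-Compose, Kubernetes is configured through YAML manifests. As a minimal sketch - the names, image tag and replica count are illustrative, not taken from a real cluster - a Deployment that runs three copies of our Docker image, plus a Service that load-balances requests across them, might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: banking-loan-backend
spec:
  replicas: 3  # Run three identical pods in parallel.
  selector:
    matchLabels:
      app: banking-loan-backend
  template:
    metadata:
      labels:
        app: banking-loan-backend
    spec:
      containers:
        - name: banking-loan-backend
          image: banking-loan-app:1.0.0  # The image we built with Docker.
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 500m  # Needed later for CPU-based autoscaling.
---
apiVersion: v1
kind: Service
metadata:
  name: banking-loan-backend
spec:
  type: LoadBalancer  # Spread incoming requests across the pods.
  selector:
    app: banking-loan-backend
  ports:
    - port: 80
      targetPort: 8080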

Figure: the Horizontal Pod Autoscaler instructing an RC / Deployment to scale, adjusting the number of running pods (Pod 1, Pod 2, … Pod N).
Kubernetes’ most relevant feature in our loan application’s situation is its Horizontal Pod Autoscaler. The Autoscaler watches metrics such as CPU usage (or custom metrics like request rate); when an incoming spike of web requests pushes those metrics past a configured threshold, it raises the desired replica count, and the Control Plane schedules new pods onto the Worker Nodes.
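As a sketch under the same assumptions - the thresholds are illustrative, and CPU-based scaling relies on the CPU request declared in the Deployment above - the matching autoscaling/v2 manifest might look like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: banking-loan-backend
spec:
  # Scale the Deployment defined earlier.
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: banking-loan-backend
  minReplicas: 2   # Never drop below two pods, for redundancy.
  maxReplicas: 20  # Cap the scale-out so a spike cannot exhaust the cluster.
  metrics:
    # Add pods when average CPU usage exceeds 70% of each pod's request.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70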

Step 3: Terraform

The combo of Docker & Kubernetes is a fabled one, and can be very reliable for even the most demanding applications with hundreds of pods. However, if one were running a very large banking system - say, Piraeus Bank - this setup would prove difficult to maintain in the long term. As one can imagine (and verify in the documentation), configuring a Kubernetes Cluster is time-consuming and sometimes very abstract. To mitigate this, HashiCorp created Terraform, a tool that provides Infrastructure as Code. It does exactly what it sounds like: it lets you deploy infrastructure through plain text directives.

⚠️ Warning: It must be emphasised that Terraform doesn’t directly improve the performance of the application. However, since we are running a loan application for Piraeus Bank, the deployment system should be reliable, and possibly use several cloud providers at the same time. Banking systems are important ones after all, and as such require resiliency and fallback options in case of failures.

Terraform creates this opportunity by integrating with providers such as Azure, Google Cloud and Amazon AWS. As such, Piraeus Bank can have three Kubernetes Clusters: one in each provider. If any of them fails - even two at the same time - the remaining system can allow banking to continue (albeit perhaps at degraded performance).
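As a rough sketch only - the module paths below are hypothetical local wrappers one would write around each provider's managed-Kubernetes resources (EKS, GKE and AKS respectively), each of which needs considerably more configuration in reality - the three-cluster setup might be declared in Terraform's HCL like this:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    google = {
      source = "hashicorp/google"
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

provider "google" {
  project = "piraeus-loans"  # Hypothetical project name.
  region  = "europe-west1"
}

provider "azurerm" {
  features {}
}

# One managed Kubernetes cluster per provider; each module is a hypothetical
# wrapper around that provider's cluster resources.
module "cluster_aws" {
  source       = "./modules/eks"
  cluster_name = "banking-loan-aws"
}

module "cluster_gcp" {
  source       = "./modules/gke"
  cluster_name = "banking-loan-gcp"
}

module "cluster_azure" {
  source       = "./modules/aks"
  cluster_name = "banking-loan-aks"
}

Running terraform apply then reconciles all three clouds against this single description, and losing one provider leaves the other two clusters serving loan requests.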

Final Thoughts

There exists a part of engineering that involves so much trial and error that one could consider it art: deploying scalable business systems is such a part. There is no single correct answer for a deployment at the scale of Piraeus Bank; engineers will always keep trying to minimise the bandwidth they consume and reduce the number of turns in the pipes.

However, without a proper deployment strategy, one can optimise an application all they want and will unfortunately still get poor results. It is important to keep in mind that sometimes one simply has to scale out, especially in the case of Piraeus Bank!

To conclude, I believe that the deployment of an application matters just as much, if not more, to its final performance as the application itself, and I hope this article can serve as an inspiration for getting accustomed to the industry standards used for deployments today.