July 28, 2023 ・ Basics
Kubernetes and microservices: An introduction
Before microservices, the standard way of developing applications was the monolithic architecture. A monolith is a single, unified entity in which every aspect of an application is woven together like an intricate tapestry. For example, in an online shop application, the code for all of its parts (user authentication, shopping cart, product catalog, sales campaigns, notifications, and so on) lives in one code base as part of one monolithic application. Everything is developed, deployed, and scaled as one unit. That means the application must be written in a single language, with one technology stack and a single runtime. If different teams work on different parts of the application, they must coordinate to ensure they don't affect each other's work. And if developers change the code for the payment functionality, they need to build the whole application and deploy it as one package; you can't update and deploy only the payment changes separately.
This was the standard way of developing applications, but as they grew in size and complexity, monoliths led to several challenges. First of all, coordination between teams became more difficult because the code base was much larger and the parts of the application were more tangled into each other.
Also, if you suddenly had a usage spike in the shopping cart, for example around the holidays, and wanted to scale only that part of the application, you couldn't; you had to scale the whole application. This, in turn, meant higher infrastructure costs and less flexibility in scaling your application up and down.
Another major issue with monolithic applications is that the release process takes longer: for a change in any part of the application, in any feature, you need to test and build the whole application to deploy that change.
And the answer was microservices architecture.
With microservices, we break down the application into multiple smaller applications, so several small or micro applications make up this one big application. But now we have a couple of important questions when creating a microservices architecture.
First, how do we decide how to break down the application? What code goes where, and how many such micro applications or microservices do we create? How big or small should these microservices be? And finally, how do these services then talk to each other?
The best practice is to break down the application into components or microservices based on business functionalities, not technical functionalities. The microservices of an online shop application will be products, shopping carts, user accounts, checkout, and so on, because all these are business features. In terms of size, each microservice must do just one isolated thing. You shouldn't have a single microservice responsible for both shopping cart logic and checkout; always strive to keep one service doing one specific job. A significant characteristic of each microservice is that it should be self-contained and independent.
It means each service must be able to be developed, deployed, and scaled separately without any tight dependencies on any other services, even though they are part of the same application.
And this is called loose coupling. With this best practice approach, if you change something in the payment service, you will only build and deploy the payment service; nothing else will be affected. This also means each service has its own version, independent of the others. So if I release one service, I don't need to release any other service, and the release cycles become completely independent.
How do Microservices communicate with each other?
If the services are isolated and self-contained, how do they connect? The payment service will need something from the user account to process the payment, or the checkout service will need something from the shopping cart.
A very common way for microservices to communicate is API calls. Each service exposes an endpoint on which it accepts requests from other services, and services talk to each other by sending HTTP requests to these endpoints. This is synchronous communication: one service sends a request to another and waits for the response. The user account service can send an HTTP request to the payment service on its API endpoint, and vice versa.
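To make the synchronous flow concrete, here is a minimal runnable sketch in Python. The "payment" service, its `/payment/status` endpoint, and the response body are invented for this example, and a real setup would run the two services as separate processes rather than one script with a background thread:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for a "payment" microservice exposing one API endpoint.
# The service name and endpoint path are illustrative, not prescribed.
class PaymentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/payment/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), PaymentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "user account" service sends a request and BLOCKS until the reply
# arrives: that waiting is what makes the communication synchronous.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/payment/status") as resp:
    reply = json.loads(resp.read())

print(reply["status"])
server.shutdown()
```

Note that the caller is stuck for as long as the payment service takes to answer, which is exactly the coupling that the asynchronous option below avoids.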
Another common way for microservices to communicate is asynchronously, using a message broker. Here, services first send messages to an intermediary message service, or broker, such as RabbitMQ, and the broker then forwards each message to the respective service. So the user account service sends the broker a message saying, "please pass this on to the payment service," and the message broker forwards that message to the payment service.
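The broker pattern can be sketched in a few lines of Python. This is an in-process stand-in using a dictionary of queues; a real broker like RabbitMQ runs as a separate server with its own protocol, so only the flow is mirrored here: the sender hands the message to the broker and moves on without waiting for the receiver.

```python
import queue

# Toy message broker: one named queue per destination service.
# Service names and message fields below are made up for the example.
class Broker:
    def __init__(self):
        self.queues = {}

    def publish(self, service_name, message):
        # Route the message to the named service's queue.
        self.queues.setdefault(service_name, queue.Queue()).put(message)

    def consume(self, service_name):
        # The receiving service pulls messages when it is ready.
        return self.queues.setdefault(service_name, queue.Queue()).get_nowait()

broker = Broker()

# The user account service publishes and immediately continues;
# it never talks to the payment service directly (asynchronous).
broker.publish("payment", {"order_id": 42, "amount": 19.99})

# Later, the payment service consumes the message at its own pace.
msg = broker.consume("payment")
print(msg["order_id"])
```

Because the broker buffers messages, the payment service can even be down when the message is sent and still receive it once it recovers, which is a key resilience advantage over direct HTTP calls.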
So these are different communication options. Since the services are all isolated and talk to each other through API calls or a message broker, you can even develop each service in a different programming language. Teams can independently select their tech stack and develop their service without influencing, or being influenced by, other service teams. This is the most important advantage of microservices architecture compared to the monolith.
Downsides of Microservices
Every rose has its thorns. The microservices paradigm brought with it a host of complexities and challenges. Configuring communication between services became a delicate dance, with the potential for missteps leading to unexpected results.
A microservice may be down or unhealthy and not responding while another service sends requests to its API expecting a response, in which case you may get unexpected results. Also, with microservices deployed and scaled separately, it can become difficult to keep an overview and figure out which service is down when something in the application isn't working properly. Fortunately, there are various tools that make all of this easier.
Please welcome: Kubernetes.
The most popular one, which you probably already know, is Kubernetes, a perfect platform for running large microservices applications. Kubernetes is too complex to describe in detail here, but it deserves an overview since many people bring it up in conversations about microservices. Strictly speaking, the primary benefit of Kubernetes (aka K8s) is to increase infrastructure utilization through the efficient sharing of computing resources across multiple processes. Kubernetes is the master of dynamically allocating computing resources to meet demand, which allows organizations to avoid paying for computing resources they are not using. These benefits of K8s make the transition to microservices much easier.
Here are some benefits of using Kubernetes for microservices:
Scalability: Kubernetes makes it easy to scale services up or down as needed. This eliminates manual scaling and allows you to respond quickly to changing demands.
High availability: Kubernetes offers built-in high availability features, ensuring that services remain available even in the event of failure or network disruption.
Dynamic resource allocation: Kubernetes can dynamically allocate resources based on demand, enabling more efficient resource utilization and cost savings.
Self-healing: Kubernetes can detect and replace failed services, helping maintain uptime and reliability.
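The scalability and self-healing points above map directly onto Kubernetes objects. Below is a minimal, hypothetical manifest for a shopping cart service; the image name, labels, and thresholds are invented for illustration, not taken from a real deployment:

```yaml
# Hypothetical example: image name, labels, and thresholds are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 2                  # Kubernetes keeps 2 pods running (self-healing)
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: shopping-cart
          image: example.com/shop/cart:1.0   # placeholder image
          resources:
            requests:
              cpu: 100m
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shopping-cart
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shopping-cart
  minReplicas: 2
  maxReplicas: 10              # scale out automatically under load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

If a pod crashes, the Deployment replaces it; if CPU usage across the pods rises above 80%, the HorizontalPodAutoscaler adds replicas, up to ten, and scales back down when the spike passes. This is exactly the per-service scaling that the monolith couldn't offer.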
With the many benefits of this technology, it's no surprise that more developers are choosing to implement microservices with Kubernetes!
Conclusion to Using Kubernetes for Microservices
Microservices are popular due to their flexibility in adding new features, and the use of containers in companies is also on the rise. Kubernetes is the most used container orchestration tool, a must-know technology in the data engineering world. Using Kubernetes, you can easily scale and load balance your microservices, implement service discovery, and ensure that your microservices adhere to the principles of the Twelve-Factor App. Kubernetes also provides a platform-agnostic way to manage containerized applications. This makes it easy to deploy and scale applications across different environments.