Kubernetes is an awesome tool for quickly managing containerized applications. With it you can:
- Deploy your applications quickly and predictably,
- Scale your applications on the fly,
- Seamlessly roll out new features,
- Optimize use of your hardware by using only the resources you need.
Kubernetes runs on top of an operating system and uses an API and a container runtime to automate container operations. Containers are lightweight because they share the machine’s OS kernel; they have their own resources (CPU, memory, etc.) and are decoupled from the infrastructure. This is what makes it possible to modularize and isolate an application so that it can run on multiple hosts without considerable loss of performance.
Given this architecture, each container must have a specific responsibility, so that errors can be identified easily and so that containers can be replaced with practically no friction.
Likewise, each container should have its responsible team in order to keep the teams focused on their responsibilities only.
Kubernetes is responsible for bringing up, monitoring and maintaining the health of the entire project and the nodes involved, always seeking to preserve the desired state. Not only can it identify that something is wrong, it can also fix it (or try to).
It is possible to define a container for an application that we’ve built and for each of its dependencies.
- Node.js container
- PostgreSQL container
- Redis container
- Data container
Kubernetes uses a concept called pods to group those containers into a unit that represents the application. The containers in a pod always run on the same host and share:
- the same IP address and network namespace (so they can reach each other via localhost),
- storage volumes.
We can think of a pod as an environment where the containers run and persist until the pod is deleted.
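As a sketch of this idea, here is a hypothetical pod manifest grouping two of the containers listed above (the names app-pod, web and cache are assumptions for illustration):

```yaml
# A pod whose containers share one network namespace:
# the Node.js container can reach Redis at localhost:6379.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: web
    image: node:18
  - name: cache
    image: redis:7
```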
Another important concept in Kubernetes is the controller: a controller can create and manage multiple pods for you, according to the number of replicas you specify. Deployment, StatefulSet and DaemonSet are some examples of controllers.
With these concepts in mind, we are ready to run our first server with Kubernetes. Once we have our pods configured, the next step is to configure the deployment (controller) file, where we specify the number of replicas in the system, plus other metadata such as the deployment name, version and category.
The deployment file is written in YAML. Here’s a simple example with an nginx server, the file mynginx_deployment.yaml:
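The file itself is not reproduced here, so the following is a minimal sketch of what mynginx_deployment.yaml could contain; the name my-nginx matches the deployment deleted at the end of this article, while the label app: my-nginx, the replica count and the image tag are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2            # number of pod replicas to keep running
  selector:
    matchLabels:
      app: my-nginx
  template:              # pod template: its own spec is a Pod spec
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```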
This .yaml file contains 4 top-level fields: apiVersion, kind, metadata and spec. The first 3 fields are required by Kubernetes, and the value of the spec field differs depending on the object you want to create. In this example, spec describes the Deployment (such as the number of replicas), and its nested pod template has the format of a Pod object. Using the Kubernetes API reference you can find the spec format for the different object kinds.
To run our example we must execute:
$ kubectl create -f ./mynginx_deployment.yaml
To expose the nginx pods on host port 85, we can deploy an nginx service with the following yaml file:
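The service file is also not shown, so here is a sketch of what mynginx_service.yaml could look like; the name my-nginx-service matches the service deleted below, while the app: my-nginx selector and the LoadBalancer type (which local clusters such as Docker Desktop map to localhost) are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
spec:
  type: LoadBalancer
  selector:
    app: my-nginx      # routes traffic to pods carrying this label
  ports:
  - port: 85           # port exposed on the host
    targetPort: 80     # port the nginx container listens on
```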
$ kubectl create -f ./mynginx_service.yaml
Now you will be able to see the nginx server test page on the following URL: http://127.0.0.1:85
We have our first container running!
If you want to remove the nginx service and deployment, run the following commands:
$ kubectl delete service my-nginx-service
$ kubectl delete deployment my-nginx