Kubernetes Part 1: Overview and Features
In this series of articles, we’ll take a deep dive into the world of container orchestration and start playing with Kubernetes hands-on.
What is a container?
Containers allow developers to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
What is Container Orchestration?
If your business runs a large number of complex services in the cloud, containers come in handy: they make it easy to turn apps on and off to meet fluctuating demand, and they let you move apps seamlessly between servers. But as engineers know, you can’t move containerized apps around efficiently by hand. You need a management platform that automatically spins containers up, suspends them, or shuts them down as needed, and that controls how they access resources like the network and data storage. That’s where orchestration platforms come in. They provide many features, including:
- Provisioning hosts
- Instantiating a set of containers
- Rescheduling failed containers
- Linking containers together to create interfaces
- Exposing services to machines outside of the cluster
- Scaling the cluster out or in by adding or removing containers
- Rolling back a deployment in case something fails
- … and more!
The three main options right now are Docker Swarm, Kubernetes, and Apache Mesos + Marathon, which differ in implementation and in how you interact with them. Docker Swarm gives you the easiest route to orchestrating clusters of Docker hosts. Kubernetes is container-centric, but it focuses less on the containers themselves and more on deploying and managing services; it supports autoscaling and gives you more control over how your apps are architected. Mesos + Marathon can handle large-scale operations but introduces additional complexity.
In this article, we’ll take a dive into Kubernetes and learn about its features.
Kubernetes
Let’s look at Kubernetes from a technical perspective. Over the course of this series, we’ll set up Kubernetes and learn to deploy a cluster of Docker images to a “cloud” host. Remember: we’ll use Docker to create containers and Kubernetes to manage them.
What are the features of Kubernetes?
Automatic bin packing
Imagine that we have five servers, each with 10GB of memory (RAM), and a list of jobs to run on them, each job with different memory requirements. Kubernetes takes care of packing these containers onto the servers in the most efficient way possible without sacrificing availability.
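To see what this looks like in practice, here is a minimal sketch of a pod manifest that declares resource requests and limits for its container; the pod name, image, and numbers are illustrative assumptions, not values from this article. The scheduler uses the requests to find a node with enough free capacity.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25        # example image
    resources:
      requests:
        memory: "512Mi"      # the scheduler packs pods onto nodes using these requests
        cpu: "250m"
      limits:
        memory: "1Gi"        # the container may use at most this much memory
        cpu: "500m"
```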
Service discovery and load balancing
- Pods and nodes - Kubernetes doesn’t run containers directly. Instead, it wraps one or more containers in a pod, and pods are, in turn, housed in nodes. When you specify a pod, you can optionally specify how much CPU and memory each container needs. When containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on, which ties in with automatic bin packing.
- A pod contains:
- an application container (or multiple containers)
- storage resources (shared volumes)
- a unique network IP
- Services - pods that perform the same set of functions are abstracted into sets, called services. A service can have multiple pods. Kubernetes gives pods their own IP addresses and gives a set of pods a single DNS name, and it can load-balance traffic across the pods in that set. This gives Kubernetes control over the network and the communication between pods; a minimal service manifest is sketched just below.
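As an illustration of the idea, here is a minimal sketch of a Service manifest. The service name, the app: web label, and the ports are assumptions made up for this example; any pod carrying that label would be selected and receive a share of the traffic.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service          # illustrative name
spec:
  selector:
    app: web                 # selects every pod labeled app: web
  ports:
  - protocol: TCP
    port: 80                 # port the service exposes inside the cluster
    targetPort: 8080         # port the selected pods listen on
```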
Storage orchestration
Containers running inside a pod may need to store data. Usually a single volume is shared among all the containers in a pod. The storage volume can be local, in the cloud, or within your network.
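A minimal sketch of how a pod declares and mounts a volume is shown below. The emptyDir volume here is just scratch space that lives as long as the pod; the pod name, image, and mount path are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo         # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html   # where the volume appears inside the container
  volumes:
  - name: shared-data
    emptyDir: {}             # scratch volume shared by all containers in this pod
```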
Self-healing
If a container fails, Kubernetes restarts it. If a node dies, Kubernetes replaces and reschedules its containers on other nodes. If a container does not respond to a user-defined health check, Kubernetes kills the container and replaces it, keeping the system available.
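Here is a rough sketch of such a user-defined health check: a liveness probe. The health endpoint, port, and timings below are assumptions for illustration; if the HTTP check keeps failing, Kubernetes restarts the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /healthz       # assumed health endpoint exposed by the app
        port: 80
      initialDelaySeconds: 5 # wait before the first check
      periodSeconds: 10      # probe every 10 seconds
```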
Automated rollouts and rollbacks
A rollout is the process of deploying changes to the application or its configuration. A rollback is the process of reverting those changes and restoring a previous state.
Kubernetes progressively rolls out changes to your application or its configuration while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. This helps ensure that there is no downtime.
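As a sketch of how this is typically expressed, a Deployment manifest can describe how aggressively changes roll out. The names, replica count, and limits below are assumptions for illustration; changing the image in this spec would trigger a rolling update, and a failed rollout could then be reverted with `kubectl rollout undo deployment/web-deployment`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one pod may be down during the rollout
      maxSurge: 1            # at most one extra pod may be created during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25    # changing this image triggers a progressive rollout
```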
Secret and configuration management
Secrets are sensitive data like passwords, keys, and tokens. Config maps are key-value pairs that represent configuration state. Secrets and config maps are created outside of pods and containers and are stored by the cluster itself, then made available to the containers that need them. The advantage is that this makes sensitive data and configuration portable and easy to manage. The end result is that you can deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration; a minimal example follows the note below.
- Cool fact: Secrets and configurations are stored in etcd, a key-value database. The maximum size limit for a secret is 1MB.
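For illustration only (the names and values are made up, not from this article), here is a sketch of a Secret created declaratively and injected into a container as an environment variable, so the password never needs to be baked into the image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # illustrative name
type: Opaque
stringData:
  password: s3cr3t           # stored base64-encoded by the cluster
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials   # references the Secret above
          key: password
```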
Batch execution
You can execute batch jobs, each of which creates one or more pods. If a container or pod fails during job execution, the Job controller reschedules it, on another node if necessary. Jobs also let you run multiple pods in parallel and scale up if required.
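A minimal sketch of a Job manifest is shown below; the name, image, command, and counts are illustrative assumptions. It runs five pods to completion, at most two at a time, and retries pods that fail.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo           # illustrative name
spec:
  completions: 5             # run five pods to successful completion
  parallelism: 2             # run at most two pods at a time
  template:
    spec:
      containers:
      - name: worker
        image: busybox:1.36  # example image
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
      restartPolicy: OnFailure   # failed pods are retried
```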
Horizontal scaling
We can scale containers up and down with manual commands, from the Kubernetes dashboard, or automatically based on CPU usage. Three tools let you control horizontal scaling: the replication controller, the manifest file, and the horizontal pod autoscaler.
- The replication controller enables us to create multiple pods and then makes sure that number of pods always exists. If a pod crashes, the replication controller replaces it.
- The replication controller learns how many pods to run and keep available from the information provided in the manifest file. The replicas property in the manifest file tells the replication controller how many pods to create.
- The horizontal pod autoscaler monitors CPU usage and automatically scales the number of pods in a replication controller (or similar workload); see the sketch after this list.
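Here is a minimal sketch of a horizontal pod autoscaler. The names and thresholds are assumptions for illustration, and it targets a hypothetical Deployment, though a replication controller can be targeted the same way; when average CPU usage rises above the target, more pods are added, up to the maximum.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment     # the workload being scaled (illustrative)
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%
```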
In the next article
…and that’s it for an overview of Kubernetes and its features! In the next article we’ll cover the fundamental architecture of Kubernetes.