Kubernetes
Kubernetes is an open-source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open-source project is hosted by the Cloud Native Computing Foundation.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Terminology
Kubernetes has its own vocabulary which, once you get used to it, gives you a sense of how things are organized. These terms, several of which appear in the short sketch after the list, include:
✔Pods: A Pod is a group of one or more containers, together with their shared storage and options for how to run them. Each Pod gets its own IP address.
✔Labels: Labels are key/value pairs attached to objects such as Pods, Replication Controllers, and Endpoints, and are used to identify and select groups of objects.
✔Annotations: Annotations are key/value pairs used to store arbitrary non-queryable metadata.
✔Services: Services are an abstraction, defining a logical set of Pods and a policy by which to access them over the network.
✔Replication Controller: Replication controllers ensure that a specific number of pod replicas are running at any one time.
✔Secrets: Secrets hold sensitive information such as passwords, TLS certificates, OAuth tokens, and ssh keys.
✔ConfigMap: ConfigMaps are used to inject configuration data into containers while keeping the containers agnostic of Kubernetes itself.
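To make a few of these terms concrete, here is a minimal sketch of a Pod that carries a label and an annotation, plus a Service that selects it by label. The names, image, and values are illustrative assumptions, not anything prescribed by Kubernetes itself.

```yaml
# A Pod: one container, with a label (queryable) and an annotation (arbitrary metadata).
apiVersion: v1
kind: Pod
metadata:
  name: hello-web                      # illustrative name
  labels:
    app: hello                         # labels are what selectors match on
  annotations:
    example.com/build-info: "nightly"  # annotations hold non-queryable metadata
spec:
  containers:
  - name: web
    image: nginx:1.25                  # assumed image
    ports:
    - containerPort: 80
---
# A Service: a stable DNS name and virtual IP for every Pod labelled app=hello.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                         # ties the Service to the Pod above
  ports:
  - port: 80
    targetPort: 80
```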
Why you need Kubernetes and what it can do
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior were handled by a system?
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
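As a rough sketch of that canary pattern (the names web-stable and web-canary, the track label, and the images are assumptions made for illustration), one Service selects only the shared app label, so it spreads traffic across a large stable Deployment and a single-replica canary roughly in proportion to their replica counts:

```yaml
# Stable Deployment: nine replicas of the current version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: web, track: stable}
  template:
    metadata:
      labels: {app: web, track: stable}
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # assumed image
---
# Canary Deployment: one replica of the new version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: web, track: canary}
  template:
    metadata:
      labels: {app: web, track: canary}
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.1   # assumed image
---
# Service: selects only app=web, so about 10% of requests reach the canary.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
  - port: 80
    targetPort: 8080
```

Promoting the canary is then a matter of updating the stable Deployment's image and scaling the canary back down.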
Kubernetes provides you with:
👉Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
👉Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
👉Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
👉Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes fits containers onto your nodes to make the best use of your resources.
👉Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve (see the Deployment sketch after this list).
👉Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
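The Deployment below sketches how several of the features above are expressed declaratively; the names, ports, and thresholds are illustrative assumptions. Resource requests give the scheduler what it needs for bin packing, the liveness and readiness probes drive self-healing and keep unready Pods out of Service traffic, and the rolling-update strategy is what carries out automated rollouts (with kubectl rollout undo available for rollbacks).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate            # automated rollout: replace Pods gradually
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # assumed image
        ports:
        - containerPort: 8080
        resources:
          requests:                # the scheduler bin-packs Pods using these
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:             # failing this restarts the container
          httpGet:
            path: /healthz         # assumed health endpoint
            port: 8080
          periodSeconds: 10
        readinessProbe:            # failing this removes the Pod from Service endpoints
          httpGet:
            path: /ready           # assumed readiness endpoint
            port: 8080
          periodSeconds: 5
```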
Benefits of Kubernetes
Kubernetes enables users to schedule, run and monitor containers, typically in clustered configurations, and automate related operational tasks. Among other things, Kubernetes can:
· Continuously check container health, restart failed containers and remove unresponsive ones.
· Perform load balancing to distribute traffic across multiple container instances.
· Handle varied storage types for container data, from local storage to cloud resources.
· Set and modify preferred states for container deployment. Users can create new container instances, migrate existing ones to them and remove the old ones.
· Add a level of intelligence to container deployments, such as resource optimization: identify which nodes are available and which resources are required for containers, and automatically fit containers onto those nodes.
· Manage passwords, tokens, SSH keys and other sensitive information (a short sketch of this follows the list).
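As a minimal sketch of that last point (names and values are placeholders, not real credentials), a ConfigMap and a Secret can be created as separate objects and surfaced to a container as environment variables, so configuration and credentials can change without rebuilding the image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config                 # illustrative name
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: web-secret
type: Opaque
stringData:
  DB_PASSWORD: "change-me"         # placeholder value only
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0   # assumed image
    envFrom:
    - configMapRef:
        name: web-config           # keys become environment variables
    - secretRef:
        name: web-secret
```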
CASE STUDY:
Adidas
Staying True to Its Culture, Adidas Got 40% of Its Most Impactful Systems Running on Kubernetes in a Year
Challenge
In recent years, the Adidas team was happy with its software choices from a technology perspective, but accessing all of the tools was a problem. Just getting a developer VM meant a request form, project details, and a call to the internal cost center, and the wait ranged from half an hour in the best case to as much as a week in the worst.
Solution
To improve the process, "we started from the developer point of view," and looked for ways to shorten the time it took to get a project up and running and into the Adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.
Impact
Just six months after the project began, 100% of the Adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4-6 weeks to 3-4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, Adidas is now running 40% of its most critical, impactful systems on its cloud-native platform.
Before the move, for engineers at Adidas, "it felt like being an artist with your hands tied behind your back, and you're supposed to paint something," says Daniel Eichten, Senior Director of Platform Engineering.
For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Eichten. "Eventually, after a ton of approvals, then the provisioning of the machine happened within minutes, and then the best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."
"There is no competitive edge over our competitors like Puma or Nike in running and operating a Kubernetes cluster. Our competitive edge is that we teach our internal engineers how to build cool e-comm stores that are fast, that are resilient, that are running perfectly."— DANIEL EICHTEN, SENIOR DIRECTOR OF PLATFORM ENGINEERING AT ADIDAS
Thank you for reading the article.