Kubernetes (a wonderful tool)!

Vibhanshusharma
6 min read · Dec 31, 2020

Kubernetes is an open-source platform for automating deployment, scaling, and management of containerized applications.

What are containers?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

Benefits of Kubernetes!

  • Automates various manual processes: for instance, Kubernetes decides for you which server will host a container, how it will be launched, and so on.
  • Interacts with several groups of containers: Kubernetes can manage several clusters at the same time.
  • Provides additional services: as well as managing containers, Kubernetes offers security, networking, and storage services.
  • Self-monitoring: Kubernetes constantly checks the health of nodes and containers.
  • Horizontal scaling: Kubernetes lets you scale resources not only vertically but also horizontally, easily and quickly.
  • Storage orchestration: Kubernetes mounts the storage system of your choice to run your apps.
  • Automates rollouts and rollbacks: if something goes wrong after a change to your application, Kubernetes rolls it back for you.
  • Container balancing: Kubernetes always knows where to place containers, calculating the “best location” for them.
  • Runs everywhere: Kubernetes is open source and gives you the freedom to use on-premises, hybrid, or public cloud infrastructure, letting you move workloads wherever you want.
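To make the horizontal-scaling point concrete: Kubernetes’ Horizontal Pod Autoscaler derives a desired replica count from the ratio of an observed metric to its target. A minimal sketch of that core formula in Python (the function name and the sample numbers are illustrative, not part of any Kubernetes API):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Approximation of the Horizontal Pod Autoscaler's core formula:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# If 4 pods average 90% CPU against a 60% target, scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # → 6
```

The same formula scales down when the observed metric falls below target, which is what makes the scaling automatic in both directions.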

Key Kubernetes terminology!

Pod

A pod holds one or more containers. Pods are the smallest deployable unit in Kubernetes (this is why containers technically aren’t standalone Kubernetes objects; even a single container is wrapped in a pod).

Containers within a pod share resources and a network and can communicate with each other; pods themselves can communicate even when they run on separate nodes.
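To make this concrete, a pod is described to the cluster as a small declarative manifest. Below is a minimal single-container pod sketched as a Python dict and serialized to JSON (the Kubernetes API accepts JSON; YAML is simply the more common on-disk form). The names and image are illustrative:

```python
import json

# Minimal single-container Pod object (names and image are illustrative).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25", "ports": [{"containerPort": 80}]}
        ]
    },
}

print(json.dumps(pod, indent=2))
```

Everything Kubernetes manages is declared this way: you describe the desired state, and the cluster works to make reality match it.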

Node

Nodes are the hardware components. A node is likely to be a virtual machine hosted by a cloud provider or a physical machine in a data center. But it can be simpler to think of nodes as the CPU/RAM resources available to your Kubernetes cluster, rather than as unique machines. This is because pods aren’t constrained to any given machine at any given time; they move across the available resources to achieve the desired state of the application.

Nodes are of two types:

1. Worker Node

2. Master Node

Cluster

Clusters actually run the containerized applications being managed by Kubernetes. A cluster is a series of nodes connected together. By joining together, the nodes pool their resources making the cluster much more powerful than the individual machines it is made up of. Kubernetes moves pods around the cluster as nodes are added/removed. A cluster contains multiple worker nodes and at least one master node.
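The paragraph above describes, in essence, a bin-packing problem: place each pod on a node with enough free resources, pooling the cluster’s capacity. A deliberately tiny model of that idea (the real Kubernetes scheduler filters and scores nodes on many more criteria; all names here are made up):

```python
# Toy model of cluster scheduling: assign each pod to the node with the
# most free CPU that can still fit it. This only illustrates the
# "pool of resources" idea, not the real scheduler's algorithm.
def schedule(pods: dict, nodes: dict) -> dict:
    free = dict(nodes)  # node -> free CPU (millicores)
    placement = {}
    # Place the largest pods first, a common bin-packing heuristic.
    for pod, cpu in sorted(pods.items(), key=lambda kv: -kv[1]):
        node = max(free, key=free.get)  # node with the most free CPU
        if free[node] < cpu:
            raise RuntimeError(f"no node can fit pod {pod!r}")
        placement[pod] = node
        free[node] -= cpu
    return placement

# "db" lands on node-a; "api" and "cache" share node-b.
print(schedule({"api": 500, "db": 700, "cache": 300},
               {"node-a": 1000, "node-b": 1000}))
```

Re-running this placement whenever nodes are added or removed is, very loosely, what “Kubernetes moves pods around the cluster” means.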

Benefits of Kubernetes for companies:

  • Control and automate deployments and updates.
  • Save money by optimizing infrastructural resources thanks to the more efficient use of hardware.
  • Orchestrate containers on multiple hosts.
  • Solve many common problems arising from the proliferation of containers by organizing them into “pods” (see the Pod section above).
  • Scale resources and applications in real time.
  • Test and automatically correct applications.

Growth of Kubernetes over the years:

The rapid adoption of container technology, DevOps practices and principles, microservices application architectures, and the rise of Kubernetes as the de facto standard for container orchestration are the key drivers of modern digital transformation.

Tinder’s Engineering Team recently announced their move to Kubernetes to solve scale and stability challenges. Twitter is another company that has announced its own migration from Mesos to Kubernetes. The New York Times, Reddit, Airbnb, and Pinterest are just a few more examples.

Spotify and Kubernetes!

Launched in 2008, the audio-streaming platform Spotify has grown to over 200 million monthly active users across the world. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of virtual machines with a homegrown container orchestration system called Helios. By late 2017, it became clear to them that having a small team working on the features was just not as efficient as adopting something supported by a much bigger community.

Solution!

When they originally looked at Kubernetes, they were in an interesting situation: they already had an in-house orchestration solution they had built, which, anecdotally, was open-sourced the very same week Kubernetes launched. So they did a lot of work to make the transition to Kubernetes incredibly easy for developers, and to make it possible for hundreds of teams to work across shared clusters securely and safely together.

Despite its early adoption of containers, Spotify began to shift to Kubernetes “in earnest” only about a year and a half ago. Kubernetes has since played a key role in Spotify’s DevOps practice, not least by helping to reduce toil.

“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti, Director of Engineering, Infrastructure and Operations. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.

Challenges!

The road to Kubernetes adoption can be, of course, fraught with challenges. In Spotify’s case, the challenge was empowering several hundred autonomous engineering teams to “move as quickly as possible.”

“They’re working on building and iterating features and experiments that people use on Spotify every day, and we want them to keep working on that,” Haughwout said. “So one of our challenges is how do we migrate them to things like Kubernetes, when you have 299 million monthly users, without interrupting the music stream and without slowing them down.”

Reddit!

Reddit is one of the world’s busiest sites, and Kubernetes sits at the heart of its internal infrastructure.

For several years, the Reddit infrastructure team relied on conventional forms of provisioning and configuration. That approach only went so far; once they saw the immense disadvantages and mistakes that came with doing things the old way, they migrated to Kubernetes.

Babylon!

A large number of Babylon’s products leverage machine learning and artificial intelligence, and in 2019, there wasn’t enough computing power in-house to run a particular experiment. The company was also growing (from 100 to 1,600 in three years) and planning expansion into other countries.

Babylon had migrated its user-facing applications to a Kubernetes platform and the infrastructure team turned to Kubeflow, a toolkit for machine learning on Kubernetes.

“We tried to create a Kubernetes core server, we deployed Kubeflow, and we orchestrated the whole experiment, which ended up being a really good success,” says AI Infrastructure Lead Jérémie Vallée.

Conclusion:

To sum up the entire article, I would draw attention to the growing adoption of Kubernetes. The containerization trend has undoubtedly accelerated Kubernetes and the services it provides. It saves the time once spent managing servers by hand, scales systems when the need arises, and makes deployment and management far more efficient and easier.
