This module will introduce us to the Kubernetes ecosystem.
This Article's Contents
Never launch containers directly.
To understand Kubernetes, it is essential to recognise that running containers directly is a poor choice for the vast majority of use cases. Containers are low-level entities that need an underlying platform, one that provides all the additional features we expect from services deployed to a cluster. In other words, containers are useful, but they should not be run directly.
The reason is straightforward. Containers do not provide fault tolerance by themselves, they cannot easily be deployed to the optimal location in a cluster and, to make a long story short, they are not user-friendly. That does not mean containers are useless on their own.
They are useful, but much more is required to harness their true power. If we want to run containers at scale, to be fault-tolerant and self-healing, and to have the other characteristics we expect from modern clusters, we need more. We need at least a scheduler, and probably more than that.
Let’s explore how Kubernetes is much more than a container scheduler.
Kubernetes can be used to deploy our services, roll out new versions without downtime, and scale (or de-scale) those services.
- It is portable.
- It can run on a public or private cloud.
- It can operate on-premises or in a hybrid setting.
- We can migrate a Kubernetes cluster from one hosting provider to another with (almost) no changes to our deployment and management procedures.
- Kubernetes is extensible to meet practically all requirements. We can choose which modules to employ, as well as build and integrate our own extra features.
- Kubernetes will determine where to execute something and how to maintain the specified state.
- Kubernetes can position service copies on the most suitable server, restart them when necessary, replicate them, and scale them.
- Self-healing was part of its design from the start, and self-adaptation is on the way as well.
- Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing add substantial value in Kubernetes.
- It lets us mount volumes for stateful applications.
- It allows us to store confidential information as secrets.
- It can be used to check the health of our services.
- It can monitor resources and balance requests.
- It facilitates service discovery and log access.
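Several of the capabilities listed above come together in a single Deployment definition. As a minimal sketch (the names, image, port, and paths below are hypothetical, not from this article), a manifest like the following asks Kubernetes to keep three replicas running, roll out new versions gradually, restart containers that fail a health check, and tell the scheduler how much capacity each copy needs:

```yaml
# Hypothetical manifest: names, image, port, and probe path are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3                  # keep three copies running (scaling, self-healing)
  strategy:
    type: RollingUpdate        # replace Pods gradually for zero-downtime releases
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
      - name: api
        image: example/api:1.0.0
        ports:
        - containerPort: 8080
        livenessProbe:         # restart the container if this check keeps failing
          httpGet:
            path: /healthz
            port: 8080
        resources:
          requests:            # used by the scheduler to pick a suitable node
            cpu: 100m
            memory: 128Mi
```

If a node dies or a container crashes, Kubernetes notices that the observed state no longer matches the three replicas we declared and recreates the missing Pods elsewhere, which is the self-healing behaviour described above.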
The list of Kubernetes’ capabilities is extensive and expanding rapidly. It is becoming, in conjunction with Docker, a platform that encompasses the entire software development and deployment lifecycle.
The Kubernetes project is young. It is in its infancy, so we can expect significant enhancements and new features in the near future. Still, do not be deceived by the word "infancy." Even though the project is young, it has one of the largest communities, and it runs some of the world's largest clusters.
Are you looking for hands-on experience with Kubernetes? In the following chapter, we will begin running a Kubernetes cluster on our computer.