In this session, we will take a brief look back at the history of deployment methods.
The Beginning
In the beginning, package managers did not exist. There were no JAR, WAR, RPM, or DEB packages. (A package manager keeps a database of software dependencies and version information precisely to minimise software mismatches and missing prerequisites.)
At that time, we could only bundle the files that comprised a release. More often than not, we copied them manually from one location to another. Combine that technique with bare-metal servers that were meant to live forever, and the result was hell on earth.
After some time had passed, no one knew what was installed on the servers. Continuous overwrites, reconfigurations, package installations, and ad-hoc changes resulted in unstable, unreliable, and undocumented software running on top of innumerable OS patches.
Configuration Management Tools
The introduction of configuration management tools (e.g., CFEngine, Chef, and Puppet) helped reduce the chaos. Still, they benefited OS setup and maintenance more than the deployment of new releases. They were never designed for that, even though the companies behind them quickly understood that supporting deployments would be financially beneficial.
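To make the idea concrete, here is a minimal sketch of the desired-state model those tools share: describe what a server should look like, then converge towards it idempotently. The package names and the apt-get call are illustrative assumptions, not taken from any particular tool.

```python
# A minimal sketch of the "desired state" model behind tools such as
# CFEngine, Chef, and Puppet: declare what the server should look like,
# then converge towards it idempotently. All names here are hypothetical,
# and apt-get stands in for whatever package manager the platform uses.
import shutil
import subprocess

def package_installed(name: str) -> bool:
    """Check whether a command from the package is already on the PATH."""
    return shutil.which(name) is not None

def ensure_package(name: str) -> None:
    """Install a package only if it is missing (idempotent convergence)."""
    if package_installed(name):
        print(f"{name}: already in desired state, nothing to do")
        return
    print(f"{name}: converging (installing)")
    subprocess.run(["apt-get", "install", "-y", name], check=True)

# Running this twice changes nothing the second time -- the property
# that made configuration management safer than ad-hoc shell scripts.
for pkg in ["nginx", "curl"]:
    ensure_package(pkg)
```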
Even with configuration management tools, the problems of running several services on the same server persisted. Different services can have different needs, and those needs sometimes collide. One may require JDK6 while another requires JDK7, and a new release of the first may necessitate a JDK upgrade that affects the other service on the same server.
Because conflicts and operational complexity were so widespread, many businesses chose to standardise. As previously noted, standardisation kills innovation: the more we standardise, the less room there is for innovative alternatives.
Even if we set innovation aside, standardisation without clear isolation makes upgrading anything difficult. The effects can be unexpected, and the sheer amount of work required to upgrade everything at once is so considerable that many prefer not to upgrade for a long time (if ever). Many are stuck with old stacks for extended periods as a result.
The Urgency of the Hour
We needed process isolation that did not require a separate VM for each service. At the same time, we needed an immutable way to deploy software; mutability was keeping us from our goal of building reliable environments. Immutability became possible with the advent of virtual machines, but there is no such thing as a free lunch.
Each time we wanted to release something, we could produce a new image and instantiate as many VMs as we needed, performing immutable rolling updates. Yet, not many of us did. The procedure was excessively slow, and even if that hadn’t mattered, a separate VM for each service would leave an excessive amount of CPU and memory unused.
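As a sketch of what such an immutable rolling update looks like, consider the following. The provision, health-check, and terminate helpers are hypothetical stand-ins for whatever a given platform’s API provides; the point is only the replace-rather-than-mutate pattern.

```python
# A minimal sketch of an immutable rolling update: instead of patching
# running VMs, replace each one with a fresh instance built from a new
# image. Every helper below is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    image: str

def provision(image: str, index: int) -> Instance:
    print(f"booting instance web-{index} from image {image}")
    return Instance(name=f"web-{index}", image=image)

def healthy(instance: Instance) -> bool:
    print(f"health-checking {instance.name}")
    return True  # assume the check passes in this sketch

def terminate(instance: Instance) -> None:
    print(f"terminating {instance.name}")

def rolling_update(fleet: list[Instance], new_image: str) -> list[Instance]:
    """Replace instances one at a time so capacity never drops to zero."""
    updated = []
    for i, old in enumerate(fleet):
        replacement = provision(new_image, i)
        if not healthy(replacement):
            terminate(replacement)  # discard the bad replacement
            raise RuntimeError("update aborted, old fleet still serving")
        terminate(old)              # the old instance is never mutated
        updated.append(replacement)
    return updated

fleet = [Instance(f"web-{i}", "app-image-v1") for i in range(3)]
fleet = rolling_update(fleet, "app-image-v2")
```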
Docker and Containers
Thankfully, Linux gained namespaces, cgroups, and related features. They were lightweight, fast, and inexpensive. They provided process isolation as well as a variety of other benefits. Unfortunately, they were difficult to use. Even though they had been around for a long time, only a few businesses had the know-how to put them to good use. We had to wait for Docker to emerge to make containers simple to use and hence available to everyone.
Today, containers are the preferred way to package and distribute services.
They are the solution to immutability that we were so desperately seeking. They provide the critical isolation of processes, optimal resource usage, and quite a few other benefits. And yet, we have already learned that we need much more.
Why Are Container Schedulers Used?
There isn’t much more to say about containers on their own. We need our services to scale, to tolerate failures, and to communicate transparently across a cluster, among other things. Containers are merely a low-level piece of the puzzle.
The real benefits are realised through tools that sit on top of containers. These tools are now referred to as container schedulers. They are our interface: they handle containers so that we don’t have to.
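At the core of every scheduler is a reconciliation loop: compare the state we declared with the state the cluster reports, and act on the difference. The sketch below is a toy illustration of that loop; the container operations and service names are hypothetical, and real schedulers run this continuously across many machines.

```python
# A minimal sketch of the reconciliation loop at the heart of container
# schedulers: compare desired state with observed state and act on the
# difference. The services, counts, and container operations here are
# hypothetical placeholders.
desired = {"web": 3, "worker": 2}                    # replicas we asked for
running = {"web": ["web-0", "web-1"], "worker": []}  # what the cluster reports

def start_container(service: str, index: int) -> str:
    name = f"{service}-{index}"
    print(f"scheduler: starting {name}")
    return name

def stop_container(name: str) -> None:
    print(f"scheduler: stopping {name}")

def reconcile(desired: dict[str, int], running: dict[str, list[str]]) -> None:
    """One pass of the loop: the scheduler handles containers, not us."""
    for service, want in desired.items():
        have = running.get(service, [])
        for i in range(len(have), want):   # too few replicas: scale up
            have.append(start_container(service, i))
        while len(have) > want:            # too many replicas: scale down
            stop_container(have.pop())
        running[service] = have

reconcile(desired, running)
# A crashed container simply disappears from `running`; the next pass of
# reconcile() replaces it, which is what fault tolerance means here.
```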
In case you are not already using one of the container schedulers, you might be wondering what they are.
We will get familiar with container schedulers in the upcoming tutorial.