Understanding The Difference Between Containerisation And Virtualisation

Virtualisation

Virtualisation means creating a virtual version of a computing resource, built from a combination of hardware and software, other than the one the hardware was intended to run. This means hardware running Windows OS, for example, may allow for the execution of a different system such as Ubuntu Linux. A virtual machine provides an environment that is logically separated from the underlying hardware. The machine on which a new virtual machine is created is known as the “host”, while the machine that is created is known as the “guest”.

Virtualisation allows different environments to run on a single machine. Instead of managing numerous hardware devices, developers can access every environment they need from a single machine, making it cheaper and easier to maintain. It also allows applications and resources to be shared across different users.

While virtualisation makes the development cycle easier, running a whole guest OS on top of the host OS makes the process slow and the infrastructure bulky. This is where containerisation comes into play.

Containerisation

Containerisation, on the other hand, refers to the isolation of processes that execute on the host OS. In contrast to virtualisation, a containerised application runs on top of the host OS in an encapsulated manner, sharing the host's kernel, so that lightweight isolation is achieved without a separate guest OS.

Since we do not install a new operating system, and no resources are spent running a guest OS, the overall application is less bulky and lighter than with virtualisation. To run an application in a container, a thin layer sits between the application and the host operating system. The most popular such layer today is Docker, so we will be looking at Docker.

To deploy applications using Docker, we install the Docker engine on the host OS. Next, the application and its dependencies are packaged into an image, built from a Dockerfile that describes the application. A running instance of an image is called a container, and an image can be used to start any number of containers. Each image can be only a few megabytes in size, making it easy to share, migrate, and move.
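As a rough illustration, the build-and-run flow might look like the sketch below. The base image, file names, image tag, and port are hypothetical placeholders, not details from this article.

    # Dockerfile -- describes how the (hypothetical) application image is built
    FROM python:3.12-slim
    WORKDIR /app
    # install the application's dependencies into the image
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # copy the application code and declare how a container starts
    COPY app.py .
    CMD ["python", "app.py"]

    # build the image, then start a container from it
    docker build -t myapp:1.0 .
    docker run -d --name web -p 8000:8000 myapp:1.0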

Because a single application image can be used to start multiple containers, containerisation lends itself to horizontal scaling. This differs from virtualisation, where vertical scaling is more commonly adopted, i.e. increasing the capacity of a machine's resources rather than spinning up multiple instances of the application.
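For instance, assuming the hypothetical myapp image from the sketch above, several identical containers can be started side by side (typically placed behind a load balancer); only the published host ports differ:

    # start three containers from the same image; the host ports are arbitrary examples
    docker run -d --name web1 -p 8081:8000 myapp:1.0
    docker run -d --name web2 -p 8082:8000 myapp:1.0
    docker run -d --name web3 -p 8083:8000 myapp:1.0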

Persisting Data In Docker Containers

Now that you know what virtualisation and containerisation are, it is important to understand how data is persisted in containerised applications. This is vital, as very few applications can function without persisting data. Knowing how to store data effectively in containerised applications helps in designing applications that are more robust, memory-efficient, and reliable.

Storing data in containerised applications is different from doing so in virtual machines. Instead of defining a separate virtual hard disk as we do in virtualisation, here we make use of data volumes. I will use Docker as the example to explain the concept.

There are two main ways to persist data in a Docker application:

  1. Docker volumes: we store the data in a volume, either managed directly by Docker or held in a dedicated data volume container. We then attach this volume to our containerised application.
  2. Bind mounts: instead of creating a Docker volume, we share a directory with the host operating system, similar to how shared folders work in a virtual machine. The sharing is two-way: a change to a file on the host machine is reflected inside the container and vice versa. The point to remember here is to start the container in detached mode so that exiting the Docker CLI does not kill the process. Both approaches are sketched below.
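Roughly, the two approaches look like this with the Docker CLI; the volume name, host path, and image name are placeholders:

    # 1. Named volume managed by Docker, attached to the application container
    docker volume create appdata
    docker run -d --name web -v appdata:/var/lib/appdata myapp:1.0

    # 2. Bind mount: share a host directory with the container (changes sync both ways);
    #    the -d flag runs the container detached so leaving the CLI does not stop it
    docker run -d --name web2 -v "$(pwd)/data:/var/lib/appdata" myapp:1.0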

Applications With Multiple Components

Every application is built from different components such as storage, servers, a front-end application, a backend application, APIs, and more. Now, we are going to look at how applications with multiple components are built in both the virtualisation ecosystem and in containerised applications.

In the virtual machine realm, we have a guest OS installed on top of the host OS. Since a complete operating system is installed, application components such as event messaging, databases, and servers are installed exactly as they would be under normal circumstances. The only care that needs to be taken is that installing multiple components can slow down the overall system.

In containerised applications, there is no guest operating system. Rather, the container runtime, Docker in our case, is responsible for executing the application. This means that the installation of components differs from how it would be done under normal circumstances.

To see how, note that vendors provide their Docker images on Docker Hub, which is a Docker registry. A Docker registry is like a store that contains everything that can run on Docker. You can publish your own Docker images to Docker Hub or to any other registry as well, as explained in the JFrog blog.
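For example, pulling a vendor image and publishing your own might look like this; the account and image names are placeholders:

    # pull a vendor-provided component image from Docker Hub
    docker pull postgres:16

    # publish your own image to a registry account
    docker login
    docker tag myapp:1.0 myaccount/myapp:1.0
    docker push myaccount/myapp:1.0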

Next, you define a Docker Compose YAML file that lists all the required Docker images, the order in which they should start, and the volumes they need. You can also specify which version (tag) of each image you wish to use. In this way, an application is fully defined by this Docker Compose file.
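A minimal Compose file for a two-component application might look like the sketch below; the service names, images, ports, and volume are assumptions made for illustration:

    # docker-compose.yml
    services:
      db:
        image: postgres:16            # pinned image version
        volumes:
          - dbdata:/var/lib/postgresql/data
      web:
        build: .                      # built from the Dockerfile in this directory
        ports:
          - "8000:8000"
        depends_on:
          - db                        # start the database before the web service

    volumes:
      dbdata:

Running docker compose up -d then builds any missing images and starts every service defined in the file, in the background.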

With so many options available, both containerisation and virtualisation remain fundamental to deploying applications today. Both technologies also provide networking features that facilitate communication between applications and across internal and external networks.
