What is Docker
In this introduction to Docker, we are going to define what Docker is, why it is so popular, how to use it, and much more. Let's start with the definition.
“Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.”
Containers and images
Containers and images are the two most common Docker objects. We will talk more about these, and other, Docker objects later on.
A container is a standard unit of software that packages an application together with its dependencies, keeping it isolated from the rest of the system. The goal is to give you a better ground for developing and testing your applications, so that they are not dependent on the entire environment.
A Docker container is created when a Docker image is run. A container can be created, moved, started, stopped, and deleted, and those operations can be run through the Docker API or CLI.
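As a sketch of that lifecycle, here are the basic CLI commands (the container name `my-nginx` and the `nginx` image are just examples):

```shell
# Create and start a container from the nginx image, in the background
docker run -d --name my-nginx nginx

# Stop it, start it again, and finally delete it
docker stop my-nginx
docker start my-nginx
docker rm -f my-nginx
```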
A really cool thing is that you can create an image based on the container’s current state. So basically you can create a few environments and when they are fully set up, you can create images that represent those environments. That way you will have a fast way to back up on the “base state” of a container if something goes wrong.
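This snapshotting is done with the docker commit command; the container and image names below are placeholders for the example:

```shell
# Set up a container, install what you need inside it, then
# save its current state as a new image
docker commit my-container my-base-image:v1

# Later, start a fresh container from that "base state" image
docker run -d --name fresh-env my-base-image:v1
```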
An image is a standalone executable package of software that contains everything you need to run your app:
- Code
- Runtime
- System tools
- System libraries
- Settings
Images become containers when they are run on a Docker engine. So an image contains all the information and settings needed to create a Docker container, which in turn contains everything you need to start developing and testing.
Creating an image
Images can be:
- Created from scratch
- Taken from a Docker registry
- Taken from a Docker registry and modified
When creating an image, you write a Dockerfile that contains the instructions for building the image. Each instruction creates a separate image layer, and when you change the Dockerfile, only the affected layers are rebuilt.
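As an illustration, a minimal Dockerfile for a Node.js app might look like this (the base image, port, and file names are assumptions for the example):

```dockerfile
FROM node:18-alpine        # start from an official base image
WORKDIR /app               # set the working directory inside the image
COPY package.json .        # copy the dependency manifest first (better layer caching)
RUN npm install            # install dependencies
COPY . .                   # copy the rest of the application code
EXPOSE 3000                # document the port the app listens on
CMD ["node", "server.js"]  # default command when a container starts
```

You would then build it with a command like `docker build -t my-app .`.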
There are a lot of benefits to choosing Docker over VMs, or to using Docker alongside VMs. We won't cover all of them in this introduction: some benefits only become apparent after using Docker for a while, some only show up with a specific workload, and some come down to personal preference.
Hardware requirements and configuration time
The first and biggest difference is hardware requirements and the time it takes to configure and run the environments. VMs need quite a lot of time to configure and get going, while consuming a lot of storage space and utilizing quite a lot of CPU and RAM. Docker, on the other hand, has all the settings and configuration defined in an image; when run, it sets up quickly and is very lightweight, especially compared to a VM. There is a reason for that: a Docker container cannot replace a VM in all cases, but when it can, it is simply a better and faster option.
The second reason follows from the first. If you have a problem with your VM or your container and the best course of action is to delete it and run a new one, you will replace a Docker container a lot faster than a VM. You can even run two or more identical containers if you need to.
As we can see in the picture above, Docker uses a client-server architecture. The client interacts with the Docker daemon, and the two can run on the same machine or connect remotely.
Docker client and daemon communicate through:
- REST API
- UNIX sockets
- Network interface
Docker’s architecture consists of:
- Docker client
- Docker objects
- Docker registries
- Docker daemon
and more. Let's explain each one a little so you can better understand the architecture. It's really not that complicated, but you can go more in depth in the official documentation if you want.
Let's start with the Docker daemon, which communicates with every other part of Docker to manage services. The Docker daemon receives API requests and manages the Docker objects we mentioned above.
The Docker client is what users use to communicate with the Docker daemon through the Docker API. Through the Docker client you can communicate with more than one Docker daemon.
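For example, the same client can be pointed at different daemons with the -H flag or the DOCKER_HOST variable (the remote hostname below is a placeholder):

```shell
# Talk to the local daemon (the default, over a UNIX socket)
docker ps

# Point the same client at a remote daemon over SSH
docker -H ssh://user@remote-host ps

# Or set the target daemon for the whole shell session
export DOCKER_HOST=ssh://user@remote-host
docker ps
```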
A Docker registry is used to store images. You can run your own private registry, or you can search for images on the official Docker Hub, which is a public registry open to everyone. You use the docker push and docker pull commands to push images to or pull images from a registry.
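For instance (the `myuser/my-app` repository name is a placeholder for your own Docker Hub account):

```shell
# Pull an official image from Docker Hub
docker pull nginx:latest

# Tag a local image for your own repository, then push it
docker tag my-app:v1 myuser/my-app:v1
docker push myuser/my-app:v1
```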
Docker objects are created when you use Docker. We mentioned some of them above and explained what containers and images are, as well as how to use them. If you are starting out with Docker, you don't really need anything else for now, but volumes are quite useful and deserve an explanation.
Docker volumes provide a way to map a directory on your host to a directory inside a container. It works like a shared directory, so you can manipulate your files more easily. As the documentation puts it, volumes connect specific filesystem paths of the container back to the host machine, so changes made in a container also appear on the host; normally, any changes are specific to the container alone. This is very useful, for example, for persisting a database that runs in a container.
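A short sketch using the -v flag (the paths, container names, and Postgres image are examples):

```shell
# Bind-mount the host directory ./data to the database's data directory
# inside the container, so the files survive container removal
docker run -d --name my-db \
  -v "$(pwd)/data:/var/lib/postgresql/data" \
  postgres:15

# Alternatively, use a named volume managed by Docker itself
docker volume create db-data
docker run -d --name my-db2 -v db-data:/var/lib/postgresql/data postgres:15
```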
Docker vs Virtual Machines
First off, Docker cannot replace VMs completely, so don't think you can just swap your VMware or any other hypervisor for Docker. That being said, let's take a look at where Docker really shines and where VMs are just better.
As we said before, Docker is lightweight and is much faster at running and replacing containers. If you need to swap your environments often, Docker is made for you. Using Docker, you can model your environment the way you see fit and create an image that represents it. You can then deploy that image whenever you want, and it will come preinstalled and ready for use. You can model and run your VMs in a similar way, but they take much longer to get running.
Just by looking at the picture above and comparing the architectures, you can see the reason docker is so lightweight. Of course, we are talking about VMs needing to have OS installed on each one while docker doesn’t.
On the other hand, VMs are generally more secure than containers because each VM runs its own separate operating system, while containers share the host's kernel. A VM also gives you fine-grained control over its resources; containers can be limited too (for example with the --memory and --cpus flags), but by default they share the host's resources.
As for portability, containers are much easier to move since they package their own dependencies; a container can be moved to any host with a Docker engine with no problem. Porting a VM can be challenging and time consuming.
Getting started with docker
There’s an official documentation for you to explore, and of course you can install docker on your machine to try it out. There is also an alternative that doesn’t require you to install anything and it’s called Play with Docker.
"Play with Docker" is a project run by two developers and is supported and sponsored by Docker Inc. It lets you play with Docker commands and architecture in the browser without needing to install anything, or as they put it:
“It gives the experience of having a free Alpine Linux Virtual Machine in browser, where you can build and run Docker containers and even create clusters in Docker Swarm Mode.”
You can also help yourself by installing some VS Code extensions for Docker.
“Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.”
You can use Docker Compose to define your services and applications in a docker-compose.yml file and run them in isolation, just as you define your Docker environment in a Dockerfile. Then you run the docker-compose up command to start your environment. The Compose file is written in YAML, which looks similar to JSON, so it's not something totally new and out of this world.
Docker Compose is also shown in the video below so you can see it in action, but first, here is a template example of a docker-compose.yml file.
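A minimal sketch of such a file (the service names, images, and ports are assumptions for the example):

```yaml
version: "3.8"
services:
  web:                       # the application service
    build: .                 # build from the Dockerfile in this directory
    ports:
      - "8080:3000"          # map host port 8080 to container port 3000
    depends_on:
      - db
  db:                        # the database service
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:                   # named volume to persist the database files
```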
Docker Compose commands all start with "docker-compose", followed by the action you want to execute, for example:
docker-compose exec <service_name> <shell>
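Concretely, assuming a service called `web` defined in your docker-compose.yml:

```shell
docker-compose up -d          # create and start all services in the background
docker-compose exec web sh    # open a shell inside the running web service
docker-compose logs web       # view the web service's logs
docker-compose down           # stop and remove the services
```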
If you want to try Docker out, here's the download page so you can install it yourself.
Here's a video from Docker's YouTube channel explaining how to get started with Docker. The video explains everything very well, and since it's an official Docker video, there is no misinformation.
Portainer is a great tool to use in combination with Docker for GUI management of containers. Let's look at what portainer.io has to say:
“Portainer Community Edition is an open source tool for managing container based applications in Kubernetes, Docker, Docker Swarm, Azure ACI and edge environments.
Portainer can be used to set up and manage your environment, deploy applications, monitor application performance, triage problems and control who can do what. It is used by developers, devops and infrastructure teams to simplify processes and streamline operations.”
Let's take a look at Portainer's view of Docker containers. We removed our published ports and login information for security. Under published ports you should see outside ports mapped to the containers' inside ports, if your containers are running. Our three Jenkins containers are currently stopped, so there is no information on port mappings.
Here you can see information about your containers and select each one to see additional info and make changes. Under quick action you can select (from left to right):
- Exec console
Only the first two bullet points are available if your containers are stopped, like our three Jenkins containers.
Selecting one of your containers brings you to a new window with more detailed data such as:
- Container status
- Access control
- Create image
- Container details
- Connected networks
I hope you liked this article and learned something about Docker. Docker is really cool, and learning it is definitely not a waste of time, since many IT companies use it or are considering using it in the future. Having that knowledge in advance can really help your job application.
As for the technology itself: it doesn't have much use for a single developer learning at home, unless you are working in a team with friends, in which case Docker could come in handy.