Docker Introduction

Introduction

Microservices are an increasingly popular architecture for building large-scale applications. Rather than using a single, monolithic codebase, applications are broken down into a collection of smaller components called microservices. This approach offers several benefits, including the ability to scale individual microservices independently, a codebase that is easier to understand and test, and the freedom to use different programming languages, databases, and other tools for each microservice.
Docker is an excellent tool for managing and deploying microservices. Each microservice can be broken down further into processes running in separate Docker containers, which can be specified with Dockerfiles and Docker Compose configuration files. Combined with an orchestration tool such as Kubernetes, each microservice can then be easily deployed, scaled, and worked on by a development team. Specifying an environment in this way also makes it easy to link microservices together to form a larger application.
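As a minimal illustration, a hypothetical docker-compose.yml might wire two application services to a database; the service names, directories, and ports below are placeholders, not part of any real project:

    # docker-compose.yml (hypothetical sketch)
    version: "3.8"
    services:
      web:                       # front-end microservice
        build: ./web             # built from a Dockerfile in ./web
        ports:
          - "8080:80"            # expose the web service on the host
        depends_on:
          - api
      api:                       # back-end microservice
        build: ./api
        environment:
          - DB_HOST=db           # services reach each other by name
      db:
        image: postgres:13       # off-the-shelf database image
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

Running docker-compose up then builds and starts all three containers on a shared network, where each service can reach the others by its service name.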
The following sections describe the technology behind this approach.

Motivation behind using Docker

  • ROI and Cost Savings: The more a solution drives down costs while raising profits, the better it is, especially for large, established companies that need to generate steady revenue over the long term. Docker helps deliver this type of savings by dramatically reducing the infrastructure resources needed to run applications.
  • Standardization and Productivity: One of the biggest advantages of a Docker-based architecture is standardization. Docker provides repeatable development, build, test, and production environments. Docker containers allow you to commit changes to your Docker images and version control them. For example, if a component upgrade breaks your whole environment, it is very easy to roll back to a previous version of your Docker image (see the sketch after this list).
  • Compatibility and Maintainability: Eliminate the “it works on my machine” problem once and for all. One benefit the entire team will appreciate is parity: in Docker terms, your images run the same no matter which server or whose laptop they run on.
  • Rapid Deployment: Docker creates a container for each process and does not boot a full operating system, so new containers start in seconds.
  • Continuous Deployment and Testing: Consistent environments from development through production make Docker a natural fit for continuous integration and deployment pipelines.
  • Isolation: Docker ensures your applications and resources are isolated and segregated; each container gets its own resources, isolated from other containers.
  • Since the application and its dependencies are packaged together, the app has no external dependencies on the host, and the resulting container stays lightweight.
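A minimal sketch of that rollback workflow, using a hypothetical image called myapp:

    # Build and tag each release of the image
    docker build -t myapp:1.0 .
    docker run -d --name myapp myapp:1.0

    # Later, a new build turns out to be broken
    docker build -t myapp:1.1 .
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myapp:1.1

    # Rolling back is just running the previous tag again
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myapp:1.0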

Difference Between VMs & Containers

VM (Virtual Machine)

Virtual machines are sandboxed environments, each containing a full-fledged computer with its own virtual hardware, operating system, kernel, and software. Because an entire operating system has to start, booting up a virtual machine can sometimes take a few minutes.

Container

Containers are a lightweight alternative to full machine virtualization. They are commonly used to sandbox a single application, an approach that has become popular with the rise of microservices. Containers use the host operating system’s kernel, so no boot-up time is needed; within a few seconds your containerized application is up. Containers are lightweight because they don’t need the extra load of a hypervisor but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines.
Containers use kernel features such as namespaces and control groups (cgroups) to containerize an application.
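One quick way to see that a container shares the host kernel (assuming Docker is installed and the public alpine image is available):

    # Print the kernel version on the host
    uname -r

    # Print the kernel version inside a throwaway Alpine container
    docker run --rm alpine uname -r

    # Both commands report the same kernel version, because the container
    # runs on the host's kernel rather than booting its own.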

Docker & Container

Docker is a company that provides software (also called Docker) that allows you to build, run, and manage software containers. While Docker’s container technology has been getting most of the press, there are other container solutions as well, such as Google/Canonical’s LXC/LXD and CoreOS’s rkt.
A software container is a way to bundle and isolate processes (software) running on a server.

Docker

Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
Decoupling applications from the underlying hardware is the fundamental concept behind virtualization. Containers go a step further and decouple applications from the underlying OS. This enables cloudlike flexibility, including portability and efficient scaling. Containers bring another level of efficiency, portability, and deployment flexibility to developers beyond virtualization.

Docker Architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
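For example, on a typical Linux host you can talk to the daemon’s REST API directly over its default UNIX socket (assuming curl is installed; the exact response depends on your Docker release):

    # Ask the Docker daemon for its version over the default UNIX socket
    curl --unix-socket /var/run/docker.sock http://localhost/version

    # The docker CLI performs the equivalent API call for you
    docker version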

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
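For instance, the same client can be pointed at different daemons via the -H flag or the DOCKER_HOST environment variable (the remote hostname below is a placeholder):

    # Talk to the local daemon over its UNIX socket (the default)
    docker -H unix:///var/run/docker.sock ps

    # Point the same client at a remote daemon over TCP
    docker -H tcp://remote-host:2375 ps

    # Or set it once for the whole shell session
    export DOCKER_HOST=tcp://remote-host:2375
    docker ps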

Docker Registries

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry. Docker Store allows you to buy and sell Docker images or distribute them for free. For instance, you can buy a Docker image containing an application or service from a software vendor and use the image to deploy the application into your testing, staging, and production environments. You can upgrade the application by pulling the new version of the image and redeploying the containers.
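A minimal sketch of working with registries (the private registry address and image names are hypothetical):

    # Pull an official image from Docker Hub (the default registry)
    docker pull nginx:latest

    # Run a private registry locally using the official registry image
    docker run -d -p 5000:5000 --name registry registry:2

    # Re-tag the image for the private registry and push it there
    docker tag nginx:latest localhost:5000/my-nginx:1.0
    docker push localhost:5000/my-nginx:1.0

    # Any host that can reach the registry can now pull that image
    docker pull localhost:5000/my-nginx:1.0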

Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
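A minimal Dockerfile along the lines of that example might look like this; the ./site/ path and port are placeholders for your own application:

    # Start from the official Ubuntu base image
    FROM ubuntu:20.04

    # Avoid interactive prompts during package installation
    ENV DEBIAN_FRONTEND=noninteractive

    # Install the Apache web server; each instruction adds a layer
    RUN apt-get update && \
        apt-get install -y apache2 && \
        rm -rf /var/lib/apt/lists/*

    # Copy your application into Apache's document root (hypothetical path)
    COPY ./site/ /var/www/html/

    # Configuration details needed to make the application run
    EXPOSE 80
    CMD ["apache2ctl", "-D", "FOREGROUND"]

Building it with docker build -t my-apache-app . produces an image in which unchanged layers are reused from the cache on subsequent builds.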

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
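A short sketch of that lifecycle using the CLI (the container and volume names are placeholders):

    # Create and start a container from an image, attaching a named volume
    docker run -d --name web -v web-data:/usr/share/nginx/html -p 8080:80 nginx

    # Stop and start it again; data in the named volume persists
    docker stop web
    docker start web

    # Capture the container's current state as a new image
    docker commit web my-nginx-snapshot

    # Removing the container discards any changes not stored in the volume
    docker rm -f web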

The underlying technology

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality.

Namespaces

Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
Docker Engine uses namespaces such as the following on Linux:

  • The pid namespace: Process isolation (PID: Process ID).
  • The net namespace: Managing network interfaces (NET: Networking).
  • The ipc namespace: Managing access to IPC resources (IPC: Inter Process Communication).
  • The mnt namespace: Managing filesystem mount points (MNT: Mount).
  • The uts namespace: Isolating kernel and version identifiers. (UTS: Unix Timesharing System).
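The pid namespace, for example, is easy to observe (assuming the public alpine image; --pid=host is a standard docker run option):

    # Inside a normal container, the process list shows only the
    # container's own processes, with its main process as PID 1
    docker run --rm alpine ps aux

    # Sharing the host's pid namespace instead exposes all host processes
    docker run --rm --pid=host alpine ps aux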

Control groups

Docker Engine on Linux also relies on another technology called control groups (cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
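Resource limits map directly onto docker run flags; the image and values below are arbitrary:

    # Cap the container at 256 MB of RAM and half a CPU core
    docker run -d --name limited --memory=256m --cpus=0.5 nginx

    # Observe live resource usage against those limits
    docker stats limited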

Union file systems

Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers.
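You can inspect the layers of any image to see this in action (output varies by image and Docker version):

    # List the layers that make up an image, one per Dockerfile instruction
    docker history ubuntu:20.04

    # Show the content-addressed layer digests that the storage driver stacks
    docker image inspect --format '{{.RootFS.Layers}}' ubuntu:20.04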

Container format

Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.

Installation

https://docs.docker.com/install/

Useful docker commands

docker ps — List containers

docker logs — Fetch the logs of a container

docker start — Start one or more stopped containers

docker stop — Stop one or more running containers

docker pull — Pull an image or a repository from a registry

docker push — Push an image or a repository to a registry

docker rm — Remove one or more containers

docker rmi — Remove one or more images

docker cp — Copy files/folders between a container and the local filesystem

docker commit — Create a new image from a container’s changes

docker build — Build an image from a Dockerfile

docker exec — Run a command in a running container

docker images — List images

docker info — Display system-wide information

docker kill — Kill one or more running containers
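Putting a few of these together, a typical local workflow might look like this (the image and container names are hypothetical):

    # Build an image from the Dockerfile in the current directory
    docker build -t myapp:latest .

    # Start a container from it and check that it is running
    docker run -d --name myapp myapp:latest
    docker ps

    # Inspect its logs and open a shell inside it
    docker logs myapp
    docker exec -it myapp sh

    # Tear everything down when you are done
    docker stop myapp
    docker rm myapp
    docker rmi myapp:latest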
