Docker is a tool designed to make it easy to build, deploy, and run applications using containers.
People say “Docker containers” all the time, and it can sound as if Docker and containers are the same thing, but they are not. Containers are a widely used technology in the Linux operating system, and Docker is a tool that takes advantage of that technology. In practice, though, most people use Docker rather than other container runtimes, so Docker is often used as the name for container technology itself.
Containers
Containers allow a developer to package an application together with all of its necessary parts, such as libraries and other dependencies, and ship it all as one unit. In other words, containers exist to simplify application deployment.
Docker Container
Docker containers are a kind of virtualization technology, but they differ from traditional virtualization as we know it. In some ways a Docker container resembles a virtual machine, but unlike a virtual machine, instead of building an entire virtual operating system, Docker lets applications use the same Linux kernel as the host system they run on. The application ships only with the dependent material not already on the host computer, such as library files and configuration files. Because every container shares the host operating system’s kernel and carries only its own dependencies, Docker containers are lightweight compared to traditional virtual machines. This gives a significant performance boost and reduces the size of your application.
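You can see this kernel sharing directly. Assuming Docker is installed, the following minimal sketch runs a tiny Alpine container and prints the kernel release, which turns out to be the host’s kernel, since the container has no kernel of its own:

    # The container reports the host's kernel version, because there is no guest kernel
    docker run --rm alpine uname -r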
Benefits of using Docker containers
First, using Docker containers makes it easier to deploy applications. In the traditional way, the application and its execution environment are separate: every time you deploy, you first have to set up the runtime environment and its dependencies before deploying the application itself. In the Docker container style, one container holds everything the application needs. This makes deployment very easy, and also easy to automate.
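As a sketch of how simple this is, deploying a web server the Docker way is a single command, because the official nginx image already contains everything the application needs:

    # Pull the image if needed and start the server, publishing
    # container port 80 on host port 8080
    docker run -d -p 8080:80 nginx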
Second, you can easily stand up PaaS services using Docker containers. If you deploy middleware or databases for testing the traditional way, you must manually install software packages before you can create an instance. With Docker containers, you can simply package the PaaS service into a container, and when you need a database service for testing, you can spin it up in seconds without having to manually configure or install anything. This is very convenient.
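For example, a disposable database for testing can be started with one command. This sketch uses the official PostgreSQL image; the password, name, and port mapping are illustrative choices, not requirements:

    # Start a throwaway PostgreSQL instance (name and password are placeholders)
    docker run -d --name test-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

    # When testing is done, throw it away
    docker rm -f test-db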
Third, keep in mind that CI/CD, which stands for Continuous Integration and Continuous Delivery, is fairly prevalent in the software we develop. Docker containers simplify your CI/CD process by making it easy to automate every step of building, testing, and delivering software.
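The container-related steps of a CI pipeline often boil down to a few commands. This is a minimal sketch; the registry address, image tag, and test script are assumptions rather than fixed conventions:

    # Build the application image
    docker build -t registry.example.com/myapp:ci .

    # Run the test suite inside a disposable container of that image
    docker run --rm registry.example.com/myapp:ci ./run-tests.sh

    # Publish the tested image for the delivery stage
    docker push registry.example.com/myapp:ci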
Finally, scaling out and scaling in are very common in distributed application scenarios. The lightweight nature of Docker containers allows these scaling activities to be performed very quickly, adjusting the capacity and performance of a distributed cluster in moments. This is a huge benefit of Docker container technology.
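Because a container starts in seconds, scaling out can be a one-liner. For example, with Docker Compose, assuming a service named web is defined in the Compose file:

    # Run five replicas of the web service instead of one
    docker compose up -d --scale web=5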
Docker Architecture
This diagram shows the overall architecture of Docker. Docker uses a client-server architecture, with the Docker client communicating with the Docker daemon. The Docker daemon does the heavy lifting of building and deploying Docker containers. The Docker client and daemon can run on the same system.
Alternatively, you can connect a Docker client to a remote Docker daemon. The client and daemon communicate using a REST API, over UNIX sockets or a network interface. Both methods are supported.
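As a quick illustration, the DOCKER_HOST environment variable tells the Docker client which daemon to talk to. The host name below is a placeholder, and a real remote daemon should be protected with TLS:

    # Talk to the local daemon over its UNIX socket (the default)
    docker ps

    # Talk to a remote daemon over TCP instead (placeholder host)
    DOCKER_HOST=tcp://remote-host:2376 docker ps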
Key components of the Docker ecosystem
Three main components make up the entire Docker container ecosystem:
Docker images
Docker containers
Docker repositories
Let’s look at them one by one.
Docker image
An image is essentially a read-only template that contains instructions for creating a Docker container. An image is often based on another image, with some additional customization. For example, you might build an image based on the Ubuntu image that also installs the Apache web server and your application, along with the configuration details needed to make your application run.
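A minimal Dockerfile along those lines might look like the sketch below; the Ubuntu tag and the application directory are illustrative assumptions:

    # Start from the Ubuntu base image
    FROM ubuntu:22.04

    # Install the Apache web server
    RUN apt-get update && apt-get install -y apache2

    # Copy in the application files (hypothetical local directory)
    COPY my-app/ /var/www/html/

    # Run Apache in the foreground when a container starts
    EXPOSE 80
    CMD ["apache2ctl", "-D", "FOREGROUND"]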
Docker container
A container is a runnable instance of an image; containers and images are very closely related. You can create, start, stop, move, or delete containers using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. By default, containers are relatively well isolated from other containers and from their host machine. You can control how much isolation a container’s networking, storage, or other underlying subsystems have from other containers or the host machine. A container is defined by its image as well as the configuration options you specify when creating or starting it. When a container is deleted, any state changes not saved to persistent storage are gone. This is how containers work.
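This lifecycle maps directly onto CLI commands. A brief sketch, with illustrative container and image names:

    # Create and start a container from an image
    docker run -d --name web -p 8080:80 nginx

    # Stop and restart it; state in its writable layer survives a restart
    docker stop web
    docker start web

    # Capture its current state as a new image (the tag is a placeholder)
    docker commit web my-nginx:snapshot

    # Remove it; any state not saved elsewhere is gone
    docker rm -f web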
Docker repository
Docker repositories allow you to share images with your colleagues, customers, or the Docker community. Docker Hub is the world’s largest public Docker registry, containing a huge number of Docker images of all kinds. You can also create your own private Docker registry if you prefer. If you build your images in-house, with your own Docker daemon or your own continuous integration service, you can push them to your Docker repository. Alternatively, if the source code for your Docker image is on GitHub or Bitbucket, you can use the automated build feature provided by the Docker Hub service.
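Sharing an image through a registry is a matter of tagging it and pushing it. In this sketch, the account and repository names are placeholders:

    # Tag a local image for your Docker Hub account (names are placeholders)
    docker tag my-nginx:snapshot myaccount/my-nginx:1.0

    # Authenticate and push
    docker login
    docker push myaccount/my-nginx:1.0

    # Colleagues can now pull the image anywhere
    docker pull myaccount/my-nginx:1.0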