So far, the "Java Concurrent Programming" and "Docker Tutorial" series have been published. Follow "the back-end advanced road" to read all of the articles in one place.
Java concurrent programming:
- Java Concurrent Programming Series -(1) Concurrent programming basics
- Java Concurrent Programming Series -(2) Concurrency utility classes for threads
- Java Concurrent Programming Series -(3) Atomic operations with CAS
- Java Concurrent Programming Series -(4) Explicit locks with AQS
- Java Concurrent Programming Series -(5) Java concurrent containers
- Java Concurrent Programming Series -(6) Java thread pools
- Java Concurrent Programming Series -(7) Java thread safety
- Java Concurrent Programming Series -(8) JMM and underlying implementation principles
- Java Concurrent Programming Series -(9) Concurrency in JDK 8/9/10
Docker tutorial:
- Docker series -(1) Principle and basic operation
- Docker series -(2) image production and release
- Docker series -(3) Docker-compose usage and load balancing
JVM Performance optimization:
- JVM Performance Optimization Series -(1) Java Memory region
- JVM Performance Optimization Series -(2) Garbage collector and Memory allocation Strategies
The previous article introduced the basic principles and operations of Docker. This article introduces how to build Docker images and publish them.
Image file structure
A Docker image is essentially a collection of files stacked one on top of another to form the final image. From the bottom up, the layers are: the boot file system layer, the operating system layer, the proprietary image layer, and the read/write layer.
- Boot file system layer: the file system used when Docker starts. It is automatically unmounted after startup, and users never interact with this layer directly.
- Operating system layer: this layer contains the operating-system files of a distribution such as CentOS or Ubuntu. It provides directories such as /dev, /proc, /bin, and /etc, forming a minimal operating system. Many tools are not included, such as vi, wget, and curl. Note that this layer does not contain a Linux kernel; the image can run on any Linux kernel that meets its requirements.
- Proprietary image layer: most major software projects provide proprietary images built on top of the two layers above. Nginx, Tomcat, and others all have official images that can be downloaded directly from Docker Hub.
- Read/write layer: this is the layer we operate on when building our own image. It is the dynamic runtime environment; subsequent operations such as ENV, VOLUME, and CMD take effect in this environment.
Building an image is all about modifying the read/write layer. When a file inside the image needs to be changed, the change is made only in the topmost read/write layer; the contents of the lower file-system layers are not modified. The original version of the file still exists in the read-only layers, but it is hidden by the new version in the read/write layer. When docker commit saves the modified container file system as a new image, only the updated files in the topmost read/write layer are stored.
You can view an image's layers with the docker history command.
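For example, a minimal sketch of inspecting the layers of the ubuntu image used below (the exact output depends on the images you have locally):
docker history ubuntu:latest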
Building an image
There are two common ways to build an image. The first is to generate an image from a configured container. The other is to build a new image on top of an existing one with a Dockerfile, which is the more common approach.
Creating an image from a configured container
This section uses an nginx image as an example to walk through the whole process.
1) Download the base image. Ubuntu is used as the base image here. Since there is no local copy, first use docker search to find the name of the official image, then docker pull to download it locally.
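For example, a sketch of these two steps, assuming the official ubuntu image is the one wanted:
docker search ubuntu
docker pull ubuntu:latest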
2) Start the image interactively so that software can be installed inside the container. -it enables interactive mode, and /bin/bash is the terminal to start. Running the command below puts you inside the container.
docker run -it ubuntu:latest /bin/bash
3) Now follow the normal installation process for Nginx. Since the Ubuntu image is only a minimal system, you may first need to install some required tools with apt-get install.
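A sketch of the installation inside the container, assuming the nginx package from Ubuntu's repositories is sufficient:
apt-get update
apt-get install -y nginx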
4) Exit the container and use the docker commit command to generate a new image.
Note that there are two ways to leave the container. Typing exit closes the container as well. If you only want to leave the terminal without stopping the container, use the Ctrl+P Ctrl+Q shortcut; the container keeps running in the background.
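To confirm that the detached container is still running, you can list the running containers:
docker ps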
Run docker commit with the container ID or name and the new image name.
docker commit e0c618df0979 ubuntu-nginx
Now you can start the new image in the usual way.
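For example, a sketch that runs nginx in the foreground from the new image, assuming nginx was installed with apt-get as above (the port mapping is arbitrary):
docker run -d -p 8080:80 ubuntu-nginx nginx -g 'daemon off;'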
Use Dockerfile
The manual approach above, entering the container and committing it as an image, is generally tedious. Usually we build images with a Dockerfile instead, which requires writing a Dockerfile.
The Dockerfile
Dockerfile is a configuration file in text format that users can use to quickly create custom images.
Here is a simple Dockerfile that copies the jar file produced by the build into the container, declares the port the container exposes, and finally specifies the command to run when the container starts.
FROM openjdk:8
ADD ["The target/bazaar - 1.0.0. Jar"."bazaar.jar"]
EXPOSE 1234
ENTRYPOINT ["java"."-jar"."/bazaar.jar"]
Commonly used instructions in a Dockerfile include the following (a combined sketch follows the list):
- FROM: must be the first instruction; it specifies the base image.
- MAINTAINER: specifies maintainer information.
- RUN: runs a command in a shell during the image build.
- EXPOSE: the format is EXPOSE <port> [<port>...]. It declares the ports the container exposes; when starting the image you can bind port mappings with -p or -P.
- ENV: defines an environment variable that can be referenced by subsequent instructions and is recorded in the container.
- ADD: copies the specified <src> into the container at <dest>. <src> can be a path relative to the directory containing the Dockerfile; it can also be a tar file, which is automatically unpacked.
- VOLUME: the format is VOLUME [path]. It creates a mount point that can be accessed from the host or from other containers, typically used to hold data that needs to persist.
- USER: specifies the user that runs the container; subsequent RUN instructions also use this user.
- WORKDIR: specifies the working directory for subsequent instructions.
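As a combined illustration, here is a minimal Dockerfile sketch that uses several of these instructions; the base image, file names, and port are assumptions for illustration only:
FROM ubuntu:latest
MAINTAINER someone@example.com
RUN apt-get update && apt-get install -y curl
ENV APP_HOME=/app
WORKDIR /app
# app.tar.gz is unpacked automatically by ADD
ADD app.tar.gz /app
VOLUME ["/app/data"]
EXPOSE 8080
CMD ["/app/start.sh"]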
A more confusing point is the difference between CMD and ENTRYPOINT: both specify a command to run, but they behave slightly differently.
- CMD gives a default command for the container, which can be overridden at run time.
- ENTRYPOINT defines the main command that runs when the container starts; it always executes and is not overridden by arguments passed to docker run.
(1) CMD is used alone
FROM debian:wheezy
CMD ["/bin/ping"."localhost"]
Started without any arguments, the container pings localhost:
$ docker run -it test
PING localhost (127.0.0.1): 48 data bytes
56 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.076 ms
56 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
56 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.090 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.076/0.084/0.090/0.000 ms
But if you start the container with a new command, the original CMD will be overwritten by the new command.
docker run -it test bash
root@e8bb7249b843:/#
(2) CMD and ENTRYPOINT are used together
CMD is usually used to pass parameters to ENTRYPOINT, as shown in the following example:
FROM debian:wheezy
ENTRYPOINT ["/bin/ping"]
CMD ["localhost"]
Running the image directly without any arguments keeps pinging localhost:
$ docker run -it test
PING localhost (127.0.0.1): 48 data bytes
56 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.096 ms
56 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms
56 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.090 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.088/0.091/0.096/0.000 ms
If you pass an argument when running the image, ping targets that argument instead, and the CMD is overridden:
$ docker run -it test google.com
PING google.com (173.194.45.70): 48 data bytes
56 bytes from 173.194.45.70: icmp_seq=0 ttl=55 time=32.583 ms
56 bytes from 173.194.45.70: icmp_seq=2 ttl=55 time=30.327 ms
56 bytes from 173.194.45.70: icmp_seq=4 ttl=55 time=46.379 ms
^C
--- google.com ping statistics ---
5 packets transmitted, 3 packets received, 40% packet loss
round-trip min/avg/max/stddev = 30.327/36.430/46.379/7.095 ms
If you want a more generic container, use CMD ["/path/dedicated_command"] in the Dockerfile, so that the default command can be overridden as needed when the container is run.
Building an image from the Dockerfile
Generally, after writing the Dockerfile, we change into the directory that contains it and run docker build. Docker generates the image according to the steps specified in the Dockerfile.
$ docker build -t your_image_name .
That is the whole process of building an image. Next comes publishing the image.
Publishing images
There are two options for publishing an image: push it directly to the official Docker Hub, which only requires registering a Docker account; or set up a private registry locally and push the image there.
Docker Hub
After registering an account at hub.docker.com/, remember your user name; you will need to tag the local image with it and then push.
To tag the local image, use the following command:
docker tag myImage:v1 your_user_name/myImage:v1
The next step is simply to push; the image goes to the official registry automatically. Note that you may need to run docker login first, entering your user name and password.
docker push your_user_name/myImage:v1
The official registry now has your image, and you can pull it directly from now on.
Local private registry
(1) First download the registry image: docker pull registry.
(2) Start the registry container: docker run -d --name reg -p 5000:5000 registry
(3) Configure HTTP access. By default the Docker client only talks to registries over HTTPS, so you need to explicitly allow HTTP for this private registry.
Take the CentOS configuration as an example.
Note that the IP in the configuration should be set to the actual address of the registry, which can be found with docker inspect reg.
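A minimal sketch of the configuration, assuming your Docker daemon reads /etc/docker/daemon.json and that the registry is reachable at 192.168.244.7:5000 as in the examples below:
{
  "insecure-registries": ["192.168.244.7:5000"]
}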
Restart the Docker service
systemctl daemon-reload
systemctl restart docker
This completes the setup of the private registry.
Next, push the image to this registry. The process is similar to pushing to the official registry, except that the tag uses the private registry's address in place of the user name.
(1) Tag
docker tag hello-world 192.168.244.7:5000/hello-world
(2) Push image
docker push 192.168.244.7:5000/hello-world
(3) Query the images in the registry:
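A sketch of querying the registry catalog through its v2 HTTP API, assuming the registry address used above:
curl http://192.168.244.7:5000/v2/_catalog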
(4) Query an image's tags:
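Similarly, a sketch that lists the tags of the hello-world repository pushed above:
curl http://192.168.244.7:5000/v2/hello-world/tags/list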
That covers building and publishing images. The next article will introduce docker-compose and Docker network communication in actual deployments.
Reference links:
- Stackoverflow.com/questions/2…
Search “the road to back-end improvement”, follow the official account, and immediately get the latest articles and BATJ high-quality interview courses worth 2,000 yuan.