Original article; reprints are welcome. When reprinting, please credit the source: IT People Story. Thank you! Docker Guide (1)

Container technology, along with its surrounding tools and platforms, has been extremely popular over the past two years. It features prominently on the agenda of every major technology forum and cloud computing summit, and the mainstream cloud platforms have all moved quickly to offer container services.

Major players

  • Alibaba
  • JD.com
  • Meituan
  • Baidu
  • Tencent
  • Inspur
  • Didi






Search interest in Docker

  • Distribution of searches for the keyword “Docker”






Highlights of the Docker “Intermediate Chapter”

First learn how to use it. The “Beginner Chapter” may have left some gaps: in practice it only covered GitHub + Jenkins + Docker (commit code, Jenkins automatically packages a Docker image, and the whole system is released), giving everyone a hands-on feel without exercising a real application. The “Intermediate Chapter” is for deeper understanding, in the spirit of a master teaching apprentices: I tell you, I show you, then you try it yourself. It goes straight into practice, highlighting real business scenarios, and Kubernetes is added in. Isn’t that nice?
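The GitHub + Jenkins + Docker flow mentioned above can be sketched as the shell step Jenkins would run on each commit. This is a minimal sketch under assumptions: the registry, image name, and test script are illustrative, not from the article, and the docker commands only execute when APPLY=1 and the CLI is installed.

```shell
# CI step sketch: build an image tagged with the commit SHA,
# run the tests inside it, and push it on success.

# Compose a repo:tag image reference from an image name and a tag.
image_tag() {
    printf '%s:%s' "$1" "$2"
}

# Hypothetical names -- substitute your own registry and repository.
IMAGE=$(image_tag "registry.example.com/myapp" "${GIT_COMMIT:-dev}")

if [ "${APPLY:-0}" = 1 ] && command -v docker >/dev/null 2>&1; then
    docker build -t "$IMAGE" .                # package the app into an image
    docker run --rm "$IMAGE" ./run-tests.sh   # run the test suite in a container
    docker push "$IMAGE"                      # publish for release
fi
echo "pipeline image: $IMAGE"
```

Tagging the image with the commit SHA makes every build traceable back to the exact code that produced it.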

What can Docker do





  • Simplified configuration This is the main use case that Docker Inc. promotes. The biggest benefit of a virtual machine is the ability to run software with many different platforms and configurations on the same hardware, and Docker provides the same capability at a much lower cost. It lets you put the runtime environment and configuration into code and then deploy it: the same Docker configuration can be reused across different environments, reducing both hardware requirements and the coupling between the application and its environment.
  • Code pipeline management The previous scenario is a great help in managing the code pipeline. Code passes through many intermediate environments on its way from the developer’s machine to production, and each of them has its own slight differences. Docker provides a consistent environment from development all the way to release, making the code pipeline much simpler.
  • Developer productivity This brings an additional benefit: Docker makes developers more productive. (For a more detailed example, see Aater’s talks at DevOpsDays Austin 2014 or DockerCon.) In a development environment we want two things: for it to be as close to production as possible, and for it to be quick to set up. Ideally, achieving the first goal would mean running every service in its own virtual machine, mirroring how services run and are monitored in production. But we do not want to depend on a network connection all the time, and connecting remotely after every recompile is a hassle. This is where Docker does particularly well: development machines usually have limited memory, and when we relied on virtual machines we often had to add more, whereas Docker can comfortably run dozens of services on a single machine.
  • App isolation There are a number of reasons you might run multiple applications on a single machine, such as the developer-productivity scenario above. Two trends make this relevant: consolidating servers to reduce costs, and splitting a monolithic application into loosely coupled services. (For why loose coupling matters, see Steve Yegge’s well-known post comparing Google and Amazon.)
  • Server consolidation Just as virtual machines are used to consolidate multiple applications, Docker’s ability to isolate applications lets you consolidate multiple servers to cut costs. Because it carries no memory footprint for multiple operating systems and can share unused memory across instances, Docker can consolidate servers more densely than virtual machines.
  • Debugging capabilities Docker provides many tools that are not strictly container-specific but apply well to containers, including the ability to checkpoint containers, version them, and diff two containers, all of which helps with debugging.
  • Multi-tenancy Another interesting use of Docker is in multi-tenant applications, where it can avoid rewriting critical code. One particular example is building a fast, easy-to-use multi-tenant environment for an IoT application. Multi-tenant code bases are complex and hard to work with, and re-architecting such an application costs both time and money. With Docker you can cheaply and easily create isolated environments running multiple instances of the application tier, one per tenant, thanks to the speed at which Docker environments start and its efficient diff command.
  • Rapid deployment Before virtualization, bringing new hardware resources online took days; virtualization cut that to minutes. Docker cuts it to seconds, because it only creates a container process rather than booting an operating system. This is the feature that both Google and Facebook value: you can create and destroy resources in the data center without worrying about restart overhead. Typical data center utilization is only about 30%, and Docker combined with effective resource allocation can raise it.
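To make the “dozens of services on one development machine” point concrete, here is a hedged sketch of spinning up a few backing services as containers. The project prefix, service list, and images are assumptions for illustration; containers only actually start when APPLY=1 and docker is installed.

```shell
# Start a set of backing services for a local dev environment.

# Derive a container name from a project prefix and a service label,
# so containers from different projects do not collide.
container_name() {
    printf '%s_%s' "$1" "$2"
}

PROJECT=devenv
# Each entry is label:image -- all illustrative.
SERVICES="redis:redis:6 db:postgres:13 web:nginx:1.21"

for entry in $SERVICES; do
    svc=${entry%%:*}    # label, e.g. "redis"
    img=${entry#*:}     # image reference, e.g. "redis:6"
    name=$(container_name "$PROJECT" "$svc")
    if [ "${APPLY:-0}" = 1 ] && command -v docker >/dev/null 2>&1; then
        docker run -d --name "$name" "$img"   # start the service as a container
    fi
    echo "service $name -> $img"
done
```

Each `docker run` here starts in seconds, which is what makes running the whole stack locally practical compared with one virtual machine per service.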

Container orchestration tools

  • Kubernetes and Docker Swarm



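Both orchestrators solve the same core problem: keep N replicas of a containerized service running. A minimal sketch of the equivalent commands, under assumptions (the service name `web` and image are made up; nothing executes unless APPLY=1 and the relevant CLI is installed, otherwise the commands are just printed as a dry run):

```shell
# Run 3 replicas of a web image under Docker Swarm vs Kubernetes.
REPLICAS=3
IMAGE=nginx:1.21

# Execute a command only in apply mode and when its binary exists;
# otherwise print what would run (a dry run).
run_if_present() {
    if [ "${APPLY:-0}" = 1 ] && command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "dry-run: $*"
    fi
}

# Docker Swarm: a "service" object keeps the replica count satisfied.
run_if_present docker service create --name web --replicas "$REPLICAS" "$IMAGE"

# Kubernetes: a Deployment expresses the same intent via kubectl.
run_if_present kubectl create deployment web --image="$IMAGE" --replicas="$REPLICAS"
```

In both systems the orchestrator, not the operator, is responsible for restarting replicas that die and spreading them across nodes.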

DevOps = culture + process + tools

DevOps came about for a reason. There are two bottlenecks in the software development life cycle. The first was between the requirements phase and the development phase, where changing requirements placed heavy demands on developers; the agile methodology emerged in response, emphasizing adaptation to change, rapid iteration, and continuous delivery. The second bottleneck is between the development phase and the build-and-deploy phase, where a backlog of completed development work can clog up deployment and delay delivery; hence DevOps.

The three principles of DevOps:

  • Infrastructure as Code DevOps relies on automating repetitive work with scripts or software. Examples include Docker (containerization), Jenkins (continuous integration), Puppet (configuration management), and Vagrant (virtualized development environments).
  • Continuous Delivery means releasing reliable software into production and delivering it to users. (Continuous deployment, by contrast, does not necessarily mean delivery to users.) Two times matter when a product goes live: Time to Repair (TTR) and Time to Market (TTM); to deliver reliable software efficiently, both should be driven down as far as possible. Deployment itself can be done in several ways, such as blue-green deployment and canary releases.
  • Developers and operations personnel must work closely together on a regular basis, and development should treat operations as another user group for the software. Some suggestions for collaboration: 1. automate (to reduce unnecessary coordination); 2. keep changes small (each modification carries less release risk); 3. maintain a unified information hub (e.g., a wiki both sides share); 4. standardize on common collaboration tools (e.g., Jenkins).
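The blue-green strategy mentioned above can be sketched as follows: two parallel environments exist, traffic points at the “live” color, and each release goes to the idle color before the pointer is flipped. This is a sketch under assumptions: the state file, container names, and image are invented for illustration, and docker only runs when APPLY=1.

```shell
# Blue-green deployment sketch.
STATE_FILE=${STATE_FILE:-/tmp/live_color}

# Given the currently live color, return the idle color that
# should receive the next release.
next_color() {
    if [ "$1" = blue ]; then echo green; else echo blue; fi
}

live=$(cat "$STATE_FILE" 2>/dev/null || echo blue)
target=$(next_color "$live")
echo "live: $live, deploying to: $target"

if [ "${APPLY:-0}" = 1 ] && command -v docker >/dev/null 2>&1; then
    docker rm -f "web_$target" 2>/dev/null || true   # clear the idle slot
    docker run -d --name "web_$target" myapp:latest  # start the new release
    # After health checks pass, flip traffic by updating the pointer
    # (in practice: reload the load balancer / reverse proxy config).
    echo "$target" > "$STATE_FILE"
fi
```

Because the old color keeps running until the pointer flips, rolling back is just flipping the pointer back, which is exactly the TTR reduction the principle above calls for.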