➤ How do system administrators cope with developers’ “Challenge” when they have their hands full?
On June 27, Docker hosted a webinar on "Docker for System Administrators". The session was built around a familiar everyday scenario: a system administrator is sitting at his desk minding his own business when a developer walks in and says, "Here's the new application, it's packaged as a Docker image, please deploy it as soon as possible." The purpose of the session was to give system administrators practical guidance on managing containerized applications. (Video, may require a proxy to access: https://www.youtube.com/watch?v=kT76aLugp48)
➤ North American Moby Summit and Open Source Summit
The next Moby Summit will be held in Los Angeles on September 14, 2017, as part of the North American Open Source Summit. Following the success of the previous edition, we will keep the same format: short technical talks and demos in the morning and informal networking sessions in the afternoon. In the meantime, we are looking for speakers to talk about how they use the Moby project.
➤ Protect the Atsea App with Docker Secrets
Passing application configuration as environment variables was once considered a best practice for 12-factor applications. However, environment variables can leak into logs, and it is difficult to track how and when that information gets exposed. Instead of environment variables, Docker provides secrets for managing system configuration and confidential information.
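As a rough sketch of how this looks in practice (the secret and service names below are illustrative, and swarm mode must be enabled), a secret is created once and then mounted into the service as a file under /run/secrets/ rather than exposed as an environment variable:

```shell
# Create a secret from stdin (requires an initialized swarm)
printf "S3cretPassw0rd" | docker secret create postgres_password -

# Attach the secret to a service; the official postgres image can
# read the password from the mounted file instead of a plain env var
docker service create --name atsea_db \
  --secret postgres_password \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password \
  postgres
```

Because the secret is delivered as a file in an in-memory filesystem, it never appears in docker inspect output or in the container's environment dump.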
➤ Secure Kubernetes clusters using Pod security policies
As container technology matures and more and more applications move to clustered environments, defining and enforcing cluster security policies becomes increasingly important. A cluster security policy provides a framework for ensuring that pods and containers run only with appropriate permissions and can access only a limited set of resources. Security policies also give cluster administrators a way to control resource creation by limiting the capabilities available to specific users, groups, or namespaces.
This article introduces Pod security policies in Kubernetes. Because Pod security policies depend on an organization's rules and a particular application's requirements, there is no one-size-fits-all approach; instead, we'll discuss three common scenarios and walk through creating Pod security policies tailored to your situation.
Case 1: Prevent pods from running as root
One of the most common uses of Pod security policies is to create a more secure cluster by preventing the containers in a pod from running as root.
Case 2: Prevent pods from accessing certain volume types
As a cluster administrator, you may want to limit the storage options available to containers, either to control costs or to prevent access to certain information. You can do this by listing the allowed volume types under the volumes key of the Pod security policy.
Case 3: Prevent pods from accessing host ports
Another common security issue is containers gaining access to host resources, such as host ports or network interfaces. Pod security policies let cluster administrators enforce rules that restrict such access.
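Putting the three cases together, a single PodSecurityPolicy along the following lines would cover them. This is an illustrative sketch (field values assume a Kubernetes version of this era, where the resource lives under extensions/v1beta1); adjust it to your cluster's needs:

```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  # Case 1: reject any pod whose containers would run as root
  runAsUser:
    rule: MustRunAsNonRoot
  # Case 2: only these volume types may be mounted
  volumes:
    - configMap
    - emptyDir
    - secret
    - persistentVolumeClaim
  # Case 3: no hostPorts ranges are listed, so host ports are
  # forbidden; host namespaces are disabled as well
  hostNetwork: false
  hostIPC: false
  hostPID: false
  # Remaining required fields, left permissive for brevity
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```

The policy takes effect for a user, group, or namespace once RBAC grants that subject "use" permission on it, which is how administrators scope different policies to different teams.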
➤ Publish your first image to Docker Hub
Thanks for following our previous Docker posts; let's dive in and explore further. You've already seen how to run containers and pull images; now let's publish an image of our own for others to use. What do we need?
- A Dockerfile
- Your application
Why do we need a Dockerfile?
Traditionally, to run an application (say, a Python application), you need to install a Python runtime and all of the app's dependencies on your machine. This creates a problem: the environment on the server where the app runs must match the one where it was developed. With Docker, you don't need to set up any of that on the host. You can use a portable Python runtime in the form of an image, with nothing to install. Your build can then bundle your application code with that base Python image, ensuring that the application, its dependencies, and the runtime all travel together.
How that image is built is described in a Dockerfile. A Dockerfile defines the environment inside your container: which ports will be exposed, which files you want to copy into the environment, and so on. After that, an application built from this Dockerfile will run the same way everywhere.
Let's create a new directory and create a Dockerfile in it.
```dockerfile
FROM python:3.6
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME World
CMD ["python", "app.py"]
```
Now you have created your Dockerfile. As you can see, the syntax is very straightforward.
The next step is to create a Python application.
```python
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    html = "Hello {name}!" \
           "<b>Hostname:</b> {hostname}<br/>"
    return html.format(name=os.getenv("NAME", "world"),
                       hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
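The Dockerfile's pip install step expects a requirements.txt file next to app.py. For this app it only needs to list Flask (pinning a version is optional but good practice):

```
Flask
```

With that file in place, the directory contains everything the build needs.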
OK ! Now that you have created all the necessary files, start building your application.
Use the ls command to view all files:
```shell
$ ls
app.py  requirements.txt  Dockerfile
```
Creating an image:
```shell
docker build -t imagebuildinginprocess .
```
Where is the new image? It is in your machine's local image registry.
```shell
$ docker images
REPOSITORY               TAG     IMAGE ID      CREATED         SIZE
imagebuildinginprocess   latest  4728a04a9d39  14 minutes ago  694MB
```
Run it:
```shell
docker run -p 4000:80 imagebuildinginprocess
```
What we are doing here is mapping the host's port 4000 to port 80, the port the container exposes. You should see a message from Python indicating that the application is serving on http://0.0.0.0:80. But that message comes from inside the container, which doesn't know we mapped its port 80 to 4000; from the host, the service address is http://localhost:4000. Open that address in a browser and you'll see "Hello World" along with the container's hostname on the page.
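You can also verify the mapping from the command line (assuming the container from the previous step is still running and curl is installed):

```shell
# Show the port mapping Docker set up for the running container
docker ps --format "{{.Image}} -> {{.Ports}}"

# Request the page through the mapped host port
curl http://localhost:4000
```

The curl response is the same HTML the browser renders, confirming that traffic to host port 4000 reaches the container's port 80.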
Next, we'll push the image to a registry so we can use it anywhere. The Docker CLI uses Docker's public registry by default.
Log in to the Docker public registry from your local machine. (If you don't have an account yet, sign up at cloud.docker.com.)
```shell
docker login
```
Tag the image: give the image a name and a version tag. This step isn't required, but it is recommended, because tags are a great help with version management (think ubuntu:16.04 versus ubuntu:17.04).
```shell
docker tag imagebuildinginprocess rusrushal13/get-started:part1
```
Publish the image: upload your tagged image to the repository. Once the push finishes, you can see your new image, along with its pull command, on Docker Hub.
```shell
docker push rusrushal13/get-started:part1
```
OK, you're done: you have successfully published your first image! Check it out on Docker Hub.
This GitHub repository is a great reference, and I suggest you take a look: https://github.com/jessfraz/dockerfiles
That's the end of this issue of "Ship's Log". See you next time!
References
- https://blog.docker.com/2017/07/docker-sysadmin-webinar-qa/
- https://blog.docker.com/2017/07/title-moby-summit-alongside-open-source-summit-north-america/
- https://blog.docker.com/2017/07/securing-atsea-app-docker-secrets/
- https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-cluster-psp/#assumptions-and-prerequisites
- https://dev.to/rusrushal13/publish-your-first-image-to-docker-hub