This article focuses on running microservices-based applications on Kubernetes clusters.

Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers

0. Preparation

0.1 Project Introduction

The project has a single function: enter a sentence into a web browser, and the application calculates the sentiment the sentence expresses.

From a technical point of view, the application contains three microservices, each containing specific functionality:

  • Sa-frontend: an Nginx web server that serves the ReactJS static files;
  • Sa-webapp: a Java (Spring) web application that handles requests from the front end;
  • Sa-logic: a Python application that performs the sentiment analysis.

We can describe this interaction in terms of data flows between microservices:

  1. The client application requests the initial page index.html (which loads the scripts of the ReactJS application);
  2. User interaction with the application triggers requests to the Spring-based WebApp;
  3. The WebApp forwards the sentence to the Python application for sentiment analysis;
  4. The Python application calculates the sentiment value and returns it;
  5. The WebApp returns the response to the ReactApp, which presents the information to the user.

Clone the codebase now: github.com/heqingbao/k… Now we’re going to do something even more exciting.

1. Run microservices-based applications on your computer

We need to start all three services, beginning with the front-end application.

1.1 Deploying the SA-frontend project

1.1.1 Configuring the Local deployment of React

To run the React application, you need to install NodeJS and NPM on your computer. After installing NodeJS and NPM, go to the sa-frontend directory on your terminal and run the following command:

npm install

This command downloads all JavaScript dependencies for the React application into the node_modules folder (all dependencies are defined in the package.json file). After all dependencies are resolved, run the following command:

npm start

That’s it! The React application is running, and it is now accessible at localhost:3000 by default. You are free to modify the code and watch the effects live in the browser.

1.1.2 Building the React Application for Production

For production, we need to build the application into static files and serve them through a web server.

Enter the sa-frontend directory on the terminal and run the following command:

npm run build

This command generates a folder named build in the project directory. This folder contains all the static files required by the ReactJS application.

1.1.3 Using Nginx to provide static files

First install and start the Nginx web server. Then move the files from the sa-frontend/build directory into [nginx installation directory]/html.

As a result, the index.html file, which is the default file served by nginx, is now available at [nginx installation directory]/html/index.html.

By default, the Nginx web server listens on port 80. You can specify different ports by modifying the server.listen field in the [nginx installation directory]/conf/nginx.conf file.
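
For reference, the relevant fragment of nginx.conf looks roughly like this (a sketch of the default config; exact contents vary by installation):

http {
    server {
        listen       80;          # change this value to listen on a different port
        server_name  localhost;

        location / {
            root   html;          # serves files from [nginx installation directory]/html
            index  index.html;
        }
    }
}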

Open your browser and access port 80, and you can see that the ReactJS application has loaded successfully.

Type a sentence in the “Type your sentence.” box and click Send. Nothing happens, because the front end sends its request to http://localhost:8080/sentiment and nothing is listening there yet. That is the service we will deploy next.

1.2 Deploying the Sa-WebApp Project

1.2.1 Prerequisites for the Spring Web Application

To compile the sa-webapp project, you must install JDK8 and Maven and set their environment variables. Once they are set up, continue.

1.2.2 Packaging the application as a JAR file

Go to the sa-webapp directory on the terminal and run the following command:

mvn install

This command generates a folder named target inside the sa-webapp directory. The target folder contains the packaged Java application: sentiment-analysis-web-0.0.1-SNAPSHOT.jar.

1.2.3 Starting an Application

Go to the target directory and start the application with the following command:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar

The startup fails, displaying the following exception:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sentimentController': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'sa.logic.api.url' in value "${sa.logic.api.url}"

The important information shown here is that ${sa.logic.api.url} in sentimentController cannot be injected. Here is the source code:

@CrossOrigin(origins = "*")
@RestController
public class SentimentController {

    @Value("${sa.logic.api.url}")
    private String saLogicApiUrl;

    @PostMapping("/sentiment")
    public SentimentDto sentimentAnalysis(@RequestBody SentenceDto sentenceDto) {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.postForEntity(
                saLogicApiUrl + "/analyse/sentiment", sentenceDto, SentimentDto.class)
            .getBody();
    }
}

In Spring, the default property source is application.properties (located at sa-webapp/src/main/resources). But that is not the only way to define properties; we can also set one with the command below:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=WHAT.IS.THE.SA.LOGIC.API.URL

This property should be initialized with the address where the Python application runs, so that the Spring web application knows where to forward requests at runtime.

For simplicity, let’s assume we’re running a Python application on localhost:5000.
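
If you prefer the properties file over the command-line flag, the equivalent entry would be a single line (a sketch; the project deliberately leaves this value to be supplied at runtime, since it depends on where sa-logic actually runs):

# sa-webapp/src/main/resources/application.properties
sa.logic.api.url=http://localhost:5000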

Run the following command, and then we’ll deploy the last service: the Python application.

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=http://localhost:5000

1.3 Deploying the Sa-Logic Project

To start the Python application, we first need to install Python 3 and pip and set their environment variables. (If your system only ships Python 2, virtualenv is recommended for managing multiple Python environments.)

1.3.1 Installation Dependencies

On the terminal, go to the sa-logic/sa directory and run the following command:

python -m pip install -r requirements.txt
python -m textblob.download_corpora

Note: python -m textblob.download_corpora may produce the following error when Python 3 is freshly installed and the environment was created with virtualenv:

(venv) ➜ sa git:(master) python -m textblob.download_corpora
[nltk_data] Error loading brown: <urlopen error [SSL:
[nltk_data]     CERTIFICATE_VERIFY_FAILED] certificate verify failed:
[nltk_data]     unable to get local issuer certificate (_ssl.c:1051)>
[nltk_data] Error loading punkt: <urlopen error [SSL: ...same error...]>
[nltk_data] Error loading wordnet: <urlopen error [SSL: ...same error...]>
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error [SSL: ...same error...]>
[nltk_data] Error loading conll2000: <urlopen error [SSL: ...same error...]>
[nltk_data] Error loading movie_reviews: <urlopen error [SSL: ...same error...]>
Finished.

The solution is to execute the file /Applications/Python\ 3.7/Install\ Certificates.command, which can be found via Finder -> Applications. Double-click the file to run it.

1.3.2 Starting an Application

With the dependencies installed via pip, we can start the application by running the following command:

python sentiment_analysis.py
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

This means that the application has started and is listening for HTTP requests on port 5000 of localhost.
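
You can also smoke-test the endpoint directly with curl. The /analyse/sentiment path comes from the Java controller shown earlier; the JSON field name sentence is an assumption based on the SentenceDto used there:

curl -X POST http://localhost:5000/analyse/sentiment \
     -H 'Content-Type: application/json' \
     -d '{"sentence": "I love this tutorial"}'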

At this point, if all went well, open http://localhost:3000/ in your browser, type a sentence in the input box, and you should see the sentiment value displayed.

I’ll show you how to start these services in a Docker container, as this is a prerequisite for running these services in a Kubernetes cluster.

2. Create container images for each service

Kubernetes is a container management platform, so naturally we need containers for it to manage. But what is a container? Docker’s official documentation gives the best answer:

A container image is a lightweight, stand-alone, executable package that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. Containerized software runs the same regardless of the environment, for both Linux- and Windows-based applications.

This means that the container can run on any computer, even on a production server, with no difference.

To visualize this, let’s compare the React application running on a virtual machine and inside a container.

The React static file is provided on the VM

Disadvantages of using VMS include:

  • Resource efficiency is low; each VM requires a fully fledged operating system;
  • Platform dependence. What works well on the local machine may not work well on the production server;
  • Heavier and slower to scale than containers.

Provide the React static file through the container

The advantages of using containers include:

  • High resource efficiency, using the host operating system with the help of Docker;
  • There is no platform dependency. A container that runs on a local machine will work on any machine;
  • Provide lightweight services through the image layer.

2.1 Creating a Container Image for the React application

2.1.1 Dockerfile Basics

The most basic building block of a Docker container is the Dockerfile, and a Dockerfile is the basis of a container image. Through the series of instructions below, we will show how to create a container image that meets the application’s requirements.

Before we start defining the Dockerfile, let’s recall the steps we took to serve the React static files with Nginx:

  1. Create the static files (npm run build);
  2. Start the Nginx server;
  3. Copy the contents of the front-end project’s build folder into the nginx/html directory.

In the next section, you’ll notice that creating the container closely mirrors the local React setup.

2.1.2 Defining a Dockerfile for the front end

There are only two steps to creating a front-end Dockerfile. This is because the Nginx team has provided us with a basic Nginx image that we can use directly. The two steps are as follows:

  1. Start the basic Nginx image;
  2. Copy the sa-frontend/build directory into the container’s nginx/html directory.

The converted Dockerfile looks like this:

FROM nginx
COPY build /usr/share/nginx/html

This file is readable and can be summarized as:

Start from the Nginx image (whatever is inside it), copy the build directory into the image’s nginx/html directory, and there you go!

You may be wondering: how did we know to copy the files to /usr/share/nginx/html in particular? Very simple: it is documented in the Nginx image’s documentation on Docker Hub.

2.1.3 Creating and pushing containers

Before pushing the image, we need a container registry to host it. Docker Hub is a free cloud container registry service that we will use for this demo. There are three tasks to complete:

  1. Install Docker CE;

  2. Register for a Docker Hub account;

  3. Run the following command on the terminal to log in:

    docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"

    Or execute:

    docker login

    Then enter the user name and password in interactive mode.

After completing the above, go to the sa-frontend directory. Then run the following command (replace heqingbao with your own Docker Hub username, e.g. $DOCKER_USERNAME/sentiment-analysis-frontend):

[root@VM_0_3_centos sa]# docker build -f Dockerfile -t heqingbao/sentiment-analysis-frontend .
Sending build context to Docker daemon 1.768 MB
Step 1/2 : FROM nginx
Trying to pull repository docker.io/library/nginx ...
sha256:d59a1aa7866258751a261bae525a1842c7ff0662d4f34a355d5f36826abc0341: Pulling from docker.io/library/nginx
f17d81b4b692: Pull complete
82dca86e04c3: Pull complete
046ccb106982: Pull complete
Digest: sha256:d59a1aa7866258751a261bae525a1842c7ff0662d4f34a355d5f36826abc0341
Status: Downloaded newer image for docker.io/nginx:latest
 ---> 62f816a209e6
Step 2/2 : COPY build /usr/share/nginx/html
 ---> 8284804168aa
Removing intermediate container f74eb32d3c46
Successfully built 8284804168aa

Note: if you deploy on a remote server, be sure to change localhost in the http://localhost:8080/sentiment URL inside App.js to the IP of the remote host (or of the container).

Now we can omit -f Dockerfile, because we are already in the directory that contains the Dockerfile.
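
The shortened command looks like this (docker build reads ./Dockerfile by default):

docker build -t heqingbao/sentiment-analysis-frontend .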

We can use the docker push command to push the image:

[root@VM_0_3_centos sa]# docker push heqingbao/sentiment-analysis-frontend
The push refers to a repository [docker.io/heqingbao/sentiment-analysis-frontend]
a5a0d2defc6a: Pushed 
ad9ac0e6043b: Mounted from library/nginx 
6ccbee34dd10: Mounted from library/nginx 
237472299760: Mounted from library/nginx 
latest: digest: sha256:eb5adb74d0685e267771d5bcdc536015a8cb58fe88c1860d10f13d2994d3c063 size: 1158

Please confirm that the image has been successfully pushed to the Docker Hub codebase.

2.1.4 Running containers

Now anyone can grab the heqingbao/sentiment-analysis-frontend image and run it:

docker pull heqingbao/sentiment-analysis-frontend
docker run -d -p 80:80 heqingbao/sentiment-analysis-frontend

The Docker container is already running!

Visit http://<your host IP>:80 and give it a try; you should now be able to access the React app.

2.1.5 The .dockerignore File

We just saw that creating the sa-frontend image is very slow, sorry, super slow. This is because the build context must be sent to the Docker daemon; the build context is all the data in the Dockerfile’s directory that may be needed during image creation.

In our example, the sa-frontend directory contains the following files and folders:

sa-frontend:
|   .dockerignore
|   Dockerfile
|   package.json
|   README.md
+---build
+---node_modules
+---public
\---src

But we only need the build folder; uploading the rest is a waste of time. We can save time by excluding the other directories, which is what .dockerignore is for. It works much like .gitignore: add all the directories you want ignored, like this:

node_modules
src
public

The.dockerignore file should be in the same folder as the Dockerfile. It now takes only a few seconds to create the image file.

We used a custom container image here, but if an image is only needed once (for example, for a quick test demo), there is no need to build our own image and upload it to a registry; we can simply use the official Nginx image.

2.1.6 Deploying the React Application using the official Nginx Image

Download nginx image:

docker pull docker.io/nginx

Start the nginx container:

docker run -d -p 80:80 --name mynginx \
--volume "$PWD/html":/usr/share/nginx/html \
docker.io/nginx

The meanings of the parameters in the preceding command are as follows:

  • -d: Runs in the background
  • -p: indicates that port 80 of the container is mapped to port 80 of the host
  • –name: The container name is mynginx
  • –volume: maps the html directory under the current directory to the container’s /usr/share/nginx/html directory

Then copy all the files from the build directory of the sa-frontend project into the html directory under the current path.

Visit http://<your host IP>:80 and give it a try. You should be able to access the React app this way, too.

2.2 Creating container Images for Java Applications

Open the Dockerfile in sa-webapp:

FROM openjdk:8-jdk-alpine
# Environment variable that defines the endpoint of the sentiment-analysis Python API.
ENV SA_LOGIC_API_URL http://localhost:5000
ADD target/sentiment-analysis-web-0.0.1-SNAPSHOT.jar /
EXPOSE 8080
CMD ["java", "-jar", "sentiment-analysis-web-0.0.1-SNAPSHOT.jar", "--sa.logic.api.url=${SA_LOGIC_API_URL}"]

The ENV keyword declares environment variables in the Docker container. This allows us to provide a URL to the sentiment analysis API when we start the container.

In addition, the EXPOSE keyword notes a port for later access. But wait, we didn’t do that in the sa-frontend Dockerfile. Exactly! EXPOSE is for documentation only; in other words, it informs the person reading the Dockerfile, and omitting it does not prevent access.

You should already know how to create and push container images.

Creating an image:

[root@VM_0_3_centos sa-webapp]# docker build -f Dockerfile -t heqingbao/sentiment-analysis-web-app .
Sending build context to Docker daemon 20.49 MB
Step 1/5 : FROM openjdk:8-jdk-alpine
Trying to pull repository docker.io/library/openjdk ...
sha256:b18e45570b6f59bf80c15c78d7f0daff1e18e9c19069c323613297057095fda6: Pulling from docker.io/library/openjdk
4fe2ade4980c: Pull complete
6fc58a8d4ae4: Pull complete
ef87ded15917: Pull complete
Digest: sha256:b18e45570b6f59bf80c15c78d7f0daff1e18e9c19069c323613297057095fda6
Status: Downloaded newer image for docker.io/openjdk:8-jdk-alpine
 ---> 97bc1352afde
Step 2/5 : ENV SA_LOGIC_API_URL http://localhost:5000
 ---> Running in c3be1ec16ac4
 ---> ab213d1b2ce1
Removing intermediate container c3be1ec16ac4
Step 3/5 : ADD target/sentiment-analysis-web-0.0.1-SNAPSHOT.jar /
 ---> 5d1ebdbf659d
Removing intermediate container 7e5b7519d9e3
Step 4/5 : EXPOSE 8080
 ---> Running in e428a3388798
 ---> 0893bf90a104
Removing intermediate container e428a3388798
Step 5/5 : CMD java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=${SA_LOGIC_API_URL}
 ---> Running in 065ac2e61dbd
 ---> cba14182f49f
Removing intermediate container 065ac2e61dbd

Start container:

[root@VM_0_3_centos sa-webapp]# docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://x.x.x.x:5050' heqingbao/sentiment-analysis-web-app
b3ab99abecd7a97f091e2362b4eee870037e562347f3996a9a1a2669ca60c651

Push to the registry:

[root@VM_0_3_centos sa-webapp]# docker push heqingbao/sentiment-analysis-web-app
The push refers to a repository [docker.io/heqingbao/sentiment-analysis-web-app]
4e1c5d0784bf: Pushed 
ed6f0bd39121: Mounted from library/openjdk 
0c3170905795: Mounted from library/openjdk 
df64d3292fd6: Mounted from library/openjdk 
latest: digest: sha256:be20fe12c184b6c4d2032141afe9b8cc092a9a083f1cf0a7dc8f73c4b1ebbaf8 size: 1159

2.3 Creating Container Images for Python Applications

The sa-logic Dockerfile:

FROM python:3.6.6-alpine
COPY sa /app
WORKDIR /app
RUN pip3 install -r requirements.txt && \
    python3 -m textblob.download_corpora
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["sentiment_analysis.py"]

Now you’re a Docker genius.

Build a container image:

docker build -f Dockerfile -t heqingbao/sentiment-analysis-logic .

Run the Docker container:

docker run -d -p 5050:5000 heqingbao/sentiment-analysis-logic

2.4 Testing the Containerized Application

1. Run the sa-logic container, mapping it to host port 5050:

docker run -d -p 5050:5000 heqingbao/sentiment-analysis-logic

2. Run the sa-webapp container, mapping it to host port 8080 (because we mapped the Python application to a different host port, we need to override the SA_LOGIC_API_URL environment variable):

$ docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://x.x.x.x:5050' heqingbao/sentiment-analysis-web-app

3. Run the sa-frontend container:

docker run -d -p 80:80 heqingbao/sentiment-analysis-frontend

And then you’re done. Open http://x.x.x.x:80 in the browser.
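
Before opening the browser, you can verify the whole chain with curl against the sa-webapp container. The /sentiment path comes from the Spring controller shown earlier; the sentence field is again assumed from its SentenceDto:

curl -X POST http://x.x.x.x:8080/sentiment \
     -H 'Content-Type: application/json' \
     -d '{"sentence": "Kubernetes is fun"}'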

3. Kubernetes

3.1 Why use Kubernetes?

In the previous section, we learned about the Dockerfile, how to use it to build images, and the commands for pushing images to a Docker registry. We also explored how to shrink the build context by ignoring useless files. And finally, we ran the application from containers.

Next, why use Kubernetes? We’re going to talk about Kubernetes in more detail, and I want to leave you with a quiz question.

  • If our sentiment analysis web application becomes popular and traffic suddenly jumps to millions of requests per minute, our sa-webapp and sa-logic services will be under huge load. So: how do we scale up the containers?

3.2 Introduction to Kubernetes

I assure you I’m not exaggerating, and after reading this article you’ll be asking “Why don’t we just call it Supernetes?”

3.2.1 What is Kubernetes?

After starting the microservices in containers, some questions remain. Let’s work through them in Q&A form:

Q: How do we expand or shrink containers?

A: Let’s start another container.

Q: How do we divide the load between containers? If the current server is under maximum load, do we need another server? How do we maximize hardware utilization?

A: well… Er… (Let me search)

Q: How do we roll out updates without interrupting the services? And if a new version has problems, how do we get back to a previously working version?

Kubernetes solves all of the above (and more!). I can sum up Kubernetes in one sentence: “Kubernetes is a container control platform that abstracts away all the underlying infrastructure (the infrastructure that containers run on).”

We have a vague idea of the container control platform. We’ll look at it in action later in this article, but this is the first time we’ve mentioned “abstraction of the underlying infrastructure,” so let’s take a closer look at the concept.

3.2.2 Abstraction of underlying infrastructure

Kubernetes provides an abstraction over the underlying infrastructure through a simple API to which we send requests, and Kubernetes fulfills those requests as best it can. For example, you can simply ask, “Kubernetes, start four containers of image X.” Kubernetes then finds under-utilized nodes and starts the new containers on them.

What does this mean for developers? It means they don’t need to care about the number of nodes, where containers run, or how they communicate. They don’t need to manage hardware optimization or worry about nodes going down (and they will, following Murphy’s law), because new nodes can be added to the Kubernetes cluster, and in the meantime Kubernetes adds the containers to the nodes that are still running. Kubernetes does the heavy lifting.

In the image above we see something new:

  • API server: the only way to interact with the cluster; responsible for requests such as starting or stopping containers, checking current state, logs, and so on;
  • Kubelet: Monitors containers within the node and communicates with the master node;
  • Pod: for now, we can think of a Pod as a container.

So much for the introduction; going any deeper here would be a distraction. There are useful resources to turn to when needed, such as the official documentation or Marko Lukša’s book Kubernetes in Action.

3.2.3 Standardized cloud service providers

Kubernetes is also well known for standardizing cloud service providers. This is a bold statement. Let’s take a look at it through the following example:

For example, an expert on Azure, Google Cloud Platform, or another cloud provider may be assigned to a project built on a brand-new provider. This can have many consequences: he may miss his deadlines; the company may need to hire more specialists; and so on.

Kubernetes, by contrast, doesn’t have this problem: you run the same commands against any cloud provider, sending requests to the API server in a standardized way, while Kubernetes handles the provider-specific implementation.

Stop for a second and think about it: this is extremely powerful. For companies, it means they are not tied to a single cloud provider. They can price out another provider and migrate to it, keeping the same experts and the same people while spending less money.

With that said, let’s actually use Kubernetes in the next section.

4. Kubernetes practice — Pod

We got our microservices running in containers. It was bumpy, but it worked. We also mentioned that this setup is neither scalable nor resilient, and that Kubernetes solves these problems. In the rest of this article we will move each service into containers managed by Kubernetes, as shown in the figure.

In this article, we’ll use Minikube for local debugging, although everything also runs on the Azure and Google Cloud platforms. (Here, I deployed on Tencent Cloud.)

4.1 Installing and Starting Minikube

Please refer to the official documentation for installing Minikube:

kubernetes.io/docs/tasks/…

k8smeetup.github.io/docs/tasks/

Install minikube on Mac:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Kubectl is a client that sends requests to the Kubernetes API server:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Or:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

These downloads are often slow, or fail entirely, depending on your network. There are two workarounds:

  • If there is a proxy, you can configure it on the terminal.
  • Download it by other means (such as a browser), make it executable, and then place it on your PATH.

Start Minikube:

minikube start

The Minikube ISO will be automatically downloaded upon the first startup, which may take a long time depending on network conditions.

Note: startup can fail, for example with the following error:

➜  Desktop minikube start  
There is a newer version of minikube available (v0.30.0).  Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.30.0

To disable this notification, run the following:
minikube config set WantUpdateNotification false
Starting local Kubernetes cluster...
Starting VM...
Downloading Minikube ISO
 89.51 MB / 89.51 MB [==========================================] 100.00% 0s
E1111 20:06:02.564775    4725 start.go:116] Error starting host: Error creating host: Error with pre-create check: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path". Retrying.
E1111 20:06:02.567379    4725 start.go:122] Error starting host: Error creating host: Error with pre-create check: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path"
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
	minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

The VBoxManage command was not found, so VirtualBox must be installed first. After VirtualBox is installed, running minikube start again will not re-download the Minikube ISO.

After startup, running the kubectl get nodes command yields the following result:

➜ Desktop kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
minikube   NotReady   <none>    18m       v1.6.0

Note: depending on your combination of kubectl and cluster versions, kubectl get nodes may instead return an error:

➜  Desktop kubectl get nodes
Error from server (NotAcceptable): unknown (get nodes)

This problem went away after reinstalling with version 1.8.7.

Minikube gives us a Kubernetes cluster with only one node, but remember: we don’t care how many nodes there are. Kubernetes abstracts that away, and the node count does not matter for learning Kubernetes.

Here we introduce Kubernetes’ first resource: Pod.

4.2 Pod

I love containers, and I’m sure you do too by now. So why does Kubernetes give us the Pod as its smallest deployable unit instead? What does a Pod do? A Pod consists of one container, or a group of containers, that share the same runtime environment.

But do we really need to run two containers inside one Pod? Er… in general, you run only one container per Pod, and that is the case in our example. However, when two containers need to share volumes, communicate via inter-process communication, or are otherwise tightly coupled, they can run in the same Pod. Another feature of Pods is that they don’t tie us to Docker: if we want, we can use other container technologies, such as Rkt.

In general, the main properties of a Pod include (as shown in the figure above):

  1. Each Pod can have a unique IP address within the Kubernetes cluster;
  2. A Pod can run multiple containers. The containers share the same port space, so they can communicate via localhost (and, naturally, they cannot use the same port). Communication with containers in other Pods uses the target Pod’s IP address.
  3. Containers within a Pod share the same volume, IP, port space, and IPC namespace.

Note: Containers have their own separate file system, although they can share data through Kubernetes resource volumes.

For more details, please refer to the relevant official documentation:

kubernetes.io/docs/concep…

4.2.1 Definition of Pod

Here is the manifest file for our first Pod, sa-frontend; we’ll explain each field below.

apiVersion: v1
kind: Pod                                            # 1
metadata:
  name: sa-frontend                                  # 2
spec:                                                # 3
  containers:
    - image: rinormaloku/sentiment-analysis-frontend # 4
      name: sa-frontend                              # 5
      ports:
        - containerPort: 80                          # 6

#1 kind: specifies the type of Kubernetes resource we want to create; here, a Pod.

#2 name: defines the name of the resource; we call it sa-frontend.

#3 spec: this object defines the desired state of the resource. The most important property in a Pod spec is the containers array.

#4 image: the container image we want to start in this Pod.

#5 name: a unique name for the container within the Pod.

#6 containerPort: the port the container listens on. This is for documentation only (omitting it does not affect access).

Creating the sa-frontend Pod

You can find the Pod definition above in resource-manifests/sa-frontend-pod.yaml. Go to that folder in your terminal (or supply the full path on the command line), then run the following command:

kubectl create -f sa-frontend-pod.yaml
pod "sa-frontend" created

You can confirm the Pod by running the following command:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s

If the Pod is still in the ContainerCreating state, you can run the command with the --watch flag; the terminal then reports when the Pod transitions to Running.
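
For example, both of these are standard kubectl options:

# Watch state transitions until the Pod reaches Running:
kubectl get pods --watch

# Show each Pod's cluster-internal IP and the node it is scheduled on:
kubectl get pods -o wide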

Access the application externally

To access the application from outside, we would normally create a Kubernetes resource of type Service, which we cover in a later section. Although a Service is the proper way to expose a Pod, for quick debugging we have another option: port forwarding.

kubectl port-forward sa-frontend 88:80
Forwarding from 127.0.0.1:88 -> 80

To open the React application, access 127.0.0.1:88 in your browser.

The wrong way to scale

We said that one of the main features of Kubernetes is scalability, and to prove it, let’s run another Pod. We create another Pod resource with the following definition:

apiVersion: v1
kind: Pod                                            
metadata:
  name: sa-frontend2      # The only change
spec:                                                
  containers:
    - image: rinormaloku/sentiment-analysis-frontend 
      name: sa-frontend                              
      ports:
        - containerPort: 80

Then, create a new Pod with the following command:

kubectl create -f sa-frontend-pod2.yaml
pod "sa-frontend2" created

The second Pod can be identified with the following command:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s
sa-frontend2                  1/1       Running   0          7s

We now have two pods in operation.

Please note: this is not the final solution and there are many flaws. We will improve this solution in another section on the deployment of Kubernetes resources.

Summary of Pods

The Nginx web servers serving the static files now run in two different Pods. This leaves two questions:

  • How do we expose these services so users can access them through a URL?
  • How do we balance the load between the Pods?

Kubernetes provides the Service resource for exactly this. We’ll cover it in detail in the next section.

5. Kubernetes practice — Service

The Kubernetes Service resource acts as the entry point to a set of Pods that provide the same service. It is responsible for service discovery and for balancing load across those Pods, as shown in Figure 16.

Within the Kubernetes cluster, we have Pods providing different services (the front end, the Spring web application, and the Flask Python application). So the question is: how does a Service know which Pods to target? For example, how does it build its list of endpoints for these Pods?

This problem can be solved with labels in two steps:

  1. Label all the Pods that the Service should handle;
  2. Use a selector in the Service that matches those labeled Pods.

The following view looks clearer:

We can see that the Pods are labeled “app: sa-frontend”, which the Service uses to find its target Pods.

Labels

Labels provide a simple way to organize Kubernetes resources. They are key-value pairs and can be attached to any resource. Follow the example in Figure 17 to modify the manifest files.
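
In case Figure 17 does not render for you, the change it describes is adding a labels block to each Pod’s metadata; a sketch looks like this (sa-frontend-pod.yaml shown; sa-frontend-pod2.yaml gets the same label):

metadata:
  name: sa-frontend
  labels:
    app: sa-frontend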

Save the file after the changes are made and apply the changes with the following command:

kubectl apply -f sa-frontend-pod.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend" configured
kubectl apply -f sa-frontend-pod2.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend2" configured

We see a warning (apply instead of create, got it). On the next line we see that the Pods “sa-frontend” and “sa-frontend2” were configured. We can then filter for the Pods we want to view:

kubectl get pod -l app=sa-frontend
NAME           READY     STATUS    RESTARTS   AGE
sa-frontend    1/1       Running   0          2h
sa-frontend2   1/1       Running   0          2h

Another way to verify the labels is to add the --show-labels flag to the command above; the output then includes every label on each Pod.
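
For example (output abbreviated):

kubectl get pods --show-labels
NAME           READY     STATUS    RESTARTS   AGE       LABELS
sa-frontend    1/1       Running   0          2h        app=sa-frontend
sa-frontend2   1/1       Running   0          2h        app=sa-frontend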

Very good! The Pods are labeled, and we’re ready to find them through a Service. Let’s define a Service of type LoadBalancer, as shown in Figure 18.

Definition of services

The YAML definition for the LoadBalancer service is as follows:

apiVersion: v1
kind: Service              # 1
metadata:
  name: sa-frontend-lb
spec:
  type: LoadBalancer       # 2
  ports:
  - port: 80               # 3
    protocol: TCP          # 4
    targetPort: 80         # 5
  selector:                # 6
    app: sa-frontend       # 7

#1 kind: a Service;

#2 type: the Service type; we choose LoadBalancer because we want to balance load across the Pods;

#3 port: the port on which the Service receives requests;

#4 protocol: the communication protocol;

#5 targetPort: the port on the Pods to which incoming requests are forwarded;

#6 selector: an object with the properties used to select Pods;

#7 app: sa-frontend: defines the target Pods; only Pods labeled “app: sa-frontend” are targeted.

Create the service by running the following command:

kubectl create -f service-sa-frontend-lb.yaml
service "sa-frontend-lb" created

You can run the following command to check the status of the service:

kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
sa-frontend-lb   LoadBalancer   10.101.244.40   <pending>     80:30708/TCP   7m

The EXTERNAL-IP is pending (and no amount of waiting will change that). This is because we are using Minikube. If we were running on Azure or Google Cloud, we would get a public IP, making our services reachable from anywhere.

Minikube doesn’t leave us hanging, though; it provides a very useful command for local debugging:

minikube service sa-frontend-lb
Opening kubernetes service default/sa-frontend-lb in default browser...

This opens the browser at the IP pointing to the Service. When a request arrives, the Service forwards it to one of the Pods (it doesn’t matter which). This abstraction lets us see and interact with multiple Pods as if they were one, using the Service as the single access point.

Summary of services

In this section, we covered labeling resources, using labels as selectors in Services, and defining and creating a LoadBalancer Service. This satisfies our need to scale the application (just add new labeled Pods) and to balance load between Pods, with the Service acting as the access point.

6. Kubernetes practice — Deployment

Kubernetes Deployments help us deal with the one constant in every application’s life: change. Only dead applications stay unchanged; for living ones, new requirements keep emerging, more code gets written, packaged, and deployed, and every step of the way can go wrong.

The Deployment resource automates moving an application from one version to another while keeping the service running, and it lets us quickly roll back to a previous version if something unexpected happens.

Deployments in practice

We currently have two Pods and one Service exposing them, with load balancing between them (see Figure 19). We mentioned that the existing Pods are far from perfect: each has to be managed separately (created, updated, deleted, monitored), and quick updates and rollbacks are impossible! That won’t do, and the Deployment resource solves every one of these problems.

Before moving on, let’s recap our goals so that we can better understand the definition of a manifest file for deploying resources. What we want is:

  1. Two Pods of the image rinormaloku/sentiment-analysis-frontend;
  2. Uninterrupted service during deployments;
  3. Pods labeled app: sa-frontend, so that the sa-frontend-lb Service can find them.

In the next section, we can reflect these requirements into the definition of deployment.

Definition of deployment

The following resource definitions of YAML files accomplish all of the points mentioned above:

apiVersion: extensions/v1beta1
kind: Deployment                                          # 1
metadata:
  name: sa-frontend
spec:
  replicas: 2                                             # 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate                                   # 3
    rollingUpdate: 
      maxUnavailable: 1                                   # 4
      maxSurge: 1                                         # 5
  template:                                               # 6
    metadata:
      labels:
        app: sa-frontend                                  # 7
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend
          imagePullPolicy: Always                         # 8
          name: sa-frontend
          ports:
            - containerPort: 80

#1 kind: a Deployment;

#2 replicas: an attribute of the Deployment’s spec that defines how many Pods we want to run; here, 2;

#3 type: specifies the strategy used when moving from the current version to the next; the RollingUpdate strategy ensures uninterrupted service during the rollout;

#4 maxUnavailable: an attribute of the RollingUpdate object that sets the maximum number of Pods allowed to be unavailable during an upgrade (compared to the desired state). For our deployment of 2 replicas, this means that while one Pod is being stopped, another Pod keeps running, so the application stays accessible;

#5 maxSurge: another attribute of the RollingUpdate object that sets the maximum number of Pods added on top of the desired count. For our deployment, this means that during a migration to a new version one extra Pod may be added, giving us three Pods at once;

#6 template: specifies the Pod template the Deployment uses when creating new Pods. You’ll probably notice right away how similar it is to the Pod definition;

#7 app: sa-frontend: Pods created from this template are given this label;

#8 imagePullPolicy: when set to Always, the container image is re-pulled for each new deployment.

Frankly, all this text is confusing me even more, so let’s look at an example:

kubectl apply -f sa-frontend-deployment.yaml
deployment "sa-frontend" created

As usual, let us make sure that everything is fulfilled as promised:

kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-frontend                    1/1       Running   0          2d
sa-frontend-5d5987746c-ml6m4   1/1       Running   0          1m
sa-frontend-5d5987746c-mzsgg   1/1       Running   0          1m
sa-frontend2                   1/1       Running   0          2d

We now have four running Pods: two created by the Deployment and two created manually. Delete the manually created ones with the kubectl delete pod command.

Exercise: Delete the Pod created by one of the deployments and see what happens. Before you read the explanations below, think about why.

Explanation: after a Pod is deleted, the Deployment notices that the current state (one Pod running) differs from the desired state (two Pods running), so it starts another Pod.
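
A quick way to watch this self-healing in action (your Pod name suffix will differ):

kubectl delete pod sa-frontend-5d5987746c-ml6m4
kubectl get pods    # a freshly created replacement Pod appears almost immediately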

So what else does a Deployment buy us, beyond maintaining the desired state? Let’s look at the benefits.

Benefit 1: Zero-downtime deployment

The product manager comes to us with a new requirement: the customer wants a green button on the front end. Once the developer has written the code, all they need to hand us is one thing, the container image rinormaloku/sentiment-analysis-frontend:green. Then it’s our turn: we need a zero-downtime deployment. Is that hard? Let’s try!

Edit the deployment file, changing the container image to the new image rinormaloku/sentiment-analysis-frontend:green. Save it as sa-frontend-deployment-green.yaml and run the following command:

kubectl apply -f sa-frontend-deployment-green.yaml --record
deployment "sa-frontend" configured

Let’s check the rollout status with the following command:

kubectl rollout status deployment sa-frontend
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 of 2 updated replicas are available...
deployment "sa-frontend" successfully rolled out

According to the Deployment, the rollout succeeded: the replicas were replaced one by one, so the application stayed online the whole time. Before moving on, let’s confirm the update really works.

Confirm the deployment

Let’s confirm the results of the update in the browser. Run the command minikube service sa-frontend-lb, which opens the browser. We can see that the SEND button has been updated.

Behind the scenes of the RollingUpdate

After we apply the new deployment, Kubernetes compares the new state with the old one. In our example, the new state requires two Pods of the image rinormaloku/sentiment-analysis-frontend:green. Since this differs from the currently running state, Kubernetes performs a RollingUpdate.

The RollingUpdate proceeds according to the spec we provided, namely “maxUnavailable: 1” and “maxSurge: 1”. This means the Deployment may terminate only one Pod at a time and may run only one extra new Pod at a time. The process repeats until all the Pods are replaced (see Figure 21).

Let’s move on to the second benefit.

Disclaimer: For entertainment purposes, I will write the following section in the form of a novel.

Benefit 2: Rollback to previous state

The product manager came into the office and said he had a big problem!

The product manager exclaims: “The application in production has a critical bug!! Roll back to the previous version immediately!”

You look at him calmly, without batting an eye, then turn to your beloved terminal and start tapping:

kubectl rollout history deployment sa-frontend
deployments "sa-frontend"
REVISION  CHANGE-CAUSE
1         <none>         
2         kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true

You glance at the deployment history and ask the product manager, “The latest version has the bug, and the previous one worked perfectly, right?”

The product manager yelled, “Yeah, didn’t you hear me? !”

You ignore him, you know how to handle it, so you start knocking:

kubectl rollout undo deployment sa-frontend --to-revision=1
deployment "sa-frontend" rolled back

Then, you gently refresh the page and all the previous changes are gone!

The product manager looks at you gaping.

You saved everyone!

Epilogue

I know… It’s a very boring story. Before Kubernetes came along, it was a good story, much more dramatic and highly tense, and it stayed that way for a long time. Those were the good old days!

Most commands are self-explanatory, but one detail deserves unpacking: why does the CHANGE-CAUSE field show <none> for the first revision, while the second revision shows kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true?

This is because we used the --record flag when applying the new image.

In the next section, we will use all of the previous concepts to complete the architecture.

7. Kubernetes practice — Putting it all together

Now that we’ve learned all the resources needed to complete the architecture, this section will go quickly. The gray areas in Figure 22 are what remains to be done. Let’s start at the bottom: the sa-logic Deployment.

Deployment of sa-logic

On the terminal, go to the directory where the resource list file resides and run the following command:

kubectl apply -f sa-logic-deployment.yaml --record
deployment "sa-logic" created

The sa-logic Deployment creates three Pods running our Python application, and it labels them app: sa-logic. With this label, the sa-logic Service can select these Pods. Take a moment to open sa-logic-deployment.yaml and look through its contents.

The concepts are the same, so we can move straight on to the next resource: the sa-logic Service.

The sa-logic Service

First, why do we need this Service at all? Our Java application (running in the Pods of the sa-webapp Deployment) depends on the sentiment analysis provided by the Python application. But now, unlike when everything ran locally, we don’t have a single Python instance listening on one port; we have two Pods, and we could have more if needed.

This is why we need a Service that provides access to a set of Pods offering the same functionality. It means we can use the sa-logic Service as the access point for all sa-logic Pods.
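
The article does not reproduce service-sa-logic.yaml, so here is a plausible sketch following the same pattern as sa-frontend-lb. Note the absence of type: LoadBalancer: the default ClusterIP type is enough because only Pods inside the cluster call this Service, and Service port 80 maps onto the Flask app’s port 5000:

apiVersion: v1
kind: Service
metadata:
  name: sa-logic
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    app: sa-logic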

Run the following command:

kubectl apply -f service-sa-logic.yaml
service "sa-logic" created

Updated application state: we now have two Pods running the Python application, and the sa-logic Service provides an access point that the sa-webapp Pods will use.

Next we deploy the sa-webapp Pods, for which we use a Deployment resource.

Deployment of sa-webapp

We’ve seen Deployments already, although this one uses one more feature. Open the sa-web-app-deployment.yaml file and you will find the following new content:

- image: rinormaloku/sentiment-analysis-web-app
  imagePullPolicy: Always
  name: sa-web-app
  env:
    - name: SA_LOGIC_API_URL
      value: "http://sa-logic"
  ports:
    - containerPort: 8080

The first thing of interest is the env attribute. It defines the environment variable SA_LOGIC_API_URL with the value http://sa-logic inside the Pod. But why initialize it to http://sa-logic? What exactly is sa-logic?

Let’s first introduce kube-dns.

KUBE-DNS

Kubernetes has a special Pod called kube-dns. By default, all Pods use it as their DNS server. An important property of kube-dns is that it creates a DNS record for every Service that is created.

This means that when we created the sa-logic Service, it got an IP address, and its name was added to kube-dns along with that IP. This way, every Pod can resolve sa-logic to the IP address of the sa-logic Service.
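
You can verify the DNS record yourself with a throwaway Pod (assuming the busybox image, which ships nslookup):

kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup sa-logic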

Ok, now we can continue:

Deployment of sa-webapp (continued)

Run the following command:

kubectl apply -f sa-web-app-deployment.yaml --record
deployment "sa-web-app" created

Finished! All that remains is to expose the sa-webapp Pods externally through a LoadBalancer Service, so that the React application can send HTTP requests to them.

The sa-webapp Service

Open the service-sa-web-app-lb.yaml file and you’ll see something familiar.

So we can run the following command:

kubectl apply -f service-sa-web-app-lb.yaml
service "sa-web-app-lb" created

The architecture is complete, but one imperfection remains. The sa-frontend container image still points at http://localhost:8080/sentiment for sa-webapp; we need to update it to the IP address of the sa-webapp LoadBalancer (which acts as the access point for the sa-webapp Pods).

Fixing this imperfection is a great opportunity to quickly review everything (or better yet, do it yourself without following this guide). Let’s get started:

  1. Run the following command to obtain the address of the sa-webapp LoadBalancer:

minikube service list
|-------------|----------------------|-----------------------------|
| NAMESPACE   | NAME                 | URL                         |
|-------------|----------------------|-----------------------------|
| default     | kubernetes           | No node port                |
| default     | sa-frontend-lb       | http://192.168.99.100:30708 |
| default     | sa-logic             | No node port                |
| default     | sa-web-app-lb        | http://192.168.99.100:31691 |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
  2. Use the sa-webapp LoadBalancer’s address in sa-frontend/src/App.js:

analyzeSentence() {
    fetch('http://192.168.99.100:31691/sentiment', { /* shortened for brevity */ })
        .then(response => response.json())
        .then(data => this.setState(data));
}
  3. Build the static files with npm run build (switch to the sa-frontend directory first);

  4. Build the container image:

    docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend:minikube .

  5. Push the image to Docker Hub:

    docker push $DOCKER_USER_ID/sentiment-analysis-frontend:minikube

  6. Edit sa-frontend-deployment.yaml to use the new image;

  7. Run the kubectl apply -f sa-frontend-deployment.yaml command.

Refresh the browser (if you have closed the browser, run minikube service sa-frontend-lb). Try typing a sentence!

8. Summary

Kubernetes is great for teams and projects: it simplifies deployment, provides scalability and flexibility, and lets us use any underlying infrastructure. From now on, let’s call it Supernetes!

Contents covered in this article:

  • Building, packaging, and running ReactJS, Java, and Python applications;
  • Docker containers: how to define and build them with Dockerfiles;
  • Container registries: we used Docker Hub as our registry;
  • The most important Kubernetes resources:
  • Pods;
  • Services;
  • Deployments;
  • New concepts such as zero-downtime deployments;
  • Creating scalable applications;
  • And, step by step, we moved the entire microservices application onto a Kubernetes cluster.

For more content, follow the author’s WeChat public account: Non-Famous Developers.