Why Kubernetes is ideally suited to deploying small projects
By Caleb Doxsey
Translation: Little Junjun
Technical proofreading: Wen Zai under the starry sky, Summer
Editor: Little Junjun
The robustness and reliability of Kubernetes make it one of the most popular cloud native technologies today, but many users also report that Kubernetes is complex to learn, suitable only for large clusters, and expensive. This article will change your mind and show you how to deploy a Kubernetes cluster for a small project.
There are three reasons to choose Kubernetes for a small deployment
Reason 1: It takes less time
Here are some questions to consider before deploying a small project:
- How should the application be deployed? (Just rsync it to the server?)
- What about dependencies? (If you use Python or Ruby, you have to install them on the server.)
- Will you run the commands manually? (Leaving binaries running in the background via nohup probably isn't the best choice, so do you set up service routing and learn systemd?)
- How do you run multiple applications with different domain names or HTTP paths? (You'll probably need to set up HAProxy or Nginx.)
- How do you roll out changes when you update the application? Stop the service, deploy the code, restart the service? How do you avoid downtime?
- What if you botch a deployment? Is there any way to roll back?
- Does the application need other services, such as Redis? How do you configure those?
These are the problems you will most likely run into when deploying a small project, and Kubernetes provides a solution for every one of them. There may be other ways to solve them, but using Kubernetes lets us spend more of our time focusing on the application itself.
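To make this concrete, below is a minimal sketch of a Kubernetes Deployment that answers several of these questions at once. The app name, image, and replica count are hypothetical, purely for illustration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical application
spec:
  replicas: 2                  # Kubernetes keeps these running; no nohup or systemd needed
  strategy:
    type: RollingUpdate        # updates roll out gradually, avoiding downtime
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/PROJECT_ID/my-app:v2   # dependencies ship inside the image, not on the server
Updating the app means changing the image tag and re-running kubectl apply; a botched rollout can be reverted with kubectl rollout undo.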
Reason 2: Kubernetes records the entire deployment process
Let’s look at the second reason to deploy a cluster with Kubernetes.
Have you ever found yourself in this state at work: what was the last command I ran on that server? What services were running on it at the time? It reminds me of the famous bash.org quote:
<erno> hm. I’ve lost a machine.. literally _lost_. it responds to ping, it works completely, I just can’t figure out where in my apartment it is.
http://bash.org/?5273
Something like this has happened to me at work, and it turned what should have been a 10-minute job into a weekend.
However, if you deploy your cluster with Kubernetes, this problem goes away. Because Kubernetes uses a declarative format, it is easy to know what will run next and how the building blocks of a deployment fit together. In addition, the control plane handles node failures gracefully and automatically reschedules pods. (For stateless services such as web applications, you no longer need to worry about individual failures.)
Reason 3: Kubernetes is easy to learn
Kubernetes has its own vocabulary, its own tools, and configures servers in a way completely different from traditional Unix. But Kubernetes is all-encompassing enough that it is the only thing you need to learn to set up and maintain infrastructure. With Kubernetes, you configure services entirely within Kubernetes, without ever SSHing to a server. You don't have to learn systemd or know what runlevels are; you don't have to format a disk, or learn how to use ps or vim.
Let me prove my point with an example!
Here is a server setup the traditional Unix way:
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Is that really any easier than the Kubernetes equivalent?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
And that's if you're maintaining servers by hand. If you manage infrastructure remotely, you'll need to learn a tool like Ansible, Salt, Chef, or Puppet instead. Of course, with Kubernetes there are plenty of tools to learn, but that is no harder than learning the alternatives.
A quick summary
Kubernetes is not a single-purpose tool; rather, it is an all-in-one solution that replaces many of the techniques and tools developers are used to.
Now let's deploy a small Kubernetes cluster.
Build a small Kubernetes cluster
So let's start the tutorial. For this example we'll use Google Kubernetes Engine (GKE), but if Google isn't your cup of tea, you can opt for Amazon (EKS) or Microsoft (AKS) instead.
To build our Kubernetes cluster we will need:
- a domain name ($10/year, depending on the domain);
- DNS hosting from Cloudflare (free);
- a 3-node Kubernetes cluster on GKE ($5/month);
- our web app pushed as a Docker container to Google Container Registry (GCR) (free);
- a few YAML files to configure Kubernetes.
Also, to save money, we won't use Google's ingress controller. Instead, we will run Nginx as a DaemonSet on each node and build a custom operator to synchronize the worker nodes' external IP addresses with Cloudflare.
Google Cloud setup
First visit console.cloud.google.com and create a project (if you don't already have one). You will also need to set up a billing account. Then go to the Kubernetes page from the hamburger menu and create a new cluster.
You need to do the following:
- Select the Zonal location type (I used us-central1-a as my zone);
- Select your Kubernetes version;
- Create a node pool of 3 nodes using the cheapest instance type (f1-micro);
- For this node pool, in the advanced screen, set the boot disk size to 10GB, enable preemptible nodes (they are cheaper), and enable auto-upgrade and auto-repair;
- Below the node pool there are a few more options: we want to disable HTTP load balancing (which is expensive and unstable in GCP) and also disable all the StackDriver services and the Kubernetes dashboard.
With all of those options set, you can go ahead and create the cluster.
Here is how we cut costs:
- Kubernetes control plane: free, because Google doesn't charge for the master;
- Kubernetes worker nodes: $5.04/month. Three micro nodes would normally be $11.65/month; by making them preemptible we cut that to $7.67/month, and the "forever free" tier brings it down to $5.04;
- Storage costs: free. GCP gives us 30GB of persistent disk for free, which is why we chose a 10GB boot disk per node (3 x 10GB = 30GB);
- Load balancer costs: free, since we disabled the HTTP load balancer, which alone would cost $18/month. Instead we run our own HTTP proxy on each node and point DNS at the public IPs;
- Network charges: free. Egress is free as long as you stay under 1GB per month (8 cents per GB after that).
So we get a three-node Kubernetes cluster for the same price as a single DigitalOcean machine.
In addition to setting up GKE, we also need to add a couple of firewall rules to allow the outside world to reach the HTTP ports on our nodes. The operation is as follows: from the hamburger menu, go to VPC Network, then Firewall Rules, and add rules allowing TCP ports 80 and 443 with the source IP range set to 0.0.0.0/0.
Local setup
With the cluster up and running, we can configure it. Install the gcloud tool by following the instructions at cloud.google.com/sdk/docs.
Once installed, you can set it up by running the following command:
gcloud auth login
You will also need to install Docker and connect it to GCR for container push:
gcloud auth configure-docker
You can also follow the instructions here to install and set up kubectl (the tool works on Windows, OSX, or Linux):
gcloud components install kubectl
gcloud config set project PROJECT_ID
gcloud config set compute/zone COMPUTE_ZONE
gcloud container clusters get-credentials CLUSTER_NAME
Build the Web application
You can build the web application in any programming language; all we need is an HTTP application listening on a port. Personally, I prefer to build these applications in Go, but for a change, let's try Crystal.
Create a main.cr file:
# crystal-www-example/main.cr
require "http/server"

Signal::INT.trap do
  exit
end

server = HTTP::Server.new do |context|
  context.response.content_type = "text/plain"
  context.response.print "Hello world from crystal-www-example! The time is #{Time.now}"
end

server.bind_tcp("0.0.0.0", 8080)
puts "Listening on http://0.0.0.0:8080"
server.listen
We also need a Dockerfile:
# crystal-www-example/Dockerfile
FROM crystallang/crystal:0.26.1 as builder
COPY main.cr main.cr
RUN crystal build -o /bin/crystal-www-example main.cr --release
ENTRYPOINT [ "/bin/crystal-www-example" ]
We can build and test our Web application with the following commands:
docker build -t gcr.io/PROJECT_ID/crystal-www-example:latest .
docker run -p 8080:8080 gcr.io/PROJECT_ID/crystal-www-example:latest
Then visit localhost:8080 in your browser to access it. After that, we can push our application to GCR by running:
docker push gcr.io/PROJECT_ID/crystal-www-example:latest
Configuring Kubernetes
For this example, we will create several YAML files representing the various services, then apply them to the cluster by running kubectl apply. Kubernetes configuration is declarative: these YAML files tell Kubernetes the state we want to see, and it figures out how to get there.
Things we need to do:
- Create a Deployment and a Service for our crystal-www-example web application;
- Create a DaemonSet and a ConfigMap for Nginx;
- Run a custom application that syncs the nodes' IP addresses with Cloudflare DNS.
Web App configuration
First let's configure the web app (replace PROJECT_ID with your project ID first):
# kubernetes-config/crystal-www-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crystal-www-example
  labels:
    app: crystal-www-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crystal-www-example
  template:
    metadata:
      labels:
        app: crystal-www-example
    spec:
      containers:
      - name: crystal-www-example
        image: gcr.io/PROJECT_ID/crystal-www-example:latest
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: crystal-www-example
spec:
  selector:
    app: crystal-www-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
This creates a Deployment, which tells Kubernetes to create a pod running a single container with our Docker image, and a Service, which provides service discovery within the cluster. To apply this configuration, run (from inside the kubernetes-config folder):
kubectl apply -f .
We can check that it is running with:
kubectl get pod
# you should see something like:
# crystal-www-example-698bbb44c5-l9hj9 1/1 Running 0 5m
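One optional refinement: f1-micro nodes have very little memory, so it can help to give the container explicit resource requests so the scheduler places pods sensibly. Here is a minimal sketch to add under the container in the Deployment above (the values are illustrative assumptions, not part of the original config):
        resources:
          requests:
            cpu: 50m             # a twentieth of a core
            memory: 32Mi
          limits:
            memory: 64Mi         # the kubelet restarts the container if it exceeds this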
We can also create a proxy so that we can access the app:
kubectl proxy
Then visit:
http://localhost:8001/api/v1/namespaces/default/services/crystal-www-example/proxy/
Nginx configuration
Typically, you would use an ingress controller when exposing HTTP services from Kubernetes. Unfortunately, Google's HTTP load balancer is very expensive, so instead we'll run our own HTTP proxy and configure it manually.
We will use a DaemonSet and a ConfigMap. A DaemonSet is an application that runs on every node. A ConfigMap is basically a small file that we can mount into a container; it is where we will store the Nginx configuration.
The YAML looks like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: nginx:1.15.3-alpine
        name: nginx
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        volumeMounts:
        - name: "config"
          mountPath: "/etc/nginx"
      volumes:
      - name: config
        configMap:
          name: nginx-conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    worker_processes 1;
    error_log /dev/stdout info;

    events {
      worker_connections 10;
    }

    http {
      access_log /dev/stdout;

      server {
        listen 80;
        location / {
          proxy_pass http://crystal-www-example.default.svc.cluster.local:8080;
        }
      }
    }
Here you can see how we mount the ConfigMap's nginx.conf inside the Nginx container. We also set two extra fields on the pod spec: hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet.
- hostNetwork: true binds the host's ports so that Nginx is reachable from the outside network;
- dnsPolicy: ClusterFirstWithHostNet lets us resolve services inside the cluster while on the host network.
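As an aside, this setup also answers the earlier question about running multiple applications on different domain names: each additional app is just one more server block in the same ConfigMap, matched by the Host header. A sketch, where the blog Service and hostname are hypothetical:
      server {
        listen 80;
        server_name blog.example.com;
        location / {
          proxy_pass http://blog.default.svc.cluster.local:8080;
        }
      }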
Apply the changes, and you should be able to reach Nginx by visiting a node's public IP address.
You can find it by running:
kubectl get node -o yaml
# look for:
# - address: ...
#   type: ExternalIP
Our web application is now accessible over the Internet. Now think of a nice name for it!
Connecting DNS
We need to set up 3 A DNS records, one for each node of the cluster, and then add a CNAME entry that points at those A records (that is, www.example.com CNAME to kubernetes.example.com). We could do this manually, but it is better to do it automatically so that the DNS records update themselves whenever nodes are added or replaced.
I think this is also a good example of making Kubernetes work for you rather than against it. Kubernetes is fully scriptable and has a powerful API, so you can fill in the gaps with custom components that aren't too hard to write. I built a small Go application for this, which can be found here: kubernetes-cloudflare-sync.
Create an informer:
factory := informers.NewSharedInformerFactory(client, time.Minute)
lister := factory.Core().V1().Nodes().Lister()
informer := factory.Core().V1().Nodes().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		resync()
	},
	UpdateFunc: func(oldObj, newObj interface{}) {
		resync()
	},
	DeleteFunc: func(obj interface{}) {
		resync()
	},
})
informer.Run(stop)
The informer calls my resync function whenever a node changes. I then sync the IP addresses using the Cloudflare API library (github.com/cloudflare/cloudflare-go), with code similar to:
var ips []string
for _, node := range nodes {
	for _, addr := range node.Status.Addresses {
		if addr.Type == core_v1.NodeExternalIP {
			ips = append(ips, addr.Address)
		}
	}
}
sort.Strings(ips)
for _, ip := range ips {
	api.CreateDNSRecord(zoneID, cloudflare.DNSRecord{
		Type:    "A",
		Name:    options.DNSName,
		Content: ip,
		TTL:     120,
		Proxied: false,
	})
}
Just like our web application, we run this app as a Deployment in Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-cloudflare-sync
  labels:
    app: kubernetes-cloudflare-sync
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-cloudflare-sync
  template:
    metadata:
      labels:
        app: kubernetes-cloudflare-sync
    spec:
      serviceAccountName: kubernetes-cloudflare-sync
      containers:
      - name: kubernetes-cloudflare-sync
        image: gcr.io/PROJECT_ID/kubernetes-cloudflare-sync
        args:
        - --dns-name=kubernetes.example.com
        env:
        - name: CF_API_KEY
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: api-key
        - name: CF_API_EMAIL
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: email
You will need to create a Kubernetes secret containing your Cloudflare API key and email address:
kubectl create secret generic cloudflare --from-literal=email='EMAIL' --from-literal=api-key='API_KEY'
You also need to create a service account, which allows our Deployment to access the Kubernetes API to retrieve the nodes. First run the following (this is specific to GKE):
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user YOUR_EMAIL_ADDRESS_HERE
Then apply:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-cloudflare-sync
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubernetes-cloudflare-sync
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-cloudflare-sync-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-cloudflare-sync
subjects:
- kind: ServiceAccount
  name: kubernetes-cloudflare-sync
  namespace: default
With this configuration in place, the app keeps the Cloudflare DNS records updated whenever a node changes. And that's it: a small cluster deployed with Kubernetes.
Conclusion
Kubernetes scales down as gracefully as it scales up. Just as you may never use every feature of a SQL database, you have to admit that SQL databases still dramatically improve your ability to deliver solutions quickly.
Kubernetes is very similar: you will probably never use its entire, enormous feature set, but on every project you can pick out the features that give you a clean deployment. Every time I deploy a small cluster with Kubernetes, I learn something new.
So my point is that Kubernetes makes sense for small deployments too: it is both easy to use and cheap. If you've never tried it, now is the time!