Other options:

  • To build everything from scratch without tools, see: Deploy a Kubernetes cluster step by step with me
  • Minikube runs only on a single server. For details, see the official Install Minikube guide

Official website: kubernetes.io/

Vagrant: if you want to test locally, you can use Vagrant to create virtual machines that simulate a cluster. See the appendix

Official tutorial: kubernetes.io/docs/setup/…

This is not the official tutorial (which requires getting past the GFW); it covers in detail the installation process for a mainland-China network environment

Note: unless otherwise indicated, an Ubuntu 16.04+ environment is assumed

The version tested was the latest at the time of writing:

  • Kubernetes v1.11.2

Server Configuration Requirements

Note: ideally all machines are in the same region, so that they can communicate over the internal network

  • Operating System Requirements
    • Ubuntu 16.04 +
    • Debian 9
    • CentOS 7
    • RHEL 7
    • Fedora 25/26 (best-effort)
    • HypriotOS v1.0.1 +
    • Container Linux (tested with 1800.6.0)
  • 2+ GB RAM
  • 2+ CPUs
  • Full network connectivity between all machines
  • Unique hostname, MAC address, and product_uuid on every node (see the check after this list)
  • Required ports open (not blocked by security groups or firewalls)
  • Disable Swap
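
You can verify MAC-address and product_uuid uniqueness on each machine with the commands the official docs suggest:

ip link                                   # MAC addresses
sudo cat /sys/class/dmi/id/product_uuid   # product_uuid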

Install Docker

Ubuntu

curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun

Start Docker CE

sudo systemctl enable docker
sudo systemctl start docker

Create a Docker user group

Create docker group:

sudo groupadd docker

Add the current user to the docker group:

sudo usermod -aG docker $USER
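
The group change only takes effect after logging out and back in; newgrp docker applies it to the current shell. A quick sanity check (this assumes network access to Docker Hub or a mirror):

newgrp docker
docker run --rm hello-world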

Mirror acceleration:

For systems using systemd, write the following in /etc/docker/daemon.json (create a new file if it does not exist)

Note: if you mainly pull images from Aliyun, its image accelerator is recommended; here the official Docker registry mirror is used as the example

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}

Then restart the service

sudo systemctl daemon-reload
sudo systemctl restart docker

Disable the swap

The kubelet does not work properly with swap enabled, so it must be turned off:

  • Edit the /etc/fstab file and comment out the lines that reference swap
  • sudo swapoff -a
  • Verify: run the top command; if the total in the KiB Swap line is 0, swap is disabled

To disable it permanently:

  • sudo vim /etc/fstab

    Comment out the swap line

  • reboot
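
A minimal non-interactive sketch of both steps (it assumes the fstab entry contains the word "swap"):

sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/swap/ s/^#*/#/' /etc/fstab   # comment out the swap line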

Install kubeadm, kubelet and kubectl

Privileges: root

shell: bash

CentOS

CentOS uses the Aliyun mirror here; note it is not updated as promptly as the USTC mirror

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

Ubuntu

Use USTC's mirror

apt-get update && apt-get install -y apt-transport-https curl
curl -s http://packages.faasx.com/google/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Other Pre-configuration

Configure the CGroup driver on the Master node

Check which cgroup driver Docker is using:

docker info | grep -i cgroup
-> Cgroup Driver: cgroupfs

The kubelet's cgroup driver must match Docker's (here cgroupfs rather than systemd); edit the kubelet drop-in:

sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the following configuration:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

or

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

Restart kubelet

systemctl daemon-reload
systemctl restart kubelet

Remove firewall restrictions

Add the following to /etc/sysctl.conf so that iptables can see bridged traffic:

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then apply the settings:

sysctl -p
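
If sysctl reports those net.bridge.* keys as unknown, the br_netfilter kernel module is probably not loaded yet; loading it first usually resolves this:

sudo modprobe br_netfilter
sudo sysctl -p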

Install the images

The core services started on the Master and worker nodes are as follows:

Master node                                Worker node
etcd-master                                network plug-in (e.g., Calico, Flannel)
kube-apiserver                             kube-proxy
kube-controller-manager                    other apps
kube-dns
network plug-in (e.g., Calico, Flannel)
kube-proxy
kube-scheduler

Run the following command:

kubeadm config images list

This lists the images required by the current version of kubeadm, for example:

k8s.gcr.io/kube-apiserver-amd64:v1.11.2
k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
k8s.gcr.io/kube-scheduler-amd64:v1.11.2
k8s.gcr.io/kube-proxy-amd64:v1.11.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3

Use the following script:

You can also pull the images on a local machine in advance, package them into a tar archive, upload it to the server, and import it there.

The script below pulls Aliyun mirrors of the Google images and re-tags them:

#!/bin/bash
images=(
    kube-apiserver-amd64:v1.11.2
    kube-controller-manager-amd64:v1.11.2
    kube-scheduler-amd64:v1.11.2
    kube-proxy-amd64:v1.11.2
    pause:3.1
    etcd-amd64:3.2.18
    coredns:1.1.3
    pause-amd64:3.1
    kubernetes-dashboard-amd64:v1.10.0
    heapster-amd64:v1.5.4
    heapster-grafana-amd64:v5.0.4
    heapster-influxdb-amd64:v1.5.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

To mirror the foreign images yourself, you can use Docker Hub or the Aliyun container registry auto-build service.

Simple steps are as follows:

  • Set up a GitHub repository with a file like this:

    etcd-amd64/Dockerfile

    FROM gcr.io/google_containers/etcd-amd64:3.2.18
    LABEL maintainer="[email protected]"
    LABEL version="1.0"
    LABEL description="kubernetes"
  • Then create an Auto Build repository in the image registry that tracks the GitHub repo and rebuilds the image on changes

The specific steps are not the focus of this article

Check that the required ports are free

The Master node

Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443*         Kubernetes API server     All
TCP        Inbound     2379-2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API               Self, Control plane
TCP        Inbound     10251         kube-scheduler            Self
TCP        Inbound     10252         kube-controller-manager   Self

The Worker nodes

Protocol   Direction   Port Range    Purpose             Used By
TCP        Inbound     10250         Kubelet API         Self, Control plane
TCP        Inbound     30000-32767   NodePort Services   All
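
A quick way to confirm none of these ports are already occupied (a sketch using ss; netstat works too, and no output means the ports are free):

ss -tlnp | grep -E ':(6443|2379|2380|10250|10251|10252)'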

Initialize kubeadm

sudo kubeadm init --kubernetes-version=v1.11.2 --apiserver-advertise-address=<your-ip> --pod-network-cidr=192.168.0.0/16

Init main parameters:

  • --kubernetes-version: the Kubernetes version to install. If not specified, the latest version information is downloaded from Google's site.
  • --pod-network-cidr: the IP address range for the pod network. The value depends on the network plug-in chosen in the next step; this article uses Calico, which requires 192.168.0.0/16.
  • --apiserver-advertise-address: the IP address the master advertises. If not specified, the network interface is detected automatically, usually yielding the internal IP.
  • --feature-gates=CoreDNS: whether to use CoreDNS (true/false). CoreDNS was promoted to Beta in 1.10 and will eventually become the default DNS for Kubernetes.

After init succeeds, configure kubectl for your user as the kubeadm output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
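
If you are working as root, you can instead point kubectl directly at the admin config, as the kubeadm output also suggests:

export KUBECONFIG=/etc/kubernetes/admin.conf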

Testing:

curl -k https://127.0.0.1:6443 or curl -k https://<master-ip>:6443

The response is as follows:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

Install the network plug-in: Calico

Pull the images first:

docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3
docker pull quay.io/calico/typha:v0.7.4

Calico is used here; see the official documentation for more details and for a comparison of the different network plug-ins

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

For Calico to work properly, you must pass --pod-network-cidr=192.168.0.0/16 to kubeadm init

After the network plug-in is installed, you can check the operating status of CoreDNS POD to determine whether the network plug-in is working properly:

kubectl get pods --all-namespaces

# The output is as follows:
# Note: CoreDNS takes time to start and shows Pending at first
NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE
kube-system   calico-node-lxz4c                      0/2       ContainerCreating   0          4m
kube-system   coredns-78fcdf6894-7xwn7               0/1       Pending             0          5m
kube-system   coredns-78fcdf6894-c2pq8               0/1       Pending             0          5m
kube-system   etcd-iz948lz3o7sz                      1/1       Running             0          5m
kube-system   kube-apiserver-iz948lz3o7sz            1/1       Running             0          5m
kube-system   kube-controller-manager-iz948lz3o7sz   1/1       Running             0          5m
kube-system   kube-proxy-wcj2r                       1/1       Running             0          5m
kube-system   kube-scheduler-iz948lz3o7sz            1/1       Running             0          4m


Note: if a pod stays stuck in ContainerCreating, it may still be waiting for the pause-amd64 image; make sure that image is available on the node

Wait for the CoreDNS pods to reach the Running state before continuing to add worker nodes
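
You can watch the pods flip to Running as it happens (-w streams status updates):

kubectl get pods -n kube-system -w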

Adding another Node

For security reasons, your cluster will not schedule pods on the master by default. If you want the master to take part in scheduling, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

or

kubectl taint nodes k8s-node1 node-role.kubernetes.io/master-

The output might look like the following:

node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

This removes the node-role.kubernetes.io/master taint from every node that has it, including the master, so pods can afterwards be scheduled anywhere

On each worker node, run the join command that kubeadm init printed:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
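
If you no longer have the join command, you can regenerate one on the master (supported by recent kubeadm versions):

kubeadm token create --print-join-command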

After a few seconds, you can run kubectl get nodes on the master to see the new machine:

NAME     STATUS   ROLES    AGE   VERSION
centos   Ready    master   13m   v1.11.2
ubuntu   Ready    <none>   13m   v1.11.2

Install the visual Dashboard UI

Official tutorial: github.com/kubernetes/…

Prepare the images:

k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

# The following images are for the plug-ins
k8s.gcr.io/heapster-amd64:v1.5.4
k8s.gcr.io/heapster-grafana-amd64:v5.0.4
k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

Recommended setup:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Alternative (custom) setup:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

Expose the dashboard via NodePort

kubectl -n kube-system edit service kubernetes-dashboard
# Edit content as follows:
  ports:
  - nodePort: 32576
    port: 443
    protocol: TCP
    targetPort: 8443
  type: NodePort

Check the service:

kubectl -n kube-system get service kubernetes-dashboard
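
The dashboard should then be reachable on any node at the NodePort shown (32576 above). It serves HTTPS with a self-signed certificate, so a quick check needs -k:

curl -k https://<node-ip>:32576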

Configure the admin

vim kubernetes-dashboard-admin.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

kubectl create -f kubernetes-dashboard-admin.yaml

Log in

Kubeconfig login

Note: the basic auth mode is not recommended

Create the admin user

file: admin-role.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

kubectl create -f admin-role.yaml

Get the token

kubectl -n kube-system get secret | grep admin-token
-> admin-token-tdvfz   kubernetes.io/service-account-token   3   5s
kubectl -n kube-system describe secret admin-token-tdvfz

Set the Kubeconfig file

cp ~/.kube/config /path/to/Kubeconfig
vim Kubeconfig
# Add the token obtained above to it


Token login

Get the token:

kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token

Get the admin token:

kubectl -n kube-system describe secret/$(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') | grep token


If you log in as the default user instead, you will likely hit permission errors

Integrate Heapster

Note: by default the Kubernetes master does not take part in pod scheduling, and Heapster normally needs to run on a worker node. If you only have a master, it may fail to start

Install heapster

mkdir heapster
cd heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

Then modify heapster.yaml

--source=kubernetes:https://10.209.3.82:6443    # change to your own IP
--sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

or

--source=kubernetes.summary_api:https://kubernetes.default.svc?inClusterConfig=false&kubeletHttps=true&kubeletPort=10250&insecure=true&auth=
--sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

In all of the downloaded files, change the image registry to registry.cn-hangzhou.aliyuncs.com/google_containers/

For example: k8s.gcr.io/heapster-grafana-amd64:v5.0.4 -> registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4
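
A sed one-liner can do the replacement across the downloaded files (a sketch; review the result before applying):

sed -i 's#k8s\.gcr\.io/#registry.cn-hangzhou.aliyuncs.com/google_containers/#g' *.yaml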

Alternatively, the Kubernetes ClusterIP can be used as the source:

kubectl get service

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   1d

Then modify heapster.yaml:

command:
- /heapster
- --source=kubernetes:https://10.0.0.1
- --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

Finally, create everything:

kubectl create -f ./


Official use case deployment

Sock Shop

The official demo is a sock shop, deployed as a set of microservices

kubectl create namespace sock-shop

kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

As before, wait for the pods to reach the Running state; the installation is then complete

The shop is broken up into many microservice modules: payment, user, order, shopping cart, and so on. Let's take a look at the front end; if it works, cross-host network access is functioning.

We can see that the front-end module is bound to a nodePort of 30001

To view the port exposed by the front-end service, run: kubectl -n sock-shop get svc front-end

Visit http://192.168.80.25:30001 (replace with one of your node IPs) to open the shop

If adding an item to the shopping cart works end to end, the cluster is set up correctly.

Uninstall Sock Shop: kubectl delete namespace sock-shop

Appendix:

Use Vagrant to simulate a cluster locally

The host environment is Ubuntu 16.04 LTS. Download the latest version of Vagrant from the official site and install it.

sudo dpkg -i vagrant_2.1.2_x86_64.deb
sudo apt install virtualbox
# If the installation fails, the Linux kernel headers are probably missing; install them manually

Download Ubuntu Box with Docker

vagrant box add comiq/dockerbox

Or, after downloading the box manually from the official site, add it locally:

vagrant box add ubuntu-xenial-docker /path/to/file.box

Using the configuration file to start the VM:

File: Vagrantfile

Related parameters:

  • config.vm.define sets the machine name
  • v.memory sets the memory-related parameters
  • node.vm.box sets the box to use
  • node.vm.hostname sets the hostname
  • node.vm.synced_folder sets the shared directory

Vagrant.configure("2") do |config|
  (1..2).each do |i|

    config.vm.define "node#{i}" do |node|

      # Set the box of the virtual machine
      node.vm.box = "comiq/dockerbox"

      # Set the hostname of the VM
      node.vm.hostname = "node#{i}"

      # Set the IP address of the VM
      node.vm.network "private_network", ip: "192.168.59.#{i}"

      # Set the shared directory between host and VM
      node.vm.synced_folder "~/public", "/home/vagrant/share"

      # VirtualBox provider settings
      node.vm.provider "virtualbox" do |v|

        # Set the name of the VM
        v.name = "node#{i}"

        # Set the memory size of the VM (MB)
        v.memory = 1200

        # Set the number of CPUs of the VM
        v.cpus = 1
      end

    end
  end
end

Note: adjust the network address to match your own bridge, as follows:

Run ifconfig -a and find the network adapter whose inet address starts with 192.168, as in the following example:

docker_gwbridge  Link encap:Ethernet  HWaddr 02:42:b6:f5:a0:ec
                 inet addr:172.19.0.1  Bcast:172.19.255.255  Mask:255.255.0.0

enp2s0           Link encap:Ethernet  HWaddr 50:7b:9d:d2:66:0b

lo               Link encap:Local Loopback
                 inet addr:127.0.0.1  Mask:255.0.0.0

wlp4s0           Link encap:Ethernet  HWaddr 44:1c:a8:24:85:1b
                 inet addr:192.168.3.136  Bcast:192.168.3.255  Mask:255.255.255.0

Here wlp4s0 (192.168.3.136) is the adapter whose address starts with 192.168.

First startup:

mkdir vagrant
cd vagrant
cp /path/to/vagrantfile Vagrantfile
vagrant up

Afterwards, simply run vagrant up in that directory

Enter a box: vagrant ssh <name>

For example: vagrant ssh master

Shut down: vagrant halt

Export and import Docker images

docker save quay.io/calico/cni > calico_cni.tar
docker load < calico_cni.tar
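
docker save also accepts several images at once, so related images can travel as one archive; a sketch for the Calico images used above:

docker save quay.io/calico/node:v3.1.3 quay.io/calico/cni:v3.1.3 quay.io/calico/typha:v0.7.4 -o calico-images.tar
docker load -i calico-images.tar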

Reset the cluster

It is dangerous, please use with caution

sudo kubeadm reset

Convert docker-compose to Kubernetes Resources

Install Kompose

Download the binary from the GitHub release page:

# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.1.0/kompose-linux-amd64 -o kompose

# macOS
curl -L https://github.com/kubernetes/kompose/releases/download/v1.1.0/kompose-darwin-amd64 -o kompose

# Windows
curl -L https://github.com/kubernetes/kompose/releases/download/v1.1.0/kompose-windows-amd64.exe -o kompose.exe

chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
Go

Installing using go get pulls from the master branch with the latest development changes.

go get -u github.com/kubernetes/kompose

Use Kompose

  1. docker-compose.yml

      version: "2"
    
      services:
    
        redis-master:
          image: k8s.gcr.io/redis:e2e 
          ports:
            - "6379"
    
        redis-slave:
          image: gcr.io/google_samples/gb-redisslave:v1
          ports:
            - "6379"
          environment:
            - GET_HOSTS_FROM=dns
    
        frontend:
          image: gcr.io/google-samples/gb-frontend:v4
          ports:
            - "80:80"
          environment:
            - GET_HOSTS_FROM=dns
          labels:
            kompose.service.type: LoadBalancer
  2. Run the kompose up command to deploy to Kubernetes directly, or skip to the next step instead to generate a file to use with kubectl.

      $ kompose up
      We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. 
      If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. 
    
      INFO Successfully created Service: redis          
      INFO Successfully created Service: web            
      INFO Successfully created Deployment: redis       
      INFO Successfully created Deployment: web         
    
      Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
  3. To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl create -f <output file>.

      $ kompose convert                           
      INFO Kubernetes file "frontend-service.yaml" created         
      INFO Kubernetes file "redis-master-service.yaml" created     
      INFO Kubernetes file "redis-slave-service.yaml" created      
      INFO Kubernetes file "frontend-deployment.yaml" created      
      INFO Kubernetes file "redis-master-deployment.yaml" created  
      INFO Kubernetes file "redis-slave-deployment.yaml" created   
      $ kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
      service "frontend" created
      service "redis-master" created
      service "redis-slave" created
      deployment "frontend" created
      deployment "redis-master" created
      deployment "redis-slave" created

    Your deployments are running in Kubernetes.

  4. Access your application.

    If you’re already using minikube for your development process:

      $ minikube service frontend

    Otherwise, let’s look up what IP your service is using!

    $ kubectl describe svc frontend
    Name:                   frontend
    Namespace:              default
    Labels:                 service=frontend
    Selector:               service=frontend
    Type:                   LoadBalancer
    IP:                     10.0.0.183
    LoadBalancer Ingress:   123.45.67.89
    Port:                   80      80/TCP
    NodePort:               80      31144/TCP
    Endpoints:              172.17.0.4:80
    Session Affinity:       None
    No events.

    If you’re using a cloud provider, your IP will be listed next to LoadBalancer Ingress.

    $ curl http://123.45.67.89

Image query

Look directly at the image field in the relevant YAML files
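
For example, to list every image referenced by the pods currently in the cluster (a one-liner sketch using the documented jsonpath output):

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u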

Best practices

kubernetes.io/docs/concep…