For the next while I will be studying the Kubernetes source code. To make the code easier to read and debug, I set up a Kubernetes development and debugging environment; with it, breakpoint debugging can be used to follow how the code actually runs.

Prepare the VM and install required software

$ yum install -y docker wget curl vim golang etcd openssl git g++ gcc
$ cat >> /etc/profile <<'EOF'
export GOROOT=/usr/local/go
export GOPATH=/home/workspace
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF
$ source /etc/profile

Install the certificate generation tool

$ go get -u -v github.com/cloudflare/cfssl/cmd/...

Install the debugging tool

$ go get -u -v github.com/go-delve/delve/cmd/dlv

**Note:** if either of the two tools above fails to install (for example, because of network problems), clone its source directly under $GOPATH/src/github.com, enter the corresponding directory, and run make install. After a successful install, the executable appears in the bin directory under $GOPATH.
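The fallback described above can be sketched as a small helper. This is only an illustration: the repository path is an example, and the assumption is that the project provides a make install target that drops its binary into $GOPATH/bin (both cfssl and delve do).

```shell
# Hypothetical helper: build a Go tool from source when `go get` fails.
install_from_source() {
  local repo="$1"                       # e.g. github.com/go-delve/delve
  local dir="$GOPATH/src/$repo"
  git clone "https://$repo.git" "$dir" &&
    cd "$dir" &&
    make install                        # binary lands in $GOPATH/bin
}

# Usage (commented out so this file can be sourced safely):
# install_from_source github.com/go-delve/delve
```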

Prepare source code and compile debug version

Clone Kubernetes code

$ git clone https://github.com/kubernetes/kubernetes.git $GOPATH/src/k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ git checkout -b v1.20.1 v1.20.1    # switch to the v1.20.1 tag

Clone etcd code

$ git clone https://github.com/etcd-io/etcd.git $GOPATH/src/k8s.io/etcd
$ cd $GOPATH/src/k8s.io/etcd
$ git checkout -b v3.4.13 v3.4.13    # switch to the v3.4.13 tag

Run the single-node cluster startup script

$ cd $GOPATH/src/k8s.io/kubernetes
$ ./hack/local-up-cluster.sh    # the first run compiles everything and takes a long time, so wait patiently
make: Entering directory `/home/workspace/src/k8s.io/kubernetes'
make[1]: Entering directory `/home/workspace/src/k8s.io/kubernetes'
make[1]: Leaving directory `/home/workspace/src/k8s.io/kubernetes'
make[1]: Entering directory `/home/workspace/src/k8s.io/kubernetes'
+++ [0818 08:46:21] Building go targets for linux/amd64: ./vendor/k8s.io/code-generator/cmd/prerelease-lifecycle-gen
+++ [0818 08:46:29] Building go targets for linux/amd64: ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0818 08:46:49] Building go targets for linux/amd64: ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0818 08:47:10] Building go targets for linux/amd64: ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0818 08:48:04] Building go targets for linux/amd64: ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
+++ [0818 08:48:35] Building go targets for linux/amd64: ./vendor/github.com/go-bindata/go-bindata/go-bindata
make[1]: Leaving directory `/home/workspace/src/k8s.io/kubernetes'
+++ [0818 08:48:38] Building go targets for linux/amd64: cmd/kubectl cmd/kube-apiserver cmd/kube-controller-manager cmd/cloud-controller-manager cmd/kubelet cmd/kube-proxy cmd/kube-scheduler
make: Leaving directory `/home/workspace/src/k8s.io/kubernetes'
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Kubelet cgroup driver defaulted to use: systemd
/home/workspace/src/k8s.io/kubernetes/third_party/etcd:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/go/bin:/home/workspace/bin:/root/bin
etcd version 3.4.13 or greater required. You can use 'hack/install-etcd.sh' to install a copy in third_party/.

Fix the problems reported above

$ ./hack/install-etcd.sh    # this also takes a while
Downloading https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz succeed
etcd v3.4.13 installed. To use:
export PATH="/home/workspace/src/k8s.io/kubernetes/third_party/etcd:${PATH}"

# If the package above cannot be downloaded (e.g. on an intranet), build etcd
# manually and create the symlink yourself:
$ go env -w GOPROXY=https://goproxy.cn,direct
$ cd $GOPATH/src/k8s.io/etcd
$ go mod vendor
$ make
$ cp bin/etcd $GOPATH/src/k8s.io/kubernetes/third_party/etcd-v3.4.13-linux-amd64/
$ cd $GOPATH/src/k8s.io/kubernetes/third_party
$ ln -fns etcd-v3.4.13-linux-amd64 etcd
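To confirm the binary the script will pick up really satisfies the 3.4.13 minimum, a quick sketch using GNU sort -V for the version comparison (the etcd path assumes the symlink created above, and the awk pattern assumes etcd's usual "etcd Version: x.y.z" output):

```shell
# version_ge A B -> succeeds when version A >= version B (relies on GNU sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check the etcd binary that local-up-cluster.sh will find.
ETCD_BIN="$GOPATH/src/k8s.io/kubernetes/third_party/etcd/etcd"
if have="$("$ETCD_BIN" --version 2>/dev/null | awk '/etcd Version/ {print $3}')" \
   && version_ge "$have" 3.4.13; then
  echo "etcd $have is new enough"
else
  echo "etcd missing or older than 3.4.13" >&2
fi
```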

#Execute the following script again
$ ./hack/local-up-cluster.sh -O    # -O reuses the binaries from the last build instead of recompiling; if the script comes up cleanly, the single-node cluster started successfully
skipped the build.
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Kubelet cgroup driver defaulted to use: systemd
/home/workspace/src/k8s.io/kubernetes/third_party/etcd:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/go/bin:/home/workspace/bin:/root/bin
API SERVER secure port is free, proceeding...
Detected host and ready to start services.  Doing some housekeeping first...
Using GO_OUT /home/workspace/src/k8s.io/kubernetes/_output/bin
Starting services now!
Starting etcd
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.oUDPwStJvj --listen-client-urls http://127.0.0.1:2379 --log-level=debug > "/tmp/etcd.log" 2>/dev/null
Waiting for etcd to come up.
+++ [0818 09:28:25] On try 2, etcd: : {"health":"true"}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
Generating a 2048 bit RSA private key
..........................................+++
........+++
writing new private key to '/var/run/kubernetes/server-ca.key'
-----
Generating a 2048 bit RSA private key
.....................+++
..............................+++
writing new private key to '/var/run/kubernetes/client-ca.key'
-----
Generating a 2048 bit RSA private key
......................+++
...........+++
writing new private key to '/var/run/kubernetes/request-header-ca.key'
-----
2021/08/18 09:28:26 [INFO] generate received request
2021/08/18 09:28:26 [INFO] received CSR
2021/08/18 09:28:26 [INFO] generating key: rsa-2048
2021/08/18 09:28:26 [INFO] encoded CSR
2021/08/18 09:28:26 [INFO] signed certificate with serial number 194466819854548695917808765884807705197966893514
2021/08/18 09:28:26 [INFO] generate received request
2021/08/18 09:28:26 [INFO] received CSR
2021/08/18 09:28:26 [INFO] generating key: rsa-2048
2021/08/18 09:28:26 [INFO] encoded CSR
2021/08/18 09:28:26 [INFO] signed certificate with serial number 260336220793200272831981224847592195960126439635
2021/08/18 09:28:26 [INFO] generate received request
2021/08/18 09:28:26 [INFO] received CSR
2021/08/18 09:28:26 [INFO] generating key: rsa-2048
2021/08/18 09:28:27 [INFO] encoded CSR
2021/08/18 09:28:27 [INFO] signed certificate with serial number 65089740821038541129206557808263443740004405579
2021/08/18 09:28:27 [INFO] generate received request
2021/08/18 09:28:27 [INFO] received CSR
2021/08/18 09:28:27 [INFO] generating key: rsa-2048
2021/08/18 09:28:27 [INFO] encoded CSR
2021/08/18 09:28:27 [INFO] signed certificate with serial number 132854604953276655528110960324766766236121022003
2021/08/18 09:28:27 [INFO] generate received request
2021/08/18 09:28:27 [INFO] received CSR
2021/08/18 09:28:27 [INFO] generating key: rsa-2048
2021/08/18 09:28:28 [INFO] encoded CSR
2021/08/18 09:28:28 [INFO] signed certificate with serial number 509837311282419173394280736377387964609293588759
2021/08/18 09:28:28 [INFO] generate received request
2021/08/18 09:28:28 [INFO] received CSR
2021/08/18 09:28:28 [INFO] generating key: rsa-2048
2021/08/18 09:28:29 [INFO] encoded CSR
2021/08/18 09:28:29 [INFO] signed certificate with serial number 315360946686297324979111988589987330989361014387
2021/08/18 09:28:29 [INFO] generate received request
2021/08/18 09:28:29 [INFO] received CSR
2021/08/18 09:28:29 [INFO] generating key: rsa-2048
2021/08/18 09:28:29 [INFO] encoded CSR
2021/08/18 09:28:29 [INFO] signed certificate with serial number 100953735645209745033637871773067470306482613638
2021/08/18 09:28:29 [INFO] generate received request
2021/08/18 09:28:29 [INFO] received CSR
2021/08/18 09:28:29 [INFO] generating key: rsa-2048
2021/08/18 09:28:30 [INFO] encoded CSR
2021/08/18 09:28:30 [INFO] signed certificate with serial number 719845605910186326782888521350059363008185160184
Waiting for apiserver to come up
+++ [0818 09:28:40] On try 7, apiserver: : ok
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver-kubelet-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-csr created
Cluster "local-up-cluster" set.
use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.apps/kube-dns created
Kube-dns addon successfully deployed.
WARNING : The kubelet is configured to not fail even if swap is enabled; production deployments should disable swap.
2021/08/18 09:28:43 [INFO] generate received request
2021/08/18 09:28:43 [INFO] received CSR
2021/08/18 09:28:43 [INFO] generating key: rsa-2048
2021/08/18 09:28:44 [INFO] encoded CSR
2021/08/18 09:28:44 [INFO] signed certificate with serial number 185139553486611438539303446897387693685807865483
kubelet ( 120208 ) is running.
wait kubelet ready
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
127.0.0.1   NotReady   <none>   2s    v1.20.1
2021/08/18 09:29:02 [INFO] generate received request
2021/08/18 09:29:02 [INFO] received CSR
2021/08/18 09:29:02 [INFO] generating key: rsa-2048
2021/08/18 09:29:02 [INFO] encoded CSR
2021/08/18 09:29:02 [INFO] signed certificate with serial number 88607895468152274682757139244368051799851531300
Create default storage class for
storageclass.storage.k8s.io/standard created
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log

  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, you can open up another terminal/tab and run:

  export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
  cluster/kubectl.sh

Alternatively, you can write to the default kubeconfig:

  export KUBERNETES_PROVIDER=local

  cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
  cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
  cluster/kubectl.sh config set-context local --cluster=local --user=myself
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh

Check the startup parameters and PID of Kubernetes components

$ ps -ef|grep kube|grep -v grep

You can see seven processes running:
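To pull out just the PID and command of each component from that listing, a small sketch (the column positions assume the standard ps -ef layout, where PID is column 2 and the command is column 8):

```shell
# kube_pids: print "PID COMMAND" for kubernetes-related processes.
kube_pids() {
  ps -ef | grep -E 'kube|etcd' | grep -v grep | awk '{print $2, $8}'
}
kube_pids
```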

root 119660 119248 10 09:28 pts/4 00:00:50 /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-apiserver --authorization-mode=Node,RBAC --cloud-provider= --cloud-config= --v=3 --vmodule= --audit-policy-file=/tmp/kube-audit-policy-file --audit-log-path=/tmp/kube-apiserver-audit.log --authorization-webhook-config-file= --authentication-token-webhook-config-file= --cert-dir=/var/run/kubernetes --client-ca-file=/var/run/kubernetes/client-ca.crt --kubelet-client-certificate=/var/run/kubernetes/client-kube-apiserver.crt --kubelet-client-key=/var/run/kubernetes/client-kube-apiserver.key --service-account-key-file=/tmp/kube-serviceaccount.key --service-account-lookup=true --service-account-issuer=https://kubernetes.default.svc --service-account-signing-key-file=/tmp/kube-serviceaccount.key --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,Priority,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --disable-admission-plugins= --admission-control-config-file= --bind-address=0.0.0.0 --secure-port=6443 --tls-cert-file=/var/run/kubernetes/serving-kube-apiserver.crt --tls-private-key-file=/var/run/kubernetes/serving-kube-apiserver.key --storage-backend=etcd3 --storage-media-type=application/vnd.kubernetes.protobuf --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/24 --feature-gates=AllAlpha=false --external-hostname=localhost --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-client-ca-file=/var/run/kubernetes/request-header-ca.crt --requestheader-allowed-names=system:auth-proxy --proxy-client-cert-file=/var/run/kubernetes/client-auth-proxy.crt --proxy-client-key-file=/var/run/kubernetes/client-auth-proxy.key --cors-allowed-origins=/127.0.0.1(:[0-9]+)?$,/localhost(:[0-9]+)?$
root 120034 119248 3 09:28 pts/4 00:00:16 /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-controller-manager --v=3 --vmodule= --service-account-private-key-file=/tmp/kube-serviceaccount.key --service-cluster-ip-range=10.0.0.0/24 --root-ca-file=/var/run/kubernetes/server-ca.crt --cluster-signing-cert-file=/var/run/kubernetes/client-ca.crt --cluster-signing-key-file=/var/run/kubernetes/client-ca.key --enable-hostpath-provisioner=false --pvclaimbinder-sync-period=15s --feature-gates=AllAlpha=false --cloud-provider= --cloud-config= --kubeconfig /var/run/kubernetes/controller.kubeconfig --use-service-account-credentials --controllers=* --leader-elect=false --cert-dir=/var/run/kubernetes --master=https://localhost:6443
root 120036 119248 1 09:28 pts/4 00:00:05 /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-scheduler --v=3 --config=/tmp/kube-scheduler.yaml --feature-gates=AllAlpha=false --master=https://localhost:6443
root 120208 119248 0 09:28 pts/4 00:00:00 sudo -E /home/workspace/src/k8s.io/kubernetes/_output/bin/kubelet --v=3 --vmodule= --chaos-chance=0.0 --container-runtime=docker --hostname-override=127.0.0.1 --cloud-provider= --cloud-config= --bootstrap-kubeconfig=/var/run/kubernetes/kubelet.kubeconfig --kubeconfig=/var/run/kubernetes/kubelet-rotated.kubeconfig --config=/tmp/kubelet.yaml
root 120211 120208 3 09:28 pts/4 00:00:15 /home/workspace/src/k8s.io/kubernetes/_output/bin/kubelet --v=3 --vmodule= --chaos-chance=0.0 --container-runtime=docker --hostname-override=127.0.0.1 --cloud-provider= --cloud-config= --bootstrap-kubeconfig=/var/run/kubernetes/kubelet.kubeconfig --kubeconfig=/var/run/kubernetes/kubelet-rotated.kubeconfig --config=/tmp/kubelet.yaml
root 120957 119248 0 09:29 pts/4 00:00:00 sudo /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-proxy --v=3 --config=/tmp/kube-proxy.yaml --master=https://localhost:6443
root 120980 120957 0 09:29 pts/4 00:00:00 /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-proxy --v=3 --config=/tmp/kube-proxy.yaml --master=https://localhost:6443

Configure remote debugging

The remote file-sharing tool I use here is Samba; I won't go into its configuration, as plenty of guides are available online.

Use GoLand on Windows or Linux to open the Kubernetes source code on the virtual machine.

Main menu -> Run -> Edit Configurations…

If Edit Configurations… does not appear there, click the Debug button and select Edit Configurations in the pop-up.

Click + to select Go Remote

Host is the IP address of the virtual machine; Port is the port the dlv service will listen on later. The dlv startup command is also shown in the middle of the dialog. Then set a breakpoint in the main function of kubernetes/cmd/kube-apiserver/apiserver.go, or anywhere else you like.

Save the code and breakpoint information, then compile a debug build of just this module, using kube-apiserver as an example. GOLDFLAGS="" keeps the linker from stripping debug symbols; in recent releases you can also pass GOGCFLAGS="all=-N -l" to disable compiler optimizations and inlining if stepping behaves oddly.

$ make GOLDFLAGS="" WHAT="cmd/kube-apiserver"

Run the single-node cluster startup script again to debug kube-apiserver

$ ./hack/local-up-cluster.sh -O    # bring the cluster up again with the freshly built binaries; kube-apiserver will then be stopped and restarted under dlv

Manually stop the kube-apiserver process

$ ps -ef|grep kube-apiserver|grep -v grep
$ kill -15 119660
$ ps -ef|grep kube-apiserver|grep -v grep
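Rather than hard-coding the PID (119660 above, which changes on every run), the process can be looked up and stopped by name. A sketch, assuming pgrep is available:

```shell
# SIGTERM (signal 15) asks kube-apiserver to shut down gracefully.
# pgrep -f matches against the full command line, so the _output/bin path
# distinguishes it from kubectl greps and the like.
pid="$(pgrep -f '_output/bin/kube-apiserver' || true)"
if [ -n "$pid" ]; then
  kill -15 "$pid"
else
  echo "kube-apiserver is not running"
fi
```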

Start kube-apiserver with dlv

$ dlv --listen=:39999 --headless=true --api-version=2 exec \
    /home/workspace/src/k8s.io/kubernetes/_output/bin/kube-apiserver -- \
    --authorization-mode=Node,RBAC --cloud-provider= --cloud-config= --v=3 --vmodule= \
    --audit-policy-file=/tmp/kube-audit-policy-file --audit-log-path=/tmp/kube-apiserver-audit.log \
    --authorization-webhook-config-file= --authentication-token-webhook-config-file= \
    --cert-dir=/var/run/kubernetes --client-ca-file=/var/run/kubernetes/client-ca.crt \
    --kubelet-client-certificate=/var/run/kubernetes/client-kube-apiserver.crt \
    --kubelet-client-key=/var/run/kubernetes/client-kube-apiserver.key \
    --service-account-key-file=/tmp/kube-serviceaccount.key --service-account-lookup=true \
    --service-account-issuer=https://kubernetes.default.svc \
    --service-account-signing-key-file=/tmp/kube-serviceaccount.key \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,Priority,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --disable-admission-plugins= --admission-control-config-file= \
    --bind-address=0.0.0.0 --secure-port=6443 \
    --tls-cert-file=/var/run/kubernetes/serving-kube-apiserver.crt \
    --tls-private-key-file=/var/run/kubernetes/serving-kube-apiserver.key \
    --storage-backend=etcd3 --storage-media-type=application/vnd.kubernetes.protobuf \
    --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/24 \
    --feature-gates=AllAlpha=false --external-hostname=localhost \
    --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-client-ca-file=/var/run/kubernetes/request-header-ca.crt \
    --requestheader-allowed-names=system:auth-proxy \
    --proxy-client-cert-file=/var/run/kubernetes/client-auth-proxy.crt \
    --proxy-client-key-file=/var/run/kubernetes/client-auth-proxy.key \
    '--cors-allowed-origins=/127.0.0.1(:[0-9]+)?$,/localhost(:[0-9]+)?$'

Note: the parameters passed to kube-apiserver after the -- separator must match the original process's startup parameters, which you can look up with ps -ef | grep kube-apiserver | grep -v grep; reuse them unchanged when starting under dlv, as above.

Debugging screenshots

References:

  1. github.com/kubernetes/…
  2. github.com/derekparker…
  3. github.com/kubernetes/…