Introduction

This is an example of deploying an egg.js application to the cloud ☁️. The author is part of a large front-end team, and this setup has been used in production.

CI/CD, DevOps, GitOps, and HPA will not be discussed here, because each of those topics deserves a long article of its own.

The prerequisites for this experiment are:

  • An available Kubernetes cluster
  • kube-prometheus-stack has been deployed in the cluster
  • Traefik v2.2 has been deployed in the cluster as the Ingress Controller
  • Helm v3 has been installed

The sample project

You can run the experiments with it directly (more useful than an article that merely describes a cloud deployment).

GitHub: k8s-eggjs

This example simply provides two interfaces:

/api/posts

curl -X POST http://localhost:7001/api/posts --data '{"title":"post1", "content": "post1 content"}' --header 'Content-Type:application/json; charset=UTF-8'

/api/topics

curl -X POST http://localhost:7001/api/topics --data '{"title":"topic1", "content": "topic1 content"}' --header 'Content-Type:application/json; charset=UTF-8'
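
For reference, the two endpoints can be wired up roughly as follows in egg.js. This is a minimal sketch rather than the sample project's exact code; the file, controller, and method names here are assumptions.

// app/router.js
module.exports = app => {
  const { router, controller } = app;
  router.post('/api/posts', controller.posts.create);
  router.post('/api/topics', controller.topics.create);
};

// app/controller/posts.js (controller/topics.js is analogous)
const { Controller } = require('egg');

class PostsController extends Controller {
  async create() {
    const { ctx } = this;
    // egg's bodyParser parses the JSON payload sent by the curl commands above
    const { title, content } = ctx.request.body;
    ctx.status = 201;
    ctx.body = { title, content };
  }
}

module.exports = PostsController;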

The author also deployed this project to

  • k8seggjs.hacker-linner.com

Hands-on: deploying to the cloud

(The sample project has been provided, so you can experiment with it directly.)

Scripts

The start script in package.json is simply changed to:

 "start": "egg-scripts start --workers=1 --title=egg-server-k8s-eggjs-promethues".Copy the code

It is best to start in a single process and leave the orchestration of the application container entirely to Kubernetes.

Related Egg issues about K8s deployment

Preparing the Docker image

The file is located at docker/Dockerfile.prod

FROM node:15-alpine

RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo "Asia/Shanghai" > /etc/timezone

COPY package.json /app/dependencies/package.json
COPY yarn.lock /app/dependencies/yarn.lock
RUN cd /app/dependencies \
  && yarn install --frozen-lockfile --registry=https://registry.npm.taobao.org \
  && yarn cache clean \
  && mkdir /app/egg \
  && ln -s /app/dependencies/node_modules /app/egg/node_modules

COPY ./ /app/egg/

WORKDIR /app/egg
EXPOSE 7001

CMD npm run start

Build Image

docker build -f docker/Dockerfile.prod -t k8s-eggjs-promethues:1.0.0 . --no-cache

Tag the image. My test image is hosted on Alibaba Cloud (the company has its own private registry).

docker tag k8s-eggjs-promethues:1.0.0 registry.cn-shenzhen.aliyuncs.com/hacker-linner/k8s-eggjs-promethues:1.0.0

Push to Aliyun

docker push registry.cn-shenzhen.aliyuncs.com/hacker-linner/k8s-eggjs-promethues:1.0.0

Helm Chart(k8s-helm-charts)

(The sample project has been provided, so you can experiment with it directly.)

Generate deployment Chart

mkdir k8s-helm-charts && cd k8s-helm-charts
helm create k8seggjs

Copy k8seggjs/values.yaml up one level, alongside the k8seggjs folder (k8s-helm-charts/values.yaml).

Make the following modifications to k8s-helm-charts/values.yaml:

replicaCount: 3 # I use 3 replicas for load balancing, to keep the service available

image:
  repository: registry.cn-shenzhen.aliyuncs.com/hacker-linner/k8s-eggjs-promethues # The image pushed above
  pullPolicy: Always # Image pull policy; the default 'IfNotPresent' also works

# apiPort and metricsPort replace the default port used in the chart's
# templates/service.yaml and templates/deployment.yaml
service:
  type: ClusterIP
  apiPort: 7001 # Port for the API service
  metricsPort: 7777 # Metrics port scraped by Prometheus

# Ingress Controller annotations depend on your environment; I'm using Traefik here
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: "0"
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  hosts:
    - host: k8seggjs.hacker-linner.com
      paths:
        - /
  tls:
    - secretName: hacker-linner-cert-tls
      hosts:

# If a pod exceeds these limits, K8s kills and restarts it to keep the service available
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

Creating a Deployment Namespace

kubectl create ns k8seggjs

Deployment using Helm

helm install k8seggjs ./k8seggjs -f values.yaml -n k8seggjs

helm uninstall k8seggjs -n k8seggjs

ServiceMonitor (k8s-prometheus)

RBAC setup

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleList
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: prometheus-k8s-k8seggjs
    namespace: k8seggjs
  rules:
  - apiGroups:
    - ""
    resources:
    - services
    - endpoints
    - pods
    verbs:
    - get
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBindingList
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: prometheus-k8s-k8seggjs
    namespace: k8seggjs
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: prometheus-k8s-k8seggjs
  subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring

Metrics Service setup

apiVersion: v1
kind: Service
metadata:
  namespace: k8seggjs
  name: k8seggjs-metrics
  labels:
    k8s-app: k8seggjs-metrics
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/scheme: http
    prometheus.io/path: /metrics
    prometheus.io/port: "7777"
spec:
  selector:
    app.kubernetes.io/name: k8seggjs
  ports:
  - name: k8seggjs-metrics
    port: 7777
    targetPort: 7777
    protocol: TCP

ServiceMonitor setup

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8seggjs
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 5s
    port: k8seggjs-metrics
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - k8seggjs
  selector:
    matchLabels:
      k8s-app: k8seggjs-metrics

Apply the configuration

kubectl apply -f ServiceMonitor.yaml

egg-exporter & egg-prometheus

egg-exporter: an egg.js Prometheus metrics collection plugin that ships with a Grafana dashboard.

egg-prometheus: a Prometheus plugin for egg.js.

This is used for the sample project’s metrics collection.
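
Egg plugins are enabled in config/plugin.js. Below is a minimal sketch for wiring in the exporter whose configuration appears later in this article; the plugin key `exporter` is an assumption based on that config block, so check the egg-exporter README for the exact name (egg-prometheus would be enabled the same way).

// config/plugin.js -- illustrative sketch, not the sample project's exact file
exports.exporter = {
  enable: true,
  package: 'egg-exporter',
};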

Grafana (k8s-grafana)

dashboard-metrics.json is the complete dashboard panel JSON; it comes with egg-exporter. The author adjusted the metrics prefix here:

config.exporter = {
  scrapePort: 7777, // must match metricsPort in values.yaml and the metrics Service port
  scrapePath: '/metrics',
  prefix: 'k8seggjs_',
  defaultLabels: { stage: process.env.NODE_ENV },
};

We import the JSON file into Grafana to create the panel.

Then modify the panel variables:

$stage

  • Query: k8seggjs_nodejs_version_info{worker="app"}
  • Regex: /.*stage="([^"]*).*/

$appname

  • Query: k8seggjs_nodejs_version_info{worker="app"}
  • Regex: /.*app="([^"]*).*/

$node

  • Query: k8seggjs_nodejs_version_info{worker="app"}
  • Regex: /.*instance="([^"]*).*/

The final result

Refs

  • What happens when Egg meets K8s?
  • egg-exporter
  • egg-prometheus
  • filter-variables-with-regex
