Author’s brief introduction
Wan Shaoyuan, CNCF-certified Kubernetes CKA & CKS engineer and cloud native solution architect, with in-depth expertise in Ceph, OpenStack, Kubernetes, Prometheus, and other cloud native technologies. He has participated in the design and implementation of IaaS and PaaS platforms and guided the cloud native transformation of applications in industries such as finance, insurance, and manufacturing.
Foreword
NeuVector is the industry’s first end-to-end open source container security platform, providing the only enterprise-grade zero-trust security solution for containerized workloads. This article explains how to deploy NeuVector, covering the following five topics:
- NeuVector overview
- NeuVector installation
- High availability architecture design
- Multi-cloud security management
- Other configuration
1. NeuVector overview
NeuVector focuses on enterprise-grade container platform security, providing real-time, in-depth container network visualization, east-west container network monitoring, active isolation and protection, container host security, and security inside containers. It integrates seamlessly with container management platforms, automates application-level container security, and is suitable for container production environments in the cloud, across clouds, or on-premises.
In 2021, NeuVector was acquired by SUSE and was open sourced in January 2022, becoming the industry’s first end-to-end open source container security platform and the only enterprise-grade zero-trust security solution for containerized workloads.
Project address: github.com/neuvector/n…
This article is based on the first open source version of NeuVector: 5.0.0-Preview.1.
1.1. Architecture analysis
NeuVector itself contains the Controller, Enforcer, Manager, Scanner, and Updater modules.
- Controller: the control plane of NeuVector and its API entry point, responsible for delivering configuration. High availability mainly concerns the Controller; deploying three Controllers to form a cluster is usually recommended.
- Enforcer: used for security policy deployment and execution. DaemonSet is deployed on each node.
- Manager: provides the web UI (HTTPS only) and CLI console for users to manage NeuVector.
- Scanner: scans nodes, containers, Kubernetes, and images for CVE vulnerabilities.
- Updater: a CronJob that periodically updates the CVE vulnerability database.
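These modules map to standard Kubernetes workload types: the Controller, Manager, and Scanner run as Deployments, the Enforcer as a DaemonSet, and the Updater as a CronJob. After the installation described below, a quick way to list them:
kubectl -n neuvector get deployments,daemonsets,cronjobs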
1.2. Overview of main functions
- Security vulnerability scanning
- Container network traffic visualization
- Network security policy definition
- Container firewall
- CI/CD security scanning
- Compliance analysis
This article focuses on installation and deployment, and detailed functions will be described in subsequent articles.
2. NeuVector installation
- OS: Ubuntu 18.04
- Kubernetes: 1.20.14
- Rancher: 2.5.12
- Docker: 19.03.15
- NeuVector: 5.0.0-Preview.1
2.1. Rapid deployment
Create a namespace
kubectl create namespace neuvector
Deploy the CRDs (Kubernetes 1.19 and later):
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/crd-k8s-1.19.yaml
Deploy the CRDs (Kubernetes 1.18 and earlier):
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/crd-k8s-1.16.yaml
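A quick check that the CRDs were created:
kubectl get crd | grep neuvector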
Configure RBAC:
kubectl create clusterrole neuvector-binding-app --verb=get,list,watch,update --resource=nodes,pods,services,namespaces
kubectl create clusterrole neuvector-binding-rbac --verb=get,list,watch --resource=rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io
kubectl create clusterrolebinding neuvector-binding-app --clusterrole=neuvector-binding-app --serviceaccount=neuvector:default
kubectl create clusterrolebinding neuvector-binding-rbac --clusterrole=neuvector-binding-rbac --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-admission --verb=get,list,watch,create,update,delete --resource=validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl create clusterrolebinding neuvector-binding-admission --clusterrole=neuvector-binding-admission --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-customresourcedefinition --verb=watch,create,get --resource=customresourcedefinitions
kubectl create clusterrolebinding neuvector-binding-customresourcedefinition --clusterrole=neuvector-binding-customresourcedefinition --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-nvsecurityrules --verb=list,delete --resource=nvsecurityrules,nvclustersecurityrules
kubectl create clusterrolebinding neuvector-binding-nvsecurityrules --clusterrole=neuvector-binding-nvsecurityrules --serviceaccount=neuvector:default
kubectl create clusterrolebinding neuvector-binding-view --clusterrole=view --serviceaccount=neuvector:default
kubectl create rolebinding neuvector-admin --clusterrole=admin --serviceaccount=neuvector:default -n neuvector
Check whether the following RBAC objects exist
kubectl get clusterrolebinding | grep neuvector
kubectl get rolebinding -n neuvector | grep neuvector
kubectl get clusterrolebinding | grep neuvector
neuvector-binding-admission ClusterRole/neuvector-binding-admission 44h
neuvector-binding-app ClusterRole/neuvector-binding-app 44h
neuvector-binding-customresourcedefinition ClusterRole/neuvector-binding-customresourcedefinition 44h
neuvector-binding-nvadmissioncontrolsecurityrules ClusterRole/neuvector-binding-nvadmissioncontrolsecurityrules 44h
neuvector-binding-nvsecurityrules ClusterRole/neuvector-binding-nvsecurityrules 44h
neuvector-binding-nvwafsecurityrules ClusterRole/neuvector-binding-nvwafsecurityrules 44h
neuvector-binding-rbac ClusterRole/neuvector-binding-rbac 44h
neuvector-binding-view ClusterRole/view 44h
kubectl get rolebinding -n neuvector | grep neuvector
neuvector-admin ClusterRole/admin 44h
neuvector-binding-psp Role/neuvector-binding-psp 44h
Deploy NeuVector
If the underlying runtime is Docker:
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml
If the underlying runtime is containerd (K3s and RKE2 also use this YAML):
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-containerd-k8s.yaml
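Once the manifests apply cleanly (see the CronJob note below for clusters older than 1.21), a quick check that all components are up:
kubectl get pods -n neuvector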
Kubernetes versions earlier than 1.21 will display the following error; change batch/v1 to batch/v1beta1 in the YAML:
error: unable to recognize "https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml": no matches for kind "CronJob" in version "batch/v1"
In 1.20.x, CronJob is still in the beta stage; it became GA in 1.21.
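One way to apply the fix on such clusters is to download the YAML and replace the CronJob apiVersion before applying; a sketch:
curl -sO https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml
sed -i 's#apiVersion: batch/v1$#apiVersion: batch/v1beta1#' neuvector-docker-k8s.yaml
kubectl apply -f neuvector-docker-k8s.yaml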
By default, the web UI is exposed through a Service of type LoadBalancer. For convenient access, change it to a NodePort Service; it can also be exposed through an Ingress:
kubectl patch svc neuvector-service-webui -n neuvector --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"},{"op":"add","path":"/spec/ports/0/nodePort","value":30888}]'
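As mentioned above, the web UI can also be exposed through an Ingress instead. A minimal sketch, assuming the NGINX ingress controller (the manager serves HTTPS, hence the backend-protocol annotation) and a hypothetical hostname:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: neuvector-webui
  namespace: neuvector
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: neuvector.example.com   # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: neuvector-service-webui
            port:
              number: 8443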
Access https://node_ip:30888; the default username and password are admin/admin.
Click My Profile next to the avatar to open the Settings page, where the password and language can be set.
2.2. Helm deployment
Add the repo
helm repo add neuvector https://neuvector.github.io/neuvector-helm/
helm search repo neuvector/core
Create a namespace
kubectl create namespace neuvector
Create ServiceAccount
kubectl create serviceaccount neuvector -n neuvector
Install with Helm:
helm install neuvector --namespace neuvector neuvector/core \
  --set registry=docker.io \
  --set tag=5.0.0-preview.1 \
  --set controller.image.repository=neuvector/controller.preview \
  --set enforcer.image.repository=neuvector/enforcer.preview \
  --set manager.image.repository=neuvector/manager.preview \
  --set cve.scanner.image.repository=neuvector/scanner.preview \
  --set cve.updater.image.repository=neuvector/updater.preview
Helm-chart parameters: github.com/neuvector/n…
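Instead of a long chain of --set flags, the same overrides can be kept in a values file; a sketch equivalent to the command above:
# values.yaml
registry: docker.io
tag: 5.0.0-preview.1
controller:
  image:
    repository: neuvector/controller.preview
enforcer:
  image:
    repository: neuvector/enforcer.preview
manager:
  image:
    repository: neuvector/manager.preview
cve:
  scanner:
    image:
      repository: neuvector/scanner.preview
  updater:
    image:
      repository: neuvector/updater.preview

helm install neuvector neuvector/core --namespace neuvector -f values.yaml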
3. High availability architecture design
NeuVector HA is primarily concerned with the HA of the Controller module: as long as at least one Controller remains alive, data stays synchronized across the three replicas.
Controller data is mainly stored in the /var/neuvector/ directory. When pods are rebuilt or the cluster is redeployed, backup files are automatically loaded from this directory to restore the cluster.
3.1. Deploy policies
NeuVector officially offers four HA deployment methods:
Method 1: no scheduling restrictions; Kubernetes schedules the pods freely.
Method 2: the NeuVector control components (Manager, Controller) together with the Enforcer and Scanner are deployed on the Kubernetes master nodes, with node selector labels and taint tolerations configured.
Method 3: dedicate nodes to NeuVector by tainting them, so that only the NeuVector control components can be scheduled there.
Method 4: the NeuVector control components (Manager, Controller) are deployed on the Kubernetes master nodes with node selector labels and taint tolerations, while the Enforcer and Scanner are not deployed on the master nodes, meaning the master nodes receive no scanning or policy enforcement.
The following uses Method 2 as an example
Label the master node with a specific label
kubectl label nodes nodename nvcontroller=true
Get the node taints:
kubectl get node nodename -o yaml | grep -A 5 taint
Take the master node deployed by Rancher as an example
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/controlplane
value: "true"
- effect: NoExecute
key: node-role.kubernetes.io/etcd
Edit the deployment YAML to add a nodeSelector and tolerations to the NeuVector control components (Manager, Controller), and add only tolerations to the Enforcer and Scanner components.
For example, take the Manager component:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-manager-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-manager-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: neuvector-manager-pod
    spec:
      nodeSelector:
        nvcontroller: "true"
      containers:
      - name: neuvector-manager-pod
        image: neuvector/manager.preview:5.0.0-preview.1
        env:
        - name: CTRL_SERVER_IP
          value: neuvector-svc-controller.neuvector
      restartPolicy: Always
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/controlplane
        operator: Equal
        value: "true"
      - effect: NoExecute
        key: node-role.kubernetes.io/etcd
        operator: Equal
        value: "true"
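If you prefer not to edit the manifests by hand, the same scheduling constraints can be patched in place. A sketch for the Manager deployment (the tolerations mirror the Rancher taints shown above; note that a merge patch replaces the whole tolerations list):
kubectl -n neuvector patch deployment neuvector-manager-pod --type merge -p '
spec:
  template:
    spec:
      nodeSelector:
        nvcontroller: "true"
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/controlplane
        operator: Equal
        value: "true"
      - effect: NoExecute
        key: node-role.kubernetes.io/etcd
        operator: Equal
        value: "true"
'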
3.2. Data persistence
Setting the following environment variable enables persistence of configuration data:
- env:
- name: CTRL_PERSIST_CONFIG
With this environment variable configured, neuvector-controller stores its data in /var/neuvector, which by default is a hostPath volume mapping to the /var/neuvector directory on the pod's host.
If higher data reliability is required, a PV can be used to attach NFS or other storage that supports multiple readers and writers (ReadWriteMany).
That way, even if all three neuvector-controller pod replicas are destroyed at the same time and the hosts are completely unrecoverable, no configuration data is lost.
NFS is used as an example.
Deploy NFS.
Create the PV and PVC:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: neuvector-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfsdata
    server: 172.16.0.195
EOF

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neuvector-data
  namespace: neuvector
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
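A quick check that the claim has bound (assuming the NFS server is reachable):
kubectl -n neuvector get pvc neuvector-data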
Modify the neuvector-controller deployment YAML to add the PVC, mapping the /var/neuvector directory to NFS (by default it is a hostPath to the local machine):
spec:
template:
spec:
volumes:
- name: nv-share
# hostPath: // replaced by persistentVolumeClaim
# path: /var/neuvector // replaced by persistentVolumeClaim
persistentVolumeClaim:
claimName: neuvector-data
Alternatively, mount the NFS directory directly in the NeuVector deployment YAML:
volumes:
- name: nv-share
  nfs:
    path: /opt/nfs-deployment
    server: 172.26.204.144
4. Multi-cloud security management
In real production environments where multiple clusters need to be managed securely, NeuVector supports cluster federation.
You need to expose the Federation Master service on one cluster and deploy the Federation Worker service on each remote cluster. For greater flexibility, you can enable both the Federation Master and Federation Worker services on each cluster.
Deploy this YAML on each cluster
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-controller-fed-master
namespace: neuvector
spec:
ports:
- port: 11443
name: fed
nodePort: 30627
protocol: TCP
type: NodePort
selector:
app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-controller-fed-worker
namespace: neuvector
spec:
ports:
- port: 10443
name: fed
nodePort: 31783
protocol: TCP
type: NodePort
selector:
app: neuvector-controller-pod
Promote one of the clusters to the master cluster, and configure the exposed IP address and port so that they are reachable from the remote clusters.
On the master cluster, generate tokens for the other remote clusters to connect with.
On each remote cluster, join the master cluster using the token and the connection endpoint.
Multiple NeuVector clusters can then be managed from a single console.
5. Perform other configurations
5.1. Upgrade
A NeuVector instance deployed from YAML manifests can be upgraded by updating the image tags of the corresponding components directly. For example:
kubectl set image deployment/neuvector-controller-pod neuvector-controller-pod=neuvector/controller:2.4.1 -n neuvector
kubectl set image -n neuvector ds/neuvector-enforcer-pod neuvector-enforcer-pod=neuvector/enforcer:2.4.1
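After updating the images, a quick way to watch the rollouts complete:
kubectl -n neuvector rollout status deployment/neuvector-controller-pod
kubectl -n neuvector rollout status daemonset/neuvector-enforcer-pod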
If NeuVector was deployed with Helm, run helm upgrade with the corresponding parameters.
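A sketch of the equivalent Helm command (the tag value is a placeholder for the target version):
helm upgrade neuvector neuvector/core --namespace neuvector --reuse-values --set tag=<new-tag>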
5.2. Uninstall
Delete deployed components
kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml
Delete the configured RBAC
kubectl get clusterrolebinding | grep neuvector | awk '{print $1}' | xargs kubectl delete clusterrolebinding
kubectl get rolebinding -n neuvector | grep neuvector | awk '{print $1}' | xargs kubectl delete rolebinding -n neuvector
Delete the CRDs:
kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/crd-k8s-1.19.yaml
kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/waf-crd-k8s-1.19.yaml
kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/admission-crd-k8s-1.19.yaml
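Finally, if the neuvector namespace itself is no longer needed (this also removes the service account created earlier):
kubectl delete namespace neuvector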
Conclusion:
NeuVector, open sourced by SUSE, is a mature and stable container security management platform, and it will integrate more closely with Rancher products in the future.