Flink 1.13 was released on May 4, and this pace of iteration is a testament to how quickly the project is developing. Community activity keeps rising, driven in part by integration with cloud-native technologies. This article walks through deploying Flink on Kubernetes in native application mode. Native Kubernetes means Flink itself integrates with the Kubernetes API, which enables elastic scaling of TaskManagers.
Application mode is used here because it avoids the resource-isolation problem of session mode, the cluster life-cycle problem of per-job mode, and the client-side resource consumption common to both. These advantages make it widely used in production environments.
1. Prepare
- Kubernetes version 1.9 or later.
- Access to list, create, and delete pods and services via ~/.kube/config. You can verify permissions by running: kubectl auth can-i <list|create|edit|delete> pods
- Kubernetes DNS enabled.
- RBAC: the default service account has permission to create and delete pods. If it does not, grant it the edit role:
kubectl create clusterrolebinding flink-role --clusterrole=edit --serviceaccount=namespace:default
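The prerequisites above can be checked with a few commands. This is a sketch; the `k8s-app=kube-dns` label is the common default for both kube-dns and CoreDNS, but your cluster's labels may differ.

```shell
# Check the Kubernetes server version (must be >= 1.9)
kubectl version

# Verify the current context can manage pods
kubectl auth can-i create pods
kubectl auth can-i delete pods

# Confirm cluster DNS is running (kube-dns or CoreDNS)
kubectl get pods -n kube-system -l k8s-app=kube-dns
```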
2. Build and push the Flink image
The Dockerfile is as follows:
FROM flink:1.12
COPY dataprocess-0.0.1-SNAPSHOT.jar $FLINK_HOME/
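With the Dockerfile in place, the image can be built and pushed to a private registry. The registry address and tag below are taken from the (masked) image reference used in the deployment command later in this article; substitute your own.

```shell
# Build the image from the Dockerfile above, then push it to the
# private registry so the cluster can pull it.
docker build -t 10.4.xx.xx/library/flink:dhf .
docker push 10.4.xx.xx/library/flink:dhf
```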
3. Deploy the Flink client (the client runs in the cloud)
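The original article does not show the client deployment itself, so here is a minimal sketch. The pod name and image are assumptions; any image containing the Flink distribution (such as the one built above) and a kubeconfig with the permissions from step 1 will do.

```shell
# Create a long-running pod that has the Flink CLI available,
# from which "flink run-application" can be executed.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: flink-client
spec:
  containers:
  - name: flink-client
    image: flink:1.12
    command: ["sleep", "infinity"]
EOF
```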
4. Deploy the JobManager and TaskManagers
Inside the flink-client pod, run the following command:
flink run-application \
  --target kubernetes-application \
  -Dkubernetes.namespace=xxx \
  -Dkubernetes.cluster-id=jobmanager-application \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.container.image=10.4.xx.xx/library/flink:dhf \
  -Dkubernetes.container.image.pull-policy=Always \
  local:///opt/flink/dataProcess-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
  --kafkaBroker "192.168.xx.xx:9092,192.xx.xx.227:9093,192.168.xx.xx:9094" \
  --kafkaSchemaTopic connector9 \
  --kafkaDataTopics "connector9.dhf1.tab1,connector9.dhf1.test"
The last three arguments are custom parameters passed to the JAR.
After the command is executed, one JobManager and one or more TaskManagers are created automatically.
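You can confirm the cluster came up by listing the resources Flink created. `xxx` is the namespace and `jobmanager-application` the cluster-id placeholders used in the command above.

```shell
# JobManager and TaskManager pods should be Running
kubectl get pods -n xxx

# The REST service (NodePort) created for the cluster-id
kubectl get svc -n xxx | grep jobmanager-application
```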
5. Program monitoring
Both the JobManager and TaskManager logs can be used to follow program execution.
The Flink Dashboard can also be reached at the Kubernetes master node IP plus the exposed NodePort (7447 in this example).
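Logs can be followed from outside the pods as well. In application mode Flink creates a Deployment named after the cluster-id; the TaskManager pod name below is a placeholder, so list the pods first to find it.

```shell
# Tail the JobManager log (Deployment name equals the cluster-id)
kubectl logs -n xxx deploy/jobmanager-application -f

# Find and tail a TaskManager log
kubectl get pods -n xxx
kubectl logs -n xxx <taskmanager-pod-name> -f
```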
Follow the official account and add the author's WeChat to discuss further.