There was a mostly-Python project whose container image left out some dependencies at packaging time, so they had to be installed into the running pods by hand, in batches. Here is a closer look at the script that automates those installs:

import os

sparks = ["spark-wangfeng2-release-1-master-0", "spark-wangfeng2-release-1-worker-0",
          "spark-wangfeng2-release-1-worker-1", "spark-wangfeng2-release-2-master-0",
          "spark-wangfeng2-release-2-worker-0", "spark-wangfeng2-release-2-worker-1"]

index_url = "https://pypi.tuna.tsinghua.edu.cn/simple/"
trusted_host = "pypi.tuna.tsinghua.edu.cn"
namespace = os.getenv("PPNS", "wangfeng2")

for spark in sparks:
    print(f"installing on {spark}")
    val = os.system(f"echo \"pip3 install grpcio==1.36.1 -i {index_url} --trusted-host {trusted_host}\" | kubectl exec -it {spark} -n {namespace} -- /bin/sh")
    print(val)
    val = os.system(f"echo \"pip3 install protobuf -i {index_url} --trusted-host {trusted_host}\" | kubectl exec -it {spark} -n {namespace} -- /bin/sh")
    print(val)
    val = os.system(f"echo \"pip3 install pydantic -i {index_url} --trusted-host {trusted_host}\" | kubectl exec -it {spark} -n {namespace} -- /bin/sh")
    print(val)
    val = os.system(f"echo \"pip3 install pysnooper -i {index_url} --trusted-host {trusted_host}\" | kubectl exec -it {spark} -n {namespace} -- /bin/sh")
    print(val)
    val = os.system(f"echo \"pip3 install google -i {index_url} --trusted-host {trusted_host}\" | kubectl exec -it {spark} -n {namespace} -- /bin/sh")
    print(val)
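The five near-identical os.system calls can be collapsed into a nested loop over a package list. A minimal sketch of that refactor (the pod list is shortened here, and `build_install_cmd` is a helper name introduced for illustration, not part of the original script):

```python
import os

# Same mirror, trusted host, and namespace lookup as the script above.
INDEX_URL = "https://pypi.tuna.tsinghua.edu.cn/simple/"
TRUSTED_HOST = "pypi.tuna.tsinghua.edu.cn"

# Packages the image is missing; only grpcio is pinned, matching the original.
PACKAGES = ["grpcio==1.36.1", "protobuf", "pydantic", "pysnooper", "google"]


def build_install_cmd(pod: str, package: str, namespace: str) -> str:
    """Build the echo | kubectl exec pipeline that runs pip3 inside a pod."""
    pip = f"pip3 install {package} -i {INDEX_URL} --trusted-host {TRUSTED_HOST}"
    return f'echo "{pip}" | kubectl exec -it {pod} -n {namespace} -- /bin/sh'


if __name__ == "__main__":
    pods = ["spark-wangfeng2-release-1-master-0"]  # extend with the full pod list
    namespace = os.getenv("PPNS", "wangfeng2")
    for pod in pods:
        for package in PACKAGES:
            print(f"installing {package} on {pod}")
            print(os.system(build_install_cmd(pod, package, namespace)))
```

Adding a package then becomes a one-line change to PACKAGES rather than another copy-pasted os.system call.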

Flushing the redis pods' data in batch:

import os

redis=["common-redis-2-0", "common-redis-2-1", "common-redis-2-2"]

namespace = os.getenv("PPNS", "wangfeng2")

for r in redis:
    print(f"flushall on {r}")
    val = os.system(f"echo 'redis-cli -a 123456 flushall' | kubectl exec -it {r} -n {namespace} -- /bin/sh")
    print(val)
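One caveat with the echo-pipe pattern: os.system only reports kubectl's exit status, not whether flushall itself succeeded inside the shell. Passing redis-cli directly as the exec'd command after `--` makes the exit code meaningful. A sketch using subprocess under that assumption (`build_flush_cmd` and `flush_pod` are illustrative names, not from the original):

```python
import subprocess


def build_flush_cmd(pod: str, namespace: str, password: str = "123456") -> list:
    # Run redis-cli directly as the exec'd command instead of piping a line
    # into /bin/sh, so the exit code reflects whether flushall succeeded.
    return ["kubectl", "exec", pod, "-n", namespace, "--",
            "redis-cli", "-a", password, "flushall"]


def flush_pod(pod: str, namespace: str) -> int:
    """Flush one redis pod; return the process exit code (0 on success)."""
    return subprocess.run(build_flush_cmd(pod, namespace)).returncode


if __name__ == "__main__":
    import os
    namespace = os.getenv("PPNS", "wangfeng2")
    for pod in ["common-redis-2-0", "common-redis-2-1", "common-redis-2-2"]:
        print(f"flushall on {pod}: exit {flush_pod(pod, namespace)}")
```

Building the command as an argument list also avoids shell-quoting surprises if the password ever contains special characters.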


