🎯 Background
Architecture first: a standard Spring Cloud setup, with Consul as the service registry and Docker for containerized deployment.
The problem: every time the application is re-published, a new container is started and the old one is stopped, but the old instance never deregisters from the Consul registry. On top of that, each container gets a random IP address, so every restart leaves another invalid client behind in the registry, which is also ugly to look at in the Consul Web UI.
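Before fixing anything, these zombie entries are easy to observe through Consul's HTTP health API. A minimal check, assuming the agent's HTTP API is reachable on its default port 8500:

# List all service instances whose health checks are critical -- these are the
# "zombie" clients left behind by stopped containers (port 8500 is an assumption)
curl http://localhost:8500/v1/health/state/critical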
🎯 Solution
I searched online and found a solution that works well: www.cnblogs.com/sparkdev/p/…
#!/usr/bin/env bash
set -x

pid=0

# SIGTERM handler: forward the signal to the Consul process and wait for it,
# then exit with 143 (128 + 15 -- SIGTERM)
term_handler() {
  if [ $pid -ne 0 ]; then
    kill -SIGTERM "$pid"
    wait "$pid"
  fi
  exit 143; # 128 + 15 -- SIGTERM
}

# Listen for SIGTERM: kill the waiting background job, then run the handler
trap 'kill ${!}; term_handler' SIGTERM

# Start the Consul client (local debugging; the flags for connecting to the real registry are omitted)
nohup consul agent -data-dir /root/consul >/dev/null &

# Record the Consul process id
pid="$!"

# Start the application in the background so it does not block the script
java -jar /app.jar --spring.profiles.active=dev &

# Wait forever, but with the bash builtin wait so the SIGTERM trap can fire
while true
do
  tail -f /dev/null & wait ${!}
done
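The script only helps if it actually runs as PID 1 of the container and the jar really sits at /app.jar. A quick end-to-end check of the graceful-shutdown path, where the image, container and service names are made up for illustration:

# Build and run an image whose ENTRYPOINT is the script above (names are assumptions)
docker build -t demo-service .
docker run -d --name demo demo-service

# docker stop sends SIGTERM to PID 1 and waits 10 seconds by default before SIGKILL
docker stop demo

# The instance should now be gone from the Consul catalog instead of lingering as a zombie
curl http://localhost:8500/v1/catalog/service/demo-service

One design note on the last lines of the script: wait is a bash builtin that returns as soon as a trapped signal arrives, so the handler runs immediately; a plain foreground tail -f would keep bash from running the trap until tail itself exited.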
🎯 Conclusion
When the project is re-published, the previous container is stopped with docker stop. This operation only sends SIGTERM to the process with PID 1 inside the container (in our case, the docker_entrypoint.sh script). The Consul process in the container never receives the SIGTERM and, once the grace period expires, is forcibly killed with kill -9 (SIGKILL). Consul therefore never sends an offline signal to the registry, and zombie clients pile up there. The solution above records the Consul process ID after Consul starts and traps SIGTERM in the script; when the signal arrives, the handler forwards SIGTERM to that recorded PID so that Consul can exit gracefully and deregister itself.
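Two quick checks make this concrete, reusing the hypothetical demo container from the sketch above:

# The container's process table: the entrypoint script, the Consul agent,
# the Java application and the tail/wait loop should all be visible
docker top demo

# After docker stop, the exit code of PID 1 should be 143 (128 + 15, SIGTERM),
# showing the trap handler ran instead of the process being killed with SIGKILL
docker inspect -f '{{.State.ExitCode}}' demo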