“This is the 20th day of my participation in the Gwen Challenge in November. Check out the details: The Last Gwen Challenge in 2021.”
I. Foreword
While configuring JMX monitoring for Docker today, I ran into a detail that differs from the JMX configuration in a non-container environment, so I am writing it down here for others to look up.
II. Problems encountered
1. Problem phenomenon
In general, JMX can be configured by simply adding the following parameters.
Here are the JMX parameters for monitoring without a password (with password authentication enabled, the configuration behaves the same way with respect to the problem below):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9998
-Djava.rmi.server.hostname=<serverip>
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
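For comparison, in a non-container environment these flags are simply appended to the java command line. A minimal sketch (myapp.jar is a placeholder of mine, not from the original setup):

# Start a JVM with password-less remote JMX on port 9998
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9998 \
     -Djava.rmi.server.hostname=<serverip> \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -jar myapp.jar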
When the process runs inside a Docker container, however, this same configuration makes the JMX client fail with a connection error.
2. Problem analysis
So let's walk through the logic: why does this fail?
First look at the network structure of the Docker environment.
The container uses the default network model, bridge mode. In this mode, Docker sets up DNAT rules at docker run time to forward traffic. So what we see on the network side looks like this:
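If you want to check this on your own host, stock Docker commands can show the bridge subnet and a container's address (the container name here is a placeholder):

# Show the default bridge network, including its subnet and attached containers
docker network inspect bridge

# Print just the IP address of one container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>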
NIC information inside the container:
[root@f627e4cb0dbc /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.3  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe12:3  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:12:00:03  txqueuelen 0  (Ethernet)
        RX packets 366  bytes 350743 (342.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 358  bytes 32370 (31.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Routing information inside the container:
[root@a2a7679f8642 /]# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         gateway         0.0.0.0         UG        0 0          0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 eth0
[root@a2a7679f8642 /]#
NIC information on the host:
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:44:5a:12:8f  txqueuelen 0  (Ethernet)
        RX packets 6691477  bytes 498130
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6751310  bytes 3508684363 (3.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Routing information on the host:
[root@7dgroup ~]# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         gateway         0.0.0.0         UG        0 0          0 eth0
link-local      0.0.0.0         255.255.0.0     U         0 0          0 eth0
172.17.208.0    0.0.0.0         255.255.240.0   U         0 0          0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.16.0    0.0.0.0         255.255.240.0   U         0 0          0 br-676bae33ff92
So the host and container can communicate directly, even if the ports are not mapped. As follows:
[root@7dgroup ~]# telnet 172.18.0.3 8080
Trying 172.18.0.3...
Connected to 172.18.0.3.
Escape character is '^]'.
In addition, because this is bridged networking, the host also has a virtual NIC for each container, with a name like veth0b5a080:
veth0b5a080: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 42:c3:45:be:88:1a txqueuelen 0 (Ethernet)
RX packets 2715512 bytes 2462280742 (2.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2380143 bytes 2437360499 (2.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This is one end of a veth pair: one interface lives in the container, its peer on the host. In this mode, a host running N containers has N virtual devices whose names start with veth, as the check below shows.
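To list the host-side halves of these pairs, something like the following works on hosts with iproute2 (device names and counts will differ per machine):

# List only the veth devices on the host; expect one per running bridge-mode container
ip -o link show type veth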
But if it is not the host doing the accessing, this does not work. When we try to connect from the monitoring machine, the result looks like this:
Zees-Air-2:~ Zee$ telnet <serverip> 8080
Trying <serverip>...
telnet: connect to address <serverip>: Connection refused
telnet: Unable to connect to remote host
Zees-Air-2:~ Zee$
That is because 8080 is a container port, not a host port, so other machines cannot reach it. This is exactly why ports are mapped for remote access: once a port is published, NAT rules are created so that packets become reachable.
Take a look at the NAT rules:
[root@7dgroup ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 171 packets, 9832 bytes)
 pkts bytes target     prot opt in      out     source               destination
 553K   33M DOCKER     all  --  *       *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 171 packets, 9832 bytes)
 pkts bytes target     prot opt in      out     source               destination

Chain OUTPUT (policy ACCEPT 2586 packets, 156K bytes)
 pkts bytes target     prot opt in      out     source               destination
 205K   12M DOCKER     all  --  *       *       0.0.0.0/0           !60.205.104.0/22      ADDRTYPE match dst-type LOCAL
    0     0 DOCKER     all  --  *       *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 2602 packets, 157K bytes)
 pkts bytes target     prot opt in      out     source               destination
 265K   16M MASQUERADE all  --  *      !docker0          172.18.0.0/16    0.0.0.0/0
    0     0 MASQUERADE all  --  *      !br-676bae33ff92  192.168.16.0/20  0.0.0.0/0
    0     0 MASQUERADE tcp  --  *       *       192.168.0.4          192.168.0.4          tcp dpt:7001
    0     0 MASQUERADE tcp  --  *       *       192.168.0.4          192.168.0.4          tcp dpt:4001
    0     0 MASQUERADE tcp  --  *       *       192.168.0.5          192.168.0.5          tcp dpt:2375
    0     0 MASQUERADE tcp  --  *       *       192.168.0.8          192.168.0.8          tcp dpt:8080
    0     0 MASQUERADE tcp  --  *       *       172.18.0.4           172.18.0.4           tcp dpt:3306
    0     0 MASQUERADE tcp  --  *       *       172.18.0.5           172.18.0.5           tcp dpt:6379
    0     0 MASQUERADE tcp  --  *       *       172.18.0.2           172.18.0.2           tcp dpt:80
    0     0 MASQUERADE tcp  --  *       *       172.18.0.6           172.18.0.6           tcp dpt:9997
    0     0 MASQUERADE tcp  --  *       *       172.18.0.6           172.18.0.6           tcp dpt:9996
    0     0 MASQUERADE tcp  --  *       *       172.18.0.6           172.18.0.6           tcp dpt:8080
    0     0 MASQUERADE tcp  --  *       *       172.18.0.3           172.18.0.3           tcp dpt:9995
    0     0 MASQUERADE tcp  --  *       *       172.18.0.3           172.18.0.3           tcp dpt:8080

Chain DOCKER (3 references)
 pkts bytes target     prot opt in      out     source               destination
 159K 9544K RETURN     all  --  docker0          *      0.0.0.0/0     0.0.0.0/0
    0     0 RETURN     all  --  br-676bae33ff92  *      0.0.0.0/0     0.0.0.0/0
    1    40 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3307 to:172.18.0.4:3306
   28  1486 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:6379 to:172.18.0.5:6379
  228  137K DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:91 to:172.18.0.2:80
    3   192 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9997 to:172.18.0.6:9997
    0     0 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9996 to:172.18.0.6:9996
    0     0 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9002 to:172.18.0.6:8080
   12   768 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9995 to:172.18.0.3:9995
    4   256 DNAT       tcp  -- !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9004 to:172.18.0.3:8080
[root@7dgroup ~]#
We can see that traffic to port 91 on the host is forwarded to port 80 on 172.18.0.2, and traffic to port 3307 on the host is forwarded to port 3306 on 172.18.0.4.
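Instead of reading iptables directly, Docker can also print the published mappings per container. A quick check (the container name is the one used in the solution below; the output shown is only illustrative):

# Show which host ports are published for a given container
docker port 7dgroup-tomcat5
# 8080/tcp -> 0.0.0.0:9003
# 9995/tcp -> 0.0.0.0:9995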
So far, all of this is plain Docker networking and has nothing to do with JMX. Now for the JMX part. JMX works like this: during registration, the client connects to the port given by jmxremote.port, and the RMI registry there returns a second port, jmxremote.rmi.port, which the client then uses for the actual service invocation.
As mentioned above, in bridge mode a Docker port must be explicitly published with -p; otherwise there is no NAT rule for it and packets cannot reach it. So jmxremote.rmi.port must also be exposed, which means it must be specified explicitly: if it is not specified, the JVM opens a random port for it, and a random port has no NAT rule, so the connection fails.
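You can observe this directly. A minimal sketch, assuming the image ships ss or netstat (tool availability varies per image): start the JVM with only jmxremote.port set, then list the Java process's listening sockets inside the container:

# Inside the container: list the listening TCP sockets of the Java process
ss -tlnp | grep java
# Expect to see 9995 (the registry) plus one extra, randomly numbered
# high port (the RMI server) that no -p mapping covers.
# On older images without ss:  netstat -tlnp | grep java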
III. Solutions
Therefore, in this case jmxremote.rmi.port must be pinned to a fixed value and exposed. The configuration looks like this:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9995
-Djava.rmi.server.hostname=<serverip>
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.rmi.port=9995
Setting both to 9995, as above, is allowed; in that case the registration port and the invocation port are merged into one.
The Docker container then needs to be started like this:
docker run -d -p 9003:8080 -p 9995:9995 --name 7dgroup-tomcat5 \
-e CATALINA_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9995 \
-Djava.rmi.server.hostname=<serverip> \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.rmi.port=9995" c375edce8dfd
After that, JMX tools can connect. The same problem can occur in network environments with firewalls and similar devices in the path; once you understand JMX's register-then-invoke logic, you can solve all kinds of similar problems.
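For example, with the JDK's bundled jconsole (any JMX client that speaks RMI should behave the same):

# Quick reachability check first
telnet <serverip> 9995

# Then connect with the JDK's own GUI client
jconsole <serverip>:9995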
The network path is something anyone doing performance analysis has to understand thoroughly, which is why I have gone on about it at such length.
IV. Summary
A few more words on choosing a JMX tool. Some people like fancy ones, some like simple ones, some like a black terminal window. I think the choice depends on the situation: in performance analysis, pick the tool that fits the job, not the one that shows off the most technology.
One last assignment:
- If -p 19995:9995 is specified in docker run (that is, the container port is exposed on a different host port) and all other configuration stays the same, can the JMX tool still connect?
- If jmxremote.rmi.port and jmxremote.port are set to different values and both ports are exposed, with all other configuration unchanged, can the JMX tool still connect?
If you are interested, try it out for yourself.