This is the fourth day of my participation in the August Gengwen Challenge.

Tools

dotnet-dump (docs.microsoft.com/en-us/dotne…) and ProcDump for Linux (github.com/Sysinternal…).

dotnet-dump

dotnet-dump is an official Microsoft .NET global tool, so installation and use are very simple. Installation:

dotnet tool install --global dotnet-dump


Use:

dotnet-dump collect --process-id 1902 # pid

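Once collected, the dump can be opened with the tool's own analyze subcommand. The filename below is only an illustration (dotnet-dump names files like `core_<date>_<time>` by default; yours will differ):

```shell
# Open an interactive analysis session on a collected dump
# (the dump filename here is an assumed example).
dotnet-dump analyze ./core_20210804_094300

# Inside the session you can run SOS commands, for example:
#   clrstack        - managed call stack of the current thread
#   dumpheap -stat  - managed heap statistics grouped by type
#   exit            - leave the session
```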

However, it does not dump automatically (or at least I have not found a way to make it), so this article focuses on the following tool.

ProcDump for Linux

This tool is a community Linux port of ProcDump, developed primarily by Microsoft employees. It can dump a process automatically based on conditions such as CPU usage, memory usage, and thread count, which covers exactly the scenario where an abnormal program needs to be analyzed from a dump file.

To install it, add the following commands to your Dockerfile. Note that we add it to the runtime image; it is best to build a base image of your own rather than install it on every build.

## final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0

# install required dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        wget \
        gdb \
        lldb

# install procdump
RUN wget https://packages.microsoft.com/repos/microsoft-debian-buster-prod/pool/main/p/procdump/procdump_1.1.1-220_amd64.deb -O procdump.deb \
    && dpkg -i procdump.deb \
    && rm procdump.deb

This is based on the aspnet:5.0 image, i.e. Debian 10. If you build on another image, you can look for the corresponding package under packages.microsoft.com/repos/ or refer to the author's installation instructions.

Since a Docker container cannot easily run multiple processes at startup, we need a shell script that starts both dotnet and procdump. Because I personally don't like depending on files outside the Dockerfile, I create the script directly in the Dockerfile:

RUN echo "#!/bin/bash \n\
procdump -M 200 -w dotnet & \n\
dotnet \$1 \n\
" > ./start.sh
RUN chmod +x ./start.sh
ENTRYPOINT ["./start.sh", "<YourApp>.dll"]

You can also create your own start.sh if necessary

#!/bin/bash
procdump -M 200 -w dotnet &
dotnet $1

and copy it in the Dockerfile instead:

COPY start.sh ./start.sh
RUN chmod +x ./start.sh
ENTRYPOINT ["./start.sh", "<YourApp>.dll"]
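One optional refinement, not from the original article but a common container pattern: replace the shell with the dotnet process via exec, so the app runs as the script's replacement and receives the SIGTERM that `docker stop` sends, while procdump keeps monitoring in the background. A sketch of such a start.sh:

```shell
#!/bin/bash
# Variant of start.sh (a sketch): start procdump in the background,
# then exec dotnet so it takes over this process and receives
# container stop signals directly.
procdump -M 200 -w dotnet &
exec dotnet "$1"
```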


During docker run, dotnet and procdump start together, and a dump is written automatically once memory exceeds 200 MB. One more thing to note: docker run needs the --privileged flag to grant the necessary permissions, for example docker run --privileged -it XX.
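Putting the pieces together, a typical invocation might look like the following. The image name `myapp` and the mount path are assumptions for illustration (the volume mount anticipates getting the dump files out of the container, discussed at the end of this article):

```shell
# Run the image with elevated privileges so procdump can attach,
# and mount a host directory so dump files can be copied out later.
# "myapp" and the ./dumps path are only example names.
docker run --privileged -it \
    -v "$(pwd)/dumps:/dumps" \
    myapp
```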

The parameters of procdump are:

Usage: procdump [OPTIONS...] TARGET
   OPTIONS
      -h   Prints this help screen
      -C   Triggers a core dump when CPU usage is at or above this value (0 to 100 * nCPU)
      -c   Triggers a core dump when CPU usage is below this value (0 to 100 * nCPU)
      -M   Triggers a core dump when memory commit is at or above this value (MB)
      -m   Triggers a core dump when memory commit is below this value (MB)
      -T   Triggers when the thread count exceeds or equals this value
      -F   Triggers when the file descriptor count exceeds or equals this value
      -I   Polling frequency in milliseconds (default is 1000)
      -n   Number of core dumps to write before exiting (default is 1)
      -s   Consecutive seconds the trigger must hold before the dump is written (default is 10)
      -d   Writes diagnostic logs to syslog
   TARGET
      -p   pid of the process
      -w   Name of the process executable

For example, the following command creates a dump file when the CPU usage is >= 65% or the memory is >= 100 MB

procdump -C 65 -M 100 -p 1234


Other

Persistence of dump files: as everyone knows, when a Docker container is removed, its dump files disappear with it, so you need to export them to a persistent volume. Unfortunately, ProcDump for Linux has no parameter to control the output directory; dumps are generated in the same directory as the application, so for now you have to move them manually. I see that a PR has already been opened, and a -o parameter to control the output location will be added in the future.
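Until such an option ships, the manual move can be automated with a small background watcher. The function below is a sketch: the directory names and the dump filename glob are assumptions you would adjust to match the filenames procdump actually produces on your system.

```shell
#!/bin/bash
# Sketch of a dump-relocation helper. procdump writes dumps next to
# the application, so we periodically move anything matching an
# assumed dump-filename pattern into a volume-mounted directory.

move_dumps() {
    local src="$1" dest="$2" pattern="$3"
    mkdir -p "$dest"
    local f
    for f in "$src"/$pattern; do
        [ -e "$f" ] || continue   # glob matched nothing; skip
        mv "$f" "$dest"/
    done
}

# Example: run as a background watcher, e.g. from start.sh:
#   while true; do move_dumps /app /dumps 'dotnet_*'; sleep 5; done
```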