1. Practical xargs command

In everyday use, I find the xargs command both important and convenient. It takes the output of one command and passes it as arguments to another command.

For example, suppose we want to find the files ending in .conf under a certain path and classify them. The usual approach is to find the .conf files, export the list to a file, then cat that file and run the file classification command on each entry. This is cumbersome, which is where xargs comes in handy.

Example 1: Find the files in the / directory that end in .conf and classify them

Command:

# find / -name "*.conf" -type f -print | xargs file

The following output is displayed:

![](https://pic1.zhimg.com/80/v2-c492a16ee5a23f12767671df6756c741_720w.jpg)
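As a side note, an unquoted `*.conf` can be expanded by the shell before find sees it, and a plain `-print | xargs` pipeline mangles filenames containing spaces. Here is a minimal self-contained sketch of the safer `-print0`/`-0` pairing, using a throwaway directory and `echo` as a stand-in for `file` so it runs anywhere:

```shell
# Create a throwaway directory with a .conf file whose name has a space.
dir=$(mktemp -d)
touch "$dir/my app.conf" "$dir/plain.conf"

# -print0 and -0 delimit names with NUL, so "my app.conf" stays one argument.
# In the real example you would pipe into `xargs -0 file` instead of echo.
find "$dir" -name "*.conf" -type f -print0 | xargs -0 -n1 echo found:

rm -rf "$dir"
```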

2. Run commands or scripts in the background

Sometimes we do not want an operation to be killed when the terminal session is interrupted, especially database import and export operations. If large amounts of data are involved, we cannot guarantee that the network will stay up for the whole operation, so running the script or command in the background is our safety net.

For example, if we want to run the database export operation in the background and record the command output to a file, we can do this:

# nohup mysqldump -uroot -pPASSWORD --databases db1 > ./db1.sql &

Of course, if you don’t want your password to be in plain text, you can do this:

# nohup mysqldump -uroot -p --databases db1 > ./db1.sql &

After this command is executed, you are prompted for the password. Once you enter it, the command runs in the foreground, but our goal is to run it in the background. At this point press Ctrl+Z, then type bg: the command now runs in the background, achieving the same result as the first form, while keeping the password hidden.

After the command is sent to the background, a nohup.out file is left in the directory where the command was run. Check this file to see whether the command hit any errors.
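The pattern generalizes beyond mysqldump. A minimal sketch with a placeholder command and hypothetical file names, showing where stdout and stderr end up:

```shell
# Placeholder for a long-running job such as mysqldump; output is captured
# in export.log, and nohup keeps the job alive if the session drops.
nohup sh -c 'echo "export started"; sleep 1; echo "export done"' > export.log 2>&1 &
echo "background pid: $!"

wait            # demo only; in real use you would simply log out
cat export.log  # check this file for errors afterwards
```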

3. Identify the processes that use the most memory on the current system

In many o&M operations, we find that memory consumption is relatively serious, so how to find the memory consumption process order?

Command: # ps aux | sort -rnk4 | head -20

![](https://pic2.zhimg.com/80/v2-825b52b485e5c83d1c796916dfdb979a_720w.jpg)

The fourth column of the output is the percentage of memory consumed. The last column is the corresponding process.

4. Identify the processes that use the most CPU in the current system

In day-to-day operations we often find CPU usage running high, so how do we rank processes by CPU consumption?

Command: # ps aux | sort -rnk3 | head -20

![](https://pic1.zhimg.com/80/v2-8942de56a8afd0425d5d642e63b463f7_720w.jpg)

The third column of the output is the percentage of CPU consumed, and the last column is the corresponding process.

As you can see, -k3 and -k4 tell sort to sort on column 3 and column 4 respectively.
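To make the `-k` switch concrete, here is the same sort run on canned four-column rows (hypothetical process names), sorting on column 4 in reverse numeric order:

```shell
# Three fake rows standing in for ps output; -rnk4 sorts on field 4,
# numerically, highest first — the same switch used against ps aux above.
printf 'p1 1.0 0.5 2.0\np2 2.0 9.5 1.0\np3 3.0 0.1 8.0\n' | sort -rnk4
# p3 comes first (8.0), then p1 (2.0), then p2 (1.0)
```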

5. View multiple log or data files at the same time

In daily work we might view log files by opening one terminal per log file and running tail in each. I do this too, but sometimes it gets cumbersome. There is a tool called multitail that lets you view multiple log files simultaneously in the same terminal.

Install multitail first:

# wget ftp://ftp.is.co.za/mirror/ftp.rpmforge.net/redhat/el6/en/x86_64/dag/RPMS/multitail-5.2.9-1.el6.rf.x86_64.rpm
# yum -y localinstall multitail-5.2.9-1.el6.rf.x86_64.rpm

The multitail tool supports text highlighting, content filtering, and other features you may need.

Here’s a useful example:

Suppose we want to watch the secure log, filtered on a specific keyword, and a live ping at the same time:

The command is as follows: # multitail -e "Accepted" /var/log/secure -l "ping baidu.com"

![](https://pic3.zhimg.com/80/v2-178d7504809194723b7c7abd90909de6_720w.jpg)

Convenient, isn't it? If you want to check the correlation between two logs, you can watch whether output in one triggers output in the other, and if switching back and forth between two terminals wastes your time, multitail is a good fit.

6. Ping continuously and record the result in logs

Operations engineers often hear the same refrain: "is something wrong with the network? The service is showing strange symptoms, it must be a server network problem." This is classic blame-shifting: when a business problem appears and the people involved cannot quickly find the cause, they often pin it on the server network.

If you then ping a few packets and show the results, people will push back: the problem happened during that earlier window, the business is back to normal now, so of course the network looks fine. At this point you are probably fuming.

Pulling out Zabbix or other network monitoring data does not help much either, since Zabbix's collection interval cannot realistically be set to one second. I ran into exactly this situation, so I collected ping results continuously with the command below. Later, when someone tried to pin a problem on the network again, I pulled out the ping log for the exact time window in question, and the blame went right back where it came from. From then on, nobody shifted blame so casually.

Command:

ping api.jpush.cn | awk '{ print $0"\t" strftime("%Y-%m-%d %H:%M:%S",systime()) }' >> /tmp/jiguang.log &

The output is recorded in /tmp/jiguang.log, with one ping record appended every second, as follows:

![](https://pic3.zhimg.com/80/v2-7381cf0d4e4af98987c88a9c9ba2b9eb_720w.jpg)
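One caveat: `strftime()` and `systime()` are gawk extensions, so the one-liner above can fail where awk is actually mawk or BusyBox awk. A portable sketch of the same idea using `date(1)` (the `stamp` helper name is mine):

```shell
# Append a timestamp to every line read on stdin, using date(1) instead of
# gawk's strftime(); works in any POSIX shell.
stamp() {
  while IFS= read -r line; do
    printf '%s\t%s\n' "$line" "$(date '+%Y-%m-%d %H:%M:%S')"
  done
}

# Usage sketch matching section 6:
#   ping api.jpush.cn | stamp >> /tmp/jiguang.log &
printf 'demo line\n' | stamp
```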

7. Check the TCP connection status

You can view the TCP connection states on port 80 to check whether connections are being released properly, or to analyze the connection states during an attack.

Command: # netstat -nat | awk '{print $6}' | sort | uniq -c | sort -rn

![](https://pic1.zhimg.com/80/v2-086b719054864154d6d8c4bb29cf6360_720w.jpg)
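The pipeline is worth unpacking: awk pulls column 6 (the state), the first sort groups identical states together, `uniq -c` counts each group, and `sort -rn` puts the largest count first. Here is the same count/sort stage fed canned states, so the sketch runs without a live netstat:

```shell
# Canned connection states standing in for `netstat -nat | awk '{print $6}'`;
# on newer systems without netstat, `ss -ant` provides the same data.
printf 'ESTABLISHED\nESTABLISHED\nTIME_WAIT\nESTABLISHED\nLISTEN\n' \
  | sort | uniq -c | sort -rn
```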

8. Search for the top 20 IP addresses with the most requests on port 80

Sometimes the number of business requests suddenly spikes. We can check the source IPs of the requests; if they are concentrated on just a few IPs, it may be an attack, and we can block them with the firewall. The command is as follows:

# netstat -anlp | grep 80 | grep tcp | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -nr | head -n20

![](https://pic3.zhimg.com/80/v2-31e97d4f05e3ac51c26eaefa3a2af9d0_720w.jpg)
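The key step is the second awk: `-F:` splits `ip:port` so only the IP is counted. The counting stage, shown on canned addresses (hypothetical IPs):

```shell
# Stand-in for netstat's foreign-address column; awk -F: drops the port,
# then the uniq -c / sort -nr pair ranks IPs by request count.
printf '10.0.0.8:52044\n10.0.0.8:52045\n10.0.0.9:80\n' \
  | awk -F: '{print $1}' | sort | uniq -c | sort -nr | head -n 20
```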

9. SSH implements port forwarding

Many of you know SSH as the secure remote login protocol for Linux and use it to manage servers remotely. Fewer people know that SSH can also do port forwarding, and it is in fact very powerful at it. Here is a demonstration.

Example background: Our company has a bastion host, and all operations must go through it. Some developers need to access the head plugin of Elasticsearch to check cluster status, but we do not want to map Elasticsearch's port 9200 onto the bastion host directly. So we forward requests that hit the bastion host (192.168.1.15) on port 9200 to the Elasticsearch server (192.168.1.19) on port 9200.

Example: Forward access to port 9200 on the local host (192.168.1.15) to port 9200 on 192.168.1.19

# ssh -p 22 -C -f -N -g -L 9200:192.168.1.19:9200 user@192.168.1.19

Remember: you must set up SSH key authentication with the target host first.

After the command is executed, accessing port 9200 on 192.168.1.15 actually reaches 192.168.1.19:9200.
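For reference, here is a flag-by-flag reading of the forward. The `user` login name is a placeholder to substitute with your own, and since the command needs a live sshd it is a configuration sketch rather than something to paste blindly:

```shell
# -p 22   connect to sshd on port 22 of the target
# -C      compress the tunnelled traffic
# -f      drop into the background after authentication
# -N      run no remote command; forwarding only
# -g      let other hosts (not just localhost) use the forwarded port
# -L 9200:192.168.1.19:9200
#         listen on local port 9200 and relay to 192.168.1.19:9200
ssh -p 22 -C -f -N -g -L 9200:192.168.1.19:9200 user@192.168.1.19
```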
