Linux Cgroup series (2): Playing with CPU

The previous article introduced the basic concepts of cgroups, including their default settings and control tools on CentOS, and used the CPU as an example to explain how cgroups control resources. This article uses concrete examples to demonstrate how cgroups can limit CPU usage and how different cgroup settings affect performance.

1. View the current cgroup information


There are two ways to view the current cgroup information on your system. The first is the systemd-cgls command, which returns the system's overall cgroup hierarchy. The top level of the cgroup tree consists of slices, as shown below:

$ systemd-cgls --no-page
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─user.slice
│ ├─user-1000.slice
│ │ └─session-11.scope
│ │   ├─9507 sshd: tom [priv]
│ │   ├─9509 sshd: tom@pts/3
│ │   └─9510 -bash
│ └─user-0.slice
│   └─session-1.scope
│     ├─6239 sshd: root@pts/0
│     ├─6241 -zsh
│     └─11537 systemd-cgls --no-page
└─system.slice
  ├─rsyslog.service
  │ └─5831 /usr/sbin/rsyslogd -n
  ├─sshd.service
  │ └─5828 /usr/sbin/sshd -D
  ├─tuned.service
  │ └─5827 /usr/bin/python2 -Es /usr/sbin/tuned -l -P
  ...

You can see that the top level of the system cgroup hierarchy consists of user.slice and system.slice. Since no VMs or containers are running on the system, there is no machine.slice, so when the CPU is busy, user.slice and system.slice each get 50% of the CPU time.

There are two subslices under user.slice: user-1000.slice and user-0.slice. Each subslice is named after a user ID (UID), so it is easy to identify which slice belongs to which user. For example, user-1000.slice belongs to user Tom, and user-0.slice belongs to user root.

The systemd-cgls command provides a static snapshot of the cgroup hierarchy. To see a dynamic view of per-cgroup resource usage, run the systemd-cgtop command.

$ systemd-cgtop
Path                                              Tasks   %CPU   Memory  Input/s Output/s
/                                                   161    1.2   161.0M        -        -
/system.slice                                         -    0.1        -        -        -
/system.slice/vmtoolsd.service                        1    0.1        -        -        -
/system.slice/tuned.service                           1    0.0        -        -        -
/system.slice/rsyslog.service                         1    0.0        -        -        -
/system.slice/auditd.service                          1      -        -        -        -
/system.slice/chronyd.service                         1      -        -        -        -
/system.slice/crond.service                           1      -        -        -        -
/system.slice/dbus.service                            1      -        -        -        -
/system.slice/gssproxy.service                        1      -        -        -        -
/system.slice/lvm2-lvmetad.service                    1      -        -        -        -
/system.slice/network.service                         1      -        -        -        -
/system.slice/polkit.service                          1      -        -        -        -
/system.slice/rpcbind.service                         1      -        -        -        -
/system.slice/sshd.service                            1      -        -        -        -
/system.slice/system-getty.slice/[email protected]      1      -        -        -        -
/system.slice/systemd-journald.service                1      -        -        -        -
/system.slice/systemd-logind.service                  1      -        -        -        -
/system.slice/systemd-udevd.service                   1      -        -        -        -
/system.slice/vgauthd.service                         1      -        -        -        -
/user.slice                                           3      -        -        -        -
/user.slice/user-0.slice/session-1.scope              3      -        -        -        -
/user.slice/user-1000.slice                           3      -        -        -        -
/user.slice/user-1000.slice/session-11.scope          3      -        -        -        -
/user.slice/user-1001.slice/session-8.scope           3      -        -        -        -

systemd-cgtop provides statistics and control options similar to the top command, but it only displays services and slices that have resource accounting enabled. For example, to enable resource accounting for sshd.service, do the following:

$ systemctl set-property sshd.service CPUAccounting=true MemoryAccounting=true

This command creates the corresponding drop-in configuration files under the /etc/systemd/system/sshd.service.d/ directory:

$ ll /etc/systemd/system/sshd.service.d/
total 8
-rw-r--r-- 1 root root 28 May 31 02:24 50-CPUAccounting.conf
-rw-r--r-- 1 root root 31 May 31 02:24 50-MemoryAccounting.conf

$ cat /etc/systemd/system/sshd.service.d/50-CPUAccounting.conf
[Service]
CPUAccounting=yes

$ cat /etc/systemd/system/sshd.service.d/50-MemoryAccounting.conf
[Service]
MemoryAccounting=yes

After the configuration is in place, reload systemd and restart the sshd service:

$ systemctl daemon-reload
$ systemctl restart sshd

After the restart, the CPU and memory usage of sshd.service shows up in systemd-cgtop.
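To confirm that accounting is now enabled, you can also query the unit's properties with systemctl show (expected output shown, assuming the set-property command above succeeded):

$ systemctl show sshd.service -p CPUAccounting -p MemoryAccounting
CPUAccounting=yes
MemoryAccounting=yes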

Enabling resource accounting may increase the load on the system, because accounting itself consumes CPU and memory. In most cases, using the top command is sufficient. Of course, this is Linux, so you're in control and you can do whatever you want.

2. Allocate relative CPU usage


We have learned from the previous article that CPU Shares can be used to set the relative CPU usage time, so let’s verify this in practice.

The following experiments were performed on a system with a single-core CPU. Multi-core CPUs behave quite differently and are discussed separately at the end of this article.
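If you want to confirm the core count on your own test machine before repeating the experiments, nproc (part of coreutils) prints it:

$ nproc
1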

The test subjects are one service and two ordinary users. User Tom's UID is 1000, which you can verify with the following command:

$ cat /etc/passwd|grep tom
tom:x:1000:1000::/home/tom:/bin/bash

Create a foo.service:

$ cat /etc/systemd/system/foo.service
[Unit]
Description=The foo service that does nothing useful
After=remote-fs.target nss-lookup.target

[Service]
ExecStart=/usr/bin/sha1sum /dev/zero
ExecStop=/bin/kill -WINCH ${MAINPID}

[Install]
WantedBy=multi-user.target

/dev/zero is a special device file on Linux systems that provides an infinite number of null characters when you read it, so foo.service constantly consumes CPU resources. Now let’s change the CPU shares of foo.service to 2048:

$ mkdir /etc/systemd/system/foo.service.d
$ cat << EOF > /etc/systemd/system/foo.service.d/50-CPUShares.conf
[Service]
CPUShares=2048
EOF
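If systemd has already loaded foo.service, reload the daemon so that the new unit file and drop-in take effect before starting the service (this reload step is assumed here; it is not shown in the original listing):

$ systemctl daemon-reload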

Since the default CPU shares value is 1024, foo.service with 2048 shares will grab most of system.slice's CPU time when the CPU is busy.

Now start the foo service with systemctl start foo.service and check CPU usage with the top command.

No other processes are currently consuming CPU, so foo.service can use almost 100% of the CPU.

Now let’s get user Tom involved and set user-1000.slice’s CPU shares to 256:

$ systemctl set-property user-1000.slice CPUShares=256
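To verify that the new value took effect, you can query the property (expected output shown, assuming the command above succeeded):

$ systemctl show user-1000.slice -p CPUShares
CPUShares=256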

Log in to the system as user Tom, run the sha1sum /dev/zero command, and check CPU usage again.

Feeling a little confused now? foo.service has 2048 CPU shares, while Tom only has 256, so shouldn't Tom get only about 10% of the CPU? Recall from the previous section that when the CPU is busy, user.slice and system.slice each get 50% of the CPU time. That is exactly the scenario here, and sha1sum is the only busy process under user.slice, so it gets the full 50%.

Finally, let user Jack participate; his CPU shares are left at the default of 1024. Log in to the system as user Jack, run the sha1sum /dev/zero command, and check the CPU usage again.

As mentioned above, user.slice and system.slice each get 50% of the CPU in this scenario. User Tom's CPU shares are 256 and user Jack's are 1024, so Jack gets 4 times as much CPU time as Tom.
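A rough back-of-the-envelope calculation for this scenario:

user.slice vs system.slice: 50% each (both have busy processes)
Tom:  256  / (256 + 1024) x 50% ≈ 10%
Jack: 1024 / (256 + 1024) x 50% ≈ 40%
foo.service: ≈ 50% (the only busy process under system.slice)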

3. Allocate absolute CPU usage


As mentioned in the previous article, if you want to strictly control CPU resources, you can set an upper limit on CPU usage that applies regardless of whether the CPU is busy. This is done with the CPUQuota parameter. Here we set user Tom's CPUQuota to 5%:

$ systemctl set-property user-1000.slice CPUQuota=5%
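Under the hood, CPUQuota is implemented by the CFS bandwidth controller: with the default 100 ms period, a 5% quota should appear as 5000 µs in the cgroup filesystem. The path below assumes the CentOS 7 cgroup v1 layout described later in this article:

$ cat /sys/fs/cgroup/cpu,cpuacct/user.slice/user-1000.slice/cpu.cfs_quota_us
5000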

At this point you can see that user Tom’s sha1sum process is only getting around 5% CPU usage.

If you stop foo.service at this point and shut down user Jack’s sha1sum process, you’ll see that user Tom’s sha1sum process is still getting around 5% CPU usage.

If a non-core service is CPU intensive, you can use this method to severely limit its CPU usage and prevent it from impacting other important services in the system.
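For example, to cap a hypothetical CPU-hungry background job (backup.service is a made-up unit name, used only for illustration) at 20% of one core:

$ systemctl set-property backup.service CPUQuota=20%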

4. Dynamically set the cgroup


All cgroup operations are based on the cgroup virtual filesystem in the kernel. Using cgroups is simple: just mount the filesystem. By default, the system mounts the cgroup filesystem under the /sys/fs/cgroup directory, and when a service starts, systemd creates the service's cgroup as a subdirectory under the corresponding controller directory. Take foo.service as an example.
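You can see the mounted controllers with mount; on CentOS 7 the cpu and cpuacct controllers are mounted together (typical output shown, details may vary):

$ mount -t cgroup | grep 'cpu,cpuacct'
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)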

Enter the cpu subsystem directory of system.slice:

$ cd /sys/fs/cgroup/cpu,cpuacct/system.slice

Look at foo.service’s cgroup directory:

$ ls foo.*
zsh: no matches found: foo.*

Since foo.service is not running, its cgroup directory does not exist yet. Now start foo.service and look at its cgroup directory again:

$ ls foo.service
cgroup.clone_children  cgroup.procs  cpuacct.usage         cpu.cfs_period_us  cpu.rt_period_us   cpu.shares  notify_on_release
cgroup.event_control   cpuacct.stat  cpuacct.usage_percpu  cpu.cfs_quota_us   cpu.rt_runtime_us  cpu.stat    tasks

You can also view its PID and CPU Shares:

$ cat foo.service/tasks
20225

$ cat foo.service/cpu.shares
2048

It is theoretically possible to change the cgroup configuration dynamically under the /sys/fs/cgroup directory, but I do not recommend doing this in a production environment. If you want to learn more about cgroups through experimentation, this directory is a good place to poke around, as sketched below.
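A minimal sketch of such a direct, non-persistent tweak (run as root); systemd will revert it the next time it reconfigures the unit, which is exactly why it is not recommended in production:

$ echo 1024 > /sys/fs/cgroup/cpu,cpuacct/system.slice/foo.service/cpu.shares
$ cat /sys/fs/cgroup/cpu,cpuacct/system.slice/foo.service/cpu.shares
1024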

5. What about multi-core CPUs?


All of the above experiments were performed on a single-core CPU. Let's briefly discuss the multi-core scenario, using 2 CPUs as an example.

Let's look at CPU shares first. Shares are allocated relative to a single core: no matter how large the shares value is, the cgroup can only get 100% of the CPU time (i.e., one full core). Using the example from section 2 of this article, if you set foo.service's CPU shares to 2048 and start it, you'll see that foo.service still only gets 100% CPU time and does not make full use of both cores.

User Tom's sha1sum /dev/zero process and foo.service each use one CPU core.

As mentioned at the end of the last article, you can set the CPUQuota parameter to allow a cgroup to use both CPU cores. For example:

$ systemctl set-property foo.service CPUQuota=200%
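As with the 5% quota earlier, you can check that the 200% quota landed in the cgroup filesystem (expected value, assuming the default 100 ms period):

$ cat /sys/fs/cgroup/cpu,cpuacct/system.slice/foo.service/cpu.cfs_quota_us
200000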

Whether a process can actually use both CPU cores ultimately depends on whether it is designed to do so.
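sha1sum itself is single-threaded, so even with CPUQuota=200% it will stay on one core. To actually occupy two cores you need a multi-threaded or multi-process workload, for example two sha1sum processes, or the stress tool if it happens to be installed:

$ stress --cpu 2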

6. Conclusion


This article used concrete examples to show how different cgroup settings affect performance. The next article will demonstrate how cgroups can be used to limit memory usage.