I recently had the opportunity to review kernel parameters, so I am making a note of them here. Entries marked "Verification required" in the Set value column need to be verified for each environment. After researching various tuning information from the past few years, the following settings seemed reasonable.
Parameter name | Set value | Description |
---|---|---|
fs.file-max | Verification required | System-wide limit on the number of file descriptors |
kernel.threads-max | Verification required | System-wide limit on the number of threads |
net.core.netdev_max_backlog | Verification required | Maximum number of incoming packets the kernel can queue |
net.netfilter.nf_conntrack_max | Verification required | Maximum number of simultaneous connections tracked by netfilter (conntrack table size) |
vm.swappiness | 0 | Controls how aggressively the kernel swaps * Note: swapping can still occur even when set to 0 |
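As a minimal sketch of how such values could be persisted, they can be placed in a drop-in file under /etc/sysctl.d. The filename and the concrete numbers below are illustrative assumptions only; the "Verification required" items must be sized for each environment:

```
# /etc/sysctl.d/99-local-tuning.conf  (hypothetical filename)
# The numeric values are placeholders to be verified per environment.
fs.file-max = 1000000
kernel.threads-max = 100000
net.core.netdev_max_backlog = 5000
net.netfilter.nf_conntrack_max = 262144
vm.swappiness = 0
```

The file can then be loaded with `sysctl --system` (requires root).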
Parameter name | Set value | Description |
---|---|---|
kernel.sysrq | 0 | Disable the SysRq (System Request) key |
net.ipv4.conf.all.accept_source_route | 0 | Disable source routing |
net.ipv4.conf.all.rp_filter | 1 | Enable rp_filter (reverse path filtering) |
net.ipv4.conf.default.rp_filter | 1 | Enable rp_filter (reverse path filtering) |
net.ipv4.ip_forward | 0 | Disable IP forwarding |
net.ipv4.tcp_syncookies | 1 | Enable TCP syncookies * mitigation against SYN flood attacks |
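Before changing the security-related parameters, the current values can be checked without root by reading /proc/sys (a small sketch; `sysctl -a` also works if the procps package is installed):

```shell
# Print the current values of the security-related parameters.
# Reading /proc/sys needs no special privileges; writing does.
for p in kernel/sysrq net/ipv4/ip_forward net/ipv4/tcp_syncookies; do
  printf '%s = %s\n' "$(echo "$p" | tr / .)" "$(cat "/proc/sys/$p")"
done
```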
Parameter name | Set value | Description |
---|---|---|
net.core.rmem_max | Verification required | Maximum socket receive buffer size |
net.core.somaxconn | Verification required | Maximum backlog length of connections pending accept on a listening socket |
net.core.wmem_max | Verification required | Maximum socket send buffer size |
net.ipv4.ip_local_port_range | Verification required | Range of local ports used for outgoing TCP/IP connections |
net.ipv4.tcp_fin_timeout | 5–30 | Time to hold a socket in FIN-WAIT-2 before timing out |
net.ipv4.tcp_keepalive_intvl | < 75 | Interval between TCP keepalive probes (seconds) |
net.ipv4.tcp_keepalive_time | < 7200 | Idle time before the first TCP keepalive probe is sent (seconds) |
net.ipv4.tcp_keepalive_probes | < 9 | Number of keepalive probes sent before the connection is dropped |
net.ipv4.tcp_max_syn_backlog | Verification required | Maximum number of half-open connections (SYN received, handshake not yet completed) queued per listening socket |
net.ipv4.tcp_max_tw_buckets | Verification required | Maximum number of TIME_WAIT sockets held by the system at the same time |
net.ipv4.tcp_orphan_retries | Verification required | Number of retransmission attempts before giving up on a connection closed from our side |
net.ipv4.tcp_rfc1337 | 1 | Comply with RFC 1337 * if an RST is received in the TIME_WAIT state, close the socket immediately without waiting for the TIME_WAIT period to expire |
net.ipv4.tcp_slow_start_after_idle | 0 | Disable slow start after the connection has been idle |
net.ipv4.tcp_syn_retries | 3 | Number of retries when sending a TCP SYN |
net.ipv4.tcp_tw_reuse | 1 | Allow TIME_WAIT sockets to be reused for new outgoing connections |
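The concrete values from the table could likewise be collected into a sysctl drop-in. This is a sketch under stated assumptions: the filename is hypothetical, and the values chosen for the ranged parameters are examples, not recommendations:

```
# /etc/sysctl.d/99-network-tuning.conf  (hypothetical filename)
net.ipv4.tcp_fin_timeout = 15          # pick a value in the 5-30 range
net.ipv4.tcp_keepalive_time = 600      # default 7200; lower per requirements
net.ipv4.tcp_keepalive_intvl = 30      # default 75
net.ipv4.tcp_keepalive_probes = 5      # default 9
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_tw_reuse = 1
```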
There are quite a lot of kernel parameters to configure, and tuned exists as a tool that makes them easier for humans to manage. With tuned, tuning is performed through profiles matched to the server's role, so I examined the items to tune by referring to those profiles' settings. Profiles are provided for each server type, as shown below. Since each profile lists typical kernel parameters for tuning, let's use them as a reference when choosing our own kernel parameters.
$ ls -l /usr/lib/tuned/
total 56
drwxr-xr-x. 2 root root 4096 Sep 9 2019 balanced
drwxr-xr-x. 2 root root 4096 Sep 9 2019 desktop
-rw-r--r-- 1 root root 14413 Mar 14 2019 functions
drwxr-xr-x. 2 root root 4096 Sep 9 2019 latency-performance
drwxr-xr-x. 2 root root 4096 Sep 9 2019 network-latency
drwxr-xr-x. 2 root root 4096 Sep 9 2019 network-throughput
drwxr-xr-x. 2 root root 4096 Sep 9 2019 powersave
drwxr-xr-x. 2 root root 4096 Sep 9 2019 recommend.d
drwxr-xr-x. 2 root root 4096 Sep 9 2019 throughput-performance
drwxr-xr-x. 2 root root 4096 Sep 9 2019 virtual-guest
drwxr-xr-x. 2 root root 4096 Sep 9 2019 virtual-host
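The active profile can be checked and switched with tuned-adm (an illustrative session; it requires the tuned package, and switching requires root — the output lines are examples of the command's format, not captured from a real host):

```
$ tuned-adm active
Current active profile: balanced
$ tuned-adm profile virtual-guest
$ tuned-adm active
Current active profile: virtual-guest
```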
The line `include=throughput-performance` indicates that the `throughput-performance` template is included.
** * Comment lines are omitted; only the set values are shown **
/usr/lib/tuned/virtual-guest/tuned.conf
[main]
summary=Optimize for running inside a virtual guest
include=throughput-performance
[sysctl]
vm.dirty_ratio = 30
vm.swappiness = 30
Here is the `throughput-performance` template that is included.
You can see that it contains CPU and disk settings in addition to the kernel parameters.
/usr/lib/tuned/throughput-performance/tuned.conf
[main]
summary=Broadly applicable tuning that provides excellent performance across a variety of common server workloads
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=>4096
[sysctl]
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
vm.swappiness=10
As a trial, I applied this tuned profile and listed the values that were actually set, as follows. Settings from the included template are merged with the profile's own settings, and since the result here is `vm.swappiness = 30` rather than the template's 10, values defined after the include override the included template.
Parameter name | Set value | Description |
---|---|---|
vm.dirty_ratio | 40 | Absolute maximum amount of system memory that can be filled with dirty pages before being committed to disk |
vm.swappiness | 30 | Set the ease of swapping |
kernel.sched_min_granularity_ns | 10000000 | Minimum time a task runs before it can be preempted (scheduler minimum granularity) |
kernel.sched_wakeup_granularity_ns | 15000000 | Wake-up preemption granularity when a task wakes up |
vm.dirty_background_ratio | 10 | When the percentage of dirty pages reaches this value, background writeback (the flusher threads) starts writing them out at low priority |
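After applying the profile, the merged result can be confirmed with sysctl (an illustrative session; the values shown are the ones from the table above):

```
$ sysctl vm.swappiness vm.dirty_ratio
vm.swappiness = 30
vm.dirty_ratio = 40
```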
Again, the line `include=throughput-performance` indicates that the `throughput-performance` template is included.
** * Comment lines are omitted; only the set values are shown **
/usr/lib/tuned/network-throughput/tuned.conf
[main]
summary=Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
include=throughput-performance
[sysctl]
net.ipv4.tcp_rmem="4096 87380 16777216"
net.ipv4.tcp_wmem="4096 16384 16777216"
net.ipv4.udp_mem="3145728 4194304 16777216"
The included `throughput-performance` template was already covered in the previous section, so it is omitted here.
As with virtual-guest, I applied this profile as a trial and listed the values that were actually set, as follows. The profile's own TCP/UDP buffer settings are merged on top of the included throughput-performance template.
Parameter name | Set value | Description |
---|---|---|
vm.dirty_ratio | 40 | Absolute maximum amount of system memory that can be filled with dirty pages before being committed to disk |
vm.swappiness | 30 | Set the ease of swapping |
kernel.sched_min_granularity_ns | 10000000 | Minimum time a task runs before it can be preempted (scheduler minimum granularity) |
kernel.sched_wakeup_granularity_ns | 15000000 | Wake-up preemption granularity when a task wakes up |
vm.dirty_background_ratio | 10 | When the percentage of dirty pages reaches this value, background writeback (the flusher threads) starts writing them out at low priority |
net.ipv4.tcp_rmem | 4096 87380 16777216 | TCP socket receive buffer size (min, default, max) |
net.ipv4.tcp_wmem | 4096 16384 16777216 | TCP socket send buffer size (min, default, max) |
net.ipv4.udp_mem | 3145728 4194304 16777216 | System-wide UDP buffer limits in pages (min, pressure, max) |
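The same include mechanism can be used for your own profile: create a directory under /etc/tuned with a tuned.conf that includes a stock profile and overrides only what you need. A sketch, where the profile name `my-web` and the overridden values are hypothetical:

```
# /etc/tuned/my-web/tuned.conf  (hypothetical custom profile)
[main]
summary=network-throughput with site-specific overrides
include=network-throughput

[sysctl]
# Values defined here override the included template.
net.core.somaxconn = 4096
net.ipv4.tcp_fin_timeout = 15
```

It could then be activated with `tuned-adm profile my-web`.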
With the `virtual-guest` profile, basically the same values as the defaults at server build time are set (these vary per server). With the `network-throughput` profile, the buffer sizes are increased from the defaults. As for `vm.swappiness` and `vm.dirty_ratio`, it seems best to change them according to the environment in use. I think it makes sense, from both a security and a performance standpoint, to set kernel parameters appropriately for each server. Recently, container technologies such as k8s have been growing in popularity, but even so there still seem to be many opportunities to operate servers directly, whether on-premises or in the cloud, so I would like to keep setting these parameters appropriately.