I replaced the old WEB server with a new one running CentOS7. Since it hosts a WordPress site, a database (DB) is required, and WEB and DB live together on the same server. The original database was MySQL, but the version was old and needed to be upgraded.
I decided to use MariaDB, the MySQL-compatible database that comes as standard with a CentOS7 installation. It is compatible with MySQL and performs well.
The WordPress migration finished, the pages displayed normally, and the site went into operation.
A few days later, however, the pages stopped displaying properly and the following error appeared on the browser screen.
Error establishing a database connection
When I logged in to the server shell and checked the processes, a large number of httpd processes were running and there was no MySQL process.
First, restart MariaDB.
# systemctl restart mariadb.service
# ps ax | grep mysql
22600 ? Ss 0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
22798 ? Sl 0:01 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
22850 pts/0 S+ 0:00 grep --color=auto mysql
#
It started up fine. Incidentally, although this is MariaDB, the processes are displayed as mysql. The two names are mixed throughout, including in this article, so I will proceed on the basis that MySQL ≒ MariaDB.
Also check the MariaDB version.
# rpm -qa | grep mariadb
mariadb-libs-5.5.65-1.el7.x86_64
mariadb-5.5.65-1.el7.x86_64
mariadb-server-5.5.65-1.el7.x86_64
#
Restarting the database recovered the site, but I can't tolerate the database going down even once. Check **/var/log/mariadb/mariadb.log** to find the cause.
201211 10:06:34 [ERROR] Plugin 'InnoDB' init function returned error.
201211 10:06:34 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
201211 10:06:35 [ERROR] mysqld: Out of memory (Needed 128917504 bytes)
201211 10:06:35 [ERROR] mysqld: Out of memory (Needed 96681984 bytes)
201211 10:06:38 [Note] Plugin 'FEEDBACK' is disabled.
201211 10:06:39 [ERROR] Unknown/unsupported storage engine: InnoDB
201211 10:06:39 [ERROR] Aborting
201211 10:06:40 [Note] /usr/libexec/mysqld: Shutdown complete
It seems it shut down because memory could not be allocated.
However, it seemed unlikely that the MySQL processes alone were failing to allocate memory, so I followed **/var/log/messages** at the same time.
Dec 11 10:06:41 sv2-33178 kernel: Out of memory: Kill process 23762 (mysqld) score 25 or sacrifice child
Dec 11 10:08:59 sv2-33178 kernel: httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Ah, so the flood of httpd processes triggered the oom-killer, and it killed the mysqld process.
Memory can be saved by tuning MySQL, but if httpd keeps multiplying there still won't be enough of it. Even if some httpd processes end up being killed, I don't want the MySQL process to be killed.
I thought about measures to keep MySQL from being taken down by Out of memory.
**1. Change MySQL settings**
There is a method of sizing the buffers from the maximum number of connections and allocating memory accordingly. However, this was already configured on the pre-migration server, and the configuration is not particularly memory hungry.
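As a rough guide, MySQL's worst-case memory usage is about the global buffers plus the per-connection buffers multiplied by max_connections. A minimal sketch of the kind of settings involved (the values below are purely illustrative, not the ones used on this server):
/etc/my.cnf
[mysqld]
max_connections         = 50    # fewer connections -> smaller worst case
innodb_buffer_pool_size = 128M  # global buffer
key_buffer_size         = 16M   # global buffer (MyISAM)
sort_buffer_size        = 256K  # allocated per connection
read_buffer_size        = 128K  # allocated per connection
join_buffer_size        = 128K  # allocated per connection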
In addition, the following setting had already been added, accepting that performance might drop slightly.
/etc/my.cnf
[mysqld]
performance_schema = off
**2. Memory and swap file expansion**
This server has 2GB of memory and 4GB of swap. It is used as a corporate server and normally performs well enough. Swap is currently configured as a partition, but swap files can be added as needed.
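For reference, a swap file can be added without repartitioning, roughly like this (a sketch; the path /swapfile1 and the 2GB size are arbitrary examples):
# dd if=/dev/zero of=/swapfile1 bs=1M count=2048
# chmod 600 /swapfile1
# mkswap /swapfile1
# swapon /swapfile1
# echo "/swapfile1 swap swap defaults 0 0" >> /etc/fstab
#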
I considered the above, but in the end decided against them: if the httpd processes ballooned, MySQL could still be affected. The other option is to put a limit on the httpd side, but I already limit the number of simultaneous connections, and I didn't think the memory split between WEB and DB could be controlled precisely that way.
Becoming slow is tolerable; MySQL going down is not.
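For reference, the limit on the httpd side is this kind of setting (prefork MPM on the httpd 2.4 that ships with CentOS7; the file name and the numbers below are only examples):
/etc/httpd/conf.d/mpm_tuning.conf
# The file name above is just an example; the values are illustrative.
<IfModule mpm_prefork_module>
    ServerLimit            40
    MaxRequestWorkers      40
    MaxConnectionsPerChild 1000
</IfModule>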
Returning to the original point: if the MySQL process is excluded from the targets of the OOM (Out Of Memory) Killer, the process itself will not be killed.
The OOM Killer refers to the priority value in each process's **oom_score_adj** setting. Let's check the value for the MySQL process right away.
# cat /proc/<Process ID>/oom_score_adj
0
#
If you set this to -1000, the process will be passed over by the OOM Killer. Incidentally, sshd is also set to -1000; if the sshd process went down, you couldn't log in from outside.
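For example, sshd's value can be checked the same way (assuming pgrep is available; -o picks the parent listener process, and the -1000 shown is the value I would expect on CentOS7):
# cat /proc/$(pgrep -o -x sshd)/oom_score_adj
-1000
#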
I will set it immediately.
# echo -1000 > /proc/<Process ID>/oom_score_adj
# cat /proc/<Process ID>/oom_score_adj
-1000
#
Use the ps command to find the process ID. The target is mysqld, not mysqld_safe (setting both is fine too; see the sketch below). With this, the process should no longer be singled out and killed by the OOM Killer.
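If you want to set both in one go, something like this works (a sketch, assuming pgrep is available):
# for pid in $(pgrep -x mysqld) $(pgrep -x mysqld_safe); do
>   echo -1000 > /proc/$pid/oom_score_adj
> done
#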
With the above, the process should no longer be stopped (probably), but the value has to be set again whenever MySQL/MariaDB or the server is restarted. On CentOS6 I would have added it to the script under */etc/init.d*, but on CentOS7 the format of the startup scripts has changed.
It's hard to tell where the startup settings file is, so I'll search for it.
# find /etc -name "mariadb.*"
/etc/systemd/system/multi-user.target.wants/mariadb.service
/etc/systemd/system/mariadb.service.d
#
The mariadb.service file sits deep in the directory hierarchy.
I figured I should add the setting to the mariadb.service file, but the top of the file says
It's not recommended to modify this file in-place
warning against editing the file directly. So edit it via systemctl instead.
# systemctl edit mariadb.service
Edit the config file in the editor that opens and add the following lines.
/etc/systemd/system/mariadb.service.d/override.conf
[Service]
OOMScoreAdjust=-1000
The override.conf file holding the additions is generated automatically.
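Since systemctl edit reloads the unit definition automatically, you can also confirm that the override has been picked up before restarting (the output below is the value I would expect, not a capture from this server):
# systemctl show mariadb.service -p OOMScoreAdjust
OOMScoreAdjust=-1000
#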
Restart the database.
# systemctl restart mariadb.service
Check the value of *oom_score_adj*.
# for dir in /proc/[0-9]*; do
> if [ "`cat $dir/oom_score_adj`" != 0 ]; then
> echo "`cat $dir/comm` : `cat $dir/oom_score_adj`"
> fi
> done
systemd-udevd : -1000
mysqld_safe : -1000
mysqld : -1000
auditd : -1000
dbus-daemon : -900
sshd : -1000
Oh, I was able to confirm that it was set to the same value as sshd.
OOM Killer no longer kills the process.
It's not a complete solution, but the worst case of a website going down for a long time seems to be avoided.
The root cause is that httpd processes pile up in large numbers.
It's not clear whether this is a temporary concentration of access, an attack, a WordPress bug, or a system problem. The resource graphs show no extreme growth in traffic, only spikes in CPU usage.
WordPress itself is also suspect, but content operations such as version upgrades are handled separately, and since multiple sites live together on this server, their WordPress versions differ as well...
In another article I will post a cron script that restarts httpd when CPU usage rises, and I use it in combination with this fix.
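The actual script belongs to that article; as a rough sketch of the idea only (the script path, the threshold, and the use of load average as a stand-in for CPU usage are all assumptions here):
#!/bin/bash
# Example: /root/bin/httpd_guard.sh (the path is hypothetical)
# Run from cron, e.g.: */5 * * * * /root/bin/httpd_guard.sh
# If the 1-minute load average reaches the threshold, restart httpd.
THRESHOLD=8
LOAD=$(awk '{print int($1)}' /proc/loadavg)
if [ "$LOAD" -ge "$THRESHOLD" ]; then
    logger "httpd_guard: load ${LOAD} >= ${THRESHOLD}, restarting httpd"
    systemctl restart httpd.service
fi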
MySQL and MariaDB have been used in a jumbled way throughout, which I know makes this hard to read, so please substitute whichever name is appropriate as you go!