It's a little messy.
First, set up the Raspberry Pi 4B.
I had heard that Stretch, which ran on the 3B+, does not work on the 4B, so I downloaded the new Raspbian Buster from the official Raspbian download page and burned it to a microSD card. I chose Buster Lite rather than Desktop because it seems to save memory (even though the 4B has more of it), and this is for server use anyway.
~~I didn't have a mini HDMI cable/adapter (annoying, even though it's micro...), so~~ (2020/03/07 postscript: wrong, it's micro HDMI; mini is the thinner one?) I set it up headless. The cable sells for about 100 yen, so procure one when needed. Insert the microSD card into the board and connect it to the router with a LAN cable. Connect a 5V 3A USB Type-C power supply (annoying again, replaced from micro USB) and turn on the power. (Micro USB to Type-C conversion doesn't seem to be 100% reliable, so using a converter instead of replacing the power supply is a little scary.)
Check the IP address assigned by DHCP on the router and make an SSH connection within the LAN (with the initial password).
If you get `Too many authentication failures`, or the connection is rejected during public key authentication, the client is failing by trying its public keys first. Either pass `-o PreferredAuthentications=password` or `-o PubkeyAuthentication=no` as options to the `ssh` command, or add `PreferredAuthentications password` or `PubkeyAuthentication no` to `~/.ssh/config`.
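For example, a `~/.ssh/config` entry for this (the host alias, address, and user are placeholders for illustration):

```
# ~/.ssh/config (hypothetical entry)
Host raspi
    HostName 192.168.1.100
    User pi
    PreferredAuthentications password
    PubkeyAuthentication no
```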
Change the password, and change the username at the same time. You cannot rename the pi user while logged in as pi, so create a new sudoer user and log in again (we're inside the LAN this time, so temporarily setting a root password is also an option).
Create tmpuser by executing the following commands as the pi user.

```shell
# useradd by itself does not create a home directory
sudo useradd tmpuser
sudo passwd tmpuser
```
Then add tmpuser to the sudoers:

```shell
sudo adduser tmpuser sudo
```
- Initial setup memo when using a Raspberry Pi with Raspbian (adding a user) - Qiita
- Add a user to a group on Linux - Qiita
If you want to take a small detour, edit /etc/sudoers to add tmpuser. For some reason /etc/sudoers and friends are read-only (chmod works, though), so create /etc/sudoers.d/011_tmpuser instead (adding tmpuser to the sudo group also works).
```
# /etc/sudoers.d/011_tmpuser
tmpuser ALL=(ALL:ALL) ALL
```
Log out, log in again as tmpuser, then rename the pi user, rename the pi group, and finally move the home directory with the following commands.

```shell
sudo usermod -l NEW_NAME pi
sudo groupmod -n NEW_NAME pi
sudo usermod -m -d /home/NEW_NAME NEW_NAME
```
Log out from tmpuser, log in again as NEW_NAME, and delete tmpuser. If you took the detour, delete /etc/sudoers.d/011_tmpuser as well. The pi user belongs to the sudo group by default, so the NEW_NAME user should not need to be added to sudoers again.

```shell
sudo userdel tmpuser
# sudo rm /etc/sudoers.d/011_tmpuser
```
Also change the hostname by editing `/etc/hostname` and `/etc/hosts`.

```
# /etc/hostname
NEW_HOSTNAME
```

```
# /etc/hosts
...
127.0.1.1 NEW_HOSTNAME
```
Register the public key for the NEW_NAME user and edit /etc/ssh/sshd_config so that SSH accepts only public key authentication.
On the Raspberry Pi side:

```shell
mkdir ~/.ssh
chmod 700 ~/.ssh
```

On the host side:

```shell
cd ~/.ssh
ssh-keygen -f KEY_NAME
scp KEY_NAME.pub RPI4_HOST:.ssh/
```

Back on the Raspberry Pi side:

```shell
cd ~/.ssh
cat KEY_NAME.pub >> authorized_keys
chmod 600 authorized_keys
```
- Permissions of authorized_keys - Spirogyra grass
After that, specify `IdentityFile` in `~/.ssh/config`. If you still get `Too many authentication failures`, add `IdentitiesOnly yes`.
- What to do when ssh says "Too many authentication failures for ..." - tkuchiki's diary
- [ssh] "Too many authentication failures for ..." error - Qiita
If necessary, edit `/etc/ssh/sshd_config` to set `PasswordAuthentication no`.
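For reference, a minimal sketch of the relevant `sshd_config` lines (restart sshd afterwards, e.g. `sudo systemctl restart ssh`, and verify key login works before closing your current session):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
```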
Next, install Docker and docker-compose.

```shell
sudo curl -fsSL https://get.docker.com/ | sh
sudo apt install python3-pip
sudo apt install libffi-dev
sudo pip3 install docker-compose
```

- Install Docker on Raspberry Pi 4 - Qiita
Since the Django project was managed with git, I migrated the code to the new server with `git clone`. The DB (SQLite3) was migrated with `scp`.
Since the environment on the old server was managed with virtualenv, generate requirements.txt from it.

```shell
pip3 freeze > requirements.txt
```
While I'm at it, I'll build the new server's environment with Docker / docker-compose. The final configuration is Django - gunicorn (WSGI) - nginx, but first an operation test with Django alone.
```dockerfile
# Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
```
```yaml
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      - ENVIRONMENT=production
```
```shell
sudo docker-compose up
```
Next, run MySQL (MariaDB) with docker-compose on the Raspberry Pi, using the `jsurf/rpi-mariadb` image.
```yaml
...
  db:
    # image: mariadb
    image: jsurf/rpi-mariadb
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - DATABASE_DIRECTORY:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=ROOT_PASSWORD
      - MYSQL_DATABASE=DATABASE_NAME
      - MYSQL_USER=USER
      - MYSQL_PASSWORD=PASSWORD
  web:
    ...
```
I'll be switching back later, so to make reverting easy, tweak Django's `settings.py` to take the DB configuration from environment variables.
```python
DATABASE_ENGINE = os.environ.get('DATABASE_ENGINE', 'django.db.backends.sqlite3')
DATABASE_OPTIONS = {}
if DATABASE_ENGINE == 'django.db.backends.mysql':
    DATABASE_OPTIONS = {
        'charset': os.environ.get('DATABASE_CHARSET'),
    }
DATABASES = {
    'default': {
        'ENGINE': DATABASE_ENGINE,
        'HOST': os.environ.get('DATABASE_HOST'),
        'PORT': os.environ.get('DATABASE_PORT'),
        'NAME': os.environ.get('DATABASE_NAME', os.path.join(BASE_DIR, 'db.sqlite3')),
        'USER': os.environ.get('DATABASE_USER'),
        'PASSWORD': os.environ.get('DATABASE_PASSWORD'),
        'OPTIONS': DATABASE_OPTIONS,
    },
}
```
Edit `docker-compose.yml` as follows. Comment out the whole DATABASE part of `environment` (or keep a separate `docker-compose.yml`) so the DB can be switched back to SQLite3.
```yaml
  web:
    ...
    environment:
      - ENVIRONMENT=production
      - DATABASE_ENGINE=django.db.backends.mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3306
      - DATABASE_NAME=DATABASE_NAME
      - DATABASE_USER=USER
      - DATABASE_PASSWORD=PASSWORD
      - DATABASE_CHARSET=utf8mb4
    depends_on:
      - db
```
Add `PyMySQL` to `requirements.txt` and add the following at the top of `manage.py`.
```python
if os.environ.get('DATABASE_ENGINE') == 'django.db.backends.mysql':
    import pymysql
    pymysql.install_as_MySQLdb()
```
- Migrating Django from SQLite3 to MySQL - Qiita
- The DATABASES settings I always use when using MySQL (utf8mb4) with Django - Qiita
If Django accesses MySQL before its initialization completes, Django throws an error, so on the first run just re-execute `docker-compose up`. If Django still starts first on subsequent runs, add an appropriate sleep or a wait script to deal with it. After gunicorn is put in place in section 9, Django (gunicorn) doesn't seem to error even when it starts first, so this may not be worth worrying about too much.
```yaml
command: bash -c "sleep 5 && python manage.py runserver 0.0.0.0:8000"
```
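One way to replace the fixed `sleep 5` is a small wait script that polls the DB port before starting Django. A minimal sketch (the timeouts and the demo listener are my own choices, not part of the original setup; in the real compose file you would point it at `db:3306`):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # the DB port is accepting connections
        except OSError:
            time.sleep(interval)
    return False

# Demo against a throwaway local listener instead of a real MySQL container.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
host, port = listener.getsockname()
ready = wait_for_port(host, port, timeout=5.0, interval=0.1)
listener.close()
print(ready)  # True
```

Note this only checks that the port is open, not that MySQL has finished its own initialization, so keeping a short sleep afterwards is still reasonable.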
Depending on the DB model definitions, `sudo docker-compose up -d` followed by `sudo docker-compose exec web python3 manage.py migrate` may fail, for example if you have a TextField with a unique constraint and no `max_length`. In my case I changed URL columns that had been kept in TextField instead of URLField over to URLField, and specified `max_length` (255 or less) for TextFields known to hold short strings, which solved it (on the Raspberry Pi, however, this alone wasn't enough, and I ended up removing the unique constraints later).
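For reference, the arithmetic behind these key-length errors, as I understand it: older InnoDB configurations limit a single index key to 767 bytes, and utf8mb4 reserves 4 bytes per character, so a uniquely indexed string column can hold at most 191 characters. A back-of-the-envelope check (assuming that old default limit, which may not match every MySQL/MariaDB build):

```python
# Classic InnoDB index key limit, in bytes (3072 with large prefixes enabled).
INDEX_LIMIT_BYTES = 767
# utf8mb4 reserves up to 4 bytes per character.
BYTES_PER_CHAR_UTF8MB4 = 4

max_chars = INDEX_LIMIT_BYTES // BYTES_PER_CHAR_UTF8MB4
print(max_chars)  # 191
```

This is why `max_length=255` can still fail on some setups while something under 191 works.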
Around this point I moved the project and DB to my main machine to speed up the trial and error. Doing so means changing the MySQL image, but the good point of Docker is that it automatically prepares (roughly) the same environment without polluting or affecting the host... though with no official image for this board, I'm troubled by compatibility anyway...?
- [If you use MySQL as a backend with Django, you can't put a unique constraint on a TextField - livedoor Blog](http://blog.livedoor.jp/kaz0215/archives/51119127.html)
- Try creating an index on a BLOB/TEXT column in MySQL - Qiita
After eliminating the migration errors, switch the DB back to SQLite3, migrate, and dump the data to JSON.
```shell
sudo docker-compose run web bash
python3 manage.py makemigrations
python3 manage.py migrate
# python3 manage.py dumpdata > dump.json
python3 manage.py dumpdata --natural-foreign --natural-primary -e contenttypes -e auth.Permission > dump.json
```
- Migrating Django from SQLite3 to MySQL - Qiita
- Dump SQLite3 data and migrate to MySQL - Qiita
Then switch the DB back to MySQL and load the dump.

```shell
python3 manage.py migrate
python3 manage.py loaddata dump.json
```
```
django.db.utils.IntegrityError: Problem installing fixture '/code/dump.json': Could not load APP.MODELNAME(pk=PK_NUM): (1062, "Duplicate entry 'ONE_FIELD_NUM' for key 'ONE_FIELD'")
```
Apparently putting a unique_together constraint on a OneToOneField was a bad idea (OneToOne cannot express many-to-one), so I changed it to a ForeignKey. Also, at this point it worked with `mariadb`, but with `jsurf/rpi-mariadb` the key-length errors would not go away, probably because of the utf8mb4 setting, so I removed the unique constraints from all string fields. On top of that, migrate kept stopping partway, so I had to rewrite the files under migrations/ directly. Even sending over a DB processed on another PC did not work, so I'm still uneasy about compatibility. After much trial and error, I was finally able to load the data.
Add `gunicorn` to `requirements.txt`.
Edit `docker-compose.yml`. Adjust the number of workers (`-w`) as needed, since they consume memory (I believe).
```yaml
# command: /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:8000 MY_PROJECT.wsgi -t 300
command: bash -c "sleep 5 && gunicorn -w 4 -b 0.0.0.0:8000 MY_PROJECT.wsgi -t 300"
```
As with `manage.py`, add the following at the top of `wsgi.py`.
```python
if os.environ.get('DATABASE_ENGINE') == 'django.db.backends.mysql':
    import pymysql
    pymysql.install_as_MySQLdb()
```
- Run a Python web application on gunicorn (Django and Flask) - Make group blog
- nginx + gunicorn + Django timeout handling - Qiita
- Start Django with CentOS7 + Nginx + Gunicorn - Narito Blog
- Deploy a Django application on EC2 with Nginx + Gunicorn + Supervisor - Qiita
(2020/02/15 postscript)

- Job execution with the Python schedule library - Qiita

I originally set this up with busybox crond as described below, but the production script did not work well due to problems around logging, so I rewrote the periodic job as a Python script and ran it in a second container with the same configuration as the Django container. Since that is redundant, though, it might be better to expose an endpoint for the periodic job on the Django container side and make the second container one that just fires HTTP requests at it.
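The Python-side periodic runner can be sketched with the standard library alone (the interval, iteration cap, and demo job are placeholders; the real script would call the actual job and run forever):

```python
import time

def run_periodically(job, interval_seconds, iterations=None):
    """Call job() every interval_seconds; iterations=None means run forever."""
    count = 0
    while iterations is None or count < iterations:
        job()
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval_seconds)
    return count

# Demo: a tiny interval and a counting job instead of the real 6-hourly script.
calls = []
run_periodically(lambda: calls.append(time.monotonic()), 0.01, iterations=3)
print(len(calls))  # 3
```

The `schedule` library mentioned above wraps the same loop with a nicer API (`schedule.run_pending()` inside a sleep loop), and logging from this process is straightforward since it is just a normal Python program.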
(Old version)

This time, the periodic job runs in the same container as Django.

For periodic scripts I used to either write a loop in Python or use a systemd timer. This one was on a systemd timer, so I tried to move that into the Docker container, but while you can run a script inside a container from the host with `exec`, I have no idea how to run systemd timers inside a container.
In any case, the base OS of `python:3` is Debian and systemd is unlikely to be available (it uses init.d), so I'll schedule the job with cron.
- Regularly execute a Python program with cron on Docker - Qiita
- crontab guidelines - Qiita
- The current directory when running from cron is the executing user's home directory - Qiita
- Three ways to realize a Docker + cron environment - Qiita
- Run cron in a running Docker container - Qiita
- How to use cron and where I got stuck - Qiita
- Run Docker containers from cron on the host side - Notes in Room 202
- Start a cron task in a Docker container - Qiita
- Run cron with /etc/cron.d on Docker - Qiita
- What I was addicted to when running cron with Docker - Qiita
- When crontab finds nothing but a configuration file exists - helen's blog
- [Difference] /etc/crontab and /var/spool/cron/[user] - Qiita
- How to write /etc/crontab and /etc/cron.d configuration files | server-memo.net
- How to use crond - Qiita
- crontab -e should never be used - donkey electronics blog
- docker - Which commands belong in docker-compose.yml vs the Dockerfile? - Stack Overflow
This was my first time using cron, so I got thoroughly lost.
- I want to run cron in a Docker container - Qiita
- busybox crond is convenient when you want cron with Docker - shimoju.diary
- Run busybox cron in a Debian-based Docker container - ngyuki's diary
I want the job to inherit the environment variables set by Docker, so I use the crond included in busybox. First, create the following file, `crontab`, in the build directory.
```
# * * * * * cd /code && echo `env` >> env.txt
0 */6 * * * cd /code && /usr/local/bin/python3 AUTORUN_SCRIPT.py
```
The first (commented-out) line writes the environment variables to a file every minute (for debugging); the second runs /code/AUTORUN_SCRIPT.py as the root user with working directory /code every six hours. Thanks to the TZ setting below, the times can be written in JST.
Next, define the installation of crond and the addition of the configuration file in the Dockerfile. The point is that /var/spool/cron/crontabs/root is a file, not a directory.
```dockerfile
# Dockerfile
...
RUN apt update && apt install -y \
    busybox-static
ENV TZ Asia/Tokyo
COPY crontab /var/spool/cron/crontabs/root
...
```
Then make sure crond starts when the container starts. Note that `CMD` in the `Dockerfile` is not executed here because docker-compose's `command` overrides it; instead, add the crond startup command to `command` in `docker-compose.yml`. Since `crond` goes to the background, `gunicorn` still starts after it.
```yaml
# docker-compose.yml
...
    # command: bash -c "busybox crond && gunicorn -w 4 -b 0.0.0.0:8000 MY_PROJECT.wsgi -t 300"
    command: bash -c "sleep 5 && busybox crond && gunicorn -w 4 -b 0.0.0.0:8000 MY_PROJECT.wsgi -t 300"
...
```
Since the job touches Django's DB, I added the pymysql install shim to `AUTORUN_SCRIPT.py` as well (the same snippet as in `manage.py` and `wsgi.py`), and confirmed that an experimental script works. Remove the debugging cron entry and the setup is complete.
Add `restart: always` to `docker-compose.yml` so that the containers start automatically when the host boots, and start them in the background with `sudo docker-compose up -d`. After that, reboot the host to confirm (`sudo docker-compose ps`, `sudo docker ps`).
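A minimal sketch of the relevant compose fragment (service names follow the ones used above; `restart: always` goes on each service that should survive a reboot):

```yaml
services:
  web:
    restart: always
    ...
  db:
    restart: always
    ...
```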
- Docker Compose restart behavior - technical memorandum
- How to start containers automatically at OS boot with docker-compose - Qiita
With that, the hardware (Raspberry Pi) migration, DB migration, DB engine migration, and move to Docker with persistence are complete.

Performance has (seemingly) improved thanks to the better hardware specs and the move to MySQL, and since DB operations can now run in parallel (apparently), the `Database is locked` error that used to occur on concurrent DB access no longer appears.
Since it's a personal project, I allowed myself shortcuts like loosening the log settings and dropping the unique constraints... restoring them took so long that I gave up. However, after loaddata it may be possible to restore the unique constraints with another migration.
Afterwards I thought it would be better to split cron into a separate container, but since it has exactly the same dependencies as the Django project, I kept them together. How do people split this up...?