I tried it! Rebuilding Django's development environment with Docker!
Team development... setting up the environment is always such a pain... In that case, let's just use Docker.
This article is a repost of one I originally published on Qrunch.
Let's build a development environment for Django / PostgreSQL / Gunicorn / Nginx using Docker and Docker Compose, which are very convenient for team development!
For the overall flow I referred to the following site, but I've tried to make this article easy to follow, for example by commenting almost every line of the configuration files: Dockerizing Django with Postgres, Gunicorn, and Nginx
This is a long article, so let's get started!
- Installing Docker and Docker-compose
- Installing pipenv and building a virtual environment
- Building a Docker container
As you probably know, Docker is a tool that lets you run another OS virtually on your PC (to put it very loosely) and hand the entire environment over to other people as-is.
For details, see the following page: Introduction to Docker (Part 1) - What Docker is and what it is good for
First, let's install Docker!
docker
Install from https://docs.docker.com/docker-for-mac/install/
docker-compose
$ curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
Installation is complete when the version is displayed with the following command!
$ docker --version
Docker version 18.09.2
$ docker-compose --version
docker-compose version 1.23.2
Next, let's create the directories for the Django project.
<!--Create a directory for your project (app directory is the root directory of your django project)-->
$ mkdir docker-demo-with-django && cd docker-demo-with-django
$ mkdir app && cd app
pipenv is a Python virtual-environment tool, often compared with venv and virtualenv.
It combines the functionality of pip and virtualenv, and it can manage both the virtual environment and package versions with just two files: Pipfile and Pipfile.lock.
When building a Django environment with Docker, you can reproduce the same Python development environment just by copying those two files and running pipenv, so let's use it.
Create a Pipfile in docker-demo-with-django/app/ and don't forget to put Django = "==2.2.3" in the [packages] section.
docker-demo-with-django/app/Pipfile
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
Django = "==2.2.3"
[requires]
python_version = "3.7"
After creating the Pipfile, enter the following commands in the same directory:
<!-- Install pipenv itself -->
:app$ pip install pipenv
<!--Build a virtual environment from Pipfile-->
:app$ pipenv install
<!--Enter the virtual environment-->
:app$ pipenv shell
<!--Start of Django project-->
(app) :app$ django-admin.py startproject django_demo .
<!--Apply model contents to database-->
(app) :app$ python manage.py migrate
<!--Start development server-->
(app) :app$ python manage.py runserver
Try accessing http://localhost:8000/. You should see Django's welcome screen.
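If you'd rather check from the command line than a browser, here is a small standard-library sketch (the URL is just the dev server above; this helper is not part of the project):

```python
import urllib.request


def check_server(url="http://localhost:8000/", timeout=3):
    """Return the HTTP status code of the server at `url`, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except OSError:
        return None
```

A return value of 200 means the welcome page is being served.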
:docker-demo-with-django$ tree
.
└── app
├── Pipfile
├── Pipfile.lock
├── db.sqlite3
├── django_demo
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── manage.py
Django
Add the following Dockerfile to the app directory.
Since the goal this time is a minimal environment, every Docker image we pull from the official registry will use the lightweight `alpine linux` variant.
This lets you build an environment roughly 1/10 the size of images such as ubuntu.
docker-demo-with-django/app/Dockerfile
# Pull the official Python 3.7 on Alpine Linux image
FROM python:3.7-alpine

# Set the working directory
WORKDIR /usr/src/app

# Set environment variables
# Prevent Python from writing .pyc files to disk
ENV PYTHONDONTWRITEBYTECODE 1
# Prevent Python from buffering standard input/output
ENV PYTHONUNBUFFERED 1

# Install pipenv
RUN pip install --upgrade pip \
    && pip install pipenv

# Copy the host's Pipfile to the container's working directory
COPY ./Pipfile /usr/src/app/Pipfile

# Install the packages from the Pipfile to build the Django environment
RUN pipenv install --skip-lock --system --dev

# Copy the host's current directory (here, the app directory) to the working directory
COPY . /usr/src/app/
Then add docker-compose.yml to the root of your project (docker-demo-with-django).
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    # Look for the `Dockerfile` inside the app directory
    build: ./app
    # Command to run after the service starts
    command: python manage.py runserver 0.0.0.0:8000
    # Settings for persisting data; paths are written as `host:container`
    volumes:
      - ./app/:/usr/src/app/
    # Ports to open, written as `host:container`
    ports:
      - 8000:8000
    # Environment variables
    environment:
      # 1 means debug mode
      - DEBUG=1
      # Fill in the SECRET_KEY used in settings.py
      - SECRET_KEY=hoge
Modify settings.py in your Django project.
There are three items to change: SECRET_KEY, DEBUG, and `ALLOWED_HOSTS`.
# Get SECRET_KEY from an environment variable
SECRET_KEY = os.environ.get('SECRET_KEY')
# Get DEBUG from an environment variable. The default is 0 (production mode)
DEBUG = int(os.environ.get('DEBUG', default=0))
#List the allowed hosts
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
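As a quick illustration of how the DEBUG line above behaves (a standalone sketch with a plain dict, not part of the project):

```python
def read_debug(environ):
    # Mirrors `DEBUG = int(os.environ.get('DEBUG', default=0))` in settings.py:
    # the string "1" from docker-compose enables debug mode; unset falls back to 0
    return int(environ.get('DEBUG', 0))


print(bool(read_debug({'DEBUG': '1'})))  # True  (the development compose file)
print(bool(read_debug({})))              # False (nothing set: production default)
```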
After making these changes, build and start at the same time with the docker-compose up -d --build command.
The -d option starts the containers in the background.
If you connect to http://localhost:8000/ and the welcome screen is displayed, it worked.
Postgres
To add postgres, add a new service to docker-compose.yml.
At the same time, you need to configure the database settings for the django service.
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    # Look for the `Dockerfile` inside the app directory
    build: ./app
    # Command to run after the service starts
    command: python manage.py runserver 0.0.0.0:8000
    # Settings for persisting data; paths are written as `host:container`
    volumes:
      - ./app/:/usr/src/app/
    # Ports to open, written as `host:container`
    ports:
      - 8000:8000
    # Environment variables
    environment:
      # 1 means debug mode
      - DEBUG=1
      - SECRET_KEY=hoge
      - DATABASE_ENGINE=django.db.backends.postgresql
      - DATABASE_DB=django_db
      - DATABASE_USER=django_db_user
      - DATABASE_PASSWORD=password1234
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
    # Services this service depends on
    depends_on:
      - postgres

  postgres:
    # Pull the official image
    image: postgres:11.4-alpine
    # Database persistence
    # Do not prefix the volume name with `./`, or it will be mounted as a host directory
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # Creates the specified user with superuser privileges and a database with the given name
    # The values are the same as those specified in the django service
    environment:
      - POSTGRES_USER=django_db_user
      - POSTGRES_PASSWORD=password1234
      - POSTGRES_DB=django_db

# "Named volumes" written at the top level can be referenced from multiple services
volumes:
  postgres_data:
Then rewrite the DATABASES entry in settings.py.
DATABASES = {
    'default': {
        'ENGINE': os.environ.get('DATABASE_ENGINE', 'django.db.backends.sqlite3'),
        'NAME': os.environ.get('DATABASE_DB', os.path.join(BASE_DIR, 'db.sqlite3')),
        'USER': os.environ.get('DATABASE_USER', 'user'),
        'PASSWORD': os.environ.get('DATABASE_PASSWORD', 'password'),
        'HOST': os.environ.get('DATABASE_HOST', 'localhost'),
        'PORT': os.environ.get('DATABASE_PORT', '5432'),
    }
}
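To see the fallback pattern of this dict in isolation (a sketch with a fake environment dict, not the real settings.py):

```python
def database_config(environ):
    # Same fallback pattern as the DATABASES dict: env vars win,
    # otherwise Django falls back to the bundled sqlite3 backend
    return {
        'ENGINE': environ.get('DATABASE_ENGINE', 'django.db.backends.sqlite3'),
        'HOST': environ.get('DATABASE_HOST', 'localhost'),
    }


# Outside docker-compose (no env vars set), sqlite3 is used
print(database_config({})['ENGINE'])  # django.db.backends.sqlite3
# Inside the compose environment above, postgres is used
compose_env = {'DATABASE_ENGINE': 'django.db.backends.postgresql',
               'DATABASE_HOST': 'postgres'}
print(database_config(compose_env)['HOST'])  # postgres
```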
You need a driver to connect to postgres from Django.
This time, let's modify docker-demo-with-django/app/Dockerfile to use the most popular driver, psycopg2.
The Dockerfile looks like this: install the build dependencies with `apk`, Alpine Linux's package manager, and then install psycopg2 from pip.
# Pull the official Python 3.7 on Alpine Linux image
FROM python:3.7-alpine

# Set the working directory
WORKDIR /usr/src/app

# Set environment variables
# Prevent Python from writing .pyc files to disk
ENV PYTHONDONTWRITEBYTECODE 1
# Prevent Python from buffering standard input/output
ENV PYTHONUNBUFFERED 1

# Install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps

# Install pipenv
RUN pip install --upgrade pip \
    && pip install pipenv

# Copy the host's Pipfile to the container's working directory
COPY ./Pipfile /usr/src/app/Pipfile

# Install the packages from the Pipfile to build the Django environment
RUN pipenv install --skip-lock --system --dev

# Copy the host's current directory (here, the app directory) to the working directory
COPY . /usr/src/app/
Stop the containers started earlier with docker-compose down -v, then enter docker-compose up -d --build again to rebuild and restart them.
The -v option deletes the volumes as well.
<!--Stop container-->
$ docker-compose down -v
<!--Start container-->
$ docker-compose up -d --build
<!--migration-->
<!-- $ docker-compose exec <service_name> python manage.py migrate --noinput -->
$ docker-compose exec django python manage.py migrate --noinput
If you repeat this a few times, Django may occasionally fail to connect to postgres and the postgres container will stop.
In that case, check the logs.
You will probably see a message like postgres_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty, so delete docker-demo-with-django/postgres_data on the host side.
Use the docker-compose ps command to confirm that both containers are running (State is Up), as shown below.
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------
docker-demo-with-django_django_1 python manage.py runserver ... Up 0.0.0.0:8000->8000/tcp
docker-demo-with-django_postgres_1 docker-entrypoint.sh postgres Up 5432/tcp
Next, make sure the specified database has been created.
$docker-compose exec postgres psql --username=django_db_user --dbname=django_db
psql (11.4)
Type "help" for help.
django_db=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------------+----------+------------+------------+-----------------------------------
django_db | django_db_user | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | django_db_user | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | django_db_user | UTF8 | en_US.utf8 | en_US.utf8 | =c/django_db_user +
| | | | | django_db_user=CTc/django_db_user
template1 | django_db_user | UTF8 | en_US.utf8 | en_US.utf8 | =c/django_db_user +
| | | | | django_db_user=CTc/django_db_user
(4 rows)
django_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+----------------------------+-------+----------------
public | auth_group | table | django_db_user
public | auth_group_permissions | table | django_db_user
public | auth_permission | table | django_db_user
public | auth_user | table | django_db_user
public | auth_user_groups | table | django_db_user
public | auth_user_user_permissions | table | django_db_user
public | django_admin_log | table | django_db_user
public | django_content_type | table | django_db_user
public | django_migrations | table | django_db_user
public | django_session | table | django_db_user
(10 rows)
django_db=# \q
Once that's confirmed, add `entrypoint.sh` to the app directory so that migrations run automatically after the connection to postgres is confirmed.
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $DATABASE_HOST $DATABASE_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate

exec "$@"
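By the way, the `nc -z` polling loop in this script can also be written in pure Python using only the standard library, in case netcat is missing from your image (an alternative sketch, not what this article's script uses):

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0):
    """Block until a TCP connection to host:port succeeds, like the `nc -z` loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the database is accepting connections
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.1)  # same back-off as the shell loop
    return False
```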
After adding it, grant execute permission with the chmod +x app/entrypoint.sh command.
Finally, modify the Dockerfile to run `entrypoint.sh`, and add the environment variable `DATABASE`.
# Pull the official Python 3.7 on Alpine Linux image
FROM python:3.7-alpine

# Set the working directory
WORKDIR /usr/src/app

# Set environment variables
# Prevent Python from writing .pyc files to disk
ENV PYTHONDONTWRITEBYTECODE 1
# Prevent Python from buffering standard input/output
ENV PYTHONUNBUFFERED 1

# Install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps

# Install pipenv
RUN pip install --upgrade pip \
    && pip install pipenv

# Copy the host's Pipfile to the container's working directory
COPY ./Pipfile /usr/src/app/Pipfile

# Install the packages from the Pipfile to build the Django environment
RUN pipenv install --skip-lock --system --dev

# Copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh

# Copy the host's current directory (here, the app directory) to the working directory
COPY . /usr/src/app/

# Run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Wait a moment, and if you can connect to http://localhost:8000/ after startup completes, you're done.
For now, the personal development environment is ready.
However, in a production environment you have to keep environment variables private. You also need to start the server differently, because python manage.py runserver has functional limitations and security issues.
For that, we must install gunicorn (a WSGI server that sits between the application and the web server), set `env_file`, and set up nginx as a reverse proxy in front of gunicorn to serve static files.
Let's do those next.
Let's add gunicorn to the Pipfile.
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[packages]
Django = "==2.2.3"
gunicorn = "==19.9.0"

[dev-packages]

[requires]
python_version = "3.7"
In addition, add docker-compose.prod.yml to the same directory as docker-compose.yml and describe the production settings there.
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    # Look for the `Dockerfile` inside the app directory
    build: ./app
    # Command to run after the service starts
    command: gunicorn django_demo.wsgi:application --bind 0.0.0.0:8000
    # Settings for persisting data; paths are written as `host:container`
    volumes:
      - ./app/:/usr/src/app/
    # Ports to open, written as `host:container`
    ports:
      - 8000:8000
    # Environment variables are read from this file
    env_file: .env
    # Services this service depends on
    depends_on:
      - postgres

  postgres:
    # Pull the official image
    image: postgres:11.4-alpine
    # Database persistence
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env.db

# "Named volumes" written at the top level can be referenced from multiple services
volumes:
  postgres_data:
Compared to the development settings, `environment:` has changed to `env_file:`. This removes the need to write production settings directly in the yml file.
Also, `command:` in the django service now specifies the gunicorn startup command, so gunicorn starts instead of runserver.
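The .env format that compose reads is essentially just KEY=VALUE lines. A minimal parser sketch shows the idea (compose's real parser handles more edge cases like quoting and interpolation):

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition('=')
        env[key.strip()] = value.strip()
    return env


print(parse_env("DEBUG=0\nSECRET_KEY=hoge")['DEBUG'])  # '0'
```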
Place `.env` in the same directory as docker-compose.prod.yml and write it as follows. Don't forget to set DEBUG=0 in .env (this turns debug mode off).
/docker-demo-with-django/.env
DEBUG=0
SECRET_KEY=hoge
DATABASE_ENGINE=django.db.backends.postgresql
DATABASE_DB=django_db
DATABASE_USER=django_db_user
DATABASE_PASSWORD=password1234
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE=postgres
/docker-demo-with-django/.env.db
POSTGRES_USER=django_db_user
POSTGRES_PASSWORD=password1234
POSTGRES_DB=django_db
In addition, at this stage migrate runs every time the container starts, so let's also create `entrypoint.prod.sh` for the production environment.
/docker-demo-with-django/app/entrypoint.prod.sh
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $DATABASE_HOST $DATABASE_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

exec "$@"
A Dockerfile will also be created for production.
/docker-demo-with-django/app/Dockerfile.prod
# Pull the official Python 3.7 on Alpine Linux image
FROM python:3.7-alpine

# Set the working directory
WORKDIR /usr/src/app

# Set environment variables
# Prevent Python from writing .pyc files to disk
ENV PYTHONDONTWRITEBYTECODE 1
# Prevent Python from buffering standard input/output
ENV PYTHONUNBUFFERED 1

# Install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps

# Install pipenv
RUN pip install --upgrade pip \
    && pip install pipenv

# Copy the host's Pipfile to the container's working directory
COPY ./Pipfile /usr/src/app/Pipfile

# Install the packages from the Pipfile to build the Django environment
RUN pipenv install --skip-lock --system --dev

# Copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh /usr/src/app/entrypoint.prod.sh

# Copy the host's current directory (here, the app directory) to the working directory
COPY . /usr/src/app/

# Run entrypoint.prod.sh
ENTRYPOINT ["/usr/src/app/entrypoint.prod.sh"]
Naturally, rewrite docker-compose.prod.yml so that it reads the production files as well.
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    build:
      # If the file is not named `Dockerfile`, set the relative path in `context`
      # and the file name in `dockerfile`
      context: ./app
      dockerfile: Dockerfile.prod
    # Command to run after the service starts
    command: gunicorn django_demo.wsgi:application --bind 0.0.0.0:8000
    # Settings for persisting data; paths are written as `host:container`
    volumes:
      - ./app/:/usr/src/app/
    # Ports to open, written as `host:container`
    ports:
      - 8000:8000
    # Environment variables are read from this file
    env_file: .env
    # Services this service depends on
    depends_on:
      - postgres

  postgres:
    # Pull the official image
    image: postgres:11.4-alpine
    # Database persistence
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env.db

# "Named volumes" written at the top level can be referenced from multiple services
volumes:
  postgres_data:
With the settings done, let's start the containers again.
$ docker-compose down -v
<!-- Specify docker-compose.prod.yml with the -f option -->
$ docker-compose -f docker-compose.prod.yml up -d --build
<!-- entrypoint.prod.sh does not run migrate, so execute it manually -->
$ docker-compose -f docker-compose.prod.yml exec django python manage.py migrate --noinput
After starting, let's access http://localhost:8000/admin.
You should be able to connect and see the Django admin login screen, but without any static files (CSS, etc.) applied.
This is because debug mode is off, so Django no longer serves static files itself.
Also, although gunicorn started as configured, gunicorn does not serve static files either; it only serves the application (Django in this case), so static files cannot be delivered without changing the two settings above.
Specifically, we will use python manage.py collectstatic to gather the static files in one place, and put a web server such as nginx in front of gunicorn as a reverse proxy to serve them.
First, let's add nginx as a service.
Create an nginx directory at the root of your project (/docker-demo-with-django/) and add a Dockerfile and nginx.conf to it.
/docker-demo-with-django/nginx/Dockerfile
FROM nginx:1.15.12-alpine
#Delete the default conf and add another setting
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
/docker-demo-with-django/nginx/nginx.conf
upstream config {
    # Specifying a service name resolves to that container
    server django:8000;
}

server {
    # Listen on port 80
    listen 80;

    location / {
        proxy_pass http://config;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
Then add nginx to docker-compose.prod.yml
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    build:
      # If the file is not named `Dockerfile`, set the relative path in `context`
      # and the file name in `dockerfile`
      context: ./app
      dockerfile: Dockerfile.prod
    # Command to run after the service starts
    command: gunicorn django_demo.wsgi:application --bind 0.0.0.0:8000
    # Settings for persisting data; paths are written as `host:container`
    volumes:
      - ./app/:/usr/src/app/
    # Exposed ports can only be accessed from linked services
    expose:
      - 8000
    # Environment variables are read from this file
    env_file: .env
    # Services this service depends on
    depends_on:
      - postgres

  postgres:
    # Pull the official image
    image: postgres:11.4-alpine
    # Database persistence
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env.db

  nginx:
    build: ./nginx
    ports:
      - 1337:80
    depends_on:
      - django

# "Named volumes" written at the top level can be referenced from multiple services
volumes:
  postgres_data:
Since the django service will no longer receive requests directly from the host OS, ports: has been changed to `expose:`.
Ports specified this way are not published to the host OS, but linked services can still connect to them.
Restart the service as before
$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec django python manage.py migrate --noinput
Let's connect to http://localhost:1337/admin/. You should see the admin screen.
This completes the connection with nginx. The directory structure at this stage is as follows
$tree
.
├── app
│ ├── Dockerfile
│ ├── Dockerfile.prod
│ ├── Pipfile
│ ├── Pipfile.lock
│ ├── django_demo
│ │ ├── __init__.py
│ │ ├── settings.py
│ │ ├── urls.py
│ │ └── wsgi.py
│ ├── entrypoint.prod.sh
│ ├── entrypoint.sh
│ └── manage.py
├── docker-compose.prod.yml
├── docker-compose.yml
└── nginx
├── Dockerfile
└── nginx.conf
Next is the configuration for serving static files. Modify the end of settings.py in the Django project, and add a line to `entrypoint.sh`.
/docker-demo-with-django/app/django_demo/settings.py
STATIC_URL = '/staticfiles/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
/docker-demo-with-django/app/entrypoint.sh
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $DATABASE_HOST $DATABASE_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate
python manage.py collectstatic --no-input --clear

exec "$@"
Now python manage.py collectstatic will gather the static files into the path specified by STATIC_ROOT, and the files in the staticfiles directory will be served from there.
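As a rough mental model of what collectstatic does (a toy sketch, not Django's actual implementation), it walks each source directory and copies every file into STATIC_ROOT while preserving relative paths:

```python
import pathlib
import shutil


def collect_static(source_dirs, static_root):
    """Copy every file under the source dirs into static_root, keeping relative paths."""
    root = pathlib.Path(static_root)
    root.mkdir(parents=True, exist_ok=True)
    copied = 0
    for src in source_dirs:
        src = pathlib.Path(src)
        for f in src.rglob('*'):
            if f.is_file():
                dest = root / f.relative_to(src)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, dest)
                copied += 1
    return copied  # like the "N static files copied" summary collectstatic prints
```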
Next, in docker-compose.prod.yml, set the same named volume for django and nginx so that the Django project's staticfiles directory is shared with the nginx container, and then route static file requests to it.
/docker-demo-with-django/docker-compose.prod.yml
version: '3.7'

services:
  # Service names can be chosen freely
  django:
    build:
      # If the file is not named `Dockerfile`, set the relative path in `context`
      # and the file name in `dockerfile`
      context: ./app
      dockerfile: Dockerfile.prod
    # Command to run after the service starts
    command: gunicorn django_demo.wsgi:application --bind 0.0.0.0:8000
    # Share the collected static files with nginx via a named volume
    volumes:
      - static_volume:/usr/src/app/staticfiles
    # Exposed ports can only be accessed from linked services
    expose:
      - 8000
    # Environment variables are read from this file
    env_file: .env
    # Services this service depends on
    depends_on:
      - postgres

  postgres:
    # Pull the official image
    image: postgres:11.4-alpine
    # Database persistence
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env.db

  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/staticfiles
    ports:
      - 1337:80
    depends_on:
      - django

# "Named volumes" written at the top level can be referenced from multiple services
volumes:
  postgres_data:
  static_volume:
/docker-demo-with-django/nginx/nginx.conf
upstream config {
    # Specifying a service name resolves to that container
    server django:8000;
}

server {
    # Listen on port 80
    listen 80;

    location / {
        proxy_pass http://config;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    # Route static file requests to the staticfiles directory
    location /staticfiles/ {
        alias /usr/src/app/staticfiles/;
    }
}
This completes all the settings! Let's start the container again!
$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec django python manage.py migrate --noinput
$ docker-compose -f docker-compose.prod.yml exec django python manage.py collectstatic --no-input --clear
After confirming startup, try connecting to http://localhost:1337/admin. The admin screen should now have its CSS applied! From here, modify your Django project however you like!
Thanks for your hard work!