This article is the 11th-day entry in the Recruit Lifestyle Advent Calendar 2015 on Qiita. I'm moremagic, a developer at Hot Pepper Beauty, working on the app infrastructure team to support other developers.
Do you use Docker? I started using it recently, and it's incredibly convenient!
However, as the number of containers grows, it becomes hard to remember all the port numbers, and the port number is the only clue to which service you are actually accessing. Instead of switching between containers by port number, wouldn't it be easier to understand if the container name appeared directly in the domain name?
So this time I'll talk about accessing Docker containers using the container name as a subdomain.
For example, suppose:

- the server running Docker is named `example.com`
- the container name is `dev-tomcat`
- the container's port 8080 is forwarded to port 49000

Normally, to reach port 8080 of the dev-tomcat container above, you access the forwarded port 49000:
http://example.com:49000/
If there's only one running container, that's no problem. But as the number of containers and services grows, the mapping becomes hard to remember, and if you get it wrong you connect to a different service.
So the idea is to make the container reachable at a URL that's easy for humans to understand:
http://dev-tomcat-8080.example.com
With this scheme, you only need to remember the container name and the port the service actually listens on inside the container!
We create and launch a name-resolution container that makes containers on the server accessible by container name as a subdomain. Since it is implemented as an HTTP proxy, subdomain resolution works only for the HTTP(S) protocol.
Just start the name-resolution container and it will collect the Docker container information periodically. You can access containers by name without rewriting a configuration file every time you start a container.
Prerequisites: you have a Docker environment, the host running Docker can be resolved by name, and Docker's Remote API is enabled.
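If the Remote API is not enabled yet, on Ubuntu 14.04 (the same base the image below uses) it can typically be turned on by adding a TCP listener to the daemon options. This is a sketch, not from the article: the file location and port 2375 are the conventional defaults for that era, so adjust for your init system.

```shell
# /etc/default/docker -- assumed location on Ubuntu 14.04.
# Listen on the local socket AND an unencrypted TCP port
# (unencrypted Remote API: use only on a trusted network).
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"
```

Restart the Docker service afterwards (e.g. `sudo service docker restart`) for the option to take effect.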
Prepare the following files and build a Docker image from them. Place the files other than the Dockerfile in folders matching the paths referenced by the Dockerfile.
The Dockerfile looks like this: it installs Redis, Python 3, and nginx on top of ubuntu:14.04, and creates a startup shell script that is executed when the container starts.
Dockerfile
FROM ubuntu:14.04
~ Omitted ~
# python3 install
RUN apt-get install -y python3 python3-pip && apt-get clean
RUN pip3 install redis
ADD redis/regist.py /usr/sbin/regist.py
RUN chmod +x /usr/sbin/regist.py
# redis
RUN apt-get update && apt-get install -fy redis-server
ADD redis/redis.conf /etc/redis/redis.conf
# nginx install
RUN apt-get -y install nginx lua-nginx-redis && apt-get clean
ADD nginx/default /etc/nginx/sites-available/
ADD nginx/rewrite.lua /etc/nginx/sites-available/
ADD nginx/cert/ /etc/nginx/cert/
#Create launch shell
RUN printf '#!/bin/bash \n\
/usr/bin/redis-server & \n\
/usr/sbin/regist.py > /dev/null & \n\
/etc/init.d/nginx start \n\
/etc/init.d/nginx reload \n\
/usr/sbin/sshd -D \n\
tail -f /dev/null \n\
' >> /etc/service.sh \
&& chmod +x /etc/service.sh
EXPOSE 22 6379 80 443
CMD /etc/service.sh
nginx The key to the whole setup is the nginx configuration file, shown below. The server listens on ports 80 and 443 and is configured to rewrite via Lua and then proxy.
default
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name localhost;
location / {
set $upstream "";
rewrite_by_lua_file /etc/nginx/sites-available/rewrite.lua;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://$upstream;
client_max_body_size 200M;
}
}
server {
listen 443;
server_name localhost;
ssl on;
ssl_certificate /etc/nginx/cert/ssl.crt;
ssl_certificate_key /etc/nginx/cert/ssl.key;
ssl_session_timeout 5m;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location / {
set $upstream "";
rewrite_by_lua_file /etc/nginx/sites-available/rewrite.lua;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass https://$upstream;
client_max_body_size 200M;
}
}
rewrite.lua extracts the container name and port number from the host name, looks the key up in Redis, and routes the request to the actual container.
rewrite.lua
local routes = _G.routes
if routes == nil then
    routes = {}
    ngx.log(ngx.ALERT, "[[[Route cache is empty.]]]")
end

-- the label before the first dot is the Redis key, e.g. 'dev-tomcat-8080'
local container_name = string.sub(ngx.var.http_host, 1, string.find(ngx.var.http_host, "%.") - 1)

-- fall back to Redis when the in-memory cache has no entry
local route = routes[container_name]
if route == nil then
    local Redis = require "nginx.redis"
    local client = Redis:new()
    client:set_timeout(1000)

    local ok, err = client:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "************ Redis connection failure: " .. err)
        return
    end

    route = client:get(container_name)
    if route == ngx.null then
        route = nil  -- the Redis client returns ngx.null for missing keys
    end
end

ngx.log(ngx.ALERT, tostring(route))
if route ~= nil then
    ngx.var.upstream = route
    routes[container_name] = route
    _G.routes = routes  -- persist the cache for subsequent requests
else
    ngx.log(ngx.ALERT, "=ng=[[[route null]]]")
    ngx.exit(ngx.HTTP_NOT_FOUND)
end
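The host-name parsing above (everything before the first dot becomes the Redis key) can be illustrated with a small standalone Python sketch. The `split_host` helper is mine, for illustration only, and is not part of the article's code:

```python
def split_host(http_host):
    """Mimic rewrite.lua: the label before the first dot is the Redis key.

    The key itself has the form '<container name>-<private port>', so it
    can be split further on the last hyphen for display purposes.
    """
    subdomain = http_host.split('.', 1)[0]     # e.g. 'dev-tomcat-8080'
    name, _, port = subdomain.rpartition('-')  # -> ('dev-tomcat', '8080')
    return subdomain, name, int(port)

key, name, port = split_host('dev-tomcat-8080.example.com')
print(key, name, port)  # dev-tomcat-8080 dev-tomcat 8080
```

Note that since the split is on the *last* hyphen, container names may themselves contain hyphens, as `dev-tomcat` does.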
python The behind-the-scenes worker that stores container information in Redis. It refreshes the container information every 3 seconds via Docker's Remote API. It just runs in an endless loop; there is probably a more elegant way...
regist.py
#!/usr/bin/python3
import os
import sys
import time
import json
import redis
import urllib.request

DOCKER_HOST = os.getenv('DOCKER_HOST')
REDIS_ADDR = '127.0.0.1'
REDIS_PORT = 6379


def redisDump():
    conn = redis.Redis(host=REDIS_ADDR, port=REDIS_PORT)
    for key in conn.keys():
        print(key)
        print(conn.get(key))
    return conn.keys()


def addData(datas):
    conn = redis.Redis(host=REDIS_ADDR, port=REDIS_PORT)
    # union of new keys and existing keys, so stale entries get deleted
    for key in set(list(datas.keys()) + list(conn.keys())):
        if isinstance(key, bytes):
            key = key.decode('utf-8')
        if key in datas:
            conn.set(key, datas[key])
        else:
            conn.delete(key)


def getContainers():
    response = urllib.request.urlopen('http://' + DOCKER_HOST + '/containers/json?all=1')
    jsonData = json.loads(response.read().decode('utf-8'))
    datas = {}
    for con in jsonData:
        name = con['Names'][-1][1:]
        con_ip = getIpAddress(con['Id'])
        for port in con['Ports']:
            key = name + '-' + str(port['PrivatePort'])
            value = con_ip + ':' + str(port['PrivatePort'])
            datas[key] = value
    return datas


def getIpAddress(con_id):
    response = urllib.request.urlopen('http://' + DOCKER_HOST + '/containers/' + con_id + '/json')
    jsonData = json.loads(response.read().decode('utf-8'))
    # print(json.dumps(jsonData))
    return jsonData['NetworkSettings']['IPAddress']


while True:
    addData(getContainers())
    print(redisDump())
    sys.stdout.flush()
    time.sleep(3)
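To make the Redis key scheme concrete, here is a standalone sketch that applies the same mapping as `getContainers` to a hand-written sample shaped like `/containers/json` output. The sample data and the `build_routes` helper are mine, not part of the article's script (the real script fetches the IP address with a second API call per container):

```python
# Hypothetical sample shaped like Docker Remote API /containers/json output.
sample_containers = [
    {
        'Names': ['/dev-tomcat'],
        'Ports': [{'PrivatePort': 8080, 'PublicPort': 49000, 'Type': 'tcp'}],
        'IPAddress': '172.17.0.2',  # regist.py gets this via /containers/<id>/json
    },
]

def build_routes(containers):
    """Build the same '<name>-<private port>' -> '<ip>:<private port>' map as regist.py."""
    datas = {}
    for con in containers:
        name = con['Names'][-1][1:]  # strip the leading '/'
        con_ip = con['IPAddress']
        for port in con['Ports']:
            key = name + '-' + str(port['PrivatePort'])
            datas[key] = con_ip + ':' + str(port['PrivatePort'])
    return datas

print(build_routes(sample_containers))  # {'dev-tomcat-8080': '172.17.0.2:8080'}
```

Note that the key uses the *private* port (8080), not the forwarded host port (49000), which is exactly why you no longer need to remember the forwarded port.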
Go to the folder where the Dockerfile is stored and type the following command!
# docker build -t docker-discovery .
Start the image you just built. Since the container fetches container information from the Remote API of the Docker daemon running on the host, pass the host's address as an environment variable at startup.
# docker run -d -p 80:80 -p 443:443 -e DOCKER_HOST=<Docker host IP address>:<Remote API port> --name docker-discovery docker-discovery
This is all you need to do name resolution!
Suppose the host running Docker is example.com. You can then reach the corresponding container at a URL of the form:
http://{container name}-{port number inside the container}.example.com
Let's start the TeamCity container as a trial.
docker run -dP --name teamcity moremagic/teamcity
This container exposes port 8111, so you can access it at the following URL: http://teamcity-8111.example.com
Enjoy Docker!