Container image support has been announced for AWS Lambda: New for AWS Lambda – Container Image Support (https://aws.amazon.com/jp/blogs/news/new-for-aws-lambda-container-image-support/)
However, the sample code in the official AWS documentation is somewhat idiosyncratic and a little hard to follow.
In this article, I describe the simplest possible way to deploy and run a container image on Lambda.
- Knowledge of Docker and containers.
- Rough knowledge of AWS Lambda.
- aws-cli is installed and configured.
Follow the procedure below.
This is the directory structure. I kept it as simple as possible so that it is easy to adapt.
```
#Directory structure
├── Dockerfile
├── entry.sh
└── app
    └── app.py
```
This time, let's use buster (a Debian image, i.e. a Linux distribution) that already has Python installed. In fact, other base images are fine too.
The key points are the following two:

- Install awslambdaric (the Lambda runtime interface client).
- Install the runtime interface emulator if you want to run the image locally.

Both are used in entry.sh (discussed below).
The source code of each file is shown below.
- Dockerfile: builds the image.
```
# Dockerfile
FROM python:3.9-buster

# Install the runtime interface client
RUN pip install awslambdaric

# Install the runtime interface emulator to run locally
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie
COPY entry.sh "/entry.sh"
RUN chmod 755 /entry.sh

# Place the application files in the container.
ARG APP_DIR="/home/app/"
WORKDIR ${APP_DIR}
COPY app ${APP_DIR}

ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]
```
- app/app.py: the source code you want to run.

```
# app/app.py
def handler(event, context):
    return "Hello world!!"
```
- entry.sh: determines whether the container is running locally or on AWS Lambda, and launches aws-lambda-rie or awslambdaric accordingly. See the official reference.

```
#!/bin/sh
# entry.sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric $1
else
    exec /usr/local/bin/python -m awslambdaric $1
fi
```
- Caution: `/usr/local/bin/python` and `/entry.sh` must be written as absolute paths; this seems to be a Lambda requirement. With relative paths it works locally, but running on Lambda fails with the following error:

```
START RequestId: 80f9d98d-06b5-4ba8-b729-b2e6ac2abbe6 Version: $LATEST
IMAGE Launch error: Couldn't find valid bootstrap(s): [python] Entrypoint: []
```
First, build the image.

```
docker build -t container_lambda .
```

Next, run the container, and the emulator daemon starts up.

```
docker run -it --rm -p 9000:8080 container_lambda
> INFO[0000] exec '/usr/local/bin/python' (cwd=/home/app, handler=app.handler)
```
To send an event, make a POST request to the following URL.

```
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
> "Hello world!!"%
```

"Hello world!!" was returned — the function runs successfully locally.
If you want to change the file or function to be executed, change the handler specified in CMD.
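Because CMD only supplies the default handler argument to entry.sh, the handler can also be overridden at run time, without rebuilding the image. A minimal sketch, assuming a hypothetical app/other.py with a greet function has been copied into the image:

```shell
# Override the default CMD ("app.handler") with another module.function.
# "other.greet" is a hypothetical handler used only for illustration.
docker run -it --rm -p 9000:8080 container_lambda other.greet
```

This works because Docker passes the trailing argument to the ENTRYPOINT script as $1, which entry.sh hands to awslambdaric.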
To actually run it with Lambda, register the image in AWS Elastic Container Registry (ECR). First, open the ECR console.
Press "Create repository". This time, create a repository named container_lambda. After entering the name, press "Create repository" at the bottom.
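If you prefer the CLI over the console, the same repository can be created with one command (a sketch; the region is an assumption based on the registry URL used below):

```shell
# Create the ECR repository from the CLI instead of the console.
aws ecr create-repository \
    --repository-name container_lambda \
    --region ap-northeast-1
```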
Then push the local image to this repository.

```
# Specify the name of the image.
IMAGENAME=container_lambda

# Specify the URL of the ECR registry.
REGISTRYURL=xxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com

# Log in to AWS ECR.
aws ecr get-login-password | docker login --username AWS --password-stdin $REGISTRYURL

# Build the image and push it to AWS ECR.
docker build -t ${IMAGENAME} .
docker tag ${IMAGENAME} ${REGISTRYURL}/${IMAGENAME}
docker push ${REGISTRYURL}/${IMAGENAME}
```
You have now deployed the image.
Finally, let's run it on Lambda! First, from the AWS console, go to the Lambda screen and press "Create function".
Select "Container image", enter a function name (here, container_lambda), choose the image you pushed, and press "Create function".
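The console steps can also be scripted with the AWS CLI. A sketch, not the exact flow from this article: the registry URL, account ID, and role name are placeholders you must replace, and the role needs a basic Lambda execution policy attached.

```shell
# Create the Lambda function from the pushed container image.
# All xxxxxxxxx values and the role name are placeholders.
aws lambda create-function \
    --function-name container_lambda \
    --package-type Image \
    --code ImageUri=xxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/container_lambda:latest \
    --role arn:aws:iam::xxxxxxxxx:role/lambda-execution-role

# Invoke it once to check the result.
aws lambda invoke --function-name container_lambda output.json
cat output.json
```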
After a while, the banner at the top turns green and the Lambda setup is complete. If you define a test in the upper right and run it, "Hello world!!" comes back!
So, this was a summary of running Lambda with a custom runtime in a container image.
The official documentation looks complicated because it deliberately uses Alpine-based images and Docker multi-stage builds, but in practice it turns out to be easier than that.
Lambda is attracting a lot of attention with the announcement that it now supports up to 6 vCPU cores and 10 GB of memory, which also makes it easier to build a simple machine-learning API: AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions
It's very convenient, so please give it a try!
Next, I would like to run an image recognition model called Detectron2 on AWS Lambda!