[DOCKER] Try running Word2vec model on AWS Lambda


This is the article for day 23 of the mediba Advent Calendar 2020.

This is Nozaki from mediba. I currently work as a back-end engineer in charge of products such as au Web Portal, operating back-end applications and infrastructure. I also belong to the Tech Lead team at the Technology Center.

What's new in AWS Lambda – Container Image Support | Amazon Web Services Blog (https://aws.amazon.com/jp/blogs/news/new-for-aws-lambda-container-image-support/)

AWS Lambda container image support was announced this month. You can now package and deploy Lambda functions as container images of up to 10 GB, which makes it possible to create much larger packages than before. A container can therefore hold machine learning models and the dictionary data used in natural language processing (NLP).

Therefore, in this article, we package a trained Word2vec model in a container and run it on AWS Lambda. The function to create returns words similar to an input word together with their similarity scores (values in the range 0 to 1), and it can be called via an API. The trained Word2vec model handles the part that returns similar words.

(Screenshot: overall architecture diagram)

The overall picture is as shown in the figure.


First, some background on why we ran this verification. au Web Portal includes a product that distributes news articles, and some of its features are provided by applying natural language processing (NLP) to those articles.

This system is currently built on EC2, and we are considering moving to containers or serverless in the future. The idea for this article came up as part of that study.

Procedure

AWS Serverless Application Model - Amazon Web Services (https://aws.amazon.com/jp/serverless/sam/)

This time, we use SAM as the framework for AWS Lambda.

  1. sam init

We will proceed with the following version.

$ sam --version
SAM CLI, version 1.13.2

Initialize the sam project.

$ sam init
Which template source would you like to use?
	1 - AWS Quick Start Templates
	2 - Custom Template Location
Choice: 1
What package type would you like to use?
	1 - Zip (artifact is a zip uploaded to S3)
	2 - Image (artifact is an image uploaded to an ECR image repository)
Package type: 2

Which base image would you like to use?
	1 - amazon/nodejs12.x-base
	2 - amazon/nodejs10.x-base
	3 - amazon/python3.8-base
	4 - amazon/python3.7-base
	5 - amazon/python3.6-base
	6 - amazon/python2.7-base
	7 - amazon/ruby2.7-base
	8 - amazon/ruby2.5-base
	9 - amazon/go1.x-base
	10 - amazon/java11-base
	11 - amazon/java8.al2-base
	12 - amazon/java8-base
	13 - amazon/dotnetcore3.1-base
	14 - amazon/dotnetcore2.1-base
Base image: 3

Project name [sam-app]: sam-wiki-entity-vectors

The following project structure is created:

├── README.md
├── __init__.py
├── events
│   └── event.json
├── hello_world
│   ├── Dockerfile
│   ├── __init__.py
│   ├── app.py
│   ├── requirements.txt
│   └── tohoku_entity_vector         // Created in step 2 (model files)
│       ├── entity_vector.model.bin
│       └── entity_vector.model.txt
├── samconfig.toml                   // Created in step 7 (sam deploy)
├── template.yaml
└── tests
    ├── __init__.py
    └── unit
        ├── __init__.py
        └── test_handler.py

2. Get the model

This time, we use the trained model published by the Inui-Okazaki Laboratory at Tohoku University.

Japanese Wikipedia Entity Vector

Obtain the following two files from the above site, create a tohoku_entity_vector directory in the SAM project, and store them there.

- Binary file (entity_vector.model.bin)
- Text file (entity_vector.model.txt)

$ du -h hello_world/tohoku_entity_vector

Together, these files are about 2.6 GB.

3. Code changes

Modify the following files in the project.


requirements.txt

Add the following line:

gensim


Gensim is a Python library for topic modeling, document indexing, and similarity search using a large corpus.


In the Dockerfile, modify the COPY line as follows so that tohoku_entity_vector/ is also copied and the model is included in the container image. (COPY copies the directory's contents, so the model files end up next to app.py.)

COPY app.py requirements.txt tohoku_entity_vector/ ./
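For context, the whole Dockerfile then looks roughly like this. This is a sketch based on the sam init template for the python3.8 base image; the base image tag and pip invocation may differ depending on your SAM CLI version:

```dockerfile
# Lambda Python 3.8 base image (from the sam init template)
FROM public.ecr.aws/lambda/python:3.8

# Copy the handler, dependency list, and the model files
COPY app.py requirements.txt tohoku_entity_vector/ ./

# Install gensim (and any other requirements) into the function directory
RUN python3.8 -m pip install -r requirements.txt -t .

# Entry point: lambda_handler in app.py
CMD ["app.lambda_handler"]
```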


Modify app.py as follows.

import json

from gensim.models import KeyedVectors


def lambda_handler(event, context):

    word = event.get('queryStringParameters').get('word')

    # Loading the 2.6 GB model here means it is reloaded on every invocation
    model = KeyedVectors.load_word2vec_format(
        './entity_vector.model.bin', binary=True)

    # Entries in this model are wrapped in square brackets, e.g. [川崎フロンターレ]
    result = model.most_similar('[' + word + ']')

    return {
        "statusCode": 200,
        "body": json.dumps(result, indent=2, ensure_ascii=False),
    }

most_similar returns the words most similar to the input word, together with their similarity scores (0 to 1), in descending order of similarity.
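What most_similar does internally can be sketched in plain Python: it ranks vocabulary entries by cosine similarity to the query word's vector. The words and 2-D vectors below are hypothetical toys for illustration; the real model uses high-dimensional vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy vocabulary (entries are bracketed, as in the
# Japanese Wikipedia Entity Vector model); vectors are made up.
vectors = {
    "[ジュビロ磐田]": [0.9, 0.1],
    "[大分トリニータ]": [0.8, 0.3],
    "[東京タワー]": [0.1, 0.9],
}
query = [1.0, 0.0]  # stands in for the input word's vector

# Rank candidates by descending cosine similarity, as most_similar does.
ranked = sorted(
    ((w, cosine(query, v)) for w, v in vectors.items()),
    key=lambda t: t[1],
    reverse=True,
)
```

With these toy vectors, the two soccer clubs rank above the unrelated entry, mirroring the (word, similarity) list the real function returns.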



Change the memory and timeout values in the Globals section of template.yaml.

    Timeout: 900
    MemorySize: 10240

Lambda quotas - AWS Lambda

Here, both values are raised to their quota upper limits.
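In template.yaml, these values sit under the Function key of the Globals section (standard SAM template nesting; they could also be set per-function under Properties):

```yaml
Globals:
  Function:
    # Maximum allowed timeout (15 minutes)
    Timeout: 900
    # Maximum allowed memory (10,240 MB)
    MemorySize: 10240
```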

4. sam build

Execute the following command while Docker is running.

$ sam build

5. Confirmation of local operation

You can test the API locally with the following command.

$ sam local start-api

Make the following request in your browser.

<img width="500" " src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/889583/865510a8-5426-a876-8960-5532b3a76133.png ">

For the input Kawasaki Frontale given in the query parameter, words such as Jubilo Iwata and Oita Trinita were returned as similar words together with their similarities, as expected. It looks okay.
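The same request can also be assembled from Python with only the standard library. The /hello path below is an assumption carried over from the sam init template, and the response body shown is a hypothetical example of the [word, similarity] pairs that app.py's json.dumps produces:

```python
import json
import urllib.parse

# Build the request URL for the local API started by `sam local start-api`
# (default endpoint http://127.0.0.1:3000; the /hello path is assumed).
word = "川崎フロンターレ"
url = "http://127.0.0.1:3000/hello?" + urllib.parse.urlencode({"word": word})

# Decode a hypothetical response body: a JSON list of
# [word, similarity] pairs, as produced by json.dumps in app.py.
body = '[["[ジュビロ磐田]", 0.83], ["[大分トリニータ]", 0.81]]'
pairs = json.loads(body)
for entry, score in pairs:
    print(entry, score)
```

Fetching the URL with urllib.request (or curl) while `sam local start-api` is running returns a body of this shape.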

6. Create ECR repository

Create an ECR repository to push the Docker image to, using the following command.

$ aws ecr create-repository --repository-name repository-name

7. sam deploy

# First time
$ sam deploy --guided

# From the second time onward
$ sam deploy

For the first deployment, add the --guided option and choose the deployment settings interactively; this creates samconfig.toml. From the second deployment onward, deployment is based on this file.

The deploy command pushes the image to ECR and deploys the function to AWS Lambda.
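For reference, the generated samconfig.toml looks roughly like the sketch below. The exact keys depend on your answers to the guided prompts and your SAM CLI version, and the stack name, region, account ID, and repository URI are placeholders:

```toml
version = 0.1
[default.deploy.parameters]
stack_name = "sam-wiki-entity-vectors"
region = "ap-northeast-1"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
# ECR repository created in step 6 (account ID is a placeholder)
image_repository = "123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/repository-name"
```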

8. Confirmation of operation

First, check the image in ECR. It has been pushed, and you can see it is about 1.8 GB.

<img width="600" " src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/889583/ff04e304-9307-13ce-0fa4-4dda7aa0ebcf.png ">

Next, check the API. This time, FC Barcelona is given in the query parameter, and words such as Real Madrid and AC Milan were returned as similar words together with their similarities, as expected. The API latency was around 10 seconds; note that app.py loads the 2.6 GB model inside the handler, so it is reloaded on every invocation.

<img width="700" " src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/889583/0eee8fa8-ba85-dd7d-9509-32a871b76122.png ">

Machine learning system design patterns

As I learned while writing this article, Mercari has published design patterns for machine learning systems (Publishing machine learning system design patterns | Mercari Engineering). Use cases, advantages, and disadvantages are organized for each pattern.

The configuration in this article corresponds to the Web single pattern (Web single pattern | ml-system-design-pattern).


- I tried running a Word2vec model on AWS Lambda.
- Since a Word2vec model can now be included in the container, application development is simplified.
- Until now, handling large files with AWS Lambda required linking additional services such as EFS.
- How to optimize the function's memory allocation on AWS Lambda still needs consideration.

Reference article

- Understanding Word2Vec - Qiita
