If there's a smarter way to do this, I'd love to hear it.
I used mysqlclient to connect to RDS (MySQL / Aurora) from AWS Lambda. It worked locally, so I deployed it with Serverless, but when I ran a test from the AWS Management Console, it failed.
It turns out mysqlclient contains native code, so a binary built on a Mac throws an error when run on Linux.
This is the story of the various things I tried to work around that.
Incidentally, external Python packages are managed with serverless-python-requirements.
(Reference: managing external modules with the Serverless Framework plugin)
My service looks like this:
ore-service
├── handler.py
├── requirements.py
├── requirements.txt
└── serverless.yml
Suppose the contents are as follows.
requirements.txt
mysqlclient
handler.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requirements
import MySQLdb

def lambda_handler(event, context):
    con = MySQLdb.connect(host='〜', db='〜', user='〜', passwd='〜', charset='utf8')
    cur = con.cursor()
    cur.execute("SELECT * FROM 〜")
    # ... (snip)
After sls deploy, a test run from the AWS console fails, and the log contains the following:
Unable to import module 'handler': /var/task/_mysql.so: invalid ELF header
Right — I'm trying to run an _mysql.so that was built on a Mac, on Linux.
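You can confirm this kind of platform mismatch by peeking at a shared library's magic bytes. A quick diagnostic sketch (the helper name and path handling are just for illustration, not part of the deployment flow):

```python
# Tell a Linux ELF binary apart from a macOS Mach-O one by reading
# the first four bytes of the file.

def binary_format(path):
    """Return a rough guess of the binary format from its magic bytes."""
    with open(path, 'rb') as f:
        magic = f.read(4)
    if magic == b'\x7fELF':
        return 'ELF (Linux)'
    if magic in (b'\xcf\xfa\xed\xfe', b'\xce\xfa\xed\xfe'):
        return 'Mach-O (macOS)'
    return 'unknown'
```

Running `binary_format('_mysql.so')` on the deployed artifact would report ELF, which is exactly why dlopen on a Mac rejects it later in this story.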
serverless-python-requirements has an option called dockerizePip; set it to true and it cross-compiles inside a docker-lambda image. Let's try it right away.
serverless.yml looks like this:
serverless.yml
... (snip)
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true
... (snip)
Unfortunately, the deploy failed with an error.
$ sls deploy
Serverless: Installing required Python packages...
Error --------------------------------------------------
Command "python setup.py egg_info" failed with error
code 1 in /tmp/pip-build-4MzA_g/mysqlclient/
This seems to be because python-devel and mysql-devel, which are needed to build mysqlclient, are not in the Docker image.
Looking at the source of serverless-python-requirements (https://github.com/UnitedIncome/serverless-python-requirements/blob/master/index.js), I could see what it does: it just starts a container and runs pip install.
So I can start the container myself, get a bash shell, and install the necessary libraries. It looks like this:
$ docker run -it --rm -v "$PWD":/var/task "lambci/lambda:build-python2.7" bash
bash-4.2# cd /var/task
bash-4.2# yum -y install python-devel mysql-devel
bash-4.2# pip install mysqlclient -t .
bash-4.2# cp /usr/lib64/mysql/libmysqlclient.so.18 .
This mounts the container's /var/task onto the local current directory — borrowed from the installRequirements() handling in the source I looked at earlier.
Then install the required dev packages and pip-install mysqlclient. libmysqlclient.so.18 is also needed at runtime, so copy it in too.
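For reference, the same manual steps could be scripted. This is just a sketch of the idea — the image name and commands simply mirror the ones above, and the function names are my own, not anything from the plugin:

```python
import subprocess

# Image used earlier in this post for cross-compiling.
IMAGE = "lambci/lambda:build-python2.7"

def build_docker_cmd(host_dir, image=IMAGE):
    """Assemble (but do not run) the docker invocation that repeats
    the manual steps above: install dev packages, pip-install
    mysqlclient into /var/task, and copy in libmysqlclient.so.18."""
    inner = (
        "cd /var/task && "
        "yum -y install python-devel mysql-devel && "
        "pip install mysqlclient -t . && "
        "cp /usr/lib64/mysql/libmysqlclient.so.18 ."
    )
    return ["docker", "run", "--rm",
            "-v", host_dir + ":/var/task",
            image, "bash", "-c", inner]

def cross_compile(host_dir):
    # Actually runs the container; requires docker to be installed.
    subprocess.check_call(build_docker_cmd(host_dir))
```

Splitting command construction from execution also makes the command easy to inspect before letting it touch your project directory.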
Serverless zips up the files and folders alongside serverless.yml and uploads them, so you can sls deploy as-is.
(Remove the dockerizePip: true added to serverless.yml earlier.)
Lambda now loads a mysqlclient built for Linux, and RDS can be accessed without trouble.
But this creates a new problem: I can no longer run the code locally, because the mysqlclient binary in my project is now built for Linux.
$ python handler.py
Traceback (most recent call last):
File "handler.py", line 18, in <module>
import MySQLdb
File "./MySQLdb/__init__.py", line 19, in <module>
import _mysql
ImportError: dlopen(./_mysql.so, 2): no suitable image found. Did find:
./_mysql.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
So let's switch which mysqlclient binary gets loaded. The plan: put the Linux ELF files under ./third-party/ and load them first whenever the execution environment is not a Mac. This also keeps the project root from getting cluttered with external package folders.
The directory structure is as follows.
ore-service
├── third-party
│ ├── MySQLdb
│ ├── _mysql.so
│ ├── libmysqlclient.so.18
│ └── Other various files for linux
├── handler.py
├── requirements.py
├── requirements.txt
└── serverless.yml
Cross-compile the same way as before, just with the mount point set to third-party/:
$ mkdir third-party
$ docker run -it --rm -v "$PWD"/third-party:/var/task "lambci/lambda:build-python2.7" bash
(then run the same commands as above)
Then have Python detect the OS and switch where things are loaded from. Insert third-party at the front of sys.path; and since Python has no LD_LIBRARY_PATH-style setting of its own, load libmysqlclient.so.18 directly with ctypes.
handler.py
import platform

if platform.system() != "Darwin":
    import sys
    import ctypes
    sys.path.insert(0, './third-party/')
    ctypes.CDLL("./third-party/libmysqlclient.so.18")
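The sys.path.insert(0, ...) trick works because Python searches sys.path in order and imports the first match it finds. A tiny self-contained demonstration of that mechanism (the module name is made up for illustration):

```python
import os
import sys
import tempfile

# Create a throwaway directory containing a module, prepend it to
# sys.path, and confirm that import picks up that copy first.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'whichcopy.py'), 'w') as f:
    f.write("ORIGIN = 'third-party'\n")

sys.path.insert(0, tmpdir)  # same trick as in handler.py
import whichcopy

print(whichcopy.ORIGIN)  # → third-party
```

The ctypes.CDLL call is the separate half of the trick: it preloads libmysqlclient.so.18 into the process so that importing _mysql.so can resolve its symbols without any LD_LIBRARY_PATH setup.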
It's hacky, but I managed to achieve my goal.
If you then add the third-party directory to .gitignore and capture the cross-compile steps in a Dockerfile, it should be fine from a configuration-management standpoint.
How does that sound?