I will create a temporary file with AWS Lambda, upload it to S3, and then download the uploaded file from S3 back to its original location. The flow is as follows:
1. Create log files in /tmp/log/sess/ and archive the directory to /tmp/sess-info.tar
2. Upload sess-info.tar to S3
3. Delete /tmp/sess-info.tar and download sess-info.tar from S3 to /tmp/sess-info.tar
4. Delete /tmp/log/sess/ if it exists and extract sess-info.tar to /tmp/log/sess/

Lambda Code
import boto3
import os
import os.path
import tarfile
import shutil


def lambda_handler(event, context):
    bucket_name = 'bucket_name'
    tmp_dir = '/tmp/'
    log_dir = '/tmp/log/sess/'
    file_name = 'sess-info.tar'
    key = file_name
    file_path = tmp_dir + file_name

    # === Step 1
    # create log dir
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    # write log files
    with open(log_dir + 'log1.txt', 'w') as file:
        file.write('hoge\n')
    with open(log_dir + 'log2.txt', 'w') as file:
        file.write('fuga\n')
    # create log archive (gzip-compressed; tarfile strips the leading '/',
    # so members are stored as tmp/log/sess/...)
    with tarfile.open(file_path, mode='w:gz') as archive:
        archive.add(log_dir)

    # === Step 2
    # create s3 resource
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucket_name)
    # upload log archive
    bucket.upload_file(file_path, key)

    # === Step 3
    # delete log archive
    if os.path.exists(file_path):
        os.remove(file_path)
    # download log archive
    bucket.download_file(key, file_path)

    # === Step 4
    # delete log dir
    if os.path.exists(log_dir):
        shutil.rmtree(log_dir)
    # extract log archive (members are tmp/log/sess/..., so extracting
    # at '/' restores them to /tmp/log/sess/)
    with tarfile.open(file_path, mode='r:gz') as archive:
        archive.extractall('/')
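To check that the round trip actually works, a small smoke test can be run against the handler. This is only a sketch and not part of the original post: it assumes the handler above is saved as lambda_function.py, that valid AWS credentials are available, and that the bucket already exists.

# local smoke test (assumed module name: lambda_function.py)
import os
from lambda_function import lambda_handler

lambda_handler(None, None)

# after Step 4 the archive has been re-extracted, so both log files should be back
for name in ('log1.txt', 'log2.txt'):
    path = os.path.join('/tmp/log/sess/', name)
    with open(path) as f:
        print(path, '->', f.read().strip())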
Change bucket_name in the Lambda Code to an appropriate value.

Lambda Configuration
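The original configuration details are not reproduced here, but one thing the code above does require is that the Lambda execution role may put and get objects in the bucket. The following is only a rough sketch of how such an inline policy could be attached with boto3; the role name, policy name, and resource ARN are assumptions and should be adapted.

# hypothetical: attach an inline policy to the Lambda execution role
import json
import boto3

iam = boto3.client('iam')
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject', 's3:PutObject'],
        'Resource': 'arn:aws:s3:::bucket_name/*',  # match bucket_name in the Lambda Code
    }],
}
iam.put_role_policy(
    RoleName='lambda-s3-tmp-role',          # assumed role name
    PolicyName='allow-sess-info-transfer',  # assumed policy name
    PolicyDocument=json.dumps(policy),
)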
S3 Bucket
Create a bucket with the same name as bucket_name in the Lambda Code.
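If the bucket does not exist yet, it can also be created from code. This is a minimal sketch, assuming the default region us-east-1; other regions need a LocationConstraint, and 'bucket_name' must be replaced with a globally unique name.

# create the bucket (sketch; bucket names must be globally unique)
import boto3

s3 = boto3.client('s3')
s3.create_bucket(Bucket='bucket_name')
# outside us-east-1, a location constraint is required, e.g.:
# s3.create_bucket(
#     Bucket='bucket_name',
#     CreateBucketConfiguration={'LocationConstraint': 'ap-northeast-1'},
# )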
Lambda is designed to be stateless, but there are times when you want to keep and reuse stateful data. In such cases, you can take the approach shown above: this post was an example of combining Lambda's /tmp directory with S3.