[DOCKER] Easy AWS S3 testing with MinIO

Introduction

If you want to test code that uses AWS S3, this article shows how to use MinIO to build a local mock of S3.

Start MinIO server

MinIO can be run without installing anything by simply executing the following:

docker run -d -p 9000:9000 --name minio -v $PWD/data:/data \
  -e "MINIO_ACCESS_KEY=AKIA0123456789ABCDEF" \
  -e "MINIO_SECRET_KEY=0123456789/abcdefghi/ABCDEFGHI0123456789" \
  minio/minio server /data

- Change MINIO_ACCESS_KEY and MINIO_SECRET_KEY as appropriate; they should be 20 and 40 characters respectively, matching the AWS key format.
- You can manage the server from your browser at http://127.0.0.1:9000.
- The contents of S3 are stored under data in the current directory.
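
As a quick check that the container is up, you can hit MinIO's liveness endpoint. A minimal sketch in Python (/minio/health/live is MinIO's standard health-check path; the port assumes the docker command above):

import urllib.request

# A 200 response means the MinIO server is accepting requests.
resp = urllib.request.urlopen("http://127.0.0.1:9000/minio/health/live")
print(resp.status)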

boto3 sample with environment variables

Preparation

import os
import boto3

bucket_name = 'sample'  # Bucket name
use_minio = True  # Whether to use MinIO
os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA0123456789ABCDEF'
os.environ['AWS_SECRET_ACCESS_KEY'] = '0123456789/abcdefghi/ABCDEFGHI0123456789'

The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to contain the values passed to docker above.

Creating a bucket

kwargs = dict(
    region_name="ap-northeast-1",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
)

if use_minio:
    kwargs["endpoint_url"] = "http://127.0.0.1:9000"  # Point boto3 at the local MinIO server

bucket = boto3.resource("s3", **kwargs).Bucket(bucket_name)

When using MinIO, you can switch over simply by setting endpoint_url to http://127.0.0.1:9000.
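
If you flip between MinIO and real S3 in several scripts, it can be handy to wrap this branching in a small helper. A sketch under the same assumptions as above (get_bucket is a hypothetical name, not part of boto3):

import os
import boto3

def get_bucket(bucket_name, use_minio=True, region_name="ap-northeast-1"):
    # Build the common connection arguments from the environment.
    kwargs = dict(
        region_name=region_name,
        aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    )
    if use_minio:
        kwargs["endpoint_url"] = "http://127.0.0.1:9000"  # Local MinIO instead of AWS
    return boto3.resource("s3", **kwargs).Bucket(bucket_name)

bucket = get_bucket("sample")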

Execution sample

Suppose a file named test_file exists in the current directory.

bucket.create()  # Create the bucket
bucket.upload_file("test_file", "upload_file")  # Upload as upload_file
print(list(bucket.objects.all()))  # List the files
bucket.download_file("upload_file", "download_file")  # Download as download_file
bucket.Object("upload_file").delete()  # Delete the file
bucket.delete()  # Delete the bucket

It works just like regular S3. You can check the results in your browser at http://127.0.0.1:9000 along the way.
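
You are not limited to files on disk; in-memory objects work the same way. For example, before the bucket.delete() call above you could round-trip raw bytes (standard boto3 calls, shown only as a variation; the key name inline_file is arbitrary):

obj = bucket.Object("inline_file")
obj.put(Body=b"hello MinIO")      # Write bytes directly, no local file needed
print(obj.get()["Body"].read())   # b'hello MinIO'
obj.delete()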

boto3 sample with a profile

We recommend using profiles rather than environment variables.

Preparation

- Install the AWS CLI.
- Create a profile with aws configure --profile test. Change test as appropriate.
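
For reference, aws configure stores the profile in ~/.aws/credentials; with the keys from the docker command it would look like this (the region is written to ~/.aws/config separately):

[test]
aws_access_key_id = AKIA0123456789ABCDEF
aws_secret_access_key = 0123456789/abcdefghi/ABCDEFGHI0123456789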

Creating a bucket

Create a bucket using the profile you created. When using MinIO, specify both profile_name and endpoint_url.

profile_name = "test"

cr = boto3.Session(profile_name=profile_name).get_credentials()  # Credentials saved by aws configure
kwargs = dict(aws_access_key_id=cr.access_key, aws_secret_access_key=cr.secret_key)

if use_minio:
    kwargs["endpoint_url"] = "http://localhost:9000"

bucket = boto3.resource("s3", **kwargs).Bucket(bucket_name)

The created bucket can be used in the same way as the previous sample.
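
Since the goal is testing, a natural next step is to wrap the MinIO bucket in a test fixture. A minimal pytest sketch, assuming the MinIO server from the docker command above is running; the bucket name test-bucket is arbitrary, and pytest itself is outside the scope of this article:

import boto3
import pytest

@pytest.fixture
def bucket():
    # Connect to local MinIO with the profile created above.
    cr = boto3.Session(profile_name="test").get_credentials()
    b = boto3.resource(
        "s3",
        region_name="ap-northeast-1",
        aws_access_key_id=cr.access_key,
        aws_secret_access_key=cr.secret_key,
        endpoint_url="http://localhost:9000",
    ).Bucket("test-bucket")
    b.create()
    yield b
    b.objects.all().delete()  # Remove leftover objects before deleting the bucket
    b.delete()

def test_roundtrip(bucket):
    bucket.put_object(Key="k", Body=b"v")
    assert bucket.Object("k").get()["Body"].read() == b"v"

Each test gets a fresh bucket that is torn down afterwards, so tests never touch real S3.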

That's all.
