The AmazonS3Client class from the older SDK is now treated as deprecated, and the reference states that AmazonS3ClientBuilder should be used instead. Since there seems to be little documentation on it in Japanese, I am leaving this here as a note.
S3Access.java
```java
package s3test;

import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

public class S3Access {

    private static final String ENDPOINT_URL = "https://s3-ap-northeast-1.amazonaws.com";
    private static final String REGION = "ap-northeast-1";
    private static final String ACCESS_KEY = "[access key]";
    private static final String SECRET_KEY = "[secret key]";

    //--------------------------------------------------
    // Upload
    //--------------------------------------------------
    public void putObject(String bucketName, String objectKey, int objectSize, InputStream is) throws Exception {
        // Create the client
        AmazonS3 client = getClient(bucketName);

        ObjectMetadata metadata = new ObjectMetadata();
        // Set the size just in case (an exception is thrown if it is inconsistent)
        metadata.setContentLength(objectSize);

        // Upload
        client.putObject(bucketName, objectKey, is, metadata);
    }

    //--------------------------------------------------
    // Download
    //--------------------------------------------------
    public S3ObjectInputStream getObject(String bucketName, String objectKey) throws Exception {
        // Create the client
        AmazonS3 client = getClient(bucketName);

        // Download
        S3Object s3Object = client.getObject(bucketName, objectKey);
        return s3Object.getObjectContent();
    }

    //--------------------------------------------------
    // Bulk deletion
    //--------------------------------------------------
    public List<String> deleteObjects(String bucketName, List<String> objectKeys) throws Exception {
        // Create the client
        AmazonS3 client = getClient(bucketName);

        List<KeyVersion> keys = new ArrayList<>();
        objectKeys.forEach(key -> keys.add(new KeyVersion(key)));

        // Delete the objects
        DeleteObjectsRequest request = new DeleteObjectsRequest(bucketName).withKeys(keys);
        DeleteObjectsResult result = client.deleteObjects(request);

        // Collect the keys of the deleted objects
        List<String> deleted = new ArrayList<>();
        result.getDeletedObjects().forEach(obj -> deleted.add(obj.getKey()));
        return deleted;
    }

    //--------------------------------------------------
    // Client creation
    //--------------------------------------------------
    private AmazonS3 getClient(String bucketName) throws Exception {
        // Credentials
        AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);

        // Client settings
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setProtocol(Protocol.HTTPS);  // Protocol
        clientConfig.setConnectionTimeout(10000);  // Connection timeout (ms)

        // Endpoint settings
        EndpointConfiguration endpointConfiguration = new EndpointConfiguration(ENDPOINT_URL, REGION);

        // Create the client
        AmazonS3 client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withClientConfiguration(clientConfig)
                .withEndpointConfiguration(endpointConfiguration)
                .build();

        if (!client.doesBucketExist(bucketName)) {
            // Throw an exception if the bucket does not exist
            throw new Exception("S3 bucket [" + bucketName + "] does not exist");
        }
        return client;
    }
}
```
Uploaded and downloaded objects are passed here as an InputStream; change this to pass a File or a byte array as needed.
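As a minimal sketch of the byte-array variant (the `ByteArrayUpload` class and `toStream` helper below are illustrative, not part of the original sample): a `byte[]` can be wrapped in a `ByteArrayInputStream`, and its length is exactly the value to pass as `objectSize`, which keeps the `ObjectMetadata` content-length check consistent.

```java
import java.io.ByteArrayInputStream;

public class ByteArrayUpload {

    // Wrap a byte array so it can be passed where an InputStream is expected,
    // e.g. to a method like putObject(bucket, key, data.length, toStream(data)).
    static ByteArrayInputStream toStream(byte[] data) {
        return new ByteArrayInputStream(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes("UTF-8");
        ByteArrayInputStream is = toStream(data);

        // data.length is the exact content length to set on ObjectMetadata.
        System.out.println(data.length);     // 5
        System.out.println(is.available());  // 5 (all bytes still unread)
    }
}
```

Because the length is known up front, no buffering is needed to compute the content length, unlike with an arbitrary InputStream.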
The TransferManager class used for multipart uploads and similar operations is also treated as deprecated, so use the TransferManagerBuilder class instead.
Bulk deletion returns the key values of the deleted objects, but note that a key appears to be reported as deleted even when the corresponding object did not exist. You can delete up to 1,000 objects in a single request; exceeding that limit causes the request to fail with an error about the keys.
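Given the 1,000-key limit, callers with larger lists need to batch on their side. A minimal sketch (the `DeleteBatcher` class is hypothetical, not part of the original sample) that splits a key list into request-sized chunks, each of which could then be passed to a method like `deleteObjects` above:

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatcher {

    // S3's DeleteObjects API accepts at most 1,000 keys per request.
    static final int MAX_KEYS = 1000;

    // Split a key list into sublists of at most MAX_KEYS entries,
    // one sublist per DeleteObjectsRequest.
    static List<List<String>> partition(List<String> keys) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += MAX_KEYS) {
            int end = Math.min(i + MAX_KEYS, keys.size());
            batches.add(new ArrayList<>(keys.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("obj-" + i);
        }
        List<List<String>> batches = partition(keys);
        System.out.println(batches.size());              // 3 (1000 + 1000 + 500)
        System.out.println(batches.get(2).size());       // 500
    }
}
```

Copying each sublist with `new ArrayList<>(...)` detaches it from the source list, so the batches remain valid even if the original list is modified later.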
Because this is sample code, the access key and secret key are hard-coded. If you run it on EC2, use an IAM Role (described later); if you run it outside AWS, store the keys somewhere such as a database so that they can be rotated regularly (for example, within 90 days).
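One simple step away from hard-coding, assuming environment variables are acceptable in your deployment, is to read the keys from the environment with a fallback. The helper below is illustrative; note that the AWS SDK's default credentials provider chain already reads `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` on its own, so in practice you may not need any helper at all.

```java
public class CredentialSource {

    // Prefer an environment variable over a hard-coded constant;
    // the fallback is used only when the variable is unset or empty.
    static String fromEnvOrDefault(String envName, String fallback) {
        String value = System.getenv(envName);
        return (value != null && !value.isEmpty()) ? value : fallback;
    }

    public static void main(String[] args) {
        // These are the same variable names the SDK's default chain reads.
        String accessKey = fromEnvOrDefault("AWS_ACCESS_KEY_ID", "[access key]");
        String secretKey = fromEnvOrDefault("AWS_SECRET_ACCESS_KEY", "[secret key]");
        System.out.println(accessKey.isEmpty());  // false (either env value or fallback)
    }
}
```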
If you want to encrypt the uploaded data, refer to the following article: **Encrypt data uploaded to S3 using AWS SDK for Java / SSE-KMS**
If you want to run the application on EC2, refer to the following article: **Accessing S3 buckets using SSE-KMS encryption in EC2 IAM Role environment (AWS SDK for Java)**. If you do not encrypt your data, you do not need to grant the IAM Role access to the encryption key (CMK).