Until you insert an S3 object into an EC2 DB with Lambda @ java: Java [Part 2]

Introduction

This article is a record of what an (almost) beginner in AWS and Java did to insert S3 objects into a DB (SQL Server) on EC2 with Lambda in Java. This is the final part of the original three-part series. If you notice anything that could be done better, please point it out!

1: AWS edition
2: Java edition [Part 1]
3: Java edition [Part 2] <- this article
3.5: Java edition [Continued]

Last time, I confirmed that I could get the names of the files uploaded to the S3 bucket. This time, let's send those file names by email.

By the way, this time I used the JavaMail (javax.mail) package.

Edit pom.xml

Add the following dependency to the dependencies section of pom.xml.

<dependency>
  <groupId>com.sun.mail</groupId>
  <artifactId>javax.mail</artifactId>
  <version>1.5.4</version>
</dependency>

Send email

Modify the listingNames method of the ReadS3Object class as follows:

ReadS3Object.java


@SuppressWarnings("deprecation")
public void listingNames(Context context)
{
    AmazonS3 client = new AmazonS3Client(
            new BasicAWSCredentials(
                    "<accessKey>",
                    "<secretKey>"));

    ListObjectsRequest request = new ListObjectsRequest()
            .withBucketName("test-bucket-yut0201");
    ObjectListing objectList = client.listObjects(request);

    // Get the object list and output the object name to the console
    List<S3ObjectSummary> objects = objectList.getObjectSummaries();
    //System.out.println("objectList:");
    //objects.forEach(object -> System.out.println(object.getKey()));

    List<String> objectNameList = new ArrayList<String>();
    objects.forEach(object -> objectNameList.add(object.getKey()));
    sendMailTest(objectNameList);
}

The console output is no longer needed, so I commented it out and instead store the object names in a String list. The mail itself is sent by the sendMailTest() method, so let's implement it next.

ReadS3Object.java


public void sendMailTest(List<String> objectNames) {
    final String fromAddr = "<fromMailAddress>";
    final String toAddr   = "<toMailaddress>";
    final String subject  = "test Mail Send";
    final String charset  = "UTF-8";
    String content  = "S3 object List : ";
    content += String.join(",", objectNames);
    final String encoding = "base64";

    final Properties properties = new Properties();

    // Basic SMTP settings (Gmail is used in this example)
    properties.setProperty("mail.smtp.host", "smtp.gmail.com");
    properties.setProperty("mail.smtp.port", "587");

    // Timeout settings
    properties.setProperty("mail.smtp.connectiontimeout", "60000");
    properties.setProperty("mail.smtp.timeout", "60000");

    // Authentication
    properties.setProperty("mail.smtp.auth", "true");
    properties.setProperty("mail.smtp.starttls.enable", "true");

    final Session session = Session.getInstance(properties, new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication("<fromMailAddress>", "<password>");
        }
    });

    try {
        MimeMessage message = new MimeMessage(session);

        // Set From:
        message.setFrom(new InternetAddress(fromAddr, "<userName>"));
        // Set Reply-To:
        message.setReplyTo(new Address[]{new InternetAddress(fromAddr)});
        // Set To:
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(toAddr));

        message.setSubject(subject, charset);
        message.setText(content, charset);

        message.setHeader("Content-Transfer-Encoding", encoding);

        Transport.send(message);

    } catch (MessagingException e) {
        throw new RuntimeException(e);
    } catch (UnsupportedEncodingException e) {
        throw new RuntimeException(e);
    }
}
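For reference, the mail-related classes used above come from the following packages (assuming the javax.mail dependency added earlier); the corresponding import block in ReadS3Object.java would look roughly like this:

import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import javax.mail.Address;
import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;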

Set your own source and destination addresses in fromAddr and toAddr. Since a Gmail address is used this time, Gmail's SMTP server is specified. In content, the object names are appended after the fixed string as a comma-separated list, however many objects there are.

The timeout is 60,000 milliseconds for both connection and transmission, but any value can be set in the range of 0 to 2147483647.

In addition, Gmail requires SMTP authentication (to prevent open relaying), so the authentication information is added alongside the properties. The arguments to PasswordAuthentication() are your Gmail account ID (email address) and its password.
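As a side note, instead of hardcoding the address and password, the credentials could be read from the Lambda function's environment variables. The following is only a sketch under the assumption that variables named MAIL_USER and MAIL_PASSWORD have been defined in the function configuration (they are not part of this article's setup):

// Sketch (assumption): MAIL_USER and MAIL_PASSWORD are set as environment
// variables on the Lambda function instead of being hardcoded in the source.
final String mailUser     = System.getenv("MAIL_USER");
final String mailPassword = System.getenv("MAIL_PASSWORD");

// This would replace the Session.getInstance(...) call inside sendMailTest()
final Session session = Session.getInstance(properties, new Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(mailUser, mailPassword);
    }
});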

Now, in this state, repackage the jar with [Run]-> [S3toLambda].

Note) S3toLambda is the run configuration created for the previous Maven build; its goals are "package shade:shade".

After uploading the jar to Lambda, click [Test]. If you can log in to the Gmail account of the destination address and confirm that the email has arrived in the inbox, the test was successful.

Lambda role policy changes

When a Lambda function runs, I want to be able to check in the logs whether it finished normally or abnormally, so let's give the Lambda execution role lambda_s3_exec_role write permission for CloudWatch Logs.

First, connect to the IAM management console.

Select [Roles] from the navigation pane on the left side of the screen and click the previously created lambda_s3_exec_role. AWS provides some managed policies by default, but this time I will create an inline policy (handy to remember, since it can be customized freely). Click [Create inline policy] on the right side of the screen, and the following screen will be displayed.

02_create_policy.png

Select the [JSON] tab and copy in the following policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "logs:*",
            "Resource": "*"
        }
    ]
}

Note) "Version" appears to indicate the format version defined on the AWS side, so leave it as it is without changing it.

Incidentally, the policy above grants full access: it allows all actions (read/write) on all CloudWatch Logs resources (log groups, log streams, and so on). If you want to narrow the permissions, edit it and experiment.

Check the log

At this point, let's write code that outputs logs. Create a new class under the package created last time.

DetectS3Event.java


package S3test.S3toLambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

public class DetectS3Event implements RequestHandler<S3Event, Object>
{
    @Override
    public Object handleRequest(S3Event event, Context context) {
        context.getLogger().log("Input: " + event);
        LambdaLogger lambdaLogger = context.getLogger();

        S3EventNotificationRecord record = event.getRecords().get(0);
        lambdaLogger.log(record.getEventName());                // Event name
        lambdaLogger.log(record.getS3().getBucket().getName()); // Bucket name
        lambdaLogger.log(record.getS3().getObject().getKey());  // Object key (object name)

        return null;
    }
}

This handler detects an event on S3, gets the event name, the target bucket name, and the target object key, and outputs them as logs via LambdaLogger.

Once the Maven build has packaged the jar, upload it to Lambda and change the handler accordingly (based on the package and class above, it should look like S3test.S3toLambda.DetectS3Event::handleRequest). After clicking [Save], set a test event from the pull-down list to the left of [Test].

With "Create new test event" selected, select "S3 Put" or "S3 Delete" from the event template. Save it with an arbitrary test event name. 03_S3_testEvent.png In this state, execute [Test] to confirm that the execution result is successful, and then take a look at the log. 04_lambda_result.png 05_cloudwatch_logs_01.png You can see that the log has been created in the log stream, so let's expand it. 06_cloudwatch_logs_02.png Now you know that you have successfully obtained the test event information.

Event trigger settings

Since I was able to confirm that information about the test data is output to the log, let's now actually trigger the event with a file upload and output logs to CloudWatch.

This time, we will configure the event from the S3 management console (it can also be configured from the Lambda side).

Select the target bucket and click the [Properties] tab. Confirm that no events are registered in the [Events] block, then create the following event with [+ Add notification].

07_S3_create_event.png

Check the event log

In this article, the following constraints apply:
- Do not create subfolders in the bucket
- Upload only CSV files

Because of these restrictions, the prefix is left blank and the suffix is set to ".csv".

After setting the event, edit the handleRequest(S3Event event, Context context) method.

DetectS3Event.java


package S3test.S3toLambda;

//import omitted

public class DetectS3Event implements RequestHandler<S3Event, Object>
{
    @Override
    public Object handleRequest(S3Event event, Context context) {
        context.getLogger().log("Input: " + event);
        LambdaLogger lambdaLogger = context.getLogger();

        S3EventNotificationRecord record = event.getRecords().get(0);

        String bucketName = record.getS3().getBucket().getName();
        String key = record.getS3().getObject().getKey();

        try {
            @SuppressWarnings("deprecation")
            AmazonS3 client = new AmazonS3Client(
                    new BasicAWSCredentials("<accessKey>", "<secretKey>"));

            GetObjectRequest request = new GetObjectRequest(bucketName, key);
            S3Object object = client.getObject(request);

            BufferedInputStream bis = new BufferedInputStream(object.getObjectContent());
            BufferedReader br = new BufferedReader(new InputStreamReader(bis));

            String line = "";
            while ((line = br.readLine()) != null) {
                String[] data = line.split(",", 0); // Convert the row to a comma-separated array

                for (String elem : data) {
                    System.out.println(elem);
                }
            }
            br.close();
        } catch (IOException e) {
            System.out.println(e);
        }
        return null;
    }
}
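As a small optional guard that is not part of the article's code, the handler could skip objects whose key does not end in ".csv", in case the event filter is ever relaxed. A minimal sketch that could be placed right after key is retrieved:

// Optional guard (assumption, not in the original code): ignore non-CSV objects
// even if the S3 event suffix filter is changed later.
if (!key.endsWith(".csv")) {
    lambdaLogger.log("Skipping non-CSV object: " + key);
    return null;
}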

As a test, upload a jar packaged from the source above to Lambda, then create a CSV file in advance as follows

08_create_csv.png

and upload it to the bucket.

09_csv_upload.png

Then, with this upload as the trigger, the following flow takes place: S3 invokes the Lambda function, and the Lambda function writes its execution result to CloudWatch Logs under its execution role.

There seems to be a time lag of up to a few minutes before all the logs are output (I have not verified this precisely), but let's take a look at the CloudWatch logs.

10_logs.png

You can see that the uploaded CSV is output line by line, split on commas, between the 6th and 23rd lines of the log.

Finally, let's use the CSV above again and INSERT its contents into the DB (SQL Server) on EC2.

Creating databases and users

Note) It is assumed that SQL Server Express Edition is already installed. You can install it by referring to the official documentation.

First, let's connect to the created EC2 instance with the following command.

ssh -i "<keyPair>" <userName>@<IPaddress>

The key pair is the one created when the EC2 instance was created (an existing key pair is also fine). The IP address is the Elastic IP assigned to the EC2 instance. You can check the connection command itself from the [Connect] button in the EC2 management console.

After connecting to the instance, connect to SQL Server.

sqlcmd -S localhost -U SA

Login is successful when the prompt changes to something like 1>. Next, let's create a database, administrative users, and tables.

Creating a database

Execute the following command. This time, I am using SQL Operations Studio.

USE master
GO
IF NOT EXISTS (
   SELECT name
   FROM sys.databases
   WHERE name = N'LambdaTestDB'
)
CREATE DATABASE [LambdaTestDB]
GO

ALTER DATABASE [LambdaTestDB] SET QUERY_STORE=ON
GO

Confirm that the DB was created normally.

select name, dbid, mode, status from sysdatabases;
go
name            dbid   mode   status
--------------- ------ ------ -----------
master          1      0      65544
tempdb          2      0      65544
model           3      0      65536
msdb            4      0      65544
LambdaTestDB    5      0      65537

(5 rows affected)

Next, create a table in the created LambdaTestDB.

USE LambdaTestDB
GO
CREATE TABLE employee (
  emp_id INT NOT NULL PRIMARY KEY,
  emp_name VARCHAR(20) NOT NULL,
  age    INT)
GO

Note) If an error is displayed, troubleshoot as appropriate.

Confirm that the table was created successfully.

select name, object_id, type_desc from sys.objects where type = 'U'
go
name         object_id   type_desc
------------ ----------- -----------
employee     885578193   USER_TABLE

(1 rows affected)

If the above result is obtained, the table creation is successful.

Now that the table to INSERT into has been created, the source code needs to be edited again as well.
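The actual implementation is left for part 3.5, but as a rough preview, here is a minimal sketch of how the parsed CSV lines could be INSERTed with JDBC. It assumes the Microsoft JDBC driver (mssql-jdbc) has been added as a dependency, that each CSV line holds emp_id, emp_name, and age in that order, and that the connection URL, user, and password placeholders are replaced with your own values; none of these details come from this article.

import java.io.BufferedReader;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CsvToSqlServer {

    // Reads comma-separated lines (emp_id,emp_name,age) and INSERTs them
    // into the employee table created above. Placeholder values throughout.
    public static void insertCsv(BufferedReader br) throws IOException, SQLException {
        String url = "jdbc:sqlserver://<EC2-ElasticIP>:1433;databaseName=LambdaTestDB";

        try (Connection conn = DriverManager.getConnection(url, "<dbUser>", "<dbPassword>");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO employee (emp_id, emp_name, age) VALUES (?, ?, ?)")) {

            String line;
            while ((line = br.readLine()) != null) {
                String[] data = line.split(",", 0);
                ps.setInt(1, Integer.parseInt(data[0].trim()));
                ps.setString(2, data[1].trim());
                ps.setInt(3, Integer.parseInt(data[2].trim()));
                ps.executeUpdate();
            }
        }
    }
}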

Summary!?

This article was planned as the third part of a three-part series, but because I did not think much about the structure at the beginning, the volume of this third part got out of hand... (my apologies). For that reason, I will publish the continuation later as part 3.5.

Thank you.
