AWS Lambda is an AWS service that lets you run code without managing any servers.
Without Lambda, you would typically run a process by launching an EC2 server, installing the middleware and language runtime the process needs, and configuring the environment. With Lambda, you simply write the code and it runs, with no server to provision.
This gives you the following benefits:

- No need to manage or maintain the server itself.
- Costs can drop significantly depending on how often the process runs, because you are charged only for the time your code is actually executing.
- It integrates easily with other AWS services, which makes it very convenient in an AWS-centric architecture.
You need to set a trigger that executes the code you write. Lambda can be linked with many AWS services; typical triggers are a CloudWatch alarm firing, data arriving in a Kinesis data stream, or a file being placed in S3.
In other words, Lambda is a service that works once you write just the **condition to execute on** and the **process to execute**.
Now let's upload a file to S3 and write a process that confirms it has been uploaded.
Lambda functions are created per region. This is not a problem for S3, but when linking with services that are region-scoped, select the same region as the service you are integrating with.
This time I will use Python 3.7. For the role, choose "Create a new role with basic Lambda permissions." Using an existing role is also fine; in that case, the role must grant Lambda access to the service used in the trigger condition and permission to write logs to a CloudWatch log group.
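As a reference for the existing-role case, the log-writing permissions mentioned above can be sketched as an IAM policy document like the following. This is a sketch mirroring what the managed basic-execution policy grants; the resource ARN is a wildcard placeholder, and in practice you would scope it to your function's log group.

```python
import json

# Sketch of the CloudWatch Logs permissions a Lambda execution role needs
# so the function can write its logs (the Resource ARN is a placeholder).
basic_lambda_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        }
    ],
}

print(json.dumps(basic_lambda_policy, indent=2))
```

You would attach a policy like this (plus read access to the trigger's source service, e.g. S3) to the existing role before selecting it.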
"Runtime" means the language the process is written in. As of 2020-02-05, the selectable languages are as follows:

- Java 11 / 8
- .NET Core 2.1 (C# / PowerShell)
- Go 1.x
- Node.js 12.x / 10.x
- Python 3.8 / 3.7 / 3.6 / 2.7
This time I will have the function execute whenever an object is created in S3. When S3 is set as the trigger source, there are the following five settings:

- Bucket
- Event type (when a file is PUT, when a file is deleted, etc.)
- Prefix (the directory path under the bucket, file name, etc.)
- Suffix (file name, extension, etc.)
- Enable trigger (the trigger with the above settings starts working as soon as you check this, so turn it on after you have finished testing the process)
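The same five console settings can also be expressed programmatically. As a sketch, the following builds the notification configuration dictionary that boto3's `put_bucket_notification_configuration` expects (the bucket is passed separately to that API call); the ARN, prefix, and suffix values here are hypothetical placeholders.

```python
def s3_lambda_notification(lambda_arn, prefix, suffix):
    """Build an S3 -> Lambda notification configuration mirroring the
    console settings: event type, prefix, and suffix filters."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                # Event type: fire on any object-created event (PUT, POST, copy, ...)
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": prefix},
                            {"Name": "suffix", "Value": suffix},
                        ]
                    }
                },
            }
        ]
    }


# With boto3 installed and credentials configured, you would attach it like:
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="my-example-bucket",  # placeholder bucket name
#     NotificationConfiguration=s3_lambda_notification(
#         "arn:aws:lambda:ap-northeast-1:123456789012:function:my-func",
#         "uploads/", ".csv"),
# )
```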
By default, the process is described in a file called `lambda_function.py`.
```python
import json


def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```
`lambda_handler` is the function that is automatically executed when the trigger condition set in Lambda is met.
Edit the part from `# TODO implement` onward and write the process you want to execute.
This time I will display the path and file name of the file placed in S3.
```python
import json


def lambda_handler(event, context):
    # Pull the bucket name and object key out of the S3 event record
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print(key + ' has been created in the ' + bucket + ' bucket!')
```
The event information that fired the trigger is passed to `lambda_handler` in the `event` argument, as a list under the `Records` key. The code above extracts the bucket name and file path from it and prints them.
The function itself can be executed by placing a file at the path specified in the trigger. Besides that, you can run the Lambda function in a pseudo manner by using the console's "Test" feature to supply the value that goes into `event` in JSON format.
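The same pseudo execution can be done locally by calling the handler with a hand-built event. Here is a minimal sketch: the fake event below is trimmed to just the fields the handler reads (a real S3 event carries many more keys), the bucket and key names are placeholders, and a return value is added so the result can be checked.

```python
def lambda_handler(event, context):
    # Same logic as the function above, returning the message for easy checking
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    message = key + ' has been created in the ' + bucket + ' bucket!'
    print(message)
    return message


# Minimal fake "object created" event with only the fields the handler uses
fake_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-example-bucket"},
                "object": {"key": "uploads/report.csv"},
            }
        }
    ]
}

lambda_handler(fake_event, None)
```

Pasting the `fake_event` JSON into the console's "Test" dialog exercises the deployed function the same way.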
Also, you can check the function's execution log by selecting the target function's log group in CloudWatch Logs.
I was able to confirm that the Lambda function was executed correctly!
Lambda runs in the AWS environment, and for simple processing it is highly recommended in terms of both cost and management. One small worry is that there is no place to write an explanation of what a function does, so describe it thoroughly in the function name and in comments in the code, and have a comfortable Lambda life!