Trigger AWS ECS Task For File Upload Event In S3 Bucket

Introduction 

 
In this article, we will discuss and set up a rule for an S3 bucket file upload event that triggers an ECS task using Fargate.
 
First, we will create a CloudWatch Events rule using the AWS CLI put-rule command. In the event pattern, we specify the event parameters: the event source is S3, the event name is PutObject (an S3 file upload), and the bucket name restricts the rule so that it fires only for file uploads to the specified bucket. Note that PutObject reaches CloudWatch Events via CloudTrail, which is why the rule matches on the detail-type "AWS API Call via CloudTrail".
input:
aws events put-rule \
    --name s3_file_upload \
    --event-pattern '{
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject"],
            "requestParameters": {
                "bucketName": ["demo-bucket-akshay-9"]
            }
        }
    }'

output:
{
    "RuleArn": "arn:aws:events:us-east-1:***:rule/s3_file_upload"
}
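One prerequisite: PutObject is an object-level (data) API call, so it is only delivered to CloudWatch Events if a CloudTrail trail logs S3 data events for the bucket. A minimal sketch of enabling this on an existing trail (the trail name my-trail is an assumption):

    # Log write-type S3 data events for the demo bucket on an existing trail (name assumed)
    aws cloudtrail put-event-selectors \
        --trail-name my-trail \
        --event-selectors '[{
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": true,
            "DataResources": [{
                "Type": "AWS::S3::Object",
                "Values": ["arn:aws:s3:::demo-bucket-akshay-9/"]
            }]
        }]'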
After creating the rule, go to CloudWatch and click on Rules. Select the rule we created above, i.e. s3_file_upload. You will see the event pattern that we specified while creating the rule. In the Targets section, you will see "No targets" because we have not yet added a target to the rule.
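The same details can also be inspected from the CLI:

    # Show the rule's event pattern and state
    aws events describe-rule --name s3_file_upload
    # List targets attached to the rule (empty at this point)
    aws events list-targets-by-rule --rule s3_file_upload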
To create a target, select Actions in the top-right corner, select Edit, and click on Add Target.
  • Select ECS task as the target type. You can specify the task definition that will be used to initiate the task; I will use a task definition that contains a container with a .NET Core console application. Use Fargate as the launch type.
  • Specify the task group; all tasks will be grouped under it, which makes them easy to manage. I will select the latest revision of the task definition. The number of tasks to spin up per trigger will be 1.
  • Specify the subnet ID in which the task will be deployed. Enable public IP if your subnet is public; otherwise keep it disabled.
  • I will select the option to create a new role for invoking the ECS task; a minimal sketch of such a role follows this list.
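If you create the role yourself instead of letting the console do it, it needs a trust relationship with events.amazonaws.com plus permission to run the task and pass the task's roles to ECS. A minimal sketch, assuming the hypothetical names ecs-events-invoke-role and run-ecs-task:

    # Trust policy: allow CloudWatch Events to assume this role
    aws iam create-role --role-name ecs-events-invoke-role \
        --assume-role-policy-document '{
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": { "Service": "events.amazonaws.com" },
                "Action": "sts:AssumeRole"
            }]
        }'

    # Permissions: run the task and pass the task roles to ECS
    aws iam put-role-policy --role-name ecs-events-invoke-role \
        --policy-name run-ecs-task \
        --policy-document '{
            "Version": "2012-10-17",
            "Statement": [
                { "Effect": "Allow", "Action": "ecs:RunTask", "Resource": "*" },
                { "Effect": "Allow", "Action": "iam:PassRole", "Resource": "*",
                  "Condition": { "StringLike": { "iam:PassedToService": "ecs-tasks.amazonaws.com" } } }
            ]
        }'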
Using these settings, we can create a target that will initiate an ECS task on an S3 file upload event. But how can we pass the S3 event metadata to the ECS task?
 
A CloudWatch rule provides an InputTransformer attribute, which can be used to pass metadata to the target. For an ECS task target, the AWS console does not provide an option to set the InputTransformer attribute; it can be specified when creating the target resource via CloudFormation/Terraform or the AWS CLI. Hence, we will use the following AWS CLI command to create the target instead of the console.
 
In the InputTransformer attribute, we can specify InputPathsMap and InputTemplate. InputPathsMap creates properties and maps them to fields in the event. Here we have created the s3-bucket and s3-key properties and mapped them to the bucketName and key fields, respectively, in detail.requestParameters of the event.
 
These properties are then used in InputTemplate, which is applied when the task is created. Here, we override environment variable values using the containerOverrides parameter. The properties to be replaced by the InputPathsMap values are referenced as <property_name>. Replace the placeholder values below with your own resource ARNs.
input:
aws events put-targets --rule s3_file_upload --targets '[{
    "Id": "9",
    "Arn": "arn of ECS cluster",
    "RoleArn": "arn of role to invoke ECS task",
    "EcsParameters": {
        "TaskDefinitionArn": "arn of task definition",
        "TaskCount": 1,
        "LaunchType": "FARGATE",
        "NetworkConfiguration": {
            "awsvpcConfiguration": {
                "Subnets": ["id of subnet"],
                "AssignPublicIp": "ENABLED"
            }
        },
        "Group": "demo"
    },
    "InputTransformer": {
        "InputPathsMap": {
            "s3-bucket": "$.detail.requestParameters.bucketName",
            "s3-key": "$.detail.requestParameters.key"
        },
        "InputTemplate": "{\"containerOverrides\": [{\"name\": \"name of container in task def\", \"environment\": [{\"name\": \"s3-bucket\", \"value\": \"<s3-bucket>\"}, {\"name\": \"s3-key\", \"value\": \"<s3-key>\"}]}]}"
    }
}]'
Let's test it by uploading a file to the S3 bucket (a sample CLI upload is shown below); the metadata will be passed to the container as environment variables. Go to the ECS cluster and select the task that was spun up for the S3 event. Expand the container; under the environment variables section, you will see the properties we specified in containerOverrides, populated with the values of the S3 file metadata.
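For the upload itself, the AWS CLI works well (test.txt here is just a hypothetical local file):

    # Upload a test file; this PutObject call should trigger the rule
    aws s3 cp test.txt s3://demo-bucket-akshay-9/test.txt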
I am using a .NET Core based container, hence I access those variables as follows.
// Read the S3 metadata passed in through the container overrides
Console.WriteLine($"bucket name: {Environment.GetEnvironmentVariable("s3-bucket")}");
Console.WriteLine($"file name: {Environment.GetEnvironmentVariable("s3-key")}");
Great! We have set up an S3 file upload trigger for an ECS task and are able to send the S3 file metadata to the ECS container. I will discuss more scenarios and solutions using AWS services in upcoming articles. Until then, stay tuned! :)