Using ECS tasks on AWS Fargate to replace Lambda functions


In a recent project, one of the requirements was time-lapse video generation on request. Our initial thought, as in most of our projects, was to reach for AWS Lambda. However, we quickly ran into an issue: Lambda has a 15-minute timeout! On top of that, long-running Lambda functions quickly get expensive, and you have limited control over provisioning: CPU is allocated in proportion to the memory you configure, so the two can't be tuned independently.

An introduction to AWS ECS and Fargate

Therefore we started to explore other options for on-demand compute on AWS, and ECS on Fargate became the obvious choice. ECS (Elastic Container Service) lets you run Docker containers quickly and easily on AWS, so all you need is a working Docker image with your app bundled in, and you can deploy full applications without much of the usual deployment "faff"!

There are two launch types for an ECS cluster:

  1. EC2 - your Docker containers are deployed onto EC2 instances that you manage, and you pay for the compute power of those instances.
  2. Fargate - a serverless option where containers run on AWS-managed servers that you never have to maintain; instead, you pay per vCPU and GB of memory for the time your tasks run.

The solution:

(Figure 1: a simplified representation of the AWS resources involved.)

The numbered components in the diagram are:

  1. The lambda function responsible for triggering our longer-running process
  2. ECS task responsible for configuration and provisioning of our docker containers
  3. Docker containers that will actually run our longer-running task code
  4. Access to databases/resources within our VPC and private subnets or to the outside world through a NAT gateway


Below I will walk through how to build and deploy a small application that demonstrates the above functionality. You can use the example repository as a point of reference throughout.


To automate the deployment of the infrastructure, we are going to use AWS CDK to define all of our resources. There are four main pieces of configuration required to get this set up:

1. ECS Cluster

The cluster is essentially just a placeholder for grouping all the rest of our ECS tasks and services.
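As a minimal sketch of what that could look like in CDK (v2), inside your stack's constructor; the construct IDs, VPC settings and cluster name here are placeholders, not values from the example repository:

```typescript
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";

// Inside your cdk.Stack constructor. "TaskVpc", "TaskCluster" and
// "long-running-tasks" are placeholder names.
const vpc = new ec2.Vpc(this, "TaskVpc", { maxAzs: 2 });

const cluster = new ecs.Cluster(this, "TaskCluster", {
  clusterName: "long-running-tasks",
  vpc,
});
```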

2. ECS Task definition

The task definition outlines the infrastructure the task will run on. We can set the memory and CPU available, as well as set the compatibility mode to Fargate to make this fully serverless.
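A sketch of a Fargate-compatible task definition in CDK; the CPU and memory figures below are illustrative, not values from the example repository:

```typescript
import * as ecs from "aws-cdk-lib/aws-ecs";

// FargateTaskDefinition is Fargate-compatible by construction.
// 1 vCPU / 4 GB are illustrative values; Fargate only accepts
// certain CPU/memory combinations, so pick a valid pairing.
const taskDefinition = new ecs.FargateTaskDefinition(this, "LongTaskDef", {
  cpu: 1024,            // 1 vCPU
  memoryLimitMiB: 4096, // 4 GB
});
```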

3. Task containers

This section adds the Docker container that should run when the task is provisioned. Tasks can actually provision multiple containers if required, but for this use case we just need one. We can assign environment variables directly or pull them from a Secrets Manager secret. We also need to define which Docker image to provision the task with; in this case I am using the latest image from an ECR repository that has been set up.
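A sketch of the container definition in CDK; the repository name, secret name and environment variable names are placeholders for whatever you have set up:

```typescript
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as secretsmanager from "aws-cdk-lib/aws-secretsmanager";

// "example-task" and "example/db" are placeholder names.
const repository = ecr.Repository.fromRepositoryName(this, "TaskRepo", "example-task");
const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, "DbSecret", "example/db");

taskDefinition.addContainer("TaskContainer", {
  // Use the latest image from the ECR repository.
  image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: "long-task" }),
  // Plain environment variables...
  environment: { NODE_ENV: "production" },
  // ...and values resolved from Secrets Manager when the task starts.
  secrets: { DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, "password") },
});
```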


4. Lambda trigger

The lambda function is responsible for starting the ECS task with all the correct configuration. We need to provide it a few environment variables so it knows which ECS cluster, task definition and container to start.
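A sketch of what that trigger handler could look like using the AWS SDK for JavaScript v3. Note that CLUSTER_ARN, TASK_DEFINITION_ARN, SUBNET_IDS, CONTAINER_NAME and TASK_PAYLOAD are assumed variable names wired up by the stack, not part of any AWS API:

```typescript
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecsClient = new ECSClient({});

// Starts one Fargate task, forwarding the Lambda event to the container
// as a stringified payload in an environment variable.
export const handler = async (event: Record<string, unknown>) => {
  const response = await ecsClient.send(
    new RunTaskCommand({
      cluster: process.env.CLUSTER_ARN,
      taskDefinition: process.env.TASK_DEFINITION_ARN,
      launchType: "FARGATE",
      networkConfiguration: {
        awsvpcConfiguration: {
          // Comma-separated list of subnet IDs set by the stack.
          subnets: (process.env.SUBNET_IDS ?? "").split(","),
        },
      },
      overrides: {
        containerOverrides: [
          {
            name: process.env.CONTAINER_NAME,
            environment: [
              { name: "TASK_PAYLOAD", value: JSON.stringify(event) },
            ],
          },
        ],
      },
    })
  );
  return { taskArn: response.tasks?.[0]?.taskArn };
};
```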

Note: I would recommend looking at SST (Serverless Stack) for provisioning lambda functions and other serverless resources, but I wanted to keep this tutorial as simple as possible. All of the above code could be used within an SST stack.

Deploying the infrastructure

Now that all our infrastructure is defined as code, we can easily deploy our stack with AWS CDK in one simple command:

yarn cdk deploy ExampleLambdaToEcsStack

Once that’s complete we have all the surrounding infrastructure, but we’ve not yet deployed the Docker image that our tasks will use. (This is separated from the rest of the infrastructure because I find building Docker images within a CDK deployment a bit slow and opaque.)

To deploy our Docker image, run one of the two commands below, depending on whether you are using the TypeScript or JavaScript example:

yarn deploy:docker:js OR yarn deploy:docker:ts


Triggering a task

Now that the infrastructure exists, we can trigger the lambda function to run the task from the AWS console or by invoking it via the AWS CLI:

aws lambda invoke \
  --function-name trigger-function-name-here \
  response.json

This will trigger the lambda function, which will then run an ECS task.

We use the overrides section to load dynamic environment variables. To create an experience similar to Lambda event payloads, you could pass a stringified JSON object and parse it from within the ECS task.
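On the container side, recovering that payload is a one-liner. A minimal sketch, assuming the trigger passed the stringified event in an environment variable named TASK_PAYLOAD (an assumed name, not an AWS convention):

```typescript
// Inside the container's entrypoint: rebuild a Lambda-style event object
// from the TASK_PAYLOAD environment variable set via RunTask overrides.
type TaskEvent = Record<string, unknown>;

function parseTaskPayload(raw: string | undefined): TaskEvent {
  if (!raw) {
    return {}; // no payload supplied: behave like an empty Lambda event
  }
  return JSON.parse(raw) as TaskEvent;
}

const event = parseTaskPayload(process.env.TASK_PAYLOAD);
console.log("Received event:", event);
```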


Check out and fork the full project and use it as a base for your own applications.

There are some added resources for correctly storing logs from the ECS tasks, as well as some CloudWatch alarms so that you can adjust your memory and CPU appropriately if required!

Where next?

At this point, the world is your oyster! Any long-running task you can build in Docker can now be triggered on demand by a lambda function. In this example we are running a JavaScript Lambda-style task, but there’s no reason you couldn’t run a task written in a different programming language.

Within our infrastructure, we have adapted the lambda trigger function to be a router for triggering different task types with differing CPU and memory resources depending on the task type.

Finally, we have created a tidy-up task that runs on a schedule and shuts down any tasks that appear to be taking too long, essentially acting as a task timeout. For our tasks that’s set to an hour, but this can be adjusted to match your needs.

Written by George Evans (Senior Developer). Read more in Insights by George or check out their socials.