Serverless Container deployments with Terraform and Fargate

Fearless
Sep 29, 2021 · 4 min read

A guest post from Fearless Site Reliability Engineer Charles Bushong

For my team, it can be really hard to keep up with the breakneck pace of technical innovation in software development. Every year there are new innovations, practices, and processes that seemingly revolutionize the way we do business. And once we figure out the best way to manage our applications, it can be even harder to implement that process everywhere it’s valuable before something new comes along to replace it. This problem is faced not only by us as developers, but by the customers we’re helping along this journey of growth.

My team, working with the Small Business Administration (SBA), supports the infrastructure behind a number of web services provided to the public. We generally focus on one team, improve their application deployment process, and then after a few months move on to the next team. This is an awesome opportunity for us to improve each team’s practices while honing the processes we find to be most effective and useful.

Every time we bring a solution from one team to another, we iterate and make incremental improvements to the process, which is great. The problem is, it’s hard to make the time to go back to prior teams and share all the tweaks we’ve made. The solution we’ve found that works best for us and our customer teams: we publish our code as open source, in reusable modules, following semantic versioning so updates can roll out to other teams reliably and without major effort.

This means we don’t have to spend a whole lot of time going back to previous projects, and it lets us deliver value to our customers continuously instead of one and done. Beyond that, it helps the SBA look good in front of the tech community and can act as a way to engage with the public. Even more, it takes work that is paid for by the government and provides it to everyone without restriction; work paid for by the people is available to be used by the people.

Our most robust resource is a Terraform module that enables a team to launch a serverless container into a load-balanced service on Amazon ECS Fargate with as little as 4 lines of code. We call this “Easy Fargate Service”. It’s published to the Terraform Registry and can be used, reused, or copied by anyone!
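
To give a sense of what that looks like, here’s a minimal sketch of a module call. The Registry source path and input names below are assumptions written for illustration; the module’s README and variables.tf are the authority on the exact names:

```hcl
module "easy_fargate_service" {
  # Source path and input names are assumptions; confirm them against the
  # module's README and variables.tf on the Terraform Registry.
  source = "USSBA/easy-fargate-service/aws"
  # version = "~> x.y"  # pin to a release listed in the CHANGELOG

  family          = "hello-world"             # name used for the AWS resources the module creates
  container_name  = "hello"
  container_image = "nginxdemos/hello:latest" # any Docker Hub or ECR image tag
}
```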

What this module does is cut through a lot of the bulk that surrounds container deployment on AWS. Many developers start their application creation process by writing their code and then building it into a Docker container, which makes it easy to test the app locally. However, once you deploy it to AWS, you need to understand everything about Load Balancers, ECS Services, ECS Task Definitions, Security Groups, IAM Permissions, Task Execution Roles… The list goes on. Once you figure out the resources, you’ll need to sift through dozens of required and recommended parameters for each of them, avoiding pain and pitfalls along the way.

We have laid out a number of examples in the examples directory to walk through how to use easy-fargate-service, the most basic of which is the simple example. To get started, configure your AWS credentials on your CLI, clone the repository, cd into the examples/simple directory, and then run “terraform init” and “terraform apply”. This will create two instances of easy-fargate-service: one called “simple” and the other “simplest”.
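
Concretely, that walkthrough looks something like this on the command line (the clone URL is an assumption; use whichever repository link the Terraform Registry page points to):

```sh
# Assumes your AWS credentials are already configured (e.g. via `aws configure`).
git clone https://github.com/USSBA/terraform-aws-easy-fargate-service.git
cd terraform-aws-easy-fargate-service/examples/simple

terraform init   # downloads the AWS provider and module code
terraform apply  # review the plan, then confirm to create the resources
```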

The “simplest” deployment is the absolute bare minimum needed to provision resources: a name to call the AWS resources it will create, a Docker Hub (or ECR) image tag, and a container name. The module will then create 15 different Terraform resources mapping to 12 different AWS resources:

  • Autoscaling Target (to set up future scaling needs)
  • CloudWatch Log Group (for storing logs)
  • ECS Service (to orchestrate container execution)
  • ECS Task Definition (to tell the ECS Service which containers to start)
  • IAM Role for Execution (to allow ECS to start containers)
  • IAM Role for Tasks (to allow the containers to talk to AWS)
  • IAM Role Policy (to allow ECS to pull ECR containers and write logs)
  • Application Load Balancer (ALB, to connect the internet to the containers)
  • ALB Target Group (to manage container health checks and route traffic to healthy containers)
  • ALB Listener (to map ALB ports to Target Group ports)
  • 2 Security Groups (one for the ALB, one for the ECS Service)

To do all this, the module assumes a few things:

  • There is a “default” VPC (comes with every AWS account, unless you delete it)
  • There is a “default” ECS Cluster (autocreated by AWS if you do anything with ECS, but you might need to create it manually on a brand-new account; instructions are at the top of main.tf, and a Terraform sketch follows this list)
  • You want the bare minimum in all resources (CPU/RAM/1 container)
  • You don’t need HTTPS
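
If you’re working in a brand-new account without that default cluster, the instructions at the top of main.tf cover creating it. Expressed in Terraform, it amounts to something like the sketch below (the repository may document a CLI-based approach instead):

```hcl
# Creates the "default" ECS cluster that the examples assume already exists.
resource "aws_ecs_cluster" "default" {
  name = "default"
}
```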

When your terraform apply of the simple example is complete, it will output a DNS name for you to connect to, where you’ll see the running container’s page.

The module has a whole host of other features, such as managing environment variables and container secrets, handling Route53 DNS creation, handling certificates, and more. Configuration options can be found in the README and variables.tf files. We’re always improving it, so keep an eye on the CHANGELOG for new features and upgrade notes.
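
As a rough illustration of how those features slot into the same module call, a fuller configuration might look like the sketch below. Every optional input name here is hypothetical and meant only to show the shape of the configuration; variables.tf and the README hold the real names:

```hcl
module "easy_fargate_service" {
  source = "USSBA/easy-fargate-service/aws"

  family          = "my-api"
  container_name  = "api"
  container_image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:release"

  # Hypothetical input names illustrating the options described above;
  # check variables.tf for what the module actually accepts.
  environment_variables = {
    LOG_LEVEL = "info"
  }
  secrets = {
    DB_PASSWORD = "arn:aws:ssm:us-east-1:123456789012:parameter/my-api/db-password"
  }
  hosted_zone_name = "example.com"                                            # Route53 DNS record creation
  certificate_arn  = "arn:aws:acm:us-east-1:123456789012:certificate/example" # HTTPS on the ALB
}
```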


Hi, we’re Fearless, a full stack digital services firm in Baltimore that builds software with a soul. https://fearless.tech