Building a Service with Docker and ECS

At Sharethrough, we built a proxy server (codenamed ASAP) that all of our mobile SDKs speak to for ad-server-related information. The purpose of ASAP was to reduce our client-side SDKs to minimal logic, and to allow us to push changes and new features to ASAP without asking publishers to update their SDKs.

Because ASAP receives thousands of ad requests per second, we needed to build scalable, high-performance infrastructure that is easy to maintain.

Our Infrastructure

To build ASAP’s infrastructure, we looked into Spring Boot, Docker and Amazon’s EC2 Container Service.

Spring Boot 

We use Spring Boot as our MVC framework; it comes with an embedded Tomcat servlet container. Spring Boot packages default Spring with powerful configuration controls and default implementations that reduce the boilerplate code needed to run a Spring web app.


Docker

Docker is a platform for developing, shipping, and running applications using container virtualization technology. Container virtualization uses the host OS directly to run multiple guest instances, each of which is called a container. Each container is an isolated space with its own root filesystem, processes, memory, and network ports, and within each container the application and the libraries it depends on are installed. The benefit of using Docker is that containers are lightweight: because they don't need a guest OS, less CPU, RAM, and storage is required to launch each one. Through a Dockerfile, you can declaratively define a Docker image and be assured that it will run the same way in any environment.
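As an illustration, a Dockerfile for a Spring Boot service like ASAP might look something like this (a minimal sketch; the base image, paths, and jar name are made up, not taken from our actual setup):

```dockerfile
# Hypothetical Dockerfile for a Spring Boot service (names and paths are illustrative)
FROM openjdk:8-jre

# Copy the fat jar produced by the build into the image
COPY build/libs/asap.jar /opt/asap/asap.jar

# Port the embedded Tomcat listens on
EXPOSE 8080

CMD ["java", "-jar", "/opt/asap/asap.jar"]
```

Because the jar and the JRE are both baked into the image, the same image can be run unchanged on a laptop or on an EC2 instance.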

Amazon’s ECS (EC2 Container Service)

ECS is Amazon's solution for deploying and running Docker containers on EC2 instances. The idea behind using ECS is that once you set up your ECS cluster and specify the Docker images you want to run, you can use ECS to deploy those images to the cluster, manage and scale the cluster, and interact with other AWS services. ECS simplifies Docker container management across EC2 instances, but it requires a good understanding of AWS concepts and a fair amount of wiring to make everything work.

Steps we took to deploy Docker containers on ECS

  • Create an ECS Cluster, which is a group of EC2 instances managed by ECS
  • Create an ELB, the load balancer that routes traffic to your EC2 instances
  • Create IAM Roles so that the EC2 instances can speak to ECS, and so that ECS can notify the ELB when it deploys Docker containers
  • Create Security Groups to define what traffic an instance will allow. Our EC2 instances are configured with security groups that allow computers within our network to ssh in, and that allow the ELB to ping them for periodic health checks. Our instances are also associated with the “asap” security group so that our Redis instance allows them to access placement information.
  • Create an Auto Scaling Group to automatically launch EC2 instances based on rules that you define. You can set the min/max number of EC2 instances and define scaling policies to scale up or down.
  • Create a Task Definition, which tells ECS which Docker image to run, how much CPU and memory to allocate, port mappings, environment variables, linked containers, etc.
  • Create a Service, which finally runs the task and deploys the Docker containers onto the ECS cluster
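A task definition is registered with ECS as a JSON document. A minimal sketch for a service like ours might look like this (the account ID, names, ports, and CPU/memory sizes are illustrative, not our real values):

```json
{
  "family": "asap",
  "containerDefinitions": [
    {
      "name": "asap",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/asap:latest",
      "cpu": 1024,
      "memory": 2048,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 80 }
      ],
      "environment": [
        { "name": "SPRING_PROFILES_ACTIVE", "value": "production" }
      ]
    }
  ]
}
```

The `portMappings` entry is what lets the ELB reach the embedded Tomcat inside the container, and `environment` is where per-environment variables are injected.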

One of the advantages of using ECS is that when you update a Service to run a new Task, Amazon automatically performs a blue-green deployment for you. If there are available resources in your Auto Scaling group, ECS starts tasks running the new Docker image alongside the old ones, so all that is needed to deploy a new version of your Docker image is to update the service. To configure how ECS does the blue-green deployment, you can define the minimum and maximum healthy percentages, which set the lower and upper limits on the number of running tasks during a deployment.
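With the AWS CLI, deploying a new image then comes down to a single service update. A sketch, assuming hypothetical cluster, service, and task-definition names:

```shell
# Point the service at a new task definition revision.
# minimumHealthyPercent=50 lets ECS stop up to half the old tasks while it
# starts new ones; maximumPercent=200 lets old and new tasks run side by side.
aws ecs update-service \
  --cluster asap-cluster \
  --service asap-service \
  --task-definition asap:2 \
  --deployment-configuration "maximumPercent=200,minimumHealthyPercent=50"
```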


We use Terraform to automate all the manual steps above to set up and tear down our AWS infrastructure; our Terraform scripts include configs for the ELB, the ECS cluster, the Auto Scaling group, the ECS task definition, the ECS service, CloudWatch metrics, and security groups.
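The ECS portion of such a Terraform config might be sketched as follows (resource names, counts, and ports are illustrative, not our actual scripts; the ELB and IAM role resources are assumed to be defined elsewhere in the config):

```hcl
# Illustrative Terraform sketch of the ECS cluster, task definition, and service
resource "aws_ecs_cluster" "asap" {
  name = "asap-cluster"
}

resource "aws_ecs_task_definition" "asap" {
  family                = "asap"
  container_definitions = file("task-definitions/asap.json")
}

resource "aws_ecs_service" "asap" {
  name            = "asap"
  cluster         = aws_ecs_cluster.asap.id
  task_definition = aws_ecs_task_definition.asap.arn
  desired_count   = 2

  # Limits used by ECS during a blue-green deployment
  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 200

  # Wire the service to the (separately defined) classic ELB
  load_balancer {
    elb_name       = aws_elb.asap.name
    container_name = "asap"
    container_port = 8080
  }
}
```

Running `terraform apply` then stands up (or updates) the whole stack, and `terraform destroy` tears it down.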

Putting it all together

Our deployment process begins with a push to master, which triggers a build and deploy to our staging environment: Jenkins builds the Docker image and pushes it to our ECR, and ECS begins a blue-green deployment. We have a script that checks that at least one task is running successfully in staging before starting the build-and-deploy process to production. Our build is complete when at least one task is running successfully in our production environment.
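The core of such a check is a poll-until-running loop. A minimal sketch in Python, under the assumption that the running-task count is fetched from the `runningCount` field of `aws ecs describe-services` output (here the fetch is injected as a callable so the logic stands alone):

```python
import time

def wait_for_running_task(get_running_count, attempts=30, delay=10.0):
    """Poll until the service reports at least one running task.

    get_running_count: callable returning the service's current running
    task count (e.g. parsed from `aws ecs describe-services` output).
    Returns True once a task is running, False if we give up.
    """
    for _ in range(attempts):
        if get_running_count() >= 1:
            return True
        time.sleep(delay)
    return False
```

The production deploy is then gated on this returning True for the staging service.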

Maintaining Environments 

Testing and deploying in different environments can quickly become confusing, as each environment needs its own configuration. To maintain all our environments - local, local-Docker, staging-Docker, and production-Docker - we organize all our configuration variables into environment-specific files.

For Docker, we have separate Dockerfiles for local, staging, and production. Our staging and production Dockerfiles are identical, but our local Dockerfile exposes JMX ports so we can visualize JVM metrics with VisualVM. We also initialize the app with much lower initial/max heap sizes in our local environment.
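The local-only additions might look like this in the Dockerfile (a sketch; the port, heap sizes, and jar path are made up):

```dockerfile
# Local-only Dockerfile additions (illustrative values)
# Expose the JMX port so VisualVM on the host can attach
EXPOSE 9010

# Small heap for local runs, plus the standard JMX flags for remote connections
CMD ["java", \
     "-Xms128m", "-Xmx512m", \
     "-Dcom.sun.management.jmxremote", \
     "-Dcom.sun.management.jmxremote.port=9010", \
     "-Dcom.sun.management.jmxremote.rmi.port=9010", \
     "-Dcom.sun.management.jmxremote.authenticate=false", \
     "-Dcom.sun.management.jmxremote.ssl=false", \
     "-Djava.rmi.server.hostname=localhost", \
     "-jar", "/opt/asap/asap.jar"]
```

Authentication and SSL are disabled here only because the container runs locally; the staging and production Dockerfiles carry none of these flags.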

For Tomcat and MVC-specific variables, we also keep separate files, in which we specify the min/max thread counts and queue sizes for tuning Tomcat and our MVC app.
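In Spring Boot, these Tomcat knobs live in environment-specific property files. A sketch (the values are made up, and the exact property names vary by Spring Boot version; these are the 1.x names):

```properties
# Illustrative application-production.properties
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=20
server.tomcat.accept-count=100
```

A `local` profile would carry much smaller values, matching the smaller heap it runs with.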