Single Node Docker with EC2, ECR, Terraform, ASP.Net Core API

In this article, we will build a sample .NET Core API, run it in single-node Docker on EC2, and use AWS ECR to store our application image. We use Terraform to provision the required components in AWS to run our Docker application. The commands used here have been tested in Linux and Mac environments.

Introduction

The main focus of this article is to learn how to set up a basic Terraform script to provision EC2 and automatically run our application in a Docker environment. You should be able to convert all these scripts into a CI/CD flow.

Requirements

To follow this guide, you need the following tools and some basic knowledge:-

High-Level flow

Simple Docker workflow

Prepare Application Repository

As part of the CI/CD separation, the Application Repository should be maintained independently, unless your company adopts a mono-repo strategy like Google. For this sharing, we follow the general practice of splitting the Application Repository and the Configuration Repository: the Application Repository outputs an Artifact, and the Configuration Repository uses the Artifact to deploy.

dotnet new webapi -o SampleApi
cd SampleApi
dotnet build
dotnet run

Add Docker support

To run your application in a Docker environment, we need to declaratively define the environment in a Dockerfile. Following on from the previous step, exit the running application and create a new file named Dockerfile under the root of your project.

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
EXPOSE 5000
ENV ASPNETCORE_URLS=http://*:5000

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet restore "SampleApi/SampleApi.csproj"
WORKDIR "/src/."
RUN dotnet build "SampleApi/SampleApi.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "SampleApi/SampleApi.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SampleApi.dll"]

Build and test docker

Use the following commands to build and run your Docker image; they must be run in the folder where your Dockerfile is located.

docker build -t sampleapi .
docker run -it --rm -p 8080:5000 sampleapi

Push Image to ECR

ECR does not provide a username/password login to push images; AWS only supports IAM credentials. Hence we will use the Amazon ECR Credential Helper to simplify Docker authentication with our IAM credentials.

Install the credential helper (Homebrew on Mac, yum on Amazon Linux):

brew install docker-credential-helper-ecr
yum install amazon-ecr-credential-helper

Configure Docker to use the helper by adding the following to ~/.docker/config.json:

{
  "credsStore": "ecr-login"
}

Create the ECR repository:

aws ecr create-repository --repository-name sampleapi

Tag and push the image:

docker tag sampleapi:latest {your-aws-account}.dkr.ecr.{your-aws-region}.amazonaws.com/sampleapi:latest
docker push {your-aws-account}.dkr.ecr.{your-aws-region}.amazonaws.com/sampleapi:latest

Verify the image is in ECR:

aws ecr describe-images --repository-name sampleapi

Configure Terraform script

In this section we will prepare our Terraform scripts to build one EC2 instance that automatically pulls and runs our API.

Create Key Pair

Create a file key_pair.tf under the infra folder we created, and fill in the following contents. This will prepare a key pair with a 4096-bit RSA private key:-
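A minimal sketch of what key_pair.tf could contain, using the tls provider to generate the key locally (resource and key names such as sampleapi are assumptions):

```hcl
# Generate a 4096-bit RSA private key locally.
resource "tls_private_key" "sampleapi" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the matching public key with AWS as an EC2 key pair.
resource "aws_key_pair" "sampleapi" {
  key_name   = "sampleapi-key"
  public_key = tls_private_key.sampleapi.public_key_openssh
}
```

The private key stays in the Terraform state, which is why we can export it later as an output.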

Create Instance Profile

We need EC2 to have the right permission to pull our image from ECR; an instance profile is going to help us configure that. We only need the AmazonEC2ContainerRegistryReadOnly policy. Create a file instance_profile.tf in the infra folder with the following content:-
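A sketch of instance_profile.tf: an IAM role that EC2 can assume, the read-only ECR policy attached to it, and the instance profile that wraps the role (the role and profile names are assumptions):

```hcl
# IAM role that EC2 instances are allowed to assume.
resource "aws_iam_role" "sampleapi" {
  name = "sampleapi-ec2-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Read-only access to ECR is all we need to pull images.
resource "aws_iam_role_policy_attachment" "ecr_read" {
  role       = aws_iam_role.sampleapi.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

# The instance profile is what gets attached to the EC2 instance.
resource "aws_iam_instance_profile" "sampleapi" {
  name = "sampleapi-instance-profile"
  role = aws_iam_role.sampleapi.name
}
```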

Create a Security Group

We need EC2 to allow the SSH and web ports; the following Terraform script will allow those connections. Create a file security_group.tf in the infra folder with the following content. To simplify the learning, we will leave the VPC ID blank, so everything will refer to the Default VPC and Default Subnet. Ensure your Default VPC and Default Subnet were not deleted after you created your account.
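A sketch of security_group.tf: with no vpc_id set, the group is created in the Default VPC. The web port here is assumed to be 8080, matching the port mapping used when we tested the container locally:

```hcl
resource "aws_security_group" "sampleapi" {
  name        = "sampleapi-sg"
  description = "Allow SSH and web traffic"

  # SSH access for troubleshooting.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Web port exposed by the Docker container.
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic (needed to reach ECR and yum repos).
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```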

Create a User Data file

We want EC2 to automatically install Docker and run our image; we can use user data to perform the bootstrapping. Create a file user_data.sh under the infra folder with the following content:-
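A sketch of user_data.sh, assuming an Amazon Linux 2 AMI. It logs its own output to /var/log/user-data.log (the file we inspect later), installs Docker, authenticates to ECR using the instance profile credentials, and runs the image. The account and region placeholders are the same ones used when pushing the image:

```shell
#!/bin/bash
# Capture all bootstrap output so we can troubleshoot later.
exec > >(tee /var/log/user-data.log | logger -t user-data) 2>&1

# Install and start Docker on Amazon Linux 2.
yum update -y
amazon-linux-extras install -y docker
systemctl enable --now docker
usermod -aG docker ec2-user

# Authenticate to ECR with the instance profile, then pull and run the API.
aws ecr get-login-password --region {your-aws-region} | \
  docker login --username AWS --password-stdin {your-aws-account}.dkr.ecr.{your-aws-region}.amazonaws.com
docker run -d --restart always -p 8080:5000 \
  {your-aws-account}.dkr.ecr.{your-aws-region}.amazonaws.com/sampleapi:latest
```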

Create the main script to provision EC2

We have the Key Pair, Security Group, and User Data; now we will create the EC2 resource to link them up. We must configure a provider to define where we deploy to; in our case it uses the local AWS credentials as the target. If you would like to use an S3 remote backend, you can refer to this. You can also configure assume roles like this.
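A sketch of the main script tying everything together: the provider, an Amazon Linux 2 AMI lookup, and the EC2 instance referencing the key pair, security group, instance profile, and user data defined in the other files (the resource names match the sketches above; instance type and region placeholder are assumptions):

```hcl
provider "aws" {
  region = "{your-aws-region}"
}

# Look up the latest Amazon Linux 2 AMI.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "sampleapi" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.sampleapi.key_name
  vpc_security_group_ids = [aws_security_group.sampleapi.id]
  iam_instance_profile   = aws_iam_instance_profile.sampleapi.name
  user_data              = file("${path.module}/user_data.sh")

  tags = {
    Name = "sampleapi"
  }
}
```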

Create an output file

We need to output the Public IP of the EC2 instance, and the private key content so we can use it to SSH to the machine. Create a file output.tf under the infra folder with the following content.
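A sketch of output.tf. The output name private_pem matches the `terraform output private_pem` command used later when creating the key file:

```hcl
output "public_ip" {
  value = aws_instance.sampleapi.public_ip
}

output "private_pem" {
  value     = tls_private_key.sampleapi.private_key_pem
  sensitive = true
}
```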

Provision and test your API

At this step, you should have completed all the Terraform script setup. Run the following commands to deploy your infrastructure:-

Initialize Terraform in the infra folder

terraform init infra

Verify the Terraform script in the infra folder

terraform plan infra

Deploy the Terraform script in the infra folder

terraform apply -auto-approve infra

Verify API

When provisioning is successful, you will see 2 outputs printed as below:-

Log in to the machine and verify the result

For troubleshooting purposes, we need to be able to log in to the machine and verify our environment.

Create a Key file from Terraform output

terraform output private_pem > key.pem
chmod 400 key.pem

SSH to the machine and verify the environment

ssh -i key.pem ec2-user@3.25.96.223

Verify user data log

cat /var/log/user-data.log

Verify docker container status

docker ps

Clean up your environment

After you are comfortable with your result, you can run the following command to clean up your environment:-

terraform destroy -auto-approve infra

Take Away

Infrastructure as code is a fairly complex topic, and it varies from platform to platform. In this article, we explored using Terraform, EC2, and Docker to build a .NET Core API application. Some of the more complicated settings, like custom VPC and Subnet, were left out for learning purposes, but it is not that difficult after you get your hands dirty. Check this out for all the resources.

A software engineer who believes that to change the world, first you need to fix the code.