Building a Scalable 2-Tier Architecture in AWS with Terraform

Naveen
8 min read · Jan 11, 2024


AWS Two-Tier Scalable Architecture

In this article, we are going to create a scalable two-tier architecture on AWS using services like VPC, EC2, Application Load Balancer, and RDS, all provisioned with Terraform. Below are the prerequisites before jumping into this article.

Pre-requisites:

1- An AWS account, with the AWS CLI installed & configured using your AWS access & secret keys
2- A GitHub account
3- Terraform installed & configured

Project files:

1) MySQL Application — https://github.com/naveend3v/Python-MySQL-application
2) Terraform Code — https://github.com/naveend3v/aws_2_tier_architecture

Note: Before directly executing the Terraform code, I suggest you create all the resources manually using the AWS console first, because you will learn more deeply by integrating & troubleshooting each AWS service yourself.

Step 1: Set up state management for Terraform using an S3 bucket and DynamoDB

Terraform stores the infrastructure & its configuration in a state file to track resource states. Terraform supports remote backends like S3 to store the state file safely and retrieve resource state information. DynamoDB is used for state locking when a remote backend is configured; it ensures that only one user or process can modify the Terraform state at a time. So let's create an S3 bucket using the AWS CLI command below & change the bucket name & region according to your preference.

aws s3api create-bucket --bucket <your-bucket-name> --region <your-aws-region> --output json
[Screenshots: output of the above command & the new S3 bucket in the AWS console]

Next, let's create a DynamoDB table using the AWS CLI command below & change the table name & region according to your preference.

aws dynamodb create-table --region <your-aws-region> --table-name <your-table-name> --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST --output json
[Screenshots: output of the above command & the DynamoDB table with the LockID partition key in the AWS console]

Now configure the S3 bucket name & DynamoDB table name in the providers.tf file from the GitHub repo above. The providers.tf file looks like the one below; then run the usual terraform init & terraform plan commands, and Terraform will store the state file in the S3 bucket and manage the lock ID with DynamoDB.

# Managing the Terraform backend using an S3 bucket to store the state file & a DynamoDB table for the locking mechanism
terraform {
  backend "s3" {
    # Insert your Terraform state file storage S3 bucket name below
    bucket = "naveen-terraform-remote-state-bucket"
    # Insert your Terraform state file path in the S3 bucket below
    key = "terraform.tfstate"
    # Insert the region where your Terraform state file is located below
    region = "us-east-1"
    # Insert your DynamoDB table name below for the Terraform state locking mechanism
    dynamodb_table = "terraform-lock"
  }

  required_providers {
    aws = {
      version = "~>5.0"
      source  = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}
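
With the backend block in place, run the usual workflow; terraform init wires up the S3 backend & DynamoDB lock table, and terraform plan previews the resources:

terraform init
terraform plan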

Step 2: VPC setup for an isolated network

Let's create a VPC in the us-east-1 region with 2 Availability Zones, 4 subnets (2 public & 2 private), & an Internet Gateway. Attach the internet gateway to the VPC and create 2 route tables. The internet gateway provides internet connectivity, and route tables act as guides for network traffic, determining its destination based on predefined rules. Security groups are also created; they act like virtual firewalls to control the network traffic to the resources inside the subnets.

[Screenshots: the VPC, subnets, internet gateway, route tables & security groups in the AWS console]
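
To give a feel for what this step looks like in code, here is a minimal Terraform sketch; the resource names, CIDR ranges & AZ list are illustrative assumptions, not the exact values from the repo:

# Minimal sketch; names, CIDRs & AZs are assumptions, not the repo's exact values
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# 2 public & 2 private subnets spread across two Availability Zones
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
}

# Internet gateway attached to the VPC, plus a public route table sending 0.0.0.0/0 to it
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}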

Step 3: IAM role for the EC2 instances

An IAM role for an Amazon EC2 instance provides temporary permissions for applications to access other AWS resources. Let's create an IAM role for the EC2 instances so they can access AWS services like the RDS MySQL database (to update user information like usernames and passwords) and the SSM Parameter Store (to read the stored database password and AWS secret key).

IAM Role with RDS & SSM access
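
Here is a minimal sketch of such a role in Terraform; the role & profile names are assumptions, and the repo may scope permissions more tightly than the broad AWS-managed policies used here:

# Minimal sketch; names are assumptions, and policies should be scoped down for production
resource "aws_iam_role" "ec2_role" {
  name = "two-tier-ec2-role"

  # Allow EC2 instances to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Read access to the SSM Parameter Store and access to RDS via AWS-managed policies
resource "aws_iam_role_policy_attachment" "ssm_read" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess"
}

resource "aws_iam_role_policy_attachment" "rds_access" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
}

# Instance profile so the role can be attached to EC2 instances
resource "aws_iam_instance_profile" "ec2_profile" {
  name = "two-tier-ec2-profile"
  role = aws_iam_role.ec2_role.name
}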

Step 4: Storing passwords in the Systems Manager Parameter Store

AWS Systems Manager Parameter Store (SSM Parameter Store) is a secure way to store and manage secrets and configuration data. It can store data like passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes. So I have created two parameters:

my_secret_key — To store the AWS account secret key.
mysql_psw — To store the MySQL database password.

Both parameters are stored as the SecureString type. So whenever the Python application is hosted on the EC2 instance, it will access the RDS MySQL database using those stored secrets.

SSM parameter store
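
In Terraform, the two parameters could look like the sketch below; var.mysql_password & var.aws_secret_key are hypothetical input variables, so the secret values never get hard-coded:

# Minimal sketch; the two variables are hypothetical inputs, never hard-coded values
resource "aws_ssm_parameter" "mysql_psw" {
  name  = "mysql_psw"
  type  = "SecureString"   # encrypted at rest
  value = var.mysql_password
}

resource "aws_ssm_parameter" "my_secret_key" {
  name  = "my_secret_key"
  type  = "SecureString"
  value = var.aws_secret_key
}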

Step 5: Creating the RDS MySQL database

Amazon RDS is a managed relational database service that provides a platform-as-a-service (PaaS) for running database products like MySQL. It offers automated maintenance, patching, and backup, and provides scalability and easy provisioning of hardware resources. Let's create an RDS MySQL database instance in the private subnet group with Multi-AZ enabled & attach the rds_ec2_sg security group so our EC2 instances can safely communicate with the database.

[Screenshots: the database subnet group, database configuration & the security group attached to the database]
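
As a rough Terraform sketch of this step (the identifier, instance class & storage size are assumptions, and the subnets & security group reference the earlier steps):

# Minimal sketch; sizes & names are assumptions
resource "aws_db_subnet_group" "db" {
  name       = "two-tier-db-subnets"
  subnet_ids = aws_subnet.private[*].id   # keep the database in the private subnets
}

resource "aws_db_instance" "mysql" {
  identifier             = "two-tier-mysql"
  engine                 = "mysql"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  multi_az               = true
  db_subnet_group_name   = aws_db_subnet_group.db.name
  vpc_security_group_ids = [aws_security_group.rds_ec2_sg.id]   # only the EC2 tier can reach it
  username               = "admin"
  password               = var.mysql_password   # same hypothetical input as the SSM parameter
  skip_final_snapshot    = true                 # fine for a demo, avoid in production
}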

Step 6: Configuring the EC2 launch template

An EC2 launch template specifies configuration information like the AMI ID, CPU and RAM (instance type), and storage type & size for EC2 instances. We can reuse this template to create multiple EC2 instances whenever needed.

EC2 launch template
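
A sketch of the launch template; the AMI ID is a placeholder, and userdata.sh is a hypothetical bootstrap script that installs & starts the Python application:

# Minimal sketch; the AMI ID is a placeholder and userdata.sh is hypothetical
resource "aws_launch_template" "app" {
  name_prefix   = "two-tier-app-"
  image_id      = "ami-0123456789abcdef0"   # replace with a real AMI ID
  instance_type = "t2.micro"

  # Attach the instance profile from Step 3 so the app can reach RDS & SSM
  iam_instance_profile {
    name = aws_iam_instance_profile.ec2_profile.name
  }

  # Launch templates expect base64-encoded user data
  user_data = base64encode(file("${path.module}/userdata.sh"))
}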

Step 7: Configuring EC2 Auto Scaling

An Amazon EC2 Auto Scaling group automatically adds or removes EC2 instances based on defined scaling policies. The Auto Scaling group uses the EC2 launch template to launch instances and maintain application availability for our two-tier architecture. For the Auto Scaling group, we need to provide the desired, minimum & maximum capacity. I have set the minimum capacity to 2, so it will spin up 2 EC2 instances, one in each availability zone.

Note: The desired capacity should not be less than the minimum capacity
(i.e. Desired capacity ≥ Minimum capacity)

[Screenshots: the Auto Scaling group & the EC2 instances it created]
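
In Terraform, the Auto Scaling group might look like this sketch; max_size = 4 is an assumption, and the rest follows the capacities described above:

# Minimal sketch; max_size is an assumption
resource "aws_autoscaling_group" "app" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = aws_subnet.public[*].id   # spread instances across both AZs

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  # Register new instances with the ALB target group from the next step
  target_group_arns = [aws_lb_target_group.app.arn]
}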

Step 8: Configuring the Application Load Balancer

An AWS Application Load Balancer (ALB) distributes incoming application traffic across multiple targets. This increases the availability of the application and provides a single, highly scalable point of contact (a DNS name), so anyone can easily access our application at any time. Let's create an ALB and associate it with the public subnets where our application is hosted on the EC2 instances.

ALB with HTTP listener rules.

After creating the ALB, we have to attach a security group to control the inbound & outbound HTTP traffic. Let's attach the previously created security group.

ALB Security Group

Target groups route requests to individual registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups & configure health checks per target group. Health checks are performed on all targets registered to a target group that is specified in a listener rule for your load balancer. Let's create a target group & register the EC2 instances created by the Auto Scaling group.

Target Group with associated subnets
Registered EC2 instances

The ALB mainly serves application traffic, so we have created an HTTP listener on port 80 with 5 listener rules to route requests to the target groups.

ALB listener rules
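
Pulling the pieces of this step together in Terraform might look like the sketch below; for brevity it uses a single default forward action instead of the 5 listener rules, and alb_sg is an assumed security group name:

# Minimal sketch; one default rule instead of the article's 5 listener rules
resource "aws_lb" "alb" {
  name               = "myALB"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]   # assumed SG name
  subnets            = aws_subnet.public[*].id
}

resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}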

Step 9: Accessing the application using the ALB DNS name

After all our infrastructure is created, we can access the application using the ALB DNS name. myALB-429115627.us-east-1.elb.amazonaws.com is my ALB DNS name, and we can access our application from the browser.

Application load balancer

Let's sign up for our application.

Sign up for our application

After signing up, sign in to the application using the correct user credentials.

Sign in to our application

Once the user's entered credentials are correct, we can access the application dashboard successfully 🥳.

Application Dashboard

I'm going to destroy the infrastructure because of the billing costs associated with AWS, so in the future you won't be able to access it using the ALB DNS name above.
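
Teardown is a single command, run from the repo root where the backend is configured:

terraform destroy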

Learnings from this project:

1- The difference between public & private subnets, and how route tables & security groups control network traffic.
2- Containerizing the Python application using Docker.
3- How launch templates & Auto Scaling groups help create EC2 instances for high availability.
4- Building the infrastructure with Terraform modules.

Thanks for reading! If you have any doubts about this project, you can reach out to me on any of the channels below:

Twitter: https://twitter.com/naveend3v
LinkedIn: https://www.linkedin.com/in/naveend3v/
Github: https://github.com/naveend3v
