AWS & On-Premise Hybrid Cloud via AWS Virtual Private Gateway, Customer Gateway and Site-to-Site VPN Connection: Terraform Configuration, Mermaid Flowchart


Network Overview
#

AWS & On-Premise
#

AWS network:

# VPC CIDR
10.10.0.0/16

# Public Subnet CIDR
10.10.0.0/24

On-premise network:

# On-premise subnet CIDR
192.168.30.0/24

# On-premise VM IP
192.168.30.16
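The VPC CIDR and the on-premise CIDR must not overlap, otherwise traffic between the two sides cannot be routed unambiguously. A quick local sanity check, assuming python3 is available on the workstation:

```shell
# Check that the AWS VPC CIDR and the on-premise CIDR do not overlap
python3 - <<'EOF'
import ipaddress

vpc = ipaddress.ip_network("10.10.0.0/16")
onprem = ipaddress.ip_network("192.168.30.0/24")

# Overlapping ranges would break routing between the two sides
print("overlap" if vpc.overlaps(onprem) else "no overlap")
EOF

# Shell output:
no overlap
```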

Network Flowchart
#

graph LR
    %% Region 1
    subgraph Region1["AWS Region 1: us-west-2"]
        %% VPC 1
        subgraph VPC1["VPC 1: 10.10.0.0/16"]
            VPC1-IGW["Internet Gateway"]
            %% Routing Tables
            subgraph VPC1-RoutingTables["Routing Tables"]
                VPC1-PublicRT["Public Routing Table"]
            end
            %% Subnets
            subgraph VPC1-Subnets["Subnets"]
                VPC1-Subnet1["Public Subnet
                10.10.0.0/24
                us-west-2a"]
            end
            VPC1-PublicRT -->|Route:
            0.0.0.0/0| VPC1-IGW
            VPC1-Subnet1 -->|Associated| VPC1-PublicRT
        end
        %% Virtual Private Gateway
        VPNgateway["Virtual Private Gateway"] -->|Attached:
        VPC1| VPC1
        SiteToSite["Site to Site VPN Connection"] <-.-> VPNgateway
        CustomerGateway["Customer Gateway"] <-.-> SiteToSite
    end
    %% On-Premise
    subgraph Region2["On-Premise"]
        %% On-Premise Network
        subgraph OnPrem["Network 192.168.30.0/24"]
            LinuxVM["Linux VM: 192.168.30.16"]
        end
    end
    LinuxVM --> CustomerGateway
    %% Routes
    VPC1-PublicRT -->|Route:
    192.168.30.0/24| OnPrem
    SiteToSite -->|Route:
    192.168.30.0/24| OnPrem
    %% Dashed Borders
    classDef BorderDash1 stroke-dasharray:5 5;
    class VPC1-Subnets,VPC1-RoutingTables,VPC1,OnPrem BorderDash1
    %% Node Colors
    style VPC1-Subnet1 fill:#d1f7d6,stroke:#333,stroke-width:2px
    style VPC1-PublicRT fill:#d1f7d6,stroke:#333,stroke-width:2px
    style LinuxVM fill:#f7d6d6,stroke:#333,stroke-width:2px
    style CustomerGateway fill:#f7d6d6,stroke:#333,stroke-width:2px

Prerequisites
#

Install AWS CLI
#

# Update packages
sudo apt update

# Install unzip tool
sudo apt install unzip -y

# Download AWS CLI zip file
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Unzip
unzip awscliv2.zip

# Install
sudo ./aws/install
# Verify installation / check version
/usr/local/bin/aws --version

Configure AWS CLI
#

# Start AWS CLI configuration
aws configure
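`aws configure` interactively prompts for the access key ID, secret access key, default region and output format. For scripted setups, the same values can alternatively be supplied via environment variables; a sketch with placeholder values (not real credentials):

```shell
# Non-interactive alternative: credentials via environment variables
export AWS_ACCESS_KEY_ID="AKIAxxxxxxxxxxxxxxxx"      # Placeholder value
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxx"  # Placeholder value
export AWS_DEFAULT_REGION="us-west-2"
```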

Install Terraform
#

# Install the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

# Verify the GPG key fingerprint
gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

# Add the official HashiCorp repository 
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# Install Terraform
sudo apt update && sudo apt install terraform -y
# Verify installation / check version
terraform version



Terraform Network Stack
#

File and Folder Structure
#

The file and folder structure of the Terraform project looks like this:

aws-hybrid-cloud-vpn
├── outputs.tf
├── terraform.tf
├── variables.tf
├── vpc1_compute.tf
├── vpc1.tf
└── vpc1_vpn.tf

Terraform Configuration Files
#

Project Folder & Terraform Provider
#

# Create Terraform project folder
TF_PROJECT_NAME=aws-hybrid-cloud-vpn
mkdir $TF_PROJECT_NAME && cd $TF_PROJECT_NAME
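Optionally, create the empty configuration files up front; they are filled in one by one in the following sections:

```shell
# Create empty Terraform configuration files
touch outputs.tf terraform.tf variables.tf vpc1_compute.tf vpc1.tf vpc1_vpn.tf
```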

  • terraform.tf
# Terraform Provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Provider Region 1
provider "aws" {
  alias  = "aws_region_1"
  region = var.aws_region_1
}

Variables
#

  • variables.tf
## On-premise Network

# On-premise Public IP
variable "customer_gateway_public_ip" {
  description = "On-premise public IP for Customer Gateway"
  type        = string
  default     = "188.23.184.191" # Define on-premise public IP
}

# VPN Connection: Static Route CIDR
variable "vpn_connection_static_route_cidr" {
  description = "On-premise network CIDR"
  type        = string
  default     = "192.168.30.0/24" # Define on-premise network
}


## AWS Region & Availability Zones

# VPC AWS Region
variable "aws_region_1" {
  description = "AWS Region"
  type        = string
  default     = "us-west-2"
}

# VPC Availability Zone
variable "region_1_availability_zone_1" {
  description = "The availability zone for the public subnet"
  type        = string
  default     = "us-west-2a"
}


## VPC & Subnets: CIDR Blocks

# VPC 1: CIDR
variable "vpc1_cidr" {
  description = "VPC CIDR block"
  type        = string
  default     = "10.10.0.0/16"
}

#VPC 1: Subnet 1 CIDR
variable "vpc1_subnet_cidr_1" {
  description = "Subnet CIDR block"
  type        = string
  default     = "10.10.0.0/24"
}


## VPC SSH Key and EC2 Image
# SSH key pair name
variable "region_1_key_name" {
  default = "us-west-2-pc-le" # Define key pair name
}

# EC2 Image ID
variable "region_1_ami_id" {
  default = "ami-05d38da78ce859165" # Define EC2 AMI ID
}

VPC and Subnet
#

  • vpc1.tf
# VPC
resource "aws_vpc" "vpc1" {
  provider = aws.aws_region_1
  cidr_block           = var.vpc1_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name       = "VPC1"
    Env        = "Production"
  }
}

# Public Subnet "10.10.0.0/24"
resource "aws_subnet" "vpc1_subnet_public1" {
  provider = aws.aws_region_1
  vpc_id                  = aws_vpc.vpc1.id
  cidr_block              = var.vpc1_subnet_cidr_1
  availability_zone       = var.region_1_availability_zone_1
  map_public_ip_on_launch = true

  tags = {
    Name      = "VPC1 Subnet-Public-1"
    Env       = "Production"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "vpc1_igw" {
  provider = aws.aws_region_1
  vpc_id = aws_vpc.vpc1.id

  tags = {
    Name        = "VPC1 IGW"
    Env         = "Production"
  }
}


# Public Routing Table
resource "aws_route_table" "vpc1_public_routetable" {
  provider = aws.aws_region_1
  vpc_id = aws_vpc.vpc1.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.vpc1_igw.id
  }

  depends_on = [aws_internet_gateway.vpc1_igw]

  tags = {
    Name        = "VPC1 Public Route Table"
    Env         = "Production"
  }
}

# Associate Routes with Subnets
resource "aws_route_table_association" "vpc1_subnet_public1_ra" {
  provider = aws.aws_region_1
  subnet_id      = aws_subnet.vpc1_subnet_public1.id
  route_table_id = aws_route_table.vpc1_public_routetable.id
}

VPN Section
#

  • vpc1_vpn.tf
## VPN Setup

# Virtual Private Gateway / VPN Gateway
resource "aws_vpn_gateway" "region1_vpn_gateway" {
  provider  = aws.aws_region_1
  vpc_id    = aws_vpc.vpc1.id

  depends_on = [
    aws_vpc.vpc1
  ]

  tags = {
    Name    = "Region1 VPN-Gateway"
  }
}


# Customer Gateway
resource "aws_customer_gateway" "region1_customer_gateway" {
  provider   = aws.aws_region_1
  bgp_asn    = 65000  # Default BGP ASN (unused with static routing)
  type       = "ipsec.1"
  ip_address = var.customer_gateway_public_ip  # On-premise public IP

  tags = {
    Name = "Region1 Customer-Gateway"
  }
}


# Site-To-Site VPN Connection (Region1 to On-Premise)
resource "aws_vpn_connection" "aws_to_onprem" {
  provider = aws.aws_region_1
  vpn_gateway_id      = aws_vpn_gateway.region1_vpn_gateway.id
  customer_gateway_id = aws_customer_gateway.region1_customer_gateway.id
  type                = "ipsec.1"  # IPSec tunnel type
  static_routes_only  = true

  depends_on = [
    aws_vpn_gateway.region1_vpn_gateway,
    aws_customer_gateway.region1_customer_gateway
  ]

  tags = {
    Name = "AWS to On-premise VPN"
  }
}


## Add Routes

# Site-To-Site VPN Connection (Region1 to On-Premise): Add static route to on-premise network
resource "aws_vpn_connection_route" "onprem_static_route" {
  provider = aws.aws_region_1
  vpn_connection_id    = aws_vpn_connection.aws_to_onprem.id
  destination_cidr_block = var.vpn_connection_static_route_cidr # Add on-premise CIDR

  depends_on = [aws_vpn_connection.aws_to_onprem]
}

EC2 Instance & Security Group
#

  • vpc1_compute.tf
# VPC1: Security Group for SSH Access and Ping
resource "aws_security_group" "vpc1_sg" {
  provider = aws.aws_region_1
  name        = "VPC1-SG"
  description = "Security group for SSH access and ping"
  vpc_id      = aws_vpc.vpc1.id

  # Allow SSH from anywhere
  ingress {
    description = "Allow SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow ping (ICMP) from anywhere
  ingress {
    description = "Allow ping from anywhere"
    from_port   = 8   # ICMP type 8 (echo request)
    to_port     = -1  # Any ICMP code
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "VPC1-SG"
    Env  = "Production"
  }
}


# VPC1: EC2 Instance in Public Subnet
resource "aws_instance" "ec2_vpc1_public_subnet" {
  provider = aws.aws_region_1
  ami                    = var.region_1_ami_id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.vpc1_subnet_public1.id
  key_name               = var.region_1_key_name
  vpc_security_group_ids = [aws_security_group.vpc1_sg.id]

  tags = {
    Name = "public-subnet-vm"
    Env  = "Production"
  }
}

Outputs
#

  • outputs.tf
## EC2 Instance
# Public IP
output "VPC1_EC2_Public_IP" {
  description = "VM1 Public IP"
  value       = aws_instance.ec2_vpc1_public_subnet.public_ip
}

# Private IP
output "VPC1_EC2_Private_IP" {
  description = "VM1 Private IP"
  value       = aws_instance.ec2_vpc1_public_subnet.private_ip
}



Configuration Deployment
#

Initialize Terraform Project
#

This downloads and installs the AWS Terraform provider defined in the terraform.tf file ("hashicorp/aws") and sets up the configuration files in the project directory.

# Initialize the Terraform project
terraform init

Validate Configuration Files
#

# Validates the syntax and structure of Terraform configuration files
terraform validate

# Shell output:
Success! The configuration is valid.

Plan the Deployment
#

# Dry run / preview changes before applying them
terraform plan

Apply the Configuration
#

# Create network stack
terraform apply -auto-approve

# Shell output:
Outputs:

VPC1_EC2_Private_IP = "10.10.0.112"
VPC1_EC2_Public_IP = "35.93.137.145"



Add Missing Route via AWS CLI
#

Overview
#

For an unknown reason, the connection does not work when I create the following route via Terraform, even though the route is added correctly and can be verified via the AWS Management Console:

# Public Subnet Route Table: Route to on-premise CIDR
variable "vpc1_public_routetable_on_premise_cidr" {
  description = "On-premise network CIDR"
  type        = string
  default     = "192.168.30.0/24" # Define on-premise network
}

# Public Route Table: Add Static Route to On-premise
resource "aws_route" "onprem_static_route" {
  provider = aws.aws_region_1
  route_table_id         = aws_route_table.vpc1_public_routetable.id
  destination_cidr_block = var.vpc1_public_routetable_on_premise_cidr
  gateway_id             = aws_vpn_gateway.region1_vpn_gateway.id

  depends_on = [aws_vpn_gateway.region1_vpn_gateway]
}

So instead, I have added the missing route via the AWS CLI, see the next step.


Add Route to Public Subnet Route Table
#

Add a route for the on-premise network CIDR “192.168.30.0/24”, through the Virtual Private Gateway, to the route table that is associated with the public subnet:

# Add route: Syntax
aws ec2 create-route \
  --route-table-id <vpc1_public_routetable_id> \
  --destination-cidr-block 192.168.30.0/24 \
  --gateway-id <region1_vpn_gateway_id>
# Add route: Example
aws ec2 create-route \
  --route-table-id rtb-060f6d5c2cfe77011 \
  --destination-cidr-block 192.168.30.0/24 \
  --gateway-id vgw-005f916e3f62b47d5 \
  --region us-west-2

# Shell output:
{
    "Return": true
}



Verify Resources via Management Console
#

Virtual Private Gateway / VPN Gateway
#

  • Verify the AWS Virtual Private Gateway state is “Attached”:

Customer Gateway
#

  • Verify the Customer Gateway has the correct IP of the on-premise network:

Public Route Table Routes
#

  • Verify the Route Table associated with the Public Subnet has a route to the on-premise network via the Virtual Private Gateway:

Site-To-Site VPN Connection
#

  • Verify the VPN connection state is “Available”

  • Click “Download configuration”

  • Select “Vendor”: “Openswan”

  • Click “Download”

  • Verify the static route was correctly added



On-Premise VM VPN Setup
#

Enable Packet Forwarding
#

sudo vi /etc/sysctl.conf

# Add the following configuration
net.ipv4.ip_forward = 1
# Reload the configuration file / apply changes
sudo sysctl -p
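The change can be confirmed by reading the value back from /proc; it should print 1 once applied:

```shell
# Verify packet forwarding is enabled (1 = enabled, 0 = disabled)
cat /proc/sys/net/ipv4/ip_forward
```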

VPN Setup
#

StrongSwan Installation
#

# Install StrongSwan
sudo apt install strongswan strongswan-pki libcharon-extra-plugins -y

Adapt IPsec Configuration
#

# Open the ipsec.conf file
sudo vi /etc/ipsec.conf

Paste the configuration for the two IPsec tunnels from the downloaded VPN configuration:

config setup

conn Tunnel1
	auto=start
	left=%defaultroute
	leftid=188.23.184.191
	right=35.83.202.62
	type=tunnel
	leftauth=psk
	rightauth=psk
	keyexchange=ikev1
	ike=aes128-sha1-modp1024
	ikelifetime=8h
	esp=aes128-sha1-modp1024
	lifetime=1h
	keyingtries=%forever
	leftsubnet=192.168.30.0/24 # Adapt values
	rightsubnet=10.10.0.0/16 # Adapt values
	dpddelay=10s
	dpdtimeout=30s
	dpdaction=restart
	mark=100

conn Tunnel2
	auto=start
	left=%defaultroute
	leftid=188.23.184.191
	right=54.214.43.117
	type=tunnel
	leftauth=psk
	rightauth=psk
	keyexchange=ikev1
	ike=aes128-sha1-modp1024
	ikelifetime=8h
	esp=aes128-sha1-modp1024
	lifetime=1h
	keyingtries=%forever
	leftsubnet=192.168.30.0/24 # Adapt values
	rightsubnet=10.10.0.0/16 # Adapt values
	dpddelay=10s
	dpdtimeout=30s
	dpdaction=restart
	mark=200

Add IPsec Secrets
#

# Open the ipsec.secrets file
sudo vi /etc/ipsec.secrets

Paste the secret keys for the two IPsec tunnels from the downloaded VPN configuration:

188.23.184.191 35.83.202.62 : PSK "X5m96QCUeu7ViSYXHyRqcB.z11Tiy1iE"
188.23.184.191 54.214.43.117 : PSK "MYrB3kb7k8t9rwPSvdtZVRyd.Dlwrj9M"
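Since ipsec.secrets contains the pre-shared keys in plain text, it is worth making sure the file is readable by root only (on most installations this is already the default):

```shell
# Restrict ipsec.secrets to root only
sudo chown root:root /etc/ipsec.secrets
sudo chmod 600 /etc/ipsec.secrets
```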

Restart StrongSwan Service
#

# Restart & enable StrongSwan service
sudo systemctl restart strongswan-starter.service
sudo systemctl enable strongswan-starter.service
# Verify StrongSwan service
sudo systemctl status strongswan-starter.service

# Check Journal
sudo journalctl -u strongswan-starter.service

Verify IPsec Status
#

# Check the IPsec status
sudo ipsec statusall

# Shell output:
Status of IKE charon daemon (strongSwan 5.9.13, Linux 6.8.0-51-generic, x86_64):
  uptime: 120 seconds, since Jan 13 16:20:03 2025
  malloc: sbrk 2969600, mmap 0, used 1228352, free 1741248
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 9
  loaded plugins: charon aesni aes rc2 sha2 sha1 md5 mgf1 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs12 pgp dnskey sshkey pem openssl pkcs8 fips-prf gmp agent xcbc hmac kdf gcm drbg attr kernel-netlink resolve socket-default connmark forecast farp stroke updown eap-identity eap-aka eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap eap-tnc xauth-generic xauth-eap xauth-pam tnc-tnccs dhcp lookip error-notify certexpire led addrblock unity counters
Listening IP addresses:
  192.168.30.16
Connections:
     Tunnel1:  %any...35.83.202.62  IKEv1, dpddelay=10s
     Tunnel1:   local:  [188.23.184.191] uses pre-shared key authentication
     Tunnel1:   remote: [35.83.202.62] uses pre-shared key authentication
     Tunnel1:   child:  192.168.30.0/24 === 10.10.0.0/16 TUNNEL, dpdaction=start
     Tunnel2:  %any...54.214.43.117  IKEv1, dpddelay=10s
     Tunnel2:   local:  [188.23.184.191] uses pre-shared key authentication
     Tunnel2:   remote: [54.214.43.117] uses pre-shared key authentication
     Tunnel2:   child:  192.168.30.0/24 === 10.10.0.0/16 TUNNEL, dpdaction=start
Security Associations (2 up, 0 connecting):
     Tunnel2[2]: ESTABLISHED 120 seconds ago, 192.168.30.16[188.23.184.191]...54.214.43.117[54.214.43.117]
     Tunnel2[2]: IKEv1 SPIs: 83f1546908fb0346_i* cb9dfbf30f1a00a8_r, pre-shared key reauthentication in 7 hours
     Tunnel2[2]: IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
     Tunnel2{2}:  INSTALLED, TUNNEL, reqid 2, ESP in UDP SPIs: c61cd97a_i c66e315e_o
     Tunnel2{2}:  AES_CBC_128/HMAC_SHA1_96/MODP_1024, 0 bytes_i, 0 bytes_o, rekeying in 47 minutes
     Tunnel2{2}:   192.168.30.0/24 === 10.10.0.0/16
     Tunnel1[1]: ESTABLISHED 120 seconds ago, 192.168.30.16[188.23.184.191]...35.83.202.62[35.83.202.62]
     Tunnel1[1]: IKEv1 SPIs: 0f71232badb04055_i* 00cc8006969aa1b2_r, pre-shared key reauthentication in 7 hours
     Tunnel1[1]: IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
     Tunnel1{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c047dbe7_i c49d8182_o
     Tunnel1{1}:  AES_CBC_128/HMAC_SHA1_96/MODP_1024, 0 bytes_i, 0 bytes_o, rekeying in 41 minutes
     Tunnel1{1}:   192.168.30.0/24 === 10.10.0.0/16

Optionally, use the following commands to restart the IPsec tunnels:

# Restart IPsec
sudo ipsec restart

# Stop tunnels
sudo ipsec down Tunnel1
sudo ipsec down Tunnel2

# Start tunnels
sudo ipsec up Tunnel1
sudo ipsec up Tunnel2

Verify the Site-to-Site Connection
#

  • The IPsec tunnels should now have the status “Up”:



Verify Network Connectivity
#

Ping & Access EC2 Instance
#

SSH into the on-premise VM, then ping the EC2 instance or SSH into it via its private IP address:

# Ping the EC2 instance
ping 10.10.0.112

# Shell output:
PING 10.10.0.112 (10.10.0.112) 56(84) bytes of data.
64 bytes from 10.10.0.112: icmp_seq=1 ttl=64 time=199 ms
64 bytes from 10.10.0.112: icmp_seq=2 ttl=64 time=202 ms
# SSH into the EC2 instance
ssh -i /home/ubuntu/.ssh/us-west-2-pc-le.pem ubuntu@10.10.0.112

# Shell output:
The authenticity of host '10.10.0.112 (10.10.0.112)' can't be established.
ED25519 key fingerprint is SHA256:42mpeRmVnFAs9pzMyBo+z+YYgAT7tBXa+Zfl2JV+3Gs.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.10.0.112' (ED25519) to the list of known hosts.
Welcome to Ubuntu 24.04.1 LTS (GNU/Linux 6.8.0-1018-aws x86_64)

Ping On-Premise VM from AWS VPC
#

# SSH into the EC2 instance
ssh -i /home/ubuntu/.ssh/us-west-2-pc-le.pem ubuntu@35.93.137.145
# Ping the on-premise VM
ping 192.168.30.16

# Shell output:
PING 192.168.30.16 (192.168.30.16) 56(84) bytes of data.
64 bytes from 192.168.30.16: icmp_seq=1 ttl=64 time=198 ms
64 bytes from 192.168.30.16: icmp_seq=2 ttl=64 time=196 ms