
GCP Virtual Private Cloud (VPC) Part 3: Public and Private Subnet Scheme, Terraform Configuration, Mermaid Flowchart

GCP Google Cloud SDK Terraform Virtual Private Cloud (VPC) Compute Engine Instance Mermaid Flowchart
GCP-VPC-Terraform - This article is part of a series.
Part 3: This Article
GitHub Repository Available

Flowchart
#

graph TD
    %% VPC
    subgraph VPC["VPC Global"]
        %% Region 1
        subgraph Region1["GCP Region us-central1"]
            %% Subnet 1 (Public)
            subgraph SN1["Public Subnet"]
                SN1AZ1["AZ us-central1-a"]
                SN1AZ2["AZ us-central1-b"]
                SN1AZ3["AZ us-central1-c"]
            end
            %% Subnet 2 (Private)
            subgraph SN2["Private Subnet"]
                SN2AZ1["AZ us-central1-a"]
                SN2AZ2["AZ us-central1-b"]
                SN2AZ3["AZ us-central1-c"]
            end
        end

        %% AZ Connections (Public Subnet)
        SN1AZ1 <-.-> SN1AZ2
        SN1AZ2 <-.-> SN1AZ3

        %% AZ Connections (Private Subnet)
        SN2AZ1 <-.-> SN2AZ2
        SN2AZ2 <-.-> SN2AZ3

        %% NAT
        SN2 --> NAT
        NAT["NAT Gateway"] --> Router["Cloud Router"]
    end

    %% Connectivity
    Router --> Internet["Internet"]
    SN1 <--> Internet

    %% Dashed Borders
    classDef BorderDash stroke-dasharray:5 5;
    class Region1,SN1AZ1,SN1AZ2,SN1AZ3,Internet BorderDash
    class SN2AZ1,SN2AZ2,SN2AZ3 BorderDash

    %% Node Colors
    style SN1 fill:#d1f7d6,stroke:#333,stroke-width:2px
    style SN2 fill:#f7d6d6,stroke:#333,stroke-width:2px



Prerequisites
#

Google Cloud SDK
#

Installation
#

The Google Cloud SDK (Software Development Kit) includes the gcloud command-line interface (CLI).

# Install dependencies
sudo apt install curl apt-transport-https ca-certificates gnupg -y

# Add package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg

# Update package index
sudo apt update

# Install the Google Cloud SDK
sudo apt install google-cloud-sdk -y

# Verify installation / list version
gcloud version

# Shell output:
Google Cloud SDK 506.0.0
alpha 2025.01.10
beta 2025.01.10
bq 2.1.11
bundled-python3-unix 3.11.9
core 2025.01.10
gcloud-crc32c 1.0.0
gsutil 5.33

Login to GCP / Verify Login
#

Log in to GCP: open the link printed in the shell in a browser, sign in, and paste the verification code back into the shell.

# Alternatively, use the following command to log in without launching a local browser
gcloud auth login --no-launch-browser

Verify Login:

# List Google Cloud accounts that are currently authenticated
gcloud auth list

Set Default Project
#

# Set the default project: Syntax
gcloud config set project <PROJECT_ID>


# Set the default project: Example
gcloud config set project my-example-project-448310

# Shell output:
Updated property [core/project].
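
The active default project can be double-checked at any time:

# Show the configured default project
gcloud config get-value project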

Enable Project APIs
#

# Enable the necessary APIs used in the project
gcloud services enable compute.googleapis.com

# Shell output:
Operation "operations/acf.p2-249181329052-8b52d534-2fc8-4221-88b8-7f17b4df9d95" finished successfully.
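
Optionally, verify that the Compute Engine API is now active by listing the enabled services (filtered with grep for brevity):

# List enabled services and filter for the Compute Engine API
gcloud services list --enabled | grep compute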

Service Account for Terraform
#

# Create a Service Account named "terraform"
gcloud iam service-accounts create terraform \
    --description="Terraform service account" \
    --display-name="Terraform"

# Shell output:
Created service account [terraform].

# Assign IAM roles to the service account (adjust the project ID)
gcloud projects add-iam-policy-binding my-example-project-448310 \
    --member="serviceAccount:terraform@my-example-project-448310.iam.gserviceaccount.com" \
    --role="roles/editor"

# Shell output:
Updated IAM policy for project [my-example-project-448310].
bindings:
- members:
  - serviceAccount:terraform@my-example-project-448310.iam.gserviceaccount.com
  role: roles/editor
- members:
  - user:juergen@jklug.work
  role: roles/owner
etag: BwYsDPrXlRw=
version: 1

# Generate a JSON key used by Terraform for authentication (adjust the project ID)
gcloud iam service-accounts keys create ~/terraform-key.json \
    --iam-account terraform@my-example-project-448310.iam.gserviceaccount.com

# Shell output:
created key [76d8418c1229bec494f7c0b1a5c1d9e600b4b65e] of type [json] as [/home/ubuntu/terraform-key.json] for [terraform@my-example-project-448310.iam.gserviceaccount.com]
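
Optionally, confirm the key was created by listing the keys of the service account (adjust the project ID):

# List the keys of the Terraform service account
gcloud iam service-accounts keys list \
    --iam-account=terraform@my-example-project-448310.iam.gserviceaccount.com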



Terraform Installation
#

# Install the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

# Verify the GPG key fingerprint
gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

# Add the official HashiCorp repository 
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# Install Terraform
sudo apt update && sudo apt install terraform -y

# Verify installation / check version
terraform version



Terraform Network Stack
#

File and Folder Structure
#

The file and folder structure of the Terraform project looks like this:

gcp-vpc-private-and-public-subnet-stack
├── compute.tf
├── outputs.tf
├── terraform.tf
├── variables.tf
└── vpc.tf

Create SSH Key Pair
#

# Create a new SSH key pair for the VM access
ssh-keygen -t rsa -b 4096 -f ~/.ssh/terraform

Terraform Configuration Files
#

Project Folder & Terraform Provider
#

# Create project folder
TF_PROJECT_NAME=gcp-vpc-private-and-public-subnet-stack
mkdir $TF_PROJECT_NAME && cd $TF_PROJECT_NAME

  • terraform.tf
# Terraform Provider
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

# Provider GCP
provider "google" {
  credentials = file(var.gcp_credentials_file)
  project     = var.gcp_project_id
  region      = var.gcp_region
}
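
As an alternative to the credentials file argument, the provider can also pick up the service account key from the environment via Application Default Credentials; a minimal sketch, assuming the key generated earlier is used and the credentials argument is removed from the provider block:

# Optional: export the key as Application Default Credentials instead of
# referencing it with the "credentials" argument
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/terraform-key.json"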

Variables
#

  • variables.tf
## GCP Configuration

# GCP Credentials File
variable "gcp_credentials_file" {
  description = "Path to the GCP credentials JSON file"
  type        = string
  default     = "~/terraform-key.json"
}

# GCP Project ID
variable "gcp_project_id" {
  description = "GCP project ID"
  type        = string
  default     = "my-example-project-448310"
}

# GCP Region
variable "gcp_region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}


## VPC & Subnets

# VPC Name
variable "vpc_name" {
  description = "The name of the VPC"
  type        = string
  default     = "example-vpc"
}

## Subnet Region
variable "subnet_region" {
  description = "Subnet Region"
  type        = string
  default     = "us-central1"
}

# Subnet Public CIDR Block
variable "subnet_public_cidr_block" {
  description = "Public Subnet CIDR"
  type        = string
  default     = "10.0.1.0/24"
}

# Subnet Private CIDR Block
variable "subnet_private_cidr_block" {
  description = "Private Subnet CIDR"
  type        = string
  default     = "10.0.2.0/24"
}

# Subnet Public Name
variable "subnet_public_name" {
  description = "Public Subnet name"
  type        = string
  default     = "subnet-public1"
}

# Subnet Private Name
variable "subnet_private_name" {
  description = "Private Subnet name"
  type        = string
  default     = "subnet-private1"
}


## Compute Engine

# Zone for Virtual Machines
variable "gcp_zone" {
  description = "GCP Zone"
  type        = string
  default     = "us-central1-a"
}

# Image ID
variable "image_id" {
  description = "GCP image to use for the instances"
  type        = string
  default     = "projects/debian-cloud/global/images/family/debian-12"
}

# Machine Type
variable "machine_type" {
  description = "VM Type"
  type        = string
  default     = "f1-micro"
}


## SSH Key

# Define local public SSH Key
variable "ssh_public_key_file" {
  description = "Path to the public SSH key file"
  type        = string
  default     = "~/.ssh/terraform.pub"
}
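
Any of these defaults can be overridden at plan or apply time without editing variables.tf, either with -var flags or with TF_VAR_ environment variables; the values below are only examples:

# Override a single variable for one run (example value)
terraform plan -var="machine_type=e2-micro"

# Or export it as an environment variable that Terraform picks up automatically
export TF_VAR_gcp_zone="us-central1-b"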

VPC & Subnets
#

  • vpc.tf
## VPC & Subnets

# VPC
resource "google_compute_network" "vpc_network" {
  name                    = var.vpc_name
  auto_create_subnetworks = false
}

# Subnet Public
resource "google_compute_subnetwork" "subnet_public" {
  name           = var.subnet_public_name
  ip_cidr_range  = var.subnet_public_cidr_block
  region         = var.subnet_region
  network        = google_compute_network.vpc_network.id
}

# Subnet Private
resource "google_compute_subnetwork" "subnet_private" {
  name                      = var.subnet_private_name
  ip_cidr_range             = var.subnet_private_cidr_block
  region                    = var.subnet_region
  network                   = google_compute_network.vpc_network.id
  private_ip_google_access  = true # Allow private IPs to access Google APIs
}


## NAT

## Create Cloud Router
resource "google_compute_router" "router" {
  name    = "nat-router"
  region  = var.subnet_region  # Same region as subnets
  network = google_compute_network.vpc_network.id

}

## Create NAT Gateway
resource "google_compute_router_nat" "nat" {
  name                               = "router-nat"
  router                             = google_compute_router.router.name
  region                             = var.subnet_region 
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"  # NAT all subnets, including the private subnet

  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}
  • Cloud NAT (NAT Gateway): Translates private IP addresses to public IPs for outbound internet access

  • Cloud Router: Manages routing and sends the NAT-translated traffic to the internet


Compute Engine & Firewall
#

  • compute.tf
## Compute Engine Instances

# Static IP for Public VM
resource "google_compute_address" "static_ip" {
  name   = "public-ip"
  region = var.subnet_region # Use the same region as the subnets
}

# Compute Engine Instance for Public Subnet
resource "google_compute_instance" "debian_vm_public" {
  name         = "debian-public-vm"
  machine_type = var.machine_type
  zone         = var.gcp_zone
  tags         = ["ingress-public", "egress"] # Tags for firewall rules

  # Metadata for SSH Access
  metadata = {
    ssh-keys = "debian:${file(var.ssh_public_key_file)}"
  }

  boot_disk {
    initialize_params {
      image = var.image_id
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.subnet_public.id # Use the public subnet

    access_config {
      nat_ip = google_compute_address.static_ip.address # Assign the static public IP
    }
  }
}

# Compute Engine Instance for Private Subnet
resource "google_compute_instance" "debian_vm_private" {
  name         = "debian-private-vm"
  machine_type = var.machine_type
  zone         = var.gcp_zone
  tags         = ["ingress-private", "egress"] # Tags for firewall rules

  # Metadata for SSH Access
  metadata = {
    ssh-keys = "debian:${file(var.ssh_public_key_file)}"
  }

  boot_disk {
    initialize_params {
      image = var.image_id
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.subnet_private.id # Use the private subnet
  }
}


## Firewall

# Firewall: Ingress Rules for Public Subnet
resource "google_compute_firewall" "ingress_public" {
  name          = "ingress-public-vm"
  network       = google_compute_network.vpc_network.name
  target_tags   = ["ingress-public"]

  allow {
    protocol = "tcp"
    ports    = ["22"]  # Allow SSH
  }
  allow {
    protocol = "icmp"  # Allow Ping
  }
  source_ranges = ["0.0.0.0/0"]  # Allow traffic from anywhere
}

# Firewall: Ingress Rules for Private Subnet
resource "google_compute_firewall" "ingress_private" {
  name    = "ingress-private-vm"
  network = google_compute_network.vpc_network.name
  target_tags   = ["ingress-private"]

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  allow {
    protocol = "icmp"  # Allow Ping
  }

  source_ranges = ["10.0.1.0/24"] # Public subnet CIDR
}

# Firewall: Egress Rules for both Subnets
resource "google_compute_firewall" "egress" {
  name    = "egress"
  network = google_compute_network.vpc_network.name
  target_tags   = ["egress"]

  allow {
    protocol = "all"
  }
  direction          = "EGRESS"
  destination_ranges = ["0.0.0.0/0"]
}

Outputs
#

  • outputs.tf
# Public VM Outputs
output "public_vm_ips" {
  description = "Public and private IPs of the public VM"
  value = {
    name        = google_compute_instance.debian_vm_public.name
    public_ip   = google_compute_instance.debian_vm_public.network_interface[0].access_config[0].nat_ip
    private_ip  = google_compute_instance.debian_vm_public.network_interface[0].network_ip
  }
}

# Private VM Outputs
output "private_vm_ips" {
  description = "Private IP of the private VM"
  value = {
    name        = google_compute_instance.debian_vm_private.name
    private_ip  = google_compute_instance.debian_vm_private.network_interface[0].network_ip
  }
}



Apply Terraform Configuration
#

Initialize Terraform Project
#

This will download and install the GCP Terraform provider defined in the configuration.

# Initialize the Terraform project
terraform init

Validate Configuration Files
#

# Validates the syntax and structure of Terraform configuration files
terraform validate

# Shell output:
Success! The configuration is valid.

Plan the Deployment
#

# Dry run / preview changes before applying them
terraform plan

Apply the Configuration
#

# Create network stack
terraform apply -auto-approve

# Shell output:
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Outputs:

private_vm_ips = {
  "name" = "debian-private-vm"
  "private_ip" = "10.0.2.2"
}
public_vm_ips = {
  "name" = "debian-public-vm"
  "private_ip" = "10.0.1.2"
  "public_ip" = "35.225.29.50"
}
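
The output values can be re-displayed at any time after the apply:

# Show all outputs again
terraform output

# Show a single output in JSON, for example the public VM details
terraform output -json public_vm_ips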



Verify Network Connectivity
#

SSH Into Public Subnet VM
#

# Copy the private SSH key to the VM in the public subnet
scp -i ~/.ssh/terraform ~/.ssh/terraform debian@35.225.29.50:~/.ssh/

# SSH into the VM in the public subnet
ssh -i ~/.ssh/terraform debian@35.225.29.50
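
As an alternative to copying the private key onto the public VM, SSH agent forwarding can be used to hop to the private VM; a sketch that assumes an ssh-agent is available on the local machine:

# Load the key into the local SSH agent and connect with agent forwarding
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/terraform
ssh -A debian@35.225.29.50

# From the public VM, hop to the private VM with the forwarded key
ssh debian@10.0.2.2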

SSH Into Private Subnet VM
#

# Ping the VM in the private subnet
debian@debian-public-vm:~$ ping 10.0.2.2

# Shell output:
PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=1.14 ms
64 bytes from 10.0.2.2: icmp_seq=2 ttl=64 time=0.254 ms

# SSH into the private subnet VM
ssh -i ~/.ssh/terraform debian@10.0.2.2

# Verify internet connectivity
debian@debian-private-vm:~$ ping google.com

# Shell output:
PING google.com (142.251.183.100) 56(84) bytes of data.
64 bytes from yucbfaa-in-f100.1e100.net (142.251.183.100): icmp_seq=1 ttl=115 time=2.43 ms
64 bytes from yucbfaa-in-f100.1e100.net (142.251.183.100): icmp_seq=2 ttl=115 time=1.03 ms
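
Optionally, make the egress path visible by querying an external IP echo service from the private VM; the returned address should be the Cloud NAT IP rather than an address owned by the VM (assumes curl is installed and uses the third-party service ifconfig.me):

# Show the public IP the private VM egresses with (should be the Cloud NAT IP)
debian@debian-private-vm:~$ curl -s ifconfig.me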



Verify GCP Resources
#

List VPCs
#

# List VPC networks
gcloud compute networks list

# Shell output:
NAME         SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
default      AUTO         REGIONAL
example-vpc  CUSTOM       REGIONAL

List Subnets in VPC
#

# List subnets in VPC
gcloud compute networks subnets list --filter="network:example-vpc"

# Shell output:
NAME             REGION       NETWORK      RANGE        STACK_TYPE  IPV6_ACCESS_TYPE  INTERNAL_IPV6_PREFIX  EXTERNAL_IPV6_PREFIX
subnet-private1  us-central1  example-vpc  10.0.2.0/24  IPV4_ONLY
subnet-public1   us-central1  example-vpc  10.0.1.0/24  IPV4_ONLY

List Compute Instances
#

# List virtual machines
gcloud compute instances list

# Shell output:
NAME               ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
debian-private-vm  us-central1-a  f1-micro                   10.0.2.2                   RUNNING
debian-public-vm   us-central1-a  f1-micro                   10.0.1.2     35.225.29.50  RUNNING
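
List Cloud Router, NAT & Firewall Rules
#

The Cloud Router, NAT gateway and firewall rules created by the stack can be checked as well; the resource names below match those defined in vpc.tf and compute.tf:

# List Cloud Routers
gcloud compute routers list

# List the NAT configuration on the Cloud Router
gcloud compute routers nats list --router=nat-router --region=us-central1

# List firewall rules attached to the VPC
gcloud compute firewall-rules list --filter="network:example-vpc"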



Links
#

# GitHub Repository
https://github.com/jueklu/terraform-gcp-vpc-subnet