Category: Terraform

  • 14 – HCP Terraform (Terraform Cloud) Explained – Remote Execution, Authentication, and Workspaces

    Most Terraform users begin with a simple workflow:

    terraform init
    terraform plan
    terraform apply
    

    Everything runs locally, credentials come from the local machine, and the state file is stored either locally or in a configured backend.

    However, real-world teams rarely run Terraform from developer laptops.
    Infrastructure provisioning is executed from a controlled environment – this is where HCP Terraform (Terraform Cloud) comes in.

    This article explains how Terraform Cloud works, what actually changes compared to local execution, and how to correctly authenticate and run Terraform remotely.

    Table of Contents

    1. What Terraform Cloud Actually Changes
    2. Key Architectural Difference
    3. Creating the Organization and Workspace
    4. Connecting Local Terraform to Terraform Cloud
    5. First Remote Plan and Why It Fails
    6. Why Local Credentials Stop Working
    7. Azure Authentication in Terraform Cloud
    8. Adding Credentials to the Workspace
    9. Running Terraform Remotely
    10. Verifying Remote State
    11. Understanding Workspaces
    12. What Terraform Cloud Provides Beyond CLI
    13. Practical Takeaway

    What Terraform Cloud Actually Changes

    Terraform Cloud does not replace the Terraform CLI – it replaces the execution environment.

    Instead of:

    Local CLI → Cloud Provider API
    

    the architecture becomes:

    Local CLI → Terraform Cloud → Cloud Provider API
    

    Your machine only submits configuration and receives logs.
    The actual Terraform runtime, state, and authentication live in Terraform Cloud.


    Key Architectural Difference

    Component        Local Terraform               Terraform Cloud
    Execution        Runs on developer machine     Runs on HashiCorp infrastructure
    State            Local or backend-configured   Always remote
    Authentication   Local credentials             Workspace credentials
    Audit History    Limited                       Built-in
    Collaboration    Manual                        Native support

    Understanding this distinction is essential – most confusion with Terraform Cloud comes from assuming it behaves like a remote backend only.
    It is actually remote execution, not just remote storage.


    Creating the Organization and Workspace

    After signing into:

    https://app.terraform.io
    

    Create an organization and then create a workspace using:

    CLI-Driven Workflow

    This workspace acts as an isolated remote Terraform runtime.

    Each workspace has:

    • Its own state file
    • Its own variables
    • Its own credentials
    • Its own run history

    Think of it as a remote Terraform working directory.


    Connecting Local Terraform to Terraform Cloud

    Create a minimal configuration:

    terraform {
      cloud {
        organization = "my-organization"
    
        workspaces {
          name = "cli-lab"
        }
      }
    }
    
    provider "azurerm" {
      features {}
    }
    
    resource "azurerm_resource_group" "example" {
      name     = "rg-hcp-demo"
      location = "eastus"
    }
    

    Initialize:

    terraform init
    

    Terraform will request authentication with Terraform Cloud.

    Run:

    terraform login
    

    This generates an API token and stores it locally in:

    ~/.terraform.d/credentials.tfrc.json
    

    At this point, your CLI is authenticated with Terraform Cloud – not with Azure.
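    For reference, the stored file follows a small JSON layout; the sketch below uses a placeholder token value:

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "<YOUR-API-TOKEN>"
    }
  }
}
```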


    First Remote Plan and Why It Fails

    Running:

    terraform plan
    

    will fail with an authentication error for Azure.

    This often surprises users who already have working local credentials such as:

    $env:ARM_CLIENT_ID="..."
    $env:ARM_CLIENT_SECRET="..."
    $env:ARM_SUBSCRIPTION_ID="..."
    $env:ARM_TENANT_ID="..."
    

    These variables work locally because Terraform executes on your machine.

    Terraform Cloud cannot access them.


    Why Local Credentials Stop Working

    In local execution:

    Terraform (local process) reads environment variables → authenticates to Azure
    

    In Terraform Cloud:

    Terraform Cloud runtime reads workspace variables → authenticates to Azure
    

    The execution environment changed, therefore the authentication location must also change.


    Azure Authentication in Terraform Cloud

    Terraform Cloud must authenticate independently using a Service Principal.

    Create one if necessary:

    az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<SUBSCRIPTION_ID>
    

    Azure returns:

    appId        → Client ID
    password     → Client Secret
    tenant       → Tenant ID
    subscription → Subscription ID
    

    Adding Credentials to the Workspace

    In the workspace → Variables → Environment Variables

    Add:

    Variable              Description
    ARM_CLIENT_ID         Service Principal App ID
    ARM_CLIENT_SECRET     Service Principal Secret
    ARM_TENANT_ID         Azure Tenant
    ARM_SUBSCRIPTION_ID   Azure Subscription

    Mark secrets as sensitive.

    These variables now exist inside the remote execution environment.


    Running Terraform Remotely

    Run:

    terraform plan
    

    You will notice:

    • The command returns output locally
    • Execution logs appear in the Terraform Cloud UI
    • The plan runs remotely

    Then apply:

    terraform apply
    

    The resource is created in Azure by Terraform Cloud – not by your machine.


    Verifying Remote State

    Delete local Terraform metadata:

    rm -r .terraform
    

    Run again:

    terraform plan
    

    It still works.

    This confirms:

    State is stored in Terraform Cloud, not locally.


    Understanding Workspaces

    A workspace represents an isolated Terraform environment.

    Each workspace maintains:

    • Independent state
    • Independent credentials
    • Independent variables
    • Independent run history

    This enables environment separation:

    Workspace   Purpose
    dev         Development infrastructure
    test        Validation environment
    prod        Production infrastructure

    No duplication of directories or backend configuration required.
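    One way to drive several such workspaces from a single configuration is tag-based selection in the cloud block. A minimal sketch, assuming the dev, test, and prod workspaces already exist in the organization and all carry a (hypothetical) tag named app:

```hcl
terraform {
  cloud {
    organization = "my-organization"

    workspaces {
      # Matches every workspace carrying this tag instead of one fixed name
      tags = ["app"]
    }
  }
}
```

    After terraform init, pick the target environment with terraform workspace select dev (or test, prod).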


    What Terraform Cloud Provides Beyond CLI

    Terraform Cloud effectively acts as Terraform's native CI/CD system.

    It adds:

    • Remote execution
    • State management
    • Secrets management
    • Run history
    • Access control
    • Collaboration
    • Policy enforcement (in higher tiers)

    Without external pipelines.


    Practical Takeaway

    Terraform Cloud is not just a UI layer or remote backend.

    It changes the operational model:

    From:

    Developers provisioning infrastructure

    To:

    Controlled platform provisioning infrastructure

    The CLI becomes a client – not the executor.

    Understanding this distinction is fundamental before using VCS-driven workflows, automated applies, or multi-environment deployments.

  • 13 – How to Import Existing Azure Resources into Terraform (Step-by-Step Mini Project)

    When learning Terraform, most tutorials start by creating new infrastructure.

    But in the real world, companies already have infrastructure running in the cloud – and your job is to bring it under Terraform management without recreating or breaking anything.

    That process is called:

    Terraform Import

    In this mini project we will:

    • Create Azure infrastructure manually (outside Terraform)
    • Make Terraform "discover" it
    • Connect Terraform state to real resources
    • Understand what Terraform actually manages

    This tutorial focuses on understanding – not best practices, modules, or clean architecture.

    Table of Contents

    1. Step 0 – Create Infrastructure in Azure (Without Terraform)
    2. Create Resource Group
    3. Create Virtual Network and Subnet
    4. Create App Service Plan (Free Tier)
    5. Create Web App
    6. Final Verification
    7. Step 1 – Terraform Tries to Recreate Existing Resources
    8. Step 2 – Connect Terraform to the Real Azure Resource
    9. Verify Terraform Learned the Resource
    10. The Most Important Test
    11. What Just Happened?
    12. Repeat for Other Resources
    13. Key Concept You Must Understand
    14. Final Result
    15. What You Learned

    Step 0 – Create Infrastructure in Azure (Without Terraform)

    We first create resources using Azure CLI so Terraform has no knowledge of them.

    Resource Names

    Resource           Name
    Resource Group     rgcliminipro21212
    Virtual Network    vnetcliminipro21212
    App Service Plan   plancliminipro21212
    Web App            webappcliminipro21212
    Region             centralus

    Create Resource Group

    az group create --name rgcliminipro21212 --location centralus
    

    Verify:

    az group show --name rgcliminipro21212 --query "{Name:name,Location:location}"
    

    Create Virtual Network and Subnet

    az network vnet create --resource-group rgcliminipro21212 --name vnetcliminipro21212 --address-prefix 10.0.0.0/16 --subnet-name default --subnet-prefix 10.0.1.0/24
    

    Verify VNet:

    az network vnet show --resource-group rgcliminipro21212 --name vnetcliminipro21212 --query addressSpace.addressPrefixes
    

    Verify Subnet:

    az network vnet subnet show --resource-group rgcliminipro21212 --vnet-name vnetcliminipro21212 --name default --query addressPrefix
    

    Create App Service Plan (Free Tier)

    az appservice plan create --name plancliminipro21212 --resource-group rgcliminipro21212 --sku F1 --is-linux
    

    Verify:

    az appservice plan show --name plancliminipro21212 --resource-group rgcliminipro21212 --query "{Tier:sku.tier,Name:sku.name}"
    

    Create Web App

    az webapp create --resource-group rgcliminipro21212 --plan plancliminipro21212 --name webappcliminipro21212 --runtime "NODE:18-lts"
    

    Verify:

    az webapp show --resource-group rgcliminipro21212 --name webappcliminipro21212 --query "{State:state,Host:defaultHostName}"
    

    Final Verification

    az resource list --resource-group rgcliminipro21212 --output table
    

    At this point:

    Infrastructure exists in Azure
    Terraform knows nothing about it


    Step 1 – Terraform Tries to Recreate Existing Resources

    Create a file rg.tf

    resource "azurerm_resource_group" "rg" {
      name     = "rgcliminipro21212"
      location = "centralus"
    }
    

    Run:

    terraform plan
    

    You will see:

    Plan: 1 to add
    

    Why?

    Because Terraform has no memory yet – it only trusts the state file, not Azure.


    Step 2 – Connect Terraform to the Real Azure Resource

    We now map the Terraform resource to the real resource.

    Get Resource ID

    az group show --name rgcliminipro21212 --query id --output tsv
    

    Output looks like:

    /subscriptions/<sub-id>/resourceGroups/rgcliminipro21212
    

    Import into Terraform

    terraform import azurerm_resource_group.rg <RESOURCE_ID>
    

    Verify Terraform Learned the Resource

    Check state

    terraform state list
    

    Inspect resource details

    terraform state show azurerm_resource_group.rg
    

    Terraform downloaded the real configuration from Azure.


    The Most Important Test

    Run:

    terraform plan
    

    Now you should see:

    No changes. Infrastructure matches configuration.
    

    What Just Happened?

    Before import:

    Terraform            Azure
    Wants to create RG   RG already exists

    After import:

    Terraform            Azure
    Knows RG exists      RG exists

    Terraform did not create anything.
    It only learned reality.


    Repeat for Other Resources

    You repeat the same process for:

    • Virtual Network
    • Subnet
    • App Service Plan
    • Web App

    The pattern never changes:

    1. Write resource block
    2. terraform plan → shows create
    3. terraform import
    4. terraform plan → shows no changes

    You are teaching Terraform what already exists.
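    To illustrate the pattern once more, a resource block for the virtual network from Step 0 could look like the sketch below; the attribute values mirror the az CLI commands used earlier and should be verified against terraform state show after the import:

```hcl
# Sketch: matches the VNet created manually in Step 0
resource "azurerm_virtual_network" "vnet" {
  name                = "vnetcliminipro21212"
  resource_group_name = "rgcliminipro21212"
  location            = "centralus"
  address_space       = ["10.0.0.0/16"]
}
```

    The matching import command takes the VNet resource ID, obtained with az network vnet show --resource-group rgcliminipro21212 --name vnetcliminipro21212 --query id --output tsv.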


    Key Concept You Must Understand

    Terraform does NOT manage infrastructure.

    Terraform manages a state file.

    If it is not in state → Terraform thinks it does not exist
    If it is in state → Terraform controls it
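    As a side note, Terraform 1.5 and later can also express this declaratively with an import block, which terraform plan and apply then carry out; a minimal sketch for the resource group:

```hcl
# Declarative alternative to `terraform import` (requires Terraform >= 1.5)
import {
  to = azurerm_resource_group.rg
  id = "/subscriptions/<sub-id>/resourceGroups/rgcliminipro21212"
}
```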


    Final Result

    After importing all resources:

    You can now run:

    terraform destroy
    

    And Terraform will delete resources that were originally created manually.

    That proves:

    Terraform now owns the infrastructure


    What You Learned

    • Terraform does not read Azure automatically
    • Import does not create resources
    • State file is the brain of Terraform
    • Infrastructure can be adopted safely

    This is one of the most important real-world Terraform skills.


    If you understood this concept, you now understand more Terraform than most beginners who only follow terraform apply tutorials.

  • 12 – Setup Azure Monitoring And Alerting With Terraform (Hands-On Mini Project)

    In this mini project we will build a real Azure VM → deploy a website → monitor it → get email alerts when something goes wrong – all using Terraform.

    This is not just copy-paste infrastructure.
    We will understand why each piece exists and what Azure is actually doing behind the scenes.

    By the end you will know:

    • How Azure Monitor actually works 🧠
    • Difference between resource, metric, and action
    • How alerts really get triggered
    • How to simulate failures (CPU stress testing 🔥)

    Table of Contents

    1. Architecture Overview
    2. Step 1 – Networking Infrastructure
    3. Step 2 – Create the Virtual Machine
    4. Step 3 – Deploy Website Automatically (Remote-Exec)
    5. Step 4 – Create Notification Channel (Action Group)
    6. Step 5 – CPU Alert (High Usage)
    7. Step 6 – Memory Alert
    8. ⚠️ Important Learning (Real-World Insight)
    9. Final Result
    10. What You Learned
    11. Final Thoughts

    Architecture Overview

    We will build:

    Component        Purpose
    Resource Group   Container for everything
    VNet + Subnet    Network for VM
    NSG              Firewall rules
    Public IP        Internet access
    Linux VM         Runs website
    Nginx            Sample application
    Action Group     Notification channel
    Metric Alerts    Detect problems

    Flow:

    Problem happens → Azure detects metric → Alert rule triggers → Action group emails you 📧


    Step 1 – Networking Infrastructure

    We first create the foundation: network + firewall + IP + NIC

    Resource Group

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro090212"
      location = "Central US"
    }
    

    Virtual Network & Subnet

    resource "azurerm_virtual_network" "vnet" {
      name                = "vnetminipro12212"
      address_space       = ["10.0.0.0/16"]
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    }
    
    resource "azurerm_subnet" "subnet" {
      name                 = "subnetminipro12100339"
      resource_group_name  = azurerm_resource_group.rg.name
      virtual_network_name = azurerm_virtual_network.vnet.name
      address_prefixes     = ["10.0.2.0/24"]
    }
    

    Network Security Group (Firewall)

    We allow:

    • SSH (22) → remote login
    • HTTP (80) → website access

    resource "azurerm_network_security_group" "nsg" {
      name                = "nsgminipro98922"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      security_rule {
        name                       = "SSH"
        priority                   = 100
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    
      security_rule {
        name                       = "HTTP"
        priority                   = 110
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "80"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }
    

    Public IP + Network Interface

    resource "azurerm_public_ip" "pip" {
      name                = "pipminipro1212909"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      allocation_method   = "Static"
    }
    
    resource "azurerm_network_interface" "nic" {
      name                = "nicminipro90909111"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      ip_configuration {
        name                          = "internal"
        subnet_id                     = azurerm_subnet.subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.pip.id
      }
    }
    
    resource "azurerm_network_interface_security_group_association" "assoc" {
      network_interface_id      = azurerm_network_interface.nic.id
      network_security_group_id = azurerm_network_security_group.nsg.id
    }
    

    ❓ Important Concept – Where is NSG applied?

    You attached NSG to NIC, not subnet.

    How to verify in portal:

    Where to check     What you see
    NIC → Networking   NSG attached
    NSG → Subnets      Empty

    Why?

    Azure firewall works at 2 levels:

    Level        Scope
    Subnet NSG   Applies to all VMs
    NIC NSG      Applies to single VM

    We used NIC because this project has only one VM.
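    For comparison, attaching the same NSG at the subnet level would use the subnet association resource instead; a minimal sketch based on the resources defined above:

```hcl
# Subnet-level alternative: the NSG then covers every VM in the subnet
resource "azurerm_subnet_network_security_group_association" "subnet_assoc" {
  subnet_id                 = azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}
```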


    Step 2 – Create the Virtual Machine

    resource "azurerm_linux_virtual_machine" "vm" {
      name                = "vmminipro343900"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      size                = "Standard_D2s_v3"
      network_interface_ids = [azurerm_network_interface.nic.id]
    
      admin_username = "azureuser"
    
      admin_ssh_key {
        username   = "azureuser"
        public_key = file("C:/Alan/MyWork/linuxvms/mykeys/key1.pub")
      }
    
      os_disk {
        caching              = "ReadWrite"
        storage_account_type = "Standard_LRS"
      }
    
      source_image_reference {
        publisher = "Canonical"
        offer     = "UbuntuServer"
        sku       = "18.04-LTS"
        version   = "latest"
      }
    }
    

    SSH Into the VM

    Fix Windows SSH key permission:

    icacls <key> /inheritance:r
    icacls <key> /grant:r "$($env:USERNAME):(R)"
    icacls <key> /remove "Authenticated Users" "BUILTIN\Users" "Everyone"
    

    Login:

    ssh -i <key> azureuser@<public-ip>
    

    ✔ VM verified working


    Step 3 – Deploy Website Automatically (Remote-Exec)

    Now Terraform becomes powerful 💥
    We configure the server automatically. Note that a provisioner block must live inside a resource block – here it belongs inside the azurerm_linux_virtual_machine resource.

    provisioner "remote-exec" {
    
      inline = [
    
        "echo waiting for cloud-init...",
        "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 2; done",
    
        "sudo apt-get update -y",
        "sudo apt-get install -y nginx",
    
        "echo '<h1>Terraform Monitoring Lab Working</h1>' | sudo tee /var/www/html/index.html",
    
        "sudo systemctl restart nginx",
        "sudo systemctl enable nginx"
      ]
    
      connection {
          type        = "ssh"
          user        = "azureuser"
          private_key = file("C:/Alan/MyWork/linuxvms/mykeys/key1")
          host        = azurerm_public_ip.pip.ip_address
      }
    }
    

    ⚠️ Important learning:

    Provisioners run only during resource creation,
    so we had to destroy and apply again.

    Now open browser:

    http://<public-ip>
    

    Website works 🎉


    Step 4 – Create Notification Channel (Action Group)

    We tell Azure:

    When something breaks → email me

    resource "azurerm_monitor_action_group" "ag" {
      name                = "agminipro9090"
      resource_group_name = azurerm_resource_group.rg.name
      short_name          = "alerts"
    
      email_receiver {
        name          = "sendtoadmin"
        email_address = "alankseb@gmail.com"
      }
    }
    

    Verify in portal:

    Azure Monitor → Alerts → Action Groups


    Step 5 – CPU Alert (High Usage)

    Now we create the actual monitoring rule.

    resource "azurerm_monitor_metric_alert" "cpu_alert" {
      name                = "highcpualertminipro990922"
      resource_group_name = azurerm_resource_group.rg.name
      scopes              = [azurerm_linux_virtual_machine.vm.id]
      description         = "Alert when CPU usage is greater than 60%"
    
      criteria {
        metric_namespace = "Microsoft.Compute/virtualMachines"
        metric_name      = "Percentage CPU"
        aggregation      = "Average"
        operator         = "GreaterThan"
        threshold        = 60
      }
    
      action {
        action_group_id = azurerm_monitor_action_group.ag.id
      }
    }
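    Two optional arguments control the evaluation cadence of this rule. The values below are, to my understanding, the provider defaults; writing them out inside the cpu_alert resource makes the behavior explicit:

```hcl
# Assumed provider defaults for azurerm_monitor_metric_alert,
# placed inside the cpu_alert resource for clarity:
frequency   = "PT1M" # how often the metric is checked
window_size = "PT5M" # period over which the Average is computed
```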
    

    Test the Alert 🔥

    SSH into VM:

    sudo apt-get install stress -y
    stress --cpu 6 --timeout 300
    

    Wait 5 minutes…

    📧 You receive email:

    Azure Monitor alert triggered

    Congratulations – you built real monitoring.


    Step 6 – Memory Alert

    We add another rule:

    resource "azurerm_monitor_metric_alert" "disk_alert" {
      name                = "lowdiskalertminipro9090223333"
      resource_group_name = azurerm_resource_group.rg.name
      scopes              = [azurerm_linux_virtual_machine.vm.id]
      description         = "Alert when average available memory drops below 1 GiB"

      criteria {
        metric_namespace = "Microsoft.Compute/virtualMachines"
        metric_name      = "Available Memory Bytes"
        aggregation      = "Average"
        operator         = "LessThan"
        threshold        = 1073741824 # metric is reported in bytes, so 1 GiB
      }

      action {
        action_group_id = azurerm_monitor_action_group.ag.id
      }
    }
    

    ⚠️ Important Learning (Real-World Insight)

    During testing I discovered:

    Azure VM metrics do NOT expose actual disk usage by default.

    This alert monitors memory (RAM), not filesystem disk space.

    Real disk monitoring requires:

    • Azure Monitor Agent
    • Log Analytics
    • Log-based alerts

    This was one of the biggest learning moments in this project 🧠


    Final Result

    We built a complete monitoring pipeline:

    Event              What Happens
    CPU spike          Azure detects
    Alert rule fires   Action group triggered
    Email sent         Admin notified

    What You Learned

    ✔ Terraform provisioning
    ✔ Remote configuration
    ✔ Azure networking
    ✔ Monitoring architecture
    ✔ Real difference between metric vs log alerts


    Final Thoughts

    This project transforms Terraform from:

    "tool that creates resources"

    into

    "tool that builds reliable production systems"

    Because infrastructure without monitoring is just waiting to fail.


    Happy learning 🚀

  • 11 – Azure SQL Database Server Terraform Mini Project – Step-by-Step Guide

    In this hands-on tutorial, we will build a complete Azure SQL Server + SQL Database using Terraform, then securely connect to it from our local machine and run real SQL commands – without installing SSMS or Azure Data Studio.

    This mini project is perfect if you are learning:

    • Terraform Infrastructure as Code
    • Azure SQL PaaS services
    • Networking security with firewall rules
    • Database connectivity using Azure CLI and sqlcmd

    Let's build everything step by step.

    Table of Contents

    1. What We Will Build
    2. Step 1 – Create Resource Group, SQL Server and Database
    3. Step 2 – Add Firewall Rule to Allow Local PC
    4. Step 3 – Test SQL Using CLI (No GUI Needed)
    5. Step 4 – Connect to Database Using sqlcmd
    6. Step 5 – Create Table and Insert Data
    7. What We Learned

    What We Will Build

    By the end of this demo, we will have:

    • An Azure Resource Group
    • Azure SQL Server
    • Azure SQL Database
    • Firewall rule to allow our PC to connect
    • Real database table with data
    • Full connectivity test using CLI

    Step 1 – Create Resource Group, SQL Server and Database

    First we define the core infrastructure using Terraform.

    Resource Group – rg.tf

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro98989"
      location = "Central US"
    }
    

    The resource group is a logical container that will hold our SQL server and database.


    SQL Server – sqlserver.tf

    resource "azurerm_mssql_server" "sql_server" {
      name                         = "sqlserverminipro876811"
      resource_group_name          = azurerm_resource_group.rg.name
      location                     = azurerm_resource_group.rg.location
      version                      = "12.0"
      administrator_login          = "sqladmin"
      administrator_login_password = "StrongPassword@123"
    }
    

    This creates:

    • Azure SQL logical server
    • Admin user and password
    • Hosted in Central US

    In real projects, never hardcode passwords – use Azure Key Vault or Terraform variables.
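    A minimal sketch of the variable-based approach (the variable name sql_admin_password is an assumption; supply its value via the TF_VAR_sql_admin_password environment variable or a .tfvars file kept out of version control):

```hcl
variable "sql_admin_password" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

resource "azurerm_mssql_server" "sql_server" {
  name                         = "sqlserverminipro876811"
  resource_group_name          = azurerm_resource_group.rg.name
  location                     = azurerm_resource_group.rg.location
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = var.sql_admin_password
}
```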


    SQL Database – sqldb.tf

    resource "azurerm_mssql_database" "sqldb" {
      name      = "sqldbminipro81829"
      server_id = azurerm_mssql_server.sql_server.id
    }
    

    This database is created inside the SQL server defined earlier.


    Deploy Infrastructure

    Run:

    terraform init
    terraform apply
    

    After apply completes:

    • Open Azure Portal
    • Navigate to your resource group
    • Verify SQL Server and Database exist

    Step 2 – Add Firewall Rule to Allow Local PC

    By default, Azure SQL blocks all external connections.
    We must allow our own IP address.

    Firewall Rule – firewallrule.tf

    resource "azurerm_mssql_firewall_rule" "firewall_rule" {
      name             = "sqlfirewallruleminipro909122"
      server_id        = azurerm_mssql_server.sql_server.id
      start_ip_address = ""
      end_ip_address   = ""
    }
    

    👉 Replace the empty IP values with your public IP.

    You can find your IP from:

    https://whatismyipaddress.com

    Example:

    start_ip_address = "203.0.113.10"
    end_ip_address   = "203.0.113.10"
    

    Apply again:

    terraform apply
    

    Step 3 – Test SQL Using CLI (No GUI Needed)

    We will connect using:

    • Azure CLI
    • sqlcmd tool

    List SQL Servers

    az sql server list -o table
    

    List Databases in Our Server

    az sql db list --server sqlserverminipro876811 --resource-group rgminipro98989 -o table
    

    Check Firewall Rules

    az sql server firewall-rule list --server sqlserverminipro876811 --resource-group rgminipro98989 -o table
    

    Step 4 – Connect to Database Using sqlcmd

    No SSMS required!

    Connect

    sqlcmd -S sqlserverminipro876811.database.windows.net -U sqladmin -P "StrongPassword@123" -d sqldbminipro81829
    

    IMPORTANT:
    Use full DNS name →
    sqlserverminipro876811.database.windows.net


    Verify Databases

    SELECT name FROM sys.databases;
    GO
    

    Every SQL command must end with:

    GO
    

    Step 5 – Create Table and Insert Data

    Create Table

    CREATE TABLE employees(
      id INT PRIMARY KEY,
      name VARCHAR(50),
      tech VARCHAR(30)
    );
    GO
    

    Insert Sample Data

    INSERT INTO employees VALUES
    (1, 'Alice', 'Terraform'),
    (2, 'Bob', 'Azure'),
    (3, 'Charlie', 'SQL');
    GO
    

    Query Data

    SELECT * FROM employees;
    GO
    

    🎉 You should see real output from Azure SQL Database!


    What We Learned

    In this mini project you successfully:

    • Provisioned Azure SQL using Terraform
    • Understood logical SQL server vs database
    • Configured network security via firewall
    • Connected securely from local PC
    • Executed real SQL queries using CLI

    This is exactly how cloud engineers deploy database environments in real projects – automated, repeatable, and infrastructure as code.

  • 10 – Azure Policy and Governance – Terraform Mini Project

    Table of Contents

    1. Step 1 – Create Resource Group and Base Terraform Setup
    2. Step 2 – Create Mandatory Tag Policy
    3. Step 3 – Create Allowed VM Size Policy
    4. Step 4 – Create Allowed Location Policy
    5. Final Outcome of This Mini Project

    In this mini project, we implement Azure governance using Terraform. The goal is to enforce organizational standards at the subscription level using Azure Policy, so that resources follow rules for:

    • Mandatory tags
    • Allowed VM sizes
    • Allowed deployment locations

    Everything is automated using Terraform infrastructure as code.


    Step 1 – Create Resource Group and Base Terraform Setup

    We start by creating:

    • A resource group
    • Variables for locations, VM sizes, and allowed tags
    • Output to display current subscription ID

    Resource Group – rg.tf

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro7878"
      location = "Central US"
    }
    

    Read Current Subscription – main.tf

    data "azurerm_subscription" "subscriptioncurrent" {}
    

    Output Subscription ID – output.tf

    output "subscription_id" {
      value = data.azurerm_subscription.subscriptioncurrent.id
    }
    

    Variables – variables.tf

    variable "location" {
      type    = list(string)
      default = ["eastus", "westus"]
    }
    
    variable "vm_sizes" {
      type    = list(string)
      default = ["Standard_B2s", "Standard_B2ms"]
    }
    
    variable "allowed_tags" {
      type    = list(string)
      default = ["department", "project"]
    }
    

    After running:

    terraform apply
    

    ✔ Resource group was created
    ✔ Subscription ID output was verified


    Step 2 – Create Mandatory Tag Policy

    Next, we enforce that every resource must contain two tags:

    • department
    • project

    If either tag is missing → resource creation is denied.

    Policy Definition – policy1.tf

    resource "azurerm_policy_definition" "tagpolicy" {
    
      name         = "allowed-tag"
      policy_type  = "Custom"
      mode         = "All"
      display_name = "Allowed tags policy"
    
      policy_rule = jsonencode({
        if = {
          anyOf = [
            {
              field  = "tags[${var.allowed_tags[0]}]"
              exists = false
            },
            {
              field  = "tags[${var.allowed_tags[1]}]"
              exists = false
            }
          ]
        }
    
        then = {
          effect = "deny"
        }
      })
    }
    

    Assign Policy to Subscription

    resource "azurerm_subscription_policy_assignment" "tag_assign" {
    
      name = "tag-assignment"
    
      policy_definition_id = azurerm_policy_definition.tagpolicy.id
    
      subscription_id = data.azurerm_subscription.subscriptioncurrent.id
    }
    

    ⚠ Important
    To create and assign policies, your account must have the
    Resource Policy Contributor role.

    Testing the Policy – testrg.tf

    resource "azurerm_resource_group" "bad" {
      name     = "bad-rg"
      location = "Central US"
    
      tags = {
        department = "IT"
        project    = "Demo"
      }
    }
    

    ✔ Without tags → RG creation blocked
    ✔ With tags → RG creation allowed


    Step 3 – Create Allowed VM Size Policy

    Now we restrict which VM sizes can be used.

    Allowed sizes:

    • Standard_B2s
    • Standard_B2ms

    Policy Definition – policy2.tf

    resource "azurerm_policy_definition" "vm_size" {
    
      name         = "vm-size"
      policy_type  = "Custom"
      mode         = "All"
      display_name = "Allowed vm policy"
    
      policy_rule = jsonencode({
        if = {
          field = "Microsoft.Compute/virtualMachines/sku.name"
    
          notIn = [
            var.vm_sizes[0],
            var.vm_sizes[1]
          ]
        }
    
        then = {
          effect = "deny"
        }
      })
    }
    

    Assign VM Size Policy

    resource "azurerm_subscription_policy_assignment" "vm_assign" {
    
      name = "size-assignment"
    
      policy_definition_id = azurerm_policy_definition.vm_size.id
    
      subscription_id = data.azurerm_subscription.subscriptioncurrent.id
    }
    

    ✔ Any VM outside allowed list → blocked
    ✔ Governance enforced at subscription level


    Step 4 – Create Allowed Location Policy

    Finally, we restrict deployments only to:

    • eastus
    • westus
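
    The policy below references var.location, assumed to be declared the same way as the other variables:

    variable "location" {
      type    = list(string)
      default = ["eastus", "westus"]
    }
    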

    Policy Definition – policy3.tf

    resource "azurerm_policy_definition" "location" {
    
      name         = "location"
      policy_type  = "Custom"
      mode         = "All"
      display_name = "Allowed location policy"
    
      policy_rule = jsonencode({
        if = {
          field = "location"
    
          notIn = [
            var.location[0],
            var.location[1]
          ]
        }
    
        then = {
          effect = "deny"
        }
      })
    }
    

    Assign Location Policy

    resource "azurerm_subscription_policy_assignment" "loc_assign" {
    
      name = "location-assignment"
    
      policy_definition_id = azurerm_policy_definition.location.id
    
      subscription_id = data.azurerm_subscription.subscriptioncurrent.id
    }
    

    ✔ Resources in other regions → denied
    ✔ Standardized deployment geography


    Final Outcome of This Mini Project

    Using Terraform + Azure Policy we achieved:

    ✔ Mandatory tagging for all resources
    ✔ Standard VM sizes enforced
    ✔ Controlled allowed regions
    ✔ Governance at subscription level
    ✔ Fully automated with IaC

    This approach is ideal for:

    • Enterprise governance
    • Cost control
    • Security compliance
    • Standardization across teams
  • 9 – Terraform Provisioners in Azure: Local-Exec vs Remote-Exec vs File Provisioner (Hands-On Guide)

    When I started learning Terraform, I wondered:

    Terraform can create infrastructure… but how do we run scripts, install software, or copy files after a VM is created?

    That is where Terraform Provisioners come into the picture.

    In this hands-on mini project I implemented:

    • Local-Exec Provisioner
    • Remote-Exec Provisioner
    • File Provisioner

    and understood their real purpose, limitations, and practical usage.

    Table of Contents

    1. Project Goal
    2. Architecture Overview
    3. Step 1 – Create Core Azure Infrastructure
    4. Step 2 – Create VM and Verify SSH
    5. Step 3 – Local-Exec Provisioner
    6. Step 4 – Remote-Exec Provisioner
    7. Debug Steps and Errors Faced
    8. Step 5 – File Provisioner
    9. Understanding Provisioners
    10. Important Reality
    11. Final Learning Outcome

    Project Goal

    Build an Azure Linux VM using Terraform and:

    1. Run a command on my local PC during deployment
    2. Install Nginx inside the VM automatically
    3. Copy a configuration file from my laptop to the VM

    Architecture Overview

    The infrastructure consists of:

    • Resource Group
    • Virtual Network and Subnet
    • Network Security Group (SSH + HTTP)
    • Public IP
    • Network Interface
    • Linux Virtual Machine

    Step 1 – Create Core Azure Infrastructure

    Resource Group

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro878933"
      location = "Central US"
    }
    

    Virtual Network & Subnet

    resource "azurerm_virtual_network" "vnet" {
      name                = "vnetminipro7678678"
      address_space       = ["10.0.0.0/16"]
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    }
    

    Network Security Group

    Inbound rules were added to allow:

    • Port 22 β†’ SSH
    • Port 80 β†’ HTTP
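
    The NSG code itself is not shown above; a minimal sketch of what it could look like (resource and rule names are illustrative, and the NSG would still need to be associated with the subnet or NIC):

    resource "azurerm_network_security_group" "nsg" {
      name                = "nsgminipro"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name

      security_rule {
        name                       = "Allow-SSH"
        priority                   = 100
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }

      security_rule {
        name                       = "Allow-HTTP"
        priority                   = 110
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "80"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }
    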

    Step 2 – Create VM and Verify SSH

    Generate SSH Keys

    ssh-keygen -t rsa -b 4096
    

    Create Linux VM

    The VM was created using azurerm_linux_virtual_machine with SSH key authentication.

    Test Connection

    ssh -i key1 azureuser@<public-ip>
    

    ✔ SSH login successful.


    Step 3 – Local-Exec Provisioner

    What Local-Exec Means

    Local-exec runs a command on the machine where Terraform is executed,
    NOT inside the Azure VM.

    Implementation

    provisioner "local-exec" {
      command = "echo Deployment started at ${timestamp()} > deployment.log"
    }
    

    Result

    A file deployment.log was created on my laptop – proof that the command executed locally.
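
    For completeness: a provisioner block cannot stand alone; it must be nested inside a resource. A minimal, hedged sketch using a null_resource wrapper (requires the hashicorp/null provider; the resource name is illustrative):

    resource "null_resource" "log_deploy" {
      # runs on the machine executing terraform apply
      provisioner "local-exec" {
        command = "echo Deployment started at ${timestamp()} > deployment.log"
      }
    }
    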

    Real-World Uses

    • Trigger Ansible after Terraform
    • Call REST API or webhook
    • Notify Slack/Email
    • Generate inventory files
    • Write audit logs

    Step 4 – Remote-Exec Provisioner

    Purpose

    Run commands inside the VM after creation.

    Goal

    Install Nginx and deploy a simple webpage automatically.

    Implementation

    provisioner "remote-exec" {
      inline = [
        "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 2; done",
        "sudo apt-get update -y",
        "sudo apt-get install -y nginx",
        "echo '<h1>Terraform Provisioner Demo Working!</h1>' | sudo tee /var/www/html/index.html",
        "sudo systemctl restart nginx"
      ]
    }
    
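    One detail worth noting: remote-exec (and the file provisioner in Step 5) also needs a connection block so Terraform knows how to SSH into the VM. A sketch, assuming the key pair generated earlier and a hypothetical public IP resource name:

    connection {
      type        = "ssh"
      host        = azurerm_public_ip.pip.ip_address # hypothetical resource name
      user        = "azureuser"
      private_key = file("key1")
    }
    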

    Result

    Opening:

    👉 http://<public-ip>/

    displayed the custom webpage ✔

    Debug Lesson

    Initially nginx was not installed because:

    • VM was not fully ready
    • apt was locked by cloud-init

    Adding a wait for:

    /var/lib/cloud/instance/boot-finished
    

    fixed the issue.

    Debug Steps and Errors Faced

    While implementing this project, I faced several real-world issues. These are the exact steps that helped me troubleshoot.

    SSH Key Permission Issue on Windows

    Azure SSH login failed initially because Windows was treating the private key as insecure.

    Fix: Restrict key permissions in PowerShell

    icacls <key file path> /inheritance:r
    icacls <key file path> /grant:r "$($env:USERNAME):(R)"
    icacls <key file path> /remove "Authenticated Users" "BUILTIN\Users" "Everyone"
    

    After this, SSH worked correctly:

    ssh -i <key file path> azureuser@<public ip>
    

    Important: The key must be stored on an NTFS formatted drive (not FAT/external USB) for permissions to work.


    Web Page Not Loading After Remote-Exec

    Even though Terraform apply was successful, the browser showed:

    ERR_CONNECTION_REFUSED

    Debug Steps Inside VM

    1. SSH into the VM
    ssh -i key1 azureuser@<public-ip>
    
    2. Check if nginx is installed
    which nginx
    sudo systemctl status nginx
    
    3. Test locally inside VM
    curl http://localhost
    

    Root Cause

    • Remote-exec ran before the VM was fully ready
    • cloud-init was still configuring the system
    • apt was locked at the time of execution

    Fix Implemented

    Added wait for cloud-init before installing nginx:

    while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 2; done
    

    After this change, the webpage loaded correctly.


    Lesson Learned

    Terraform showing "Apply complete" does not always mean:

    • Software is installed
    • Services are running
    • VM is fully ready

    Provisioners need proper waiting and validation logic.


    Step 5 – File Provisioner

    Purpose

    Copy files from local machine → VM.

    Implementation

    provisioner "file" {
      source      = "configs/sample.conf"
      destination = "/home/azureuser/sample.conf"
    }
    

    Verification in VM

    ls -l /home/azureuser
    cat sample.conf
    

    ✔ File successfully transferred.


    Understanding Provisioners

    Local-Exec

    • Runs on local computer
    • Used for logs, notifications, triggers

    Remote-Exec

    • Runs inside the VM
    • Installs software, configures OS

    File Provisioner

    • Copies files to remote system

    Important Reality

    Terraform provisioners are:

    • ❌ Not guaranteed
    • ❌ Not idempotent
    • ❌ Not recommended for production

    Better Alternatives

    • cloud-init
    • Custom VM images
    • Ansible
    • Azure VM Extensions
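
    Of these, cloud-init is the most direct replacement for the remote-exec step above. A hedged sketch: the custom_data argument of azurerm_linux_virtual_machine accepts a base64-encoded cloud-init file, so the VM installs nginx on first boot without any SSH from Terraform:

    resource "azurerm_linux_virtual_machine" "vm" {
      # ... other VM arguments ...

      # cloud-init runs once at first boot, inside the VM
      custom_data = base64encode(<<-EOT
        #cloud-config
        packages:
          - nginx
        runcmd:
          - systemctl enable --now nginx
      EOT
      )
    }
    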

    Final Learning Outcome

    This mini project helped me understand:

    • How Terraform builds infrastructure
    • Difference between the 3 provisioners
    • Debugging real deployment issues
    • Basic Linux + Azure networking

    It connected multiple skills:

    Terraform + Azure + Linux + Automation

  • 8 – 🚀 Deploy Azure Functions with Terraform – QR Code Generator Mini Project (Step-by-Step)

    In this post, I'll walk you through a complete, working mini project where we deploy an Azure Linux Function App using Terraform and then deploy a Node.js QR Code Generator function using Azure Functions Core Tools.

    This is not just theory – this is exactly what I built, debugged, fixed, and verified end-to-end. I'll also call out the gotchas I hit (especially in Step 2), so you don't lose hours troubleshooting the same issues.

    Table of Contents

    1. 🔹 What We Are Building
    2. 🧱 Step 1: Create Core Azure Infrastructure with Terraform
    3. ⚙️ Step 2: Create the Linux Function App (Most Important Step)
    4. 📦 Step 3: Prepare the QR Code Generator App
    5. 🔐 Add local.settings.json (Local Only)
    6. 🚫 Add .funcignore
    7. 🛠 Install Azure Functions Core Tools (Windows)
    8. 🚀 Deploy the Function Code
    9. 🧪 Step 4: Test the Function End-to-End
    10. ✅ What This Demo Proves
    11. 🧠 Final Notes
    12. 🎯 Conclusion

    🔹 What We Are Building

    • Azure Resource Group
    • Azure Storage Account
    • Azure App Service Plan (Linux)
    • Azure Linux Function App (Node.js 18)
    • A Node.js HTTP-triggered Azure Function that:
      • Accepts a URL
      • Generates a QR code
      • Stores the QR image in Azure Blob Storage
      • Returns the QR image URL as JSON

    🧱 Step 1: Create Core Azure Infrastructure with Terraform

    In this step, we create the base infrastructure required for Azure Functions.

    Resource Group (rg.tf)

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro767676233"
      location = "Central US"
    }
    

    Storage Account (sa.tf)

    Azure Functions require a storage account for:

    • Function state
    • Logs
    • Triggers
    • Blob output (our QR codes)

    resource "azurerm_storage_account" "sa" {
      name                     = "saminipro7833430909"
      resource_group_name      = azurerm_resource_group.rg.name
      location                 = azurerm_resource_group.rg.location
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }
    

    ⚠️ Storage account names must be globally unique and lowercase.

    App Service Plan (splan.tf)

    This defines the compute for the Function App.

    resource "azurerm_service_plan" "splan" {
      name                = "splanminipro8787"
      resource_group_name = azurerm_resource_group.rg.name
      location            = azurerm_resource_group.rg.location
      os_type             = "Linux"
      sku_name            = "B1"
    }
    

    Apply Terraform

    terraform apply
    

    βœ… Verify in Azure Portal:

    • Resource Group created
    • Storage Account exists
    • App Service Plan is Linux (B1)

    βš™οΈ Step 2: Create the Linux Function App (Most Important Step)

    This step required multiple fixes for the app to actually run, so pay close attention.

    Linux Function App (linuxfa.tf)

    resource "azurerm_linux_function_app" "linuxfa" {
      name                = "linuxfaminipro8932340"
      resource_group_name = azurerm_resource_group.rg.name
      location            = azurerm_resource_group.rg.location
    
      storage_account_name       = azurerm_storage_account.sa.name
      storage_account_access_key = azurerm_storage_account.sa.primary_access_key
      service_plan_id            = azurerm_service_plan.splan.id
    
      app_settings = {
        FUNCTIONS_WORKER_RUNTIME = "node"
    
        # Required by Azure Functions runtime
        AzureWebJobsStorage = azurerm_storage_account.sa.primary_connection_string
    
        # Used by our application code
        STORAGE_CONNECTION_STRING = azurerm_storage_account.sa.primary_connection_string
    
        # Ensures package-based deployment
        WEBSITE_RUN_FROM_PACKAGE = "1"
      }
    
      site_config {
        application_stack {
          node_version = "18"
        }
      }
    }
    

    Why Each Setting Matters

    • FUNCTIONS_WORKER_RUNTIME
      • Tells Azure this is a Node.js function app
    • AzureWebJobsStorage
      • Mandatory for Azure Functions to start
    • STORAGE_CONNECTION_STRING
      • Used by our QR code logic to upload images
    • WEBSITE_RUN_FROM_PACKAGE
      • Ensures consistent zip/package deployment
    • node_version = 18
      • Must match your app runtime
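
    To avoid hunting for the invoke URL in the portal, an output can also expose the app's hostname (default_hostname is exported by azurerm_linux_function_app):

    output "function_app_url" {
      value = "https://${azurerm_linux_function_app.linuxfa.default_hostname}"
    }
    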

    Apply Terraform Again

    terraform apply
    

    βœ… Verify in Azure Portal:

    • Function App is Running
    • Runtime stack shows Node.js 18
    • No startup errors

    📦 Step 3: Prepare the QR Code Generator App

    Download the App

    Clone or download the QR code generator repository:

    git clone https://github.com/rishabkumar7/azure-qr-code
    

    Navigate to the function root directory (where host.json exists).

    Run npm install

    npm install
    

    This creates the node_modules folder – without this, the function will fail at runtime.

    Expected Folder Structure

    qrCodeGenerator/
    │
    ├── GenerateQRCode/
    │   ├── index.js
    │   └── function.json
    │
    ├── host.json
    ├── package.json
    ├── package-lock.json
    ├── node_modules/
    

    πŸ” Add local.settings.json (Local Only)

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "<Storage Account Connection String>",
        "FUNCTIONS_WORKER_RUNTIME": "node"
      }
    }
    

    ❗ This file is NOT deployed to Azure and should never be committed.


    🚫 Add .funcignore

    This controls what gets deployed.

    .git*
    .vscode
    local.settings.json
    test
    getting_started.md
    *.js.map
    *.ts
    node_modules/@types/
    node_modules/azure-functions-core-tools/
    node_modules/typescript/
    

    ✅ We keep node_modules because this project depends on native Node packages.


    🛠 Install Azure Functions Core Tools (Windows)

    winget install Microsoft.Azure.FunctionsCoreTools
    

    Restart PowerShell and verify:

    func -v
    

    🚀 Deploy the Function Code

    Navigate to the directory where host.json exists:

    cd path/to/qrCodeGenerator
    

    Publish the function:

    func azure functionapp publish linuxfaminipro8932340 --javascript --force
    

    Successful Output Looks Like This

    Upload completed successfully.
    Deployment completed successfully.
    Functions in linuxfaminipro8932340:
        GenerateQRCode - [httpTrigger]
            Invoke url: https://linuxfaminipro8932340.azurewebsites.net/api/generateqrcode
    

    🧪 Step 4: Test the Function End-to-End

    Invoke the Function

    https://linuxfaminipro8932340.azurewebsites.net/api/generateqrcode?url=https://example.com
    

    Sample Response

    {
      "qr_code_url": "https://saminipro7833430909.blob.core.windows.net/qr-codes/example.com.png"
    }
    

    Download the QR Code

    Open the returned Blob URL in your browser:

    https://saminipro7833430909.blob.core.windows.net/qr-codes/example.com.png
    

    🎉 You'll see the QR code image stored in Azure Blob Storage.


    ✅ What This Demo Proves

    • Terraform successfully provisions Azure Functions infrastructure
    • App settings are critical for runtime stability
    • Azure Functions Core Tools deploy code from the current directory
    • Missing npm install causes runtime failures
    • Blob Storage integration works end-to-end
    • Azure Functions can be tested via simple HTTP requests

    🧠 Final Notes

    • Warnings about extension bundle versions were intentionally ignored
    • This demo focuses on learning Terraform + Azure Functions, not production hardening
    • In real projects, code deployment is usually handled via CI/CD pipelines

    🎯 Conclusion

    This mini project demonstrates how Infrastructure as Code (Terraform) and Serverless (Azure Functions) work together in a practical, real-world scenario.

    If you can build and debug this, you're well on your way to mastering Azure + Terraform.

    Happy learning 🚀

  • 7 – 🚀 Azure App Service with Terraform – Blue-Green Deployment Step-by-Step

    Blue-green deployment is a release strategy that lets you ship new versions of your app with near-zero downtime and low risk. Instead of updating your live app directly, you run two environments side-by-side and switch traffic between them.

    In this guide, I'll walk you through how I implemented blue-green deployment on Azure using Terraform and simple HTML apps. This is written for beginners and focuses on understanding why we do each step – not just what to type.

    Table of Contents

    1. 🧠 What Is Blue-Green Deployment (Simple Explanation)
    2. 🎯 What We Will Build
    3. 📌 Prerequisites
    4. 🏗️ Step 1 – Create Resource Group, App Service Plan & App Service
    5. 🔁 Step 2 – Create a Staging Slot
    6. 🌈 Step 3 – Deploy Blue & Green Apps
    7. 🔄 Step 4 – Slot Swapping (The Core of Blue-Green)
    8. 🔙 How to Swap Back
    9. 🏢 How Companies Do This in Real Life
    10. 📌 Key Lessons
    11. 🧹 Cleanup
    12. 🚀 Final Thoughts

    🧠 What Is Blue-Green Deployment (Simple Explanation)

    Imagine:

    • Blue = current live version
    • Green = new version

    Users only see one version at a time.

    You:

    1. Deploy the new version to Green
    2. Test it safely
    3. Swap Green → Production
    4. Instantly roll back if needed

    No downtime. No risky in-place updates.

    Azure App Service deployment slots make this easy.


    🎯 What We Will Build

    We will:

    ✅ Create Azure infrastructure with Terraform
    ✅ Create a staging slot
    ✅ Deploy two app versions (Blue & Green)
    ✅ Swap them using Terraform
    ✅ Understand how real companies do this


    📌 Prerequisites

    You should have:

    • Azure subscription
    • Terraform (by HashiCorp) installed
    • Azure CLI installed
    • Logged in using az login
    • Basic Terraform knowledge

    πŸ—οΈ Step 1 β€” Create Resource Group, App Service Plan & App Service

    Why these resources?

    Resource Group
    Container that holds everything.

    App Service Plan
    Defines pricing tier, performance, and features.
    Deployment slots require Standard tier or higher.

    App Service
    Your actual web app.


    rg.tf

    resource "azurerm_resource_group" "rg" {
      name = "rgminipro87897"
      location = "Central US"
    }
    

    asplan.tf

    resource "azurerm_app_service_plan" "asp" {
      name = "aspminipro8972"
      location = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      sku {
        tier = "Standard"
        size = "S1"
      }
    }
    

    👉 Why S1?
    Slots are unavailable in Free/Basic tiers.


    appservice.tf

    resource "azurerm_app_service" "as" {
      name = "appserviceminipro87897987233"
      location = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      app_service_plan_id = azurerm_app_service_plan.asp.id
    }
    
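    Optionally, an output can surface the production URL for the verification step below (default_site_hostname is exported by azurerm_app_service; the slot resource created later exposes the same attribute):

    output "production_url" {
      value = "https://${azurerm_app_service.as.default_site_hostname}"
    }
    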

    ▶ Run Terraform

    terraform init
    terraform apply
    

    ✅ Verify

    Open the app URL in a browser.
    You'll see a default Azure page – that means infrastructure works.


    πŸ” Step 2 β€” Create a Staging Slot

    A deployment slot is a second live version of your app with its own URL.

    Think of it as a testing environment running inside the same App Service.


    slot.tf

    resource "azurerm_app_service_slot" "slot" {
      name = "slotstagingminipro78623"
      location = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      app_service_plan_id = azurerm_app_service_plan.asp.id
      app_service_name = azurerm_app_service.as.name
    }
    

    ▶ Apply

    terraform apply
    

    ✅ Verify in Azure

    You will see:

    • Production slot
    • Staging slot
    • Traffic: 100% production, 0% staging

    👉 This is normal – staging is for testing.


    🌈 Step 3 – Deploy Blue & Green Apps

    Terraform builds infrastructure.
    We use Azure CLI to deploy app code.

    (That's also how real companies separate infra and app deployments.)


    Blue Version (Production)

    Create:

    <h1 style="background:blue;color:white;">BLUE VERSION</h1>
    

    Zip with index.html at root → blueapp.zip


    Green Version (Staging)

    <h1 style="background:green;color:white;">GREEN VERSION</h1>
    

    Zip → greenapp.zip


    Deploy Using Microsoft Azure CLI

    Blue → Production

    az webapp deploy \
      --resource-group rgminipro87897 \
      --name appserviceminipro87897987233 \
      --src-path blueapp.zip \
      --type zip
    

    Green → Staging

    az webapp deploy \
      --resource-group rgminipro87897 \
      --name appserviceminipro87897987233 \
      --slot slotstagingminipro78623 \
      --src-path greenapp.zip \
      --type zip
    

    ✅ Verify

    Production URL → Blue
    Staging URL → Green

    Perfect setup!


    🔄 Step 4 – Slot Swapping (The Core of Blue-Green)

    Now we swap environments.


    swap.tf

    resource "azurerm_web_app_active_slot" "swap" {
      slot_id = azurerm_app_service_slot.slot.id
    }
    

    ▶ Apply

    terraform apply
    

    🎉 Result

    Now:

    Production → Green
    Staging → Blue

    You just performed a blue-green deployment!


    🔙 How to Swap Back

    Terraform won't auto-reverse swaps.

    Use Azure CLI:

    az webapp deployment slot swap \
      --resource-group rgminipro87897 \
      --name appserviceminipro87897987233 \
      --slot slotstagingminipro78623 \
      --target-slot production
    

    🏢 How Companies Do This in Real Life

    In real projects:

    Terraform
    → Creates infrastructure

    CI/CD pipelines
    → Deploy apps & swap slots

    Why?

    Because swapping affects real users and needs:

    • Testing
    • Approval
    • Monitoring
    • Rollback strategy

    Common tools:

    • GitHub Actions
    • Azure DevOps
    • Jenkins

    📌 Key Lessons

    You learned:

    ✔ App Service basics
    ✔ Deployment slots
    ✔ Blue-green strategy
    ✔ Terraform infrastructure setup
    ✔ CLI deployment
    ✔ Slot swapping logic
    ✔ Real-world DevOps workflow


    🧹 Cleanup

    Avoid charges:

    terraform destroy
    

    🚀 Final Thoughts

    Blue-green deployment is a core DevOps skill.
    Mastering it early gives you a big advantage.

    This small demo mirrors how production systems reduce risk during releases.

  • 6 – Terraform + Azure Entra ID Mini Project: Step-by-Step Beginner Guide (Users & Groups from CSV)

    Table of Contents

    1. Terraform + Azure Entra ID Mini Project: Step-by-Step Beginner Guide (Users & Groups from CSV)
    2. 🎯 What We're Building
    3. 🟢 Step 1 – Configure Provider & Fetch Domain
    4. 🟢 Step 2 – Test CSV Reading
    5. 🟢 Step 3 – Create ONE Test User
    6. 🟢 Step 4 – Create Users from CSV
    7. 🟢 Step 5 – Create Group & Add Members
    8. 🧠 Key Beginner Lessons
    9. 🚀 What You Can Try Next
    10. 🎉 Final Thoughts

    Terraform + Azure Entra ID Mini Project: Step-by-Step Beginner Guide (Users & Groups from CSV)

    In this mini project, I automated user and group management in Microsoft Entra ID using Terraform.

    Instead of creating infrastructure like VMs or VNets, we manage:

    • 👤 Users
    • 👥 Groups
    • 🔗 Group memberships

    I followed my instructor's tutorial but implemented it in my own small, testable steps. This blog shows exactly how you can do the same and debug easily as a beginner.


    🎯 What We're Building

    We will:

    ✅ Fetch our tenant domain
    ✅ Read users from a CSV file
    ✅ Create Entra ID users from CSV
    ✅ Detect duplicate usernames
    ✅ Create a group
    ✅ Add users to the group based on department


    🟢 Step 1 – Configure Provider & Fetch Domain

    azadprovider.tf

    terraform {
      required_providers {
        azuread = {
          source  = "hashicorp/azuread"
          version = "2.41.0"
        }
      }
    }
    

    👉 This tells Terraform to use the Azure AD provider.


    domainfetch.tf

    data "azuread_domains" "tenant" {
      only_initial = true
    }
    
    output "domain" {
      value = data.azuread_domains.tenant.domains.0.domain_name
    }
    

    Run

    terraform init
    terraform apply
    

    Verify

    You should see:

    domain = "yourtenant.onmicrosoft.com"
    

    ✅ Now Terraform can build valid usernames.


    🟢 Step 2 – Test CSV Reading

    locals {
      users = csvdecode(file("users.csv"))
    }
    
    output "users_debug" {
      value = local.users
    }
    

    Why?

    Before creating users, confirm Terraform reads the CSV correctly.
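
    The column names matter: the expressions later in this guide assume first_name, last_name, and department headers. A small sample users.csv under that assumption (names are illustrative):

    first_name,last_name,department
    Michael,Scott,Education
    Jim,Halpert,Sales
    Pam,Beesly,Education
    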

    Run

    terraform plan
    

    You should see structured user data printed.

    ✅ If this fails → your CSV format is wrong.


    🟢 Step 3 – Create ONE Test User

    Always test with one user first.

    resource "azuread_user" "testuserminipro867" {
      user_principal_name = "testuserminipro867@yourdomain.onmicrosoft.com"
      display_name = "Test User"
      password = "Password123!"
    }
    

    Verify in Portal

    Entra ID → Users → Confirm creation.

    ✅ Works? Good.
    Then comment it out.


    🟢 Step 4 – Create Users from CSV

    Now we automate.


    Generate UPNs

    locals {
      upns = [
        for u in local.users :
        lower("${u.first_name}.${u.last_name}@${data.azuread_domains.tenant.domains[0].domain_name}")
      ]
    }
    

    👉 Creates usernames like:

    michael.scott@tenant.onmicrosoft.com
    

    Detect Duplicates

    output "duplicate_check" {
      value = (
        length(local.upns) != length(distinct(local.upns))
        ? "❌ DUPLICATES FOUND"
        : "✅ No duplicates"
      )
    }
    

    💡 Beginner Tip:
    Duplicate usernames will break Terraform – always check first!


    Preview Planned Users

    output "planned_users" {
      value = local.upns
    }
    

    Create Users

    resource "azuread_user" "users" {
    
      for_each = {
        for idx, user in local.users :
        local.upns[idx] => user
      }
    
      user_principal_name = each.key
      display_name = "${each.value.first_name} ${each.value.last_name}"
      mail_nickname = lower("${each.value.first_name}${each.value.last_name}")
    
      department = each.value.department
      password = "Password123!"
    }
    

    Apply

    terraform apply
    

    Verify

    Check Entra ID β†’ Users.

    ✅ Users created automatically!


    🔥 Important Learning

    If you change the CSV later:

    Terraform will
    ✔ create new users
    ✔ update existing users
    ✔ remove deleted users

    👉 This is Terraform's desired state model in action.


    🟢 Step 5 – Create Group & Add Members


    Create Group

    resource "azuread_group" "test_group" {
      display_name = "Test Group"
      security_enabled = true
    }
    

    Add Members by Department

    resource "azuread_group_member" "education" {
    
      for_each = {
        for u in azuread_user.users :
        u.mail_nickname => u
        if u.department == "Education"
      }
    
      group_object_id = azuread_group.test_group.id
      member_object_id = each.value.id
    }
    

    Apply

    terraform apply
    

    Verify

    Portal β†’ Groups β†’ Members tab

    ✅ Only Education department users added.


    🧠 Key Beginner Lessons

    ✅ Work in Small Steps

    Don't deploy everything at once.


    ✅ Always Check Data First

    Validate CSV before creating resources.


    ✅ Use Outputs for Debugging

    Outputs save hours of troubleshooting.


    ✅ Terraform is Declarative

    It maintains the desired state automatically.


    🚀 What You Can Try Next

    👉 Add more users to CSV
    👉 Create groups by job title
    👉 Use Service Principal authentication
    👉 Generate random passwords
    👉 Assign roles to groups
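
    For the random-passwords idea, the hashicorp/random provider can generate one password per user. A hedged sketch (resource name is illustrative):

    resource "random_password" "pw" {
      for_each = toset(local.upns)

      length  = 16
      special = true
    }
    

    Inside the azuread_user resource, password = "Password123!" could then become random_password.pw[each.key].result.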


    🎉 Final Thoughts

    This project shows how powerful Terraform is beyond infrastructure – it can manage identity too.

    If you're learning cloud or DevOps, this skill is extremely valuable because real organizations manage thousands of users and groups.

    Start small, test often, and build confidence step-by-step – exactly like you did here.

  • 5 – Azure VNet Peering: A Real-World Terraform Mini Project to Build a Secure Cloud Network

    In this mini project, I implemented Azure VNet peering using Terraform, but instead of applying everything at once, I deliberately broke the setup into small, testable steps.
    This approach makes it much easier to understand what's happening, catch mistakes early, and build real confidence with Terraform and Azure networking.

    Below is the exact flow I followed – and you can follow the same steps as a beginner.

    Table of Contents

    1. Step 1: Create the Resource Group, Virtual Networks, and Subnets
    2. Step 2: Create VM1 in Subnet 1 (via a NIC)
    3. Step 3: Create VM2 in Subnet 2
    4. Step 4: Test Connectivity Before Peering (Expected to Fail)
    5. Step 5: Add VNet Peering (Both Directions)
    6. Step 6: Test Connectivity After Peering (Expected to Work)
    7. Key Takeaways for Beginners
    8. Why This Step-by-Step Approach Matters

    Step 1: Create the Resource Group, Virtual Networks, and Subnets

    We start by creating the network foundation:

    • One resource group
    • Two separate virtual networks
    • One subnet inside each virtual network

    At this stage, there is no connectivity between the networks.

    What we created

    • vnet1 → address space 10.0.0.0/16
    • vnet2 → address space 10.1.0.0/16
    • One /24 subnet in each VNet

    resource "azurerm_resource_group" "rg" {
      name     = "rgminipro76876"
      location = "Central US"
    }
    
    resource "azurerm_virtual_network" "vnet1" {
      name                = "vnet1minipro8768"
      location            = azurerm_resource_group.rg.location
      address_space       = ["10.0.0.0/16"]
      resource_group_name = azurerm_resource_group.rg.name
    }
    
    resource "azurerm_subnet" "sn1" {
      name                 = "subnet1minipro878"
      resource_group_name  = azurerm_resource_group.rg.name
      virtual_network_name = azurerm_virtual_network.vnet1.name
      address_prefixes     = ["10.0.0.0/24"]
    }
    
    resource "azurerm_virtual_network" "vnet2" {
      name                = "vnet2minipro8768"
      location            = azurerm_resource_group.rg.location
      address_space       = ["10.1.0.0/16"]
      resource_group_name = azurerm_resource_group.rg.name
    }
    
    resource "azurerm_subnet" "sn2" {
      name                 = "subnet2minipro878"
      resource_group_name  = azurerm_resource_group.rg.name
      virtual_network_name = azurerm_virtual_network.vnet2.name
      address_prefixes     = ["10.1.0.0/24"]
    }
    

    How to verify

    • Run terraform apply
    • Open Azure Portal
    • Confirm:
      • Both VNets exist
      • Each VNet has its own subnet
      • Address spaces do not overlap

    At this point, nothing can talk to anything else yet – and that's expected.


    Step 2: Create VM1 in Subnet 1 (via a NIC)

    In Azure, VMs don’t live directly inside subnets.
    Instead, a Network Interface (NIC) is placed inside a subnet, and the VM attaches to that NIC.

    Here, we:

    • Create a NIC attached to subnet1
    • Create a VM that uses that NIC

    VM1 and NIC1

    resource "azurerm_network_interface" "nic1" {
      name                = "nic1minipro8789"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      ip_configuration {
        name                          = "ipconfignic1minipro989"
        subnet_id                     = azurerm_subnet.sn1.id
        private_ip_address_allocation = "Dynamic"
      }
    }
    
    resource "azurerm_virtual_machine" "vm1" {
      name                = "vm1minipro98908"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      network_interface_ids = [
        azurerm_network_interface.nic1.id
      ]
      vm_size = "Standard_D2s_v3"
    
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "Canonical"
        offer     = "0001-com-ubuntu-server-jammy"
        sku       = "22_04-lts"
        version   = "latest"
      }
    
      storage_os_disk {
        name              = "storageosdisk1"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Standard_LRS"
      }
    
      os_profile {
        computer_name  = "peer1vm"
        admin_username = "testadmin"
        admin_password = "Password1234!"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    }
    

    How to verify

    • Run terraform apply
    • In Azure Portal:
      • VM1 exists
      • NIC is attached
      • NIC is in subnet1
      • VM has no public IP
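
    Since the NIC uses Dynamic allocation, Azure picks the private IP for you. One detail worth knowing: Azure reserves the first four addresses and the last address of every subnet, so the first VM in a fresh /24 typically receives .4. A small sketch of the arithmetic (illustrative only):

    ```python
    import ipaddress

    subnet1 = ipaddress.ip_network("10.0.0.0/24")

    # Azure reserves 5 addresses per subnet: the network address,
    # .1 (default gateway), .2 and .3 (Azure DNS), and the broadcast address
    reserved = 5
    usable = subnet1.num_addresses - reserved
    print(usable)  # 251

    # The first address Azure typically hands out in a fresh subnet
    first_dynamic = subnet1.network_address + 4
    print(first_dynamic)  # 10.0.0.4
    ```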

    Step 3: Create VM2 in Subnet 2

    Now we repeat the same pattern for the second network:

    • NIC attached to subnet2
    • VM attached to that NIC

    resource "azurerm_network_interface" "nic2" {
      name                = "nic2minipro8789"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      ip_configuration {
        name                          = "ipconfignic2minipro989"
        subnet_id                     = azurerm_subnet.sn2.id
        private_ip_address_allocation = "Dynamic"
      }
    }
    
    resource "azurerm_virtual_machine" "vm2" {
      name                = "vm2minipro98908"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      network_interface_ids = [
        azurerm_network_interface.nic2.id
      ]
      vm_size = "Standard_D2s_v3"
    
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "Canonical"
        offer     = "0001-com-ubuntu-server-jammy"
        sku       = "22_04-lts"
        version   = "latest"
      }
    
      storage_os_disk {
        name              = "storageosdisk2"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Standard_LRS"
      }
    
      os_profile {
        computer_name  = "peer2vm"
        admin_username = "testadmin"
        admin_password = "Password1234!"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    }
    

    How to verify

    • Run terraform apply
    • Confirm:
      • VM2 exists
      • NIC2 is attached
      • NIC2 belongs to subnet2
      • VM2 also has no public IP

    Step 4: Test Connectivity Before Peering (Expected to Fail)

    Now we test whether the two VMs can communicate without peering.

    Because:

    • They are in different VNets
    • There is no peering
    • No public IPs

    They should not be able to communicate.

    How we tested

    Using Azure Run Command (no SSH or Bastion needed):

    • VM1 β†’ Operations β†’ Run command β†’ RunShellScript
    • Command (replace 10.1.0.x with VM2's actual private IP, shown on NIC2):
    ping -c 4 10.1.0.x
    

    Result

    4 packets transmitted, 0 received, 100% packet loss
    

    βœ… This is the correct and expected behavior
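
    If you script this check instead of eyeballing the Run Command output, the packet-loss figure can be parsed from ping's summary line. A hypothetical helper (the function name is my own, not part of the project):

    ```python
    import re

    def packet_loss(ping_output: str) -> float:
        """Extract the packet-loss percentage from ping's summary line."""
        match = re.search(r"(\d+(?:\.\d+)?)% packet loss", ping_output)
        if match is None:
            raise ValueError("no packet-loss summary found")
        return float(match.group(1))

    # Before peering: all packets dropped
    print(packet_loss("4 packets transmitted, 0 received, 100% packet loss"))  # 100.0
    # After peering: no loss
    print(packet_loss("4 packets transmitted, 4 received, 0% packet loss"))    # 0.0
    ```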


    Step 5: Add VNet Peering (Both Directions)

    VNet peering in Azure is not automatic.
    You must create two peering connections:

    • VNet1 β†’ VNet2
    • VNet2 β†’ VNet1

    resource "azurerm_virtual_network_peering" "peer1to2" {
      name                      = "peer1to2minipro455"
      resource_group_name       = azurerm_resource_group.rg.name
      virtual_network_name      = azurerm_virtual_network.vnet1.name
      remote_virtual_network_id = azurerm_virtual_network.vnet2.id
    }
    
    resource "azurerm_virtual_network_peering" "peer2to1" {
      name                      = "peer2to1minipro455"
      resource_group_name       = azurerm_resource_group.rg.name
      virtual_network_name      = azurerm_virtual_network.vnet2.name
      remote_virtual_network_id = azurerm_virtual_network.vnet1.id
    }
    

    How to verify

    • Run terraform apply
    • Azure Portal β†’ Virtual Networks β†’ Peering
    • Status should show Connected
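
    The "both directions" rule is easy to forget. As a sketch of the invariant (pure illustration, not Azure API code): given the peerings you have declared as (source, remote) pairs, every link should appear in both directions:

    ```python
    def is_symmetric(peerings: set) -> bool:
        """True only if every peering has a matching reverse peering."""
        return all((dst, src) in peerings for src, dst in peerings)

    # Peerings declared in the Terraform config above
    declared = {("vnet1", "vnet2"), ("vnet2", "vnet1")}

    print(is_symmetric(declared))              # True: traffic can flow both ways
    print(is_symmetric({("vnet1", "vnet2")}))  # False: only one direction declared
    ```

    With only one direction declared, Azure shows the peering as Initiated rather than Connected, and traffic does not flow.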

    Step 6: Test Connectivity After Peering (Expected to Work)

    Now we repeat the same test as before.

    ping -c 4 10.1.0.x
    

    Result

    4 packets transmitted, 4 received, 0% packet loss
    

    πŸŽ‰ Success!

    This proves:

    • VNet peering is working
    • Traffic stays on Azure’s private backbone
    • No public IPs are required

    Key Takeaways for Beginners

    • VMs communicate via NICs, not directly via subnets
    • VNets are isolated by default
    • Peering must be created in both directions
    • Always test:
      • ❌ Before peering
      • βœ… After peering
    • Applying Terraform in small steps makes debugging much easier

    Why This Step-by-Step Approach Matters

    Instead of running one giant terraform apply and hoping for the best, this method:

    • Builds real understanding
    • Makes Azure networking concepts visual
    • Helps you debug like a real DevOps engineer

    If you can do this project, you already understand:

    • VNets
    • Subnets
    • NICs
    • VM placement
    • VNet peering
    • Real-world network isolation

    That’s solid progress πŸ‘