Introduction
Provisioning and managing Kubernetes clusters can often be complex and time-consuming. Azure Kubernetes Service (AKS) simplifies this by providing a fully managed Kubernetes service in Azure. However, to automate the process of provisioning and configuring an AKS cluster, Pulumi—an open-source infrastructure-as-code (IaC) tool—provides a powerful and flexible solution.
Pulumi allows developers to use familiar programming languages like TypeScript, Python, Go, and C# to define, provision, and manage cloud infrastructure. It offers a modern approach to cloud provisioning, enabling you to write real code for your infrastructure and benefit from version control, automation, and testing.
In this hands-on guide, you will focus on the process of provisioning and configuring an AKS cluster on Azure using Pulumi. Here’s a high-level overview of the steps:
- Set up Environment with Azure & Pulumi
  - Install the Pulumi and Azure CLI tools on your local machine.
  - Set up Pulumi’s Azure provider, which will allow Pulumi to interact with Azure resources in your preferred programming language.
  - Organize the Pulumi project in a way that makes it easy to understand and maintain.
- Configure Kubernetes Resources
  - Configure the necessary Azure resources, such as a Resource Group, Virtual Network (VNet), and Azure Container Registry (ACR).
  - Specify details like node count, VM size, and networking options.
- Provision the AKS Cluster
  - Create a new Azure Kubernetes Service (AKS) cluster with Pulumi.
  - Use Pulumi’s `pulumi up` command to provision and deploy the entire infrastructure.
  - Monitor the output for real-time status updates and ensure that all resources are created successfully.
- Deploy Ingress Controller and Configure SSL/TLS
  - Deploy the NGINX Ingress Controller using the Helm chart with Pulumi.
  - Deploy a Let’s Encrypt certificate via cert-manager.
- Deploy Application with Pulumi
  - Once the AKS cluster is up and running, Pulumi can also be used to define Kubernetes resources like Deployments, Services, and Ingress.
  - Pulumi integrates with Kubernetes, allowing you to seamlessly manage both cloud and Kubernetes resources in a unified way.
- Clean Up
Application Architecture

Infrastructure Components
Resource Group
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.
Azure Kubernetes Service (AKS) Cluster
Azure Kubernetes Service (AKS) is a managed Kubernetes service that you can use to deploy and manage containerized applications.
Azure Container Registry (ACR)
Build, store, secure, scan, replicate, and manage container images and artifacts with a fully managed, geo-replicated instance.
Cost of provisioning and deploying
This guide provisions resources to an Azure subscription that you select during provisioning. Refer to the Pricing calculator for Microsoft Azure to estimate the cost you might incur while this hands-on guide is running on Azure and, if needed, update the included Azure resource definitions found in config.ts to suit your needs.
You can also see the Pulumi pricing at https://www.pulumi.com/pricing/
1. Set up Environment with Azure & Pulumi
Install the Pulumi and Azure CLI tools on your local machine
Azure Subscription
You will need an active Azure subscription to deploy the application components. The total cost of all resources you will create should be very close to $0. You can use your developer subscription or create a free Azure subscription here.
Be sure to clean up the resources after you complete the hands-on guide, as described in the last step.
Azure CLI
We will use the command-line interface (CLI) tool to log in to an Azure subscription and run some queries. You can install the CLI tool, as described here.
The tool is cross-platform: it should work on Windows, macOS, or Linux (including WSL).
After you complete the installation, open a command prompt and type az. You should see the welcome message:
$ az

     /\
    /  \    _____   _ _  ___ _
   / /\ \  |_  / | | | \'__/ _\
  / ____ \  / /| |_| | | |  __/
 /_/    \_\/___|\__,_|_|  \___|

Welcome to the cool new Azure CLI!

Use `az --version` to display the current version.
Now, login to your Azure account by typing az login and providing your credentials in the browser window. When this is done, type az account show:
$ az account show
{
  "environmentName": "AzureCloud",
  "id": "12345678-9abc-def0-1234-56789abcdef0",
  "isDefault": true,
  "name": "My Subscription Name",
  "state": "Enabled",
  "tenantId": "eeeeeee-eeee-eeee-eeee-eeeeeeeeeeee",
  "user": {
    "name": "name@example.com",
    "type": "user"
  }
}
If you have multiple subscriptions and the wrong one is shown, change the active subscription.
# List available subscriptions on the logged in account
az account list
# List available subscriptions for the tenant
az account subscription list
# Set a subscription to be the current active subscription
# Subscription ID is displayed by the previous command
az account set --subscription 0ad021f2-9dde-4cb1-8aa4-d71018aaeec8
Pulumi CLI
Pulumi provides a CLI tool that drives cloud deployments from the machine where it runs. It is a cross-platform executable that has to be accessible on the system’s PATH. You can also follow this guide to install the Pulumi CLI.
Run pulumi version, and you should get a response back:
$ pulumi version
v3.130.0
Node.js and TypeScript compiler
You will write Pulumi programs in TypeScript. They will be executed by Node.js behind the scenes.
It’s quite likely you already have Node installed. If not, navigate to the Node.js download page and install Node.js with npm.
If you have npm installed, you can install TypeScript globally on your computer with npm install -g typescript.
Text editor
Any text editor will do, but I recommend one with TypeScript syntax highlighting. The most common choice is Visual Studio Code.
Create a new Azure Pulumi Project
Now you have set up your environment by installing Pulumi, installing your preferred language runtime, and configuring your Azure credentials.
Pulumi supports several programming languages. The first step is to ask the CLI to bootstrap a basic TypeScript project.
Make a new folder called pulumi-aks-workshop anywhere on your local disk. By default, this folder gives a name to your Pulumi project.
Initialize the project
The pulumi new command creates a new Pulumi project with some basic scaffolding based on the cloud and language specified.
mkdir pulumi-aks-workshop && cd pulumi-aks-workshop && pulumi new azure-typescript
First, you will be asked for a project name and project description. Hit ENTER to accept the default values or specify new values.
This command will walk you through creating a new Pulumi project.
Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.
Project name (pulumi-aks-workshop):
Project description (A minimal Azure Native TypeScript Pulumi program):
Created project 'pulumi-aks-workshop'
Next, you will be asked for a stack name. Hit ENTER to accept the default value of dev.
Please enter your desired stack name.
To create a stack in an organization, use the format <org-name>/<stack-name> (e.g. `acmecorp/dev`).
Stack name (dev): dev
Created stack 'dev'
The package manager to use for installing dependencies [Use arrows to move, type to filter]
> npm
yarn
pnpm [not found]
You can choose a package manager such as npm or yarn. The default is npm. Hit ENTER to accept.
For Azure projects, you will be prompted for the Azure location. You can accept the default value of WestUS or choose another location. The Azure location will be stored in the Pulumi.dev.yaml file. I will choose southeastasia for now.
You can then change the region for your stack by using the pulumi config set command as shown below:
pulumi config set azure-native:location eastus
After some dependency installations, the project and stack will be ready.
Your new project is ready to go! ✨
To perform an initial deployment, run `pulumi up`
Install Dependencies
Ensure you have the required dependencies installed:
npm install @pulumi/kubernetes @pulumi/azure-native @pulumi/tls
Introduction to Pulumi Configuration
Pulumi uses configurations to manage environment-specific settings like credentials, resource options, or feature flags. This makes your infrastructure code adaptable and reusable across different environments (such as development, staging, and production) by applying different configurations for each.
Key Concepts in Pulumi Configuration
- Stack: A Pulumi stack is an isolated deployment environment for your infrastructure. You can have different stacks for different environments (e.g., `dev`, `staging`, `prod`).
- Configuration: Each stack can have its own configuration settings, such as region, instance types, or API keys. These configurations are stored securely and can be referenced within your Pulumi code.
- Secrets: Pulumi can securely manage sensitive information, like passwords or API keys, using encrypted secrets.
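For example, a minimal sketch of storing a secret from the CLI (the `dbPassword` key is hypothetical, for illustration only):

```shell
# Store a value encrypted in the stack's configuration file
pulumi config set --secret dbPassword S3cr3tV@lue

# In the configuration listing, the secret value is masked
pulumi config
```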
Pulumi allows you to define configuration values through the command line, which are stored in the stack’s configuration file (e.g., Pulumi.dev.yaml, Pulumi.prod.yaml).
You can set configuration values using the Pulumi CLI:
pulumi config set <key> <value>
For example, to set the region for an Azure deployment, you can run:
pulumi config set azure-native:location southeastasia
This will save the location to the stack’s configuration file.
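After that command, the stack configuration file contains an entry like this (a sketch; your file may hold additional values):

```yaml
# Pulumi.dev.yaml
config:
  azure-native:location: southeastasia
```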
You can view the current configuration for a stack:
pulumi config
Organizing Pulumi project
Within your Pulumi project, there are good practices that help keep your code organized, maintainable, and understandable.
Organize your code in a way that makes it easy to understand and maintain. One way to do this in TypeScript is to break out your code into separate files, and then import them into your main file.
This project looks a bit like this:
pulumi-aks-workshop
├── app
│   └── applicationdeployment.ts
├── resources
│   ├── akscluster.ts
│   ├── certmanager.ts
│   ├── containerregistry.ts
│   ├── namespaces.ts
│   ├── nginxingresscontroller.ts
│   └── virtualnetwork.ts
├── config.ts
├── index.ts
├── package-lock.json
├── package.json
├── Pulumi.yaml
├── Pulumi.dev.yaml
└── tsconfig.json
Let’s discuss them briefly:
- `app` – a sample app that you can deploy to the AKS cluster (e.g. kuard)
- `resources` – all the Azure resources for provisioning your system on Azure
- `config.ts` – reads all configurations from Pulumi
- `index.ts` – the Pulumi program that defines your stack resources
- `package.json` and `package-lock.json` – definitions of required npm dependencies
- `Pulumi.yaml` and `Pulumi.dev.yaml` – configuration values for the stack you initialized
- `tsconfig.json` – settings for the TypeScript compiler; to allow any type, you may need to set `"noImplicitAny"` to `false`
- `.gitignore` – Git exclusion list, not important for us
- `node_modules` – installed npm packages
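For reference, the `noImplicitAny` tweak mentioned above lives in the `compilerOptions` section of tsconfig.json (a sketch showing only the relevant field; the generated file contains more settings):

```json
{
  "compilerOptions": {
    "noImplicitAny": false
  }
}
```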
Now that you have your folder structure in place, you can create all the resources that the service needs.
2. Configure Kubernetes Resources
Create a Resource Group
Azure Resource Group is a container for resources that are deployed together. Every Azure resource must be assigned to a resource group.
You don’t need a storage account for now, so remove the code that creates the storage account and modify index.ts to look like this:
import * as pulumi from "@pulumi/pulumi";
import * as resources from "@pulumi/azure-native/resources";

// Create an Azure Resource Group
const resourceGroup = new resources.ResourceGroup("resourceGroup", {
    resourceGroupName: "pulumi-aks-rg"
});

export const resourceGroupName = resourceGroup.name;
Declaring a resource is just calling a constructor of the corresponding type. You assigned the new resource to the resourceGroup variable so that it can be referenced by other resources.
Note that each resource has two names: a logical one (first constructor argument) and a physical one (name property in the second argument). The logical name is visible in Pulumi console, while the physical name is the actual resource name in Azure. You could omit the name property: then a physical name would be automatically constructed as Logical Name + random suffix.
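For example, omitting the physical name lets Pulumi auto-name the Azure resource (a sketch; the generated suffix varies between deployments):

```typescript
import * as resources from "@pulumi/azure-native/resources";

// No resourceGroupName given: Azure receives an auto-generated name,
// i.e. the logical name "resourceGroup" plus a random suffix.
const autoNamedGroup = new resources.ResourceGroup("resourceGroup");
```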
The location of the resource group is set in the configuration setting azure-native:location (check the Pulumi.dev.yaml file). This is an easy way to set a global location for your program, so you don’t have to specify the location for each resource manually.
You changed the program—now it’s time to apply the change to the cloud infrastructure. Run the `pulumi up` command.
Instead of executing the changes immediately, Pulumi shows you a preview of the changes to happen:
$ pulumi up
Previewing update (dev)

     Type                                     Name                     Plan
 +   pulumi:pulumi:Stack                      pulumi-aks-workshop-dev  create
 +   └─ azure-native:resources:ResourceGroup  resourceGroup            create

Outputs:
  + resourceGroupName: output<string>

Resources:
    + 2 to create

Do you want to perform this update? [Use arrows to move, type to filter]
  yes
> no
  details
Select yes in the command prompt to execute the change:
Updating (dev)

     Type                                     Name                     Status
 +   pulumi:pulumi:Stack                      pulumi-aks-workshop-dev  created (6s)
 +   └─ azure-native:resources:ResourceGroup  resourceGroup            created (1s)

Outputs:
  + resourceGroupName: "pulumi-aks-rg"

Resources:
    + 2 created

Duration: 9s
Now make sure that your Resource Group was created successfully:
$ az group exists -n pulumi-aks-rg
true
Configure networking
Kubernetes employs a virtual networking layer to manage access within and between your applications or their components.
When creating an Azure Kubernetes Service (AKS) cluster, the networking model plays a crucial role in how Kubernetes pods communicate within the cluster and externally. In this hands-on guide, we will focus on setting up an AKS cluster that uses Azure CNI with dynamic IP allocation. The dynamic IP allocation capability in Azure CNI allocates pod IPs from a subnet separate from the subnet hosting the AKS cluster.
To isolate network traffic between nodes and pods, we’ll create separate subnets for each:
- Node Subnet: The subnet that AKS nodes (VMs) will use.
- Pod Subnet: The subnet from which IPs will be dynamically assigned to Kubernetes pods using the Azure CNI.
First, let’s read the Pulumi configuration. In the root folder, create a new config.ts file that uses the pulumi.Config object as shown below:
import * as pulumi from "@pulumi/pulumi";

// Configuration bag for the azure-native provider namespace
const azureConfig = new pulumi.Config("azure-native");

// Access a configuration value and export it for reuse
export const config = {
    location: azureConfig.require("location")
}
Then from the resources folder, you create a new file named virtualnetwork.ts and add the following code to create the virtual network with two subnets:
import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";
import { config } from "../config";

export const createVirtualNetWork = (resourceGroupName: pulumi.Input<string>) => {
    // Create a Virtual Network for the cluster.
    const vnet = new azure_native.network.VirtualNetwork("aksVNet", {
        addressSpace: {
            addressPrefixes: ["10.0.0.0/8"],
        },
        flowTimeoutInMinutes: 10,
        location: config.location,
        resourceGroupName: resourceGroupName
    });

    // Create a Node Subnet for the cluster.
    const nodeSubnet = new azure_native.network.Subnet("aksNodeSubnet", {
        addressPrefix: "10.240.0.0/16",
        resourceGroupName: resourceGroupName,
        subnetName: "aksNodeSubnet",
        virtualNetworkName: vnet.name,
    });

    // Create a Pod Subnet for the cluster.
    const podSubnet = new azure_native.network.Subnet("aksPodSubnet", {
        addressPrefix: "10.241.0.0/16",
        resourceGroupName: resourceGroupName,
        // Subnet delegation to Azure Kubernetes Service
        delegations: [{
            name: "aksDelegation",
            serviceName: "Microsoft.ContainerService/managedClusters", // AKS delegation
        }],
        subnetName: "aksPodSubnet",
        virtualNetworkName: vnet.name,
    });

    return {
        nodeSubnetId: nodeSubnet.id,
        podSubnetId: podSubnet.id,
    };
}
By using subnet delegation in Pulumi, you ensure that each Azure service, like Azure Kubernetes Service (AKS), is properly isolated and configured for optimal performance, while still benefiting from the flexibility of infrastructure as code. More details about subnet delegation can be found at https://learn.microsoft.com/en-us/azure/virtual-network/subnet-delegation-overview
When planning IP addresses for your AKS cluster, you should consider the number of IP addresses required for upgrade and scaling operations. If you set the IP address range to only support a fixed number of nodes, you won’t be able to upgrade or scale your cluster. See IP address sizing for more details.
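As a back-of-the-envelope check, the /16 pod subnet above comfortably covers the node counts used in this guide. Here is a small sketch of the arithmetic (illustrative only, not an official sizing formula):

```typescript
// Number of addresses in an IPv4 subnet of the given prefix length.
function addressesInSubnet(prefixLength: number): number {
  return 2 ** (32 - prefixLength);
}

// Pod IPs needed: every node can run up to maxPods pods, and an upgrade
// temporarily adds a surge node that also needs pod IPs.
function podIpsNeeded(nodeCount: number, maxPods: number, surgeNodes = 1): number {
  return (nodeCount + surgeNodes) * maxPods;
}

const podSubnetCapacity = addressesInSubnet(16); // 10.241.0.0/16 -> 65536 addresses
const needed = podIpsNeeded(3, 110);             // (3 + 1) * 110 = 440 addresses
console.log(podSubnetCapacity >= needed);        // plenty of headroom
```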
Next, update index.ts to create the new virtual network:
import * as resources from "@pulumi/azure-native/resources";
import { createVirtualNetWork } from "./resources/virtualnetwork";

// Create an Azure Resource Group
const resourceGroup = new resources.ResourceGroup("resourceGroup", {
    resourceGroupName: "pulumi-aks-rg"
});

// Create a new virtual network
const virtualNetworking = createVirtualNetWork(resourceGroup.name);

export const resourceGroupName = resourceGroup.name;
You have completed some good steps:
- Created a new config.ts file to access the Pulumi configuration
- Created a new virtual network with two subnets (for nodes and pods) in virtualnetwork.ts

Further reading:
- Use kubenet networking with IP address ranges in Azure Kubernetes Service: https://learn.microsoft.com/en-us/azure/aks/configure-kubenet
- Configure Azure CNI networking: https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni-dynamic-ip-allocation
- Understanding CIDR notation: https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/
Create Azure Container Registry (ACR)
Azure Container Registry (ACR) is a fully managed Docker container registry in Azure that allows you to store and manage container images for use in Azure Kubernetes Service (AKS), Azure App Service, and other containerized workloads. ACR supports private container registries, image scanning, vulnerability detection, and integration with Azure Active Directory (AAD) for secure access control.
ACR makes it easy to manage your containerized applications by offering scalable, secure, and private repositories for storing Docker images and OCI artifacts.
Using Pulumi, you can define, deploy, and manage an ACR instance as part of your infrastructure code.
In the resources folder, create a new file named containerregistry.ts and add the code below to create an Azure Container Registry using TypeScript:
// resources/containerregistry.ts
import * as azure from "@pulumi/azure-native";
import * as pulumi from "@pulumi/pulumi";
import { config } from "../config";

export const createContainerRegistry = (resourceGroupName: pulumi.Input<string>) => {
    // Create the Azure Container Registry (ACR)
    const containerRegistry = new azure.containerregistry.Registry("aksregistry", {
        resourceGroupName: resourceGroupName, // Reference the resource group
        sku: {
            name: "Standard", // ACR pricing tier: Basic, Standard, or Premium
        },
        adminUserEnabled: true, // Optional: enable the admin user (useful for simple scenarios)
        location: config.location // Same location as the resource group
    });

    // Return the ACR resource
    return containerRegistry;
}
- `resourceGroupName`: the name of the Azure resource group where the registry will be created.
- `sku.name`: ACR offers different pricing tiers, such as `Basic`, `Standard`, and `Premium`. Choose one depending on your requirements:
  - Basic: suitable for dev/test environments.
  - Standard: best for most production workloads.
  - Premium: offers features like geo-replication, content trust, and private link access.
- `adminUserEnabled`: when set to `true`, this enables a local admin account that can be used to authenticate with the ACR. This is useful for testing and simple deployments but should be disabled for production systems.
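If you do enable the admin user, you can retrieve the generated credentials with the Azure CLI once the registry exists (replace the placeholder with the registry name Pulumi generated):

```shell
az acr credential show --name <registry-name>
```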
Update index.ts to call the method that creates the Azure Container Registry:
...
import { createContainerRegistry } from "./resources/containerregistry";
...
// Create azure container registry (acr)
export const containerRegistry = createContainerRegistry(resourceGroup.name);
...
You can preview the changes Pulumi will make using the following command:
pulumi preview
3. Provisioning a Managed AKS Cluster
The following steps provision an AKS cluster with a managed node pool, attach the created Azure Virtual Network, and grant the AKS cluster identity the right to pull images from ACR.
Adding Configuration
The pulumi config CLI command can save values as configuration parameters. Run the following commands to set some values that can be reused across environments:
$ pulumi config set k8sVersion 1.30.3
$ pulumi config set nodeCount 3
$ pulumi config set nodeSize Standard_A2_v2
$ pulumi config set adminUser aksadmin
$ pulumi config set ingressNamespace ingress-nginx
$ pulumi config set appNamespace apps
$ pulumi config set letenscriptEmail <your_email>
Read Config Values
Modify the config.ts file so it reads the new values:
import * as pulumi from "@pulumi/pulumi";

// Configuration bags: one for this project, one for the azure-native provider
const pulumiConfig = new pulumi.Config();
const azureConfig = new pulumi.Config("azure-native");

// Access configuration values and export them for reuse
export const config = {
    location: azureConfig.require("location"),
    k8sVersion: pulumiConfig.get("k8sVersion") || "1.30.3",
    nodeCount: pulumiConfig.getNumber("nodeCount") || 3,
    nodeSize: pulumiConfig.get("nodeSize") || "Standard_A2_v2",
    adminUserName: pulumiConfig.get("adminUser") || "aksadmin",
    ingressNamespace: pulumiConfig.get("ingressNamespace") || "ingress-nginx",
    appNamespace: pulumiConfig.get("appNamespace") || "apps",
    letenscriptEmail: pulumiConfig.get("letenscriptEmail") || "<your_email>"
}
Create an Azure Kubernetes Cluster
To create the AKS cluster, add a new akscluster.ts file in the resources folder. The following code creates a managed cluster:
// resources/akscluster.ts
import * as azure_native from "@pulumi/azure-native";
import * as pulumi from "@pulumi/pulumi";
import * as tls from "@pulumi/tls";
import * as containerservice from "@pulumi/azure-native/containerservice";
import { config } from "../config";

export const aksCluster = (
    resourceGroupName: pulumi.Input<string>,
    subnetIds: {
        nodeSubnetId: pulumi.Output<string>,
        podSubnetId: pulumi.Output<string>
    }) => {

    // Create a private key to use for the cluster's SSH key
    const privateKey = new tls.PrivateKey("privateKey", {
        algorithm: "RSA",
        rsaBits: 4096,
    });

    // Create a user-assigned identity to use for the cluster
    const identity = new azure_native.managedidentity.UserAssignedIdentity("identity", {
        resourceGroupName: resourceGroupName
    });

    return new containerservice.ManagedCluster("cluster", {
        resourceGroupName: resourceGroupName,
        // Use a user-specified identity to manage cluster resources
        identity: {
            type: azure_native.containerservice.ResourceIdentityType.UserAssigned,
            userAssignedIdentities: [identity.id],
        },
        agentPoolProfiles: [{
            count: config.nodeCount, // Number of nodes in the pool
            maxPods: 110,
            mode: "System",
            name: "agentpool",
            nodeLabels: {},
            osDiskSizeGB: 30,
            osType: "Linux",
            type: "VirtualMachineScaleSets",
            vmSize: config.nodeSize, // VM size for the nodes
            vnetSubnetID: subnetIds.nodeSubnetId, // Assign nodes to the subnet
            podSubnetID: subnetIds.podSubnetId // Assign pods to the subnet
        }],
        dnsPrefix: resourceGroupName,
        enableRBAC: true, // Enable Role-Based Access Control
        kubernetesVersion: config.k8sVersion,
        linuxProfile: {
            adminUsername: config.adminUserName, // The admin username for the new cluster
            ssh: {
                publicKeys: [{
                    keyData: privateKey.publicKeyOpenssh,
                }],
            },
        },
        networkProfile: {
            networkPlugin: "azure" // Use Azure CNI for networking
        }
    });
};
Explanation of Key Parts in the Code:
- `agentPoolProfiles`: defines the VM size, node count, and operating system for the worker nodes.
- `enableRBAC`: enables Kubernetes Role-Based Access Control (RBAC) for cluster management.
- `networkProfile`: specifies the use of the Azure CNI plugin for network connectivity between pods and Azure resources.
- `identity`: assigns a user-assigned managed identity, used for integrating with other Azure services securely.
- Kubeconfig: the Kubernetes configuration is exported as an output, allowing you to connect to the AKS cluster using tools like `kubectl`.
Grant the AKS Cluster Identity the AcrPull Role on ACR
Once the AKS cluster and ACR are created, the next step is to assign the AcrPull role to the AKS cluster’s managed identity. This is done by creating a role assignment that links the AKS cluster’s managed identity to the ACR.
Now, modify index.ts to include the AKS cluster:
...
import { aksCluster } from "./resources/akscluster";
...
// Create an AKS cluster
const cluster = aksCluster(resourceGroup.name, virtualNetworking);

// Grant the AKS kubelet identity the `AcrPull` role on ACR so nodes can pull images
const acrPullRoleAssignment = new azure_native.authorization.RoleAssignment("aksAcrPullRoleAssignment", {
    // The kubelet identity is the identity that pulls images from the registry
    principalId: cluster.identityProfile.apply(profile => profile!["kubeletidentity"].objectId!),
    principalType: "ServicePrincipal",
    roleDefinitionId: azure_native.authorization.getRoleDefinitionOutput({
        roleDefinitionId: "7f951dda-4ed3-4680-a7ca-43fe172d538d", // Built-in role for AcrPull
        scope: containerRegistry.id, // Scope is the ACR
    }).id,
    scope: containerRegistry.id, // ACR resource ID
});

// Export the AKS cluster kubeconfig
export const kubeconfig = pulumi.all([cluster.name, resourceGroup.name]).apply(([clusterName, rgName]) =>
    azure_native.containerservice.listManagedClusterUserCredentials({
        resourceGroupName: rgName,
        resourceName: clusterName,
    }).then(creds => Buffer.from(creds.kubeconfigs[0].value, "base64").toString())
);

const provider = new k8s.Provider("k8s-provider", {
    kubeconfig: kubeconfig,
});
...
Once you’re ready, deploy the AKS cluster by running:
pulumi up
After the cluster is deployed, Pulumi will output the container registry details, the kubeconfig, and the resourceGroupName.
Outputs:
  + acr              : {
      + adminUserEnabled     : true
      + dataEndpointEnabled  : false
      + dataEndpointHostNames: []
        ...
    }
  + kubeconfig       : (yaml) {
      + apiVersion: "v1"
      + clusters  : [
          + ...
        ]
      + contexts  : [
          + ...
        ]
        ...
    }
  + resourceGroupName: "pulumi-aks-rg"
  + vnet             : {
        ...
    }
You can save the kubeconfig to a file and connect to your AKS cluster using kubectl:
pulumi stack output kubeconfig > kubeconfig.yaml
export KUBECONFIG=./kubeconfig.yaml
More about the KUBECONFIG environment variable can be found at https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
You can now use the following command to interact with your Kubernetes cluster:
kubectl get nodes
The output looks like this:
NAME                                STATUS   ROLES    AGE   VERSION
aks-agentpool-36574824-vmss000000   Ready    <none>   74m   v1.30.3
aks-agentpool-36574824-vmss000001   Ready    <none>   74m   v1.30.3
aks-agentpool-36574824-vmss000002   Ready    <none>   74m   v1.30.3
Scaling and Managing the AKS Cluster
You can manage the AKS cluster post-deployment in various ways:
- Scaling Nodes: modify the `count` in `agentPoolProfiles` to scale the number of worker nodes, then run `pulumi up` to apply the changes.
- Auto-Scaling: enable auto-scaling by adding `enableAutoScaling` and specifying the minimum and maximum node counts in the `agentPoolProfiles` configuration.
agentPoolProfiles: [{
    name: "agentpool",
    minCount: config.nodeCount, // Minimum node count
    maxCount: 5, // Maximum node count
    enableAutoScaling: true, // Enable auto-scaling
    count: config.nodeCount, // Number of nodes in the pool
    maxPods: 110,
    mode: "System",
    nodeLabels: {},
    osDiskSizeGB: 30,
    osType: "Linux",
    type: "VirtualMachineScaleSets",
    vmSize: config.nodeSize, // VM size for the nodes
    vnetSubnetID: subnetIds.nodeSubnetId, // Assign nodes to the subnet
    podSubnetID: subnetIds.podSubnetId // Assign pods to the subnet
}]
- Upgrades: AKS provides automated upgrades to Kubernetes versions. You can trigger upgrades via the Azure portal, CLI, or integrate it with Pulumi to automate version updates.
4. Deploy Ingress Controller and Configure SSL/TLS
Deploy the NGINX Ingress Controller
To deploy the NGINX Ingress Controller in an Azure Kubernetes Service (AKS) cluster using the Helm chart with Pulumi and retrieve the LoadBalancer IP of the ingress controller, you can follow these steps. I’ll also mention some optimizations that can be useful for production environments, such as ensuring resource limits and using stable versions of Helm charts.
Create namespaces in the cluster
Before deploying resources into the AKS cluster, create the namespaces that will be used later. In the resources folder, create a new file named namespaces.ts and add the following code:
import * as k8s from "@pulumi/kubernetes";
import { config } from "../config";

export const createNamespaces = (k8sProvider: k8s.Provider) => {
    // Create an app Kubernetes Namespace within the AKS cluster
    const appNamespace = new k8s.core.v1.Namespace(config.appNamespace, {
        metadata: {
            name: config.appNamespace,
        },
    }, { provider: k8sProvider });

    // Create a namespace for the ingress controller
    new k8s.core.v1.Namespace(config.ingressNamespace, {
        metadata: {
            name: config.ingressNamespace,
        },
    }, { provider: k8sProvider });
}
This code will create two namespaces: one for the application and another for the ingress controller.
Write Pulumi Code to Deploy NGINX Ingress Controller
The following code, in a new resources/nginxingresscontroller.ts file, deploys the NGINX Ingress Controller using the Bitnami Helm chart.
import * as k8s from "@pulumi/kubernetes";
import { config } from "../config";

export const nginxIngressController = (k8sProvider: k8s.Provider) => {
    // Deploy the NGINX Ingress Controller
    const nginxIngressController = new k8s.helm.v4.Chart("nginx-ingress-controller", {
        chart: "nginx-ingress-controller",
        namespace: config.ingressNamespace, // e.g. ingress-nginx
        version: "11.4.1", // Pin the Helm chart version for production stability
        repositoryOpts: {
            repo: "https://charts.bitnami.com/bitnami"
        },
        values: {
            controller: {
                service: {
                    type: "LoadBalancer",
                }
            },
            resources: {
                limits: {
                    cpu: "500m",
                    memory: "512Mi", // Set resource limits for production optimization
                },
                requests: {
                    cpu: "250m",
                    memory: "256Mi",
                },
            }
        }
    }, { provider: k8sProvider });

    return nginxIngressController.resources;
}
- We deploy the NGINX Ingress Controller using the Bitnami Helm chart for the NGINX Ingress Controller.
- We specify the `service.type` as `LoadBalancer` so that Azure will provision a public IP address for the ingress controller.
- Resource limits: in production environments, it is essential to define resource limits for CPU and memory to ensure that the ingress controller doesn’t consume excessive resources and cause issues in the cluster.
- Versioning: it’s a best practice to pin the Helm chart to a specific version (e.g., `11.4.1`) to avoid unexpected behavior due to future updates. This ensures stability in production environments.
From the index.ts file, you will modify the code to deploy Nginx Ingress Controller to the AKS cluster:
...
import { nginxIngressController } from "./resources/nginxingresscontroller";
...
import { config } from "./config";

// Create an Azure Resource Group
const resourceGroup = new resources.ResourceGroup("resourceGroup", {
    resourceGroupName: "pulumi-aks-rg"
});
...
// Install the NGINX Ingress Controller
const ingressController = nginxIngressController(provider);
...
export const resourceGroupName = resourceGroup.name;
Now, you can deploy the NGINX ingress controller with Pulumi by using the command:
pulumi up
Verify NGINX Ingress Controller
Once the deployment is finished, you can verify that the NGINX ingress controller has been successfully deployed and is up and running:
$ kubectl get services -n ingress-nginx
NAME                                       TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller                   LoadBalancer   10.0.226.2    104.43.76.238   80:30806/TCP,443:30820/TCP   87m
nginx-ingress-controller-default-backend   ClusterIP      10.0.26.148   <none>          80/TCP                       87m
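To script against the ingress endpoint, you can pull just the external IP out of the service status (assuming the service name shown above):

```shell
kubectl get service nginx-ingress-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```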
Deploy cert-manager
To secure your NGINX ingress controller, you’ll need SSL/TLS certificates. You can either:
- Use a Let’s Encrypt certificate (via Cert-Manager).
- Use a custom certificate (if you already have an SSL certificate and private key)
In this guide, you use a Let’s Encrypt certificate via cert-manager, which works with Kubernetes to request a certificate and respond to the validation challenge.
You can install Cert-Manager using Helm and configure it to issue SSL certificates from Let’s Encrypt.
Write Pulumi Code to Deploy Cert-Manager
In the resources folder, create a new file named certmanager.ts and add the following code:
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import { config } from "../config";
export const aksClusterIssuer = (k8sProvider) => {
// Create a namespace for Cert-manager
const certManagerNamespace = new k8s.core.v1.Namespace("cert-manager", {
metadata: {
name: "cert-manager",
},
}, { provider: k8sProvider });
// Install Cert-manager using Helm
const cert = new k8s.helm.v4.Chart("cert-manager", {
chart: "cert-manager",
version: "1.15.3",
repositoryOpts: {
repo: "https://charts.jetstack.io",
},
namespace: certManagerNamespace.metadata.name,
values: {
installCRDs: true
},
}, { transformations: [
// Ignore changes that will be overwritten by the deployment.
// https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources
args => {
if (args.type === "kubernetes:admissionregistration.k8s.io/v1:ValidatingWebhookConfiguration" ||
args.type === "kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration") {
return {
props: args.props,
opts: pulumi.mergeOptions(args.opts, {
ignoreChanges: ["metadata.annotations.template", "webhooks[*].clientConfig"],
})
}
}
return undefined;
}
], provider: k8sProvider });
// Create a namespaced Issuer for Let's Encrypt (staging)
const letsEncryptIssuer = new k8s.apiextensions.CustomResource("letsencrypt-issuer", {
apiVersion: "cert-manager.io/v1",
kind: "Issuer",
metadata: {
name: "letsencrypt-staging",
namespace: config.appNamespace
},
spec: {
acme: {
server: "https://acme-staging-v02.api.letsencrypt.org/directory",
email: config.letenscriptEmail,
privateKeySecretRef: {
name: "letsencrypt-staging",
},
solvers: [{
http01: {
ingress: {
ingressClassName: "nginx",
},
},
}],
},
},
}, { provider: k8sProvider, dependsOn: cert });
return letsEncryptIssuer;
}
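Note that the Issuer above targets Let's Encrypt's staging environment, which issues untrusted test certificates and has generous rate limits. When you are ready for trusted certificates, the only change needed is the ACME server URL. A small hypothetical helper (not part of the original project) makes the switch explicit:

```typescript
// Let's Encrypt ACME directory endpoints (documented by Let's Encrypt).
const acmeServers = {
  staging: "https://acme-staging-v02.api.letsencrypt.org/directory",
  production: "https://acme-v02.api.letsencrypt.org/directory",
};

// Hypothetical helper: pick the ACME endpoint from a stack-level setting,
// so dev stacks stay on staging and only production requests real certs.
export function acmeServer(env: "staging" | "production"): string {
  return acmeServers[env];
}
```

You would then use acmeServer(config.environment) in place of the hard-coded server string in the Issuer spec.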
Modify the index.ts file to include the code that creates cert-manager and the issuer:
...
import { aksClusterIssuer } from "./resources/certmanager";
...
// Set up cert-manager for automatic let's encrypt certificates
const letsEncryptIssuer = aksClusterIssuer(provider);
export const resourceGroupName = resourceGroup.name;
Deploy the resource again with the command:
pulumi up
Verify the Cluster Issuer
When it is complete, you can check if the (Cluster)Issuer you’re using is in a ready state:
$ kubectl get issuers -n apps
NAME READY AGE
letsencrypt-staging True 109m
And check the status using kubectl describe:
$ kubectl describe issuer letsencrypt-staging -n apps
Name: letsencrypt-staging
Namespace: apps
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Issuer
Metadata:
Creation Timestamp: 2024-09-09T03:13:54Z
Generation: 1
Resource Version: 2957
UID: f5469724-51fd-4f70-9d31-e91f0f45bb52
Spec:
Acme:
Email: email@example.com
Private Key Secret Ref:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Solvers:
http01:
Ingress:
Ingress Class Name: nginx
Status:
Acme:
Last Private Key Hash: zGE6xeUaHb5WXL6CymJK8sHTuLsFYlHPHHM61+IinoQ=
Last Registered Email: email@example.com
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/164108913
Conditions:
Last Transition Time: 2024-09-09T03:13:56Z
Message: The ACME account was registered with the ACME server
Observed Generation: 1
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
5. Deploy Application
After provisioning the AKS cluster and the relevant Azure resources, you’ll deploy a simple KUARD application by going through the following steps:
- Create the KUARD Deployment: This will deploy the KUARD application to your AKS cluster.
- Create a KUARD Service: Expose the KUARD deployment internally within the cluster.
- Create an Ingress Resource: Expose the KUARD application externally using a NGINX ingress controller.
Deploy the Application
Now, you can create a Deployment for the KUARD application. This deployment specifies how the KUARD container is to be deployed in the AKS cluster. Create a new app folder, create a file named applicationdeployment.ts inside it, and add the following code:
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import { config } from "../config";
export const kuardAppDeployment = (k8sProvider) => {
const name = "kuard";
const labels = {app: name};
// Step 1: Create a KUARD Deployment
const kuardDeployment = new k8s.apps.v1.Deployment("kuard-deployment", {
metadata: {
namespace: config.appNamespace,
name: name,
labels: { app: name},
},
spec: {
replicas: 1,
selector: { matchLabels: { app: name } },
template: {
metadata: { labels: { app: name } },
spec: {
containers: [
{
name: name,
image: "gcr.io/kuar-demo/kuard-amd64:blue", // KUARD container image
resources: {requests: {cpu: "50m", memory: "20Mi"}},
ports: [{ containerPort: 8080 }],
},
],
},
},
},
}, { provider: k8sProvider });
// Step 2: Create a KUARD Service
const kuardService = new k8s.core.v1.Service("kuard-service", {
metadata: {
namespace: config.appNamespace,
name: name,
labels: { app: name },
},
spec: {
type: "ClusterIP", // Internal Service
ports: [{ port: 80, targetPort: 8080 }],
selector: { app: name },
},
}, { provider: k8sProvider });
// Step 3: Create an Ingress Resource for KUARD
const kuardIngress = new k8s.networking.v1.Ingress("kuard-ingress", {
metadata: {
namespace: config.appNamespace,
name: "kuard-ingress",
annotations: {
"kubernetes.io/ingress.class": "nginx"
}
},
spec: {
rules: [{
http: {
paths: [{
path: "/",
pathType: "Prefix",
backend: {
service: {
name: "kuard",
port: { number: 80 },
},
},
}],
},
}],
},
}, { provider: k8sProvider, dependsOn: kuardService });
}
Modify the index.ts file with code like this:
...
import { kuardAppDeployment } from "./app/applicationdeployment";
...
// Create a KUARD (Kubernetes Up and Running Demo) deployment and expose it through an ingress resource
const kuardApp = kuardAppDeployment(provider);
export const resourceGroupName = resourceGroup.name;
Run the pulumi preview command to see which changes are planned.
If the preview matches your expectations, go ahead and run pulumi up. In general, you should always understand the preview before you confirm with ‘yes’.
Verify the App
To make sure the app is deployed in the cluster, you can use the following commands:
$ kubectl get pods -n apps
NAME READY STATUS RESTARTS AGE
kuard-76d946d67f-gvjxx 1/1 Running 0 142m
$ kubectl get svc -n apps
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kuard ClusterIP 10.0.170.230 <none> 80/TCP 143m
Check the status of the ingress:
$ kubectl describe ing kuard-ingress -n apps
Name: kuard-ingress
Labels: <none>
Namespace: apps
Address: 10.240.0.4
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ kuard:80 (10.241.0.23:8080)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
You can also recheck the load balancer IP of the NGINX Ingress Controller with the command:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.0.226.2 104.43.76.238 80:30806/TCP,443:30820/TCP 148m
nginx-ingress-controller-default-backend ClusterIP 10.0.26.148 <none> 80/TCP 148m
And you can access the KUARD app using the domain or IP (e.g., http://104.43.76.238).
Deploy a TLS Ingress Resource
There are two primary ways to do this: using annotations on the ingress with ingress-shim or directly creating a certificate resource.
In this hands-on, you will add annotations to the ingress and take advantage of ingress-shim to have it create the certificate resource on your behalf. After the certificate is created, cert-manager will create or update an ingress resource and use it to validate the domain. Once verified and issued, cert-manager will create or update the secret defined in the certificate.
Modify applicationdeployment.ts as follows:
import * as k8s from "@pulumi/kubernetes";
import { config } from "../config";
export const kuardAppDeployment = (k8sProvider) => {
const name = "kuard";
// Mapping IP Address to a hostname
const host = 'kuard.<ingress ip>.nip.io';
// Step 1: Create a KUARD Deployment
const kuardDeployment = new k8s.apps.v1.Deployment("kuard-deployment", {
metadata: {
namespace: config.appNamespace,
name: name,
labels: { app: name},
},
spec: {
replicas: 1,
selector: { matchLabels: { app: name } },
template: {
metadata: { labels: { app: name } },
spec: {
containers: [
{
name: name,
image: "gcr.io/kuar-demo/kuard-amd64:blue", // KUARD container image
resources: {requests: {cpu: "50m", memory: "20Mi"}},
ports: [{ containerPort: 8080 }],
},
],
},
},
},
}, { provider: k8sProvider });
// Step 2: Create a KUARD Service
const kuardService = new k8s.core.v1.Service("kuard-service", {
metadata: {
namespace: config.appNamespace,
name: name,
labels: { app: name },
},
spec: {
type: "ClusterIP", // Internal Service
ports: [{ port: 80, targetPort: 8080 }],
selector: { app: name },
},
}, { provider: k8sProvider, dependsOn: kuardDeployment });
// Step 3: Create an Ingress Resource for KUARD
const kuardIngress = new k8s.networking.v1.Ingress("kuard-ingress", {
metadata: {
namespace: config.appNamespace,
name: "kuard-ingress",
annotations: {
"kubernetes.io/ingress.class": "nginx",
"cert-manager.io/issuer": "letsencrypt-staging"
}
},
spec: {
tls: [{
hosts: [host],
secretName: "letsencrypt-staging-tls",
}],
rules: [{
host: host,
http: {
paths: [{
path: "/",
pathType: "Prefix",
backend: {
service: {
name: "kuard",
port: { number: 80 },
},
},
}],
},
}],
},
}, { provider: k8sProvider, dependsOn: kuardService });
};
This uses nip.io, which is a free service that provides wildcard DNS. You can use alternatives such as sslip.io. Alternatively, you can use your own domain name and set up the proper DNS records.
Update the <ingress ip> value in the host key with the load balancer IP of the NGINX Ingress Controller that you retrieved earlier, for example, kuard.104.43.76.238.nip.io. This lets you access the ingress via a host name instead of an IP address.
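The host mapping can also be built programmatically instead of edited by hand. A hypothetical helper (nipIoHost is not part of the original code) that derives the host name from the ingress IP:

```typescript
// Hypothetical helper: derive a nip.io wildcard-DNS host name from an
// application name and the ingress controller's public IP.
export function nipIoHost(app: string, ip: string): string {
  return `${app}.${ip}.nip.io`;
}

// Example: nipIoHost("kuard", "104.43.76.238")
// yields "kuard.104.43.76.238.nip.io"
```

You could then compute the host from the ingress controller's exported IP rather than pasting it into the file.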
Apply the Changes
Preview the Changes: Run the following command to preview the resources that will be created or updated:
pulumi preview
Deploy the Resources: After reviewing the preview, run the following command to deploy the changes:
pulumi up
Verify the Resource
After the TLS ingress is deployed, cert-manager will read these annotations from the ingress and use them to create a certificate, which you can inspect:
$ kubectl get certificates -n apps
NAME READY SECRET AGE
letsencrypt-staging-tls True letsencrypt-staging-tls 78m
cert-manager reflects the state of the process for every request in the certificate object. You can view this information using the kubectl describe command:
$ kubectl describe certificates letsencrypt-staging-tls -n apps
Name: letsencrypt-staging-tls
Namespace: apps
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2024-09-21T09:26:36Z
Generation: 1
Owner References:
API Version: networking.k8s.io/v1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: kuard-ingress
UID: 97e8f1cf-a78f-4278-8e2a-f13f122ba1f7
Resource Version: 94108
UID: 5b494a1b-c29e-4177-9185-bc5390229104
Spec:
Dns Names:
kuard.104.43.76.238.nip.io
Issuer Ref:
Group: cert-manager.io
Kind: Issuer
Name: letsencrypt-staging
Secret Name: letsencrypt-staging-tls
Usages:
digital signature
key encipherment
Status:
Conditions:
Last Transition Time: 2024-09-21T09:27:06Z
Message: Certificate is up to date and has not expired
Observed Generation: 1
Reason: Ready
Status: True
Type: Ready
Not After: 2024-12-20T08:28:31Z
Not Before: 2024-09-21T08:28:32Z
Renewal Time: 2024-11-20T08:28:31Z
Revision: 1
Events: <none>
Open the host name you configured on the ingress in a web browser to view and interact with the application. For example, at https://kuard.104.43.76.238.nip.io .
Clean Up
Congratulations! You’ve reached the end of this hands-on guide. You’ve learned how to develop and deploy Pulumi programs.
Of course, we’ve only scratched the surface of Pulumi’s capabilities. If you still have time and energy, the following tasks should give you enough material to continue learning on your own.
Deploy a resource on your own
Think of any resource type that you tend to use in your serverless applications and try adding it to your template. Try adding Azure Application Insights, Azure Monitoring, Azure Keyvault, Azure Cosmos DB, or any other resource to your liking.
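For example, an Azure Key Vault could be added with the @pulumi/azure-native provider. This is a hedged sketch, not a verified implementation: the property names follow the azure-native Vault resource as commonly documented, and the tenant ID placeholder is an assumption you would fill from your Pulumi config:

```typescript
// Sketch: property bag for an Azure Key Vault. Assumption: the shape
// follows the @pulumi/azure-native keyvault.Vault resource; verify
// against the provider docs before using it.
export const vaultArgs = {
  resourceGroupName: "pulumi-aks-rg", // the resource group created earlier
  properties: {
    sku: { family: "A", name: "standard" },
    tenantId: "<your-tenant-id>", // assumption: fill from pulumi config
    accessPolicies: [],           // grant your AKS identity access here
  },
};

// In index.ts you would then create the resource, e.g.:
// import * as keyvault from "@pulumi/azure-native/keyvault";
// const vault = new keyvault.Vault("aks-vault", vaultArgs);
```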
Destroy resources with Pulumi
Azure resources may incur charges. Once you are done with the hands-on, don’t forget to clean them up. Destroying resources with Pulumi is very easy. Run the following command and confirm when prompted:
$ pulumi destroy --yes
Resources:
- 81 to delete
Destroying (dev)
Type Name Status Info
- ├─ kubernetes:rbac.authorization.k8s.io/v1:RoleBinding cert-manager:cert-manager/cert-manager-webhook:dynamic-serving deleted (2s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding cert-manager:cert-manager-controller-approve:cert-manager-io deleted (3s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding cert-manager:cert-manager-controller-clusterissuers deleted (4s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding cert-manager:cert-manager-controller-orders deleted (0.70s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:Role cert-manager:kube-system/cert-manager:leaderelection deleted (1s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:RoleBinding cert-manager:kube-system/cert-manager:leaderelection deleted (1s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole cert-manager:cert-manager-controller-approve:cert-manager-io deleted (3s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole cert-manager:cert-manager-cluster-view deleted (4s)
- ├─ kubernetes:core/v1:ServiceAccount cert-manager:cert-manager/cert-manager deleted (12s)
- ├─ kubernetes:core/v1:Service cert-manager:cert-manager/cert-manager-webhook deleted (12s)
- ├─ kubernetes:core/v1:ServiceAccount cert-manager:cert-manager/cert-manager-webhook deleted (10s)
- ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole cert-manager:cert-manager-view deleted (9s)
...
The resources in the stack have been deleted, but the history and configuration associated with the stack are still maintained.
If you want to remove the stack completely, run `pulumi stack rm dev`.
Conclusion
Provisioning an Azure Kubernetes Service (AKS) cluster using Pulumi offers a flexible and powerful way to manage your containerized workloads. By defining your AKS cluster as code, you can automate deployment, scaling, and configuration across multiple environments.
Key benefits of using Pulumi for AKS provisioning:
- Consistency: Ensure consistent infrastructure across different environments (e.g., dev, staging, production).
- Automation: Integrate into CI/CD pipelines, enabling automated cluster provisioning and updates.
- Scalability: Easily manage and scale resources based on your application’s needs, while also ensuring secure connectivity via managed identities and networking features.
With this setup, you can provision an AKS cluster efficiently and manage it throughout its lifecycle with ease using Pulumi.