To streamline and automate the deployment of an Azure Kubernetes Service (AKS) cluster within a pre-existing subnet and ensure a successful installation of Jira, you need to employ a specific set of tools and technologies.
By the end of this blog post, you will be able to spin up a Jira instance running on an AKS cluster that is built in a pre-existing subnet in Azure.
Tools and tech stack used
Each tool plays a crucial role in managing different aspects of the deployment process:
- Bash, Terraform CLI, Helm CLI, Azure CLI, Visual Studio Code: Bash automates tasks; Terraform CLI provisions infrastructure; Helm CLI deploys Jira on Kubernetes; Azure CLI manages Azure resources; and Visual Studio Code aids in editing scripts and charts.
- Terraform: Provisions and configures infrastructure.
- Azure: Provides the cloud resources.
- Helm: Deploys and manages Jira on Kubernetes.
- Jira: Provides project management and issue-tracking functionalities.
What is the desired state?
Setting up an Azure Kubernetes Service (AKS) cluster is pretty straightforward; it can be done with just a few Azure CLI commands. But as soon as you need custom configuration, such as a specific network setup or different managed users, it gets more complicated.
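For comparison, a plain cluster with default networking really does boil down to a few commands (the resource names here are illustrative):

```bash
# A basic AKS cluster with default networking, no custom subnet or identity
az group create --name rg-aks-demo --location westeurope
az aks create --resource-group rg-aks-demo --name aks-demo \
  --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group rg-aks-demo --name aks-demo
```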
In this real-life scenario, we build an AKS cluster in Azure with a database to run Jira. The subnets and the managed user, however, already exist. You will likely face a similar scenario, since your team or customer will probably already have an infrastructure with a specific IP range and further requirements into which the AKS cluster has to be integrated.
Technically, integrating AKS into an existing virtual network (vnet) means rearranging the order in which components are created and granting the managed user specific permissions in advance so that the Terraform script runs successfully.
Our goal is to automate as much as possible, so we deploy and manage our infrastructure as Infrastructure as Code (IaC), which helps us keep the live environment in sync with our configuration and avoid drift. Therefore, we create all infrastructure components using Terraform.
Architecture
The components of the Resource Group rg-jira-fw01 on the left are pre-existing.
To organize our resources and manage access, we use three resource groups: one for the AKS cluster, one for the Application Gateway, and another for its components. The Terraform script deploys a node pool with two nodes in the pre-existing AKS subnet, an MSSQL server with two databases (Jira and EazyBI), and an Application Gateway with a public IP that serves as an ingress and terminates incoming SSL connections. The Application Gateway is deployed in the AppGW subnet.
The diagram below also shows the components for the Terraform state file (tfstate).

Set up environment
For a smooth setup, follow these steps to prepare your environment and store your Terraform state file remotely.
Tools/CLI
We’re going to skip the installation of the CLIs (see Tools and tech stack used), as there are many resources on how to install them for your operating system.
Store tfstate remotely
It is recommended that you store your Terraform state remotely. A local state file works fine while you’re working alone, but as soon as a second developer joins, you should move the state to remote storage (see the Terraform docs).
Therefore, we need to:
- Create a Resource Group.
- Create a Storage Account.
- Create a Blob Container.
- Set an environment variable (so that Terraform picks up the storage account).
- Configure Terraform.
All these steps are handled by running the script ./scripts/create_tfstate_storage.sh (or its PowerShell counterpart create_tfstate_storage.ps1) once you’re logged into your Azure account.
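A minimal sketch of what such a script does; the resource and container names below are placeholders, not the repository's actual values:

```bash
# Log in first with: az login
RG=rg-tfstate
SA=stjiratfstate          # storage account names must be globally unique
CONTAINER=tfstate

# Resource group, storage account, and blob container for the state file
az group create --name "$RG" --location westeurope
az storage account create --name "$SA" --resource-group "$RG" --sku Standard_LRS
az storage container create --name "$CONTAINER" --account-name "$SA"

# Export the access key so the azurerm backend can authenticate
export ARM_ACCESS_KEY=$(az storage account keys list \
  --resource-group "$RG" --account-name "$SA" \
  --query '[0].value' --output tsv)
```

The matching backend configuration in your Terraform code would then look roughly like this:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "stjiratfstate"
    container_name       = "tfstate"
    key                  = "jira.terraform.tfstate"
  }
}
```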
Create a Terraform workspace
We use workspaces to create different environments, such as dev, int, and prod. The names of these workspaces will be used later by Terraform to define component names in Azure, e.g., resource group, etc.
To create a workspace, run:
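```bash
# Create (and switch to) one workspace per environment, e.g. dev
terraform workspace new dev

# Switch between existing workspaces later with
terraform workspace select dev
```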

Configure Terraform script
In variables.tf, we set the name of the managed user (created beforehand) and the pod CIDR according to the customer’s predefined IP range. Since we add a new environment, we also need to extend the variable that defines the VM size for it.
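A sketch of what this can look like in variables.tf; the variable names, CIDR, and VM sizes are illustrative, not the repository's actual values:

```hcl
variable "managed_identity_name" {
  description = "Name of the pre-existing managed user (user-assigned identity)"
  type        = string
  default     = "id-jira-aks"
}

variable "pod_cidr" {
  description = "Pod CIDR matching the customer's predefined IP range"
  type        = string
  default     = "10.244.0.0/16"
}

variable "vm_size" {
  description = "Node VM size per workspace/environment"
  type        = map(string)
  default = {
    dev  = "Standard_D2s_v3"
    int  = "Standard_D2s_v3"
    prod = "Standard_D4s_v3"
  }
}
```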

Using data blocks in the network_aks module, we can fetch the existing managed user object as well as the AKS and Application Gateway subnets that live in the vnet called vnet-jira:
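A sketch of those data blocks, assuming the pre-existing resources live in rg-jira-fw01 and the subnets are named snet-aks and snet-appgw (adjust to your environment):

```hcl
data "azurerm_user_assigned_identity" "jira" {
  name                = var.managed_identity_name
  resource_group_name = "rg-jira-fw01"
}

data "azurerm_subnet" "aks" {
  name                 = "snet-aks"
  virtual_network_name = "vnet-jira"
  resource_group_name  = "rg-jira-fw01"
}

data "azurerm_subnet" "appgw" {
  name                 = "snet-appgw"
  virtual_network_name = "vnet-jira"
  resource_group_name  = "rg-jira-fw01"
}
```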

You can use the fetched variables in your Terraform script like so:
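For example, handing the fetched IDs to a module (the module name and inputs are illustrative):

```hcl
module "aks" {
  source              = "./modules/aks"
  aks_subnet_id       = data.azurerm_subnet.aks.id
  managed_identity_id = data.azurerm_user_assigned_identity.jira.id
}
```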

Create the AKS cluster
The main changes are in the file terraform/modules/aks/main.tf. The AKS resource contains all the information needed to create the cluster with the desired settings. Since we fetched the existing AKS subnet, its ID must be provided.
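A condensed sketch of such a cluster resource; apart from the subnet and identity wiring, the arguments and names are illustrative:

```hcl
resource "azurerm_kubernetes_cluster" "jira" {
  name                = "aks-jira-${terraform.workspace}"
  location            = var.location
  resource_group_name = var.resource_group_name
  dns_prefix          = "jira"

  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = var.vm_size[terraform.workspace]
    vnet_subnet_id = var.aks_subnet_id   # ID of the pre-existing AKS subnet
  }

  identity {
    type         = "UserAssigned"
    identity_ids = [var.managed_identity_id]
  }

  network_profile {
    network_plugin = "azure"
  }
}
```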

Set permissions
Since the network infrastructure is not created by the AKS cluster itself, the ingress user of the Application Gateway needs the following role assignments (see the Terraform sketch below):
- Contributor on the Application Gateway.
- Contributor on the resource group containing the Application Gateway.
- Network Contributor on the AKS subnet.
- Managed Identity Operator on the managed user.
- Managed Identity Contributor on the managed user.
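Expressed in Terraform, those role assignments look roughly like this; ingress_principal_id and appgw_resource_group_id are hypothetical variables (the former stands for the object ID of the ingress user), and the scopes reference the resources and data blocks shown elsewhere in this post:

```hcl
resource "azurerm_role_assignment" "appgw_contributor" {
  scope                = azurerm_application_gateway.jira.id
  role_definition_name = "Contributor"
  principal_id         = var.ingress_principal_id
}

resource "azurerm_role_assignment" "appgw_rg_contributor" {
  scope                = var.appgw_resource_group_id
  role_definition_name = "Contributor"
  principal_id         = var.ingress_principal_id
}

resource "azurerm_role_assignment" "aks_subnet_network_contributor" {
  scope                = data.azurerm_subnet.aks.id
  role_definition_name = "Network Contributor"
  principal_id         = var.ingress_principal_id
}

resource "azurerm_role_assignment" "identity_operator" {
  scope                = data.azurerm_user_assigned_identity.jira.id
  role_definition_name = "Managed Identity Operator"
  principal_id         = var.ingress_principal_id
}

resource "azurerm_role_assignment" "identity_contributor" {
  scope                = data.azurerm_user_assigned_identity.jira.id
  role_definition_name = "Managed Identity Contributor"
  principal_id         = var.ingress_principal_id
}
```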


Create Application Gateway
After creating the frontend public IP, create the Application Gateway and provide the frontend IP configuration.
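A condensed, illustrative sketch of both resources; the backend pool, listener, and routing rule are only placeholders (the ingress controller manages the real ones at runtime), and all names are made up:

```hcl
resource "azurerm_public_ip" "appgw" {
  name                = "pip-appgw-jira"
  location            = var.location
  resource_group_name = var.resource_group_name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_application_gateway" "jira" {
  name                = "appgw-jira-${terraform.workspace}"
  location            = var.location
  resource_group_name = var.resource_group_name

  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 1
  }

  gateway_ip_configuration {
    name      = "appgw-ip-config"
    subnet_id = data.azurerm_subnet.appgw.id   # pre-existing AppGW subnet
  }

  frontend_ip_configuration {
    name                 = "appgw-frontend-ip"
    public_ip_address_id = azurerm_public_ip.appgw.id
  }

  frontend_port {
    name = "http"
    port = 80
  }

  backend_address_pool {
    name = "default-pool"
  }

  backend_http_settings {
    name                  = "default-settings"
    cookie_based_affinity = "Disabled"
    port                  = 80
    protocol              = "Http"
    request_timeout       = 30
  }

  http_listener {
    name                           = "default-listener"
    frontend_ip_configuration_name = "appgw-frontend-ip"
    frontend_port_name             = "http"
    protocol                       = "Http"
  }

  request_routing_rule {
    name                       = "default-rule"
    rule_type                  = "Basic"
    priority                   = 100
    http_listener_name         = "default-listener"
    backend_address_pool_name  = "default-pool"
    backend_http_settings_name = "default-settings"
  }
}
```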

Create infrastructure
You can create the infrastructure by running these Terraform commands (we will skip explaining Terraform commands here).
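```bash
terraform init
terraform workspace select dev   # or the environment you created earlier
terraform plan -out=tfplan
terraform apply tfplan
```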

Install Jira
After the infrastructure has been created, we install the Jira application via the official Helm chart. Since we have our own settings, we provide a custom values file.
Change hosts file
If you don’t own a domain yet, you can test with any domain, e.g., mydomain.com, by mapping it to the public IP of the Application Gateway in your hosts file.
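For example, assuming the Application Gateway’s public IP is 20.103.25.10 (use your own):

```
# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
20.103.25.10   mydomain.com
```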

Change Helm values file
The application-level setup of Jira is configured in ./terraform/modules/jira/values-jira.yaml. Refer to the Atlassian documentation for full details; below are some particularly important settings.
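An illustrative excerpt of such a values file, using keys from the official Atlassian Data Center chart; the host, secret name, and JDBC URL are placeholders for your own environment:

```yaml
replicaCount: 1

database:
  type: mssql
  url: "jdbc:sqlserver://<your-sql-server>.database.windows.net:1433;databaseName=jira"
  driver: com.microsoft.sqlserver.jdbc.SQLServerDriver
  credentials:
    secretName: jira-db-credentials   # Kubernetes secret holding username/password

ingress:
  create: true
  nginx: false                        # we use the Application Gateway, not nginx
  host: mydomain.com
  https: true
  tlsSecretName: jira-tls
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
```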

Install Jira with Helm
To install the Helm Chart with the changed values file, run the following commands:
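A sketch of those commands, assuming the chart comes from the official Atlassian Data Center Helm repository and the values file path shown above:

```bash
# Add the official Atlassian repository and refresh the index
helm repo add atlassian-data-center \
  https://atlassian.github.io/data-center-helm-charts
helm repo update

# Install Jira into its own namespace using our values file
helm install jira atlassian-data-center/jira \
  --namespace jira --create-namespace \
  --values ./terraform/modules/jira/values-jira.yaml
```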

This will add the official Atlassian repository locally and install the application with the settings from your values file in a namespace called jira.
By following this guide, you should now be equipped to handle similar deployments and manage your Infrastructure as Code with confidence.
Published: Aug 21, 2023