Implementing CI/CD with Azure DevOps and Azure Kubernetes Services (AKS) version 1.24+


Following the project that I presented in a previous story (Azure and Kubernetes: Deployment of Containerized Voting Application in Kubernetes Cluster using Docker, Azure Container Registry (ACR) and Azure Kubernetes Service (AKS) | by Luis Parraguez | Mar, 2023 | Medium), we received an additional challenge.

The client’s additional requirement was to implement a CI/CD process for a similar containerized Voting application using Azure DevOps and to deploy it to a Kubernetes cluster running version 1.24+.

The solution architecture can be seen in the following diagram:

The condition for this additional project was to decommission the Jump Server that we had been using as the “build and push” environment for Azure Container Registry (ACR) and Azure Kubernetes Service (AKS).

Together with our new team member, I decided to take on the delivery of this project, looking to expand our knowledge of the CI/CD process and Kubernetes.

For the preparation of the Azure environment (including both ACR and AKS), we applied the same procedure explained in the previous story.

During the implementation of this solution, we learned some lessons that we would like to share with you:

Preparation of Azure DevOps Repos

As part of the environment preparation, we created a new repository in DevOps Repos to store and organize the application files and components. It’s important to mention that the files need to be organized following the structure required by the Docker build (the “build context”) and by the Kubernetes manifests.

Azure DevOps Repos — Note the file structure organized for the Docker build and the Kubernetes deployment
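For reference, a possible layout for such a repository is sketched below (the folder and file names are assumptions based on the Azure voting sample application, not an exact copy of our repository):

# Possible repository layout (names are illustrative)
azure-vote/                        # Docker build context for the front-end image
  Dockerfile                       # used by the "Build and Push" task
  azure-vote/                      # application source copied into the image
azure-vote-all-in-one-redis.yaml   # Kubernetes manifests used by the kubectl apply task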

Creation of a new Azure DevOps Pipeline

Our first approach was to implement the CI/CD process using the DevOps Pipelines functionality, since it is positioned as the way to automate the build and release processes using a wizard. As you can see below, there is an option in the wizard specifically focused on “Deploy to Azure Kubernetes Service”, which includes building and pushing the container image to ACR and then deploying to AKS: exactly what we were looking for!

DevOps Pipeline Wizard — Note the option “Deploy to Azure Kubernetes Service”

DevOps Pipeline Wizard — Subscription Selection

DevOps Pipeline Wizard — Configuration of Deployment information

After we completed all the steps above, we launched the “Validate and Configure” option and, after some processing time, we got the error shown below:

DevOps Pipeline Wizard — Process could not be completed

After investigating this issue, we found out that, up to Kubernetes version 1.23.x, the Azure DevOps Pipeline used an authentication mechanism with Kubernetes based on the Azure subscription. Unfortunately, from version 1.24 onwards this mechanism no longer works.
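To the best of our understanding, the underlying change is that, starting with Kubernetes 1.24, long-lived token secrets are no longer auto-created for service accounts, which is what that authentication mechanism relied on. You can see the difference directly on a 1.24+ cluster:

# On a 1.24+ cluster the default service account no longer references an auto-created token secret
kubectl get serviceaccount default -n default -o yaml
# On 1.23.x the output lists a "secrets:" entry; on 1.24+ it does not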

Downgrading to version 1.23.x was not an option for us.

Looking for an alternative to connect Azure DevOps with Kubernetes 1.24+

Continuing our investigation, we found a workaround that enables the connection we needed.

Using Azure Cloud Shell or the Azure CLI, we connected to our cluster:

# Connection to the Kubernetes Cluster
az aks get-credentials --resource-group rg-aks-devops --name aks-cluster-devops

# Testing connection with the Kubernetes Cluster
kubectl get nodes

To resolve the issue, we needed to create a set of Kubernetes resources in our cluster:

  • Create a service account in a namespace;

  • Create a custom role with some custom permissions;

  • Create a role binding for a service account;

  • Create a secret associated with the service account.

Below is the “azuredevops-aks.yaml” file we prepared to create the last three resources:

# Create a custom role with some custom permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ado-sc-sa-role
  namespace: default
rules:
- apiGroups: ["*","apps","extensions"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---

# Create a role binding for a service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ado-sc-sa-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: ado-sc-sa
  namespace: default
roleRef:
  kind: Role
  name: ado-sc-sa-role
  apiGroup: rbac.authorization.k8s.io
---

# Create a secret associated with the service account
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: ado-sc-sa-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: "ado-sc-sa"

With the YAML file ready, we proceeded to create the resources using kubectl:

# Creation of the Service Account
kubectl create serviceaccount ado-sc-sa -n default

# Creation of Cluster resources
kubectl apply -f azuredevops-aks.yaml
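As an optional check, we could confirm that everything was created before moving on:

# Optional check: confirm the service account, role, role binding and secret exist
kubectl get serviceaccount,role,rolebinding,secret -n default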

After running kubectl, we needed to gather some information that would be required later on:

# Get Kubernetes server URL
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Get the secret associated with the service account in JSON format
# Save the complete output starting with { and ending with }
kubectl get secret ado-sc-sa-secret -n default -o json
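The JSON output above is exactly what the new service connection will ask for. If you only want to inspect the token itself, it is stored base64-encoded inside the secret (an optional sanity check, not required for the setup):

# Optional: decode the service account token from the secret
kubectl get secret ado-sc-sa-secret -n default -o jsonpath='{.data.token}' | base64 -d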

Implementing the integration between Azure DevOps and Kubernetes 1.24+

With the information of the previous step available, we proceeded to set up the integration between Azure DevOps and Kubernetes 1.24+. For that, we used the “Release Pipeline” feature inside the “Pipelines” functionality.

As a first step, we included the Azure DevOps repository created earlier as the Artifact to be used in this process:

After that, we were ready to specify the “Agent Job” to be used for this process. By default, Azure DevOps uses a Windows agent (“Hosted Windows 2019 with VS2019”), which we changed since we needed to build a Linux container image.

Agent Job configuration — Note the utilization of an ubuntu agent to build a Linux container

At this point, we started adding the process tasks. We configured two tasks (a command-line sketch of what they roughly do follows the screenshots below):

  • First — Build of the “azure-vote-front” container image: for this, we selected a “Build” task of type “Docker”, responsible for creating the image and pushing it to the Azure Container Registry (ACR):

“Build and Push” Task — Note the selection of the Docker file from the DevOps Repos

  • Second — Deployment of the application to the Kubernetes cluster: for this, we selected a “Deploy” task of type “Kubectl apply”.

This is the step where we created a new “Service Connection” with our Kubernetes cluster using “Service Account” as the authentication method (rather than the “Azure Subscription”) and used the information gathered before (Kubernetes server URL and secret associated with the service account):

“kubectl” Task — Note the use of the information obtained with the workaround

“kubectl” Task — Note the utilization of the new service connection “DevOps-ACR-AKS”

“kubectl” Task — Here we also used the YAML file available in the DevOps Repos as the parameter for the kubectl apply command
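For readers who prefer the command line, the two tasks are roughly equivalent to the commands below (the registry name, image name, tag and file paths are illustrative assumptions, not our exact values):

# Roughly what the "Build and Push" Docker task does
az acr login --name acrdevops
docker build -t acrdevops.azurecr.io/azure-vote-front:v1 ./azure-vote
docker push acrdevops.azurecr.io/azure-vote-front:v1

# Roughly what the "Kubectl apply" task does, authenticated through the new service connection
kubectl apply -f azure-vote-all-in-one-redis.yaml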

Finally, we activated the continuous deployment trigger, which creates a new release any time a new commit is made to the DevOps Repos:

After that, we were ready to test the end-to-end process! To do so, we created a “new release”:

After some processing time, the release pipeline ended successfully, meaning that we were able to integrate Azure DevOps with a Kubernetes 1.24+ cluster, as requested by the client!

We verified that the services and other Kubernetes resources, including the load balancer, were created successfully, and our containerized application was up and running!

To conclude the project, we also tested the scenario where we updated an application configuration file in the DevOps Repos in order to see the deployment trigger working:
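As an illustration, a change as simple as the following is enough to fire the trigger (the repository URL, file name and edit are assumptions):

# Clone the DevOps repository, change a manifest and push the commit
git clone https://dev.azure.com/<organization>/<project>/_git/<repository>
cd <repository>
# edit azure-vote-all-in-one-redis.yaml (e.g. change the replica count of azure-vote-front)
git add azure-vote-all-in-one-redis.yaml
git commit -m "Update azure-vote-front replica count"
git push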

And we were able to verify that a new release was automatically created; therefore, the CI/CD process was in place!

Conclusion and Multi-cloud perspective

As I have highlighted before, Multi-Cloud and DevOps form a strong partnership, and the implementation of a CI/CD process is a key element of it. We can implement a CI/CD process using the services available in all hyperscalers (including integration with third-party tools if needed).

It’s very important to highlight that the implementation of a CI/CD process is not limited to the tool setup. It must also include a strong people change management process to instill the mindset, accountability and teamwork that need to be present in the multidisciplinary teams involved in an agile software development process.

Let’s continue the journey and see you in the following chapters!
