Azure Hosted Control Plane Deployment

Follow these steps to set up a k0smotron-hosted control plane on Azure:

  1. Prerequisites

    Before you start, make sure the general deployment prerequisites are in place, including a working management cluster and an Azure credential (referenced below as azure-credential).

    Note

    All control plane components for managed clusters will run in the management cluster. Make sure the management cluster has sufficient CPU, memory, and storage to handle the additional workload.
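
    As a quick capacity check before deploying, you can inspect current node utilization. This is a minimal sketch that assumes the metrics-server add-on is installed in the management cluster:

    kubectl top nodes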

  2. Gather Pre-existing Resources

    In a hosted control plane setup, some Azure resources must exist before deployment and must be explicitly provided in the ClusterDeployment configuration. In many cases you can simply reuse the corresponding resources of the management cluster.

    If you deployed your Azure Kubernetes cluster using the Cluster API Provider for Azure (CAPZ), you can retrieve the required information with the following commands (a consolidated sketch follows the list):

    Location:

    kubectl get azurecluster <cluster-name> -o go-template='{{.spec.location}}'
    

    Subscription ID:

    kubectl get azurecluster <cluster-name> -o go-template='{{.spec.subscriptionID}}'
    

    Resource Group:

    kubectl get azurecluster <cluster-name> -o go-template='{{.spec.resourceGroup}}'
    

    VNet Name:

    kubectl get azurecluster <cluster-name> -o go-template='{{.spec.networkSpec.vnet.name}}'
    

    Subnet Name:

    kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).name}}'
    

    Route Table Name:

    kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).routeTable.name}}'
    

    Security Group Name:

    kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).securityGroup.name}}'
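
    For convenience, here is a sketch that captures all of these values into shell variables in one pass. It assumes your CAPZ cluster object is named mgmt-cluster, and that subnet index 1 is the node subnet (in the default CAPZ layout, index 0 is the control plane subnet):

    CLUSTER_NAME=mgmt-cluster
    LOCATION=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.location}}')
    SUBSCRIPTION_ID=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.subscriptionID}}')
    RESOURCE_GROUP=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.resourceGroup}}')
    VNET_NAME=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.networkSpec.vnet.name}}')
    SUBNET_NAME=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{(index .spec.networkSpec.subnets 1).name}}')
    ROUTE_TABLE_NAME=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{(index .spec.networkSpec.subnets 1).routeTable.name}}')
    SECURITY_GROUP_NAME=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{(index .spec.networkSpec.subnets 1).securityGroup.name}}')
    echo "$LOCATION $SUBSCRIPTION_ID $RESOURCE_GROUP $VNET_NAME $SUBNET_NAME $ROUTE_TABLE_NAME $SECURITY_GROUP_NAME"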
    

  3. Create the ClusterDeployment Manifest

    After collecting the required data, create a ClusterDeployment manifest to configure the hosted control plane. It should look something like this:

    apiVersion: k0rdent.mirantis.com/v1alpha1
    kind: ClusterDeployment
    metadata:
      name: azure-hosted-cp
    spec:
      template: azure-hosted-cp-0-2-0
      credential: azure-credential
      config:
        clusterLabels: {}
        location: "westus"
        subscriptionID: ceb131c7-a917-439f-8e19-cd59fe247e03
        vmSize: Standard_A4_v2
        resourceGroup: mgmt-cluster
        network:
          vnetName: mgmt-cluster-vnet
          nodeSubnetName: mgmt-cluster-node-subnet
          routeTableName: mgmt-cluster-node-routetable
          securityGroupName: mgmt-cluster-node-nsg
    
  4. Generate the ClusterDeployment Manifest

    To simplify the creation of a ClusterDeployment manifest, you can use the following template, which dynamically inserts the appropriate values:

    apiVersion: k0rdent.mirantis.com/v1alpha1
    kind: ClusterDeployment
    metadata:
      name: azure-hosted-cp
    spec:
      template: azure-hosted-cp-0-2-0
      credential: azure-credential
      config:
        clusterLabels: {}
        location: "{{.spec.location}}"
        subscriptionID: "{{.spec.subscriptionID}}"
        vmSize: Standard_A4_v2
        resourceGroup: "{{.spec.resourceGroup}}"
        network:
          vnetName: "{{.spec.networkSpec.vnet.name}}"
          nodeSubnetName: "{{(index .spec.networkSpec.subnets 1).name}}"
          routeTableName: "{{(index .spec.networkSpec.subnets 1).routeTable.name}}"
          securityGroupName: "{{(index .spec.networkSpec.subnets 1).securityGroup.name}}"
    
    Save this YAML as clusterdeployment.yaml.tpl, then render the manifest with the following command:

    kubectl get azurecluster <management-cluster-name> -o go-template="$(cat clusterdeployment.yaml.tpl)" > clusterdeployment.yaml
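
    Before applying it, you can optionally sanity-check the rendered manifest with a server-side dry run, which validates it against the cluster's API without creating anything (the kcm-system namespace matches the apply step below):

    kubectl apply -f clusterdeployment.yaml -n kcm-system --dry-run=server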
    

  5. Create the ClusterDeployment

    To actually create the cluster, apply the ClusterDeployment manifest to the management cluster, as in:

    kubectl apply -f clusterdeployment.yaml -n kcm-system
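
    Provisioning takes several minutes. One way to follow progress is to watch the ClusterDeployment object; the name azure-hosted-cp here matches the example manifest above:

    kubectl get clusterdeployment azure-hosted-cp -n kcm-system --watch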
    
  6. Manually update the AzureCluster object

    Due to a limitation in k0smotron (see k0sproject/k0smotron#668), after applying the ClusterDeployment manifest you must manually update the status of the AzureCluster object.

    Use the following command to set the AzureCluster object status to Ready:

    kubectl patch azurecluster <cluster-name> --type=merge --subresource status --patch '{"status": {"ready": true}}'
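
    To confirm the patch took effect, you can read the field back with the same go-template approach used earlier; it should print true:

    kubectl get azurecluster <cluster-name> -o go-template='{{.status.ready}}'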
    

Important Notes on Cluster Deletion

Due to this same k0smotron limitation, you must take a few manual steps to delete a cluster properly:

  1. Add a Custom Finalizer to the AzureCluster Object:

    To prevent the AzureCluster object from being deleted too early, add a custom finalizer:

    kubectl patch azurecluster <cluster-name> --type=merge --patch '{"metadata": {"finalizers": ["manual"]}}'
    
  2. Delete the ClusterDeployment:

    After adding the finalizer, delete the ClusterDeployment object as usual. Confirm that all AzureMachine objects have been deleted successfully.
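
    A quick way to check for leftovers is to list the AzureMachine objects in the deployment's namespace (kcm-system in this guide); an empty result means the machines are gone:

    kubectl get azuremachines -n kcm-system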

  3. Remove Finalizers from Orphaned AzureMachines:

    If any AzureMachine objects are left orphaned, remove their finalizers manually after confirming that no corresponding VMs remain in Azure. Use this command to clear the finalizers:

    kubectl patch azuremachine <machine-name> --type=merge --patch '{"metadata": {"finalizers": []}}'
    
  4. Allowing Updates to Orphaned Objects:

    If admission webhooks prevent updates to the orphaned objects, you must disable the associated MutatingWebhookConfiguration by deleting it:

    kubectl delete mutatingwebhookconfiguration <webhook-name>
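
    If you are not sure of the webhook name, list the configurations first; in a CAPZ-based setup the relevant entry typically contains capz in its name, though the exact name depends on your installation:

    kubectl get mutatingwebhookconfiguration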