Deploying a Hosted Control Plane
A hosted control plane is a Kubernetes setup in which the control plane components (such as the API server, etcd, and controllers) run inside the management cluster instead of on dedicated controller nodes. This architecture centralizes control plane management and improves scalability by sharing the management cluster's resources. Hosted control planes are managed by k0smotron.
Instructions for setting up a hosted control plane vary slightly depending on the provider.
AWS Hosted Control Plane Deployment
Follow these steps to set up a k0smotron-hosted control plane on AWS:
- Prerequisites
Before proceeding, make sure you have the following:
- A management Kubernetes cluster (Kubernetes v1.28 or later) deployed on AWS with k0rdent installed.
- A default storage class configured on the management cluster to support Persistent Volumes (see the check below).
- The VPC ID where the worker nodes will be deployed.
- The Subnet ID and Availability Zone (AZ) for the worker nodes.
- The AMI ID for the worker nodes (Amazon Machine Image ID for the desired OS and Kubernetes version).
Important
All control plane components for your hosted cluster will reside in the management cluster, and the management cluster must have sufficient resources to handle these additional workloads.
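You can verify the default storage class prerequisite with a quick check on the management cluster; the default class is marked `(default)` in the output:

```shell
kubectl get storageclass
```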
- Networking
To deploy a hosted control plane, the necessary AWS networking resources must already exist or be created. If you're using the same VPC and subnets as your management cluster, you can reuse these resources.
If your management cluster was deployed using the Cluster API Provider AWS (CAPA), you can gather the required networking details using the following commands:
Retrieve the VPC ID:

```shell
kubectl get awscluster <cluster-name> -o go-template='{{.spec.network.vpc.id}}'
```

Retrieve the Subnet ID:

```shell
kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).resourceID}}'
```

Retrieve the Availability Zone:

```shell
kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}'
```

Retrieve the Security Group:

```shell
kubectl get awscluster <cluster-name> -o go-template='{{.status.networkStatus.securityGroups.node.id}}'
```

Retrieve the AMI ID:

```shell
kubectl get awsmachinetemplate <cluster-name>-worker-mt -o go-template='{{.spec.template.spec.ami.id}}'
```
Tip
If you want to use different VPCs or regions for your management and hosted clusters, you’ll need to configure additional networking, such as VPC peering, to allow communication between them.
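If you find it convenient, you can capture these values in shell variables for later reference. This is just a sketch based on the CAPA commands above; the variable names are illustrative:

```shell
# Management cluster name as known to Cluster API
CLUSTER_NAME=<cluster-name>

VPC_ID=$(kubectl get awscluster "$CLUSTER_NAME" -o go-template='{{.spec.network.vpc.id}}')
SUBNET_ID=$(kubectl get awscluster "$CLUSTER_NAME" -o go-template='{{(index .spec.network.subnets 0).resourceID}}')
AZ=$(kubectl get awscluster "$CLUSTER_NAME" -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}')
SG_ID=$(kubectl get awscluster "$CLUSTER_NAME" -o go-template='{{.status.networkStatus.securityGroups.node.id}}')
AMI_ID=$(kubectl get awsmachinetemplate "${CLUSTER_NAME}-worker-mt" -o go-template='{{.spec.template.spec.ami.id}}')

echo "$VPC_ID $SUBNET_ID $AZ $SG_ID $AMI_ID"
```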
- Create the ClusterDeployment manifest

Once you've collected all the necessary data, you can create the `ClusterDeployment` manifest. This file tells k0rdent how to deploy and manage the hosted control plane. For example:

```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: aws-hosted-cp
spec:
  template: aws-hosted-cp-0-1-0
  credential: aws-credential
  config:
    clusterLabels: {}
    vpcID: vpc-0a000000000000000
    region: us-west-1
    publicIP: true
    subnets:
      - id: subnet-0aaaaaaaaaaaaaaaa
        availabilityZone: us-west-1b
        isPublic: true
        natGatewayID: xxxxxx
        routeTableId: xxxxxx
      - id: subnet-1aaaaaaaaaaaaaaaa
        availabilityZone: us-west-1b
        isPublic: false
        routeTableId: xxxxxx
    instanceType: t3.medium
    securityGroupIDs:
      - sg-0e000000000000000
```
Note

The example above uses the `us-west-1` region, but you should use the region of your VPC.
- Generate the ClusterDeployment manifest

To simplify the creation of a `ClusterDeployment` manifest, you can use the following template, which dynamically inserts the appropriate values:

```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: aws-hosted
spec:
  template: aws-hosted-cp-0-1-0
  credential: aws-credential
  config:
    clusterLabels: {}
    vpcID: "{{.spec.network.vpc.id}}"
    region: "{{.spec.region}}"
    subnets:
      {{- range $subnet := .spec.network.subnets }}
      - id: "{{ $subnet.resourceID }}"
        availabilityZone: "{{ $subnet.availabilityZone }}"
        isPublic: {{ $subnet.isPublic }}
        {{- if $subnet.isPublic }}
        natGatewayId: "{{ $subnet.natGatewayId }}"
        {{- end }}
        routeTableId: "{{ $subnet.routeTableId }}"
      {{- end }}
    instanceType: t3.medium
    securityGroupIDs:
      - "{{.status.networkStatus.securityGroups.node.id}}"
```
Save this template as `clusterdeployment.yaml.tpl`, then generate your manifest using the following command:

```shell
kubectl get awscluster <cluster-name> -o go-template="$(cat clusterdeployment.yaml.tpl)" > clusterdeployment.yaml
```
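Before applying it, you can optionally sanity-check the rendered manifest, for example with a client-side dry run:

```shell
kubectl apply -f clusterdeployment.yaml -n kcm-system --dry-run=client
```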
- Apply the ClusterDeployment

Nothing actually happens until you apply the `ClusterDeployment` manifest to create a new cluster deployment:

```shell
kubectl apply -f clusterdeployment.yaml -n kcm-system
```
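You can then watch the deployment progress. The commands below assume the `ClusterDeployment` is named `aws-hosted-cp` as in the example above (adjust to your `metadata.name`), and that the child cluster kubeconfig is stored in a Secret following the standard Cluster API naming convention:

```shell
# Watch the ClusterDeployment status
kubectl get clusterdeployment aws-hosted-cp -n kcm-system -w

# Once the cluster is ready, retrieve its kubeconfig
# (Secret name and key follow the Cluster API convention: <name>-kubeconfig / value)
kubectl get secret aws-hosted-cp-kubeconfig -n kcm-system -o jsonpath='{.data.value}' | base64 -d > aws-hosted-cp.kubeconfig
```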
Deployment Tips
Here are some additional tips to help with deployment:
- Controller and Template Availability: Make sure the KCM controller image and templates are available in a public or accessible repository.
- Install Charts and Templates: If you're using a custom repository, run the following commands with the appropriate `kubeconfig`:

```shell
KUBECONFIG=kubeconfig IMG="ghcr.io/k0rdent/kcm/controller-ci:v0.0.1-179-ga5bdf29" REGISTRY_REPO="oci://ghcr.io/k0rdent/kcm/charts-ci" make dev-apply
KUBECONFIG=kubeconfig make dev-templates
```
- Mark the Infrastructure as Ready: To scale up the `MachineDeployment`, manually mark the infrastructure as ready:

```shell
kubectl patch AWSCluster <hosted-cluster-name> --type=merge --subresource status --patch '{"status": {"ready": true}}' -n kcm-system
```
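After the patch, you can confirm that the infrastructure reports ready and that worker machines begin provisioning (the namespace is assumed to be `kcm-system`, as in the command above):

```shell
# Should print: true
kubectl get awscluster <hosted-cluster-name> -n kcm-system -o go-template='{{.status.ready}}'

# Worker machines should now start appearing
kubectl get machinedeployments,machines -n kcm-system
```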
Azure Hosted Control Plane Deployment
Follow these steps to set up a k0smotron-hosted control plane on Azure:
- Prerequisites
Before you start, make sure you have the following:
- A management Kubernetes cluster (Kubernetes v1.28+) deployed on Azure with k0rdent installed.
- A default storage class configured on the management cluster to support Persistent Volumes.
Note
All control plane components for managed clusters will run in the management cluster. Make sure the management cluster has sufficient CPU, memory, and storage to handle the additional workload.
- Gather Pre-existing Resources
In a hosted control plane setup, some Azure resources must exist before deployment and must be explicitly provided in the `ClusterDeployment` configuration. These resources can also be reused by the management cluster.

If you deployed your Azure Kubernetes cluster using the Cluster API Provider for Azure (CAPZ), you can retrieve the required information using the following commands:
Location:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.location}}'
```

Subscription ID:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.subscriptionID}}'
```

Resource Group:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.resourceGroup}}'
```

VNet Name:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.networkSpec.vnet.name}}'
```

Subnet Name:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).name}}'
```

Route Table Name:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).routeTable.name}}'
```

Security Group Name:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).securityGroup.name}}'
```
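As with AWS, you can capture these values in shell variables for later use. This is a sketch based on the CAPZ commands above; the variable names are illustrative:

```shell
CLUSTER_NAME=<cluster-name>

LOCATION=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.location}}')
SUBSCRIPTION_ID=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.subscriptionID}}')
RESOURCE_GROUP=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.resourceGroup}}')
VNET_NAME=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{.spec.networkSpec.vnet.name}}')
NODE_SUBNET=$(kubectl get azurecluster "$CLUSTER_NAME" -o go-template='{{(index .spec.networkSpec.subnets 1).name}}')

echo "$LOCATION $SUBSCRIPTION_ID $RESOURCE_GROUP $VNET_NAME $NODE_SUBNET"
```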
- Create the ClusterDeployment manifest

After collecting the required data, create a `ClusterDeployment` manifest to configure the hosted control plane. It should look something like this:

```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: azure-hosted-cp
spec:
  template: azure-hosted-cp-0-1-0
  credential: azure-credential
  config:
    clusterLabels: {}
    location: "westus"
    subscriptionID: ceb131c7-a917-439f-8e19-cd59fe247e03
    vmSize: Standard_A4_v2
    resourceGroup: mgmt-cluster
    network:
      vnetName: mgmt-cluster-vnet
      nodeSubnetName: mgmt-cluster-node-subnet
      routeTableName: mgmt-cluster-node-routetable
      securityGroupName: mgmt-cluster-node-nsg
```
- Generate the ClusterDeployment manifest

To simplify the creation of a `ClusterDeployment` manifest, you can use the following template, which dynamically inserts the appropriate values:

```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: azure-hosted-cp
spec:
  template: azure-hosted-cp-0-1-0
  credential: azure-credential
  config:
    clusterLabels: {}
    location: "{{.spec.location}}"
    subscriptionID: "{{.spec.subscriptionID}}"
    vmSize: Standard_A4_v2
    resourceGroup: "{{.spec.resourceGroup}}"
    network:
      vnetName: "{{.spec.networkSpec.vnet.name}}"
      nodeSubnetName: "{{(index .spec.networkSpec.subnets 1).name}}"
      routeTableName: "{{(index .spec.networkSpec.subnets 1).routeTable.name}}"
      securityGroupName: "{{(index .spec.networkSpec.subnets 1).securityGroup.name}}"
```

Save this YAML as `clusterdeployment.yaml.tpl` and render the manifest with the following command:

```shell
kubectl get azurecluster <management-cluster-name> -o go-template="$(cat clusterdeployment.yaml.tpl)" > clusterdeployment.yaml
```
- Create the ClusterDeployment

To actually create the cluster, apply the `ClusterDeployment` manifest to the management cluster, as in:

```shell
kubectl apply -f clusterdeployment.yaml -n kcm-system
```
- Manually update the AzureCluster object

Due to a limitation in k0smotron (see k0sproject/k0smotron#668), after applying the `ClusterDeployment` manifest, you must manually update the status of the `AzureCluster` object.

Use the following command to set the `AzureCluster` object status to `Ready`:

```shell
kubectl patch azurecluster <cluster-name> --type=merge --subresource status --patch '{"status": {"ready": true}}'
```
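To confirm the patch took effect, you can read the field back with the same go-template approach used earlier; it should print `true`:

```shell
kubectl get azurecluster <cluster-name> -o go-template='{{.status.ready}}'
```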
Important Notes on Cluster Deletion
Due to the same k0smotron limitation, you must take some manual steps to delete a cluster properly:
- Add a Custom Finalizer to the AzureCluster Object: To prevent the `AzureCluster` object from being deleted too early, add a custom finalizer:

```shell
kubectl patch azurecluster <cluster-name> --type=merge --patch '{"metadata": {"finalizers": ["manual"]}}'
```
- Delete the ClusterDeployment: After adding the finalizer, delete the `ClusterDeployment` object as usual. Confirm that all `AzureMachine` objects have been deleted successfully, as shown below.
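A quick way to check for leftover machine objects (assuming the cluster objects live in the `kcm-system` namespace used above):

```shell
kubectl get azuremachine -n kcm-system
```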
- Remove Finalizers from Orphaned AzureMachines: If any `AzureMachine` objects are left orphaned, delete their finalizers manually after confirming no VMs remain in Azure. Use this command to remove the finalizer:

```shell
kubectl patch azuremachine <machine-name> --type=merge --patch '{"metadata": {"finalizers": []}}'
```
- Allowing Updates to Orphaned Objects: If Azure admission controls prevent updates to orphaned objects, you must disable the associated `MutatingWebhookConfiguration` by deleting it:

```shell
kubectl delete mutatingwebhookconfiguration <webhook-name>
```
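If you are not sure which webhook configuration is involved, list them first and look for the one associated with the Azure provider (the exact name depends on your installation):

```shell
kubectl get mutatingwebhookconfigurations
```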
vSphere Hosted Control Plane Deployment
Follow these steps to set up a k0smotron-hosted control plane on vSphere.
- Prerequisites
Before you start, make sure you have the following:
- A management Kubernetes cluster (Kubernetes v1.28+) deployed on vSphere with k0rdent installed.
All control plane components for managed clusters will reside in the management cluster, so make sure the management cluster has sufficient resources (CPU, memory, and storage) to handle these workloads.
- Create the ClusterDeployment manifest

The `ClusterDeployment` manifest for vSphere-hosted control planes is similar to standalone control plane deployments. For a detailed list of parameters, refer to our discussion of Template parameters for vSphere.

Important

The vSphere provider requires you to specify the control plane endpoint IP before deploying the cluster. This IP address must match the one assigned to the k0smotron load balancer (LB) service. Use an annotation supported by your load balancer provider to assign the control plane endpoint IP to the k0smotron service. For example, the manifest below includes a `kube-vip` annotation.

`ClusterDeployment` objects for vSphere-based clusters include a `.spec.config.vsphere` object that contains vSphere-specific parameters. For example:

```yaml
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: cluster-1
spec:
  template: vsphere-hosted-cp-0-1-0
  credential: vsphere-credential
  config:
    clusterLabels: {}
    vsphere:
      server: vcenter.example.com
      thumbprint: "00:00:00"
      datacenter: "DC"
      datastore: "/DC/datastore/DC"
      resourcePool: "/DC/host/vCluster/Resources/ResPool"
      folder: "/DC/vm/example"
    controlPlaneEndpointIP: "172.16.0.10"
    ssh:
      user: ubuntu
      publicKey: |
        ssh-rsa AAA...
    rootVolumeSize: 50
    cpus: 2
    memory: 4096
    vmTemplate: "/DC/vm/template"
    network: "/DC/network/Net"
    k0smotron:
      service:
        annotations:
          kube-vip.io/loadbalancerIPs: "172.16.0.10"
```
For more information on these parameters, see the Template reference for vSphere.
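After the cluster comes up, you can verify that the k0smotron load balancer service actually received the configured control plane endpoint IP. The service name and namespace depend on your deployment, so this is only a generic sketch using the example IP above:

```shell
# Look for a LoadBalancer service exposing the control plane endpoint IP
kubectl get svc -A | grep -E 'LoadBalancer.*172\.16\.0\.10'
```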