# AWS Hosted Control Plane Deployment
Follow these steps to set up a k0smotron-hosted control plane on AWS:
- **Prerequisites**

    Before proceeding, make sure you have the following:

    - A management Kubernetes cluster (Kubernetes v1.28 or later) deployed on AWS with k0rdent installed.
    - A default storage class configured on the management cluster to support Persistent Volumes.
    - The VPC ID where the worker nodes will be deployed.
    - The Subnet ID and Availability Zone (AZ) for the worker nodes.
    - The AMI ID for the worker nodes (the Amazon Machine Image ID for the desired OS and Kubernetes version).

    > **Important:** All control plane components for your hosted cluster will reside in the management cluster, so the management cluster must have sufficient resources to handle these additional workloads.
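
    As an optional sanity check before you continue, you can confirm that a default storage class is configured. This is a minimal sketch and assumes your current kubeconfig context points at the management cluster:

    ```bash
    # Verify that a default storage class exists on the management cluster;
    # look for "(default)" next to one of the entries in the output.
    kubectl get storageclass
    ```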
- **Networking**

    To deploy a hosted control plane, the necessary AWS networking resources must already exist or be created. If you're using the same VPC and subnets as your management cluster, you can reuse these resources.

    If your management cluster was deployed using the Cluster API Provider AWS (CAPA), you can gather the required networking details using the following commands.

    Retrieve the VPC ID:

    ```bash
    kubectl get awscluster <cluster-name> -o go-template='{{.spec.network.vpc.id}}'
    ```

    Retrieve the Subnet ID:

    ```bash
    kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).resourceID}}'
    ```

    Retrieve the Availability Zone:

    ```bash
    kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}'
    ```

    Retrieve the Security Group:

    ```bash
    kubectl get awscluster <cluster-name> -o go-template='{{.status.networkStatus.securityGroups.node.id}}'
    ```

    Retrieve the AMI ID:

    ```bash
    kubectl get awsmachinetemplate <cluster-name>-worker-mt -o go-template='{{.spec.template.spec.ami.id}}'
    ```

    > **Tip:** If you want to use different VPCs or regions for your management and hosted clusters, you'll need to configure additional networking, such as VPC peering, to allow communication between them.
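
    If you prefer to collect these values in one pass, the following sketch stores them in shell variables for later use. It simply reuses the go-templates shown above and assumes a CAPA-managed cluster whose name you substitute for `<cluster-name>`:

    ```bash
    # Capture the networking details into shell variables for reuse when
    # filling in the ClusterDeployment manifest in the next step.
    CLUSTER=<cluster-name>
    VPC_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{.spec.network.vpc.id}}')
    SUBNET_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{(index .spec.network.subnets 0).resourceID}}')
    AZ=$(kubectl get awscluster "$CLUSTER" -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}')
    SG_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{.status.networkStatus.securityGroups.node.id}}')
    AMI_ID=$(kubectl get awsmachinetemplate "$CLUSTER"-worker-mt -o go-template='{{.spec.template.spec.ami.id}}')
    echo "$VPC_ID $SUBNET_ID $AZ $SG_ID $AMI_ID"
    ```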
- **Create the ClusterDeployment manifest**

    Once you've collected all the necessary data, you can create the `ClusterDeployment` manifest. This file tells k0rdent how to deploy and manage the hosted control plane. For example:

    ```yaml
    apiVersion: k0rdent.mirantis.com/v1beta1
    kind: ClusterDeployment
    metadata:
      name: aws-hosted-cp
    spec:
      template: aws-hosted-cp-1-0-16
      credential: aws-credential
      config:
        managementClusterName: aws
        clusterLabels: {}
        vpcID: vpc-0a000000000000000
        region: us-west-1
        publicIP: true
        subnets:
          - id: subnet-0aaaaaaaaaaaaaaaa
            availabilityZone: us-west-1b
            isPublic: true
            natGatewayID: xxxxxx
            routeTableId: xxxxxx
          - id: subnet-1aaaaaaaaaaaaaaaa
            availabilityZone: us-west-1b
            isPublic: false
            routeTableId: xxxxxx
        instanceType: t3.medium
        rootVolumeSize: 32
        securityGroupIDs:
          - sg-0e000000000000000
    ```

    > **Note:** The example above uses the us-west-1 region, but you should use the region of your VPC.

    Alternatively, you can generate the `ClusterDeployment` manifest. If the management cluster you prepared in the first step was deployed using k0rdent or the CAPI AWS provider, you can simplify the creation of the manifest by using the following template, which dynamically inserts the appropriate values:

    ```yaml
    apiVersion: k0rdent.mirantis.com/v1beta1
    kind: ClusterDeployment
    metadata:
      name: aws-hosted
    spec:
      template: aws-hosted-cp-1-0-16
      credential: aws-credential
      config:
        managementClusterName: "{{.metadata.name}}"
        clusterLabels: {}
        vpcID: "{{.spec.network.vpc.id}}"
        region: "{{.spec.region}}"
        subnets:
        {{- range $subnet := .spec.network.subnets }}
          - id: "{{ $subnet.resourceID }}"
            availabilityZone: "{{ $subnet.availabilityZone }}"
            isPublic: {{ $subnet.isPublic }}
            {{- if $subnet.isPublic }}
            natGatewayId: "{{ $subnet.natGatewayId }}"
            {{- end }}
            routeTableId: "{{ $subnet.routeTableId }}"
        {{- end }}
        instanceType: t3.medium
        rootVolumeSize: 32
        securityGroupIDs:
          - "{{.status.networkStatus.securityGroups.node.id}}"
    ```

    Save this template as `clusterdeployment.yaml.tpl`, then generate your manifest using the following command:

    ```bash
    kubectl get awscluster <cluster-name> -o go-template="$(cat clusterdeployment.yaml.tpl)" > clusterdeployment.yaml
    ```

    Or, if the management cluster is deployed on EKS, use the following command:

    ```bash
    kubectl get awsmanagedcontrolplane <cluster-name> -o go-template="$(cat clusterdeployment.yaml.tpl)" > clusterdeployment.yaml
    ```

    > **Note:** For EKS management clusters, update the `spec.config.managementClusterName` parameter as described in Setting the managementClusterName parameter.
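
    Before applying the manifest, you may also want to confirm that the referenced template and credential exist in the management cluster. This is an optional check, sketched under the assumption that the names from the example manifest and the kcm-system namespace used elsewhere in this guide apply:

    ```bash
    # Confirm the referenced cluster template and credential are present
    kubectl get clustertemplate aws-hosted-cp-1-0-16 -n kcm-system
    kubectl get credential aws-credential -n kcm-system
    ```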
- **Apply the ClusterDeployment**

    Nothing actually happens until you apply the `ClusterDeployment` manifest to create a new cluster deployment:

    ```bash
    kubectl apply -f clusterdeployment.yaml -n kcm-system
    ```
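
    To follow the deployment's progress, a minimal sketch such as the following watches the object until it reports ready; the name assumes the `aws-hosted-cp` example above:

    ```bash
    # Watch the ClusterDeployment status until the hosted cluster is ready
    kubectl get clusterdeployment aws-hosted-cp -n kcm-system --watch
    ```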
## Deployment Tips
Here are some additional tips to help with deployment:
- **Controller and Template Availability:** Make sure the KCM controller image and templates are available in a public or otherwise accessible repository.
- **Install Charts and Templates:** If you're using a custom repository, run the following commands with the appropriate kubeconfig:

    ```bash
    KUBECONFIG=kubeconfig IMG="ghcr.io/k0rdent/kcm/controller-ci:v0.0.1-179-ga5bdf29" REGISTRY_REPO="oci://ghcr.io/k0rdent/kcm/charts-ci" make dev-apply
    KUBECONFIG=kubeconfig make dev-templates
    ```
- **Mark the Infrastructure as Ready:** To scale up the MachineDeployment, manually mark the infrastructure as ready:

    ```bash
    kubectl patch AWSCluster <hosted-cluster-name> --type=merge --subresource status --patch '{"status": {"ready": true}}' -n kcm-system
    ```

    For more details on why this is necessary, click here.
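
    After patching, you can verify that the status change took effect. This is an optional check that reuses the same object name as the patch command above:

    ```bash
    # Confirm the AWSCluster now reports ready=true in its status
    kubectl get awscluster <hosted-cluster-name> -n kcm-system -o go-template='{{.status.ready}}'
    ```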