# GCP

Available in k0rdent 0.2.0 and later.
Standalone clusters can be deployed on GCP instances. Follow these steps to make GCP clusters available to your users:
- Install k0rdent

  Follow the instructions in Install k0rdent to create a management cluster with k0rdent running.
- Install the gcloud CLI

  The gcloud CLI (`gcloud`) is required to interact with GCP resources. You can install it by following the Install the gcloud CLI instructions.
- Log in to GCP

  Authenticate your session with GCP:

  ```shell
  gcloud auth login
  ```
- Enable the required APIs for your Google Cloud project (if they weren't previously enabled)

  The APIs to enable depend on how you plan to deploy k0rdent:

  - Standalone/hosted GCP clusters: enable the `Compute Engine API`.
  - GKE clusters: enable the `Compute Engine API` and the `Kubernetes Engine API`.

  To enable the `Compute Engine API` using the Google Cloud Console (UI):

  - Go to the Google Cloud Console.
  - Select your project: in the top navigation bar, click the project selector (drop-down menu) and choose the project where you want to enable the `Compute Engine API`.
  - Navigate to the API Library: click the Navigation Menu in the upper-left corner and select `APIs & Services` → `Library`.
  - Search for the Compute Engine API: in the API Library, type `Compute Engine API` in the search bar and press Enter.
  - Enable the API: click `Compute Engine API` in the search results, then click the `Enable` button.
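If you prefer the CLI to the Console, the same APIs can be enabled with `gcloud services enable`. A sketch, assuming an authenticated gcloud session; `MY_PROJECT` is a placeholder for your project ID:

```shell
# Enable the Compute Engine API (required for all deployment modes).
gcloud services enable compute.googleapis.com --project MY_PROJECT

# For GKE clusters, also enable the Kubernetes Engine API:
gcloud services enable container.googleapis.com --project MY_PROJECT
```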
- Create a GCP Service Account

  Note: Skip this step if the Service Account is already configured.

  Follow the GCP Service Account creation guide and create a new service account with `Editor` permissions. If you plan to deploy GKE, the Service Account will also need the `iam.serviceAccountTokenCreator` role.
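The account and role bindings can also be created from the CLI. A sketch using placeholder names (`k0rdent-sa`, `MY_PROJECT`):

```shell
# Create the service account (k0rdent-sa is a placeholder name).
gcloud iam service-accounts create k0rdent-sa \
    --project MY_PROJECT \
    --display-name "k0rdent service account"

# Grant it the Editor role on the project.
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member "serviceAccount:k0rdent-sa@MY_PROJECT.iam.gserviceaccount.com" \
    --role roles/editor

# Only needed if you plan to deploy GKE:
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member "serviceAccount:k0rdent-sa@MY_PROJECT.iam.gserviceaccount.com" \
    --role roles/iam.serviceAccountTokenCreator
```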
- Generate a JSON Key for the GCP Service Account

  Note: Skip this step if you're going to use an existing key.

  Follow the Create a service account key guide and create a new key with the JSON key type. A JSON file will automatically download to your computer. Keep it somewhere safe.

  An example of the JSON file:

  ```json
  {
    "type": "service_account",
    "project_id": "GCP_PROJECT_ID",
    "private_key_id": "GCP_PRIVATE_KEY_ID",
    "private_key": "-----BEGIN PRIVATE KEY-----\nGCP_PRIVATE_KEY\n-----END PRIVATE KEY-----\n",
    "client_email": "name@project_id.iam.gserviceaccount.com",
    "client_id": "GCP_CLIENT_ID",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/user%40project_id.iam.gserviceaccount.com",
    "universe_domain": "googleapis.com"
  }
  ```
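The key can also be created from the CLI instead of the Console. A sketch, where the output filename `gcp-credentials.json` and the service account address are placeholders:

```shell
# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create gcp-credentials.json \
    --iam-account k0rdent-sa@MY_PROJECT.iam.gserviceaccount.com
```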
- Create a `Secret` object

  Create a `Secret` object that stores the credentials under the `data` section. Create a YAML file called `gcp-cluster-identity-secret.yaml`, as follows, inserting the base64-encoded GCP credentials (represented by the placeholder `GCP_B64ENCODED_CREDENTIALS` below) that you got in the previous step.

  To get the base64-encoded credentials, run:

  ```shell
  cat <gcpJSONCredentialsFileName> | base64 -w 0
  ```

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: gcp-cloud-sa
    namespace: kcm-system
    labels:
      k0rdent.mirantis.com/component: "kcm"
  data:
    # the secret key should always equal `credentials`
    credentials: GCP_B64ENCODED_CREDENTIALS
  type: Opaque
  ```

  You can then apply the YAML to your cluster:

  ```shell
  kubectl apply -f gcp-cluster-identity-secret.yaml
  ```
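The encode-and-paste step is easy to get wrong, so as a sketch, the whole `Secret` manifest can be rendered from the key file in one go. This assumes the key was saved as `gcp-credentials.json` (a hypothetical filename) and GNU coreutils `base64`:

```shell
# For illustration only: create a placeholder key file if none exists.
# In real use this is the JSON key downloaded from GCP.
[ -f gcp-credentials.json ] || echo '{"type": "service_account"}' > gcp-credentials.json

# Base64-encode the key without line wrapping.
B64=$(base64 -w 0 < gcp-credentials.json)

# Render the Secret manifest with the encoded credentials filled in.
cat > gcp-cluster-identity-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: gcp-cloud-sa
  namespace: kcm-system
  labels:
    k0rdent.mirantis.com/component: "kcm"
data:
  credentials: ${B64}
type: Opaque
EOF

echo "wrote gcp-cluster-identity-secret.yaml"
```

The generated file can then be applied with `kubectl apply -f gcp-cluster-identity-secret.yaml` as above.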
- Create the k0rdent `Credential` object

  Create a YAML document with the specification of the `Credential` and save it as `gcp-cluster-identity-cred.yaml`. Note that `.spec.identityRef.name` must match `.metadata.name` of the `Secret` object created in the previous step.

  ```yaml
  apiVersion: k0rdent.mirantis.com/v1beta1
  kind: Credential
  metadata:
    name: gcp-credential
    namespace: kcm-system
  spec:
    identityRef:
      apiVersion: v1
      kind: Secret
      name: gcp-cloud-sa
      namespace: kcm-system
  ```

  Apply the YAML to your cluster:

  ```shell
  kubectl apply -f gcp-cluster-identity-cred.yaml
  ```

  You should see output of:

  ```
  credential.k0rdent.mirantis.com/gcp-credential created
  ```
- Create the `ConfigMap` resource-template object

  Create a YAML file with the specification of the resource-template and save it as `gcp-cloud-sa-resource-template.yaml`:

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: gcp-cloud-sa-resource-template
    namespace: kcm-system
    labels:
      k0rdent.mirantis.com/component: "kcm"
    annotations:
      projectsveltos.io/template: "true"
  data:
    configmap.yaml: |
      {{- $secret := (getResource "InfrastructureProviderIdentity") -}}
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: gcp-cloud-sa
        namespace: kube-system
      type: Opaque
      data:
        cloud-sa.json: {{ index $secret "data" "credentials" }}
  ```

  The object name needs to be exactly `gcp-cloud-sa-resource-template` (the credentials `Secret` object name plus the `-resource-template` string suffix).

  Apply the YAML to your cluster:

  ```shell
  kubectl apply -f gcp-cloud-sa-resource-template.yaml
  ```

  ```
  configmap/gcp-cloud-sa-resource-template created
  ```
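The naming convention above is mechanical, so it can be derived rather than typed by hand. A quick sketch of the rule (`gcp-cloud-sa` comes from the earlier `Secret`):

```shell
# The resource-template ConfigMap name is always the credentials Secret
# name with the "-resource-template" suffix appended.
SECRET_NAME=gcp-cloud-sa
TEMPLATE_NAME="${SECRET_NAME}-resource-template"
echo "$TEMPLATE_NAME"   # prints: gcp-cloud-sa-resource-template
```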
Now you're ready to deploy the cluster.
- Create a `ClusterDeployment`

  To test the configuration, deploy a child cluster by following these steps.

  First, get a list of available regions:

  ```shell
  gcloud compute regions list
  ```

  You'll see output like this:

  ```
  NAME              CPUS    DISKS_GB  ADDRESSES  RESERVED_ADDRESSES  STATUS  TURNDOWN_DATE
  africa-south1     0/300   0/102400  0/575      0/175               UP
  asia-east1        0/3000  0/102400  0/575      0/175               UP
  asia-east2        0/1500  0/102400  0/575      0/175               UP
  asia-northeast1   0/1500  0/102400  0/575      0/175               UP
  asia-northeast2   0/750   0/102400  0/575      0/175               UP
  ...
  ```

  Make note of the region you want to use, such as `us-east4`.

  To create the actual child cluster, create a `ClusterDeployment` that references the appropriate template as well as the region, credentials, and cluster configuration. You can see the available templates by listing them:

  ```shell
  kubectl get clustertemplate -n kcm-system
  ```

  ```
  NAME                            VALID
  adopted-cluster-1-1-1           true
  aws-eks-1-0-2                   true
  aws-hosted-cp-1-0-8             true
  aws-standalone-cp-1-0-9         true
  azure-aks-1-0-1                 true
  azure-hosted-cp-1-0-7           true
  azure-standalone-cp-1-0-7       true
  openstack-standalone-cp-1-0-8   true
  vsphere-hosted-cp-1-0-7         true
  vsphere-standalone-cp-1-0-8     true
  gcp-standalone-cp-1-0-8         true
  gcp-gke-1-0-2                   true
  ```

  Create the YAML for the `ClusterDeployment` and save it as `my-gcp-clusterdeployment1.yaml`:

  ```yaml
  apiVersion: k0rdent.mirantis.com/v1beta1
  kind: ClusterDeployment
  metadata:
    name: my-gcp-clusterdeployment1
    namespace: kcm-system
  spec:
    template: gcp-standalone-cp-1-0-8
    credential: gcp-credential
    config:
      project: PROJECT_NAME # Your project name
      region: "GCP_REGION" # Select your desired GCP region (find it via `gcloud compute regions list`)
      network:
        name: default # Select your desired network name (specify a new name to create a network, or find an existing one via `gcloud compute networks list --format="value(name)"`)
      controlPlane:
        instanceType: n1-standard-2 # Select your desired instance type (find it via `gcloud compute machine-types list | grep REGION`)
        image: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20250213 # Select the image (find it via `gcloud compute images list --uri`)
        publicIP: true
      worker:
        instanceType: n1-standard-2
        image: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20250213
        publicIP: true
  ```

  Apply the YAML to your management cluster:

  ```shell
  kubectl apply -f my-gcp-clusterdeployment1.yaml
  ```

  ```
  clusterdeployment.k0rdent.mirantis.com/my-gcp-clusterdeployment1 created
  ```

  Note that although the `ClusterDeployment` object has been created, there will be a delay as actual GCP instances are provisioned and added to the cluster. You can follow the provisioning process:

  ```shell
  kubectl -n kcm-system get clusterdeployment.k0rdent.mirantis.com my-gcp-clusterdeployment1 --watch
  ```

  After the cluster is `Ready`, you can access it via the kubeconfig:

  ```shell
  kubectl -n kcm-system get secret my-gcp-clusterdeployment1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > my-gcp-clusterdeployment1-kubeconfig.kubeconfig
  KUBECONFIG="my-gcp-clusterdeployment1-kubeconfig.kubeconfig" kubectl get pods -A
  ```
- Cleanup

  To clean up GCP resources, delete the child cluster by deleting the `ClusterDeployment`:

  ```shell
  kubectl get clusterdeployments -A
  ```

  ```
  NAMESPACE    NAME                        READY   STATUS
  kcm-system   my-gcp-clusterdeployment1   True    ClusterDeployment is ready
  ```

  ```shell
  kubectl delete clusterdeployments my-gcp-clusterdeployment1 -n kcm-system
  ```

  ```
  clusterdeployment.k0rdent.mirantis.com "my-gcp-clusterdeployment1" deleted
  ```