QuickStart 2 - GCP target environment#
Note
GCP cluster deployment is supported in K0rdent 0.2.0 and above.
In this QuickStart unit, we'll gather information and perform preparatory steps to enable k0rdent (running on your management node) to manage clusters on GCP, and then deploy a child cluster.
Note that if you have already completed the AWS QuickStart (QuickStart 2 - AWS target environment) or the Azure QuickStart (QuickStart 2 - Azure target environment), you can use the same management cluster and continue here with the steps to add the ability to manage clusters on GCP. The k0rdent management cluster can accommodate multiple provider and credential setups, enabling management of multiple infrastructures. And even if your management node is external to GCP (for example, it could be on an AWS EC2 virtual machine), as long as you permit outbound traffic to all IP addresses from the management node, this should work fine. A big benefit of k0rdent is that it provides a single point of control and visibility across multiple clusters on multiple clouds and infrastructures.
The GCP provider is available starting with the k0rdent 0.2.0 release, and it is not enabled by default. To enable the GCP provider, edit the Management object, add cluster-api-provider-gcp to the spec.providers list, and wait a couple of minutes for the GCP components to start.
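A minimal sketch of that change, assuming the default Management object is named kcm and that each spec.providers entry is an object with a name field (check your installation before editing):
kubectl edit management kcm
Then append the GCP provider to the existing list:
spec:
  providers:
    # ... existing providers ...
    - name: cluster-api-provider-gcp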
Install the gcloud CLI#
Follow the Install the gcloud CLI instructions to install the gcloud CLI on your local machine.
Authenticate in GCP cloud#
gcloud auth login
By default, this command will obtain access credentials for your user account via a web-based authorization flow.
Set the project in which you want to deploy a cluster. Run:
gcloud config set project PROJECT_ID
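If you want to confirm which account and project gcloud will use before proceeding, these standard commands print the active configuration:
gcloud auth list
gcloud config get-value project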
Create a GCP Service Account#
Note
Skip this step if the Service Account is already configured.
Follow the GCP Service Account creation guide and create a new service account with Editor permissions. If you plan to deploy GKE, the Service Account will also need the iam.serviceAccountTokenCreator role.
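If you prefer the CLI to the console, roughly equivalent commands look like this (the service account name k0rdent-sa is only an example, and PROJECT_ID is your project):
gcloud iam service-accounts create k0rdent-sa --display-name="k0rdent service account"
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:k0rdent-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/editor"
# for GKE, additionally grant roles/iam.serviceAccountTokenCreator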
Generate JSON Key for the GCP Service Account#
Note
Skip this step if you're going to use an existing key.
Follow the Create a service account key guide and create a new key with the JSON key type.
A JSON file will be automatically downloaded to your computer. You should keep it somewhere safe.
An example of the JSON file:
{
  "type": "service_account",
  "project_id": "project_id",
  "private_key_id": "akdof8v8s6n39n29251be52cabbb3984c259f1",
  "private_key": "-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n",
  "client_email": "name@project_id.iam.gserviceaccount.com",
  "client_id": "123456778",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/user%40project_id.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}
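If you are working from the CLI instead of the console, a key with the same JSON format can be generated with gcloud (the file name and service account email below are illustrative):
gcloud iam service-accounts keys create gcp-credentials.json \
    --iam-account=k0rdent-sa@PROJECT_ID.iam.gserviceaccount.com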
Create a Secret object with the GCP (GKE) credentials#
Create a Secret object that stores the credentials field under the data section. Create a YAML file called gcp-cloud-sa.yaml, as follows, inserting the base64-encoded GCP credentials (represented by the placeholder GCP_B64ENCODED_CREDENTIALS below) that you obtained in the previous step.
To get base64 encoded credentials, run:
cat <gcpJSONCredentialsFileName> | base64
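Note that GNU base64 (typical on Linux) wraps its output at 76 characters, and the Secret value must be a single line; if your base64 wraps, disable wrapping (macOS base64 does not wrap and does not accept -w):
cat <gcpJSONCredentialsFileName> | base64 -w0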
apiVersion: v1
kind: Secret
metadata:
  name: gcp-cloud-sa
  namespace: kcm-system
  labels:
    k0rdent.mirantis.com/component: "kcm"
data:
  # the secret key should always equal `credentials`
  credentials: GCP_B64ENCODED_CREDENTIALS
type: Opaque
Apply the YAML to your cluster:
kubectl apply -f gcp-cloud-sa.yaml
You should see output resembling this:
secret/gcp-cloud-sa created
Create the KCM Credential Object#
Create a YAML with the specification of our credential and save it as gcp-credential.yaml. Note that .spec.identityRef.name must match .metadata.name of the Secret object created in the previous step.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: Credential
metadata:
  name: gcp-credential
  namespace: kcm-system
spec:
  identityRef:
    apiVersion: v1
    kind: Secret
    name: gcp-cloud-sa
    namespace: kcm-system
kubectl apply -f gcp-credential.yaml
You should see output resembling this:
credential.k0rdent.mirantis.com/gcp-credential created
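Before moving on, you can optionally confirm that k0rdent has accepted the credential; the exact columns vary between k0rdent versions, but the object should eventually report that it is ready:
kubectl get credential gcp-credential -n kcm-system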
Create the ConfigMap resource-template Object#
Create a YAML with the specification of our resource-template and save it as gcp-cloud-sa-resource-template.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gcp-cloud-sa-resource-template
  namespace: kcm-system
  labels:
    k0rdent.mirantis.com/component: "kcm"
  annotations:
    projectsveltos.io/template: "true"
data:
  configmap.yaml: |
    {{- $secret := (getResource "InfrastructureProviderIdentity") -}}
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: gcp-cloud-sa
      namespace: kube-system
    type: Opaque
    data:
      cloud-sa.json: {{ index $secret "data" "credentials" }}
The object name needs to be exactly gcp-cloud-sa-resource-template (the credentials Secret object name plus the -resource-template suffix).
Apply the YAML to your cluster:
kubectl apply -f gcp-cloud-sa-resource-template.yaml
You should see output resembling this:
configmap/gcp-cloud-sa-resource-template created
Find your location/region#
To determine where to deploy your cluster, you may wish to begin by listing your GCP regions:
gcloud compute regions list
You'll see output like this:
NAME              CPUS     DISKS_GB   ADDRESSES   RESERVED_ADDRESSES   STATUS   TURNDOWN_DATE
africa-south1     0/300    0/102400   0/575       0/175                UP
asia-east1        0/3000   0/102400   0/575       0/175                UP
asia-east2        0/1500   0/102400   0/575       0/175                UP
asia-northeast1   0/1500   0/102400   0/575       0/175                UP
asia-northeast2   0/750    0/102400   0/575       0/175                UP
. . .
What you'll need to insert in your ClusterDeployment is the name (first column) of the region you wish to deploy to.
(optional) Find or create your network#
If you want to deploy into an existing network, obtain all network names using the gcloud CLI and choose one of those available:
gcloud compute networks list --format="value(name)"
If you prefer to create a new network, follow these instructions.
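For a quick test environment, an auto-mode VPC can also be created directly from the CLI (the network name k0rdent-quickstart-net is just an example; see the linked guide for custom-mode networks and firewall rules):
gcloud compute networks create k0rdent-quickstart-net --subnet-mode=auto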
Determine the instance type#
Find available machine types and their parameters by running (replace REGION with the region where you'd like your cluster to live):
gcloud compute machine-types list | grep REGION
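If you already know your region, filtering by one of its zones avoids the grep (us-central1-a is only an example zone):
gcloud compute machine-types list --zones us-central1-a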
Find available images#
To list the images available to your GCP project (including GCP's public image projects), run:
gcloud compute images list --uri
You'll see output like this:
https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-stream-9-arm64-v20250311
https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-stream-9-v20250311
https://www.googleapis.com/compute/v1/projects/cos-cloud/global/images/cos-105-17412-535-84
https://www.googleapis.com/compute/v1/projects/cos-cloud/global/images/cos-109-17800-436-79
https://www.googleapis.com/compute/v1/projects/cos-cloud/global/images/cos-113-18244-291-82
. . .
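Because the ClusterDeployment example below uses an Ubuntu 20.04 image, you may want to narrow the listing; a gcloud filter expression is one way to do that (the family pattern here is an assumption, so adjust it as needed):
gcloud compute images list --filter="family~ubuntu-2004" --uri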
To build your own image, follow the Building images instructions.
List available cluster templates#
k0rdent is now fully configured to manage GCP. To create a cluster, begin by listing the available ClusterTemplates provided with k0rdent:
kubectl get clustertemplate -n kcm-system
You'll see output resembling what's below. Grab the name of the GCP standalone cluster template in its present version (in the example below, that's gcp-standalone-cp-0-2-0):
NAMESPACE    NAME                            VALID
kcm-system   adopted-cluster-0-2-0           true
kcm-system   aws-eks-0-2-0                   true
kcm-system   aws-hosted-cp-0-2-0             true
kcm-system   aws-standalone-cp-0-2-0         true
kcm-system   azure-aks-0-2-0                 true
kcm-system   azure-hosted-cp-0-2-0           true
kcm-system   azure-standalone-cp-0-2-0       true
kcm-system   openstack-standalone-cp-0-2-0   true
kcm-system   vsphere-hosted-cp-0-2-0         true
kcm-system   vsphere-standalone-cp-0-2-0     true
kcm-system   gcp-standalone-cp-0-2-0         true
kcm-system   gcp-gke-0-2-0                   true
Create your ClusterDeployment#
Now, to deploy a cluster, create a YAML file called my-gcp-clusterdeployment1.yaml. We'll use this to create a ClusterDeployment object in k0rdent, representing the deployed cluster. The ClusterDeployment identifies for k0rdent the ClusterTemplate you want to use for cluster creation, the identity credential object you want to create it under, and the GCP project, region, network name, machine types, and images you want to use for the control plane and worker nodes.
Note
If you decide to deploy a cluster with publicIP: false, you should make sure the cluster can communicate with the outside world and the load balancer. To achieve this, you can create a Cloud NAT in the region where your cluster will live by following the Set up and manage network address translation with Public NAT instructions.
Note
The image parameter must be the fully qualified GCP image path (for example, projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20250213); otherwise, the GCP cloud controller manager will not find it.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: my-gcp-clusterdeployment1
  namespace: kcm-system
spec:
  template: gcp-standalone-cp-0-2-0 # name of the clustertemplate
  credential: gcp-credential
  config:
    project: "GCP_PROJECT_NAME"
    region: "GCP_REGION"
    network:
      name: "GCP_NETWORK_NAME" # The name of an existing network or a new network to be created by Cluster API Provider GCP
    controlPlane:
      instanceType: "CP_MACHINE_TYPE"
      image: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20250213
      publicIP: true
    worker:
      instanceType: "WORKER_MACHINE_TYPE"
      image: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20250213
      publicIP: true
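Optionally, before applying the ClusterDeployment in the next step, you can ask the API server to validate it without creating anything:
kubectl apply -f my-gcp-clusterdeployment1.yaml --dry-run=server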
Apply the ClusterDeployment to deploy the cluster#
Finally, we'll apply the ClusterDeployment YAML (my-gcp-clusterdeployment1.yaml
) to instruct k0rdent to deploy the cluster:
kubectl apply -f my-gcp-clusterdeployment1.yaml
Kubernetes should confirm this:
clusterdeployment.k0rdent.mirantis.com/my-gcp-clusterdeployment1 created
There will be a delay as the cluster finishes provisioning. Follow the provisioning process with the following command:
kubectl -n kcm-system get clusterdeployment.k0rdent.mirantis.com my-gcp-clusterdeployment1 --watch
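If provisioning seems stuck, the underlying Cluster API objects that k0rdent creates on your behalf usually show where things stand (these resource types come from Cluster API and the GCP infrastructure provider):
kubectl -n kcm-system get clusters.cluster.x-k8s.io,machines.cluster.x-k8s.io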
Obtain the cluster's kubeconfig#
Now you can retrieve the cluster's kubeconfig:
kubectl -n kcm-system get secret my-gcp-clusterdeployment1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > my-gcp-clusterdeployment1-kubeconfig.kubeconfig
And you can use the kubeconfig to see what's running on the cluster:
KUBECONFIG="my-gcp-clusterdeployment1-kubeconfig.kubeconfig" kubectl get pods -A
List child clusters#
To verify the presence of the child cluster, list the available ClusterDeployment
objects:
kubectl get ClusterDeployments -A
NAMESPACE    NAME                        READY   STATUS
kcm-system   my-gcp-clusterdeployment1   True    ClusterDeployment is ready
Tear down the child cluster#
To tear down the child cluster, delete the ClusterDeployment:
kubectl delete ClusterDeployment my-gcp-clusterdeployment1 -n kcm-system
clusterdeployment.k0rdent.mirantis.com "my-gcp-clusterdeployment1" deleted
Next Steps#
Now that you've finished the k0rdent QuickStart, we have some suggestions for what to do next:
Check out the Administrator Guide ...
- For a more detailed view of k0rdent setup for production
- For details about setting up k0rdent to manage clusters on VMware and OpenStack
- For details about using k0rdent with cloud Kubernetes distros: AWS EKS and Azure AKS
Or check out the Demos Repository for fast, makefile-driven demos of k0rdent's key features!