
QuickStart 2 - KubeVirt#

Note

The KubeVirt provider is available starting from k0rdent 1.7.0. See Enable KubeVirt Provider for instructions on how to enable the KubeVirt provider on an existing management cluster after upgrading to 1.7.0.

k0rdent supports using KubeVirt to create and manage clusters backed by KubeVirt virtual machines (VMs). This QuickStart guides you through the process of creating a cluster using KubeVirt VMs as the target machines.

If you haven't yet created a management node and installed k0rdent, go back to QuickStart 1 - Management node and cluster.

Note that if you have already done one of the other quickstarts, such as the AWS quickstart (QuickStart 2 - AWS target environment) or the Azure quickstart (QuickStart 2 - Azure target environment), you can use the same management cluster, continuing here with steps to add the ability to manage KubeVirt clusters. The k0rdent management cluster can accommodate multiple provider and credential setups, enabling management of multiple infrastructures. A big benefit of k0rdent is that it provides a single point of control and visibility across multiple clusters on multiple clouds and infrastructures.

Prerequisites#

You must have a KubeVirt Infrastructure Cluster already set up and running. This cluster hosts the KubeVirt VMs that form the target cluster. Follow the instructions in KubeVirt Infrastructure Cluster Preparation to prepare the KubeVirt Infrastructure Cluster.

Create a Secret object containing the KubeVirt Infrastructure Cluster kubeconfig#

Create a Secret object to store the KubeVirt Infrastructure Cluster kubeconfig under the key kubeconfig. The secret must be created in the same namespace where the cluster is going to be deployed. Start by setting the following environment variables.

# Setup Environment
SECRET_NAME=kubevirt-kubeconfig
CLUSTER_NAMESPACE=kcm-system
CREDENTIAL_NAME=kubevirt-cred
RESOURCE_TEMPLATE_NAME=kubevirt-kubeconfig-resource-template
CLUSTER_DEPLOYMENT_NAME=my-kubevirt-clusterdeployment1
KUBECONFIG_PATH=/path/to/kubevirt-infra-cluster.kubeconfig
KUBEVIRT_INFRA_CLUSTER_KUBECONFIG_B64=$(base64 -w 0 < "$KUBECONFIG_PATH")

Now create the kubevirt-kubeconfig-secret.yaml file:

cat > kubevirt-kubeconfig-secret.yaml << EOF
apiVersion: v1
data:
  kubeconfig: $KUBEVIRT_INFRA_CLUSTER_KUBECONFIG_B64
kind: Secret
metadata:
  name: $SECRET_NAME
  namespace: $CLUSTER_NAMESPACE
  labels:
    k0rdent.mirantis.com/component: "kcm"
type: Opaque
EOF

Apply the YAML to the k0rdent management cluster:

kubectl apply -f kubevirt-kubeconfig-secret.yaml
secret/kubevirt-kubeconfig created

Create the KCM Credential Object#

Create a YAML file with the specification of our credential and save it as kubevirt-cred.yaml.

Note that .spec.identityRef.name must match .metadata.name of the Secret object created in the previous step.

cat > kubevirt-cred.yaml << EOF
apiVersion: k0rdent.mirantis.com/v1beta1
kind: Credential
metadata:
  name: $CREDENTIAL_NAME
  namespace: $CLUSTER_NAMESPACE
spec:
  identityRef:
    apiVersion: v1
    kind: Secret
    name: $SECRET_NAME
    namespace: $CLUSTER_NAMESPACE
EOF

Apply the YAML to your cluster:

kubectl apply -f kubevirt-cred.yaml
credential.k0rdent.mirantis.com/kubevirt-cred created

Create the Cluster Identity resource template ConfigMap#

Now create the k0rdent ClusterIdentity resource template ConfigMap. As in prior steps, create a YAML file called kubevirt-kubeconfig-resource-template.yaml:

Note

The ConfigMap name must be exactly $SECRET_NAME-resource-template (here, kubevirt-kubeconfig-resource-template). See naming the template configmap for details.

cat > kubevirt-kubeconfig-resource-template.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: $RESOURCE_TEMPLATE_NAME
  namespace: $CLUSTER_NAMESPACE
  labels:
    k0rdent.mirantis.com/component: "kcm"
  annotations:
    projectsveltos.io/template: "true"
EOF
Note that the ConfigMap contains only metadata and no data fields. This is expected: we don't need to template any objects inside child clusters yet, but the object can be used in the future if the need arises.

Apply this YAML to your management cluster:

kubectl apply -f kubevirt-kubeconfig-resource-template.yaml

List available cluster templates#

To create a KubeVirt cluster, begin by listing the available ClusterTemplate objects provided with k0rdent:

kubectl get clustertemplate -n $CLUSTER_NAMESPACE

You'll see output similar to the following. Make note of the name of the KubeVirt cluster template in its current version (in the example below, that's kubevirt-standalone-cp-1-0-1):

NAMESPACE    NAME                            VALID
kcm-system   adopted-cluster-1-0-1           true
kcm-system   aws-eks-1-0-4                   true
kcm-system   aws-hosted-cp-1-0-21            true
kcm-system   aws-standalone-cp-1-0-20        true
kcm-system   azure-aks-1-0-2                 true
kcm-system   azure-hosted-cp-1-0-23          true
kcm-system   azure-standalone-cp-1-0-20      true
kcm-system   docker-hosted-cp-1-0-4          true
kcm-system   gcp-gke-1-0-7                   true
kcm-system   gcp-hosted-cp-1-0-20            true
kcm-system   gcp-standalone-cp-1-0-18        true
kcm-system   kubevirt-hosted-cp-1-0-1        true
kcm-system   kubevirt-standalone-cp-1-0-1    true
kcm-system   openstack-hosted-cp-1-0-22      true
kcm-system   openstack-standalone-cp-1-0-22  true
kcm-system   remote-cluster-1-0-19           true
kcm-system   vsphere-hosted-cp-1-0-19        true
kcm-system   vsphere-standalone-cp-1-0-18    true

Create your ClusterDeployment#

To deploy a cluster, create a YAML file called my-kubevirt-clusterdeployment1.yaml. You will use this to create a ClusterDeployment object representing the deployed cluster.

Note

The spec.config.cluster.controlPlaneServiceTemplate.spec.type field is optional. By default, control plane nodes use a Service of type ClusterIP, which makes the workload cluster API server reachable only from within the KubeVirt Infrastructure Cluster. In most cases, you will want to expose the workload cluster API server externally, so LoadBalancer is recommended. If you use the LoadBalancer service type, ensure that an appropriate Service LoadBalancer solution (for example, MetalLB or a cloud provider load balancer) is installed on the KubeVirt Infrastructure Cluster.

cat > my-kubevirt-clusterdeployment1.yaml << EOF
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: $CLUSTER_DEPLOYMENT_NAME
  namespace: $CLUSTER_NAMESPACE
spec:
  template: kubevirt-standalone-cp-1-0-1 # name of the clustertemplate
  credential: $CREDENTIAL_NAME
  propagateCredentials: false
  config:
    clusterLabels: {}
    clusterAnnotations: {}
    controlPlaneNumber: 1
    workersNumber: 1
    cluster:
      controlPlaneServiceTemplate:
        spec:
          type: LoadBalancer
    controlPlane:
      dnsConfig:
        nameservers:
        - 8.8.8.8
      cpu:
        model: host-passthrough
    worker:
      dnsConfig:
        nameservers:
        - 8.8.8.8
      cpu:
        model: host-passthrough
EOF

Warning

The example above creates a very basic cluster with minimal resources (1 control plane node and 1 worker node). This configuration is suitable for testing and learning purposes only. For production deployments, you should customize the cluster configuration according to your requirements, including resource allocation and networking.
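For example, a slightly more resilient variant of the same spec.config section might raise the node counts. This is an illustrative sketch only; additional CPU, memory, and disk settings depend on the template's values schema:

```yaml
# Illustrative sizing only; check the template's values schema for more options
config:
  controlPlaneNumber: 3   # an odd number, so etcd keeps quorum if one node fails
  workersNumber: 3        # spread workloads across three workers
```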

Apply the ClusterDeployment to deploy the KubeVirt cluster#

Apply the ClusterDeployment YAML (my-kubevirt-clusterdeployment1.yaml) to instruct k0rdent to deploy the cluster:

kubectl apply -f my-kubevirt-clusterdeployment1.yaml

Kubernetes should confirm the creation:

clusterdeployment.k0rdent.mirantis.com/my-kubevirt-clusterdeployment1 created

There will be a delay while the cluster provisions. You can follow the process with the following command:

kubectl -n kcm-system get clusterdeployment.k0rdent.mirantis.com my-kubevirt-clusterdeployment1 --watch

To verify that the cluster has been successfully provisioned, run:

kubectl -n kcm-system get clusterdeployment.k0rdent.mirantis.com my-kubevirt-clusterdeployment1 -o=jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

If the cluster was provisioned, the output of this command will be:

True

If there is any error, inspect the readiness condition in more detail:

kubectl -n kcm-system get clusterdeployment.k0rdent.mirantis.com my-kubevirt-clusterdeployment1 -o=jsonpath='{.status.conditions[?(@.type=="Ready")]}' | jq

Obtain the cluster's kubeconfig#

Now you can retrieve the cluster's kubeconfig:

kubectl -n kcm-system get secret my-kubevirt-clusterdeployment1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > my-kubevirt-clusterdeployment1.kubeconfig

And you can use the kubeconfig to see what's running on the cluster:

KUBECONFIG="my-kubevirt-clusterdeployment1.kubeconfig" kubectl get pods -A

Tear down the child cluster#

To tear down the child cluster, delete the ClusterDeployment from the management cluster:

kubectl delete ClusterDeployment my-kubevirt-clusterdeployment1 -n kcm-system
clusterdeployment.k0rdent.mirantis.com "my-kubevirt-clusterdeployment1" deleted

Troubleshooting#

For troubleshooting KubeVirt cluster deployment issues, refer to the Troubleshooting KubeVirt Clusters guide.

Next Steps#

Now that you've finished the QuickStart, we have some suggestions for what to do next:

Check out the Administrator Guide ...

  • For a more detailed view of k0rdent setup for production
  • For details about setting up k0rdent to manage clusters on VMware and OpenStack
  • For details about using k0rdent with cloud Kubernetes distros such as AWS EKS and Azure AKS