# Troubleshooting
## Steps to Debug KubeVirt Cluster Deployments
- Check the `ClusterDeployment` status condition on the management or regional cluster:

    ```bash
    kubectl -n $CLUSTER_NAMESPACE get clusterdeployment.k0rdent.mirantis.com $CLUSTER_NAME -o=jsonpath='{.status.conditions[?(@.type=="Ready")]}' | jq
    ```
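    If the `Ready` condition alone is not informative enough, a hedged variant of the same command dumps every condition (standard `kubectl` jsonpath, nothing provider-specific):

    ```bash
    # Print all status conditions instead of only Ready
    kubectl -n $CLUSTER_NAMESPACE get clusterdeployment.k0rdent.mirantis.com $CLUSTER_NAME -o=jsonpath='{.status.conditions}' | jq
    ```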
- Check the `KubevirtCluster` status condition on the management or regional cluster:

    ```bash
    kubectl -n $CLUSTER_NAMESPACE get kubevirtcluster $CLUSTER_NAME -o=jsonpath='{.status.conditions[?(@.type=="Ready")]}' | jq
    ```
- Check the `vm` and `vmi` status on the KubeVirt Infrastructure Cluster:

    ```bash
    kubectl --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH -n $CLUSTER_NAMESPACE get vm -l cluster.x-k8s.io/cluster-name=$CLUSTER_NAME -o=yaml
    ```
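    The command above covers only the `VirtualMachine` objects; the `VirtualMachineInstance` objects can be checked the same way with the same label selector (`vmi` is the standard KubeVirt short name):

    ```bash
    # Inspect the running VM instances backing the cluster machines
    kubectl --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH -n $CLUSTER_NAMESPACE get vmi -l cluster.x-k8s.io/cluster-name=$CLUSTER_NAME -o=yaml
    ```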
- Check the logs of the `virt-handler` pods on the KubeVirt Infrastructure Cluster. First, list the pods:

    ```bash
    kubectl --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH -n kubevirt get pods -l kubevirt.io=virt-handler
    ```
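    To read the logs directly, the same label selector works with `kubectl logs` (the `--tail` value here is just a suggestion):

    ```bash
    # Fetch recent logs from all virt-handler pods at once
    kubectl --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH -n kubevirt logs -l kubevirt.io=virt-handler --tail=100
    ```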
- Sometimes you need to SSH into the VM created on the KubeVirt Infrastructure Cluster to check system or k0s logs. You can open a console:

    ```bash
    virtctl console -n $CLUSTER_NAMESPACE $VM_NAME --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH
    ```

    Or you can port-forward the SSH port of a virtual machine and access it directly via SSH:

    ```bash
    virtctl port-forward vmi/$VM_NAME -n $CLUSTER_NAMESPACE --kubeconfig $KUBEVIRT_INFRA_KUBECONFIG_PATH $LOCAL_PORT:22
    ```

    Then you can SSH into the VM:

    ```bash
    ssh -p $LOCAL_PORT capk@127.0.0.1 -i $SSH_PRIVATE_KEY_PATH
    ```

    **Warning:** The SSH key pair is generated by Cluster API Provider KubeVirt during the provisioning process. You can retrieve the private key from the secret created in the management or regional cluster:

    ```bash
    kubectl get secret -n $CLUSTER_NAMESPACE $CLUSTER_NAME-ssh-keys -o=jsonpath='{.data.key}' | base64 -d
    ```
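    As a convenience sketch (the `vm-ssh-key` file name is arbitrary), you can save the key to disk and restrict its permissions so `ssh` accepts it:

    ```bash
    # Extract the CAPK-generated private key and lock down its permissions
    kubectl get secret -n $CLUSTER_NAMESPACE $CLUSTER_NAME-ssh-keys -o=jsonpath='{.data.key}' | base64 -d > vm-ssh-key
    chmod 600 vm-ssh-key
    ssh -p $LOCAL_PORT capk@127.0.0.1 -i vm-ssh-key
    ```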
- Check logs on the VM. For k0s logs:

    ```bash
    sudo journalctl -u k0sworker
    ```

    For container logs, see the `/var/log/containers` directory. For more information, see k0s troubleshooting.
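    A couple of hedged variants that are often useful at this point (plain `journalctl` and shell options, nothing k0s-specific):

    ```bash
    # Follow the k0s worker logs live
    sudo journalctl -u k0sworker -f
    # List the per-container log files
    sudo ls /var/log/containers/
    ```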
## Known Issues
### The KubevirtCluster deployment fails with a `proto: integer overflow` error
When deploying a KubeVirt cluster, if the namespace in which the ClusterDeployment was created does not exist
on the KubeVirt Infrastructure Cluster, the following misleading error may appear in the
`cluster-api-provider-kubevirt` logs:
```text
E0126 13:41:19.622971 1 controller.go:474] "Reconciler error" err="failed to create load balancer: proto: integer overflow"
controller="kubevirtcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="KubevirtCluster"
KubevirtCluster="kcm-system/my-kubevirt-clusterdeployment1" namespace="kcm-system" name="my-kubevirt-clusterdeployment1"
reconcileID="2074dadd-23e8-4d64-bf87-b7da2905f347"
```
The ClusterDeployment `Ready` condition will be:

```bash
kubectl -n kcm-system get clusterdeployment my-kubevirt-clusterdeployment1 -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
```
```json
{
  "lastTransitionTime": "2026-01-26T13:37:06Z",
  "message": "* InfrastructureReady:
  * LoadBalancerAvailable: proto: integer overflow (0/1 conditions met)
* ControlPlaneInitialized: Control plane not yet initialized
* ControlPlaneAvailable: K0sControlPlane status.initialization.controlPlaneInitialized is false
* WorkersAvailable:
  * MachineDeployment my-kubevirt-clusterdeployment1-md: 0 available replicas, at least 1 required (spec.strategy.rollout.maxUnavailable is 0, spec.replicas is 1)
* RemoteConnectionProbe: Remote connection not established yet",
  "reason": "Failed",
  "status": "False",
  "type": "Ready"
}
```
**Workaround**
Most likely, the issue is caused by the namespace of the ClusterDeployment missing on the KubeVirt
Infrastructure Cluster. You must create that namespace in advance, before creating the ClusterDeployment object:

```bash
kubectl --kubeconfig <kubevirt-infra-cluster-kubeconfig> create namespace <cld-namespace>
```
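Once the namespace exists, the provider should eventually reconcile the cluster on its own. A quick way to confirm recovery (reusing the example names from above) is to re-check the `Ready` condition:

```bash
# The proto: integer overflow message should disappear from the condition
kubectl -n kcm-system get clusterdeployment my-kubevirt-clusterdeployment1 -o jsonpath='{.status.conditions[?(@.type=="Ready")]}' | jq
```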