# Upgrading KOF
## Upgrade to any version
- Open the latest version of the Installing KOF guide.
- Make sure you see the expected new version in the top navigation bar.
- Go to the directory with the YAML files created the last time you applied the guide.
- Create their backup copies:

    ```bash
    for file in *.yaml; do cp $file $file.bak; done
    ```

- If you don't have such files, you may get them like this:

    ```bash
    helm get values -n kof kof-mothership -o yaml > mothership-values.yaml.bak
    ```

- Ideally, create a backup of everything, including VictoriaMetrics/Logs data volumes (see the snapshot sketch after this list).
- Apply the guide step by step, but:
    - Skip unchanged credentials like `external-dns-aws-credentials`.
    - Verify how the YAML files have changed with `diff -u $file.bak $file` before using them (see the loop sketch after this list).
    - Run all `helm upgrade` commands with the new `--version` and files as documented.
- Do the same for other KOF guides.
- Apply each relevant "Upgrade to" section of this page from older to newer.
- For example, if you're upgrading from v1.1.0 to v1.3.0,
    - first apply the Upgrade to v1.2.0 section,
    - then apply the Upgrade to v1.3.0 section.
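For the diff verification step above, here is a minimal loop sketch that reviews every backup against its current file before reuse (it assumes the `*.yaml`/`*.yaml.bak` naming from the backup step):

```bash
# Show what changed in each YAML file since the last apply;
# review each diff before passing the file to `helm upgrade`.
for file in *.yaml; do
  echo "=== $file ==="
  diff -u "$file.bak" "$file"
done
```

For the data volume backup, a hypothetical snapshot sketch using CSI volume snapshots; the `kof` namespace, the `csi-snapclass` class name, and snapshot support in your storage driver are all assumptions, and it should be run on each cluster that stores VictoriaMetrics/Logs data:

```bash
# Snapshot every PVC in the kof namespace before upgrading.
# Requires the external-snapshotter CRDs and a CSI driver with snapshot support.
for pvc in $(kubectl get pvc -n kof -o jsonpath='{.items[*].metadata.name}'); do
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ${pvc}-pre-upgrade
  namespace: kof
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: ${pvc}
EOF
done
```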
## Upgrade to v1.3.0
Before upgrading any helm chart:
- If you have customized `VMCluster` or `VMAlert` resources, update your resources to match the new values under `spec`.
- If you have customized `VMCluster`/`VMAlert` resources using the `k0rdent.mirantis.com/kof-storage-values` cluster annotation, keep the old values but put the new values first to reconcile the new `ClusterDeployment` configuration on the current release, then run the helm charts upgrade. After the regional clusters have the new `kof-storage` helm chart installed, you can remove the old values from the cluster annotation (see the inspection sketch after this list).
- If you have set `storage` values of the `kof-regional` chart, update them in the same way.
- If you are not using Istio, then on step 8 of the Management Cluster upgrade, please apply this temporary workaround for the Reconciling MultiClusterService issue:

    ```bash
    kubectl rollout restart -n kcm-system deploy/kcm-controller-manager
    ```
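For the annotation workflow above, a small inspection sketch that prints the current `k0rdent.mirantis.com/kof-storage-values` annotation, so you can prepend the new values and later remove the old ones; the `ClusterDeployment` name and namespace are placeholders:

```bash
# Print the kof-storage values currently kept in the cluster annotation.
# Dots inside the annotation key must be escaped in the jsonpath expression.
kubectl get clusterdeployment my-regional-cluster -n kcm-system \
  -o jsonpath="{.metadata.annotations['k0rdent\.mirantis\.com/kof-storage-values']}"
```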
## Upgrade to v1.2.0
- As part of the KOF 1.2.0 overhaul of metrics collection and representation, we switched from the victoria-metrics-k8s-stack metrics and dashboards to opentelemetry-kube-stack metrics and kube-prometheus-stack dashboards.
- Some of the previously collected metrics have slightly different labels.
- If consistency of time series labeling is important to you, relabel the corresponding time series in the metric storage using a retroactive relabeling procedure of your preference.
- A possible reference solution is Rules backfilling via vmalert (see the sketch after this list).
- The labels that require renaming are:
    - Replace `job="integrations/kubernetes/kubelet"` with `job="kubelet", metrics_path="/metrics"`.
    - Replace `job="integrations/kubernetes/cadvisor"` with `job="kubelet", metrics_path="/metrics/cadvisor"`.
    - Replace `job="prometheus-node-exporter"` with `job="node-exporter"`.
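As an illustration of the backfilling approach, here is a hypothetical recording rule replayed with vmalert that rewrites one old kubelet metric onto the new labels; the metric name, the datasource/remote-write URLs, and the time range are all placeholders to adapt, and the same pattern applies to the other label renames above:

```bash
# Hypothetical sketch: backfill new labels for one metric via vmalert replay.
cat > relabel-rules.yaml <<EOF
groups:
  - name: kof-relabel-backfill
    interval: 1m
    rules:
      - record: kubelet_running_pods
        expr: |
          label_replace(
            label_replace(
              kubelet_running_pods{job="integrations/kubernetes/kubelet"},
              "job", "kubelet", "job", ".*"),
            "metrics_path", "/metrics", "metrics_path", ".*")
EOF

# Replay the rule over the historical range and write results back to storage.
vmalert -rule=relabel-rules.yaml \
  -datasource.url=http://vmselect:8481/select/0/prometheus \
  -remoteWrite.url=http://vminsert:8480/insert/0/prometheus \
  -replay.timeFrom=2025-01-01T00:00:00Z \
  -replay.timeTo=2025-07-01T00:00:00Z
```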
Also:
- To upgrade from `cert-manager-1-16-4` to `cert-manager-v1-16-4`, please apply this patch to the management cluster:

    ```bash
    kubectl apply -f - <<EOF
    apiVersion: k0rdent.mirantis.com/v1beta1
    kind: ServiceTemplateChain
    metadata:
      name: patch-cert-manager-v1-16-4-from-1-16-4
      namespace: kcm-system
      annotations:
        helm.sh/resource-policy: keep
    spec:
      supportedTemplates:
        - name: cert-manager-v1-16-4
        - name: cert-manager-1-16-4
          availableUpgrades:
            - name: cert-manager-v1-16-4
    EOF
    ```
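To confirm the patch took effect, one possible check; the object name comes from the patch above:

```bash
# The chain should list cert-manager-v1-16-4 as an available upgrade
# from cert-manager-1-16-4.
kubectl get servicetemplatechain -n kcm-system \
  patch-cert-manager-v1-16-4-from-1-16-4 -o yaml
```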
## Upgrade to v1.1.0
- After you `helm upgrade` the `kof-mothership` chart, please run the following:

    ```bash
    kubectl apply --server-side --force-conflicts \
      -f https://github.com/grafana/grafana-operator/releases/download/v5.18.0/crds.yaml
    ```

- After you get the `regional-kubeconfig` file on the KOF Verification step, please run the following for each regional cluster:

    ```bash
    KUBECONFIG=regional-kubeconfig kubectl apply --server-side --force-conflicts \
      -f https://github.com/grafana/grafana-operator/releases/download/v5.18.0/crds.yaml
    ```

- This is noted as required in the grafana-operator release notes (see the sanity check after this list).
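As the sanity check referenced above, you can confirm the CRDs were applied on each cluster; matching on the `grafana.integreatly.org` API group is an assumption based on grafana-operator v5:

```bash
# List the grafana-operator CRDs; run with KUBECONFIG=regional-kubeconfig
# for each regional cluster as well.
kubectl get crd | grep grafana.integreatly.org
```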