Upgrading KOF

Upgrade to any version

  • Create a backup of the values of each KOF chart in each cluster, for example:
    # Management cluster uses default KUBECONFIG=""
    for cluster in "" regional-kubeconfig child-kubeconfig; do
      for namespace in kof istio-system; do
        for chart in $(KUBECONFIG=$cluster helm list -qn $namespace); do
          KUBECONFIG=$cluster helm get values -n $namespace $chart -o yaml \
            > values-$cluster-$chart.bak
        done
      done
    done
    ls values-*.bak
    
  • Ideally, create a backup of everything, including the VictoriaMetrics and VictoriaLogs data volumes.
  • Open the latest version of the Installing KOF guide.
  • Make sure you see the expected new version in the top navigation bar.
  • Apply the guide step by step, but:
    • Skip unchanged credentials like external-dns-aws-credentials.
    • Before applying new YAML files, verify what has changed, for example:
      diff -u values--kof-mothership.bak mothership-values.yaml
      
    • Run all helm upgrade commands with the new --version and files as documented.
  • Do the same for other KOF guides.
  • Apply each relevant "Upgrade to" section of this page from older to newer.
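The backup loop above derives each file name from the kubeconfig and chart names. A small sketch (the variable values below are illustrative, not part of the loop) shows why the management cluster, whose kubeconfig entry is the empty string, produces names with a double dash such as `values--kof-mothership.bak`:

```shell
# Illustrative values only: the backup loop iterates over real kubeconfigs and charts.
chart="kof-mothership"

cluster=""                      # management cluster (default KUBECONFIG)
echo "values-$cluster-$chart.bak"   # -> values--kof-mothership.bak

cluster="regional-kubeconfig"   # a regional cluster
echo "values-$cluster-$chart.bak"   # -> values-regional-kubeconfig-kof-mothership.bak
```

The first name is the one compared in the diff example above.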

Upgrade to v1.4.0

  • The PromxyServerGroup CRD was moved from the crds/ directory to the templates/ directory so that it is upgraded automatically.
  • Please pass --take-ownership when upgrading kof-mothership to 1.4.0:
    helm upgrade --take-ownership \
      --reset-values --wait -n kof kof-mothership -f mothership-values.yaml \
      oci://ghcr.io/k0rdent/kof/charts/kof-mothership --version 1.4.0
    
  • The --take-ownership flag will not be required in future upgrades.

Upgrade to v1.3.0

Upgrade to v1.2.0

  • As part of the KOF 1.2.0 overhaul of metrics collection and representation, we switched from victoria-metrics-k8s-stack metrics and dashboards to opentelemetry-kube-stack metrics and kube-prometheus-stack dashboards.
  • Some of the previously collected metrics have slightly different labels.
  • If consistent timeseries labeling is important to you, relabel the corresponding timeseries in the metric storage retroactively, using a procedure of your preference.
  • A possible reference solution is rules backfilling via vmalert.
  • The labels that require renaming are:
    • Replace job="integrations/kubernetes/kubelet" with job="kubelet", metrics_path="/metrics".
    • Replace job="integrations/kubernetes/cadvisor" with job="kubelet", metrics_path="/metrics/cadvisor".
    • Replace job="prometheus-node-exporter" with job="node-exporter".
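As a sketch of the vmalert approach, the node-exporter rename above could be expressed as a recording rule using MetricsQL's label_replace and then replayed over the affected time range with vmalert's rules backfilling (replay) mode. The rules file below is a hypothetical example, not part of the KOF charts; node_cpu_seconds_total is just one affected metric, and a real migration would need a rule per metric:

```yaml
# relabel-rules.yaml -- hypothetical vmalert rules file for retroactive relabeling.
groups:
  - name: kof-1-2-0-label-migration
    rules:
      # Re-record node-exporter series under the new job label.
      - record: node_cpu_seconds_total
        expr: >-
          label_replace(
            node_cpu_seconds_total{job="prometheus-node-exporter"},
            "job", "node-exporter", "job", ".*"
          )
```

Such a file could then be replayed against the storage along the lines of `vmalert -rule=relabel-rules.yaml -datasource.url=... -remoteWrite.url=... -replay.timeFrom=... -replay.timeTo=...`; verify the exact flag names against the vmalert documentation for your VictoriaMetrics version.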

Also:

  • To upgrade from cert-manager-1-16-4 to cert-manager-v1-16-4, please apply this patch to the management cluster:
    kubectl apply -f - <<EOF
    apiVersion: k0rdent.mirantis.com/v1beta1
    kind: ServiceTemplateChain
    metadata:
      name: patch-cert-manager-v1-16-4-from-1-16-4
      namespace: kcm-system
      annotations:
        helm.sh/resource-policy: keep
    spec:
      supportedTemplates:
        - name: cert-manager-v1-16-4
        - name: cert-manager-1-16-4
          availableUpgrades:
            - name: cert-manager-v1-16-4
    EOF
    

Upgrade to v1.1.0

  • After you helm upgrade the kof-mothership chart, please run the following:
    kubectl apply --server-side --force-conflicts \
    -f https://github.com/grafana/grafana-operator/releases/download/v5.18.0/crds.yaml
    
  • After you get the regional-kubeconfig file on the KOF Verification step, please run the following for each regional cluster:
    KUBECONFIG=regional-kubeconfig kubectl apply --server-side --force-conflicts \
    -f https://github.com/grafana/grafana-operator/releases/download/v5.18.0/crds.yaml
    
  • This is noted as required in the grafana-operator release notes.