If we ever get to this point, here are the steps to restore the cluster from an existing snapshot. If you have no snapshot, you deserve it.
How to restore the complete RKE2 cluster
If you want to test this, also do these steps:
# The cluster is running
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
# Take a snapshot now (see the note after these steps)
kubectl delete -f https://k8s.io/examples/application/deployment.yaml
kubectl get pods --all-namespaces
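The snapshot itself can come from the schedule RKE2 runs by default, or you can trigger one by hand. A minimal sketch, assuming a recent rke2 with the etcd-snapshot save subcommand (the snapshot name is just an example); if the server is already configured with the etcd-s3 options, the snapshot should also end up in the bucket:
# Trigger a manual etcd snapshot on a server node (name is an example)
rke2 etcd-snapshot save --name pre-restore-test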
Now restore it:
# Stop rke2 on all servers (not agents)
systemctl stop rke2-server
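If there is more than one server node, stop the service on each of them. A quick sketch, assuming SSH access as root and placeholder hostnames (master1, master2, master3):
# Stop the rke2 server service on every server node (hostnames are placeholders)
for host in master1 master2 master3; do ssh root@"$host" systemctl stop rke2-server; done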
# restore:
rke2 server \
  --cluster-reset \
  --etcd-s3 \
  --cluster-reset-restore-path=etcd-snapshot-master1-1637253000 \
  --etcd-s3-bucket=domain-com-contabo-rke \
  --etcd-s3-access-key="keyId" \
  --etcd-s3-secret-key="applicationKey"
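The reset/restore itself is run on one server only. For any remaining server nodes, the idea (double-check against the RKE2 docs for your version) is to wipe their old etcd data before they rejoin; a sketch, assuming the default data directory:
# On every OTHER server node (not the one you restored on), remove the stale etcd data
rm -rf /var/lib/rancher/rke2/server/db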
After the restore is done:
systemctl enable rke2-server
systemctl start rke2-server
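Before looking at workloads, a quick sanity check that the server actually came back and the nodes rejoined (assumes kubectl is set up the same way as in the test steps above):
# The service should be active and the nodes Ready
systemctl status rke2-server --no-pager
kubectl get nodes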
Afterwards, you should see the old data again:
kubectl get pods --all-namespaces
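If you ran the test steps above, the deleted test workload should be back as well; the upstream example manifest creates a Deployment named nginx-deployment in the default namespace (verify the name matches what you actually deployed):
# The test deployment restored from the snapshot
kubectl get deployment nginx-deployment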