
12 posts tagged with "rke"


· 6 min read
Hreniuc Cristian-Alexandru

Check when a new version is released here.

Development env - Contabo

Update the vitess dev cluster first and check that everything works: look at the logs of the vitess pods and verify that the backend can connect to the db.

cd vitess/contabo/

kubectl apply -f vitess-cluster.yaml

# Watch the pods being created
kubectl get pod

# Check logs
# Make sure the version is printed correctly: Version: 14.0.0
I0630 16:12:57.015279 1 servenv.go:100] Version: 14.0.0 (Git revision 9665c1850cf3e155667178104891f0fc41b71a9d branch 'heads/v14.0.0') built on Tue Jun 28 17:34:59 UTC 2022 by vitess@buildkitsandbox using go1.18.3 linux/amd64


# Vtgate - the last part of the name will be different
kubectl logs -f pod/vt-decontabodusseldorf-vtgate-f81fd0bc-5b7bfffb96-jxcjj

# vttablet
kubectl logs -f pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156

# vtctld
kubectl logs pod/vt-decontabodusseldorf-vtctld-55130465-65cd85fcc-n9ljn
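
Rather than copying the random pod-name suffixes by hand, a small hypothetical helper like the sketch below can tail the first pod matching a pattern (it assumes the vt-* naming shown above; adjust the patterns as needed):

# Hypothetical helper: tail logs of the first pod whose name matches a pattern.
logs_of() {
  kubectl logs -f "$(kubectl get pod -o name | grep "$1" | head -n 1)"
}

logs_of vtgate
logs_of vttablet
logs_of vtctld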

Connect to the app and check the logs for the backend:

kubectl logs -f domain-com-backend-64d86787c5-g4vkv

· One min read
Hreniuc Cristian-Alexandru

Check when a new version is released here, and check whether it has been updated here; usually they only change the version of the operator image in that file.

We should usually keep the vitess-operator image and the vitess image in sync.

When upgrading, make sure you note here the latest version used, so it is easier to see which version we are running:

kubectl apply -f https://raw.githubusercontent.com/vitessio/vitess/v14.0.0/examples/operator/operator.yaml

# Vitess operator 2.7.1

kubectl get pod
# Logs

kubectl logs -f vitess-operator-647f7cc94f-zgf9q

Sometimes you might need to restart the backends: the upgrade can trigger a restart of vitess, and the backend doesn't always refresh its connection to the db. Check the logs of the backend.
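
A rollout restart is usually the cleanest way to bounce the backend; a minimal sketch, assuming the deployment is named domain-com-backend as the pod names elsewhere in these notes suggest:

# Assumed deployment name; adjust to the real backend deployment.
kubectl rollout restart deployment/domain-com-backend
kubectl rollout status deployment/domain-com-backend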

· One min read
Hreniuc Cristian-Alexandru

Based on this doc; check the new versions here.


# Hetzner
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-1
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-2

sudo su -
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.22.10+rke2r2 sh -
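
Before restarting the services, confirm the binary was actually updated:

# The reported version should match the INSTALL_RKE2_VERSION pinned above.
rke2 --version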

# For server
systemctl restart rke2-server

# For agent
systemctl restart rke2-agent
# Check the logs
journalctl -u rke2-server -f
# Errors:
journalctl -u rke2-server | grep -i level=error

journalctl -u rke2-agent -f
journalctl -u rke2-agent | grep -i level=error

# See if pods are restarting or not working
kubectl get pod --all-namespaces
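
If the full listing is too noisy, a field selector narrows it to pods that are not Running (note this also lists Succeeded/Completed pods):

# Show pods whose phase is not Running (includes Pending, Failed, Succeeded).
kubectl get pod --all-namespaces --field-selector=status.phase!=Running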

Sometimes you might need to restart the backends: the upgrade can trigger a restart of vitess, and the backend doesn't always refresh its connection to the db. Check the logs of the backend.

· One min read
Hreniuc Cristian-Alexandru

As stated in these docs: https://rancher.com/docs/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/#option-a-upgrading-rancher

sudo snap install helm --classic

Upgrade it:

# To have the same values:
helm get values rancher -n cattle-system -o yaml > rancher_values.yaml

# Add the repo
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

helm repo update

helm fetch rancher-stable/rancher
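
If you are unsure which chart version to pin, search the repo first:

# List available rancher chart versions to pick the --version pin.
helm search repo rancher-stable/rancher --versions | head -n 10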


helm upgrade rancher rancher-stable/rancher \
--namespace cattle-system \
-f rancher_values.yaml \
--version=2.6.5

If helm upgrade doesn't work, we need to delete rancher and reinstall it, following these steps but skipping the cert-manager part:

helm delete rancher -n cattle-system

helm install rancher rancher-stable/rancher \
--namespace cattle-system \
-f rancher_values.yaml \
--version=2.6.5

It should reuse the old configs and data.

Check the pods and logs:

kubectl get pods --namespace cattle-system

kubectl -n cattle-system rollout status deploy/rancher

kubectl logs -f rancher-84696c75d9-62mk4 --namespace cattle-system
# Should check the version
2022/07/01 08:49:27 [INFO] Rancher version v2.6.5 (c4d59fa88) is starting
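
To grab just the version line without tailing the whole log, kubectl can pick a pod from the deployment for us:

# deploy/rancher lets kubectl choose one of the rancher pods.
kubectl -n cattle-system logs deploy/rancher | grep "Rancher version"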

· One min read
Hreniuc Cristian-Alexandru

As stated here, but make sure there are no version-specific steps when upgrading from one version to another.

Upgrade CRDs

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.crds.yaml

Upgrade cert-manager

helm upgrade --version 1.8.2 cert-manager jetstack/cert-manager --namespace cert-manager

Check the pods

kubectl get pods --namespace cert-manager
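
To confirm the pods are actually running the new version, checking the container image tags is a quick sanity check; a sketch:

# Print pod name and image; the tags should read v1.8.2 after the upgrade.
kubectl get pods -n cert-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'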

· One min read
Hreniuc Cristian-Alexandru

Change the configs for our nodes on each server

# Contabo
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-IP-1 # Master
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-IP-2


sudo su -
# Sometimes you only need to change the config on the server node
nano /etc/rancher/rke2/config.yaml

# For server
systemctl restart rke2-server

# For agent
systemctl restart rke2-agent

# Check the logs
journalctl -u rke2-server -f
journalctl -u rke2-agent -f
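
Once the services are back up, confirm the nodes rejoined the cluster:

# All nodes should report Ready after rke2 settles.
kubectl get nodes -o wide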

· One min read
Hreniuc Cristian-Alexandru

If we ever get to this, here are the steps we need to follow to restore the cluster from an existing snapshot. If you have no snapshot, you deserve it.

How to restore the complete RKE cluster

Source

If you want to test this, also do these steps:

# The cluster is running

kubectl apply -f https://k8s.io/examples/application/deployment.yaml

# Take snapshot

kubectl delete -f https://k8s.io/examples/application/deployment.yaml

kubectl get pod --all-namespaces

Now restore it:


# Stop rke2 on all servers (not agents)
systemctl stop rke2-server
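
If you don't remember the exact snapshot name for --cluster-reset-restore-path, recent rke2 releases include a list subcommand; the flag set below is an assumption for our rke2 version, mirroring the on-demand backup post:

# List snapshots, including the ones stored in the S3 bucket.
rke2 etcd-snapshot list \
--s3 \
--s3-bucket "domain-com-contabo-rke" \
--s3-access-key "keyId" \
--s3-secret-key "applicationKey"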

# restore:
rke2 server \
--cluster-reset \
--etcd-s3 \
--cluster-reset-restore-path=etcd-snapshot-master1-1637253000 \
--etcd-s3-bucket=domain-com-contabo-rke \
--etcd-s3-access-key="keyId" \
--etcd-s3-secret-key="applicationKey"

After the restore is done:

systemctl enable rke2-server
systemctl start rke2-server

Afterwards, you will see the old data.

kubectl get pod --all-namespaces

· One min read
Hreniuc Cristian-Alexandru

This document describes useful steps to investigate when we are having problems with our engine cluster (RKE2).

1. Status of rke2

ssh ansible_noob@SERVER-IP-1
sudo su
# Server
systemctl status rke2-server
# Agent
systemctl status rke2-agent

Config file: less /etc/rancher/rke2/config.yaml

2. Check the logs of rke2

Source

Server logs:

ssh ansible_noob@SERVER-IP-1
sudo su
journalctl -u rke2-server -f

Agent logs:

ssh ansible_noob@SERVER-IP-2
sudo su
journalctl -u rke2-agent -f
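
Besides journald, rke2 writes kubelet and containerd logs to disk; the paths below are taken from the rke2 docs:

# Kubelet and containerd logs on an rke2 node.
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log
tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log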

· One min read
Hreniuc Cristian-Alexandru

This doc describes how to create an on-demand backup of the RKE2 cluster. We have already set up a recurring backup via the rke2 engine, but if we ever want a backup before an update or something similar, we should follow these steps.

Source

Atm only the snapshot driven by a config file works; there is a bug on their side:

rke2 etcd-snapshot --config /etc/rancher/rke2/config2.yaml --debug

# Contents of /etc/rancher/rke2/config2.yaml:
s3: true
s3-access-key: keyId
s3-bucket: domain_com-contabo-rke
s3-endpoint: s3.eu-central-003.backblazeb2.com
s3-region: eu-central-003
s3-secret-key: applicationKey
snapshot-compress: true

This will create a snapshot and upload it to S3 (Backblaze).

Not working atm:

rke2 etcd-snapshot \
--snapshot-compress \
--s3 \
--s3-endpoint "s3.eu-central-003.backblazeb2.com" \
--s3-bucket "domain_com-contabo-rke" \
--s3-access-key "keyId" \
--s3-secret-key "applicationKey"
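
Snapshots are also written to the local snapshot directory on the server node before upload; the default path (per the rke2 docs) can be inspected with:

# Default local snapshot directory on rke2 server nodes.
ls -lh /var/lib/rancher/rke2/server/db/snapshots/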

· One min read
Hreniuc Cristian-Alexandru

Install vitess client locally first:

wget https://github.com/vitessio/vitess/releases/download/v14.0.0/vitess_14.0.0-9665c18_amd64.deb

sudo dpkg -i vitess_14.0.0-9665c18_amd64.deb

Start the port forwarding

Note: Make sure you have the KUBECONFIG env set when running pf.sh

cd vitess
bash pf.sh &

alias vtctlclient="vtctlclient -server localhost:15999 -alsologtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u domain-com_admin"


Pass: `domain-com_admin_`

Connect to the db to test that it works

mysql -pdomain-com_admin_

Create the backup - only the schema

mysqldump -d -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev-schema.sql

Create the backup - the complete db

mysqldump -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev.sql

Import the db locally

!!Make sure you use another bash terminal, not the one where you added the aliases!!

# Create the database
mysql -u root -proot
create database domain_com_dev;
quit

# Import it
mysql -u root -proot domain_com_dev < domain-com-dev.sql

If you encounter errors, you might have to run these commands:

sed -i 's/utf8mb4/utf8/g' domain-com-dev.sql
sed -i 's/utf8_0900_ai_ci/utf8_general_ci/g' domain-com-dev.sql

And retry the import.
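
To verify the import worked, a quick table listing (root credentials as used above):

# Sanity check: the restored database should list its tables.
mysql -u root -proot -e "SHOW TABLES;" domain_com_dev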