
· 4 min read
Hreniuc Cristian-Alexandru

An example of how you can mock things in Python.

Let's assume we have the following folder structure:

- project
  - src
  - test
  - utils
  - aws.SSMClient.AWSFactory
  - and so on, check below in the code

To run the unit tests:

cd src

python3 -m unittest discover -p "*test*" -v

If you add a new subfolder under test, you also need to add an empty file called __init__.py inside it, so it can be discovered by the command above.

When adding unit tests, you can enable logging for a specific test case by adding the following line at the beginning of the test case:

logging.disable(logging.DEBUG)

Also, in all unit tests we inject fake responses from AWS using mock, and we enforce that a method is called with a specific set of params. To see the mock calls made on the boto3 (AWS) instances, you can temporarily add the following prints:

print(f'\n{self.mock_ssm.mock_calls}\n')
print(f'\n{self.mock_ec2.mock_calls}\n')
print(f'\n{self.mock_sqs.mock_calls}\n')

This prints all calls made on the AWS instances, so you can verify them and add mock expectations in your tests.

In the file below you will notice that we import fake data and helpers; these are just strings that we want to return, or the call params of a mock method, e.g.:
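A minimal sketch of what such a test can look like, assuming a hypothetical SSMClient wrapper in utils/aws.py and made-up fake values (the real names and fake data in the project will differ):

import logging
import unittest
from unittest import mock

# Hypothetical fake data, normally imported from the Helpers / fake data modules
FAKE_PARAMETER_NAME = '/app/db_password'
FAKE_PARAMETER_VALUE = 'fake-secret'


class SSMClientTest(unittest.TestCase):
    @mock.patch('utils.aws.boto3.client')
    def test_get_parameter(self, mock_boto_client):
        logging.disable(logging.DEBUG)  # enable logging for this test case only

        self.mock_ssm = mock_boto_client.return_value
        # Inject the fake response that the code under test should get back from AWS
        self.mock_ssm.get_parameter.return_value = {'Parameter': {'Value': FAKE_PARAMETER_VALUE}}

        from utils.aws import SSMClient  # hypothetical wrapper around boto3
        value = SSMClient().get_parameter(FAKE_PARAMETER_NAME)

        self.assertEqual(value, FAKE_PARAMETER_VALUE)
        # Enforce that the method was called with a specific set of params
        self.mock_ssm.get_parameter.assert_called_once_with(
            Name=FAKE_PARAMETER_NAME, WithDecryption=True)
        # Temporarily print all calls made on the AWS mock while writing the test
        print(f'\n{self.mock_ssm.mock_calls}\n')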

· One min read
Hreniuc Cristian-Alexandru

Check this

This is an example of how to connect an internal pod to an external database that is not inside the cluster. In the pod configuration you will need to connect to the brandName-database service defined below.

apiVersion: v1
kind: Service
metadata:
  name: brandName-database
spec:
  ports:
    - name: brandName-database-external
      protocol: TCP
      port: 3306
      targetPort: 3306

---
kind: Endpoints
apiVersion: v1
metadata:
  name: brandName-database
subsets:
  - addresses:
      - ip: 192.168.100.52
    ports:
      - port: 3306
        name: brandName-database-external
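In the consuming deployment you then point the application at the service name, which resolves through cluster DNS to the external IP. A minimal sketch of that part of the pod configuration (the env var names and container are assumptions for illustration):

containers:
  - name: backend
    image: brandName/backend:latest
    env:
      - name: DB_HOST
        value: brandName-database   # the Service defined above (same namespace)
      - name: DB_PORT
        value: "3306"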

· One min read
Hreniuc Cristian-Alexandru

Make sure you have exported the KUBECONFIG env variable: export KUBECONFIG=/path/rke2_inventory/hetzner/credentials/rke2.yaml

1. Get all pods and the nodes they are on:
kubectl get pods -o wide
2. Get nodes and their labels:
kubectl get nodes --show-labels
3. Deploy something to the cluster:
kubectl apply -f hetzner/domain.com-backend.yaml
4. Delete a deployment from the cluster:
kubectl delete -f https://k8s.io/examples/application/deployment.yaml
5. Get/watch logs from a pod:
kubectl logs -f pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156
6. Delete a pod:
kubectl delete pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156

· 6 min read
Hreniuc Cristian-Alexandru

Check when a new version is released here.

Development env - contabo

Update the Vitess dev cluster first and see if everything works: check the logs of the Vitess pods and also check that the backend can connect to the DB.

cd vitess/contabo/

kubectl apply -f vitess-cluster.yaml

# Watch the pods being created
kubectl get pod

# Check logs
# Make sure the version is printed correctly: Version: 14.0.0
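# A quick way to filter for just the version line (pod name will differ):
kubectl logs pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156 | grep "Version:"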
I0630 16:12:57.015279 1 servenv.go:100] Version: 14.0.0 (Git revision 9665c1850cf3e155667178104891f0fc41b71a9d branch 'heads/v14.0.0') built on Tue Jun 28 17:34:59 UTC 2022 by vitess@buildkitsandbox using go1.18.3 linux/amd64


# Vtgate - the last part of the name will be different
kubectl logs -f pod/vt-decontabodusseldorf-vtgate-f81fd0bc-5b7bfffb96-jxcjj

# vttablet
kubectl logs -f pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156

# vtctld
kubectl logs pod/vt-decontabodusseldorf-vtctld-55130465-65cd85fcc-n9ljn

Connect to the app and check the logs for the backend:

kubectl logs -f domain-com-backend-64d86787c5-g4vkv

· One min read
Hreniuc Cristian-Alexandru

Check when a new version is released here and check if it has been updated here; usually they only change the version of the operator image in that file.

We should usually keep the vitess-operator image and the Vitess image in sync.

When upgrading, make sure you note here the latest version used, so it will be easier to see what version we are running:

kubectl apply -f https://raw.githubusercontent.com/vitessio/vitess/v14.0.0/examples/operator/operator.yaml

# Vitess operator 2.7.1

kubectl get pod
# Logs

kubectl logs -f vitess-operator-647f7cc94f-zgf9q

Sometimes you might need to restart the backends, because this upgrade can trigger a restart of Vitess, and the backend sometimes doesn't refresh its connection to the DB. Check the logs of the backend.
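One way to restart them, assuming the backend is the domain-com-backend deployment shown earlier:

kubectl rollout restart deployment/domain-com-backend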

· One min read
Hreniuc Cristian-Alexandru

Based on this doc; check the new versions here.


# Hetzner
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-1
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-2

sudo su -
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.22.10+rke2r2 sh -

# For server
systemctl restart rke2-server

# For agent
systemctl restart rke2-agent
# Check the logs
journalctl -u rke2-server -f
# Errors:
journalctl -u rke2-server | grep -i level=error

journalctl -u rke2-agent -f
journalctl -u rke2-agent | grep -i level=error

# See if pods are restarting or not working
kubectl get pod --all-namespaces
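# Confirm both nodes report the new version after the restart
kubectl get nodes -o wide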

Sometimes you might need to restart the backends, because this upgrade can trigger a restart of Vitess, and the backend sometimes doesn't refresh its connection to the DB. Check the logs of the backend.

· One min read
Hreniuc Cristian-Alexandru

As stated in these docs: https://rancher.com/docs/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/#option-a-upgrading-rancher

sudo snap install helm --classic

Upgrade it:

# To have the same values:
helm get values rancher -n cattle-system -o yaml > rancher_values.yaml

# Add the repo
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

helm repo update

helm fetch rancher-stable/rancher


helm upgrade rancher rancher-stable/rancher \
--namespace cattle-system \
-f rancher_values.yaml \
--version=2.6.5

If helm upgrade doesn't work, we need to delete Rancher and reinstall it, following these steps but skipping the cert-manager part:

helm delete rancher -n cattle-system

helm install rancher rancher-stable/rancher \
--namespace cattle-system \
-f rancher_values.yaml \
--version=2.6.5

It should reuse the old configs and so on.

Check the pods and logs:

kubectl get pods --namespace cattle-system

kubectl -n cattle-system rollout status deploy/rancher

kubectl logs -f rancher-84696c75d9-62mk4 --namespace cattle-system
# Should check the version
2022/07/01 08:49:27 [INFO] Rancher version v2.6.5 (c4d59fa88) is starting
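You can also confirm the deployed chart version with:

helm list -n cattle-system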

· One min read
Hreniuc Cristian-Alexandru

As stated here, but make sure there are no other version-specific steps when upgrading from one version to another.

Upgrade CRDs

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.crds.yaml

Upgrade cert-manager

helm upgrade --version 1.8.2 cert-manager jetstack/cert-manager --namespace cert-manager

Check the pods

kubectl get pods --namespace cert-manager

· One min read
Hreniuc Cristian-Alexandru

Change the configs for our nodes on each server

# Contabo
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-IP-1 # Master
ssh -i ~/.ssh/ansible_noob_id_rsa ansible_noob@SERVER-IP-2


sudo su -
# Sometimes you might only need to change the config on the server
nano /etc/rancher/rke2/config.yaml

# For server
systemctl restart rke2-server

# For agent
systemctl restart rke2-agent

# Check the logs
journalctl -u rke2-server -f
journalctl -u rke2-agent -f
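For reference, a minimal /etc/rancher/rke2/config.yaml for an agent node might look like this (the values are placeholders, not our real ones):

server: https://SERVER-IP-1:9345
token: <cluster-join-token>
node-label:
  - "role=worker"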