
4 posts tagged with "vitess"


· 6 min read
Hreniuc Cristian-Alexandru

Check when a new version is released here.

Development env - contabo

Update the Vitess dev cluster first and verify that everything works: check the logs of the Vitess pods and confirm that the backend can connect to the db.

cd vitess/contabo/

kubectl apply -f vitess-cluster.yaml

# Watch the pods being created
kubectl get pod

# Check logs
# Make sure the version is printed correctly: Version: 14.0.0
I0630 16:12:57.015279 1 servenv.go:100] Version: 14.0.0 (Git revision 9665c1850cf3e155667178104891f0fc41b71a9d branch 'heads/v14.0.0') built on Tue Jun 28 17:34:59 UTC 2022 by vitess@buildkitsandbox using go1.18.3 linux/amd64


# Vtgate - the last part of the name will be different
kubectl logs -f pod/vt-decontabodusseldorf-vtgate-f81fd0bc-5b7bfffb96-jxcjj

# vttablet
kubectl logs -f pod/vt-vttablet-decontabodusseldorf-2620423388-0c5af156

# vtctld
kubectl logs pod/vt-decontabodusseldorf-vtctld-55130465-65cd85fcc-n9ljn
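
To spot-check the version across all Vitess pods at once, a small loop like this can help (a sketch, assuming the vt- pod name prefix shown above):

# Print the first Version: line from each vitess pod's logs
for p in $(kubectl get pods -o name | grep vt-); do
  echo "== $p"
  kubectl logs "$p" | grep -m1 "Version:"
done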

Connect to the app and check the logs for the backend:

kubectl logs -f domain-com-backend-64d86787c5-g4vkv

· One min read
Hreniuc Cristian-Alexandru

Check when a new version is released here, and check whether it has been updated here; usually they only change the version of the operator image in that file.

We should usually keep the vitess-operator image and the vitess image in sync.
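
To see which image each pod is actually running (a plain kubectl one-liner, nothing assumed beyond the cluster above):

# List every pod together with the images its containers run
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'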

When upgrading, make sure you specify the latest version used here, so it will be easier to see which version we are running:

kubectl apply -f https://raw.githubusercontent.com/vitessio/vitess/v14.0.0/examples/operator/operator.yaml

# Vitess operator 2.7.1

kubectl get pod
# Logs

kubectl logs -f vitess-operator-647f7cc94f-zgf9q

Sometimes you might need to restart the backends: the upgrade can trigger a restart of the Vitess pods, and the backend doesn't always refresh its connection to the db. Check the logs of the backend.
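
A sketch of such a restart, assuming the backend runs as a Deployment named domain-com-backend (as the pod name above suggests):

# Restart the backend pods so they re-open their db connections
kubectl rollout restart deployment/domain-com-backend
# Wait until the new pods are up
kubectl rollout status deployment/domain-com-backend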

· One min read
Hreniuc Cristian-Alexandru

Install vitess client locally first:

wget https://github.com/vitessio/vitess/releases/download/v14.0.0/vitess_14.0.0-9665c18_amd64.deb

sudo dpkg -i vitess_14.0.0-9665c18_amd64.deb

Start the port forwarding

Note: Make sure you have the KUBECONFIG env set when running pf.sh
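
For example (the kubeconfig path below is hypothetical):

# Point kubectl at the dev cluster - adjust the path to your kubeconfig
export KUBECONFIG=$HOME/.kube/dev-cluster.yaml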

cd vitess
bash pf.sh &

alias vtctlclient="vtctlclient -server localhost:15999 -alsologtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u domain-com_admin"


Pass: `domain-com_admin_`

Connect to the db to test that it works

mysql -pdomain-com_admin_

Create the backup - only the schema

mysqldump -d -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev-schema.sql

Create the backup - the complete db

mysqldump -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev.sql

Import the db locally

!!Make sure you use another bash terminal, not the one where you added the aliases!!

# Create the database
mysql -u root -proot
create database domain_com_dev;
quit

# Import it
mysql -u root -proot domain_com_dev < domain-com-dev.sql

If you encounter errors, you might have to run these commands:

sed -i 's/utf8mb4/utf8/g' domain-com-dev.sql
sed -i 's/utf8_0900_ai_ci/utf8_general_ci/g' domain-com-dev.sql

And retry the import.

· 12 min read
Hreniuc Cristian-Alexandru

This document describes all the steps we need to take when we decide to start the production cluster on Hetzner. It covers:

  • server installation
  • database
  • frontend apps
  • backend apps
  • SSL
  • Grafana + Loki

1 Install servers

We buy the servers from the cloud web interface. For each server, we need to do the following steps when buying (they can also be scripted with the hcloud CLI; see the sketch after this list):

  • Add it to the brandName-net-01 private network (used to access the NFS storage). In the future, we may start the cluster on this network.

  • Add it to the brandName-firewall-01 firewall

  • Add it to the brandName-01 placement group (this way they won't end up on the same physical server, so if one fails the others are still up)

  • Add the public IP to the brandName-firewall-01 firewall; we have two rules that allow traffic between those servers. This is due to the fact that we couldn't make it (the rke2 cluster, here's something similar) work on the private addresses.
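
As referenced above, a rough hcloud CLI equivalent (a sketch: the server name, type, image, and SSH key are placeholders; the network, firewall, and placement-group names are the ones from the list):

# Create a server already attached to the private network, firewall and placement group
hcloud server create \
  --name brandName-node-01 \
  --type cx41 \
  --image ubuntu-22.04 \
  --ssh-key my-key \
  --network brandName-net-01 \
  --firewall brandName-firewall-01 \
  --placement-group brandName-01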