
· One min read
Hreniuc Cristian-Alexandru

If we ever get to this point, here are the steps we need to follow to restore the cluster from an existing snapshot. If you have no snapshot, you deserve it.

How to restore the complete RKE2 cluster

Source

If you want to test this, also do these steps:

# The cluster is running

kubectl apply -f https://k8s.io/examples/application/deployment.yaml

# Take a snapshot, then delete the deployment

kubectl delete -f https://k8s.io/examples/application/deployment.yaml

kubectl get pods --all-namespaces

Now restore it:


# Stop rke2 on all servers (not agents)
systemctl stop rke2-server

# restore:
rke2 server \
--cluster-reset \
--etcd-s3 \
--cluster-reset-restore-path=etcd-snapshot-master1-1637253000 \
--etcd-s3-bucket=domain-com-contabo-rke \
--etcd-s3-access-key="keyId" \
--etcd-s3-secret-key="applicationKey"

After the restore is done:

systemctl enable rke2-server
systemctl start rke2-server
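The stop/restore/start sequence above has to be applied on every server node. A dry-run sketch that only prints the per-host commands (the hostnames here are hypothetical, adjust them to the real control-plane servers):

```shell
# Hypothetical control-plane hostnames; adjust to the real servers.
hosts="master1 master2 master3"

for host in $hosts; do
  # echo instead of executing, so this is a safe dry run
  echo "ssh ansible_noob@$host 'sudo systemctl stop rke2-server'"
done
```

Dropping the echo would actually run the command over SSH on each host.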

Afterwards, you will see the old data.

kubectl get pods --all-namespaces

· One min read
Hreniuc Cristian-Alexandru

This document lists useful steps to investigate when we are having problems with our engine cluster (RKE2).

1. Status of rke2

ssh ansible_noob@SERVER-IP-1
sudo su
# Server
systemctl status rke2-server
# Agent
systemctl status rke2-agent

Config file: less /etc/rancher/rke2/config

2. Check the logs of rke2

Source

Server logs:

ssh ansible_noob@SERVER-IP-1
sudo su
journalctl -u rke2-server -f

Agent logs:

ssh ansible_noob@SERVER-IP-2
sudo su
journalctl -u rke2-agent -f

· One min read
Hreniuc Cristian-Alexandru

This doc describes how to create an on-demand backup of the RKE2 cluster. We already have a recurring backup set up in the rke2 engine, but if we ever want to take a backup manually, e.g. before an update, we should follow these steps.

Source

At the moment only the snapshot driven by a config file works; there is a bug in their server:

rke2 etcd-snapshot --config /etc/rancher/rke2/config2.yaml --debug

Where /etc/rancher/rke2/config2.yaml contains:

s3: true
s3-access-key: keyId
s3-bucket: domain_com-contabo-rke
s3-endpoint: s3.eu-central-003.backblazeb2.com
s3-region: eu-central-003
s3-secret-key: applicationKey
snapshot-compress: true

This will create a snapshot and upload it to S3(BackBlaze).

The command-line-only variant is not working at the moment:

rke2 etcd-snapshot \
--snapshot-compress \
--s3 \
--s3-endpoint "s3.eu-central-003.backblazeb2.com" \
--s3-bucket "domain_com-contabo-rke" \
--s3-access-key "keyId" \
--s3-secret-key "applicationKey"

· One min read
Hreniuc Cristian-Alexandru

Install the Vitess client locally first:

wget https://github.com/vitessio/vitess/releases/download/v14.0.0/vitess_14.0.0-9665c18_amd64.deb

sudo dpkg -i vitess_14.0.0-9665c18_amd64.deb

Start the port forwarding

Note: Make sure you have the KUBECONFIG env set when running pf.sh

cd vitess
bash pf.sh &

alias vtctlclient="vtctlclient -server localhost:15999 -alsologtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u domain-com_admin"


Pass: `domain-com_admin_`

Connect to the db to test if it works

mysql -pdomain-com_admin_

Create the backup - only the schema

mysqldump -d -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev-schema.sql

Create the backup - the complete db

mysqldump -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev.sql

Import the db locally

!!Make sure you use another bash terminal, not the one where you added the aliases!!

# Create the database
mysql -u root -proot
create database domain_com_dev;
quit

# Import it
mysql -u root -proot domain_com_dev < domain-com-dev.sql

If you encounter errors, you might have to run these commands:

sed -i 's/utf8mb4/utf8/g' domain-com-dev.sql
sed -i 's/utf8_0900_ai_ci/utf8_general_ci/g' domain-com-dev.sql

And retry import.
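A quick way to sanity-check what those two sed substitutions do, on a throwaway file (the file name and sample line are made up for the example):

```shell
# Throwaway sample resembling one line of the dump
printf 'DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci\n' > /tmp/sample.sql

# Same substitutions, in the same order as above
sed -i 's/utf8mb4/utf8/g' /tmp/sample.sql
sed -i 's/utf8_0900_ai_ci/utf8_general_ci/g' /tmp/sample.sql

cat /tmp/sample.sql
# -> DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci
```

Note that the order matters: the first sed turns utf8mb4_0900_ai_ci into utf8_0900_ai_ci, which the second one then maps to utf8_general_ci.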

· 2 min read
Hreniuc Cristian-Alexandru

Here we will document how to install the Loki stack (Loki + Grafana) to display our logs in Grafana.

Preparations

Point grafana-contabo.domain.com or grafana-hetzner.domain.com to the IP of the cluster in Cloudflare. We will need it, because we will generate a certificate for it using Let's Encrypt.

Install

We will install it via Rancher using the helm chart.

First we will need to add the repository to Rancher. We should add this: https://grafana.github.io/helm-charts to Apps & Marketplace > Repositories > Create.

Go to Apps & Marketplace > Charts and search for loki-stack, install it using the following:

  • name - loki-stack
  • values - use the YAML from below. You will have to replace contabo with hetzner if you install this on Hetzner.
loki:
  # https://github.com/grafana/helm-charts/blob/main/charts/loki/values.yaml
  enabled: true
  persistence:
    enabled: true
    storageClassName: nfs-master1-storage # https://github.com/grafana/helm-charts/blob/main/charts/loki/templates/statefulset.yaml#L145
    size: 15Gi
  # extraArgs:
  #   log.level: debug
grafana:
  enabled: true
  sidecar:
    datasources:
      enabled: true # https://github.com/grafana/loki/blob/88feda41a02f9c544d7476f61e296373e83fbe72/production/helm/loki-stack/templates/datasources.yaml#L1
  persistence:
    enabled: true # We should set it to true
    storageClassName: nfs-master1-storage
    size: 1Gi
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-staging
    hosts:
      - grafana-contabo.domain.com
    tls:
      - hosts:
          - grafana-contabo.domain.com
        secretName: grafana-contabo.domain.com-cert # Autogenerated

Note: Don't forget to change the DNS name based on the cluster.

To get the admin password for grafana, you should run:

kubectl get secret --namespace default loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
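That pipeline simply base64-decodes the admin-password field of the secret. The decode step can be checked locally with a dummy value (no cluster needed):

```shell
# Simulated secret value; kubectl returns the field base64-encoded like this
encoded=$(printf 'dummy-password' | base64)

# Decode it the same way the kubectl pipeline does
printf '%s' "$encoded" | base64 --decode ; echo
# -> dummy-password
```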

The Loki endpoint that should be used everywhere is http://loki-stack:3100; the DNS name is the same as the release name we set when installing the Helm chart.

Import the grafana dashboard for Loki logs

Go to https://grafana-contabo.domain.com/dashboard/import or https://grafana-hetzner.domain.com/dashboard/import and paste the JSON found here (removed). Add it to favorites.

The dashboard should be accessible here: https://grafana-contabo.domain.com/d/domain-com-loki/logs-for-domain-com-backend or https://grafana-hetzner.domain.com/d/domain-com-loki/logs-for-domain-com-backend.

Optional features

If we want all logs from all containers to be collected (and visible in Grafana), we should enable the promtail component of the Loki stack.

If we ever want Logstash to centralize and store the logs, we can activate it.

· 12 min read
Hreniuc Cristian-Alexandru

This document describes all the steps that we need to take when we decide to start the production cluster on Hetzner. This contains:

  • server installation
  • database
  • frontend apps
  • backend apps
  • ssl
  • grafana + loki

1 Install servers

We buy the servers from the cloud web interface. For each server, we need to do the following steps when buying:

  • Add it to the brandName-net-01 private network (used to access the NFS storage). In the future, maybe start the cluster on this network.

  • Add it to the brandName-firewall-01 firewall

  • Add it to the brandName-01 placement group (this way they won't end up on the same physical server, so if one fails the others are still up)

  • Add the public IP to the brandName-firewall-01 firewall; we have two rules that allow traffic between those servers. This is due to the fact that we couldn't make it (the rke2 cluster, here's something similar) work on the private addresses.

· One min read
Hreniuc Cristian-Alexandru

I was thinking about how you can archive your data and store it without losing it. Maybe the first option you think of is an HDD or SSD, but those do not last very long (3-5 years).

So I tried to find an alternative, to store my backups and I found out about M-DISC.

They say that it can store data for 1000 years. I also found some durability tests for this technology:

A summary of those:

  • Write the data to the disc
  • Put the disc in water and keep it there for a few hours
  • Freeze it for 24/48 hours
  • Leave it outside for 24 hours in the sun/wind
  • Scratch it
  • Try to read from it -> Success

You will need optical discs that use M-DISC technology, and also an optical writer that supports it.

· 4 min read
Hreniuc Cristian-Alexandru

Useful documentation with examples:

Connect to the postgresql

psql -p 5432 -h localhost -U postgres -W

Database and Schema

Show all databases

\l
# Or
SELECT datname FROM pg_database;

Switch to another database

Similar to USE in MySQL.

\c databasename

Create database

Documentation

CREATE DATABASE test_db;

Drop/Delete database

Documentation

DROP DATABASE IF EXISTS test_db;

A database may contain multiple schemas, which group the tables of a database. By default, a public schema is created for every database, and you don't have to specify it in your queries, eg: public.staff is the same as staff. More examples and documentation can be checked here

User/Role

Show all users

\du
# With description
\du+
# Or sql
SELECT rolname FROM pg_roles;

Create a user

Documentation

CREATE USER test_user WITH PASSWORD 'pass';

Grant user rights to connect to db

GRANT CONNECT ON DATABASE test_db TO test_user;

Try to connect afterwards:

psql -p 5432 -h localhost -U test_user -W -d test_db

Create superuser

Documentation

CREATE ROLE test_user2 WITH LOGIN SUPERUSER CREATEDB PASSWORD 'pass';
# Connect afterwards
psql -p 5432 -h localhost -U test_user2 -W -d postgres

Alter user, add createdb rights

ALTER ROLE test_user2 WITH CREATEDB;

Grant all privileges on the database

Documentation

GRANT ALL PRIVILEGES ON DATABASE test_db TO test_user2;

Common users

You usually need the following users:

Reader:

CREATE USER reader_user WITH PASSWORD 'reader_user';
GRANT CONNECT ON DATABASE test_db TO reader_user;

\c databasename
ALTER DEFAULT PRIVILEGES
FOR USER reader_user
IN SCHEMA public
GRANT SELECT ON TABLES TO reader_user;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE, SELECT ON SEQUENCES TO reader_user;

GRANT SELECT ON ALL TABLES IN SCHEMA public TO reader_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO reader_user;

Writer:

CREATE USER reader_writer WITH PASSWORD 'reader_writer';
GRANT CONNECT ON DATABASE test_db TO reader_writer;

\c databasename
ALTER DEFAULT PRIVILEGES
FOR USER reader_writer
IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO reader_writer;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE, SELECT ON SEQUENCES TO reader_writer;

GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO reader_writer;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO reader_writer;

Admin

CREATE USER admin WITH PASSWORD 'admin';
GRANT CONNECT ON DATABASE test_db TO admin;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER admin
IN SCHEMA public
GRANT ALL ON TABLES TO admin;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT ALL ON SEQUENCES TO admin;

GRANT ALL ON ALL TABLES IN SCHEMA public TO admin;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO admin;

Tables

Create table

Documentation, official documentation

CREATE TABLE accounts (
user_id serial PRIMARY KEY,
username VARCHAR ( 50 ) UNIQUE NOT NULL,
password VARCHAR ( 50 ) NOT NULL,
email VARCHAR ( 255 ) UNIQUE NOT NULL,
created_on TIMESTAMP NOT NULL,
last_login TIMESTAMP
);

CREATE TABLE address (
id serial PRIMARY KEY,
name VARCHAR ( 50 ) NOT NULL
);

CREATE TABLE people (
id serial PRIMARY KEY,
name VARCHAR ( 50 ) NOT NULL
);

Describe table

Documentation

\d accounts

Show tables

Documentation

\dt+

Grant SELECT on table

Documentation, Official docs

First you need to connect with a user that has grant privileges, and then switch to the database that contains that table:

\c test_db
GRANT SELECT ON accounts TO test_user;

Grant SELECT on all tables, even when new tables are added

Documentation, official documentation

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO test_user;

See rights for specific table

Docs

\dp table
# Default priv
\ddp table

· 2 min read
Hreniuc Cristian-Alexandru

Official docker image for postgresql.

Official docker image for pgadmin4.

Docker compose file

The docker compose file has been taken from here.

Environments

This Compose file contains the following environment variables:

  • POSTGRES_USER the default value is postgres
  • POSTGRES_PASSWORD the default value is changeme
  • PGADMIN_PORT the default value is 5050
  • PGADMIN_DEFAULT_EMAIL the default value is pgadmin4@pgadmin.org
  • PGADMIN_DEFAULT_PASSWORD the default value is admin
version: "3.5"

services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped

  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4:6.13
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: "False"
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    restart: unless-stopped

networks:
  postgres:
    driver: bridge

volumes:
  postgres:
  pgadmin:
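The `${VAR:-default}` references in the Compose file fall back to the default when the variable is unset. The same expansion can be checked directly in a shell:

```shell
# With the variable unset, the default applies
unset POSTGRES_USER
echo "${POSTGRES_USER:-postgres}"
# -> postgres

# With the variable set, the explicit value wins
POSTGRES_USER=myuser
echo "${POSTGRES_USER:-postgres}"
# -> myuser
```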

Start services

docker compose up -d

Access to postgres:

  • localhost:5432
  • Username: postgres (as a default)
  • Password: changeme (as a default)

Access to PgAdmin:

  • URL: http://localhost:5050
  • Username: pgadmin4@pgadmin.org (as a default)
  • Password: admin (as a default)

Add a new server in PgAdmin:

  • Host name/address postgres_container
  • Port 5432
  • Username as POSTGRES_USER, by default: postgres
  • Password as POSTGRES_PASSWORD, by default changeme

Logging

There is no easy way to configure pgAdmin log verbosity, and it can be overwhelming at times. It is possible to disable pgAdmin logging at the container level.

Add the following to the pgadmin service in the docker-compose.yml:

logging:
  driver: "none"

reference

Access between containers

They share a bridge network; to connect pgAdmin to PostgreSQL, use postgres_container as the hostname from the pgAdmin container.

Using the psql client from ubuntu

Install

sudo apt-get install postgresql-client

Connect to the postgresql

psql -p 5432 -h localhost -U postgres -W

· One min read
Hreniuc Cristian-Alexandru

Add in ~/.fonts.conf the following contents:

<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
  </match>
</fontconfig>
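To check that the file is well-formed XML before relying on it, one option is to run it through Python's standard-library XML parser (this assumes python3 is available; /tmp/fonts.conf is just a scratch path for the example):

```shell
# Write the same snippet to a temp file and check that it parses as XML
cat > /tmp/fonts.conf <<'EOF'
<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
  </match>
</fontconfig>
EOF

python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("/tmp/fonts.conf"); print("ok")'
# -> ok
```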

This will help you with your eyes.

Source