
· 2 min read
Hreniuc Cristian-Alexandru

Here we document how to install the Loki stack: Loki + Grafana, to display our logs in Grafana.

Preparations

In Cloudflare, point grafana-contabo.domain.com or grafana-hetzner.domain.com to the IP of the cluster. We will need this record because we will generate a certificate for it using Let's Encrypt.

Install

We will install it via Rancher using the helm chart.

First we need to add the repository to Rancher: go to Apps & Marketplace > Repositories > Create and add https://grafana.github.io/helm-charts.
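If you prefer the command line over Rancher, the same repository can be added with the Helm CLI (a sketch, assuming kubectl/helm are already pointed at the cluster):

# Add the Grafana chart repository and refresh the local index
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update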

Go to Apps & Marketplace > Charts, search for loki-stack and install it using the following:

  • name - loki-stack
  • values - Use the YAML below. You will have to replace contabo with hetzner if you install this on Hetzner.
loki:
  # https://github.com/grafana/helm-charts/blob/main/charts/loki/values.yaml
  enabled: true
  persistence:
    enabled: true
    storageClassName: nfs-master1-storage # https://github.com/grafana/helm-charts/blob/main/charts/loki/templates/statefulset.yaml#L145
    size: 15Gi
  # extraArgs:
  #   log.level: debug
grafana:
  enabled: true
  sidecar:
    datasources:
      enabled: true # https://github.com/grafana/loki/blob/88feda41a02f9c544d7476f61e296373e83fbe72/production/helm/loki-stack/templates/datasources.yaml#L1
  persistence:
    enabled: true # We should set it to true
    storageClassName: nfs-master1-storage
    size: 1Gi
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-staging
    hosts:
      - grafana-contabo.domain.com
    tls:
      - hosts:
          - grafana-contabo.domain.com
        secretName: grafana-contabo.domain.com-cert # Autogenerated

Note: Don't forget to change the DNS name based on the cluster.

To get the admin password for Grafana, run:

kubectl get secret --namespace default loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

The Loki endpoint that should be used everywhere is http://loki-stack:3100; the DNS name is the release name we set when we installed the Helm chart.
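To verify that Loki is reachable from inside the cluster, a quick sanity check (a sketch; the curlimages/curl helper image and the throwaway pod name are just examples):

# Hits Loki's readiness endpoint from a temporary pod
kubectl run loki-check --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://loki-stack:3100/ready
# Prints "ready" once Loki is up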

Import the Grafana dashboard for Loki logs

Go to https://grafana-contabo.domain.com/dashboard/import or https://grafana-hetzner.domain.com/dashboard/import and paste the JSON found here (removed). Add it to favorites.

The dashboard should be accessible here: https://grafana-contabo.domain.com/d/domain-com-loki/logs-for-domain-com-backend or https://grafana-hetzner.domain.com/d/domain-com-loki/logs-for-domain-com-backend.

Optional features

If we want the logs of all containers to be shipped to Loki (and therefore visible in Grafana), we should enable the promtail component of the loki-stack chart.
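On an existing release this could be done with the Helm CLI, for example (a sketch; release name and namespace assumed from the install above):

# Keep the current values, only flip the promtail toggle
helm upgrade loki-stack grafana/loki-stack --reuse-values --set promtail.enabled=true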

If we ever want Logstash, to centralize and store the logs, we can activate it in the same way.

· 12 min read
Hreniuc Cristian-Alexandru

This document describes all the steps we need to take when we decide to start the production cluster on Hetzner. It covers:

  • server installation
  • database
  • frontend apps
  • backend apps
  • ssl
  • grafana + loki

1 Install servers

We buy the servers from the cloud web interface. For each server, we need to do the following when buying (see the hcloud sketch after this list):

  • Add it to the brandName-net-01 private network (used to access the NFS storage). In the future, we may start the cluster on this network.

  • Add it to the brandName-firewall-01 firewall

  • Add it to the brandName-01 placement group (this way the servers won't end up on the same physical host, so if one fails the others are still up)

  • Add the public IP to the brandName-firewall-01 firewall; we have two rules that allow traffic between those servers. This is due to the fact that we couldn't make the rke2 cluster work on the private addresses (here's something similar).
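The same provisioning can also be scripted with the hcloud CLI, a sketch (the server name, type and image are assumptions, adjust them to the real ones):

# Create a server already attached to the network, firewall and placement group
hcloud server create --name brandName-node-01 --type cpx31 --image ubuntu-22.04 \
  --network brandName-net-01 \
  --firewall brandName-firewall-01 \
  --placement-group brandName-01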

· One min read
Hreniuc Cristian-Alexandru

I was thinking about how you can archive your data and store it without losing it. The first option you might think of is an HDD or SSD, but those do not last very long (3-5 years).

So I tried to find an alternative for storing my backups, and I found out about M-DISC.

They say that it can store data for 1000 years. I also found some durability tests for this technology:

A summary of those:

  • Write the data to the disc
  • Put the disc in water and keep it there for a few hours
  • Freeze it for 24/48 hours
  • Leave it outside for 24 hours in the sun/wind
  • Scratch it
  • Try to read from it -> Success

You will need optical discs that use the M-DISC technology and also an optical writer that supports it.

· 4 min read
Hreniuc Cristian-Alexandru

Useful documentation with examples:

Connect to PostgreSQL

psql -p 5432 -h localhost -U postgres -W

Database and Schema

Show all databases

\l
# Or
SELECT datname FROM pg_database;

Switch to another database

Similar to USE in MySQL.

\c databasename

Create database

Documentation

CREATE DATABASE test_db;

Drop/Delete database

Documentation

DROP DATABASE IF EXISTS test_db;

A database may contain multiple schemas, which group the tables of a database. By default, a public schema is created for every database, and you don't have to specify it in your queries, e.g. public.staff is the same as staff. More examples and documentation can be checked here
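A quick illustration from the shell (the sales schema and its table are made-up examples):

# Create a schema, a table inside it, and query it with a qualified name
psql -p 5432 -h localhost -U postgres -W -d test_db \
  -c "CREATE SCHEMA sales;" \
  -c "CREATE TABLE sales.staff (id serial PRIMARY KEY, name VARCHAR(50));" \
  -c "SELECT * FROM sales.staff;"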

User/Role

Show all users

\du
# With description
\du+
# Or sql
SELECT rolname FROM pg_roles;

Create a user

Documentation

CREATE USER test_user WITH PASSWORD 'pass';

Grant user rights to connect to db

GRANT CONNECT ON DATABASE test_db TO test_user;

Try to connect afterwards:

psql -p 5432 -h localhost -U test_user -W -d test_db

Create superuser

Documentation

CREATE ROLE test_user2 WITH LOGIN SUPERUSER CREATEDB PASSWORD 'pass';
# Connect afterwards
psql -p 5432 -h localhost -U test_user2 -W -d postgres

Alter user, add createdb rights

ALTER ROLE test_user2 WITH CREATEDB;

Grant all privileges on the database

Documentation

GRANT ALL PRIVILEGES ON DATABASE test_db TO test_user2;

Common users

You usually need the following users:

Reader:

CREATE USER reader_user WITH PASSWORD 'reader_user';
GRANT CONNECT ON DATABASE test_db TO reader_user;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER reader_user
IN SCHEMA public
GRANT SELECT ON TABLES TO reader_user;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE, SELECT ON SEQUENCES TO reader_user;

GRANT SELECT ON ALL TABLES IN SCHEMA public TO reader_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO reader_user;

Writer:

CREATE USER reader_writer WITH PASSWORD 'reader_writer';
GRANT CONNECT ON DATABASE test_db TO reader_writer;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER reader_writer
IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO reader_writer;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE, SELECT ON SEQUENCES TO reader_writer;

GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO reader_writer;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO reader_writer;

Admin:

CREATE USER admin WITH PASSWORD 'admin';
GRANT CONNECT ON DATABASE test_db TO admin;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER admin
IN SCHEMA public
GRANT ALL ON TABLES TO admin;

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT ALL ON SEQUENCES TO admin;

GRANT ALL ON ALL TABLES IN SCHEMA public TO admin;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO admin;

Tables

Create table

Documentation, official documentation

CREATE TABLE accounts (
    user_id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_on TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

CREATE TABLE address (
    id serial PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

CREATE TABLE people (
    id serial PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

Describe table

Documentation

\d accounts

Show tables

Documentation

\dt+

Grant SELECT on table

Documentation, Official docs

First you need to connect with a user that has grant privileges, then switch to the database that contains the table:

\c test_db
GRANT SELECT ON accounts TO test_user;

Grant SELECT on all tables, including tables added later

Documentation, official documentation

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO test_user;

See rights for specific table

Docs

\dp table
# Default privileges
\ddp table

· 2 min read
Hreniuc Cristian-Alexandru

Official Docker image for PostgreSQL.

Official Docker image for pgadmin4.

Docker compose file

The docker compose file has been taken from here.

Environment variables

This Compose file contains the following environment variables:

  • POSTGRES_USER the default value is postgres
  • POSTGRES_PASSWORD the default value is changeme
  • PGADMIN_PORT the default value is 5050
  • PGADMIN_DEFAULT_EMAIL the default value is [email protected]
  • PGADMIN_DEFAULT_PASSWORD the default value is admin
version: "3.5"

services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped

  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4:6.13
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-[email protected]}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: "False"
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    restart: unless-stopped

networks:
  postgres:
    driver: bridge

volumes:
  postgres:
  pgadmin:

Start services

docker compose up -d
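To confirm that both containers came up, a quick check:

# List the services of this compose project and their state
docker compose ps
# Tail the database logs if something looks off
docker compose logs postgres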

Access to postgres:

  • localhost:5432
  • Username: postgres (as a default)
  • Password: changeme (as a default)

Access to PgAdmin:

  • URL: http://localhost:5050
  • Username: [email protected] (as a default)
  • Password: admin (as a default)

Add a new server in PgAdmin:

  • Host name/address postgres_container
  • Port 5432
  • Username as POSTGRES_USER, by default: postgres
  • Password as POSTGRES_PASSWORD, by default changeme

Logging

There is no easy way to configure pgAdmin log verbosity, and the logs can be overwhelming at times. It is possible to disable pgAdmin logging at the container level.

Add the following to the pgadmin service in the docker-compose.yml:

    logging:
      driver: "none"

reference

Access between containers

The two containers share a bridge network; to connect pgAdmin to PostgreSQL, use postgres_container as the DNS name from inside the pgadmin container.
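This can also be checked with a throwaway container on the same network, a sketch (the network name is usually prefixed with the compose project/directory name, replace <project> accordingly):

# Resolve postgres_container over the shared bridge network and run a query
docker run --rm --network <project>_postgres -e PGPASSWORD=changeme postgres \
  psql -h postgres_container -U postgres -c "SELECT version();"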

Using the psql client from Ubuntu

Install

sudo apt-get install postgresql-client

Connect to PostgreSQL

psql -p 5432 -h localhost -U postgres -W

· One min read
Hreniuc Cristian-Alexandru

Add the following contents to ~/.fonts.conf:

<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
  </match>
</fontconfig>

This enables autohinting for all fonts, which should be easier on your eyes.

Source

· One min read
Hreniuc Cristian-Alexandru

I had to create a conditional singleton object, and what I tried first didn't look so good. While doing this, I noticed that a colleague did something like this:

static const auto executor = []() -> smt_type
{
    return smt;
}();

This creates a lambda and calls it in place.

I liked it so much that I asked him about it, and he showed me this post from Herb Sutter. So I ended up doing something like this:

static const auto executor = []() -> std::shared_ptr<Aws::Utils::Threading::PooledThreadExecutor>
{
    if(!isActive())
    {
        return nullptr;
    }
    size_t workers{0};
    auto const downloadWorkersStr = getenv("env_var");
    if(downloadWorkersStr)
    {
        // Env variable overrides the config
        workers = stoi(downloadWorkersStr);
    }
    else
    {
        workers = getWorkerThreadsFromConfig();
    }

    if(workers == 0)
    {
        workers = boost::thread::hardware_concurrency();
    }
    return Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>(
        "", workers);
}();
return executor;

The lambda is called only once, and it's also thread safe: since C++11, the initialization of a local static is guaranteed to happen exactly once, even with concurrent callers.

· 3 min read
Hreniuc Cristian-Alexandru

After I switched to Docusaurus, I also wanted to make the deployment easier, and I had two options:

  • self-host the website (as I did previously with WordPress)
  • host it via GitLab Pages

I went with the second option, due to the fact that for the first one I would have had to use SSH credentials to connect to my hosting provider and copy the files there. I wanted to avoid doing that.

So I chose GitLab Pages, which is the same as GitHub Pages. The advantage is that the website is auto-updated with just a simple CI job.

So to do this I had to follow these steps:

  • I went to the GitLab Pages docs
  • Create a repository for your website
  • Then go to Settings > Pages
  • It will open a page where you need to configure the .gitlab-ci.yml for your website to be built and copied to a public folder; this is the generated file for my website
  • The pipeline will be triggered after you complete all steps, and if it passes it will generate a page on the gitlab domain, e.g. chreniuc.gitlab.io
  • I wanted to use my custom domain name, so I went back to Settings > Pages, where there is an option to add a domain, as stated in this documentation (the pictures on that page are outdated)
  • My domain is managed from Cloudflare, so I had to do the following:
    • Add the name of the domain in that section from GitLab
    • I checked the option to use Let's Encrypt for the SSL certs
    • In Cloudflare I had to add an A record for @ (hreniuc.dev) pointing to 35.185.44.232, not to chreniuc.gitlab.io as they state in the doc; Cloudflare didn't allow me to use a DNS name there, only an IP
    • In Cloudflare I had to add a TXT record for _gitlab-pages-verification-code with the value used to prove that you own the domain (both records can be checked with dig, see the sketch after this list)
    • I also had to add a rule in Cloudflare to redirect from www.hreniuc.dev to hreniuc.dev, because, I don't know why, when I was trying to access www.hreniuc.dev I ended up getting 401 Not authorized, like here. The rule looked like this: www.hreniuc.dev/* > 301 Permanent Redirect > https://hreniuc.dev/$1
    • I saved the domain in the GitLab Pages section, clicked the icon to retry verifying the domain, and it worked
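To check the two Cloudflare records from the shell, a quick sketch (dig ships with dnsutils/bind-tools):

# The A record should resolve to the GitLab Pages IP
dig +short hreniuc.dev A
# The ownership TXT record should be visible too
dig +short _gitlab-pages-verification-code.hreniuc.dev TXT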

Now, every time I push changes to the default branch, a pipeline runs automatically and updates the website.

· One min read
Hreniuc Cristian-Alexandru

Source: https://doc.qt.io/qtcreator/creator-debugging-helpers.html#adding-custom-debugging-helpers

find ~ -name personaltypes.py

/home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/personaltypes.py

Add the following contents there:

def qdump__ClassType1(d, value):
    m_member1 = value["m_member1"].integer()
    m_member2 = value["m_member2"].integer()
    milliseconds = int(m_member1 * 1000 / m_member2)
    d.putNumChild(0)
    d.putValue(str(milliseconds) + "ms (" + str(m_member1) + "m1, " + str(m_member2) + "m2)")

def qdump__ClassType2(d, value):
    position = value["m_position"]
    d.putNumChild(0)
    qdump__ClassType1(d, position)  # delegate to the ClassType1 dumper

A ClassType2 value will be displayed like this: 3642331ms (160626834m1, 44100m2).

For more examples, check: /home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/stdtypes.py.