
· One min read
Hreniuc Cristian-Alexandru

I was thinking about how you can archive your data and store it without losing it. The first options that come to mind are probably an HDD or an SSD, but those do not last very long (3-5 years).

So I looked for an alternative for storing my backups, and I found out about M-DISC.

They say that it can store data for 1000 years. I also found some durability tests for this technology:

A summary of those:

  • Write the data to the disc
  • Put the disc in water and keep it there for a few hours
  • Freeze it for 24/48 hours
  • Leave it outside in the sun/wind for 24 hours
  • Scratch it
  • Try to read from it -> Success

You will need optical discs that use M-DISC technology, and also an optical writer that supports M-DISC.

· 3 min read
Hreniuc Cristian-Alexandru

Useful documentation with examples:

Connect to the PostgreSQL server

psql -p 5432 -h localhost -U postgres -W

Database and Schema

Show all databases

\l
# Or
SELECT datname FROM pg_database;

Switch to another database

Similar to USE in MySQL.

\c databasename

Create database

Documentation

CREATE DATABASE test_db;

Drop/Delete database

Documentation

DROP DATABASE IF EXISTS test_db;

A database may contain multiple schemas, which group the tables within a database. By default, a public schema is created for every database, and you don't have to specify it in your queries: public.staff is the same as staff. More examples and documentation can be checked here
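As a quick illustration (a sketch, assuming a staff table exists in the public schema), these two queries read the same table:

```sql
-- "public" is the default schema, so qualifying it is optional.
SELECT * FROM public.staff;
SELECT * FROM staff;
```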

User/Role

Show all users

\du
# With description
\du+
# Or sql
SELECT rolname FROM pg_roles;

Create a user

Documentation

CREATE USER test_user WITH PASSWORD 'pass';

Grant user rights to connect to db

GRANT CONNECT ON DATABASE test_db TO test_user;

Try to connect afterwards:

psql -p 5432 -h localhost -U test_user -W -d test_db

Create superuser

Documentation

CREATE ROLE test_user2 WITH LOGIN SUPERUSER CREATEDB PASSWORD 'pass';
# Connect afterwards
psql -p 5432 -h localhost -U test_user2 -W -d postgres

Alter user, add createdb rights

ALTER ROLE test_user2 WITH CREATEDB;

Grant all privileges on the database

Documentation

GRANT ALL PRIVILEGES ON DATABASE test_db TO test_user2;

Common users

You usually need the following users:

Reader:

CREATE USER reader_user WITH PASSWORD 'reader_user';
GRANT CONNECT ON DATABASE test_db TO reader_user;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER reader_user
IN SCHEMA public
GRANT SELECT ON TABLES TO reader_user;

Note: FOR USER reader_user only affects tables that reader_user itself creates in the future; to cover tables created by another role, name that role after FOR USER.

Writer:

CREATE USER writer_user WITH PASSWORD 'writer_user';
GRANT CONNECT ON DATABASE test_db TO writer_user;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER writer_user
IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO writer_user;

Admin

CREATE USER admin WITH PASSWORD 'admin';
GRANT CONNECT ON DATABASE test_db TO admin;

\c test_db
ALTER DEFAULT PRIVILEGES
FOR USER admin
IN SCHEMA public
GRANT ALL ON TABLES TO admin;

Tables

Create table

Documentation, official documentation

CREATE TABLE accounts (
    user_id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_on TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

CREATE TABLE address (
    id serial PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

CREATE TABLE people (
    id serial PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

Describe table

Documentation

\d accounts

Show tables

Documentation

\dt+

Grant SELECT on table

Documentation, Official docs

First you need to connect with a user that has grant privileges, then switch to the database that contains the table:

\c test_db
GRANT SELECT ON accounts TO test_user;

Grant SELECT on all tables, even ones added later

Documentation, official documentation

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO test_user;

This only affects tables created in the future; for tables that already exist, also run GRANT SELECT ON ALL TABLES IN SCHEMA public TO test_user;.

· 2 min read
Hreniuc Cristian-Alexandru

Official Docker image for PostgreSQL.

Official Docker image for pgAdmin 4.

Docker compose file

The docker compose file has been taken from here.

Environments

This Compose file contains the following environment variables:

  • POSTGRES_USER the default value is postgres
  • POSTGRES_PASSWORD the default value is changeme
  • PGADMIN_PORT the default value is 5050
  • PGADMIN_DEFAULT_EMAIL the default value is [email protected]
  • PGADMIN_DEFAULT_PASSWORD the default value is admin

version: "3.5"

services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped

  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4:6.13
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-[email protected]}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: "False"
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    restart: unless-stopped

networks:
  postgres:
    driver: bridge

volumes:
  postgres:
  pgadmin:

Start services

docker compose up -d

Access to postgres:

  • localhost:5432
  • Username: postgres (as a default)
  • Password: changeme (as a default)

Access to PgAdmin:

  • URL: http://localhost:5050
  • Username: [email protected] (as a default)
  • Password: admin (as a default)

Add a new server in PgAdmin:

  • Host name/address postgres_container
  • Port 5432
  • Username as POSTGRES_USER, by default: postgres
  • Password as POSTGRES_PASSWORD, by default changeme

Logging

There is no easy way to configure pgAdmin log verbosity, and its logging can be overwhelming at times. It is possible to disable pgAdmin logging at the container level.

Add the following to the pgadmin service in the docker-compose.yml:

logging:
  driver: "none"

reference

Access between containers

The two containers share a bridge network; to connect pgAdmin to PostgreSQL, use postgres_container as the host name inside the pgAdmin container.

Using the psql client from Ubuntu

Install

sudo apt-get install postgresql-client

Connect to the PostgreSQL server

psql -p 5432 -h localhost -U postgres -W

· One min read
Hreniuc Cristian-Alexandru

Add in ~/.fonts.conf the following contents:

<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
  </match>
</fontconfig>

This makes fonts easier on your eyes.

Source

· One min read
Hreniuc Cristian-Alexandru

I had to create a conditionally-initialized singleton object, and my first attempt didn't look so good. While working on it, I noticed that a colleague did something like this:

static const auto executor = []() -> smt_type
{
    return smt;
}();

This creates a lambda and calls it in place.

I liked it so much that I asked him about it, and he showed me this post from Herb Sutter. So I ended up doing something like this:

static const auto executor = []() -> std::shared_ptr<Aws::Utils::Threading::PooledThreadExecutor>
{
    if(!isActive())
    {
        return nullptr;
    }

    size_t workers{0};
    auto const downloadWorkersStr = getenv("env_var");
    if(downloadWorkersStr)
    {
        // Env variable overrides the config
        workers = stoi(downloadWorkersStr);
    }
    else
    {
        workers = getWorkerThreadsFromConfig();
    }

    if(workers == 0)
    {
        workers = boost::thread::hardware_concurrency();
    }
    return Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>(
        "", workers);
}();
return executor;

The lambda is called only once, and since C++11 the initialization of a local static variable is guaranteed to be thread safe.

· 3 min read
Hreniuc Cristian-Alexandru

After I switched to docusaurus I also wanted to make the deployment easier and I had two options:

  • self-host the website (as I did previously with WordPress)
  • host it via gitlab pages

I went with the second option, due to the fact that the first one would have required ssh credentials to connect to my hosting provider and copy the files there. I wanted to avoid doing that.

So I chose GitLab Pages, which is the same idea as GitHub Pages. The advantage is that the website is updated automatically by a simple CI job.

So to do this I had to follow these steps:

  • I went to the Gitlab pages docs
  • Create a repository for your website
  • Then go to Settings > Pages
  • It will open a page where you need to configure the .gitlab-ci.yml for your website to be built and copied to a public folder; this is the generated file for my website
  • The pipeline will be triggered after you complete all steps and if it passes it will generate a page for it on the gitlab domain, eg: chreniuc.gitlab.io
  • I wanted to use my custom domain name, so I went back to Settings > Pages, where there is an option to add a domain, as stated in this documentation (the pictures on that page are outdated)
  • My domain is managed from Cloudflare, so I had to do the following:
    • Add the name of the domain in that section of GitLab
    • I checked the option to use Let's Encrypt for the SSL certs
    • In Cloudflare I had to add an A record for @ (hreniuc.dev) pointing to 35.185.44.232, not to chreniuc.gitlab.io. as they state in the docs; Cloudflare didn't allow me to use a DNS name there, only an IP
    • In Cloudflare I had to add a TXT record for _gitlab-pages-verification-code with the given value; this is used to verify that you own the domain
    • I also had to add a rule in Cloudflare to redirect from www.hreniuc.dev to hreniuc.dev, because (I don't know why) accessing www.hreniuc.dev returned 401 Unauthorized, like here. The rule looked like this: www.hreniuc.dev/* > 301 Permanent Redirect > https://hreniuc.dev/$1
    • I saved the domain in the GitLab Pages section, clicked the icon to retry verifying the domain, and it worked

Now, every time I push changes to the default branch, a pipeline runs automatically and updates the website.
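The CI job itself can be very small. As a sketch (assuming a Docusaurus project built with yarn; the image tag and commands are illustrative, adjust them to your setup), a .gitlab-ci.yml for Pages looks like this:

```yaml
# Hypothetical .gitlab-ci.yml for publishing a Docusaurus site via GitLab Pages.
image: node:18

pages:                     # the job must be named "pages"
  script:
    - yarn install
    - yarn build
    - mv build public      # GitLab Pages serves the "public" folder
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```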

· One min read
Hreniuc Cristian-Alexandru

Source: https://doc.qt.io/qtcreator/creator-debugging-helpers.html#adding-custom-debugging-helpers

find ~ -name personaltypes.py

/home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/personaltypes.py

Add there the following contents:

def qdump__ClassType1(d, value):
    m_member1 = value["m_member1"].integer()
    m_member2 = value["m_member2"].integer()
    milliseconds = int(m_member1 * 1000 / m_member2)
    d.putNumChild(0)
    d.putValue(str(milliseconds) + "ms (" + str(m_member1) + "m1, " + str(m_member2) + "m2)")

def qdump__ClassType2(d, value):
    position = value["m_position"]
    d.putNumChild(0)
    qdump__ClassType1(d, position)

ClassType2 will be displayed like this: 3642331ms (160626834m1, 44100m2).

For more examples, check: /home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/stdtypes.py.

· 2 min read
Hreniuc Cristian-Alexandru

Source: https://askubuntu.com/a/1264577

In sleep mode, the content of RAM is kept as it is and the computer runs in a very low power mode, so as to keep the RAM content intact (RAM loses its data if its power supply is cut). But in hibernation, the RAM content is stored in the swap space, so power can be cut off completely. Hence it is recommended to have a swap size at least as large as the RAM size.
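To check whether your swap is large enough for hibernation, you can compare the two sizes from /proc/meminfo (a Linux-only sketch; hibernation also depends on kernel and firmware support, which this does not check):

```shell
# Read total RAM and swap sizes in kB from /proc/meminfo.
ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
echo "RAM: ${ram_kb} kB, swap: ${swap_kb} kB"
if [ "$swap_kb" -ge "$ram_kb" ]; then
  echo "Swap is at least as large as RAM."
else
  echo "Swap is smaller than RAM; hibernation may fail."
fi
```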