
· One min read
Hreniuc Cristian-Alexandru

I had to create a conditional singleton object. I tried something, but it didn't look good, and while doing so I noticed that a colleague did something like this:

static const auto executor = []() -> smt_type
{
  return smt;
}();

This creates a lambda and calls it in place.

And I liked it so much that I ended up asking him about it and he showed me this post from Herb Sutter. So I ended up doing something like this:

static const auto executor = []() -> std::shared_ptr<Aws::Utils::Threading::PooledThreadExecutor>
{
  if(!isActive())
  {
    return nullptr;
  }
  size_t workers{0};
  auto const downloadWorkersStr = std::getenv("env_var");
  if(downloadWorkersStr)
  {
    // The env variable overrides the config
    workers = std::stoi(downloadWorkersStr);
  }
  else
  {
    workers = getWorkerThreadsFromConfig();
  }

  if(workers == 0)
  {
    workers = boost::thread::hardware_concurrency();
  }
  return Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>(
    "", workers);
}();
return executor;

The lambda is called only once, and since C++11 the initialization of a local static is guaranteed to be thread safe.

· 3 min read
Hreniuc Cristian-Alexandru

After I switched to Docusaurus I also wanted to make deployment easier, and I had two options:

  • self-host the website (as I did previously with WordPress)
  • host it via GitLab Pages

I went with the second option, because for the first one I would have had to use SSH credentials to connect to my hosting provider and copy the files there, which I wanted to avoid.

So I chose GitLab Pages, which works the same way as GitHub Pages. The advantage is that the website is updated automatically by a simple CI job.

So to do this I had to follow these steps:

  • I went to the GitLab Pages docs
  • Create a repository for your website
  • Then go to Settings > Pages
  • It will open a page where you need to configure the .gitlab-ci.yml file so that your website is built and copied to a public folder; this is the generated file for my website
  • The pipeline is triggered after you complete all the steps, and if it passes it generates a page on the GitLab domain, e.g. chreniuc.gitlab.io
  • I wanted to use my custom domain name, so I went back to Settings > Pages, where there is an option to add a domain, as described in this documentation (the pictures on that page are outdated)
  • My domain is managed from Cloudflare, so I had to do the following:
    • Add the name of the domain in that section of GitLab
    • I checked the option to use Let's Encrypt for the SSL certs
    • In Cloudflare I had to add an A record for @ (hreniuc.dev) pointing to 35.185.44.232, not to chreniuc.gitlab.io as stated in the doc; Cloudflare only allowed an IP there, not a DNS name
    • In Cloudflare I had to add a TXT record for _gitlab-pages-verification-code with the given value; this is used to verify that you own the domain
    • I also had to add a rule in Cloudflare to redirect www.hreniuc.dev to hreniuc.dev, because for some reason accessing www.hreniuc.dev returned 401 Unauthorized, like here. The rule looked like this: www.hreniuc.dev/* > 301 Permanent Redirect > https://hreniuc.dev/$1
    • I saved the domain in the GitLab Pages section, clicked the icon to retry the domain verification, and it worked

Now, every time I push changes to the default branch, a pipeline is generated automatically and the website is updated.
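For reference, a minimal .gitlab-ci.yml for a Docusaurus site could look like the sketch below. This is not my exact generated file: the pages job name and the public artifacts path are what GitLab Pages requires, while the yarn commands assume a standard Docusaurus project.

```yaml
# Hypothetical minimal CI config for deploying a Docusaurus site to GitLab Pages.
# GitLab Pages requires a job named `pages` that publishes a `public` folder.
image: node:16

pages:
  stage: deploy
  script:
    - yarn install
    - yarn build          # Docusaurus outputs to ./build by default
    - mv build public     # GitLab Pages serves from ./public
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```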

· One min read
Hreniuc Cristian-Alexandru

Source: https://doc.qt.io/qtcreator/creator-debugging-helpers.html#adding-custom-debugging-helpers

find ~ -name personaltypes.py

/home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/personaltypes.py

Add there the following contents:

def qdump__ClassType1(d, value):
    m_member1 = value["m_member1"].integer()
    m_member2 = value["m_member2"].integer()
    milliseconds = int(m_member1 * 1000 / m_member2)
    d.putNumChild(0)
    d.putValue("%sms (%sm1, %sm2)" % (milliseconds, m_member1, m_member2))

def qdump__ClassType2(d, value):
    # Delegate to the ClassType1 dumper for the wrapped member
    position = value["m_position"]
    d.putNumChild(0)
    qdump__ClassType1(d, position)

A ClassType2 value will be displayed like this: 3642331ms (160626834m1, 44100m2).

For more examples, check: /home/cristi/Qt/Tools/QtCreator/share/qtcreator/debugger/stdtypes.py.

· 2 min read
Hreniuc Cristian-Alexandru

Source: https://askubuntu.com/a/1264577

In sleep mode, the content of RAM is kept as it is, and the computer runs in a very low power mode so as to keep the RAM content intact (RAM loses its data if its power supply is cut). In hibernation, however, the RAM content is stored in the swap space, so power can be cut off completely. Hence it is recommended to have a swap size as large as the RAM size.
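The practical takeaway can be checked from a shell: hibernation needs swap at least as large as RAM. A quick sketch for comparing the two on Linux (it reads /proc/meminfo, so it is Linux-only):

```shell
# Compare total RAM and swap; hibernation writes the RAM image to swap,
# so SwapTotal should be >= MemTotal.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "RAM: ${ram_kb} kB, swap: ${swap_kb} kB"
if [ "${swap_kb}" -ge "${ram_kb}" ]; then
  echo "swap is large enough for hibernation"
else
  echo "swap is smaller than RAM; hibernation may fail"
fi
```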

· One min read
Hreniuc Cristian-Alexandru

Install nvm

# Check latest release: https://github.com/nvm-sh/nvm/releases/latest
# This command also adds the paths to your ~/.bashrc
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
source ~/.bashrc

Install the latest Node version and use it

nvm install --lts
nvm use --lts
# Or
nvm use 16.15.0

Install yarn and upgrade npm:

npm install --global yarn

npm install -g npm@8.10.0

Install the RESTer extension.

· 2 min read
Hreniuc Cristian-Alexandru

Install the C++ compiler and other essentials:

sudo apt-get install build-essential curl

Note: You should also check this post: Preparing C++ Dev infrastructure

Tmux and tmuxinator:

sudo apt-get install tmux tmuxinator

Others:

sudo apt-get install liblz4-tool

# To uncompress a lz4
lz4 -dc --no-sparse archive.tar.lz4 | tar xf -

sudo apt-get install vim
sudo apt-get install sqlitebrowser

MariaDB

sudo apt update
sudo apt install mariadb-server
sudo mysql_secure_installation
sudo mariadb

GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;

Docker:

Source

sudo apt-get update

sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

sudo usermod -aG docker $USER

Kubernetes utils:

Source

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubectl

From snap:

  • spotify
  • vscode (you can download it directly from the MS website)
  • dbeaver

· One min read
Hreniuc Cristian-Alexandru

I usually have a data folder in my home folder, where I keep all my data, including my browser's cache.

When I switch PCs I have to copy that folder to the new machine; if I try to archive it directly, the compression either fails or produces a corrupted archive.

To fix this, I followed these steps:


# Create a copy of the folder with rsync
# Will copy all folders/files from `data` inside the `data_backup` folder
sudo rsync -a --info=progress2 /home/chreniuc/data/ /home/chreniuc/data_backup/

# Create an archive
tar -cf - data_backup | lz4 -c > data_backup_29.04.2022.tar.lz4

# Copy the archive where you want

cp data_backup_29.04.2022.tar.lz4 destination

# To uncompress a lz4
lz4 -dc --no-sparse data_backup_29.04.2022.tar.lz4 | tar xf -
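The tar-pipe part of the steps above can be sanity-checked on any small directory; here is a sketch using temporary folders, with the lz4 step omitted since it only compresses and decompresses the same tar stream in the middle:

```shell
# Round-trip a small tree through a tar pipe, the same pattern used above.
src=$(mktemp -d) && dst=$(mktemp -d)
mkdir -p "$src/data_backup/sub"
echo "hello" > "$src/data_backup/sub/file.txt"

# Pack from $src and unpack into $dst in one pipeline
tar -cf - -C "$src" data_backup | tar -xf - -C "$dst"

cat "$dst/data_backup/sub/file.txt"
```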

· One min read
Hreniuc Cristian-Alexandru

I've found this. I haven't tried it, but I want to save it just in case.

Contents:

Boost has experimental CMake support. You have to do a bit of dev to get it working but we've had good experiences so far.

set(BOOST_INCLUDE_LIBRARIES system thread) # enabled libraries
set(BOOST_ENABLE_CMAKE ON) # CMake support
FetchContent_Declare(boost GIT_REPOSITORY https://github.com/boostorg/boost.git ...

The above will get you the compiled libs working. For the header library (Boost::boost aka Boost::headers) there doesn't seem to be CMake support as yet so we wrote a few lines of CMake code to iterate through the FetchContent source directory and added all include folders to a new IMPORTED INTERFACE target.
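Putting those pieces together, a hedged sketch of what a full FetchContent setup might look like (the GIT_TAG value and the library list are placeholders, and the header-only target workaround mentioned in the quote is not included):

```cmake
# Hypothetical sketch: fetch Boost using its experimental CMake support.
include(FetchContent)

set(BOOST_INCLUDE_LIBRARIES system thread) # only build the libraries we need
set(BOOST_ENABLE_CMAKE ON)                 # turn on the CMake support

FetchContent_Declare(
  boost
  GIT_REPOSITORY https://github.com/boostorg/boost.git
  GIT_TAG boost-1.79.0 # placeholder; pin whichever release you need
)
FetchContent_MakeAvailable(boost)

# The compiled libraries are then available as imported targets:
# target_link_libraries(my_app PRIVATE Boost::system Boost::thread)
```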