
5 posts tagged with "backup"


· One min read
Hreniuc Cristian-Alexandru

If we ever get to this point, here are the steps we need to follow to restore the cluster from an existing snapshot. If you have no snapshot, you deserve it.

How to restore the complete RKE2 cluster

Source

If you want to test the restore, run these steps first:

# The cluster is running

kubectl apply -f https://k8s.io/examples/application/deployment.yaml

# Take snapshot

kubectl delete -f https://k8s.io/examples/application/deployment.yaml

kubectl get pods --all-namespaces
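
To find the snapshot name that gets passed to --cluster-reset-restore-path below, list the snapshots on the server. The directory path is the default rke2 snapshot location and the ls subcommand only exists on newer rke2 releases, so treat both as assumptions to check against your version:

# Local snapshots live in the default rke2 snapshot directory
ls /var/lib/rancher/rke2/server/db/snapshots/

# Newer rke2 releases can also list snapshots directly
rke2 etcd-snapshot ls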

Now restore it:


# Stop rke2 on all servers (not agents)
systemctl stop rke2-server

# restore:
rke2 server \
--cluster-reset \
--etcd-s3 \
--cluster-reset-restore-path=etcd-snapshot-master1-1637253000 \
--etcd-s3-bucket=domain-com-contabo-rke \
--etcd-s3-access-key="keyId" \
--etcd-s3-secret-key="applicationKey"

After the restore is done:

systemctl enable rke2-server
systemctl start rke2-server
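
If the cluster has more than one server node, the other servers still hold the old etcd data after the reset. A sketch of the usual cleanup, assuming the default rke2 data directory (double-check the Rancher docs for your version):

# On every other server node (not the one the restore ran on)
systemctl stop rke2-server
rm -rf /var/lib/rancher/rke2/server/db
systemctl start rke2-server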

Afterwards, you will see the old data.

kubectl get pods --all-namespaces

· One min read
Hreniuc Cristian-Alexandru

This doc describes how to create an on-demand backup of the RKE2 cluster. We already have a recurring backup set up in the rke2 engine, but if we ever want to take a backup manually, before an update or something similar, we should follow these steps.
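
For reference, the recurring backup mentioned above is driven by the rke2 server config. A minimal sketch of the relevant keys in /etc/rancher/rke2/config.yaml, where the cron expression and retention count are placeholder values rather than our actual settings:

# Take a snapshot every 12 hours and keep the last 5
etcd-snapshot-schedule-cron: "0 */12 * * *"
etcd-snapshot-retention: 5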

Source

At the moment only the snapshot with a config file works; there is a bug in their server:

rke2 etcd-snapshot --config /etc/rancher/rke2/config2.yaml --debug

Where /etc/rancher/rke2/config2.yaml contains:

s3: true
s3-access-key: keyId
s3-bucket: domain_com-contabo-rke
s3-endpoint: s3.eu-central-003.backblazeb2.com
s3-region: eu-central-003
s3-secret-key: applicationKey
snapshot-compress: true

This will create a snapshot and upload it to S3 (Backblaze).
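
If you want to double-check that the snapshot actually landed in the bucket, any S3-compatible client works. For example, assuming the aws CLI is installed and configured with the same key pair (just one option, not something the setup requires):

aws s3 ls s3://domain_com-contabo-rke/ --endpoint-url https://s3.eu-central-003.backblazeb2.com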

Not working at the moment:

rke2 etcd-snapshot \
--snapshot-compress \
--s3 \
--s3-endpoint "s3.eu-central-003.backblazeb2.com" \
--s3-bucket "domain_com-contabo-rke" \
--s3-access-key "keyId" \
--s3-secret-key "applicationKey"

· One min read
Hreniuc Cristian-Alexandru

Install the Vitess client locally first:

wget https://github.com/vitessio/vitess/releases/download/v14.0.0/vitess_14.0.0-9665c18_amd64.deb

sudo dpkg -i vitess_14.0.0-9665c18_amd64.deb

Start the port forwarding

Note: Make sure you have the KUBECONFIG environment variable set when running pf.sh

cd vitess
bash pf.sh &
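
For context, pf.sh from the vitess examples just wraps a couple of kubectl port-forward calls. A rough equivalent is shown below; the service names are assumptions and have to be replaced with whatever kubectl get svc shows in your cluster:

# Hypothetical service names - replace with the ones from `kubectl get svc`
kubectl port-forward svc/example-vtctld 15999:15999 &
kubectl port-forward svc/example-vtgate 15306:3306 &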

alias vtctlclient="vtctlclient -server localhost:15999 -alsologtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u domain-com_admin"


Pass: `domain-com_admin_`

Connect to the db to test if it works

mysql -pdomain-com_admin_
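
With the vtctlclient alias from above you can also sanity-check the vtctld forward; ListAllTablets is just a convenient read-only command for that:

vtctlclient ListAllTablets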

Create the backup - only the schema

mysqldump -d -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev-schema.sql

Create the backup - the complete db

mysqldump -h 127.0.0.1 -P 15306 -u domain-com_admin -pdomain-com_admin_ domain-com > domain-com-dev.sql

Import the db locally

!! Make sure you use another bash terminal, not the one where you added the aliases !!

# Create the database
mysql -u root -proot
create database domain_com_dev;
quit

# Import it
mysql -u root -proot domain_com_dev < domain-com-dev.sql
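
To quickly confirm that the import worked, list the tables of the database created above:

mysql -u root -proot domain_com_dev -e "show tables;"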

If you encounter errors when importing (usually an unknown or unsupported collation on an older local MySQL), you might have to run these commands:

sed -i 's/utf8mb4/utf8/g' domain-com-dev.sql
sed -i 's/utf8_0900_ai_ci/utf8_general_ci/g' domain-com-dev.sql

And retry import.

· One min read
Hreniuc Cristian-Alexandru

I was thinking about how you can archive your data and store it without losing it. Maybe the first option that comes to mind is an HDD or SSD, but those do not last very long (3-5 years).

So I tried to find an alternative for storing my backups, and I found out about M-DISC.

They say that it can store data for 1000 years. I also found some durability tests for this technology:

A summary of those:

  • Write the data to the disc
  • Put the disc in water and keep it there for a few hours
  • Freeze it for 24/48 hours
  • Leave it outside for 24 hours in the sun/wind
  • Scratch it
  • Try to read from it -> Success

You will need optical discs that use M-DISC technology and also an optical writer that supports M-DISC.

· One min read
Hreniuc Cristian-Alexandru

I usually have a data folder in my home folder, where I keep all my data, including the cache of my browser.

When I switch PCs I have to copy that folder to the new PC. If I try to archive the folder directly, it either produces a corrupted compressed archive or the compression fails, presumably because files such as the browser cache change while the archive is being created.

So, to fix this, I had to follow these steps:


# Create a copy of the folder with rsync
# Will copy all folders/files from `data` into the `data_backup` folder
sudo rsync -a --info=progress2 /home/chreniuc/data/ /home/chreniuc/data_backup/

# Create an archive
tar -cf - data_backup | lz4 -c > data_backup_29.04.2022.tar.lz4

# Copy the archive where you want

cp data_backup_29.04.2022.tar.lz4 destination

# To uncompress an lz4 archive
lz4 -dc --no-sparse data_backup_29.04.2022.tar.lz4 | tar xf -
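
Before deleting the data_backup copy, it is worth verifying the archive. A quick check that decompresses the lz4 stream and lists the tar contents without extracting anything:

# Verify: decompress and list the archive, discard the listing
lz4 -dc data_backup_29.04.2022.tar.lz4 | tar -tf - > /dev/null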