UNDER CONSTRUCTION! Check back later to see if this warning is removed
I am borrowing this idea from Falko Timme, who wrote a similar guide several years ago. Some of his information is now outdated and some of the techniques he used are no longer recommended. Also, this guide (unlike his) is written for Kubuntu because, as an Ubuntu derivative, it creates only two subvolumes when installed: @ and @home. And I have written it using btrfs-progs v4.4. Debian recommends btrfs-tools, which is in Kubuntu's repository.
Installing BTRFS as the root file system for Kubuntu
The BTRFS man pages are grouped as:
btrfs, mkfs.btrfs(8), ionice(1), btrfs-balance(8), btrfs-check(8), btrfs-device(8), btrfs-filesystem(8), btrfs-inspect-internal(8), btrfs-property(8), btrfs-qgroup(8), btrfs-quota(8), btrfs-receive(8), btrfs-replace(8), btrfs-rescue(8), btrfs-restore(8), btrfs-scrub(8), btrfs-send(8), btrfs-subvolume(8)
First assumption: you are going to use only one storage device, an SSD or a spinning disk, usually assigned /dev/sda.
Second assumption: you are going to give BTRFS the entire sda1, with the possible exception of a swap partition on sda2.
When the installation process gets to the "Disk Setup" stage, select the "Manual" method. DO NOT let the installation procedure do the partitioning for you automatically. It will choose EXT4.
In the manual mode delete all existing partitions, if any, and create one big raw partition for sda. Some experienced users will use the raw partition, i.e., the entire drive, sda, as the BTRFS pool. That setup is called a "Partitionless BTRFS disk". It is recommended not to do that, but to create a partition, sda1.
If you plan to create a swap partition, subtract from the size of your device a value equal to the amount of your RAM plus 5%. If you have 6GB of RAM then subtract from the total size of your device an amount equal to 6.3GB and create sda1 equal to the remainder. A 750GB HD, for example, would be partitioned with 743GB for sda1, and the rest would be given to the swap partition, sda2. This will allow room for hibernation (suspend-to-disk) to work properly. And while BTRFS does NOT use a swap file or swap partition, some applications may.
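The arithmetic above can be checked quickly (awk used here just as a calculator; the 6GB figure is taken from the example):

```shell
# Swap size = RAM + 5%; with 6GB of RAM that is 6 * 1.05 = 6.3GB.
awk 'BEGIN { ram = 6; printf "swap: %.1f GB\n", ram * 1.05 }'
```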
Now, while still in the partition screen, select the sda1 partition. The "Use as" option is pre-selected for EXT4. Change it to BTRFS. Check the "Format the partition" checkbox. In the "Mount Point" text box enter the forward slash, /, and nothing else. Click OK and proceed. The installation will create both the @ and @home subvolumes. In the /etc/fstab file those two subvolumes will be bound to / and /home. If you wanted to use the partitionless BTRFS disk method you'd select sda instead of sda1, and BTRFS subvolumes become the same as partitions. Oshunluver describes his method of installing multiple distros on a single device using the partitionless BTRFS disk method. The method has limitations:
- Cannot use different file systems for different mount points.
- Cannot use swap area as Btrfs does not support swap files and there is no place to create a swap partition. This also limits the use of hibernation/resume, which needs a swap area to store the hibernation image.
- Cannot use UEFI to boot.
Oshunluver has a post detailing how to do a "Partitionless BTRFS disk install" and how to avoid the grub pitfalls.
Modifying /etc/fstab
In order to optimize the performance of the SSD, I strongly advise you to avoid doing the TRIM operation in real time (whenever a file is deleted) because you would be putting an unnecessary extra amount of work on the SSD. In other words: you should not enable the discard option in fstab.
Instead, what I recommend is to run a script periodically to tell the SSD which blocks are free with the command fstrim. Doing this operation daily or weekly is more than enough. This way we lose no performance to TRIM when deleting files, and we periodically keep the SSD informed about the free blocks.
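For example, a weekly entry in root's crontab (a sketch; the schedule and the fstrim path are assumptions, and the entry would be added with sudo crontab -e) could look like:

```
0 3 * * 0 /sbin/fstrim -v /
```

This trims the mounted filesystem every Sunday at 3 AM. (As noted later in this guide, recent releases ship an fstrim.timer systemd unit that already does this weekly.)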
Making your first snapshots of @ and @home
Your installation of Kubuntu is complete, you've rebooted and you are staring at your newly minted desktop. Before you do ANYTHING else, before you install a printer, or run "ubuntu-drivers" to configure your GPU, or install your favorite applications, or install or configure your email client, or upload your email archives, BEFORE ANYTHING ELSE do the following:
Open a Konsole and enter
sudo -i
and press the Enter key. You are now root.
blkid
will give you a list of your storage device UUID's. It could look like this:
:~$ blkid
/dev/sda1: UUID="47a4794f-b168-4c61-9d0c-cc22f4880116" UUID_SUB="94528e9d-7bd3-4712-bd3f-0a24774dc4b9" TYPE="btrfs" PARTUUID="99e4dabd-01"
The next step is to mount sda1 to /mnt, but you DO NOT want to use /dev/sda1. It would be OK for a computer with only one storage device, or HD, but if you have two or more the possibility exists that when plugging in a second or third HD the device designation could spontaneously switch. I experienced this when I plugged in my third WD hard drive, which I assumed would become sdc because sda1 and sdb1 were part of my RAID1 pool. I was stunned to see my RAID1 pool membership listed as sda1 and sdc. I had read about this and from that point onward I've only used UUIDs to mount my storage devices.
With your mouse highlight the 47a4794f-b168-4c61-9d0c-cc22f4880116 part of the sda1 UUID and use right mouse copy to copy it. Then, in the Konsole, enter
mount /dev/disk/by-uuid/uuidhere /mnt
replacing "uuidhere" with 47a4794f-b168-4c61-9d0c-cc22f4880116. That mounts the drive at /mnt and /mnt becomes the <ROOT_FS> of your Kubuntu BTRFS installation. It is the top of the tree. @, @home and all other subvolumes you may add are under <ROOT_FS>. Notice that your system and desktop are still running. You can switch to them and do stuff if you need to. You are doing live maintenance on your system!
If you do
vdir /mnt
you will see
/mnt/@
/mnt/@home
Now you need to create a normal subdirectory under <ROOT_FS> where your @ and @home snapshots can be stored.
mkdir /mnt/snapshots
Doing
vdir /mnt
will show
/mnt/@
/mnt/@home
/mnt/snapshots
Now to create your first snapshots.
btrfs su snapshot -r /mnt/@ /mnt/snapshots/@_basic_install
btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home_basic_install
"su" is shorthand for "subvolume". Many of the btrfs commands have shortened aliases.
"-r" tells the snapshot command to make a read-only snapshot. Read only snapshots can be moved outside of the <ROOT_FS> pool by using the btrfs send & receive command. If "-r" is not used then the snapshot created is a read-write snapshot and cannot be moved outside of <ROOT_FS> pool.
Now you have your first set of snapshots, but you have not yet made a backup. Snapshots are not backups until they have been moved to an external storage device. You do
btrfs filesystem usage /
and the usage of your installation is shown to be, say, 28GB. You have a 128GB USB stick and you plug it in. It could be formatted with FAT32 or EXT4, or you can fire up the Partition Manager, create sdb1, and format it using BTRFS. Close the Partition Manager and use Dolphin to unmount the USB stick. Plug it back in and the USB stick should mount automatically at /media. You convert your snapshots into backups by sending them to the USB stick:
btrfs send /mnt/snapshots/@_basic_install | btrfs receive /media
btrfs send /mnt/snapshots/@home_basic_install | btrfs receive /media
Whereas a fresh snapshot is essentially empty, the process of sending and receiving even a fresh snapshot results in the entire contents of @ and @home being read and sent to /media, otherwise they would not be a complete archival copy of your system. So, it takes only seconds to make a set of snapshots but sending them can take 15 to 30 minutes or more. While the sending and receiving are taking place you can continue to use your system normally with little or no degradation in performance. When the sending and receiving are done you can umount the USB stick and store it in a safe place. Your system is backed up.
IF your destination is not formatted with BTRFS then your send command should be:
btrfs send /mnt/snapshots/@_basic_install -f /media/@_basic_install.txt
where /media is a mounted USB stick formatted with EXT4 or FAT32, or a network destination.
The "-f" option writes the subvolume's send stream into a file at the destination instead of piping it to stdout.
To convert it back to a subvolume one would use:
btrfs receive -f /media/@_basic_install.txt /mnt/snapshots
The received subvolume keeps its original name, @_basic_install (the name stored in the stream), not the name of the .txt file, as the "At subvol" message in the example further down shows. It will be set to read-only. To make it writable use
btrfs property set -ts /mnt/snapshots/@_basic_install ro false
Regular and Incremental snapshots
After you've made your first snapshots, which I called "@_basic_install" and "@home_basic_install", you'll need to start a regular regimen of snapshotting your system so that you can roll back to a previous state if the need arises. A snapshot is essentially empty when it is first made and begins to populate as additions, deletions and changes to the system occur. One might conclude that since snapshots are empty when first taken one can make dozens or hundreds or thousands of them. I've read one person asking if it would be OK to take a snapshot every minute! Below I discuss why more than a dozen or so snapshots is not advisable.
That said, it is ALWAYS advisable to make a snapshot of your system (@ and @home) before any significant changes in your system: doing an update & full-upgrade, installing one or more applications from the repository, uninstalling an application, or especially when installing a foreign deb, tar or zip package. IF the changes do not produce any problems, and they rarely do, one can then keep the most recent backups and delete the oldest ones, leaving no more than a dozen or so snapshots. Personally, I always date my snapshots: @YYYYMMDD & @homeYYYYMMDD and occasionally add a short description if I have taken more than one snapshot on that date: @YYYYMMDD_before_libreoffice & @homeYYYYMMDD_before_libreoffice.
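The dated naming scheme above can be scripted (a minimal sketch, assuming <ROOT_FS> is mounted at /mnt as described earlier and that you are root; the guard merely lets the sketch no-op on a system without btrfs-progs):

```shell
#!/bin/sh
# Take today's read-only snapshot pair using the @YYYYMMDD naming scheme.
STAMP=$(date +%Y%m%d)    # e.g. 20180907
if command -v btrfs >/dev/null; then
    btrfs su snapshot -r /mnt/@     "/mnt/snapshots/@${STAMP}"
    btrfs su snapshot -r /mnt/@home "/mnt/snapshots/@home${STAMP}"
fi
```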
Snapshots can be renamed by using the mv command:
mv /mnt/snapshots/@YYYYMMDD_before_libreoffice /mnt/snapshots/@YYYYMMDD
Btrfs includes the ability to make incremental backups to external storage devices. Doing so shortens the time taken to send a snapshot to an external storage device, which can only be done by using the send & receive commands, since a snapshot cannot be mv'd or cp'd out of its <ROOT_FS>. Assume that yesterday you made a snapshot of @ as @YYYYMMD1. You sent @YYYYMMD1 to the external storage using the Btrfs send & receive commands:
btrfs send /mnt/snapshots/@YYYYMMD1 | btrfs receive /backup
where /backup is an external storage device mounted on the directory /backup that you had made previously.
Today you made @YYYYMMD2. Now you are going to send just the difference between @YYYYMMD1 and @YYYYMMD2 to /backup.
btrfs send -p /mnt/snapshots/@YYYYMMD1 /mnt/snapshots/@YYYYMMD2 | btrfs receive /backup
Here is what happens: receive notes that @YYYYMMD1 already exists under backup and makes a snapshot of it as @YYYYMMD2. It then begins to populate @YYYYMMD2 with the data not already in @YYYYMMD1, the incremental data. When done, the @YYYYMMD2 snapshot under /backup can be treated as a normal snapshot of @ dated YYYYMMD2.
The next day snapshot @YYYYMMD3 is taken and sent to /backup as an incremental backup.
btrfs send -p /mnt/snapshots/@YYYYMMD2 /mnt/snapshots/@YYYYMMD3 | btrfs receive /backup
And receive makes a snapshot of @YYYYMMD2, which it already has on its side, as @YYYYMMD3 and adds to it any changes not in @YYYYMMD2.
Although @YYYYMMD3 is called an "incremental" backup, the copy under /backup contains the entire contents of @ as of YYYYMMD3 and can be used to replace @ at any time. This is because receive does not store a bare diff: it rebuilds the full snapshot on its side from its copy of the parent plus the incremental data it received.
How often should one make a snapshot set or move a set to external storage?
Say your last snapshot was a week ago and after experimenting with some software you made a mistake installing or removing it, or making changes, and you found yourself unable to manually make changes to restore your system to what it was before the problem, or you can't log into your account or get a black screen when you do. You do a roll back. All email you've received since that week old snapshot was taken is now lost, unless your mail server keeps a copy of it on hand, like mine does. Any URLs you bookmarked in your browser during the last week will be missing. Any documents you've written since that snapshot are not in your Documents folder any more. All images or movies you've saved during the last week are not on your system any more. Any updates will have to be redone, but the system will notify you of that fact.
Considering the fact that a snapshot pair sent to an external storage device is fully populated (my @ & @home total 110GB), there is a limit to how many sets of snapshots you can save to your 2TB spinning HD storage device. It could hold only 18 of my sets, far fewer than one set every day for a year or even a month. My 750GB HD can hold only around 6 pairs.
So, what do I do? I try to keep about 4 pairs of snapshots on my <ROOT_FS> (i.e., in /mnt/snapshots), but not under / or /home. If a big update, more than 30 or so apps, is announced, I make a snapshot set and then do the update. If, after a day of testing, everything is working well, I take another snapshot set and send it to /backup, my mounted external storage HD. Then I delete the snapshot set made just before the update, and I delete the oldest pair from my /backup. For a massive update, like 200+ applications, I'll make a snapshot beforehand, test the update, and if it is OK make another snapshot and delete the previous one. Then I'll send them to /backup and also to a couple of 128GB USB sticks that I keep handy for such a purpose.
All in all, incremental backups save time IF your total usage is several hundred GB and you are frequently deleting, adding and changing data, as in software development. Sending my snapshot pair (110GB) to /backup takes about 10 minutes for @YYYYMMDD and about 15 minutes for @homeYYYYMMDD. An incremental backup may save me some time, but it is a background operation which doesn't detract from my use of the system while send & receive is working. EDIT: after replacing my primary drive with a Samsung Pro EVO SSD I measured the total time to send & receive both @YYYYMMDD and @homeYYYYMMDD (115GB) at less than 3 minutes. I was shocked at the increase in speed, considering that the destination drive is a spinning HD.
Also, I do not use compression. How much this would affect the total number of pairs that could be saved, or the time to send them to storage, is outside my experience.
Using the "btrfs send -f" command
Take an ro snapshot of @home:
btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home20180907
Create a readonly snapshot of '/mnt/@home' in '/mnt/snapshots/@home20180907'
Send it to a text file:
btrfs send -f /mnt/snapshots/@home07txt /mnt/snapshots/@home20180907
At subvol /mnt/snapshots/@home20180907
The "send -f" syntax is the <outfile> first and the subvolume source second. The <outfile> was created with "send -f" so "receive -f" should receive it and convert it to a normal subvolume:
btrfs receive -f /mnt/snapshots/@home07txt /backup/
At subvol @home20180907
Notice that "At subvol" gives the name of the original subvolume, @home20180907, which was pulled from the stream file. The subvolume @home20180907 can be browsed with Dolphin just like any other subvolume because it IS like any other subvolume.
As an ordinary file, @home07txt can be moved between btrfs subvolumes or outside of a btrfs filesystem to an EXT4 partition or a remote server, and the "btrfs receive -f <filename> /mountpoint" command will convert it back to a subvolume.
Adding peripheral devices
You've created archival snapshots of your @ and @home system, so you are now ready to install your printer. If the installation proceeds normally and your printer works well then you can open a Konsole and create @_printer_installed and @home_printer_installed snapshots. Send them to the USB stick. Then install your GPU and if things go well open a Konsole and create @_gpu_installed and @home_gpu_installed, and archive them. Repeat the process for your email client. Then add your browser, your special apps, etc, and repeat the archival process. Say your gpu install fails, leaving you with a black screen and all you can do is open a terminal. In that terminal you
sudo -i
to get to root. You mount your sda1 uuid to /mnt as described above. Then you move @ and @home out of the way:
mv /mnt/@ /mnt/@old
mv /mnt/@home /mnt/@homeold
Now you are ready to create a new @ and @home subvolume using the btrfs snapshot command:
btrfs subvol snapshot /mnt/snapshots/@_printer_installed /mnt/@
btrfs subvol snapshot /mnt/snapshots/@home_printer_installed /mnt/@home
Since the "-r" parameter was not used, the @ and @home snapshots just created will be read-write. Using a read-only snapshot for @ or @home would prevent those subvolumes from being mounted during boot up.
With @ and @home rolled back to the printer_installed, i.e., the snapshot made before the attempted install of the gpu, you can unmount /mnt and then reboot. After a roll back the boot up process can take as much as twice as long to reach the desktop as before, but subsequent boots will be at normal speed.
Eventually you will get to a point where you are doing normal updates and the use of special snapshot names is no longer necessary. At that point a date scheme, @YYYYMMDD, or something similar, will suffice. Do not separate or mix @ and @home snapshot pairs. Updates often make changes in both @ and @home (/ and /home), which is why both snapshots are taken at the same time and rolled back using the same time stamp. Another problem you might run into is inadvertently using @ to make a @homeYYYYMMDD snapshot, or @home to make a @YYYYMMDD snapshot. To recover from these mistakes you will need a LiveUSB with btrfs installed and modprobed to make it a kernel service, and a set of snapshots you can examine with Dolphin or mc to make sure they are of @ and @home before you replace the defective @ or @home.
Subvolumes can be browsed like normal subdirectories, and a snapshot of @ looks exactly like the / directory just as a snapshot of @home looks like the /home directory. Because of that you can use Dolphin or mc to browse a snapshot and copy a file or directory from it into its normal place in / or /home. This is done if a file or directory under / or /home is inadvertently deleted or altered in some way. Because of CoW (Copy on Write), when you delete or alter a file the snapshot retains the original extents, so the original version of that file or directory remains visible in the snapshot. Eventually, considering updates, changes and deletions, most files get changed in some way, and the snapshots come to hold more and more exclusive data made up of the original versions of those files or directories. Older snapshots are larger than fresh snapshots. Using
btrfs subvol delete -C /mnt/snapshots/@_an_old_snapshot
btrfs subvol delete -C /mnt/snapshots/@home_an_old_snapshot
will commit (-C) each transaction as it completes and return the data blocks to the pool.
Maintaining your BTRFS file system
BTRFS has several maintenance tools to keep your system running smoothly and quickly. They include scrub, defragment and balance. All of the tools are used as root, operating on either /mnt or / and /home. You can continue to use your system while they run in a Konsole.
Defragment
From the btrfs-filesystem man page:
As you use your system, fragmentation occurs. The defragment command copies the extents, and underlying blocks, to new extents that are more ordered, improving speed of operation. Files in snapshots still point to the old extents. This breaks the --reflinks and creates data bloat. IF you had many snapshots in /mnt/snapshots and you ran a defragment operation, all of those snapshots could expand to equal the size of the subvolume they were made from. The total pool consumption could exceed 90% and thus slow down your system dramatically. SpaceDog analyzed the situation and identified three conditions:
From the following Btrfs dev message board:
https://www.mail-archive.com/linux-b.../msg72416.html
Subvolumes with more than a dozen snapshots can greatly slow down balance and device delete. The multiple tree walks involve both high CPU and IOPS usage. This means that schemes or applications like snapper, apt-btrfs-snapshot or Timeshift that snapshot a volume periodically should set a low upper limit on the total number of snapshots that are retained.
My general approach is to manually create read-only snapshots of @ and @home and send them to my archival storage. Then I delete all the snapshots in /mnt/snapshots before running the defragment command. When the defragment command is done I create new @ and @home snapshots, and then I reboot just to be sure all works well.
Without having to mount your <ROOT_FS> to /mnt you can manually defragment your system using
sudo btrfs fi defragment -r -f / /home
Defragment can be run on one or more files and/or directories. -clzo can be used to add compression.
I defragment about once every two or three months. Since my total data is only 110GB and my pool is 698GB performance degradation has never been a problem. However, remember that defragmentation will fail on all open files and services. I get messages like this:
Those errors occur on open files. If one boots a LiveUSB stick, modprobes btrfs, mounts the HD and runs defrag on it, there will be errors only if there are real problems.
Scrub
When scrubbing a partition, btrfs reads all data and metadata blocks and verifies checksums. It automatically repairs corrupted blocks if there’s a correct copy available in a RAID configuration. BTRFS also reports any unreadable sector along with the affected file via system log. It can be run as often as one wishes, but the general practice is once a month. It can be made a monthly cron task by using:
sudo crontab -e
and adding the following lines to the bottom of the /var/spool/cron/crontabs/root crontab file that the above command edits.
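The entries might look like this (a sketch, not the original post's exact lines; adjust the btrfs path to whatever "which btrfs" reports on your system):

```
0 12 1 * * /bin/btrfs scrub start /
0 13 1 * * /bin/btrfs scrub start /home
```

The five fields are minute, hour, day-of-month, month and day-of-week, so a 1 in the day-of-month field gives a monthly run.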
This runs scrub on / every month at noon, and on /home at 1 PM.
Do not modify cron files in /etc.
If you do not have at least a RAID1 installation then scrub will only report errors, not correct them. What if scrub reports errors? It means that some blocks have data or metadata checksums that do not correspond to the checksums that scrub operation actually read for those blocks. The fault could be related to hardware.
Speculating, there are ways around scrub only reporting errors on a non-RAID system. If your HD is large, say 4TB, then you could create three partitions, sda1, sda2, sda3, and possibly a swap partition. Install BTRFS as the root file system on sda1, then add sda2 and sda3 and balance (discussed later) them using -dconvert=raid1 -mconvert=raid1, thus creating a "three HD" RAID1 configuration. Then scrub will repair errors. Break that 4TB into four partitions and make a RAID10. I haven't done it but it would make an interesting experiment!
You can check your storage devices for BTRFS errors using
:~$ sudo btrfs device stats /
[/dev/sda1].write_io_errs 0
[/dev/sda1].read_io_errs 0
[/dev/sda1].flush_io_errs 0
[/dev/sda1].corruption_errs 0
[/dev/sda1].generation_errs 0
and
:~# mount /dev/disk/by-uuid/b3131abd-c58d-4f32-9270-41815b72b203 /backup
(Note: use blkid to get your own uuid value for the mount command)
:~# btrfs device stats /backup
[/dev/sdc1].write_io_errs 0
[/dev/sdc1].read_io_errs 0
[/dev/sdc1].flush_io_errs 0
[/dev/sdc1].corruption_errs 0
[/dev/sdc1].generation_errs 0
and
:~# mount /dev/disk/by-uuid/17f4fe91-5cbc-46f6-9577-10aa173ac5f6 /media
:~# btrfs device stats /media
[/dev/sdb1].write_io_errs 0
[/dev/sdb1].read_io_errs 0
[/dev/sdb1].flush_io_errs 0
[/dev/sdb1].corruption_errs 0
[/dev/sdb1].generation_errs 0
which shows that none of my three hard drives are reporting any BTRFS errors.
Balance
The reason to do a balance: https://unix.stackexchange.com/quest...hat-does-it-do
More information here: https://btrfs.wiki.kernel.org/index..../btrfs-balance
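For reference, a typical periodic balance invocation (a sketch; the -dusage/-musage filter values are my assumption, not taken from the links above) rewrites only block groups that are less than half full, which keeps the run short:

```shell
# Rewrite data (-d) and metadata (-m) block groups that are under 50%
# utilized, returning the freed block groups to the pool. Run as root
# on a mounted btrfs filesystem.
btrfs balance start -dusage=50 -musage=50 /
```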
FSTrim
Fstrim does not need to be run on BTRFS installed on spinning disks. It is for use with SSD's.
SSD
fstrim -v /
Running the fstrim command (fstrim -v /) as root on any mounted subvolume will perform the TRIM command on the SSD device.
When I was running Kubuntu 16.04 and Neon User Edition I set the fstrim in crontab manually to run once a week. When I installed Kubuntu 18.04 (Bionic) I noticed that fstrim was already installed as a systemd timer set to run once a week.
[code]
$ locate fstrim.timer
/etc/systemd/system/timers.target.wants/fstrim.timer
/lib/systemd/system/fstrim.timer
/var/lib/systemd/deb-systemd-helper-enabled/fstrim.timer.dsh-also
/var/lib/systemd/deb-systemd-helper-enabled/timers.target.wants/fstrim.timer
/var/lib/systemd/timers/stamp-fstrim.timer
$ cat /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
jerry@GreyGeek:~$ cat /etc/systemd/system/timers.target.wants/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
[/code]
Situations with a low amount of disk writes occurring on an SSD can use the discard mount option in /etc/fstab. In a high-write situation, such as having a database on an SSD, it is better to use TRIM commands rather than the discard mount option. See the comment in the /etc/fstab section.
The man page for fstrim states:
"Running fstrim frequently, or even using mount -o discard, might negatively affect the lifetime of poor-quality SSD devices. For most desktop and server systems a sufficient trimming frequency is once a week."
It appears that Bionic follows the manpage advice, but one is free to use fstrim as frequently as they feel necessary, with the understanding that the more often it is used the shorter the life of an SSD will be. What are poor quality SSD's? I suspect that what is meant is "cheap" SSD's, but the price of good quality SSD's like Samsung's 860 PRO has dropped significantly in recent months. A Samsung 860 PRO 512GB V-NAND solid state drive sells for $148, and the 256GB version sells for $98. With 3,000 writes (reads never bother SSD's) and wear leveling, that translates into over 100 years of usage for a TLC storage type, which is probably why Samsung guarantees their SSD's for 5 years. Real world tests have shown good quality SSD drives last up to 3X the mfg guarantee period.
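That endurance figure can be sanity-checked with shell arithmetic (a back-of-envelope sketch; the 40GB-per-day write load is my assumption):

```shell
# 512GB drive * 3000 program/erase cycles = total write endurance in GB,
# spread evenly across the cells by wear leveling.
total_gb=$((512 * 3000))        # 1536000 GB of endurance
days=$((total_gb / 40))         # days at an assumed 40GB written per day
echo "$((days / 365)) years"    # roughly a century
```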
RAID
Don't use a partition manager on a multidrive RAID or JBOD configuration without first converting the drives to singletons.
Never use
btrfs subvolume set-default
to change the root id to some value other than 5. IOW, don't use that command on Ubuntu based distros.
BTRFS commands that require shutting down the system
(Note: In nearly 4 years of using Btrfs I have never needed to use the following commands, and the only thing I know about them is what I've read. I will post links to those commands but, as the old saying goes, "If it ain't broke, don't fix it!")
check
restore
rescue
I am borrowing this idea from Falko Timme, who wrote a similar guide several years ago. Some of his information is now outdated and some of the techniques he used are no longer recommended. Also, this guide (not his) is written for Kubuntu because as a Ubuntu derivative it creates only two subvolumes when installed: @ and @home. And, I have written using the btrfs-progs v4.4 version of BTRFS. Debian recommends btrfs-tools, which is in Kubuntu's repository.
Installing BTRFS as the root file system for Kubuntu
The BTRFS man pages are grouped as:
btrfs, mkfs.btrfs(8), ionice(1), btrfs-balance(8), btrfs-check(8), btrfs-device(8), btrfs-filesystem(8), btrfs-inspect-internal(8), btrfs-property(8), btrfs-qgroup(8), btrfs-quota(8), btrfs-receive(8), btrfs-replace(8), btrfs-rescue(8), btrfs-restore(8), btrfs-scrub(8), btrfs-send(8), btrfs-subvolume(8)
Second assumption: you are going to give BTRFS the entire sda1, with the possible exception of a swap partition on sda2.
When the installation process gets to the "Disk Setup" stage, select the "Manual" method. DO NOT let the installation procedure do the partitioning for you automatically. It will chose EXT4.
In the manual mode delete all existing partitions, if any, and create one big raw partition for sda. Some experienced users will use the raw partition, i.e., the entire drive as sda as the BTRFS pool. That setup is called a "Partitionless BTRFS disk". It is recommended not to do that, but to create a partition, sda1.
If you plan to create a swap partition subtract from the size of your device a value equal to the amount of your RAM plus 5%. If you have 6GB of RAM then subtract from the total size of your device an amount equal to 6.3GB and create sda1 equal to that amount. A 750GB HD, for example, would be partitioned with 743GB, sda1, and the rest would be given to the swap partition, sda2. This will allow room for suspend to work properly. And while BTRFS does NOT use a swap file or swap partition some applications may.
Now, while still in the partition screen select the sda1 partition. The "Use as" option is pre-selected for EXT4. Change it to BTRFS. Check the "Format the partition" checkbox. In the "Mount Point" text box enter the forward slash, /, and nothing else. Click OK and proceed. The installation will create both the @ and @home subvolumes. In the /etc/fstab file those to subvolumes will be bound to / and /home. If you wanted to use the partitionless BTRFS disk method you'd select sda instead of sda1, and BTRFS subvolumes become the same as partitions. Oshunluver describes his method of installing multiple distros on a single device using the partitionless BTRFS disk method. The method has limitations:
Partitionless Btrfs disk
Warning: Most users do not want this type of setup and instead should install Btrfs on a regular partition. Furthermore GRUB strongly discourages installation to a partitionless disk.
Btrfs can occupy an entire data storage device, replacing the MBR or GPT partitioning schemes, using subvolumes to simulate partitions. However, using a partitionless setup is not required to simply create a Btrfs filesystem on an existing partition that was created using another method. There are some limitations to partitionless single disk setups:
To overwrite the existing partition table with Btrfs, run the following command:
# mkfs.btrfs /dev/sdX
For example, use /dev/sda rather than /dev/sda1. The latter would format an existing partition instead of replacing the entire partitioning scheme.
Warning: Most users do not want this type of setup and instead should install Btrfs on a regular partition. Furthermore GRUB strongly discourages installation to a partitionless disk.
Btrfs can occupy an entire data storage device, replacing the MBR or GPT partitioning schemes, using subvolumes to simulate partitions. However, using a partitionless setup is not required to simply create a Btrfs filesystem on an existing partition that was created using another method. There are some limitations to partitionless single disk setups:
- Cannot use different file systems for different mount points.
- Cannot use swap area as Btrfs does not support swap files and there is no place to create swap partition. This also limits the use of hibernation/resume, which needs a swap area to store the hibernation image.
- Cannot use UEFI to boot.
To overwrite the existing partition table with Btrfs, run the following command:
# mkfs.btrfs /dev/sdX
For example, use /dev/sda rather than /dev/sda1. The latter would format an existing partition instead of replacing the entire partitioning scheme.
Modifying /etc/fstab
In order to optimize the performance of the SSD, I strongly advise you to avoid doing the TRIM operation in real time (whenever a file is deleted) because you would be putting an unnecessary extra amount of work over the SSD. In other words: You should not enable the discard option in fstab.
Instead, what I recommend is to run a script periodically to tell the SSD which blocks are free with the command fstrim. Doing this operation daily or weekly is more than enough. This way we do not lose any performance due to TRIM when deleting files and we periodically keep informed the SSD about the free blocks.
Making your first snapshots of @ and @home
Your installation of Kubuntu is complete, you've rebooted and you are staring at your newly minted desktop. Before you do ANYTHING else, before you install a printer, or run "ubuntu-drivers" to configure your GPU, or install your favorite applications, or install or configure your email client, or upload your email archives, BEFORE ANYTHING ELSE do the following:
Open a Konsole and enter
sudo -i
and press the Enter key. You are now root.
blkid
will give you a list of your storage device UUIDs. It could look like this:
:~$ blkid
/dev/sda1: UUID="47a4794f-b168-4c61-9d0c-cc22f4880116" UUID_SUB="94528e9d-7bd3-4712-bd3f-0a24774dc4b9" TYPE="btrfs" PARTUUID="99e4dabd-01"
With your mouse, highlight the 47a4794f-b168-4c61-9d0c-cc22f4880116 part of the sda1 UUID, then right-click and copy it. Then, in the Konsole, enter
mount /dev/disk/by-uuid/uuidhere /mnt
replacing "uuidhere" with 47a4794f-b168-4c61-9d0c-cc22f4880116. That mounts the drive at /mnt, and /mnt becomes the <ROOT_FS> of your Kubuntu BTRFS installation. It is the top of the tree. @, @home and all other subvolumes you may add are under <ROOT_FS>. Notice that your system and desktop are still running. You can switch to them and do stuff if you need to. You are doing live maintenance on your system!
If you do
vdir /mnt
you will see
/mnt/@
/mnt/@home
Now you need to create a normal subdirectory under <ROOT_FS> where your @ and @home snapshots can be stored.
mkdir /mnt/snapshots
Doing
vdir /mnt
will show
/mnt/@
/mnt/@home
/mnt/snapshots
Now to create your first snapshots.
btrfs su snapshot -r /mnt/@ /mnt/snapshots/@_basic_install
btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home_basic_install
"su" is shorthand for "subvolume". Many of the btrfs commands have shortened aliases.
"-r" tells the snapshot command to make a read-only snapshot. Read only snapshots can be moved outside of the <ROOT_FS> pool by using the btrfs send & receive command. If "-r" is not used then the snapshot created is a read-write snapshot and cannot be moved outside of <ROOT_FS> pool.
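The two snapshot commands can be wrapped in a small script that date-stamps the names, as this guide does later (@YYYYMMDD). The snap helper and the DRY_RUN guard are illustrative, not part of btrfs; it assumes <ROOT_FS> is mounted at /mnt and /mnt/snapshots exists:

```shell
#!/bin/sh
# Sketch: create a read-only snapshot pair of @ and @home named with
# today's date (@YYYYMMDD / @homeYYYYMMDD).
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
STAMP=$(date +%Y%m%d)

snap() {
    # $1 = source subvolume, $2 = snapshot destination
    if [ "$DRY_RUN" = "1" ]; then
        echo "btrfs subvolume snapshot -r $1 $2"
    else
        btrfs subvolume snapshot -r "$1" "$2"
    fi
}

snap /mnt/@     "/mnt/snapshots/@$STAMP"
snap /mnt/@home "/mnt/snapshots/@home$STAMP"
```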
Now you have your first set of snapshots, but you have not yet made a backup. Snapshots are not backups until they have been moved to an external storage device. You do
btrfs filesystem usage /
and the usage of your installation is shown to be, say, 28GB. You have a 128GB USB stick and you plug it in. Because btrfs receive needs a BTRFS destination, fire up the Partition Manager, create sdb1, and format it using BTRFS (a FAT32 or EXT4 stick can still be used with the "-f" option described below). Close the Partition Manager. Use Dolphin to unmount the USB stick. Plug it back in and the USB stick should mount automatically at /media. You convert your snapshots into backups by sending them to the USB stick:
btrfs send /mnt/snapshots/@_basic_install | btrfs receive /media
btrfs send /mnt/snapshots/@home_basic_install | btrfs receive /media
Whereas a fresh snapshot is essentially empty, the process of sending and receiving even a fresh snapshot results in the entire contents of @ and @home being read and sent to /media, otherwise they would not be a complete archival copy of your system. So, it takes only seconds to make a set of snapshots but sending them can take 15 to 30 minutes or more. While the sending and receiving are taking place you can continue to use your system normally with little or no degradation in performance. When the sending and receiving are done you can umount the USB stick and store it in a safe place. Your system is backed up.
IF your destination is not formatted with BTRFS then your send command should be:
btrfs send /mnt/snapshots/@_basic_install -f /media/@_basic_install.txt
where /media is a mounted USB stick formatted with EXT4 or FAT32, or a network destination.
The "-f" option writes the send stream to the named file instead of standard output. Despite the .txt extension used here, the file contains a binary send stream, not ASCII text.
To convert it back to a subvolume one would use:
btrfs receive -f /media/@_basic_install.txt /mnt/snapshots
The received subvolume keeps the name embedded in the stream, @_basic_install, regardless of the file name it was stored under, and it will be set to read-only. To make it writable use
btrfs property set -ts /mnt/snapshots/@_basic_install ro false
Regular and Incremental snapshots
After you've made your first snapshots, which I called "@_basic_install" and "@home_basic_install", you'll need to start a regular regimen of snapshotting your system to enable the ability to roll back to a previous condition of your system if the need arises. A snapshot is essentially empty when it is first made and begins to populate as additions, deletions and changes to the system occur. One might conclude that since snapshots are empty when first taken one can make dozens or hundreds or thousands of them. I've read one person asking if it would be OK to take a snapshot every minute! Below I discuss why more than a dozen or so snapshots is not advisable.
That said, it is ALWAYS advisable to make a snapshot of your system (@ and @home) before any significant changes in your system: doing an update & full-upgrade, installing one or more applications from the repository, uninstalling an application, or especially when installing a foreign deb, tar or zip package. IF the changes do not produce any problems, and they rarely do, one can then keep the most recent backups and delete the oldest ones, leaving no more than a dozen or so snapshots. Personally, I always date my snapshots: @YYYYMMDD & @homeYYYYMMDD and occasionally add a short description if I have taken more than one snapshot on that date: @YYYYMMDD_before_libreoffice & @homeYYYYMMDD_before_libreoffice.
Snapshots can be renamed by using the mv command:
mv /mnt/snapshots/@YYYYMMDD_before_libreoffice /mnt/snapshots/@YYYYMMDD
Btrfs can take incremental backups of snapshots to external storage devices. Doing so shortens the time taken to send a snapshot to an external storage device, which can only be done with the send & receive commands, since a snapshot cannot be mv'd or cp'd out of its <ROOT_FS>. Assume that yesterday you made a snapshot of @ as @YYYYMMD1. You sent @YYYYMMD1 to the external storage using the Btrfs send & receive command:
btrfs send /mnt/snapshots/@YYYYMMD1 | btrfs receive /backup
where /backup is an external storage device mounted on the directory /backup that you had made previously.
Today you made @YYYYMMD2. Now you are going to send just the difference between @YYYYMMD1 and @YYYYMMD2 to /backup.
btrfs send -p /mnt/snapshots/@YYYYMMD1 /mnt/snapshots/@YYYYMMD2 | btrfs receive /backup
Here is what happens: receive notes that @YYYYMMD1 already exists under /backup and makes a snapshot of it as @YYYYMMD2. It then begins to populate @YYYYMMD2 with the data not already in @YYYYMMD1, the incremental data. When done, the @YYYYMMD2 snapshot under /backup can be treated as a normal snapshot of @ dated YYYYMMD2.
The next day snapshot @YYYYMMD3 is taken and sent to /backup as an incremental backup.
btrfs send -p /mnt/snapshots/@YYYYMMD2 /mnt/snapshots/@YYYYMMD3 | btrfs receive /backup
And, receive makes a snapshot of @YYYYMMD2, which it has on its side, as @YYYYMMD3 and adds to it any changes not in @YYYYMMD2.
Although @YYYYMMD3 is called an "incremental" backup, it contains the entire contents of @ as of YYYYMMD3 and can be used to replace @ at any time. Only the differences travel during an incremental send; receive reconstructs a complete snapshot by applying them to its copy of @YYYYMMD2, so the result is the same as a full send done without the -p parameter.
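A minimal sketch of this incremental routine, assuming the snapshots live in /mnt/snapshots and the external drive is mounted at /backup as above. The incremental_backup helper and DRY_RUN guard are my own naming, not a btrfs feature:

```shell
#!/bin/sh
# Sketch: incremental send of a new snapshot using an older one as parent.
# $1 = parent snapshot name, $2 = new snapshot name (under /mnt/snapshots).
# DRY_RUN=1 (the default) prints the pipeline instead of running it.
DRY_RUN=${DRY_RUN:-1}

incremental_backup() {
    parent="/mnt/snapshots/$1"
    new="/mnt/snapshots/$2"
    if [ "$DRY_RUN" = "1" ]; then
        echo "btrfs send -p $parent $new | btrfs receive /backup"
    else
        btrfs send -p "$parent" "$new" | btrfs receive /backup
    fi
}

incremental_backup @YYYYMMD1 @YYYYMMD2
```

The parent snapshot must already exist on the /backup side, as described above, or receive will refuse the stream.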
How often should one make a snapshot set or move a set to external storage?
Say your last snapshot was a week ago and after experimenting with some software you made a mistake installing or removing it, or making changes, and you found yourself unable to manually make changes to restore your system to what it was before the problem, or you can't log into your account or get a black screen when you do. You do a roll back. All email you've received since that week old snapshot was taken is now lost, unless your mail server keeps a copy of it on hand, like mine does. Any URLs you bookmarked in your browser during the last week will be missing. Any documents you've written since that snapshot are not in your Documents folder any more. All images or movies you've saved during the last week are not on your system any more. Any updates will have to be redone, but the system will notify you of that fact.
Considering the fact that a snapshot pair sent to an external storage device is fully populated (my @ & @home total 110GB), there is a limit to how many sets of snapshots you can save to your 2TB spinning HD storage device. It could hold only 18 of my sets, much less than one set every day for a year or even a month. My 750GB HD can hold only around 6 pairs.
So, what do I do? I try to keep about 4 pairs of snapshots on my <ROOT_FS> (i.e., in /mnt/snapshots), but not under / or /home. If a big update, more than 30 or so apps, is announced I make a snapshot set and then do the update. If, after a day of testing, everything is working well, I take another snapshot set and send it to /backup, my mounted external storage HD. Then I delete the snapshot made just before the update. And, I delete the oldest pair from my /backup. For a massive update, like 200+ applications, I'll make a snapshot beforehand, test the update, and if it is OK then make another snapshot and delete the previous one. Then I'll send them to /backup and also to a couple of 128GB USB sticks that I keep handy for such a purpose.
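Rotating out the oldest pairs can be scripted, because date-stamped names like @YYYYMMDD sort chronologically as plain strings. This sketch only selects the deletion candidates; the helper name and the KEEP count of 4 are illustrative choices, not btrfs conventions:

```shell
#!/bin/sh
# Sketch: print which date-stamped snapshots (@YYYYMMDD) to delete so that
# only the newest KEEP remain. Feed it names, e.g. from `ls /mnt/snapshots`.
KEEP=${KEEP:-4}

oldest_beyond_keep() {
    # Sorted chronologically, everything except the last KEEP lines
    # is a deletion candidate (head -n -N is a GNU coreutils feature).
    sort | head -n -"$KEEP"
}

# Example with hypothetical snapshot names; only the oldest is printed:
printf '%s\n' @20180907 @20180901 @20180905 @20180903 @20180902 | oldest_beyond_keep
```

Once you have verified the list, pipe it through something like `while read s; do btrfs subvol delete -C "/mnt/snapshots/$s"; done` as root.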
All in all, incremental backups save time IF your total usage is several hundred GB and you are frequently deleting, adding and changing data, as in software development. Sending my snapshot pair (110GB) to /backup takes about 10 minutes for @YYYYMMDD and about 15 minutes for @homeYYYYMMDD. An incremental backup may save me some time, but it is a background operation which doesn't detract from my use of the system while send & receive is working. EDIT: after replacing my primary drive with a Samsung Pro EVO SSD I measured the total time to send & receive both @YYYYMMDD and @homeYYYYMMDD (115GB) at less than 3 minutes. I was shocked at the increase in speed, considering that the destination drive is a spinning HD.
Also, I do not use compression. How much this would affect the total number of pairs that could be saved, or the time to send them to storage, is outside my experience.
Using the "btrfs send -f" command
Take an ro snapshot of @home:
btrfs su snapshot -r /mnt/@home /mnt/snapshots/@home20180907
Create a readonly snapshot of '/mnt/@home' in '/mnt/snapshots/@home20180907'
Send it to a text file:
btrfs send -f /mnt/snapshots/@home07txt /mnt/snapshots/@home20180907
At subvol /mnt/snapshots/@home20180907
The "send -f" syntax is the <outfile> first and the subvolume source second. The <outfile> was created with "send -f" so "receive -f" should receive it and convert it to a normal subvolume:
btrfs receive -f /mnt/snapshots/@home07txt /backup/
At subvol @home20180907
Notice that "At subvol" gives the name of the original subvolume, @home20180907, which is pulled from the stream file. The subvolume @home20180907 can be browsed with Dolphin just like any other subvolume because it IS like any other subvolume.
As an ordinary file, @home07txt can be moved between btrfs subvolumes or outside of btrfs altogether, to an EXT4 partition or a remote server, and the "btrfs receive -f <filename> /mountpoint" command will convert it back to a subvolume.
Adding peripheral devices
You've created archival snapshots of your @ and @home system, so you are now ready to install your printer. If the installation proceeds normally and your printer works well then you can open a Konsole and create @_printer_installed and @home_printer_installed snapshots. Send them to the USB stick. Then install your GPU and if things go well open a Konsole and create @_gpu_installed and @home_gpu_installed, and archive them. Repeat the process for your email client. Then add your browser, your special apps, etc, and repeat the archival process. Say your gpu install fails, leaving you with a black screen and all you can do is open a terminal. In that terminal you
sudo -i
to get to root. You mount your sda1 uuid to /mnt as described above. Then you move @ and @home out of the way:
mv /mnt/@ /mnt/@old
mv /mnt/@home /mnt/@homeold
Now you are ready to create a new @ and @home subvolume using the btrfs snapshot command:
btrfs subvol snapshot /mnt/snapshots/@_printer_installed /mnt/@
btrfs subvol snapshot /mnt/snapshots/@home_printer_installed /mnt/@home
Since the "-r" parameter was not used, the @ and @home snapshots just created will be read-write. Using a read-only snapshot for @ or @home would prevent those subvolumes from being mounted during boot up.
With @ and @home rolled back to the printer_installed, i.e., the snapshot made before the attempted install of the gpu, you can unmount /mnt and then reboot. After a roll back the boot up process can take as much as twice as long to reach the desktop as before, but subsequent boots will be at normal speed.
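The rollback steps above can be collected into one sketch. The run and rollback helpers and the DRY_RUN guard are my own illustration; review the printed commands before running them for real, and remember <ROOT_FS> must be mounted at /mnt:

```shell
#!/bin/sh
# Sketch: move the broken @ and @home aside, then recreate them as
# read-write snapshots of a known-good pair. $1 is the snapshot suffix,
# e.g. "_printer_installed". DRY_RUN=1 (the default) only prints.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

rollback() {
    run mv /mnt/@ /mnt/@old
    run mv /mnt/@home /mnt/@homeold
    # No -r here: the new @ and @home must be read-write to boot
    run btrfs subvolume snapshot "/mnt/snapshots/@$1" /mnt/@
    run btrfs subvolume snapshot "/mnt/snapshots/@home$1" /mnt/@home
}

rollback _printer_installed
```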
Eventually you will get to a point where you are doing normal updates and the use of special snapshot names is no longer necessary. At that point a date scheme, @YYYYMMDD, or something similar, will suffice. Do not separate or mix @ and @home snapshot pairs. Updates often make changes in both @ and @home (/ and /home), and that's why both snapshots are taken at the same time, and rolled back using the same time stamp. Another problem you might run into is inadvertently using @ to make a @homeYYYYMMDD snapshot, or @home to make a @YYYYMMDD snapshot. To recover from these mistakes you will need a LiveUSB with the btrfs tools installed and the btrfs kernel module loaded, and a set of snapshots you can examine with Dolphin or mc to make sure they are of @ and @home before you replace the defective @ or @home.
Subvolumes can be browsed like normal subdirectories, and a snapshot of @ looks exactly like the / directory just as a snapshot of @home looks like the /home directory. Because of that you can use Dolphin or mc to browse a snapshot and copy a file or directory from it back into its normal place in / or /home. This is done if a file or directory under / or /home is inadvertently deleted or altered in some way. Because of CoW (Copy on Write), when you delete or alter a file, btrfs writes the new data to fresh blocks while the snapshot keeps pointing at the original blocks. Eventually, considering updates, changes and deletions, most files get changed in some way, and the snapshots end up holding the only copies of the original versions of those files or directories. That is why older snapshots occupy more space than fresh ones. Using
btrfs subvol delete -C /mnt/snapshots/@_an_old_snapshot
btrfs subvol delete -C /mnt/snapshots/@home_an_old_snapshot
will commit (-C) each transaction as it completes and return the data blocks to the pool.
Maintaining your BTRFS file system
BTRFS has several maintenance tools to keep your system running smoothly and quickly. They include scrub, defragment and balance. All of the tools are used as root, operating on either /mnt or / and /home. You can continue to use your system while they run in a Konsole.
Defragment
From the btrfs-filesystem man page:
Code:
Warning: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up the ref-links of COW data (for example files copied with cp --reflink, snapshots or de-duplicated data). This may cause considerable increase of space usage depending on the broken up ref-links.
Defragmenting does not break ref-links everywhere, just on the particular instances you point it at. So, if you have subvolume A, and snapshots S1 and S2 of that subvolume A, then running defrag on just subvolume A will break the reflinks between it and the snapshots, but S1 and S2 will still share any data they originally had with each other. If you then take a third snapshot of A, it will share data with A, but not with S1 or S2 (because A is no longer sharing data with S1 or S2).
Given this behavior, you have in turn three potential cases when talking about persistent snapshots:
1) You care about minimizing space used, but aren't as worried about performance. In this case, the only option is to not run defrag at all.
2) You care about performance, but not space usage. In this case, defragment everything.
3) You care about both space usage and performance. In this balanced case, I would personally suggest defragmenting only the source subvolume (so only subvolume A in the above explanation), and doing so on a schedule that coincides with snapshot rotation. The idea is to defragment just before you take a snapshot, and at a frequency that gives a good balance between space usage and performance. As a general rule, if you take this route, start by doing the defrag on either a monthly basis if you're doing daily or weekly snapshots, or with every fourth snapshot if not, and then adjust the interval based on how that impacts your space usage.
https://www.mail-archive.com/linux-b.../msg72416.html
Btrfs does have scaling issues due to too many snapshots (or actually the reflinks snapshots use, dedup using reflinks can trigger the same scaling issues), and single to low double-digits of snapshots per snapshotted subvolume remains the strong recommendation for that reason.
My general approach is to manually create snapshots of @ and @home as read-only and send them to my archival storage. Then, delete all my snapshots in /mnt/snapshots before running the defragment command. When the defragment command is done I create new @ and @home snapshots and then I reboot just to be sure all works well.
Without having to mount your <ROOT_FS> to /mnt you can manually defragment your system using
sudo btrfs fi defragment -r -f / /home
Defragment can be run on one or more files and/or directories. -clzo can be used to add compression.
I defragment about once every two or three months. Since my total data is only 110GB and my pool is 698GB performance degradation has never been a problem. However, remember that defragmentation will fail on all open files and services. I get messages like this:
...
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/libexec/kf5/kscreen_backend_launcher: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/libexec/kf5/start_kdeinit: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/libexec/kdeconnectd: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/libexec/polkit-kde-authentication-agent-1: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/libexec/org_kde_powerdevil: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/pulse/gconf-helper: Text file busy
ERROR: defrag failed on /usr/lib/x86_64-linux-gnu/sddm/sddm-helper: Text file busy
ERROR: defrag failed on /usr/lib/xorg/Xorg: Text file busy
ERROR: defrag failed on /usr/sbin/ModemManager: Text file busy
ERROR: defrag failed on /usr/sbin/NetworkManager: Text file busy
ERROR: defrag failed on /usr/sbin/acpid: Text file busy
ERROR: defrag failed on /usr/sbin/avahi-daemon: Text file busy
ERROR: defrag failed on /usr/sbin/cron: Text file busy
ERROR: defrag failed on /usr/sbin/cups-browsed: Text file busy
ERROR: defrag failed on /usr/sbin/irqbalance: Text file busy
ERROR: defrag failed on /usr/sbin/kerneloops: Text file busy
ERROR: defrag failed on /usr/sbin/rsyslogd: Text file busy
ERROR: defrag failed on /usr/sbin/thermald: Text file busy
ERROR: defrag failed on /usr/sbin/cupsd: Text file busy
ERROR: defrag failed on /usr/sbin/atd: Text file busy
total 73 failures
Scrub
When scrubbing a partition, btrfs reads all data and metadata blocks and verifies checksums. It automatically repairs corrupted blocks if there’s a correct copy available in a RAID configuration. BTRFS also reports any unreadable sector along with the affected file via system log. It can be run as often as one wishes, but the general practice is once a month. It can be made a monthly cron task by using:
sudo crontab -e
and adding the following lines to the bottom of the /var/spool/cron/crontabs/root crontab file that the above command edits.
Code:
# m h dom mon dow command
0 12 * * * /bin/btrfs scrub start /
0 13 * * * /bin/btrfs scrub start /home
Do not modify cron files in /etc.
If you do not have at least a RAID1 installation then scrub will only report errors, not correct them. What if scrub reports errors? It means that some blocks have data or metadata checksums that do not correspond to the checksums that scrub operation actually read for those blocks. The fault could be related to hardware.
Speculating, there are ways around scrub only reporting errors on a non-RAID system. If your HD is large, say 4TB, then you could create three partitions, sda1, sda2, sda3 and possibly a swap partition. Install BTRFS as the root file system on sda1 and then add sda2 and sda3 and balance (discussed later) them using -mraid1 -draid1 -sraid1, thus creating a "three" HD RAID1 configuration. Then scrub will repair errors. Break that 4TB into four partitions and make a RAID10. I haven't done it but it would make an interesting experiment!
You can check your storage devices for BTRFS errors using
:~$ sudo btrfs device stats /
[/dev/sda1].write_io_errs 0
[/dev/sda1].read_io_errs 0
[/dev/sda1].flush_io_errs 0
[/dev/sda1].corruption_errs 0
[/dev/sda1].generation_errs 0
and
:~# mount /dev/disk/by-uuid/b3131abd-c58d-4f32-9270-41815b72b203 /backup
(Note: use blkid to get your own uuid value for the mount command)
:~# btrfs device stats /backup
[/dev/sdc1].write_io_errs 0
[/dev/sdc1].read_io_errs 0
[/dev/sdc1].flush_io_errs 0
[/dev/sdc1].corruption_errs 0
[/dev/sdc1].generation_errs 0
and
:~# mount /dev/disk/by-uuid/17f4fe91-5cbc-46f6-9577-10aa173ac5f6 /media
:~# btrfs device stats /media
[/dev/sdb1].write_io_errs 0
[/dev/sdb1].read_io_errs 0
[/dev/sdb1].flush_io_errs 0
[/dev/sdb1].corruption_errs 0
[/dev/sdb1].generation_errs 0
which shows that none of my three hard drives is reporting any BTRFS errors.
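For a cron job it is handy to reduce that output to a single number and alert only when it is nonzero. A sketch, assuming the stats text is piped in; total_errors is a hypothetical helper, not a btrfs command:

```shell
#!/bin/sh
# Sketch: sum the error counters from `btrfs device stats` output.
# Reads the stats text on stdin and prints the total error count, so a
# script can alert when the total is nonzero.
total_errors() {
    # Each stats line is "<counter name> <count>"; sum the second field.
    awk '{ sum += $2 } END { print sum + 0 }'
}

# Example with stats captured from a healthy drive (prints 0):
total_errors <<'EOF'
[/dev/sda1].write_io_errs 0
[/dev/sda1].read_io_errs 0
[/dev/sda1].flush_io_errs 0
[/dev/sda1].corruption_errs 0
[/dev/sda1].generation_errs 0
EOF
```

In a real job you would run `btrfs device stats / | total_errors` as root and mail yourself when the result is not 0.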
Balance
The reason to do a balance: https://unix.stackexchange.com/quest...hat-does-it-do
Unlike most conventional filesystems, BTRFS uses a two-stage allocator. The first stage allocates large regions of space known as chunks for specific types of data, then the second stage allocates blocks like a regular filesystem within these larger regions. There are three different types of chunks:
- Data Chunks: These store regular file data.
- Metadata Chunks: These store metadata about files, including among other things timestamps, checksums, file names, ownership, permissions, and extended attributes.
- System Chunks: These are a special type of chunk which stores data about where all the other chunks are located.
Only the type of data that the chunk is allocated for can be stored in that chunk. The most common case these days when you get a -ENOSPC error on BTRFS is that the filesystem has run out of room for data or metadata in existing chunks, and can't allocate a new chunk. You can verify that this is the case by running btrfs fi df on the filesystem that threw the error. If the Data or Metadata line shows a Total value that is significantly different from the Used value, then this is probably the cause.
What btrfs balance does is to send things back through the allocator, which results in space usage in the chunks being compacted. For example, if you have two metadata chunks that are both 40% full, a balance will result in them becoming one metadata chunk that's 80% full. By compacting space usage like this, the balance operation is then able to delete the now empty chunks, and thus frees up room for the allocation of new chunks. If you again run btrfs fi df after you run the balance, you should see that the Total and Used values are much closer to each other, since balance deleted chunks that weren't needed anymore.
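Rather than a full balance, the usage filters let you compact only chunks that are mostly empty, which is cheaper and usually enough to free chunks for reallocation. A hedged sketch; gentle_balance, the DRY_RUN guard and the 50% threshold are my own choices, only the btrfs balance command itself is real:

```shell
#!/bin/sh
# Sketch: a gentle balance that only rewrites data and metadata chunks
# less than 50% full. DRY_RUN=1 (the default) prints the command instead
# of running it (the real command needs root).
DRY_RUN=${DRY_RUN:-1}

gentle_balance() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "btrfs balance start -dusage=50 -musage=50 $1"
    else
        btrfs balance start -dusage=50 -musage=50 "$1"
    fi
}

gentle_balance /
```

Compare `btrfs fi df /` before and after: the Total and Used values should move closer together, as described above.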
More information here: https://btrfs.wiki.kernel.org/index..../btrfs-balance
FSTrim
Fstrim does not need to be run on BTRFS installed on spinning disks. It is for use with SSDs.
Running the fstrim command as root on any mounted subvolume will perform the TRIM command on the SSD device:
fstrim -v /
When I was running Kubuntu 16.04 and Neon User Edition I set the fstrim in crontab manually to run once a week. When I installed Kubuntu 18.04 (Bionic) I noticed that fstrim was already installed as a systemd timer set to run once a week.
Code:
$ locate fstrim.timer
/etc/systemd/system/timers.target.wants/fstrim.timer
/lib/systemd/system/fstrim.timer
/var/lib/systemd/deb-systemd-helper-enabled/fstrim.timer.dsh-also
/var/lib/systemd/deb-systemd-helper-enabled/timers.target.wants/fstrim.timer
/var/lib/systemd/timers/stamp-fstrim.timer
$ cat /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
jerry@GreyGeek:~$ cat /etc/systemd/system/timers.target.wants/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
Situations with a low amount of disk writes on an SSD can use the discard mount option in /etc/fstab. In a high-write situation, such as having a database on an SSD, it is better to use periodic TRIM commands rather than the discard mount option. See the comment in the /etc/fstab section.
The man page for fstrim states:
Running fstrim frequently, or even using mount -o discard, might negatively affect the lifetime of poor-quality SSD devices. For most desktop and server systems a sufficient trimming frequency is once a week.
RAID
Don't use a partition manager on a multidrive RAID or JBOD configuration without first converting the drives to singletons.
Never use
btrfs subvolume set-default
to change the root id to some value other than 5. IOW, don't use that command on Ubuntu based distros.
BTRFS commands that require shutting down the system
(Note: In nearly 4 years of using Btrfs I have never needed to use the following commands, and the only thing I know about them is what I've read. I will post links to those commands but, as the old saying goes, "If it ain't broke don't fix it!")
check
restore
rescue