I decided to move away from using backups on my server to using RAID1 with btrfs instead - less CPU overhead and almost instant recovery.
BTW, I realize RAID1 isn't the same as a regular backup: if you accidentally delete a file, a backup saves you and RAID1 doesn't. However, this setup makes sense to me for the server. Anything I want badly enough to keep, I'll back up separately on my desktop. Mostly the server holds music and videos, and my family uses it for backups from their computers and phones.
**Notes**
I did actually have to reboot once during the whole process, because I changed boot devices and wanted to be sure I could boot to the new install location before wiping the old one. Otherwise, no rebooting was necessary.
My server supports hot swapping drives so I never powered down. All that's needed is sudo partprobe after a drive swap.
If you're new to btrfs: I reference "subvolumes" below. A btrfs filesystem can be divided into subvolumes, which keep data separate while still living on a single filesystem. This has many benefits, including that all free space on the filesystem is available to any subvolume that needs it. Subvolumes are mountable just like partitions, so no partitioning is necessary. I currently have 3 different installs on a single btrfs filesystem, each in its own subvolume.
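To make that concrete, here's a minimal example (the device, mount points, and subvolume name here are hypothetical, not my actual layout):

```
# Create a subvolume on an already-mounted btrfs filesystem
sudo btrfs subvolume create /mnt/btrfs/@install

# List the subvolumes on that filesystem
sudo btrfs subvolume list /mnt/btrfs

# Mount just that subvolume elsewhere, exactly like a partition
sudo mount -o subvol=@install /dev/sdb1 /mnt/install
```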
I have been a long-time btrfs user (since 0.19) and am very comfortable with it. I recommend it to anyone who will listen. Here's one example why:
My server had one 6TB drive for data and 3x2TB drives used for backups, all four btrfs filesystems in single mode. The new configuration adds a small 60GB SSD as a boot device and a second 6TB drive to pair with the first in RAID1, and removes one of the 2TB drives. I will use the 6TB drives in RAID1 for my DVD collection and the 2TB drives in RAID1 for everything else (pics, docs, music, etc.). This increases my total data storage to 8TB. A small part of each 2TB drive is reserved (partitioned) as a backup of the installation (Ubuntu Server 14.04), so if the SSD fails I can boot from one of the 2TB drives and be running again in short order - BIOS fallback in boot order is automatic.
**Tasks**
- Remove the extra 2TB drive and install the SSD and the new 6TB drive
- Copy Ubuntu to the SSD and set up the SSD as the boot device
- Repartition the two remaining 2TB drives and create a new RAID1 filesystem on them
- Add the new 6TB drive to the current filesystem on the older 6TB drive
- Convert the 12TB filesystem to a 6TB RAID1 filesystem
- Move target data from the 6TB RAID1 to the 2TB RAID1 (docs, etc.)
- Make backup copies of Ubuntu on the 2TB drives
1. Since the 2TB drive I was removing was only used for backups, I just pulled it. I inserted the 6TB drive in its place and connected the SSD to an unused SATA port. I ran partprobe and all the drives came up.
2. One of btrfs's best features? send/receive. I created a new btrfs filesystem on the SSD, mounted it, and then sent a full copy of my currently running install to it. That's right: I didn't have to boot into another install or a USB drive. The steps are: make a read-only snapshot of the install subvolume, issue the send/receive command, and wait for it to complete. Once it's received, since it's read-only, simply make a new read-write snapshot of the received subvolume, delete the read-only version, and you're good to go. I went into the new install, manually edited fstab and grub.cfg to reflect the UUID of the new filesystem, and rebooted. I set the BIOS to boot from the new SSD and it booted on the first try.
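If you haven't used send/receive before, the sequence looks roughly like this (the mount points and subvolume names are made up for the example; substitute your own):

```
# Make a read-only snapshot of the running install subvolume
sudo btrfs subvolume snapshot -r /mnt/root/@install /mnt/root/@install-ro

# Send it to the new filesystem mounted from the SSD
sudo btrfs send /mnt/root/@install-ro | sudo btrfs receive /mnt/ssd

# On the SSD, make a writable snapshot and drop the read-only copy
sudo btrfs subvolume snapshot /mnt/ssd/@install-ro /mnt/ssd/@install
sudo btrfs subvolume delete /mnt/ssd/@install-ro
```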
3. My new scheme required some partitioning. I made a 60GB partition on both 2TB drives, which lets me duplicate the entire SSD as a backup; the remaining space went to a new storage filesystem in RAID1. One interesting hiccup: I deleted the old partition tables, re-partitioned, and created the btrfs RAID1 filesystem, and the first attempt showed a large portion of the filesystem in use even though no files were on it. I went back and used wipefs to remove the original btrfs superblocks, re-created the RAID1 filesystem, and all was well. Creating a RAID filesystem with btrfs is a single command: mkfs.btrfs -m raid1 -d raid1 /dev/sdc2 /dev/sdd2 - that's it, and it took about a second to complete. The two switches, -m and -d, refer to metadata and data; making them both RAID1 puts a copy of everything on both drives. If you've ever tried to do this with mdadm, you can appreciate just how easy btrfs really is.
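If you hit the same hiccup, the fix was roughly this (device names as above - and wipefs is destructive, so triple-check your devices first):

```
# Show any stale filesystem signatures still on the partitions
sudo wipefs /dev/sdc2 /dev/sdd2

# Erase the old btrfs superblock signatures (destructive!)
sudo wipefs -a /dev/sdc2
sudo wipefs -a /dev/sdd2

# Re-create the RAID1 filesystem across both partitions
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdc2 /dev/sdd2
```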
4. btrfs handles multiple devices easily as well. A single command, btrfs device add /dev/sdb /mnt/pool, and it's done. This added the new 6TB drive to the existing 6TB filesystem mounted at /mnt/pool. I now had a 12TB filesystem with all the data on the first drive.
5. Another single command, btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool, and we were off to the races. Unlike the previous commands, this one did not happen instantly; I went to bed. I imagine it took several hours to copy 5.3TB of files. When it completed, I had a full 6TB RAID1 filesystem with all my data duplicated on both drives.
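Steps 4 and 5 together, plus a couple of standard btrfs-progs commands for keeping an eye on progress (device and mount point as in the text):

```
# Add the second 6TB drive to the existing single-device filesystem
sudo btrfs device add /dev/sdb /mnt/pool

# Rewrite existing data and metadata as RAID1 across both drives
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# From another shell: check how far the balance has gotten
sudo btrfs balance status /mnt/pool

# Afterwards, confirm the new profiles and usage
sudo btrfs filesystem df /mnt/pool
```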
6. Now I needed to move the data I wanted to relocate from the 6TB filesystem to the 2TB filesystem. Just like in step 2, send/receive did the work. Again, though, this takes time; I moved about 1.5TB from one filesystem to the other.
7. TBD: As of this morning I haven't decided the best way to make bootable backups of the install. With changing UUIDs and such, I have a couple of issues to work out. For now, I made send/receive backup copies of my install, but the ultimate goal is auto-fallback to a current copy of the install in the event of a failure. My BIOS will handle drive selection, but booting to a degraded RAID might be a problem. btrfs will do an incremental send/receive, but I assume this would overwrite any edits to fstab and grub.cfg that would be needed to boot correctly.
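For reference, an incremental send/receive looks like this - a sketch that assumes an earlier snapshot (@install-1 here; names hypothetical) already exists on both sides:

```
# Take a new read-only snapshot of the current install
sudo btrfs subvolume snapshot -r /mnt/root/@install /mnt/root/@install-2

# Send only the differences relative to the common parent snapshot
sudo btrfs send -p /mnt/root/@install-1 /mnt/root/@install-2 \
    | sudo btrfs receive /mnt/backup
```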
These operations did not even require unmounting any of the subvolumes, except the ones deleted when re-partitioning the 2TB drives. The entire time, my server was up and running and all files were accessible. I can't say enough about how awesome btrfs is.