I often comment on the many great reasons to use BTRFS. Here's one example: how easy it is to move the entire contents of a very large drive to a new one.
TL;DR: Simply "btrfs device add" the new drive to the old drive's file system, then "btrfs device remove" the old drive from the file system. Done. It takes time - moving 11 TB of data will take hours, maybe even overnight. But because it's BTRFS I will have ZERO downtime. I will continue to be able to use the server and access the data the entire time. My server has hot-swap bays, so I don't even have to reboot to add the drive and move the data.
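For anyone who wants the exact commands, here's a minimal sketch - the device names and mount point are placeholders, not my actual setup:

    # add the new drive to the mounted BTRFS file system
    sudo btrfs device add /dev/sdX /mnt/data

    # remove the old drive - BTRFS migrates all of its data to the
    # remaining devices before releasing it
    sudo btrfs device remove /dev/sdY /mnt/data

    # watch progress from another terminal
    sudo btrfs filesystem usage /mnt/data

The remove command blocks until every chunk has been relocated, but the file system stays mounted and fully usable the whole time.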
If you're interested in the rest of the tale...
The setup: My file server has 3 hard drives - 16 TB, 10 TB, and 6 TB. The 16 TB drive holds data in 17 separate subvolumes, and the 10 TB and 6 TB drives serve as backup devices. I do daily BTRFS incremental backups from the 16 TB drive to the other two using a script I wrote and run with anacron. I do not run the 10 and 6 TB drives as a single file system; they stand alone as separate file systems. This requires me to manually "balance" the storage space so both drives contain roughly the same amount of free space - in 5 years I've only had to redistribute the backup subvolumes once. I decided to keep the drives separate to maintain flexibility for later upgrades - adding, subtracting, or upgrading drives - without complicated data recovery requirements, and with no RAID or JBOD issues to deal with. To increase storage and replace failing drives, I have changed and realigned the drive setup several times over the years: 2x2 TB > 6 TB + 2 TB + 2 TB > 6 TB + 6 TB > 10 TB + 6 TB > 16 TB + 10 TB + 6 TB > today's 22 TB + 16 TB + 6 TB.
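For context, the core of a BTRFS incremental backup looks roughly like this. This is a simplified sketch, not my actual script, and the subvolume names, dates, and paths are invented for illustration:

    # take today's read-only snapshot of a subvolume
    btrfs subvolume snapshot -r /mnt/data/photos /mnt/data/.snapshots/photos-2024-06-02

    # very first run: send the whole snapshot to a backup drive
    btrfs send /mnt/data/.snapshots/photos-2024-06-02 | btrfs receive /mnt/backup1/.snapshots

    # every run after that: send only the changes since yesterday's snapshot
    btrfs send -p /mnt/data/.snapshots/photos-2024-06-01 \
        /mnt/data/.snapshots/photos-2024-06-02 | btrfs receive /mnt/backup1/.snapshots

Because each subvolume is sent to one specific backup drive, spreading 17 subvolumes across two standalone file systems is just a matter of which receive path each one points at.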
The reason for the drive swap: This week, the 10 TB backup drive failed - and not a slow decline: the drive heads no longer move at all, just a lot of clicking when the system starts up. No chance of any kind of recovery. Only 50k hours, so just not a great drive - I have older drives with 90k+ hours still chugging along.
The new setup: Since the server is nearing capacity at 70% full, I decided to replace the 10 TB drive with a 22 TB drive. This means the new drive will become the primary storage device, and the 16 TB and 6 TB drives will become the backup devices.
Required actions:
- Move 11 TB of data from the 16 TB drive to the new 22 TB drive using the "btrfs device add" and "btrfs device remove" commands sketched above.
- Re-evaluate the backup subvolume distribution (which subvolume's backup goes where) and realign the backups.
- Recreate the backups that were lost with the 10 TB drive on the 16 TB or 6 TB drives (see the full-send sketch after this list).
- Update fstab and the backup script for the new setup (see the fstab sketch after this list).
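Recreating the lost backups is just the "first run" case from the send/receive sketch above: a full, non-incremental send of the newest snapshot of each affected subvolume to its new backup home. Names and paths are again placeholders:

    # full send of the current snapshot to its new backup location
    btrfs send /mnt/data/.snapshots/photos-2024-06-02 | btrfs receive /mnt/backup2/.snapshots

Once that full copy exists, the nightly runs can go back to incremental sends with -p.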
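One nice detail on the fstab side: because "btrfs device add/remove" migrates data within the same file system, the main file system keeps its UUID, so its fstab entry doesn't change at all if it mounts by UUID. What does change: the dead 10 TB drive's entry goes away, and the 16 TB drive, freshly reformatted as a backup file system, gets a new UUID. A sketch, with made-up UUIDs and mount points:

    # find the UUID of the freshly formatted 16 TB backup file system
    sudo blkid /dev/sdY

    # /etc/fstab - the main entry is unchanged; the old 10 TB backup
    # entry is replaced by the new 16 TB one
    UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa  /mnt/data     btrfs  defaults  0  0
    UUID=bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb  /mnt/backup1  btrfs  defaults  0  0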