One of my three currently in-use hard drives - all well past warranty - has begun kicking out S.M.A.R.T. errors, so I decided I needed to take it off-line before it craps out for good. Of the 6 partitions on this drive, three are part of btrfs RAID configurations: one belongs to a two-drive filesystem, the other two to three-drive spans.
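(If you want to check a drive's health yourself, the smartctl tool from the smartmontools package will show the same errors; /dev/sda below is just a stand-in for whichever drive you suspect.)
smartctl -H /dev/sda --> quick overall pass/fail health check
smartctl -a /dev/sda --> full S.M.A.R.T. report, including the error log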
Here is one of the beautiful things about btrfs: on-line data realignment. I simply needed to mount all my btrfs filesystems, move the data off the failing drive's partitions, and done!
My drives are sda, sdb, and sdc, and the partitions in use are numbers 5, 7, and 8. Filesystems 7 and 8 are built from three partitions each, while 5 uses only two:
12GB each: sda5+sdb5 --> contains a bootable OS
10GB each: sda7+sdb7+sdc7 --> currently empty
60GB each: sda8+sdb8+sdc8 --> contains the majority of my personal docs, music, videos, etc.
Here's how you do it:
RAID (in the btrfs world more properly called "multiple device filesystems") requires at least two devices to mount in normal mode (vs. degraded mode). Also, there must be enough free room for all the data in order to remove a device from the filesystem. In a two-device filesystem, you need to add a replacement device and then "delete" the offending device. In a three-or-more-device filesystem - assuming you have enough space on the remaining devices - you can simply delete the device and re-balance across the remaining devices.
It just so happens I have three different cases - one 2 device, one 3 device with plenty of file space, one 3 device nearly full.
Let's do the easiest one first. All these commands assume you are in a root terminal (or preface them all with sudo).
sd(abc)7 contains no data at this time as it was waiting for Quantal to be ready. These commands removed sdb7:
mount /dev/sda7 /mnt/sda7 --> mount the device
btrfs device delete /dev/sdb7 /mnt/sda7 --> remove the bad drive partition
btrfs filesystem balance /mnt/sda7 --> tell the filesystem the device is not coming back!
Note the difference between the mounted device and the physical device names. You can mount a btrfs multiple device filesystem using any device name as long as it's part of the filesystem; you can also use the UUID. Once you do the above, the removed device is out of the filesystem and its UUID is gone along with it. If you don't run the last command - balance - the filesystem will continue to report a missing device, expecting you'll add one back in. Obviously, if that's your plan, skip the last step and run the "device add" command later.
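For example, you can list every btrfs filesystem along with its UUID and member devices, and then mount by UUID instead of by device name (the UUID here is the one from my "precise" filesystem shown further down):
btrfs filesystem show --> list each btrfs filesystem, its UUID, and its member devices
mount UUID=5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7 /mnt/sda5 --> same filesystem as mounting /dev/sda5 by name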
So for my two-device filesystem I must add a third device into the filesystem so I have enough room to move the data off the bad drive and to allow me to remount in normal (not degraded) mode. Luckily, /dev/sdc5 was in use but only held my old 10.04 install, which I really no longer need. I just added it in with sd(ab)5, then removed sdb5. Balancing has to be done after adding and again after removing the device. Here are the commands:
mount /dev/sda5 /mnt/sda5 --> mount the two-device filesystem
btrfs device add /dev/sdc5 /mnt/sda5 --> add the replacement device
btrfs filesystem balance /mnt/sda5 --> spread the existing data across all three devices
btrfs device delete /dev/sdb5 /mnt/sda5 --> remove the failing drive's partition
btrfs filesystem balance /mnt/sda5 --> re-balance across the two remaining devices
Results of "btrfs filesystem show" before adding the new device:
Label: 'precise' uuid: 5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7
Total devices 2 FS bytes used 13.58GB
devid 1 size 12.00GB used 12.00GB path /dev/sda5
devid 2 size 12.00GB used 11.98GB path /dev/sdb5
Results of "btrfs filesystem show" after adding the new device before balancing:
Label: 'precise' uuid: 5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7
Total devices 3 FS bytes used 13.58GB
devid 1 size 12.00GB used 12.00GB path /dev/sda5
devid 2 size 12.00GB used 11.98GB path /dev/sdb5
devid 3 size 12.00GB used 0.00 path /dev/sdc5
And after initial balancing:
Label: 'precise' uuid: 5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7
Total devices 3 FS bytes used 15.67GB
devid 1 size 12.00GB used 8.05GB path /dev/sda5
devid 2 size 12.00GB used 8.03GB path /dev/sdb5
devid 3 size 12.00GB used 6.02GB path /dev/sdc5
At this point you might notice the math doesn't add up: the reported total used of 15.67GB doesn't equal the sum of the used amounts on each device. The reasons for this are complicated and have to do with blocks and how data is stored. Fortunately, we don't have to worry about that. Just run the btrfs "df" command to get a better picture:
btrfs filesystem df /mnt/sda5
Data, RAID0: total=20.00GB, used=15.15GB
Data: total=8.00MB, used=8.00KB
System, RAID0: total=79.94MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID0: total=2.00GB, used=531.28MB
Metadata: total=8.00MB, used=4.00KB
And a regular df shows what my computer "sees":
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 37G 16G 17G 49% /mnt/sda5
Now here it is again after the "device delete"
Label: 'precise' uuid: 5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7
Total devices 3 FS bytes used 13.58GB
devid 1 size 12.00GB used 9.38GB path /dev/sda5
devid 3 size 12.00GB used 9.41GB path /dev/sdc5
*** Some devices missing
And after the final re-balance
Label: 'precise' uuid: 5216b5e8-a9d6-497e-b8a9-ae951ed0b7f7
Total devices 2 FS bytes used 13.58GB
devid 1 size 12.00GB used 9.38GB path /dev/sda5
devid 3 size 12.00GB used 9.38GB path /dev/sdc5
The final step - removing a device from sd(abc)8 - involves freeing up enough space to drop a 60GB device, and then only the last two commands (delete and balance), so I won't bother with the details for this one.
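For reference, though, the commands follow the exact same pattern as above - something like this (the /mnt/sda8 mount point is just an assumed name):
mount /dev/sda8 /mnt/sda8 --> mount the three-device filesystem
btrfs device delete /dev/sdb8 /mnt/sda8 --> remove the failing drive's partition
btrfs filesystem balance /mnt/sda8 --> re-balance across the remaining two devices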
The amazing thing is that all of this is done without taking the filesystem off-line. I continued to access the drives while filesystems were being re-sized, devices added and removed, and data balanced. Clearly, the advantages of the btrfs filesystem are just beginning to show.
**NOTE** The balancing portions of this take quite a bit of time depending on the amount of data being moved.
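If you want to see how far along a balance is, newer versions of btrfs-progs include a status subcommand (assuming your version has it):
btrfs balance status /mnt/sda5 --> report how much of the balance has completed so far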
Since these drives are all of the same age I expect them all to die soon, so I ordered a new WD "Red" 2TB replacement to hold the lion's share of my media and backups. Even though I am reverting to a single-drive system for this computer (for now), I could use ext4 - but I'm sticking with btrfs. It's just too easy!
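Setting up the new drive as a single-device btrfs filesystem will be a one-liner when it arrives (the /dev/sdd1 device name and the "media" label are just placeholders for illustration):
mkfs.btrfs -L media /dev/sdd1 --> create a single-device btrfs filesystem with a label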