This is the first time since I began using btrfs that I've run into a problem I couldn't fix. I don't think it's related to btrfs per se, but I wanted to report it in case someone runs into this or is setting up btrfs for the first time.
I had a btrfs file system on an old drive I hadn't used in many months - it was a second backup location that I just didn't bother using. I run daily off of two SSDs in btrfs RAID and back up to two hard drives. This was on the second hard drive.
I attempted to mount it to see what was on it, and it returned an error. dmesg reported "BTRFS error (device sdb3): open_ctree failed".
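For context, that was just a plain mount attempt followed by a look at the kernel log (the mount point here is illustrative):

```
sudo mount /dev/sdb3 /mnt   # failed
sudo dmesg | tail           # BTRFS error (device sdb3): open_ctree failed
```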
Knowing there was nothing of value on there, I tried:
- btrfs check - reported no errors
- btrfs check --repair - proceeded but made no difference
- booted into an older kernel to see if a kernel change had caused it - no
- wiped the file system with wipefs and recreated it - no errors on creation but still no mount
- used dd to write zeros to the entire partition and then reformatted it yet again - no change
- discovered the partition wasn't correctly aligned, so I repartitioned using optimal alignment - no change
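For reference, the attempts above boiled down to roughly this sequence (device names and the exact parted invocations are illustrative, not copied from my shell history):

```
# Offline check (read-only) - reported no errors
sudo btrfs check /dev/sdb3
# Repair pass - ran, but changed nothing
sudo btrfs check --repair /dev/sdb3

# Wipe filesystem signatures and recreate the filesystem
sudo wipefs -a /dev/sdb3
sudo mkfs.btrfs /dev/sdb3

# Zero the whole partition (runs until it hits end of device), then reformat
sudo dd if=/dev/zero of=/dev/sdb3 bs=4M status=progress
sudo mkfs.btrfs /dev/sdb3

# Verify alignment, then recreate the partition with optimal alignment
sudo parted /dev/sdb align-check optimal 3
sudo parted -a optimal /dev/sdb mkpart primary 0% 100%
```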
Finally, I force-formatted the partition to ext4, verified it worked, then force-formatted it back to btrfs and it worked. My best guess is that the partition table or header got damaged somewhere along the way and changing it to ext4 rewrote enough to fix it. A btrfs developer also told me that btrfs is very sensitive to proper alignment, and the partition had been improperly aligned, well, forever.
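If anyone wants to try the same trick, the fix amounted to something like this (again, the mount point is illustrative):

```
# Force ext4 over the existing signatures, then confirm it mounts
sudo mkfs.ext4 -F /dev/sdb3
sudo mount /dev/sdb3 /mnt && sudo umount /mnt

# Then force it back to btrfs (-f overwrites the ext4 superblock)
sudo mkfs.btrfs -f /dev/sdb3
sudo mount /dev/sdb3 /mnt
```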
Hard to say what caused this; ghost in the machine or sheer age - this drive has almost 72,000 hours of power-on time, so it's well past its expiration date.
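For anyone wondering where that power-on figure comes from, smartmontools reports it as SMART attribute 9:

```
# Attribute 9 (Power_On_Hours) shows total hours powered on
sudo smartctl -A /dev/sdb | grep -i power_on
```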
The takeaway from this is: always back up anything you don't want to lose!