    btrfs check --repair [btrfs.fsck] - works like a charm!

    For those of you using btrfs, and those of you still on the fence: one of the issues early on was the lack of a repair tool. Well, I want to report that "btrfs check --repair" works, and apparently works well. Short version, if you don't care for the details: btrfs check --repair worked as advertised and returned my file system to me intact and working.

    If you're interested in a bit more detail, read on:

    I've been using btrfs full time since 2012, and I have many btrfs filesystems. My root filesystem contained 4 installs and their homes, all in separate subvolumes, along with two others - 10 subvolumes in total and about 380GB of data on a 2-disk btrfs pool consisting of two Samsung 850 Pro 256GB drives.

    In 2014, I had been using an add-on SATA III card in my old PC, since my mobo didn't support SATA III. After a year or so, the add-on card started randomly dropping drives off- and on-line. This caused 4 bad file extents (the file system's version of a bad drive sector) that btrfs.fsck could not repair at the time (2014). However, they didn't cause problems. I hunted down the damaged files and replaced them. They couldn't be deleted, but I could rename them - so I did, and copied clean versions into the install as replacements. The result of this was that the subvolume in question could not be backed up, but it continued to work (Kubuntu 14.04) without issue, so I left it alone.

    Yesterday, I was cleaning house - wiping old backups, clearing old snapshots, etc. I decided to make a full set of backups of this filesystem and delete all the installs but Kubuntu 16.04 and KDEneon. I didn't feel I needed the others and hadn't booted either of them in over a year. Last night, I did a send|receive backup of all the subvolumes to a backup drive (except 14.04 - still unable to, due to the broken extents).

    Tonight, I started deleting the subvolumes. I deleted three with no issue, then attempted to delete 14.04. I assumed the broken extents wouldn't matter once I deleted the subvolume, but it wouldn't let me delete it. The error forced my entire filesystem to read-only, so basically I was toast for the moment. I rebooted and KDEneon came back up normally. I went to the subvolume location and not only was 14.04 still there, but two of the three other subvolumes I had deleted before were still there as well. I assumed this was due to delayed "commits" (the file system finishing the action), so I deleted those two again using the --commit-each option, then tried to delete 14.04 again. Once again - forced to read-only. Rebooted again, and this time the other subvolumes were gone (so it seems I was right about the commits), but 14.04 was still there. One more attempt at deletion got rid of the 14.04 subvolume, but now the reboot failed and I was dumped into emergency maintenance mode.
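
    In case you haven't used them, the send|receive backup and the commit-on-delete option look roughly like this. A minimal sketch, not my exact commands - /mnt/pool, /mnt/backup, and @1404 are hypothetical mount points and subvolume names:

    Code:
    # send|receive wants a read-only snapshot as its source
    btrfs subvolume snapshot -r /mnt/pool/@1404 /mnt/pool/@1404-ro
    btrfs send /mnt/pool/@1404-ro | btrfs receive /mnt/backup/

    # delete with an explicit commit, so the removal is flushed to disk
    # before the command returns
    btrfs subvolume delete --commit-each /mnt/pool/@1404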

    Since the file system was still mounted in emergency maintenance mode, I was unable to run a file check on it - if you don't know, file system check and repair is one of the very few things you can't do with a btrfs file system while it's mounted. Luckily, I have a second install on a different disk for just this sort of emergency. I booted to it, ran "sudo btrfs check --repair /dev/sdc3", and crossed my fingers. A few minutes later, I'm pleased to report, all was repaired and I'm typing this from my main install on my now-repaired btrfs file system. Just for good measure, I ran btrfs check on all my other btrfs file systems as well.
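
    For reference, that recovery path as a sketch - assuming, as in my case, /dev/sdc3 is the damaged filesystem and it's unmounted; a read-only pass first is a sensible extra step:

    Code:
    btrfs check --readonly /dev/sdc3   # non-destructive: reports errors without changing anything
    btrfs check --repair /dev/sdc3     # the actual repair - unmounted filesystems only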
    Last edited by oshunluvr; Dec 13, 2016, 08:04 AM.

    Please Read Me

    #2
    Just some of the joys of btrfs!

    #BTRFS
    Last edited by GreyGeek; Sep 22, 2017, 11:56 AM.
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.



      #3
      Right on time, oshunluvr ... I was just getting ready (in my lackadaisical fashion) to remove some subvolumes of a Netrunner install on my BTRFS storage drive,

      but have not taken the time as yet to refresh myself on the proper way to do it ... and it's nice to know that "btrfs check --repair" is there and working if I muck things up too badly ...

      VINNY
      i7 4core HT 8MB L3 2.9GHz
      16GB RAM
      Nvidia GTX 860M 4GB RAM 1152 cuda cores



        #4
        Vinny, one sort-of "gotcha" that can happen when you're deleting subvolumes: you can't delete a subvolume if it contains other subvolumes within it. Remember - snapshots are subvolumes. This happens if you use snapper or keep snapshots within the install. You have to delete the snapshots within the subvolume first; if you try anyway, you'll get a "not empty" error. I didn't find that message particularly helpful, but I figured it out by listing the subvolumes. I had used snapper in my 15.04 install and there were about 30 snapshots within the 15.04 subvolume. The default name snapper gives a snapshot's subvolume is "snapshot", and it puts them in a hidden folder with a subfolder for each snapshot. Deleting that many snapshots manually would have been a PITA, so I used "find" to remove them all in one shot. Here's what I did:

        First, I went full "su" mode so I didn't have to use sudo every time:

        Code:
        sudo -i

        This lists all the subvolumes, which is how I found the hidden snapshots:

        Code:
        btrfs subvolume list .

        This deleted all the subvolumes named "snapshot" that were nested in folders in the hidden directory:

        Code:
        find . -name "snapshot" -exec btrfs su de {} \;

        I executed that last one from inside the hidden folder that had the 30-ish subfolders containing all the snapshots. Once all the snapshots were gone, I was able to delete the host subvolume.
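
        If you're on snapper's default layout, the snapshots live under a hidden ".snapshots" directory, one numbered subfolder per snapshot, each holding a subvolume named "snapshot". A hypothetical version of the same cleanup with explicit paths (the pool mount point and subvolume name here are made up):

        Code:
        # list only the subvolumes nested below the one you want to delete
        btrfs subvolume list -o /mnt/pool/@1504

        # snapper layout: <subvolume>/.snapshots/<N>/snapshot
        find /mnt/pool/@1504/.snapshots -mindepth 2 -maxdepth 2 -name "snapshot" \
            -exec btrfs subvolume delete {} \;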
        Last edited by oshunluvr; Dec 13, 2016, 08:28 AM.

        Please Read Me



          #5
          Yes, I just ran into that ... kinda.

          Code:
          vinny@vinny-Bonobo-Extreme:/mnt/btrfs$ sudo btrfs subvolume delete -c /mnt/btrfs/@
          Delete subvolume (commit): '/mnt/btrfs/@'
          ERROR: cannot delete '/mnt/btrfs/@': Directory not empty
          So I looked closer:

          Code:
          vinny@vinny-Bonobo-Extreme:/mnt/btrfs$ sudo btrfs subvolume list -a /mnt/btrfs/@
          ID 773 gen 39558 top level 5 path <FS_TREE>/@
          ID 1069 gen 28192 top level 773 path @/var/lib/machines
          WTF ... OK then:

          Code:
          vinny@vinny-Bonobo-Extreme:/mnt/btrfs$ sudo btrfs subvolume delete -c /mnt/btrfs/@/var/lib/machines
          Delete subvolume (commit): '/mnt/btrfs/@/var/lib/machines'
          So now:

          Code:
          vinny@vinny-Bonobo-Extreme:/mnt/btrfs$ sudo btrfs subvolume delete -c /mnt/btrfs/@
          Delete subvolume (commit): '/mnt/btrfs/@'
          Well, that's better.

          Now the old Netrunner install is gone, and I will try a Kubuntu 16.04 install in its place to test some of this: https://www.kubuntuforums.net/showth...287#post396287

          I figured the BTRFS drive would be better for this, since it will be so easy to back up/snapshot the system before any upgrading, just in case.
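
          For what it's worth, a pre-upgrade snapshot along those lines is a one-liner - a sketch, assuming the pool is mounted at /mnt/btrfs and the install lives in @:

          Code:
          # instant, cheap rollback point before upgrading
          btrfs subvolume snapshot /mnt/btrfs/@ /mnt/btrfs/@_before_upgrade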

          VINNY
          i7 4core HT 8MB L3 2.9GHz
          16GB RAM
          Nvidia GTX 860M 4GB RAM 1152 cuda cores



            #6
            Sweet, vinny!
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.



              #7
              I've had my ~650 GB of valuable data on a BTRFS filesystem since about 2010 or 2011. It runs 24/7/365. I use a pair of 1 TB hard drives. I make the filesystem directly on the unpartitioned drives, and I use the default mirrored-metadata and striped-data configuration. After 3 or 4 years, one of the drives started spitting the occasional error, so I replaced the pair, made a new filesystem on them, and copied the entire data set onto it. That's the one running today. I don't really care about snapshots or subvolumes - I just want my data safe and accessible. (Yes, it's backed up, too.) Power-on hours for the pair of drives today: 21,868.
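
              That configuration, for reference, is what a plain two-device mkfs gives you. A sketch, with /dev/sdb and /dev/sdc standing in for the actual drives:

              Code:
              # mirrored metadata (raid1) and striped data (raid0) across two whole disks -
              # the long-standing defaults for a multi-device mkfs.btrfs
              mkfs.btrfs -m raid1 -d raid0 /dev/sdb /dev/sdc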



                #8
                More evidence of the sweetness of btrfs!
                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                – John F. Kennedy, February 26, 1962.



                  #9
                  Yeah, I've done a similar migration. My server holds 4 HDs, so when I upgraded it: removed the smallest drive from the btrfs pool, pulled it, installed the new drive, added it to the pool, done. Didn't even have to take it off-line, reboot, or shut it down. Cake.

                  I'm just about ready to add another drive so I can go full mirror like Don. Right now I have a 6TB drive as storage and 3x2TB drives as backup for it. I'll replace a 2TB drive with another 6TB drive and have 6+2TB mirrored.
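
                  The live swap boils down to two commands - a rough sketch, assuming the pool is mounted at /mnt/pool, /dev/sde is the new drive, and /dev/sdb is the one being retired:

                  Code:
                  btrfs device add /dev/sde /mnt/pool      # new drive joins the pool immediately
                  btrfs device remove /dev/sdb /mnt/pool   # data migrates off before the drive is released
                  # both run with the filesystem mounted and in use - no downtime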

                  Please Read Me



                    #10
                    My son's System76 Gazelle Professional (gazp9) had a bank of his 8GB of RAM go out. It took his btrfs filesystem with it. Dmesg revealed a csum error in an inode, which was off by 300 or so, verified with a status check.

                    I booted into KDE Neon Edition on my USB stick, which uses Btrfs, and tried "sudo btrfs check --repair /dev/sda1". After a minute or two it came back with multiple overlap errors which it could not fix. He had a snapshot from April, but I decided to do a fresh install, because all of his important data is on my memory stick and his snapshot was from before I installed that data.
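
                    To chase a csum error like that back to an actual file, something along these lines can help - a hypothetical sketch; the inode number 257 and the mount point /mnt are placeholders:

                    Code:
                    dmesg | grep -i csum                             # find the inode number in the error
                    btrfs inspect-internal inode-resolve 257 /mnt    # map that inode to a file path (fs must be mounted)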
                    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                    – John F. Kennedy, February 26, 1962.

