This topic is closed.
    [SOLVED] BTRFS Suspected Version Conflicts, Can't Delete and Start from Scratch

    I have a NAS with three 2TB drives. Using Ubuntu Server 14.04, I installed btrfs v0.19. Then I created a btrfs filesystem using all 3 drives, noted its UUID, and mounted it to /nas.

    I have since upgraded the OS to 18.04, which includes btrfs version 4.15.0. So I was thinking <Yeah, I know... dangerous> I should really redo the NAS drives so the btrfs version is the same as the one included with the OS. Besides, I really want RAID 5, and I am hoping some improvements, patches, and stability fixes are present in the new version.

    Problem: I can't delete the btrfs filesystem or remove the devices from it. I know this is a bug, but there has to be a work-around. There's nothing on the ganged drives, so I'm okay with wiping the entire system out and starting from scratch.

    Would using fdisk to wipe the BTRFS drives work?

    I can mount and umount /nas, but I've only been able to remove one drive of the three. btrfs won't allow me to remove any more because it needs at least 2 drives for RAID1.
    "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

    #2
    Originally posted by mhumm2 View Post
    I have a NAS with three 2TB drives. Using Ubuntu Server 14.04, I installed btrfs v0.19. Then I created a btrfs filesystem using all 3 drives, noted its UUID, and mounted it to /nas.

    I have since upgraded the OS to 18.04, which includes btrfs version 4.15.0. So I was thinking <Yeah, I know... dangerous> I should really redo the NAS drives so the btrfs version is the same as the one included with the OS. Besides, I really want RAID 5, and I am hoping some improvements, patches, and stability fixes are present in the new version.

    Problem: I can't delete the btrfs filesystem or remove the devices from it. I know this is a bug, but there has to be a work-around. There's nothing on the ganged drives, so I'm okay with wiping the entire system out and starting from scratch.

    Would using fdisk to wipe the BTRFS drives work?

    I can mount and umount /nas, but I've only been able to remove one drive of the three. btrfs won't allow me to remove any more because it needs at least 2 drives for RAID1.
    I did all the things you want to do with v0.19, with no problems.

    To split the two drive RAID1 you can use the btrfs balance command:

    btrfs balance start -f -sconvert=single -mconvert=single -dconvert=single /mnt

    -f forces the conversion, -s means system, -m means metadata, -d means data.

    I use /mnt because I open a Konsole, issue "sudo -i", and mount my filesystem using
    mount /dev/disk/by-uuid/whatevertheuuidis /mnt
    Use blkid to confirm which UUID to use to mount the drive to /mnt.

    From that Konsole environment I also issue all my btrfs commands.
    When that balance is done, all your files will be on the single-profile filesystem, which is on /dev/sda1. Be careful, though, because the three drives can switch their /dev/sdX designations without warning.
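
    Putting the whole teardown together, the sequence might look like this (a sketch only; the device names and the UUID are placeholders for whatever blkid and "btrfs filesystem show" report on your system):

    ```shell
    # Mount the pool (UUID is a placeholder; get the real one from blkid):
    sudo mount /dev/disk/by-uuid/whatevertheuuidis /mnt

    # Convert system (-s), metadata (-m), and data (-d) chunks from RAID1
    # to the single profile; -f forces the system-chunk conversion:
    sudo btrfs balance start -f -sconvert=single -mconvert=single -dconvert=single /mnt

    # With the single profile there is no minimum device count, so the
    # extra drives can now be removed one at a time:
    sudo btrfs device delete /dev/sdb /mnt
    sudo btrfs device delete /dev/sdc /mnt

    sudo umount /mnt
    ```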
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.

    Comment


      #3
      There is (or was) a RAID1 bug, I believe, that only allows you to mount a broken RAID1 array read-write in degraded mode once, ostensibly to replace the missing device. After that, it will only mount read-only. btrfs-tools v0.19 is waaay old, so I'm not surprised you're running into some trouble.

      Anyway, yes, you can't have RAID<any number> with fewer than two devices. As GG said, you have to convert to single/JBOD (just a bunch of disks). Also, I'm not 100% sure, but I thought RAID1 would only use even numbers of devices, so I'd be surprised if you were getting to use all three drives for data. But I just read that it does use all 3 drives, because it mirrors data "blocks," not sectors. Not sure how this would work when a drive failed.

      As far as "wiping" - fdisk/gdisk or reformatting do not wipe drives. They just change partition tables and metadata. Use "wipefs" (part of util-linux) to actually clear btrfs off of a drive. Note this is not a true "wipe" in the sense of data security; it just removes all signatures of previous formatting so the drive appears blank to a new formatting effort.
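
      For example (a sketch; the device names are placeholders -- triple-check them against "btrfs filesystem show" before running, since wipefs --all is destructive):

      ```shell
      # Unmount first; wipefs won't touch a filesystem that's in use.
      sudo umount /nas

      # Remove all filesystem signatures from each former pool member:
      for dev in /dev/sda /dev/sdb /dev/sdc; do
          sudo wipefs --all "$dev"
      done
      ```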

      How to proceed will depend greatly on how much data you have and want to save. If you're actually going to change to RAID5, you'll probably have to move all the data off anyway. I have no idea if you can convert 3 JBOD drives to RAID5 with data in place, but this says you can:

      Yes, it is possible to convert from RAID-1 to RAID-5 or (better RAID-6). For example, you could convert a single btrfs filesystem (one drive) to RAID-1 (2+ drives), then to RAID-5 (3+ drives) and to RAID-6 (4+ drives). In every case, you have to add the new drive first (btrfs device add drive /mnt/point), then rebalance and convert (btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/point).
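
      Based on the quote above, the add-then-convert step might look like this (a sketch; the new device name and the mount point are placeholders for your setup):

      ```shell
      # Add the new drive to the existing pool, then rebalance,
      # converting data and metadata chunks to the RAID5 profile:
      sudo btrfs device add /dev/sdd /nas
      sudo btrfs balance start -dconvert=raid5 -mconvert=raid5 /nas
      ```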
      Last edited by oshunluvr; Jul 07, 2018, 05:10 PM.

      Please Read Me

      Comment


        #4
        Good info, guys. Yeah, I knew something was wrong when I started using the new install (Bionic) and the 3 drives were already "seen" by btrfs 4.15 as being together in a RAID1 config, but none of the 6GB of data could be seen or read. Not a problem, as I still have the data available. When I get the NAS fixed, I'll just copy the data back over. Thanks again. I'm going to get started killing all remnants of btrfs v0.19.
        "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

        Comment


          #5
          Okay, I moved everything (I couldn't see any files) over to drive 3 of 3 and was able to "device delete" drive 2 of 3. Then I was able to wipe the btrfs off of 3 of 3. It turns out I had to umount /nas to perform the wipefs.

          Now I've created a new btrfs using the v4.5.03 of btrfs. Here's the command I used:

          sudo mkfs.btrfs -d raid5 -m raid5 /dev/sda /dev/sdb /dev/sdc

          BTRFS spit out confirmation, a new UUID (which I'll use in my fstab), and this line:

          "Incompat features: extref, raid56, skinny-metadata"

          Any ideas? Do I actually have a RAID 5 configuration or something else?
          "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

          Comment


            #6
            Sorry, typing too fast. 3rd line above .. using the v4.15.03 of btrfs.
            "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

            Comment


              #7
              https://www.spinics.net/lists/linux-btrfs/msg52994.html

              It means you can't mount your RAID 5/6 filesystem on older kernels.

              Please Read Me

              Comment


                #8
                Really? uname -a displays Linux 4.15.0-23-generic from May 23, 2018. How much newer can I get?

                Okay, I read the link you sent, oshunluvr, and I checked out my kernel with:

                ls /sys/fs/btrfs/features

                and it does list "raid56" and "skinny-metadata" so I should be okay. Now I'm moving on to Openvpn. Thanks again.
                "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

                Comment


                  #9
                  Maybe I typed too fast. I don't seem to be able to mount the btrfs I just created.

                  Strange, too, because I don't get any error messages whether I mount from fstab (yes, I changed the UUID to the one I got back from the mkfs.btrfs command) or from the CLI (sudo mount /dev/sda /nas).

                  It just comes back to my empty command line, and when I issue the df -h command, there's nothing mounted to /nas.
                  "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

                  Comment


                    #10
                    Did you use sudo?
                    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                    – John F. Kennedy, February 26, 1962.

                    Comment


                      #11
                      Originally posted by mhumm2 View Post
                      Really? uname -a displays Linux 4.15.0-23-generic from May 23, 2018. How much newer can I get?

                      Okay, I read the link you sent, oshunluvr, and I checked out my kernel with:

                      ls /sys/fs/btrfs/features

                      and it does list "raid56" and "skinny-metadata" so I should be okay. Now I'm moving on to Openvpn. Thanks again.
                      You misunderstood. It's a warning that you won't be able to use the RAID5 btrfs filesystem with older kernels - not that you have an older kernel. It's a NOTICE not an ERROR.
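
                      Once the new filesystem mounts, one way to confirm the profile actually in use (a sketch; /nas is the mount point used earlier in the thread):

                      ```shell
                      # Show the block-group profiles on the mounted filesystem.
                      # Output lines such as "Data, RAID5: total=..., used=..." and
                      # "Metadata, RAID5: ..." confirm the RAID5 profile is in effect.
                      sudo btrfs filesystem df /nas
                      ```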

                      Please Read Me

                      Comment


                        #12
                        I did. "sudo btrfs filesystem show" shows me all 3 HDDs, their size and usage, and the UUID of the grouped "drive." Then when I "sudo mount -a -v" I get the following:
                        Code:
                        /           :ignored
                        none    :ignored
                        /nas     :successfully mounted
                        Then I issue "df -h" and no HDD is listed nor is the /nas mount point listed.
                        "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

                        Comment


                          #13
                          Switching back and forth is getting to be very inconvenient. I don't want to set up ssh again. Is there an easy way to control the nas computer from my main computer?
                          "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Steven Covey, The 7-Habits of Highly Effective People

                          Comment


                            #14
                            Originally posted by mhumm2 View Post
                            I did. "sudo btrfs filesystem show" shows me all 3 HDDs, their size and usage, and the UUID of the grouped "drive." Then when I "sudo mount -a -v" I get the following:
                            Code:
                            /           :ignored
                            none    :ignored
                            /nas     :successfully mounted
                            Then I issue "df -h" and no HDD is listed nor is the /nas mount point listed.
                            From mount's man:
                            -v, --verbose
                            Verbose mode.
                            so the -v just makes mount narrate what it is doing during the mount all; it's harmless but not required.

                            and
                            -U, --uuid uuid
                            Mount the partition that has the specified uuid.
                            From what I gather, you have 3 storage devices and you used btrfs to format them all. Have you successfully separated them all into JBODs? If you want to mount them individually, have you created a directory for each one to mount to? Say, /mnt/d1, /mnt/d2 and /mnt/d3. Then
                            sudo -i
                            mount /dev/disk/by-uuid/theactualuuidd1 /mnt/d1
                            mount /dev/disk/by-uuid/theactualuuidd2 /mnt/d2
                            mount /dev/disk/by-uuid/theactualuuidd3 /mnt/d3
                            Then do your btrfs stuff (snapshots, send&receive, etc.), then umount them, exit root and exit the Konsole.

                            But, that's just my wild guess. What are your goals for the three HDs? Three independent btrfs storage drives? (What's your OS running on?) Two combined into a single btrfs pool with one used as backup storage? One running the OS and two for storage? (My setup)
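
                            (If instead the goal is the single three-drive pool mounted at /nas, one fstab line keyed to the pool's UUID is enough. A sketch -- the UUID here is a placeholder for the one mkfs.btrfs printed; the last field is 0 because btrfs is not fsck'd at boot:)

                            ```
                            UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /nas  btrfs  defaults  0  0
                            ```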
                            Last edited by GreyGeek; Jul 08, 2018, 08:00 PM.
                            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                            – John F. Kennedy, February 26, 1962.

                            Comment


                              #15
                              Originally posted by mhumm2 View Post
                              Switching back and forth is getting to be very inconvenient. I don't want to set up ssh again. Is there an easy way to control the nas computer from my main computer?
                              Webmin

                              Please Read Me

                              Comment
