subvolumes or directories?


    In the Beginner's Guide (https://www.kubuntuforums.net/forum/...guide-to-btrfs), it shows mounting the disk partition that contains the btrfs root file system on /mnt and then creating a directory at /mnt/snapshots that is subsequently used for the system snapshots:
    Code:
    btrfs su sn /mnt/@ /mnt/snapshots/@_basic_install
    Having tested this I know it works.

    What puzzles me is that some other btrfs tutorials I have read do it differently, and I'm not sure whether there is any difference in the end, as subvolumes are not completely clear to me yet.

    In one case the tutorial creates the drive structure prior to installation. They first mount the rootfs on /mnt as above, but create a subvolume instead of a directory:
    Code:
    btrfs su cr /mnt/@snapshots
    Then they umount /mnt and remount, this time mounting the @ subvolume:
    Code:
    mount -o noatime,compress=zstd,space_cache,subvol=@ /dev/sdc3 /mnt
    Next they create a directory to act as the subvolume's mount point:
    Code:
    mkdir /mnt/.snapshots
    Then they mount the @snapshots subvolume on it:
    Code:
    mount -o noatime,compress=zstd,space_cache,subvol=@snapshots /dev/sdc3 /mnt/.snapshots
    All I can see with this method is that the snapshots are taken directly into /.snapshots/ without having to mount the rootfs on /mnt first, since /etc/fstab mounts the @snapshots subvolume at /.snapshots.
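
    For illustration, the /etc/fstab entry that this method relies on would look something like this (the UUID is a placeholder for the actual partition UUID):
    Code:
    UUID=<uuid-of-sdc3>  /.snapshots  btrfs  noatime,compress=zstd,space_cache,subvol=@snapshots  0  0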

    #2
    I honestly don't understand the point of doing this. What's the benefit of all the extra steps?
    If you want to store subvolume snapshots inside another subvolume that's hidden, you don't need half a dozen obscure steps to do it. "sudo btrfs su cr .snapshots" gets it done in one.
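
    Spelled out: from the running system, where / is the mounted @ subvolume, that single command creates a nested subvolume ready to receive snapshots:
    Code:
    sudo btrfs subvolume create /.snapshots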

    Personally, I don't like nesting subvolumes. Too easy to lose track of what is where. Remember, subvolume snapshots grow over time and in theory can completely fill your file system.
    I also don't see any benefit to taking all these extra steps when simply copying and pasting a line into fstab mounts the root subvolume and gives you a logical place to store snapshots.
    AFAIK, unless the root fs is mounted you can't snapshot @ or @home, so you're doing that regardless - and those are all I snapshot on my desktop PC.
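
    To be concrete, one fstab line roughly like this (UUID is a placeholder for yours) keeps the true btrfs root available at /mnt:
    Code:
    UUID=<uuid-of-your-btrfs-partition>  /mnt  btrfs  defaults,noatime,subvolid=5  0  0
    After that, a snapshot is a one-liner, e.g.:
    Code:
    sudo btrfs su sn /mnt/@ /mnt/snapshots/@_$(date +%Y%m%d)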

    I think some people just like to make things unnecessarily complicated. Take a look at the default btrfs install method of openSUSE and you'll see what I mean. I've been using BTRFS daily since tools version 0.19 and I'll stick with the easiest way to do things rather than the most complex.



      #3
      I agree that if there is no advantage to doing it the more complicated way, then why do it. At first I thought they did it to match the layout suggested by the snapper wiki, but as a test yesterday I installed Arch Linux on a machine using the official archinstall script with btrfs, and it was very minimalistic. Their fstab only mounted / and the EFI partition. When you examine the list of subvolumes, a snapshots subvolume isn't even listed, nor is @. Yet if you install snapper, it installs and runs correctly.

      I think some people are just guessing. The instructions in the Beginner's Guide do work, and they make sense to me. I'm just hoping that if I read enough btrfs material, at some point the whole directory and/or subvolume concept will become crystal clear. I hate doing stuff just following instructions without fully understanding the whats and whys.

      Thanks for tolerating my questions.



        #4
        Originally posted by jfabernathy View Post
        I hate doing stuff just following instructions without fully understanding the whats and whys.

        Thanks for tolerating my questions.
        As do we all (mostly; some of us 'like' trying to break things!). Asking questions and seeking answers is what we are about.


          #5
          Originally posted by Snowhog View Post
          As do we all (mostly; some of us 'like' trying to break things!). Asking questions and seeking answers is what we are about.
          I did find a linked article about subvolumes that is starting to make some sense to me.

          https://lwn.net/Articles/579009/



            #6
            Originally posted by jfabernathy View Post
            I agree that if there is no advantage to doing it the more complicated way, then why do it. At first I thought they did it to match the layout suggested by the snapper wiki, but as a test yesterday I installed Arch Linux on a machine using the official archinstall script with btrfs, and it was very minimalistic. Their fstab only mounted / and the EFI partition. When you examine the list of subvolumes, a snapshots subvolume isn't even listed, nor is @. Yet if you install snapper, it installs and runs correctly.

            I think some people are just guessing. The instructions in the Beginner's Guide do work, and they make sense to me. I'm just hoping that if I read enough btrfs material, at some point the whole directory and/or subvolume concept will become crystal clear. I hate doing stuff just following instructions without fully understanding the whats and whys.

            Thanks for tolerating my questions.
            The only dumb question is the one not asked. Why do people make some things more complicated than they need to be? Some people love to carry their nads around in a wheelbarrow for all to admire, I guess.

            I wrote the first post in the BTRFS subforum that you referred to, and what I wrote is what I could understand about BTRFS, in the way I understood it. Most of it was gathered from around the web, from the postings and ramblings of other BTRFS users whose writing I could understand.
            As far as I know, I'm the only BTRFS user who moved @home (/home/jerry) under @ (as /home) and then commented out the stanza in fstab that mounted @home to /home. I got tired of making two snapshots each time I wanted to back up my system, and since it is better, IMO, not to mix up a @somedate snapshot with a @homesomeotherday snapshot, I decided that only one subvolume, @, was needed for MY use case. I used to have a @data subvolume which I mounted as /home/jerry/data, but that was little different from having @ and @home, so I dropped it.
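
            In fstab terms, the change amounts to something like this (a sketch, not my exact file; the UUID and options are placeholders):
            Code:
            UUID=<uuid>  /      btrfs  noatime,subvol=@      0  0
            # commented out after moving /home's contents under @:
            #UUID=<uuid>  /home  btrfs  noatime,subvol=@home  0  0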


            I also have used snapper, and liked it, keeping in mind that the maximum number of snapshots you'd want to keep on your main drive is around 10-12 per subvolume, so I configured it to keep the number of snapshots to a minimum. As oshunluvr wrote, snapshots start out empty, but as time progresses and you make changes to your system the snapshots grow. The older they are, the larger they get. Eventually they will get as large as your system. If you are using a 500GB drive with 465GB of usable space, and your @ and @home are 100GB total, three snapshots plus your 100GB of system will almost fill up your drive. You'll never finish creating a fourth snapshot before an "out of space" error stops your attempt.
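
            With snapper, those limits live in its per-subvolume config. Something along these lines (assuming the default config name "root") caps the number cleanup:
            Code:
            sudo snapper -c root set-config "NUMBER_LIMIT=10" "NUMBER_LIMIT_IMPORTANT=4"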

            What I didn't like about snapper is that it kept the snapshots under / and under /home, accessible as root without having to mount the rootfs. I also tried TimeShift. It's got a nice GUI, but if you decide to uninstall it you MUST first delete all your snapshots. TimeShift keeps its snapshots under /mnt too, but then binds /mnt under /run, where they can be accessed as root without mounting the rootfs. TimeShift also has an annoying directory/snapshot/directory/snapshot/... hierarchy. While exploring that, I found out I could saw off the very subvolume I was sitting in and could not recover, because TimeShift modifies grub so that the selected subvolume gets booted into on the next boot. If you've sawed that one off ...

            Currently, I keep five snapshots of @ on my main drive and mount a USB drive to store a copy onto another storage device, a 1 TB SSD, using a script I wrote that creates the new snapshots with timestamps and deletes the oldest snapshots on both /mnt/snapshots and /backup.
            I also do rollbacks manually, as described in that subforum's first post. IMO, it is simpler to do snapshotting and recovery manually than to use snapper or TimeShift. I've used BTRFS since 2016 and Kubuntu 16.04, and it has never given me a single problem. I will never use a distro that does not allow me to make BTRFS the rootfs.
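
            The heart of that script is only a few lines. A simplified sketch, assuming the btrfs root and the backup SSD have fstab entries at /mnt and /backup (the paths and the keep-five limit are from my setup; adjust to yours):
            Code:
            #!/bin/bash
            STAMP=$(date +%Y%m%d)
            mount /mnt && mount /backup
            # read-only snapshot, as required by btrfs send
            btrfs su sn -r /mnt/@ /mnt/snapshots/@_${STAMP}
            btrfs send /mnt/snapshots/@_${STAMP} | btrfs receive /backup
            # timestamped names sort chronologically; keep only the newest five
            ls -d /mnt/snapshots/@_* | head -n -5 | xargs -r btrfs su delete
            ls -d /backup/@_* | head -n -5 | xargs -r btrfs su delete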
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.

            Comment


              #7
              Thanks for your work on the Beginner's Guide. It really is simpler to understand than most tutorials out there. I was going to spend some time trying to understand how to recover the boot partition without editing UUIDs, but I'll cross that bridge after it falls in. Being a distro hopper, I've learned to keep all my data on a server in a Samba share, so testing out a new distro is relatively easy. My NAS is part of my media server, where I do care about backups and mirrors, with offsite mirrors. That is where I keep an image backup made with Clonezilla, and use snapshots in between new image backups - but that's only for the boot drive. All the critical stuff is on the mirror and in the cloud.



                #8
                Originally posted by jfabernathy View Post
                ... Being a distro hopper, I've learned to keep all my data on a server in a Samba share, so testing out a new distro is relatively easy. My NAS is part of my media server, where I do care about backups and mirrors, with offsite mirrors. That is where I keep an image backup made with Clonezilla, and use snapshots in between new image backups - but that's only for the boot drive. All the critical stuff is on the mirror and in the cloud.
                I like to test out distros as well. I watch YouTube for reviews of the latest distros, and if one sounds interesting I install it as a VM using QEMU/KVM. As a VM it can take the entire screen and run just about as fast as if I had installed it on a bare SSD.


                If you go that route, be sure to turn off COW on /var/lib/libvirt/images before you create your first virtual HD:
                Code:
                sudo chattr +C /var/lib/libvirt/images

                You can check that the command worked using
                Code:
                sudo lsattr -d /var/lib/libvirt/images
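
                One caveat: +C on a directory only affects files created after it is set. An existing image has to be re-created as a new file to pick up the attribute, for example (vm1.qcow2 is just an example name):
                Code:
                # cp writes a brand-new file, which inherits +C from the directory;
                # a plain mv within the same file system would not
                sudo cp --reflink=never vm1.qcow2 vm1.nocow.qcow2
                sudo mv vm1.nocow.qcow2 vm1.qcow2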

                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                – John F. Kennedy, February 26, 1962.

                Comment


                  #9
                  Originally posted by GreyGeek View Post
                  If you go that route, be sure to turn off COW on /var/lib/libvirt/images before you create your first virtual HD:
                  sudo chattr +C /var/lib/libvirt/images

                  You can check that the command worked using
                  sudo lsattr -d /var/lib/libvirt/images
                  What is the reason to turn off COW during the creation of the virtual HD?

                  This also brings up the question: what other activities are not suited to COW?

                  The bulk of the writes to my data mirror, which is BTRFS, are large sequential files - for example, using a tuner card to capture OTA TV signals and record them to disk for later viewing, at roughly 6 GB/hour. These large files will be deleted once they are viewed, so autodefrag is on.

                  Also, when I first created my mirror on btrfs, I used CIFS/SMB to fill it with over 1TB of data. Should COW be off for that type of activity? The mirror is a volume that will not be snapshotted, just like the libvirt virtual HD.



                    #10
                    COW is not suitable for dynamically allocated files because they can become corrupted.

                    SWAP files and dynamically sized virtual drives are dynamically allocated, which I guess means the swap or vm drive file can change size without the file system being in control of allocation.

                    GG is concerned about using VM drives that are not fixed size. You can just use fixed size virtual drives and a swap partition and not worry about it.

                    Here's more info: http://www.infotinks.com/btrfs-disab...ory-nodatacow/

                    Data such as image files are not dynamically sized and therefore do not need to be non-COW. In fact, I know of no file types that are dynamically allocated other than these two.
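
                    A quick way to check whether a given file is sparse (dynamically allocated) is to compare its apparent size with the space it actually occupies (file name is just an example):
                    Code:
                    du -h --apparent-size somefile.img   # the size the file claims to be
                    du -h somefile.img                   # blocks actually allocated on disk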



                      #11
                      Originally posted by oshunluvr View Post
                      COW is not suitable for dynamically allocated files because they can become corrupted.

                      SWAP files and dynamically sized virtual drives are dynamically allocated, which I guess means the swap or vm drive file can change size without the file system being in control of allocation.

                      GG is concerned about using VM drives that are not fixed size. You can just use fixed size virtual drives and a swap partition and not worry about it.

                      Here's more info: http://www.infotinks.com/btrfs-disab...ory-nodatacow/

                      Data such as image files are not dynamically sized and therefore do not need to be non-COW. In fact, I know of no file types that are dynamically allocated other than these two.
                      When I've used KVM/QEMU with libvirt, if I request a virtual HD of 40 GB it seems to create a file of 40 GB, so that doesn't appear to be the dynamically allocated type. However, when I use VirtualBox, it seems to grow the virtual HD as I use it.

                      So, in my mind, it's more of a problem for VirtualBox use than for libvirt?



                        #12
                        This is kind of off topic, but AFAIK the virtual disk drive format used by default in QEMU/KVM is "qcow2" - a dynamically allocated disk, but one that is pre-allocated to its full size upon creation in "sparse format." This is above my pay grade, but I think that means it reserves the space needed for the full VDD without occupying it until needed. Seems to me this defeats the advantage of dynamically allocating the drive space. I suppose there's a slight performance gain.
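
                        For what it's worth, qemu-img lets you pick the allocation behavior at creation time. A hedged example (path and size are arbitrary):
                        Code:
                        # default: grow-on-demand (sparse) qcow2 image
                        qemu-img create -f qcow2 /var/lib/libvirt/images/test.qcow2 40G
                        # fully preallocated instead:
                        qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/test.qcow2 40G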

                        Unless you can locate new info suggesting otherwise, Red Hat has tested this extensively and advises NOT to host these VDDs on a btrfs file system with COW enabled, but the reasons given are extremely poor performance rather than data corruption. Since you can chattr a folder to NODATACOW and prevent the issue - why wouldn't you?

                        In my case, I have 5 drives in my system so I just made an EXT4 partition for virtual drives. Seemed simpler. I tend to destroy and create VMs on a regular basis so this way I don't have to think about the VDDs.



                          #13
                          I quickly tested setting nodatacow on a folder on a btrfs file system and then creating a file in it. The file assumed the setting as expected. I'll park my next VDD in there and see how it goes.
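
                          The test was nothing fancy - roughly this (directory name is just an example):
                          Code:
                          mkdir vdd-test
                          chattr +C vdd-test
                          touch vdd-test/newfile
                          lsattr vdd-test/newfile   # shows the inherited C (No_COW) attribute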

