How to install Kubuntu 21.10 with BTRFS on root?


    #16
    I always use these at a minimum:

    Code:
    noatime,space_cache,compress=lzo,autodefrag



      #17
      Or https://man7.org/linux/man-pages/man5/fstab.5.html
      Windows no longer obstructs my view.
      Using Kubuntu Linux since March 23, 2007.
      "It is a capital mistake to theorize before one has data." - Sherlock Holmes



        #18
        Originally posted by oshunluvr View Post
        I always use these at a minimum:

        Code:
        noatime,space_cache,compress=lzo,autodefrag
I was able to edit /etc/fstab and add these options to the @ (/) and @home (/home) entries. No issues. Thanks!
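For reference, the resulting /etc/fstab lines look roughly like this (the UUID is a placeholder; use the one reported by sudo blkid for your btrfs partition):

Code:
# UUID below is a placeholder - substitute your own from 'sudo blkid'
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /     btrfs subvol=@,noatime,space_cache,compress=lzo,autodefrag     0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home btrfs subvol=@home,noatime,space_cache,compress=lzo,autodefrag 0 0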



          #19
          Originally posted by jfabernathy View Post
          What do you use for automatic snapshots, Timeshift? or what.
          Timeshift "is designed to protect only system files and settings. User files such as documents, pictures and music are excluded." Not for me.

I use snapper. Its default config would have filled a filesystem quickly in the past, but now it has a space limit; I'd recommend deciding how many daily, weekly, and monthly automatic snapshots you want in light of your backup schedule. Snapper creates what it needs; openSUSE installs it by default and seems to have tried to make it suitable for end users, with various hand-holding tools. I haven't kept up with those, except that I use snapper-gui a little. I do hourly snapshots (the default is every 2 hours). I can't imagine doing without automatic snapshots now.
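If you want to try snapper, a rough sketch of the setup (the retention numbers below are only examples; adjust them to your backup schedule):

Code:
sudo apt install snapper
sudo snapper -c root create-config /
# then tune the limits in /etc/snapper/configs/root, for example:
#   TIMELINE_CREATE="yes"
#   TIMELINE_LIMIT_HOURLY="6"
#   TIMELINE_LIMIT_DAILY="7"
#   TIMELINE_LIMIT_WEEKLY="4"
#   TIMELINE_LIMIT_MONTHLY="3"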

grub-btrfs is used in Arch Linux to make sure that, when system updates or installs make changes, grub has the snapshots taken before and after the update/install available as bootable images, just like old versions of the kernel.
Ubuntu had various packages for this, with confusingly similar names, but IIUC it's built in now; I'm not sure into what. I see apt snapshots in snapper-gui. do-release-upgrade (the Ubuntu script to move to the next release) does its own snapshots too.

          As to what subvolumes to have... I used the Ubuntu default @ and @home for years, but now I split things up depending on the backup strategy I want for that data. I don't see the point of having GB of browser caches filling up my backups and snapshots so I direct them to their own subvolumes, and have separate subvolumes for appimages, isos, and other big downloads.

          One practice I recommend is renaming @ and @home. One has to adjust /etc/fstab and grub accordingly. With @ and @home renamed, I can do a fresh *buntu install into the same btrfs; I often have half a dozen installs in the one btrfs, all bootable, avoiding a lot of the partition shuffling I used to do before I used btrfs. When a new Kubuntu release is in the offing, I usually test out a fresh install, and keep the old release going for a while after upgrading to the new.
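For anyone wanting to try the renaming, a rough sketch (the device name and new subvolume names are just examples; the important part is fixing fstab and grub before rebooting):

Code:
sudo mount /dev/nvme0n1p2 -o subvolid=5 /mnt   # mount the top level of the btrfs (example device)
sudo mv /mnt/@ /mnt/@kubuntu2110               # rename the subvolumes
sudo mv /mnt/@home /mnt/@kubuntu2110_home
# edit /etc/fstab to use subvol=@kubuntu2110 and subvol=@kubuntu2110_home, then:
sudo update-grub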
          Regards, John Little



            #20
So my choices are "roll-your-own", Timeshift plus a user-file backup system like deja-dup, or snapper.

To keep the disks from filling up with snapshots, I only want the automatic ones to happen with apt installs and system upgrades. I've watched a video on being able to boot from a live USB of the distro and using the command line to restore a recent snapshot to recover from a bad situation. I tested that and it works. So I want that capability.

From reading between the lines, I think I need to add an additional subvolume in my case. I want snapshots of @home and @ done once or twice a day. However, the largest amount of data is the TV recordings I make with the MythTV DVR. Those recordings are many GB but get deleted within a few weeks, once we've watched the shows. I don't think I want or need snapshots of those. Currently those recordings are stored in the NAS mirror part of my system, which is mounted at /mnt/md0. I would need a new BTRFS RAID mirror mounted as a subvolume somewhere else.

I'm thinking about having 3 drives. The main drive would be an NVMe M.2 SSD with @ and @home. The other two drives would be 4TB hard drives in a RAID 1 mirror. The mirror is currently used as a NAS for the home network, and that is where I keep the recorded media.

I don't think I need snapshots of the mirror, since the mirror is backed up incrementally to a cloud service. I have lost a drive before, but two commands and physically replacing the drive were all it took.

I'm not sure how you go about setting up a subvolume in BTRFS that is a RAID mirror. I've done this in my testing with ZFS, and it's trivial there under Ubuntu.

            So any advice on this would be appreciated.




              #21
It's actually pretty simple:

Code:
mkfs.btrfs -L Media -m raid1 -d raid1 /dev/sda /dev/sdb

This says: "Make a BTRFS file system, label it Media, and store the metadata and data using RAID1, using the whole drives sda and sdb."

              Note these are not partitions, but whole disk file systems - no partition table needed. IMO the only reason to have a partition table and other partitions would be if you wanted these drives bootable or to partition them for some other purpose than media storage.

              reference: https://btrfs.wiki.kernel.org/index....ltiple_Devices
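Mounting it afterwards is just as simple - you can use the label or either member device, and btrfs assembles the multi-device filesystem for you. A possible mount and fstab entry (mount point and options are just suggestions):

Code:
sudo mkdir -p /mnt/media
sudo mount LABEL=Media /mnt/media        # or: sudo mount /dev/sda /mnt/media
# example /etc/fstab entry:
# LABEL=Media  /mnt/media  btrfs  noatime,compress=lzo  0  0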

However - just my opinion - RAID in general has pitfalls. Repairing a degraded RAID array is neither simple nor quick, especially if you don't have a replacement drive on hand. In fact, it may take days if you have a lot of data.
Since BTRFS has built-in backup capability, I left RAID1 behind and went with automated incremental backups instead. This has some advantages (again, IMO), and there's no performance difference in use.

              Using automated backup rather than RAID:
              1. A failed drive means you only have to change the mount and you're back in business - down only for a few seconds.
              2. When you get around to replacing the drive, the backup will resume automatically and in the background (no additional downtime).
              3. Incremental backups happen quickly, in the background.
              4. You can backup incrementally at any interval you feel safe with. I do it weekly, but no reason you couldn't do it hourly or even more often.
              5. You can use different sized devices without losing the extra space.
              With RAID:
              1. A failed drive means a very long reboot time, followed by manual intervention to mount the array in degraded mode to remove the dead device.
2. The array must then be rebuilt with a replacement drive, OR RAID must be removed from the filesystem, before normal use can resume. This can take many hours.
              3. If your drives are not the same size, the larger drive will only be partially used by RAID1 (you can mitigate this with partitioning).
I have had a home server for over a decade. Initially I replaced drives to increase capacity as my needs grew. Now that drives are so large, I replace them as they start to fail or show signs they are likely to. The last capacity upgrade occurred this way (note I have a 4-drive hot-swap capable server):
Initially, 2x6TB drives and 2x2TB drives (8TB each of storage and backup) configured as JBOD (not RAID). I needed more capacity and the 2x2TB drives were quite old, so I replaced them with a single 10TB drive. The new configuration was to be 10TB (new drive) storage and 12TB (2x6TB) backup. To accomplish the changeover:
• I unmounted the backup file system and physically removed the 2TB backup drive.
• This left a 6TB drive unused and the storage filesystem (one 6TB and one 2TB drive) still "live."
• I then inserted the 10TB drive, did "btrfs device add", and added it to "storage", resulting in 18TB of storage.
• Finally, "btrfs device delete" removed the 6TB and 2TB drives from storage, leaving 10TB on the new drive alone.
• I then physically pulled the last 2TB drive.
• The final step was to create a new backup filesystem using the 2x6TB drives and mount it to resume the backup procedure.
The important things to note here are filesystem access and time. NOT ONCE during the entire operation above did I have to take the server off-line or power down. The whole operation occurred while the files, the media server, and other services were still in use. Obviously, if you don't have hot-swap capability, you'd have to power down for the physical drive changes.
Moving TBs of data around does take time, but since BTRFS does it in the background I simply issued the needed command and came back later to do the next step. All of the above took days to complete, but partly because there was no rush: I could still use the server. I checked back a couple of times a day to issue the next command when it was ready for it.

About three years after the above, the oldest 6TB drive died, and I have since replaced it with a 16TB drive using the same procedure, leaving 16TB of storage and backup.

The point of this is that using RAID1 accomplishes a simple backup but does not ensure 100% reliable access. The list above looks complicated on the surface, but it actually came down to only 6 commands in total:
              1. umount ~~~ | take "backup" off-line
              2. btrfs device add ~~~ | add the 10TB drive to storage
              3. btrfs device delete ~~~ | delete the 6 and 2 TB drives from storage
              4. wipefs ~~~ | erase the file system on the 2 6TB drives
              5. mkfs.btrfs ~~~ | make a new backup file system
              6. mount ~~~ | mount the backup
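With made-up device names standing in for the ~~~ above, the sequence might look something like this (purely illustrative - check your own device names with lsblk first; here sdX is the new 10TB drive, sdY and sdW are the two 6TB drives, and sdZ is the 2TB drive still in "storage"):

Code:
sudo umount /mnt/backup                                          # take "backup" off-line
sudo btrfs device add /dev/sdX /mnt/storage                      # add the 10TB drive to storage
sudo btrfs device delete /dev/sdY /dev/sdZ /mnt/storage          # remove the 6TB and 2TB drives; data migrates off them
sudo wipefs -a /dev/sdY /dev/sdW                                 # erase the old signatures on the two 6TB drives
sudo mkfs.btrfs -L backup -d single -m raid1 /dev/sdY /dev/sdW   # new backup filesystem (JBOD-style data, like before)
sudo mount LABEL=backup /mnt/backup                              # mount the backup again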




                #22
I'm missing something about the method you are using being easier than RAID 1. I clearly don't completely understand what you are doing with all your drives and incremental backups. Studying your daily script may clear it up for me. As to RAID, the last time I got errors on one of my mirror drives, it automatically degraded the mirror. I pulled the drive and put a new one in. I did a few mdadm commands and the new drive was then part of the mirror. It remained degraded for a few hours while mdadm repaired the mirror. Downtime could be a problem if you didn't have spare drives lying around.

Thanks for the explanation. I'll study it closely and see if I can understand your approach. I'll set up some test cases.



                  #23
RAID1 is probably easier IF you have a spare drive around, don't mind the downtime, and know how to fix it.

                  BTRFS has so much more flexibility to offer than old-school RAID. If that isn't a plus factor for you, then go with what you know.

I wasn't trying to talk you out of RAID1, just pointing out that with BTRFS you have more options than just 1+1=1.



                    #24
I sort of understand the concept of mounting the btrfs partition at /mnt and then creating normal directories that become places where you can mount new subvolumes after they are created with the btrfs su cr command.

I'm experimenting with having another hard drive in the system for backups and formatting it as btrfs, so I can easily use the btrfs send command to create backups and store them on the other hard drive.

I'm not having any success with mounting the second hard drive's subvolume. I'm not looking to create a RAID, just a second drive that contains backups built from snapshots and the send command.

                    I know I'm missing a key idea somewhere.



                      #25
My use case differs from oshunluvr's. At one point I tested all the backup alternatives for BTRFS. Snapper and Timeshift were far and above the best, with Timeshift's GUI taking the lead. It can be set to back up more than system files. However, Timeshift mirrors a /mnt/snapshots configuration under "/run", which it creates and keeps permanently mounted. I don't like that setup for a variety of reasons, but I won't get into that; between the two I'd prefer snapper.

However, like oshunluvr, I went the "write my own script" route; I keep the script under root and run it with a sudo command from the CLI. It is customized to my use case.
My use case? I modified my initial subvolumes, @ and @home, by moving @home's contents into @/home and then commenting out the line in fstab that mounts @home. (One cannot move one subvolume into another, so just the contents of /home/jerry were moved, creating @/home/jerry/...) Notice that @/home/jerry is not the same as @home/jerry.

                      Why, you ask? Because the installation of many packages can put new files, or erase old ones, in BOTH @ and @home. When I was using both I'd create my snapshots so that they would have the same yyyymmddhhmm extension, thus identifying them as a pair. That way, I wouldn't replace @ with a @ snapshot which wasn't paired with a @home snapshot.

So, I have only one subvolume, @, and I need to make only one snapshot each evening and then use the incremental send command to send that snapshot to my /backup SSD. The snapshot takes a fraction of a second, and with the incremental "btrfs send -p" command, making a copy of @ on /backup usually takes less than a minute, depending on what I've added to and removed from my system. If I use the regular send command to send a snapshot to /backup it can take 15-20 minutes. HOWEVER, with btrfs one can continue working even while the snapshot is being sent.
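A rough sketch of that routine (the paths follow what's described above, but the timestamps are only illustrative, not the actual script):

Code:
# evening: read-only snapshot of @, named with a timestamp
sudo btrfs subvolume snapshot -r /mnt/@ /mnt/snapshots/@202201141900
# incremental send, using the previous evening's snapshot as the parent
sudo btrfs send -p /mnt/snapshots/@202201131900 /mnt/snapshots/@202201141900 | sudo btrfs receive /backup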

                      When you open a terminal and mount your primary btrfs partition to /mnt what you are actually doing is making /mnt the "ROOTFS", or root file system. Everything in btrfs is under the ROOTFS. That's why you see /mnt/@ and /mnt/@home, and other subvolumes you may have created under /mnt. When you are working in /mnt you are working in the live system. No harm, though. BTRFS is very flexible and all but a couple of its parameters are tuned automatically, so you don't have a ton of settings to adjust to "tune" your system. That's probably why Facebook runs BTRFS as its file system on its hundreds of thousands of servers, which are assembled out of the cheapest components they can buy.

If I want to recover a file or folder from a previous snapshot I use either Dolphin or MC. I browse down into a previous snapshot, say @202201041531, and copy the file I want over to my account, /home/jerry/somefolder/somefile. Or I can use the cp or mv command. Much faster than trying to do it with either Timeshift or snapper. I've added and removed files and folders in previous snapshots without harm or foul, but realize that using the "-r" parameter makes a snapshot read-only. If you want to make @yyyymmddhhmm the new @, you have to use "btrfs subvol snapshot /mnt/snapshots/@yyyymmddhhmm /mnt/@" without the -r switch; otherwise your next boot will fail.
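Spelled out as a rough sequence (assuming the top level is mounted at /mnt; the timestamp is just the example from above):

Code:
sudo mv /mnt/@ /mnt/@broken                                      # set the current @ aside
sudo btrfs subvol snapshot /mnt/snapshots/@202201041531 /mnt/@   # writable copy becomes the new @
# reboot, then clean up the old one once you're satisfied:
sudo btrfs subvol delete /mnt/@broken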

I've used BTRFS since 2016, and I will NEVER use a distro which doesn't allow me to use BTRFS as the rootfs. I do LOTS of experimentation, and often I need to roll back to recover. Recovery is just a couple of minutes away, which is a LOT faster than trying to manually roll back changes that could number in the hundreds.


                      Last edited by GreyGeek; Jan 14, 2022, 03:51 PM.
                      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                      – John F. Kennedy, February 26, 1962.



                        #26
                        I'm following the Beginner's Guide referenced in the first answer to my original question.

I want to see if this makes sense to others. I have Kubuntu 21.10 installed on an SSD using btrfs.

I mounted /dev/sdc3 on the /mnt/subvol directory; /dev/sdc3 is the btrfs partition holding my @ subvol.
Then I created the /mnt/subvol/snapshots directory.

Next I created the first snapshots:
                        Code:
                        btrfs su snapshot -r /mnt/@ /mnt/subvol/snapshots/@_basic_install
                        btrfs su snapshot -r /mnt/@home /mnt/subvol/snapshots/@home_basic_install
To have a place to store backups, I partitioned /dev/sda with a single partition, then formatted it with mkfs.btrfs /dev/sda1.

Then I simply mounted /dev/sda1 at /mnt/backup. I was surprised; I had thought I would create subvolumes first and use the subvolume mount commands, but that didn't work.

                        The backups were created with:
                        Code:
                        btrfs send /mnt/subvol/snapshots/@_basic_install | btrfs receive /mnt/backup
                        btrfs send /mnt/subvol/snapshots/@home_basic_install | btrfs receive /mnt/backup
What I ended up with in /mnt/backup were 2 directories, @_basic_install and @home_basic_install.

Inside those directories was a complete copy of / and /home from my original install.




                          #27
                          Originally posted by jfabernathy View Post
                          I'm not having any success with mounting the second hard drive's subvolume.
                          Care to post your attempts and error messages?

                          It shouldn't be any different than mounting any other file system.

                          Code:
                          sudo mount /dev/sda1 -o subvol=@whatever /mnt/point
                          Maybe you're leaving out the device?

BTW, there aren't any rules pertaining to the naming of subvolumes. *buntus default to @ and @home so that it's clear they are more than just regular folders, but that's an adopted convention, not a rule. You can even create subvolumes "in place" or wherever you want.

For example, let's say you want to have individual subvolumes for your Documents, Pictures, and Music instead of folders. You can make subvolumes somewhere else and mount them at the folder locations, OR you can just make the subvolume right in your home folder, like so:

Code:
cd ~
rmdir Documents    # only works if Documents is empty
sudo btrfs subvolume create Documents
sudo chown 1000:1000 Documents

Now your Documents folder is gone and has been replaced by a subvolume named Documents. Note that I had to "chown" the subvolume to my user to gain easy access to it.

Since @home is already a subvolume, you now have Documents nested within it. It reads like this in the subvolume list:

                          Code:
                          ID 2351 gen 3005916 top level 5 path @home
                          ID 3296 gen 3005916 top level 2351 path @home/stuart/Documents
                          5 is my root fs, @home is subvolume 2351 in 5, and Documents is 3296 in 2351.

                          Cool or confusing?




                            #28
What I did that worked did not involve creating any BTRFS subvolumes; I just mounted the BTRFS-formatted partition on the /mnt/backup directory.
What I originally thought I should do was mount the BTRFS partition and create a subdirectory under it where the subvolume would be mounted. Something is missing in that step.

                            I'll completely start over and capture what I'm doing and if it fails, I'll post.



                              #29
Well, mounting the root file system at /mnt/backup is even simpler. Assuming the partition is /dev/sda1,

                              sudo mount /dev/sda1 /mnt/backup

                              should do it.



                                #30
                                Originally posted by oshunluvr View Post
Well, mounting the root file system at /mnt/backup is even simpler. Assuming the partition is /dev/sda1,

                                sudo mount /dev/sda1 /mnt/backup

                                should do it.
I guess the confusing part is that I can mount the BTRFS partition of the backup drive on /mnt/backup like a normal drive, or I can create a @backup subvolume and mount that with subvol=@backup and all the compress, noatime, etc. options on /mnt/backup. Either way the btrfs send/receive works and the results are the same.

So are there any advantages to one way or the other?

