Kubuntu 20.04 on top of ZFS


    #16
Originally posted by TWPonKubuntu
    Thanks again, Jerry, for the analysis of ZFS. I'll wait until it has more time under the polishing wheel.
    I blew an aneurysm reading that stuff.

    I really don't see how the average linux user with laptop or desktop is going to navigate this.

    For btrfs at least, a default install and timeshift will serve most people well.



      #17
Originally posted by mr_raider
      I blew an aneurysm reading that stuff.

      I really don't see how the average linux user with laptop or desktop is going to navigate this.

      For btrfs at least, a default install and timeshift will serve most people well.
      That was what I was wondering while I explored how zsys worked. Most Linux users have not edited the grub menu and probably don't know how. I suspect that zsys edits the grub menu to remove any snapshots that are deleted. If not, then the user has a technical challenge ahead of them.
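Incidentally, you can see what zsys has written into the grub menu without editing anything. A quick check, assuming the stock Ubuntu layout where the generated menu lives in /boot/grub/grub.cfg:
Code:
$ grep -E "submenu|menuentry " /boot/grub/grub.cfg | cut -d"'" -f2
If zsys is doing its job, the snapshot entries should show up under a "History" submenu in that list.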

Like you say, BTRFS is well baked and VERY EASY to use. It has only two user-settable properties, and I rarely use even one of them. It is drop-dead easy to have more than one snapshot set and to roll back to any one of them without damaging or destroying the others. Snapshotting and rolling back are so easy that apps like Snapper, Timeshift, etc., aren't really needed, IMO. It doesn't use or need a swap file (although some apps may benefit from one) and doesn't use EXT4 and FAT partitions in order to run, the way Ubuntu's ZFS does.

Another question that arose is whether zsys will be a snap-only app, requiring the proprietary snap store in order to maintain it. I'll find out when I attempt to mask and/or delete zsysd.

I suspect, and hope, that after I delete zsys and its dependencies, ZFS in Ubuntu (and any superimposed KDE desktop) will behave in the classical ZFS way.
      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
      – John F. Kennedy, February 26, 1962.



        #18
While still researching how to delete snapshots made by zsys (in a way that would also clean up the grub menu), I came across this:
        https://askubuntu.com/questions/1232...zpool-import-o

In the comment section a person claimed it wasn't the fault of ZFS and, after a couple more exchanges, filed a bug report. He also recommended an app called "Sanoid", which looks like another Snapper for ZFS. According to Sanoid, anything can blow away a ZFS installation, but they have the audacity to write on their GitHub page:
        Btrfs support plans are shelved unless and until btrfs becomes reliable.
ROFL... Personally, I don't believe their "support" is needed.

Or, for $3,500, you can buy one of their fully equipped servers with Sanoid installed and functional, as advertised on their home page.
        I think they are exaggerating ZFS's instabilities.






          #19
          My first ZFS snapshot

I disabled and masked snapd and zsys and the other systemd units associated with them. Then I used Synaptic to uninstall them. Interestingly, Chromium is NOT bound at the hip to snapd in Ubuntu/ZFS the way it was for me in Kubuntu. In fact, it isn't even installed by default; Firefox is.
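For anyone wanting to repeat this, the disable-and-mask step looked roughly like the following. This is a sketch from memory, so verify the unit names on your own system first:
Code:
$ systemctl list-unit-files | grep -E "snapd|zsys"
$ sudo systemctl disable --now zsysd.service zsysd.socket
$ sudo systemctl disable --now snapd.service snapd.socket snapd.seeded.service
$ sudo systemctl mask zsysd.service snapd.service
$ sudo apt purge zsys snapd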

Anyway, out of curiosity, I took a look at /etc/fstab. I understand the efi and swap lines, but the grub line is interesting:
          cat /etc/fstab
          Code:
          # /etc/fstab: static file system information.
          #
          # Use 'blkid' to print the universally unique identifier for a
          # device; this may be used with UUID= as a more robust way to name devices
          # that works even if disks are added and removed. See fstab(5).
          #
          # <file system> <mount point>   <type>  <options>       <dump>  <pass>
          # /boot/efi was on /dev/vda1 during installation
          UUID=CE53-4F67  /boot/efi       vfat    umask=0022,fmask=0022,dmask=0022      0       1
          /boot/efi/grub    /boot/grub    none    defaults,bind    0    0
          UUID=fe8e30b6-b5a5-4e40-960e-bab2e5fa349d    none    swap    discard    0    0
The grub line is a bind mount: grub's files actually live on the EFI FAT partition under /boot/efi/grub and are simply made to appear at /boot/grub. The contents of /boot/grub on Ubuntu are identical to those on Kubuntu, except that grub.cfg on Ubuntu/ZFS points to snapshots rather than drives or partitions the way Kubuntu's does.

I assumed that everything in front of "@autozsys_" was the actual dataset. I deleted several older snapshots of both /home/jerry and /root, and then created my own snapshots of each. Here are the commands:
          Code:
$ sudo zfs snapshot -r rpool/USERDATA/jerry_r7h6kl@jerry_202007241528
$ sudo zfs snapshot -r rpool/USERDATA/root_r7h6kl@root_202007241528
          Here is the result:
          $ zfs list -t snapshot
          Code:
          NAME                                                               USED  AVAIL     REFER  MOUNTPOINT
          bpool/BOOT/ubuntu_cfgs2t@autozsys_hkcg8x                             0B      -     90.2M  -
          bpool/BOOT/ubuntu_cfgs2t@autozsys_xw170v                             0B      -     90.2M  -
          rpool/ROOT/ubuntu_cfgs2t@autozsys_hkcg8x                          58.4M      -     2.35G  -
          rpool/ROOT/ubuntu_cfgs2t@autozsys_xw170v                          56.7M      -     2.35G  -
          rpool/ROOT/ubuntu_cfgs2t/srv@autozsys_hkcg8x                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/srv@autozsys_xw170v                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/usr@autozsys_hkcg8x                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/usr@autozsys_xw170v                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/usr/local@autozsys_hkcg8x                   0B      -      128K  -
          rpool/ROOT/ubuntu_cfgs2t/usr/local@autozsys_xw170v                   0B      -      128K  -
          rpool/ROOT/ubuntu_cfgs2t/var@autozsys_hkcg8x                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var@autozsys_xw170v                         0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/games@autozsys_hkcg8x                   0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/games@autozsys_xw170v                   0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib@autozsys_hkcg8x                  23.3M      -      475M  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib@autozsys_xw170v                  23.2M      -      475M  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@autozsys_hkcg8x     0B      -      104K  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@autozsys_xw170v     0B      -      104K  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@autozsys_hkcg8x     88K      -      132K  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@autozsys_xw170v     88K      -      132K  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@autozsys_hkcg8x              6.70M      -     73.0M  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@autozsys_xw170v               864K      -     67.1M  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@autozsys_hkcg8x             1.64M      -     31.2M  -
          rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@autozsys_xw170v             1.59M      -     31.2M  -
          rpool/ROOT/ubuntu_cfgs2t/var/log@autozsys_hkcg8x                  1.21M      -     2.03M  -
          rpool/ROOT/ubuntu_cfgs2t/var/log@autozsys_xw170v                  1.27M      -     2.09M  -
          rpool/ROOT/ubuntu_cfgs2t/var/mail@autozsys_hkcg8x                    0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/mail@autozsys_xw170v                    0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/snap@autozsys_hkcg8x                    0B      -      120K  -
          rpool/ROOT/ubuntu_cfgs2t/var/snap@autozsys_xw170v                    0B      -      120K  -
          rpool/ROOT/ubuntu_cfgs2t/var/spool@autozsys_hkcg8x                   0B      -      112K  -
          rpool/ROOT/ubuntu_cfgs2t/var/spool@autozsys_xw170v                   0B      -      112K  -
          rpool/ROOT/ubuntu_cfgs2t/var/www@autozsys_hkcg8x                     0B      -       96K  -
          rpool/ROOT/ubuntu_cfgs2t/var/www@autozsys_xw170v                     0B      -       96K  -
rpool/USERDATA/jerry_r7h6kl@jerry_202007241528                       0B      -     42.5M  -
          rpool/USERDATA/root_r7h6kl@autozsys_hkcg8x                           0B      -      112K  -
          rpool/USERDATA/root_r7h6kl@autozsys_xw170v                           0B      -      112K  -
rpool/USERDATA/root_r7h6kl@root_202007241528                         0B      -      156K  -
          jerry@jerryZFS:~$
Then I rebooted. Here is the snapshot listing after the reboot. Something interesting occurred: my listing was sprinkled with snapshots that I didn't make! Except for the first four, the pattern is obvious. I have no clue what mechanism or app created those snapshots, or why. However, I suspect that I could destroy them without harming my system. That's my next play, after I give this old brain a rest.

          $ zfs list -t snapshot
          Code:
          NAME                                                               USED  AVAIL     REFER  MOUNTPOINT
          bpool/BOOT/ubuntu_cfgs2t@autozsys_hkcg8x                             0B      -     90.2M  -
          bpool/BOOT/ubuntu_cfgs2t@autozsys_xw170v                             0B      -     90.2M  -
          rpool/ROOT/ubuntu_cfgs2t@autozsys_hkcg8x                          58.4M      -     2.35G  -
          rpool/ROOT/ubuntu_cfgs2t@autozsys_xw170v                          56.7M      -     2.35G  -
rpool/ROOT/ubuntu_cfgs2t@ROOT2020071528                              0B      -     2.30G  -
rpool/ROOT/ubuntu_cfgs2t/srv@autozsys_hkcg8x                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/srv@autozsys_xw170v                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/srv@ROOT2020071528                          0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/usr@autozsys_hkcg8x                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/usr@autozsys_xw170v                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/usr@ROOT2020071528                          0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/usr/local@autozsys_hkcg8x                   0B      -      128K  -
rpool/ROOT/ubuntu_cfgs2t/usr/local@autozsys_xw170v                   0B      -      128K  -
rpool/ROOT/ubuntu_cfgs2t/usr/local@ROOT2020071528                    0B      -      128K  -
rpool/ROOT/ubuntu_cfgs2t/var@autozsys_hkcg8x                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var@autozsys_xw170v                         0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var@ROOT2020071528                          0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/games@autozsys_hkcg8x                   0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/games@autozsys_xw170v                   0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/games@ROOT2020071528                    0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib@autozsys_hkcg8x                  23.3M      -      475M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib@autozsys_xw170v                  23.2M      -      475M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib@ROOT2020071528                      0B      -      476M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@autozsys_hkcg8x     0B      -      104K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@autozsys_xw170v     0B      -      104K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@ROOT2020071528      0B      -      104K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@autozsys_hkcg8x     88K      -      132K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@autozsys_xw170v     88K      -      132K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@ROOT2020071528       0B      -      144K  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@autozsys_hkcg8x              6.70M      -     73.0M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@autozsys_xw170v               864K      -     67.1M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@ROOT2020071528                  0B      -     66.7M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@autozsys_hkcg8x             1.64M      -     31.2M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@autozsys_xw170v             1.59M      -     31.2M  -
rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@ROOT2020071528                 0B      -     31.3M  -
rpool/ROOT/ubuntu_cfgs2t/var/log@autozsys_hkcg8x                  1.21M      -     2.03M  -
rpool/ROOT/ubuntu_cfgs2t/var/log@autozsys_xw170v                  1.27M      -     2.09M  -
rpool/ROOT/ubuntu_cfgs2t/var/log@ROOT2020071528                      0B      -     4.85M  -
rpool/ROOT/ubuntu_cfgs2t/var/mail@autozsys_hkcg8x                    0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/mail@autozsys_xw170v                    0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/mail@ROOT2020071528                     0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/snap@autozsys_hkcg8x                    0B      -      120K  -
rpool/ROOT/ubuntu_cfgs2t/var/snap@autozsys_xw170v                    0B      -      120K  -
rpool/ROOT/ubuntu_cfgs2t/var/snap@ROOT2020071528                     0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/spool@autozsys_hkcg8x                   0B      -      112K  -
rpool/ROOT/ubuntu_cfgs2t/var/spool@autozsys_xw170v                   0B      -      112K  -
rpool/ROOT/ubuntu_cfgs2t/var/spool@ROOT2020071528                    0B      -      112K  -
rpool/ROOT/ubuntu_cfgs2t/var/www@autozsys_hkcg8x                     0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/www@autozsys_xw170v                     0B      -       96K  -
rpool/ROOT/ubuntu_cfgs2t/var/www@ROOT2020071528                      0B      -       96K  -
rpool/USERDATA/jerry_r7h6kl@jerry_202007241528                       0B      -     42.5M  -
rpool/USERDATA/root_r7h6kl@root_202007241528                         0B      -      156K  -
          jerry@jerryZFS:~$
          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
          – John F. Kennedy, February 26, 1962.



            #20
Well, I am again making this post from my Plasma desktop running on top of Ubuntu with ZFS as the root filesystem. I installed "kubuntu-desktop" and, like before, it offered me the opportunity to switch from gdm to sddm, which automatically selected Plasma as the DE.

            I've made two snapshots, one of root and one of my home account. I am now going to destroy those snapshots I mentioned before.
            Oh, Plasma is fairly snappy on this VM, just like it was several months ago.
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.



              #21
Here is a list of the datasets on my system and the mount points they are mounted on:

              $ zfs list
              Code:
              NAME                                               USED  AVAIL     REFER  MOUNTPOINT
              bpool                                             92.2M  1.66G       96K  /boot
              bpool/BOOT                                        91.4M  1.66G       96K  none
              bpool/BOOT/ubuntu_cfgs2t                          91.3M  1.66G     91.3M  /boot
              rpool                                             6.76G  46.5G       96K  /
              rpool/ROOT                                        6.67G  46.5G       96K  none
rpool/ROOT/ubuntu_cfgs2t                          6.67G  46.5G     5.87G  /
              rpool/ROOT/ubuntu_cfgs2t/srv                        96K  46.5G       96K  /srv
              rpool/ROOT/ubuntu_cfgs2t/usr                       312K  46.5G       96K  /usr
              rpool/ROOT/ubuntu_cfgs2t/usr/local                 216K  46.5G      144K  /usr/local
              rpool/ROOT/ubuntu_cfgs2t/var                       709M  46.5G       96K  /var
              rpool/ROOT/ubuntu_cfgs2t/var/games                  96K  46.5G       96K  /var/games
              rpool/ROOT/ubuntu_cfgs2t/var/lib                   694M  46.5G      518M  /var/lib
              rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService   176K  46.5G      104K  /var/lib/AccountsService
              rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager    240K  46.5G      132K  /var/lib/NetworkManager
              rpool/ROOT/ubuntu_cfgs2t/var/lib/apt              84.3M  46.5G     81.6M  /var/lib/apt
              rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg             67.5M  46.5G     61.8M  /var/lib/dpkg
              rpool/ROOT/ubuntu_cfgs2t/var/log                  14.9M  46.5G     11.1M  /var/log
              rpool/ROOT/ubuntu_cfgs2t/var/mail                   96K  46.5G       96K  /var/mail
              rpool/ROOT/ubuntu_cfgs2t/var/snap                   96K  46.5G       96K  /var/snap
              rpool/ROOT/ubuntu_cfgs2t/var/spool                 168K  46.5G      112K  /var/spool
              rpool/ROOT/ubuntu_cfgs2t/var/www                    96K  46.5G       96K  /var/www
              rpool/USERDATA                                    86.4M  46.5G       96K  /
rpool/USERDATA/jerry_r7h6kl                       85.9M  46.5G     49.6M  /home/jerry
rpool/USERDATA/root_r7h6kl                         460K  46.5G      276K  /root
              Here are my existing snapshots by creation time.

              $ zfs list -t snapshot -o name,creation
              Code:
              NAME                                                             CREATION
              rpool/ROOT/ubuntu_cfgs2t@ROOT2020071528                          Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/srv@ROOT2020071528                      Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/usr@ROOT2020071528                      Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/usr/local@ROOT2020071528                Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var@ROOT2020071528                      Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/games@ROOT2020071528                Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/lib@ROOT2020071528                  Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@ROOT2020071528  Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@ROOT2020071528   Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@ROOT2020071528              Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@ROOT2020071528             Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/log@ROOT2020071528                  Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/mail@ROOT2020071528                 Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/snap@ROOT2020071528                 Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/spool@ROOT2020071528                Fri Jul 24 15:35 2020
              rpool/ROOT/ubuntu_cfgs2t/var/www@ROOT2020071528                  Fri Jul 24 15:35 2020
              
              rpool/USERDATA/jerry_r7h6kl@jerry_202007241528                   Fri Jul 24 15:28 2020
              rpool/USERDATA/jerry_r7h6kl@jerry_20200724_KDE                   Fri Jul 24 17:49 2020
rpool/USERDATA/jerry_r7h6kl@jerry_SAGE                           Fri Jul 24 21:37 2020
              
              rpool/USERDATA/root_r7h6kl@root_202007241528                     Fri Jul 24 15:30 2020
              rpool/USERDATA/root_r7h6kl@root_20200724_KDE                     Fri Jul 24 17:49 2020
rpool/USERDATA/root_r7h6kl@root_SAGE                             Fri Jul 24 21:38 2020
              The most recent snapshots were made after I installed SAGE MATH.
              Before SAGE, I made a set of snapshots after I had installed kubuntu-desktop.
              Two hours prior to installing kubuntu-desktop I completed installing Ubuntu itself and removed snapd and zsys, and then made a set of snapshots. All of them have a 15:35 timestamp.


              I think I've figured out how to do rollbacks in ZFS.

Snapshots are visible through a hidden directory, .zfs, under each dataset's mount point, i.e., under /home/jerry/.zfs and /root/.zfs.
Each of the other datasets has a hidden .zfs directory as well.

              Here is a list of my relevant snapshots:
              Code:
jerry@jerryZFS:~/.zfs$ vdir snapshot/
              drwxrwxrwx 1 root root 0 Jul 24 22:02 jerry_202007241528
              drwxrwxrwx 1 root root 0 Jul 24 22:02 jerry_20200724_KDE
              drwxrwxrwx 1 root root 0 Jul 24 22:02 jerry_SAGE
              
jerry@jerryZFS:~/.zfs$ sudo vdir /root/.zfs/snapshot
              [sudo] password for jerry: 
              drwxrwxrwx 1 root root 0 Jul 24 22:03 root_202007241528
              drwxrwxrwx 1 root root 0 Jul 24 22:03 root_20200724_KDE
              drwxrwxrwx 1 root root 0 Jul 24 22:03 root_SAGE
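A nice side effect of those .zfs/snapshot directories: they are browsable, read-only copies of the dataset, so a single clobbered file can be fished out with plain cp instead of a full rollback. For example (the file name is just an illustration):
Code:
$ cp ~/.zfs/snapshot/jerry_20200724_KDE/.bashrc ~/.bashrc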
Suppose I do some stuff that changes my home account only and I want to undo it.
Since SAGE is the most recent home account snapshot I can do
sudo zfs rollback rpool/USERDATA/jerry_r7h6kl@jerry_SAGE
and then reboot.

However, say I want to eliminate my installation of SAGE MATH by reverting to the KDE snapshot.
Obviously I would do
sudo zfs rollback -r rpool/USERDATA/jerry_r7h6kl@jerry_20200724_KDE
BUT, what other folders did SAGE install components into? I'd have to roll them back as well before I rebooted.
Would
sudo zfs rollback -r rpool/USERDATA/root_r7h6kl@root_20200724_KDE
catch everything? I don't know. Maybe
sudo zfs rollback rpool/ROOT/ubuntu_cfgs2t@ROOT2020071528
would cover all the other folders besides /home/jerry better than @root_20200724_KDE would.

A note about the "-r" parameter. To revert to the KDE snapshot I must delete the more recent SAGE snapshot first. Including the "-r" parameter in the rollback command (which, for rollback, destroys any snapshots more recent than the one specified) does that automatically, without my having to identify each of the more recent snapshots.

If I were going to use two datasets to control everything they'd be
rpool/USERDATA/jerry_r7h6kl
rpool/ROOT/ubuntu_cfgs2t
because the second is mounted at "/". Does that mean rolling it back covers all the subfolders under /? It looks that way, but /srv, /usr, /var and the rest are actually separate child datasets with their own snapshots, so I suspect each one would need its own rollback, as in the sketch below.
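If each child dataset does need its own rollback, a shell loop over the tree would handle it. A sketch I haven't tested yet, which assumes every dataset in the tree carries the @ROOT2020071528 snapshot (they should, since they were created together):
Code:
$ for ds in $(zfs list -r -H -o name rpool/ROOT/ubuntu_cfgs2t); do
>     sudo zfs rollback -r "${ds}@ROOT2020071528"
> done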


BTRFS is different. There are only two subvolumes to worry about, @ and @home, unless you add others. If the changes I make affect only my home account I can roll back @home to any one of my previous @home snapshots without having to delete the others. If I wanted to roll back my installation of SAGE I could choose a snapshot PAIR that I created just before I did the SAGE install.

Also, with BTRFS, the rollback is not a single command like it is in ZFS.
First, one must "mv" @ to @old and @home to @homeold, and then use
Code:
btrfs subvol snapshot /mnt/snapshots/@yyyymmdd /mnt/@
btrfs subvol snapshot /mnt/snapshots/@homeyyyymmdd /mnt/@home
to recreate @ and @home from the yyyymmdd snapshot pair, and then reboot.

Snapshots made before or after yyyymmdd do not have to be deleted.

After the reboot it is advisable to issue
Code:
btrfs subvol delete -C /mnt/@old
btrfs subvol delete -C /mnt/@homeold
to clean out the old stuff.
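Put together, a complete BTRFS rollback might look like this sketch, assuming the top-level subvolume (subvolid=5) is mounted at /mnt, the snapshots live in /mnt/snapshots, and /dev/sda2 stands in for your real BTRFS partition:
Code:
$ sudo mount -o subvolid=5 /dev/sda2 /mnt      # mount the top-level subvolume
$ sudo mv /mnt/@ /mnt/@old
$ sudo mv /mnt/@home /mnt/@homeold
$ sudo btrfs subvol snapshot /mnt/snapshots/@20200724 /mnt/@
$ sudo btrfs subvol snapshot /mnt/snapshots/@home20200724 /mnt/@home
$ sudo reboot
# ...after the reboot:
$ sudo btrfs subvol delete -C /mnt/@old
$ sudo btrfs subvol delete -C /mnt/@homeold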

Or, one can use TimeShift or a stripped-down version of Snapper on BTRFS to do it all automatically.

              ZFS has apps similar to TimeShift and Snapper. A few are
              sanoid
              zfsnap
              zfs-auto-snapshot

              I don't know if snapper works on ZFS or not. I suspect not.

              The reader can decide which rollback methods they think are better.
              "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
              – John F. Kennedy, February 26, 1962.



                #22
                A Summary

                This is a summary of the method I used to run Kubuntu with ZFS as its <ROOT FS>.

I assume that you are going to give Ubuntu your entire drive (HD or SSD).

I chose the full Ubuntu install during the installation process, but if I were to do it again I'd choose the minimal Ubuntu install, just to get ZFS as the <ROOT FS>. If you have a fast Internet connection, let the install include the 3rd party software, proprietary drivers and such.

                Do the usual installation steps until you come to the part of the install where you are given the choice of selecting the Experimental ZFS filesystem. Choose it.

                Let the install continue normally and boot into it when the installation is complete.

                At this point you have Ubuntu on top of ZFS as the <ROOT FS> ... AND ... zsys running as Ubuntu's ZFS automatic snapshot application. Every time you run apt a snapshot will be created first.

Zsys uses the grub menu system to keep track of snapshots. To roll back to a specific snapshot you hold down ESC or SHIFT during boot to get to the grub menu and then select the "History" submenu, which holds the list of snapshots. You do not edit grub to add or remove snapshots; zsys does. If you create a manual snapshot zsys will add it to the grub history menu. If you destroy a snapshot zsys will remove it from the grub history menu. Or, at least, that is what it is supposed to do.

                My research on it suggests to me that zsys is not ready for production use, or even casual home use.
Therefore, before I installed KDE I first disabled and masked the zsys and snap services. I also killed their daemons, zsysd and snapd. Then I used apt to purge them, and no snapshot was created. If you decide to try or use zsys and/or snap then skip this step.

From the Ubuntu desktop open a terminal and run "sudo apt install kde-standard". The KDE desktop and the standard set of applications will be installed. You can also choose "kde-plasma-desktop", which will give you the KDE desktop with a minimal set of applications. Or you can choose "kubuntu-desktop", which gives you the full monty.

                During the install you should be given a choice of either the gdm3 or the sddm display manager. Choose sddm.
If you are not asked which display manager you want to use, issue "sudo dpkg-reconfigure sddm" and choose sddm when it is offered.
Now you can reboot. The Plasma desktop will be offered as the default on the login screen.
                If you chose to continue with zsys then rolling back will be accomplished using the grub history menu.

Otherwise, you can do "zfs list -t snapshot" to see a list of snapshots, choose the one you want to roll back to, and issue "sudo zfs rollback nameofsnapshot". Then reboot.

To destroy a snapshot use "sudo zfs destroy nameofsnapshot". Adding the "-r" parameter destroys all snapshots with the same name in any child datasets; it does not destroy snapshots made AFTER "nameofsnapshot" (only "zfs rollback -r" does that).

On BTRFS systems you only concern yourself with the snapshots of the subvolumes @ and @home, unless you've created other subvolumes, like @data, and used fstab to bind it to, say, /home/youracct/data.
                On ZFS there are about 19 datasets. Zsys handles them all. Here is what "zfs list" displays:
[Screenshot: zfs-list.png, the output of "zfs list" on the fresh install.]


If zsys is removed then it is up to you to snapshot those datasets when you add or remove apps or other files or folders. The home account is easy, for example: rpool/USERDATA/jerry_a7x5fg
To make a snapshot: sudo zfs snapshot rpool/USERDATA/jerry_a7x5fg@jerry_20200725
It's the other 18 datasets that are another matter. Will snapshotting rpool/ROOT/ubuntu_4b3cl3 snapshot everything else except the home account? I don't know.
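For what it's worth, the zfs man page says "zfs snapshot -r" snapshots the named dataset and every descendant atomically, so two recursive snapshots, one per pool, ought to cover all of the datasets at once. A sketch, with the label being just an example:
Code:
$ sudo zfs snapshot -r rpool@manual_20200725
$ sudo zfs snapshot -r bpool@manual_20200725
$ zfs list -t snapshot | grep manual_20200725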

I could install zfsnap or zfs-auto-snapshot and use one of them. That's what I am going to try next.
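zfs-auto-snapshot is in the Ubuntu 20.04 repos; from what I've read it installs cron jobs under /etc/cron.hourly, /etc/cron.daily and friends, and it can also be run by hand. A sketch, since I haven't verified the flags myself yet:
Code:
$ sudo apt install zfs-auto-snapshot
$ sudo zfs-auto-snapshot --label=daily --keep=7 //
The "//" argument is supposed to mean "all eligible datasets".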
                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                – John F. Kennedy, February 26, 1962.



                  #23
IDK, it might be me, but all this seems like a lot of effort, a lot of new functionality to learn, and little or no benefit at all over BTRFS.

                  I have encountered several ZFS fanbois but I still haven't heard one single actual reason why it's "better" than BTRFS, and I can think of several why it's not.

GG, do you feel there are any specific benefits to ZFS?




                    #24
Originally posted by oshunluvr
IDK, it might be me, but all this seems like a lot of effort, a lot of new functionality to learn, and little or no benefit at all over BTRFS.

I have encountered several ZFS fanbois but I still haven't heard one single actual reason why it's "better" than BTRFS, and I can think of several why it's not.

GG, do you feel there are any specific benefits to ZFS?
I undertook this experiment specifically to answer the question "Can KDE run on top of ZFS, and am I truly missing out on features ZFS offers that BTRFS does not?" And to learn ZFS for my own curiosity.

                    *raising right hand and solemnly swearing* "I do not know of any ZFS features that are worth switching for".
And I know of advantages of BTRFS that ZFS cannot match, and of all the things you DON'T have to do to have a well-oiled BTRFS.

The first thing I discovered in my research is that there are two myths that are repeated often on the web: ZFS is more stable, and BTRFS is unstable/experimental. Both myths persist because of constant repetition. Both are wrong. The horror stories of ZFS crashing and losing TBs of data are abundant on YT and on the web in various blogs. Similar stories exist about BTRFS. Each fs has its strengths and weaknesses, and its "best fit" applications. The B-26 was called "The Widow Maker" because of the number of novice pilots killed trying to learn how to fly it, but seasoned pilots LOVED it. That could be said of either ZFS or BTRFS, except that BTRFS is, IMO, easier to fly.

Both can handle 100's of TB of data effortlessly. Both use CoW and checksumming. BTRFS fits completely on its own, and only, partition and does not need the EFI/FAT and EXT partitions that Ubuntu's ZFS requires in order to run. BTRFS can roll back to any previous snapshot without having to destroy more recent snapshots. ZFS has 19 filesystems/datasets that need to be backed up; BTRFS has two. (I've been exploring how to reduce the number of ZFS backups to two, root and home, by choosing the proper rpool/DATASET to back up, to make things easier.)

BTRFS Raid 5/6 are not ready for production. ZFS raid-z1 (5) and raid-z2 (6) are. That doesn't mean you won't read or see horror stories about ZFS data loss following a power failure during a write operation. And if one has a power failure while using BTRFS, recovery can be possible by running the BTRFS check before mounting the fs.

                    Both BTRFS and ZFS are under constant development, patching and bug fixing. That will never change. Problems encountered today may not be problems tomorrow.

                    I've been using BTRFS for about five years and have tried and tested everything about it except raid5/6, and offline check. Everything else has run perfectly and in my five years experience I have not had a single hiccup. IMO, BTRFS is the perfect filesystem for the personal computer, AND, for many server use cases.

                    In an interesting turn of events, Fedora (Red Hat, which was purchased by IBM) may be switching to BTRFS! The news, given by developer Jeff Laws, begins at 12:15
                    https://www.jupiterbroadcasting.com/...unplugged-361/


Linus said he wasn't in favor of adding ZFS to the kernel due to the CDDL license and Ellison's tendency to sue when significant use is in play. But it has been added to the kernel used by Ubuntu. Was it added upstream? Has Linus been overridden (by whom?), or did he change his mind?

The zfs.ko module has the largest number of parameters I've ever seen in a kernel module:
                    Code:
$ modinfo zfs
                    filename:       /lib/modules/5.4.0-42-generic/kernel/zfs/zfs.ko
                    version:        0.8.3-1ubuntu12.1
                    license:        CDDL
                    author:         OpenZFS on Linux
                    description:    ZFS
                    alias:          devname:zfs
                    alias:          char-major-10-249
                    srcversion:     4846FE465C7D89EAF09E22A
                    depends:        zlua,spl,znvpair,zcommon,icp,zunicode,zavl
                    retpoline:      Y
                    name:           zfs
                    vermagic:       5.4.0-42-generic SMP mod_unload 
                    sig_id:         PKCS#7
                    signer:         Build time autogenerated kernel key
                    sig_key:        02:B6:04:06:D9:82:F4:38:95:E4:6F:84:9F:1D:B4:8E:C5:85:90:8B
                    sig_hashalgo:   sha512
                    signature:      0E:D6:EC:D0:4F:20:E4:95:94:72:CC:A0:F2:4B:CD:5B:47:19:C7:77:
                                    3F:EE:70:65:2D:55:F7:08:02:65:34:80:F3:24:3D:68:C7:87:87:D0:
                                    B3:E4:74:75:8B:09:32:EA:00:3E:4B:57:FF:A7:6E:F1:B3:B2:BE:94:
                                    62:56:9E:52:EB:67:2F:63:28:D2:44:09:E9:2D:DA:A0:81:72:75:90:
                                    3D:17:80:49:FC:16:2F:4D:03:8C:36:A2:AC:DD:31:CF:B6:46:AA:3A:
                                    6E:ED:DA:27:F7:71:4C:FB:09:01:69:17:F4:A0:58:F0:67:75:E7:1E:
                                    20:2C:1C:40:97:E0:B6:BB:B4:75:55:9C:9B:AF:35:E2:8F:15:44:CB:
                                    92:EE:04:AD:91:19:9D:74:07:D2:02:42:00:58:B6:D7:51:04:03:1A:
                                    3A:19:51:3C:27:E1:63:64:5A:92:EF:C3:DE:CB:E0:A6:21:4F:5E:26:
                                    EE:CE:33:0E:BA:51:D1:40:62:1D:25:7C:2A:22:E7:37:74:66:DC:06:
                                    EA:03:BB:2F:A6:C5:8F:09:22:FD:9E:52:BB:29:A6:7F:28:B7:E9:1E:
                                    97:CB:5E:50:5D:29:92:45:71:C5:A1:77:FB:C7:9D:53:6F:74:72:71:
                                    FD:79:7B:B5:31:2C:44:AE:70:A5:DA:C2:EE:A4:A9:6A:AF:DD:6F:9B:
                                    E9:F5:EC:86:DE:10:15:E6:96:06:A7:6B:C3:AD:58:76:E6:22:D5:2C:
                                    84:0F:C6:7B:CE:11:8B:CB:15:81:03:CC:E4:A9:E1:C8:1B:84:AE:20:
                                    BA:64:14:42:E0:AD:30:2B:AE:6E:9A:34:B1:47:AA:7F:43:09:55:D7:
                                    88:FE:CF:8D:CC:39:31:D8:2B:75:8D:58:DC:07:AB:AD:D5:DE:75:67:
                                    AE:C3:69:89:DC:B5:6F:43:30:B3:1B:CA:8D:37:41:0E:7D:39:2F:59:
                                    B3:E2:B8:50:C5:9B:60:FA:0D:72:ED:81:9B:30:6E:08:31:1E:9B:91:
                                    C3:8E:DA:8C:DB:40:97:21:E3:73:EB:1A:A6:88:F7:00:88:99:8B:63:
                                    E4:B3:6C:47:B6:77:23:82:12:C5:3D:2A:E2:51:CD:DF:D7:79:0D:54:
                                    51:7F:1B:F8:8C:20:48:D7:40:0A:3D:C1:D8:AF:B2:1E:61:91:6F:4A:
                                    09:58:16:F5:2B:A3:45:5C:74:B0:15:8F:72:29:94:5E:2B:E0:64:D9:
                                    EA:52:F4:09:E4:F6:27:8D:6E:C2:8C:BD:E7:52:1E:55:5B:83:3F:00:
                                    59:88:FB:B1:CE:95:DD:9B:43:C1:A6:60:3F:FA:C3:A8:D2:74:02:95:
                                    42:3A:F3:AB:31:48:D7:49:2A:58:B6:86
                    parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
                    parm:           zvol_major:Major number for zvol device (uint)
                    parm:           zvol_threads:Max number of threads to handle I/O requests (uint)
                    parm:           zvol_request_sync:Synchronously handle bio requests (uint)
                    parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
                    parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
                    parm:           zvol_volmode:Default volmode property value (uint)
                    parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
                    parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow (int)
                    parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
                    parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
                    parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
                    parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
                    parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
                    parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs (int)
                    parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage (int)
                    parm:           zil_replay_disable:Disable intent logging replay (int)
                    parm:           zil_nocacheflush:Disable ZIL cache flushes (int)
                    parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
                    parm:           zil_maxblocksize:Limit in bytes of ZIL log block size (int)
                    parm:           zfs_object_mutex_size:Size of znode hold array (uint)
                    parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
                    parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
                    parm:           zfs_read_chunk_size:Bytes to read per chunk (ulong)
                    parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
                    parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
                    parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
                    parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
                    parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
                    parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program (ulong)
                    parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program (ulong)
                    parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it (int)
                    parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split (uint)
                    parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped (uint)
                    parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized (uint)
                    parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM (uint)
                    parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev (uint)
                    parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device (int)
                    parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device (int)
                    parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span (int)
                    parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang) (int)
                    parm:           zfs_vdev_raidz_impl:Select raidz implementation.
                    parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
                    parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media (int)
                    parm:           zfs_vdev_aggregate_trim:Allow TRIM I/O to be aggregated (int)
                    parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
                    parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
                    parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
                    parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
                    parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
                    parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
                    parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
                    parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
                    parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
                    parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev (int)
                    parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev (int)
                    parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev (int)
                    parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev (int)
                    parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
                    parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
                    parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
                    parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
                    parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
                    parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
                    parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev (int)
                    parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev (int)
                    parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
                    parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O's (int)
                    parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O's (int)
                    parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
                    parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O's (int)
                    parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O's (int)
                    parm:           zfs_initialize_value:Value written during zpool initialize (ulong)
                    parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings (int)
                    parm:           zfs_condense_min_mapping_bytes:Minimum size of vdev mapping to condense (ulong)
                    parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing (ulong)
                    parm:           zfs_condense_indirect_commit_entry_delay_ms:Delay while condensing vdev mapping (int)
                    parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments (int)
                    parm:           zfs_vdev_scheduler:I/O scheduler
                    parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
                    parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
                    parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
                    parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev (int)
                    parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev (int)
                    parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev (int)
                    parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second (uint)
                    parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below zedthreshold). (uint)
                    parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub (int)
                    parm:           vdev_validate_skip:Bypass vdev_validate() (int)
                    parm:           zfs_nocacheflush:Disable cache flushes (int)
                    parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
                    parm:           zfs_read_history:Historical statistics for the last N reads (int)
                    parm:           zfs_read_history_hits:Include cache hits in read history (int)
                    parm:           zfs_txg_history:Historical statistics for the last N txgs (int)
                    parm:           zfs_multihost_history:Historical statistics for last N multihost writes (int)
                    parm:           zfs_flags:Set additional debugging flags (uint)
                    parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
                    parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
                    parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
                    parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
                    parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
                    parm:           zfs_deadman_enabled:Enable deadman timer (int)
                    parm:           zfs_deadman_failmode:Failmode for deadman timer
                    parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
                    parm:           spa_slop_shift:Reserved free space in pool
                    parm:           zfs_ddt_data_is_special:Place DDT data into the special class (int)
                    parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class (int)
                    parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available (int)
                    parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
                    parm:           zfs_autoimport_disable:Disable pool import at module load (int)
                    parm:           zfs_spa_discard_memory_limit:Maximum memory for prefetching checkpoint space map per top-level vdev while discarding checkpoint (ulong)
                    parm:           spa_load_verify_shift:log2(fraction of arc that can be used by inflight I/Os when verifying pool during import (int)
                    parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
                    parm:           spa_load_verify_data:Set to traverse data on pool import (int)
                    parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import (int)
                    parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
                    parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) (ulong)
                    parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
                    parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
                    parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
                    parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
                    parm:           metaslab_aliquot:allocation granularity (a.k.a. stripe size) (ulong)
                    parm:           metaslab_debug_load:load all metaslabs when pool is first opened (int)
                    parm:           metaslab_debug_unload:prevent metaslabs from being unloaded (int)
                    parm:           metaslab_preload_enabled:preload potential metaslabs during reassessment (int)
                    parm:           zfs_mg_noalloc_threshold:percentage of free space for metaslab group to allow allocation (int)
                    parm:           zfs_mg_fragmentation_threshold:fragmentation for metaslab group to allow allocation (int)
                    parm:           zfs_metaslab_fragmentation_threshold:fragmentation for metaslab to allow allocation (int)
                    parm:           metaslab_fragmentation_factor_enabled:use the fragmentation metric to prefer less fragmented metaslabs (int)
                    parm:           metaslab_lba_weighting_enabled:prefer metaslabs with lower LBAs (int)
                    parm:           metaslab_bias_enabled:enable metaslab group biasing (int)
                    parm:           zfs_metaslab_segment_weight_enabled:enable segment-based metaslab selection (int)
                    parm:           zfs_metaslab_switch_threshold:segment-based metaslab selection maximum buckets before switching (int)
                    parm:           metaslab_force_ganging:blocks larger than this size are forced to be gang blocks (ulong)
                    parm:           metaslab_df_max_search:max distance (bytes) to search forward before using size tree (int)
                    parm:           metaslab_df_use_largest_segment:when looking in size tree, use largest segment instead of exact fit (int)
                    parm:           zfs_zevent_len_max:Max event queue length (int)
                    parm:           zfs_zevent_cols:Max event column width (int)
                    parm:           zfs_zevent_console:Log events to the console (int)
                    parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers (ulong)
                    parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg (int)
                    parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg (int)
                    parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
                    parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
                    parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing (int)
                    parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
                    parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
                    parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg (ulong)
                    parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
                    parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit (int)
                    parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size (int)
                    parm:           zfs_scan_legacy:Scrub using legacy non-sequential method (int)
                    parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval (int)
                    parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os (ulong)
                    parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit (int)
                    parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention (int)
                    parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans (int)
                    parm:           zfs_resilver_disable_defer:Process all resilvers immediately (int)
                    parm:           zfs_dirty_data_max_percent:percent of ram can be dirty (int)
                    parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
                    parm:           zfs_delay_min_dirty_percent:transaction delay threshold (int)
                    parm:           zfs_dirty_data_max:determines the dirty space limit (ulong)
                    parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
                    parm:           zfs_dirty_data_sync_percent:dirty data txg sync threshold as a percentage of zfs_dirty_data_max (int)
                    parm:           zfs_delay_scale:how quickly delay approaches infinity (ulong)
                    parm:           zfs_sync_taskq_batch_pct:max percent of CPUs that are used to sync dirty data (int)
                    parm:           zfs_zil_clean_taskq_nthr_pct:max percent of CPUs that are used per dp_sync_taskq (int)
                    parm:           zfs_zil_clean_taskq_minalloc:number of taskq entries that are pre-populated (int)
                    parm:           zfs_zil_clean_taskq_maxalloc:max number of taskq entries that are cached (int)
                    parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids (int)
                    parm:           zfs_max_recordsize:Max allowed record size (int)
                    parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
                    parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
                    parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
                    parm:           zfetch_max_distance:Max bytes to prefetch per stream (default 8MB) (uint)
                    parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
                    parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
                    parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
                    parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
                    parm:           zfs_override_estimate_recordsize:Record size calculation override for zfs send estimates (ulong)
                    parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
                    parm:           zfs_send_queue_length:Maximum send queue length (int)
                    parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks (int)
                    parm:           zfs_recv_queue_length:Maximum receive queue length (int)
                    parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
                    parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
                    parm:           zfs_per_txg_dirty_frees_percent:percentage of dirtied blocks from frees in one TXG (ulong)
                    parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
                    parm:           dmu_prefetch_max:Limit one prefetch call to this size (int)
                    parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
                    parm:           zfs_dbuf_state_index:Calculate arc header index (int)
                    parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
                    parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
                    parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
                    parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of the dbuf metadata cache. (ulong)
                    parm:           dbuf_metadata_cache_shift:int
                    parm:           dbuf_cache_shift:Set the size of the dbuf cache to a log2 fraction of arc size. (int)
                    parm:           zfs_arc_min:Min arc size
                    parm:           zfs_arc_max:Max arc size
                    parm:           zfs_arc_meta_limit:Meta limit for arc size
                    parm:           zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit
                    parm:           zfs_arc_meta_min:Min arc metadata
                    parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
                    parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_adjust_meta (int)
                    parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
                    parm:           zfs_arc_grow_retry:Seconds before growing arc size
                    parm:           zfs_arc_p_dampener_disable:disable arc_p adapt dampener (int)
                    parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim)
                    parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
                    parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p
                    parm:           zfs_arc_average_blocksize:Target average block size (int)
                    parm:           zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
                    parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
                    parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms (int)
                    parm:           l2arc_write_max:Max write bytes per interval (ulong)
                    parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
                    parm:           l2arc_headroom:Number of max device writes to precache (ulong)
                    parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
                    parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
                    parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
                    parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
                    parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
                    parm:           l2arc_norw:No reads during writes (int)
                    parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
                    parm:           zfs_arc_sys_free:System free memory target size in bytes
                    parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in arc
                    parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes (ulong)
                    parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
                    parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
                    parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
                    parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)
                    In looking them over, I don't have a clue as to what values most of them should receive beyond the default values.
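
                    For anyone who does want to experiment with one of these, the usual mechanism is a modprobe options file; zfs_arc_max, the ARC size cap, is the parm people seem to touch most. A minimal sketch, and the 4 GiB figure is only an example value, not a recommendation:
                    Code:
                    # /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (value is in bytes)
                    options zfs zfs_arc_max=4294967296
                    The same parm can apparently also be changed at runtime by writing to /sys/module/zfs/parameters/zfs_arc_max.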

                    The BTRFS kernel module, by comparison, has NO parms, and there are only two subvolume properties that are user adjustable. One is the read-only property: setting it to false makes a snapshot rw instead of just ro, and in five years I've only used that option a couple of times. The other user-settable property I can't remember. One can, however, adjust the mount options to control BTRFS's behavior. I use "defaults" in fstab and that has worked great.
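
                    In command form that toggle looks like this (the snapshot path is hypothetical):
                    Code:
                    # list the user-settable properties of a snapshot
                    sudo btrfs property list -ts /mnt/snapshots/@20200726
                    # make a read-only snapshot read-write again
                    sudo btrfs property set -ts /mnt/snapshots/@20200726 ro false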

                    So, I am going to install zfs-auto-snapshot and play with it for a while to see if I can circumvent the necessity of figuring out whether I have to snapshot all the other filesystems/datasets in rpool. Then both the VM and virt-manager are getting rolled back out of existence.
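
                    (If it turns out that everything does need snapshotting, plain zfs can at least do it recursively in one shot; rpool and bpool are the pools Ubuntu's installer creates:)
                    Code:
                    # one tag, every dataset in both pools
                    sudo zfs snapshot -r rpool@20200726
                    sudo zfs snapshot -r bpool@20200726
                    # verify
                    zfs list -t snapshot | grep 20200726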

                    Meanwhile, I can answer the question "Does Kubuntu run fine using ZFS for a fs?" Yes, but why, when BTRFS is so easy to install and use?
                    Last edited by GreyGeek; Jul 26, 2020, 12:01 PM.
                    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                    – John F. Kennedy, February 26, 1962.

                    Comment


                      #25
                      I installed zfs-auto-snapshot and played with it a bit. It is a shell script that sets up cron jobs and can be run manually as well. It is tricky to use, but it answered the question I had about which filesystems/datasets need to be snapshotted to capture the entire system. Answer: all of them.
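
                      The cron hooks it drops in place are easy to find (assuming the Ubuntu package layout; the 15-minute "frequent" job is the one in cron.d):
                      Code:
                      $ ls /etc/cron.d/zfs-auto-snapshot /etc/cron.{hourly,daily,weekly,monthly}/zfs-auto-snapshot
                      /etc/cron.d/zfs-auto-snapshot
                      /etc/cron.daily/zfs-auto-snapshot
                      /etc/cron.hourly/zfs-auto-snapshot
                      /etc/cron.monthly/zfs-auto-snapshot
                      /etc/cron.weekly/zfs-auto-snapshot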

                      First, a dry run:
                      Code:
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n
                      [sudo] password for jerry: 
                      Error: The filesystem argument list is empty.
                      Oops! Add something:
                      Code:
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n rpool/DATASET
                      Error: rpool/DATASET is not a ZFS filesystem or volume.
                      Oopsie again! (It's obvious I don't have the terminology down right.)
                      Code:
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n rpool/DATASET/jerry_r7h6kl
                      Error: rpool/DATASET/jerry_r7h6kl is not a ZFS filesystem or volume.
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n /home/jerry
                      Error: /home/jerry is not a ZFS filesystem or volume.
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n /
                      Error: / is not a ZFS filesystem or volume.
                      So, I tried a pool name (-v is verbose, -n is a dry run):
                      Code:
                      jerry@jerryZFS:~$ sudo zfs-auto-snapshot -v -n rpool
                      zfs snapshot -o com.sun:auto-snapshot-desc='-' -r 'rpool@zfs-auto-snap-2020-07-26-1853'
                      @zfs-auto-snap-2020-07-26-1853, 1 created, 0 destroyed, 0 warnings.
                      Success?
                      Nothing is supposed to be added during a dry run, but I wanted to check and make sure:
                      Code:
                      jerry@jerryZFS:~$ zfs list -t snapshot
                      NAME                                                                                      USED  AVAIL     REFER  MOUNTPOINT
                      bpool@zfs-auto-snap_frequent-2020-07-26-1830                                                0B      -       96K  -
                      bpool@zfs-auto-snap_frequent-2020-07-26-1845                                                0B      -       96K  -
                      bpool/BOOT@zfs-auto-snap_frequent-2020-07-26-1830                                           0B      -       96K  -
                      bpool/BOOT@zfs-auto-snap_frequent-2020-07-26-1845                                           0B      -       96K  -
                      bpool/BOOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1830                             0B      -     91.3M  -
                      bpool/BOOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1845                             0B      -     91.3M  -
                      rpool@zfs-auto-snap_frequent-2020-07-26-1830                                                0B      -       96K  -
                      rpool@zfs-auto-snap_frequent-2020-07-26-1845                                                0B      -       96K  -
                      rpool/ROOT@zfs-auto-snap_frequent-2020-07-26-1830                                           0B      -       96K  -
                      rpool/ROOT@zfs-auto-snap_frequent-2020-07-26-1845                                           0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t@ROOT2020071528                                                   112M      -     2.30G  -
                      rpool/ROOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1830                           248K      -     5.89G  -
                      rpool/ROOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1845                           208K      -     5.89G  -
                      rpool/ROOT/ubuntu_cfgs2t/srv@ROOT2020071528                                                 0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/srv@zfs-auto-snap_frequent-2020-07-26-1830                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/srv@zfs-auto-snap_frequent-2020-07-26-1845                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr@ROOT2020071528                                                 0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr@zfs-auto-snap_frequent-2020-07-26-1830                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr@zfs-auto-snap_frequent-2020-07-26-1845                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@ROOT2020071528                                          72K      -      128K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@zfs-auto-snap_frequent-2020-07-26-1830                   0B      -      144K  -
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@zfs-auto-snap_frequent-2020-07-26-1845                   0B      -      144K  -
                      rpool/ROOT/ubuntu_cfgs2t/var@ROOT2020071528                                                 0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var@zfs-auto-snap_frequent-2020-07-26-1830                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var@zfs-auto-snap_frequent-2020-07-26-1845                         0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/games@ROOT2020071528                                           0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/games@zfs-auto-snap_frequent-2020-07-26-1830                   0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/games@zfs-auto-snap_frequent-2020-07-26-1845                   0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@ROOT2020071528                                          25.5M      -      476M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@zfs-auto-snap_frequent-2020-07-26-1830                    96K      -      519M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@zfs-auto-snap_frequent-2020-07-26-1845                     0B      -      519M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@ROOT2020071528                            72K      -      104K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@zfs-auto-snap_frequent-2020-07-26-1830     0B      -      104K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@zfs-auto-snap_frequent-2020-07-26-1845     0B      -      104K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@ROOT2020071528                            108K      -      144K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@zfs-auto-snap_frequent-2020-07-26-1830     88K      -      140K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@zfs-auto-snap_frequent-2020-07-26-1845     88K      -      140K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@ROOT2020071528                                      2.73M      -     66.7M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@zfs-auto-snap_frequent-2020-07-26-1830                64K      -     81.6M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@zfs-auto-snap_frequent-2020-07-26-1845                 0B      -     81.6M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@ROOT2020071528                                     5.77M      -     31.3M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@zfs-auto-snap_frequent-2020-07-26-1830                0B      -     62.8M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@zfs-auto-snap_frequent-2020-07-26-1845                0B      -     62.8M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/log@ROOT2020071528                                          4.09M      -     4.85M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/log@zfs-auto-snap_frequent-2020-07-26-1830                  3.89M      -     13.9M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/log@zfs-auto-snap_frequent-2020-07-26-1845                  2.28M      -     13.9M  -
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@ROOT2020071528                                            0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@zfs-auto-snap_frequent-2020-07-26-1830                    0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@zfs-auto-snap_frequent-2020-07-26-1845                    0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@ROOT2020071528                                            0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@zfs-auto-snap_frequent-2020-07-26-1830                    0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@zfs-auto-snap_frequent-2020-07-26-1845                    0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@ROOT2020071528                                          72K      -      112K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@zfs-auto-snap_frequent-2020-07-26-1830                  56K      -      112K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@zfs-auto-snap_frequent-2020-07-26-1845                   0B      -      112K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/www@ROOT2020071528                                             0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/www@zfs-auto-snap_frequent-2020-07-26-1830                     0B      -       96K  -
                      rpool/ROOT/ubuntu_cfgs2t/var/www@zfs-auto-snap_frequent-2020-07-26-1845                     0B      -       96K  -
                      rpool/USERDATA@zfs-auto-snap_frequent-2020-07-26-1830                                       0B      -       96K  -
                      rpool/USERDATA@zfs-auto-snap_frequent-2020-07-26-1845                                       0B      -       96K  -
                      rpool/USERDATA/jerry_r7h6kl@jerry_202007241528                                           1.77M      -     42.5M  -
                      rpool/USERDATA/jerry_r7h6kl@jerry_20200724_KDE                                           2.18M      -     45.3M  -
                      rpool/USERDATA/jerry_r7h6kl@jerry_SAGE                                                   15.5M      -     53.1M  -
                      rpool/USERDATA/jerry_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1830                        140K      -     56.8M  -
                      rpool/USERDATA/jerry_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845                         56K      -     56.8M  -
                      rpool/USERDATA/root_r7h6kl@root_202007241528                                               80K      -      156K  -
                      rpool/USERDATA/root_r7h6kl@root_20200724_KDE                                               80K      -      156K  -
                      rpool/USERDATA/root_r7h6kl@root_SAGE                                                       92K      -      276K  -
                      rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1830                           0B      -      276K  -
                      rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845                           0B      -      276K  -
                      Whoa!
                      Yup, I crashed that B-25 a lot!
                      While I'd been playing around with zfs-auto-snapshot, it had been busy creating snapshots every 15 minutes!
                      AND, it snapshotted everything!

                      Code:
                      $ zfs list -t snapshot -o name,creation | grep Sun
                      bpool@zfs-auto-snap_frequent-2020-07-26-1830                                             Sun Jul 26 13:30 2020
                      bpool@zfs-auto-snap_frequent-2020-07-26-1845                                             Sun Jul 26 13:45 2020
                      bpool@zfs-auto-snap_frequent-2020-07-26-1900                                             Sun Jul 26 14:00 2020
                      bpool/BOOT@zfs-auto-snap_frequent-2020-07-26-1830                                        Sun Jul 26 13:30 2020
                      bpool/BOOT@zfs-auto-snap_frequent-2020-07-26-1845                                        Sun Jul 26 13:45 2020
                      bpool/BOOT@zfs-auto-snap_frequent-2020-07-26-1900                                        Sun Jul 26 14:00 2020
                      bpool/BOOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1830                          Sun Jul 26 13:30 2020
                      bpool/BOOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1845                          Sun Jul 26 13:45 2020
                      bpool/BOOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1900                          Sun Jul 26 14:00 2020
                      rpool@zfs-auto-snap_frequent-2020-07-26-1830                                             Sun Jul 26 13:30 2020
                      rpool@zfs-auto-snap_frequent-2020-07-26-1845                                             Sun Jul 26 13:45 2020
                      rpool@zfs-auto-snap_frequent-2020-07-26-1900                                             Sun Jul 26 14:00 2020
                      rpool/ROOT@zfs-auto-snap_frequent-2020-07-26-1830                                        Sun Jul 26 13:30 2020
                      rpool/ROOT@zfs-auto-snap_frequent-2020-07-26-1845                                        Sun Jul 26 13:45 2020
                      rpool/ROOT@zfs-auto-snap_frequent-2020-07-26-1900                                        Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1830                          Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1845                          Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t@zfs-auto-snap_frequent-2020-07-26-1900                          Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/srv@zfs-auto-snap_frequent-2020-07-26-1830                      Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/srv@zfs-auto-snap_frequent-2020-07-26-1845                      Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/srv@zfs-auto-snap_frequent-2020-07-26-1900                      Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr@zfs-auto-snap_frequent-2020-07-26-1830                      Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr@zfs-auto-snap_frequent-2020-07-26-1845                      Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr@zfs-auto-snap_frequent-2020-07-26-1900                      Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@zfs-auto-snap_frequent-2020-07-26-1830                Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@zfs-auto-snap_frequent-2020-07-26-1845                Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/usr/local@zfs-auto-snap_frequent-2020-07-26-1900                Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var@zfs-auto-snap_frequent-2020-07-26-1830                      Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var@zfs-auto-snap_frequent-2020-07-26-1845                      Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var@zfs-auto-snap_frequent-2020-07-26-1900                      Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/games@zfs-auto-snap_frequent-2020-07-26-1830                Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/games@zfs-auto-snap_frequent-2020-07-26-1845                Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/games@zfs-auto-snap_frequent-2020-07-26-1900                Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@zfs-auto-snap_frequent-2020-07-26-1830                  Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@zfs-auto-snap_frequent-2020-07-26-1845                  Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib@zfs-auto-snap_frequent-2020-07-26-1900                  Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@zfs-auto-snap_frequent-2020-07-26-1830  Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@zfs-auto-snap_frequent-2020-07-26-1845  Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/AccountsService@zfs-auto-snap_frequent-2020-07-26-1900  Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@zfs-auto-snap_frequent-2020-07-26-1830   Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@zfs-auto-snap_frequent-2020-07-26-1845   Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/NetworkManager@zfs-auto-snap_frequent-2020-07-26-1900   Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@zfs-auto-snap_frequent-2020-07-26-1830              Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@zfs-auto-snap_frequent-2020-07-26-1845              Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/apt@zfs-auto-snap_frequent-2020-07-26-1900              Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@zfs-auto-snap_frequent-2020-07-26-1830             Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@zfs-auto-snap_frequent-2020-07-26-1845             Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/lib/dpkg@zfs-auto-snap_frequent-2020-07-26-1900             Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/log@zfs-auto-snap_frequent-2020-07-26-1830                  Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/log@zfs-auto-snap_frequent-2020-07-26-1845                  Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/log@zfs-auto-snap_frequent-2020-07-26-1900                  Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@zfs-auto-snap_frequent-2020-07-26-1830                 Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@zfs-auto-snap_frequent-2020-07-26-1845                 Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/mail@zfs-auto-snap_frequent-2020-07-26-1900                 Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@zfs-auto-snap_frequent-2020-07-26-1830                 Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@zfs-auto-snap_frequent-2020-07-26-1845                 Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/snap@zfs-auto-snap_frequent-2020-07-26-1900                 Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@zfs-auto-snap_frequent-2020-07-26-1830                Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@zfs-auto-snap_frequent-2020-07-26-1845                Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/spool@zfs-auto-snap_frequent-2020-07-26-1900                Sun Jul 26 14:00 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/www@zfs-auto-snap_frequent-2020-07-26-1830                  Sun Jul 26 13:30 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/www@zfs-auto-snap_frequent-2020-07-26-1845                  Sun Jul 26 13:45 2020
                      rpool/ROOT/ubuntu_cfgs2t/var/www@zfs-auto-snap_frequent-2020-07-26-1900                  Sun Jul 26 14:00 2020
                      rpool/USERDATA@zfs-auto-snap_frequent-2020-07-26-1830                                    Sun Jul 26 13:30 2020
                      rpool/USERDATA@zfs-auto-snap_frequent-2020-07-26-1845                                    Sun Jul 26 13:45 2020
                      rpool/USERDATA@zfs-auto-snap_frequent-2020-07-26-1900                                    Sun Jul 26 14:00 2020
                      rpool/USERDATA/jerry_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1830                       Sun Jul 26 13:30 2020
                      rpool/USERDATA/jerry_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845                       Sun Jul 26 13:45 2020
                      rpool/USERDATA/jerry_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1900                       Sun Jul 26 14:00 2020
                      rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1830                        Sun Jul 26 13:30 2020
                      rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845                        Sun Jul 26 13:45 2020
                      rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1900                        Sun Jul 26 14:00 2020
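                      All of that 15-minute churn can be turned off selectively, because the script honors a per-dataset property; a sketch, based on my reading of the script:
                      Code:
                      # stop zfs-auto-snapshot from touching bpool at all
                      sudo zfs set com.sun:auto-snapshot=false bpool
                      # or disable only the 15-minute "frequent" label on one dataset
                      sudo zfs set com.sun:auto-snapshot:frequent=false rpool/USERDATA/jerry_r7h6kl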
                      I tried using "--destroy-only" with every name from "rpool" to "rpool/DATASET" to a complete snapshot name, rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845, and even just zfs-auto-snap_frequent-2020-07-26-1845. Regardless of what I used, it always gave the error "no count specified". Adding a number gave another "unrecognized" error.
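
                      Reading the script afterwards suggests --destroy-only wants a --keep count plus the special target // (meaning "every eligible dataset"); I haven't verified those exact flags, so treat the first command as a sketch. The plain-zfs form in the second command is certain:
                      Code:
                      # prune "frequent" snapshots down to the newest 4, everywhere (sketch)
                      sudo zfs-auto-snapshot --destroy-only --keep=4 --label=frequent //
                      # or destroy a single snapshot directly
                      sudo zfs destroy rpool/USERDATA/root_r7h6kl@zfs-auto-snap_frequent-2020-07-26-1845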

                      zfs-auto-snapshot and zfsnap, both scripts that work the same way, are certainly not a replacement for Snapper or TimeShift, neither of which works on ZFS.


                      But, it doesn't matter. I'm done now.
                      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                      – John F. Kennedy, February 26, 1962.

                      Comment


                        #26
                        Jerry, I've enjoyed learning about ZFS via your experiments. I learned that I don't want to try ZFS, now or in the future. It needs a lot of work to polish the very rough spots.
                        Thanks for your effort.
                        Kubuntu 24.11 64bit under Kernel 6.11.0, Hp Pavilion, 6MB ram. Stay away from all things Google...

                        Comment


                          #27
                          Originally posted by TWPonKubuntu View Post
                          Jerry, I've enjoyed learning about ZFS via your experiments. I learned that I don't want to try ZFS, now or in the future. It needs a lot of work to polish the very rough spots.
                          Thanks for your effort.
                          My sentiments, also, on all four points, thank you TWPonKubuntu. (Although, I suspect there'll be ZFS in my future.)
                          Regards, John Little

                          Comment


                            #28
                            I did try a rollback and it didn't give me an error message. But when I rebooted, what I expected to disappear (because it wasn't present when I created the snapshot I rolled back to) was still there. So apparently the rollback didn't occur, with no success or error messages either way.
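
                            One likely culprit, though I can't confirm it is what bit me here: zfs rollback only goes back to the most recent snapshot of a dataset unless you add -r to destroy the newer ones, and it must be run per dataset. A sketch, using names from my system:
                            Code:
                            # roll one dataset back to an older snapshot, destroying anything newer
                            sudo zfs rollback -r rpool/USERDATA/jerry_r7h6kl@jerry_SAGE
                            # the -r refers to intermediate snapshots, not child datasets;
                            # every dataset has to be rolled back individually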

                            With ZFS one has to issue the rollback command and then immediately reboot to have it take effect. Canonical had the idea of tying the snapshots into grub so that one could select the snapshot to roll back to during bootup, which also implies that one doesn't need to issue a rollback command beforehand to return to an existing snapshot. Using BTRFS one can roll back to any particular snapshot by replacing @ and @home with @somesnap and @homesomesnap and rebooting. In my opinion, and experience, @ and @home are often interlinked, and replacing a specific @ without replacing its partner, @home, may cause problems. For example: you've installed an application that stores components both under root and under /home. If you roll back only @home, the root components have nothing to connect to. If you roll back only @, the @home components can't function properly because the root components are missing. Ergo, snapshot @ and @home at the same time with identical ID names. That's why I use @yyyymmdd and @homeyyyymmdd with, perhaps, additional suffix tags.
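
                            A sketch of that routine, assuming the top-level BTRFS volume is mounted at /mnt and fstab mounts the system by subvolume name:
                            Code:
                            # take the pair (rw snapshots, identical date tags)
                            sudo btrfs subvolume snapshot /mnt/@     /mnt/@20200726
                            sudo btrfs subvolume snapshot /mnt/@home /mnt/@home20200726
                            # roll back: move the current pair aside, move the chosen pair into place
                            sudo mv /mnt/@ /mnt/@bad && sudo mv /mnt/@20200726 /mnt/@
                            sudo mv /mnt/@home /mnt/@homebad && sudo mv /mnt/@home20200726 /mnt/@home
                            # then reboot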

                            ZFS also has more complex snapshot-deletion rules than BTRFS. You can't destroy a snapshot that has been cloned, and you can't destroy a dataset that still has snapshots. So one must destroy the clone first, then the snapshot it was created from, and then the dataset. I never got into creating datasets or clones.
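
                            In command form the ordering looks like this (all names hypothetical):
                            Code:
                            # a clone pins its origin snapshot, so the order matters
                            sudo zfs destroy rpool/myclone             # 1. the clone
                            sudo zfs destroy rpool/mydataset@mysnap    # 2. the snapshot it came from
                            sudo zfs destroy rpool/mydataset           # 3. the dataset itself
                            # alternatively, "zfs promote rpool/myclone" re-parents the clone so the
                            # origin snapshot and dataset can be destroyed without losing the clone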


                            In BTRFS, cloning is the creation of a new BTRFS system from an existing one. There are several ways to do that. DDG "btrfs clone" and count the many ways.
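
                            One of those ways, for instance, is send/receive to a second BTRFS disk (device and mount names hypothetical):
                            Code:
                            # send needs a read-only snapshot as its source
                            sudo btrfs subvolume snapshot -r /mnt/@ /mnt/@ro-20200726
                            sudo mount /dev/sdb1 /backup               # the destination btrfs filesystem
                            sudo btrfs send /mnt/@ro-20200726 | sudo btrfs receive /backup
                            # later increments ship only the differences
                            sudo btrfs send -p /mnt/@ro-20200726 /mnt/@ro-20200801 | sudo btrfs receive /backup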

                            Zsys is supposed to create snapshots when apt is used, but before it actually installs or removes anything, making rolling back easy to do if things go south after the install or removal. Zsys must also track usage of the zfs command, because manually created and destroyed snapshots are done with the zfs command. Tying all of that into grub is, IMO, fraught with danger: a Rube Goldberg device kludged onto grub. And by all the field reports it's not working out too well, yet.

                            Anywhoo, my curiosity has been satisfied, and my main question answered. Kubuntu can run seamlessly and quickly on top of ZFS. In the process of learning that I also learned more about ZFS than I will ever use because my experience with it has increased my appreciation for the power and ease of use of BTRFS.

                            I deleted the VM and purged virt-manager and its dependencies. I disabled and masked the unit services and then used mc as root to delete the /dev/null symlinks, which is what mask creates when it is used to "mask" a service. And, should I ever need to return to this experiment, I have a snapshot pair created just after I installed virt-manager and before I installed Ubuntu and Kubuntu on top of it. I also have a snapshot pair of my system with Kubuntu installed on ZFS in virt-manager. Later today I will send them to my archival HDs and then delete the snapshots to free up the space on my SSDs.
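
                            For anyone repeating this, the disable/mask sequence was essentially (unit name as on my install):
                            Code:
                            sudo systemctl disable --now zsysd.service
                            sudo systemctl mask zsysd.service
                            # "mask" just drops a symlink to /dev/null:
                            ls -l /etc/systemd/system/zsysd.service
                            # ... /etc/systemd/system/zsysd.service -> /dev/null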
                            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                            – John F. Kennedy, February 26, 1962.

                            Comment


                              #29
                              Solution for VM and Drive

                              If you want to install Ubuntu, with Kubuntu on top, in a VM such as VirtualBox:
                              1. Don't create a drive; just continue
                              2. Add an "NVMe" controller, create a drive (I used 50 GB), select the new drive and click "add"
                              3. Make sure the VM boots via "EFI" and check all the options like acceleration (I cannot post a picture here)
                              4. Once you have booted with UEFI, open a terminal [CTRL + ALT + T] and type:
                              Code:
                              sudo -i
                              
                              nano /usr/share/ubiquity/zsys-setup
                              5. As shown at https://pov.es/linux/ubuntu/ubuntu-2...nd-encryption/, change the rpool creation to:
                              Code:
                              # Pools
                                   # rpool
                                   echo PASSWORD | zpool create -f \
                                           -o ashift=12 \
                                           -O compression=lz4 \
                                           -O acltype=posixacl \
                                           -O xattr=sa \
                                           -O relatime=on \
                                           -O normalization=formD \
                                           -O canmount=off \
                                           -O dnodesize=auto \
                                           -O sync=disabled \
                                           -O recordsize=1M \
                                           -O encryption=aes-256-gcm \
                                           -O keylocation=prompt \
                                           -O keyformat=passphrase \
                                           -O mountpoint=/ -R "${target}" rpool "${partrpool}"
                              * Replace PASSWORD with the encryption password you want to use. You will be prompted to type it at boot time. Preferably use your login password here.
                              * This is at about line 316.
                              [CTRL + O] for save/store and afterwards [CTRL + X] to close.
                              6. Edit `sources.list`
                              Code:
                              nano /mnt/apt/sources.list
                              and insert at least these lines:

                              Code:
                              deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
                              deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
                              deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
                              deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
                              save (CTRL + O), close (CTRL + X) and update:
                              Code:
                              apt update
                              7. Install
                              now start ubiquity from the terminal with
                              Code:
                              ubiquity
                              Don't close the terminal until the installation is complete.
                              After the installation, click "Continue testing", close the terminal and restart the VM or PC.
                              8. First steps after reboot
                              * Check `/etc/apt/sources.list` again, then in a terminal:

                              Code:
                              apt update; apt upgrade --yes; apt full-upgrade --yes; update-grub
                              9. Additional install
                              Code:
                              apt install --yes plasma-desktop kde-full  sddm-theme-breeze
                              now restart and choose "Plasma Desktop" at the login screen. Finished.

                              The trick, besides the reconfiguration of `zsys-setup`, is simply to complete `sources.list` and run `apt update` (only).

                              Comment


                                #30
                                Thanks for that post. It should prove useful for those who want to run Kubuntu on top of ZFS.
                                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                                – John F. Kennedy, February 26, 1962.

                                Comment
