    btrfs vs. zfs

    I notice that Ubuntu now offers ZFS as an option for installation. I use btrfs, but I keep hearing about ZFS and wonder which file system is better. Any feedback from those who have used both?

    #2
    https://www.kubuntuforums.net/showth...ot-file-system

    Please Read Me



      #3
      I haven't used ZFS, and won't consider it because I'm allergic to anything from Oracle.
      Regards, John Little



        #4
        Originally posted by oldgeek:
        I notice that Ubuntu now offers ZFS as an option for installation. I use btrfs, but I keep hearing about ZFS and wonder which file system is better. Any feedback from those who have used both?
        Personally, I think it's a "use case" issue. I'm using BTRFS on my 20.04, but I don't do any backups; it's my laptop, and NOT my primary *buntu OS, so I "don't care" if I have to wipe it and reinstall.
        Windows no longer obstructs my view.
        Using Kubuntu Linux since March 23, 2007.
        "It is a capital mistake to theorize before one has data." - Sherlock Holmes



          #5
          Originally posted by oldgeek:
          I notice that Ubuntu now offers ZFS as an option for installation. I use btrfs, but I keep hearing about ZFS and wonder which file system is better. Any feedback from those who have used both?
          Funny you should mention that. Earlier today I downloaded the Ubuntu 20.04 ISO with the deliberate purpose of installing it as a VM using ZFS as the root file system. I have done that and have begun learning how to use it.

          I plan to post my experiences in the BTRFS forum which Oshunluver linked to, unless the admins create a ZFS subforum.

          BTW, Sun open-sourced Solaris and ZFS in 2005 under the CDDL license. In 2010 Oracle bought Sun and stopped releasing the ZFS source, so a company whose name I forget took the CDDL code and kept maintaining it. In 2013 OpenZFS was formed and is now the entity responsible for maintaining and developing the code. Oracle has nothing to do with it and cannot shut it down as long as the devs use clean-room techniques.

          Canonical decided that the CDDL wasn't enough of a licensing hindrance to keep ZFS out of their release, so they added it.

          When you get to the section of the installer where you choose which device Ubuntu will be installed on, there is a radio button labeled "Advanced". Click it and you are on your way to making ZFS the <ROOT_FS> of your installation.

          In the days and weeks ahead, if I maintain an interest in it, I will be posting my experiences.

          So far, I can say this: running a VM of Ubuntu 20.04 on ZFS is no slower than running a VM of UbuntuDDE on EXT4. I gave the ZFS VM only a 30GB vdisk, 4GB of RAM, and 2 cores, but it is fairly fast once it finishes loading. The systemd-analyze time to a working DE is 25.4 seconds. Also, "systemctl list-unit-files" does not list any of the snapd loops or loop mount points that Kubuntu lists, but the snapd daemons are present.

          I searched the "Store" high and low for zfs-auto-snapshot but couldn't find it. Assuming it was in the repository, just not the one the Store was using, I gave the terminal a chance: "sudo apt install zfs-auto-snapshot" installed it quickly from the CLI.
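          For anyone who wants to repeat those checks in their own 20.04-on-ZFS VM, the commands behind the numbers above are roughly these (nothing exotic, just stock Ubuntu tooling):
          Code:
          systemd-analyze                        # time from boot to a working session
          systemctl list-unit-files | grep snap  # look for snapd loop/mount units
          sudo apt install zfs-auto-snapshot     # the auto-snapshot tool, straight from the repos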

          Here is a list of the bpool and rpool (b for boot and r for root):
          Code:
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$ zfs list
          NAME                                               USED  AVAIL     REFER  MOUNTPOINT
          bpool                                             90.8M  1.16G       96K  /boot
          bpool/BOOT                                        90.1M  1.16G       96K  none
          bpool/BOOT/ubuntu_z6p9vy                          90.0M  1.16G     90.0M  /boot
          rpool                                             3.47G  22.2G       96K  /
          rpool/ROOT                                        3.34G  22.2G       96K  none
          rpool/ROOT/ubuntu_z6p9vy                          3.34G  22.2G     2.42G  /
          rpool/ROOT/ubuntu_z6p9vy/srv                        96K  22.2G       96K  /srv
          rpool/ROOT/ubuntu_z6p9vy/usr                       224K  22.2G       96K  /usr
          rpool/ROOT/ubuntu_z6p9vy/usr/local                 128K  22.2G      128K  /usr/local
          rpool/ROOT/ubuntu_z6p9vy/var                       792M  22.2G       96K  /var
          rpool/ROOT/ubuntu_z6p9vy/var/games                  96K  22.2G       96K  /var/games
          rpool/ROOT/ubuntu_z6p9vy/var/lib                   780M  22.2G      678M  /var/lib
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService   208K  22.2G       96K  /var/lib/AccountsService
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager    500K  22.2G      124K  /var/lib/NetworkManager
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt              56.3M  22.2G     51.4M  /var/lib/apt
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg             39.5M  22.2G     32.6M  /var/lib/dpkg
          rpool/ROOT/ubuntu_z6p9vy/var/log                  12.0M  22.2G     5.97M  /var/log
          rpool/ROOT/ubuntu_z6p9vy/var/mail                   96K  22.2G       96K  /var/mail
          rpool/ROOT/ubuntu_z6p9vy/var/snap                  256K  22.2G      112K  /var/snap
          rpool/ROOT/ubuntu_z6p9vy/var/spool                 280K  22.2G      112K  /var/spool
          rpool/ROOT/ubuntu_z6p9vy/var/www                    96K  22.2G       96K  /var/www
          rpool/USERDATA                                     126M  22.2G       96K  /
          rpool/USERDATA/jerry_0mgvki                        126M  22.2G     87.0M  /home/jerry
          rpool/USERDATA/root_0mgvki                         104K  22.2G      104K  /root
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$
          Here is the snapshot listing:
          Code:
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$ zfs list -t snapshot
          NAME                                                               USED  AVAIL     REFER  MOUNTPOINT
          bpool/BOOT/ubuntu_z6p9vy@autozsys_ps1e80                             0B      -     90.0M  -
          bpool/BOOT/ubuntu_z6p9vy@autozsys_kqk0hl                             0B      -     90.0M  -
          bpool/BOOT/ubuntu_z6p9vy@autozsys_2ivctg                             0B      -     90.0M  -
          bpool/BOOT/ubuntu_z6p9vy@autozsys_3f11n0                             0B      -     90.0M  -
          bpool/BOOT/ubuntu_z6p9vy@autozsys_lv1ui9                             0B      -     90.0M  -
          rpool/ROOT/ubuntu_z6p9vy@autozsys_ps1e80                             0B      -     2.40G  -
          rpool/ROOT/ubuntu_z6p9vy@autozsys_kqk0hl                             0B      -     2.40G  -
          rpool/ROOT/ubuntu_z6p9vy@autozsys_2ivctg                          1.13M      -     2.41G  -
          rpool/ROOT/ubuntu_z6p9vy@autozsys_3f11n0                          23.4M      -     2.44G  -
          rpool/ROOT/ubuntu_z6p9vy@autozsys_lv1ui9                          45.9M      -     2.42G  -
          rpool/ROOT/ubuntu_z6p9vy/srv@autozsys_ps1e80                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/srv@autozsys_kqk0hl                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/srv@autozsys_2ivctg                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/srv@autozsys_3f11n0                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/srv@autozsys_lv1ui9                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr@autozsys_ps1e80                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr@autozsys_kqk0hl                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr@autozsys_2ivctg                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr@autozsys_3f11n0                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr@autozsys_lv1ui9                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/usr/local@autozsys_ps1e80                   0B      -      128K  -
          rpool/ROOT/ubuntu_z6p9vy/usr/local@autozsys_kqk0hl                   0B      -      128K  -
          rpool/ROOT/ubuntu_z6p9vy/usr/local@autozsys_2ivctg                   0B      -      128K  -
          rpool/ROOT/ubuntu_z6p9vy/usr/local@autozsys_3f11n0                   0B      -      128K  -
          rpool/ROOT/ubuntu_z6p9vy/usr/local@autozsys_lv1ui9                   0B      -      128K  -
          rpool/ROOT/ubuntu_z6p9vy/var@autozsys_ps1e80                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var@autozsys_kqk0hl                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var@autozsys_2ivctg                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var@autozsys_3f11n0                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var@autozsys_lv1ui9                         0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/games@autozsys_ps1e80                   0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/games@autozsys_kqk0hl                   0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/games@autozsys_2ivctg                   0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/games@autozsys_3f11n0                   0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/games@autozsys_lv1ui9                   0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib@autozsys_ps1e80                     0B      -      434M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib@autozsys_kqk0hl                     0B      -      434M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib@autozsys_2ivctg                   184K      -      434M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib@autozsys_3f11n0                   156K      -      434M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib@autozsys_lv1ui9                  2.13M      -      436M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService@autozsys_ps1e80     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService@autozsys_kqk0hl     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService@autozsys_2ivctg     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService@autozsys_3f11n0     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/AccountsService@autozsys_lv1ui9     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager@autozsys_ps1e80      0B      -      124K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager@autozsys_kqk0hl      0B      -      124K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager@autozsys_2ivctg     96K      -      132K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager@autozsys_3f11n0     96K      -      132K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/NetworkManager@autozsys_lv1ui9     96K      -      132K  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt@autozsys_ps1e80                 0B      -     55.7M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt@autozsys_kqk0hl                 0B      -     55.7M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt@autozsys_2ivctg                92K      -     55.7M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt@autozsys_3f11n0               100K      -     55.7M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/apt@autozsys_lv1ui9               124K      -     51.4M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg@autozsys_ps1e80                0B      -     32.3M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg@autozsys_kqk0hl                0B      -     32.3M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg@autozsys_2ivctg              944K      -     32.3M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg@autozsys_3f11n0             1.53M      -     33.1M  -
          rpool/ROOT/ubuntu_z6p9vy/var/lib/dpkg@autozsys_lv1ui9             1.44M      -     32.6M  -
          rpool/ROOT/ubuntu_z6p9vy/var/log@autozsys_ps1e80                    80K      -     1.80M  -
          rpool/ROOT/ubuntu_z6p9vy/var/log@autozsys_kqk0hl                    80K      -     1.80M  -
          rpool/ROOT/ubuntu_z6p9vy/var/log@autozsys_2ivctg                  1.16M      -     1.97M  -
          rpool/ROOT/ubuntu_z6p9vy/var/log@autozsys_3f11n0                  1.29M      -     2.15M  -
          rpool/ROOT/ubuntu_z6p9vy/var/log@autozsys_lv1ui9                  2.43M      -     3.40M  -
          rpool/ROOT/ubuntu_z6p9vy/var/mail@autozsys_ps1e80                    0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/mail@autozsys_kqk0hl                    0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/mail@autozsys_2ivctg                    0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/mail@autozsys_3f11n0                    0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/mail@autozsys_lv1ui9                    0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/snap@autozsys_ps1e80                    0B      -      104K  -
          rpool/ROOT/ubuntu_z6p9vy/var/snap@autozsys_kqk0hl                    0B      -      104K  -
          rpool/ROOT/ubuntu_z6p9vy/var/snap@autozsys_2ivctg                    0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/snap@autozsys_3f11n0                    0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/snap@autozsys_lv1ui9                    0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/spool@autozsys_ps1e80                   0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/spool@autozsys_kqk0hl                   0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/spool@autozsys_2ivctg                  56K      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/spool@autozsys_3f11n0                   0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/spool@autozsys_lv1ui9                   0B      -      112K  -
          rpool/ROOT/ubuntu_z6p9vy/var/www@autozsys_ps1e80                     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/www@autozsys_kqk0hl                     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/www@autozsys_2ivctg                     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/www@autozsys_3f11n0                     0B      -       96K  -
          rpool/ROOT/ubuntu_z6p9vy/var/www@autozsys_lv1ui9                     0B      -       96K  -
          rpool/USERDATA/jerry_0mgvki@autozsys_nt6t1g                        148K      -     1.69M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_ps1e80                          0B      -     1.74M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_kqk0hl                          0B      -     1.74M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_2ivctg                        364K      -     2.61M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_m38icf                       13.5M      -     39.6M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_x34i0j                       11.0M      -     77.5M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_h8dw72                       1.29M      -     82.0M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_3f11n0                        928K      -     82.3M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_lv1ui9                        748K      -     87.1M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_sgr37g                        728K      -     87.0M  -
          rpool/USERDATA/jerry_0mgvki@autozsys_qopq6v                        132K      -     87.0M  -
          rpool/USERDATA/root_0mgvki@autozsys_ps1e80                           0B      -      104K  -
          rpool/USERDATA/root_0mgvki@autozsys_kqk0hl                           0B      -      104K  -
          rpool/USERDATA/root_0mgvki@autozsys_2ivctg                           0B      -      104K  -
          rpool/USERDATA/root_0mgvki@autozsys_3f11n0                           0B      -      104K  -
          rpool/USERDATA/root_0mgvki@autozsys_lv1ui9                           0B      -      104K  -
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$
          You'll notice that there appears to be a lot of redundant snapshotting. My /home/jerry dataset,
          rpool/USERDATA/jerry_0mgvki,
          is snapshotted 11 times. Nine of those snapshots have different REFER values, which simply reflects how much the dataset's contents changed between snapshots. Here is what I did to make and destroy a snapshot of my home account:
          Code:
           sudo zfs snapshot rpool/USERDATA/jerry_0mgvki@SUNDAY04-26-20
           zfs list -t snapshot           # verify it is in the snapshot list
           rm -rf /home/jerry/Documents   # delete the Documents directory
           vdir                           # verify Documents is no longer present
           sudo zfs rollback rpool/USERDATA/jerry_0mgvki@SUNDAY04-26-20
           vdir                           # verify Documents has been restored
           sudo zfs destroy rpool/USERDATA/jerry_0mgvki@SUNDAY04-26-20
           zfs list -t snapshot           # verify that the snapshot has been destroyed
          Notice that I did not have to leave /home/jerry in order to roll /home/jerry back and recover the Documents folder. When I am running on BTRFS I usually "sudo -i" in a Konsole and mount the device BTRFS is running on to /mnt; so while I have left /home/jerry, I haven't really left the system, which is still running. If I were to replace @ and/or @home with a snapshot, I'd have to umount /mnt, exit root to return to /home/jerry, and reboot the system to make the snapshot the working system. That's easy to do, but not as convenient as ZFS.
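          For comparison, here is a minimal sketch of that BTRFS rollback dance. The device, subvolume layout, and snapshot name are just examples from my own setup; substitute your own:
          Code:
          sudo -i
          mount /dev/sda2 /mnt                       # mount the whole BTRFS volume, not a subvolume
          mv /mnt/@home /mnt/@home_old               # set the current subvolume aside
          btrfs subvolume snapshot /mnt/snapshots/@home_20200426 /mnt/@home   # promote a snapshot to @home
          umount /mnt
          exit
          reboot                                     # the snapshot is now the working /home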

          Here are the properties of my /home/jerry dataset:
          Code:
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$ zfs get all rpool/USERDATA/jerry_0mgvki
          NAME                         PROPERTY                         VALUE                            SOURCE
          rpool/USERDATA/jerry_0mgvki  type                             filesystem                       -
          rpool/USERDATA/jerry_0mgvki  creation                         Sun Apr 26 14:08 2020            -
          rpool/USERDATA/jerry_0mgvki  used                             126M                             -
          rpool/USERDATA/jerry_0mgvki  available                        22.2G                            -
          rpool/USERDATA/jerry_0mgvki  referenced                       87.0M                            -
          rpool/USERDATA/jerry_0mgvki  compressratio                    1.69x                            -
          rpool/USERDATA/jerry_0mgvki  mounted                          yes                              -
          rpool/USERDATA/jerry_0mgvki  quota                            none                             default
          rpool/USERDATA/jerry_0mgvki  reservation                      none                             default
          rpool/USERDATA/jerry_0mgvki  recordsize                       128K                             default
          rpool/USERDATA/jerry_0mgvki  mountpoint                       /home/jerry                      local
          rpool/USERDATA/jerry_0mgvki  sharenfs                         off                              default
          rpool/USERDATA/jerry_0mgvki  checksum                         on                               default
          rpool/USERDATA/jerry_0mgvki  compression                      lz4                              inherited from rpool
          rpool/USERDATA/jerry_0mgvki  atime                            on                               default
          rpool/USERDATA/jerry_0mgvki  devices                          on                               default
          rpool/USERDATA/jerry_0mgvki  exec                             on                               default
          rpool/USERDATA/jerry_0mgvki  setuid                           on                               default
          rpool/USERDATA/jerry_0mgvki  readonly                         off                              default
          rpool/USERDATA/jerry_0mgvki  zoned                            off                              default
          rpool/USERDATA/jerry_0mgvki  snapdir                          hidden                           default
          rpool/USERDATA/jerry_0mgvki  aclinherit                       restricted                       default
          rpool/USERDATA/jerry_0mgvki  createtxg                        152                              -
          rpool/USERDATA/jerry_0mgvki  canmount                         on                               local
          rpool/USERDATA/jerry_0mgvki  xattr                            sa                               inherited from rpool
          rpool/USERDATA/jerry_0mgvki  copies                           1                                default
          rpool/USERDATA/jerry_0mgvki  version                          5                                -
          rpool/USERDATA/jerry_0mgvki  utf8only                         on                               -
          rpool/USERDATA/jerry_0mgvki  normalization                    formD                            -
          rpool/USERDATA/jerry_0mgvki  casesensitivity                  sensitive                        -
          rpool/USERDATA/jerry_0mgvki  vscan                            off                              default
          rpool/USERDATA/jerry_0mgvki  nbmand                           off                              default
          rpool/USERDATA/jerry_0mgvki  sharesmb                         off                              default
          rpool/USERDATA/jerry_0mgvki  refquota                         none                             default
          rpool/USERDATA/jerry_0mgvki  refreservation                   none                             default
          rpool/USERDATA/jerry_0mgvki  guid                             8843932201193722231              -
          rpool/USERDATA/jerry_0mgvki  primarycache                     all                              default
          rpool/USERDATA/jerry_0mgvki  secondarycache                   all                              default
          rpool/USERDATA/jerry_0mgvki  usedbysnapshots                  39.4M                            -
          rpool/USERDATA/jerry_0mgvki  usedbydataset                    87.0M                            -
          rpool/USERDATA/jerry_0mgvki  usedbychildren                   0B                               -
          rpool/USERDATA/jerry_0mgvki  usedbyrefreservation             0B                               -
          rpool/USERDATA/jerry_0mgvki  logbias                          latency                          default
          rpool/USERDATA/jerry_0mgvki  objsetid                         216                              -
          rpool/USERDATA/jerry_0mgvki  dedup                            off                              default
          rpool/USERDATA/jerry_0mgvki  mlslabel                         none                             default
          rpool/USERDATA/jerry_0mgvki  sync                             standard                         inherited from rpool
          rpool/USERDATA/jerry_0mgvki  dnodesize                        auto                             inherited from rpool
          rpool/USERDATA/jerry_0mgvki  refcompressratio                 1.57x                            -
          rpool/USERDATA/jerry_0mgvki  written                          0                                -
          rpool/USERDATA/jerry_0mgvki  logicalused                      202M                             -
          rpool/USERDATA/jerry_0mgvki  logicalreferenced                131M                             -
          rpool/USERDATA/jerry_0mgvki  volmode                          default                          default
          rpool/USERDATA/jerry_0mgvki  filesystem_limit                 none                             default
          rpool/USERDATA/jerry_0mgvki  snapshot_limit                   none                             default
          rpool/USERDATA/jerry_0mgvki  filesystem_count                 none                             default
          rpool/USERDATA/jerry_0mgvki  snapshot_count                   none                             default
          rpool/USERDATA/jerry_0mgvki  snapdev                          hidden                           default
          rpool/USERDATA/jerry_0mgvki  acltype                          posixacl                         inherited from rpool
          rpool/USERDATA/jerry_0mgvki  context                          none                             default
          rpool/USERDATA/jerry_0mgvki  fscontext                        none                             default
          rpool/USERDATA/jerry_0mgvki  defcontext                       none                             default
          rpool/USERDATA/jerry_0mgvki  rootcontext                      none                             default
          rpool/USERDATA/jerry_0mgvki  relatime                         on                               inherited from rpool
          rpool/USERDATA/jerry_0mgvki  redundant_metadata               all                              default
          rpool/USERDATA/jerry_0mgvki  overlay                          off                              default
          rpool/USERDATA/jerry_0mgvki  encryption                       off                              default
          rpool/USERDATA/jerry_0mgvki  keylocation                      none                             default
          rpool/USERDATA/jerry_0mgvki  keyformat                        none                             default
          rpool/USERDATA/jerry_0mgvki  pbkdf2iters                      0                                default
          rpool/USERDATA/jerry_0mgvki  special_small_blocks             0                                default
          rpool/USERDATA/jerry_0mgvki  com.ubuntu.zsys:bootfs-datasets  rpool/ROOT/ubuntu_z6p9vy         local
          rpool/USERDATA/jerry_0mgvki  com.ubuntu.zsys:last-used        1587951969                       local
          jerry@jerry-Standard-PC-Q35-ICH9-2009:~$
          There are 70+ properties associated with my dataset. I don't know how many of them are user-tweakable. I know of only one property that is user-settable in BTRFS: the r/w property of a snapshot.

          So, as you can see, there is a lot to learn. I'm 78. I don't know if I have enough time left to learn it all, or even enough of it to be useful.

          EDIT:
          I forgot to add how ZFS properties are set on a pool or dataset:

          zfs set someproperty=somevalue somepool

          Setting a property in BTRFS looks like:

          btrfs property set -ts /path/to/snapshot ro false

          which changes a read-only (ro) snapshot to read/write, using somewhat backward logic. There are only a handful of properties that can be set in BTRFS, and the one shown is the only one I've ever used.
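          A concrete, low-risk ZFS example, using the dataset from my VM above (substitute your own): turn atime off and confirm the change took.
          Code:
          zfs get atime rpool/USERDATA/jerry_0mgvki          # check the current value
          sudo zfs set atime=off rpool/USERDATA/jerry_0mgvki
          zfs get atime rpool/USERDATA/jerry_0mgvki          # SOURCE column now reads "local"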
          Last edited by GreyGeek; Apr 27, 2020, 12:42 PM.
          "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
          – John F. Kennedy, February 26, 1962.



            #6
            I downloaded a fresh Kubuntu 20.04 ISO and created a VM in order to check whether it allows using ZFS. It does not.
            So, I will continue with Ubuntu 20.04.
            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
            – John F. Kennedy, February 26, 1962.



              #7
              I did the same as you, to see the new Ubuntu and to try out ZFS and maybe learn how to use it. Before I do so, I'm going to look for a decent manual. But I'm happy with BTRFS so far--it made my clean install of Kubuntu 20.04 go quite smoothly, apart from errors I made. Thanks to all for the feedback.



                #8
                I thought that perhaps I could reinstall UbuntuDDE and play with Deepin and ZFS at the same time. Alas, just like Kubuntu, UbuntuDDE does not offer the "Advanced ..." button that leads to a ZFS installation. Both allow BTRFS, of course.

                In order to run BTRFS's check utility you have to take down the <ROOT_FS>, i.e., shut down your computer and boot into a USB live medium that has the BTRFS tools installed. Just creating a USB live stick won't cut it, because BTRFS isn't included by default; BTRFS has to be selected during the disk-setup part of the installation routine. So, from a Kubuntu USB live stick you install Kubuntu onto another USB stick, choosing BTRFS as the root filesystem. That gives you a USB live stick of Kubuntu that runs on top of BTRFS and contains all the BTRFS utilities. THEN the btrfs check utility can be run against the internal drive containing the BTRFS system volumes @ and @home, which, of course, are not mounted. I've been using BTRFS for 4 1/2 years, and I have never needed to run a check on my system because it has never given me a single problem.
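                Once booted from that stick, the check itself is a one-liner; the device below is only an example, so point it at whichever partition holds your @ and @home:
                Code:
                sudo btrfs check --readonly /dev/sda2    # read-only check of the unmounted BTRFS partition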

                With ZFS, a single disk or a striped set has no error correction, only error detection. It would be ideal if one could merely install zfs-initramfs to add the ZFS utilities to the running system, but one has to follow the same method outlined for the BTRFS USB stick. Also, ZFS doesn't have a "check" utility or an fsck utility, but it does have scrub. While ZFS is performing a scrub on your pool, it is checking every block in the storage pool against its known checksum. Scrubbing ZFS storage pools is not something that happens automatically; you need to do it manually, and it's highly recommended that you do it on a regular schedule, depending on the quality of your hardware: once a month for good hardware, once a week for consumer-grade hardware. Unlike BTRFS, the pool you are checking does NOT need to be taken offline.
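                Starting and monitoring one is simple (rpool is the root pool name the Ubuntu installer uses):
                Code:
                sudo zpool scrub rpool    # start a scrub; the pool stays online and usable
                zpool status rpool        # shows scrub progress and any checksum errors found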

                Being able to scrub while continuing to use the filesystem gives ZFS the advantage in repairing data errors.

                Using the mkusb utility (added via their PPA; make sure you also install usb-pack-efi), one can create a persistent USB live stick with both the BTRFS and ZFS tools installed, from which those commands can be run against the non-running system that needs to be checked. Install zfs-initramfs to use OpenZFS (not zfs-fuse) and btrfs-progs to use BTRFS. With those two installed it doesn't matter what file system the persistent USB live stick itself is using. In fact, a persistent stick should use NTFS for its persistent storage partition so that it is readable by all three major OSs.
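                The package side of that, as a minimal sketch; I'm assuming the PPA is ppa:mkusb/ppa, so check mkusb's page if it has moved:
                Code:
                sudo add-apt-repository ppa:mkusb/ppa
                sudo apt update
                sudo apt install mkusb usb-pack-efi           # tools for building the persistent live stick
                sudo apt install zfs-initramfs btrfs-progs    # on the stick itself: OpenZFS and BTRFS utilities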
                "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                – John F. Kennedy, February 26, 1962.



                  #9
                  ZFS vs. BTRFS, continued

                  One of the features of BTRFS is the ability to mount the <ROOT_FS>, usually to /mnt, and then navigate through any snapshots stored under /mnt/snapshots using Dolphin or mc, and copy any file found inside that snapshot to any subdirectory in your running system. BTRFS snapshots are truly independent. I can delete any snapshot regardless of when it was created or if any were created before or after it. I can also rollback to any snapshot without affecting the other snapshots, aside from the normal CoW operations.

                  One can browse a ZFS snapshot and pull files from it, even though it is read-only, and one doesn't even have to mount it because it is part of the running system. Rolling back to a specific snapshot destroys any snapshots made after it. A clone created from a snapshot is rw. If one wants to destroy a snapshot, any clones made from that snapshot must be destroyed first; the "-R" flag of zfs destroy will do that automatically.
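                  Browsing works through the hidden .zfs directory at the root of each dataset (hidden because snapdir=hidden in the properties listed earlier). Pulling a single file back out looks roughly like this; the snapshot and file names are only examples:
                  Code:
                  ls /home/jerry/.zfs/snapshot/                    # one directory per snapshot of the dataset
                  cp /home/jerry/.zfs/snapshot/SUNDAY04-26-20/Documents/notes.txt /home/jerry/Documents/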

                  ZFS Virtual Devices (ZFS VDEVs)

                  A VDEV is a meta-device that can represent one or more devices. ZFS supports 7 different types of VDEV:
                  • File - a pre-allocated file
                  • Physical Drive (HDD, SSD, PCIe NVMe, etc.)
                  • Mirror - a standard RAID1 mirror
                  • ZFS software raidz1, raidz2, raidz3 'distributed' parity based RAID
                  • Hot Spare - hot spare for ZFS software raid.
                  • Cache - a device for level 2 adaptive read cache (ZFS L2ARC)
                  • Log - ZFS Intent Log (ZFS ZIL)

                  VDEVs are dynamically striped by ZFS. A device can be added to a VDEV, but cannot be removed from it.
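                  To make that concrete, here is a hedged sketch of a pool that combines several VDEV types at once; the device paths are placeholders, and a raidz1 needs at least three disks:
                  Code:
                  # raidz1 data VDEV across three disks, plus a log (ZIL), a cache (L2ARC), and a hot spare
                  sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd log /dev/sde cache /dev/sdf spare /dev/sdg
                  zpool status tank    # lists each VDEV and its role in the pool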

                  Something that ZFS has and BTRFS doesn't (to the best of my knowledge) is virtual devices, the VDEVs described above. ZFS uses adaptive read caching, called ARC, which is RAM intensive; all but about 1 GB of RAM is usually given to the ARC. There is also a "level 2 adaptive read cache", called L2ARC, or just "cache". Because ZFS reads and writes aggressively to storage devices, it can wear out USB sticks, which do not have wear leveling, pretty fast. To minimize that, a cache device is used. You can set up the cache to minimize the USB wear by using

                  zfs set secondarycache=metadata pool

                  where secondarycache is a property, metadata is a value, and pool is the pool the property is being applied to. Create a cache in the pool and it will minimize uneven wear on SSDs or USB sticks. Just use

                  zpool add -f pool cache /dev/disk/by-id/usb1id /dev/disk/by-id/usb2id /dev/disk/by-id/usb3id ... /dev/disk/by-id/usbnid

                  BTRFS has neither virtual devices nor a "secondarycache" setting. It wouldn't be a good idea to install it on a group of USB sticks in order to test out various RAID configurations, unless you don't mind them wearing out quickly.


                  At this point I began re-evaluating my plan to explore ZFS.
                  Most of its vaunted reliability is lost when it is used on a typical single-drive laptop or desktop with 16GB or less of non-ECC memory. On a single drive it will detect errors but cannot correct them; that takes redundancy (a mirror or raidz) plus a scrub. Run it with 4 or 8 GB of memory and the adaptive read cache isn't as quick, and it leaves less memory for your applications to run in. 16 GB, or better yet 64 GB, of ECC memory is best in order to make ZFS fast and still leave enough memory for your applications to run fast as well. 64 GB of good ECC memory sells for around $200 at Amazon.
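                  If the ARC's appetite for RAM is the worry, it can be capped through the zfs kernel module. A minimal sketch, with 4 GiB chosen purely as an example:
                  Code:
                  # cap the ARC at 4 GiB (value is in bytes); applied at module load, so refresh the initramfs and reboot
                  echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
                  sudo update-initramfs -u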

                  Snapshotting and rollbacks are quicker in ZFS, but even doing it manually it takes me less than a minute to snapshot, less than two minutes to do an incremental backup to my 2nd SSD or my spinner, and rollbacks aren't much slower. Plus, my snapshots are stored outside of @ and @home, so to lose them my HD would have to suffer a hardware failure, and even ZFS can't prevent that on non-commercial systems.
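                  That incremental backup is just btrfs send/receive with a parent snapshot. A rough sketch; the mount points and snapshot names follow my own habits rather than anything canonical:
                  Code:
                  # take today's read-only snapshot of @home, then send only the changes since yesterday's
                  sudo btrfs subvolume snapshot -r /mnt/@home /mnt/snapshots/@home_today
                  sudo btrfs send -p /mnt/snapshots/@home_yesterday /mnt/snapshots/@home_today \
                       | sudo btrfs receive /backup/snapshots/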

                  All in all, IMO, BTRFS is the best filesystem to use on my laptop, even though I have an Nvidia GPU and an i7 CPU with 16GB of RAM and three storage devices. One is a Toshiba spinner on its last legs; the other two are Samsung 860 EVO 500GB SSDs. Their statistics are:

                                        SSD 1               SSD 2
                  Power-on time         282 days 20 hours   90 days 4 hours
                  Est. time remaining   > 1000 days         > 1000 days
                  Lifetime writes       10.54 TB            435.71 GB

                  I'm averaging about 12 hours a day on this laptop, usually between 12 noon and 12 midnight. BTRFS has been gentle to my SSDs. I think I'll keep it that way.
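                  For anyone wanting to pull the same numbers for their own drives, smartmontools is one way to get them; the lifetime-writes figure comes from the Total_LBAs_Written attribute (multiply by the sector size):
                  Code:
                  sudo apt install smartmontools
                  sudo smartctl -a /dev/sda | grep -iE "power_on_hours|total_lbas_written"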

                  Signing off of my ZFS experiment, which was triggered by UbuntuDDE. UDDE is interesting, but nowhere near Kubuntu in power and ease of use. Running UDDE on ZFS would offer nothing new but a pretty hummingbird wallpaper, which I'll probably download and use.
                  Last edited by GreyGeek; Apr 27, 2020, 02:42 PM.
                  "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                  – John F. Kennedy, February 26, 1962.
