Dual booting *buntu EFI installs???


    #16
    IMO multiple Linux installs with several instances of grub, fighting over who controls the boot, sooner or later gives trouble.

    One approach to avoid this is to designate one install as the one in control, and uninstall grub from the others. When installing the non-controlling ones, persuade the installer not to install a boot loader at all. With the old ubiquity installer one could give it the -b switch, and with calamares just not tell it about an EFI partition; it complains but continues. This approach gives problems if one wants to use btrfs to hedge bets on a big change to the controlling install, usually a release upgrade, by having the before and after installs runnable at the same time; then there are two installs vying for control of the boot, and when you drop one of them it might be the one in control at that moment.
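
    For reference, the old installer switch looked like this (just a sketch; -b was ubiquity's "don't install a boot loader" option):
    Code:
    # start the old ubiquity installer without installing a boot loader
    sudo ubiquity -b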

    Another approach, which I adopted after a debacle (about precise, 12.04), is to install grub independently of any install. Before I moved to btrfs I had a small partition just for grub, and with btrfs a grub subvolume. One has to give up the grub configuration machinery and manually maintain grub.cfg, but IME that's far simpler and gives far, far less trouble; the grub update system breaks easily. (At one point I had several *buntus, a gentoo, and a Centos, with some of the *buntus having several releases, all in a 200 GB drive.) To boot an ubuntu install one just needs, for example, the stanza
    Code:
    menuentry 'Kubuntu' {
        search --no-floppy --set=root --label "mainn"
        linux /@r/boot/vmlinuz root=LABEL=mainn ro rootflags=subvol=@r
        initrd /@r/boot/initrd.img
    }
    Use of /boot/vmlinuz and /boot/initrd.img, symlinks to the latest kernel maintained by APT, means it will work indefinitely, and the use of the label makes it readable and thus less error-prone (using labels in /etc/fstab has a similar benefit). Several installs can have grub and their own /boot/grub, but if they don't mount the ESP they can't mess with it. (Release upgrades don't like this, but it's easy to fix what they break.) I've gone for years without booting needing attention, very different to what happened before.
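
    For anyone wanting to try the label approach, a minimal sketch (the device path is just an example; the label and subvolume match the stanza above):
    Code:
    # give the btrfs filesystem a label (device path is an example)
    sudo btrfs filesystem label /dev/sda2 mainn

    # /etc/fstab line referencing the label rather than a UUID or device path
    LABEL=mainn  /  btrfs  defaults,subvol=@r  0  0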

    However, I suspect a better approach is to use VMs for multiple installs. Maybe I'll investigate it one day. I imagine that would use a lot more storage compared to having multiple installs in one btrfs. I don't know how that would cope with the before and after release upgrade scenario; can one easily move an install into a VM?
    Regards, John Little



      #17
      Originally posted by jlittle View Post
      IMO multiple Linux installs with several instances of grub, fighting over who controls the boot, sooner or later gives trouble.
      I don't think there are any actual conflicts with multiple bootloaders and grubs on EFI, since the one used is the one set in the firmware (BIOS) settings. This of course doesn't include multiple installs of the same exact operating system on the same drive (aka Ubuntu and Kubuntu, or Fedora and Fedora's KDE spin).

      I think that systemd-boot could be a useful and somewhat similar method to a manual grub entry, and it seems simpler to configure (manually), but I 101% agree with using virtual machines for multiple versions of the same distro (I use virt-manager myself).
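
      For comparison, a systemd-boot entry is just a small text file under loader/entries on the ESP; a rough sketch (the file name and paths are illustrative, the label and subvolume are borrowed from the stanza above, and the kernel and initrd have to be copied onto the ESP because systemd-boot can't read btrfs):
      Code:
      # /boot/efi/loader/entries/kubuntu.conf (illustrative)
      title   Kubuntu
      linux   /kubuntu/vmlinuz
      initrd  /kubuntu/initrd.img
      options root=LABEL=mainn ro rootflags=subvol=@r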



        #18
        I've never had multiple instances of GRUB cause any problem. Just like any other application, it does what it's "told" to do, so it's always user error that's the issue. Having GRUB installed on every drive on my system has been a long-standing practice to avoid having to boot from USB to make a repair on the rare occasion it's needed. Keeping track of which GRUB boots which install is the user's task. Of course, I'm also fully capable of booting an install from the GRUB console, which makes a successful "rescue" even more likely.
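
        For anyone curious, a manual boot from the GRUB console is only a handful of commands; a rough sketch (the partition, subvolume, and root device are examples):
        Code:
        grub> ls                          # see which drives/partitions GRUB found
        grub> set root=(hd0,gpt2)         # the partition holding the install (example)
        grub> linux /@/boot/vmlinuz root=/dev/sda2 ro rootflags=subvol=@
        grub> initrd /@/boot/initrd.img
        grub> boot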

        However, I have read many complaints about EFI booting having problems with multiple installs. To be fair, mostly because of Windows. Since Windows doesn't "know" about GRUB, IME it never messes with it - as long as you have Windows on a different drive. IME multiple "flavors" of Ubuntu using the same folders under /efi seem problematic and very poorly thought out. Clearly, no one wants anyone to dual-boot those distros.

        The key for me to manage multiple GRUB booting is to have a dedicated GRUB install; an installation whose only function is to boot to itself and provide a menu to boot to whatever other install I choose. Rather than modifying (or caring) whether an install is using symlinks or direct kernel selection, I simply chain-load the other installs' grub.cfg. This also has the advantage of allowing me to "back out" of a boot selection because of the nested menus. It also makes the custom stanzas very simple:

        Code:
        menuentry 'Kubuntu 24.04' --class kubuntu {
            insmod part_gpt
            insmod btrfs
            search --no-floppy --fs-uuid --set=root 247e6a5b-351d-4704-b852-c50964d2ee6
            configfile /@kubuntu2404/boot/grub/grub.cfg
        }
        To add or change a boot entry, I need only change the configfile location and the title. This works because I'm using BTRFS and all my installs other than Win11 are on it. I do like using the label instead of the UUID, but I haven't fully transitioned to that yet for GRUB because if it ain't broke...
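
        If I do switch to labels, the same stanza would presumably just swap the search line, something like this (the label name here is made up):
        Code:
        menuentry 'Kubuntu 24.04' --class kubuntu {
            insmod part_gpt
            insmod btrfs
            search --no-floppy --set=root --label "btrfs_pool"
            configfile /@kubuntu2404/boot/grub/grub.cfg
        }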

        Please Read Me



          #19
          Originally posted by jlittle View Post
          However, I suspect a better approach is to use VMs for multiple installs. Maybe I'll investigate it one day. I imagine that would use a lot more storage compared to having multiple installs in one btrfs. I don't know how that would cope with the before and after release upgrade scenario; can one easily move an install into a VM?
          Using QEMU/KVM and BTRFS, I've moved an Ubuntu server install from a VM to metal and I suspect moving one into a VM wouldn't be any more difficult, but I haven't tried it. I suppose one could run into a hardware incompatibility but since most drivers are kernel-bound I think it could work.
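
           For what it's worth, one plausible route is btrfs send/receive of a read-only snapshot of the guest's root subvolume; a rough sketch (mount points, subvolume names, and the target host are examples, and fstab, the boot entry, and any VM-specific bits would still need sorting out afterwards):
           Code:
           # inside the VM: snapshot the root subvolume read-only
           sudo btrfs subvolume snapshot -r /mnt/top/@ /mnt/top/@vm-ro

           # stream it to the target machine's btrfs (run as root on both ends)
           sudo btrfs send /mnt/top/@vm-ro | ssh root@target "btrfs receive /mnt/top"

           # on the target: make a writable copy to boot from
           sudo btrfs subvolume snapshot /mnt/top/@vm-ro /mnt/top/@fromvm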


          Please Read Me



            #20
            Originally posted by claydoh View Post
            This of course doesn't include multiple installs of the same exact operating system on the same drive (aka Ubuntu and Kubuntu, or Fedora and Fedora's KDE spin)
            This is where GRUB is superior.

            Please Read Me



              #21
              Originally posted by oshunluvr View Post
              The key for me to manage multiple GRUB booting is to have a dedicated GRUB install; an installation whose only function is to boot to itself and provide a menu to boot to whatever other install I choose. Rather than modifying (or caring) whether an install is using symlinks or direct kernel selection, I simply chain-load the other installs' grub.cfg. This also has the advantage of allowing me to "back out" of a boot selection because of the nested menus. It also makes the custom stanzas very simple:

              Too much work, lol.
              Whichever boot entry one sets in the bios as primary becomes that one grub to rule them all, so to speak. You simply ignore the other bootloaders completely. The only thing that has needed doing in the past couple of years is re-enabling os-prober in grub so that any new or updated OS installs are picked up when grub gets updated.
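
              For anyone who hasn't flipped it yet, re-enabling os-prober is a one-line change plus a menu regen (stock Ubuntu paths):
              Code:
              # /etc/default/grub
              GRUB_DISABLE_OS_PROBER=false

              # then rebuild the menu
              sudo update-grub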

              No special steps or anything. I'm trying hard to recall any time in the past 8 or so years when I've ever had a boot issue that wasn't self created.

              I'm not seeing a ton of people with UEFI issues in my daily trawling of the numerous places I lurk and hang out in. I think we may be making things seem more complicated than they actually are. Plus my usual over-explaining mangling things.
              Last edited by claydoh; Sep 25, 2024, 08:43 AM. Reason: I should stop posting via mobile. Either VBulletin or Firefox suck at this



                #22
                Back in the GRUB days, I came to realize that a dedicated GRUB partition was a good way to go.
                But I also can agree with
                Whichever boot entry one sets in the bios as primary becomes that one grub to rule them all, so to speak.
                You can use UEFI/BIOS as your boot manager (employing that change-of-label trick I referenced in my how-to).
                And you can just use rEFInd by Rod Smith as the boot manager/boot loader (stub loading the kernel directly).

                None of these 4 options should really cause too much hassle.
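
                For the "UEFI as boot manager" option, efibootmgr can do from a running system what the firmware setup screen does; a small sketch (the entry numbers are only examples):
                Code:
                # list firmware boot entries and the current BootOrder
                efibootmgr

                # put entry 0003 first in the boot order (example numbers)
                sudo efibootmgr -o 0003,0001,0000

                # or boot entry 0001 just once, on the next reboot
                sudo efibootmgr -n 0001
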
                Last edited by Qqmike; Sep 25, 2024, 12:03 PM. Reason: I wrote UEFI incorrectly as EUFI
                An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



                  #23
                  OK, here's a question/idea:

                  Since booting to EFI still loads a grub menu, can I just put an entry in 40_custom and have two (or more) installs bootable from a single EFI entry > GRUB menu?

                  The parameters are: Two or more *buntu installs on the same BTRFS file system in separate subvolumes.

                  My hacked Chrome-box has Kubuntu 22.04 (the EFI default) and Kubuntu 24.04 (via do-release-upgrade) on it in separate subvolumes. Couldn't I just add a boot entry for 24.04 in the 22.04 GRUB menu?
                  Last edited by oshunluvr; Oct 01, 2024, 09:05 PM.

                  Please Read Me



                    #24
                    Well, that worked! Chain loading the grub menu did not work for some reason, but using vmlinuz and initrd.img in a custom stanza let me boot to 24.04.
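
                    In case it helps anyone else, the kind of stanza I'd sketch for 40_custom looks something like this (the label is illustrative; adjust the subvolume name to suit, then run sudo update-grub):
                    Code:
                    menuentry 'Kubuntu 24.04 (subvolume)' {
                        insmod btrfs
                        search --no-floppy --set=root --label "btrfs_pool"
                        linux /@kubuntu2404/boot/vmlinuz root=LABEL=btrfs_pool ro rootflags=subvol=@kubuntu2404
                        initrd /@kubuntu2404/boot/initrd.img
                    }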

                    Please Read Me



                      #25
                      Assuming the Chromebox is using a full UEFI ROM to replace the stock firmware, I don't see why not. No idea how it works with the rw-legacy firmware option. Heck, I can't recall if a Chromebox needs any of that like Chromebooks do.


                      For grub it probably just needs the subvol in the boot stanza replaced with the desired snapshot of the root file system.
                      Also, if nothing else, this might be used as a reference for the process:
                      https://github.com/Antynea/grub-btrfs

                      I am sure this can be done with grub, but I do know that my Fedora Kinoite install on a previous Chromebook uses systemd-boot, which allows booting the previous snapshot by default, and also allowed booting either Fedora 39 or 40 after I upgraded. IIRC this is just scripted. So there are options, for sure. systemd-boot is supposed to be simpler and easier to configure, but it is very plain and simple compared to grub.



                        #26
                        For the Chromebox to run Linux, I had to install a custom EFI firmware and I have no plans to mess with that. It was 2018 and I see no reason to start over. Kubuntu 18.04 + 22.04 ran perfectly fine and I can dual boot 24.04 without messing with EFI - so I won't be doing that.


                        Please Read Me

