Grub mystery

This topic is closed.

    #16
    Bingo!
    You da mon. You da wizard.
    I could see it immediately, as I have xplanetFX as wallpaper, and it was morning. (it's 7PM here now).
    And then conky shows sdb1 as / and sdb4 as /boot/efi.
    I'll see if I can add some tags (in case someone has a similar problem) and mark it Solved.

    [EDIT] Couldn't find a "tagging" tool, I just added them to the top of the first post, hope it's OK.
    Last edited by Don B. Cilly; Jun 13, 2019, 11:22 AM.



      #17
      I hashtagged your first post. The option to create hashtags is in the editor, but you have to click on the Go Advanced button to see it. It's the # icon on the bottom row at the right.
      Windows no longer obstructs my view.
      Using Kubuntu Linux since March 23, 2007.
      "It is a capital mistake to theorize before one has data." - Sherlock Holmes



        #18
        Nice. Thanks.



          #19
          The mystery deepens. The plot thickens. Global annoyance levels rise... ;·)
          Mind you. The problem is solved. I have neon running on the SSD. And a good thing it is, because it boots a lot faster.

          There are glitches though.
          The SSD neon was all of two days outdated. The other day I mounted the old neon to recover some minor stuff that was not in the new one.
          I start with a text file, copy it to my Documents (on the new one). Dolphin says, are you stupid or what, you're trying to copy a file onto itself.
          So I think, is Dolphin stupid or what, I'm doing no such thing... then I notice that Conky reports the mounted partition as /dev/sda2. It should be (and was a moment before) /dev/sdb1.
          Funny thing 2, the disk usage bar (reads sdb1) is empty. But the lua bar graph which also reads sdb1 is showing disk activity.
          Nothing else untoward is apparent.

          So I reboot. Back to normal. I re-mount sda2, back to impossible.
          Reboot, and - even though I do remember the "Do not meddle in the affairs of wizards" quote - I play the dunce and install grub to sdb. Because, I say, if the older disk fails, I have an EFI on the SSD...
          Nothing bad happens, except the efi partition used to show as sdb4, now it's sda4.
          Oh well. Irrelevant... or is it.

          The thing that slightly worries me is: if I decide to use that sda partition for something else, like trying out a distro, is it going to completely mess up all of my disk tables?
          Because - also - grub-install tells me:

          Code:
          Installing for x86_64-efi platform.
          GUID Partition Table Header signature is wrong: be5608740128e852 != 5452415020494645
          GUID Partition Table Header signature is wrong: 0 != 5452415020494645
          GUID Partition Table Header signature is wrong: be5608740128e852 != 5452415020494645
          GUID Partition Table Header signature is wrong: 0 != 5452415020494645
          GUID Partition Table Header signature is wrong: be5608740128e852 != 5452415020494645
          GUID Partition Table Header signature is wrong: 0 != 5452415020494645
          Installation finished. No error reported.
          Which seems a funny way of reporting no errors.
          So, even though everything works (except mounting sda2 - I tried copying stuff from sdc and it was fine) I'm a little uneasy as to future developments.
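          Those "signature is wrong" numbers decode neatly, as it happens: a valid GPT header begins with the 8-byte signature "EFI PART", which is exactly 5452415020494645 when read back as a little-endian 64-bit integer. A quick illustrative snippet (not from grub itself, just showing the decode):

          ```python
          # The signature value grub expects, as printed in the warnings above.
          expected = "5452415020494645"

          # Reverse the bytes (little-endian storage) to recover the ASCII text.
          sig_bytes = bytes.fromhex(expected)[::-1]
          print(sig_bytes)  # b'EFI PART'
          ```

          So the warnings mean grub probed headers containing 0 or garbage (be5608740128e852) where a GPT disk would say "EFI PART" - presumably stale or zeroed headers on one of the disks (an assumption; the warnings alone don't say which one). The install itself still succeeded, which is why it ends with "No error reported".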



            #20
            Addendum:
            Now I remember, I tried writing grub to sdb not just as a redundancy measure, but because, well, it was booting the SSD, but not as the default entry.
            It still had the "HDD" one as default. It didn't change.
            Which is a bit funny, isn't it? If I install and update from the SSD version, shouldn't it set that as default?
            And the fact that before writing to sdb it reported sdb4 as /boot/efi and after, sda4?



              #21
              Originally posted by Don B. Cilly:
              ...
              It still had the "HDD" one as default. It didn't change...
              I'm not sure grub and the computer are doing exactly what you think they are. I've been quite confused on this sometimes.
              Which is a bit funny, isn't it... the fact that before writing to sdb it reported sdb4 as /boot/efi and after, sda4?
              There are no guarantees that the device names in Linux remain the same between boots. On some systems they change depending on what you've got plugged in (not just drives), and on others on timing. So the installers set things up using UUIDs, and grub uses them (by default) too.
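              This is, for instance, why the installer writes /etc/fstab with UUID= references rather than /dev names. A sketch of what that looks like (the UUIDs here are made up, not taken from your system):

              ```text
              # /etc/fstab - mount by UUID so a rename like sdb1 -> sda2 doesn't matter.
              UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd  /          ext4  errors=remount-ro  0  1
              UUID=ABCD-1234                             /boot/efi  vfat  umask=0077         0  1
              ```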

              I suggest:

              • Checking all the UUIDs of your drives and partitions using gparted. If they're not unique or messed up you can set new ones, though running update-grub would become necessary. This might clean up those partition header warnings.
              • UUIDs are user unfriendly IMO; labels are much easier to remember and to use generally. Start by setting labels you like on each device and partition; even if you don't get the OSes and grub to use them, Dolphin will.
              • Consider ditching the debian grub script machinery and using a grub.cfg you edit manually. There's a learning curve, but if your installs are dynamic, by which I mean some may come and go, you'll save a lot of time, effort, and failed boots in the not very long run. With device and partition labels things become straightforward. (Keeping the debian grub stuff out of it needs a couple of tricks.) Some of us here have done this for many years.
              • Consider using btrfs, with btrfs subvolumes rather than partitions for different installs. Free space is shared, and managing partitions is only needed for OSes that don't support btrfs. This might allow you to have them all on the same SSD. As well as booting faster, installing can be faster too.

                (I can install a *buntu release in 7 minutes, from download complete to install complete, booting from the iso on the SSD.)
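              A hand-maintained entry of the sort described above might look like this (a sketch only; the menu title, label, and kernel/initrd paths are assumptions, not taken from a working config):

              ```text
              # Hand-written grub.cfg menuentry that locates the root filesystem
              # by label, so it keeps working when sda/sdb swap around.
              menuentry "Neon (SSD)" {
                  search --no-floppy --label --set=root SSD_Neon
                  linux  /boot/vmlinuz root=LABEL=SSD_Neon ro quiet
                  initrd /boot/initrd.img
              }
              ```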


              Sent from my VFD 822 using Tapatalk
              Regards, John Little



                #22
                This is what blkid is reporting as of now:
                Code:
                /dev/sdb1: LABEL="SSD_Neon" UUID="[B]3331581b-e3a3-4976-86a3-bbfc025262be[/B]" TYPE="ext4" PARTUUID="6e09a355-01"
                /dev/sdb2: LABEL="SSD2" UUID="5c3af4ec-b016-4eea-accd-23d517b87248" TYPE="ext4" PARTUUID="6e09a355-02"
                /dev/sdb4: UUID="754B-22C4" TYPE="vfat" PARTUUID="6e09a355-04"
                /dev/sdb5: UUID="bbb7dbfc-e767-43c9-8b99-614bbaab2cc5" TYPE="swap" PARTUUID="6e09a355-05"
                /dev/sda1: LABEL="K14" UUID="4d8f0727-3f74-46ca-9b8f-bb6aa04ac6f8" TYPE="ext4" PARTUUID="000aef22-01"
                /dev/sda2: LABEL="NEON" UUID="[B]879b2424-5b36-49b4-a53b-f51dbde63b30[/B]" TYPE="ext4" PARTUUID="000aef22-02"
                /dev/sda3: UUID="d697a1d9-5452-4aba-bb15-0c3103238c71" TYPE="swap" PARTUUID="000aef22-03"
                /dev/sda4: UUID="8EF0-A702" TYPE="vfat" PARTUUID="000aef22-04"
                /dev/sdc1: LABEL="K18" UUID="3733e2b4-b159-434d-a4d2-ab4cb0cfa031" TYPE="ext4" PARTUUID="85e3aaee-01"
                /dev/sdc2: LABEL="joey" UUID="13a64598-bbd5-489d-a193-4ebc1cad8072" TYPE="ext4" PARTUUID="85e3aaee-02"
                /dev/sdc3: UUID="239B-2FC4" TYPE="vfat" PARTUUID="85e3aaee-03"
                /dev/sdc4: LABEL="joey-gnome" UUID="66336795-cced-40b0-93aa-0845872860fc" TYPE="ext4" PARTUUID="85e3aaee-04"
                I've highlighted the two partitions (old and new NEONs) that cause problems.
                As you can see, I do use labels - well, except for the efi and swap partitions.
                The UUIDs, they are unique. If they are messed up, I wouldn't know.
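                A quick way to double-check that no two partitions share a filesystem UUID is to scan the blkid output for duplicates. A hypothetical helper (shown with two lines from the output above):

                ```python
                import re
                from collections import Counter

                def duplicate_uuids(blkid_output):
                    """Return filesystem UUIDs appearing on more than one partition.

                    The \\b anchor matches UUID= but not PARTUUID=.
                    """
                    uuids = re.findall(r'\bUUID="([^"]+)"', blkid_output)
                    return [u for u, n in Counter(uuids).items() if n > 1]

                sample = '''
                /dev/sdb1: LABEL="SSD_Neon" UUID="3331581b-e3a3-4976-86a3-bbfc025262be" TYPE="ext4" PARTUUID="6e09a355-01"
                /dev/sda2: LABEL="NEON" UUID="879b2424-5b36-49b4-a53b-f51dbde63b30" TYPE="ext4" PARTUUID="000aef22-02"
                '''
                print(duplicate_uuids(sample))  # [] - no duplicates here
                ```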
                Manually editing the grub.cfg... as you can see I have quite a few OSs on the machine - and each of those comes with quite a few kernel sub-entries. So you can imagine...

                Do you think it would be a good idea to delete all efi partitions and just keep one?
                I mean if I lose it, I can restore it from a live medium, right?

