
    [RESOLVED] Wrong Hard disk Detect

    Hi there,


    I have a system with an Asus motherboard, model ROG Strix B350F Gaming UM V2, with 16 GB of DDR4 RAM, a Ryzen 5 1600 processor, 5 hard disks, and 3 SSDs.

    About the 5 hard drives: 2 are on an LSI SAS card with LVM, and 3 are installed separately and used as a database.

    About the 3 SSDs: one has Kubuntu 18.04, another Windows 10, and the third Kubuntu 24.04.

    I would like to retire 18.04 and run 24.04 as the default, but I'm worried about bugs in the new OS.

    On the old Kubuntu 18.04 and on Windows 10, everything runs like a charm.

    In the new 24.04, two hard drives are not recognized. They are formatted as ext4, and I had previously erased the MBR on both of them. Both are Western Digital 1 TB drives. 18.04 recognizes and mounts them with no problem at all; 24.04 does not.

    KDE Partition Manager says no file system and unknown device.
    GParted says both of the WDs are part of a RAID array, with a mount point of /dev/mapper/pdc_cggfjedfhg1.
    blkid says /dev/sde: TYPE="promise_fasttrack_raid_member" and /dev/sdf: TYPE="promise_fasttrack_raid_member".

    There is no Promise FastTrak hardware, nor any array, on the system. The SATA RAID option in the BIOS is disabled (AHCI only). No software RAID at all.

    I don't know why Kubuntu 24.04 recognizes these 2 Western Digital hard disks as part of a Promise FastTrak RAID array. I don't even have this hardware in the system.

    I tried everything possible (with my little knowledge). One notable point: if I remove one of the WD disks, the other is recognized normally, with its ext4 file system, and is mounted properly. If I connect both, the system recognizes them as a Promise FastTrak RAID array.

    In KDE Partition Manager both are unknown, with /run as the mount point. In GParted, one of the WD disks shows ntfs as the file system and /dev/mapper/pdc_cggfjedfhg1 as the mount point (remember: both of them are formatted as ext4); the other shows ataraid as the file system with the same mount point. It does not matter which SATA ports on the motherboard I use.

    I'm really lost. Does anyone have any ideas, please? Thanks in advance.

    I am sorry if I wrote something wrong; English is not my first language.


    Last edited by Snowhog; May 12, 2024, 07:09 AM. Reason: Readability

    #2
    Welcome.

    It is hard to read your wall of text, but it seems like you may have corrupted the partition type settings when you erased the MBRs (why did you do that? Did you install new partition tables afterwards, or are the drives using the backup partition tables now? MBR or GPT?) - or something similar may have happened. But I am just guessing here.

    The gdisk -l and fdisk -l outputs for the drives could be useful for a start, to begin narrowing the error down.
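    For example, assuming the two affected drives are /dev/sde and /dev/sdf (the device names here are just a guess and may differ on your system):

```shell
# Print the partition tables as fdisk sees them:
sudo fdisk -l /dev/sde /dev/sdf

# gdisk additionally reports which table types it finds (MBR, GPT, or both):
sudo gdisk -l /dev/sde
sudo gdisk -l /dev/sdf
```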
    Last edited by Schwarzer Kater; May 12, 2024, 02:44 AM. Reason: typo
    Debian KDE & LXQt • Kubuntu & Lubuntu • openSUSE KDE • Windows • macOS X
    Desktop: Lenovo ThinkCentre M75s • Laptop: Apple MacBook Pro 13" • and others

    get rid of Snap script (20.04 +) • reinstall Snap for release-upgrade script (20.04 +)
    install traditional Firefox script (22.04 +) • install traditional Thunderbird script (24.04)



      #3
      Thanks for the answer. Sorry for the long text.

      Here is the fdisk -l output:

      Code:
      Disk /dev/sda: 7,28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000NE001-2M71
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 95BCB983-E415-4E1A-A68B-19F3DE6D4922
      
      /dev/sda1   2048 15628052479 15628050432  7,3T Linux LVM
      
      
      Disk /dev/sdb: 7,28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000NE001-2M71
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 8D458F21-BA0A-47FC-854B-A80A7A92F3E3
      
      
      /dev/sdb1   2048 15628052479 15628050432  7,3T Linux LVM
      
      
      Disk /dev/sdc: 447,14 GiB, 480113590272 bytes, 937721856 sectors
      Disk model: WDC WDS480G2G0A-
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0xe60f5532
      
      
      /dev/sdc1  *     2048 937719807 937717760 447,1G 83 Linux
      
      
      Disk /dev/sdd: 447,14 GiB, 480113590272 bytes, 937721856 sectors
      Disk model: WDC WDS480G2G0A-
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0xd1934f90
      
      
      /dev/sdd1  *         2048   1126399   1124352   549M  7 HPFS/NTFS/exFAT
      /dev/sdd2         1126400 936654464 935528065 446,1G  7 HPFS/NTFS/exFAT
      /dev/sdd3       936654848 937717759   1062912   519M 27 Hidden NTFS WinRE
      
      
      Disk /dev/sde: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk model: WDC WD1002FAEX-0
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0xa8707dd6
      
      
      /dev/sde1        2048 1953519615 1953517568 931,5G 83 Linux
      
      
      Disk /dev/sdf: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk model: WDC WD1002FAEX-0
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x862c9daa
      
      
      /dev/sdf1        2048 1953519615 1953517568 931,5G 83 Linux
      
      
      Disk /dev/sdg: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk model: ST3000DM001-1ER1
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: F9256716-B4A2-4E5D-B95C-F7D16794C763
      
      
      /dev/sdg1      34     262177     262144  128M Microsoft reserved
      /dev/sdg2  264192 5860532223 5860268032  2,7T Microsoft basic data
      
      Partition 1 does not start on physical sector boundary.
      
      
      Disk /dev/sdh: 447,13 GiB, 480103981056 bytes, 937703088 sectors
      Disk model: KINGSTON SA400S3
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x1b833727
      
      
      /dev/sdh1  *     2048 937697984 937695937 447,1G 83 Linux
      
      
      Disk /dev/mapper/myvg-mylv: 14,55 TiB, 16003115253760 bytes, 31256084480 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      
      
      Disk /dev/mapper/pdc_cggfjedfhg: 1,82 TiB, 1999999991808 bytes, 3906249984 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 65536 bytes / 131072 bytes
      Disklabel type: dos
      Disk identifier: 0xa8707dd6
      
      
      /dev/mapper/pdc_cggfjedfhg-part1       2048 1953519615 1953517568 931,5G 83 Linux
      The problem is that Kubuntu 24.04 sees the devices sde and sdf as TYPE="promise_fasttrack_raid_member" and mounts them as an array, as you can see in the last part of the fdisk -l output (/dev/mapper/pdc_cggfjedfhg). There is no Promise card, no SATA RAID option enabled in the BIOS, and no software RAID in the operating system. This does not happen on Kubuntu 18.04 or Kubuntu 23.04.



      • oshunluvr
        oshunluvr commented
        Editing a comment
        Dang, I thought I had a lot of drives, LOL

      #4
      So the next questions would be:
      How do you mount the drives in 18.04 and how in 24.04?
      What are the contents of /etc/fstab in 18.04 and in 24.04?

      An output of lsblk -f -e7 from both 18.04 and 24.04 could also be helpful.


      PS: A little list like this (only an example):
      sda = Seagate HDD 2TB NTFS with Windows 10
      sdb = WD SSD 1TB ext4 with Kubuntu 23.10
      sdc = Samsung SSD 500GB ext4 for data

      would also be helpful for other people to keep track of your setup.

      sdc1, sdd1 and sdh1 seem to have boot flags, correct?
      Last edited by Schwarzer Kater; May 12, 2024, 02:18 PM. Reason: typos & PS



        #5
        So just tossing out ideas here...

        Some facts:
        • Those two drives are pretty old Western Digital Caviar Black drives - like maybe as far back as 2010.
        • At some point, they were used and configured in a system that had Promise Fasttrack RAID (BIOS/hardware based RAID aka "fake RAID"). Also a very old hardware RAID.
        • That far back, hardware RAID was not supported by the kernel, but now some of these hardware RAID signatures are recognized by modern kernels.
        Guessing:
        • Either you had them used in a Fasttrack RAID like 8 computers ago or something (and forgot) or maybe you got them used and the previous owner did.
        • The 18.04 kernel doesn't detect the RAID signature so does not try and mount them that way, but the kernel in 24.04 does - so now you're stuck.
        Assuming the above is close to correct, the necessary work:
        1. Using 18.04 or Windows you will need to remove any data from the drives to another device or a backup device.
        2. Wipe both drives clean.
        3. Create new partition tables, partitions, and reformat.
        4. Restore data.
        To wipe the drives, write zeros over the entire drive. These two commands will do that:

        Code:
        sudo dd if=/dev/zero of=/dev/sde bs=16M status=progress && sync
        sudo dd if=/dev/zero of=/dev/sdf bs=16M status=progress && sync


        This will take some time, but the progress option will keep you updated. It will wipe all master boot records, partition tables, and data. Once complete, re-partition and format the drives and re-load your data. After this, 24.04 should detect the file systems normally. I suggest testing the drives before reloading the data, just to be sure they work as expected.
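        Side note: util-linux's wipefs can list exactly which metadata signatures are sitting on a drive, and can optionally erase only those signatures instead of zeroing the whole disk - a lighter-weight alternative worth knowing about. The device name below is just an example; substitute your own:

```shell
# Read-only: list every filesystem/RAID signature wipefs can find
sudo wipefs --no-act /dev/sde

# Lighter alternative to a full dd wipe: erase only the signatures.
# This also removes the partition table - back up your data first!
# sudo wipefs --all /dev/sde
```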

        Also, I must state that this is not a problem or bug caused by 24.04 - Kubuntu 24.04 isn't adding mysterious RAID signatures to drives. On the other hand, Kubuntu 18.04 has been EOL for 3 years. The number of changes that occur over release cycles is significant, and this issue would have been discovered long ago had a normal upgrade cycle been followed.

        I totally understand that once you get a system "just right" you don't want to change things until you have to, but no one I know of would recommend running without updates for 3 years, regardless of the operating system. Going forward, I suggest a more robust approach to longevity: upgrade your LTS release every 2.5 years. That would not have prevented this problem, since it isn't related to Kubuntu at all, but you might have been able to solve it years ago.

        Good luck and let us know how it goes.

        Please Read Me



          #6
          oshunluvr: I think your ideas are pretty good ones - I just didn't want to fetch the sledgehammer from the basement yet…
          But perhaps this is a faster (and more thorough?) solution all in all.
          Debian KDE & LXQt • Kubuntu & Lubuntu • openSUSE KDE • Windows • macOS X
          Desktop: Lenovo ThinkCentre M75s • Laptop: Apple MacBook Pro 13" • and others

          get rid of Snap script (20.04 +)reinstall Snap for release-upgrade script (20.04 +)
          install traditional Firefox script (22.04 +)​ • install traditional Thunderbird script (24.04)



            #7
            Originally posted by oshunluvr View Post
            So just tossing out ideas here...
            […]
            • The 18.04 kernel doesn't detect the RAID signature so does not try and mount them that way, but the kernel in 24.04 does - so now you're stuck.
            […]
            Good luck and let us know how it goes.
            Yes, you are right. Those WD disk drives are old. In the past I used them with BIOS RAID, with dynamic disks on Windows, and with software RAID on old Linux distros. I believe your statement that the 18.04 kernel doesn't detect the RAID signature but the kernel in 24.04 does is correct, so I did exactly what you suggested. In my first post I said that I erased the MBR on both of the disks (filled the first 1M with zeros). As for filling an entire disk with zeros, I did that too, but only on one of the disks - the one that was formatted with the NTFS file system. I did not do it on the command line; instead I used the KDE Partition Manager (it took 3 hours to fill the entire hard disk with zeros).

            I will try what you suggest on both disks. Just one question... is that command with bs=16M going to write zeros only to the first 16M of the disk?



              #8
              Originally posted by Virginio Miranda View Post
              is that command with bs=16M going to write zeros only to the first 16M of the disk?
              No. bs=16M means block size equals 16 megabytes, so each block of data written will be 16 megabytes in size. Without a count limit, dd keeps writing blocks until it reaches the end of the output device.
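              A quick illustration of the difference, using a throwaway file instead of a real drive: bs only sets how much data dd transfers per step, while count limits how many of those blocks get written.

```shell
# With count=1, dd stops after a single 16 MiB block - i.e. it writes
# only the first 16 MiB. Without count, it would fill the whole output.
dd if=/dev/zero of=/tmp/demo.img bs=16M count=1

# The resulting file is exactly 16 MiB:
stat -c %s /tmp/demo.img   # prints 16777216
```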
              Windows no longer obstructs my view.
              Using Kubuntu Linux since March 23, 2007.
              "It is a capital mistake to theorize before one has data." - Sherlock Holmes



                #9
                Originally posted by Snowhog View Post
                No. bs=16M means block size equals 16 megabytes, so each block of data written will be 16 megabytes in size.
                Ok. Thanks for the information.

                I used this command on both of the disks to erase the MBR: dd if=/dev/zero of=/dev/sdX2 bs=1M. Probably I used sde1 and sdf1, so I only erased the partitions, not the entire disks. I will try again and let all of you know the results. Thanks again.
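                One detail that may explain why erasing the start of the disks was not enough: Promise/pdc fake-RAID metadata is stored near the end of the disk, not in the MBR, so zeroing the first megabyte leaves the signature intact. A sketch for zeroing the last 1 MiB as well (the device name is a placeholder - point it at the whole drive, not a partition, and double-check it first, as this is destructive):

```shell
DEV=/dev/sdX   # placeholder - set to the real drive (e.g. /dev/sde)

# Total drive size in 512-byte sectors:
SECTORS=$(sudo blockdev --getsz "$DEV")

# Zero the final 1 MiB (2048 sectors), where fake-RAID metadata lives:
sudo dd if=/dev/zero of="$DEV" bs=512 seek=$((SECTORS - 2048)) count=2048
```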



                  #10
                  One suggestion from experience: for more complex disk operations, boot from a GParted live USB stick (https://gparted.org/download.php) and use that, instead of booting a live (K)ubuntu USB stick or a system on one of the drives and using KDE Partition Manager.
                  This comes from someone who is absolutely no fan of anything (modern) GNOME.
                  Last edited by Schwarzer Kater; May 12, 2024, 07:28 PM.



                    #11

                    Originally posted by oshunluvr View Post
                    So just tossing out ideas here...
                    […]
                    sudo dd if=/dev/zero of=/dev/sde bs=16M status=progress && sync
                    sudo dd if=/dev/zero of=/dev/sdf bs=16M status=progress && sync
                    […]
                    Good luck and let us know how it goes.
                    This operation did the trick. The two WD hard disks are like new now, and Kubuntu 24.04 mounts both correctly.



                    Originally posted by Schwarzer Kater View Post
                    One suggestion out of experience: If you manage more complex disk operations, start from a live USB stick with GParted […]
                    I did not know there was a GParted live USB. Thanks for the input. I will try it next time.



                      #12
                      Glad to hear oshunluvr's suggestion worked out for you!
