[UPDATE 3] I hope I can help anyone else in the same or a similar predicament as me.
Your drive names/numbers and sizes/locations will be different from mine, and I don't know whether this will work for anything other than nvidia RAID.
You might have to read this whole thing to get the gist of what has actually happened.
I know both my drives are 976773168 total sectors in size; check with fdisk -l.
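The fdisk here only prints sizes in bytes, so divide by 512, or ask blockdev for the 512-byte sector count directly. A rough sketch, assuming /dev/sdb is one of the RAID members:
Code:
blockdev --getsz /dev/sdb
With no HPA in place, both of my drives come back as 976773168.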
I created the HPA again with hdat2, bringing the disk back down to 976771055 sectors, as it was in the log.
I set the kernel command-line option 'libata.ignore_hpa=0' in /etc/default/grub (add it to the 'GRUB_CMDLINE_LINUX_DEFAULT=' line).
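For illustration, the finished line would look something like this (keep whatever options are already on it; 'quiet splash' is just the usual default):
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.ignore_hpa=0"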
run 'update-grub'
reboot
Code:
dmraid -r
/dev/sdc: nvidia, "nvidia_bacgjebc", stripe, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_bacgjebc", stripe, ok, 976771053 sectors, data@ 0
Note how both disks are actually two sectors short of their visible sizes (976773166 vs 976773168, and 976771053 vs 976771055): the nvidia metadata sits in those last two sectors. Copy it off the HPA-limited disk:
Code:
dd if=/dev/sdb skip=976771053 of=sdbmet
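As a quick sanity check (assuming the HPA is being honored, so the device ends at sector 976771055, the dump should be exactly the last two sectors):
Code:
ls -l sdbmet
# should be 1024 bytes (2 x 512-byte sectors)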
Remove the 'libata.ignore_hpa=0' kernel command line again, run update-grub, and reboot.
Code:
dd if=sdbmet of=/dev/sdb seek=976773166

dmraid -r
/dev/sdc: nvidia, "nvidia_bacgjebc", stripe, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_bacgjebc", stripe, ok, 976773166 sectors, data@ 0

dmraid -ay
RAID set "nvidia_bacgjebc" was activated
RAID set "nvidia_bacgjebc1" was activated
RAID set "nvidia_bacgjebc2" was activated
RAID set "nvidia_bacgjebc3" was activated

ls /dev/mapper
control  nvidia_bacgjebc  nvidia_bacgjebc1  nvidia_bacgjebc2  nvidia_bacgjebc3

mount /dev/mapper/nvidia_bacgjebc2 /media/nvraid

ls /media/nvraid
6XSourceFilter.grf  BOOT  config.sys  cygwin  Downloads  hiberfil.sys  IO.SYS  MSDOS.SYS  MSOCache
pagefile.sys  PerfLogs  ProgramData  Program Files  Recovery  $Recycle.Bin
System Volume Information  Users  WinDDK  Windows
Reboot, go back to DOS, run hdat2, remove the HPA, done.
[UPDATE 2]
It's easy to miss things when you don't know what to look for... Note line 3:
Feb 5 22:05:43 kubuntu kernel: [ 1.362496] ata2.00: ATA-8: WDC WD5000AADS-00S9B0, 01.00A01, max UDMA/133
Feb 5 22:05:43 kubuntu kernel: [ 1.362499] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32)
Feb 5 22:05:43 kubuntu kernel: [ 1.362865] ata1.00: HPA unlocked: 976771055 -> 976773168, native 976773168
Feb 5 22:05:43 kubuntu kernel: [ 1.362869] ata1.00: ATA-8: Hitachi HDS721050CLA362, JP2OA25C, max UDMA/133
Feb 5 22:05:43 kubuntu kernel: [ 1.362872] ata1.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32)
At least I know where the HPA started now, and it explains why the preboot RAID manager failed after booting Linux.
I am just hoping now that I can set the HPA back on and then tell Linux to leave it TF alone.
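Side note: hdparm can at least report the HPA state from within Linux (query only shown here; I'd still let hdat2 do the actual changing). With the HPA back in place on the Hitachi, I'd expect output along these lines (substitute whichever /dev/sdX it really is):
Code:
hdparm -N /dev/sdX

/dev/sdX:
 max sectors   = 976771055/976773168, HPA is enabled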
[UPDATE]
I dumped the last 3000 sectors of the disk that wasn't detected, since this is where the metadata should be stored, opened the file in hexedit, and found strings like the following (a rough sketch of the dd is just below the strings):
Copyright (c) 2000, Award Software, Inc
6/25/2009-P965-ICH8-6A79LG0DC-
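A sketch of the kind of dump I mean, assuming a 976773168-sector disk at /dev/sdX (976773168 - 3000 = 976770168 is the starting sector; adjust the numbers and device for your disk):
Code:
dd if=/dev/sdX of=tail.img bs=512 skip=976770168 count=3000
Then open tail.img in hexedit, or run 'strings tail.img | less' to fish out any readable text.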
I know what's happened here. The last board I had was a P965-based thing with a feature (more of an annoyance) that would create a 'host protected area', or HPA, on the drive and copy the BIOS to it. I would usually turn this off, but it was on by default, so after so many CMOS resets I got tired of turning it off and left it on. I only had this one disk on that board, and after a while that board died. I never thought about the HPA after that.
Right, so upon realizing this I just booted DOS, ran hdat2, and yes, there was a small HPA at the end of the disk. I removed it (not deleting anything), but now the array isn't even recognized by the preboot manager. Oops... I am sure the metadata will still be intact, but: a) I didn't take note of the position of the HPA, and b) I don't know what this metadata looks like or how it's interpreted.
I'm sure this is the problem anyway, and dmraid should be aware of this. I don't know whether it is or not, so I'll go off now and ask them.
Any help or suggestions for getting the metadata back to its proper place? Thank you.
------>previous post-------->
Hi guys. I don't know if this is the most appropriate place to post this, but it's the closest I can find for my problem. I'm new to Kubuntu but not an absolute beginner with GNU/Linux. I'm having a bit of a problem with my RAID setup. Please bear with me while I attempt to explain.
I'm running 10.10 Maverick Meerkat
The chipset is an nForce 740i. The primary OS is M$ Win7, as it is a necessary evil. This is a 2x 500GB SATA II RAID0 using the onboard Nvidia RAID. It is partitioned as:
#1 100MB, created by the M$ install "to keep your system working properly" and hidden from the actual OS,
#2 100GB OS partition where the evil lives,
#3 the rest, where I store everything: mp3s, video, installers, etc. (don't worry, I have a backup of the most important stuff).
Kubuntu is on a 40GB PATA disk. This is through a VIA VT6330 controller, but all is fine there, so it's not worth posting the partition info.
I need to mount that M$ RAID:
Code:
cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD5000AADS-0 Rev: 01.0
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: Hitachi HDS72105 Rev: JP2O
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: PIONEER  Model: DVD-RW DVR-216D  Rev: 1.09
  Type:   CD-ROM                           ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: SAMSUNG SP0411N  Rev: TW10
  Type:   Direct-Access                    ANSI SCSI revision: 05
All the disks are there.
Code:
fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe58e4844

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13       13064   104825856    7  HPFS/NTFS
/dev/sda3           13064      121603   871839744    7  HPFS/NTFS

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd0408cae

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       60801   488382464    6  FAT16

Disk /dev/sdc: 40.1 GB, 40060403712 bytes
255 heads, 63 sectors/track, 4870 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0007dae5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        4665    37469184   83  Linux
/dev/sdc2            4665        4871     1649665    5  Extended
/dev/sdc5            4665        4871     1649664   82  Linux swap / Solaris
sda and sdb should be the RAID. I'm not sure how this type of RAID works, but how is it that there is some sort of partition info on sda? sdb is just all wrong.
Code:
dmraid -r
/dev/sda: nvidia, "nvidia_bacgjebc", stripe, ok, 976773166 sectors, data@ 0
Should both disks be listed here?
Code:
dmraid -ay
ERROR: nvidia: wrong # of devices in RAID set "nvidia_bacgjebc" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "nvidia_bacgjebc"
ERROR: no RAID set found
no raid sets
Something is definitely wrong.
Also, if I reboot here, the preboot RAID manager says something along the lines of "the array nvidia stripe has failed", and no amount of rebooting fixes it; only a power cycle will.
From what I have read, dmraid reads the RAID info from the BIOS (am I right?) and then mounts the drives as prescribed. I am aware that this isn't hardware RAID.
I have experience with Linux software RAID, using mdadm to configure PATA drives in various configurations, but that wouldn't help here.
I just had a thought while writing this: I shut down, disconnected the two SATA disks and powered up. The preboot RAID manager informed me that it was disabling RAID due to no RAID disks being installed. I allowed it to boot into Kubuntu, then I shut it down. I plugged both disks back in, but into the opposite sockets, so disk1/disk2 are now disk2/disk1. No problems. I allowed it to boot the evil, manually ran chkdsk on all partitions, rebooted and selected Kubuntu to boot.
Now there's a change:
Code:
fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd0408cae

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       60801   488382464    6  FAT16

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe58e4844

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sdb2              13       13064   104825856    7  HPFS/NTFS
/dev/sdb3           13064      121603   871839744    7  HPFS/NTFS

Disk /dev/sdc: 40.1 GB, 40060403712 bytes
255 heads, 63 sectors/track, 4870 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0007dae5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        4665    37469184   83  Linux
/dev/sdc2            4665        4871     1649665    5  Extended
/dev/sdc5            4665        4871     1649664   82  Linux swap / Solaris

dmraid -ay
ERROR: nvidia: wrong # of devices in RAID set "nvidia_bacgjebc" [1/2] on /dev/sdb
ERROR: removing inconsistent RAID set "nvidia_bacgjebc"
ERROR: no RAID set found
no raid sets

dmraid -r
/dev/sdb: nvidia, "nvidia_bacgjebc", stripe, ok, 976773166 sectors, data@ 0
Now the information is physically flipped, which leads me to believe the problem is with the data on the now-sda disk and not with what the controller and dmraid are doing.
Any ideas?
I have no idea how partition tables work, only what they're for. While I have a backup of my most important stuff, what's on the RAID is still too important to just blow away. I don't want to be making amateurish changes and experiments just for the sake of trying things out, when there's a chance it still won't work afterwards.
Thanks all and sorry for the length of this post.