Disk error - unable to recover

This topic is closed.

    #16
    The link I gave you in Post 9 was to this: "Fix UEFI+GPT with Testdisk & gdisk -- Deleted partitions & damaged GPT structure" (my how-to)
    An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



      #17
      What's with the [/size]? I take it that it is NOT part of the actual command?



        #18
        Originally posted by Qqmike View Post
        The link I gave you in Post 9 was to this: "Fix UEFI+GPT with Testdisk & gdisk -- Deleted partitions & damaged GPT structure" (my how-to)
        That how-to was somewhere between unhelpful and very difficult to follow. You posted 5 experiments.
        Experiment 1: no actual commands given, or referred to.
        Experiment 2: same. "Testdisk used easily" would be the unhelpful part: what were the commands used?
        Rod Smith's books: another guide constantly jumping between Windows and Mac commands and how-tos. Why can't people write a Linux guide and a separate Mac guide, and forget the Windows guide? I read through, copy and paste commands I think are useful, then google the crap out of them to make sure each is the right command and I know what I should expect to see as a result.

        PS. I tried testdisk; all it recovered was the dd'd pine64 partitions, nothing else. The data is there: the VM was reading and writing to it in the background, and I managed to recover some data via the VM before it all went to buggery. Once the VM was rebooted, no "partitions" were found.
        Once the PC was restarted/rebooted, same as before: just the 4 GB of pine64.

        At this point, testdisk and what info I could find on "data recovery" were slim pickings at best. Your novel is just way too long. If you had a conclusion section that covered 1. Linux data recovery, 2. zeroing out a device to clean up sensitive data, 3. whatever you choose, 4. etc., that would be great. I've been slogging my way through it, but as mentioned above, your Experiment 1 (data recovery) was written as "testdisk easily used"..... HOW?
        PS. I am getting very frustrated at all these so-called what-to-do guides that either jump from Linux to Mac to a different version of Linux to Windows with half-assed commands, or just cover something as simple as "you erased the disk, here is how to recover it". If you have a problem with Windows, go download and pay for "GetDataBack NTFS"; it works perfectly on Windows, for Windows.
        NOTHING about a deep dive into the disk. This is why I've been trying ddrescue: it's the only command that reportedly just reads the disk raw. It doesn't care about GPT, MBRs, etc. It just looks for data... BUT how to extract that data is what I cannot find.
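
        For reference, a minimal ddrescue sketch (my assumptions: /dev/sdd is the damaged source, /dev/sde is a spare disk at least as large whose contents can be overwritten, and the mapfile lets an interrupted run resume). Extracting files is then a second, separate step run against the copy:
        Code:
        # first pass: copy everything readable, skip slow scraping of bad areas
        sudo ddrescue -f -n /dev/sdd /dev/sde rescue.map
        # second pass: retry the bad areas recorded in the mapfile
        sudo ddrescue -f -r3 /dev/sdd /dev/sde rescue.map
        # extraction happens afterwards, e.g. signature-based carving on the copy
        sudo photorec /dev/sde
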
        Code:
        sudo gdisk -l /dev/sdd
        
        GPT fdisk (gdisk) version 1.0.8
        
        Partition table scan:
         MBR: protective
         BSD: not present
         APM: not present
         GPT: present
        
        Found valid GPT with protective MBR; using GPT.
        Disk /dev/sdd: 11721045168 sectors, 5.5 TiB
        Model: ST6000NE000-2KR1
        Sector size (logical/physical): 512/4096 bytes
        Disk identifier (GUID): F6527CCA-DA43-D241-B4F1-7B3D73B2930A
        Partition table holds up to 128 entries
        Main partition table begins at sector 2 and ends at sector 33
        First usable sector is 34, last usable sector is 11721045134
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 7277 sectors (3.6 MiB)
        
        Number  Start (sector)    End (sector)  Size       Code  Name
          1            2048     11721039871   5.5 TiB     8300
        Well, that's a lot of interesting but (to me) unknown and unhelpful information. I am glad this command gets used.

        I forgot to ask: doesn't the bs=512 switch within a command mean "use 512-byte sectors"? What if my disk doesn't, or didn't, use 512-byte sectors? Wouldn't this recover garbage, formatted incorrectly?
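
        (A note on that: dd's bs= is just the size of each read/write dd performs, not a statement about the disk's sector size; the data is copied byte for byte regardless. The disk's actual sector sizes can be checked directly; a small sketch:)
        Code:
        # logical and physical sector sizes, as reported by the kernel
        sudo blockdev --getss --getpbsz /dev/sdd
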
        Last edited by CharlieDaves; Aug 31, 2023, 05:38 PM.



          #19
          I guess we are reading everything differently. I ONLY followed the TestDisk website -- it was very clear and worked well on Linux. Rod Smith: he's a Linux guy, really; I've never felt he was pushing or favoring Mac/Windows. dd: if you don't understand it, be very careful running it. (I have a how-to on that, too, but it probably wouldn't help here.) gdisk is rather self-explanatory, but, again, Rod Smith has excellent material on using it to rescue/repair. Sorry that none of this has helped solve your problem.
          An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



            #20
            Found this; it makes sense to me.
            https://www.wildfiredata.com.au/post/linux-dd-command

            Code:
            sudo dd if=/dev/sdd of=/dev/sde status=progress
            which gave me another unreadable disk,
            so I changed it to
            Code:
            sudo dd if=/dev/sdd of=/dev/sde1 status=progress
            so the disk is mountable and the partition should be readable.
            NOW, before I waste 40-50 hours (again (uugh)): should I use the bs= switch, and if yes, what size?

            I ask as most websites that include this bs= switch either use 512 or 4096. My question is....
            Q: Does the bs= (block size) make any difference to file/data recovery, or is it just how much memory the command is going to use (4096 bytes = 4 KiB, not 4 MB)?

            Thanks
            PS. QqMike, I have read through the ddrescue website and it didn't have that much info. I am still googling my way around the internet trying to find someone or some page that is a "data recovery" page, with an explanation of why they are using the commands and switches the way they are. What you've written, and the time you took, is impressive, even if difficult to follow (for me).

            I still don't know why you zero out the first 512 bytes of a disk before data recovery.
            Q: Are you making the drive unreadable?
            or
            Q: Is it needed to actually recover data?
            TIA
            Last edited by CharlieDaves; Sep 01, 2023, 04:58 PM.



              #21
              SOAPBOX time.
              As mentioned, and FYI for the group/forum: many decades ago I was a heavy user of GetDataBack (NTFS) on people's HDDs that had mainly been attacked by viruses. It was brilliant. You ran it overnight (20-40 gig drives back then) and it would recover every formatted partition. It was just a matter of finding the most recent one, recovering those files, then moving on to other partitions and files.
              LIKE EVERY SITE WRITES: once you "write data" to the drive, you risk overwriting previously deleted or damaged files, making them impossible to recover.

              FOR ANYONE wanting to fix or repair their precious Windows partition, I personally give you 2 choices.
              Choice 1. Purchase (USD $40) GetDataBack from Runtime, plus Paragon Backup & Recovery, OR similar software (there is freeware out there.... Paragon WAS free for individual use).
              Choice 2. Put Windows on ITS own separate drive and plug it in as needed. Most motherboards of the last 8-10 years support hot-swap. Oh! And as viruses LOVE Windows, back it up daily. Clone it, whatever. It's your data, your responsibility. Your choice.

              --EDIT-- Just a footnote on the software mentioned above. There is a Windows program, Paragon B&R, that I used to use for backups. It backed up LIVE while Windows was actually running... It could image the drive or partition, or copy selected files or directories (sorry, folders), to a different drive, network, cloud, etc. At the time it was free for personal use, but last time I looked it up the company had been sold to a more profitable company, and it's now a pay-for service. For what it does and how it works, and everything else, just purchase a copy.
              If you're too cheap to purchase a couple of these much-needed AND easy-to-use programs (Paragon Backup & Recovery -- just remembered it) for data recovery, then stop whingeing about how you've lost Windows accidentally.... 2 programs, USD $100. Worth every penny, In My Very Old Humble Opinion...

              FYI-FYI: I used to be a hired IT guy. I got so fed up with every person I came across wanting "free illegal" Windows, Office, Photoshop, etc., while the concept of "backing up YOUR data is important" wasn't their problem, it was my problem, and I had to handle it somehow (it got me into remote hacking, to see if I could back up their PC at their home).
              I used this analogy.....

              "If your house caught fire, would you call me and expect me to rush into the burning flames to fetch your precious photos of of the wall and photo albums"?.

              No. I wouldn't, and neither would the firefighters.

              If you want a copy or backup of important data (photos or other things), follow the 3-2-1 principle as best you can. [Picture frowny faces and glassy eyes]
              Nobody listened. Even when I used finger puppets and was naked at the time.

              This is why I get so angry at people (in Linux forums) asking how to fix their "Windozs" data or partition. Jeez. Two (2) programs and you're set for life. And I bet by now there are a lot more programs out there, better too. I just don't care about Windozs, due to all the idiots I've encountered over my many years attempting to fix problems.
              https://www.techsupportalert.com/
              This used to be the "go to" site for the best researched and tested freeware for Windozs. Now it's full of ads and crap, by the looks of it.
              (This forum should see the pile of rust falling out of my ears right now, from all these old memories.)
              Thanks for letting me whinge for a bit.
              Last edited by CharlieDaves; Sep 01, 2023, 05:10 PM.



                #22
                Originally posted by CharlieDaves View Post
                Found this; it makes sense to me.
                https://www.wildfiredata.com.au/post/linux-dd-command

                Code:
                sudo dd if=/dev/sdd of=/dev/sde1 status=progress
                so the disk is mountable and the partition should be readable.
                NOW, before I waste 40-50 hours (again (uugh)): should I use the bs= switch, and if yes, what size?

                I ask as most websites that include this bs= switch either use 512 or 4096. My question is....
                Q: Does the bs= (block size) make any difference to file/data recovery, or is it just how much memory the command is going to use?
                Back to my unanswered question...... please.
                Code:
                sudo dd if=/dev/sdd of=/dev/sde1 status=progress bs=<size>
                Yes or no on the bs= switch?
                As in: is it important for reading the files correctly, the way FAT vs FAT16 vs FAT32 vs NTFS was important for actual runtime data recovery in my decades-old IT job? (Even though that tool had an "auto" function, so I skipped that chapter... actually, I browsed over it and understood very little.)



                  #23
                  As for block size, bs=: usually the writer/user decides on this number, either from experience, from their own testing, or from someone else's testing. I have experimented on * my * system with * my * applications. Sometimes I see a simple linear relationship, and sometimes something more complicated, like a parabolic curve. bs affects how quickly dd completes the job: increase bs and perhaps the job finishes faster; continue increasing bs and anything can happen: completion times may increase, decrease, decrease then increase, or stay the same after a point, etc.

                  I can't make the decision for you. If I were pressed to pick a decent number to start with for what you are doing, I'd just use 4096 (no deep reason: it's safe, not too big (like 16M might be), and it's larger than the default of 512, so maybe the program will go a bit quicker, who knows?). You CAN also simply omit bs; it defaults to bs=512, with no harm done. We are basically talking about how fast dd will complete the job, and how can you know unless you test it?

                  "Block size" or bs is, basically, the number of bytes dd reads/deals with at a time.

                  From my how-to on dd:

                  dd copies binary data, bit by bit, and exactly every bit, and uses buffers to get the source copied to the target; it stops at the end of the input file (when it gets to the maximum addressable sector of source_file). It reads the input one block at a time, using the block size bs. dd then processes the block of data actually returned, which could be smaller than the requested block size (e.g., if not enough data is left on the input file to make a full block of size bs bytes; or if it encounters errors in the input file). Then dd applies any conversions (conv=) that have been specified and writes the resulting data to the output in one block at a time (of size bs bytes; or less than bs bytes in case there's a partial block at the end and conv=sync is NOT specified).
                  ...
                  block size: bs bytes
                  If omitted, the default is 512 bytes = 1b, where b=512.

                  bs can be any number of bytes; often, a power of 2 is used: 512, 1024 (=2b), 2048, 4096, etc; a number may end with k, b, or w to specify multiplication by 1024, 512, or 2, respectively, (w = two bytes = a "word"). Numbers may also be separated by x to indicate multiplication (e.g., 2bx8). b: 512, kB: 1000, K: 1024, MB: 1000x1000, M: 1024x1024, GB: 1000x1000x1000, G: 1024x1024x1024. You can use any block size bs you wish. If bs is too big, it may slow dd down, having to read, work with its buffers, then write large blocks. Of course, if bs is too small, dd may also slow down because it has so many read-write's to do.
                  > Some block sizes to try are bs=512 (the default: 512 bytes = 1 sector of the disk); bs=4096 (= 8 sectors); and bs=63x255b (=63x255x512 = 8,225,280 bytes = 1 "cylinder").
                  (of course, a lot of dd literature refers to older disk geometries ...)
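
                  If you do want to test, one non-destructive way (a sketch; /dev/sdd assumed to be your source disk) is to time a bounded, read-only pass to /dev/null and compare the throughput dd reports for each block size:
                  Code:
                  # each command reads the same 1 GiB and discards it;
                  # iflag=direct bypasses the page cache so the runs are comparable
                  sudo dd if=/dev/sdd of=/dev/null bs=512 count=2097152 iflag=direct
                  sudo dd if=/dev/sdd of=/dev/null bs=4096 count=262144 iflag=direct
                  sudo dd if=/dev/sdd of=/dev/null bs=4M count=256 iflag=direct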

                  FWIW -- and I am not interested in a critical review of it -- here's my 3-part how-to on dd, designed to help people get started with dd and actually use it on basic tasks. I am not a dd expert. Everything I know is in that how-to, and everything was tested on * my * system with my sample problems.

                  https://www.kubuntuforums.net/forum/general/documentation/how-to-s/20902-the-dd-command
                  An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



                    #24
                    Okay. dd is doing what it is meant to do: it's copying my damaged HDD to another, previously working HDD, which is now also unreadable.
                    Not very helpful.
                    Also, I read somewhere that it restores files IF SPECIFIED, but will NOT do directories (folders).
                    SO:
                    Option 1 = ext4magic
                    Option 2 = extundelete

                    Code:
                    sudo e2fsck -b 32768 /dev/sdd -y
                    e2fsck 1.46.5 (30-Dec-2021)
                    e2fsck: Bad magic number in super-block while trying to open /dev/sdd
                    
                    The superblock could not be read or does not describe a valid ext2/ext3/ext4
                    filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
                    filesystem (and not swap or ufs or something else), then the superblock
                    is corrupt, and you might try running e2fsck with an alternate superblock:
                       e2fsck -b 8193 <device>
                    or
                       e2fsck -b 32768 <device>
                    
                    Found a gpt partition table in /dev/sdd
                    So I have a GPT partition table. I think that's a good start??

                    Option 2:
                    Code:
                    sudo extundelete /dev/sdd --restore-all
                    extundelete: Bad magic number in super-block when trying to open filesystem /dev/sdd
                    Option 1:
                    Code:
                    ext4magic /dev/sdd -M -d /dev/sde1/
                    /dev/sdd Error 13 while opening filesystem  
                    Error: Invalid parameter: -d  /dev/sde1/
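
                    (A few things worth noting, as assumptions on my part: Error 13 is "Permission denied", consistent with this one being run without sudo; these commands point at the whole disk, /dev/sdd, while an ext4 superblock lives inside a partition, which given the GPT found above would be /dev/sdd1; and ext4magic's -d expects a destination directory on a working, mounted filesystem, not a device node. A sketch with all three fixed:)
                    Code:
                    # run as root, aim at the partition, restore into a real directory
                    sudo mkdir -p /mnt/rescue
                    sudo mount /dev/sde1 /mnt/rescue
                    sudo ext4magic /dev/sdd1 -M -d /mnt/rescue/restored
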
                    I also get error codes 2133571347 and 20.

                    And the following was tried, and failed:

                    Code:
                    sudo ext4magic -S dev/sdd
                    dev/sdd Error 2 while opening filesystem
                    ext4magic : EXIT_SUCCESS
                    (Note: "dev/sdd" above is missing its leading slash; Error 2 is "No such file or directory", which that typo alone would explain.)
                    Does anyone have any good news on how to recover files where the original problem was dd'ing over the first couple of megs at the start of the HDD?
                    Seriously, the files are there. I have NOT written to the drive. I've only created, then deleted, a partition. As far as I can tell this would reset the superblocks, or the backup MBR/GPT, whatever.
                    Some time ago I attempted to restore all the superblock locations, 12 of them. I don't know what size drive that was for, but I have yet to find a command that would display their location.
                    Q: What are superblocks, and why do so many undelete or data-restore guides refer to them??
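
                    (On displaying their location: two commands that should do it, as a sketch. dumpe2fs needs a readable filesystem; mke2fs -n is a dry run that writes nothing, but it only reports the right locations if given the same parameters the filesystem was created with:)
                    Code:
                    # list primary and backup superblock locations on a readable ext filesystem
                    sudo dumpe2fs /dev/sdd1 | grep -i superblock
                    # dry run: prints where mke2fs WOULD put superblocks; -n writes nothing
                    sudo mke2fs -n /dev/sdd1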

                    PS. Qqmike, I am still reading through your novel. I am still trying to understand how things are being done from the commands and code you used.
                    I also think dd is not the solution, unless I want to convert my HDD to an image file and then work on the image file. For that I will need two (2) new 8 TB or bigger HDDs. Can't afford that at this moment.

                    dd(1) - Linux manual page: "dd - convert and copy a file"



                      #25
                      Originally posted by CharlieDaves View Post
                      Code:
                      sudo dd if=/dev/sdd of=/dev/sde1 status=progress
                      so the disk is mountable and the partition should be readable
                      dd works at the device level; mounting is about file systems. A readable device and partition are necessary for mounting, but not sufficient: the file system also needs enough integrity to mount (and the OS needs support for that particular file system).
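
                      For example (a sketch, assuming /dev/sde1 is the freshly written copy), you can ask the kernel whether it detects any file system there before trying to mount:
                      Code:
                      # does the kernel see a filesystem signature on the copy?
                      sudo blkid /dev/sde1
                      lsblk -f /dev/sde
                      # if one shows up, try a read-only mount
                      sudo mount -o ro /dev/sde1 /mnt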

                      BTW, using the bs (block size) parameter to dd is these days mostly a performance measure: bs=10M may run faster than the default, which is 512 bytes. (Once upon a time, decades ago, with tape drives, this set the block size on the tape, and it made a difference because there were gaps on the tape between the blocks. I imagine tape drives these days ignore any block size specified by dd.)
                      Regards, John Little



                        #26
                        Yes, that's right, jlittle: you don't need anything mounted to run dd.
                        And you're correct about bs: it's just a matter of choosing a value that gives you the performance (run time of dd) that you want.
                        An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



                          #27
                          So ddrescue finally finished, with the error "Destination device full".
                          I reset the PC, and unplugged and replugged the external USB drive (sde).
                          To my surprise, it is still only 1% used, 99% free. So if the output drive is full, where the heck did the files go?
                          This is driving me nuts.
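
                          (One guess: a "% used" figure comes from a mounted filesystem, while a raw ddrescue copy fills the device underneath it, which df and file managers don't report. Comparing sizes at the block-device level shows how far the copy actually got; a sketch:)
                          Code:
                          # raw device/partition sizes in bytes; a device-to-device copy
                          # shows up here, not in filesystem "% used" numbers
                          lsblk -b -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdd /dev/sde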
                          EDIT: Sorry, I forgot to grab a screenshot of the finished command's output;
                          see https://www.kubuntuforums.net/forum/...ut-of-commands



                            #28
                            Originally posted by Qqmike View Post
                            Yeah, right, jlittle, you don't need anything mounted to run dd.
                            And you're correct about bs: it's just a matter of choosing a value that gives you the performance (run time of dd) that you want.
                            And thanks to jlittle as well. I attempted bs=4096 and it bogged my PC down so much the mouse wouldn't move.
                            I re-did it without bs (so bs=512) and it took 3+ days to get 1 TB, and then I accidentally knocked the power supply to the external HDD (sde) without realising it, and it failed: input/output error.



                              #29
                              Based just on what you said, that would suggest trying larger block sizes, like bs=16M.
                              Last edited by Qqmike; Sep 07, 2023, 07:42 PM.
                              An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way. Charles Bukowski



                                #30
                                Thanks to all, but after weeks of trying to recover, with many different approaches, I've had enough. Most forums state that once you screw up the GUID partition table, it's stuffed for good.
                                I don't know why, but this makes me want to return to the NTFS days.

                                Final questions:
                                Without using Clonezilla, how else does someone "back up" a drive, considering we are getting into double-digit-terabyte drives?
                                How do you back up these superblocks (and an actual explanation of what the heck they actually are, PLEASE)?
                                What about the backup location of the GPT/GUID tables? MBR tables?
                                Is there an easy script file I can run when I shut down my PC once per week? Something like the sketch below is what I have in mind.
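
                                For the partition-table part, a minimal sketch (sgdisk ships with gdisk; the paths are just examples):
                                Code:
                                #!/bin/sh
                                # weekly partition-table backup; restoring is a manual, deliberate step
                                mkdir -p /root/backups
                                sgdisk --backup=/root/backups/sdd-gpt.bak /dev/sdd    # full GPT, incl. backup header
                                dd if=/dev/sdd of=/root/backups/sdd-first-mib.bin bs=1M count=1    # protective MBR + GPT area
                                # restore later with: sgdisk --load-backup=/root/backups/sdd-gpt.bak /dev/sdd
                                # note: filesystem superblocks live inside each partition and are separate;
                                # e2fsck -b can use the backup superblocks ext4 already keeps on disk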
                                Thanks
