    upgrading to raid

    I just want to start by saying that I've been using Kubuntu for over a year and love everything about it. I will never buy a Windows product again. I only wish I knew more so I could contribute to the forums and the code.

    Basically, what I would like to do is back up my 90 GB SSD that holds the OS onto a 1 TB USB drive, then toss in another SSD and set up RAID 0. How do I do this so that when it boots back up, everything is the way it was when I started? I don't want to lose any settings, folder permissions, etc.

    #2
    Ooo, that's a tough one, but you would need 'ghosting' software for sure, preferably Linux-friendly ghosting software, and I am not guaranteeing that will work, as GRUB needs to be written to the MBR of the RAID, even if it is a 'fakeraid' MBR. That's pretty complicated and sounds like something I would try to pull off, lol. If I were to try something like that, I would make the backup partition on the USB drive the same size as, or just a tiny bit smaller than, the formatted RAID partition, and don't forget a logical partition for the swap area; figure that in. Good luck.

    My experience with RAID, not SSD but still relevant... http://www.kubuntuforums.net/showthr...t-to-read-this

    Edit: Honestly, you would probably be better off backing up your stuff and doing a fresh install on the RAID; it would be a lot less trouble and you would pull a lot less of your hair out, lol.
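
    If you do go the backup-and-fresh-install route, something like this from a live CD would cover it (a sketch, not tested here; it assumes the SSD is /dev/sda and the USB drive is mounted at /media/usb, so check your actual device names with lsblk first):

    # Whole-disk image: preserves everything, including the MBR and partition table
    dd if=/dev/sda of=/media/usb/ssd-backup.img bs=4M

    # Or file-level, with the old root mounted at /mnt; -aAXH keeps
    # permissions, ACLs, extended attributes and hard links intact
    rsync -aAXH /mnt/ /media/usb/backup/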
    Last edited by tek_heretik; Dec 27, 2012, 09:43 PM.



      #3
      It is doable, possibly a little fiddly.

      Basic steps:
      1) Plug in all the drives
      2) Boot a live CD
      3) Copy your SSD file structure to the HDD
      4) Partition the first SSD to include a 1GB partition (for /boot, this must be in RAID 1) and the rest how you want it
      5) Copy the partition table to the second SSD
      6) Set up RAID 1 on the first partition of both drives
      7) Set up any other RAID arrays on the rest of the partitions
      8) Format the RAID arrays the way you want them
      9) Copy /boot to the first array
      10) Copy the rest of the system where you need it
      11) Mount the new root with /boot inside it
      12) Edit the new root's /etc/fstab: update the UUIDs for the new arrays and add the one for /boot
      13) Install GRUB using the new root as the root location and install it to the MBR of the SSD (I tend to install it to the MBR of all drives in the array so I can boot off any of them)
      14) Reboot and cross your fingers

      (If you decide to do this I can give you more details on the commands you need to run; a rough sketch of steps 5-8 follows.)
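
      For the RAID setup itself (steps 5-8), a minimal sketch using mdadm; the device names /dev/sda and /dev/sdb are assumptions, so check yours with lsblk before running anything:

      sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy the partition table to the second SSD
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # RAID 1 for /boot
      mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2   # RAID 0 for the rest
      mkfs.ext4 /dev/md0
      mkfs.ext4 /dev/md1
      mdadm --detail --scan >> /mnt/etc/mdadm/mdadm.conf   # with the new root mounted at /mnt, so the arrays assemble at boot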



        #4
        I support tek_heretik's opinion.

        I am not the expert at RAID deployment that he and james147 are, but I have taught myself everything that I know about RAID.

        I would suggest trying to build a model first using a couple of thumb drives and a USB hub. (That's how I started.)
        Last edited by oznola; Dec 28, 2012, 05:50 AM.
        “The door to the cabinet is to be opened using a minimum of 15 Kleenexes.” ~Howard Hughes

        Linux 3.5.0-21-generic, KDE 4.9.4, Plasma Netbook,
        Grand Unified Bootloader (Grub) 0.97-29ubuntu66 (Legacy version)

        Dell MINI 9, Intel Dual Core Atom (2x) CPU N270 @ 1.60GHz, 32-bit,
        STEC PATA 32GB SSD on IDE Bus, 2 GB RAM.

        Intel Mobile 945SE Express Integrated Graphics Controller with OpenGL/ES extensions



          #5
          Originally posted by james147
          It is doable, possibly a little fiddly.
          14) Reboot and cross your fingers
          A little fiddly?! There's an understatement, lol.

          Yes, the finger crossing is the important part, heh. The rest made my head spin.



            #6
            Personally, if it were my system, I wouldn't even mess with FakeRAID at all. The only reason to do so is to support Windows on the same system, and you said you're not going back that direction. Linux software RAID is 100 times easier to set up, maintain, and recover, and requires no special skills. We're talking 10 minutes from blank drives to RAID. Backing up and restoring your install will take a lot more work than the RAID will.

            Having said that, I'd do some research on using software RAID on SSDs to be sure you do it right the first time. Block and stripe sizes can make a difference.

            In my case, I dumped RAID altogether in favor of BTRFS. Even easier than software RAID and way more features, but not quite as fast (although way improved since earlier this year). You could even convert your current install without a full backup/restore - but I'd make a backup anyway.
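
            The in-place conversion looks roughly like this (a sketch only; it assumes your root is ext4 on /dev/sda2, a placeholder name, and that you boot a live CD so the filesystem is unmounted):

            fsck.ext4 -f /dev/sda2     # the converter wants a clean filesystem
            btrfs-convert /dev/sda2    # in-place ext4 -> btrfs; keeps a rollback image in ext2_saved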



              #7
              Originally posted by tek_heretik
              A little fiddly?! There's an understatement, lol.

              Yes, the finger crossing is the important part, heh. The rest made my head spin.
              It is basically how to do a manual install of any Linux-based system. More generically, it is:

              1) Format the disks how you want them
              2) Mount the partitions how you want them
              3) Copy or install the system to the new root
              4) Configure the newly installed system (or update the configs from a copied system, e.g. /etc/fstab)
              5) Install GRUB to the MBR (see the sketch below)
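
              For step 5 in concrete terms, a rough sketch of the chroot-and-GRUB part (assuming the layout from my earlier post: new root on /dev/md1, /boot on /dev/md0, SSDs at /dev/sda and /dev/sdb, all placeholders, so adjust to your setup):

              mount /dev/md1 /mnt
              mount /dev/md0 /mnt/boot
              mount --bind /dev /mnt/dev     # GRUB needs these inside the chroot
              mount --bind /proc /mnt/proc
              mount --bind /sys /mnt/sys
              chroot /mnt
              grub-install /dev/sda          # repeat for /dev/sdb so you can boot off either drive
              update-grub                    # regenerates grub.cfg with the new UUIDs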

              I have done this quite a few times with different Linux systems when I want to change the underlying disk topology.

              This is a good chance to learn more about how Linux works, with minimal risk: with the system backed up to the HDD you won't lose anything compared to a clean install, and you can always fall back to a clean install and restore from the HDD if you can't get it working.

              Originally posted by oshunluvr
              Personally, if it were my system, I wouldn't even mess with FakeRAID at all. The only reason to do so is to support Windows on the same system, and you said you're not going back that direction. Linux software RAID is 100 times easier to set up, maintain, and recover, and requires no special skills. We're talking 10 minutes from blank drives to RAID. Backing up and restoring your install will take a lot more work than the RAID will.

              Having said that, I'd do some research on using software RAID on SSDs to be sure you do it right the first time. Block and stripe sizes can make a difference.

              In my case, I dumped RAID altogether in favor of BTRFS. Even easier than software RAID and way more features, but not quite as fast (although way improved since earlier this year). You could even convert your current install without a full backup/restore - but I'd make a backup anyway.
              I also recommend BTRFS; I've been using it here and it is great... and it offers many more features than RAID + ext. You can also convert ext to btrfs in place and then add the other SSD to the btrfs RAID array afterwards without needing to copy the system back and forth (though I do recommend backing up first).
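
              Once converted, adding the second SSD is short (again a sketch; /dev/sdb2 and the /mnt mount point are placeholders):

              btrfs device add /dev/sdb2 /mnt                            # add the second SSD to the pool
              btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt   # stripe data, mirror metadata across both drives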



                #8
                This sounds like an interesting experiment. However, if you are worried at all about your data, a RAID 0 (striped) volume statistically divides your MTBF (mean time between failures) by the number of drives in your stripe. For example, if your drives have an MTBF of 2 years and you use 2 drives, the MTBF is now reduced to 1 year.

                The performance increase from striped volumes comes at the expense of MTBF.
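
                In symbols (assuming independent drives with constant failure rate, so MTBF = 1/lambda): a stripe fails when any one drive fails, so the rates add:

                \lambda_{\text{array}} = n\,\lambda_{\text{drive}}
                \quad\Rightarrow\quad
                \mathrm{MTBF}_{\text{array}} = \frac{\mathrm{MTBF}_{\text{drive}}}{n}

                With n = 2 and a 2-year drive MTBF: 2/2 = 1 year, as above.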

                Just a word of caution.



                  #9
                  Originally posted by andystmartin
                  This sounds like an interesting experiment. However, if you are worried at all about your data, a RAID 0 (striped) volume statistically divides your MTBF (mean time between failures) by the number of drives in your stripe. For example, if your drives have an MTBF of 2 years and you use 2 drives, the MTBF is now reduced to 1 year.

                  The performance increase from striped volumes comes at the expense of MTBF.

                  Just a word of caution.
                  This is why you have a separate HD (not part of the RAID), USB stick, whatever, to store your stuff on until you get a chance to burn it to an optical disc; that's what I do.



                    #10
                    Ditto.

                    Backup, backup, backup.
