
    HOWTO: Perform a bare metal backup of a system using LVM

    Motivation
    I run a kubuntu system with a hardware RAID mirror setup which protects me from disk failures but does not protect me from
    • Theft of my computer
    • Changes to the OS which may break my system e.g. distribution upgrades
    • User mistakes


    If I ever encountered a problem I did not want to re-install the operating system and redo all my customisations, so I needed a bare metal backup. However, I have complicated my installation by using LVM, and I could not find much information on how to take a backup easily when using LVM. I found /1/ (see below) helpful, but when it comes to backing up a system using LVM it only recommends a full dd backup, e.g.
    Code:
    dd if=/dev/sda of=/backup/sda.dd
    While this would work, my RAID array is 400 GB in size, so the backup would be too large and would include data I do not actually need. I therefore needed an approach where I could back up the OS in pieces, using a tool like rsync for the volumes that do not need an exact dd copy.

    Approach
    I haven't been able to find a definitive list of directories to exclude when taking an online backup, so to keep things simple (for now) the backup is done from a live CD/DVD while the OS is not in use. Below I walk through the commands I use. Although I have scripts, I won't include them here in their entirety, basically because it would be a mistake for someone to run them without a lot of modifications for their personal system. Generalised backup and recovery scripts exist - see /2/ - but I was keen to keep things as simple as possible so I can understand what is going on. In addition, there are some steps in my procedure where you have to run fdisk interactively, so they can't easily be included in scripts. If anyone has any suggestions for improvements I'd be very happy to hear them.

    I do not show any exclusions in the approach below - they will need to be added once you have a successful backup. In fact, I even back up /tmp below.

    Warnings
    Please do not run this approach on a system that you care about unless you have another reliable backup.
    e.g.
    • A RAID 1 disk which you can remove from the array, while the system is shut down, to act as a backup
    • You perform the recovery on a brand new disk with the original disk removed


    Alternatively, run the approach on a test system, or on one which you have just installed and are prepared to re-install if required.

    My system
    I use the Kubuntu version of Ubuntu

    Code:
    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:  Ubuntu 6.10
    Release:    6.10
    Codename:    edgy
    My disk set-up looks as follows
    Code:
    $ sudo fdisk -l
    
    Disk /dev/sda: 400.0 GB, 400087408640 bytes
    255 heads, 63 sectors/track, 48641 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
      Device Boot   Start     End   Blocks  Id System
    /dev/sda1  *      1     12    96358+ 83 Linux
    /dev/sda2       13     377   2931862+ 82 Linux swap / Solaris
    /dev/sda3       378    48641  387680580  5 Extended
    /dev/sda5       378    48641  387680548+ 8e Linux LVM
    So basically I have a boot partition, a swap partition, and everything else in LVM inside the extended partition. This appears to the operating system as a single SATA disk (though in hardware RAID there are two disks acting as a mirror).

    My volumes are mounted as follows, vg1 is my volume group. I have an external volume mounted as /backup to store the backup.
    Code:
    $ df -h
    Filesystem      Size Used Avail Use% Mounted on
    /dev/mapper/vg1-root 9.9G 366M 9.0G  4% /
    varrun        1006M 140K 1006M  1% /var/run
    varlock       1006M   0 1006M  0% /var/lock
    procbususb       10M 112K 9.9M  2% /proc/bus/usb
    udev          10M 112K 9.9M  2% /dev
    devshm        1006M   0 1006M  0% /dev/shm
    /dev/sda1       89M  24M  61M 28% /boot
    /dev/mapper/vg1-home  30G 283M  28G  1% /home
    /dev/mapper/vg1-usermedia
                50G 180M  47G  1% /usermedia
    /dev/mapper/vg1-tmp  9.9G 151M 9.2G  2% /tmp
    /dev/mapper/vg1-usr  9.9G 2.5G 7.0G 26% /usr
    /dev/mapper/vg1-usr_local
               9.9G 176M 9.2G  2% /usr/local
    /dev/mapper/vg1-var  9.9G 1.1G 8.4G 12% /var
    sles1:/backup/kubuntu1
                50G  17G  31G 35% /backup
    Booting a Live CD/DVD

    Every time you start a backup or restore using the Live DVD you will need to do the following. I use the Kubuntu 6.10 AMD64 Live DVD, which unfortunately does not include LVM.

    Set your environment up as you like, if necessary.
    Code:
    # Switch Live CD to GB keyboard
    # Do not run as sudo or root but the user that started X
    setxkbmap gb
    
    # Let root use X
    xhost local:root
    Create a backup directory and mount a share from the machine where you will store your backup, using e.g. NFS, SSHFS or Samba. I used NFS, but it performs slowly with rsync, so for that step I use ssh instead.

    Code:
    mkdir /backup
    <mount to your backup server using what ever approach you want to use>
    Switch to a root shell, or prefix the steps below with sudo
    Code:
    sudo -i
    Install LVM
    Code:
    apt-get -y install lvm2
    Test pvcreate to see if it works
    Code:
    # What we should see is
    pvcreate --help
     pvcreate: Initialize physical volume(s) for use by LVM
    
    pvcreate
        [--restorefile file]
        [-d|--debug]
        [-f[f]|--force [--force]]
        [-h|-?|--help]
        [--labelsector sector]
        [-M|--metadatatype 1|2]
        [--metadatacopies #copies]
        [--metadatasize MetadataSize[kKmMgGtT]]
        [--setphysicalvolumesize PhysicalVolumeSize[kKmMgGtT]
        [-t|--test]
        [-u|--uuid uuid]
        [-v|--verbose]
        [-y|--yes]
        [-Z|--zero {y|n}]
        [--version]
        PhysicalVolume [PhysicalVolume...]
    If you see
    Code:
    # pvcreate --help
    # No program "pvcreate" found for your current version of LVM
    Then your kubuntu installation has the same bug I encountered - see /3/. Fix it with:
    Code:
    ln -s /lib/lvm-200 /lib/lvm-0
    Code:
    # Load the device-mapper module for LVM
    modprobe dm-mod
    # Make all logical volumes available
    vgchange -ay
    
    # We see:
    # 7 logical volume(s) in volume group "vg1" now active
    
    # Create the mount points and mount all the logical volumes
    mkdir -p /LVM/home /LVM/usermedia /LVM/root /LVM/tmp /LVM/usr /LVM/usr_local /LVM/var
    mount /dev/vg1/home      /LVM/home
    mount /dev/vg1/usermedia /LVM/usermedia
    mount /dev/vg1/root      /LVM/root
    mount /dev/vg1/tmp       /LVM/tmp
    mount /dev/vg1/usr       /LVM/usr
    mount /dev/vg1/usr_local /LVM/usr_local
    mount /dev/vg1/var       /LVM/var
    Backup Procedure

    Now that we have mounted the LVM volumes above, we cycle round the volumes and back each one up using any approach we want; I use rsync. It was very important to use the option --numeric-ids - without it I could not boot after my restore. Note that I have hard-coded an SSH path to my backup server, because going through the NFS /backup mount took too long.

    I do not know whether I have to use the -H option - if anyone can advise, I would be grateful.

    Code:
    BASE_SOURCE=/LVM
    # Directories to backup. Separate with a space. Exclude trailing slash.
    SOURCES="home usermedia root tmp usr usr_local var"
    
    # Directory to backup to. This is where your backup(s) will be stored.
    # :: NOTICE :: -> Make sure this directory is empty or contains ONLY backups created by
    #                 this script and NOTHING else. Exclude trailing slash.
    TARGET="/backup/set1/volumes"
    
    # Comment out the following line to disable verbose output
    VERBOSE="-v"
    
    echo "Verifying Sources..."
    for source in $SOURCES; do
      echo "Checking $BASE_SOURCE/$source..."
      if [ ! -x "$BASE_SOURCE/$source" ]; then
        echo "Error with $BASE_SOURCE/$source!"
        echo "Directory either does not exist, or you do not have proper permissions."
        exit 2
      fi
    done
    echo "Sources verified. Running rsync..."
    
    for source in $SOURCES; do
    
      # Create directories in $TARGET to mimic the source directory hierarchy
      if [ ! -d "$TARGET/$source" ]; then
        mkdir -p "$TARGET/$source"
      fi
    
      rsync $VERBOSE -a -H --numeric-ids --delete -e ssh "$BASE_SOURCE/$source/" root@192.168.1.4:/backup/kubuntu1/set1/volumes/$source/ 2> temp_system_backup_$source.err > system_backup_$source.log
      errorCode=$?
      echo "rsync return code: $errorCode"
    
      if [ ${errorCode} -gt 0 ]; then
        echo
        echo "Errors were:"
        more temp_system_backup_$source.err
        cat temp_system_backup_$source.err >> system_backup.err
      fi
    
    done
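    On the -H question: as far as I can tell it matters whenever the tree being copied contains hard links (and /usr often does) - without -H, rsync stores each link as an independent file, losing space and the link relationship on restore. A small throwaway demo (nothing here touches the real volumes; the rsync step is skipped if rsync is not installed):

    ```shell
    # Demo: hard-linked files keep their shared inode only when rsync is
    # given -H. Uses a scratch directory, so it is harmless to try.
    demo=$(mktemp -d)
    echo data > "$demo/a"
    ln "$demo/a" "$demo/b"              # a and b now share one inode
    stat -c %h "$demo/a"                # prints 2 (two links to the inode)
    if command -v rsync >/dev/null; then
        rsync -aH "$demo/" "$demo-copy/"
        stat -c %h "$demo-copy/a"       # still 2 with -H; 1 without it
    fi
    rm -rf "$demo" "$demo-copy"
    ```
    
    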
    With the main data backed up, back up the other partitions using dd

    Code:
    # Backup MBR
    dd if=/dev/sda of=/backup/set1/mbr bs=512 count=1
    
    # Backup boot partition
    dd if=/dev/sda1 of=/backup/set1/sda1.dd
    
    # Optional step - backup swap - either the whole lot if you want to or we can re-create this during recovery
    dd if=/dev/sda2 of=/backup/set1/sda2.dd
    
    # Backup Extended partition
    dd if=/dev/sda3 of=/backup/set1/sda3.dd
    
    # Most importantly - backup LVM configuration
    cp /LVM/root/etc/lvm/backup/vg1 /backup/set1
    
    # Save fdisk output - for information purposes
    fdisk -l > /backup/set1/fdisk.txt
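    As a possible belt-and-braces addition (not part of my original procedure): sfdisk can dump the partition table as editable text, which is easier to inspect than the raw mbr image. The sda.sfdisk filename is just my suggestion:

    ```shell
    # Optional: save the partition layout as text alongside the raw MBR copy
    sfdisk -d /dev/sda > /backup/set1/sda.sfdisk
    # It can be put back later (on a disk of at least the same size) with:
    # sfdisk /dev/sda < /backup/set1/sda.sfdisk
    ```
    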
    Tidy up by unmounting

    Code:
    umount /LVM/home
    umount /LVM/usermedia
    umount /LVM/root
    umount /LVM/tmp
    umount /LVM/usr
    umount /LVM/usr_local
    umount /LVM/var
    
    umount /backup
    Trashing a hard-disk
    If you are going to restore to the hard-disk you were previously using, you will need to remove what was on it. The following approach is taken from /1/.
    For my hard-disk:
    Code:
    echo Before
    fdisk -l
    
    dd if=/dev/zero of=/dev/sda bs=512 count=1
    dd if=/dev/zero of=/dev/sda1 bs=1024k count=50
    dd if=/dev/zero of=/dev/sda2 bs=1024k count=50
    dd if=/dev/zero of=/dev/sda3 bs=1024k count=50
    dd if=/dev/zero of=/dev/sda5 bs=1024k count=50
    
    echo After
    fdisk -l
    Recovery Procedure
    After your Live DVD boots, prepare things as we did above in the backup procedure, but don't mount the LVM partitions, as they don't exist yet.

    Restore MBR
    Code:
    dd if=/backup/set1/mbr of=/dev/sda bs=512 count=1
    At this point fdisk -l should show partitions sda1, sda2 and sda3.
    The warning
    Warning: invalid flag 0x0000 of partition table 5 will be corrected by w(rite)
    can be removed by a manual step

    Code:
    fdisk /dev/sda
    and writing the partition table with the command w
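    Incidentally, the interactive fdisk steps can probably be scripted after all: fdisk reads its single-letter commands from stdin, so the write step can be piped in. A sketch, tried against a throwaway image file rather than the real disk (verify interactively first before trusting it on /dev/sda):

    ```shell
    # fdisk accepts its commands on stdin, so "enter fdisk and press w"
    # can be scripted. Demonstrated on a scratch image file:
    img=$(mktemp)
    dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null
    printf 'o\nw\n' | fdisk "$img" >/dev/null 2>&1   # o = new DOS table, w = write
    rm -f "$img"
    # Against the real disk the equivalent would be:  printf 'w\n' | fdisk /dev/sda
    ```
    
    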
    Restore boot partition
    Code:
    dd if=/backup/set1/sda1.dd of=/dev/sda1
    Either restore swap
    Code:
    dd if=/backup/set1/sda2.dd of=/dev/sda2
    or, alternatively and much more quickly, re-create the swap
    Code:
    # mkswap /dev/sda2
    It will report something like
    Code:
    #Setting up swapspace version 1, size = 3002220 kB
    #no label, UUID=a8bc1fd6-bcdd-4abd-971f-cfc0989e14b4
    Record the UUID as we will need it later on.
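    If you forget to record the UUID when mkswap prints it, nothing is lost: blkid can read it back from the swap signature at any time. A sketch, shown on a scratch file instead of the real /dev/sda2:

    ```shell
    # blkid re-reads the UUID that mkswap stamped on the device.
    # Demonstrated on a throwaway file; on the real system:  blkid /dev/sda2
    img=$(mktemp)
    dd if=/dev/zero of="$img" bs=1M count=1 2>/dev/null
    mkswap "$img" >/dev/null 2>&1
    blkid -s UUID -o value "$img"       # prints the UUID to use in fstab
    rm -f "$img"
    ```
    
    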

    Restore Extended partition
    Code:
    dd if=/backup/set1/sda3.dd of=/dev/sda3
    You should now see the "8e Linux LVM" partition as sda5.

    At this point we have another manual step: enter fdisk and write the partition table again. This is necessary for pvcreate to use the partitions below.

    Re-create the LVM physical volume
    Find the physical volume UUID in the vg1 restore file, in the section "pv0 { id = ..."

    Why we have to do this when the UUID is already in the file I don't know, but pvcreate requires it on the command line.

    Code:
    UUID="lh9seu-HGWf-5ZkP-yF52-LKqu-xFey-tFWkqe"
    pvcreate --uuid $UUID --restorefile /backup/set1/vg1 /dev/sda5
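    Rather than retyping the UUID by hand, it can be pulled out of the restore file with sed. A sketch, demonstrated on an inline sample of the pv0 section (point the same sed at /backup/set1/vg1 for real):

    ```shell
    # Extract the pv0 UUID from an LVM metadata backup so it does not have
    # to be copied by hand. A sample pv0 section is written to a scratch
    # file here; in the real procedure the file is /backup/set1/vg1.
    cat > /tmp/vg1.sample <<'EOF'
    pv0 {
    id = "lh9seu-HGWf-5ZkP-yF52-LKqu-xFey-tFWkqe"
    device = "/dev/sda5"
    }
    EOF
    UUID=$(sed -n '/pv0 {/,/}/ s/.*id = "\(.*\)"/\1/p' /tmp/vg1.sample)
    echo "$UUID"    # prints the value to pass to pvcreate --uuid
    rm -f /tmp/vg1.sample
    ```
    
    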
    And restore the volume group
    Code:
    vgcfgrestore --file /backup/set1/vg1 vg1
    Code:
    # Add LVM
    modprobe dm-mod
    # Make all Logical volumes available
    vgchange -ay
    
    # Format all the logical volumes
    mkfs.ext3 /dev/vg1/home
    mkfs.ext3 /dev/vg1/usermedia
    mkfs.ext3 /dev/vg1/root
    mkfs.ext3 /dev/vg1/tmp
    mkfs.ext3 /dev/vg1/usr
    mkfs.ext3 /dev/vg1/usr_local
    mkfs.ext3 /dev/vg1/var
    Mount all the logical volumes
    Code:
    mkdir -p /LVM/home /LVM/usermedia /LVM/root /LVM/tmp /LVM/usr /LVM/usr_local /LVM/var
    mount /dev/vg1/home      /LVM/home
    mount /dev/vg1/usermedia /LVM/usermedia
    mount /dev/vg1/root      /LVM/root
    mount /dev/vg1/tmp       /LVM/tmp
    mount /dev/vg1/usr       /LVM/usr
    mount /dev/vg1/usr_local /LVM/usr_local
    mount /dev/vg1/var       /LVM/var
    Now we need to restore the data just as we did when we backed up the data but with the source and destination reversed.

    Code:
    BASE_SOURCE=/backup/set1/volumes
    # Directories to restore from. Separate with a space. Exclude trailing slash.
    SOURCES="home usermedia root tmp usr usr_local var"
    
    # Directory to restore to. Exclude trailing slash.
    TARGET="/LVM"
    
    # Comment out the following line to disable verbose output
    VERBOSE="-v"
    
    echo "Verifying Sources..."
    for source in $SOURCES; do
      echo "Checking $BASE_SOURCE/$source..."
      if [ ! -x "$BASE_SOURCE/$source" ]; then
        echo "Error with $BASE_SOURCE/$source!"
        echo "Directory either does not exist, or you do not have proper permissions."
        exit 2
      fi
    done
    echo "Sources verified. Running rsync..."
    
    for source in $SOURCES; do
    
      rsync $VERBOSE -a -H --numeric-ids -e ssh root@192.168.1.4:/backup/kubuntu1/set1/volumes/$source/ "$TARGET/$source/" 2> temp_system_restore_$source.err > system_restore_$source.log
      errorCode=$?
      echo "rsync return code: $errorCode"
    
      if [ ${errorCode} -gt 0 ]; then
        echo
        echo "Errors were:"
        more temp_system_restore_$source.err
        cat temp_system_restore_$source.err >> system_restore.err
      fi
    
    done
    Change the UUID for the swap (if we re-created it) in fstab, using the value we recorded above, e.g.
    Code:
    sudo vim /LVM/root/etc/fstab
    It is possible that you might need to change file /etc/initramfs-tools/conf.d/resume - see /4/. I didn't because my resume file did not hold a UUID.

    Mine was
    Code:
    RESUME=/dev/sda2
    which makes things simpler.

    Tidy-up
    Code:
    umount /LVM/home
    umount /LVM/usermedia
    umount /LVM/root
    umount /LVM/tmp
    umount /LVM/usr
    umount /LVM/usr_local
    umount /LVM/var
    
    umount /backup
    We are now done. Re-boot and you should have a recovered system. Good luck.

    /1/ Backup & Recovery, W Curtis Preston, O'Reilly
    /2/ http://www.faqs.org/docs/Linux-HOWTO...ery-HOWTO.html
    /3/ https://bugs.launchpad.net/ubuntu/+s...vm2/+bug/96802
    /4/ https://launchpad.net/ubuntu/+bug/66637/comments/23

    #2
    Re: HOWTO: Perform a bare metal backup of a system using LVM

    I recently needed to change the disk in my PC and re-used this approach. I should make the following points.

    Using the Karmic Koala Live CD, I no longer hit the lvm2 bug I previously experienced.

    The rsync options I'm now using for both backup and restore are as follows:

    Code:
    EXCLUDE_FILE=/backup/bin/system_backup_excludes.txt
    EXCLUDE="--exclude-from=$EXCLUDE_FILE"
    # Comment out the following line to disable verbose output
    VERBOSE="-v"
    
      # Rsync options:
      #  -a, --archive       archive mode; equals -rlptgoD (no -H,-A,-X)
      #  -H, --hard-links    preserve hard links
      #  -A, --acls          preserve ACLs (implies -p)
      #  -X, --xattrs        preserve extended attributes - not in version 2.6.9
      #  --numeric-ids       don't map uid/gid values by user/group name
      rsync $VERBOSE $EXCLUDE -aHAX --numeric-ids --delete $BASE_SOURCE/$source/ $TARGET/volumes/$source/ 2> $TARGET/logs/temp_system_backup_$source.err > $TARGET/logs/system_backup_$source.log
      errorCode=$?
    The only directory I exclude is /lost+found. I found that rsync in Karmic Koala works fine, while the version on the Hardy Heron Live CD (rsync 2.6.9) could not cope with the block device files.

    I found that the use of UUIDs complicated re-creating swap, so I now back the whole thing up
    Code:
    # Backup swap rather than re-create it and avoid needing a new UUID which is difficult to add
    # to the initrd image file: conf/conf.d/resume
    dd if=/dev/sda2 | gzip -c5 > $TARGET/sda2.dd.gz
    and restored it with:
    Code:
    gunzip -c $SOURCE/sda2.dd.gz | dd of=/dev/sda2
    If you find that some LVs were bigger than you really needed, recovery is a convenient time to reduce them, just before you format them with mkfs.ext3:

    Code:
    lvreduce -L 500M /dev/vg1/usermedia
    If you are doing recovery to a disk which is bigger than your previous disk then one of the easiest ways to use that extra space is as follows.

    After you've confirmed the system has been restored okay you can boot from a live CD once again

    • Extend the Extended partition (sda3 in my case) using gparted, etc, so that it uses the free bit of disk space at the end of the disk

    • Create a new logical partition in Extended with fdisk
      Use key presses for
      - n - create partition
      - t - change the system ID to 8e (Linux LVM)

    • Re-boot

    • Create a new physical volume from the new partition
    Code:
    sudo pvcreate /dev/sda6

    • Make Linux scan for any new LVM disk partitions

    Code:
    sudo vgscan

    • Add PV to the VG (vg1)

    Code:
    sudo vgextend vg1 /dev/sda6
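    One step I would add here (an assumption on my part, not something covered above): after vgextend the new space is still unallocated, so grow whichever LV you want it in and then grow that LV's filesystem. The +10G figure is just an example amount:

    ```shell
    # Sketch: hand the new VG space to a logical volume and its filesystem
    sudo lvextend -L +10G /dev/vg1/usermedia
    sudo resize2fs /dev/vg1/usermedia     # grow the ext3 filesystem to match the LV
    ```
    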


    I hope this clears up a few things from my original post.
