1 Reply  Latest reply: May 28, 2016 11:59 AM by Nick Triantos

    File restore from Linux VM

    Daryn DeBoer Newbie

      Can a Single File Restore be done of a Linux VM, and if so, what are the limitations, if any? For example, file system type (ext3/4) or a limit on the number of VMDKs per VM; we have many Linux VMs with 10+ VMDKs. And what's the process, in a nutshell, the same as the Windows demo I've seen? About 2/3 (300 VMs) of our VM environment is Linux and almost all of our file-level restores are from those Linux VMs, so this functionality is a big check mark when shopping around for new storage. Thanks.

        • Re: File restore from Linux VM
          Nick Triantos Wayfarer

          Hi Daryn,

           

          File restore from Linux can be a bit tricky, and the process varies depending on the distro, the filesystem type, and whether or not LVM is involved. The latter can complicate things quite a bit. That said, here are some things that will help, starting with the obvious:

           

          1) Clone the datastore and present it to the vSphere host

          2) Attach the cloned VMDK to the Linux VM and make sure you note the vSCSI adapter number (e.g., 0:1). In this case, let's assume it's 0.

          3) Inside the Linux VM, rescan the SCSI bus to discover the cloned disk:

          echo "- - -" > /sys/class/scsi_host/hostX/scan (hostX = vSCSI adapter number)
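          If you're not sure which hostX corresponds to the new vSCSI adapter, you can list the guest's SCSI hosts and their drivers; the driver names you see (for example mptspi for LSI Logic Parallel or vmw_pvscsi for Paravirtual) depend on the controller type, so treat this as a quick sanity check rather than a required step:

          # for h in /sys/class/scsi_host/host*; do echo "$h: $(cat $h/proc_name)"; done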

          4) Run fdisk -lu /dev/xxx to make sure you can see the partitions.
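          On most modern distros, lsblk is a quick alternative for confirming that the cloned disk and its partitions showed up; /dev/sdc below is only an example name, so substitute whatever device the rescan produced:

          # lsblk /dev/sdc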

          5) If there's no LVM, create a directory and mount the device:

           

          # mkdir /mnt/clone 

          # mount /dev/sdcX /mnt/clone
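          Since a restore only needs to read from the clone, mounting it read-only is a reasonable precaution; this is just an optional variation of the mount above, with the same placeholder device name:

          # mount -o ro /dev/sdcX /mnt/clone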

           

          6) If you use LVM, it gets hairier: the VMDK you're attempting to mount belongs to a volume group whose VG ID and label conflict with those of the original disk. Even outside of virtualization this is a messy situation with Unix volume managers. The way to resolve it is to run:

          vgimportclone -n newvg /dev/sdcX
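          Before moving on, it's worth confirming that the renamed volume group is visible; pvscan and vgs are the standard LVM tools for that, and newvg is simply the name chosen above:

          # pvscan
          # vgs newvg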

          7) Run vgchange and lvdisplay to activate newvg and identify its logical volumes:

           

          # vgchange -ay newvg    (-ay activates all logical volumes in newvg; you can also use lvchange -ay newvg/LVname to activate a specific logical volume)


          # lvdisplay  (lists the Logical volumes)


            --- Logical volume ---
            LV Name                /dev/newvg/LV22
            VG Name                newvg
            LV UUID                kmbb-bn0W-lue6-q7Vn-ikmb3-lkmn3-RBjkgA
            LV Write Access        read/write
            LV Status              available
            LV Size                35.63 GB
            Current LE             5385
            Segments               1
            Allocation             inherit
            Read ahead sectors     auto
            --- Logical volume ---
            LV Name                /dev/newvg/LV23
            VG Name                newvg
            LV UUID                Unm9kl-tkn0-Rlm9-cbnm-1z9om-a9kn-89iklM
            LV Write Access        read/write
            LV Status              available
            LV Size                12.1 GB
            Current LE             238
            Segments               1
            Allocation             inherit
            Read ahead sectors     auto

          Identify your LV, create a directory, and mount it. You should then be able to access the files:
          # mkdir /mnt/LV22 
          # mount /dev/newvg/LV22 /mnt/LV22 
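          From there, copying the files back to their original location completes the restore; rsync preserves ownership and permissions, and the paths below are placeholders for your own:

          # rsync -av /mnt/LV22/path/to/lost/files/ /original/path/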

           

          After you're done, reverse the process.

           

          Unmount the logical volume and remove the cloned VG and PV:

          # umount /mnt/LV22

          # vgremove -f newvg

          # pvremove /dev/sdcX    (sometimes you may need to use -f)

          Then remove the cloned VMDK from the VM and rescan the SCSI bus again:

          # echo "- - -" > /sys/class/scsi_host/hostX/scan
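          If the guest still shows the old device node after the cloned VMDK has been detached, you can drop the stale SCSI device by hand; /dev/sdc is again only an example and should match your cloned disk:

          # echo 1 > /sys/block/sdc/device/delete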

           

          Hope this helps

          Nick