
To perform this install you will need the Alternate ISO image and the text-based installer. The updates here cover both an initial install onto a new RAID and installing onto an existing RAID.

For a New Installation

See below for an existing RAID install; the process is pretty similar. (This was originally a 5.x process.) This walkthrough was done on 6.x. I'm writing it from memory, but it should be pretty close; it was a really simple process on 6.x.

Expectations

  • You know how to partition and set up your drives - this won't prescribe a drive layout, but it covers some necessities.
  • You have burned a copy of the Alternate ISO and are using the text-based installer.

Drive Setup

  • In my example I have 4 drives and a CDROM, no floppy.
  • I used one drive for /boot and swap, with 50MB for /boot and 1GB minus 50MB for swap (performance is better with swap on a separate drive)
  • You can create primary partitions of equal size on your drives to make them ready for the RAID setup. Use any excess space as you see fit, or just leave it wasted.
  • I'm using RAID 5 on 3 drives, intended to mount as / (root)
  • My drives were ~10GB, so some space was wasted.
  • With a typical setup some drives are master and some are slave; at this point that doesn't matter in terms of whether it will work.
  • I booted the live CD and partitioned my drives with GParted in the GUI before the install. I recommend this for first-time installers.
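If you prefer the command line to GParted, the equal-size RAID partitions described above can be scripted. This is a dry-run sketch: it only prints the commands, and the device names /dev/hdb through /dev/hdd are examples, not part of the original instructions; running sfdisk for real destroys all data on the target disk.

```shell
# Dry-run: echo the partitioning commands instead of executing them.
plan=$(for d in /dev/hdb /dev/hdc /dev/hdd; do
  # one primary partition spanning the whole disk, type fd (Linux raid
  # autodetect), marked bootable (the trailing *)
  echo "echo ',,fd,*' | sfdisk $d"
done)
printf '%s\n' "$plan"
```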

Install

  • Boot from the Ubuntu 6.x Alternate ISO
  • Follow the directions to the drive setup and choose manual partitioning
  • Set up /boot and swap in the manual setup, and mark /boot as bootable. I formatted /boot as ext2
  • Choose the Configure RAID option
  • Tell it how many disks you have and then select them from the menu.
  • The RAID device (md0) is now in the list => open it.
  • Set md0 as / (root) with an ext3 filesystem, and say yes to formatting it.
  • Select Finish
  • The install should continue just like a normal setup. Really nice.
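After the first boot, a couple of read-only checks will confirm the array came up (assuming the installer named it /dev/md0, as in the steps above):

```shell
# Read-only checks; /dev/md0 is the array name the installer used above.
if [ -r /proc/mdstat ]; then
  mdstatus=$(cat /proc/mdstat)   # look for a line like "md0 : active raid5 ..."
else
  mdstatus="no md arrays visible on this system"
fi
printf '%s\n' "$mdstatus"
# once the array exists, "mdadm --detail /dev/md0" shows per-disk state
```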

Ubuntu 5.10

Installing onto a Software RAID array, and using LVM for all of your volumes (including root and boot), is well supported in Ubuntu 5.10 (Breezy Badger). You will be able to do almost everything you need to do directly from the Ubuntu installer. This page describes how to do this, as well as any gotchas you might run into. Also included is a section on how to migrate a machine from a different Linux distro that is already running Software RAID.

If you are doing this on an earlier Ubuntu release, it is much more difficult. The directions at Installation/RAID1 may help.

Expectations/Caveats

  • All partitions will be on the raid array
  • All partitions will be LVM volumes
  • Swap will be on LVM/RAID
    • This probably incurs a small performance penalty during swap-outs
    • If swap weren't on RAID, however, a single drive failure could take the system down
  • Root/Boot will be on LVM/RAID
    • No need to manually synchronize your root partitions between your RAID-1 drives
  • In the case of a drive failure while the system is powered down, you will be able to boot off either drive (assuming your BIOS can boot from either drive)
  • GRUB will not work; LILO is the only viable bootloader (at this time)
  • IMPORTANT - If you load evmsgui or evmsn, it will get confused and think it has found an LVM partition on /dev/hdX, offset by a few bytes. You do not have an LVM partition on your raw block devices; you have it on /dev/md0. You need to configure EVMS so that this doesn't happen.

  • This guide has only been tested on an i686 install
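One way to handle the EVMS confusion mentioned above is to tell EVMS not to scan the raw member disks, so it only discovers the LVM volumes through /dev/md0. The fragment below is a sketch of the /etc/evms.conf syntax from memory; the section layout may differ between EVMS versions (check the commented sample file your version ships), and the disk names are examples only:

```
# /etc/evms.conf (fragment) - keep EVMS away from the raw RAID member disks
sysfs_devices {
        include = [ * ]
        exclude = [ hda hdc ]   # example names: the disks that back /dev/md0
}
```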

Installing over an Existing RAID Array

This is what I actually did while writing these directions. Obviously I expect my directions to work for a scratch install, but that's not what I actually did. I was upgrading from a Debian Woody system with RAID-1 (a two-drive mirror), EVMS for volume control, and root/boot and swap on non-RAID. I decided that I didn't want to 'upgrade' my Debian install to Ubuntu; I wanted to destroy it and rebuild it. However, I didn't want to risk losing all of my data. What I decided to do was yank one of the drives from the array and keep it in reserve. I would do a full install on the first drive; once I confirmed everything was working, I would reattach the drive kept in reserve, copy my old drive's data over, and then add the reserve drive to the RAID-1 array. If I had wanted to use my old EVMS volumes and partitions, I believe I could have, but I wanted to get root/boot + swap onto RAID and LVM. I have embedded the directions for installing over an existing RAID array into the main directions; the directions associated with this type of upgrade are marked with a (i) icon. Although these directions should allow you to avoid backing up your data, if you care about your data at all, you will have a second backup available in case something goes wrong.

Installation

These directions can be easily tweaked for any RAID level. I'll explain as if you're doing RAID-1. If you're doing something else, you're probably smart enough to tweak the directions yourself. Any directions marked with (i) are for the Existing RAID array upgrade. You need to ignore these if you are doing a fresh install.

  • Install both drives into the machine
    • (i) Actually, disconnect one of the drives from the machine. You should pull the drive that is second in the bootup process. This will mean you don't have to wait for the entire drive to sync before you can boot later on in the process. You will proceed with the full install on one drive, then insert the second drive later. This allows you to always revert to your old mirror if you have to (assuming your old stuff worked!)

  • Boot off of the Ubuntu installation CD
    • Server or desktop install is fine, choose as appropriate
  • When it prompts you to format the drive or manually partition, choose manual partitioning

  • Delete any and all existing partitions on both drives
    • (i) You only have one drive installed, so just clear out this drive. You'll do the other one later

  • Create a new primary partition. Use the entire disk. Make it a RAID partition.
    • Mark it as bootable
  • Choose the Configure RAID option
    • It'll probably ask you to write your partition table. Go ahead
  • Create your MD device, choose your raid level, and tell it how many disks you have
    • (i) Configure as RAID-1. When asked how many disks you have, tell it two, even though your second disk is sitting on your desk. Your raid array will be degraded (since it's missing the disk sitting on your desk)
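What the installer sets up in this step can also be done by hand with mdadm, using its literal keyword "missing" for the absent disk. A dry-run sketch (the command is printed, not run, and /dev/hda1 is an example device):

```shell
# Dry-run: echo the command rather than running it (creating an array
# overwrites the member's RAID superblock area).
# "missing" is mdadm's placeholder for the disk sitting on your desk.
create_cmd="mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 missing"
echo "$create_cmd"
```

The resulting array starts in degraded mode, exactly like the installer's two-disk array with one disk disconnected.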

  • Add devices to the MD device as appropriate
  • Select Finish in the Configure RAID screen
  • Set the partition of your raid device as a physical volume for LVM
  • Choose Configure LVM
    • At this point I got an error from the installer, something like "error informing kernel - invalid argument" for /dev/md/0p1.
    • I rebooted to let everything get synced to disk, and tried again. Same error. I just ignored it. Everything was okay. I don't know if the reboot was necessary. Probably not.
    • Create your volume group (I used vg1)
      • Use /dev/md0 as the PV
    • Create your LVs
      • I don't make a partition for boot, I just put it on root. (Since there's no need for a separate boot)
    • Don't forget swap
    • Select Finish in the Configure LVM screen
  • Configure your partitions: make ext3 filesystems on each of the volumes and mount them in the appropriate places (/, /usr, /var, /home), and set up the swap volume as swap
  • Write the partitions (ignore the kernel error)
  • The installation should continue normally, and everything should work just dandy.
  • The installer will automatically use LILO for you instead of GRUB
  • You are now complete; you should test your RAID array for durability (e.g. disconnect drives and make sure you can boot off both) before you trust it
  • (i) You are not done yet.
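For reference, the installer's LVM menu steps above map roughly onto these command-line equivalents. This is a dry-run sketch (printed, not executed); only vg1 and /dev/md0 come from the text, while the LV names and sizes are examples:

```shell
# Dry-run: print the command-line equivalents of the installer's LVM menus.
lvm_plan="pvcreate /dev/md0
vgcreate vg1 /dev/md0
lvcreate -L 8G -n root vg1
lvcreate -L 1G -n swap vg1
mkfs.ext3 /dev/vg1/root
mkswap /dev/vg1/swap"
printf '%s\n' "$lvm_plan"
```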

  • (i) Once you've finished your install, get your system all working and ready to copy in your old data

  • (i) Hook up your old RAID-1 drive, and boot off of an Ubuntu live CD.

  • (i) IMPORTANT - you must boot from a live CD. Do not boot your system with both drives hooked up; this may result in data loss.

  • (i) Once booted on the live CD, you should mount your new logical volumes, and mount your old logical volumes.

  • (i) Copy data from your old LVs to your new LVs as appropriate
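From the live CD, activating both volume groups and copying the data over might look like the following dry-run sketch (printed, not executed). vg1 comes from the install above; "oldvg", the mount points, and the copied directory are example names:

```shell
# Dry-run sketch of the live-CD copy step; commands are printed, not executed.
copy_plan="vgscan
vgchange -ay                    # activate both the old and the new volume groups
mkdir -p /mnt/new /mnt/old
mount /dev/vg1/root /mnt/new    # new LV, from the install above
mount /dev/oldvg/root /mnt/old  # example name for the old volume group
cp -a /mnt/old/home /mnt/new/   # copy whatever data you need to keep"
printf '%s\n' "$copy_plan"
```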

  • (i) Continue with these directions only when you are prepared and ready to destroy your old disk

  • (i) fdisk your old drive, delete all partitions (fdisk /dev/hde)

  • (i) Create a single primary partition using the whole disk, and mark it as type FD (Linux raid autodetect)

  • (i) write the partition table

    • (i) Most likely it'll tell you to reboot. Reboot, and boot off of the live CD again

  • (i) Add the new drive to the RAID array

    • (i) mdadm /dev/md0 --add /dev/hde1 (or whatever your block device+partition is)

  • (i) Your devices will start resyncing immediately. If you yanked the second drive in the bootup process, feel free to reboot anytime and boot up regularly, since it'll boot off of the working drive first.
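The last few (i) steps condense into the following dry-run sketch (printed, not executed; /dev/hde comes from the text, and the real sfdisk and mdadm runs would be destructive):

```shell
# Dry-run: print the re-add sequence instead of executing it.
readd_plan="echo ',,fd,*' | sfdisk /dev/hde   # one whole-disk partition, type fd
mdadm /dev/md0 --add /dev/hde1                # attach it to the degraded array
cat /proc/mdstat                              # watch the resync progress"
printf '%s\n' "$readd_plan"
```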


CategoryInstallation

Installation/LVMOnRaid (last edited 2009-07-13 14:23:39 by kmitch87)