

Introduction

This page will walk you through the process of installing Ubuntu onto three or more hard drives so that Ubuntu is installed onto a RAID1 root and boot device, uses a RAID0 swap device, and has two RAID5 devices and one RAID1 device for storage, all combined into an LVM (Logical Volume Manager) device. It is not for the particularly faint of heart: this guide assumes you know a good deal about what is going on with computers in general, but not necessarily with Ubuntu.

Installing Ubuntu

First, download and boot from the 7.04 alternate installation CD. Select your language, keyboard and other settings, then choose to manually partition your hard drives.

For my installation, I have 4 hard drives: a 120GB SATA drive (sdb), a 400GB SATA drive (sda), a 500GB SATA drive (sdc), and a 200GB EIDE drive (hda). The guide will be using these drives and devices, so you will need to modify it as appropriate for your setup.

I want to install the boot and root areas onto a RAID1 partition that spans three of the drives (so that my RAID5 will be working with the same amount of space occupied on all the drives), and the swap area onto a RAID0 (since there is no point in redundancy for swap). I will then use RAID5 and RAID1 to make the most of the leftover space, so that each storage device has one disk's worth of redundancy.

Partitioning the drives

First, initialize the partition tables of all the drives you will use. Select the three that you want to have the root/boot and swap space on (sda, sdb, and sdc in my case). Create two new partitions on each drive, and set the 'use as' to 'physical volume for RAID'. For the root and boot area, I am making a 15GB partition. Be _sure_ to set the boot flag to yes on this partition. For the swap space, I am making a 1GB partition.

So, at this point, my drives look like this:

  • HDA - unpartitioned (200GB free)
  • SDA - 15GB RAID partition (boot flag set), 1GB RAID partition, 384GB free
  • SDB - 15GB RAID partition (boot flag set), 1GB RAID partition, 104GB free
  • SDC - 15GB RAID partition (boot flag set), 1GB RAID partition, 484GB free

Since I want to use all the space that I can on these drives, I find the smallest available free space, which is the 104GB free on SDB. I will make a RAID5 that uses this space together with the two largest free areas. Because I need to place the RAID partitions on extended partitions on SDA and SDC (so that I can create more than 4 partitions on those drives), I will make the partition extended on SDB as well, for consistency.

So, I create a 104GB logical partition on sda, sdb, and sdc, and set its 'use as' to 'physical volume for RAID'.

This leaves me with SDB now completely used, 200GB free on HDA, 280GB free on SDA, and 380GB free on SDC. Now we select the smallest free space and repeat the same process as above. So, I create a 200GB logical partition on HDA, SDA, and SDC and set it to raid.

This leaves me with just 80.1GB free on SDA, and 180GB free on SDC. I'll set these up as a RAID1, and leave 100GB unpartitioned on SDC. Create a logical 80.1GB partition on both and set it to RAID. Now, select set up software RAID (you will have to write the partition changes.)
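If you want to sanity-check the resulting layout from a shell (for example from a LiveCD, or from the installed system later), something like the following should show the RAID partitions on each drive. The device names match my example layout, so adjust them for yours:

  sudo fdisk -l /dev/sda /dev/sdb /dev/sdc /dev/hda
  # or, one drive at a time:
  sudo parted /dev/sda print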

Setting up the software RAID

Using the partitioning scheme from above, we've allocated space for a RAID1 root and boot device, a RAID0 swap device, a RAID5 storage device, another RAID5 storage device, and a RAID1 storage device. We'll now set these up.

First, the root/boot partition. Select 'create MD device', then RAID1. (My computer took a minute to read the device list at this point, so there may be a delay.) Change the active devices from 2 to 3, and the spare devices to 0. Select our devices, which are /dev/sda1, /dev/sdb1, and /dev/sdc1.

Now, create a RAID0 for the swap. Create MD -> RAID0. Select our devices, which are /dev/sda2, /dev/sdb2, and /dev/sdc2.

And now we'll create the two RAID5 devices and the remaining RAID1, just like before. Our first RAID5 uses /dev/sda5, /dev/sdb5, and /dev/sdc5; again, set active devices to 3 and spares to 0. The second RAID5 uses /dev/hda5, /dev/sda6, and /dev/sdc6. Finally, create a RAID1 on /dev/sda7 and /dev/sdc7.
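For reference, this is roughly what the installer is doing behind the scenes, expressed as mdadm commands. The /dev/mdX numbers are an assumption (the installer assigns them in order), and the partition names match my example layout; you do not need to run these during a normal install:

  # root/boot: RAID1 across three 15GB partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # swap: RAID0 across three 1GB partitions
  mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
  # first storage RAID5 (the 104GB partitions)
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5
  # second storage RAID5 (the 200GB partitions)
  mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/hda5 /dev/sda6 /dev/sdc6
  # storage RAID1 (the 80GB partitions)
  mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sda7 /dev/sdc7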

Hit finish. Now, we will finish setting up the root and swap partitions, and set our various storage RAIDs up as a LVM device so that we end up with one large device for storage.
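If you want to verify the arrays before moving on, you can switch to a second console with Alt+F2 (or check later from the running system) and look at the kernel's RAID status; drop sudo if you are already root, as you will be in the installer's console:

  cat /proc/mdstat
  # more detail on a single array, e.g. the root/boot RAID1:
  sudo mdadm --detail /dev/md0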

Finishing the partitioning and setting up LVM

Now, go to the 15GB RAID1, and select the unpartitioned space. Change 'use as' to Ext3, select yes to formatting, mount point to /, and label it 'System'. Select the 3GB RAID0 and change 'use as' to swap space.

Go to all of the other RAID devices and set the 'use as' to LVM device. Now, select configure logical volume manager from the top of the list. You have to write your changes and let formatting occur, so you get a small break. In the LVM configuration, create a volume group, and label it (I called mine 'raid'). Add all of the physical volumes to it. Select finish.
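Under the hood, this LVM step boils down to commands like the following. This is only a sketch, assuming the three storage arrays came up as /dev/md2, /dev/md3, and /dev/md4 and that the volume group is named 'raid':

  # mark each RAID device as an LVM physical volume
  pvcreate /dev/md2 /dev/md3 /dev/md4
  # create the volume group from those physical volumes
  vgcreate raid /dev/md2 /dev/md3 /dev/md4
  # confirm the group and its total size
  vgdisplay raid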

Select 'finish partitioning.' At this point, I had to go back and redefine the RAID1 as root and the RAID0 as swap using the above steps, so you may need to do this also. Select finish partitioning and write changes.

Finishing the installation

Configure the time zone, set the system clock to UTC, and create a user. At this point the installation process will copy all of your files.

Select the video modes to use, which depends on your monitor/graphics card. If you don't know, stick with the defaults.

At this point, you have finished the actual installation and created 3 storage devices: a 15GB system RAID1, a 3GB swap RAID0, and a roughly 640GB LVM volume group consisting of 2 RAID5s and 1 RAID1.

Take a small break to pat yourself on the back, because you now have roughly 655 gigabytes of usable storage that will persist even if one hard drive fails completely.

Two tasks remain: setting up GRUB to boot from each drive in the RAID1 array, so that if your primary drive fails there will be no noticeable difference in operation, and creating a logical volume from the LVM volume group that you can format.

Configuring GRUB

At this point, I ended up having to boot my system from a regular Ubuntu LiveCD, because I had some issues getting Ubuntu to boot into my new setup. It's entirely possible (likely, even) that this was because I was attempting to boot from the wrong drive, but you may run into the issue for other reasons. So, try to reboot into your brand new install, and if that fails, reboot using a regular Ubuntu LiveCD. (If your reboot works without the LiveCD, edit this and let us know. Thanks.)

After rebooting, open up a terminal and run  sudo grub  to open the GRUB console. From here, we want to install GRUB into the master boot record (MBR) of each drive that contains our root file system, for redundancy. To find the devices that have the root file system, type  find /boot/grub/stage1  (if you partitioned the system with a separate boot partition, omit /boot/). You should be looking at a list of three devices, like this:

  (hd1,0)
  (hd2,0)
  (hd3,0)

The numbers may be different; that doesn't matter. We will take the first device from the list and use that one to work with. Recalling that our root file system is installed on /dev/sda, /dev/sdb, and /dev/sdc, we install GRUB using the following commands:

device (hd1) /dev/sda
root (hd1,0)
setup (hd1)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
device (hd1) /dev/sdc
root (hd1,0)
setup (hd1)

This tells GRUB to define each drive as (hd1), set root to the first partition on that drive, and then install GRUB into the MBR. Now the system can boot from any of the three drives in the array.
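As an optional sanity check (my own addition, not part of the original installer output): the boot code that GRUB writes to the MBR contains the string 'GRUB', so reading the first sector of each drive should show a match once the setup commands have succeeded:

  sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | grep -ac GRUB
  sudo dd if=/dev/sdb bs=512 count=1 2>/dev/null | grep -ac GRUB
  sudo dd if=/dev/sdc bs=512 count=1 2>/dev/null | grep -ac GRUB
  # each command should print 1; a 0 means that drive's MBR has no GRUB boot code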

Configuring LVM

This process is pretty straightforward. Reboot into your main system (if you used the LiveCD above), and run  vgdisplay . You're now looking at the volume group you created during installation. Look at the 'Total PE' value and remember it (more likely, just leave it up in the terminal). Now run  lvcreate --extents $TOTAL_PE --name some_name your_volume_group , replacing $TOTAL_PE with the earlier value, 'some_name' with the name you'd like the logical volume to have, and 'your_volume_group' with the volume group name you chose during installation ('raid' in my case). LVM will create the device as /dev/your_volume_group/some_name, which also appears as /dev/mapper/your_volume_group-some_name. Now you have a device that you can use just like any drive, and it is fault tolerant to the tune of one drive.
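Here is a minimal sketch of that step plus formatting and mounting the new volume, assuming the volume group is called 'raid' and using 'storage' and /mnt/storage as hypothetical names for the logical volume and mount point:

  sudo vgdisplay raid                          # note the 'Total PE' value
  sudo lvcreate --extents <Total PE> --name storage raid
  sudo mkfs.ext3 /dev/raid/storage             # format the new logical volume
  sudo mkdir /mnt/storage
  sudo mount /dev/raid/storage /mnt/storage
  # to mount it automatically at boot, add a line like this to /etc/fstab:
  # /dev/raid/storage  /mnt/storage  ext3  defaults  0  2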

Final Thoughts

Congratulations! You've made the most of the available space on all of the hard drives while ensuring fault tolerance. In the future I'll add instructions for installing common utilities like Samba and DHCP3-Server. Check back here for links.


CategoryInstallation
