WARNING UPDATE!
Note from NicholasIstre: The server I was testing this setup on failed to boot with a kernel panic (*slaps forehead for not writing the exact error down...*) a couple of boots after I added a new RAID 1 array to the main volume group. Depending on further testing and discussion with the community, I may modify these instructions to place /boot, or even the root filesystem, on a RAID 1 array outside of the LVM partition.
Introduction
As I've collected a fair amount of hardware, I decided I wanted to build a small file server out of a few hard drives and a spare system I had lying around, as many others have probably done. I also wanted RAID 1 and the ability to expand the system later, though not necessarily via hot-plugging; I'm sure this HOW-TO can still provide a jumping-off point if you have such hardware available. No, I'm working with old-fashioned IDE drives, and I want a place where I can fairly reliably store the anime and music that I... ahem... acquire...
Unfortunately, the documentation for LVM on software RAID is a bit sparse. Among others, I've read Installation/LVMOnRaid, this very helpful LVM HOW-TO, this software RAID HOW-TO, and this Linux RAID FAQ, but they only really provided tools and hints. It seems I had to figure out how to set up this server by myself.
And, to give me more to do, the only hardware I have available for this is a P-III system with 2 onboard PATA-100 channels, a DVD/CD reader, a network card, two 60-gig drives, two 120-gig drives, and the necessary cables for all. If you didn't notice, I have connections for 4 IDE devices, whereas I have 5 that I want to use. Even worse, I'd have to master/slave two of the hard drives together, which is pretty much a no-no. So off I go to my trusty online computer parts retailer to order a PCI card with two more ATA-133 connections.
In the meantime, I'd still like to put something together using 2 of the hard drives until the IDE card comes in; then I'll install the card with the other 2 drives, RAID 1 them, and extend the existing filesystems onto the new array.
So, that's the hardware I have. I will be using the Ubuntu 6.06 Server CD to set up the following services on LVM on RAID 1:
- FTP Server (only for my local network, to use with Acronis True Image software)
- SMB Server (for windows sharing)
- SSH Server (pretty standard on all of my server setups)
I'm creating this HOW-TO for my own reference, especially for when I go to upgrade the file server with more hard drives. I will be using VMware Server on a laptop to provide screenshots and help while writing this article (so feel sorry for me, as this poor 4500 RPM drive will have to pretend to be up to 4 drives in a RAID 1 setup...)
Please note, anything in italics will be me asking the community for help on certain things.
Now, onto the actual HOW-TO...
Install
Set the system to boot from the CD and boot up your trusty Ubuntu 6.06 Server CD.
Choose Install to the hard disk and press ENTER. You will then proceed to choose your language, location, and keyboard layout.
Enter the hostname of your server.
Setting up RAID
This section is incorrect. Set up a /boot partition outside of the LVM by creating two RAID 1 arrays, with the boot array being at least 200 MB or so (the boot loader can't read filesystems inside LVM, so /boot needs to sit directly on a RAID 1 array). Be sure to verify/set up the /boot partition after setting up the LVM.
Select Manually edit partition table
Select each drive and select Yes to create new empty partition tables.
Now pick the smaller drive, move to the line saying FREE SPACE under it, and press ENTER.
Create a new primary partition and keep the default size to use the full disk.
Remember this value when creating the partition on the larger disk! This isn't necessary if both disks are exactly the same size. (RAID 1 will work with different-sized partitions, but the array is only as large as its smallest member, so any extra space on the larger partition would be wasted.)
Change the partition type to RAID, set the bootable flag, and you're done with this disk!
Do the same on the other disk, but if the disks differ in size, set the partition size to match the first disk. Choose Beginning if asked where to place the partition.
Now that both disks are set up, we need to set up the software RAID. Say Yes to write the changes to disk, then select Create MD device.
Select RAID1, choose 2 active devices and 0 spares, select the two RAID partitions you just created, and you're finished setting up the RAID.
Setting up LVM
We now have a software RAID device to play with. Select that device's space, set it to be used as a physical volume for LVM, and choose Done setting up the partition.
Now that we have a physical LVM volume, select Configure the Logical Volume Manager, and write the changes to disk.
I got an error here, but when I chose Ignore and continued with the rest of the setup, everything installed and set up just fine.
I find that waiting until the RAID arrays finish syncing causes fewer issues after the install. Switch to another terminal with ALT-F2 and run "cat /proc/mdstat" every so often until the arrays have finished syncing (this may take some time, depending on how large the hard drives are). Once it's finished, switch back to the installer with ALT-F1.
Setup the Volume Group using the physical LVM volume we just created.
Choose whatever name you wish. Since I will only have one volume group, I'm including the name of the server in the name. Once the name is picked, you can leave the volume group configuration menu.
We will now create our logical volumes. I will be making four, for the root, /boot, /home, and swap areas. For simplicity, I will name them "root", "boot", "home", and "swap".
I will continue by making a "swap" logical volume that's 1 gig and a "home" logical volume that takes up the rest of the drive (a shell-level sketch of the equivalent commands follows below).
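For reference, here's roughly what the installer's LVM menu is doing behind the scenes. This is only a sketch from the shell, assuming the volume group is named asanoVG and the sizes shown later in this guide; the installer handles all of it for you:

sudo lvcreate -L 100M -n boot asanoVG   # small volume for /boot
sudo lvcreate -L 2G -n root asanoVG     # root filesystem
sudo lvcreate -L 1G -n swap asanoVG     # swap area
sudo lvcreate -l 485 -n home asanoVG    # remaining free extents for /home (take the count from vgdisplay)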
Now that the LVs are set up, leave the LVM configuration menu.
Setting up filesystem
We now have the four logical volumes (or however many you set up) in the partition list, ready to be configured. Select each one and configure it as desired. You may notice that I used the XFS filesystem for root and /home because it can be expanded on the fly, while (from what I can tell) ext3 has to be unmounted to be resized. Some of the other filesystems have this ability too. Help with filesystem advice here would be appreciated!
Now that the partitions are how we like them, we finish setting up the disks. Please note that the same error comes back, but we will ignore it again. The system will then start creating the filesystems for you.
Finishing up Install
Set up your timezone and UTC information.
Set up your initial user and password.
The installer will now start copying packages and setting up your new server.
Set up LILO by selecting the default option, /dev/md0: software RAID array.
Again, the installer tells you something failed, in this case the LILO installation, but continue anyway.
Installation complete! Or so says the installer. Remove your trusty Ubuntu 6.06 Server CD and select continue. Cross your fingers as it reboots...
...and success! Well, in my case. Your mileage may vary.
Now we have a base Ubuntu Server installation up and running, on LVM on RAID1 no doubt!
Verifying installation
If you want to check the status of your RAID array, type cat /proc/mdstat. The output should look like the following ([2/2] [UU] means both members of the mirror are active and in sync):
Personalities : [raid1]
md0 : active raid1 hda1[1] sda1[1]
      5237056 blocks [2/2] [UU]

unused devices: <none>
To see a listing of your LVM volume groups, use sudo vgdisplay. On my system the display looks like this:
  --- Volume group ---
  VG Name               asanoVG
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.99 GB
  PE Size               4.00 MB
  Total PE              1278
  Alloc PE / Size       1278 / 4.99 GB
  Free  PE / Size       0 / 0
  VG UUID               3qgmFG-Mioc-R5ue-L3Aw-BNkF-IrhK-6HzIzo
To see a listing of your logical volumes, use sudo lvdisplay. On my system this command outputs the following:
  --- Logical volume ---
  LV Name                /dev/asanoVG/boot
  VG Name                asanoVG
  LV UUID                b4VHMI-Axz7-p8xF-3FU5-PQtC-5t6A-MIe1t8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                100.00 MB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/asanoVG/root
  VG Name                asanoVG
  LV UUID                x64Jyr-H6ZA-g3IH-cZKd-7sdp-mEL5-nRm8kJ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.00 GB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/asanoVG/swap
  VG Name                asanoVG
  LV UUID                T9c18T-9aH3-6DZF-OuQX-Iu0f-M0fw-72tVbn
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/asanoVG/home
  VG Name                asanoVG
  LV UUID                eoodOA-zv2r-84Y7-y9a6-uzBo-A5q8-9TMCe5
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.89 GB
  Current LE             485
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:3
And the hard part is over! (I hope!)
Setting up FTP, Samba, and SSH
Now, I built this system for a specific purpose, and here I get to actually set it up and use it for that purpose. In any case, the sections above can easily apply to any server, desktop, or even workstation, but I want a local FTP, Samba, and SSH server. Please note that I will be using the vsFTPd server for FTP. But first things first:
Dist-upgrade of base system
The first thing I do after a base install is a dist-upgrade, to bring the kernel current and to properly set up LILO to use the new kernel. Before that, I also disable the CD as a package source: run sudo nano /etc/apt/sources.list, put a # in front of the line listing the CD-ROM, then press Ctrl-X to save and exit. Then run your update and dist-upgrade with your favorite package manager (i.e., sudo apt-get update; sudo apt-get dist-upgrade).
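Summarized as a shell session (the sources.list edit is the same change described above):

sudo nano /etc/apt/sources.list   # put a # in front of the "deb cdrom:..." line
sudo apt-get update               # refresh the package lists
sudo apt-get dist-upgrade         # pull in the new kernel and other updates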
LILO will ask Install a boot block using the existing /etc/lilo.conf? [Yes]; say Yes to that. Once it finishes, reboot with sudo reboot.
Setting up network interfaces
First we'll set the server up with a static IP address. Edit the /etc/network/interfaces file and modify the entry for eth0 to something like the following (inserting your own network values, of course):
# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
Modify your hosts file to use this static address for the host name by editing /etc/hosts to contain a line similar to:
192.168.1.10 asano.br.br.cox.net asano
Finally, set up your DNS server entries by editing /etc/resolv.conf to something similar to:
search br.br.cox.net
nameserver 192.168.1.1
nameserver 192.168.1.2
Apply the new settings by restarting the networking service, as shown below; a full sudo reboot also works but shouldn't be necessary.
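On Ubuntu 6.06 the networking init script should pick up the changes without a reboot:

sudo /etc/init.d/networking restart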
Installing server software
Installing is the simple part. Install the following packages:
ssh openssh-server vsftpd samba
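With apt-get, that's one command:

sudo apt-get install ssh openssh-server vsftpd samba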
Configuring server software
SSH
SSH was ready 'right out of the box' for me, though you may customize it by editing /etc/ssh/sshd_config. The SSH daemon (server), sshd, can be controlled through its init script. To restart it, for example:
sudo /etc/init.d/ssh restart
Verify the installation by logging in with a client SSH program from another system.
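For example, from another machine on the LAN (substitute the user account you created during the install):

ssh yourusername@192.168.1.10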
FTP
vsftpd requires some modification to work the way I need it to. Mainly, I want to disable anonymous logins and let local users log in and upload files, so I'll open /etc/vsftpd.conf and change the following entries:
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
ftpd_banner=Welcome to asano local FTP server.
Save the file and restart vsftpd:
sudo /etc/init.d/vsftpd restart
Finally, test logging in with a client FTP program from another system.
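Any command-line FTP client will do for a quick check; anonymous logins should now be refused:

ftp 192.168.1.10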
Samba
Configure Samba by editing /etc/samba/smb.conf. I make the following changes:
workgroup = HOUSE
security = user
guest account = nobody

[homes]
   comment = Home Directories
   browseable = yes
   valid users = %S
   writable = yes
   create mask = 0644
   directory mask = 0755

[public]
   comment = Public share
   path = /home/public
   browseable = no
   guest ok = yes
   writable = yes
   create mask = 0666
   directory mask = 0777
I then make sure to create the public directory with the following commands:
sudo mkdir /home/public
sudo chmod 0777 /home/public
sudo chown nobody:nogroup /home/public
Restart Samba with:
sudo /etc/init.d/samba restart
Don't forget to set up users with sudo smbpasswd -a <username> to give them access to the Windows shares (the -a flag adds a new entry to Samba's password file; without it, smbpasswd only changes an existing entry).
Test the setup by accessing the server from another machine and reading and writing files on the shares.
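From a Linux client with the Samba tools installed, you can also list the shares to sanity-check the config (you should see homes and public; the username is whatever you gave to smbpasswd):

smbclient -L //192.168.1.10 -U yourusername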
Adding hard drives
Now we come to the fun part: expanding the filesystem! Say we get two new hard drives to add to the server (in this example, two pathetic little 2-gig drives...). So we install them, start up the server, and now what?
Finding the drives
First, we need to find out how to access the new drives. Running cat /proc/partitions gives one clue; the kernel's boot messages and fdisk give others, as shown below.
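A few commands that should reveal the new disks (the grep pattern is only an illustration; adjust it to your device names):

cat /proc/partitions                  # every block device the kernel knows about
sudo fdisk -l                         # partition tables (or lack thereof) on each disk
dmesg | grep -i 'hd[a-z]\|sd[a-z]'    # kernel messages from drive detection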
Setting up RAID
Once we figure out how they are represented in the server (in my server's case, /dev/hdb and /dev/hdd), they first have to be set up as a new RAID array. For this, I use fdisk on each drive. It's advisable to start with the smaller drive and then use the same number of cylinders on the larger drive. I also set the type of each partition to fd, the code for Linux raid autodetect.
$ sudo fdisk /dev/hdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4161.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4161, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4161, default 4161):
Using default value 4161

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd

Command (m for help): p

Disk /dev/hdb: 2147 MB, 2147483648 bytes
16 heads, 63 sectors/track, 4161 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1               1        4161     2097112+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

$ sudo fdisk /dev/hdd

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4369.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4369, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4369, default 4369): 4161

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/hdd: 2254 MB, 2254857728 bytes
16 heads, 63 sectors/track, 4369 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1               1        4161     2097112+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
$
Once the two partitions are created, we use the mdadm command to create and build the new array:
$ sudo mdadm --create --verbose /dev/md1 --level=raid1 --raid-devices=2 /dev/hdb1 /dev/hdd1
The RAID system will sync the drives in the background. We are now free to set up the new array and expand the existing volume group onto it!
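You can keep an eye on the sync progress while it runs:

watch cat /proc/mdstat   # refreshes every 2 seconds; Ctrl-C to exit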
Expanding the LVM Volume Group
Before we can expand the volume group we created (on my system it's named asanoVG; you can remind yourself with sudo vgdisplay), we need to initialize the new RAID array as an LVM physical volume:
$ sudo pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created
$
Then we add this physical volume to our volume group:
$ sudo vgextend asanoVG /dev/md1
  Volume group "asanoVG" successfully extended
$
If you run vgdisplay again, you'll notice we now have free space available in the volume group:
$ sudo vgdisplay
  --- Volume group ---
  VG Name               asanoVG
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               6.99 GB
  PE Size               4.00 MB
  Total PE              1789
  Alloc PE / Size       1278 / 4.99 GB
  Free  PE / Size       511 / 2.00 GB
  VG UUID               3qgmFG-Mioc-R5ue-L3Aw-BNkF-IrhK-6HzIzo
$
Expanding Logical Volume and Filesystem
In this case, I want to extend my home logical volume over the rest of the new space. The easiest way to do this is to use the Free PE count (511 in the display above) to grow the volume:
$ sudo lvextend -l+511 /dev/asanoVG/home
  Extending logical volume home to 3.89 GB
  Logical volume home successfully resized
$
Now for the home stretch! Since I had the foresight to use the XFS file system, I can simply run the following:
$ sudo xfs_growfs /home
meta-data=/dev/mapper/asanoVG-home isize=256    agcount=8, agsize=62080 blks
         =                         sectsz=512   attr=0
data     =                         bsize=4096   blocks=496640, imaxpct=25
         =                         sunit=0      swidth=0 blks, unwritten=1
naming   =version 2                bsize=4096
log      =internal                 bsize=4096   blocks=2560, version=1
         =                         sectsz=512   sunit=0 blks
realtime =none                     extsz=65536  blocks=0, rtextents=0
data blocks changed from 496640 to 1019904
$
If you used ext3, it's still easy to resize; you just have to unmount the filesystem first. If that filesystem is /home, log out of all user accounts, stop anything that uses /home, and log in as root (you'll need a root password for this). It may be easier to boot from a live CD if you can't get it unmounted. The lsof and fuser commands will tell you which files on a filesystem are in use.
umount /home
e2fsck -f -C 0 /dev/asanoVG/home
resize2fs /dev/asanoVG/home
mount /home
The e2fsck run is a good idea to make sure the filesystem is in a consistent state before resizing (resize2fs will insist on it).
And I'm finished! I have now expanded my /home partition by adding another drive.
To compare the drive sizes, here's before:
$ df -H
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/asanoVG-root  2.2G  419M  1.8G  20% /
varrun                    264M   95k  264M   1% /var/run
varlock                   264M  4.1k  264M   1% /var/lock
udev                      264M   74k  264M   1% /dev
devshm                    264M     0  264M   0% /dev/shm
/dev/mapper/asanoVG-boot   99M   25M   69M  26% /boot
/dev/mapper/asanoVG-home  2.1G  121M  2.0G   6% /home
And here's after:
$ df -H
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/asanoVG-root  2.2G  419M  1.8G  20% /
varrun                    264M   95k  264M   1% /var/run
varlock                   264M  4.1k  264M   1% /var/lock
udev                      264M   74k  264M   1% /dev
devshm                    264M     0  264M   0% /dev/shm
/dev/mapper/asanoVG-boot   99M   25M   69M  26% /boot
/dev/mapper/asanoVG-home  4.2G  121M  4.1G   3% /home
Other help on LVM
For more information on administering LVM, see this very informative HOW-TO.
Recovering from failed drive
So your server has now been running and in use for some time. Maybe you've even added several new drives. Then one day you run cat /proc/mdstat and see the following output:
Personalities : [raid1]
md1 : active raid1 sda1[1]
      5237056 blocks [2/1] [_U]

md0 : active raid1 hda1[0] hdd1[1]
      2097024 blocks [2/2] [UU]

unused devices: <none>
What the? Didn't that array have two working drives? It looks like our md1 array is degraded, so you're off to the store (or online) for a replacement. Install it in the system, start it up, and find out how the drive is accessed (in my case, it's /dev/sdb).
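In my case the dead drive simply vanished from the array, but if a failed member still shows up in /proc/mdstat (marked with (F)), you may need to drop it from the array before adding the replacement. A sketch, where /dev/sdX1 stands in for whatever the faulty partition is called on your system:

sudo mdadm /dev/md1 --fail /dev/sdX1     # mark the member as failed, if it isn't already
sudo mdadm /dev/md1 --remove /dev/sdX1   # remove it from the array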
First, we want to look at the size of the partition on /dev/sda, so I'll use fdisk again:
$ sudo fdisk /dev/sda
Password:

Command (m for help): p

Disk /dev/sda: 5476 MB, 5476083200 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         657     5277321   fd  Linux raid autodetect

Command (m for help): q
$
Now configure /dev/sdb with a RAID partition of the same size. Don't forget to make it bootable:
$ sudo fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-665, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-665, default 665): 657

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): a
Partition number (1-4): 1

Command (m for help): p

Disk /dev/sdb: 5476 MB, 5476083200 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         657     5277321   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
$
Now run the following to add the partition to the array:
$ sudo mdadm --add /dev/md1 /dev/sdb1
mdadm: hot added /dev/sdb1
$
The RAID system will automatically start syncing the drives, and you're free to continue on your way. If you look at /proc/mdstat during the rebuild, it will look something like this:
$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[2] sda1[1]
      5237056 blocks [2/1] [_U]
      [>....................]  recovery =  2.5% (133056/5237056) finish=21.5min speed=3949K/sec

md0 : active raid1 hda1[0] hdd1[1]
      2097024 blocks [2/2] [UU]

unused devices: <none>
$
Final Statements
I do hope this guide has been informative and helpful to you. I will be using it myself on a server as a reference for different tasks.