Overview
This tutorial covers UEC installation by adding the Eucalyptus packages to previously installed Ubuntu 9.10 servers.
Objective
From this tutorial, you will learn how to install, configure, register and perform several operations on a basic UEC setup that results in a cloud with one controller "front-end" and one node for running Virtual Machine (VM) instances.
Tutorial
STEP 1: Prerequisites
A UEC system includes the following high level packages:
eucalyptus-cloud - includes the front-end services (Cloud Controller) as well as the Walrus storage system.
eucalyptus-cc - includes the Cluster Controller that provides support for the virtual network overlay
eucalyptus-sc - includes the Storage Controller
eucalyptus-walrus - includes the Walrus
eucalyptus-nc - includes the Node Controller that interacts with KVM through Libvirt to manage individual VMs
In a basic UEC setup, the system is composed of two machines (a front-end and a node). The front end runs both eucalyptus-cloud and eucalyptus-cc in this configuration. The node runs the node controller, eucalyptus-nc. It is possible to separate the cloud controller and cluster controller in a more complex multi-host setup. The following diagram depicts a simple setup:
Before you install the packages (or shortly thereafter), there are some prerequisites that should be satisfied to end up with a fully functioning Eucalyptus system.
On each node, configure the system's primary Ethernet interface as a bridge (see the Ubuntu Server Guide's Bridging section for details). The node controller will attach virtual network interfaces to this bridge before a VM is started, so that the VM has network connectivity.
Note: Remember the name of your node's bridge device (we assume the name of your bridge device is "br0" for the rest of this document). The default Eucalyptus configuration assumes that there is not a DHCP server in your environment handing out dynamic IP addresses: the Cluster Controller runs its own DHCP server, which statically assigns IP addresses to VMs (bridged onto the local network).
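As a sketch of such a bridge setup on Ubuntu 9.10, the stanzas below could go in /etc/network/interfaces (this assumes the physical interface is eth0 and the bridge-utils package is installed; adapt to your local configuration):

```
# Physical interface, enslaved to the bridge (no address of its own)
auto eth0
iface eth0 inet manual

# The bridge the node controller will attach VM interfaces to
auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
```

Restart networking (or reboot) after editing the file, and check with brctl show that br0 exists.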
From any host that you wish to use as a Eucalyptus client, you should install the euca2ools package from universe:
$ sudo apt-get install euca2ools
Also, other tools that can interact with the EC2 and S3 APIs should work with Eucalyptus. If you wish to access Eucalyptus from behind a firewall (i.e. the euca2ools and the cloud controller are on different sides of a firewall), then port 8773 must be open on the cloud controller. Additionally, if you plan to register your Eucalyptus installation with a cloud management platform, ports 8773 and 8443 must be open.
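For example, with an iptables-based firewall, rules along these lines would open the required ports on the cloud controller (a sketch only; the chain names and rule placement depend on your firewall setup):

```
# Allow the EC2/S3 API port used by euca2ools
sudo iptables -A INPUT -p tcp --dport 8773 -j ACCEPT
# Allow the HTTPS web interface / registration port
sudo iptables -A INPUT -p tcp --dport 8443 -j ACCEPT
```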
STEP 2: System Installation and Configuration
Install the eucalyptus-cloud and eucalyptus-cc packages on the front-end machine:
$ sudo apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc
Next, install the eucalyptus-nc package on each node:
$ sudo apt-get install eucalyptus-nc
Finally, on the node, bring down the eucalyptus-nc service and modify /etc/eucalyptus/eucalyptus.conf with the name of the bridge that you set up as the node's primary interface.
Note that there are several ways to configure a node to have a bridge as its primary interface, depending on the configuration of your machine. We show an example set of steps here but you will need to take care to ensure that this example configuration does not conflict with your local configuration if you wish to use it.
Once the bridge is configured, you need to specify its name ("br0" in our examples) in the node controller's configuration. To do so, type:
$ sudo /etc/init.d/eucalyptus-nc stop
$ sudo vi /etc/eucalyptus/eucalyptus.conf   # set VNET_BRIDGE="br0"
$ sudo /etc/init.d/eucalyptus-nc start
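If you prefer a non-interactive edit over vi, the same change can be made with sed. The snippet below demonstrates the idea on a throwaway copy of the file (the real file lives at /etc/eucalyptus/eucalyptus.conf and requires root to modify):

```shell
# Work on a temporary copy to illustrate the edit
conf=$(mktemp)
echo 'VNET_BRIDGE="eth0"' > "$conf"

# Point the node controller at the bridge device
sed -i 's/^VNET_BRIDGE=.*/VNET_BRIDGE="br0"/' "$conf"

result=$(cat "$conf")
echo "$result"   # VNET_BRIDGE="br0"
```

On the real system you would run the sed command with sudo against /etc/eucalyptus/eucalyptus.conf between the stop and start of the eucalyptus-nc service.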
The following diagram depicts what your setup should now resemble:
You will also need to change your networking configuration to make it so that IPv4 traffic is passed to IPv6 ports since the Eucalyptus web frontend runs by default only on IPv6. To do so, type
$ sudo vi /etc/sysctl.conf   # uncomment net.ipv4.ip_forward=1 (it may not be commented out)
$ sudo sysctl -p
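After reloading, you can read the value back from /proc to confirm forwarding is enabled (a quick check; assumes a Linux /proc filesystem):

```shell
# Prints 1 when IPv4 forwarding is enabled, 0 otherwise
ipf=$(cat /proc/sys/net/ipv4/ip_forward)
echo "$ipf"
```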
Also, you may have noticed a message such as:
apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
UEC ships its own Apache configuration; you can add a ServerName directive there to make that message go away. Edit /etc/eucalyptus/httpd.conf so it begins somewhat like this:
#
# This is the apache config for eucalyptus: we use it only to load the
# axis2c module which will take care of the WebServices
#
ServerTokens OS
ServerName WHATEVER_YOUR_HOSTNAME_IS_HERE
ServerRoot "/tmp"
Listen 8774
KeepAliveTimeout 30
You can verify that this resolved the issue by running
sudo dpkg-reconfigure eucalyptus-nc
STEP 3: Registering UEC Components
UEC assumes that each node in the system belongs to a cluster and that each cluster belongs to a cloud. Each node (there is only one node in this example) runs a copy of eucalyptus-nc. Similarly, each cluster (again, there is only one cluster in this example) must run a copy of eucalyptus-cc. For simplicity, the eucalyptus-cc in this example runs on the same machine as the cloud controller (eucalyptus-clc). These components must be registered with each other before the system starts. To register a cluster, execute the following on the cloud controller:
$ sudo euca_conf -addcluster <clustername> localhost
where <clustername> is the name that you would like this cluster to appear as to your users. Note that this name is logical and local only to Eucalyptus; it will correspond to an availability zone in the output of the client tools. Next, register your node with the cluster by running the following command on the cloud controller:
$ sudo euca_conf -addnode <node_hostname>
Later, you can add more nodes by repeating the above command for each node running a copy of eucalyptus-nc. At this point, your Eucalyptus system should be up and running, ready for first time use.
Your UEC cloud should now look similar to the following logical diagram:
More Information
Log files: /var/log/eucalyptus
Configuration files: /etc/eucalyptus
Init Scripts: /etc/init.d/eucalyptus-cc, /etc/init.d/eucalyptus-cloud and /etc/init.d/eucalyptus-nc
Database: /var/lib/eucalyptus/db
- Reboot note: If you reboot your machine Eucalyptus may not start up and function automatically. You may need to restart the services manually.
- Environment note: Don't forget to source your ~/.euca/eucarc before running the client tools.
Next Steps and Links
Optional procedure to create images
Eucalyptus procedure
The Eucalyptus project is proposing an alternate guide to create images
Using vmbuilder
If you want to author your own image, you can use the vmbuilder utility to create an image that will run in Eucalyptus. First, create a partition description file called 'part'. Its contents describe the sizes, types, and mount points of your VM's disk partitions:
$ cat > part <<EOF
root 400
/mnt/ephemeral 0 /dev/sda2
swap 1 /dev/sda3
EOF
Next, create a simple shell script called 'firstboot' that will be executed the first time your image boots inside Eucalyptus; it installs an SSH daemon:
$ cat > firstboot <<EOF
#!/bin/sh
apt-get -y install openssh-server
EOF
Then, create the image with vmbuilder, passing the name of the script file as an argument so that it can be installed. Note that even though we ask vmbuilder to create a 'xen' image (this simply means that the output format of the image is a disk partition), the resulting image will boot in Eucalyptus using KVM.
$ sudo vmbuilder xen ubuntu --part ./part --firstboot ./firstboot
Bundle a UEC Image
Next, register a new image with your Cloud Controller. Using UEC consists of creating images and registering them with the Cloud Controller.
This section describes the process for Ubuntu 10.04 LTS.
There is more than one way to obtain a virtual image:
- Download an image from the network, bundle and upload it
- Create a custom image using VMBuilder
- Use the Image store to download and install an image
Here we will describe the process of downloading one of the daily builds that are built and published automatically. The process is similar for Official Released Images.
Note: the shell variables set in the code snippets below are useful in scripts, or for reuse when typing further commands.
- Download the UEC image for the architecture you want. You can do it from your browser or from the command line:
TIMESTAMP=$(date +%Y%m%d%H%M%S)
RELEASE=lucid
ARCH=amd64   # Or this might be i386
[ $ARCH = "amd64" ] && IARCH=x86_64 || IARCH=i386
UEC_IMG=$RELEASE-server-uec-$ARCH
URL=http://uec-images.ubuntu.com/$RELEASE/current/
[ ! -e $UEC_IMG.tar.gz ] && wget $URL/$UEC_IMG.tar.gz
uec-publish-tarball $UEC_IMG.tar.gz $RELEASE-$TIMESTAMP
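The bracketed test in the snippet above is a compact one-line if/else: it maps the Debian-style architecture name to the kernel-style name. As a standalone illustration:

```shell
ARCH=amd64                                     # Debian-style architecture name
[ "$ARCH" = "amd64" ] && IARCH=x86_64 || IARCH=i386
echo "$IARCH"   # x86_64
```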
- Now, your kernel and image will have been uploaded into Eucalyptus and should be ready to run. To confirm, run the following command:
euca-describe-images
EMI=$(euca-describe-images | grep emi- | head -n1 | awk '{print $2}')
You should see a registered kernel and image and they should be marked as 'available'.
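The grep/awk pipeline above simply pulls the first EMI id out of the second column of euca-describe-images output. A minimal sketch against a canned line of output (the id and bucket name here are made up):

```shell
# Hypothetical single line of euca-describe-images output
sample='IMAGE  emi-12345678  mybucket/lucid.img.manifest.xml  admin  available  public  x86_64  machine'
EMI=$(printf '%s\n' "$sample" | grep emi- | head -n1 | awk '{print $2}')
echo "$EMI"   # emi-12345678
```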
Running an Image
Finally, run an image in a virtual machine in your Cloud.
There are multiple ways to instantiate an image in UEC:
- Use the command line
- Use one of the UEC compatible management tools such as Landscape
Use the ElasticFox extension to Firefox
Here we will describe the process from the command line:
- Before running an instance of your image, you should first create a keypair (ssh key) that you can use to log into your instance as root, once it boots. The key is stored, so you will only have to do this once. Run the following command:
if [ ! -e ~/.euca/mykey.priv ]; then
    mkdir -p -m 700 ~/.euca
    touch ~/.euca/mykey.priv
    chmod 0600 ~/.euca/mykey.priv
    euca-add-keypair mykey > ~/.euca/mykey.priv
fi
Note: You can call your key whatever you like (in this example, the key is called 'mykey'), but remember what it is called. If you forget, you can always run euca-describe-keypairs to get a list of created keys stored in the system.
- You must make sure to source ~/.euca/eucarc before you run any of the euca2ools. It is probably best to add this to the bottom of your ~/.bashrc.
- You must also allow access to port 22 in your instances:
euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
- Next, you can create instances of your registered image:
euca-run-instances $EMI -k mykey -t m1.small
Note: If you receive an error regarding image_id, you may find the correct id by viewing the Images page, or by clicking "How to Run" on the Store page to see a sample command.
- The first time you run an instance of a given image, the system sets up a cache for that image. Because VM images are usually quite large, this can take some time. To monitor the state of your instance, run:
watch -n5 euca-describe-instances
In the output, you should see information about the instance, including its state. While first-time caching is being performed, the instance's state will be 'pending'.
- When the instance is fully started, the above state will become 'running'. Look at the IP address assigned to your instance in the output, then connect to it:
IPADDR=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $4}')
ssh -i ~/.euca/mykey.priv ubuntu@$IPADDR
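The pipeline above filters euca-describe-instances for a running instance of your image and takes the public IP address from the fourth column. Against a canned line of output (the instance id and addresses are made up):

```shell
# Hypothetical single line of euca-describe-instances output
sample='INSTANCE  i-3FE4399F  emi-12345678  192.168.1.101  172.19.1.2  running  mykey  0  m1.small'
IPADDR=$(printf '%s\n' "$sample" | grep emi-12345678 | grep running | tail -n1 | awk '{print $4}')
echo "$IPADDR"   # 192.168.1.101
```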
- And when you are done with this instance, exit your SSH connection, then terminate your instance:
INSTANCEID=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $2}')
euca-terminate-instances $INSTANCEID