Revision 12 as of 2009-10-21 02:36:41


Overview

This tutorial covers UEC installation by adding the Eucalyptus packages to previously installed Ubuntu 9.10 servers.

Objective

From this tutorial, you will learn how to install, configure, register and perform several operations on a basic UEC setup, resulting in a cloud with one controller "front end" and one node for running Virtual Machine (VM) instances.

Tutorial

STEP 1: Prerequisites

To deploy a minimal cloud infrastructure, you’ll need at least two dedicated systems:

  • a Front End

  • one or more Node(s)

The following are recommendations rather than fixed requirements; they reflect our experience in developing this documentation.

Front End

Use the following table for a system that will run one or more of:

  • the cloud controller (clc)
  • the cluster controller (cc)
  • walrus (the S3-like storage service)
  • the storage controller (sc)

Hardware     Minimum        Suggested       Notes
CPU          1GHz           2 x 2GHz        for an all-in-one front end, it helps to have at least a dual-core processor
Memory       2GB            4GB             the Java web front end benefits from lots of available memory
Disk         5400rpm IDE    7200rpm SATA    slower disks will work, but will yield much longer instance startup times
Disk Space   40GB           200GB           40GB is only enough space for a single image, cache, etc.; Eucalyptus does not like to run out of disk space
Networking   100Mbps        1000Mbps        machine images are hundreds of MB, and need to be copied over the network to nodes

Node(s)

The other system(s) are nodes, which will run:

  • the node controller (nc)

These systems will actually run the instances. You will need one or more systems with:

Hardware     Minimum         Suggested               Notes
CPU          VT extensions   VT, 64-bit, multicore   a 64-bit CPU can run both i386 and amd64 instances; by default, Eucalyptus will only run 1 VM per CPU core on a node
Memory       1GB             4GB                     additional memory means more, and larger, guests
Disk         5400rpm IDE     7200rpm SATA or SCSI    Eucalyptus nodes are disk-intensive; I/O wait will likely be the performance bottleneck
Disk Space   40GB            100GB                   images will be cached locally; Eucalyptus does not like to run out of disk space
Networking   100Mbps         1000Mbps                machine images are hundreds of MB, and need to be copied over the network to nodes

STEP 2: Install the Cloud/Cluster/Storage/Walrus Front End Server(s)

  1. Install Ubuntu 9.10 Server

  2. Update to the most current state in the Ubuntu archive:
    sudo apt-get update
    sudo apt-get dist-upgrade
  3. Install the eucalyptus-cloud and eucalyptus-cc packages on the Front End:

    sudo apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc
    During installation, you will be prompted to:
    • Configure postfix for internet delivery

    • Name your cluster
      • e.g. cluster1
    • Add a list of available IP addresses on your network
      • e.g. 192.168.1.200-192.168.1.249

STEP 3: Install and Configure the Node Controller(s)

  1. Install Ubuntu 9.10 Server on one or more nodes

  2. Update to the most current state in the Ubuntu archive:
    sudo apt-get update
    sudo apt-get dist-upgrade
  3. Next, install the eucalyptus-nc package on each Node:

    sudo apt-get install eucalyptus-nc
  4. On each node, configure the system's primary ethernet interface as a bridge. The node controller will attach a virtual network interface to this bridge for each VM that is started, to give it network connectivity.
    • Note: Remember the name of your node's bridge device (we assume it is "br0" for the rest of this document).
    • For details on configuring a bridge, see: http://doc.ubuntu.com/ubuntu/serverguide/C/network-configuration.html

    • The following script should configure your bridge correctly in most setups:
      interface=eth0
      bridge=br0
      sudo sed -i "s/^iface $interface inet \(.*\)$/iface $interface inet manual\n\nauto br0\niface $bridge inet \1/" /etc/network/interfaces
      sudo tee -a /etc/network/interfaces <<EOF
              bridge_ports $interface
              bridge_fd 9
              bridge_hello 2
              bridge_maxage 12
              bridge_stp off
      EOF
      sudo /etc/init.d/networking restart
  5. On each node, configure /etc/eucalyptus/eucalyptus.conf with the name of the bridge, and restart the node controller:

    sudo sed -i "s/^VNET_BRIDGE=.*$/VNET_BRIDGE=$bridge/" /etc/eucalyptus/eucalyptus.conf
    sudo /etc/init.d/eucalyptus-nc restart
    • Note that there are several ways to configure a node to have a bridge as its primary interface, depending on your machine's configuration. We show one example set of steps here; if you use it, take care that it does not conflict with your local configuration.

    • The following diagram depicts what your setup should now resemble:

      http://pompone.cs.ucsb.edu/~nurmi/images/euca-topo.png

  6. You will also need to change your networking configuration so that IPv4 traffic is forwarded, since the Eucalyptus web front end by default runs only on IPv6. To do so, type:
    sudo sed -i "s/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/" /etc/sysctl.conf
    sudo sysctl -p
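If the bridge script in step 4 ran cleanly, /etc/network/interfaces should end up looking roughly like the sketch below. This assumes eth0 was previously configured for DHCP and that its stanza was the last one in the file (the script appends the bridge options to the end of the file, so they only land under the br0 stanza in that case):

```text
# Sketch of /etc/network/interfaces after the bridge script (assumed
# starting point: a single eth0 stanza using DHCP at the end of the file)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off
```

If your interfaces file is laid out differently, edit it by hand to match this shape rather than relying on the script.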

STEP 4: Register the Cluster, Storage, and Walrus Servers

Cluster Registration

As of Ubuntu 10.04 LTS, all component registration should be automatic, assuming:

  1. Public SSH keys have been exchanged properly
  2. The services are configured properly
  3. The services are publishing their existence
  4. The appropriate uec-component-listener is running

Steps a to e should only be required if you are using the UEC/PackageInstall method; if you are following the UEC/CDInstall method, they are completed automatically for you, and you can skip them. In either case, verify the registration as described in step e.

a. Exchange Public SSH Keys

The Cloud Controller's eucalyptus user needs SSH access to the Walrus Controller, Cluster Controller, and Storage Controller as the eucalyptus user.

Install the Cloud Controller's eucalyptus user's public SSH key as follows:

  • On the target controller, temporarily set a password for the eucalyptus user:

    sudo passwd eucalyptus
  • Then, on the Cloud Controller:
    sudo -u eucalyptus ssh-copy-id -i /var/lib/eucalyptus/.ssh/id_rsa.pub eucalyptus@<IP_OF_NODE>
  • You can now remove the password of the eucalyptus account on the target controller, if you wish:
    sudo passwd -d eucalyptus

b. Configure the Services

On the Cloud Controller:

  • For the Cluster Controller Registration:

    • Define the shell variable CC_NAME in /etc/eucalyptus/eucalyptus-cc.conf

    • Define the shell variable CC_IP_ADDR in /etc/eucalyptus/eucalyptus-ipaddr.conf, as a space separated list of one or more IP addresses.

  • For the Walrus Controller Registration:

    • Define the shell variable WALRUS_IP_ADDR in /etc/eucalyptus/eucalyptus-ipaddr.conf, as a single IP address.

On the Cluster Controller:

  • For Storage Controller Registration:

    • Define the cluster name in the shell variable CC_NAME in /etc/eucalyptus/eucalyptus-cc.conf

    • Define the shell variable SC_IP_ADDR in /etc/eucalyptus/eucalyptus-ipaddr.conf, as a space separated list of one or more IP addresses.

c. Publish

Now start the publication services.

  • Walrus Controller:

    sudo start eucalyptus-walrus-publication
  • Cluster Controller:

    sudo start eucalyptus-cc-publication
  • Storage Controller:

    sudo start eucalyptus-sc-publication
  • Node Controller:

    sudo start eucalyptus-nc-publication

d. Start the Listener

On the Cloud Controller and the Cluster Controller(s), run:

sudo start uec-component-listener

e. Verify Registration

cat /var/log/eucalyptus/registration.log
2010-04-08 15:46:36-05:00 | 24243 -> Calling node cluster1 node 10.1.1.75
2010-04-08 15:46:36-05:00 | 24243 -> euca_conf --register-nodes returned 0
2010-04-08 15:48:47-05:00 | 25858 -> Calling walrus Walrus 10.1.1.71
2010-04-08 15:48:51-05:00 | 25858 -> euca_conf --register-walrus returned 0
2010-04-08 15:49:04-05:00 | 26237 -> Calling cluster cluster1 10.1.1.71
2010-04-08 15:49:08-05:00 | 26237 -> euca_conf --register-cluster returned 0
2010-04-08 15:49:17-05:00 | 26644 -> Calling storage cluster1 storage 10.1.1.71
2010-04-08 15:49:18-05:00 | 26644 -> euca_conf --register-sc returned 0
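Each successful registration call logs a "returned 0" line. As a quick sanity check, you can count them; the sketch below runs the grep against the sample lines above (on a real front end, point it at /var/log/eucalyptus/registration.log instead):

```shell
# Count successful registration calls in the sample log excerpt above;
# on a real front end, run:
#   grep -c 'returned 0' /var/log/eucalyptus/registration.log
grep -c 'returned 0' <<'EOF'
2010-04-08 15:46:36-05:00 | 24243 -> euca_conf --register-nodes returned 0
2010-04-08 15:48:51-05:00 | 25858 -> euca_conf --register-walrus returned 0
2010-04-08 15:49:08-05:00 | 26237 -> euca_conf --register-cluster returned 0
2010-04-08 15:49:18-05:00 | 26644 -> euca_conf --register-sc returned 0
EOF
# prints 4, one per registered component
```

If the count is lower than the number of components you registered, re-check the corresponding step above.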

Storage Registration

As of Ubuntu 10.04 LTS, storage controller registration should be automatic under the same conditions listed above. If manual registration is needed, follow the same steps a to e described under Cluster Registration.

Walrus Registration

As of Ubuntu 10.04 LTS, Walrus registration should be automatic under the same conditions listed above. If manual registration is needed, follow the same steps a to e described under Cluster Registration.

Verify the Registration

You may verify the registration in the logs:

tail -n1 /var/log/eucalyptus/*registration.log
==> /var/log/eucalyptus/cc-registration.log <==
SUCCESS: new cluster 'canyonedge' on host '192.168.1.122' successfully registered.

==> /var/log/eucalyptus/sc-registration.log <==
SUCCESS: new SC for cluster 'canyonedge' on host '192.168.1.122' successfully registered.

==> /var/log/eucalyptus/walrus-registration.log <==
SUCCESS: new walrus on host '192.168.1.122' successfully registered.

STEP 5: Register the Node(s)

As of Ubuntu 10.04 LTS, node registration should be automatic under the same conditions listed in STEP 4. If manual registration is needed, follow the same steps a to e described under Cluster Registration in STEP 4.

Your UEC cloud should now look similar to the following logical diagram:

http://pompone.cs.ucsb.edu/~nurmi/images/euca-topo-withinst.png

STEP 6: Obtain Credentials

After installing and booting the Cloud Controller, users of the cloud will need to retrieve their credentials. This can be done either through a web browser, or at the command line.

From a Web Browser

  1. From your web browser (either remotely or on your Ubuntu server) access the following URL:
    https://<cloud-controller-ip-address>:8443/

    Important! You must use a secure connection, so make sure you use "https", not "http", in your URL. You will get a security certificate warning and will have to add an exception to view the page; if you do not accept it, you will not be able to view the Eucalyptus configuration page.

  2. Use username 'admin' and password 'admin' for your first login (you will be prompted to change your password).
  3. Then follow the on-screen instructions to update the admin password and email address.
  4. Once the first-time configuration is complete, click the 'credentials' tab located in the top-left portion of the screen.

  5. Click the 'Download Credentials' button to get your certificates
  6. Save the downloaded zipfile to ~/.euca

  7. Unzip it into that directory:
    unzip -d ~/.euca mycreds.zip

From a Command Line

  1. Alternatively, if you are on the command line of the Cloud Controller, you can run:
    mkdir -p ~/.euca
    chmod 700 ~/.euca
    cd ~/.euca
    sudo euca_conf --get-credentials mycreds.zip
    unzip mycreds.zip
    ln -s ~/.euca/eucarc ~/.eucarc
    cd -
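Sourcing eucarc exports the environment variables (endpoint URLs, keys, certificate paths) that the client tools read. The sketch below shows the idea with a minimal, hypothetical eucarc; the addresses are placeholders, and a real eucarc downloaded from your cloud contains more variables:

```shell
# Write a minimal, hypothetical eucarc (placeholder addresses, not real values)
cat > /tmp/eucarc.example <<'EOF'
export EC2_URL=http://192.168.1.1:8773/services/Eucalyptus
export S3_URL=http://192.168.1.1:8773/services/Walrus
EOF

# Sourcing it puts the endpoints into the environment for the client tools
. /tmp/eucarc.example
echo "$EC2_URL"    # prints http://192.168.1.1:8773/services/Eucalyptus
```

This is why sourcing the file must happen in the shell where you will run the tools (and why adding it to ~/.bashrc is convenient).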

Extracting and Using Your Credentials

Now you will need to set up the EC2 API and AMI tools on your server, using the X.509 certificates you just downloaded.

  1. Install the required cloud user tools:
    sudo apt-get install euca2ools
  2. To validate that everything is working correctly, get the local cluster availability details:
    . ~/.euca/eucarc
    euca-describe-availability-zones verbose
    AVAILABILITYZONE   myowncloud                 192.168.1.1
    AVAILABILITYZONE   |- vm types                free / max   cpu   ram  disk
    AVAILABILITYZONE   |- m1.small                0004 / 0004   1    192     2
    AVAILABILITYZONE   |- c1.medium               0004 / 0004   1    256     5
    AVAILABILITYZONE   |- m1.large                0002 / 0002   2    512    10
    AVAILABILITYZONE   |- m1.xlarge               0002 / 0002   2   1024    20
    AVAILABILITYZONE   |- c1.xlarge               0001 / 0001   4   2048    20
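In that listing, the two numbers in each vm-type row are the free and maximum instance counts for that type. As a sketch, the free m1.small count can be pulled out with awk; it is shown here against the sample line above, but on a live system you would pipe `euca-describe-availability-zones verbose` into the awk instead:

```shell
# Extract the "free" count for m1.small from the verbose zone listing;
# the field positions match the sample output above (free count is field 4)
awk '/m1.small/ {print $4}' <<'EOF'
AVAILABILITYZONE   |- m1.small                0004 / 0004   1    192     2
EOF
# prints 0004
```

A zero in the free column means the cloud cannot currently start any more instances of that type.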

STEP 7: Bundle an Image

Using UEC consists of creating and registering images with the Cloud Controller.

This page describes the process for Ubuntu 10.04 LTS.

There is more than one way to obtain a virtual image:

  • Download an image from the network, bundle and upload it
  • Create a custom image using VMBuilder
  • Use the Image Store to download and install an image

Here we will describe the process of downloading one of the daily builds that are built and published automatically. The process is similar for Official Released Images.

Note: the shell variables set in the code snippets below are useful in scripts, and for reuse when typing further commands.

  1. Download the UEC image for the architecture you want. You can do it from your browser or from the command line:
    TIMESTAMP=$(date +%Y%m%d%H%M%S)
    RELEASE=lucid
    ARCH=amd64    # Or this might be i386
    [ $ARCH = "amd64" ] && IARCH=x86_64 || IARCH=i386
    UEC_IMG=$RELEASE-server-uec-$ARCH
    URL=http://uec-images.ubuntu.com/$RELEASE/current/
    [ ! -e $UEC_IMG.tar.gz ] &&  wget $URL/$UEC_IMG.tar.gz
    uec-publish-tarball $UEC_IMG.tar.gz $RELEASE-$TIMESTAMP
  2. Now, your kernel and image will have been uploaded into Eucalyptus and should be ready to run. To confirm, run the following command:
    euca-describe-images
    EMI=$(euca-describe-images | grep emi- | head -n1 | awk '{print $2}')
    You should see a registered kernel and image and they should be marked as 'available'.
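To see what the EMI= pipeline above does, here it is run on a single made-up euca-describe-images line. The image id "emi-12AB34CD" and bucket "mybucket" are hypothetical example values; the pipeline assumes the image id is the second whitespace-separated field:

```shell
# The pipeline keeps the first emi- line and prints its second field,
# the image id. The line below is a hypothetical example of the output
# format, not real data.
echo "IMAGE   emi-12AB34CD   mybucket/lucid.img.manifest.xml   admin   available   public   x86_64   machine" \
  | grep emi- | head -n1 | awk '{print $2}'
# prints emi-12AB34CD
```

The $EMI variable is then ready to pass to euca-run-instances in the next step.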

STEP 8: Run an Image

There are multiple ways to instantiate an image in UEC:

  • Use the command line
  • Use one of the UEC compatible management tools such as Landscape
  • Use the ElasticFox extension to Firefox

Here we will describe the process from the command line:

  1. Before running an instance of your image, you should first create a keypair (ssh key) that you can use to log into your instance as root, once it boots. The key is stored, so you will only have to do this once. Run the following command:
    if [ ! -e ~/.euca/mykey.priv ]; then
        mkdir -p -m 700 ~/.euca
        touch ~/.euca/mykey.priv
        chmod 0600 ~/.euca/mykey.priv
        euca-add-keypair mykey > ~/.euca/mykey.priv
    fi

    Note: You can call your key whatever you like (in this example, the key is called 'mykey'), but remember what it is called. If you forget, you can always run euca-describe-keypairs to get a list of created keys stored in the system.

  2. Make sure to source ~/.euca/eucarc before you run any of the euca2ools commands. It is probably best to add this to the bottom of your ~/.bashrc.
  3. You must also allow access to port 22 in your instances:
    euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
  4. Next, you can create instances of your registered image:
    euca-run-instances $EMI -k mykey -t m1.small

    Note: If you receive an error regarding image_id, you can find the correct id on the Images page, or click "How to Run" on the Store page to see a sample command.

  5. The first time you run an instance, the system sets up a local cache of the image from which it is created. Since VM images are usually quite large, this can take some time. To monitor the state of your instance, run:
    watch -n5 euca-describe-instances
    In the output, you should see information about the instance, including its state. While first-time caching is being performed, the instance's state will be 'pending'.
  6. When the instance is fully started, the above state will become 'running'. Look at the IP address assigned to your instance in the output, then connect to it:
    IPADDR=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $4}')
    ssh -i ~/.euca/mykey.priv ubuntu@$IPADDR
  7. And when you are done with this instance, exit your SSH connection, then terminate your instance:
    INSTANCEID=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $2}')
    euca-terminate-instances $INSTANCEID
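The IPADDR and INSTANCEID pipelines above assume the public IP is the fourth field and the instance id the second field of a euca-describe-instances line. Here they are run against a made-up line (the instance id, image id, and addresses are hypothetical example values):

```shell
# Hypothetical euca-describe-instances output line; the pipelines from
# the steps above extract the public IP (field 4) and instance id (field 2)
SAMPLE='INSTANCE  i-3FE1A67B  emi-12AB34CD  192.168.1.201  172.19.1.2  running  mykey  0  m1.small'
EMI=emi-12AB34CD

echo "$SAMPLE" | grep $EMI | grep running | tail -n1 | awk '{print $4}'   # prints 192.168.1.201
echo "$SAMPLE" | grep $EMI | grep running | tail -n1 | awk '{print $2}'   # prints i-3FE1A67B
```

The grep on "running" filters out terminated instances, and tail -n1 picks the most recently listed match when several instances of the same image are running.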

More Information

  • Log files: /var/log/eucalyptus

  • Configuration files: /etc/eucalyptus

  • Init scripts: /etc/init.d/eucalyptus-cc, /etc/init.d/eucalyptus-cloud and /etc/init.d/eucalyptus-nc

  • Database: /var/lib/eucalyptus/db

  • Reboot note: If you reboot your machine, Eucalyptus may not start up automatically; you may need to restart the services manually.
  • Environment note: Don't forget to source your ~/.euca/eucarc before running the client tools.