Description

OpenNebula is an open source virtual infrastructure engine that enables the dynamic deployment and re-placement of virtualized services (groups of interconnected virtual machines) within and across sites. OpenNebula extends the benefits of virtualization platforms from a single physical resource to a pool of resources, decoupling the server not only from the physical infrastructure but also from the physical location.

OpenNebula can be primarily used as a virtualization tool to manage your virtual infrastructure in the data center or cluster. This setup is usually referred to as a private cloud, and with OpenNebula you can also dynamically scale it to multiple external clouds, thus building a hybrid cloud. When combined with a Cloud interface, OpenNebula can be used as the engine for public clouds, providing scalable and dynamic management of the back-end infrastructure. So, do you want to transform your rigid and compartmentalized infrastructure into a flexible and agile platform where you can dynamically deploy new services and adjust their capacity? If the answer is yes, you want to build what is nowadays called a private cloud.

Setting up a Private Cloud in 5 steps

In this mini-howto, you will learn to set up such a private cloud in 5 steps with Ubuntu and OpenNebula. We assume that your infrastructure follows a classical cluster-like architecture, with a front-end (cluster01, in the howto) and a set of worker nodes (cluster02 and cluster03).

First you’ll need to add the following PPA if you are running Ubuntu 8.04 LTS (Hardy) or Ubuntu 8.10 (Intrepid); if you are running Ubuntu 9.04 (Jaunty Jackalope), the packages are already in the standard repositories and you can skip this step:

    deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main
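
For example, assuming you keep third-party sources in /etc/apt/sources.list.d (the file name below is arbitrary, and you would substitute hardy for intrepid on 8.04), something like the following registers the PPA and refreshes the package index:

    $ echo "deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main" | \
        sudo tee /etc/apt/sources.list.d/opennebula.list
    $ sudo apt-get update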

If everything is set up correctly you should see the OpenNebula packages:

    $ apt-cache search opennebula
    libopennebula-dev - OpenNebula client library - Development
    libopennebula1 - OpenNebula client library - Runtime
    opennebula - OpenNebula controller
    opennebula-common - OpenNebula common files
    opennebula-node - OpenNebula node

OK, then. Here we go with the recipe:

Step 1 : Front-end installation

[Front-end (cluster01)] Install the opennebula package. As you can see from the output below, apt-get performs several configuration steps: it creates a oneadmin account, generates an RSA key pair, and starts the OpenNebula daemon:

      $ sudo apt-get install opennebula
      Reading package lists... Done 
      Building dependency tree 
      Reading state information... Done 
      The following extra packages will be installed: 
        opennebula-common 
      The following NEW packages will be installed: 
        opennebula opennebula-common 
      0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded. 
      Need to get 280kB of archives. 
      After this operation, 1352kB of additional disk space will be used. 
      Do you want to continue [Y/n]? 
      Setting up opennebula-common (1.2-0ubuntu1~intrepid1) ...
      Adding system user `oneadmin' (UID 107) ...
      Adding new user `oneadmin' (UID 107) with group `nogroup' ...
      Generating public/private rsa key pair.
      Setting up opennebula (1.2-0ubuntu1~intrepid1) ...
      oned and scheduler started 
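
As a quick sanity check, you can confirm that the daemon and the scheduler are actually running; the scheduler process is called mm_sched in this release (adjust the name if your version differs):

    $ pgrep -l oned
    $ pgrep -l mm_sched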

Step 2 : Add the cluster nodes to the system

[Front-end (cluster01)] In this case, we’ll be using KVM and no shared storage, so each node is registered with the KVM information driver (im_kvm), the KVM virtualization driver (vmm_kvm) and the SSH transfer driver (tm_ssh). This simple configuration should work out-of-the-box with Ubuntu:

    $ onehost add cluster02 im_kvm vmm_kvm tm_ssh
    Success! 
    ... 
    $ onehost add cluster03 im_kvm vmm_kvm tm_ssh 
    Success! 
    ... 
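
The hosts are registered at this point, but the monitoring drivers run their probes on the nodes over SSH, so expect the hosts to show an error state until passwordless access for oneadmin is configured in the remaining steps. You can list them at any time with:

    $ onehost list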

Step 3 : Configure ssh access

[Front-end (cluster01)] You need to add the cluster nodes to the known hosts list for the oneadmin user:

    $ sudo -u oneadmin ssh cluster02 
    The authenticity of host 'cluster02 (192.168.16.2)' can't be established. 
    RSA key fingerprint is 37:41:a5:0c:e0:64:cb:03:3d:ac:86:b3:44:68:5c:f9.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'cluster02,192.168.16.2' (RSA) to the list of known hosts. 
      oneadmin@cluster02's password: 
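
Repeat the same for cluster03. If you manage many nodes, a non-interactive alternative is ssh-keyscan; this assumes oneadmin's home is /var/lib/one (as created by the packages), and note that it skips the manual fingerprint verification shown above:

    $ ssh-keyscan cluster02 cluster03 | sudo -u oneadmin tee -a /var/lib/one/.ssh/known_hosts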

Step 4 : Install the nodes

[Worker Node (cluster02,cluster03)] Install the OpenNebula Node package:

    $ sudo apt-get install opennebula-node 
    Reading package lists... Done 
    Building dependency tree 
    Reading state information... Done 
    The following extra packages will be installed: 
      opennebula-common 
    The following NEW packages will be installed: 
      opennebula-common opennebula-node 
    ... 
    Setting up opennebula-node (1.2-0ubuntu1~intrepid1) ...
    Adding user `oneadmin' to group `libvirtd' ... 
    Adding user oneadmin to group libvirtd 
    Done. 

Note that oneadmin is also created on the nodes (no need for NIS here) and added to the libvirtd group, so it can manage the VMs.

Step 5 : Set up authorization

[Worker Node (cluster02,cluster03)] Trust the oneadmin user at the SSH level; just copy the command from the onehost output ;) :

    $ sudo tee /var/lib/one/.ssh/authorized_keys << EOT
    > ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAm9n0E4bS9K8NUL2bWh4F78LZWDo8uj2VZiAeylJzct 
    …7YPgr+Z4+lhOYPYMnZIzVArvbYzlc7HZxczGLzu+pu012a6Mv4McHtrzMqHw== oneadmin@cluster01 
    > EOT
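
If you no longer have that output at hand, the same public key can be read on the front-end, assuming the key pair generated during the installation lives in the default location under oneadmin's home:

    $ sudo cat /var/lib/one/.ssh/id_rsa.pub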

You may want to check that you can log in to the cluster nodes from the front-end, using the oneadmin account, without being asked for a password:

    $ sudo -u oneadmin ssh cluster02

You are done! You have your own cloud up and running! Now, you should be able to see your nodes ready to start VMs:

    $ onehost list
    HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT 
    0 cluster02                   0    100    100    100  963992  902768   on 
    1 cluster03                   0    100    100    100  963992  907232   on 

You may want to check the OpenNebula documentation for further information on configuring the system or to learn how to specify virtual machines.

How to launch a VM

First, you need to prepare a virtual network for your VMs. Create a text file vmnet.template with the following contents to define a virtual network with a single IP lease (change the address to match your LAN):

    NAME   = "Private LAN"
    TYPE   = FIXED
    BRIDGE = eth0
    LEASES = [IP=130.10.0.1]

Then create the network from this template:

    $ onevnet create vmnet.template

You can verify that the network was created correctly using the onevnet command again: first list all the available virtual networks with onevnet list, then show the details of the newly created network with onevnet show:

    $ onevnet list
    NID NAME            TYPE   SIZE BRIDGE
      0 Private LAN     Fixed     1 eth0

    $ onevnet show "Private LAN"
    NID               : 0
    UID               : 0
    Network Name      : Private LAN
    Type              : Fixed
    Size              : 1
    Bridge            : eth0

    ....: Template :....
            BRIDGE=eth0
            LEASES=IP=130.10.0.1,MAC=50:20:20:20:20:20
    ....: Leases :....
    IP = 130.10.0.1  MAC = 50:20:20:20:20:20  USED = 0 VID = -1

Once you have created the virtual network, you can submit a VM to OpenNebula using the onevm command. We are going to build a VM template that boots from an image placed in the /opt/nebula/images directory (adjust the kernel, initrd and image paths to files that actually exist on your front-end). The following will do:

    NAME   = vm-example 
    CPU    = 0.5
    MEMORY = 128
    OS     = [
       kernel   = "/boot/vmlinuz-2.6.18-4-xen-amd64",
       initrd   = "/boot/initrd.img-2.6.18-4-xen-amd64",
       root     = "sda1" ]
    DISK   = [
       source   = "/opt/nebula/images/disk.img",
       target   = "sda1",
       readonly = "no" ]
    DISK   = [
       type     = "swap",
       size     = 1024,
       target   = "sdb"]
    NIC    = [ NETWORK = "Private LAN" ]

Save it in your home directory as myfirstVM.template.
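
Before submitting, it is worth checking that the disk image exists and is readable by oneadmin; with the tm_ssh transfer driver it will be copied from the front-end to the chosen node (the path below matches the template above):

    $ sudo -u oneadmin ls -l /opt/nebula/images/disk.img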

Once you have tailored the requirements to your needs (especially the CPU and MEMORY fields), ensuring that the VM fits into at least one of the hosts, let's submit the VM (assuming you are currently in your home folder):

    $ onevm submit myfirstVM.template

This should come back with an ID that you can use to identify the VM for monitoring and control, again through the onevm command:

    $ onevm list

The output should look like:

    ID     NAME STAT CPU     MEM        HOSTNAME                TIME
     0    one-0 runn   0   65536       cluster02          00 0:00:02
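
For more detail on a single machine, onevm show takes the ID from the listing (ID 0 in the example above):

    $ onevm show 0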

What next?

  • More information: Documentation, Guides and Use Cases

  • To configure OpenNebula to launch EC2 instances (building a hybrid cloud), see the OpenNebula documentation

  • To configure OpenNebula with Haizea to use advanced scheduling features (including advance reservations), see the Haizea documentation
