Controlling volume and instance placement in OpenStack (take 2)

An OpenStack cluster is deployed using bare metal hardware provisioned from various hosting companies (eNovance, Hetzner, etc.). Each node (bare metal machine) is defined as an availability zone. The SimpleScheduler is configured so that OpenStack volume placement supports availability zones. The instance is created with nova boot … --availability_zone=bm0001 … and the volume with euca-create-volume --zone bm0001 --size 1. The volume is then attached to the instance with nova volume-attach.

OpenStack setup

The OpenStack cluster is set up using the Debian GNU/Linux puppet HOWTO. All hosts are named bm0001.the.re, bm0002.the.re, bm0003.the.re, etc.

Availability zone

An availability zone is defined by adding the node_availability_zone configuration flag to the /etc/nova/nova.conf file of each node. For instance, the following puppet manifest snippet makes each node its own availability zone, named after the bmNNNN prefix of its FQDN:

  $availability_zone = regsubst($::fqdn, '^(bm\d+).*', '\1')
  nova_config { 'node_availability_zone': value => $availability_zone }

which translates into the following line in /etc/nova/nova.conf on the node bm0001.the.re:

--node_availability_zone=bm0001
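
The regsubst call above simply keeps the leading bmNNNN part of the FQDN. As an illustration only (not part of the deployment), the same mapping can be checked with a few lines of Python using the same pattern:

  import re

  # Same pattern as the puppet regsubst above: keep the leading bmNNNN prefix.
  def zone_for(fqdn):
      return re.sub(r'^(bm\d+).*', r'\1', fqdn)

  print(zone_for('bm0001.the.re'))   # bm0001
  print(zone_for('bm0002.the.re'))   # bm0002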

Activate SimpleScheduler

Although undocumented, the SimpleScheduler supports availability zones for nova volume placement, as this excerpt from nova/scheduler/simple.py shows:

        availability_zone = instance_opts.get('availability_zone')

        zone, host = FLAGS.default_schedule_zone, None
        if availability_zone:
            # an availability zone may be given as "zone" or "zone:host"
            zone, _x, host = availability_zone.partition(':')
        ...
        if zone:
            # keep only the volume services running in the requested zone
            results = [(service, cores) for (service, cores) in results
                       if service['availability_zone'] == zone]
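
This means the availability zone may be passed either as a bare zone name or as a zone:host pair. A small Python illustration (not part of nova) of the partition(':') call used above:

  # Illustration of the zone/host parsing performed by the scheduler excerpt above.
  for availability_zone in ('bm0001', 'bm0001:bm0001.the.re'):
      zone, _x, host = availability_zone.partition(':')
      print('%s -> zone=%s host=%s' % (availability_zone, zone, host or None))
  # bm0001 -> zone=bm0001 host=None
  # bm0001:bm0001.the.re -> zone=bm0001 host=bm0001.the.re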

It is activated by setting the volume_scheduler_driver configuration flag in the /etc/nova/nova.conf file of the node running the nova scheduler, for instance with the following puppet snippet:

nova_config { 'volume_scheduler_driver': value => 'nova.scheduler.simple.SimpleScheduler' }

which translates into the following line in /etc/nova/nova.conf:

--volume_scheduler_driver=nova.scheduler.simple.SimpleScheduler

Using euca2ools for volume provisioning

The nova volume-create command does not support the --availability_zone option. However, the volume API does, and it can be called directly as follows:

curl -H "X-Auth-Token:f8743f0c02944cd087d6a10d1e9e9039" \
     -H "Content-Type:application/json" \
     -d '{"volume": {"availability_zone": "bm0002", "snapshot_id": null, "display_name": "volume01",
           "volume_type": null, "display_description": null, "size": 1}}' \
     http://os.the.re:8776/v1/c776fbcb77374ec7ae4cafb2a6d13402/volumes
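
For scripting, the same call can be made from Python; here is a minimal sketch using the requests library, reusing the token, endpoint and tenant id from the curl example above:

  import json
  import requests

  # The X-Auth-Token must first be obtained from Keystone; the endpoint and
  # tenant id are the ones used in the curl example above.
  token = 'f8743f0c02944cd087d6a10d1e9e9039'
  url = 'http://os.the.re:8776/v1/c776fbcb77374ec7ae4cafb2a6d13402/volumes'
  body = {'volume': {'availability_zone': 'bm0002',
                     'snapshot_id': None,
                     'display_name': 'volume01',
                     'volume_type': None,
                     'display_description': None,
                     'size': 1}}
  response = requests.post(url,
                           headers={'X-Auth-Token': token,
                                    'Content-Type': 'application/json'},
                           data=json.dumps(body))
  print(response.status_code)
  print(response.text)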

This is inconvenient because it requires first obtaining a token from Keystone manually. It is easier to use the euca-create-volume command from the euca2ools package. The necessary credentials can be downloaded from the Settings => EC2 link of the OpenStack dashboard. For instance:

# euca-create-volume --zone bm0001 --size 1

will create a 1GB volume on the bm0001.the.re node.

Attach the volume to the server

Get the id of the volume with nova volume-list and the id of the instance with nova list, then associate the two with the following command:

# nova volume-attach 2194fc07-9443-4ce8-8ae0-4b9360757d36 8 /dev/vda

Or use the OpenStack dashboard.
Check that it has been attached successfully:

# nova volume-list
+----+--------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type |             Attached to              |
+----+--------+--------------+------+-------------+--------------------------------------+
| 8  | in-use | volume01     | 1    | None        | 2194fc07-9443-4ce8-8ae0-4b9360757d36 |
+----+--------+--------------+------+-------------+--------------------------------------+

Check that it is accessible from the instance with:

root@bm0001:~# ssh -i test_keypair.pem cirros@10.145.0.8
$ sudo fdisk /dev/vdb
Command (m for help): p
Disk /dev/vdb: 1073 MB, 1073741824 bytes