Migrating from ganeti to OpenStack via Ceph

On the ganeti node, shut down the instance and activate its disks:

z2-8:~# gnt-instance shutdown nerrant
Waiting for job 1089813 for nerrant...
z2-8:~# gnt-instance activate-disks nerrant
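activate-disks prints one line per disk in the form node:disk/N:device; the /dev/drbd10 path used for the copy below comes from that output. A small sketch of extracting it (the output line here is illustrative, not captured from this cluster):

```shell
# Illustrative activate-disks output line; real node names and drbd minor
# numbers depend on the cluster
line="z2-8.example.com:disk/0:/dev/drbd10"
# Extract the block device backing disk 0
device=$(echo "$line" | awk -F: '$2 == "disk/0" { print $3 }')
echo "$device"
```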

On an OpenStack Havana installation using a Ceph cinder backend, create a volume of the same size:

# cinder create --volume-type ovh --display-name nerrant 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-11-12T13:00:39.614541      |
| display_description |                 None                 |
|     display_name    |               nerrant                |
|          id         | 3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 ovh                  |
+---------------------+--------------------------------------+
# rbd --pool ovh info volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16
rbd image 'volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16':
        size 10240 MB in 2560 objects
        order 22 (4096 KB objects)
        block_name_prefix: rbd_data.90f0417089fa
        format: 2
        features: layering
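The rbd info output is internally consistent: order 22 means objects of 2^22 bytes (4096 KB), and a 10240 MB image therefore needs 10240 / 4 = 2560 objects. A quick arithmetic check:

```shell
order=22
object_mb=$(( (1 << order) / 1024 / 1024 ))   # 2^22 bytes = 4 MB per object
objects=$(( 10240 / object_mb ))              # objects needed for a 10240 MB image
echo "$object_mb $objects"
```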

On a host connected to the Ceph cluster and running a Linux kernel newer than 3.8 (required by the format: 2 image above), map the volume to a block device with:

# rbd map --pool ovh volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16
# rbd showmapped
id pool image                                       snap device
1  ovh  volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 -    /dev/rbd1

Copy the ganeti volume with:

z2-8:~# pv < /dev/drbd10 | ssh bm0014 dd of=/dev/rbd1
2,29GB 0:09:14 [4,23MB/s] [==========================>      ] 22% ETA 0:31:09
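Before unmapping, the copy can be checked end to end with checksums. A minimal sketch of the pattern, using scratch files in place of the block devices (in practice src would be /dev/drbd10 on the ganeti node and dst the mapped /dev/rbd1, reading only the first source-size bytes of dst if the rbd image is larger):

```shell
# Demonstrate verify-by-checksum on scratch files; device paths from the
# example above would replace them in a real migration
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=1 2>/dev/null
cp "$src" "$dst"                               # stands in for the pv | ssh dd copy
sum_src=$(md5sum < "$src" | cut -d' ' -f1)
sum_dst=$(md5sum < "$dst" | cut -d' ' -f1)
[ "$sum_src" = "$sum_dst" ] && echo "copy verified"
rm -f "$src" "$dst"
```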

When the copy completes, unmap the device:

# rbd unmap /dev/rbd1

The volume is ready to boot.
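To actually boot from it, the volume can be attached as the root device at instance creation time. A hedged sketch (the m1.small flavor name is illustrative, and the flag syntax is the Havana-era python-novaclient one; verify with nova help boot on your install):

```shell
# Boot an instance from the migrated volume as vda; mapping format is
# <volume-id>:<type>:<size>:<delete-on-terminate>
nova boot --flavor m1.small \
  --block-device-mapping vda=3ec2035e-ff76-43a9-bbb3-6c003c1c0e16:::0 \
  nerrant
```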

Collocating Ceph volumes and instances in a multi-datacenter setup

OpenStack Havana is installed on machines rented from OVH and Hetzner. An aggregate is created for machines hosted at OVH and another for machines hosted at Hetzner. A Ceph cluster is created with a pool using disks from OVH and another pool using disks from Hetzner. A cinder backend is created for each Ceph pool. From the dashboard, an instance can be created in the OVH availability zone using a Ceph volume provided by the matching OVH pool.
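A hedged sketch of the cinder.conf multi-backend section such a setup might use (section names and pool names are illustrative; cinder.volume.drivers.rbd.RBDDriver is the standard Havana rbd driver):

```ini
[DEFAULT]
# one cinder backend per Ceph pool / datacenter
enabled_backends=ovh,hetzner

[ovh]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=ovh
volume_backend_name=ovh

[hetzner]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=hetzner
volume_backend_name=hetzner
```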


Fragmented floating IP pools and multiple AS hack

When an OpenStack Havana cluster is deployed on hardware rented from OVH and Hetzner, IPv4 addresses are rented by the month and are either isolated (just one IP, not a proper subnet) or come as a collection of disjoint subnets of various sizes.

OpenStack does not provide a way to deal with this situation; a hack involving a double NAT using a subnet of floating IPs is proposed.
An L3 agent runs on an OVH machine and pretends to own a subnet of floating IPs, although they are not publicly routable. Another L3 agent is set up on a Hetzner machine and uses another such subnet.
When an instance is created, it may choose a Hetzner private subnet, which is connected to a Hetzner router whose gateway is set to the network providing the Hetzner floating IPs. The same is done for OVH.
A few floating IPs are rented from OVH and Hetzner. On the host running the L3 agent dedicated to the OVH AS, a one-to-one NAT is established between each IP in the subnet and the OVH floating IPs. For instance, the following /etc/init/nat.conf upstart script associates each private IP with an OVH floating IP.

description "OVH nat hack"

start on neutron-l3-agent

script
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  ip addr add dev br-ex
  while read private public ; do
    test "$public" || continue
    iptables -t nat -A POSTROUTING -s $private/32 -j SNAT --to-source $public
    iptables -t nat -A PREROUTING -d $public/32 -j DNAT --to-destination $private
  done <<EOF
EOF
end script
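The heredoc body is elided above; it would hold one "private public" address pair per line, consumed by the read loop. The same loop shape can be sketched with echo standing in for iptables, using illustrative RFC 1918 / RFC 5737 addresses:

```shell
# Same pair-driven loop as the upstart script, with echo instead of
# iptables; all addresses are illustrative only
while read private public ; do
  test "$public" || continue
  echo "SNAT $private -> $public"
  echo "DNAT $public -> $private"
done <<EOF
10.88.15.10 198.51.100.10
10.88.15.11 198.51.100.11
EOF
```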
