HOWTO install Ceph teuthology on OpenStack

Teuthology is used to run Ceph integration tests. It is installed from source and will use newly created OpenStack instances as targets:

$ cat targets.yaml
targets:
  ubuntu@target1.novalocal: ssh-rsa AAAAB3NzaC1yc2...
  ubuntu@target2.novalocal: ssh-rsa AAAAB3NzaC1yc2...

They allow password-free ssh connections for the ubuntu user, with full sudo privileges, from the machine running teuthology. An Ubuntu precise 12.04.2 target must be configured with:

$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | \
  sudo apt-key add -
$ echo '    ubuntu hard nofile 16384' | sudo tee /etc/security/limits.d/ubuntu.conf

It can then be tried with a configuration file that does nothing but install Ceph and run the daemons.

$ cat noop.yaml
check-locks: false
roles:
- - mon.a
  - osd.0
- - osd.1
  - client.0
tasks:
- install:
   project: ceph
   branch: stable
- ceph:

The output should look like this:

$ ./virtualenv/bin/teuthology targets.yaml noop.yaml
INFO:teuthology.run_tasks:Running task internal.save_config...
INFO:teuthology.task.internal:Saving configuration
INFO:teuthology.run_tasks:Running task internal.check_lock...
INFO:teuthology.task.internal:Lock checking disabled.
INFO:teuthology.run_tasks:Running task internal.connect...
INFO:teuthology.task.internal:Opening connections...
DEBUG:teuthology.task.internal:connecting to ubuntu@teuthology2.novalocal
DEBUG:teuthology.task.internal:connecting to ubuntu@teuthology1.novalocal
...
INFO:teuthology.run:Summary data:
{duration: 363.5891010761261, flavor: basic, owner: ubuntu@teuthology, success: true}
INFO:teuthology.run:pass

teuthology scope

Although mainly used by Inktank at this time, teuthology should be usable by anyone because it's an essential tool to debug and diagnose problems. The walkthrough written back in January 2013 is now obsolete. This document is an updated walkthrough of a teuthology installation; it should eventually become redundant as teuthology is more widely adopted.

installing teuthology

On a newly installed Ubuntu precise 12.04 machine run:

sudo apt-get install python-dev python-virtualenv python-pip libevent-dev
sudo apt-get install libmysqlclient-dev python-libvirt
sudo apt-get install git-core
git clone https://github.com/ceph/teuthology.git
cd teuthology
./bootstrap
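
If the bootstrap completes without errors, the command line entry points end up in ./virtualenv/bin. A quick sanity check ( just a sketch, assuming the bootstrap above succeeded ) is to ask the main script for its usage message:

./virtualenv/bin/teuthology --help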

To get the teuthology repository exactly as it was at the time this document was written:

git checkout 92138621640ce173f74b39288bd27386c2fcfb58

A minor patch must be applied to ensure that teuthology does not rely on a lock server.

creating instances and devices

Two machines, target1 and target2, are created in an OpenStack tenant, running Ubuntu Precise. The machine running teuthology drives the integration tests while target1 and target2 do the actual work, i.e. run the integration scripts. They need at least 4GB of RAM each. The teuthology password-less ssh key is added to the tenant:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
77:44:a3:e0:2c:4b:17:60:28:cb:4a:8c:14:fa:be:57 ubuntu@teuthology
The key's randomart image is:
+--[ RSA 2048]----+
| ..  .o.o   o    |
|... .. o o o .   |
|=. o  o + . .    |
|.+o  . +   .     |
|...   . S . .    |
|..    E  . .     |
|  .  .           |
|   ..            |
|  ..             |
+-----------------+
$ nova keypair-add --pub_key .ssh/id_rsa.pub teuthology

and used when creating the instances:

nova boot --image 'Ubuntu Precise 12.04'  \
 --flavor e.1-cpu.10GB-disk.4GB-ram \
 --key_name teuthology --availability_zone=bm0002 \
 --poll target1
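
The same command with target2 as the last argument creates the second target. A small loop ( a sketch reusing exactly the image, flavor, key and availability zone above ) boots both:

for target in target1 target2 ; do
  nova boot --image 'Ubuntu Precise 12.04'  \
   --flavor e.1-cpu.10GB-disk.4GB-ram \
   --key_name teuthology --availability_zone=bm0002 \
   --poll $target
done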

target[12] are each given an additional 10GB volume.

# euca-create-volume --zone bm0002 --size 10
# nova volume-list
...
| 67 | available      | None         | 10   | ...
+----+----------------+--------------+------+--...
# nova volume-attach target1 67 /dev/vdb

When running on the target instances (i.e. target1 and target2), teuthology will automatically discover that /dev/vdb is available and use it for testing.
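
To double check that the volume is indeed visible as /dev/vdb, a quick sketch run from the teuthology machine ( it assumes the password-less ssh connections described further down are already in place ):

for target in target1.novalocal target2.novalocal ; do
  ssh ubuntu@$target lsblk /dev/vdb   # should show a 10GB disk with no partitions
done
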
The apt key used to sign the packages is added to target[12]

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | \
  sudo apt-key add -

and the admin group to which the ubuntu user belongs is given unrestricted root access by setting

%admin ALL=(ALL:ALL) ALL

in /etc/sudoers.

Some ulimit restrictions must be relaxed for the ubuntu user with:

echo '    ubuntu hard nofile 16384' > /etc/security/limits.d/ubuntu.conf

on target1 and target2.
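
A quick way to confirm that the new hard limit is picked up for fresh logins of the ubuntu user ( a sketch run from the teuthology machine once ssh access is in place ):

ssh ubuntu@target1.novalocal 'ulimit -Hn'   # should print 16384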

configuring the instance running teuthology

The instance named ceph will run teuthology.

sudo apt-get install git-core
git clone git://github.com/ceph/teuthology.git

The dependencies are installed and the command line scripts created with

sudo apt-get install python-dev python-virtualenv python-pip libevent-dev
cd teuthology ; ./bootstrap

The requirements are light because this instance won’t run ceph. It will drive the other instances ( target[12] ) and instruct them to run ceph integration scripts.
Teuthology expects the targets to allow password-less ssh connections. Although this could be done with ssh agent forwarding, the ssh client used by teuthology does not support it. A new password-less key is generated with ssh-keygen and the content of ~/.ssh/id_rsa.pub is copied over to the ~/.ssh/authorized_keys file for the ubuntu account on each target instance.
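
A minimal sketch of that distribution step, assuming the targets are reachable as target1.novalocal and target2.novalocal:

# append ~/.ssh/id_rsa.pub to the ubuntu account of each target
for target in target1.novalocal target2.novalocal ; do
  ssh-copy-id ubuntu@$target   # equivalent to pasting the key into ~/.ssh/authorized_keys by hand
done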

configuring teuthology targets

The targets will install packages as instructed by the teuthology instance ( see the install: task description below ).
Ceph is sensitive to time differences between instances, and ntp should be installed to ensure they stay in sync.

apt-get install ntp apache2 libapache2-mod-fastcgi python-pip python-virtualenv python-dev libevent-dev
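
Once ntp is installed and running, a quick check that the daemon found peers and the clocks are converging ( ntpq ships with the ntp package ):

ntpq -p   # lists the peers the daemon synchronizes against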

describing the integration tests

The integration tests are described in YAML files interpreted by teuthology. The following YAML file is used: it is a trivial example to make sure the environment is ready to host more complex test suites.

check-locks: false
roles:
- - mon.a
  - mon.c
  - osd.0
- - mon.b
  - osd.1
tasks:
- install:
   branch: stable
- ceph:
targets:
  ubuntu@target1.novalocal: ssh-rsa AAAAB3NzaC1yc2...
  ubuntu@target2.novalocal: ssh-rsa AAAAB3NzaC1yc2...

The check-locks: false line disables the target locking logic and assumes the environment is used by a single user, i.e. each developer deploys teuthology and the required targets within an OpenStack tenant of their own. The roles: section deploys a few mons and osds on the two targets listed below. The first subsection of roles:

roles:
- - mon.a
  - mon.c
  - osd.0

will be deployed on the first entry of targets:

targets:
  ubuntu@target1.novalocal: ssh-rsa AAAAB3NzaC1yc2...

and so on. The install: task needs to come first and will take care of installing the required packages on each target. The actual tests would add more tasks after the ceph: task.
The targets are described with their user and host name ( ubuntu@target1.novalocal ), which can be tested with ssh ubuntu@target1.novalocal echo good and must work without a password. Each entry must be followed by the ssh host key, as found in the /etc/ssh/ssh_host_rsa_key.pub file on each target machine. The comment part ( i.e. the root@target1 that shows at the end of the /etc/ssh/ssh_host_rsa_key.pub file ) must be removed, otherwise teuthology will complain that it cannot parse the line.
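
A small sketch of collecting a host key in the expected form, keeping only the key type and the base64 blob and dropping the comment:

ssh ubuntu@target1.novalocal "cut -d' ' -f1,2 /etc/ssh/ssh_host_rsa_key.pub"
# prints something like: ssh-rsa AAAAB3NzaC1yc2...
# paste that value after ubuntu@target1.novalocal: in the targets: section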

running teuthology

From the teuthology instance, in the teuthology source directory and assuming noop.yaml contains the YAML file described above:

$ ./virtualenv/bin/teuthology noop.yaml
INFO:teuthology.run_tasks:Running task internal.save_config...
INFO:teuthology.task.internal:Saving configuration
INFO:teuthology.run_tasks:Running task internal.check_lock...
INFO:teuthology.task.internal:Lock checking disabled.
INFO:teuthology.run_tasks:Running task internal.connect...
INFO:teuthology.task.internal:Opening connections...
DEBUG:teuthology.task.internal:connecting to ubuntu@teuthology2.novalocal
DEBUG:teuthology.task.internal:connecting to ubuntu@teuthology1.novalocal
...
INFO:teuthology.run:Summary data:
{duration: 363.5891010761261, flavor: basic, owner: ubuntu@teuthology, success: true}
INFO:teuthology.run:pass
