Sorting Ceph backport branches

When there are many backports in flight, they are more likely to overlap and conflict with each other. When a conflict can be trivially resolved because it only comes from the context of a hunk, it is often enough to swap the two commits to avoid the conflict entirely. For instance, let's say a commit on

void foo() { }
void bar() {}

adds an argument to the foo function:

void foo(int a) { }
void bar() {}

and the second commit adds an argument to the bar function:

void foo(int a) { }
void bar(bool b) {}

If the second commit is backported before the first, it will conflict because the context of its hunk expects the foo function to already have an argument, which is not yet the case in the target branch.

When there are dozens of backport branches, they can be sorted so that the first to merge is the one that cherry-picks the oldest ancestor in the master branch. In other words, given the example above, a cherry-pick of the first commit should be merged before a cherry-pick of the second commit because it is older in the commit history.

Sorting the branches also gracefully handles interdependent backports. For instance, let's say the first branch contains a few backported commits and a second branch contains a backported commit that can't be applied unless the first branch is merged. Since each Ceph branch proposed for backporting is required to pass make check, the most commonly used strategy is to include all the commits from the first branch in the second branch. This second branch is not intended to be merged and its title is usually prefixed with DNM (Do Not Merge). When the first branch is merged, the second is rebased against the target and the redundant commits disappear from the second branch.
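
As an illustration, the workflow could look like the following sketch, where the branch names, the ceph remote and the $COMMIT_* variables are hypothetical placeholders:

#
# first backport branch: commits cherry-picked from master, oldest first
#
git checkout -b hammer-backport-first ceph/hammer
git cherry-pick -x $COMMIT_A $COMMIT_B
#
# second backport branch: it depends on the first, so it is created on
# top of it and its pull request title is prefixed with DNM
#
git checkout -b hammer-backport-second hammer-backport-first
git cherry-pick -x $COMMIT_C
#
# once the first branch is merged into hammer, rebasing the second
# branch against the updated target drops the redundant commits
#
git fetch ceph
git rebase ceph/hammer hammer-backport-second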

Here is a three-step shell script that implements the sorting (it assumes $PRS contains the numbers of the pull requests to sort):

#
# Make a file with the hash of all commits found in master
# but discard those that already are in the hammer release.
#
git log --no-merges \
  --pretty='%H' ceph/hammer..ceph/master \
  > /tmp/master-commits
#
# Match each pull request with the commit from which it was
# cherry-picked. Just use the first commit: we expect the other to be
# immediate ancestors. If that's not the case we don't know how to
# use that information so we just ignore it.
#
for pr in $PRS ; do
  git log -1 --pretty=%b ceph/pull/$pr/merge^1..ceph/pull/$pr/merge^2 | \
   perl -ne 'print "$1 '$pr'\n" if(/cherry picked from commit (\w+)/)'
done > /tmp/pr-and-first-commit
#
# For each pull request, grep the cherry-picked commit and display its
# line number. Sort the result in reverse order to get the pull
# request sorted in the same way the cherry-picked commits are found
# in the master history.
#
SORTED_PRS=$(while read commit pr ; do
  grep --line-number $commit < /tmp/master-commits | \
  sed -e "s/\$/ $pr/" ; done  < /tmp/pr-and-first-commit | \
  sort -rn | \
  perl -p -e 's/.* (.*)\n/$1 /')
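
The resulting $SORTED_PRS variable lists the pull request numbers, oldest cherry-picked ancestor first. For instance, the corresponding GitHub URLs can be displayed in merge order with:

for pr in $SORTED_PRS ; do
  echo https://github.com/ceph/ceph/pull/$pr
done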

Ceph integration tests made simple with OpenStack

If an OpenStack account (a tenant in the OpenStack parlance) is available, the Ceph integration tests can be run with the teuthology-openstack command, which will create the necessary virtual machines automatically (see the detailed instructions to get started). To do its work, it uses the teuthology OpenStack backend behind the scenes so the user does not need to know about it.
The teuthology-openstack command has the same options as teuthology-suite and can be run as follows:

$ teuthology-openstack \
  --simultaneous-jobs 70 --key-name myself \
  --subset 10/18 --suite rados \
  --suite-branch next --ceph next
...
Scheduling rados/thrash/{0-size-min-size-overrides/...
Suite rados in suites/rados scheduled 248 jobs.

web interface: http://167.114.242.148:8081/
ssh access   : ssh ubuntu@167.114.242.148 # logs in /usr/share/nginx/html

As the suite progresses, its status can be monitored by visiting the web interface, and the Horizon OpenStack dashboard shows the resource usage for the run.



HOWTO set up a PostgreSQL server on Ubuntu 14.04

In the context of teuthology (the integration test framework for Ceph), a PostgreSQL server needs to be available, locally only, with a single user dedicated to teuthology. It can be set up from a fresh Ubuntu 14.04 install with:

    sudo apt-get -qq install -y postgresql postgresql-contrib

    if ! sudo /etc/init.d/postgresql status ; then
        sudo mkdir -p /etc/postgresql
        sudo chown postgres /etc/postgresql
        sudo -u postgres pg_createcluster 9.3 paddles
        sudo /etc/init.d/postgresql start
    fi
    if ! psql --command 'select 1' \
          'postgresql://paddles:paddles@localhost/paddles' > /dev/null
    then
        sudo -u postgres psql \
            -c "CREATE USER paddles with PASSWORD 'paddles';"
        sudo -u postgres createdb -O paddles paddles
    fi
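
To verify that the paddles user can actually create objects in its database, a throwaway statement can be run (the table name below is arbitrary):

    psql 'postgresql://paddles:paddles@localhost/paddles' \
        --command 'CREATE TABLE smoke (id integer); DROP TABLE smoke;'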

If anyone knows of a simpler way to do the same thing, I’d be very interested to know about it.

restoring an OpenStack ssh public key

When an ssh private key is obtained from OpenStack via

openstack keypair create foobar > foobar.pem

the matching public key is stored in the OpenStack tenant. If it is later deleted with

openstack keypair delete foobar

it can be restored with

ssh-keygen -y -f foobar.pem > foobar.pub
openstack keypair create --public-key foobar.pub foobar
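
To double check that the restored key matches what OpenStack now has on record, the fingerprints reported by both tools can be compared (depending on the ssh-keygen version, adding -E md5 may be needed so that both display the same fingerprint format):

ssh-keygen -l -f foobar.pub
openstack keypair show foobar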

oneliner to deploy teuthology on OpenStack

Note: this is obsoleted by Ceph integration tests made simple with OpenStack

Teuthology can be installed as a dedicated OpenStack instance on OVH using the OpenStack backend with:

nova boot \
   --image 'Ubuntu 14.04' \
   --flavor 'vps-ssd-1' \
   --key-name loic \
   --user-data <(curl --silent \
     https://raw.githubusercontent.com/dachary/teuthology/wip-6502-openstack/openstack-user-data.txt | \
     sed -e "s|OPENRC|$(env | grep OS_ | tr '\n' ' ')|") teuthology
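
The sed expression replaces the OPENRC placeholder in the user-data file with the OpenStack credentials found in the environment (the OS_* variables set by sourcing the openrc file). What will be injected can be previewed with:

env | grep OS_ | tr '\n' ' ' ; echo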

Assuming the IP assigned to the instance is 167.114.235.222, the following will display the progress of the integration tests that are run immediately after the instance is created:

ssh ubuntu@167.114.235.222 tail -n 2000000 -f /tmp/init.out

If all goes well, it will complete with:

...
========================= 8 passed in 1845.59 seconds =============
___________________________________ summary _________________________
  openstack-integration: commands succeeded
  congratulations :)

And the pulpito dashboard at 167.114.235.222:8081 will display the results of the integration tests like so:

Running your own Ceph integration tests with OpenStack

Note: this is obsoleted by Ceph integration tests made simple with OpenStack

The Ceph lab has hundreds of machines continuously running integration and upgrade tests. For instance, when a pull request modifies the Ceph core, it goes through a run of the rados suite before being merged into master. The Ceph lab has between 100 and 3000 jobs in its queue at all times, and it is convenient to be able to run integration tests on an independent infrastructure to:

  • run a failed job and verify a patch fixes it
  • run a full suite prior to submitting a complex modification
  • verify the upgrade path from a given Ceph version to another
  • etc.

If an OpenStack account is not available (a tenant in the OpenStack parlance), it is possible to rent one (it takes a few minutes). For instance, OVH provides a Horizon dashboard showing how many instances are being used to run integration tests:

The OpenStack usage is billed monthly and the accumulated costs are displayed on the customer dashboard:



configuring ansible for teuthology

As of July 8th, 2015, teuthology (the Ceph integration test software) switched from using Chef to using Ansible. To keep it working, two files must be created. The /etc/ansible/hosts/group_vars/all.yml file must contain:

modify_fstab: false

The modify_fstab setting is necessary for OpenStack provisioned instances but it won't hurt if it is always there (the only drawback being that mount options are not persisted in /etc/fstab, although they are set as they should be). The /etc/ansible/hosts/mylab file must then be populated with

[testnodes]
ovh224000.teuthology
ovh224001.teuthology
...

where ovh224000.teuthology etc. are the FQDNs of all machines that will be used as teuthology targets. The Ansible playbooks will expect to find all targets under the [testnodes] section. The output of a teuthology job should show that the Ansible playbook is being used with something like:

...
teuthology.run_tasks:Running task ansible.cephlab...
...
INFO:teuthology.task.ansible.out:PLAY [all] *****
...
TASK: [ansible-managed | Create the sudo group.] ******************************
...
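
Before running jobs, the inventory can be sanity checked with a plain ansible command, which should list the FQDNs declared under [testnodes] (this assumes the default /etc/ansible/hosts inventory location used above):

ansible testnodes --list-hosts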
