write-only ssh-based rsync server

A write-only rsync server can be used by anyone to upload content with no risk of deleting existing files. Assuming access to the rsync server is handled via ssh, the following line can be added to the ~/.ssh/authorized_keys file:

command="rrsync /usr/share/nginx/html" ssh-rsa AAAAB3NzaC1y...

The rrsync script ships with the rsync package documentation and can be installed with:

gzip -d < /usr/share/doc/rsync/scripts/rrsync.gz > /usr/bin/rrsync
chmod +x /usr/bin/rrsync
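
Once that line is in place, anyone holding the matching private key can upload files but stays confined to /usr/share/nginx/html. For instance (the hostname and user below are made up for the sake of the example):

# upload the local docs/ directory into the uploads/ subdirectory of
# /usr/share/nginx/html on the server; without --delete, rsync never
# removes existing files
rsync -av docs/ www-data@www.example.org:uploads/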

Scaling out the Ceph community lab

Ceph integration tests are vital and expensive. Unlike unit tests, which can be run on a laptop, they require multiple machines to deploy an actual Ceph cluster. As the community of Ceph developers expands, the community lab needs to expand with it.

The current development workflow and its challenges

When a developer contributes to Ceph, it goes like this:

  • The Developer submits a pull request
  • After the Reviewer is satisfied with the pull request, it is scheduled for integration testing (by adding the needs-qa label)
  • A Tester merges the pull request into an integration branch, together with other pull requests labelled needs-qa, and sets a label recording that (s)he did so (for instance, if Kefu Chai did it, he would set the wip-kefu-testing label)
  • The Tester waits for the packages to be built for the integration branch
  • The Tester schedules a suite of integration tests in the community lab
  • When the suite finishes, the Tester analyzes the integration test results and finds the pull request responsible for a failure (which can be challenging when there are more than a handful of pull requests in the integration branch)
  • For each failure the Tester adds a comment to the faulty pull request with a link to the integration test logs, kindly asking the developer to address the issue
  • When the integration tests are clean, the Tester merges the pull requests

As the number of contributors to Ceph increases, running the integration tests and analyzing their results becomes the bottleneck, because:

  • getting the integration test results usually takes a few days
  • only people with access to the community lab can run integration tests
  • analyzing test results is time consuming

Increasing the number of machines in the community lab would make integration tests run faster. But acquiring hardware, hosting it and monitoring it not only takes months, it also requires significant system administration work. The community of Ceph developers is growing faster than the community lab. And to make things even more complicated, as Ceph evolves the number of integration tests increases and requires even more resources.

When a developer frequently contributes to Ceph, (s)he is granted access to the VPN that allows her/him to schedule integration tests. For instance, Abhishek Lekshmanan and Nathan Cutler, who routinely run and analyze integration tests for backports, now have access to the community lab and can do that on their own. But the process to get access to the VPN takes weeks and the learning curve to use it properly is significant.

Although it is mostly invisible to the community lab user, the system administration workload to keep it running is significant. Dan Mick, Zack Cerza and others fix problems on a daily basis. As the size of the community lab grows, this workload increases and requires skills that are difficult to acquire.

Simplifying the workflow with public OpenStack clouds

As of July 2015, it became possible to run integration tests on public OpenStack clouds. More importantly, it takes less than one hour for a new developer to register and schedule an integration test. This new facility can be leveraged to simplify the workflow as follows:

  • The Developer submits a pull request
  • The Developer is required to attach a successful run of integration tests demonstrating the feature or the bug fix
  • After the Reviewer is satisfied with the pull request, it is merged.

There is no need for a Tester because the Developer now has the ability to run integration tests and interpret the results.

The interpretation of the test results is simpler because there is only one pull request per run. The Developer can compare her/his run to a recent run from the community lab to verify that the unmodified code passes. (S)He can also debug a failed test in interactive mode.

Unlike the community lab, the test cluster has a short life span and requires no system administration skills. It is created in the cloud, on demand, and can be destroyed as soon as the results have been analyzed.

The learning curve to schedule and interpret integration tests is reduced. The Developer needs to know about the teuthology-openstack command and how to interpret a test failure. But (s)he does not need the other teuthology-* commands nor does (s)he have to get access to the VPN of the community lab.
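
For instance, scheduling a run of the rados suite against a wip-myfeature branch (a made up branch name, assuming packages have already been built for it) could be as simple as:

teuthology-openstack --key-name myself \
  --suite rados --ceph wip-myfeature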

Sorting Ceph backport branches

When there are many backports in flight, they are more likely to overlap and conflict with each other. When a conflict can be trivially resolved because it comes from the context of a hunk, it is often enough to just swap the two commits to avoid the conflict entirely. For instance, let's say a commit on

void foo() { }
void bar() {}

adds an argument to the foo function:

void foo(int a) { }
void bar() {}

and the second commit adds an argument to the bar function:

void foo(int a) { }
void bar(bool b) {}

If the second commit is backported before the first, it will conflict, because the context of the bar function still shows the foo function without an argument.

When there are dozens of backport branches, they can be sorted so that the first to merge is the one that cherry-picks the oldest ancestor in the master branch. In other words, given the example above, a cherry-pick of the first commit would be merged before a cherry-pick of the second commit because the first commit is older in the commit history.

Sorting the branches also gracefully handles interdependent backports. For instance, let's say a first branch contains a few backported commits and a second branch contains a backported commit that can't be applied unless the first branch is merged. Since each Ceph branch proposed for backports is required to pass make check, the most commonly used strategy is to include all the commits from the first branch in the second branch. This second branch is not intended to be merged as is and its title is usually prefixed with DNM (Do Not Merge). When the first branch is merged, the second is rebased against the target branch and the redundant commits disappear from it.

Here is a short shell script, in three steps, that implements the sorting:

#
# Make a file with the hash of all commits found in master
# but discard those that already are in the hammer release.
#
git log --no-merges \
  --pretty='%H' ceph/hammer..ceph/master \
  > /tmp/master-commits
#
# Match each pull request with the commit from which it was
# cherry-picked. Just use the first commit: we expect the other to be
# immediate ancestors. If that's not the case we don't know how to
# use that information so we just ignore it.
#
for pr in $PRS ; do
  git log -1 --pretty=%b ceph/pull/$pr/merge^1..ceph/pull/$pr/merge^2 | \
   perl -ne 'print "$1 '$pr'\n" if(/cherry picked from commit (\w+)/)'
done > /tmp/pr-and-first-commit
#
# For each pull request, grep the cherry-picked commit and display its
# line number. Sort the result in reverse order to get the pull
# request sorted in the same way the cherry-picked commits are found
# in the master history.
#
SORTED_PRS=$(while read commit pr ; do
  grep --line-number $commit < /tmp/master-commits | \
  sed -e "s/\$/ $pr/" ; done  < /tmp/pr-and-first-commit | \
  sort -rn | \
  perl -p -e 's/.* (.*)\n/$1 /')
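
The $SORTED_PRS variable then contains the pull request numbers, the one cherry picking the oldest ancestor first. As a hypothetical follow-up (not part of the script above, and assuming the pull request head refs were fetched as ceph/pull/*/head), they could be merged in that order with:

#
# Merge the sorted pull requests into the current integration branch,
# oldest cherry-picked ancestor first.
#
for pr in $SORTED_PRS ; do
  git merge --no-ff -m "Merge pull request #$pr" ceph/pull/$pr/head
done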

Ceph integration tests made simple with OpenStack

If an OpenStack tenant (an account, in OpenStack parlance) is available, the Ceph integration tests can be run with the teuthology-openstack command, which creates the necessary virtual machines automatically (see the detailed instructions to get started). To do its work, it uses the teuthology OpenStack backend behind the scenes, so the user does not need to know about it.
The teuthology-openstack command has the same options as teuthology-suite and can be run as follows:

$ teuthology-openstack \
  --simultaneous-jobs 70 --key-name myself \
  --subset 10/18 --suite rados \
  --suite-branch next --ceph next
...
Scheduling rados/thrash/{0-size-min-size-overrides/...
Suite rados in suites/rados scheduled 248 jobs.

web interface: http://167.114.242.148:8081/
ssh access   : ssh ubuntu@167.114.242.148 # logs in /usr/share/nginx/html

As the suite progresses, its status can be monitored by visiting the web interface.

And the Horizon OpenStack dashboard shows resource usage for the run.
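
Once the results have been analyzed, the virtual machines created for the run can be destroyed. Assuming the --teardown option of teuthology-openstack does what its name suggests (worth double checking with teuthology-openstack --help), that boils down to:

teuthology-openstack --teardown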



HOWTO setup a postgresql server on Ubuntu 14.04

In the context of teuthology (the integration test framework for Ceph), a PostgreSQL server needs to be available, locally only, with a single user dedicated to teuthology. It can be set up from a fresh Ubuntu 14.04 install with:

    sudo apt-get -qq install -y postgresql postgresql-contrib

    if ! sudo /etc/init.d/postgresql status ; then
        sudo mkdir -p /etc/postgresql
        sudo chown postgres /etc/postgresql
        sudo -u postgres pg_createcluster 9.3 paddles
        sudo /etc/init.d/postgresql start
    fi
    if ! psql --command 'select 1' \
          'postgresql://paddles:paddles@localhost/paddles' > /dev/null
    then
        sudo -u postgres psql \
            -c "CREATE USER paddles with PASSWORD 'paddles';"
        sudo -u postgres createdb -O paddles paddles
    fi
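
Once this completes, a quick (and purely optional) way to double check that the dedicated user can connect is:

    # should report being connected to database "paddles" as user "paddles"
    psql 'postgresql://paddles:paddles@localhost/paddles' -c '\conninfo'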

If anyone knows of a simpler way to do the same thing, I’d be very interested to know about it.

restoring an OpenStack ssh public key

When an ssh private key is obtained from OpenStack via

openstack keypair create foobar > foobar.pem

the matching public key is stored in the OpenStack tenant. If it is later deleted with

openstack keypair delete foobar

it can be restored with

ssh-keygen -y  -f foobar.pem > foobar.pub
openstack keypair create --public-key foobar.pub foobar
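
To double check that the restored public key matches the private key, the fingerprints can be compared:

# OpenStack stores an MD5 fingerprint; depending on the ssh-keygen
# version, adding -E md5 may be needed to get the same format
ssh-keygen -l -f foobar.pub
openstack keypair show foobar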

oneliner to deploy teuthology on OpenStack

Note: this is obsoleted by Ceph integration tests made simple with OpenStack

Teuthology can be installed on a dedicated OpenStack instance at OVH, using the OpenStack backend, with:

nova boot \
   --image 'Ubuntu 14.04' \
   --flavor 'vps-ssd-1' \
   --key-name loic \
   --user-data <(curl --silent \
     https://raw.githubusercontent.com/dachary/teuthology/wip-6502-openstack/openstack-user-data.txt | \
     sed -e "s|OPENRC|$(env | grep OS_ | tr '\n' ' ')|") teuthology

Assuming the IP assigned to the instance is 167.114.235.222, the following will display the progress of the integration tests that are run immediately after the instance is created:

ssh ubuntu@167.114.235.222 tail -n 2000000 -f /tmp/init.out

If all goes well, it will complete with:

...
========================= 8 passed in 1845.59 seconds =============
___________________________________ summary _________________________
  openstack-integration: commands succeeded
  congratulations :)

And the pulpito dashboard will display the results of the integration tests at 167.114.235.222:8081.

Running your own Ceph integration tests with OpenStack

Note: this is obsoleted by Ceph integration tests made simple with OpenStack

The Ceph lab has hundreds of machines continuously running integration and upgrade tests. For instance, when a pull request modifies the Ceph core, it goes through a run of the rados suite before being merged into master. The Ceph lab has between 100 and 3000 jobs in its queue at all times, and it is convenient to be able to run integration tests on an independent infrastructure to:

  • run a failed job and verify a patch fixes it
  • run a full suite prior to submitting a complex modification
  • verify the upgrade path from a given Ceph version to another
  • etc.

If an OpenStack account (a tenant, in OpenStack parlance) is not available, it is possible to rent one (it takes a few minutes). For instance, OVH provides a Horizon dashboard showing how many instances are being used to run integration tests.

The OpenStack usage is billed monthly and the accumulated costs are displayed on the customer dashboard.



configuring ansible for teuthology

As of July 8th, 2015, teuthology (the Ceph integration test software) switched from using Chef to using Ansible. To keep it working, two files must be created. The /etc/ansible/hosts/group_vars/all.yml file with:

modify_fstab: false

The modify_fstab variable is necessary for OpenStack provisioned instances but it won't hurt if it's always there (the only drawback being that mount options are not persisted in /etc/fstab, although they are set as they should be). The /etc/ansible/hosts/mylab file must then be populated with:

[testnodes]
ovh224000.teuthology
ovh224001.teuthology
...

where ovh224000.teuthology etc. are the FQDNs of all machines that will be used as teuthology targets. The Ansible playbooks will expect to find all targets under the [testnodes] section. The output of a teuthology job should show that the Ansible playbook is being used, with something like:

...
teuthology.run_tasks:Running task ansible.cephlab...
...
INFO:teuthology.task.ansible.out:PLAY [all] *****
...
TASK: [ansible-managed | Create the sudo group.] ******************************
...
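
Before scheduling any job, the inventory can be sanity checked with a plain Ansible ping (an optional verification, not something teuthology requires; add -u ubuntu or whatever user the test nodes expect if needed):

ansible testnodes -m ping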


Public OpenStack providers usable within the hour

The OpenStack marketplace provides a list of OpenStack public clouds, a few of which enable the user to launch an instance at most one hour after registration.

Enter Cloud Suite has a 2GB RAM, 2 CPU, 40GB Disk instance for 0.06 euros / hour (~40 euros per month) and there is no plan to provide a flavor with only 1 CPU instead of 2. The nova, cinder and neutron APIs are available.

HP Helion Public Cloud has a 2GB RAM, 2 CPU, 10GB Disk instance for 0.05 euros / hour (0.06 USD / hour) (~40 euros per month).

OVH has a 2GB RAM, 1 CPU, 10GB Disk instance for 0.008 euros / hour (~3 euros per month). The nova API is available, but neither cinder nor neutron.

Rackspace has a 2GB RAM, 1 CPU, 10GB Disk instance for ~40 euros per month (plus a ~50 euros / month service fee, regardless of the number of instances). The nova and cinder APIs are available, but not neutron.

DataCentred has a 2GB RAM, 1 CPU, 40GB Disk instance for ~40 euros per month. The nova, cinder and neutron APIs are available (but the router quota is set to zero by default). There are also 2GB RAM, 1 CPU, 40GB Disk AArch64 instances for ~80 euros per month.

Cloudwatt has no 2GB RAM flavor, but a 3.75GB RAM, 1 CPU, 50GB Disk instance for ~35 euros per month, which makes it less expensive than all the others except OVH. The nova, cinder and neutron APIs are available.