The ceph sources are compiled with code coverage enabled
root@ceph:/srv/ceph# ./configure --with-debug CFLAGS='-g' CXXFLAGS='-g' \
  --enable-coverage \
  --disable-silent-rules
and the tests are run
cd src ; make check-coverage
to create the HTML report which shows where tests could improve code coverage:
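(The report generated for this run is published at http://dachary.org/wp-uploads/2013/01/ceph/.)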
virtual machine
The compilation of ceph is documented to be faster with multiple cores, and C++ compilation can require large amounts of RAM. A new OpenStack flavor is therefore created with 16GB of RAM and 6 cores.
root@bm0001:~# nova flavor-create e.6-cpu.10GB-disk.16GB-ram 19 16384 10 6
+----+----------------------------+-----------+------+-----------+------+-------+-------------+
| ID | Name                       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+----------------------------+-----------+------+-----------+------+-------+-------------+
| 19 | e.6-cpu.10GB-disk.16GB-ram | 16384     | 10   | 0         |      | 6     | 1           |
+----+----------------------------+-----------+------+-----------+------+-------+-------------+
A virtual machine is created on designated hardware (availability zone bm0003):
nova boot --image 'Debian GNU/Linux Wheezy' --flavor e.6-cpu.10GB-disk.16GB-ram \
  --key_name loic --availability_zone=bm0003 --poll ceph
The default storage for the instances on the OpenStack cluster is qcow2. To improve performance, a volume backed by an LVM logical volume is created on the same hardware to maximize disk I/O and accelerate compilation.
# euca-create-volume --zone bm0003 --size 100
VOLUME  vol-00000042    100     bm0003  ...
It is then attached to the instance, formatted with ext4 and mounted on /srv.
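A sketch of that sequence (the instance id and the guest device name /dev/vdb are assumptions, not values from this run):

# euca-attach-volume -i <instance-id> -d /dev/vdb vol-00000042
root@ceph:~# mkfs.ext4 /dev/vdb
root@ceph:~# mount /dev/vdb /srv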
installing dependencies
The required dependencies for compilation are installed. The documented list is however incomplete, so the Build-Depends from the Debian control file is used instead:
apt-get install debhelper autotools-dev autoconf automake libfuse-dev libboost-dev libboost-thread-dev libedit-dev libnss3-dev libtool libexpat1-dev libfcgi-dev libatomic-ops-dev libgoogle-perftools-dev pkg-config libcurl4-gnutls-dev libkeyutils-dev uuid-dev libaio-dev python libxml2-dev javahelper default-jdk junit4 libboost-program-options-dev
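An alternative, assuming deb-src entries for the distribution are configured in /etc/apt/sources.list, is to let apt resolve the build dependencies of the ceph package directly:

apt-get build-dep ceph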
compilation
The master ceph branch is cloned:
git clone --recursive https://github.com/ceph/ceph.git
The build from source instructions are adapted to enable code coverage and to display the actual compilation lines, to check that the intended flags are being used. Without -j6 the compilation takes about 15 minutes instead of 4 minutes on an Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz.
root@ceph:/srv/ceph# ./autogen.sh
root@ceph:/srv/ceph# ./configure --with-debug CFLAGS='-g' CXXFLAGS='-g' \
  --enable-coverage \
  --disable-silent-rules
root@ceph:/srv/ceph# time make -j6
...
real    4m27.493s
user    22m21.564s
sys     1m44.111s
The tests and the coverage are run sequentially because it is unclear whether parallel execution could create race conditions.
root@ceph:/srv/ceph/src# time make check-coverage
...
lines......: 35.3% (15287 of 43268 lines)
functions..: 48.0% (5169 of 10766 functions)
branches...: 16.2% (24080 of 148801 branches)
...
real    10m15.344s
user    8m3.086s
sys     1m0.404s
The difference between user + sys and real is about 72 seconds, roughly 12% of the elapsed time that was not spent on CPU, primarily waiting for I/O.
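As a sanity check, this fraction can be recomputed from the time output above (values converted to seconds):

root@ceph:/srv/ceph/src# echo 'scale=3; (615.344 - 483.086 - 60.404) / 615.344' | bc
.116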
The HTML coverage report is generated from the files produced by check-coverage:
root@ceph:/srv/ceph/src# genhtml -o html \
  --baseline-file check-coverage_base.lcov \
  --demangle-cpp check-coverage_tested.lcov
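The report is a set of static HTML pages starting at html/index.html; it can for instance be browsed by serving the directory locally (a sketch, port chosen arbitrarily):

root@ceph:/srv/ceph/src# cd html && python -m SimpleHTTPServer 8080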
The results are confirmed on irc.oftc.net#ceph:
(08:47:05 PM) loicd: This is the coverage report generated from the results of make check-coverage in the master branch of https://github.com/ceph/ceph.git : http://dachary.org/wp-uploads/2013/01/ceph/ . It shows 35% of the LOC are covered. Is it correct or did I miss something ?
(08:59:16 PM) gregaf1: loicd: joshd can correct me on the coverage, but I believe that looks about right
running ceph from sources
Running ceph from the compiled sources requires the installation of a few additional packages:
apt-get install python-virtualenv btrfs-tools uuid-runtime
root@ceph:/srv/ceph/src# ./vstart.sh -n -x -l
ip 127.0.0.1
WARNING: hostname resolves to loopback; remote hosts will not be able to
connect. either adjust /etc/hosts, or edit this script to use your
machine's real IP.
creating /srv/ceph/src/keyring
./monmaptool --create --clobber --add a 127.0.0.1:6789 --add b 127.0.0.1:6790 --add c 127.0.0.1:6791 --print /tmp/ceph_monmap.3005
./monmaptool: monmap file /tmp/ceph_monmap.3005
./monmaptool: generated fsid 2d8e1791-b120-4557-8aa8-66914a736014
epoch 0
fsid 2d8e1791-b120-4557-8aa8-66914a736014
last_changed 2013-01-07 00:40:16.692285
created 2013-01-07 00:40:16.692285
0: 127.0.0.1:6789/0 mon.a
1: 127.0.0.1:6790/0 mon.b
2: 127.0.0.1:6791/0 mon.c
./monmaptool: writing epoch 0 to /tmp/ceph_monmap.3005 (3 monitors)
rm -rf dev/mon.a
./ceph-mon --mkfs -c ceph.conf -i a --monmap=/tmp/ceph_monmap.3005 --keyring=/srv/ceph/src/keyring
./ceph-mon: created monfs at dev/mon.a for mon.a
rm -rf dev/mon.b
./ceph-mon --mkfs -c ceph.conf -i b --monmap=/tmp/ceph_monmap.3005 --keyring=/srv/ceph/src/keyring
./ceph-mon: created monfs at dev/mon.b for mon.b
rm -rf dev/mon.c
./ceph-mon --mkfs -c ceph.conf -i c --monmap=/tmp/ceph_monmap.3005 --keyring=/srv/ceph/src/keyring
./ceph-mon: created monfs at dev/mon.c for mon.c
./ceph-mon -i a -c ceph.conf
starting mon.a rank 0 at 127.0.0.1:6789/0 mon_data dev/mon.a fsid 2d8e1791-b120-4557-8aa8-66914a736014
./ceph-mon -i b -c ceph.conf
starting mon.b rank 1 at 127.0.0.1:6790/0 mon_data dev/mon.b fsid 2d8e1791-b120-4557-8aa8-66914a736014
./ceph-mon -i c -c ceph.conf
starting mon.c rank 2 at 127.0.0.1:6791/0 mon_data dev/mon.c fsid 2d8e1791-b120-4557-8aa8-66914a736014
ERROR: error accessing 'dev/osd0/*'
add osd0 5090ec02-d63e-4276-8bd3-3aa52ce87db3
0
updated item id 0 name 'osd.0' weight 1 at location {host=localhost,rack=localrack,root=default} to crush map
2013-01-07 00:40:32.049873 7f09d3b5e780 -1 filestore(dev/osd0) limited size xattrs -- filestore_xattr_use_omap enabled
2013-01-07 00:40:32.115814 7f09d3b5e780 -1 filestore(dev/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-01-07 00:40:32.177546 7f09d3b5e780 -1 created object store dev/osd0 journal dev/osd0.journal for osd.0 fsid 2d8e1791-b120-4557-8aa8-66914a736014
2013-01-07 00:40:32.177643 7f09d3b5e780 -1 auth: error reading file: dev/osd0/keyring: can't open dev/osd0/keyring: (2) No such file or directory
2013-01-07 00:40:32.177749 7f09d3b5e780 -1 created new key in keyring dev/osd0/keyring
adding osd0 key to auth repository
2013-01-07 00:40:32.190810 7f6681bdf760 -1 read 56 bytes from dev/osd0/keyring
added key for osd.0
start osd0
./ceph-osd -i 0 -c ceph.conf
starting osd.0 at :/0 osd_data dev/osd0 dev/osd0.journal
creating dev/mds.a/keyring
2013-01-07 00:40:32.279167 7f20093e7760 -1 read 56 bytes from dev/mds.a/keyring
added key for mds.a
./ceph-mds -i a -c ceph.conf
starting mds.a at :/0
creating dev/mds.b/keyring
2013-01-07 00:40:32.391947 7fe382702760 -1 read 56 bytes from dev/mds.b/keyring
added key for mds.b
./ceph-mds -i b -c ceph.conf
starting mds.b at :/0
creating dev/mds.c/keyring
2013-01-07 00:40:32.506457 7f895b3d4760 -1 read 56 bytes from dev/mds.c/keyring
added key for mds.c
./ceph-mds -i c -c ceph.conf
starting mds.c at :/0
./ceph -c ceph.conf -k /srv/ceph/src/keyring mds set_max_mds 3
max_mds = 3
started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.
root@ceph:/srv/ceph/src# ./ceph health
HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 53/106 degraded (50.000%)
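The warning is expected for this local development cluster: this run started a single OSD (osd.0), and with the default replica count of two the second copy of each object cannot be placed, which matches the 53/106 (50%) degraded figure.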