The Ceph command to set up a block device (ceph-disk) needs to call partprobe after zapping a disk. The patch adding the partprobe call needs a block device to test that it works as expected. The body of the test requires root privileges:
dd if=/dev/zero of=vde.disk bs=1024k count=200
losetup --find vde.disk
local disk=$(losetup --associated vde.disk | cut -f1 -d:)
./ceph-disk zap $disk
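The essence of the fix is to force the kernel to re-read the partition table right after the zap. Here is a minimal sketch of that pattern, assuming sgdisk does the zapping; the function name and exact invocation are illustrative, not the actual ceph-disk code:

#!/bin/bash
# Illustrative sketch, not the actual ceph-disk implementation.
zap_and_reload() {
    local disk=$1
    # Destroy the GPT and MBR data structures on the device.
    sgdisk --zap-all -- "$disk"
    # Without partprobe the kernel keeps using the old partition table
    # until the next reboot, as the warnings in the sample run below show.
    partprobe "$disk"
}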
Running such a test directly is potentially dangerous for the developer machine. The test run is therefore delegated to a docker container, so that, for instance, accidentally removing /var/run has no consequence on the host. The test recompiles Ceph the first time it runs:
main_docker "$@" --compile
main_docker "$@" --user root --dev test/ceph-disk.sh test_activate_dev
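To illustrate the idea behind main_docker without reproducing its actual implementation, delegating a root-only test to a throwaway container can look like the sketch below; the image name and mount point are assumptions made for the example:

#!/bin/bash
# Sketch: run a potentially destructive test inside a disposable container.
# The image (ubuntu:14.04) and mount point (/srv/ceph) are hypothetical.
run_in_docker() {
    # --privileged lets losetup and mount work inside the container;
    # --rm discards the container afterwards, leaving the host untouched.
    docker run --rm --privileged --user root \
        --volume "$(pwd):/srv/ceph" --workdir /srv/ceph \
        ubuntu:14.04 \
        test/ceph-disk.sh test_activate_dev
}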
On a second run it reuses the compilation results, unless there is a new commit, in which case it recompiles whatever make decides. The ceph-disk-root.sh script is added to the list of scripts run by make check, but it is only considered if --enable-docker has been given to ./configure and docker is available. Otherwise it is silently ignored:
if ENABLE_DOCKER
check_SCRIPTS += \
	test/ceph-disk-root.sh
endif
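For the ENABLE_DOCKER conditional to exist, configure.ac must define it. A plausible sketch using AC_ARG_ENABLE and AM_CONDITIONAL follows; the actual wiring in Ceph's configure.ac may differ:

AC_ARG_ENABLE([docker],
  [AS_HELP_STRING([--enable-docker], [run root tests inside a docker container])],
  [enable_docker=$enableval],
  [enable_docker=no])
AM_CONDITIONAL([ENABLE_DOCKER], [test "x$enable_docker" = "xyes"])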
Here is a sample run:
$ make TESTS=test/ceph-disk-root.sh check
...
remote: Counting objects: 7, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 7 (delta 6), reused 1 (delta 1)
Unpacking objects: 100% (7/7), done.
From /home/loic/software/ceph/ceph
   1800c58..d797ad7  wip-9665-ceph-disk-partprobe -> origin/wip-9665-ceph-disk-partprobe
HEAD is now at d797ad7 ceph-disk: test prepare / activate on a device
...
test_activate_dev: 233: dd if=/dev/zero of=vde.disk bs=1024k count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0916807 s, 2.3 GB/s
test_activate_dev: 234: losetup --find vde.disk
test_activate_dev: 235: cut -f1 -d:
test_activate_dev: 235: losetup --associated vde.disk
test_activate_dev: 235: local disk=/dev/loop2
test_activate_dev: 236: ./ceph-disk zap /dev/loop2
Creating new GPT entries.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
test_activate_dev: 237: test_activate /dev/loop2 /dev/loop2p1
test_activate: 192: local to_prepare=/dev/loop2
test_activate: 193: local to_activate=/dev/loop2p1
test_activate: 195: /bin/mkdir -p test-ceph-disk/osd
test_activate: 197: ./ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare /dev/loop2
...
DEBUG:ceph-disk:Creating xfs fs on /dev/loop2p1
INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/loop2p1
meta-data=/dev/loop2p1           isize=2048   agcount=4, agsize=6335 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=25339, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1232, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/loop2p1 on test-ceph-disk/tmp/mnt.TyGZkn with options noatime,inode64
...
test_activate: 207: ./ceph osd crush add osd.0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
test_activate: 208: echo FOO
test_activate: 209: /usr/bin/timeout 360 ./rados --pool rbd put BAR test-ceph-disk/BAR
test_activate: 210: /usr/bin/timeout 360 ./rados --pool rbd get BAR test-ceph-disk/BAR.copy
test_activate: 211: /usr/bin/diff test-ceph-disk/BAR test-ceph-disk/BAR.copy
...
test_activate_dev: 239: umount /dev/loop2p1
test_activate_dev: 240: ./ceph-disk zap /dev/loop2
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
test_activate_dev: 241: status=0
test_activate_dev: 242: losetup --detach /dev/loop2
test_activate_dev: 243: rm vde.disk
test_activate_dev: 244: return 0