from zero to ceph in five seconds

The micro-osd bash script is 91 lines long and creates a Ceph cluster with a single OSD and a single MON in less than five seconds.

$ time bash micro-osd.sh single-osd
starting osd.0 at :/0 osd_data single-osd/osd single-osd/osd.journal
# id    weight  type name       up/down reweight
-1      1       root default
0       1               osd.0   up      1
export CEPH_ARGS='--conf single-osd/ceph.conf'
ceph osd tree
real    0m4.677s

It is meant to be used for integration tests, for example in the context of the openstack-installer puppet manifest to deploy OpenStack. Ceph is configured to run from a directory, on a single host, without cephx, in a non-privileged environment, and uses about 100MB of disk space.

$ du -sh single-osd/
103M    single-osd/


The MON and OSD daemons have defaults that are suitable for a packaged installation. They are adjusted to allow running in a directory, in the same way vstart.sh can be used by the developer to run Ceph directly from the source directory.
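
Concretely, the script starts by preparing a small directory hierarchy that the configuration below refers to. A minimal sketch, assuming the DIR, MON_DATA and OSD_DATA variable names used in the snippets that follow (the actual script may differ slightly):

DIR=single-osd                 # working directory given on the command line
MON_DATA=${DIR}/mon            # MON database
OSD_DATA=${DIR}/osd            # OSD database
mkdir -p ${MON_DATA} ${OSD_DATA} ${DIR}/run ${DIR}/log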

cluster configuration

[global]
fsid = $(uuidgen)
osd crush chooseleaf type = 0
run dir = ${DIR}/run
auth cluster required = none
auth service required = none
auth client required = none

The osd crush chooseleaf type = 0 line is required for the default crushmap to resolve placements when trying to find a suitable OSD even though there is only one: the default leaf type is host, and with a single host CRUSH could not find enough distinct hosts. The auth… lines disable authentication and simplify the configuration of the cluster.
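
The patch quoted in the comments below shows that the script writes this section with a shell heredoc ending in EOF; a sketch of that step, assuming the whole file is created at once:

cat > ${DIR}/ceph.conf <<EOF
[global]
fsid = $(uuidgen)
osd crush chooseleaf type = 0
run dir = ${DIR}/run
auth cluster required = none
auth service required = none
auth client required = none
EOF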

export CEPH_ARGS="--conf ${DIR}/ceph.conf"

The content of the CEPH_ARGS environment variable is appended to the arguments of all Ceph commands; it is set so that they all use the configuration file created for the tests instead of the default /etc/ceph/ceph.conf.
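
For instance, once the variable is exported, the ceph osd tree command shown at the top of the post is equivalent to passing the option explicitly:

ceph osd tree
# behaves like: ceph --conf ${DIR}/ceph.conf osd tree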

MON configuration

[mon.0]
log file = ${DIR}/log/mon.log
chdir = ""
mon cluster log file = ${DIR}/log/mon-cluster.log
mon data = ${MON_DATA}
mon addr = 127.0.0.1

The two log files ( log file and mon cluster log file ) are set to the local directory to override the default, which is /var/log/ceph. The MON will be accessed from the same machine and is bound to localhost with mon addr. The MON database directory is set with mon data, which is located in the local directory. Setting chdir to the empty string prevents the daemon from calling chdir("/"); this is necessary because the logs and data are stored with relative paths.
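
To double check which value a daemon will actually use, the ceph-conf utility can look an option up for a given daemon name; a hedged example, not part of micro-osd.sh, assuming ceph-conf supports the --name and --lookup options:

ceph-conf -c ${DIR}/ceph.conf --name mon.0 --lookup "mon data"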

ceph-mon --id 0 --mkfs --keyring /dev/null

creates the database using the configuration above: mon.0 is found because ceph-mon is of type mon and the identifier of the MON is 0, as set with --id. The --keyring option must be given although it won't actually be used because authentication is disabled.

ceph-mon --id 0

runs the monitor in the background.
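
A quick sanity check at this point (not in the script) is to ask the freshly started monitor for its status:

ceph mon stat   # prints the monmap epoch, the list of monitors and the current quorum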

OSD configuration

[osd.0]
log file = ${DIR}/log/osd.log
chdir = ""
osd data = ${OSD_DATA}
osd journal = ${OSD_DATA}.journal
osd journal size = 100

The log file ( log file ) is set to the local directory to override the default, which is /var/log/ceph. The OSD database directory is set with osd data, which is located in the local directory. The journal is the file designated with osd journal and its size is reduced to 100MB with osd journal size to limit the disk usage of the local directory. Setting chdir to the empty string prevents the daemon from calling chdir("/"); this is necessary because the logs and data are stored with relative paths.

ceph osd pool set data size 1

There is only one OSD and the replication factor of the data pool is set to 1. The data pool is created by default with a replication factor of 2.
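
The new value can be read back with the matching get command (an extra check, not in the script):

ceph osd pool get data size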

OSD_ID=$(ceph osd create)
ceph osd crush add osd.${OSD_ID} 1 root=default

The OSD is created in the MON and its id is returned ( it will always be 0 ). It is added to the default crush map with a weight of 1: the exact value does not matter because there won't be any other OSD to choose from.

ceph-osd --id ${OSD_ID} --mkjournal --mkfs

populates the osd data directory set in the configuration above and initializes the journal set with osd journal.

ceph-osd --id ${OSD_ID}

runs the osd in the background.
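
Since the daemon forks into the background, a test harness may want to wait until the OSD is reported up before writing to the cluster; a sketch of such a wait loop, not part of micro-osd.sh:

# ceph osd stat prints a one line summary including how many OSDs are up and in
while ! ceph osd stat | grep --quiet '1 up' ; do
    sleep 1
done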

testing

rados --pool data put group /etc/group
rados --pool data get group ${DIR}/group
diff /etc/group ${DIR}/group

Copies /etc/group into the group object, retrieves it into ${DIR}/group and compares the two.
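
A couple of additional checks along the same lines (not part of the script):

rados --pool data ls   # lists the objects in the pool, which should include group
ceph df                # reports the space used per pool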

5 Replies to “from zero to ceph in five seconds”

  1. thanks very much for your script, this is my first time running ceph successfully, thanks.

    here is some feedback:

    the script prints an error, then exits. the error is:
    "root@localhost:/var/www/project/github/ceph# bash micro-osd.sh /opt/xstorage
    /opt/xstorage/mon already exists"

    i got it to succeed when i commented out line 60 of the script, like this:
    """
    59
    60 #touch ${MON_DATA}/keyring
    61 ceph-mon --id 0 --mkfs --keyring /dev/null
    62 ceph-mon --id 0
    63
    """

    and i commented out the "auto install ceph" part, because i want to install it from source.
    """
    22 #if ! dpkg -l ceph ; then
    23 #wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    24 #echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    25 #sudo apt-get update
    26 #sudo apt-get --yes install ceph ceph-common
    27 #fi
    """

    i know you from here, and zjf is me.
    loicd> zjf: there is http://dachary.org/?p=2374 and the associated tiny script http://dachary.org/wp-uploads/2013/10/micro-osd.txt 😉
    more seriously if you just ceph-deploy with one mon on a given host + use the same host as an osd, you will end up with a one node cluster. Reducing the required number of replicas to 1 will allow you to put data into it ( that's what ceph osd pool set data size 1 is about )

  2. With just a single OSD, I don’t believe this ever becomes clean, does it? When I try it and use “ceph health”, I get:
    HEALTH_WARN 128 pgs degraded; 128 pgs stuck unclean

    If I add another OSD, I can actually make it a healthy ceph deployment.

    1. You are correct. It would be OK if the default pools were created with size = 1 instead of the default (2 up to Emperor, 3 afterwards). Here is a patch and micro-osd has been updated.
      This is a very decadent software maintenance workflow 🙂

      @@ -41,6 +41,7 @@
       auth cluster required = none
       auth service required = none
       auth client required = none
      +osd pool default size = 1
       EOF
       export CEPH_ARGS="--conf ${DIR}/ceph.conf"
      
      @@ -75,7 +76,6 @@
       EOF
      
       OSD_ID=$(ceph osd create)
      -ceph osd pool set data size 1
       ceph osd crush add osd.${OSD_ID} 1 root=default host=localhost
       ceph-osd --id ${OSD_ID} --mkjournal --mkfs
       ceph-osd --id ${OSD_ID}
      
      1. Decadent? I don’t understand. I think wordpress is a perfectly adequate bug tracker and version control system 🙂
