CEPH & Eucalyptus for Block Storage

Today I did my first install of CEPH, which I used as the backend for Elastic Block Storage (EBS) in Eucalyptus. The advantage of CEPH is that it is a distributed system, which gives you replication (persistence for your data), redundancy (we use a pool of resources, not a single target) and scalability: the easiest way to add capacity is to add nodes used for storage, and those nodes can be simple machines.

I won't go too deep into the CEPH installation. The official documentation is good enough to get anyone up to speed quickly, and I personally had no trouble using CentOS 7 (el7). I won't go too deep into the Eucalyptus installation either; I will simply share with you the CEPH config files and my NC configuration files, which contain some values not indicated in the Eucalyptus docs.

I will, however, spend some time configuring a non-admin user in CEPH which I will use for my Eucalyptus cloud. Back on your CEPH admin node:

Create the pools

In CEPH, there is a default pool called 'rbd' (pool 0). I don't like to use the default values and settings when I deploy components I can tune / adapt to my use-case, so here I am going to create 2 pools: one to store the EBS volumes and one to store the EBS snapshots.

ceph osd pool create eucavols 64
ceph osd pool create eucasnaps 64

And that's about all we have to do to create the pools.


The number you set after the pool name (64 in my example) is the number of placement groups (PGs); it depends on how many OSDs you have and on the replication factor you want. If you have a dev/test cluster, small numbers will do. For larger deployments, refer to the CEPH placement group planning docs. A power of 2 is always best (save yourself some CPU cycles ;) )
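As a quick sanity check, the rule of thumb from the CEPH docs is roughly (number of OSDs x 100) / replication factor, rounded up to the next power of 2. A minimal sketch, assuming a hypothetical cluster of 6 OSDs with a replication factor of 3 (adjust both to your own cluster):

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to a power of 2.
# The 6 OSDs / 3 replicas are an assumption for the example -- not my cluster.
osds=6
replicas=3
target=$(( osds * 100 / replicas ))   # 200 for this example
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"   # 256
```

You would then pass that number instead of 64, e.g. `ceph osd pool create eucavols 256`.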

Create a CEPH user for our pools

When you installed your CEPH cluster, you performed all the basic activities using the CEPH administrator keys and credentials. Just as you tell people not to develop as root, you don't give software (here, Eucalyptus) too much power over your cluster. So, without further ado, we are going to create a CEPH user, called "eucalyptus", which will have only read access on the monitors, and full control over the rbd, eucavols and eucasnaps pools.

ceph auth get-or-create client.eucalyptus mon 'allow r' osd 'allow rwx pool=rbd, allow rwx pool=eucavols, allow rwx pool=eucasnaps, allow x' \
     -o ceph.client.eucalyptus.keyring

Running that command on the monitor as the CEPH admin creates the eucalyptus user and generates the ceph.client.eucalyptus.keyring file. Copy that keyring file to all your NCs and to the SC (by default to /etc/ceph/; otherwise, to the path configured as shown below).
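For example, with hypothetical hostnames (nc-0 and nc-1 for the NCs, sc-0 for the SC -- replace with your own machines), the copy is a one-liner:

```shell
# Push the keyring to every NC and to the SC.
# nc-0, nc-1 and sc-0 are placeholder hostnames -- substitute yours.
for host in nc-0 nc-1 sc-0; do
    scp ceph.client.eucalyptus.keyring root@"$host":/etc/ceph/
done
```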

The ceph.conf on the SC and NC

When Eucalyptus first implemented CEPH storage, it was against a fairly "old" version of CEPH, and it expects some non-default parameters that the CEPH installation scripts do not generate. Here is what the ceph.conf file has to look like on the NCs and the SC.

fsid = ef66f3c8-2cbe-4195-8fbc-bc2b14ba6d69
public_network =
cluster_network =
mon_initial_members = nc-0
mon_host =
# mon addr is not by default in the configuration. Do not forget to add it as follows:
# if you have multiple monitors, simply list them with ',' as separator
mon addr =
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
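For instance, with the monitors on hypothetical addresses (the IPs and port below are placeholders, not values from my cluster), the added mon addr line would look like:

```
# single monitor:
mon addr = 192.168.1.10:6789
# multiple monitors, ',' separated:
mon addr = 192.168.1.10:6789,192.168.1.11:6789,192.168.1.12:6789
```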

Eucalyptus NC configuration

In /etc/eucalyptus/eucalyptus.conf, edit the following values. Make sure the eucalyptus Linux user/group has read access to the keyring file and the ceph.conf file.

CEPH_USER_NAME=eucalyptus # BE CAREFUL - this is NOT the eucalyptus Linux account but the CEPH client you created previously
CEPH_KEYRING_PATH=/var/lib/eucalyptus/ceph-config/ceph.client.eucalyptus.keyring # my keyring file path
CEPH_CONFIG_PATH=/var/lib/eucalyptus/ceph-config/ceph.conf # ceph config path
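A quick way to check the read-access requirement, run as root on the NC/SC (a sketch; the paths are the ones used above):

```shell
# Verify the 'eucalyptus' Linux user can actually read both files;
# if not, fix the group and mode as suggested.
sudo -u eucalyptus test -r /var/lib/eucalyptus/ceph-config/ceph.client.eucalyptus.keyring \
    || echo "keyring not readable: try chgrp eucalyptus + chmod 640 on it"
sudo -u eucalyptus test -r /var/lib/eucalyptus/ceph-config/ceph.conf \
    || echo "ceph.conf not readable: try chgrp eucalyptus + chmod 640 on it"
```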

Here we go. Now, to confirm this works as expected, run a new instance, create a new volume, and run:

euca-attach-volume -i <instance_id> -d <device path, e.g. /dev/sdz> <volume_id>
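To double-check on the CEPH side, you can list the RBD images in the volumes pool, authenticating as the CEPH client we created (a sketch; adjust the keyring path if you placed yours elsewhere):

```shell
# List the RBD images backing the EBS volumes in the eucavols pool,
# authenticating as client.eucalyptus with its keyring.
rbd ls -p eucavols --id eucalyptus \
    --keyring /var/lib/eucalyptus/ceph-config/ceph.client.eucalyptus.keyring
```

Your new volume should appear in that list once created.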

I know it might look too easy to be true, but that's it! You have successfully configured Eucalyptus to use CEPH as the backend storage for your EBS volumes. Enjoy the IOPS and the space savings of CEPH ;)

