Following along with the quick start guide, the next thing to do is expand the cluster by adding more OSDs and monitor daemons (I'm about half-way down the page, under "Expanding your cluster"). So away we go.
Expanding Cluster
Adding OSDs
Adding a third OSD went just fine using:
ceph-deploy osd prepare Node3:/ceph
ceph-deploy osd activate --fs-type btrfs Node3:/ceph
#For those of you just joining us, I'm using btrfs because I can. The recommendation is typically to use xfs or ext4, since btrfs is experimental.
After running those commands, "ceph -s" shows the cluster now has "3 up, 3 in" and is "active+clean". Available storage space has also increased significantly.
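Another handy check at this point (not in the guide, but useful) is the OSD tree, which lists each OSD, the host it lives on, and whether it's up:
ceph osd tree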
Adding a Metadata Server
Next step is to add a metadata server, which is used by CephFS. CephFS is one option for presenting the Ceph cluster as a storage device to clients. There's not much to be said here: I ran the command and it completed.
ceph-deploy mds create Node3
# I chose Node3 arbitrarily
Adding More Monitors
So now we set up more monitors so that if one monitor goes down, the entire cluster doesn't die. In the previous bit, I ran into an issue where the monitor service started creating very, very verbose logs, to the extent that it filled up my OS partition (several MB of logs a second). I was able to fix this with a change to the ceph.conf file (roughly reproduced below), so I'm hoping that change gets carried over to the new monitors, but I guess we'll see.
ceph-deploy mon create Ceph-Admin Node2
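I won't swear these are the exact lines I used, but the change was along these lines -- debug levels for the chatty subsystems turned down in ceph.conf (treat this as a sketch rather than gospel):
[global]
debug ms = 0
debug mon = 0
debug paxos = 0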
This didn't go as well. It installs the monitor on each node, but the monitor process does not start and does not join the cluster. Some errors during install:
- No data was received after 7 seconds, disconnecting
- admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
- failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i Node2 --pid-file /var/run/ceph/mon.Node2.pid -c /etc/ceph/ceph.conf --cluster ceph '
- Node2 is not defined in 'mon initial members'
- monitor Node2 does not exist in monmap
- neither public_addr nor public_network keys are defined for monitors
- monitors may not be able to form quorum
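Side note: those last few warnings point at ceph.conf. From what I can gather, the monitors want the initial member list and a public network defined under [global], something along these lines (hostnames and subnet here are just my lab's, so adjust accordingly):
[global]
mon initial members = Node1, Node2
public network = 192.168.1.0/24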
First problem: my admin/deploy box had a bunch of hung create-keys processes, so I killed all of those.
I rebooted the new monitor node and the mon service started, but I can't seem to interact with the cluster at all now. That's probably not a good sign. All ceph commands time out, even when running on Node1.
....
After much troubleshooting that went nowhere, I'm rebuilding the cluster: uninstalling everything and purging all data. The reinstall is going pretty quickly now that I know how everything works (ish). One thing I did find is that I'm a bit clearer on the difference between
ceph-deploy new
ceph-deploy mon create-initial
Now, "new" actually creates the new cluster. You have to specify monitor nodes, though, which is why I thought "new" referred to new monitors. Anyway, I'm following all the previous steps again, trying not to take any shortcuts, so hopefully I'll end up right back at the point before I screwed everything up.
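For my own notes, the rough order of operations as I now understand it (hostnames are mine, and the install step may need extra flags depending on your repos):
- ceph-deploy new Node1                  # creates the cluster: writes ceph.conf, the fsid, and the initial monitor list
- ceph-deploy install Node1 Node2 Node3  # installs the ceph packages on each node
- ceph-deploy mon create-initial         # creates and starts the initial monitor(s) defined in ceph.conf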
...
New problem: "ceph-deploy osd activate" fails, saying it couldn't find a matching fsid. A bit of Googling suggests that there is data left over from the first install (despite doing the purge+purgedata while I was remaking the cluster), so I'm reformatting the drive to see if that works.
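"Reformatting" here just means re-making the btrfs filesystem on the OSD partition before re-running prepare/activate. The device name below is an assumption about my own layout, so adjust to match yours:
- sudo umount /ceph
- sudo mkfs.btrfs -f /dev/sdb1   # assumed OSD partition; wipes the old ceph data (and its fsid)
- sudo mount /dev/sdb1 /ceph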
Yep, that did the trick.
...
Back to the pre-expanded pool at "active+clean", and another discovery made. The quick start guide tells you to add "osd pool default size = 2" to the ceph.conf file under "[default]". This is a lie; it goes under "[global]". That is why I had to go back and set the size on each pool last time in order to get the "active+clean" state.
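In other words, the working version looks like this (rest of the [global] section omitted):
[global]
osd pool default size = 2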
...
And the add-monitors step gave the same errors:
- Starting Ceph mon.Node2 on Node2
- failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i Node2 --pidfile /var/run/ceph/mon.Node2.pid -c /etc/ceph/ceph.conf --cluster ceph '
- Starting ceph-create-keys on Node2
- No data was received after 7 seconds, disconnecting
....
Broke it again, this time trying to use "ceph mon add <host> <ip:port>" to manually add the monitor so that it would stop saying it wasn't in the monmap. This apparently is not the way to do that.
Guess I have to reinstall everything again... joy
....
Broke it a few more times, but everything is working now with 3 monitors. For whatever reason, using ceph-deploy to add the second/third monitor was not working at all. So I used this guide to manually add the monitors, which worked, except steps 6 and 7 are backwards: the monitor needs to be started before you run the "ceph mon add" command. "ceph mon add <etc>" will hang if the monitor you tell it to add does not respond, and if you kill (ctrl+c) the "ceph mon add" command, that's when the whole cluster becomes unresponsive. You can technically run "ceph mon add" and then start the monitor on the node, but since "ceph mon add" takes over your shell, getting to the node to start it can be problematic.
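For anyone else going the manual route, the sequence that ended up working for me was roughly the following. The paths, monitor ID, and IP here are illustrative, so double-check against the docs before trusting any of it:
- # On the new monitor node: grab the mon keyring and the current monmap from the cluster
- ceph auth get mon. -o /tmp/mon.keyring
- ceph mon getmap -o /tmp/monmap
- # Build the new monitor's data directory
- sudo ceph-mon -i Node2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
- # Start the monitor FIRST...
- sudo ceph-mon -i Node2 --public-addr 192.168.1.12:6789
- # ...THEN tell the cluster about it (this ordering is the part that bit me)
- ceph mon add Node2 192.168.1.12:6789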
So now I've got a cluster with 3 OSDs and 3 monitors. I've got a warning about the clocks being different between the monitors, but other than that it's working and happy. I manually set all the clocks, but the clock skew warning is still happening. Restarting one node made one warning go away. I'm trying to restart another node, but creating new monitors the way I did means they didn't get put in /etc/init.d, so I can't restart them via the "service" command. Trying to find out how to add them to this.
Giving up on that for now, may come back to it later.
Finally Using Ceph
Ok, while it's not exactly in prime condition, I want to get down to the functionality, so the clock skew (it's a couple of seconds) and the whole daemons-not-being-in-init.d problem I'm leaving for later.
Going to use the laptop I set up as a proxy as the client, which means I need to update its kernel.
...
Laptop kernel updated; now using this guide to get the client and block device set up.
Set up ssh keys, the hosts file, the ceph user + sudo access, and the ceph repo.
More problems installing ceph
"ceph-deploy install ceph-client" -- for some reason, installing this on the laptop has been much more difficult than on my other machines. Maybe because the laptop wasn't installed with a minimal install? Here are a few things I've run into:
Repo errors - I set up the ceph repo according to the pre-flight check guide, but kept getting 404 errors during the install. Looking at the ceph.repo file, ceph-deploy apparently adds repos on top of the ones set up via the guide; removing those and running ceph-deploy with the --no-adjust-repos flag fixed it. I don't know why ceph-deploy was adding bad repo URLs.
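For reference, the install command that finally worked was roughly:
ceph-deploy install --no-adjust-repos ceph-client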
Python error after install - After ceph installs, it tries to run "ceph --version" to verify the install, but this failed with a python traceback saying it couldn't find the module argparse. I ended up having to install argparse and setuptools manually. It's strange; I didn't have to do this on any of the osd/mon/admin machines, and they're running the same OS, the same version of python, and the same steps to install ceph. Not sure why the client was such a jerk about it. The only other difference with the client is that it's 32-bit.
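If memory serves, the manual fix was just pulling the packages from the distro repos (they may live in EPEL depending on your setup), something like:
sudo yum install python-argparse python-setuptools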
"ceph-deploy admin ceph-client" ran fine
Back to Using Ceph
Well, with the client-setup errors fixed, back to trying to set up a block device.
"rbd create TestBD --size 10000" - should create a ~10GB block device; runs fine.
"sudo rbd map TestBD --pool rbd --name client.admin" - should map it, does not run fine; get the following errors
- ERROR: modinfo: could not find module rbd
- FATAL: Module rbd not found.
- rbd: modprobe rbd failed! (256)
What does this mean? Not a clue.
...
Looking through various mail archives and other blog posts, it seems clear I'm missing the driver for rbd (the RADOS block device). Some posts suggest installing ceph-common to get this driver, but "ceph-common" is not a package in the EL6 repo -- apparently I should have done this on Ubuntu, which seems to be what most of the documentation is written for.
So, looking at the ceph packages I have available to me (assuming the driver is in one of them, which it may not be), I can install: "ceph-deploy", "ceph-devel", "ceph-fuse", "ceph-libcephfs", "ceph-radosgw". The descriptions of these from "yum search ceph" aren't much help. I'm going to try devel and libcephfs first, since those sound promising.
...
Nope, no help. Yum search for rbd also returns nothing useful.
...
Evidently this is a kernel driver that I didn't compile into my kernel... So that's fun...
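A quicker way to confirm this (assuming the kernel left a config file in /boot, which the stock CentOS kernels do) is to check for the rbd option:
grep CONFIG_BLK_DEV_RBD /boot/config-$(uname -r)
# CONFIG_BLK_DEV_RBD=m means it's built as a module; "is not set" or no output means no rbd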
Recompiling my Client Kernel again
So, I'm not going to bother updating the kernel on the osd/mon machines, just the client - I don't think the others need it. And in fact, there are a lot of warnings about not using rbd on the osd nodes; whether this means you shouldn't actively use rbd there, or that it's dangerous to have installed at all, isn't clear.
So I go back to the extracted kernel and run "make menuconfig". Under "Device Drivers > Block Devices" I find "Rados block device (RBD)". I'm not entirely sure if I should include this or modularize it, mostly because I'm not sure what the difference between the two is. To Google!.... Seems to be the difference between building it into the base kernel (loaded at boot, with no ability to remove it) vs loading it after boot via modprobe. I think I'll modularize it, since that seems to be what ceph is expecting based on the errors.
So now it looks like "<M> Rados block device (RBD)" time to compile the kernel again weeeeeee...
....
Kernel recompiled, rebooted, tried the "rbd map" command again aaaaaaaaaaaaaaaaaaaaaaand crashed. Sweet. I won't reproduce the entire stack trace here, but the word [libceph] is mentioned over and over.
One possibility, found in this email archive, is that the 3.6.11 kernel is too old. Because you know, THEIR FREAKING OS RECOMMENDATIONS PAGE DOESN'T SAY USE THE LATEST IN 3.6.X OR ANYTHING. Not that I'm bitter.
....
So I downloaded and compiled the latest kernel (3.15.1 at the time of writing) but had some issues, notably that my network devices were not detected. The compile had issues finding a bunch of modules, so I'm assuming that was the cause. Debating between trying to fix the 3.15 kernel or going to a slightly older one and seeing if that works.
Tried 3.12.22, same problem
....
So this is probably my inexperience with upgrading/compiling my own kernel showing. Apparently I need to copy the default CentOS config from the /boot directory into the unpacked kernel directory, rename it to .config, and then use menuconfig to add in the things I want. That way any configuration from the current kernel gets carried over. Somehow this happened automatically when I upgraded from 2.6 to 3.6, but it isn't happening now.
- make clean #Clean up the failed make
- cp /boot/config-2.6.32-431.17.1.el6.i686 /tmp/linux-3.12.22
- # May have forgotten to mention: the client is 32-bit because the laptop is super old
- cd /tmp/linux-3.12.22
- mv config-2.6.32-431.17.1.el6.i686 .config
- make menuconfig
- # Enable the rbd driver (Device Drivers > Block devices > Rados block device (RBD))
- make
- make modules_install install
Doing it this way there are only a few "could not find module" errors (aes_generic and mperf to be specific) -- I'm not sure what they are, but hopefully they're not too important.
Booted into 3.12.22, and my network is working now; this is good. Let's see if I can finally map the rbd device.
Sweet baby Jesus I think it worked.
- sudo rbd map TestBD
- sudo mkfs.ext4 /dev/rbd1
- # It didn't tell me /dev/rbd1 is what it got mapped as; I just had to go looking for it
- sudo mkdir /CephBlock1
- sudo mount /dev/rbd1 /CephBlock1
- cd /CephBlock1
- sudo touch IMadeaBlockDevice.txt
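A couple of commands that were handy for finding/confirming the mapping:
- rbd showmapped      # lists mapped images and which /dev/rbdX they ended up as
- df -h /CephBlock1   # confirms the new filesystem is actually mounted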
Back to Using Ceph .... Again.
Yep, appears to be working. Time to test some throughput; just going to do a dd with various block sizes to test the write speed of the drive. Command used:
sudo dd if=/dev/zero of=/CephBlock1/ddspeedtest.txt bs=X count=Y oflag=direct
I vary X and Y to keep the amount of data transferred mostly consistent; oflag=direct should keep it from buffering the writes, giving a better idea of actual drive performance. Also, the laptop (despite being old) and all the Nodes have gigabit ethernet cards connected to a gigabit switch -- so the network shouldn't be a bottleneck.
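A quick loop along these lines does the same thing as varying X and Y by hand, with block size / count pairs chosen to keep each run at roughly 40MB:
for run in "4K 10000" "32K 1250" "64K 625" "128K 312" "256K 156" "512K 78" "1024K 39" "2048K 20" "4096K 10" "8192K 5" "16384K 3"; do
    set -- $run   # $1 = block size, $2 = count
    sudo dd if=/dev/zero of=/CephBlock1/ddspeedtest.txt bs=$1 count=$2 oflag=direct
done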
Ceph Block Device Write:
Speed | Block Size (bs) | Count | Total Data |
76 KB/s | 4K | 10000 | 41MB |
614 KB/s | 32K | 1250 | 41MB |
1.1 MB/s | 64K | 625 | 41MB |
2.1 MB/s | 128K | 156 | 41MB |
4.1 MB/s | 256K | 156 | 41MB |
6.3 MB/s | 512K | 78 | 41MB |
7.3 MB/s | 1024K | 39 | 41MB |
7.9 MB/s | 2048K | 20 | 42MB |
21.1 MB/s | 4096K | 10 | 42MB |
31.0 MB/s | 8192K | 5 | 42MB |
41.9 MB/s | 16384K | 3 | 50MB |
So there are some numbers, but they don't tell us much without a comparison, so let's run this against one of the drives directly rather than through ceph.
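The direct test is the same dd command run against the drive's own filesystem instead of the rbd mount; roughly this, assuming we write to the OSD data mount from earlier:
sudo dd if=/dev/zero of=/ceph/ddspeedtest.txt bs=4K count=10000 oflag=direct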
Not well, it turns out. Like, really not well.
Speed for direct drive write:
Block Size (KB) | Count | Data (MB) | Speed (MB/s) |
4 | 10000 | 41 | 19.4 |
32 | 1250 | 41 | 53.9 |
64 | 625 | 41 | 57.4 |
128 | 312 | 41 | 58.3 |
256 | 156 | 41 | 51.7 |
512 | 78 | 41 | 54.9 |
1024 | 39 | 41 | 58.3 |
2048 | 20 | 42 | 50.7 |
4096 | 10 | 42 | 53.5 |
8192 | 5 | 42 | 57.7 |
16384 | 3 | 50 | 53.8 |
Some quick math: that averages to about 20% of the direct speed, with a range of 0.3% to 77%. Running the test a few more times indicates that the non-ceph test is a little more erratic. Except for the 4KB test, which is always lower (around 20 MB/s), the others vary back and forth between ~48 and ~61 MB/s with no apparent pattern. So if we average that out excluding the 4KB runs, we're still only looking at maybe 22% on average -- and that assumes this evenly mixed block-size workload. So it's unfortunate that we're looking at such a massive performance hit using the ceph block device -- even if we assume large-block-size workloads (which may be a pretty big assumption), a ~30% performance impact is significant.
To see if performance continued to scale with block size, I ran a test with bs=1G count=5. The result was 25.7 MB/s, so apparently performance peaks somewhere and then drops back off. For comparison, the same 5GB all-zeros file wrote at 57.6 MB/s directly, and transferred (via scp) between two nodes at an average rate of 44.1 MB/s.
Final Thoughts for this Installment
So my initial impressions of using Ceph are not good. It's about six-and-a-half pains in the back to get set up, and once it's set up, performance is suboptimal. I'm going to do a few more posts where I play around with the other functionality of ceph and test out things like CephFS and the Object Gateway (alternatives to using the block device), and management (how to get manually added daemons into the init.d script). I'm also looking to test out failover and high availability to see what happens to data if a node or two goes offline. I'd also like to look at doing some more in-depth performance testing in a more real-world environment, but I'll have to think up a way to do that. It'd be cool to see if I can find out what the bottleneck is; clearly it's not the network or the HDDs, so could it be processing power, memory, or an inherent bottleneck in the software? These will be saved for another time though, as once again this post has run (length- and time-wise) much longer than anticipated. I've also got a demo of Condusiv's V-Locity program I'm doing soon -- not really a competing product, beyond being about storage/IO -- so I may look at doing a "my experience with" post on that as well, so long as the reps I'm working with give me the OK. Til next time.
PS. Let me know if there are any flaws in the way I tested the storage here. I know it's not exactly scientific or robust, but as far as I can tell it's not a bad first-impressions type of test.