Monday, July 21, 2014

Condusiv V-Locity - setup and first thoughts

Introduction

I'm going to be pretty brief here because I don't have much to actually say about this piece of technology (note from the future: I wasn't able to get this to work in my environment; read on if you're interested in the problems I ran into, but otherwise this article probably isn't worth your time). We'll leave it at this: managing storage IO in a virtualized environment is a pain, so I've taken to investigating some technologies that aim to improve storage performance without simply buying more storage devices. This post is written in stream-of-consciousness style as I go through the setup process. I try to document anything I notice and/or am thinking during the install. I do some minimal editing afterwards, but for the most part it'll be a rambling mess.

V-locity is a program from Condusiv, a name that was obviously dreamed up by someone with no respect for spoken language, ease of typing, or autocorrect. From here on I'll probably just refer to it as "the program." The idea behind the program is that the Windows file system driver is poorly optimized for an age of virtualization and non-local storage. Breaking file reads/writes into multiple chunks isn't noticeable on local storage, but it can add serious overhead when everything has to go over iSCSI. So through a new driver and a bunch of caching, the program hopes to optimize storage to give you better density without buying more hardware (or increasing CapEx, as they say) </marketing>. I won't go into much detail about how it works here (I'm still a little fuzzy after a webinar and like 6 sales calls, and let me tell you it's not for lack of paying attention); if you're interested you can read all about it here.

First let me say, Condusiv certainly isn't trying to save you any money over buying more hardware. We were quoted a price of around sixty thousand dollars (plus about seven thousand in yearly licensing) to run on our three servers (32 cores each, which is how it's sold). That's insane; that's roughly three times as much as the servers themselves cost. This more or less makes it an option only if you're out of rack space, or for whatever reason can't move your data to faster storage devices.

Setup

I've got a test environment set up: 50 VMs and a server. The VMs are running on some Dell R610 servers connecting to their storage over a 6Gb/s direct-attach SAS link. Servers are running XenServer 6.2.0 (SP1 + all other patches). VMs are 64-bit Windows 7, all updates installed, with basic office applications for testing. Tests will use XenDesktop to measure login performance (connecting via thin clients) and a more manual approach to measure some application launches (Visual Studio 2012 is one in particular we've had take a really long time to load on VMs due to excessive file system access during first run).

Setup is broken into three parts: the VMC (controller), the master node (V-locity server), and the clients. Since this is a test setup, my VMC and master node are living on the same server. VMC setup is simple, just click next and it installs itself and a web server to interface with. One thing it doesn't tell you is that you access it via the web page. The installer just finishes and you have to figure it out. The setup instructions don't really say this directly either, you just have to kind of guess at it (I figured it out because the install directory had a bunch of .js, .html, etc. files).

After that, the setup runs a discovery on your domain to find machines to install on. I didn't set any sort of filters on this, but it is currently stuck (about 20 minutes) on 740/742, we'll see if it ever finishes.

... 30 minute mark now, still spinning on 740/742.
... well over an hour now. Neither the "close" nor the "next" button does anything.
... two hours and no sign of movement. I'm about to go home, so I'll let it run overnight and reboot the dumb thing if it hasn't sorted itself out by morning.
...
...
...Still at the same spot, think it's safe to say it's stuck, going to try restarting the service. Now it says discovery complete, 1 record processed. Sounds legit. Looking through the machine list, it seems to have detected a fair number of my machines, but none of the VMs I created specifically for testing.

After another restart of the VMC service and some time it picked up all my servers, but I've run into a bigger issue. The master node component won't install on my virtual server. The server meets all of the requirements listed in the various install guides and readme files, but it doesn't show up on the list of machines available for deployment. Trying to run the installer manually gives the error "OS not supported". 

Looking further, it is only presenting the option to install the master node component to physical machines. I can't find this listed as a requirement anywhere, and the sales rep/tech people say that it isn't a requirement, but that's the only option it's giving me. 

Worked briefly with the sales rep/tech support team that's been helping me; they gave me some new licenses to try, but for whatever reason the program still only gives me the option to install to physical servers. I don't have spare physical servers lying around, so we're a bit dead in the water.

On a hunch I looked up V-locity + XenServer (my hypervisor of choice), and have found some conflicting reports of support for the XenServer platform (PDF). At best it has partial support, and possibly only for the guest/client. So maybe that's the issue. Looking back through my emails I definitely mentioned that's what I was running on (and I'm pretty sure we covered that more in-depth during one of the 7-8 phone calls they made me sit through), but maybe I wasn't clear enough.

So, unfortunately this is where my review of V-locity must end. I'd spend more time with their tech support troubleshooting it, but I have other projects that need my attention. So, take my experiences for what they're worth (probably not much), but if you're looking to evaluate and are using XenServer, be sure you're clear with your reps about the setup.



Thursday, July 10, 2014

Excel Crash: Visual Studio (10) Tools for Office Add-in -- vs10exceladaptor

Solution

The solution thus far has just been to disable the add-in for all users. We don't know of anyone actively using this add-in, so that works for us. If you need the add-in, I would look towards compatibility issues. 0xC0000005 typically indicates that a program tried to access memory it's not allowed to. This could mean another plug-in isn't playing nice, or you might try disabling DEP (though this is a pain for Office, and more than a bit of a security risk).

To disable the add-in for all users, I found the best way was to log in as admin, find the Excel executable (excel.exe) > right-click > Run as administrator. Then go to File > Options > Add-ins > COM Add-ins > Go. Then uncheck the box(es) for the "Visual Studio Tools for Office Design-Time Adaptor for Excel".

Story

Had some users complaining about Excel crashing on our terminal server. This is a terminal (RDS) server that students use to remotely access lab applications via thin clients, so it has just about every program under the sun installed on it. I mention this only because it means we have about 1000 different Excel add-ins loading/available, which is what I expect is causing the underlying issue. Also worth noting, thin clients connect via XenDesktop (7.1); this could also be a cause of the error.



Other notes on server: Server 2008R2 (fully updated, x64), Office 2013 x86

Looking at the event logs, I see the Excel crash (Error, Application Error, Event ID: 1000)

Faulting application name: EXCEL.EXE, version: 15.0.4535.1507, time stamp: 0x52282875
Faulting module name: EXCEL.EXE, version: 15.0.4535.1507, time stamp: 0x52282875
Exception code: 0xc0000005
Fault offset: 0x0005a802
Faulting process id: 0x2380
Faulting application start time: 0x01cf9c61a803a93c
Faulting application path: C:\Program Files (x86)\Microsoft Office\Office15\EXCEL.EXE
Faulting module path: C:\Program Files (x86)\Microsoft Office\Office15\EXCEL.EXE
Report Id: ed04ac54-0854-11e4-9867-d4bed9f3434f
Which doesn't give us much. In past experience, 0xC0000005 is a generic "memory access violation" error -- a program tried to access memory it didn't have permission to. The next entry in the event log is a bit more useful (Error, Microsoft Office 15, EventID 2001)

Microsoft Excel: Rejected Safe Mode action : Excel is running into problems with the 'vs10exceladaptor' add-in. If this keeps happening, disable this add-in and check for available updates. Do you want to disable it now?.
This appears to be something that gets installed with Visual Studio; no idea what it does. I went ahead and disabled it for all users (see notes in Solution) since I'm not aware of anyone using that add-in. Worth noting that I initially tried disabling the add-in through the registry (HKLM\Software\Microsoft\Office\Excel\Addins\VS10ExcelAdaptor\ -- set LoadBehavior to 0) but that didn't seem to have any effect.


Thursday, June 19, 2014

Creating A Ceph Storage Cluster using old desktop computers : Part 2

So, in the last part I left off where I had an active+clean cluster with two OSDs (storage locations). No data has yet been created, and, indeed, no methods of making the locations available to store data have been set up.

Following along with the quick start guide the next thing to do is to expand the cluster to add more OSDs and monitoring daemons (I'm about half-way down under "expanding your cluster"). So away we go.

Expanding Cluster

Adding OSDs

Adding a third OSD went just fine using:

ceph-deploy osd prepare Node3:/ceph
ceph-deploy osd activate --fs-type btrfs Node3:/ceph
#For those of  you just joining us, I'm using btrfs because I can. Recommendation is typically to use xfs or ext4, since btrfs is experimental.

After running those commands, running "ceph -s" shows the cluster now has "3 up, 3 in" and is "active+clean". Available storage space has also increased significantly.

Adding a Metadata Server

Next step is to add a metadata server, which is used by CephFS. CephFS is one option for presenting the Ceph cluster as a storage device to clients. There's not much to be said here; I ran the command and it completed.

ceph-deploy mds create Node3
# I chose Node3 arbitrarily 


 Adding More Monitors

So now we set up more monitors so that if one monitor goes down the entire cluster doesn't die. In the previous bit, I ran into an issue where the monitor service started creating very very very very verbose logs to the extent that it filled up my OS partition (several MB a second of logs). I was able to fix this with a change to the ceph.conf file, so I'm hoping that change gets carried between monitors, but I guess we'll see.

ceph-deploy mon create Ceph-Admin Node2

This didn't go as well. It installs the monitor on each node, but the monitor process does not start, and does not join the cluster. Some errors during install:

  • No data was received after 7 seconds, disconnecting
  • admin_socket: exception getting command desciptions: [Errno 2] No such file or directory
  • failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i Node2 --pid-file  /var/run/ceph/mon.Node2.pid -c /etc/ceph/ceph.conf --cluster ceph '
  • Node2 is not defined in 'mon initial members'
  • monitor Node2 does not exist in monmap
  • neither public_addr nor public_network keys are defined for monitors
  • monitors may not be able to form quorum
I found a very helpful blog post detailing the resolution to many of these errors.

First problem, my admin/deploy box had a bunch of hung create-keys processes. So I killed all those.

Rebooted the new monitor node, and the mon service started, but I can't seem to interact with the cluster at all now. That's probably not a good sign. All ceph commands time out, even running on Node1.

....

After much troubleshooting that went nowhere, I'm rebuilding the cluster. Uninstalling everything and purging all data. Reinstall is going pretty quickly now that I know how everything works (ish). One thing I did find: I'm a bit clearer on the difference between

ceph-deploy new
ceph-deploy mon create-initial 

Now. "new" actually creates the 'new' cluster. You have to specify monitor nodes though so I thought 'new' referred to new monitors. Anyway, I following all the previous steps again, trying not to take any shortcuts or anything so hopefully I'll end up right back at the point before I screwed everything up.
 
...

New problem: when trying to do "ceph-deploy osd activate" it fails saying that it couldn't find a matching fsid. A bit of Googling suggests that there is data left over from the first install (despite doing the purge+purgedata while I was remaking the cluster), so I'm reformatting the drive to see if that works.

Yep, reformatting the drive (deleting and re-adding the partition) worked. So purgedata does not, apparently, actually purge data, at least not on my systems. Note: recreating the partition means the filesystem gets a new UUID, so /etc/fstab needs to be edited again to make sure the mount works correctly.
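
For reference, the fstab fix is just a matter of grabbing the new UUID and swapping it in (the device name and mount point below are what I'm using on my nodes; yours will obviously differ):

blkid /dev/sda4
#then update the matching line in /etc/fstab with the new UUID, something like:
UUID=<uuid-from-blkid>  /ceph  btrfs  defaults  0 0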

...

Back to the pre-expansion pool at "active+clean", and another discovery made. The quick start guide tells you to add "osd pool default size = 2" to the ceph.conf file under "[default]". This is a lie, it goes under "[global]". That is why I had to go back and set the size on each pool last time in order to get the "active+clean" state.
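
So, for anyone following along, the relevant bit of ceph.conf should end up looking something like this:

[global]
#...whatever else is already under global...
osd pool default size = 2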

...

And the add-monitors step gave the same errors:

  • Starting Ceph mon.Node2 on Node2
  • failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i Node2 --pidfile /var/run/ceph/mon.Node2.pid -c /etc/ceph/ceph.conf --cluster ceph '
  • Starting ceph-create-keys on Node2
  • No data was received after 7 seconds, disconnecting
Monitors do not start on Nodes 2 or 3. I'm not going to try rebooting them this time, in the hope that it doesn't totally destroy my install again.

....

Broke it again, this time trying to use "ceph mon add <host> <ip:port>" to manually add the monitor so that it would stop saying it wasn't in the monmap. This apparently is not the way to do that.

Guess I have to reinstall everything again... joy

....

Broke it a few more times, but everything is working now with 3 monitors. For whatever reason, using ceph-deploy to add the second/third monitor was not working at all. So I used this guide to manually add the monitors, which worked, except steps 6 and 7 are backwards: the monitor needs to be started before you run the "ceph mon add" command. "ceph mon add <etc>" will hang if the monitor you tell it to add does not respond, and if you kill (ctrl+c) the "ceph mon add" command, that's when the whole cluster becomes unresponsive. You can technically run "ceph mon add" and then start the monitor on the node, but since "ceph mon add" takes over your shell, getting to the node to start it can be problematic.

So now I've got a cluster with 3 OSDs and 3 monitors. I've got a warning about the clocks being different between the monitors, but other than that it's working and happy. Manually set all the clocks, but the clock skew warning is still happening. Restarted one node and one warning went away. Trying to restart another node, but creating new monitors the way I did means they didn't get put in /etc/init.d, so I can't restart them via the "service" command. Trying to figure out how to add them.

Giving up on that for now, may come back to it later.

Finally Using Ceph


Ok, while it's not exactly in prime condition, I want to get down to the functionality, so the clock skew (it's a couple of seconds) and the whole daemons-not-in-init.d problem I'm leaving for later.

Going to use the laptop I set up as a proxy as the client, which means I need to update its kernel.

...

Laptop Kernel updated, now using this guide to get the client and block device setup.

Set up SSH keys, hosts file, ceph user + sudo access, and the ceph repo.

More problems installing ceph

ceph-deploy install ceph-client -- for some reason installing this on the laptop has been much more difficult than on my other machines. Maybe because the laptop wasn't installed with the minimal install? Here are a few things I've run into:

Repo errors - I set up the ceph repo according to the pre-flight check guide, but kept getting 404 errors during the install. Looking at the ceph.repo file, ceph-deploy apparently adds additional repos on top of the ones set up via the guide; removing these and running ceph-deploy with the --no-adjust-repos flag fixed that. Don't know why ceph-deploy was adding bad repo URLs.
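
For reference, the install that finally worked was roughly:

#--no-adjust-repos tells ceph-deploy to leave the existing .repo files alone instead of writing its own
ceph-deploy install --no-adjust-repos ceph-client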

Python error after install - After ceph installs, it tries to run "ceph --version" to verify the install. But this failed with a Python traceback saying it couldn't find the module argparse. I ended up having to install argparse and setuptools manually. It's strange; I didn't have to do this on any of the osd/mon/admin machines, and they're running the same OS, same version of Python, same steps to install ceph. Not sure why the client was such a jerk about it. The only other difference with the client is that it's 32-bit.
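
If memory serves, the fix was something along these lines (package names assuming the EL6 repos carry them; adjust as needed):

yum install python-argparse python-setuptools
#then re-run the check that ceph-deploy was choking on
ceph --version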

"ceph-deploy admin ceph-client" ran fine

Back to using Ceph

Well, with the errors getting the client set up fixed, back to trying to set up a block device.

"rbd create TestBD --size 10000" Should create a ~10GB block device, runs fine.
"sudo rbd map TestBD --pool rbd --name client.admin" - should map it, does not run fine; get the following errors
  • ERROR: modinfo: could not find module rbd
  • FATAL: Module rbd not found.
  • rbd: modprobe rbd failed! (256)

What does this mean? not a clue.

...

Looking through various mail archives and other blog posts, it seems clear I'm missing a driver for rbd (RADOS block device). Some posts suggest that I install ceph-common to get this driver, but "ceph-common" is not a package on the EL6 repo -- apparently I should have done this on Ubuntu, which seems to be what most of this is written for.

So looking at the ceph packages I have available to me (assuming the driver is in one of them, which it may not be) I can install: "ceph-deploy","ceph-devel","ceph-fuse","ceph-libcephfs","ceph-radosgw". The descriptions of these from "yum search ceph" aren't much help. I'm going to try devel and libcephfs first, those sound promising.

 ...

 Nope, no help. Yum search for rbd also returns nothing useful.

...

Evidently this is a kernel driver that I didn't compile into my kernel... So that's fun...

Recompiling my Client Kernel again

So, I'm not going to bother updating the kernel on the osd/mon machines, just the client - I don't think the others need it. And in fact, there are a lot of warnings about not using rbd on OSD devices. Whether this means you shouldn't actively use rbd there, or that it is dangerous to have installed at all, isn't clear.

So, I go back to the extracted kernel and run "make menuconfig". Under "Device Drivers > Block Devices" I find "Rados block device (RBD)". I'm not entirely sure if I should include this or modularize it, mostly because I'm not sure what the difference between the two is. To Google!.... Seems to be the difference between loading it in the base kernel (loading at boot, with no ability to remove it) vs loading it after boot via modprobe. I think I'll modularize it, since that seems to be what ceph is expecting based on the errors.

So now it looks like "<M> Rados block device (RBD)" time to compile the kernel again weeeeeee...
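
For my own notes: once the rebuilt kernel is booted, a quick sanity check that the module actually made it in should be something like:

sudo modprobe rbd
lsmod | grep rbd
modinfo rbd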

....

Kernel recompiled, rebooted, tried the "rbd map" command again aaaaaaaaaaaaaaaaaaaaaaand crashed. Sweet. I won't reproduce the entire stack trace here, but the word [libceph] is mentioned over and over.

One possibility found at this email archive, is that 3.6.11 kernel is too old. Because you know, THEIR FREAKING OS RECOMMENDATIONS PAGE DOESN'T SAY USE THE LATEST IN  3.6.X OR ANYTHING. Not that I'm bitter.

....

So I downloaded and compiled the latest kernel (3.15.1 at time of writing) but had some issues. Notably, my network devices aren't working. The compile had issues finding a bunch of modules, so I'm assuming that was the issue. Debating between trying to fix the 3.15 kernel, or going to a slightly older one and seeing if that works.

Tried 3.12.22, same problem

....

So this is probably my inexperience with upgrading/compiling my own kernel showing. Apparently I should copy the default CentOS config from the /boot directory to the unpacked kernel directory, rename it to .config, then use the menu config to add in the things I want. That way any configurations in the current kernel are carried over. Somehow this happened automatically when I upgraded from 2.6 to 3.6, but isn't happening now.

  • make clean #Clean up the failed make
  • cp /boot/config-2.6.32-431.17.1.el6.i686 /tmp/linux-3.12.22
  • # May have forgotten to mention, the client is 32-bit because the laptop is super old
  • cd /tmp/linux-3.12.22
  • mv config-2.6.32-431.17.1.el6.i686 .config
  • make menuconfig
  • #Enable the rbd driver (Device Drivers > Block devices)
  • make
  • make modules_install install

Doing it this way there are only a few "could not find module" errors (aes_generic and mperf to be specific) -- I'm not sure what they are, but hopefully they're not too important.

Booted to 3.12.22, and my network is working now, this is good. Let us see if I can finally map the rbd device.


Sweet baby Jesus I think it worked.

  • sudo rbd map TestBD
  • sudo mkfs.ext4 /dev/rbd1
    #It didn't tell me this is what it mapped it as, just had to look for it
  • sudo mkdir /CephBlock1
  • sudo mount /dev/rbd1 /CephBlock1
  • cd /CephBlock1
  • sudo touch IMadeaBlockDevice.txt

Back to Using Ceph .... Again.


Yep, appears to be working. Time to test some throughput; just going to do a dd with various block sizes to test the write speed of the drive. Command used:

sudo dd if=/dev/zero of=/CephBlock1/ddspeedtest.txt bs=X count=Y oflag=direct

Vary X and Y to keep the amount of data transferred mostly consistent; oflag=direct should keep it from buffering the writes, giving a better idea of actual drive performance. Also, the laptop (despite being old) and all the nodes have Gigabit Ethernet cards connected to a gigabit switch -- so that shouldn't be a bottleneck.
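
Rather than typing each run out by hand, a little loop like this covers the block sizes in the table below (counts chosen to keep each run at roughly 41MB; dd prints the throughput on stderr after each run):

#!/bin/bash
for run in "4K 10000" "32K 1250" "64K 625" "128K 312" "256K 156" "512K 78" "1024K 39" "2048K 20" "4096K 10" "8192K 5" "16384K 3"; do
  set -- $run
  echo "bs=$1 count=$2"
  sudo dd if=/dev/zero of=/CephBlock1/ddspeedtest.txt bs=$1 count=$2 oflag=direct
  sudo rm -f /CephBlock1/ddspeedtest.txt
done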

Ceph Block Device Write:
Speed Block Size (bs) Count Total Data
76 KB/s 4K 10000 41MB
614 KB/s 32K 1250 41MB
1.1 MB/s 64K 625 41MB
2.1 MB/s 128K 312 41MB
4.1 MB/s 256K 156 41MB
6.3 MB/s 512K 78 41MB
7.3 MB/s 1024K 39 41MB
7.9 MB/s 2048K 20 42MB
21.1 MB/s 4096K 10 42MB
31.0 MB/s 8192K 5 42MB
41.9 MB/s 16384K 3 50MB


So there's some numbers; they don't tell us much without a comparison, so let's run this against one of the drives directly rather than through Ceph.

Not well, it turns out. Like, really not well.

Direct Drive Write:

Block Size (KB) Count Data (MB) Speed (MB/s)
4 10000 41 19.4
32 1250 41 53.9
64 625 41 57.4
128 312 41 58.3
256 156 41 51.7
512 78 41 54.9
1024 39 41 58.3
2048 20 42 50.7
4096 10 42 53.5
8192 5 42 57.7
16384 3 50 53.8

Some quick math: that averages about 20% of the direct speed, with a range of 0.3% to 77%. Running the test a few more times indicates that the non-Ceph test is a little more erratic. Except for the 4KB test, which is always lower (around 20MB/s), the others vary back and forth between ~48 and ~61 MB/s with no apparent pattern. So if we average that out excluding the 4KB run, we're still only looking at maybe 22% on average -- assuming this evenly mixed block-size workload. So it's unfortunate that we're looking at such a massive performance hit using the Ceph block device -- even if we assume large-block-size workloads (which may be a pretty big assumption), a ~30% performance impact is significant.

To see if performance continued to scale with block size, I ran a test with bs=1G count=5. The result was 25.7 MB/s, so apparently performance peaks somewhere and falls back off at very large block sizes. For comparison, the same 5GB all-zeros text file wrote at a rate of 57.6 MB/s directly, and transferred (via scp) between two nodes at an average rate of 44.1 MB/s.

Final Thoughts for this Installment

So initial impressions of using Ceph are not good. It's about six-and-a-half pains in the back to get set up, and once it's set up, performance is suboptimal. I'm going to do a few more posts where I play around with the other functionality of Ceph and test out things like CephFS and the Object Gateway (alternatives to using the block device), and management (how to get manually added daemons into the init.d scripts). I'm also looking to test out failover and high availability to see what happens to data if a node or two goes offline. I'd also like to look at doing some more in-depth performance testing, in a more real-world environment, but I'll have to think up a way to do that. It'd be cool to see if I can find out what the bottleneck is; clearly it's not the network or the HDDs -- could it be processing power, memory, or an inherent bottleneck in the software?

These will be saved for another time though, as once again this post has run (length and time wise) much longer than anticipated. I've also got a demo of Condusiv's V-Locity program I'm doing soon -- not really a competing product, beyond being about storage/IO -- so I may look at doing a "my experience with" post on that as well, so long as the reps I'm working with give me the OK.  Til next time.

PS: Let me know if there are any flaws in the way I tested the storage here. I know it's not exactly scientific or robust, but as far as I can tell it's not a bad first-impressions type test.

Thursday, June 12, 2014

Creating A Ceph Storage Cluster using old desktop computers

Introduction

What I'm using

My place of employment is getting rid of a bunch of old Dell Optiplex 780s in a computer refresh. Typically these would just go to our surplus department to be sold for cheap to anyone who wants one. Since none of this money ever makes it back to our department, it's of little consequence to my higher-ups whether they are sold or repurposed.

So I have free rein over several hundred EoL, but still modestly powerful, desktop computers. I've grabbed four of them to work my way through the Ceph evaluation instructions. Maybe this will be a valid way to re-purpose some otherwise in-the-trash hardware, or maybe it will just be a learning tool for me.

Optiplex 780 Specs:
  • Core 2 Quad processor (Q9550)
  • 4GB RAM (1066MHz)
  • 500GB-1TB 7200RPM SATA (2.0, 3Gb/s) Drive
    • They came with 1TB, but our replacements, if the drive ever failed, were often not 1TB
    • One disappointing thing is that the power supply in these units only has one SATA power connector, so I can't hook up a second drive - at least not easily.

Setup Process

I'm writing this as I go, and may or may not feel like editing it later, so bear with me - this is very much a train-of-thought.

Note from the future: Setup has not been as quick as the quick setup guide would lead you to believe, so I'm splitting this into multiple posts. This post gets through the very basic setup - getting a cluster with two OSDs to an "active+clean" state. Further information on expanding the cluster and setting up file shares is coming soon (now available here). I'm giving this a quick once-over now, but, barring any glaring errors, it will remain largely as it is.

OS Install

I'm using CentOS 6.5 x86_64 - minimal installer. I'm using CentOS because that's what I'm most familiar with. However, I hope to use btrfs (because this is an experiment, and what's an experiment without experimental software), which requires an updated kernel, so I'm going to figure out how to do the kernel upgrade as well, which I've never done before, so that should be fun.

I'm using the minimal installer because GUIs are for jerks, etc. But mostly because I don't want a bunch of unnecessary programs chewing up resources. I'm using 64-bit because, seriously, who uses 32-bit stuff anymore? The processor is 64-bit, but I'm not positive the Dell MoBo/BIOS are truly 64-bit. Either way it should be fine.

The only thing special I'm doing in the install process is leaving a large portion of the drive unformatted to become the btrfs partition later. I'm also not creating a swap partition, because using HDD as RAM for a storage device seems a bit silly. (CentOS ended up creating a tmpfs partition anyway against my will - I'll probably remove that when I can be bothered.)

Partition table ended up looking like this:
  • 250MB /boot ext4
  • 10 GB / ext4
  • 8 GB /home ext4
  • ~350GB free to be used later

OS setup

Most of these steps are fairly routine, but I figured I'd include them here just for posterity's sake - and maybe so it's more evident what I screwed up later when something goes wrong.

vi /etc/sysconfig/network-scripts/ifcfg-eth0
#disabled netmanager on eth0
#set onboot to yes
service network restart
#eth0 is now up and has an ip

#get all the latest things
yum upgrade 
#get dependencies for kernel upgrade
yum install gcc ncurses ncurses-devel

#create myself a user
useradd myuser
passwd myuser

#remove root from ssh permission
vi /etc/ssh/sshd_config
#change PermitRootLogin from "yes" to "no"
service sshd restart

#so I don't have to download and scp 
yum install wget

#download kernel source
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.6.11.tar.bz2
#I'm using 3.6.11 here because that is what Ceph currently recommends -- "latest in the 3.6 stable"

#to avoid redundancy I'll just post the link to the steps I'm following for updating the kernel
#http://www.tecmint.com/kernel-3-5-released-install-compile-in-redhat-centos-and-fedora/
#Note: I had to install perl to get the compile to complete
yum install perl

#Once the new kernel is installed, reboot and press a key during the "Booting CentOS in ...." screen to show the new 3.6.11 boot option
#I edited grub.conf to make 3.6.11 the default so I don't have to remember to select it each boot

#Add the ceph repo to yum, follow instructions on the ceph website - under "Red Hat Package Manager"
#http://ceph.com/docs/master/start/quick-start-preflight/

Cloning image

So, obviously I don't want to have to do all the above on each machine (3 minimum), so I want to clone the disk. But it's a 500GB drive, and I don't want to wait for dd to run on each machine. So I found a possibility here that I'm going to try. The theory is to fill the empty part of the drive with zeros so that it can be compressed well with gzip. This will be easy with my existing partitions, but I guess I'll have to create a temporary partition to zero out the unused space. If I had thought of this before, I could have zeroed the disk before install, but live and learn I suppose.

So I created a partition, formatted to ext4, mounted to /temp and then issued

cat /dev/zero | tee -a /zero.txt /home/zero.txt /temp/zero.txt

to zero out all unused space on each partition. This took a long time. After that I ran this:

rm /zero.txt /home/zero.txt /temp/zero.txt
dd if=/dev/sda bs=4M | gzip > /external/CephImage.gz

Where /external is an external drive I've attached to the machine to hold the image. This also takes a long time. A little over 3 hours to be precise. But I ended up with an image that was 3.3GB rather than 400GB - a significant savings. Seriously, that's some ridiculous compression, I'm a little worried it's going to corrupt on the image write... we'll see I guess.

Now I plug in a bare drive and begin the opposite process

dd if=CephImage.gz bs=4M | gunzip > /dev/sdc

where /dev/sdc is an unformatted bare drive I plugged in. This, again, will take awhile. I'm actually wondering if this will take longer than just your standard DD, because it now has to uncompress the whole thing and write it. But still worth it if it means having 3GB image rather than a 400GB one.

A little longer, but not by much.
...and it boots!

It's not a great clone method; ~3 hours does not make for rapid deployment, but it should suit my purposes here. I don't know that this actually saved any time over the standard "dd if=/dev/sda of=/dev/sdc" but this does at least give me an image backup in case something happens.

Setting Up Ceph


Many hours later I've got some cloned hard drives.

Now I install ceph-deploy on my main machine:
yum install ceph-deploy

Boot up the first node (Ceph-Node1) and follow the Preflight Checklist to get it ready. I've moved to a private network, so I set the hosts up manually in the hosts file. Then used ceph-deploy against each node:
ceph-deploy new Ceph-Node1
ceph-deploy new Ceph-Node2
ceph-deploy new Ceph-Node3

At this point, as I went to set up the partitions with btrfs, I noticed I hadn't installed the btrfs userspace programs. While support is built into the kernel for btrfs, the programs to actually use it are not, so I had to install that on each node (since I'm on a private/not-internetted network now, downloaded rpm from pkgs.org and used a flash drive to get it to each node).
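
For the curious, installing from the downloaded rpm on each node is just the usual (the actual filename depends on whichever version pkgs.org had at the time):

#from the flash drive on each node
rpm -ivh btrfs-progs-*.el6.x86_64.rpm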

Recreated /dev/sda4 by deleting/re-adding it with the full space of each drive (again, this varies drive-to-drive based on what I had lying around). Then used mkfs.btrfs to format it. Edited /etc/fstab to make it mount on boot.

Hmmm, so looks like I followed the wrong page before. "Ceph-Deploy new" installs a monitor node, so I purged everything and started over via instructions at the start of the storage cluster quick start guide which I will be subsequently following.

So, now correctly, I do:

ceph-deploy new Ceph-Node1
#It knows the correct user for Ceph-Node1 via the ~/.ssh/config file

This creates a monitor node on Node1.

I'm not sure I'll ever understand how Linux user context works. Ceph-Deploy doesn't like being run as root or with sudo, so I had to log in with a non-root account, then run "su" to get permission to run it, but not "su -" so I'm still the other user but with root permissions. Trying to just run as root gives errors (paradoxically) saying the command must be run as root. This does actually make sense, it's the remote machine that needs root, and for whatever reason ceph doesn't run remotely as root if it's root locally.... anyway so now running:

ceph-deploy install Ceph-Node1 Ceph-Node2

Gives me an error that it can't get a valid baseurl for the repo. Fantastic. I'm trying to set this up on a private, non-interneted network, and now it wants internet.

After some trying, I've decided a proxy is probably the way to go for this. Trying to resolve all dependencies and download all requisite .rpm files myself is proving too tiresome.  Luckily I've set up proxy servers (with squid) before, so hopefully this won't be too bad. I'm not going to post all the steps involved with that, there's squid guides elsewhere and would just clutter this already cluttered post.

With the proxy server set up, I've found that the ceph-deploy install does not appear to respect the http_proxy settings in ~/.bash_profile (I say this because I can wget things from the internet, but when ceph-deploy tries, it fails). So I've had to set proxy settings in /etc/yum.conf, /etc/wgetrc, and /root/.curlrc in order to get it to complete. Well, that installed it on the admin machine (the one with ceph-deploy installed), so now we've got to get it on the nodes.... Yep, all three of those files have to be set on each node (.curlrc must be in /root), but it's working at least.
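
For anyone trying the same thing, the proxy entries look something like this (the address/port below is just an example; substitute your own proxy):

#/etc/yum.conf
proxy=http://192.168.0.5:3128

#/etc/wgetrc
http_proxy = http://192.168.0.5:3128
https_proxy = http://192.168.0.5:3128

#/root/.curlrc
proxy = "http://192.168.0.5:3128"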

Ok, Ceph is installed on all nodes... now back to the storage cluster quick start to continue that.

"ceph-deploy mon create-initial" runs with no issues

"ceph-deploy osd --fs-type btrfs prepare <node>:/ceph" runs with no issues - /ceph is the directory I've mounted the btrfs partition to. Specified using btrfs because it defaults to xfs.  Just did this to Node1 and Node2 for now, as per instructions.

"ceph-deploy osd activate <node>:/ceph" ran fine on node1, but seems to be hanging up on node2. Eventually times out with a "received no response in 300 seconds" type error. ...Got it, default iptables rules were in place and apparently blocking communication between Node2 and the monitoring node (Node1), turned off iptables on all hosts and it worked. Presumably node1 worked because it's also the monitor node so firewall wasn't an issue.

Followed the rest of the steps in the storage quick guide. Having an issue with checking cluster health from anything other than the monitor (Node1). Problem appears to be the monitor service continually shutting down because of space issues..... Monitor generated ~900MB of logs very quickly and (combined with the other installs) filled up the '/' directory (I only had partitioned 20GB). Cleaned some stuff up and trying again.

Note: found this out looking at /var/log/ceph/ceph-mon-Ceph-Node1.log
and saw: "<...>reached critical levels of available space on local monitor storage -- shutdown!"

Why is this log growing so fast?!? Literally several MB a second of logs.

Found (via Google) that adding a "debug paxos = 0" line to /etc/ceph/ceph.conf stops the log from logging a million messages a minute (never thought I'd be able to say that non-hyperbolically). Seems like a good feature. Added that under "[global]", stopped the service, removed the current (several GB) log file, and started the service back up. The log file is a much more manageable size now.
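
So ceph.conf on the monitor now has:

[global]
#...existing settings...
debug paxos = 0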

So, the mon node is online now, and I can query information about the cluster from other machines, so they're talking ok, but my cluster is still in an "active+degraded" state (has been for 12 hours or so at this point - I went home between this paragraph and the previous one). "ceph -s" gives the following information:

192 pgs degraded, 192 pgs stuck unclean
2 osds, 2 up, 2 in

According to the wiki, "unclean" indicates that 'pgs' (placement groups) have not been replicated the minimum number of times. It's showing both osds I've created so far -- or I'm assuming that's what " 2 up" means (checked the wiki, that is what that means), so it seems likely that the number of replications is set too high - that is, more than 2.

Sure enough, running "ceph osd dump | grep 'replicated size'" showed all 3 pools (data, metadata, rbd) with a size of 3 (size is apparently the code for "number of replications I should have"). So I issued the following command for each pool:

ceph osd pool set <pool> size 2
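
i.e., for the three default pools:

ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2
#then confirm the change took
ceph osd dump | grep 'replicated size'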

After doing that and waiting a minute, the cluster is now showing "active" but not "active+clean" the way it's supposed to. Still has the 192 pgs stuck unclean, but no more pgs degraded.... Found a solution: shutting down the OSD on one node, leaving it down for a bit, then restarting it got things to come back in a clean state... a little troubling, but what are you going to do. Here's the email archive I found the solution in.

So, hurray, I have an "active+clean" cluster now, and I can continue with the "quick" start guide. Next step is adding additional OSDs and monitors. Neat. Seems like a semi-natural place for a break. Stay tuned for the post where I expand the cluster, add more monitor nodes, and set up block devices, file shares, etc.










Thursday, April 10, 2014

XenDesktop Studio : Add resources - The supplied connection address is invalid

Solution

Turns out, in my case, XenDesktop did not update the connection information correctly when I removed a server from the pool. So "the supplied connection address is invalid" doesn't mean it's having a problem looking up the servers; rather, the lookup is returning too many addresses.

The way I found to do this is to uncheck the server in the high availability configuration (it's the only other place the removed server seems to show up). Open Studio and go to configuration > hosting. Click the connection that is having the problem, then right click and select edit connection. Click the edit HA servers button. Uncheck the box next to the server address that matches the one removed from the pool, then click ok.

You should now be able to correctly configure the connection through the "add connections and resources" menu.

Full Details

I originally did not set up my XenDesktop 7 environment for Machine Creation Services (MCS). I had tried it in 5.6 and found it didn't perform as well as manually doing full-clone machines. But I wanted to set up a small environment to give it another shot, so I needed to add the storage and networking that it was going to use (Studio > Configuration > [hosting right-click] add connections and resources). However, when I tried to reconfigure my connection I got the following error (the actual error is a bit longer, but this is the important part):
Update-HypHypervisorConnection : The supplied connection address is invalid. Ch
eck that it exists and is part of the same pool as the connection.
At line:1 char:31
+ Update-HypHypervisorConnection <<<<  -LiteralPath 'xdhyp:\connections\myconnection'
    + CategoryInfo          : InvalidOperation: (:) [Update-HypHypervisorConne
   ction], InvalidOperationException
    + FullyQualifiedErrorId : Citrix.XDPowerShell.HostStatus.ConnectionAddress
   Invalid,Citrix.HostingUnitService.Sdk.Commands.UpdateHypHypervisorConnecti
  onCommand
For some reason XenDesktop is having trouble managing the pool; it thinks it has a bad address. The first thing I did was test the connection (Studio > configuration > hosting > [right click connection] test connection). This ran and came back with no errors.

So then I ran this command (which I found via citrix docs as a "related-to" the command that is failing).

PS C:\temp> Get-HypXenServerAddress -LiteralPath 'xdhyp:\connections\myconnection'
http://192.168.0.10
http://192.168.0.11
http://192.168.0.12
This is 100% correct, so the connection object has the correct addresses (and is returning them correctly). So why the lookup failure? Well... here I failed a bit in documenting. I ran a command somewhere that showed an error reporting that it was trying to connect to four different server addresses, rather than the correct three; I forgot to write down exactly what command it was - sorry. You should be able to tell by going to "configure HA servers" in the edit connection menu and seeing what servers are showing up there.

The fourth address that was showing up in the lookup came from a server that I had removed from the pool. Apparently that information did not get updated automatically - or at least not consistently, throughout xendesktop.

Tuesday, March 25, 2014

Upgrading the DE45-HG to Windows 7 with a bootable USB

Here's an interesting problem I ran into trying to upgrade our Visix (Digital Signage) player to windows 7. With the XP EoL coming up, they're pushing for us to get everything upgraded off XP. Problem is the Visix players stayed on XP way longer than they should have, so even relatively new devices are running XP.

Most of the upgrades have been pretty seamless, but hit a snag with a small form-factor device. Its model number is DE45-HG, from what I can tell it's made by AOpen, but with some customization from Visix. Problem is that it doesn't have a DVD drive and won't boot from USB.

First issue is that the POST is hidden behind a "Digital Engine" logo so you can't see what keystrokes get you into the BIOS. Through a little trial and error I found it was "delete" to get into the BIOS. Unfortunately under the boot options only the main SATA drive and the network are listed as boot devices.

The workaround is something like this. With the USB drive plugged in, get into the BIOS and go to boot > boot device priority; here you will only see the SATA drive (and network, if that's enabled) listed. Go back and select boot > Drive configuration. Here you should see the SATA drive and the USB drive listed. Change the USB drive to "drive 1" (SATA will be reassigned to drive 2). Go back to boot priority and now you should see only the USB drive listed as a boot device.

Save changes and exit, and the machine should boot to the USB to do the install. Proceed with the install as normal. The first time the machine reboots to continue the install, you will need to switch the boot devices back. Get into the BIOS, go to boot > drive configuration, and change the SATA drive back to drive 1. Verify it now shows up as the only boot device. Save changes and exit. The setup should now continue like a normal Win 7 install.

Update: Also found that when updating the display drivers the screen would go black until the computer was rebooted. I recommend installing the display driver separately (without other updates), then holding down the power button to reboot after the HDD activity light stops.

Friday, February 28, 2014

Microsoft Access 2013 on Server 2008 R2 SP 1 remote access through XenDesktop Screen Freeze

Solution

New Solution:

"Legacy Graphics Mode" is an option is Citrix Policy. Enabling this for the desktop group will stop Access from freezing, and doesn't break 'browse' 'save-as' etc. Unsure exactly what legacy graphics mode does, still trying to get someone to answer that for me, or why it doesn't work in the first place, but this is a less bad work around.


Appears to be something with the visual themes that messes up the ICA connection. The "fix" - probably more of a workaround - is to run the program in compatibility mode (don't forget to set it for all users). You'll want to select compatibility mode for Windows 7 + disable visual themes.

Edit: People on the Citrix forums claim that this affects Office 2010 as well; I can't confirm that it does, or that this fix works for 2010 if the problem does exist, but there you go.

Warning: This workaround has been found to cause some other undesirable behavior. Notably: "browse" buttons will no longer launch the file browser to allow you to select files. This affects a number of things such as the "save as..." feature and the "import from (excel/text/etc)" wizards. This is arguably less detrimental than the screen freeze, but please be aware.


The Problem / Full story

As you may guess from the title, this is an insanely specific bug that we ran into in one of our labs. Students were trying to use MS Access (because we teach that for some reason) and their sessions kept on locking up. Here's the sequence of events:

Users log into Wyse Xenith 2 thin clients connecting to a 2008 R2 SP 1 terminal server type environment through XenDesktop (Citrix seems to refer to it as ServerOS connection, or Shared Published Desktops). This works great.

Users then open Access and everything seems to be working fine, but when they try to open a table in design view the screen freezes after a few seconds. At this point the machine is completely unresponsive, the mouse still moves, but no mouse clicks or keystrokes appear to have any effect.

I say "appear to have any effect" because they actually do the user just can't see them. If I use lanschool to remotely watch/control the user's session I can see them clicking around and typing things, and when I control I can interact with the session just fine; The user cannot see anything changing on the screen. It's totally bizarre. 

The only way I've found to restore functionality is to remotely log the user off so they could log in again. 

The really weird thing is that none of the factors individually cause the problem. Using Access on the same server but over Remote Desktop (RDP) works just fine. Using Access on a Windows 7 machine over XenDesktop works just fine. No other program I've used on the server over XenDesktop has exhibited this behavior.

I figured out the workaround above pretty much by trial and error. Then figured out the correct solution from citrix forums; Yay citrix forums!