So you’ve decided you want a high availability cluster running on FreeBSD?
A common setup is to run haproxy + CARP together. The main drawback to this method is that it isn’t truly high availability. Here are some of the shortcomings of this approach:
- You will need to upgrade the haproxy binary an awful lot if you are doing SSL termination, and you will need to fail over the CARP master when performing maintenance on the host
- What happens to existing in-flight requests?
- Do you always have a service window?
- If the haproxy process on the MASTER dies (or say isn’t accepting requests on a reload), those requests aren’t serviced
- Additionally how does CARP know the status of the haproxy process? It doesn’t.
- Only one haproxy instance is serving all of the requests at a time, so you aren’t really load balancing the requests
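For concreteness, the CARP side of such a pair is just a couple of lines in /etc/rc.conf on each host. This is a minimal sketch, not my actual config; the interface name, addresses, vhid, and password are all placeholders, and the backup host would carry a higher advskew:

```
# Hypothetical /etc/rc.conf fragment for the CARP master.
# em0, 192.0.2.x, vhid 1, and the password are placeholders.
ifconfig_em0="inet 192.0.2.11/24"
ifconfig_em0_alias0="inet vhid 1 pass examplepass advskew 0 alias 192.0.2.50/32"
```

The standby host uses the same vhid and password with, say, advskew 100, so it only claims 192.0.2.50 when the master stops advertising.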
All of this stems from the fact that we are missing a layer of redundancy. We have redundancy at the physical layer with CARP. We have redundancy at the application (or transport) layer with haproxy.
What we need is redundancy at the network layer to complete the stack.
I am told you can solve this in Linux with LVS, but currently (as far as I’m aware) there is no mechanism for this in the BSD world.
I’ve been trying to finally move some of my file storage off site. Here’s a little script I wrote to help facilitate that.
```sh
#!/bin/sh
# Reconstructed ordering; $BASE (temp-file prefix) and X_HOSTSPEC_X (remote host) are set elsewhere.
if [ "x$1" = "x" ]; then
	echo "Usage: "`basename $0`" snapshot-name"; exit 1;
fi
if ! zfs list -t snapshot $1 > /dev/null 2>&1; then
	echo "Invalid snapshot given. Try zfs list -t snapshot for ideas."; exit 1;
fi
SNAP=$1
SAN=`echo $1 | sed 's/[^A-Za-z0-9]/-/g'`	# sanitize the snapshot name for use as a filename
CONTAINER=$SAN.zfs.gz.scrypt			# remote filename (reconstructed)
FIFODIR=$(mktemp -d $BASE-tmp-XXXXXXXX) || exit 2
FIFO=$FIFODIR/fifo; CHK=$FIFODIR/sha256
mkfifo $FIFO || exit 2
echo "Sending snapshot "$SNAP;
sha256 < $FIFO > $CHK &				# checksum the encrypted stream as it passes through the fifo
zfs send "$SNAP" | pigz | scrypt enc /dev/stdin | tee $FIFO | ssh -c arcfour256 X_HOSTSPEC_X "umask 0077 && cat > .zfs-backup/$CONTAINER"
wait
SHA256=$(cat $CHK)
printf "%s  %s\n" $SHA256 $CONTAINER | ssh X_HOSTSPEC_X "umask 0077 && cat > .zfs-backup/$CONTAINER.sha256sum"
echo "Transferred snapshot with checksum: "$SHA256;
rm -r $FIFODIR
```
Some notes about choices of utilities:
- pigz could easily be replaced with gzip or lzma, or whatever.
- I’m debating switching scrypt out for something like openssl or gpg with an actual random key, or possibly a curve25519/chacha20/poly1305 container; I haven’t done the research to see how smart/easy this is. I understand what scrypt is doing, it’s installed on my machine, and it’s a Good Thing.
- I’m using arcfour256 for the bulk transfer, because the security of the stream isn’t important. It’s already protected by scrypt/AES256.
- I tee to a fifo so that I can check that the transfer wasn’t corrupted on the remote end without typing my pass phrase into the untrusted machine. The tee/fifo feels hackish to me, but I don’t have another idea.
- I investigated the scrypt format, and there is no length in the file header, nor any trailing magic bytes, so it’s impossible to tell if the file is truncated without trying to decrypt it. Based on the code, adding a length header or trailing magic would break the current on-disk format.
- This checksum won’t help if, say, the scrypt process is interrupted – I’m guessing you will get a partial transfer, and matching checksums.
- I copy the checksum to the remote machine also, in a format that can be parsed by sha256sum
- I run this in a screen session
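On the verification point: the `.sha256sum` file can later be fed straight to `sha256sum -c` on the remote machine (note that GNU sha256sum wants two spaces between the hash and the filename). A local sketch of the round trip, with a dummy file and illustrative names standing in for the real container:

```shell
# Local sketch of the checksum round trip; container.bin stands in
# for the real encrypted container, and the filenames are placeholders.
printf 'dummy container data\n' > container.bin
SHA256=$(sha256sum container.bin | awk '{print $1}')
printf '%s  %s\n' "$SHA256" container.bin > container.bin.sha256sum   # two spaces
sha256sum -c container.bin.sha256sum   # prints "container.bin: OK"
```

On the remote end the equivalent would be `ssh X_HOSTSPEC_X "cd .zfs-backup && sha256sum -c $CONTAINER.sha256sum"`.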
Having followed the ZFS Tuning Guide (http://wiki.freebsd.org/ZFSTuningGuide) when configuring ZFS, I was aware of the delicate nature of the kernel settings for ZFS on i386. I recently upgraded my server to 4GB ECC from 2GB non-ECC and wanted to take advantage of the extra RAM, so I thought I’d play around with these options.
My current kernel config could not be simpler, ZFS-GENERIC:
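The config block itself was lost here, but given the name and the KVA_PAGES setting referenced below, it was presumably nothing more than:

```
# ZFS-GENERIC (i386): GENERIC plus a larger kernel virtual address space
include	GENERIC
ident	ZFS-GENERIC
options	KVA_PAGES=512
```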
For this configuration I successfully used in /boot/loader.conf:
#Working options for ZFS-GENERIC 2GB RAM, KVA_PAGES=512
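My exact figures didn’t survive here, but the tuning guide’s i386 recommendations for a machine in this range look roughly like the following; treat these values as illustrative placeholders, not the originals:

```
# Illustrative /boot/loader.conf tunables for ZFS on i386 with KVA_PAGES=512;
# the specific values are placeholders in the spirit of the ZFS Tuning Guide.
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"
vfs.zfs.prefetch_disable="1"
```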
I thought it would be as simple as:
#Trial options for ZFS-GENERIC 4GB RAM, KVA_PAGES=512
But, to my chagrin, my system responded on boot up with a:
panic: kmem_suballoc: bad status return of 3