<?xml version="1.0" encoding="utf-8"?>
<feed version="0.3" xmlns="http://purl.org/atom/ns#">
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/"/>

<title>Jeff Epler's blog</title>
<modified>2013-12-13T23:26:29Z</modified>
<tagline>Photos, electronics, CNC, and more</tagline>
<author><name>Jeff Epler</name><email>jepler@unpythonic.net</email></author>
<entry>
<title>Benchmarking ungeli on real data</title>
<issued>2013-12-13T23:26:29Z</issued>
<modified>2013-12-13T23:26:29Z</modified>
<id>https://gamma.unpythonic.net/01386977189</id>
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/01386977189"/>
<content type="text/html" mode="escaped">

Since the goal of &lt;a href=&quot;https://gamma.unpythonic.net/01385778545&quot;&gt;this little project&lt;/a&gt; is to actually
read my geli-encrypted zfs filesystems on a Linux laptop, I had to get a USB
enclosure that supports drives bigger than 2TB; I also got a model that
supports USB 3.0.  The news is good:

&lt;p&gt;Ungeli and zfs-on-linux work for this task.  I was able to read files and
verify that their content was the same as on the Debian kFreeBSD system.

&lt;p&gt;The raw disk I tested with (WDC WD30 EZRX-00DC0B0) gets ~155MiB/s at the start,
~120MiB/s in the middle, and ~75MiB/s at the end of the first partition according
to zcav.  Even though ungeli has had no serious attempt at optimization, it
achieves over 90% of this read rate when zcav is run on &lt;tt&gt;/dev/nbd0&lt;/tt&gt; instead
of &lt;tt&gt;/dev/sdb1&lt;/tt&gt;, up to 150MiB/s at the start of the device while consuming
about 50% CPU.

&lt;p&gt;(My CPU does have AES instructions, but I don't know for certain whether my
OpenSSL uses them the way I've written the software.  I do use the envelope
functions, so I expect that it does.  &amp;quot;openssl speed -evp aes-128-xts&amp;quot; gets
over 2GB/s on 1024- and 8192-byte blocks.)

&lt;p&gt;Unfortunately, zfs read speeds are not that hot.  Running md5sum on 2500 files
totalling 2GB proceeded at an average of less than 35MB/s.  I don't have a
figure for this specific device when it's attached to (k)FreeBSD via SATA, but
I did note that the same disk scrubs at 90MB/s.  On the other hand, in a
similar test on my kFreeBSD machine (but on the raidz pool I have handy, not a
pool made from a single disk) I also get about 35MB/s from md5sum, so maybe this
is simply how zfs performs.
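&lt;p&gt;(For the curious, that 35MB/s figure is just total bytes over wall time for the md5sum run.  A rough equivalent of the measurement can be sketched in Python; this is a hypothetical helper, not the actual commands I ran:)

```python
import hashlib
import os
import time

def md5_throughput(root):
    """Walk a directory tree, md5 every file, and report MB/s."""
    total = 0
    start = time.time()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, 'rb') as f:
                while True:
                    chunk = f.read(1024 * 1024)
                    if not chunk:
                        break
                    h.update(chunk)
                    total += len(chunk)
    elapsed = time.time() - start
    # decimal megabytes per second, matching md5sum-style figures
    return total / (1e6 * max(elapsed, 1e-9))
```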

&lt;p&gt;All in all, I'm simply happy to know that I can now read my backups on either
Linux or (k)FreeBSD.
</content>
</entry>
<entry>
<title>Decrypting geli volumes with portable software</title>
<issued>2013-11-30T02:29:05Z</issued>
<modified>2013-11-30T02:29:05Z</modified>
<id>https://gamma.unpythonic.net/01385778545</id>
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/01385778545"/>
<content type="text/html" mode="escaped">

The geli infrastructure is strongly tied to FreeBSD, and I didn't find
any documentation of its on-disk format.  So, in the wake of my
&lt;a href=&quot;https://gamma.unpythonic.net/01385346693&quot;&gt;concerns about being able to read backups on Linux&lt;/a&gt;,
I read a lot of FreeBSD source code, and now I've written a portable (I hope)
userspace program which can decrypt at least a toy geli-encrypted volume.

&lt;p&gt;It's called &lt;a href=&quot;https://github.com/jepler/ungeli&quot;&gt;ungeli&lt;/a&gt;, and I'm going
to try letting it live on github instead of a personal git repo.  So far it's a
toy in that I've only tested it on a toy volume and the performance is not tuned,
but it does seem to work, and due to its smallness (&amp;lt;600 SLOC at present) it may
be a useful second reference if you too wish to understand geli.

&lt;p&gt;&lt;b&gt;Update&lt;/b&gt;: I added nbd support and squashed some bugs.  Now I've
succeeded in retrieving files from a geli-encrypted zfs volume on Linux
using zfs-on-linux:
&lt;pre&gt;
# ./ungeli -j geli-passfile npool.img /dev/nbd0 &amp;
# zpool import -d /dev -o readonly=on npool      # (imports /dev/nbd0)
# cat /npool/example/GPL-3
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc.  &amp;lt;http://fsf.org/&amp;gt;
...
&lt;/pre&gt;

</content>
</entry>
<entry>
<title>Encrypted ZFS for off-site backups</title>
<issued>2013-11-25T02:31:33Z</issued>
<modified>2013-11-25T02:31:33Z</modified>
<id>https://gamma.unpythonic.net/01385346693</id>
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/01385346693"/>
<content type="text/html" mode="escaped">
As I &lt;a href=&quot;https://gamma.unpythonic.net/01381324272&quot;&gt;recently discussed&lt;/a&gt;,
I use zfs replication for my off-site backups, manually moving volumes
from my home to a second location on a semi-regular schedule.

&lt;p&gt;Of course, if one of these drives were stolen or lost, I would rather the
thief not have a copy of all my data.  Therefore, I use &lt;a href=&quot;http://en.wikipedia.org/wiki/Geli_%28software%29&quot;&gt;geli&lt;/a&gt; to encrypt
the entire zpool.</content>
</entry>
<entry>
<title>My ZFS replication script</title>
<issued>2013-10-09T13:11:12Z</issued>
<modified>2013-10-09T13:11:12Z</modified>
<id>https://gamma.unpythonic.net/01381324272</id>
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/01381324272"/>
<content type="text/html" mode="escaped">

On my &lt;a href=&quot;https://gamma.unpythonic.net/debian-kfreebsd-zfs&quot;&gt;new Debian GNU/kFreeBSD system&lt;/a&gt;, my
backup strategy has changed.
On the previous system, I relied on incremental dumps and a DAT160 tape
drive, which has an 80GB uncompressed capacity.  When you have a few
hundred gigabytes of photos to back up, this is an inconvenient
solution.  On the new system, I am using multiple removable 3TB hard
drives and a set of scripts built around zfs send/receive.

&lt;p&gt;Cron runs the &lt;tt&gt;rep.py&lt;/tt&gt; script 4 times a day, which does a &lt;tt&gt;zfs
send | zfs receive&lt;/tt&gt; pipeline for each filesystem to be backed up.  On a
semi-regular basis (I haven't yet decided on a schedule;
with tape backups I did it less than once a month even though it was
comparatively easier), I remove the drive to an off-site location,
return the other drive from off-site, and insert it.  (There's also
fiddling with &lt;tt&gt;zpool import/export&lt;/tt&gt;, of course.)

&lt;p&gt;The &lt;tt&gt;rep.py&lt;/tt&gt; script relies on the &lt;tt&gt;zfs&lt;/tt&gt; Python module, also of my
own creation.  This module has facilities for inspecting and interacting
with zfs filesystems, e.g., to list filesystems and snapshots, to create
and destroy snapshots, and to run replication pipelines.

&lt;p&gt;The &lt;tt&gt;rep.py&lt;/tt&gt; script needs to be customized for your system.
Customization items are:

&lt;p&gt;&lt;pre&gt;
TARGETS = ['bpool', 'cpool']
SRCS = ['mpool', 'rpool']
&lt;/pre&gt;

&lt;p&gt;&amp;quot;SRCS&amp;quot; is a list of zpools which are replicated.  &amp;quot;TARGETS&amp;quot; is a list of
zpools to which backups are replicated.  The first available pool out of
TARGETS is chosen, so if more than one TARGET is inserted, only one
will ever be used.
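&lt;p&gt;That &amp;quot;first available&amp;quot; selection might be sketched like so.  (This is my guess at the behaviour, not &lt;tt&gt;rep.py&lt;/tt&gt;'s actual code; &lt;tt&gt;pool_is_imported&lt;/tt&gt; is a name I made up.  Checking the exit status of &lt;tt&gt;zpool list&lt;/tt&gt; is one straightforward way to test for a pool's presence.)

```python
import subprocess

def pool_is_imported(pool):
    """True if 'zpool list pool' succeeds, i.e. the pool is imported."""
    return subprocess.call(
        ['zpool', 'list', pool],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def first_available_pool(candidates, is_present=pool_is_imported):
    """Return the first candidate pool that is present, or None."""
    for pool in candidates:
        if is_present(pool):
            return pool
    return None
```

The predicate is passed in as a parameter so the selection logic can be exercised without any real zpools.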

&lt;p&gt;You can designate individual filesystems as not replicated by setting
the user property net.unpy.zreplicator:skip to the exact string &amp;quot;1&amp;quot;,
i.e.,
&lt;pre&gt;zfs set net.unpy.zreplicator:skip=1 examplepool/junkfiles&lt;/pre&gt;
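&lt;p&gt;Putting those pieces together, the per-filesystem flow (honor the skip property, then run a &lt;tt&gt;zfs send | zfs receive&lt;/tt&gt; pipeline) might look roughly like this.  The helper names here are mine, not the actual &lt;tt&gt;zfs&lt;/tt&gt; module's API:

```python
import subprocess

SKIP_PROP = 'net.unpy.zreplicator:skip'

def skip_cmd(filesystem):
    """Command whose output is the skip property's value for a filesystem."""
    return ['zfs', 'get', '-H', '-o', 'value', SKIP_PROP, filesystem]

def send_cmd(filesystem, snapshot):
    """Command that streams one snapshot of a filesystem."""
    return ['zfs', 'send', '%s@%s' % (filesystem, snapshot)]

def recv_cmd(filesystem, target_pool):
    """Command that receives the stream under the target pool."""
    return ['zfs', 'receive', '-F', '%s/%s' % (target_pool, filesystem)]

def replicate(filesystem, snapshot, target_pool):
    """Run one zfs send | zfs receive pipeline; True on success."""
    send = subprocess.Popen(send_cmd(filesystem, snapshot),
                            stdout=subprocess.PIPE)
    recv = subprocess.Popen(recv_cmd(filesystem, target_pool),
                            stdin=send.stdout)
    send.stdout.close()  # so receive sees EOF when send exits
    return send.wait() == 0 and recv.wait() == 0
```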

&lt;p&gt;&lt;b&gt;Files currently attached to this page:&lt;/b&gt;
&lt;table cellpadding=5 style=&quot;width:auto!important; clear:none!important&quot;&gt;&lt;col&gt;&lt;col style=&quot;text-align: right&quot;&gt;&lt;tr bgcolor=#eeeeee&gt;&lt;td&gt;&lt;a href=&quot;https://media.unpythonic.net/emergent-files/01381324272/rep.py&quot;&gt;rep.py&lt;/a&gt;&lt;/td&gt;&lt;td&gt;1.5kB&lt;/td&gt;&lt;/tr&gt;&lt;tr bgcolor=#dddddd&gt;&lt;td&gt;&lt;a href=&quot;https://media.unpythonic.net/emergent-files/01381324272/zfs.py&quot;&gt;zfs.py&lt;/a&gt;&lt;/td&gt;&lt;td&gt;12.7kB&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;p&gt;

&lt;p&gt;License: GPLv2+
</content>
</entry>
<entry>
<title>I hope my kfreebsd box is still bootable...</title>
<issued>2013-05-09T18:43:39Z</issued>
<modified>2013-05-09T18:43:39Z</modified>
<id>https://gamma.unpythonic.net/01368125019</id>
<link rel="alternate" type="text/html" href="https://gamma.unpythonic.net/01368125019"/>
<content type="text/html" mode="escaped">

Preserving the upgrade messages for posterity; I'll try rebooting it later...
(Update: it still booted fine after this grub update.)

&lt;p&gt;&lt;pre&gt;
Setting up grub-pc (1.99-27+deb7u1) ...
(pass0:ahcich0:0:0:0): READ CAPACITY(10). CDB: 25 0 0 0 0 0 0 0 0 0 
(pass0:ahcich0:0:0:0): CAM status: CCB request was invalid
(pass1:ahcich1:0:0:0): READ CAPACITY(10). CDB: 25 0 0 0 0 0 0 0 0 0 
(pass1:ahcich1:0:0:0): CAM status: CCB request was invalid
(pass2:ahcich2:0:0:0): READ CAPACITY(10). CDB: 25 0 0 0 0 0 0 0 0 0 
(pass2:ahcich2:0:0:0): CAM status: CCB request was invalid
(pass3:ahcich3:0:0:0): READ CAPACITY(10). CDB: 25 0 0 0 0 0 0 0 0 0 
(pass3:ahcich3:0:0:0): CAM status: CCB request was invalid
(pass4:ahcich4:0:0:0): READ CAPACITY(10). CDB: 25 0 0 0 0 0 0 0 0 0 
(pass4:ahcich4:0:0:0): CAM status: CCB request was invalid
camcontrol: cam_lookup_pass: CAMGETPASSTHRU ioctl failed
cam_lookup_pass: No such file or directory
cam_lookup_pass: either the pass driver isn't in your kernel
cam_lookup_pass: or ada0p1 doesn't exist
camcontrol: cam_lookup_pass: CAMGETPASSTHRU ioctl failed
cam_lookup_pass: No such file or directory
cam_lookup_pass: either the pass driver isn't in your kernel
cam_lookup_pass: or ada0p1 doesn't exist
Generating grub.cfg ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found kernel of FreeBSD: /boot/kfreebsd-9.0-2-amd64.gz
Found kernel module directory: /lib/modules/9.0-2-amd64
done
&lt;/pre&gt;

&lt;p&gt;&lt;a href=&quot;http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=612128&quot;&gt;Related?&lt;/a&gt;
</content>
</entry>
</feed>
