OneRNG on Arch Linux

If you're a Linux crypto guy, you probably care about your entropy.  If you don't, you probably should.

I recently purchased a OneRNG device from the Kickstarter project.  Unfortunately, it didn't work out of the box with Arch Linux.  This blog post documents what I hit and how I overcame the issues to get my OneRNG working.

First thing I did was install the prerequisites:

yaourt -S at python-gnupg rng-tools
sudo systemctl enable atd.service
sudo systemctl start atd.service

as per the instructions at http://onerng.info/onerng/.  Note that we do NOT enable/start rngd.service, as the daemon management is handled from the udev rules.

I then downloaded the tar file from https://github.com/OneRNG/onerng.github.io/blob/master/sw/onerng_3.4.orig.tar.gz?raw=true, verified the MD5 and SHA-256 sums, and installed it with slightly tweaked instructions:

tar -xvzf onerng_3.4.orig.tar.gz
cd onerng_3.4
sudo make install
sudo udevadm control --reload-rules
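
For the record, the verification step was nothing fancy; I just compared the output of the checksum tools against the sums published on onerng.info:

md5sum onerng_3.4.orig.tar.gz       # compare against the published MD5
sha256sum onerng_3.4.orig.tar.gz    # compare against the published SHA-256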

I now plugged in my OneRNG with my fingers crossed.  At this point it died.  I did some brief reverse engineering and figured out how /sbin/onerng.sh worked.  I then tried to run it manually and got:

$ sudo bash -ax /sbin/onerng.sh daemon ttyACM0
[snip]
Exception in thread Thread-7:
Traceback (most recent call last):
  File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.4/site-packages/gnupg.py", line 753, in _read_response
    result.handle_status(keyword, value)
  File "/usr/lib/python3.4/site-packages/gnupg.py", line 284, in handle_status
    raise ValueError("Unknown status message: %r" % key)
ValueError: Unknown status message: 'NEWSIG'
[snip]

This issue is logged upstream against python-gnupg and has been fixed, but the fix hasn't made it into the current release yet. See the patch at https://bitbucket.org/vinay.sajip/python-gnupg/commits/1337e6ce364f.

I manually applied the patch to /usr/lib/python3.4/site-packages/gnupg.py because I didn’t feel like rebuilding the package with the patch included.
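
If you go the same route, it's along these lines, assuming you saved the upstream diff as newsig.patch (a hypothetical filename):

# back up the packaged module, then apply the upstream NEWSIG fix
cd /usr/lib/python3.4/site-packages
sudo cp gnupg.py gnupg.py.orig
sudo patch gnupg.py < ~/newsig.patch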

Now when I unplugged the OneRNG and plugged it back in, I noticed that rngd successfully started in the background:

rngd -f -n 1 -d 1 -p /var/lock/LCK..ttyACM0 -r /dev/stdin
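
That rngd invocation is the tail end of a pipeline.  Roughly, the shape is something like this (an illustrative sketch, not the exact onerng.sh command; the real script derives its whitening key differently):

# raw device output -> AES whitening -> rngd feeding the kernel pool
KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')   # throwaway 128-bit key for illustration
cat /dev/ttyACM0 \
  | openssl enc -aes-128-cbc -nosalt -K "$KEY" -iv 00000000000000000000000000000000 \
  | rngd -f -n 1 -d 1 -p /var/lock/LCK..ttyACM0 -r /dev/stdin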

The openssl stage does the AES whitening of the raw device output, which is why rngd reads from stdin.  Unfortunately, removing the device didn't result in the rngd process being killed. My udev-fu is a little lacking, so I hacked the onerng.sh script as follows:

--- /sbin/onerng.sh.orig        2015-06-20 21:27:33.000000000 -0700
+++ /sbin/onerng.sh     2015-07-04 16:31:45.478019740 -0700
@@ -158,7 +158,10 @@
 #      when something is removed kill the daemon
 #
 if [ "$1" = "kill" ]; then
-       if [ -e /var/lock/LCK..$2 ]
+       if [ -e /var/lock/LCK..ttyACM0 ]
+       then
+               kill -9 `cat /var/lock/LCK..ttyACM0`
+       elif [ -e /var/lock/LCK..$2 ]
        then
                kill -9 `cat /var/lock/LCK..$2`
        else

This made unplugging the OneRNG kill the rngd process. It's a hack: if anything else used the OpenMoko ttyACM device it would get killed too, but I don't have any such devices.  At this point I hit the "good enough for me" wall.
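
If I ever revisit it, the cleaner fix is presumably a udev remove rule that hands the script the correct kernel device name, something like the following (untested guesswork; the match keys are hypothetical, not copied from the installed rules):

# hypothetical remove rule; %k expands to the kernel device name (e.g. ttyACM0)
ACTION=="remove", SUBSYSTEM=="tty", KERNEL=="ttyACM*", RUN+="/sbin/onerng.sh kill %k"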

I verified that running

cat /dev/random > /dev/null

dimmed the LED on the OneRNG, and I was good to go.  Time to regenerate my ssh host keys 😉
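
As an extra sanity check, you can also watch the kernel entropy pool while draining it:

# should recover quickly while rngd is feeding the pool
watch cat /proc/sys/kernel/random/entropy_avail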

NAS Part 2

In December of 2010 I married the love of my life.  She brought forth into my home the woman's touch.  She also brought forth all her data!  My poor struggling original NAS was already 80% full.  There was no way I was going to rip all her movies, MASH episodes, and everything else onto my poor old NAS.  About this time I set out in search of new solutions.

In a previous life I was a systems administrator, and I once had a very bad day on the job.  When I arrived at work, one of the drives in our 18-drive Sun Solaris RAID5 array had failed.  Being the good administrator I was, I carefully replaced the drive and started a rebuild.  About halfway through the rebuild, another drive in the array failed.  RAID5 protects against one drive failure.  The second failure was too much for the array, and it went down never to come back up.

In hindsight, creating a RAID5 array with 18 drives was not a smart move, and we shouldn't have done it.  We had incorrectly convinced ourselves that the enterprise drives we were using were robust and that the likelihood of two drives failing at the same time was low.  What we hadn't fully considered was how much stress the rebuild would put on the other drives.

Haunted by that experience, I worried that if one of the drives in my RAID5 failed, another might fail during the rebuild and I would lose my data.  I had backups, so it wouldn't have been a disaster, but it would have been a major inconvenience.  Add to this the fact that I am a nerd, and I didn't want to take the risk.

Armed with this information I wrote a list of requirements for my new NAS:

  • At least 8 TB of storage
  • Ability to have 2 drives fail and no data loss
  • Hot swap capable
  • Still quiet and energy efficient

About this point in time I started reading about the various options available as far as operating systems and software are concerned.  I read about Unraid, ZFS, FreeNAS, OpenMediaVault, and others.  In the past I had worked a lot with Solaris and the *BSDs, and I have to admit I missed playing with that technology.  This pushed me towards OpenSolaris or FreeNAS so that I could use ZFS with a RAIDZ2 array.  RAIDZ2 would let me lose two drives and still have no data loss.  ZFS supports up to 16 exbibytes of data (giga -> tera -> peta -> exa), and it also added scrubbing capabilities to protect against bit rot.  All in all, it had a TON of features I wanted to play with that my Linux md RAID didn't have.
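
For the curious, building a six-drive RAIDZ2 pool is essentially a one-liner; something like this (pool and device names are illustrative and depend on the OS):

# six drives, any two of which can fail with no data loss
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
zpool status tank        # confirm the vdev layout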

I placed an order for the following components:

  • 1 x LIAN LI PC_Q25B Black Aluminum Mini-ITX Tower Computer Case
  • 1 x APC BE650G1 650 VA Back-UPS 650
  • 1 x SILVERSTONE 500W ATX Power Supply
  • 2 x 8 GB Crucial Ballistix DDR3 1600 Ram Sticks
  • 1 x ASUS C60M1-I AMD Fusion CPU C-60
  • 6 x Seagate Barracuda 7200 ST3000DM001 3TB 7200 RPM Hard Drives
  • 1 x Intel EXPI9301CT 1000Mbps PCI-Express Card
  • 1 x Silverstone PP05 Short Cable Set for Modular PSU

One of the coolest things about the C60M1-I was that it had 6 onboard SATA connectors so I wouldn’t need any sort of IO expansion card.

I assembled the system after all the components arrived.  I can't say enough good things about the Lian-Li case.  It is a quality piece of equipment and worth the price.  I have never had such a nice case before, and I will seriously consider them for my next PC build.

I built this server in May of 2013.


Overall I was very happy with this build except for the errors I made which I will outline below.

Pros:

  • ~ 12 TB of usable storage
  • Ability to lose two drives without loss of data
  • Lian-Li case/backplane is Hot Swap capable
  • Fast
  • Quiet
  • Doesn’t use much electricity
  • Better airflow = cooler drives (hover around 37 °C)
  • ZFS Awesomeness

Cons:

  • Lian-Li case can’t hold more than 8 drives
  • No ECC
  • No Encryption
  • Limited Expandability

Honestly, the biggest issues with this NAS stemmed from my lack of understanding of ZFS at the time I built it.  ZFS can do some really bad things if memory gets corrupted; see the ECC vs non-ECC RAM and ZFS discussion on forums.freenas.org for more information.
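
On the plus side, the scrubbing I mentioned earlier is trivial to run by hand or from cron (pool name illustrative):

zpool scrub tank         # walk the pool and verify every block's checksum
zpool status -v tank     # watch progress and see any repaired errors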

Another related issue is that my processor (AMD C-60) doesn't support the AES-NI instruction set.  This means I couldn't encrypt my drives and still have acceptable performance.
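
On Linux you can check whether a CPU advertises AES-NI with something like:

grep -m1 -o aes /proc/cpuinfo    # prints "aes" if the instruction set is present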

The lack of ECC and AES-NI could be fixed by moving to an AMD Kabini GX mini-ITX board, but as of January 2014 they still are not readily available.

At the core, all of these issues came about because I chose commodity hardware over server hardware.  My next build will overcome these limitations, but at the cost of more expensive components, more electricity usage, and more noise.  That is a post for another day.

Barry

NAS Part 1

As with many modern computer users, I have a large collection of digital data: pictures, manuals, backups, game data, etc.  During college, I had a few external hard drives and would routinely back up my data to them, but it was always a pain, and it meant I couldn't access all the data concurrently without plugging every drive into the various USB ports on my computer.

After graduating and having a "real" job, I decided to build my first NAS.  For those not in the know, a NAS is "Network Attached Storage": basically a computer with lots of hard drive space that sits on your network and allows other computers on the network access to its storage pools.

My first adventure into the world of NAS was composed of the following items:

  • 1 x APEX TX-381 Black Steel MicroATX case w/300 Watt PS
  • 4 x Hitachi GST 1TB 7200 RPM hard drives
  • 2 GB of Crucial 240-Pin DDR2
  • Intel mini-ITX motherboard with 4 SATA ports w/Atom 330 processor

The OS for this particular system was Ubuntu LTS.  I set up the drives using Linux software RAID (mdadm) in a RAID5 configuration.  This gave me a total of 3 TB of storage that I could export to my clients using Samba and NFS. It also meant that no data would be lost if any single drive failed.
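
The mdadm setup was along these lines (device names and filesystem are illustrative from memory, not my exact commands):

# create a 4-drive RAID5 array, format it, and mount it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage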

I built this system in May of 2009 for ~$800.  It served me well until I replaced it in April of 2013.

Designing home storage solutions is about compromise, and this server was no different.  Here are some of the pros and cons of this particular build:

Pros:

  • Fairly Cheap
  • Provided 3TB of disk storage
  • Provided a dedicated storage NAS
  • Compact
  • Quiet
  • Low power consumption

Cons:

  • No hot-swap capability
  • Hard drives ran hot (would hit 40 °C at ambient temperature)
  • Linux's RAID wasn't as awesome as some of the competition
  • “Only” 3TB of storage started filling up!

These cons all led me to build my next NAS, which will be covered in another post.

Barry

Time Lapse Videos in Linux

I love watching time lapse videos.  Some time ago I discovered http://timescapes.org, and I enjoy watching some of the amazing work they have done.  It has even motivated me to experiment with making my own time lapse videos.

So far, the best information I have found on creating time lapse videos in Linux is at http://ubuntuforums.org/showthread.php?t=2022316 and http://ultrawide.wordpress.com/2009/01/27/timelapse-photography-on-linux/.  CyberAngel's post on ubuntuforums is really awesome because he was kind enough to release his deflickering script.

So, by way of testing everything, I set up my tripod at the edge of our office, put my camera into manual mode, set up my intervalometer, and took 512 JPG images.

I copied these images off my CF card onto my computer and decided I wanted to encode them into an H.264 video.  This led me to follow the instructions at https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide so that I could get an H.264-capable ffmpeg.

After I had my ffmpeg working, I followed CyberAngel's advice and ran:

cd source_folder_of_pictures
mkdir resized
mogrify -path resized -resize 800x533 *.jpg
cd resized

I also ran CyberAngel’s script (which I have mirrored here) to deflicker the images.

Unfortunately I didn’t have good luck with the mencoder command, so I ended up running:

ffmpeg -start_number 1055 -start_number_range 512 -f image2 -i IMG_%d.JPG -vcodec libx264 -vf "scale=iw:trunc(ih/2)*2" -r 30 video.avi

instead.  My JPEG images started at IMG_1055.JPG and ran up to IMG_1568.JPG, hence the start_number and start_number_range.  If I didn't include the scale option to -vf, I got an irritating error: "height not divisible by 2 (800x533)".
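
On newer ffmpeg builds, the image2 demuxer can also pick the frames up with a glob instead of a number range; an untested variant:

ffmpeg -f image2 -pattern_type glob -i 'IMG_*.JPG' -vcodec libx264 -vf "scale=iw:trunc(ih/2)*2" -r 30 video.avi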

Without further ado, here is the result:

Not too bad.  Now I just need to snap some pictures of something more useful.  Maybe if this weekend is nice I will get to make time lapse videos of the sky instead of the dog!

Focus Stacking in Linux

One of my favorite areas of photography is macro photography.   To me, taking images of tiny things is just freaking awesome.  I don't know why I love it, but so far it is my favorite type of photo to take and to edit.

Macro photography also poses many challenges.  For instance, macro lenses and macro methods tend to produce a very shallow depth of field, meaning the amount of the picture actually in focus is very thin.  If you want to take photos that have lots of stuff in focus, you have two options:

  1. Shoot with a very small aperture
  2. Focus stacking

Let's look at both of these options a little closer.

Shoot with a very small aperture: As the hole in the camera/lens gets smaller, the depth of field gets deeper.  This means that shooting at f11 will have a much larger depth of field than shooting at f2.8.  Perfect.  Just shoot at small apertures and stuff will be in focus.

There are some issues with this, though.  You still may not have enough depth of field: if you are shooting something really thick and you want it all in focus, f11 (or smaller) may not be enough.

Some lenses may also not be at their best at f11.  Maybe your lens looks best at f5.6 and you don't want to go all the way to f11.

Add to this the fact that f11 requires a lot of light, and you realize that shooting with a small aperture may not get you the result you want.

Focus Stacking: Focus stacking is the process of taking many pictures with different elements of the image in focus and then combining the images so that the resultant image is in focus.

It's a little related to HDR: in HDR one combines the dynamic ranges of many images into one HDR image, while in focus stacking one combines the in-focus regions of various images to render one image.

In this post we will talk about how I focus stack in Linux.

The Software: I do all my photography work in Linux (Gimp/AfterShot Pro/darktable/Hugin/QtPfsGui/etc).  I prefer to work with open-source software, but I am not against purchasing software when a good alternative doesn't exist in the OSS world.  For focus stacking itself, the only software package I use is Hugin.  Although it is used primarily for panorama stitching (which I used to make the Venice stitch), it also works well for focus stacking.

The Process: When shooting Macro photography for focus stacking I use a tripod and rails to adjust the area of the shot that is in focus.  For the example used in this post, I shot 9 images:

These nine images were shot in RAW on my Canon 30D.  I copied them off my CF card into a directory on my Linux box and ran the following command:

dcraw -T *.CR2

There are lots of different options you can pass to dcraw to make it do different coolness for you, but for the sake of this post I kept the defaults.  This command reads in all the CR2 files and generates TIFF files from them, which is needed because Hugin works best with TIFF files.
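
If you want to go beyond the defaults, a couple of dcraw flags worth knowing (straight from its man page, not something this example depends on):

# -w: use the camera's white balance; -q 3: best-quality interpolation
dcraw -T -w -q 3 *.CR2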

Next we need to align the images so that they are all exactly over each other.  To do this I invoke:

align_image_stack -a aligned_ -v -m -g 10 -C *.tiff

This command writes new aligned_* images that are cropped and stacked on top of one another (I had to add -g 10 for align_image_stack to be able to align my images).

Now we just need to do the dirty work of actually focus stacking the images.  This could be done manually in Gimp by carefully erasing and layering the images, but that sounds like a pain and I am lazy, so instead I use Hugin's enfuse to focus stack the images for me with the following command:

enfuse -o result.tiff --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --hard-mask aligned_*

Hugin/enfuse now does the heavy lifting for us and combines our individually focused sections into one masterpiece image, hopefully with everything in focus. As with align_image_stack and dcraw, enfuse takes many command line options that you can use to tweak its results.

So how does the output look?  Well this is the result:

Not too shabby, but it has a bit of a halo effect around it.  I am still working on figuring out how to get rid of it.  Any input is welcome, and I will update this article as I learn more.

Thanks,

Barry