Monday, December 22, 2014

Completed My Master's Degree in Electrical Engineering

Needless to say, things have been quiet on here for a while now. I've been deep in the thick of finishing my master's thesis at Cal Poly for my MS EE degree, and it's finally done!

I've now graduated and am starting the job hunt! If your company is hiring in the Bay Area, CA, drop me a line.

Below are recordings of me presenting the first chapter of my thesis at the ARRL/TAPR DCC conference in Austin, TX, and then of me defending my entire thesis in front of my committee in San Luis Obispo.



A few small tweaks remain before I submit this paper to the Cal Poly library for archiving, but it's essentially done.

Now that I'm not spending all of my work time writing, I'm looking forward to maybe spending some of my play time writing. I've been involved in a lot of interesting projects lately; I just haven't had the will to write about them until now. You can look forward to that.

Friday, October 3, 2014

Unboxing the Atmel SAM4L8-XSTK Dev Kit


Video:


Thanks again to Atmel for giving this to me yesterday. I had a good time at ARM TechCon with them and everyone else.

Friday, August 1, 2014

Using Squid StoreIDs to optimize Steam's CDN

As part of my new router build, I'm playing around with transparent HTTP caching proxies.

Caching proxies are a really neat idea; when one computer has already downloaded a web page or image, why download it again when another device right next to it asks for the same image? Ideally, something between all of the local devices and the bottleneck in the network (namely, my DSL connection) would intercept every HTTP request, save all of the answers, and interject its own responses when it already knows the answer.

My setup is pretty typical for caching proxies. On my router, I have a rule in iptables that redirects any traffic from my local 10.44.0.0/20 subnet headed for the Internet on port 80 to port 3127 on the router, where I have a Squid proxy running in "transparent" mode.
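In iptables terms, that rule looks something like this (eth1 standing in for the LAN interface, which will vary per setup):

    # Redirect LAN-originated HTTP to the local Squid instance
    iptables -t nat -A PREROUTING -i eth1 -s 10.44.0.0/20 \
        -p tcp --dport 80 -j REDIRECT --to-ports 3127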

The basic transparent proxy deserves a post of its own once I finish polishing it, but for right now I'm writing this mainly as notes to myself, because the lead time on the next part is going to be pretty long.

My protocol-compliant caching proxy seems to be able to answer about 2-5% of HTTP requests from the local cache, which means those responses come back in the 1-3ms range instead of 40-200ms. 2-5% is nothing to sneeze at, but it isn't particularly profound either. Squid does allow you to write all kinds of rules about when to violate a response's cacheability metadata, or to simply make up your own. A common rule is:

# fields: regex, min (minutes), percent of object age, max (minutes)
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 3600 90% 43200

which tells Squid to cache any image missing cacheability headers for 90% of its current age, clamped between 3,600 and 43,200 minutes (2.5 and 30 days). This opens a whole rabbit hole of how deeply you want to abuse and mangle cache-control headers in the name of squeezing out a few more hits. I've played that game before, and it usually ends up causing a lot of pain, because incorrectly cached items tend to break websites in very subtle ways...




Another problem with caching proxies is the opposite of the over-caching just mentioned. Where that was an issue of a single URL mapping to different content over time, there is also the issue of multiple URLs mapping to the same content.

This is very common; any large content delivery network will have countless different servers each locally serving the same content. The apt repositories for Ubuntu or Debian are perfect examples of this: universityA.edu/mirror/ubuntu/packagename and universityB.edu/mirror/ubuntu/packagename are the same file, even though they have different URLs.

Squid, in version 3.4, has finally added a feature called StoreID which lets you work around this multiple-URLs-to-one-content problem. It allows you to have Squid pass every URL through an external helper program that rewrites each URL, with the goal of a one-to-one mapping between cache keys and content. I decided to play with this on the Steam CDN.

When you download a game in Steam, it is actually downloaded as 1MB chunks from something on the order of four different servers at once. In the menu Steam - Settings - Downloads - Download Region you can tell Steam which set of servers to download from, but exactly which servers it uses is beyond your control.

A typical Steam chunk URL looks like this:

http://valveSERVERID.cs.steampowered.com/depot/GAMEID/chunk/CHUNKID

  • SERVERID is a relatively small number (two or three digits) and identifies which server the chunk is coming from. At any one time, a Steam client seems to be hitting about four different servers. valve48 and valve271 are two that I'm seeing a lot in San Jose, but the servers seem to come and go throughout the day.
  • GAMEID is a number assigned to each game, although I've seen some games move from one ID to another halfway through the download. The largest game ID I've seen is in the high 50,000s. I strongly suspect that these are sequentially issued.
  • CHUNKID is a 160-bit hex number. Presumably a SHA-1 checksum of the chunk? I haven't bothered poking at it.
The main takeaway is that, even when I have three computers downloading the same update, each of them hits a different set of servers for each chunk, so I'm only seeing 25-40% cache hits across three requests for the exact same {GAMEID, CHUNKID} pairs.

Using Squid's new StoreID feature, I'm able to map each {SERVERID, GAMEID, CHUNKID} vector down to just {GAMEID, CHUNKID}, and now see 100% cache hits for every download after the first. With the VM I'm using for testing, I'm seeing about 20MB/s throughput for anything that has already been accessed by any other system, and that is limited by the VM's NIC maxing out. I expect to see close to Gigabit throughput once I move this to my router with its SSD.
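The setup boils down to a few lines of squid.conf and a little helper script. What follows is a minimal sketch rather than my exact setup; the helper path and the normalized hostname are placeholders (and see the hindsight note below about choosing that hostname):

    # squid.conf (3.4+)
    store_id_program /etc/squid3/steam_storeid.sh
    store_id_children 5 startup=1
    acl steam_cdn dstdomain .cs.steampowered.com
    store_id_access allow steam_cdn
    store_id_access deny all

The helper reads one "URL [extras]" line per request on stdin and answers with either a rewritten cache key or ERR to leave the URL alone:

    #!/bin/sh
    # Collapse valveNN.cs.steampowered.com down to one consistent
    # hostname so every mirror shares a single cache entry.
    while read url extras; do
        case "$url" in
            http://valve*.cs.steampowered.com/depot/*)
                echo "OK store-id=$(echo "$url" | \
                    sed 's|^http://valve[0-9]*\.cs\.steampowered\.com|http://steam.cs.steampowered.com|')"
                ;;
            *)
                echo "ERR"
                ;;
        esac
    done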



In hindsight, I think rewriting all the URLs to a consistent steamX.cs.steampowered.com is a poor choice. If you're going to rewrite URLs, you may as well go all in and rewrite them to an invalid hostname, so there's no chance of colliding with some future change on Valve's part. A rewrite to something like valveX.cs.steampowered.squid avoids any possible future namespace problems. I really hope the documentation for StoreID catches up and starts presenting some best practices, because short of reading the code, I'm finding the documentation a little lacking...

Related rant: I really wish the Internet DNS system codified a top-level domain for site-local use the way IPv4 did in RFC 1918 with the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 subnets. There exists a draft RFC from 2002 proposing "private.arpa.", but I'd like to see a shorter TLD like "lan." I personally use "lan.", but with how ICANN keeps talking about making TLDs a free-for-all, I dread the day that they make "lan." live.

In the end, the drag here is that Squid 3.4 is so new that there aren't any packages for it in Ubuntu or Debian. Even Debian's bleeding edge is at 3.3.8. It's obviously possible to compile and run Squid 3.4.6 on your own, but I really hate maintaining software outside of the package manager unless I absolutely have to. I don't see myself using this new StoreID feature until Ubuntu 16.04, unless Debian packages it really soon in Jessie and I'm somehow convinced to switch my router back to Debian.

Saturday, July 12, 2014

WNDR3800 External Antenna Mod

How Size Really Matters

As I mentioned before, my previous primary router was a Netgear WNDR3800. A good piece of WiFi kit that I've simply outgrown as my main router, it now gets to serve as my network's main access point.
Unfortunately, I've always been a little disappointed with its range as an access point. Throughput inside the apartment is great, but 10' beyond the front door you're toast. Part of this is the anemic 50mW it puts out on 5GHz, but I like to blame the tiny internal antennas more than anything else. I can understand the appeal of the "slick" look from internal antennas, but I've never been one to go for popular aesthetics, so I figured I'd finally fix that.

This is a popular mod for the 3800. All you need is a pair of u.fl to RP-SMA pigtails and two dual-band RP-SMA WiFi antennas, both of which you can get on eBay for a total cost around $10. I've even seen some pre-packaged "WNDR3700/3800 external antenna mod" kits for sale.
Getting into the 3800 isn't quite as easy as the classic WRT54GL, but all you need is a T9 torx driver for six screws, and the four rubber feet are actually snap-in, not adhesive, so they go back in quite nicely.
The stock antennas are crazy small.
Seriously. They are tiny. They are foam taped to the inside of the case, so they're real easy, if a little destructive, to peel off.
I then drilled two 1/4" holes in the top of the case, centered in the short way and 1.3" from the two sides in the long way. The plastic is surprisingly soft and not brittle, which was a relief since I was afraid of shattering the case while drilling.

ANTENNA PLACEMENT HERE IS UTTERLY CRITICAL!

If you don't mount the antennas perfectly symmetrically, you'll always suffer from them fundamentally lacking symmetry, which would drive me nuts.
I had been under the impression that the 3800 had four internal antennas, and while there are four connectors on the board, it only uses two of them. The two red boxes are u.fl connectors as I expected, but the blue-boxed connectors left me a little befuddled.
They sort of look like u.fl connectors without their center pin populated?  Anyone know what these are?
In the end, I've taken a sleek router and given it that solid industrial two-giant-antennas-sticking-out-the-top look. What's not to like?

Quantitative measurements are... underwhelming. At a fixed distance, I saw no measurable change in RSSI from before to after... Qualitatively, it seems to have slightly longer range, but nowhere near what I expected. The 15cm pigtails I got were a little long, so I might replace them with much shorter ones to reduce loss there, and I never put much faith in either $3 eBay WiFi antennas or RSSI readings from consumer devices anyway. A little disappointing, but not a failure either. Having moved away from San Luis Obispo, I need to figure out how to sweep these antennas now that I've lost access to Cal Poly's microwave lab...

Would I recommend performing this mod? Having failed to disprove the null hypothesis, I can't really say. If you have the parts sitting around, go for it, but it may not be worth going out and buying new parts for. Has anyone seen improvements from this kind of mod before?

Friday, July 11, 2014

Building My Own Router - Hardware

For a long time, my apartment's main router has been a WNDR3800. It's a great run-of-the-mill dual-band Gigabit SOHO router that supports OpenWRT, and I've been playing with CeroWRT on it, but my network has been outgrowing its capabilities for a while now. With some assistance from my buddy Sean, I've put together an Atom-based router that I'm quite happy with.

This post will mainly be a hardware run-down for the build. Documenting my network topology and why it needs a router of this caliber is a post for another day. Until then, know that you can run pfSense as a great router OS on this box, but I personally am using Ubuntu as the basis for an entirely hand-configured software stack.

Parts list:



The heart of the router is the D2500CCE motherboard. This is a fantastic router board with its dual Intel Gigabit Ethernet adapters. It comes with a PCI and a mini-PCIe expansion slot, the latter of which I loaded with a WiFi card that I'm going to use as one of the nodes in my ad-hoc mesh projects. The AR5B95 can't be an access point, so I still need my WNDR3800 to act as an access point trunked off of this router. If and when I upgrade to higher-end or more access points, they'll be a drop-in replacement.

The dual-core Atom comes with a decent stock heat sink, which lets you almost get away with passive cooling, but I opted to install a single 4cm fan for peace of mind.
This thing is just loaded with I/O too: seven USB ports in total, two RS-232 ports on the back, another two serial ports on headers, a parallel port, and even an LVDS port for an LCD.

The graphics support in 64-bit Ubuntu for this motherboard is garbage. Not a huge deal, since I only plugged a monitor into it to run the OS installer, but you won't have a good time trying to install any kind of GUI on this. To get anything to display on an external monitor at all after you run the installer, you need to disable LVDS by appending the "video=LVDS-1:d" kernel argument. Good luck doing that if you didn't install an SSH server...
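For anyone else fighting this, the usual place to hang that argument is GRUB's config (this assumes a stock Ubuntu GRUB install):

    # /etc/default/grub -- then run "sudo update-grub" and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet video=LVDS-1:d"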
The M350 case makes for a nice stand-alone router case. I thought long and hard about building this all around a 19" rack mount enclosure, but opted for the flexibility of a normal case. It comes with both a standard 1/4" mounting hole for the power barrel connector and a punchout for a WiFi antenna which I used.

The M350 does only come with one hard drive bracket, so I had to order a second one separately for the fan. The stock mounting screws for the brackets were garbage; I've already stripped one by hand and had to drill it out, which was annoying. You'll want to replace them if you have the spare hardware available.
The front of the M350 has a removable cover (Which only comes off after removing the top! You'll break the plastic tabs otherwise...) which exposes another 4cm fan mount and a daughter board with the power switch and two USB ports. These USB ports are meant for hidden USB dongles, since there's no way to get at them without disassembling the entire case. Since I've already got WiFi on the Mini-PCIe slot, I might eventually install a Bluetooth dongle on these.

Of course, the stock power LEDs were bright blue, so the first thing I did was desolder them and replace them with low intensity red 3mm LEDs.
The PicoPSU is an interesting little board that plugs straight into 20-pin ATX power connectors (which still works in the 24-pin socket on the D2500 motherboard). It takes 12V input on a 2.5x5.5mm barrel connector and converts that into the multitude of voltages an ATX power connector needs, plus a single Molex and a single SATA power connector. You're going to need a pretty beefy 12V power supply to pull 80W from this thing, so the 2.5mm barrel connector isn't wrong, but my system only draws ~12W and I've standardized on 2.1mm barrel connectors for my apartment's 12V system, so I replaced the input connector. I've already got a 12V 12A power supply running my main networking stack; you'll need to make sure you have something capable of powering yours.

The network design and software stack I run on this deserve one if not several posts of their own, but the basics are that one NIC is used as the WAN uplink and the other is my primary LAN adapter, with every other subnet encapsulated in VLAN tags that get broken out by my 802.1Q-aware managed switches and access point. I've been real happy with the performance so far; when performing a 100Mbps transfer between subnets routed through this single trunk line, the Atoms loaf along at about 88% idle.
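As a rough sketch of what the LAN side looks like in Ubuntu's /etc/network/interfaces (interface names, VLAN IDs, and addresses here are placeholders, not my real config):

    # Requires the "vlan" package for the eth1.N sub-interfaces
    auto eth1
    iface eth1 inet static
        address 10.44.1.1
        netmask 255.255.255.0

    # One of the other subnets, carried as VLAN 10 on the same trunk
    auto eth1.10
    iface eth1.10 inet static
        address 10.44.10.1
        netmask 255.255.255.0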

Friday, July 4, 2014

Building a Stratum 1 NTP Server

Despite all indications to the contrary, I am in fact still alive. Things have been quiet on here because I've been spending the last two months recovering from the Wildflower Triathlon and writing my thesis, so not much to blog about other than writing about how I'm writing about my thesis, which would be a little meta.

As a quick sanity break, I went back to playing with my apartment's time network, which I've already shown how to configure multicast discovery on. Now that all of my NTP clients can automatically discover my NTP servers, even across subnets due to my apartment's network supporting multicast routing, the next logical step is to improve the actual NTP servers.

Since my NTP servers are drawing their time from a vote of multiple lower-stratum time servers on the Internet, the limiting factor in how closely I can synchronize to them is my apartment's Internet connection. Highly variable latency and an asymmetric connection put the lower limit on how close my NTP servers can get to actually correct at about 5ms (which is perfectly FINE for any reasonable person, BTW).  Given these limitations on Internet-based time servers, the next logical step is to define time locally myself. There are multiple ways to do this:
  • Building a WWV receiver 
  • Physically synchronizing an atomic clock and transferring time to my apartment
  • Using a GPS receiver to derive local time from the GPS constellation
This third option is one of the more reasonable solutions to an entirely unreasonable problem, so I decided to go with that as a first pass.

I long ago somehow ended up with a Motorola OnCore GPS engine that was missing its antenna. Luckily, MCX active GPS antennas can be had on eBay for <$10, so once that problem was solved I could play with an ancient industrial-grade GPS receiver. This thing not only supports the standard TTL-level NMEA sentences that tell you where you are and what time it is, but also has a one-pulse-per-second output and supports feeding in differential GPS corrections for a fixed "time-keeping" mode.


As a quick first test, I built a quick adapter board from the GPS engine's 2x5 pin header to a standard FTDI serial cable, with a diode and a 0.3F capacitor (yes, not microfarads; 0.3 actual farads) to keep the SRAM from losing the current GPS almanac every time I power cycle it (which cuts the GPS lock time down from 12.5 minutes to 20-30 seconds). The 1PPS output from the receiver is connected to the CTS line on the FTDI cable.

Configuring GPSd is ridiculously simple: tell it which serial port your GPS receiver is connected to, and it figures out the baud rate, which pin you have 1PPS connected to, etc. Then just point your NTP server at the two special loopback addresses where GPSd makes the time information available.
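The whole thing amounts to something like this; the device path is whatever your serial adapter enumerates as, and the type 28 entries are NTP's shared-memory refclock driver, which is how GPSd hands time to ntpd:

    # /etc/default/gpsd
    DEVICES="/dev/ttyUSB0"
    GPSD_OPTIONS="-n"

    # /etc/ntp.conf -- SHM unit 0 is the NMEA time, unit 1 is the 1PPS edge
    server 127.127.28.0 minpoll 4 maxpoll 4
    fudge  127.127.28.0 time1 0.200 refid GPS
    server 127.127.28.1 minpoll 4 maxpoll 4 prefer
    fudge  127.127.28.1 refid PPS

The time1 0.200 fudge compensates for the serial delay mentioned below.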


The NMEA sentences are off by 200ms due to the delay through the serial port, so I'm thinking I might tag that source as stratum 8 or something, so an offset that large doesn't confuse the NTP selection algorithm.

Reading 1pps over USB isn't ideal, and is likely the source of most of my clock jitter. Initial measurements are putting my clock jitter around the 0.1ms range, but I really haven't gotten a chance to do too much analysis of it.  Ideally, I'd use an actual hardware serial port, but I need to build the RS-232 translators first...  My new router I'm building has four hardware serial ports, so I won't have any excuse.

Wednesday, March 26, 2014

Range Testing NanoBridge M5s

In preparation for the Wildflower Triathlon in May, I've been testing many of the major components of the computer network I'm helping build for communications. One of the big unknowns this year is the unusually large number of mid-range microwave links, which I've never had too much experience with. WDS links across buildings or small parks I'm comfortable with, but we are looking at streaming HD video through WiFi hops that are multiple kilometers long, and I've just never played with links like that before.
To gain some experience before the final deployment, I bought a pair of Ubiquiti NanoBridge M5 nodes. These are 200mW 5GHz WiFi access points with optional high-gain dishes, and at $80 a piece are pretty reasonable for what you get.

The NanoBridges come with either 22dBi or 25dBi dishes, pole mounting hardware, and a passive PoE injector. Luckily, Ubiquiti uses the de facto standard passive PoE injector pinout, so instead of the OEM injectors I often use the cheap 2.1mm barrel PoE injectors you can get on eBay for $2, which let me power the nodes off 12V batteries for portable operations. I selected the lower-gain 22dBi dishes because I plan to be moving these around a lot, so the smaller dishes are a nice convenience.
I think the NanoBridges claim to have a 20km range, which isn't an unreasonable number if you look at the link margin calculations, but since I've never done a deployment like this before, I wanted to do a real-life test before trying it across Lake San Antonio. I gave one of my buddies, Robbie, a six pack of beer and one of the two dishes and told him to point it at Cuesta Ridge north of SLO. I then drove up the horrific road to the top of the ridge and pointed the second dish back at his apartment. Once we got both of them turned on and roughly pointed in the right direction, the link came right up at full speed.

At 6.5km, this test is longer than any of the links that we're going to need for Wildflower, so I was very happy when the link came up without any fine tuning of either dish's aim. I was even able to aim one of the dishes 30-40 degrees off center and the link stayed up, although with a smaller link margin.
AirOS, which is what most Ubiquiti nodes run, has a really neat spectrum analyzer mode that I used to help select channels for the test. The first screen shot is from Cuesta Ridge looking at the second node down the hill beaconing its SSID.
This is a capture of my MacBook Air transferring files from my apartment's AP using a 40MHz channel.
This is another spectrum scan from the radio site on top of Cuesta. Interestingly, none of these show the distinctive 802.11 spectral shape; they're other modulation types from non-802.11 equipment using the 5GHz ISM band. I thought they were really interesting.

So in short, I'm really happy with my NanoBridge M5s, which couldn't be much easier for setting up a ~80Mbps effective throughput link between any two points with line of sight. 6.5km was no problem, and shorter links will only enjoy a larger power margin to allow for non-perfect aiming, rain fade, etc.

Thursday, March 20, 2014

Pushing VLAN Tags Through Unmanaged Switches

Now that it's Spring Break and I'm in San Luis Obispo, it's full speed ahead on building the communications network for the Wildflower Triathlon that CPARC supports every year. Wildflower is a very big event in the middle of nowhere (Lake San Antonio), so we have to build quite a bit of infrastructure to support the operation.

This year, I was designated as the computer network zonie, so it's my job to make sure that there's IP connectivity between all the major sites in the network. This involves building a computer network that includes a couple Internet hand-off points, multiple routers, several medium-range (2-5km) microwave links, QoS enforcement for a few hundred devices to support VoIP and streaming video while sharing a 15Mbps Internet uplink, a couple 802.1Q VLAN trunks, etc.

Needless to say, we are building a network beyond the budget we're being given, so duct tape and Linksys devices are being applied liberally throughout this project.

One problem we've encountered this year is that we need a few network devices to be on the same layer two network while being two miles apart. These two sites don't have line of sight, so we're using two microwave links to bounce off a third site between them, while these links also need to carry a few other L2 domains. A perfect application for VLAN tagging.

The problem is that this middle site needs to run several repeaters and all of its network gear off of a generator and batteries for the whole weekend. The traditional technique of using managed rack-mount switches on every hop of a VLAN trunk is problematic, since a single rack-mount switch exceeds our power budget for all the network gear at the middle radio site. Ideally, we'd find a small low-power managed switch to use, but I really want to just use a 5-port dumb workgroup switch in the middle, since it runs straight off of 12V and only consumes a few watts.

Conventional wisdom dictates that you can NOT move 802.1Q VLAN tagged traffic through unmanaged network switches.


Plot twist: apparently this is wrong. I took the time to set up a test where I used two L2 managed switches to tag and untag Ethernet traffic, and then put various unmanaged switches between them on their trunk line, and the VLAN tunnel kept working... This is really unexpected; several networking techs had previously told me that what I did wasn't possible, since the MTU of Fast Ethernet switches is only 1514 bytes and the extra four bytes added by 802.1Q would break things.

As far as I can tell, none of the possible failure conditions we came up with cropped up during testing:

  • Dropping the 1514+4 frames.
  • Crashing
  • Truncating the last four bytes
  • Severely lowered throughput (The switches even continued to perform MAC learning)
Taking my experiment a step further, I plugged the unmanaged switches between a pair of GigE Linux systems and bisected the maximum L2 MTU that each switch could handle. The minimum needed for standard Ethernet is 1514; for VLAN tags, 1518:
  • SD216 v2.1 - 16 port Linksys Fast Ethernet switch - 1532
  • SR224 - 24 port Linksys switch - VLANs worked, but physically destroyed before maximum MTU could be measured.
  • ASW308P vA2 - 8 port AirLink 101 PoE switch - 1532
  • FS608 v3 - 8 port NetGear switch - 1532
  • DS104 - 4 port dual-speed hub (!) - at least 4014. NIC MTU limited further testing
So it would appear that the standard MTU for Fast Ethernet switches isn't 1514 but actually 1532, which leaves a comfortable margin for the extra four bytes needed for 802.1Q tagging. Am I missing something? I really thought this wouldn't work before I tested it.


For reference, here are the layer 3 MTUs of the rest of the hardware I used for these experiments:
  • RTL-8139 Fast NIC: 1500 L3
  • BCM5722 GigE: 1500 L3
  • RTL-8169 GigE: 7152 L3
  • Intel 82571EB dual-GigE: 9216 L3
  • RTL-8111G GigE: 4080 L3
  • NanoBridge M5: 2024 L3 (set via web interface)
I didn't bother testing whether anything allowed a larger than 14-byte Ethernet header with these L3 MTUs, so you may have a bad time trying to run VLANs with the MTU cranked all the way up. I also didn't bother researching device drivers, so you may be able to push these higher with non-stock Debian drivers.

I also never saw ANY device successfully send ICMP responses for MTU discovery, so, for the record, jumbo frames still appear to be strictly a thing for specially designed networks.



My testing process involved increasing the Linux systems' MTUs via "ifconfig eth0 mtu ####" until I got a "SIOCSIFMTU: Invalid argument" error, then placing the unmanaged switch between the two systems running iperf and lowering the MTU on one of them until the TCP connection stopped black-holing into the switch. Every switch silently dropped the over-sized frames.
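Condensed, the procedure looks like this (addresses and interface names are examples):

    # On both systems: raise the MTU until the kernel refuses
    ifconfig eth0 mtu 1600      # "SIOCSIFMTU: Invalid argument" when too big

    # System A:
    iperf -s

    # System B: walk the MTU down until traffic flows through the switch again
    ifconfig eth0 mtu 1518      # 1518 bytes L3 + 14 byte header = 1532 byte frames
    iperf -c 192.168.1.1 -t 30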

Sunday, January 12, 2014

NTP Multicast Servers

Anyone who's followed this blog or gone back and read my past posts has probably picked up on my somewhat unhealthy obsession with time. Every time I get my hands on a new display or microcontroller, the first thing I do is build a clock out of it (one, two, three, four, five, six, seven, eight, nine). I've also dabbled in high precision time with my rubidium frequency standard and the multitude of frequency counters in my apartment. One area that I haven't had as much success with time in the past is keeping all of my Linux systems synchronized, and that's been bothering me.

I finally sat down this week and spent a few hours figuring out how to get multicast network time protocol working on Debian, since I've already put so much work into getting multicast routing working on my apartment's LAN and want to actually use it for something. There were a few critical pieces that all came from different places online, so I figured I'd collect them all in one place.

Typically, NTP operates by configuring each client with a server address to periodically poll for the current time. During each poll, the client tries to determine the remote server's time and how long it took for the server's response to get back to the client. For remote Internet servers, this time delay can be on the order of 20-100ms and change wildly based on network conditions. The traditional solution is to have a single local system poll several remote NTP servers, and then manually configure each local client to poll just this one local server, which in my case is my WNDR3800 router. This works acceptably well, but just isn't complicated enough to meet my desire to make life hard on myself.

Multicast NTP is based on multiple servers beaconing their local time towards a multicast address, which clients listen for to discover these servers. When a client discovers a new server, it first performs a standard unicast query with the server to determine network delay between the two systems, and then sits back and silently listens for future multicast beacons from the server to keep the client's local time in sync. All this does for me is allow my NTP clients to dynamically find new and forget old NTP servers on my network when I swap in and out computers. Another advantage where multicast really shines on huge computer networks is that after the initial propagation measurement, clients don't generate any additional network traffic, but only passively listen for the multicast beacons from the servers.


Configuring NTP Multicast Servers

The systems I've designated as my NTP servers do the typical polling of four unique servers drawn from the *.pool.ntp.org DNS entries, but then also act as both multicast servers and clients so they can discover and synchronize with each other.

Sample Config and ntpq output:
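A minimal sketch of the server side, trimmed to the interesting parts and using a placeholder multicast group (per the note below):

    # Upstream sources, as usual
    server 0.debian.pool.ntp.org iburst
    server 1.debian.pool.ntp.org iburst
    server 2.debian.pool.ntp.org iburst
    server 3.debian.pool.ntp.org iburst

    # Beacon our time to the LAN; note "ttl 3" is an index, not a hop count
    broadcast 239.192.0.1 ttl 3

    # Listen for the other servers' beacons on the same group
    multicastclient 239.192.0.1

    # Default restricts with "nopeer" removed so discovered servers are usable
    restrict -4 default kod notrap nomodify noquery
    restrict -6 default kod notrap nomodify noquery
    restrict 127.0.0.1
    restrict ::1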


A few tripping points in the configuration file:

  • The default restrict statements prevent discovering peers, so you need to remove the nopeer statement from the IPv4 and/or IPv6 restrict statements.
  • The broadcast statement defaults to "ttl 0", which means the 0th entry in the eight-value ttl array used by manycasting. This means that the argument to ttl isn't actually a time-to-live value, but an index into the default array of ttl values {1, 32, 64, 96, 128, 160, 192, 224}, which you can re-declare if that doesn't meet your needs.
  • Unlike broadcast clients, for multicastclient you need to specify the address to listen to.
  • /etc/ntp.conf isn't the config file that NTP actually uses. If you look at /etc/init.d/ntp, you'll discover that by default, ntpd uses the config file /var/lib/ntp/ntp.conf.dhcp, which is built by appending /etc/ntp.conf to any NTP server advertisements from the system's DHCP server. /var/lib/ntp/ntp.conf.dhcp isn't rebuilt when you run "service ntp restart", which is why your changes to /etc/ntp.conf don't appear to do anything until you reboot the entire system! I haven't bothered to figure out which init script triggers this config rebuild, but I have been doing testing on the /var/lib/... file and then manually copying the config back to /etc/ntp.conf
The selection of the multicast address is pretty arbitrary. What I have shown isn't the actual address I use, but anything in the 239.192.0.0/14 or ffx8::/16 organization-local multicast subnets is technically correct and should work fine as long as all of your systems match.

Monday, January 6, 2014

My First Adventure in Homebrewing Beer

Yes, I am in fact alive. Last quarter ended up being more work than I had expected, and teaching a section of Electronics Lab for Non-Majors sapped away a lot of the instructional energy that I normally blow off on this blog.

Every year, I try and find a new hobby to broaden my life experience with. 2012, it was getting back into Amateur Radio; 2013, it was shooting (mostly trap, though I'm slowly getting into rifle...); 2014, I'm thinking it's going to be homebrewing beer.

I've been toying with fermentation for a while. Most notably, last winter/spring I had a few false starts that ended with me throwing up in my bathroom and my friends laughing their asses off. After that, two experiences really convinced me to go all in and re-approach the hobby more rigorously this year.

  • I helped my buddy Robbie brew a batch of beer using a grossly expired Mr. Beer brew kit, and the beer still came out drinkable. It had the tell-tale soapy taste of a four-year-old malt extract, but it wasn't the worst beer I've had.
  • The worst beer was the second experience. Killing time with another buddy, Sean, late last quarter, we stopped at a liquor store and bought a variety pack of "Brewer's Mystery" beer from some local brewery I had never heard of, mainly because it was the only case of beer they had that we hadn't already been drinking in the last week. It was awful. It was the worst beer I had ever experienced in my life. It wasn't an "eh, this isn't very good" bad; it was an "I immediately regret opening this bottle and not leaving it for someone else" bad. We eventually managed to pawn it off at some random college party, but the trauma was already done.
So, when Robbie and I managed to brew a better beer with four-year-old malt and yeast than something seriously sold to us in a liquor store, it's hard not to figure that this is a hobby I can't be too unsuccessful at. Cue a trip to the local homebrew shop here in San Luis Obispo (Doc's Cellar), where I bought a copy of Palmer's How to Brew. It's a very nicely laid out book:
  • The first chapter is a crash course in turning the crank to brew your first batch of beer, without focusing on understanding too much of what's happening.
  • The rest of the book is then divided into three major sections for extract, partial grain, and full grain brewing. This means that you get to start simple and make the brewing hobby as complicated as you want to.
  • It is also available in website form, although I definitely don't regret having a hard copy to read in bed and have in the kitchen.
Thanks to my apartment kitchen already doubling as a wet lab for various projects (PCB etching, etc.), I was able to cobble together most of what I needed to brew my first batch. What little I still needed (mainly a hydrometer and a bottle capper), I put on my Christmas list for my parents, who always appreciate it when I ask for more tactile things than a textbook about knots for the holidays.


Starting My First Baseline Batch

Being the engineer that I am, the first thing to do is run a small batch of the most generic beer recipe I can come up with as a baseline to compare against. This is based on Palmer's Cincinnati Pale Ale recipe, with substitutions based on availability at Doc's Cellar and only doing a half batch at a time (2.5 gallons).
  • 1.5 lbs pale malt syrup ($3.38)
  • 1.5 lbs Munich malt syrup ($3.38)
  • 7g Safale US-05 yeast ($1.75)
  • 0.5oz Cascade hops for 60 minutes for bittering ($1)
  • 0.5oz Cascade hops for 10 minutes for finishing ($1)
Putting the expense of consumables for the first batch at $10.51, not counting the ~$100 of equipment I've collected/bought/been gifted over the last six months.
First thing is to boil the malt and hops. This pasteurizes the wort so that the only thing growing in it ends up being the yeast we want, changes the chemical composition of the malt, and steeps the hops. I did a partial boil with 1 gallon of water, which I then diluted to 2.5 gallons in the final fermentation vessel; that isn't ideal for steeping the hops, but it's the best my largest pot allows.
By adding half of the Cascade hops at the beginning of the boil, and holding the second half until the last ten minutes, I get to reduce half of it to just the bittering agents, while still retaining some of the aroma and more volatile flavoring from the second half.
Picture well into the boil. I didn't bother tracking boil-off water loss, so I'm likely going to end up with about 2 gallons of beer in the end, once you factor in water loss here and in the bottom sediment when I bottle in a few weeks.
The hour of boiling the wort breaks down lots of complex proteins that you want to settle out of the beer. Once that happens (which makes it look a lot like egg drop soup), you want to cold shock the wort to pull out more proteins, which mainly cause a hazy appearance when you chill the beer, but also do affect the long-term stability of the beer (which is unlikely to be a problem around college students...)
The pot is placed in one sink filled with cold water, while ice water is circulated through the copper tubing coil I fashioned from left-over copper tubing from a boiler experiment I did in the past.
Once the wort is cooled down to below 70°F, I dilute it to 2.5 gallons, pitch 7g of rehydrated yeast, and split the wort between a 3 quart carboy and a 5 gallon bucket so I can visually track a small sample of the batch.

The initial specific gravity of the batch ended up being 1.047, using only the malt syrup and no added sugar. This, minus the final gravity in a few weeks, can tell us how much alcohol my little yeasty buddies here created, which isn't going to be anything to phone home about. Then again, if I were doing this just to try and get drunk, I'd save the trouble and just drink more rum from my liquor closet...
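For reference, the usual back-of-the-envelope estimate is ABV ≈ (OG - FG) × 131.25, so if this batch finishes somewhere around a typical 1.010, that would work out to (1.047 - 1.010) × 131.25 ≈ 4.9% ABV.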

The first definite signs of life took about 16 hours. These last two pictures were taken 41 hours after pitching the yeast. Lots of churning going on in the wort, the airlock is bubbling every ~12 seconds, and all that nasty looking foam on the top is slowly growing (that is totally normal).

Now I need to wait three weeks, bottle this batch with a small addition of sugar for carbonation, and let it bottle condition for another few weeks. Come about mid-February, I'll get to test this first batch. Until then, I just need to leave it alone and figure out how I want to specialize in this hobby.