Sunday, January 11, 2015

ARKnet - Cupertino Emergency WiFi Network

So now that I'm graduated and looking for work full time (hey, you hiring in the Silicon Valley?), I've been spending my free time volunteering with the city of Cupertino as the technical lead building out a test network for a project we're calling the ARKnet.

In the name of emergency preparedness, Cupertino maintains a number of shipping containers throughout the city filled with emergency response equipment and supplies. Traditionally, if a disaster is large enough to knock out communications, the citizen teams that respond to activate these ARKs include Cupertino ARES members, who relay communications back to the city Emergency Operations Center (EOC). In addition to voice communications, we also provide 1200 baud packet for long textual messages via Santa Clara County's amazing AX.25 BBS system (callsigns W1XSC-W6XSC). 1200 baud is a very useful data rate compared to zero in an emergency, but we decided the time is ripe to start running some serious experiments moving data on the 5.8GHz band instead of 144MHz.
For this pilot, the city gave us funding for a minimalist network linking the city EOC to a single ARK. To accomplish this, we got permission from the owner of one of the few tall buildings in the city to erect a 90 degree beam width access point. This sector antenna is pointed towards the EOC and one of the ARKs, both of which have high-gain uplink radios pointed back at it. The EOC contains an applications server and an edge router to provide Internet access to the entire network.

Since emergency communications is a major part of the charter for ARKnet, we're only considering uses for the network where the entire application can be self-contained within the network. There's no interacting with the cloud when the Internet is down, so all our applications need to be running on a local server with emergency power. This network will be "Cloud-Free!™"
For the point-to-point links, we're using Mikrotik SXT 802.11ac routers (which come in both 90 degree beam width and 28 degree beam width variants). Mikrotik is unusual in that their products combine good long-haul WiFi and commercial-grade routing features in a single package. We looked at using Ubiquiti for the long-haul links (which I've used before), but being able to have the links also speak OSPF to make the network self-healing is attractive.
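To give a rough idea of how little that takes, here's a sketch of enabling OSPF on one of the RouterOS units; the router ID and subnet below are placeholders rather than our actual addressing plan:

    # RouterOS OSPF sketch (placeholder addresses, not ARKnet's real plan)
    /routing ospf instance set [ find default=yes ] router-id=10.45.0.1
    # Advertise the link's subnet into the backbone area so routes propagate
    /routing ospf network add network=10.45.0.0/24 area=backbone
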
Our test ARK isn't profoundly far away from our sector AP (about 1.25 miles), but we do need to punch through a large cluster of trees, so the link isn't great. After a few weeks of lab testing, this weekend was the big rollout where we brought up all the gear and tested the throughput from the ARK to the EOC, which ended up being about 10Mbps.

This kind of bandwidth is certainly usable for any of the applications we've come up with so far. The next big step for ARKnet is to start developing viable applications to run on this network that will be useful in a disaster.
Of course, I haven't been doing this alone. Thanks to all my fellow CARES volunteers who have helped make this first test-link possible!

Monday, December 22, 2014

Completed My Masters Degree in Electrical Engineering

Needless to say, things have been quiet on here for a while now. I've been deep in the thick of finishing my masters thesis at Cal Poly for my MS EE degree, and that is finally done!

I'm now graduated, and starting on the job hunt! If your company is hiring in the Bay Area, CA, drop me a line.

Below are recordings of me presenting the first chapter of my thesis at the ARRL/TAPR DCC conference in Austin, TX, and then of me in San Luis Obispo defending my entire thesis in front of my committee.



A few small tweaks remain before I submit this paper to the Cal Poly library for archiving, but it's essentially done.

Now that I'm not spending all of my work time writing, I'm looking forward to maybe spending some of my play time writing. I'm involved in a lot of interesting projects lately; I just haven't had the will to write about them until now. You can look forward to that.

Friday, October 3, 2014

Unboxing the Atmel SAM4L8-XSTK Dev Kit


Video:


Thanks again to Atmel for giving this to me yesterday. I had a good time at ARM TechCon with them and everyone else.

Friday, August 1, 2014

Using Squid StoreIDs to optimize Steam's CDN

As part of my new router build, I'm playing around with transparent HTTP caching proxies.

Caching proxies are a really neat idea; when one computer has already downloaded a web page or image, why download it again when another device right next to it asks for the same image? Ideally, something between all of the local devices and the bottleneck in the network (namely, my DSL connection) would intercept every HTTP request, save all of the answers, and interject its own responses when it already knows the answer.

My setup is pretty typical for caching proxies. On my router, I have a rule in iptables that any traffic from my local 10.44.0.0/20 subnet headed for the Internet on port 80 should be redirected to port 3127 on my router, where I have a squid proxy running in "transparent" mode.
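For reference, that redirect rule looks something like the following (the "br-lan" interface name is a placeholder for whatever your LAN-facing interface actually is):

    # Send LAN-originated port 80 traffic to Squid's transparent proxy port
    iptables -t nat -A PREROUTING -i br-lan -s 10.44.0.0/20 -p tcp --dport 80 \
             -j REDIRECT --to-ports 3127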

The basic transparent proxy deserves a post of its own once I finish polishing it, but for right now I'm writing this mainly as notes to myself, because the lead time on the next part is going to be pretty long.

My protocol-compliant caching proxy seems to be able to answer about 2-5% of HTTP requests from the local cache, which means that those responses come back in the 1-3ms range instead of 40-200ms. 2-5% isn't something to sneeze at, but it isn't particularly profound either. Squid does allow you to write all kinds of rules about when to violate a response's caching metadata or how to completely make up your own. A common rule is:

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 3600 90% 43200

which tells Squid to cache any image missing caching headers for 90% of its current age (with the lower and upper bounds given in minutes). This opens a whole rabbit hole of how deeply you want to abuse and mangle caching headers in the name of squeezing out a few more hits. I've played that game before, and it usually ends up causing a lot of pain because incorrectly cached items tend to break websites in very subtle ways...




Another problem with caching proxies is the opposite of the previously mentioned over-caching. While that was an issue of a single URL consecutively mapping to different content, there is the issue of multiple URLs mapping to the same content.

This is very common; any large content delivery network will have countless different servers each locally serving the same content. The apt repositories for Ubuntu or Debian are perfect examples of this: universityA.edu/mirror/ubuntu/packagename and universityB.edu/mirror/ubuntu/packagename are the same file, even though they have different URLs.

Squid, in version 3.4, has finally added a feature called StoreID which lets you work around this multiple-URLs-to-one-content problem. It allows you to have Squid pass every URL through an external filter program that mangles each URL to try and generate a one-to-one mapping between URLs and content. I decided to play with this on the Steam CDN.

When you download a game in Steam, it is actually downloaded as 1MB chunks from something on the order of four different servers at once. In the menu Steam - Settings - Downloads - Download Region you can tell Steam which set of servers to download from, but exactly which servers it uses is still beyond your control.

A typical Steam chunk URL looks like this:

http://valveSERVERID.cs.steampowered.com/depot/GAMEID/chunk/CHUNKID

  • SERVERID is a relatively small number (two or three digits) and identifies which server this chunk is coming from. At any one point, a Steam client seems to be hitting about four different servers. valve48 and valve271 are two that I'm seeing a lot in San Jose, but the servers seem to come and go throughout the day.
  • GAMEID is a number assigned to each game, although I've seen some games move from one ID to another halfway through the download. The largest game ID I've seen is in the high 50,000s. I strongly suspect that these are sequentially issued.
  • CHUNKID is a 160 bit hex number. Presumably a SHA1 checksum of the chunk? I haven't bothered poking at it.
The main takeaway is that even when I have three computers downloading the same update, each of them hits a different mix of servers for each chunk, so I'm only seeing 25-40% cache hits across three sets of the exact same {GAMEID, CHUNKID} pairs.

Using Squid's new StoreID feature, I'm able to map each {SERVERID, GAMEID, CHUNKID} vector to the correct {GAMEID, CHUNKID} and now see 100% cache hits for every download after the first. With the VM I'm using for testing, I'm seeing about 20MBps throughput for anything that has already been accessed by any other system, and that is limited by the VM's NIC maxing out. I expect to be seeing close to Gigabit throughput once I move this to my router with its SSD.



In hindsight, I think rewriting all the URLs to a consistent steamX.cs.steampowered.com is a poor choice. If you're going to rewrite URLs, you may as well go all in and rewrite them to an invalid hostname so there's no chance of colliding with some future change on Valve's part. A rewrite to something like valveX.cs.steampowered.squid likely prevents any future namespace problems. I really hope the documentation for StoreID catches up and starts presenting some best practices, because short of reading the code, I'm finding the documentation a little lacking...
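For anyone else poking at this, here's roughly the shape such a helper takes. This is a minimal sketch rather than the exact script I'm running: the regex, the made-up ".squid" hostname, and the squid.conf wiring in the comments are all illustrative.

    #!/usr/bin/env python
    # Minimal Squid 3.4 StoreID helper sketch: collapse per-server Steam chunk
    # URLs onto one canonical key so identical chunks share a cache entry.
    # Wired into squid.conf with something like (illustrative, check the docs):
    #   acl steam_cdn dstdomain .cs.steampowered.com
    #   store_id_program /usr/local/bin/steam_storeid.py
    #   store_id_access allow steam_cdn
    #   store_id_access deny all
    import re
    import sys

    # http://valveSERVERID.cs.steampowered.com/depot/GAMEID/chunk/CHUNKID
    CHUNK = re.compile(
        r'^http://valve\d+\.cs\.steampowered\.com(/depot/\d+/chunk/[0-9a-fA-F]+)')

    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        match = CHUNK.match(parts[0])      # first token on each line is the URL
        if match:
            # Strip the per-server hostname; ".squid" is an internal-only name,
            # so the store key can never collide with a real URL.
            sys.stdout.write('OK store-id=http://steam.cs.steampowered.squid%s\n'
                             % match.group(1))
        else:
            sys.stdout.write('ERR\n')      # leave every other URL untouched
        sys.stdout.flush()                 # Squid expects one reply per request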

Related rant: I really wish the Internet DNS system codified a top level domain for site-local use like IPv4 did in RFC1918 for the 10.0.0.0/8, 192.168.0.0/16 and 172.16.0.0/12 subnets. There exists a draft RFC from 2002 proposing "private.arpa.", but I'd like to see a shorter TLD like "lan." I personally use "lan.", but with how ICANN keeps talking about making TLDs a free-for-all, I dread the day that they make "lan." live.

In the end, the drag here is that Squid 3.4 is so new that there aren't any packages for it in Ubuntu or Debian. Even Debian bleeding edge is at 3.3.8. It's obviously possible to compile and run Squid 3.4.6 on your own, but I really hate maintaining software outside of the package manager unless I really have to. I don't see myself using this new StoreID feature until Ubuntu 16.04, unless Debian packages it really soon in Jessie and I'm somehow convinced to switch my router back to Debian.

Saturday, July 12, 2014

WNDR3800 External Antenna Mod

How Size Really Matters

As I mentioned before, my previous primary router was a Netgear WNDR3800. A good piece of WiFi kit that I've simply outgrown as my main router, it now gets to serve as my network's main access point.
Unfortunately, I've always been a little disappointed with its range as an access point. Throughput inside the apartment is great, but 10' beyond the front door you're toast. Part of this is the anemic 50mW it puts out on 5GHz, but more than anything else I like to blame the fact that it uses tiny internal antennas. I can understand the appeal of the "slick" look of internal antennas, but I've never been one to go for the popular aesthetic, so I figured I'd finally fix that.

This is a popular mod for the 3800. All you need is a pair of u.fl to RP-SMA pigtails and two dual-band RP-SMA WiFi antennas, both of which you can get on eBay for a total cost around $10. I've even seen some pre-packaged "WNDR3700/3800 external antenna mod" kits for sale.
Getting into the 3800 isn't quite as easy as the classic WRT54GL, but all you need is a T9 torx driver for six screws, and the four rubber feet are actually snap-in, not adhesive, so they go back in quite nicely.
The stock antennas are crazy small.
Seriously. They are tiny. They are foam taped to the inside of the case, so they're real easy, if a little destructive, to peel off.
I then drilled two 1/4" holes in the top of the case, centered along the short dimension and 1.3" in from each end along the long dimension. The plastic is surprisingly soft and not brittle, which was a relief since I was afraid of shattering the case while drilling.

ANTENNA PLACEMENT HERE IS UTTERLY CRITICAL!

If you don't mount the antennas perfectly symmetrically, you'll always suffer from them fundamentally lacking symmetry, which would drive me nuts.
I had been under the impression that the 3800 had four internal antennas, and while there are four connectors on the board, it only uses two of them. The two red boxes are u.fl connectors as I expected, but the blue-boxed connectors left me a little befuddled.
They sort of look like u.fl connectors without their center pin populated?  Anyone know what these are?
In the end, I've taken a sleek router and given it that solid industrial two-giant-antennas-sticking-out-the-top look. What's not to like?

Quantitative measurements are... underwhelming. At a fixed distance, I saw no measurable change in RSSI from before to after... Qualitatively, it seems to have slightly longer range, but nowhere near what I expected. The 15cm pigtails I got were a little long, so I might replace them with much shorter ones to reduce loss there, and I never put much faith in either the $3 WiFi antennas I get from eBay or RSSI readings from devices. A little disappointing, but not a failure either. Now that I've moved out of San Luis Obispo and lost access to Cal Poly's microwave lab, I need to figure out another way to sweep these antennas...

Would I recommend performing this mod? Having failed to disprove the null hypothesis, I can't really say. If you have the parts sitting around, go for it, but it may not be worth going out and buying new parts for. Has anyone seen improvements from this kind of mod before?