Monday, February 2, 2015

Designing and Building a 2m Low Pass Filter

I've been playing with the DRA818V modules that have been making quite a stir in the amateur radio world at the moment. I haven't gotten one on a spectrum analyzer yet, but I have reason to believe that it will require a low pass filter to be RF legal. I'll write more about that once I get a look at it, but I figured I'd first build myself a low pass filter in case I need it (if not for these modules, then for some other VHF project in the future).

My process for building a low pass filter went as follows:

  • Select the type of filter and cutoff frequency desired
  • Look up normalized coefficients in the ARRL Handbook
  • Divide these coefficients by the cutoff frequency (in MHz, since the tables are normalized to 1MHz)
  • Convert the inductances into turns on some core and capacitors into the nearest values
  • Build the filter.
Since I wanted this filter for 2m, the highest frequency I'm interested in passing is 148MHz, so I selected a cutoff frequency of 150MHz. In hindsight, this was a poor choice, since a -3dB point only 2MHz above the band made for lousy insertion loss. A better choice would have been 10% higher than the top of the band, so 148MHz * 1.10 ≈ 162MHz.

I decided to build a 5 pole T configuration Chebyshev filter with 0.1dB of ripple.

Looking this filter up in a random copy of the ARRL Handbook (1981, but any recent edition will do) gives the component values needed for a 50 ohm filter with a 1MHz cutoff. Since I'm also building for 50 ohms, the only conversion needed is frequency: divide each value by 162, the ratio of my 162MHz cutoff to the 1MHz normalization. A quick sketch of the arithmetic appears after the list below.

  • L1 = 9.126uH / 162 = 56nH -- 3 turns on 1/4" air core
  • L2 = 15.72uH / 162 = 97nH -- 5 turns on 1/4" air core
  • L3 = 9.126uH / 162 = 56nH -- 3 turns on 1/4" air core
  • C1 = 4364.7pF / 162 = 27pF -- 30pF on hand
  • C2 = 4364.7pF / 162 = 27pF -- 30pF on hand
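
Here's that sketch in Python; the normalized 1MHz / 50 ohm values are the handbook entries listed above, and everything else is just the division by 162:

# Scale the normalized 1 MHz / 50 ohm Chebyshev prototype to a 162 MHz cutoff.
# The prototype values are the 5-pole, 0.1 dB ripple entries quoted above.
F_SCALE = 162  # desired cutoff (MHz) divided by the 1 MHz normalization

norm_inductors_uH = {"L1": 9.126, "L2": 15.72, "L3": 9.126}
norm_capacitors_pF = {"C1": 4364.7, "C2": 4364.7}

for name, value in norm_inductors_uH.items():
    print("%s = %.1f nH" % (name, value / F_SCALE * 1000.0))  # uH -> nH

for name, value in norm_capacitors_pF.items():
    print("%s = %.1f pF" % (name, value / F_SCALE))
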
To convert the inductance values into solenoid designs, I have an old magazine article from the 70s that published a whole table of different diameters of air wound inductors.
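
If you don't have such a table handy, Wheeler's classic single-layer air-core approximation gets you in the ballpark. Here's a quick sketch; the winding length below is a guess for illustration, so measure your own coil and stretch or squeeze it to trim the value:

# Wheeler's approximation for a single-layer air-core solenoid:
#   L (uH) ~= (d^2 * n^2) / (18*d + 40*l),  with d and l in inches
def air_core_uH(turns, diameter_in, length_in):
    return (diameter_in ** 2 * turns ** 2) / (18.0 * diameter_in + 40.0 * length_in)

# Example: 3 turns on a 1/4" form, close-wound to roughly 0.15" long,
# which lands near the 56nH target for L1 and L3.
print("%.0f nH" % (air_core_uH(3, 0.25, 0.15) * 1000.0))
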
For extra blog cred, I built the filter in an Altoids tin with SMA connectors on each side. The four solder pads were formed from a strip of copper clad with three hacksaw cuts.
The next step was hooking this up to a vector network analyzer to see how far off I ended up, which is when I realized that 150MHz was a poor corner frequency choice. Reforming the inductors and taking a turn out of L2 got me closer, but I would have ended up with better performance if I had designed it all correctly from the start.


The SWR at 2m is about where I expected it to land, at 1.3.
The insertion loss at 2m isn't so great at 0.8dB. This would be improved by better component layout, but I built this filter kind of sloppily. It does suppress all the harmonics of 2m as expected; I only bothered to note that 200MHz was down ~23dB, so 440MHz will be well below that.
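
For a rough sense of how much of that 0.8dB is reflection versus plain dissipation in the components and layout, the mismatch loss implied by a 1.3:1 SWR works out to well under a tenth of a dB. A quick back-of-the-envelope:

import math

# Mismatch loss implied by the measured SWR; whatever is left of the 0.8 dB
# insertion loss has to be dissipative (components, layout, solder joints).
swr = 1.3
gamma = (swr - 1) / (swr + 1)                       # reflection coefficient magnitude
mismatch_loss_dB = -10 * math.log10(1 - gamma ** 2)
print("|gamma| = %.3f, mismatch loss = %.2f dB" % (gamma, mismatch_loss_dB))
# ~0.07 dB, so almost all of the measured loss is dissipation, not reflection.
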
In an ideal world, points 1 and 2 on the smith chart would fall at the center (a normalized 1+j0, i.e. 50 ohms), but 45+19j ohms is close enough for me.

Now once I get to the point of experimenting with keying up my DRA818V modules, I'll have a nice LPF on hand in case the harmonics do end up too high for on-the-air testing.

Sunday, January 11, 2015

ARKnet - Cupertino Emergency WiFi Network

Now that I've graduated and am looking for work full time (hey, are you hiring in Silicon Valley?), I've been spending my free time volunteering with the city of Cupertino as the technical lead building out a test network for a project we're calling ARKnet.

In the name of emergency preparedness, Cupertino maintains a number of shipping containers throughout the city filled with emergency response equipment and supplies. Traditionally, if a disaster is large enough to knock out communications, the citizen teams that respond to activate these ARKs include Cupertino ARES members who relay communications back to the city Emergency Operations Center (EOC). In addition to voice communications, we also provide 1200 baud packet for long textual messages via Santa Clara County's amazing AX.25 BBS system (callsigns W1XSC-W6XSC). 1200 baud is a very useful data rate compared to zero in an emergency, but we decided the time is ripe to start running some serious experiments moving data on the 5.8GHz band instead of on 144MHz.
For this pilot, the city gave us funding for a minimalist network linking the city EOC to a single ARK. To accomplish this, we got permission to erect a 90-degree beamwidth access point on one of the few tall buildings in the city. This sector antenna covers both the EOC and one of the ARKs, each of which has a high-gain uplink radio aimed back at it. The EOC contains an applications server and an edge router to provide Internet access to the entire network.

Since emergency communications is a major part of the charter for ARKnet, we're only considering uses where the application can be entirely self-contained within the network. There's no interacting with the cloud when the Internet is down, so all our applications need to be running on a local server with emergency power. This network will be "Cloud-Free!™"
For the point-to-point links, we're using Mikrotik SXT 802.11ac routers (which come in both 90-degree and 28-degree beamwidth variants). Mikrotik is unusual in that their products combine good long-haul WiFi and commercial-grade routing features in a single package. We looked at using Ubiquiti for the long-haul links (which I've used before), but being able to have the links also speak OSPF to make the network self-healing is attractive.
Our test ARK isn't profoundly far away from our sector AP (about 1.25 miles), but we do need to punch through a large cluster of trees, so the link isn't great. After a few weeks of lab testing, this weekend was the big rollout where we brought up all the gear and tested the throughput from the ARK to the EOC, which ended up being about 10Mbps.
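
As a rough sanity check on why the trees hurt so much at this distance, the free-space path loss alone for a 5.8GHz hop of that length is already substantial, and foliage only piles on top of it. The calculation below is just the textbook FSPL formula and makes no attempt to model the vegetation:

import math

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
def fspl_dB(distance_km, freq_MHz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_MHz) + 32.44

distance_km = 1.25 * 1.609  # our roughly 1.25 mile hop
print("FSPL at 5.8 GHz: %.1f dB" % fspl_dB(distance_km, 5800))
# About 114 dB before any foliage loss; every tree intruding on the Fresnel
# zone adds more on top, which is why the link tops out around 10Mbps rather
# than what the radios could manage with a clear shot.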

This kind of bandwidth is certainly usable for any of the applications we've come up with so far. The next big step for ARKnet is developing viable applications to run on this network that will be useful in a disaster.
Of course, I haven't been doing this alone. Thanks to all my fellow CARES volunteers who have helped make this first test-link possible!

Monday, December 22, 2014

Completed My Masters Degree in Electrical Engineering

Needless to say, things have been quiet on here for a while now. I've been deep in the thick of finishing my master's thesis at Cal Poly for my MS EE degree, and that is finally done!

I'm now graduated and starting on the job hunt! If your company is hiring in the Bay Area, drop me a line.

Below are recordings of me presenting the first chapter of my thesis at the ARRL/TAPR DCC conference in Austin, TX, and then defending my entire thesis in front of my committee in San Luis Obispo.



A few small tweaks remain before I submit this paper to the Cal Poly library for archiving, but it's essentially done.

Now that I'm not spending all of my work time writing, I'm looking forward to maybe spending some of my play time writing. I've been involved in a lot of interesting projects lately; I just haven't had the will to write about them until now. You can look forward to that.

Friday, October 3, 2014

Unboxing the Atmel SAM4L8-XSTK Dev Kit


Video:


Thanks again to Atmel for giving this to me yesterday. I had a good time at ARM TechCon with them and everyone else.

Friday, August 1, 2014

Using Squid StoreIDs to optimize Steam's CDN

As part of my new router build, I'm playing around with transparent HTTP caching proxies.

Caching proxies are a really neat idea; when one computer has already downloaded a web page or image, why download it again when another device right next to it asks for the same image? Ideally, something between all of the local devices and the bottleneck in the network (namely, my DSL connection) would intercept every HTTP request, save all of the answers, and interject its own responses when it already knows the answer.

My setup is pretty typical for caching proxies. On my router, I have an iptables rule that redirects any traffic from my local 10.44.0.0/20 subnet headed for the Internet on port 80 to port 3127 on the router, where I have a Squid proxy running in "transparent" mode.
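
The rule looks something like the following; the 10.44.0.0/20 subnet and port 3127 match my setup above, while the eth1 LAN interface name is just a placeholder for illustration:

# Redirect LAN HTTP traffic to the local Squid instance listening on 3127.
# -i eth1 is the LAN-facing interface on the router; substitute your own.
iptables -t nat -A PREROUTING -i eth1 -s 10.44.0.0/20 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3127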

The basic transparent proxy deserves a post of its own once I finish polishing it, but for right now I'm writing this mainly as notes to myself, because the lead time on the next part is going to be pretty long.

My protocol-compliant caching proxy seems to be able to answer about 2-5% of HTTP requests from the local cache, which means that those responses come back in the 1-3ms range instead of 40-200ms. 2-5% isn't something to sneeze at, but it isn't particularly profound either. Squid does allow you to write all kinds of rules about when to override a response's cacheability metadata or how to completely make up your own. A common rule is:

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 3600 90% 43200

which tells Squid to cache any images missing cacheability headers for 90% of their current age, bounded between 3600 and 43200 minutes. This opens a whole rabbit hole of how deeply you want to abuse and mangle cache headers in the name of squeezing out a few more hits. I've played that game before, and it usually ends up causing a lot of pain because incorrectly cached items tend to break websites in very subtle ways...




Another problem with caching proxies is the opposite of the previously mentioned over-caching. While that was an issue of a single URL mapping to different content over time, there is also the issue of multiple URLs mapping to the same content.

This is very common; any large content delivery network will have countless different servers each locally serving the same content. The apt repositories for Ubuntu or Debian are perfect examples of this: universityA.edu/mirror/ubuntu/packagename and universityB.edu/mirror/ubuntu/packagename are the same file, even though they have different URLs.

Squid, in version 3.4, has finally added a feature called StoreID which lets you work around this multiple-URLs-to-one-content problem. It allows you to have Squid pass every URL through an external helper program that rewrites each URL to try to generate a one-to-one mapping between URLs and content. I decided to play with this on the Steam CDN.

When you download a game in Steam, it is actually downloaded as 1MB chunks from something on the order of four different servers at once. In the menu Steam - Settings - Downloads - Download Region you can tell Steam which set of servers to download from, but exactly which servers it uses is still beyond your control.

A typical Steam chunk URL looks like this:

http://valveSERVERID.cs.steampowered.com/depot/GAMEID/chunk/CHUNKID

  • SERVERID is a relatively small number (two or three digits) and identifies which server this chunk is coming from. At any one point, a Steam client seems to be hitting about four different servers. valve48 and valve271 are two that I'm seeing a lot in San Jose, but the servers seem to come and go throughout the day.
  • GAMEID is a number assigned to each game, although I've seen some games move from one ID to another halfway through the download. The largest game ID I've seen is in the high 50,000s. I strongly suspect that these are sequentially issued.
  • CHUNKID is a 160 bit hex number. Presumably a SHA1 checksum of the chunk? I haven't bothered poking at it.
The main takeaway is that, even when I have three computers downloading the same update, each one of them is going to hit different servers for each chunk, so I'm only seeing 25-40% cache hits for three sets of the exact same {GAMEID, CHUNKID} pairs.

Using Squid's new StoreID feature, I'm able to map each {SERVERID, GAMEID, CHUNKID} vector to the correct {GAMEID, CHUNKID} and now see 100% cache hits for every download after the first. With the VM I'm using for testing, I'm seeing about 20MBps throughput for anything that has already been accessed by any other system, and that is limited by the VM's NIC maxing out. I expect to see close to Gigabit throughput once I move this to my router with its SSD.
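
For anyone curious what such a helper looks like, here is a minimal sketch in Python, assuming the simple non-concurrent StoreID helper protocol (one URL per line on stdin, "OK store-id=..." or "ERR" per line on stdout) and wired into squid.conf with store_id_program and store_id_children. The canonical hostname it emits is just a placeholder for whatever consistent name you choose; see the caveat below about picking a hostname Valve could never legitimately use:

#!/usr/bin/env python
# A minimal StoreID helper sketch for the Steam CDN case. It collapses
# valve<N>.cs.steampowered.com depot/chunk URLs down to one canonical
# hostname so identical chunks share a single cache entry.
import re
import sys

CHUNK_RE = re.compile(
    r'^http://valve\d+\.cs\.steampowered\.com(/depot/\d+/chunk/[0-9a-fA-F]{40})$')

for line in sys.stdin:
    url = line.strip().split(' ')[0]  # ignore any extras squid may append
    match = CHUNK_RE.match(url)
    if match:
        sys.stdout.write('OK store-id=http://steam.cs.steampowered.com%s\n'
                         % match.group(1))
    else:
        sys.stdout.write('ERR\n')
    sys.stdout.flush()  # squid expects an immediate, one-line reply per request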



In hindsight, I think rewriting all the URLs to a consistent steamX.cs.steampowered.com is a poor choice. If you're going to rewrite URLs, you may as well go all in and rewrite them to an invalid hostname so there isn't the chance of colliding with some future change on Valve's part. A rewrite to something like valveX.cs.steampowered.squid likely prevents any future namespace problems. I really hope the documentation for StoreID catches up and starts presenting some best practices, because I'm finding the documentation, short of reading the code, a little lacking...

Related rant: I really wish the Internet DNS system codified a top level domain for site-local use like IPv4 did in RFC1918 for the 10.0.0.0/8, 192.168.0.0/16 and 172.16.0.0/12 subnets. There exists a draft RFC from 2002 proposing "private.arpa.", but I'd like to see a shorter TLD like "lan." I personally use "lan.", but with how ICANN keeps talking about making TLDs a free-for-all, I dread the day that they make "lan." live.

In the end, the drag here is that Squid 3.4 is so new that there aren't any packages for it in Ubuntu or Debian. Even Debian's bleeding edge is 3.3.8. It's obviously possible to compile and run Squid 3.4.6 on your own, but I really hate maintaining software outside of the package manager unless I really have to. I don't see myself using this new StoreID feature until Ubuntu 16.04 unless Debian packages it really soon in Jessie and I'm somehow convinced to switch my router back to Debian.