Monday, May 20, 2019

Twitter-Connected Camera for Maker Faire

This last weekend was Maker Faire Bay Area, which is easily one of my favorite events of the year. Part county fair, part show and tell, and a big part getting to see all my Internet friends in one place for a weekend.

This year, I was struck with inspiration a few weeks before the event to build a camera that could immediately post every picture taken with it to Twitter. I don't like trying to live tweet events like that, since I find it too distracting to take the photo, open it in Twitter, add text and hashtags, and then post it. This camera would instead upload every picture with a configurable canned message, so every picture would be hashtagged, but I'd have a good excuse for not putting any effort into a caption for each picture: "this thing doesn't even have a keyboard".

I went into this project with the following requirements:

  • Simple and fast to use. Point, click, continue enjoying the Faire.
  • Robust delivery of tweets to Twitter, tolerant of regularly losing Internet for extended periods of time when I walk inside buildings at Maker Faire. The camera would be tethered off my cell phone, and while cell service at Maker Faire has gotten pretty good, the 2.4GHz ISM band usually gets trashed in the main Zone 2 hall, so the camera would definitely be losing Internet.
  • A very satisfying clicky shutter button.
  • A somewhat silly large size for the enclosure, to make the camera noticeable and more entertaining.
  • Stretch goal: A toggle switch to put the camera in "timer delay" mode so I could place it somewhere, press the shutter button, and run into position.

The most interesting challenge for me was coming up with a robust way for the camera to be able to take one or multiple photos without Internet connectivity, and then when it regains Internet ensure that all of the pictures got delivered to Twitter. It would be possible to code some kind of job queue to add new photos to a list of pending tweets and retry until the tweet is successfully posted, but I only had a few weeks to build this whole thing, so chasing all the edge cases of coding a message queue that could even tolerate the camera getting power cycled sounded like a lot of work.

I eventually realized that guaranteed delivery of a few MBs of files over an intermittent Internet connection is actually already a very well solved problem: email! This inspiration got me to where the camera runs a local Python script to watch the shutter button and capture the photo, but instead of trying to get the picture to Twitter itself, it attaches the photo to an email and hands it to a local mail server (i.e. Postfix) running on the Raspberry Pi. The mail server can then grapple with spotty Internet and getting restarted while trying to deliver the emails to some API gateway service that gets the photos onto Twitter, while the original Python script is immediately ready to take another photo and add it to the queue, regardless of being online.
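
The actual script is linked further down, but the core of that hand-off is small enough to sketch here. This is not the real tweetcam code, and the addresses and subject line are just placeholders, but it shows the idea: build a MIME message with the JPEG attached and hand it to Postfix on localhost, which accepts it immediately whether or not the Internet is up.

import smtplib
from email.message import EmailMessage

def queue_photo(jpeg_path):
    # Placeholder addresses and subject; the real values live in the camera's config
    msg = EmailMessage()
    msg["From"] = "tweetcam@localhost"
    msg["To"] = "photo-gateway@example.com"
    msg["Subject"] = "#BayAreaMakerFaire tweetcam"
    msg.set_content("Posted automatically by the tweetcam.")
    with open(jpeg_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename=jpeg_path.split("/")[-1])
    # Postfix is listening on localhost, so this returns almost instantly
    # and the mail server owns all of the retry logic from here on out
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)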

YouTube Video:


The Python script running on the Raspberry Pi is available on GitHub.

The camera was really successful at Maker Faire! I enjoyed being able to snap photos of everything without needing to pull out my phone, and really loved the looks of amusement when people finally figured out what I was doing with a colorful box covered in stickers with a blinking LED on the front.

I ended up taking several HUNDRED photos this weekend (the good ones are available to browse here). Unfortunately, this means I very quickly found out that IFTTT has a 100 tweet per day API limit, which they don't tell you about until you hit it. Talking to others, this is apparently a common complaint about IFTTT: it's great to prototype with, but as soon as you actually try to demo something, you hit some surprise limit and they cut you off. If I had known they were going to cut me off, I would have sat down and written my own email-to-Twitter API gateway, but by the time I realized this problem on the Friday of Maker Faire, sitting down to try to write an email parser and use Twitter's API directly wasn't an option anymore. GRRR. Two out of five stars, would not recommend IFTTT.

When I opened a ticket with IFTTT support, they said I was out of luck on the 100 tweet/day limit and suggested I go get a Buffer account, which is a paid social media queue management service, so my per-day tweet limit would be higher. But we've now got one more moving part, in that the photos are going Raspberry Pi → my mail server → IFTTT → Buffer → Twitter. UNFORTUNATELY, something wasn't happy between IFTTT and Buffer, so only about 20% of the API calls to add my photos to my Buffer queue were successful. Buffer also sucked for what I was trying to do, because it's really meant for uploading a week's worth of content as a CSV to post three times a day on social media. To get Buffer to post all of my photos, I had to manually go in and schedule it to post the next photo in my queue every five minutes... all day... so I was sitting there on my phone selecting the next five-minute increment and clicking "add to schedule" for quite a while. 1/5 stars, will definitely never use again.

So the irony here is that the photo delivery from the Raspberry Pi in the field back to my mail server was rock solid all weekend, but getting it from my mail server to Twitter fell on its face pretty hard.

The other notable problem I ran into while stress testing the camera the week before Maker Faire was that Postfix reads its DNS settings when it starts, and doesn't seem to expect the server to roam between various WiFi networks. I needed to edit the /etc/dhcpcd.conf file to force the Raspberry Pi (and thus also Postfix) to just use public DNS resolvers like 8.8.8.8 and 1.1.1.1, instead of trying to use my phone's DNS resolver, which obviously wasn't available when my camera roamed to other WiFi networks like the one at Supply Frame's office.
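
The modified dhcpcd config file is in the GitHub repo, but the relevant change is a single line along these lines, which tells dhcpcd to use static DNS resolvers instead of whatever the current network's DHCP server offers:

static domain_name_servers=8.8.8.8 1.1.1.1
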
I also spent all weekend adding stickers to the camera's box, so by the end of the weekend the cardboard box was beautifully decorated.


Material used for this build:

Starting from a clean Raspbian Lite image, run raspi-config and
  • Set a new user password and enable ssh
  • Set the locale and keyboard map to suit your needs
  • Set the WiFi country and configure your WiFi network credentials
Install the needed software and my preferred utilities:
sudo apt update
sudo apt install postfix mutt vim dstat screen git

Starting with Postfix, we need it to be able to relay email out via a remote smarthost, since pretty much no consumer Internet connection allows outgoing connections on port 25 to send email. It's possible to use Gmail as your smarthost, so feel free to search for guides on how to specifically do that, but I just used a mail relay I have running for another project.

To do this, I first created a new file /etc/postfix/relay_passwd and added one line to it:
smtp.example.com USERNAME:PASSWORD

This file gets compiled into a database (relay_passwd.db) that postfix uses to look up your username and password when it needs to log into your relay. Conceivably you could have multiple sets of credentials for different hosts in here, but I've only ever needed one for my relay.

I then changed the permissions on it so only root can read it, and generated the database file Postfix actually uses to look up the username/password for a given relay host.

chmod 600 /etc/postfix/relay_passwd
postmap /etc/postfix/relay_passwd

To configure Postfix to use this relay, I added these lines to my /etc/postfix/main.cf file:
relayhost = [smtp.example.com]:587
smtp_use_tls=yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
smtp_sasl_security_options =
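
Postfix only reads main.cf on startup, so after editing it, tell the running instance to pick up the new configuration:

sudo postfix reload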

At this point, you should be able to use a program like mutt or sendmail on the Raspberry Pi to send an email, and watch the respective /var/log/mail.log files to see the email flow out to the Internet or get stuck somewhere.
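
A quick way to run that test from the Pi, using mutt (installed back in the apt step) with a placeholder photo path and recipient address, while watching the local log in another terminal:

echo "tweetcam test" | mutt -s "tweetcam test" -a /home/pi/test.jpg -- you@example.com
tail -f /var/log/mail.log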

Tweaking Postfix defer queue behavior - Since using Postfix as a message queue on a portable Raspberry Pi is a bit unusual compared to the typical Postfix application, which probably involves a datacenter, we expect the Internet connection to be quite a bit more flaky, so retries on messages should happen a lot more often than on typical mail delivery. I changed it to start retrying emails after 30 seconds, and Postfix then doubles that backoff up to a maximum of five minutes, which seems like a reasonable upper limit for the time my phone would go offline and then find signal again.

queue_run_delay = 30s (300s default)
minimal_backoff_time = 30s (300s default)
maximal_backoff_time = 300s (4000s default)
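
Equivalently, postconf can make those edits to main.cf for you, after which Postfix needs another reload:

sudo postconf -e "queue_run_delay = 30s" "minimal_backoff_time = 30s" "maximal_backoff_time = 300s"
sudo postfix reload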

The actual camera script is available on GitHub, including the modified dhcpcd config file, a systemd service to start the Python script on boot, and an example copy of the configuration file that the tweetcam script looks for at /etc/tweetcam.conf which includes where to send the picture emails and what to put in the subject and body of the emails.


The hardware was a small square of perf board to act as an interface between the Raspberry Pi 40 pin header and the buttons / LEDs mounted in the box. For the inputs, the Pi's internal pull-up resistors were used so the button just needed to pull the GPIO pin to ground. A 0.1uF capacitor was included for debounce, which ended up not really mattering since it took a little more than a second for the Python script to capture the photo and send the email.
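
From the software side, that wiring looks something like the sketch below (the pin numbers are placeholders, not the ones in the real build): enable the internal pull-up and wait for the button to pull the pin low.

import RPi.GPIO as GPIO

SHUTTER_PIN = 17   # placeholder BCM pin; the button shorts it to ground when pressed
LED_PIN = 27       # placeholder BCM pin; drives the LED through the NPN buffer

GPIO.setmode(GPIO.BCM)
GPIO.setup(SHUTTER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # internal pull-up
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        # Block until the button pulls the shutter pin low
        GPIO.wait_for_edge(SHUTTER_PIN, GPIO.FALLING)
        GPIO.output(LED_PIN, GPIO.HIGH)
        # ...capture the photo and queue the email here...
        GPIO.output(LED_PIN, GPIO.LOW)
finally:
    GPIO.cleanup()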

I put an NPN transistor buffer (a 2N3904) on both of the LEDs, which I probably could have done without, but I hadn't decided how big, how bright, or how many LEDs I wanted to drive when I soldered the perf board.

The camera module was plugged into the camera CSI port on the Pi.

In the end, this project was a huge hit, and I particularly appreciated the nice tweets I was getting from people who couldn't make it to Maker Faire and were enjoying getting to see ~500 photos of it tweeted live from me walking around. I suspect I'll end up continuing work on this project and try to fix some of the issues I encountered, like the IFTTT Twitter API limits and how underwhelming the image quality is coming out of the OEM Raspberry Pi Foundation camera module...

Saturday, May 11, 2019

FCIX - Lighting Our First Dark Fiber

The Fremont Cabal Internet Exchange project continues!

The first eight months of FCIX was spent spinning up the exchange in the Hurricane Electric FMT2 building running out of my personal cabinet there. At the end of 2018, in the typical 51% mostly kidding style that is starting to become an integral part of the FCIX brand, we asked our data center host Hurricane Electric if they would be willing to sponsor us for a second cabinet in their other facility (HE FMT1) and a pair of dark fiber between the two buildings.

This felt like a pretty big ask for a few reasons:

  1. Hurricane Electric has already been a very significant sponsor in enabling FCIX to happen by sponsoring free cross connects for new members into the exchange, so joining the exchange is a $0 invoice.
  2. Asking for a whole free cabinet feels pretty big regardless of who you're talking to.
  3. Hurricane Electric is in the layer 2 transport business, so us asking to become a multi-site IXP puts us in the somewhat tricky position of walking the line of being in the same business as one of our sponsors while using their donated services. 

That last point is an interesting and possibly somewhat subtle one. It's also not an unusual awkward position for Internet Exchange Points to be put in; when an Internet Exchange Point goes multi-site, its multiple sites are often connected by donated fiber or transport from a sponsor who's already in that business. This means that once an IXP starts connecting networks between the two facilities over the donated link, pairs of networks which might have already been leasing capacity between the two buildings might be motivated to drop their existing connection and move their traffic onto the exchange's link. Some IXPs will also enforce a policy that a single network can only connect to the IXP in one location, so members can't use the fiber donated to the IXP as internal backhaul for their own network, since that's a service they should be buying from the sponsor themselves, not enjoying the benefit of it being donated to the IXP.

Do I think this is a major concern between HE FMT1 and HE FMT2 for FCIX? No. These two buildings are about 3km apart, and Hurricane Electric has made sure there is a generous bundle of fiber between the two buildings, so it is unlikely that HE is making a lot of money on transport between two buildings within walking distance of each other.

So we asked, and Hurricane Electric said yes.

At that point, we had an empty cabinet in FMT1, and a pair of dark fiber between the two buildings allocated for us, but a second site means we need a second peering switch...

"Hey, Arista..."

Arista was kind enough to give us a second 7050S-64 switch to load into FMT1, so we now have a matching pair of Ethernet switches to run FCIX on. Cool.

The final design challenge was lighting this very long piece of glass between the two sites. Thankfully, 3km is a relatively "short" run in the realm of single mode fiber, so the design challenge of moving bits that far isn't too great; pretty much every single generation of fiber optic has off-the-shelf "long-reach" optics which are nominally rated for 10km of fiber, so we weren't going to need to get any kind of special long range fiber transponders or amplifiers to light the pair.

In reality, they aren't really rated for 10km so much as they're rated for a certain signal budget that usually works out well enough for 10km long links. For example, let's take the Flexoptix 10G LR optic, which has the following technical specifications:
The important numbers to focus on from its spec sheet are:

  • Powerbudget: 6.2dB
  • Minimum transmit power: -8.2dBm (dB over a milliwatt)
  • Minimum receive power: -14.4dBm
The powerbudget is the acceptable amount of light that can be lost from end to end over the link, be it through linear km of fiber, connections, splices, attenuators, etc. So the 10km "distance" parameter is more a rule of thumb statement that a 10km link will typically have 6.2dB of attenuation along it than the optic really being able to tell exactly how far it is from the other end of the fiber.

The minimum transmit power and minimum receive power are actually related to the powerbudget by the fact that the power budget is the difference between these two numbers. Usually, your powerbudget will be much better than this to begin with, because most optics will put out much more than -8.2dBm of power when they're new, but some of them might be that low, and the lasers will actually "cool" over their life-span, so even if an optic comes out of the box putting out -2dBm of light, as it ages that number will go down.

When it comes to serious long fiber links, you'll likely want a signal analysis done of the actual fiber you plan on using, and then use that information to plan your amplifiers accordingly. We, on the other hand, very carefully looked at how far apart the two buildings were on Google Maps, stroked our chins knowingly like we knew what the hell we were doing, and decided that a 10km optic would probably be good enough. Time to call another FCIX sponsor.
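
For what it's worth, the back-of-the-envelope math behind that chin stroking does work out, using generic rule-of-thumb numbers (roughly 0.35dB/km for single mode fiber at 1310nm and about 0.5dB per connector; these are ballpark figures, not measurements of our actual pair):

span_km = 3
fiber_loss = span_km * 0.35                # ~1.05 dB of fiber attenuation
connector_loss = 2 * 0.5                   # a patch panel connection at each end
total_loss = fiber_loss + connector_loss   # ~2 dB total

power_budget = -8.2 - (-14.4)              # min TX power minus min RX power = 6.2 dB
margin = power_budget - total_loss         # ~4 dB left over for splices, aging, dirty connectors
print(f"loss {total_loss:.1f} dB, budget {power_budget:.1f} dB, margin {margin:.1f} dB")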

"Hey, Flexoptix..."

Given that we wanted to use 10km optics, we really had three choices for what our 7050S-64 switches could support: we could light it with a pair of 1G-LX optics, which would be pretty lame in this day and age; we could light it with a pair of 10G-LR optics, which would probably be pretty reasonable given the amount of traffic we expect to be moving to/from this FMT1 extension; OR, we could ask Flexoptix for a pair of $400 40G LR4 optics and use some of those QSFP+ ports on our Aristas... because why not? Faster is better.
So that's what we did. FCIX now has a 40G backbone between our two sites. 

40G LR4 is actually a little mind blowing in how it gets 40G across a single pair of fibers, because it isn't done with a single 40G transceiver. 40G was really an extension of 10G, putting 4x10G lanes in one optic, and there were two ways of then transporting these 4x10G streams to the other optic:
  1. PLR4, or "parallel" LR4, where you use an 8 fiber cable terminated with MPO connectors, so each 10G lane is on its own fiber.
  2. LR4, which uses the same duplex LC fiber as 10G-LR, but uses four different wavelengths for the four 10G transceivers, and then integrates a CWDM (coarse wave division multiplexing) mux IN THE FREAKING OPTIC.

It's not entirely correct, but imagine a prism taking four different wavelengths of light and combining them into the single fiber, then a second prism on the receiver splitting them back out to the four receivers. Every time I think about it, it still blows my mind how awesome all of these Ethernet transceivers are once you dig into how they work.

So we now have 40G between our two sites, and like always, it wouldn't have been possible without all of our generous FCIX sponsors; they're pretty cool. If you happen to be an autonomous system running in either of those facilities, or want to talk to us about extending FCIX to another facility in the Silicon Valley, feel free to shoot us an email at contact@fcix.net.