Wednesday, June 5, 2019

Using 0603 Surface Mount Components for Prototyping

As a quick little tip: when I'm prototyping circuits on 0.1" perf board, I like using 0603 surface mount components for all of my passives and LEDs, since they fit nicely between the pads. This way I don't need to bother with any wiring between LEDs and their resistors; I can just use three pads in a row to mount the LED and its current limiting resistor, and then only need to wire the two ends to wherever they're going.
I also like using it as a way to put 0.1uF capacitors between adjacent pins on headers, so filtering pins doesn't take any additional space or wiring.

The fact that this works makes sense, since "0603" means that the surface mount chips are 06 "hundredths of an inch" by 03 "hundredths of an inch", so they're 0.06" long, which fits nicely between 0.1" spaced pads.

I definitely don't regret buying an 0603 SMT passives book kit, which covers most of the mix of resistors and capacitors I need. I then buy a spool of anything that I manage to fully use up, since it's obviously popular, so I eventually bought spools of 0.1uF capacitors and 330 ohm resistors to restock my book when those strips ran out.

Monday, May 20, 2019

Twitter-Connected Camera for Maker Faire

This last weekend was Maker Faire Bay Area, which is easily one of my favorite events of the year. Part county fair, part show and tell, and a big part getting to see all my Internet friends in one place for a weekend.

This year, inspiration struck a few weeks before the event: build a camera that could immediately post every picture taken with it to Twitter. I don't like trying to live tweet events like that, since I find it too distracting to take the photo, open it in Twitter, add text and hashtags, and then post it. This camera would instead upload every picture with a configurable canned message, so every picture gets hashtagged, but I have a good excuse for not putting any effort into a caption for each one: "this thing doesn't even have a keyboard."

I went into this project with the following requirements:

  • Simple and fast to use. Point, click, continue enjoying the Faire.
  • Robust delivery of tweets to Twitter, tolerant of regularly losing Internet for extended periods of time when I walk inside buildings at Maker Faire. The camera would be tethered off my cell phone, and while cell service at Maker Faire has gotten pretty good, the 2.4GHz ISM band usually gets trashed in the main Zone 2 hall, so the camera will definitely be losing Internet.
  • A very satisfying clicky shutter button.
  • A somewhat silly large size for the enclosure, to make the camera noticeable and more entertaining.
  • Stretch goal: A toggle switch to put the camera in "timer delay" mode so I could place it somewhere, press the shutter button, and run into position.

The most interesting challenge for me was coming up with a robust way for the camera to be able to take one or multiple photos without Internet connectivity, and then when it regains Internet ensure that all of the pictures got delivered to Twitter. It would be possible to code some kind of job queue to add new photos to a list of pending tweets and retry until the tweet is successfully posted, but I only had a few weeks to build this whole thing, so chasing all the edge cases of coding a message queue that could even tolerate the camera getting power cycled sounded like a lot of work.

I eventually realized that guaranteed message delivery for a few MBs of files over an intermittent Internet connection is actually already a very well solved problem: email! That inspiration got me to the design where the camera runs a local Python script to watch the shutter button and capture the photo, but instead of trying to get the picture to Twitter directly, it attaches the photo to an email and sends it to a local mail server (i.e. Postfix) running on the Raspberry Pi. The mail server can then grapple with spotty Internet and getting restarted while trying to deliver the emails to some API gateway service to get the photos on Twitter, while the original Python script is immediately ready to take another photo and add it to the queue regardless of being online.
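The actual script I ran is on GitHub (linked below), but the core of the idea fits in a few lines. Here's a minimal sketch, assuming the picamera Python module and a Postfix instance listening on localhost; the addresses, subject, and file paths are just placeholders, not what the real script uses:

#!/usr/bin/env python3
# Minimal sketch of the capture-then-email idea (not the real tweetcam script).
import smtplib
import time
from email.message import EmailMessage
from picamera import PiCamera

camera = PiCamera()

def capture_and_queue():
    filename = "/tmp/photo-%d.jpg" % int(time.time())
    camera.capture(filename)

    msg = EmailMessage()
    msg["From"] = "tweetcam@example.com"        # placeholder addresses
    msg["To"] = "twitter-gateway@example.com"
    msg["Subject"] = "#MFBA19 live from the tweetcam"
    msg.set_content("Posted automatically by my Maker Faire camera")
    with open(filename, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="photo.jpg")

    # Hand the message to the local Postfix instance and return immediately;
    # Postfix owns the queueing and retries from here on out.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

The real script adds the GPIO button handling and status LEDs, and reads its addresses out of a config file, but all of the "robust delivery" heavy lifting lives in Postfix.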

YouTube Video:


The Python script running on the Raspberry Pi is available on GitHub.

The camera was really successful at Maker Faire! I enjoyed being able to snap photos of everything without needing to pull out my phone, and really loved the looks of amusement when people finally figured out what I was doing with a colorful box covered in stickers with a blinking LED on the front.

I ended up taking several HUNDRED photos this weekend (the good ones are available to browse here). Unfortunately, this means I very quickly found out that IFTTT has a 100 tweet per day API limit, which they don't tell you about until you hit it. Talking to others, this is apparently a common complaint about IFTTT: it's great to prototype with, but as soon as you try to actually demo something, you hit some surprise limit and they cut you off. If I had known they were going to cut me off, I would have sat down and written my own email-to-Twitter API gateway, but by the time I realized this problem on the Friday of Maker Faire, sitting down to write an email parser and use Twitter's API directly wasn't an option anymore. GRRR. Two out of five stars, would not recommend IFTTT.

When I opened a ticket with IFTTT support, they said I was out of luck on the 100 tweet/day limit, and suggested I go get a Buffer account, which is a paid social media queue management service, so my per-day tweet limit would be higher. But we've now got one more moving part, in that the photos are going Raspberry Pi - my mail server - IFTTT - Buffer - Twitter. UNFORTUNATELY, something wasn't happy between IFTTT and Buffer, so only about 20% of the API calls to add my photos to my Buffer queue were successful. Buffer also sucked for what I was trying to do, because it's more meant for uploading a week's worth of content as a CSV to post three times a day on social media. To get Buffer to post all of my photos, I had to manually go in and schedule it to post the next photo in my queue every five minutes... all day... so I was sitting there on my phone selecting the next five minute increment and clicking "add to schedule" for quite a while. 1/5 stars, will definitely never use again.

So the irony here is that the photo delivery from the Raspberry Pi in the field back to my mail server was rock solid all weekend, but getting it from my mail server to Twitter fell on its face pretty hard.

The other notable problem I ran into while stress testing the camera the week before Maker Faire was that Postfix only picks up its DNS settings when it starts, and doesn't seem to expect the server to roam between various WiFi networks. I needed to edit the /etc/dhcpcd.conf file to force the Raspberry Pi (and thus also Postfix) to just use public DNS resolvers like 8.8.8.8 and 1.1.1.1, instead of trying to use my phone's DNS resolver, which obviously wasn't available when my camera roamed to other WiFi networks like the one at Supply Frame's office.
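The exact file I used is in the GitHub repo linked below, but the change boils down to a couple of lines in /etc/dhcpcd.conf, something along these lines (assuming the camera is on wlan0):

# Keep using DHCP for addressing, but override the DHCP-supplied DNS servers
interface wlan0
static domain_name_servers=8.8.8.8 1.1.1.1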
I also spent all weekend adding stickers to the camera's box, so by the end of the weekend the cardboard box was beautifully decorated.


Starting from a clean Raspbian Lite image, run raspi-config and
  • Set a new user password and enable ssh
  • Set the locale and keyboard map to suit your needs
  • Set the WiFi country and configure your WiFi network credentials
Install the needed software and my preferred utilities
sudo apt update
sudo apt install postfix mutt vim dstat screen git

Starting with Postfix, we need it to be able to relay email out via a remote smarthost, since pretty much no consumer Internet connection allows outgoing connections on port 25 to send email. It's possible to use Gmail as your smarthost, so feel free to search for guides on how to specifically do that, but I just used a mail relay I have running for another project.

To do this, I first created a new file /etc/postfix/relay_passwd and added one line to it:
smtp.example.com USERNAME:PASSWORD

This file gets compiled into a database (relay_passwd.db) that postfix uses to look up your username and password when it needs to log into your relay. Conceivably you could have multiple sets of credentials for different hosts in here, but I've only ever needed one for my relay.

I then changed the permissions on it so only root can read it, and generated the database file postfix actually uses to perform lookups against this host-to-username/password mapping.

chmod 600 /etc/postfix/relay_passwd
postmap /etc/postfix/relay_passwd

To configure Postfix to use this relay, I added these lines to my /etc/postfix/main.cf file:
relayhost = [smtp.example.com]:587
smtp_use_tls=yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
smtp_sasl_security_options =

At this point, you should be able to use a program like mutt or sendmail on the Raspberry Pi to send an email, and watch the respective /var/log/mail.log files to see the email flow out to the Internet or get stuck somewhere.
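For example, something along these lines should push a test message through the whole chain (with an attachment, to roughly match what the camera sends); the file and address are whatever you want to test with:

echo "test from the pi" | mutt -s "tweetcam test" -a /tmp/test.jpg -- someone@example.com
tail -f /var/log/mail.log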

Tweaking Postfix defer queue behavior - Since using Postfix as a message queue on a portable Raspberry Pi is a bit unusual compared to the typical Postfix deployment, which probably involves a datacenter, we expect the Internet connection to be quite a bit more flaky, so retries on deferred messages should happen a lot more often than for typical mail delivery. I changed it so Postfix starts retrying emails after 30 seconds and doubles that delay up to a maximum of five minutes, which seems like a reasonable upper limit for the time my phone would go offline before finding signal again.

queue_run_delay = 30s (300s default)
minimal_backoff_time = 30s (300s default)
maximal_backoff_time = 300s (4000s default)

The actual camera script is available on GitHub, including the modified dhcpcd config file, a systemd service to start the Python script on boot, and an example copy of the configuration file that the tweetcam script looks for at /etc/tweetcam.conf which includes where to send the picture emails and what to put in the subject and body of the emails.
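The unit file in the repo is the authoritative version, but a systemd service for something like this is only a handful of lines; a rough sketch, with placeholder paths:

[Unit]
Description=Tweetcam shutter button watcher
After=network.target postfix.service

[Service]
ExecStart=/usr/bin/python3 /home/pi/tweetcam/tweetcam.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

Drop it in /etc/systemd/system/tweetcam.service, then sudo systemctl enable tweetcam and it comes up on every boot.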


The hardware was a small square of perf board to act as an interface between the Raspberry Pi 40 pin header and the buttons / LEDs mounted in the box. For the inputs, the Pi's internal pull-up resistors were used, so the button just needed to pull the GPIO pin to ground. A 0.1uF capacitor was included for debounce, which ended up not really mattering, since it took a little more than a second for the Python script to capture the photo and send the email.
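On the software side, using the internal pull-up is just a matter of how the pin gets configured; roughly this, with an arbitrary BCM pin number for the sake of the example:

import RPi.GPIO as GPIO

SHUTTER_PIN = 17   # arbitrary BCM pin for this sketch

GPIO.setmode(GPIO.BCM)
# Enable the internal pull-up so the button only has to short the pin to ground
GPIO.setup(SHUTTER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Block until the button pulls the pin low
GPIO.wait_for_edge(SHUTTER_PIN, GPIO.FALLING)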

I put an NPN transistor buffer on both of the LEDs using a 2N3904 transistor, which I probably could have done without, but I hadn't decided how big, bright, or how many LEDs I wanted to drive when I soldered the perf board.

The camera module was plugged into the camera CSI port on the Pi.

In the end, this project was a huge hit, and I particularly appreciated the nice tweets I was getting from people who couldn't make it to Maker Faire and were enjoying getting to see ~500 photos of it tweeted live as I walked around. I suspect I'll end up continuing work on this project and try to fix some of the issues I encountered, like the IFTTT Twitter API limits and how underwhelming the image quality coming out of the stock Raspberry Pi Foundation camera module is...

Saturday, May 11, 2019

FCIX - Lighting Our First Dark Fiber

 The Fremont Cabal Internet Exchange project continues!

The first eight months of FCIX was spent spinning up the exchange in the Hurricane Electric FMT2 building running out of my personal cabinet there. At the end of 2018, in the typical 51% mostly kidding style that is starting to become an integral part of the FCIX brand, we asked our data center host Hurricane Electric if they would be willing to sponsor us for a second cabinet in their other facility (HE FMT1) and a pair of dark fiber between the two buildings.

This felt like a pretty big ask for a few reasons:

  1. Hurricane Electric has already been a very significant sponsor in enabling FCIX to happen by sponsoring free cross connects for new members into the exchange, so joining the exchange is a $0 invoice.
  2. Asking for a whole free cabinet feels pretty big regardless of who you're talking to.
  3. Hurricane Electric is in the layer 2 transport business, so us asking to become a multi-site IXP puts us in the somewhat tricky position of walking the line of being in the same business as one of our sponsors while using their donated services. 
That last point is an interesting and possibly somewhat subtle one, and it's not an unusual awkward position for Internet Exchange Points to be put in: when an IXP goes multi-site, the sites are often connected by donated fiber or transport from a sponsor who's already in that business. This means that once an IXP starts connecting networks between the two facilities over the donated link, pairs of networks which might have already been leasing capacity between the two buildings might be motivated to drop their existing connection and move their traffic onto the exchange's link. Some IXPs will also enforce a policy that a single network can only connect to the IXP in one location, so members can't use the fiber donated to the IXP as internal backhaul for their own network, since that's a service they should be buying from the sponsor themselves, not enjoying the benefit of it being donated to the IXP.

Do I think this is a major concern between HE FMT1 and HE FMT2 for FCIX? No. These two buildings are about 3km apart, and Hurricane Electric had made sure there is a generous bundle of fiber between the two buildings, so it is unlikely that HE is making a lot of money on transport between two buildings within walking distance.
So we asked, and Hurricane Electric said yes.

At that point, we had an empty cabinet in FMT1, and a pair of dark fiber between the two buildings allocated for us, but a second site means we need a second peering switch...

"Hey, Arista..."

Arista was kind enough to give us a second 7050S-64 switch to load into FMT1, so we now have a matching pair of Ethernet switches to run FCIX on. Cool.

The final design challenge was lighting this very long piece of glass between the two sites. Thankfully, 3km is a relatively "short" run in the realm of single mode fiber, so the design challenge of moving bits that far isn't too great; pretty much every single generation of fiber optic has off-the-shelf "long-reach" optics which are nominally rated for 10km of fiber, so we weren't going to need to get any kind of special long range fiber transponders or amplifiers to light the pair.

In reality, they aren't really rated for 10km so much as they're rated for a certain signal budget that usually works out well enough for 10km long links. As an example, let's take the Flexoptix 10G LR optic. The important numbers from its spec sheet are:

  • Power budget: 6.2dB
  • Minimum transmit power: -8.2dBm (dB referenced to a milliwatt)
  • Minimum receive power: -14.4dBm
The power budget is the acceptable amount of light that can be lost from end to end over the link, be it through linear km of fiber, connectors, splices, attenuators, etc. So the 10km "distance" parameter is more a rule-of-thumb statement that a 10km link will typically stay within 6.2dB of attenuation than the optic really being able to tell exactly how far it is from the other end of the fiber.

The minimum transmit power and minimum receive power are related to the power budget by the fact that the power budget is the difference between these two numbers. Usually, your actual budget will be much better than this to begin with, because most optics will put out much more than -8.2dBm of power when they're new, but some of them might be that low, and lasers actually "cool" over their life-span, so even if an optic comes out of the box putting out -2dBm of light, that number will go down as it ages.
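As a quick sanity check on our roughly 3km of glass, the arithmetic works out with plenty of margin, assuming typical ballpark figures of about 0.35dB/km for single mode fiber at 1310nm and about 0.5dB per connector:

Worst-case budget:  -8.2dBm TX - (-14.4dBm RX) = 6.2dB
Expected loss:      3km x 0.35dB/km + 2 x 0.5dB connectors = ~2.1dB
Remaining margin:   6.2dB - 2.1dB = ~4.1dB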

When it comes to serious long fiber links, it's likely you'll want a signal analysis done on the actual fiber you plan on using, and then use that information to plan your amplifiers accordingly. We, on the other hand, very carefully looked at how far apart the two buildings were on Google Maps, stroked our chins knowingly like we knew what the hell we were doing, and decided that a 10km optic would probably be good enough. Time to call another FCIX sponsor.

"Hey, Flexoptix..."

Given that we wanted to use 10km optics, we really had three choices for what our 7050S-64 switches could support: we could light it with a pair of 1G-LX optics, which would be pretty lame in this day and age; we could light it with a pair of 10G-LR optics, which would probably be pretty reasonable given the amount of traffic we expect to move to/from this FMT1 extension; OR, we could ask Flexoptix for a pair of $400 40G LR4 optics and use some of those QSFP+ ports on our Aristas... because why not? Faster is better.
So that's what we did. FCIX now has a 40G backbone between our two sites. 

40G LR4 is actually a little mind blowing in how it gets 40G across a single pair of fibers, because it isn't done with a single 40G transceiver. 40G was really an extension of 10G by putting 4x10G lanes in one optic, and there were two ways of then transporting these 4x10G streams to the other optic:
  1. PLR4, or "parallel" LR4, where you use an 8 fiber cable terminated with MPO connectors, so each 10G wavelength is on its own fiber.
  2. LR4, which uses the same duplex LC fiber as 10G-LR, but uses four different wavelengths for the four 10G transceivers, and then integrates a CWDM (coarse wavelength division multiplexing) mux IN THE FREAKING OPTIC.

It's not entirely correct, but imagine a prism taking four different wavelengths of light and combining them into the single fiber, then a second prism on the receiver splitting them back out to the four receivers. Every time I think about it, it still blows my mind how awesome all of these Ethernet transceivers are once you dig into how they work.

So we now have 40G between our two sites, and like always, it wouldn't have been possible without all of our generous FCIX sponsors; they're pretty cool. If you happen to be an autonomous system running in either of those facilities, or want to talk to us about extending FCIX to another facility in the Silicon Valley, feel free to shoot us an email at contact@fcix.net.

Sunday, February 3, 2019

Running NAT64 in a BGP Environment

IPv6 is one of those great networking technologies, like multicast, which makes your life as a network operator much easier, but is generally pretty useless outside of your own network if you expect it to work across the Internet.

People like to focus on how the percentage of traffic going over IPv6 has slowly been creeping up, but unfortunately until that number reaches practically 100%, an Internet connection without IPv4 connectivity is effectively broken. Ideally, everyone runs dual-stack, but the whole shortage of IPv4 addresses was the original motivator for IPv6, so increasingly networks will need to run native IPv6 plus some form of translation layer for IPv4, be it carrier grade NAT on 100.64.0.0/10, 464XLAT, or as I'll cover today, NAT64.

NAT64 is a pretty clever way to add IPv4 connectivity to an IPv6-only network, since it takes advantage of the fact that the entire IPv4 address space can actually be encoded in half of a single subnet of IPv6. Unfortunately, I also found it somewhat confusing since there are actually quite a few moving parts in a functional NAT64-enabled network, so I figured I'd write up my experience adding NAT64 to my own network.

There are really two parts to NAT64, and to make it a little more confusing, I decided to add BGP to the mix because it's a Sunday and I'm bored.

  1. A DNS64 server, which is able to give a AAAA response for every website. For the few websites which actually have AAAA records, it just returns those. For websites which are IPv4 only, it returns a synthetic AAAA record, which is the A record packed into a special IPv6 prefix. The RFC-recommended well-known prefix for this packing is 64:FF9B::/96, but there could possibly be reasons to use any /96 out of your own address space instead (I can't really think of any). I'm using the standard 64:FF9B::/96 prefix mainly because it means I can use Google's DNS64 servers instead of running my own.
  2. A NAT64 server, which translates each packet destined for an IPv4 address packed in this special IPv6 /96 prefix to an IPv4 packet from an IPv4 address in the NAT64's pool destined for the actual IPv4 address to be routed across the Internet. My NAT64 server goes a step further and NAT's all of these IPv6-mapped IPv4 addresses to one public address, since mapping IPv6 to IPv4 addresses one-for-one doesn't gain me much.
  3. BGP to inject a route to 64:FF9B::/96 towards the NAT64 server into my core router. Realistically, you could just do this with a static route on your router pointed towards your NAT64's IPv6 address, but I want to be able to share my NAT64 server with friends I'm peered with over FCIX, and since BGP is a routing policy protocol, it makes that easy. If you're not running a public autonomous system, just ignore anything I say about BGP and point a static route instead.


So to do this, I'm using a software package called Tayga, along with these numbers (to make my specific example clearer):

  • 23.152.160.4/26 is the public IPv4 address of my NAT64 server. You would replace this address with whatever IPv4 address you wanted all your NATed traffic to be sourced from in the end.
  • 2620:13B:0:1000::4/64 is the public IPv6 address of my NAT64, which is the gateway to the special /96 prefix containing all IPv4 addresses. This address again needs to be something specific to your network, and is mainly important because it's where your route to the /96 needs to point.
  • 64:FF9B::/96 is the standard prefix for NAT64. There's no reason to change it to anything else.
  • 100.65.0.0/16 is an arbitrary big chunk of local-use-only addresses I picked for this project. For how I have my NAT64 server configured, it really didn't matter what prefix of CGNAT or RFC1918 address space I picked here, since it's really only used as an intermediary between the client's actual IPv6 source address and the single public IPv4 address that they're all NATed to. A /16 is plenty of address space, since trying to NAT even 64k hosts to a single public IPv4 address is going to be a bad time. Theoretically there are ways to use CGNAT concepts to use a whole pool of IPv4 addresses on the public side, but one address is plenty for my needs.
  • AS4264646464 is an arbitrary private-use ASN I picked to be able to set up an eBGP peering session between my NAT64 server and my core AS7034 router.


The source IPv6-only host makes a DNS query against a DNS64 server, which encodes IPv4 A records into the bottom 32 bits of the 64:FF9B::/96 subnet as a AAAA record, which gets routed to the NAT64 server:
[SOURCE IPv6 ADDRESS] > 64:FF9B:0:0:0:0:[DESTINATION IPv4 ADDRESS]
Tayga allocates an address out of its 100.65.0.0/16 NAT pool for the IPv6 source address, and translates the packet on the nat64 tun interface to:
100.65.X.Y > [DESTINATION IPv4 ADDRESS]
The iptables NAT masquerade rule set up by Tayga's init script then NATs the local 100.65.0.0/16 subnet to the one public address and sends the packet on its way:
23.152.160.4 > [DESTINATION IPv4 ADDRESS]
And just like that, the IPv6 packet has been translated into an IPv4 packet, with the NAT64 server holding two pieces of NAT state: one in Tayga for the IPv6 to IPv4 translation, and one in iptables for the CGNAT address to public address translation.
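To make the address packing concrete, the destination IPv4 address is just dropped into the bottom 32 bits of the NAT64 prefix; for example, using a documentation address:

192.0.2.33  ->  hex c0 00 02 21  ->  64:ff9b::c000:221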

Setting up Tayga


Starting from a stock Ubuntu 18.04 system, sudo apt install tayga gets us the needed NAT64 package; then edit the /etc/netplan/01-netcfg.yaml file to match your network.

The Tayga config (/etc/tayga.conf) is relatively short, ignoring all the comments. The main parameters to update are the "ipv4-addr" so it's inside your local use IPv4 block, "dynamic-pool" to match the ipv4-addr, "ipv6-addr" to your public IPv6 address on the server, and turn on the "prefix 64:ff9b::/96" option.

Tayga config listing:
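With the comments stripped out, a minimal /etc/tayga.conf using the example addresses from this post looks roughly like this:

tun-device nat64
ipv4-addr 100.65.0.1
prefix 64:ff9b::/96
dynamic-pool 100.65.0.0/16
data-dir /var/spool/tayga
ipv6-addr 2620:13b:0:1000::4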


The second file you need to edit to enable Tayga is the /etc/default/tayga file, which feeds parameters into the /etc/init.d/tayga RC file (which you don't need to touch).

The key parameters to note in the defaults file are to change RUN to yes, and to make sure both CONFIGURE_IFACE and CONFIGURE_NAT44 are turned on so the RC file will do all the iptables setup for us.

Tayga default listing:
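The relevant lines in /etc/default/tayga end up looking something like this:

RUN="yes"
CONFIGURE_IFACE="yes"
CONFIGURE_NAT44="yes"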


At this point, you should be able to sudo service tayga start and do a sanity check by looking at your IPv4 and IPv6 routing tables and seeing the prefixes referenced in the Tayga config added to a new nat64 tun network interface. Listing:
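Roughly what you're looking for (exact output will vary):

ip route show | grep nat64
# 100.65.0.0/16 dev nat64 scope link

ip -6 route show | grep nat64
# 64:ff9b::/96 dev nat64 metric 1024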


Enabling Forwarding in sysctl.conf


Unfortunately, while the Tayga RC file does a good job of setting up the tun interface and NAT iptables rules, the one thing it doesn't do is turn on IPv4 and IPv6 forwarding in general, which is needed to forward packets back and forth between the public interface and the nat64 tun device. Uncomment the two relevant lines in sysctl.conf and restart. Listing:
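The two lines to uncomment in /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1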


Setting up BGP to advertise the NAT64 prefix


For exporting the NAT64 /96 prefix back into my network's core, I decided to use the Quagga BGP daemon. This is a pretty standard configuration for Quagga, exporting a directly attached prefix, but for completeness...

sudo apt install quagga
sudo cp /usr/share/doc/quagga-core/examples/zebra.conf.sample /etc/quagga/zebra.conf
sudo touch /etc/quagga/bgpd.conf
sudo service zebra start
sudo service bgpd start
sudo vtysh

I then interactively configured BGP to peer with my core and issued the write mem command to save my work. There are plenty of Quagga BGP tutorials out there, and the exact details of this part are a little out of scope for getting NAT64 working.

Quagga BGPd config listing:
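For reference, the resulting /etc/quagga/bgpd.conf ends up looking roughly like this; the neighbor address here is a made-up stand-in for my core router's address:

router bgp 4264646464
 bgp router-id 23.152.160.4
 neighbor 2620:13b:0:1000::1 remote-as 7034
 !
 address-family ipv6
  network 64:ff9b::/96
  neighbor 2620:13b:0:1000::1 activate
 exit-address-family
!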


At this point, my network has NAT64 support, so any IPv6-only hosts which are configured with a DNS64 name server can successfully access IPv4 hosts. I wouldn't depend on this one VM to serve a whole ISP's worth of IPv6-only hosts, but for a single rack of VMs which just need to be able to reach GitHub (which is still amazingly IPv4 only) to clone my config scripts, this setup has been working great for me.

Friday, January 25, 2019

Implementing BCP214 on Catalyst 6500

When it comes to running an Internet Exchange, at its most basic level, you're a metro Ethernet provider with a range of as little as 19". The most basic IXP is a single rack mount Ethernet switch, that you plug in, power on, and start plugging customers into.

Which is great, right up until you want to unplug or move any of those customers.

The problem is that, unlike a private interconnect between two routers, with an IXP switch in the middle, if you just yank the cord on one of the routers, it may be able to see that the interface is down and start calculating better routes from other BGP peers than those on the IXP, but every other customer on the IXP sending traffic towards the poor sap who you just disconnected won't see any change. When customer A is unplugged, customer B will continue to see their link to the IXP switch as up, and will continue to send traffic towards customer A until the BGP session between them eventually times out, which can be on the order of minutes.

So in an ideal world, before yanking the cord on an IXP peer, you'd like to be able to make it seem like you've yanked the cord (without actually doing it), give BGP the minutes it takes to reconverge around the soon-to-be-down link, and then finally unplug the physical cable only once all of the traffic has drained and doing so won't result in a minute or two of black-holing traffic.

The simplest way to do this is to send an email to the customer in question the day/week/month before and say "hey, I'm going to be unplugging your port, so turn down all of your BGP sessions with others first," but it's pretty unrealistic to expect that level of cooperation from another autonomous system, and it wastes a lot of the customer's time manually turning down all their peering sessions beforehand and then turning them back up after.

A better way to do it is to actively force the BGP sessions to go down without disrupting any other traffic, then wait for the reconvergence that will happen because of that. This technique is described in BCP214, and basically involves using the IXP switches' ability to filter traffic to specifically filter the IPv4 and IPv6 BGP packets going between peers on the exchange.

I've been doing this "turn down to move the peer to another switch" action quite a bit in the last few months for FCIX, where we've been moving everyone off my Cisco 6506 to a much nicer Arista 7050S-64. The problem is that, while BCP214 helpfully provides some sample configs in the appendix to implement this technique on Cisco IOS, for some reason which is beyond my understanding of the history of IOS command syntax, the Cisco sample doesn't seem to work on my Cisco 6500 running IOS 15.1(2)SY11.

It took some digging to figure out the exact syntax needed to implement the needed ACLs and then apply them to an interface on my 6506, so just in case anyone else needs these, enjoy:

The first part of implementing BCP214 is permanently creating two ACLs for specifically dropping BGP traffic from one IXP address to another (one for IPv4 and the second for IPv6). It's important to appreciate why you want to be specific in filtering only BGP with IXP addresses on it; multi-hop BGP could be flowing over the IXP between two routers not connected to the IXP for some reason, and that traffic shouldn't be dropped. These example ACLs use the FCIX subnets of 206.80.238.0/24 and 2001:504:91::/64, but need to be modified to match your own IXP subnets.

ip access-list extended acl-v4-bcp214
 deny   tcp 206.80.238.0 0.0.0.255 eq bgp 206.80.238.0 0.0.0.255
 deny   tcp 206.80.238.0 0.0.0.255 206.80.238.0 0.0.0.255 eq bgp
 permit ip any any
!
ipv6 access-list acl-v6-bcp214
 deny tcp 2001:504:91::/64 eq bgp 2001:504:91::/64
 deny tcp 2001:504:91::/64 2001:504:91::/64 eq bgp
 permit ipv6 any any
!

There are two deny lines in each ACL because you don't know whether this customer's router happened to be the initiator of the TCP connection for BGP or not, so the source TCP port might be 179, or the destination port might be 179. You want to drop both.

With those ACLs now part of the config, when you need to cause a port to drain its traffic, you temporarily apply those two ACLs to the interface's config and give it a few minutes for the BGP sessions to time out, the routers on both sides to re-converge, and the rest of the Internet to pick up the slack, so no traffic gets black-holed when you then shut down the interface.

interface GigabitEthernet1/3
 description Peering: FCIX Peer
 switchport
 switchport access vlan 100
 switchport mode access
 ip access-group acl-v4-bcp214 in
 ipv6 traffic-filter acl-v6-bcp214 in
!
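Once the traffic has drained off the port, the cleanup is just removing the filters and shutting the interface down, something like:

interface GigabitEthernet1/3
 no ip access-group acl-v4-bcp214 in
 no ipv6 traffic-filter acl-v6-bcp214 in
 shutdown
!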