Friday, September 20, 2019

Getting Android to Work in an IPv6-Only Network

As previously covered, I recently set up NAT64 on my network so that I don't need to assign IPv4 addresses to all of my subnets and hosts for them to still have a usable connection to the Internet at large.

Unfortunately, NAT64 is only really useful when hosts can auto-configure themselves with IPv6 addresses, default routes, and DNS64 name servers to resolve against. Figuring out how to get this working, and then figuring out the workarounds required to get Android to play along, took some time...

What Was Intended


My network is running off a Cisco 6506, a relatively ancient Ethernet switch, so it isn't running what you might call the latest and greatest version of an operating system, and it took some historical research to figure out how IPv6 host configuration was even supposed to work back when IPv6 was implemented on the 6500:

  • A new host connects to the network, and sends out a neighbor discovery protocol/ICMPv6 router solicitation to discover the local IPv6 subnet address and the address of a default gateway to reach the rest of the Internet.
  • The router(s) respond with the local prefix and default gateway, but neighbor discovery did not originally include any way to also configure DNS servers, so the router sets the "other-config" flag, which tells the client that there is additional configuration available that it should request over stateless DHCPv6.
  • The client, now having a local IPv6 address, would then send out a stateless DHCPv6 query to get the addresses for some recursive DNS servers, which the router would answer.
  • The client would now have a self-configured SLAAC address, default gateway, and RDNSS (recursive DNS server), which together enable it to usefully interact with the rest of the Internet.
Great! How do you implement this in IOS?

First you need to define an IPv6 DHCP pool, which ironically isn't really an address pool at all, but just some DNS servers and a local domain name. Realistically, it could be a pool, since IOS does support prefix delegation, but we're not using that here, so we just define which DNS server addresses to hand out and, if we feel like it, a local domain name:

ipv6 dhcp pool ipv6dns
 dns-server 2001:4860:4860::6464
 dns-server 2001:4860:4860::64
 domain-name lan.example.com

Since this pool doesn't even really have any state to it, other than maybe defining a different domain-name per subnet, you can reuse the same pool on every subnet that you want DHCPv6 service on, which is what I'm doing on my router, since the domain-name doesn't really make any difference:

interface Vlan43
 description IPv6 only subnet
 no ip address
 ipv6 address 2001:DB8:0:1::1/64
 ipv6 nd other-config-flag
 ipv6 dhcp server ipv6dns

IOS sends out router advertisements by default, so to hand out addresses and default gateways we just need to not disable that; the "ipv6 nd other-config-flag" option is what sets the bit in the router advertisements telling clients to come back over DHCPv6 to also ask for the DNS server addresses.

Now, before I outline the pros and cons of this design, I will disclaim that this is just my perspective on the issue, so I'm not speaking from a place of authority, having not yet graduated elementary school when all of this was originally designed... This ND/DHCPv6 back and forth does have some upsides:
  • Since both the ND and DHCPv6 queries are "stateless", all the router/DHCP server is doing is handing out information it already knows the answers to, and it isn't adding any per-device information into any sort of database like a DHCP lease table, so a single router could now conceivably assign addresses to a metric TON of devices.
  • The separation of the DNS configuration from the IPv6 addressing configuration preserves the elegant separation of concerns that protocol designers like to see because it makes them feel more righteous.
There are also some pretty serious downsides:
  • Instead of just a DHCP server, you now need both a correctly configured router advertisement for the L3 addressing information and a correctly configured DHCPv6 server to hand out DNS information.
  • I still don't understand how hostname DNS resolution is supposed to work with this. In IPv4 land, you use something like dnsmasq, which both hands out the DHCP leases and then resolves those hostnames back to the same IP addresses. Since all of this host configuration in IPv6 is stateless, there's no lease table to feed hostnames back into DNS from, so by design that can't work... Maybe the presumption was that dynamic DNS wouldn't turn out to be a stillborn feature?
  • The Android project, for reasons which defy understanding, refuses to implement DHCPv6. 
That last point is a hugely serious issue for my network, since without DHCPv6, there is no mechanism for my Cisco 6506 to communicate to my phone what DNS servers to use over IPv6. My phone gets an IPv6 address and default gateway from my Cisco's router advertisement ICMPv6 packet, and then ignores the "other-config" flag, and is left without any way to resolve DNS records. 

Making the network... you know... useless.

For the record, how Android is presumed to work is by utilizing a later addition to the ICMPv6 router advertisement format, RFC 6106, which added a Recursive DNS Server (RDNSS) option to the router advertisement to allow DNS information to be included in the original RA broadcast along with the local prefix and default gateway information. Unfortunately, since this addition to ICMPv6 was made about fifteen years late, RDNSS options aren't supported by the version of IOS I'm running on my Cisco 6506, so it would seem I'm pretty shit out of luck when it comes to running Android on the IPv6-only subnets of my network.

My (Really Not) Beautiful Solution


So we've got a router that doesn't support the RDNSS option in its router advertisements, since it predates the concept of RA RDNSS, and we have one of the most popular consumer operating systems refusing to support DHCPv6, leaving us at an impasse for configuring DNS servers. I spent a few weeks thinking about this one, slowly digging deeper and deeper into the relevant protocols, until I eventually found myself reading the raw specifications for the ICMPv6 router advertisement packets (kill me) and realized that a router can broadcast Router Advertisement packets while indicating in the RA that it shouldn't be used as a default route.

So here's my solution, which admittedly even I think feels a little dirty, but an ugly solution that works... still works.

  • My Cisco 6506 sends out Router Advertisements specifying the local subnet, and that it should be used as a default gateway.
  • I then spun up a small Linux VM on the same subnet running radvd, which advertises that this VM shouldn't be used as a default gateway, but does advertise an RDNSS option pointing to my DNS64 resolver as the DNS server to use, since radvd supports RDNSS.
  • Any normal functional IPv6 stack will receive the Cisco's RA packet, note the "other-config" flag, and send a DHCPv6 query to receive the address for a DNS server.
  • Android receives the Cisco's RA, configures its address and gateway, but ignores the "other-config" flag. The phone then receives a second RA packet from the Linux server running radvd, which includes the RDNSS option that I'm not able to configure on my router, and hopefully Android merges the two RA configurations together into a complete subnet, gateway, and DNS server configuration for the network.
Now let us be very clear: while I was setting this up, I was very confident that this was not going to work. Expecting Android to receive two different RAs from two different hosts, each with only part of the information it needs, and combine them together, seems like an insane solution to this problem.

The bad news is that this actually worked.

So we spin up a VM, install radvd on it, and assign it one network interface on my WiFi vlan. The /etc/radvd.conf file is relatively short:
kenneth@kwfradvd:/etc$ cat radvd.conf 
interface ens160 {
    AdvSendAdvert on;
    IgnoreIfMissing on;
    AdvDefaultLifetime 0;
    RDNSS 2001:4860:4860::6464 {};
    RDNSS 2001:4860:4860::64 {};
};

The network interface happens to be ens160, the "AdvDefaultLifetime 0;" parameter indicates that this router shouldn't be used as a default gateway, and the two RDNSS parameters specify which IP addresses to use for DNS resolution. Here I show it with both of Google's DNS64 servers, but in reality I'm running my own local DNS64 server, because I'm directly peered with two root DNS servers and b.gtld, so why not run my own resolver?
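
If you want to sanity-check what clients are actually hearing, the rdisc6 tool from the ndisc6 package will send a router solicitation and print the advertisements it receives, including router lifetimes, prefixes, and any RDNSS options, so you can confirm the Cisco's RA and the radvd RA both look the way you expect (the interface name below is just an example):

# on a Linux client on the same VLAN; requires the ndisc6 package
sudo rdisc6 wlan0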

I still feel a little dirty that this works, but it does! I then installed an access point in each of my data center cabinets on this vlan advertising the SSID "_Peer@FCIX" so we're both advertising my little Internet Exchange project and I've got nice fast WiFi right next to my cabinet.

Thursday, September 19, 2019

Using Catalog Zones in BIND to Configure Slave DNS Servers

I was recently asked to help one of my friends think about re-architecting his anycast DNS service, so I've been thinking about DNS a lot on the backburner for the last few months. As part of that, I was idly reading through the whole ISC knowledge base this week, since they've got a lot of really good documentation on BIND and DNS, and I stumbled across a short article talking about catalog zones.

Catalog zones are this new concept (well, kind of new; there are speculative papers about the concept going back about a decade) where you use DNS and the existing AXFR transfer method to move configuration parameters from a master DNS server to its slaves.

In the context of the anycast DNS service I'm working on now, this solves a pretty major issue: how to push new configuration changes to all of the nodes. This DNS service is a pretty typical secondary authoritative DNS service with a hidden master. This means that our users are expected to run their own DNS servers serving their zones, we have one hidden master which transfers zones from all these various users' servers, and that master then redistributes the zones to the handful of identical slave servers distributed worldwide, which all listen on the same anycast prefix addresses.
Updating existing zones is a standard part of the DNS protocol: the customer updates their zone locally, and when they increment their SOA serial number, their server sends a notify to our hidden master, which initiates a zone transfer to get the latest version of their zone, and then sends a notify to the pool of anycast slaves so they all update their copy of the customer zone from the hidden master. Thanks to the notify feature in DNS, pushing an updated zone to all of these slave authoritative servers happens pretty quickly, so the rest of the Internet, sending queries to whichever anycast node is nearest, starts seeing the new zone updates right away.

The problem is when you want to add new zones to this whole pipeline. After our customer has created the new zone on their server and allowed transfers from our master, we need to configure our hidden master as a slave for that zone pointed at the customer's server, and we need to configure all of the anycast nodes to be slaves for the zone pointed back at our hidden master. If we miss one, you start experiencing issues that are very hard to troubleshoot, where it seems like we aren't correctly authoritative for the new zone, but only in certain regions of the Internet, depending on which specific anycast node you're hitting. Anycast systems really depend on all the instances being identical, so it doesn't matter which one you get routed to.

There are, of course, hundreds of different ways to automate the provisioning of these new zones on all of the anycast nodes, so this isn't an unsolved issue, but the possible solutions range anywhere from a bash for loop calling rsync and ssh on each node to using provisioning frameworks like Ansible to reprovision all the nodes any time the set of customer zones changes.

Catalog zones are a clever way to move this issue of configuring a slave DNS server for which zones it should be authoritative into the DNS protocol itself, by having the slaves transfer a specially formatted zone from the master which lists PTR records for each of the other zones to be authoritative for. This means that adding a new zone to the slaves no longer involves changing any of the BIND configuration files on the slave nodes and reloading; instead it's a DNS notify from the master that the catalog zone has changed, an AXFR of the catalog zone, and then parsing this zone to configure all of the listed zones to also transfer from the master. DNS is already a really good protocol for moving dynamic lists of records around using the notify/AXFR/IXFR mechanism, so using it to also manage the dynamic list of zones to do this for is, in my opinion, genius.

Labbing It at Home


So after reading the ISC article on catalog zones, and also finding an article by Jan-Piet on the matter, I decided to spin up two virtual machines and have a go at using this new feature available in BIND 9.11.

A couple things to note before getting into it:

  • Catalog zones are a pretty non-standard feature which is currently only supported by BIND. There's a draft RFC on catalog zones, which has already moved past the version 1 format supported in BIND, so changes are likely for this feature in the future.
  • Both of the tutorials I read happened to use BIND for the master serving the catalog zone, and used rndc to dynamically add new zones to the running server, but this isn't required. Particularly since we're using a hidden master configuration, there's no downside to generating the catalog zone and corresponding zone configurations on the master using any provisioning system you like, and simply restarting or reloading that name server to pick up the changes and distribute them to the slaves. The hidden master is only acting as a relay to collect all the zones from various customers and serve as a single place for all the anycast slaves to transfer zones from.
  • This catalog zone feature doesn't even depend on the master server running BIND. As far as the master is concerned, the catalog zone is just another DNS zone, which it serves just like any other zone. It's only the slaves which receive the catalog zone which need to be able to parse the catalog to dynamically add other zones based on what they receive.
We want to keep this exercise as simple as possible, so we're not doing anything involving anycast, hidden masters, or adding zones to running servers. We're only spinning up two servers, in this case both running Ubuntu 18.04, but any distro which includes BIND 9.11 should work:
  • ns1.lan.thelifeofkenneth.com (10.44.1.228) - A standard authoritative server serving the zones "catalog.ns1.lan.thelifeofkenneth.com", "zone1.example.com", and "zone2.example.com". This server is acting as our source of zone transfers, so there's nothing special going on here except sending notify messages and zone transfers to our slave DNS server.
  • ns2.lan.thelifeofkenneth.com (10.44.1.234) - A slave DNS server running BIND 9.11 and configured to be a slave to ns1 for the zone "catalog.ns1.lan.thelifeofkenneth.com" and to use this zone as a catalog zone with ns1 (10.44.1.228) as the default master. Via this catalog zone, ns2 will add "zone1.example.com" and "zone2.example.com" and transfer those zones from ns1.
We first want to set up ns1, which is a normal authoritative DNS configuration, with the one addition being that I enabled logging for zone transfers, since that's what we're playing with here.
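
The named.conf on ns1 boils down to something like this (the file paths and channel names are just what I picked; adjust to taste):

options {
        directory "/var/cache/bind";
        recursion no;
        allow-transfer { any; };        // lab only; lock this down in production
};

logging {
        channel zone_transfers {
                file "/var/cache/bind/zone_transfers";
                print-time yes;
                print-category yes;
                severity info;
        };
        category xfer-in  { zone_transfers; };
        category xfer-out { zone_transfers; };
        category notify   { zone_transfers; };
};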

Nothing too unexpected there; turn off recursion service, and turn on logging.

The only particularly unusual thing about the zone definitions is that I'm explicitly listing the IP address of the slave server under "also-notify". I'm doing that here because I couldn't get it to work based on the NS records like I think it should, but that might also be because I'm using zones that aren't actually delegated to me.

In my actual application, I'll need to use also-notify anyway, because I need to send notify messages to every anycast node instance on its unicast address. In a real application I would also lock down zone transfers to only allow my slaves to transfer zones from the master, since it's generally bad practice to allow anyone to download your whole zone file.
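
The zone definitions on ns1 end up looking roughly like this (file paths are arbitrary, 10.44.1.234 is ns2, and I'm leaving transfers open since this is just a lab):

zone "catalog.ns1.lan.thelifeofkenneth.com" {
        type master;
        file "/etc/bind/db.catalog";
        also-notify { 10.44.1.234; };
};

zone "zone1.example.com" {
        type master;
        file "/etc/bind/db.zone1.example.com";
        also-notify { 10.44.1.234; };
};

zone "zone2.example.com" {
        type master;
        file "/etc/bind/db.zone2.example.com";
        also-notify { 10.44.1.234; };
};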

The two example.com zone files are also pretty unremarkable. 
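
Each one is just a minimal zone with a test record we can query later; something along these lines (the serial and timers are placeholders):

$TTL 300
@       IN      SOA     ns1.lan.thelifeofkenneth.com. kenneth.thelifeofkenneth.com. (
                        2019091901 ; serial
                        3600       ; refresh
                        600        ; retry
                        86400      ; expire
                        300 )      ; negative TTL
        IN      NS      ns1.lan.thelifeofkenneth.com.
        IN      NS      ns2.lan.thelifeofkenneth.com.
test    IN      TXT     "Hello from zone1"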

Up until this point, you haven't actually seen anything relevant to the catalog zone, so this is where you should start paying attention! The last file on ns1 of importance is the catalog file itself, which we'll dig into next:

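Here it is, more or less (the SOA timers are placeholders, and the second hash below just stands in for whatever the hashing script further down spits out for zone2.example.com):

@       3600 IN SOA ns1.lan.thelifeofkenneth.com. kenneth.thelifeofkenneth.com. ( 1 3600 600 86400 3600 )

@       3600 IN NS  ns1.lan.thelifeofkenneth.com.

version 3600 IN TXT "1"

ddb8c2c4b7c59a9a3344cc034ccb8637f89ff997.zones 3600 IN PTR zone1.example.com.
<hash-of-zone2.example.com>.zones              3600 IN PTR zone2.example.com.
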
Ok, that might look pretty hairy, so let's step through it line by line.
  • Line 1: A standard start of authority record for the zone. A lot of the examples use dummy zones like "catalog.meta" or "catalog.example" for the catalog zone, but I don't like trying to come up with fake TLDs which I just have to hope aren't going to become valid TLDs later, so I named my catalog zone under my name server's hostname. In reality, the name of this zone does not matter, because no one should ever be sending queries against it; it's just a zone to be transferred to the slaves and processed there.
  • Line 3: Every zone needs an NS record, which again can be made a dummy record if you'd like, because no one should ever be querying this zone. 
  • Line 5: To tell the slaves to parse this catalog zone as a version 1 catalog format, we create a TXT record for "version" with a value of "1". It's important to remember the significance of a trailing dot on record names! Since "version" doesn't end in a dot, the zone is implicitly appended to it, so you could also define this record as "version.catalog.ns1.lan.thelifeofkenneth.com." but that's a lot of typing to be repeated, so we don't.
  • Lines 7 and 8: This is where the actual magic happens, by defining unique PTR records with values for each of the zones which this catalog file is listing for the slaves to be authoritative for. This is somewhat of an abuse of the PTR record meaning, but adding new record types has proven impractical, so here we are. Each record is a [unique identifier].zones.catalog.... etc.
The one trick with the version 1 catalog zone format implemented by BIND is that the value of the unique identifier for each cataloged zone is pretty specific. It is the hexadecimal representation of the SHA1 sum of the on-the-wire format of the cataloged zone's name.

I've thought about it quite a bit, and while I can see some advantages to using a stable unique identifier like this per PTR record, I don't grasp why BIND should strictly require it. Reading the version 2 spec in the draft RFC, it looks like they might loosen this up in the future, but for now we need to generate the exact hostname expected for each zone. I did this by adding a python script to my local system based on Jan-Piet's code:

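The whole script is basically just hashing the wire format of the zone name; mine looks roughly like this (no error handling, zone name taken as the first argument):

#!/usr/bin/env python3
# dns-catalog-hash: print the version 1 catalog zone label for a zone name,
# i.e. the hex SHA1 of the zone name in DNS wire format.
import hashlib
import sys

import dns.name   # from the dnspython package

zone = dns.name.from_text(sys.argv[1])   # makes the name absolute (adds the trailing dot)
print(hashlib.sha1(zone.to_wire()).hexdigest())
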
I needed to install the dnspython package (pip3 install dnspython), but I could then use this python script to calculate the hash for each zone, and add it to my catalog zone by appending ".zones" to it and adding it as a PTR record with the value of the zone itself. So looking back at line 7 of the catalog zone file, by running "dns-catalog-hash zone1.example.com" the python script spit out the hash "ddb8c2c4b7c59a9a3344cc034ccb8637f89ff997" which is why I used that for the record name.

Now before we talk about the slave name server, I want to again emphasize that we haven't utilized any unusual features yet. NS1 is just a normal DNS server serving normal DNS zones, so generate the catalog zone file any way you like, and ns1 can be running any DNS daemon which you like. Adding each new zone to ns1 involves adding it to the daemon config like usual, and the only additional step is also adding it as a PTR record to the catalog zone.

On to ns2! This is where things start to get exciting, because what I show you here will be the only change ever needed on ns2 to continue to serve any additional zones we like based on the catalog zone.

Again, we've turned off recursion, and turned on transfer logging to help us see what's going on, but the important addition to the BIND options config is the catalog-zones directive. This tells the slave to parse the named zone as a catalog zone. We explicitly tell it to assume the master for each new zone should be 10.44.1.228, but the catalog zone format actually supports explicitly defining per-zone configuration directives like masters, etc. So appreciate that we're using the bare minimum of the catalog zone feature here: just adding new zones to transfer from the default master.
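
In named.conf.options terms, that addition looks something like this (logging config omitted; it's the same as on ns1):

options {
        directory "/var/cache/bind";
        recursion no;

        catalog-zones {
                zone "catalog.ns1.lan.thelifeofkenneth.com"
                        default-masters { 10.44.1.228; };
        };
};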

This is the totally cool part about catalog zones right here; our local zones config file just tells the slave where to get the catalog from, and BIND takes it from there based on what it gets from the catalog.
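
The named.conf.local on ns2 is a single stanza along these lines (the file name is just whatever you want the cached copy of the catalog saved as):

zone "catalog.ns1.lan.thelifeofkenneth.com" {
        type slave;
        file "catalog.ns1.lan.thelifeofkenneth.com";
        masters { 10.44.1.228; };
};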

If you fire up both of these daemons, with IP addresses and domain names changed to suit your lab environment, and watch the /var/cache/bind/zone_transfers log files, you should see:
  • ns1 fires off notify messages for all the zones
  • ns2 starts a transfer of catalog.ns1.lan.thelifeofkenneth.com and processes it
  • Based on that catalog file, ns2 starts additional transfers for zone1.example.com and zone2.example.com
  • Both ns1 and ns2 are now authoritative for zone[12].example.com!

To verify that ns2 is being authoritative like it should be, you can send it queries like "dig txt test.zone1.example.com @ns2.lan.thelifeofkenneth.com" and get the appropriate answer back. You can also look in the ns2:/var/cache/bind/ directory to confirm that it has local cached copies of the catalog and example.com zones.

The catalog file is cached in whatever filename you set for it in the named.conf.local file, but we never told it what filenames to use for each of the cataloged zones, so BIND came up with its own filenames for zone[12].example.com starting with "__catz__" and based on the catalog zone's name and each zone's name itself.

Final thoughts


I find this whole catalog zone concept really appealing since it's such an elegant solution to exactly the problem I've been pondering for quite a while.

It's important to note that this set of example configs isn't production-worthy, since this was just me in a sandbox over two evenings. A couple of problems off the top of my head:
  • You should be locking down zone transfers so only the slaves can AXFR the catalog zone and all of your other zones, since otherwise someone could enumerate all of your zones and all the hosts on those zones.
  • You should probably disallow any queries against the catalog zone entirely. I didn't, since it made debugging the zone transfers easier, but I can see the downside to answering queries out of the catalog. It wouldn't help enumerate zones, since it'd be easier to guess zone names and query for their SOAs than to guess the SHA1 sums of the same zone names and ask for the corresponding PTR records out of the catalog, but if you start using more sophisticated features of the catalog zone, like defining per-zone masters or other configuration parameters, you might not want those to be available for query by the public.

Making a Walnut Guest WiFi Coaster


I was recently reading about the 13.56MHz NFC protocol and the standard tags you can write and read from your phone, when I realized that one of the features of NFC is that you can program tags with WiFi credentials, via the concept of NDEF records, which let you encode URLs, vCards, plain text, etc.

I thought this would be a good gift idea, so I bought some NFC tags on eBay, and then built a coaster around one using walnut and sandblasted glass.



The main thing for building one of these is getting some NFC tags, which you can easily find on Amazon, and writing your WiFi credentials to one as an NDEF record, which is possible using various phone apps, including the TagWriter app from NXP, which is surprisingly good for being a vendor app.

Thursday, September 5, 2019

Adding Webseed URLs to Torrent Files

I was recently hanging out on a Slack discussing the deficiencies in the BitTorrent protocol for fast file distribution. A decade ago, when Linux mirrors tended to melt down on release day, BitTorrent was seen as a boon for being able to distribute the relatively large ISO files to everyone trying to get them, and the peer-to-peer nature of the protocol meant that the swarm tended to scale with the popularity of the torrent, kind of by definition.

There were a few important points raised during this discussion (helped by the fact that one of the participants had actually presented a paper on the topic):

  1. HTTP-based content distribution networks have gotten VASTLY better in the last decade, so you tend not to see servers hugged to death anymore when the admins are expecting a lot of traffic.
  2. Users tend to see slower downloads from the Bittorrent swarm than they do from single healthy HTTP servers, with a very wide deviation as a function of the countless knobs exposed to the user in Bittorrent clients.
  3. Maintaining Bittorrent seedbox infrastructure in addition to the existing HTTP infrastructure is additional administrative overhead for the content creators, which tends to not be leveraged as well as the HTTP infrastructure for several reasons, including Bittorrent's hesitancy to really scale up traffic, its far from optimal access patterns across storage, the plethora of abstract knobs which seem to have a large impact on the utilization of seedboxes, etc.
  4. The torrent trackers are still a central point of failure for distribution, and now the content creator is having to deal with a ton of requests against a stateful database instead of just serving read-only files from a cluster of HTTP servers which can trivially scale horizontally.
  5. Torrent files are often treated as second class citizens since they aren't as user-friendly as an HTTP link, and may only be generated as part of releases to quiet the "hippies" who still think that Bittorrent is relevant in the age of big gun CDNs.
  6. Torrent availability might be poor at the beginning and end of a torrent's life cycle, since seedboxes tend to limit how many torrents they're actively seeding. When a Linux distro drops fifteen different spins of their release, their seedbox will tend to only seed a few of them at a time and you'll see completely dead torrents several hours if not days into the release cycle. 
As any good nerd discussion on Slack goes, we started digging into the finer details of the Bittorrent specification like the Distributed Hash Table that helped reduce the dependence on the central tracker, peer selection algorithms and their tradeoffs, and finally the concept of webseed.

Webseed is a pretty interesting concept which was a late addition to BitTorrent where you can include URLs to HTTP servers serving the torrent contents, to hopefully give you most of the benefits of both protocols: the modern bandwidth scalability of HTTP, and the distributed fault tolerance and inherent popularity scaling of BitTorrent.

I was aware of webseed, but haven't seen it actually used in years, so I decided to dig into it and see what I could learn about it and how it fits into the torrent file structure.

The torrent file, which is the small description database which you use to start downloading all of the actual content of a torrent, at the very least contains a list of the files in the torrent and checksums for each of the fixed-size chunks making up those files. Of course, instead of using a popular object serializer like XML or JSON (which I appreciate might not have really been as popular at the inception of Bittorrent), the torrent file uses a format I've never seen anywhere else called BEncoding.

The BEncoding format is relatively simple; key-value pairs can be stored as byte strings or integers, and the file format supports dictionaries and lists, which can contain sets of further byte strings, integers, or even other lists/dictionaries. Bittorrent then uses this BEncoding format to create a dictionary named "info" which contains a list of the file names and chunk hashes which define the identity of a torrent swarm, but beyond this one dictionary in the file, you can modify anything else in the database without changing the identity of the swarm, including which tracker to use as "announce" byte-strings, or "announce-list" lists of byte-strings, comments, creation dates, etc.

Fortunately, the BEncoding format is relatively human readable, since length fields are encoded as ASCII integers and the delimiters are characters like ':', 'i', 'l', 'd', and 'e', but unfortunately it's all encoded as a single line with no breaks, so trying to edit this database by hand with a text editor might be a little hairy.
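
For a flavor of what the encoding looks like, here are a few pieces pulled apart (the notes after the ';' are my annotations, not part of the format):

4:spam                   ; the byte string "spam", prefixed with its length
i1567663200e             ; an integer, e.g. a Unix timestamp creation date
l4:spam4:eggse           ; a list containing the byte strings "spam" and "eggs"
d8:announce35:http://tracker.example.com/announcee   ; a one-key dictionary
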
I wasn't able to find a tremendous amount of tooling for interactively editing BEncode files; there exist a few online "torrent editors" which give you basic access to changing some of the fields which aren't part of the info dictionary, but none of them seemed to give the arbitrary key-value editing capabilities I needed to play with webseed, so I settled on a Windows tool called BEncode Editor. The nice thing about this tool is that it's designed as an arbitrary BEncode editor, instead of specifically a torrent editor, so it has that authentic "no training wheels included" hacker feel to it. User beware.

As an example, I grabbed the torrent file for the eXoDOS v4 collection, which is a huge collection of 7000 DOS games with various builds of DOSBOX to make it all work on a modern system. Opening the torrent file in BEncode Editor, you can see the main info dictionary at the end of the root dictionary, which is the part you don't want to touch, since the info dictionary is what defines the identity of the torrent. In addition to that, you can see five other elements in the root dictionary: a 43 byte byte-string named "announce", which is a URI for the primary tracker to use to announce yourself to the rest of the swarm; a list of 20 elements named "announce-list", which is alternative trackers to use (the file likely contains both the single tracker and a list of trackers for backwards compatibility with BitTorrent clients which predate the concept of announce-lists?); some byte strings labeled "comment" and "created by"; and an integer named "creation date", which looks like a Unix timestamp.

Cool! So at this point, we have an interactive tool to inspect and modify a BEncode database, and know which parts to not touch to avoid breaking things (The "info" dictionary).

Now back to the original point of somehow adding webseed URLs to a torrent file


Webseeding is defined in Bittorrent specification BEP_0019, which I didn't find particularly clear, but the main takeaway for me is that to enable webseeding, I just need to add a list to the torrent named "url-list", and then add byte-string elements to that list which are URLs to HTTP/FTP servers serving the same contents.

So first step: log into one of my web servers, download the torrent, and throw the contents in an open directory. (In my case, https://mirror.thelifeofkenneth.com/lib/) For actual content creators, this should be part of their normal release workflow for HTTP hosting of the content, so this step is only really needed when you're retrofitting webseed into an existing torrent.

Now we start editing the torrent file by adding a "url-list" list to the root dictionary. The part I found a little tricky was figuring out how to add the byte-string child to the list, which is done in BEncode Editor by clicking on the empty "url-list" list, clicking "add", and specifying that the new element should be added as a "child" of the current element.

Referring back to BEP_0019, if I end the URL with a forward slash, the client should append the info['name'] to the URL, so the binary string I'm adding as a child to the list is "https://mirror.thelifeofkenneth.com/lib/" such that the client will append "eXoDOS" to it, looking for the content at "https://mirror.thelifeofkenneth.com/lib/eXoDOS/", which is correct.
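
If you'd rather script this step than click through a GUI editor, something like the following sketch should do the same thing; I'm assuming the third-party bencode.py package here (imported as bencodepy), and I haven't battle-tested this, so treat it as a starting point:

#!/usr/bin/env python3
# add-webseed.py <in.torrent> <out.torrent> <url>
# Adds a BEP 19 "url-list" webseed entry to the root dictionary of a torrent,
# leaving the "info" dictionary (and therefore the swarm identity) untouched.
import sys

import bencodepy   # pip3 install bencode.py

infile, outfile, url = sys.argv[1], sys.argv[2], sys.argv[3]

with open(infile, "rb") as f:
    torrent = bencodepy.decode(f.read())   # keys come back as byte strings

torrent.setdefault(b"url-list", []).append(url.encode())

with open(outfile, "wb") as f:
    f.write(bencodepy.encode(torrent))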

Save this file as a new .torrent file, and success! Now I have a version of the eXoDOS torrent with the swarm performance supplemented by my own HTTP server! The same could be done for any other torrent where the exact same content is available via HTTP, and honestly I'm a little surprised that I don't tend to see Linux distros using this, since it removes the need for them to commit to maintaining torrent infrastructure: the torrent swarm can at least survive off of an HTTP server, which the content creator is clearly already running anyway.