<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
	<title>KerbyHughes.com</title>
	<link>https://kerbyhughes.com</link>
	<description>Website of Kerby Hughes</description>
	<atom:link href="https://kerbyhughes.com/rss.xml" rel="self" type="application/rss+xml"></atom:link>
	<item>
		<title>How this site is hosted</title>
		<link>http://kerbyhughes.com/2023/06/29/how-this-site-is-hosted.html</link>
		<guid>http://kerbyhughes.com/2023/06/29/how-this-site-is-hosted.html</guid>
		<description>&lt;a class=&#34;title&#34; href=&#34;/2023/06/29/how-this-site-is-hosted.html&#34;&gt;&#xA;How this site is hosted&#xA;&lt;/a&gt;&#xA;&lt;/h2&gt;&#xA;&lt;div class=&#34;date&#34;&gt;2023-06-29&lt;/div&gt;&#xA;&lt;div class=&#34;content&#34;&gt;&#xA;&lt;p&gt;&#xA;Update: This post has been moved to the &lt;a href=&#34;/colophon.html&#34;&gt;Colophon&lt;/a&gt;, where it will be kept updated as changes to the site are made.&#xA;&lt;/p&gt;&#xA;&lt;/div&gt;</description>
		<pubDate>Thu, 29 Jun 2023 14:30:00 -0400</pubDate>
	</item>
	<item>
		<title>Homelab 2023</title>
		<link>http://kerbyhughes.com/2023/07/22/homelab-2023.html</link>
		<guid>http://kerbyhughes.com/2023/07/22/homelab-2023.html</guid>
		<description>    &lt;a class=&#34;title&#34; href=&#34;/2023/07/22/homelab-2023.html&#34;&gt;&#xA;    Homelab 2023&#xA;    &lt;/a&gt;&#xA;&lt;/h2&gt;&#xA;&lt;div class=&#34;date&#34;&gt;2023-07-22&lt;/div&gt;&#xA;&lt;div class=&#34;content&#34;&gt;&#xA;&lt;p&gt;&#xA;&#x9;This is the first of an annual series of snapshots about the state of my homelab. Self-hosting is a hobby, and one that I find provides online privacy and freedom. Running your own infrastructure provides a space to test and deploy anything you want.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I have a few requirements for the homelab - mostly around security as well as availability for &lt;i&gt;some&lt;/i&gt; components, like hosting this website and data backup. The rest is best effort. This is not the cheapest or most efficient setup, and it&#39;s not automated in almost any way. And while I do sometimes test tools and techniques that I might be interested in using in production, this isn&#39;t meant to be an example of that type of work.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;It&#39;s a playground.&#xA;&lt;/p&gt;&#xA;&lt;/p&gt;&#xA;&#x9;&lt;h4&gt;Hardware&lt;/h4&gt;&#xA;&#x9;&lt;ul&gt;&#xA;&#x9;&lt;li&gt;MacBook Pro&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Mac Studio&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Synology 1821+&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Synology 1522+&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Digital Ocean VPS&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Vultr VPS&lt;/li&gt;&#xA;&#x9;&lt;li&gt;iPhone&lt;/li&gt;&#xA;&#x9;&lt;/ul&gt;&#xA;&lt;p&gt;&#xA;    The primary server for my Homelab is a Mac Studio, running &lt;a href=&#34;https://asahilinux.org/&#34;&gt;Asahi linux&lt;/a&gt;. Running a desktop Mac didn&#39;t catch on for me, but this makes a fantastic server. It&#39;s a 10-core ARM chip, with 32 GB of RAM, 10 Gb ethernet, and crazy-fast internal storage. 
It&#39;s silent and low-power, which are requirements for the homelab, especially as this is currently a small-apartment-lab.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Other &#34;compute&#34; nodes in the homelab are currently my MacBook Pro, and two Synologies. One NAS is local to the Mac Studio and one is offsite, used for data replication and also providing offsite compute as needed.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Networking&lt;/h4&gt;&#xA;&#x9;I use a combination of &lt;a href=&#34;https://nebula.defined.net/docs/&#34;&gt;Nebula&lt;/a&gt; and &lt;a href=&#34;https://developers.cloudflare.com/cloudflare-one/connections/connect-networks&#34;&gt;Cloudflared&lt;/a&gt; tunnels to network my infrastructure in a secure way. The main goals I had for this: first, I should not need any cooperation from routers or firewalls - parts of the homelab run behind routers that were previously configured and are working for family members. I&#39;ll assign static IPs if desirable, but no port forwarding, VPN configurations, or anything like that. I don&#39;t want to require any particular routing hardware or software. Second, no open ports on home internet connections, even for UDP, which rules out some raw WireGuard configurations.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Nebula&#39;s main purpose is to connect my Synology NAS boxes between houses for backup replication, and to connect my laptop and phone to the main compute node for ingress to services (see Envoy below). I run two lighthouses on VPSes - one in DigitalOcean and one in Vultr, in different metros. So far, this is my favorite mesh network product I&#39;ve used, and I highly recommend it for self-hosting. It provides authentication and authorization via PKI certificates and has good tooling for generating a CA and signing certs. 
And once you have public lighthouses (these do have a single open UDP port, but otherwise even ssh is disabled - I can always enable it via the cloud dashboard if I need to do maintenance) - the rest is all handled by the clients. No need for any hosted databases or other single points of failure.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;If you&#39;re curious about the type of problem this class of software solves, &lt;a href=&#34;https://tailscale.com/blog/how-nat-traversal-works/&#34;&gt;this article from Tailscale&lt;/a&gt; is a good primer.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;All of the hardware listed above runs a Nebula client - even my iPhone - providing access to all the machines. Notably, a domain that I use for my private cloud resolves to the Nebula address of the main linux server, which runs Envoy (see below), so from any client with Nebula I can access my self-hosted services via subdomains. Testing out Nebula&#39;s DNS support is on the TODO list.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Cloudflared tunnels provide a second, backup connectivity option for access to the various nodes. If I needed to do maintenance on Nebula for example, I can get into the network space of a node via Warp and the cloudflared tunnel also running on the node. I&#39;d like to test out the Warp to Warp functionality as a backup for Nebula, but since the primary use of Nebula is shuttling gigabytes of backup data (specifically BTRFS snapshots), I&#39;d rather do that directly between peers than through a proxy.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Storage&lt;/h4&gt;&#xA;    For storage I run a Synology 1821+, which is an 8-bay NAS, currently with about 16TB of usable capacity. Like the Mac Studio, this was not originally spec&#39;d to be homelab storage; it&#39;s the anchor of my backup strategy. But it works great for the task. 
I have a large share allocated just for media and storage that the homelab compute nodes can mount.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;i&gt;What&#39;s next:&lt;/i&gt; I added a 10GbE network card to the Synology, and want to get that working with the Mac Studio&#39;s 10GbE port in Asahi linux. I might experiment with NVMe storage pools, but that effort would likely be better spent learning FreeNAS, BTRFS directly, or other storage architectures (e.g. Rook/Ceph). Right now mounting the Synology to a &lt;code&gt;/mnt&lt;/code&gt; destination in Asahi linux is rock solid, but SMB over less reliable networking is not ideal. I want each Synology to provide the storage for its &#34;site&#34;. This might be as a CSI driver, or selecting an object store to run (e.g. SeaweedFS) either directly on the Synology or on an attached compute node.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Network&lt;/h4&gt;&#xA;    The main network hardware feature of the homelab in 2023 is that I have a separate WAN for it. This started as a way to have WAN-failover, but I ended up using it to provide a completely isolated playground for my homelab without any worry of affecting the primary WAN, which is critical as it&#39;s used for work-from-home access. The secondary WAN is a Verizon 5G Home Internet gateway, which I&#39;m lucky enough to have access to in the homelab&#39;s current location. The everyday work-from-home WAN is a cable ISP. I call these the low-latency network (cable), used for general purpose and gaming workloads, and the high-latency network (5G), used for the homelab. I can swap a cable to utilize the 5G when the cable connection goes down, which inevitably happens a couple of times a year.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    There are a surprising number of upsides to dual WANs for homelab use. 
I can easily test various tunneling protocols and implementations and see how they behave, test bandwidth expectations for accessing the homelab offsite, and generally break whatever I want without impact to others.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Besides a connection to the homelab WAN, the Mac Studio is additionally connected to our main router via WiFi. This connection to the main LAN allows for a sufficiently high-bandwidth connection for our Apple TV to stream 4K video from the homelab.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;i&gt;What&#39;s next:&lt;/i&gt; The home of the lab will be relocating in a couple of months, and will lose access to 5G options. An upgrade to fiber will also have to wait. Starlink &lt;i&gt;is&lt;/i&gt; available; perhaps that will make an appearance? I should document as much performance information as I can from 5G before returning the equipment.&#xA;    &lt;/p&gt;&#xA;    &lt;p&gt;&#xA;    &lt;h3&gt;Software&lt;/h3&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;All of the following programs run behind Envoy as the reverse proxy. Via Nebula networking, all these services are available to all other machines, regardless of location.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://github.com/envoyproxy/envoy&#34;&gt;Envoy&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;I learned Envoy at work, where we run it in production. It&#39;s a sledgehammer for a nail in the homelab, but it&#39;s by far my preferred reverse proxy at this point, and there&#39;s actually a pretty simple way to run it. 
You can generate a static config, but have it load listeners (the thing that parses a request and figures out where to send it) and the clusters (the &#34;upstream&#34; service that listeners send requests to) in &lt;a href=&#34;https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem&#34;&gt;separate files that are loaded on startup&lt;/a&gt;.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;i&gt;What&#39;s next:&lt;/i&gt; There&#39;s no ACME client implementation in Envoy, so adding something to get signed TLS certificates would be nice. &lt;a href=&#34;https://github.com/FiloSottile/mkcert&#34;&gt;mkcert&lt;/a&gt; looks promising to get a cert installed into the root stores of myriad machines, which is the real challenge.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://about.gitea.com/&#34;&gt;Gitea&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Works great as a git remote destination with a web interface. I haven&#39;t explored many of the features, but it&#39;s been solid.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://hub.docker.com/_/registry&#34;&gt;Docker Registry&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Gitea can also act as a container registry, but I ran into some issues with HTTP 206 Partial Content ranges when pulling images between machines over Nebula with high latency, and the official registry image didn&#39;t seem to have the same issue with the default config. It&#39;s also nice to keep these separated. Any images I build and distribute across the homelab (like the one serving this website) are pushed to the registry, and can be pulled down by any homelab machine. Accessing an &#34;external&#34; registry from a client, even if it&#39;s over an overlay network in this case, requires TLS, so an Envoy listener fronts this service to terminate TLS. 
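On the client side, that arrangement comes down to one Docker daemon setting; a minimal sketch of /etc/docker/daemon.json, with a hypothetical registry hostname and port:

```json
{
  "insecure-registries": ["registry.homelab.internal:5000"]
}
```

The insecure-registries key is part of the standard daemon configuration; the daemon needs a restart to pick up the change.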
As long as the registry domain is added to the list of insecure registries in all clients&#39; Docker configs, a self-signed certificate is sufficient.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I&#39;d also like to make this a pull-through cache, but so far haven&#39;t had time to debug some auth issues with this configuration. I&#39;d ideally like to run a more custom registry with more options for behavior than just proxying Docker Hub.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://github.com/qdm12/gluetun&#34;&gt;Gluetun&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Handy container for connecting to a WireGuard peer.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://prometheus.io/&#34;&gt;Prometheus&lt;/a&gt; and &lt;a href=&#34;https://grafana.com/&#34;&gt;Grafana&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Vanilla Prometheus and Grafana containerized installations. At the moment I&#39;m only running cAdvisor for basic metrics on the main server, and scraping Envoy, which has a rich stats endpoint. The hard part is building out all the Grafana dashboards.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;The next project for monitoring is to run some type of metrics generator on the Synologies that can be scraped by Prometheus.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://www.navidrome.org/&#34;&gt;Navidrome&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Music server. This runs on the main server and mounts a share on the NAS with all my music. There&#39;s a web interface, but it also implements the Subsonic API, so I can run clients on my laptop and iPhone that use Navidrome as the backend. Currently using Sonixd on my Mac, and Substreamer on my iPhone.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://jellyfin.org/&#34;&gt;Jellyfin&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Video server. Similar to Navidrome, the API is the main feature. I run Infuse on my Mac and our Apple TV and can stream anything added to a share of the Synology. 
I&#39;m not yet doing any DNS resolving locally on the main LAN, so for now I just allocate a static IP for the main server, and point Infuse to that.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://netboot.xyz/&#34;&gt;netbootxyz&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;I want to be able to PXE boot machines and VMs from the homelab LAN and bootstrap into the Nebula overlay network. This does require some assistance from the DHCP server, so I need to come up with a good solution to that.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;https://github.com/miniflux/v2&#34;&gt;Miniflux&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;RSS reader with a nice web UI.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;a href=&#34;/colophon.html&#34;&gt;kerby_website&lt;/a&gt;&lt;/h4&gt;&#xA;&#x9;Origin for this website. It runs two containers - a Go web server and cloudflared that provides the networking for the server to reverse-proxy requests from Cloudflare&#39;s edge.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;If I wasn&#39;t using cloudflared, I&#39;d be using WireGuard to connect directly to VPS instances that I&#39;d point to from the DNS records for my domain. I&#39;d run Envoy on these to provide load balancing, circuit breakers, and access logging.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;This currently runs on the Mac Studio, both Synologies, and my laptop. I can ensure at least one of these machines is available at any time to serve this site.&#xA;&lt;/p&gt;&#xA;&lt;/p&gt;&#xA;&lt;/div&gt;&#xA;</description>
		<pubDate>Sat, 22 Jul 2023 20:00:00 -0400</pubDate>
	</item>
	<item>
		<title>IPv6 and IPv4 Translation</title>
		<link>http://kerbyhughes.com/2024/02/22/ipv6-and-ipv4-translation.html</link>
		<guid>http://kerbyhughes.com/2024/02/22/ipv6-and-ipv4-translation.html</guid>
		<description>&lt;a class=&#34;title&#34; href=&#34;/2024/02/22/ipv6-and-ipv4-translation.html&#34;&gt;&#xA;IPv6 and IPv4 Translation&#xA;&lt;/a&gt;&#xA;&lt;/h2&gt;&#xA;&lt;div class=&#34;date&#34;&gt;2024-02-22&lt;/div&gt;&#xA;&lt;div class=&#34;content&#34;&gt;&#xA;&lt;p&gt;&#xA;This post is an overview of the technologies needed to convert between the IPv4 and IPv6 protocols and when they might be used. The original inspiration for this was building a Nebula mesh network to connect machines, some of which had dual-stack, some of which were IPv4-only, and some of which I was bringing up in a cloud environment with only IPv6 networking. Some hosting providers give a discount on VMs if you run IPv6-only, or charge additional fees to allocate IPv4 addresses. For reasons we&#39;ll see, trying to 1) run peer-to-peer software 2) on a linux host 3) with only IPv6 networking 4) connecting to an IPv4 host is a difficult scenario. There are a number of rabbit holes left unexplored, but this post will never see the light of day if I try to include all of them here, and I suggest reading through the linked RFCs.&lt;/p&gt;&#xA;&lt;p&gt; &#xA;&#x9;&lt;h3&gt;Building a network&lt;/h3&gt;&#xA;First, let&#39;s review what the Internet is, at least from layer 3 of the &lt;a href=&#34;https://en.wikipedia.org/wiki/OSI_model&#34;&gt;OSI model&lt;/a&gt;. We&#39;re not interested in the physical and data link layers beneath us or the protocols running on top; we just need to build IP packets and route them between computers.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;An entity wishing to provide service to the internet purchases some IP space from their regional registry &lt;a id=&#34;footnote-arin-ref&#34; href=&#34;#footnote-arin&#34;&gt;[1]&lt;/a&gt;, and becomes an &lt;a href=&#34;https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/&#34;&gt;Autonomous System&lt;/a&gt;, identified by an Autonomous System Number (ASN). 
For IPv4, that IP space is some portion of the 32-bit addressable space. You can divide up an address space by significant bits, so if you own a /16 prefix, everything that is a more-specific subset of those first 16 bits is within your space. For example, if it&#39;s 1984 and you&#39;re a &lt;a href=&#34;https://bgp.tools/as/17&#34;&gt;university&lt;/a&gt;, you might own the 128.10.0.0/16 prefix. In order to join the Internet, you peer with other Autonomous Systems near you, and announce your prefix over the (external) Border Gateway Protocol (BGP), which advertises routing information about your prefix to neighbors. All of these providers together form the Internet, and as a result we get globally routable addresses. Does IPv6 change this? Not really, except with a 128-bit address space, we haven&#39;t allocated anywhere near all of the space.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Having an &lt;a href=&#34;https://blog.apnic.net/2023/01/06/bgp-in-2022-the-routing-table&#34;&gt;understanding of BGP advertisements&lt;/a&gt; is important for knowing what we can and can&#39;t do with only access to the code on a single machine within a network. We may have full control over the packets on any interface connected to the machine, but in most cases we need assistance from the access/transport network (the ISP) to advertise the address space we&#39;re using. Even if we can construct an IPv6 address with an embedded IPv4 address, it isn&#39;t routable without assistance from the network.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h3&gt;Getting an address&lt;/h3&gt;&#xA;Before we can even send packets, we need to have an IP address ourselves if we want the response to our packets to get back to us. With IPv4, we usually get a single IPv4 address from our ISP, and then our router can multiplex private addresses over that single address to the internet -- a stateful NAT (Network Address Translation). 
A new device on your network talks to your router, and receives a private IPv4 address via DHCP request. The NAT table on the router is updated so that when the device reaches out to a public IPv4 address, the router rewrites the packet source as if it came from the public IPv4 address. All devices behind your NAT show up as coming from this public IPv4 address externally. As hinted at by &lt;i&gt;private&lt;/i&gt;, the local IPv4 address is not globally routable. So if someone wants to reach out to a device behind your NAT, they have to use that public IPv4 address, and based on firewall rules, your router picks the private address to forward the packet to, usually based on the port (port-forwarding). IPv6 has ULAs, Unique Local Addresses, which are somewhat equivalent to IPv4 private address space, but they are not globally routable, and generally this isn&#39;t the method we would prefer for getting an IPv6 address.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;IPv6 is meant to be globally routable, from one device to another. In that way, it&#39;s actually correcting what ended up not being possible over IPv4. Each device should be reachable on the internet via a unique address.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;IPv6 is usually handed out by your ISP via DHCPv6 request, similar to how you get a single IPv4 address to NAT via DHCP. However, with DHCPv6, your router can request not just a single address (which it would then have to NAT), but instead an entire prefix that the ISP is said to &lt;i&gt;delegate&lt;/i&gt; to you. Prefix delegation means you&#39;ll usually receive a /56 or /60 and then your router can further delegate that prefix into smaller prefixes.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;There are tricks you can employ any time you have a sufficiently large address space, one of which is that you can take a random value in that space, and have a high probability that there won&#39;t be any overlap the next time you take another random value. 
That&#39;s basically what &lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc4862&#34;&gt;SLAAC&lt;/a&gt; is doing for IPv6 -- instead of stateful DHCP assignment, SLAAC runs &lt;a href=&#34;https://www.networkacademy.io/ccna/ipv6/stateless-address-autoconfiguration-slaac&#34;&gt;an algorithm&lt;/a&gt; to convert the MAC address of your interface into the interface part of an IPv6 address, or randomly pick an address. Both of these are then combined with &lt;a href=&#34;https://www.rfc-editor.org/rfc/rfc4429&#34;&gt;Duplicate Address Detection&lt;/a&gt; -- a quick check that nothing else on the network is listening on the selected address. There are some additional protocols, whose details I don&#39;t know, where your device takes multiple IPv6 addresses, and may rotate the interface portion of the address periodically. All of this falls under &lt;i&gt;autoconfiguration&lt;/i&gt; -- the ability of a device to get a routable address on its own with just the prefix information being provided, which is a benefit and simplification of adopting IPv6.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Carrier-grade NAT (&lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6598&#34;&gt;CGNAT&lt;/a&gt;) is like NAT on your home router, except at the ISP level. A single IP address can be statefully NAT&#39;d to many other IP addresses which the ISP hands out. Since these addresses behind the CGNAT can&#39;t be public, and can&#39;t be private, there&#39;s a range of &#34;shared&#34; address space that is used specifically for CGNAT: the 100.64.0.0/10 range. I actually use some of this space for a Nebula network, which hasn&#39;t been a problem for me because I&#39;m not currently on an ISP that uses CGNAT, so no routing is actually done with this address -- it&#39;s the internal overlay network only. 
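As a quick illustration of where the shared range sits, membership is easy to check with Python's standard ipaddress module (the addresses below are arbitrary examples):

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as shared address space for CGNAT:
# 100.64.0.0 through 100.127.255.255.
SHARED = ipaddress.ip_network("100.64.0.0/10")

print(ipaddress.ip_address("100.99.0.1") in SHARED)    # True
print(ipaddress.ip_address("192.168.1.1") in SHARED)   # False: RFC 1918 private space
```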
I won&#39;t be using this for future deployments, however, because it&#39;s not the correct usage of this space.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h3&gt;NAT security&lt;/h3&gt;  &#xA;This warrants a quick note on NAT and firewalls. The IPv4 address space is tiny, such that it&#39;s easy to enumerate and probe the entire space. As soon as you put a device on a public IPv4 address, it&#39;s going to get attacked by all sorts of things trying to log in. A common misconception is that because the devices behind your router only have private IP addresses, they are unreachable from the internet. It&#39;s hopefully true that they are mostly unreachable behind even consumer routers, but really you wouldn&#39;t say it&#39;s because of NAT -- your router is choosing what to do with incoming packets. It&#39;s not NAT that provides security (a NAT implementation could happily pass on all packets to any device, for instance); it&#39;s the stateful firewall, also on your router, deciding to drop any traffic it doesn&#39;t expect. Another way to think about it is that it&#39;s not that devices are less reachable over IPv4, it&#39;s that reaching them requires more complexity (a stateful intermediary) and uses fixed resources (port numbers) to identify machines within a single address. We can have the same security without this complexity with IPv6.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;And devices &amp;mdash; usually &lt;a href=&#34;https://en.wikipedia.org/wiki/Internet_of_things&#34;&gt;IoT&lt;/a&gt; devices &amp;mdash; might be breaching this contract without you knowing. Through protocols like &lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6970&#34;&gt;UPnP&lt;/a&gt;, the device itself can configure the router to port-forward traffic to it, making it globally accessible, and now access to your private network is only as secure as the IoT device (the &#34;s&#34; in IoT stands for security). 
This is one of many reasons these devices should be cordoned off on their own &lt;a href=&#34;https://en.wikipedia.org/wiki/VLAN&#34;&gt;VLAN&lt;/a&gt; or similarly isolated.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;Translation&lt;/h3&gt;&#xA;IPv4-only networks are unfortunately still common because they are legacy and exist until replaced. IPv6-only is where we&#39;d like to get to, and if you read the RFCs linked in this post, it&#39;s interesting how all of the technologies are presented as temporary measures until IPv6 replaces IPv4. A &lt;i&gt;transition&lt;/i&gt; is the goal, even though the speed may make it feel like we&#39;ve reached a steady state. For now, it usually makes sense to run dual-stack (IPv4 and IPv6) networks for broad compatibility. But there are cases where we might want to use the solutions afforded by IPv6, and then deploy the transition technologies for legacy IPv4 compatibility.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;We might have more devices than available IPv4 addresses and want them all to be uniquely routable. This might be because we&#39;re running on a cellular network that only provides IPv6 connectivity because there aren&#39;t enough IPv4 addresses for every mobile device. This is common in many parts of the world, or with providers like T-Mobile. It could also be something like a Kubernetes cluster where we have thousands of ephemeral pods that we want to make addressable without NAT. We may want to deploy IPv6-only internally for routing simplicity (it really is faster!), but devices will need to connect to some external hosts that are IPv4-only. We may want to take advantage of IPv6 autoconfiguration.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;And in the other direction, while our network may only provide IPv4, we may want to connect to devices running on IPv6-only networks despite not being able to upgrade the network capabilities. 
To come up with solutions, we need to figure out what we can control -- is it just the program itself, the host, or services external to the network that we can configure to assist our program or host? Depending on the translation direction and the amount of control we have end-to-end, let&#39;s see how different types of translations are implemented.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;IPv4 and IPv6 are incompatible, meaning that if I only have IPv6 networking, I can&#39;t reach a device that only has an IPv4 address. IPv6 is not just a bit extension of IPv4; they are entirely separate protocols.&#xA;&lt;p&gt; &#xA;&lt;ul&gt;&#xA;&#x9;&lt;li&gt;If we only have IPv6 connectivity, we&#39;ll need a translator if we want to talk to a device that is only connected via IPv4.&lt;/li&gt;&#xA;&#x9;&lt;li&gt;If we only have IPv4 connectivity, we&#39;ll need a translator if we want to talk to a device that is only connected via IPv6.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;6 to 4&lt;/h3&gt;&#xA;As an example, let&#39;s try to connect to an IPv4-only hostname from a device that only has IPv6 connectivity (a Vultr virtual machine with no IPv4 allocation).&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;github.com is IPv4-only &lt;a id=&#34;footnote-github-ref&#34; href=&#34;#footnote-github&#34;&gt;[3]&lt;/a&gt;; no IPv6 DNS records are returned.&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;kerby@vultr-worker-ipv6:~# dig +short github.com A&#xA;140.82.114.4&#xA;kerby@vultr-worker-ipv6:~# dig +short github.com AAAA&#xA;kerby@vultr-worker-ipv6:~#&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Here&#39;s what happens if we try to connect with our device that only has IPv6:&#xA;&lt;p class=&#34;code&#34;&gt;kerby@vultr-worker-ipv6:~# curl -vvv github.com&#xA;*   Trying 140.82.113.3:80...&#xA;* Immediate connect fail for 140.82.113.3: Network is unreachable&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;In order for us to reach github.com, we need to first send a packet to something with an 
IPv6 address -- because that&#39;s all &lt;i&gt;our&lt;/i&gt; network speaks -- and have it converted to an IPv4 packet on some device that can speak both IPv4 and IPv6 (a translator).&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;IPv6 is big enough to contain the entire IPv4 space in the remaining bits of a /96 prefix!  We can send the IPv4 request to a translator using a &lt;i&gt;well-known&lt;/i&gt; prefix. &lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6052&#34;&gt;RFC 6052&lt;/a&gt; defines this as 64:ff9b::/96, and as long as a translator in our access network advertises this prefix, our packet will get to it. Then, the translator will remove the well-known prefix and generate an IPv4 packet that can be routed to GitHub and back. When the response gets back, the translator will send it to the IPv6 source address of our device. If this sounds a lot like the description of a NAT above, that&#39;s accurate: it&#39;s a stateful NAT -- it will usually use ports to track mappings between devices on both sides of the translator.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&lt;i&gt;There is a protocol for doing stateless 1-1 translation, but this would require a different IPv4 address for every device and does not scale. As far as I know, all translators like this in practice are doing stateful NAT, utilizing ports and a mapping table to uniquely identify traffic and route it to the same address.&lt;/i&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;So that covers the case where our program knows it needs to use a translator and knows to use the well-known prefix. But what if that&#39;s not the case? How can the program know when it needs to use a translator? And what if the &lt;i&gt;program&lt;/i&gt; can&#39;t use a translator, is there a way that the host it&#39;s running on can detect the case of needing a translator and handle it for the program?&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;If the program can speak IPv6, we can handle part of this with DNS. 
A standard DNS implementation will return A and AAAA (pronounced &lt;i&gt;quad-A&lt;/i&gt;) records that exist for a host. But if it&#39;s deployed in an IPv6-only network, we know A records won&#39;t do the requester any good.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;DNS64 and NAT64&lt;/h3&gt;&#xA;&lt;/p&gt;&#xA;DNS64 works by exploiting what we discussed earlier about the massive disparity between IPv4 and IPv6 address space. There&#39;s plenty of empty space in IPv6, so a /96 prefix, 64:ff9b::/96, is used for this purpose only. First, we connect to the DNS64 resolver via its IPv6 address. Cloudflare, Google, and others run public DNS64 resolvers. If the hostname (e.g. kerbyhughes.com) has AAAA records, those are returned directly because you can then use IPv6 the entire way. But if the hostname (like github.com) returns only A records, then DNS64 returns a &lt;i&gt;synthesized&lt;/i&gt; AAAA record by taking the 64:ff9b::/96 prefix and appending the IPv4 address (hex encoded). The host just routes this like any other IPv6 destination to its interface, and then when it gets to the ISP&#39;s NAT64, it sees the well-known prefix and knows to use the last 32 bits as the destination in a new IPv4 packet. It maintains a translation table to do the reverse for the response (stateful NAT64). This works without the program knowing that the entire flow wasn&#39;t IPv6.&#xA;&lt;/p&gt;&#xA;&lt;a href=&#34;https://www.rfc-editor.org/rfc/rfc6146.html&#34;&gt;RFC 6146: Stateful NAT64: Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers. 1. Introduction&lt;/a&gt;&#xA;&lt;blockquote cite=&#34;https://www.rfc-editor.org/rfc/rfc6146.html&#34;&gt;&#xA;This document specifies stateful NAT64, a mechanism for IPv4-IPv6&#xA;transition and IPv4-IPv6 coexistence.  Together with DNS64 [RFC6147],&#xA;these two mechanisms allow an IPv6-only client to initiate&#xA;communications to an IPv4-only server.  
They also enable peer-to-peer&#xA;communication between an IPv4 and an IPv6 node, where the&#xA;communication can be initiated when either end uses existing, NAT-&#xA;traversal, peer-to-peer communication techniques, such as Interactive&#xA;Connectivity Establishment (ICE) [RFC5245].  Stateful NAT64 also&#xA;supports IPv4-initiated communications to a subset of the IPv6 hosts&#xA;through statically configured bindings in the stateful NAT64.&#xA;&lt;/blockquote&gt;&#xA;&#xA;&lt;p&gt; &#x9;&#xA;The problem for my IPv6-only Linux VM is that while I can install something like &lt;a href=&#34;https://github.com/toreanderson/clatd&#34;&gt;clatd&lt;/a&gt; or install &lt;a href=&#34;https://nlnetlabs.nl/projects/unbound/about/&#34;&gt;unbound&lt;/a&gt; and point it to a DNS64 resolver, there&#39;s no NAT64 translator already in Vultr&#39;s network. So we have to provide that piece ourselves. We can actually use any IPv6 prefix, not just the well-known one: the prefix of an external NAT64 gateway serves as both the route to the gateway and the prefix that will be stripped. Then we just route the packet out to the returned IPv6 address; Vultr passes it off to eventually reach the prefix owned by the NAT64 gateway, which takes the embedded IPv4 address and routes it to the final destination. The response is able to get back because the source address is set to our host&#39;s IPv6 address, which Vultr advertises.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&lt;a href=&#34;https://nat64.xyz&#34;&gt;https://nat64.xyz&lt;/a&gt; maintains a list of public NAT64 services. We can see that they provide their IPv6 DNS64 resolver, and then a routable /96 prefix that is used instead of the synthetic /96 that would require an in-network translator.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Again, though, the host and/or access network needed configuration. We had to have a DNS64 resolver configured in the DNS resolution path that knew about a NAT64 device. 
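&lt;/p&gt;&#xA;&lt;p&gt;&#xA;The prefix-plus-address synthesis itself is mechanical: drop the 32 bits of the IPv4 address into the low 32 bits of the /96 prefix. A minimal sketch with Python&#39;s ipaddress module (the function name is my own illustration):&#xA;&lt;/p&gt;

```python
import ipaddress

# Sketch of the address synthesis that DNS64/NAT64 rely on (RFC 6052):
# embed an IPv4 address in the low 32 bits of a /96 IPv6 prefix.
def synthesize(v4: str, prefix: str = "64:ff9b::/96") -> str:
    net = ipaddress.IPv6Network(prefix)
    embedded = int(net.network_address) | int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(embedded))

print(synthesize("140.82.113.3"))  # 64:ff9b::8c52:7103
```

&lt;p&gt;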
On an enterprise network, or your home network, where this can be configured for all devices, that might be sufficient. However, a non-configurable IPv6-only network is common out in the wild: for instance, a phone on some cellular networks, or a laptop on a hotspot that only provides IPv6. To solve this, platform providers like Apple and Google have equipped their hosts with client-side translators to do this first part of the translation. This is referred to as the customer-side translator, or CLAT.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;There are a couple of ways a host can determine if it&#39;s in an IPv6-only situation. One is that when it first connects to the network, the network can tell it that IPv4 is not available. Most modern operating systems like macOS/iOS &lt;a id=&#34;footnote-ios-ref&#34; href=&#34;#footnote-ios&#34;&gt;[4]&lt;/a&gt; expect the situation where they may be on an IPv6-only network, so they bundle a CLAT implementation. This is activated by &lt;a href=&#34;https://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml&#34;&gt;DHCP option 108&lt;/a&gt;, which instructs the client that there is no IPv4 networking available. 
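&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Any DHCP server that supports raw options can send this signal. As a sketch (assuming dnsmasq&#39;s colon-separated hex-byte option syntax), the 4-byte value is the number of seconds the client should disable DHCPv4, here 1800 seconds:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;# dnsmasq.conf sketch: advertise option 108 (IPv6-Only Preferred)&#xA;# value is 4 bytes encoding seconds; 1800 = 0x00000708&#xA;dhcp-option=108,00:00:07:08&#xA;&lt;/p&gt;&#xA;&lt;p&gt;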
Similar to tunneling tools, the CLAT sets up a gateway on the host for IPv4 destinations and translates to an IPv6 packet that goes to the router set up via the DHCP configuration.&#xA;&lt;/p&gt;&#xA;&#xA;&lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc8925#section-5&#34;&gt;RFC 8925: IPv6-Only Preferred Option for DHCPv4: Section 5&lt;/a&gt;&#xA;&lt;blockquote cite = &#34;https://datatracker.ietf.org/doc/html/rfc8925#section-5&#34;&gt;&#xA;&lt;p&gt;&#xA;This document specifies a DHCPv4 option to indicate that a host supports an IPv6-only mode and is willing to forgo obtaining an IPv4 address if the network provides IPv6 connectivity.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;...&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;The IANA has assigned a new DHCPv4 option code for the IPv6-Only&#xA;Preferred option from the &#34;BOOTP Vendor Extensions and DHCP Options&#34;&#xA;registry, located at https://www.iana.org/assignments/bootp-dhcp-&#xA;parameters/.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&lt;pre&gt;&#xA;Tag:  108&#xA;Name:  IPv6-Only Preferred&#xA;Data Length:  4&#xA;Meaning:  Number of seconds that DHCPv4 should be disabled&#xA;Reference:  RFC 8925&#xA;&lt;/pre&gt;&#xA;&lt;/p&gt;&#xA;&lt;/blockquote&gt;&#xA;&#xA;&lt;p&gt;&#xA;There seems to basically be one Tweet with a screenshot of macOS in this setup that is authoritative -- it&#39;s the single &lt;a href=&#34;https://en.wikipedia.org/wiki/IPv6_transition_mechanism#cite_note-22&#34;&gt;citation on Wikipedia&lt;/a&gt;. 
So I took the liberty of recreating the setup on one of my own networks via OPNSense.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Setting option 108 to disable IPv4 in the DHCP(v4) settings:&#xA;&lt;/p&gt;&#xA;&lt;img src=&#34;https://kerbyhughes.com/2024/02/11/assets/opnsense.png&#34; alt=&#34;OPNSense DHCPv4 Option 108 setting&#34;&gt;&#xA;&lt;p&gt;&#xA;Configuring Tayga &lt;a id=&#34;footnote-ipv6-only-ref&#34; href=&#34;#footnote-ipv6-only&#34;&gt;[5]&lt;/a&gt; -- a NAT64 implementation -- to use the well-known /96 prefix:&#xA;&lt;/p&gt;&#xA;&lt;img src=&#34;https://kerbyhughes.com/2024/02/11/assets/tayga_ipv6_prefix.png&#34; alt=&#34;Tayga IPv6 well-known prefix&#34;&gt;&#xA;&lt;p&gt;&#xA;Here&#39;s the ethernet interface on my laptop after the DHCP request.&#xA;&lt;/p&gt;&#xA;&lt;p class=code&gt;en0: flags=88e3&lt;UP,BROADCAST,SMART,RUNNING,NOARP,SIMPLEX,MULTICAST&gt; mtu 1500&#xA;&#x9;options=6463&lt;RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM&gt;&#xA;&#x9;ether 5c:e9:1e:69:30:04 &#xA;&#x9;inet6 fe80::40c:5be1:1a3b:5137%en0 prefixlen 64 secured scopeid 0xf &#xA;&#x9;inet6 2603:7081:702:2860:cad:9c33:471f:81ab prefixlen 64 autoconf secured &#xA;&#x9;inet6 2603:7081:702:2860:c1d0:7356:827:53bb prefixlen 64 autoconf temporary &#xA;&#x9;inet 192.0.0.2 netmask 0xffffffff broadcast 192.0.0.2&#xA;&#x9;inet6 2603:7081:702:2860:8f9:eaf5:1d7:b1e7 prefixlen 64 clat46  &amp;#128072;&#xA;&#x9;nat64 prefix 64:ff9b:: prefixlen 96  &amp;#128072;&#xA;&#x9;nd6 options=201&lt;PERFORMNUD,DAD&gt;&#xA;&#x9;media: autoselect&#xA;&#x9;status: active&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;With unbound configured for DNS64, our synthesized AAAA record is the well-known IPv6 prefix plus a hex-encoded version of the IPv4 A record answer.&#xA;&lt;/p&gt;&#xA;&lt;p class=code&gt;kerby@tycho % dig +short github.com&#xA;140.82.113.3&#xA;kerby@tycho % dig +short github.com AAAA&#xA;64:ff9b::8c52:7103&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Another way for clients to detect IPv6-only connectivity is 
by using the special domain &lt;i&gt;ipv4only.arpa&lt;/i&gt;. ARPA is a special Top-Level Domain that can be used for these purposes. Devices make a request to determine if DNS64 is available in the network.&#xA;&lt;/p&gt;&#xA;&lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc7050&#34;&gt;RFC 7050: Discovery of the IPv6 Prefix Used for IPv6 Address Synthesis&lt;/a&gt;&#xA;&lt;blockquote cite=&#34;https://datatracker.ietf.org/doc/html/rfc7050&#34;&gt;&#xA;&lt;p&gt;&#xA;A node requiring information about the presence (or absence) of&#xA;NAT64, and one or more Pref64::/n used for protocol translation SHALL&#xA;send a DNS query for AAAA resource records of the Well-Known&#xA;IPv4-only Name (WKN) &#34;ipv4only.arpa.&#34;.  The node MAY perform the DNS&#xA;query in both IPv6-only and dual-stack access networks.&#xA;...&#xA;A DNS reply with one or more AAAA resource records indicates that the&#xA;access network is utilizing IPv6 address synthesis&#xA;&lt;/p&gt;&#xA;&lt;/blockquote&gt;&#xA;&lt;p&gt;&#xA;With DNS being provided by unbound on my router:&#xA;&lt;/p&gt;&#xA;&lt;p class=code&gt;kerby@tycho % dig ipv4only.arpa +short AAAA&#xA;kerby@tycho %&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Here I&#39;ve told my laptop to use the nameservers provided by &lt;a href=&#34;https://nat64.net/&#34;&gt;nat64.net&lt;/a&gt;.&#xA;&lt;/p&gt;&#xA;&lt;p class=code&gt;kerby@tycho % dig ipv4only.arpa +short AAAA&#xA;2a01:4f8:c2c:123f:64:5:c000:ab&#xA;2a00:1098:2c::5:c000:ab&#xA;2a00:1098:2b::1:c000:aa&#xA;2a00:1098:2c::5:c000:aa&#xA;2a00:1098:2b::1:c000:ab&#xA;2a01:4f8:c2c:123f:64:5:c000:aa&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;There are a bunch of rules about how the client should use these addresses, in which order, etc.&#xA;&lt;/p&gt;&#xA;&#xA;&lt;p class=&#34;code&#34;&gt;kerby@tycho % dig github.com +short AAAA&#xA;2a00:1098:2b::1:8c52:7903&#xA;2a01:4f8:c2c:123f:64:5:8c52:7903&#xA;2a00:1098:2c::5:8c52:7903&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;As we know though, this is only half of the 
equation. The CLAT translates to IPv6, but this relies on a translator to listen for that IPv6 packet and translate it to IPv4. Therefore, networks that set DHCP option 108 must also provide the translator. This translator is considered to be on the provider side, or PLAT.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;These two pieces together, CLAT and PLAT, are referred to as 464XLAT.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h3&gt;464XLAT&lt;/h3&gt;&#xA;&lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6877#section-1&#34;&gt;RFC 6877: 464XLAT: Combination of Stateful and Stateless Translation&lt;/a&gt;&#xA;&lt;blockquote cite=&#34;https://datatracker.ietf.org/doc/html/rfc6877#section-1&#34;&gt;&#xA;&lt;p&gt;&#xA;1.  Introduction&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;This document describes an IPv4-over-IPv6 solution as one of the&#xA;techniques for IPv4 service extension and encouragement of IPv6&#xA;deployment. 464XLAT is not a one-for-one replacement of full IPv4&#xA;functionality.  The 464XLAT architecture only supports IPv4 in the&#xA;client-server model, where the server has a global IPv4 address.&#xA;This means it is not fit for IPv4 peer-to-peer communication or&#xA;inbound IPv4 connections. 464XLAT builds on IPv6 transport and&#xA;includes full any-to-any IPv6 communication.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;...&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;2.  Terminology&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;PLAT:   PLAT is provider-side translator (XLAT) that complies with&#xA;         [RFC6146].  It translates N:1 global IPv6 addresses to global&#xA;         IPv4 addresses, and vice versa.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;CLAT:   CLAT is customer-side translator (XLAT) that complies with&#xA;         [RFC6145].  It algorithmically translates 1:1 private IPv4&#xA;         addresses to global IPv6 addresses, and vice versa.  
The CLAT&#xA;         function is applicable to a router or an end-node such as a&#xA;         mobile phone.&#xA;&lt;/p&gt;&#xA;&lt;/blockquote&gt;&#xA;&lt;p&gt;&#xA;As we can see from the RFC, there are some limitations, which make sense now that we know the parts that are required to make it work. It&#39;s not peer-to-peer because the PLAT has to NAT the traffic and is therefore an intermediary -- all of the traffic goes through this translator (though only routing is required, no packet inspection, which as we&#39;ll see next is another benefit of being IPv6 by default). No direct connection is possible. And inbound IPv4 isn&#39;t possible because there is no IPv4 address on one of the peers; the network told the client it couldn&#39;t give it an IPv4 address over DHCP.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;4 to 6&lt;/h3&gt;&#xA;What about the other direction? What if the program and host are in an IPv4-only network but need to communicate with IPv6-only peers? IPv6-only destinations in practice are still rare, because all of these things we&#39;re talking about show how IPv4 hasn&#39;t gone away at all. So generally IPv6 implementations are additive, especially on origins for things like a website, where we have a lot of control over network and infrastructure choices. Generally a website origin is dual-stack. However, for peer-to-peer this isn&#39;t the case -- we totally might want to connect to a single device that only has an IPv6 address, and to get further along in the transition to IPv6, the best case is to use IPv6 by default and treat this specific case -- a device on an IPv4 network -- as the exceptional case that we patch over until it can go away.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;In the case of an IPv4-only host, all we can do is add a proxy somewhere that translates IPv4 to IPv6. We&#39;re out of tricks, because there&#39;s no way to represent a 128-bit IPv6 address inside a 32-bit IPv4 address. 
So unlike DNS64 and NAT64, we can&#39;t route these packets using just the addresses. If we can only use an IPv4 address for routing, we have to smuggle the 128 bits that represent where we &lt;i&gt;actually&lt;/i&gt; want the packet to end up somewhere else in our packet. And that means we need a new protocol running on our translator that knows how to extract those bits from the data portion of the packet and build a new packet with an IPv6 header using those bits. And do the reverse when it gets a response.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;In terms of implementation, this is actually a pretty easy case, but there&#39;s no way to configure it all automatically. Most tunneling software fits in this category -- the tunnel software, like cloudflared or Nebula, configures tunnel interfaces for both IPv4 and IPv6 on the host, making it appear dual-stack to programs. Then when an IPv6 packet goes into the tunnel, it forwards it over IPv4 to a dual-stack host on the other end of the tunnel, which understands the tunnel protocol and can therefore extract the IPv6 address that the tunnel software put into the packet.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;And if we don&#39;t want to run the tunnel software on every host, we can do it at the edge of the IPv4 network -- for example a router that is doing NAT for all the devices behind it can tunnel to a translator and convert packets for everyone.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;These tunnels by themselves are just providing one-way connectivity though. If all we have is the tunnel to a dual-stack host, we&#39;re in a similar situation to our 464XLAT system above where other clients can&#39;t reach us, because they have no way to know that we&#39;re behind the translator. 
The translator can&#39;t advertise our IPv4 address, because to the translator that IPv4 address is just the address our ISP handed us -- they are already advertising it, the translator can&#39;t also advertise it (well, they could, but things break badly when you do this over BGP so providers have agreements to try to prevent hijacking). Instead, for the case of something like cloudflared, we can introduce DNS records that point a hostname to our particular tunnel. The hostname resolves to A and AAAA records, and Cloudflare&#39;s system makes sure that those addresses get routed to the correct tunnel (via the hostname that was requested). We can do something similar with our Nebula network by publicly advertising the lighthouse address. Interestingly, my understanding is there were some attempts to build a more generic system that works similarly to help with the IPv4 to IPv6 transition.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;Teredo&lt;/h3&gt;&#xA;The &lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc4380&#34;&gt;Teredo protocol&lt;/a&gt; defines a way to connect peers with mixed IPv4 and IPv6 capabilities by defining two additional types of systems that operators run as general infrastructure -- servers and relays (Nebula uses the term relay for this as well). Relays are just like the cloudflared translator -- they are reachable at a known IPv4 address which either the operators of the host or the network configure some or all IPv4 traffic to go over. Once at the dual-stack relay, the relay connects to a Teredo server which is reachable at a well-known, globally-routable IPv6 prefix. My understanding is that this routable IPv6 prefix is meant to be advertised as general infrastructure. Then practitioners can implement their own relays that use the shared servers. 
The traffic goes from the relay, to the server, and the server&#39;s job is not to forward any traffic but to inform the relays (and IPv6-native peers) of each peer&#39;s globally routable IPv6 Teredo address. We&#39;re back to 128-bit tricks because the Teredo protocol embeds lots of information into the IPv6 address it assigns to the Teredo peer. In this way, the peers can check in with the server (possibly via a relay) and then connect directly.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;The most well-known use of Teredo to my knowledge was the Xbox network. Due to the number of devices, IPv6 was the best way to uniquely identify each Xbox, but you have to make it work over many IPv4-only home networks. And it has to use peer-to-peer semantics. So by utilizing Teredo servers and running Xbox-specific relay servers that the Xbox can be hardcoded to reach out to if it only has IPv4 connectivity, you can connect every Xbox over unique IPv6 addresses.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h3&gt;Nebula&lt;/h3&gt;&#xA;And this out-of-network setup is essentially what we can set up with Nebula. Our lighthouses can be configured as relays for a node, and then as long as you can reach the lighthouse, the lighthouse will, if necessary, convert between IPv4 and IPv6 internally and translate over the appropriate interface that can reach the target. 
For hosts that don&#39;t share the same IP protocol, it removes the peer-to-peer aspect, which may matter if you&#39;re moving a lot of data in either direction (for example, backups between the two Nebula clients count against the relay&#39;s bandwidth), but it does facilitate connectivity, which may be more important.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Without a relay, connecting to an IPv6-only client (100.100.0.50) fails:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;root@vultr-worker-ipv4:~# ping 100.100.0.50&#xA;PING 100.100.0.50 (100.100.0.50) 56(84) bytes of data.&#xA;^C&#xA;--- 100.100.0.50 ping statistics ---&#xA;16 packets transmitted, 0 received, 100% packet loss, time 15283ms&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;And connecting to an IPv4-only host (100.100.0.51) from an IPv6-only host (vultr-worker-ipv6) fails:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;root@vultr-worker-ipv6:~# ping 100.100.0.51&#xA;PING 100.100.0.51 (100.100.0.51) 56(84) bytes of data.&#xA;^C&#xA;--- 100.100.0.51 ping statistics ---&#xA;7 packets transmitted, 0 received, 100% packet loss, time 6141ms&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;These are the configuration settings needed to utilize a Nebula lighthouse as a relay: &#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;Lighthouses:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;relay:&#xA;  am_relay: true&#xA;...&#xA;listen:&#xA;  # To listen on both any ipv4 and ipv6 use &#34;[::]&#34;&#xA;  host: &#34;[::]&#34;&#xA;...&#xA;&lt;/p&gt;&#xA;&#xA;&lt;p&gt;&#xA;Clients:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;relay:&#xA;  relays:&#xA;&#x9;- ${lighthouse-ip}&#xA;&#x9;- ${lighthouse-ip}&#xA;...&#xA;listen:&#xA;  # To listen on both any ipv4 and ipv6 use &#34;[::]&#34;&#xA;  host: &#34;[::]&#34;&#xA;...&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;With a lighthouse acting as a relay from IPv4 (vultr-worker-ipv4) to IPv6 (100.100.0.50), we have connectivity:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;root@vultr-worker-ipv4:~# ping 100.100.0.50&#xA;PING 100.100.0.50 (100.100.0.50) 
56(84) bytes of data.&#xA;64 bytes from 100.100.0.50: icmp_seq=1 ttl=64 time=6.86 ms&#xA;64 bytes from 100.100.0.50: icmp_seq=2 ttl=64 time=6.84 ms&#xA;64 bytes from 100.100.0.50: icmp_seq=3 ttl=64 time=6.86 ms&#xA;64 bytes from 100.100.0.50: icmp_seq=4 ttl=64 time=6.83 ms&#xA;^C&#xA;--- 100.100.0.50 ping statistics ---&#xA;4 packets transmitted, 4 received, 0% packet loss, time 3005ms&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;And with a lighthouse acting as a relay from IPv6 (vultr-worker-ipv6) to IPv4 (100.100.0.51) we have connectivity:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;root@vultr-worker-ipv6:~# ping 100.100.0.51&#xA;PING 100.100.0.51 (100.100.0.51) 56(84) bytes of data.&#xA;64 bytes from 100.100.0.51: icmp_seq=1 ttl=64 time=6.97 ms&#xA;64 bytes from 100.100.0.51: icmp_seq=2 ttl=64 time=6.80 ms&#xA;64 bytes from 100.100.0.51: icmp_seq=3 ttl=64 time=6.49 ms&#xA;64 bytes from 100.100.0.51: icmp_seq=4 ttl=64 time=23.4 ms&#xA;64 bytes from 100.100.0.51: icmp_seq=5 ttl=64 time=14.8 ms&#xA;^C&#xA;--- 100.100.0.51 ping statistics ---&#xA;5 packets transmitted, 5 received, 0% packet loss, time 4006ms&#xA;&lt;/p&gt;&#xA;&lt;hr&gt;&#xA;&lt;div class=&#34;footnotes&#34;&gt;&#xA;&lt;ol&gt;&#xA;&lt;li id=&#34;footnote-arin&#34;&gt;&#xA;Can anyone purchase an IPv6 block? ARIN has some requirements, like needing to peer with two ISPs, so you probably don&#39;t qualify and you probably don&#39;t want to pay for that anyway. And that&#39;s not scalable for everyone to do, or practical for provisioning all the devices in the world. &lt;a href=&#34;#footnote-arin-ref&#34;&gt;&amp;#8617;&lt;/a&gt; &#xA;&lt;/li&gt;&#xA;&lt;li id=&#34;footnote-github&#34;&gt;&#xA;At this point, Github is providing a public service for every blog post and tutorial that uses it as a non-IPv6-enabled hostname. Think of the blog posts it will break once it&#39;s dual stack! 
&lt;a href=&#34;#footnote-github-ref&#34;&gt;&amp;#8617;&lt;/a&gt; &#xA;&lt;/li&gt;&#xA;&lt;li id=&#34;footnote-ios&#34;&gt;&#xA;On iOS, apps are required to have IPv6 connectivity. iOS handles this by providing APIs that can do the CLAT translation, like NSURLSession. This is just the stateless CLAT part of the translation and requires a PLAT on the network. &lt;a href=&#34;https://mailarchive.ietf.org/arch/msg/v6ops/Ft1Zry30PYkAybvpNUPqOYVRLf4&#34;&gt;[v6ops] Apple and IPv6, a few clarifications&lt;/a&gt; &lt;a href=&#34;#footnote-ios-ref&#34;&gt;&amp;#8617;&lt;/a&gt; &#xA;&lt;/li&gt;&#xA;&lt;li id=&#34;footnote-ipv6-only&#34;&gt;&#xA;A future project is going to be an IPv6-only homelab. I&#39;m not quite there yet for all devices, but this is a good start at understanding how to transition legacy devices. Tayga, a NAT64 implementation available as a plugin for OPNSense, should be a good part of that setup, but I had some issues getting the gateways configured for it, so I will have to try again to get that working. &lt;a href=&#34;#footnote-ipv6-only-ref&#34;&gt;&amp;#8617;&lt;/a&gt; &#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;/div&gt;&#xA;&lt;/div&gt;&#xA;&#xA;</description>
		<pubDate>2024-02-22 12:00:00 -0500 EST</pubDate>
	</item>
	<item>
		<title>Homelab 2024</title>
		<link>http://kerbyhughes.com/2025/01/25/homelab-2024.html</link>
		<guid>http://kerbyhughes.com/2025/01/25/homelab-2024.html</guid>
		<description>&lt;h2&gt;&#xA;    &lt;a class=&#34;title&#34; href=&#34;/2025/01/25/homelab-2024.html&#34;&gt;&#xA;    Homelab 2024&#xA;    &lt;/a&gt;&#xA;&lt;/h2&gt;&#xA;&lt;div class=&#34;date&#34;&gt;2025-01-25&lt;/div&gt;&#xA;&lt;div class=&#34;content&#34;&gt;&#xA;&lt;p&gt;&#xA;&#x9;This is the (belated) second post in a series of snapshots about the state of my homelab.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    The homelab actually made two transitions between this post and the &lt;a href=&#34;/2023/07/22/homelab-2023.html&#34;&gt;introductory post last year&lt;/a&gt;. The first was to a new apartment, where the main change was to my networking stack and some compute experimentation. The second was to our first house!&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Home&lt;i&gt;lab&lt;/i&gt; or Home&lt;i&gt;prod&lt;/i&gt;?&lt;/h4&gt;&#xA;&#xA;&#x9;I still stand by the homelab as a playground and testbed; however, the house needs a backbone of solid internet connectivity, and this overlaps with the needs of regular, non-lab computing in our home. While I previously used a 5G gateway as a failover WAN, the move to a new location without 5G Ultrawide coverage removed that possibility. Although I no longer have a backup internet connection, I upgraded my networking stack to be able to multiplex the common household needs with my homelab needs over the same internet connection, such that I could get the isolation benefits without just routing some devices over a separate WAN. This was done with VLAN segmentation, and because I can tag Wi-Fi networks with a VLAN, and because Wi-Fi devices will pick up the same SSID regardless of the hardware or any other implementation details, I was able to swap out for a new network stack without requiring any changes to any of the other clients in the house. 
So while I think the next year will focus more on building tooling that is used by more than just myself in the household, this year was a good exercise in supporting the infrastructure needs of all our work-from-home equipment and being able to make changes with as little disruption as possible. This is really more of a home networking infrastructure post.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Apartmentlab&lt;/h4&gt;&#xA;&#xA;&#x9;The main focus of the apartment lab in the last year was setting up OPNsense as my router and firewall. I&#39;ve enjoyed having full control over my router, and the main motivation was to get VLAN support. In order to not interfere with other devices in the house, I set it up to have one Wi-Fi SSID per VLAN. The &#34;commons&#34; is the 1-1 replacement for our old Wi-Fi network, with the same SSID so all devices migrated seamlessly. Then I have one network that I use to connect to the router and management interfaces. Then a separate Homelab VLAN for all of my equipment. And another for my work laptop and phone so they are cordoned from the rest of the house as I don&#39;t own or manage those. I also created an IOT network without any internet connectivity. So far I haven&#39;t done much with that VLAN except test some smart outlets.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/apartment_lab_1.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I had some success with tethering my iPhone to my OPNsense router and using that as a secondary WAN Gateway for failover in the event the internet went out, but it was pretty rough and only worked sometimes after a few reboots. I still like this idea and might circle back to it.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/OPNsense_iphone.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Proxmox&lt;/h4&gt;&#xA;&#xA;&#x9;I collected a set of 4 Beelink EQ12 boxes slowly as they went on sale. 
These have Intel N100 processors, which means they have 4 of Intel&#39;s &#34;E&#34; cores (for efficiency, as opposed to &#34;P&#34; for performance). They also have dual 2.5 Gb ethernet, which is why I chose this model. They work great as an OPNsense router, and have room for an internal NVME SSD as well as an internal 2.5&#34; SATA SSD. This made them a good choice for building out a cheap experimental cluster.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I set up a &lt;a href=&#34;https://www.proxmox.com/en/&#34;&gt;Proxmox&lt;/a&gt; cluster on the remaining 3 nodes, including adding a SATA SSD in each one to use for a Ceph storage cluster. The benefit of a Proxmox cluster is you can manage all of the nodes in the cluster from a single administration web UI, create VMs (or LXC containers) on any of the nodes, and even migrate the VMs between nodes. &#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I was even able to get OPNsense virtualized and running on the Proxmox cluster, as these have two ethernet ports. This was done by connecting one port of each node to a layer 2 switch, which was connected to my ISP modem. Frame forwarding at layer 2 is done by MAC address, and the VM retains the same MAC address for the ethernet interface when moving across nodes. The other ethernet port on each node was used for the LAN side, connected to a layer 3 switch. Surprisingly, all that was required to migrate from bare metal to virtualized Proxmox with the same config was to change the WAN and LAN interface names. I exported the config, did a find-and-replace of the interface names, and restored the virtualized OPNsense install from the config, and everything worked.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Before adding Ceph, the only storage available was the SSD on each node. This meant that migrating from one node to another required downtime. Basically Proxmox sets up a schedule to sync the VM&#39;s on-disk data to the other nodes. Then when you migrate, the machine can come up in the same state. 
However, if you have shared storage, where no external sync is required, then you can do live migrations, which means that Proxmox will copy the memory from the source node to the destination node, and then can pause and resume the VM quickly to migrate the VM without downtime. From the VM&#39;s perspective, nothing changed and it keeps running.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;This even worked for OPNsense - I was able to migrate the router between nodes to do maintenance or rewiring. While this let me do some maintenance without interrupting our internet connectivity, it did require triggering the migration manually - if a node was disconnected or otherwise died, it turns out that Proxmox has some hardcoded internal timers and configuration that means it takes about 5 minutes for a node to be detected as down and trigger a failover migration to a healthy node. This is okay for some workloads, but means 5 minutes of internet downtime. It&#39;s certainly faster than I could restore OPNsense to new hardware on my own if a node dies, but was a bit disappointing from a high-availability perspective given that there&#39;s no technical limitation that I&#39;m aware of except for Proxmox&#39;s lack of available tuning.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Migrating a VM:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/proxmox_migration.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Grafana&lt;/h4&gt;&#xA;&#xA;&#x9;I set up a Prometheus instance running on my NAS, collected metrics from OPNsense, and used Grafana to visualize the bandwidth across the different VLANs.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/OPNsense_monitoring.jpeg&#34;&gt;&#xA;&#x9;Speed test:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/OPNsense_under_load.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Backups from the NAS uploading overnight:&#xA;&#x9;&lt;img 
src=&#34;https://kerbyhughes.com/2025/01/25/assets/homelab_bandwidth.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Work meetings during the day:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/OPNsense_corporate_annotated.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;netboot.xyz&lt;/h4&gt;&#xA;&#xA;&#x9;I successfully got netboot.xyz working in tandem with OPNsense, which points DHCP clients to the right files for PXE booting. I can bring up any VM or physical machine connected to the homelab network with a remote image. Netboot.xyz doesn&#39;t seem very compatible with custom images, but for basics like Ubuntu or Debian it&#39;s been nice to have.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/pxe_1.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/pxe_2.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Network-Attached Storage&lt;/h4&gt;&#xA;&#xA;&#x9;I need to do a &#34;how I backup&#34; post separately, but not much changed regarding how I use the storage on these devices. I still run Nebula to connect two Synology NASes, one at home and one at my parents&#39; house, and use the snapshot features to sync backups in both directions.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I did do some upgrades on my 1821+, adding two NVMe SSDs and increasing the memory to 32GB. The Virtual Machine Manager app is pretty great, and I&#39;ve been running a VM for monitoring (running Prometheus + Grafana), and another for compute jobs that are best with direct access to the storage pools. These are Jellyfin for video and Navidrome for audio, although I really haven&#39;t had time to do much lately with these.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I did have one scare with my Synology. We lost power at our apartment, which was not uncommon, so I had a large UPS attached. 
The Synology was configured to shut down after 5 minutes on UPS power so that the battery could be used for powering the networking equipment instead of async tasks like backup replication. However, I found that once power was restored, the Synology did not come up cleanly. I was unable to log in, with the UI giving an error that it was waiting for services to start. As I had SSH access, I was able to look up some commands to bypass this, but then found that a couple of major services, including the Virtual Machine Manager and Hyper Backup (which does snapshot replication), were down. Rebooting put it back into the same loop. Luckily, I knew I had plenty of copies of this data, so I started one of the &#34;restore&#34; jobs that was supposed to reinstall the OS layer that Synology adds while retaining the storage pools. This seemed to pretty quickly go into a failed state, and I figured I would be wiping the system; however, a reboot after this brought the system up cleanly, and it has been working ever since. None of this was confidence-inspiring, but I&#39;ll monitor and see how it handles future events.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Hosting this site&lt;/h4&gt;&#xA;&#xA;&#x9;I decided to move this site onto cloud VMs. I&#39;ve got one running in DigitalOcean, and another running in Vultr, just load balanced by putting both IPs in the DNS records for the site. It was cool to have it served entirely from my homelab and was a fun exercise in keeping the site available through upgrades and hardware moves. However, I had two reasons for moving it to the cloud. One was that I wanted it to not be fronted by anything, controlling the entire flow as part of being a blog on the open web. If I have Denial of Service (DoS/DDoS) issues, I can always throw the domain behind Cloudflare temporarily, but not having to do this, and letting the server deal with the flood of traffic that comes with being on the open web, is a good exercise. 
I also had some issues with the cloudflared tunnel dropping out and requiring a restart despite reporting itself as healthy, so there were some availability issues. I haven&#39;t had any since hosting directly from VMs in the cloud. As they are just Debian boxes and I&#39;m running a light static binary, I can move to any hosting platform with minimal requirements.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;&lt;i&gt;Home&lt;/i&gt;lab&lt;/h4&gt;&#xA;&#xA;&#x9;When you can put holes in the walls, you can really make the changes you want to see. I wanted the rack of networking equipment to live in the basement so it was out of sight and earshot. Putting the rack in the center of the basement up high worked well, and I was able to fish five runs of CAT6 up from the basement to our upstairs offices. One run was for the access point - up high worked best for its broadcast pattern and made it close to our offices for Wi-Fi 6E coverage. The other four runs I terminated in a wall panel.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Fishing CAT6 up through the walls.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_pull_1.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_basement.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;We replaced the flooring on the landing of our second floor, and removed an electric baseboard heater in the process. Most of the holes in the studs were already there, and luckily one of them worked for reaching down to the basement, two floors up but directly above the rack. This required a borescope in addition to wire fishing tape in order to route the cable down through the floor and wall cavities. 
I put metal plates between the wires and the drywall to protect them before sealing it up.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Before:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_pull_2.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;After:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/landing_finished.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;The backside, where the wires come in to a closet in the office:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/closet_fishing.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I forgot to take a good finished picture, but the wires are routed around the inside of the closet door to hide them, and protected by a track.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/closet_wire_run.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;The Ubiquiti Access Points like being ceiling mounted, so this has worked well to cover all of our house with only one AP. The slanted ceiling in the closet that it&#39;s mounted on is the slant of the roof of the second floor.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/closet_ap.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Four 10-gig ethernet runs into the office:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_wiring.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_jacks.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Making lots of cables for the keystone patch panel&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/ethernet_cable_1.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I used an audio rack which works well as a low-depth rack for network equipment, since I don&#39;t need room for full-depth servers. I ran a power outlet to feed the UPS and extended battery pack which feeds the rack. 
The rack is located in the center of the house near the stairwell. We&#39;ve got a dehumidifier and improved the insulation of the basement, so it sits under 50% humidity and around 60 degrees year-round.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/rack_wiring.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Switching to fiber&lt;/h4&gt;&#xA;&#xA;&#x9;Despite my &lt;a href=&#34;/2024/02/22/ipv6-and-ipv4-translation.html&#34;&gt;IPv6 adventures in the apartmentlab&lt;/a&gt;, I opted for the bandwidth of fiber at the new house over having native IPv6 connectivity. The only fiber provider on our street, Fidium Fiber, does not provide IPv6 in 2024. I debated this for quite a while, but a few things swayed me to switch from cable with IPv6 to fiber without it. The first was just wanting to experiment with the higher bandwidth, being able to saturate my equipment and see what it was capable of, since most of it could now pull around 2 Gbps. The second was that I knew it wouldn&#39;t affect our work-from-home requirements. I do need IPv6 for some things, but I just flip on Cloudflare Warp on my laptop for those cases. Last, I wanted the buried fiber connection to the house, and to get off flaky cable. We had so many outages on Spectrum cable in this area that I was ready to switch. We haven&#39;t had a single internet outage yet with fiber. And Fidium has been very accommodating of customers running their own routers. They provide an ONT with 1 gig and 10 gig copper ports, and I just have to register the MAC address of the NIC (an SFP module at this point) that I&#39;m connecting on the other end. 
I was also able to remove a lot of the coaxial cable in the basement.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I added conduit and a 50-foot extension for the fiber coming into the house to run it over to the rack:&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/fiber_conduit_4.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/fiber_conduit_2.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/fiber_conduit_5.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/fiber_conduit_6.jpeg&#34;&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/fiber_speedtest_net.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;10 Gigabit&lt;/h4&gt;&#xA;&#xA;&#x9;With 2.5GbE ports on my router, 2 Gbps fiber service, 10GbE ports on my NAS and laptop (via an adapter), and 2.5GbE plus Wi-Fi 6E (capable of about 1.4 Gbps wirelessly) at the access point, it was time to get above the 1 Gbps limit of my main managed Ethernet switch. I searched for a long time and ended up with a QNAP switch, which -- while it doesn&#39;t have any Power over Ethernet (PoE) -- has 8x 10GbE RJ45 ports as well as 8x 10-gigabit SFP+ ports at a reasonable price point. This let me have enough copper (RJ45) ports to support my existing 10 gigabit devices, a couple of SFP+ ports to bridge my switches, and a bunch of extra SFP+ ports to expand in the future.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;The switch has been rock solid, the only downside being the web interface -- namely the VLAN selector, which took multiple forum posts to figure out before I had my ports tagged successfully, along with the trunk ports for the router and access point.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/rack_01.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;Initial setup. 
(The names are &lt;a href=&#34;https://www.halopedia.org/Marathon-class_heavy_cruiser&#34;&gt;Marathon-class heavy cruisers&lt;/a&gt;. This organization didn&#39;t last long!)&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/rack_03.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Back to Ubiquiti&lt;/h4&gt;&#xA;&#xA;&#x9;After a year or so of enjoying the flexibility of OPNsense over my old Amplifi HD router, I started having an issue. All of our laptops and phones -- anything not being routed through a tunnel like Cloudflare Warp -- started to fail to load some web pages periodically. Most importantly, this was reported by my wife, so it was happening on my common subnets and not just on my experimental ones. It seemed to &lt;i&gt;mostly&lt;/i&gt; be Google properties that were affected, but this distinction would only be helpful for debugging; we use too many services for home and work for me to ignore these failures. The symptom on our phones was just that the webpage would start connecting and never get to rendering any of the page - just a blank page was shown. I could reproduce it less often on my laptop - just a browser starting to connect but never loading the page. YouTube was a common example; reloading often helped, but it was still untenable. An error message was never displayed. 
I ended up finding a reproducer on my laptop with Google Maps, and could periodically catch curl failing to complete the TLS handshake:&#xA;&#xA;&lt;p class=&#34;code&#34;&gt;&lt;small&gt;kerby@tycho % curl -vvvvv https://www.google.com:443/maps&#xA;*   Trying 142.251.41.4:443...&#xA;* Connected to www.google.com (142.251.41.4) port 443 (#0)&#xA;* ALPN: offers h2,http/1.1&#xA;* (304) (OUT), TLS handshake, Client hello (1):&#xA;*  CAfile: /etc/ssl/cert.pem&#xA;*  CApath: none&#xA;^C&#xA;kerby@tycho %&lt;/small&gt;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;&lt;small&gt;kerby@tycho % openssl s_client -showcerts -connect maps.google.com:443 &lt; /dev/null&#xA;Connecting to 142.251.40.206&#xA;CONNECTED(00000006)&#xA;write:errno=60&#xA;---&#xA;no peer certificate available&#xA;---&#xA;No client certificate CA names sent&#xA;---&#xA;SSL handshake has read 0 bytes and written 323 bytes&#xA;Verification: OK&#xA;---&#xA;New, (NONE), Cipher is (NONE)&#xA;This TLS version forbids renegotiation.&#xA;Compression: NONE&#xA;Expansion: NONE&#xA;No ALPN negotiated&#xA;Early data was not sent&#xA;Verify return code: 0 (ok)&#xA;---&lt;/small&gt;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/wireshark.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;As best I could tell, something was preventing the TLS handshake response from the server back to the client after the client hello. I still don&#39;t know what the cause was. I spent a few tries resetting everything I could think of in OPNsense, trying to get it back to a working configuration. Eventually, due to the impact, I tried replacing the router with a Ubiquiti Dream Machine Pro Max. I haven&#39;t had a single issue since. This was unsatisfying from a curiosity perspective, but sometimes things just need to work. This has been a bit of a theme with this post! 
Mostly spending time on house projects and not on root cause analysis.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;I&#39;ve been running the Dream Machine and plan to continue to do so -- the performance has been great, and with recent updates the Wireguard options are sufficient for my needs.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Rack at the end of 2024&lt;/h4&gt;&#xA;&#xA;&#x9;Plenty of cable management left to do after the router and switch changes.&#xA;&#x9;&lt;img src=&#34;https://kerbyhughes.com/2025/01/25/assets/rack.jpeg&#34;&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;&#x9;&lt;h4&gt;Next up&lt;/h4&gt;&#xA;&#xA;&#x9;Things I&#39;m working on:&#xA;&#xA;&#x9;&lt;ul&gt;&#xA;&#x9;&lt;li&gt;Z-Wave relays to fill some gaps where we don&#39;t have a good way to wire up switches to lights. For example, being able to turn off the outside garage lights from the mudroom.&lt;/li&gt;&#xA;&#x9;&lt;li&gt;Paperless-ngx and a scanner to handle incoming household documents&lt;/li&gt;&#xA;&#x9;&lt;li&gt;ErsatzTV with Jellyfin for commercial-free reruns&lt;/li&gt;&#xA;&#x9;&lt;/ul&gt;&#xA;&lt;/p&gt;&#xA;&lt;/div&gt;&#xA;
		<pubDate>2025-01-25 14:00:00 -0500 EST</pubDate>
	</item>
	<item>
		<title>Homelab 2025</title>
		<link>http://kerbyhughes.com/2025/12/31/homelab-2025.html</link>
		<guid>http://kerbyhughes.com/2025/12/31/homelab-2025.html</guid>
		<description>&lt;div class=&#34;post&#34;&gt;&#xA;&lt;h2&gt;&#xA;    &lt;a class=&#34;title&#34; href=&#34;/2025/12/31/homelab-2025.html&#34;&gt;&#xA;    Homelab 2025&#xA;    &lt;/a&gt;&#xA;&lt;/h2&gt;&#xA;&lt;div class=&#34;date&#34;&gt;2025-12-31&lt;/div&gt;&#xA;&lt;div class=&#34;content&#34;&gt;&#xA;&lt;p&gt;&#xA;    The homelab saw its most sustained usage this year, and I&#39;ve found several applications that are fully replacing cloud services for me or adding new capabilities. I&#39;m working on another post about how I host these services, but for the end of the year this is a quick write-up of the state of the homelab, covering the tools I found most useful.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h3&gt;Hardware&lt;/h3&gt;&#xA;&#xA;    Overall no major hardware changes this year. I&#39;m still using the same Ubiquiti Dream Machine, QNAP 10G switch, a handful of mini PCs for compute, and a Synology for storage. I did have a NIC on one mini PC die, so I waited for a sale on a barebones MS-01 and migrated the RAM and SSD over. As a result, I have a bit more available compute  with the Intel i5-12600H (the other PCs are just 4-core N100s) and a PCI-e slot to play around with.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I added two new runs of CAT-5e up from the basement to our downstairs den/office where I&#39;ve been working. I have my work computer and desktop workstation there.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Depending on how much pruning of data I do on my NAS, I may need to replace a couple of drives to get some more space on my Synology within the next year or so.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Z-Wave and Shelly Relays&lt;/h4&gt;&#xA;&#xA;    We have a couple of places where the electrical wiring in our house falls a bit short. One is the outdoor lights on our detached garage. 
These provide the majority of the light for our driveway at night, but they run off the panel in the garage, so the only switch is inside the garage.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    The second is the upstairs lighting. In the main bedroom, office, and nursery there are no overhead lights, and no light switches that control electrical outlets.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I decided to try solving both of these with smart devices. The garage doesn&#39;t offer a good way to put a switch in the house on the same circuit as the garage lights, and upstairs we would instead be looking at installing recessed lights in the ceiling if we were to do any serious electrical work.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    My requirements for the smart devices were that I wanted something that didn&#39;t look obviously like a smart switch and that didn&#39;t require apps or internet access to work.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Z-Wave seemed to be the best ecosystem that works over its own radio protocol and doesn&#39;t rely on WiFi. I bought a few Shelly Z-Wave relays, light switches, and power outlets and got them all automated with HomeAssistant.&#xA;&#xA;    &lt;img src=&#34;https://kerbyhughes.com/2025/12/31/assets/shelly_instructions.jpeg&#34; alt=&#34;Shelly relay and Z-Wave lightswitch&#34;/&gt;&#xA;    &lt;img src=&#34;https://kerbyhughes.com/2025/12/31/assets/shelly_garage_relay.jpeg&#34; alt=&#34;Shelly relay and Z-Wave lightswitch&#34;/&gt;&#xA;    It&#39;s in there somewhere.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I&#39;ve been impressed with the Shelly relays. They get installed inside the electrical box and take over control of the circuit, using the light switch as an input that can also be used for automation. For the garage, I installed one into the circuit for the lights such that I can turn the lights on and off with HomeAssistant in addition to the real switch. 
Then I installed a second relay in a switch box inside the house (which only controls a small light by the side door otherwise) and used that as an input to HomeAssistant. Now when I switch on the side door lights for the house, the garage lights come on as well.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;video controls width=&#34;500&#34;&gt;&#xA;         &lt;source src=&#34;https://kerbyhughes.com/2025/12/31/assets/garage_lights.webm&#34; type=&#34;video/webm&#34;/&gt;&#xA;    &lt;/video&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    This works about 98% of the time. Occasionally something seems to miss the signal, and I toggle the light switch again to fix it. There&#39;s also a reset procedure -- toggling the input switch for the relay five times -- which I&#39;ve only had to do once or twice. Overall I&#39;ve been very happy with this setup, and it&#39;s worked throughout the year at all temperatures. It&#39;s also been handy to turn on the outside lights from my phone through the VPN a couple of times when we were away.&#xA;&#xA;    &lt;img src=&#34;https://kerbyhughes.com/2025/12/31/assets/homeassistant_iphone.jpeg&#34; alt=&#34;HomeAssistant on iPhone&#34;/&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Upstairs, I found some light switches that look just like regular paddle-style switches, but are actually Z-Wave remotes. These got mounted in each of the three rooms upstairs where you&#39;d expect a light switch, and they are paired up with a floor lamp to light their respective room.&#xA;&#xA;    &lt;img src=&#34;https://kerbyhughes.com/2025/12/31/assets/shelly_lightswitch.jpeg&#34; alt=&#34;Shelly relay and Z-Wave lightswitch&#34;/&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I also bought an outdoor Z-Wave plug for the Christmas lights we put up on the front of the house. 
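&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    As a rough sketch, that kind of link is just a state trigger and a service call in HomeAssistant&#39;s automation YAML. The entity IDs here are made up for illustration, and a mirrored &#34;off&#34; automation completes the pair:&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;&lt;small&gt;- alias: &#34;Garage lights follow side-door switch&#34;&#xA;  trigger:&#xA;    - platform: state&#xA;      entity_id: switch.side_door_relay&#xA;      to: &#34;on&#34;&#xA;  action:&#xA;    - service: switch.turn_on&#xA;      target:&#xA;        entity_id: switch.garage_lights&lt;/small&gt;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    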
I was able to add the switch to the same automation as the garage lights, turning the Christmas lights on and off with the rest of the outdoor lights.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    For a hub, I&#39;m using a Zooz 700-series USB controller plugged into one of the mini PCs in my Proxmox cluster. I installed HomeAssistant via an LXC, as this seemed like the easiest way to pass through the USB device, and I haven&#39;t had any issues, including migrating the device and LXC across PCs. I run a backup job for the LXC, which will hopefully be sufficient to restore my automations if needed, as I really do not want to deal with the HomeAssistant UI again.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    Another option is to use a controller that can be connected to the Apple Home ecosystem, turning an Apple TV into a Z-Wave hub. Then instead of HomeAssistant you&#39;d use the Home application to configure your automations. I trust that setup to be less available and reliable than my Proxmox and LXC setup; however, HomeAssistant has made even the most basic operations so difficult that I might give this a try at the next house.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    The only real downside I&#39;ve found regarding the Z-Wave hardware is that adopting a device can be really flaky and the best method depends on the device. Typically it&#39;s done with a UUID provided by a QR code on a little sticker on the device, and then usually a copy of that QR code is supplied in the box. I&#39;ve lost at least one of the extra QR codes for the relays in the wall, so I would potentially have to open up the switch boxes and pull out the relays if I needed to adopt them to a new hub. If I remember right, there is a discovery mode that you can put the hub in, but I did not have success with this method. I always had to add the device by UUID first, put the device in adoption mode, and then initiate the search. 
HomeAssistant made this pretty miserable, and there are multiple ways to add a device, only one of which worked, so I forgot how to do it each time I added a new device.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h3&gt;Software&lt;/h3&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;&lt;a href=&#34;https://www.home-assistant.io/&#34;&gt;HomeAssistant&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    My review of HomeAssistant is mostly included above regarding Z-Wave devices. I really find the UI inscrutable, and the most basic things everyone wants to use it for (adding a device and using it in an automation) are buried behind weird menus and settings. That said, day-to-day it&#39;s been rock solid and has a decent iOS application whose dashboard lets me control my lights easily.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    The rest of my list is much more favorable:&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;&lt;a href=&#34;https://docs.paperless-ngx.com/&#34;&gt;Paperless-ngx&lt;/a&gt; and &lt;a href=&#34;https://apps.apple.com/us/app/swift-paperless/id6448698521&#34;&gt;Swift Paperless&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    Once I set up a solid base for hosting software in my cluster that needed both persistent storage and databases, including reliable backups, I installed &lt;a href=&#34;https://docs.paperless-ngx.com/&#34;&gt;Paperless-ngx&lt;/a&gt;, which is a document management suite. The basic idea is that you scan in your receipts, mail, tax documents, etc., and it helps you organize them and make them searchable via tags, OCR, and other features.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I originally thought I would need to buy a scanner like a ScanSnap, but before doing so I downloaded five or six iOS client apps for Paperless, and found that phone scanning is plenty good enough that there&#39;s no need for a dedicated scanner. 
All the apps seem to use the same scanning engine, probably provided by an Apple framework, and then they each have a slightly different workflow for ingesting and searching through documents.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I ended up favoring &lt;a href=&#34;https://apps.apple.com/us/app/swift-paperless/id6448698521&#34;&gt;Swift Paperless&lt;/a&gt;, as it provided the best scanning and naming workflow, and search seemed to work the best (a feature that did not work at all in some clients).&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I use it any time we get some documents in the mail, just taking a few seconds to do a scan. You can even omit naming and use the app or web UI later to triage new documents. Once you&#39;ve added tags to the app, I&#39;ve found that the auto-tagging feature works quite well.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;&lt;a href=&#34;https://miniflux.app/&#34;&gt;MiniFlux&lt;/a&gt; and &lt;a href=&#34;https://apps.apple.com/us/app/readkit-reading-hub/id1615798039&#34;&gt;Readkit&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    I love RSS feeds (&lt;a href=&#34;https://kerbyhughes.com/rss.xml&#34;&gt;here&#39;s mine!&lt;/a&gt;). It feels like a bastion of sanity in the modern web, and it&#39;s become my main source of news. Tech news is not terrible to find and consume, but especially for current events all the major apps and websites are nearly unreadable, especially on a phone. I don&#39;t use social media and don&#39;t want to, which can make it challenging to get important updates. I&#39;ve subscribed to &lt;a href=&#34;https://politicalwire.com/&#34;&gt;Political Wire&lt;/a&gt; and read it entirely via RSS, even though it&#39;s a bit of a firehose of a feed. I maintain a list of blogs, news, and enthusiast sites -- largely from independent publishers -- that I  enjoy, and reading the content on my phone via ReadKit is always my first stop. 
It&#39;s also refreshing to sometimes open the app and see that there are no new articles.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    For a long time I used &lt;a href=&#34;https://netnewswire.com/&#34;&gt;NetNewsWire&lt;/a&gt;, but I wanted to see if there were more options if I self-hosted some portion of it. I&#39;ve been using two tools: &lt;a href=&#34;https://miniflux.app/&#34;&gt;MiniFlux&lt;/a&gt;, running as a container in my cluster, which maintains my list of feeds and pulls in updates on a schedule, and &lt;a href=&#34;https://apps.apple.com/us/app/readkit-reading-hub/id1615798039&#34;&gt;Readkit on iOS&lt;/a&gt; as my primary reading interface. I tried out a few different Miniflux-compatible readers, and in addition to having a very nice rendering engine, Readkit has a killer feature: a button to go fetch the full web page for sites that only provide the first few lines or a summary in the RSS feed. The result is a reading experience similar to the Reader functionality of web browsers, with no ads, redirect loops, or other junk that crashes the browser. It works beautifully and I&#39;m more than happy to pay for a good client application.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;&lt;a href=&#34;https://immich.app/&#34;&gt;Immich&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    Immich is a photo management tool that can be used as an alternative to Apple Photos or Google Photos. For me though, I don&#39;t plan on ever having it be a replacement. Instead, I want to continue to use iCloud Photos to interoperate with family members because that&#39;s what they will continue to use, and it&#39;s the native photos app for all of our iPhones. Immich has replaced Synology Photos as the best way to create a backup of all of my photos, with a great management interface in addition. The Immich iOS app&#39;s &#34;Backup&#34; feature can be enabled, which syncs your iCloud photos over into Immich. 
This removes my dependence on Synology and gives me the peace of mind that I have a mirrored library at all times for all photos I take or save on my phone.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   It also works much better than iCloud/Apple Photos for images taken with my &#34;real&#34; camera, a Canon M50. It can handle RAW photos just as well as JPEGs, which means I can have access to the full resolution originals without taking up any cloud storage. I currently am running a cloudflared container next to Immich to expose it to family members behind Cloudflare Access; again, not as a replacement for another tool, but as a way to share higher-resolution photos.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;   &lt;h4&gt;&lt;a href=&#34;https://www.navidrome.org/&#34;&gt;Navidrome&lt;/a&gt;, &lt;a href=&#34;https://www.mp3tag.de/en/index.html&#34;&gt;Mp3tag&lt;/a&gt;, and &lt;a href=&#34;https://apps.apple.com/us/app/play-sub-music-streamer/id955329386&#34;&gt;play:Sub&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    I&#39;ve been continuing to enjoy using this stack as an alternative to Apple Music or Spotify with my own music collection. &lt;a href=&#34;https://www.mp3tag.de/en/index.html&#34;&gt;Mp3tag&lt;/a&gt; was a nice app I found for fixing mp3 metadata, Navidrome serves the library over the subsonic protocol, and after trying out many, many clients, &lt;a href=&#34;https://apps.apple.com/us/app/play-sub-music-streamer/id955329386&#34;&gt;play:Sub&lt;/a&gt; has been my favorite iOS player. 
It even has a CarPlay app so I can stream music from my server over Wireguard to my phone, connected to wireless CarPlay in my truck.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Wireguard and the &lt;a href=&#34;https://apps.apple.com/us/app/wireguard/id1441195209&#34;&gt;Wireguard iOS app&lt;/a&gt;&lt;/h4&gt;&#xA;&#xA;    Seems a little silly to call out a VPN protocol here, but Wireguard and mainly the &lt;a href=&#34;https://apps.apple.com/us/app/wireguard/id1441195209&#34;&gt;Wireguard iOS app&lt;/a&gt; get a mention because they work so well for my needs. I run Wireguard on my Ubiquiti router, which also has a dynamic DNS hook to update a Cloudflare DNS record with the IP from my ISP, and then I use the Wireguard app on my iPhone. I have it set to connect on-demand when switching to LTE or a non-home WiFi network, which means I have seamless connectivity to my cluster at home. All the apps listed here work perfectly no matter where I am, without any special DNS or needing to toggle anything on or off, which has really made them viable as self-hosted alternatives. I also haven&#39;t had any issues with battery life when leaving Wireguard on all the time.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;h4&gt;Miscellaneous&lt;/h4&gt;&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;a href=&#34;https://github.com/slackhq/nebula&#34;&gt;Nebula&lt;/a&gt; continues to work for WAN hole-punching to connect my Synologys together without needing any configuration on the router at my parents&#39; house. 
We both use this setup for one set of offsite backups.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I&#39;m testing out &lt;a href=&#34;https://github.com/dani-garcia/vaultwarden&#34;&gt;Vaultwarden&lt;/a&gt; and the &lt;a href=&#34;https://apps.apple.com/us/app/bitwarden-password-manager/id1137397744&#34;&gt;Bitwarden iOS app&lt;/a&gt; as a selfhosted contingency to 1Password.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;a href=&#34;https://kb.synology.com/en-us/DSM/help/SynologyDrive/drive_desc?version=7&#34;&gt;Synology Sync&lt;/a&gt; has continued to work well as a Dropbox alternative, giving me a local copy of code and projects on my laptop while also having the files available over network mounts. Similar to Synology Photos and Immich though, I want to replace this with an open-source solution. Currently looking into Syncthing.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I&#39;ve been running &lt;a href=&#34;https://www.nlnetlabs.nl/projects/unbound/about/&#34;&gt;unbound&lt;/a&gt; in my cluster for a while, providing DNS to my personal devices. I&#39;d like to see if I can improve my experience on some devices by having unbound load community-provided adblock lists and then trial it as the resolver on the household network.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I&#39;ve been testing &lt;a href=&#34;https://github.com/goauthentik/authentik&#34;&gt;Authentik&lt;/a&gt; and &lt;a href=&#34;https://github.com/pocket-id/&#34;&gt;Pocket ID&lt;/a&gt; for single sign-on, but honestly I&#39;m not sure that I want to have all the services behind a single identity. So far Pocket ID providing Passkey logins via Yubikeys and TouchID/FaceID seems promising though.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I run a container &lt;a href=&#34;https://github.com/distribution/distribution&#34;&gt;registry&lt;/a&gt; locally for my cluster which lets nodes pull images without needing to rely on outside registries. 
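&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    For reference, a minimal sketch of a config.yml for the registry with TLS enabled looks something like this (the storage and certificate paths are placeholders for whatever your setup uses):&#xA;&lt;/p&gt;&#xA;&lt;p class=&#34;code&#34;&gt;&lt;small&gt;version: 0.1&#xA;storage:&#xA;  filesystem:&#xA;    rootdirectory: /var/lib/registry&#xA;http:&#xA;  addr: :443&#xA;  tls:&#xA;    certificate: /certs/fullchain.pem&#xA;    key: /certs/privkey.pem&lt;/small&gt;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    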
If you go this route, just set up a publicly-trusted TLS certificate from Let&#39;s Encrypt that the registry serves; this is a far better solution than trying to add overrides for insecure registries.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;a href=&#34;https://github.com/versity/versitygw/&#34;&gt;Versity Gateway&lt;/a&gt; has been working well as an S3 endpoint that dumps the data directly to my NAS. I&#39;ve been testing this out as a replacement for both Time Machine and &lt;a href=&#34;https://www.arqbackup.com/&#34;&gt;Arq&lt;/a&gt; + Backblaze &lt;a href=&#34;https://www.backblaze.com/cloud-storage&#34;&gt;B2&lt;/a&gt;. The Versity Gateway endpoint works as a backup location in Arq to send the files to my NAS, which I can then back up along with the rest of the data there.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I set up a &lt;a href=&#34;https://forgejo.org/&#34;&gt;Forgejo&lt;/a&gt; instance for personal git hosting. Not sure how much I&#39;ll use this, but so far it&#39;s been preferable to all the alternatives available.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    &lt;a href=&#34;https://github.com/coder/code-server&#34;&gt;Code-Server&lt;/a&gt; has been an interesting experiment for providing a web-based IDE across all my machines. VS Code is not my favorite, but it works well enough in this context.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I&#39;ve been testing &lt;a href=&#34;https://github.com/jellyfin/jellyfin&#34;&gt;Jellyfin&lt;/a&gt;, &lt;a href=&#34;https://github.com/ErsatzTV/ErsatzTV&#34;&gt;ErsatzTV&lt;/a&gt;, and the &lt;a href=&#34;https://apps.apple.com/us/app/senplayer-media-player/id6443975850#productRatings&#34;&gt;SenPlayer&lt;/a&gt; client application for setting up my own streaming TV. The idea is to have local stations with educational programming and cartoons that are ad-free for the kid.&#xA;&lt;/p&gt;&#xA;&lt;p&gt;&#xA;    I tried out the latest versions of a whole bunch of Linux distributions and window managers for use on my workstation. 
I landed on &lt;a href=&#34;https://fedoraproject.org/spins/cinnamon/&#34;&gt;Fedora Cinnamon&lt;/a&gt; and have been very happy with it. I didn&#39;t take notes during the trial process, but there were so many bugs with so many distributions that it was a little surprising. Some of this was the fault of me running an Nvidia card with proprietary drivers, but there were many other non-GPU issues: installers failing, window managers completely locking up immediately on login, etc. Anyway, I&#39;ll be staying on Fedora Cinnamon for the foreseeable future.&#xA;&lt;/p&gt;&#xA;&lt;/div&gt;</description>
		<pubDate>2025-12-31 19:00:00 -0500 EST</pubDate>
	</item>
</channel>
</rss>