IPv6 and IPv4 Translation

2024-02-22

This post is an overview of the technologies needed to convert between the IPv4 and IPv6 protocols and when they might be used. The original inspiration for this was building a Nebula mesh network to connect machines, some of which had dual-stack, some of which were IPv4-only, and some of which I was bringing up in a cloud environment with only IPv6 networking. Some hosting providers give a discount on VMs if you run IPv6-only, or charge additional fees to allocate IPv4 addresses. For reasons we'll see, trying to 1) run peer-to-peer software 2) on a Linux host 3) with only IPv6 networking 4) connecting to an IPv4 host is a difficult scenario. There are a number of rabbit holes left unexplored, but this post will never see the light of day if I try to include all of them here, so I suggest reading through the linked RFCs.

Building a network

First, let's review what the Internet is, at least from layer 3 of the OSI model. We're not interested in the physical and data-link layers beneath us or the protocols running on top; we just need to build IP packets and route them between computers.

An entity wishing to provide service to the internet purchases some IP space from their regional registry [1], and becomes an Autonomous System, identified by an Autonomous System Number (ASN). For IPv4, that IP space is some portion of the 32-bit addressable space. You can divide up an address space by significant bits, so if you own a /16 prefix, everything that is a more-specific subset of those first 16 bits is within your space. For example, if it's 1984 and you're a university, you might own the 128.10.0.0/16 prefix. In order to join the Internet, you peer with other Autonomous Systems near you and announce your prefix over the (external) Border Gateway Protocol (BGP), which advertises routing information about your prefix to neighbors. All of these providers together form the Internet, and as a result we get globally routable addresses. Does IPv6 change this? Not really, except that with a 128-bit address space, we haven't allocated anywhere near all of it.
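
To make the prefix math concrete, here's a quick sketch using Python's standard ipaddress module; the /24 and the host addresses are hypothetical, carved from the university example above:

    import ipaddress

    # The university's announced prefix from the example above.
    prefix = ipaddress.ip_network("128.10.0.0/16")

    # A hypothetical more-specific /24 carved out of it, e.g. for one department.
    subnet = ipaddress.ip_network("128.10.42.0/24")

    print(subnet.subnet_of(prefix))                       # True
    print(ipaddress.ip_address("128.10.42.7") in prefix)  # True
    print(ipaddress.ip_address("128.11.0.1") in prefix)   # False: first 16 bits differ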

Having an understanding of BGP advertisements is important for understanding what we can and can't do with only access to the code on a single machine within a network. We may have full control over the packets on any interface connected to the machine, but in most cases we need assistance from the access/transport network (the ISP) to advertise the address space we're using. Just because we can construct an IPv6 address with an embedded IPv4 address, it isn't routable without assistance from the network.

Getting an address

Before we can even send packets, we need an IP address ourselves if we want the responses to our packets to get back to us. With IPv4, we usually get a single IPv4 address from our ISP, and then our router can multiplex private addresses over that single address to the internet -- a stateful NAT (Network Address Translation). A new device on your network talks to your router and receives a private IPv4 address via a DHCP request. The NAT table on the router is updated so that when the device reaches out to a public IPv4 address, the router rewrites the packet source as if it came from the router's public IPv4 address. All devices behind your NAT show up as coming from this public IPv4 address externally. As the name private hints, the local IPv4 address is not globally routable. So if someone wants to reach a device behind your NAT, they have to use that public IPv4 address, and based on firewall rules, your router picks the private address to forward the packet to, usually based on the port (port-forwarding). IPv6 has ULAs, Unique Local Addresses, which are somewhat equivalent to IPv4 private address space, but they are likewise not globally routable, and generally this isn't the method we would prefer for getting an IPv6 address.
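
As a toy illustration of the state a NAT keeps -- a real NAT rewrites packets in the kernel, and the addresses here are hypothetical documentation values -- the mapping logic looks roughly like this:

    # Toy stateful NAT table: (private address, private port) <-> public port.
    PUBLIC_IP = "203.0.113.7"   # the single address the ISP handed us

    nat_table = {}              # (private_ip, private_port) -> public_port
    reverse = {}                # public_port -> (private_ip, private_port)
    next_port = 40000

    def outbound(private_ip: str, private_port: int):
        """Rewrite an outgoing flow's source to the shared public address."""
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            reverse[next_port] = key
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def inbound(public_port: int):
        """Forward a reply to whichever device opened the flow, if any."""
        return reverse.get(public_port)   # None -> the firewall drops it

    print(outbound("192.168.1.23", 51515))   # ('203.0.113.7', 40000)
    print(inbound(40000))                    # ('192.168.1.23', 51515)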

IPv6 is meant to be globally routable, from one device to another. In that way, it's actually correcting what ended up not being possible over IPv4. Each device should be reachable on the internet via a unique address.

IPv6 is usually handed out by your ISP via a DHCPv6 request, similar to how you get a single IPv4 address to NAT via DHCP. However, with DHCPv6 your router can request not just a single address (which it would then have to NAT), but an entire prefix, which the ISP is said to delegate to you. With prefix delegation you'll usually receive a /56 or /60 [2], and then your router can further delegate that prefix into smaller prefixes.
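
A sketch of what that further delegation looks like, using Python's ipaddress module and a hypothetical /56 from the documentation range:

    import ipaddress

    # Hypothetical prefix delegated by the ISP (documentation range).
    delegated = ipaddress.ip_network("2001:db8:1234:ff00::/56")

    # The router can carve this into 2^(64-56) = 256 separate /64s,
    # one per downstream network.
    subnets = list(delegated.subnets(new_prefix=64))
    print(len(subnets))   # 256
    print(subnets[0])     # 2001:db8:1234:ff00::/64
    print(subnets[1])     # 2001:db8:1234:ff01::/64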

There are tricks you can employ any time you have a sufficiently large address space, one of which is that you can take a random value in that space and have a high probability that there won't be any overlap the next time you take another random value. That's basically what SLAAC (Stateless Address Autoconfiguration) is doing for IPv6 -- instead of stateful DHCP assignment, SLAAC runs an algorithm to convert the MAC address of your interface into the interface part of an IPv6 address, or randomly picks an address. Both of these are then combined with Duplicate Address Detection -- a quick check that nothing else on the network is listening on the selected address. There are additional mechanisms (privacy extensions) I don't know the details of, where your device takes multiple IPv6 addresses and may rotate the interface portion of the address periodically. All of this falls under autoconfiguration -- the ability of a device to get a routable address on its own with just the prefix information being provided, which is a benefit and simplification provided by adopting IPv6.
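
For the MAC-derived flavor, the transform is modified EUI-64: split the MAC in half, insert ff:fe, and flip the universal/local bit. A minimal sketch (using the MAC address from the ifconfig output later in this post):

    def eui64_interface_id(mac: str) -> str:
        """Derive the modified EUI-64 interface identifier from a MAC address."""
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                              # flip the universal/local bit
        eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
        return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

    # Combine the result with the advertised /64 prefix to form a full address.
    print(eui64_interface_id("5c:e9:1e:69:30:04"))     # 5ee9:1eff:fe69:3004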

Carrier-grade NAT (CGNAT) is like NAT on your home router, except at the ISP level. A single public IP address can be statefully NAT'd to many other IP addresses which the ISP hands out. Since the addresses behind the CGNAT can't be public, and can't be private (customers are already using the private ranges behind their own routers), there's a range of "shared" address space used specifically for CGNAT: the 100.64.0.0/10 range. I actually use some of this space for a Nebula network, which hasn't been a problem for me because I'm not currently on an ISP that uses CGNAT, so no routing is actually done with this address -- it's the internal overlay network only. I won't be using this for future deployments, however, because it's not the correct usage of this space.
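
Checking whether an address falls in the shared space is a one-liner with ipaddress; the overlay addresses used later in this post happen to land inside it:

    import ipaddress

    shared = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space

    print(ipaddress.ip_address("100.100.0.50") in shared)   # True
    print(ipaddress.ip_address("192.168.1.1") in shared)    # False: RFC 1918 private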

NAT security

This warrants a quick note on NAT and firewalls. The IPv4 address space is tiny, such that it's easy to enumerate and probe the entire space. As soon as you put a device on a public IPv4 address, it's going to get attacked by all sorts of things trying to log in. A common misconception is that because the devices behind your router only have private IP addresses, they are unreachable from the internet. It's hopefully true that they are mostly unreachable behind even consumer routers, but you really wouldn't say it's because of NAT -- your router is choosing what to do with incoming packets. It's not NAT that provides security (a NAT implementation could happily pass on all packets to any device, for instance); it's the stateful firewall also on your router, dropping any traffic it doesn't expect, that provides the security. Another way to think about it: it's not that devices are less reachable over IPv4, it's that reaching them requires more complexity (a stateful intermediary) and uses fixed resources (port numbers) to identify machines within a single address. We can have the same security without this complexity with IPv6.

And devices -- usually IoT devices -- might be breaching this contract without you knowing. Through protocols like UPnP, the device itself can configure the router to port-forward traffic to it, making it globally accessible, and now access to your private network is only as secure as the IoT device (the "s" in IoT stands for security). This is one of many reasons these devices should be cordoned off on their own VLAN or similarly isolated.

Translation

IPv4-only networks are unfortunately still common because they are legacy and exist until replaced. IPv6-only is where we'd like to get to, and if you read the RFCs linked in this post, it's interesting how all of the technologies are presented as temporary measures until IPv6 replaces IPv4. A transition is the goal, even though the speed may make it feel like we've reached a steady state. For now, it usually makes sense to run dual-stack (IPv4 and IPv6) networks for broad compatibility. But there are cases where we might want to use the solutions afforded by IPv6, and then deploy the transition technologies for legacy IPv4 compatibility.

We might have more devices than available IPv4 addresses and want them all to be uniquely routable. This might be because we're running on a cellular network that only provides IPv6 connectivity, because there aren't enough IPv4 addresses for every mobile device -- common in many parts of the world, and with providers like T-Mobile. It could also be something like a Kubernetes cluster where we have thousands of ephemeral pods that we want to make addressable without NAT. We may want to deploy IPv6-only internally for routing simplicity (it really is faster!), but devices will need to connect to some external hosts that are IPv4-only. We may want to take advantage of IPv6 autoconfiguration.

And in the other direction, while our network may only provide IPv4, we may want to connect to devices running on IPv6-only networks despite not being able to upgrade the network capabilities. To come up with solutions, we need to figure out what we can control -- is it just the program itself, the host, or services external to the network that we can configure to assist our program or host? Depending on the translation direction and the amount of control we have end-to-end, let's see how different types of translations are implemented.

IPv4 and IPv6 are incompatible, meaning that if I only have IPv6 networking, I can't reach a device that only has an IPv4 address. IPv6 is not just a bit extension of IPv4, they are entirely separate protocols.

  • If we only have IPv6 connectivity, we'll need a translator if we want to talk to a device that is only connected via IPv4.
  • If we only have IPv4 connectivity, we'll need a translator if we want to talk to a device that is only connected via IPv6.

6 to 4

As an example, let's try to connect to an IPv4-only hostname from a device that only has IPv6 connectivity (a Vultr virtual machine with no IPv4 allocation).

github.com is IPv4-only [3]; no IPv6 DNS records are returned.

    kerby@vultr-worker-ipv6:~# dig +short github.com A
    140.82.114.4
    kerby@vultr-worker-ipv6:~# dig +short github.com AAAA
    kerby@vultr-worker-ipv6:~#

Here's what happens if we try to connect with our device that only has IPv6:

    kerby@vultr-worker-ipv6:~# curl -vvv github.com
    * Trying 140.82.113.3:80...
    * Immediate connect fail for 140.82.113.3: Network is unreachable

In order for us to reach github.com, we need to first send a packet to something with an IPv6 address -- because that's all our network speaks -- and have it converted to an IPv4 address on some device that can speak both IPv4 and IPv6 (a translator).

IPv6 is big enough to contain the entire IPv4 space in the remaining bits of a /96 prefix! We can send the IPv4 request to a translator using a well-known prefix. RFC 6052 defines this as 64:ff9b::/96, and as long as a translator in our access network advertises this prefix, our packet will get to it. Then, the translator will remove the well-known prefix and generate an IPv4 packet that can be routed to Github and back. When the response gets back, the translator will send it to the IPv6 source address of our device. If this sounds a lot like the description of a NAT above, that's accurate: it's a stateful NAT -- it will usually use ports to track mappings between devices on both sides of the translator.
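
Here's a sketch of the address math on each side of the translator, using Python's ipaddress module; the translator's real work happens per-packet, but embedding and extraction are just this:

    import ipaddress

    WKP = ipaddress.ip_network("64:ff9b::/96")   # well-known NAT64 prefix (RFC 6052)

    def embed(v4: str) -> ipaddress.IPv6Address:
        """Place the 32 bits of an IPv4 address in the low bits of the /96 prefix."""
        return ipaddress.IPv6Address(
            int(WKP.network_address) | int(ipaddress.IPv4Address(v4)))

    def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
        """What the translator does: strip the prefix, keep the last 32 bits."""
        return ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF)

    addr = embed("140.82.113.3")
    print(addr)            # 64:ff9b::8c52:7103
    print(extract(addr))   # 140.82.113.3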

There is a protocol for doing stateless 1:1 translation (SIIT, RFC 7915), but this would require a different IPv4 address for every device and does not scale. As far as I know, all translators like this in practice are doing stateful NAT, utilizing ports and a mapping table to uniquely identify traffic and route it back to the same address.

So that covers the case where our program knows it needs to use a translator and knows to use the well-known prefix. But what if that's not the case? How can the program know when it needs to use a translator? And what if the program can't use a translator, is there a way that the host it's running on can detect the case of needing a translator and handle it for the program?

If the program can speak IPv6, we can handle part of this with DNS. A standard DNS implementation will return the A and AAAA (pronounced quad-A) records that exist for a host. But if the requester is deployed in an IPv6-only network, we know A records won't do it any good.

DNS64 and NAT64

DNS64 works by exploiting what we discussed earlier about the massive disparity between IPv4 and IPv6 address space. There's plenty of empty space in IPv6, so a /96 prefix, 64:ff9b::/96, is reserved for this purpose. First, we connect to the DNS64 resolver via its IPv6 address. Cloudflare, Google, and others run public DNS64 resolvers. If the hostname (e.g. kerbyhughes.com) has AAAA records, those are returned directly, because you can then use IPv6 the entire way. But if the hostname (like github.com) returns only A records, then DNS64 returns a synthesized AAAA record by taking the 64:ff9b::/96 prefix and appending the IPv4 address (hex encoded). The host just routes this like any other IPv6 destination out its interface, and when it gets to the ISP's NAT64, the NAT64 sees the well-known prefix and knows to use the last 32 bits as the destination in a new IPv4 packet. It maintains a translation table to do the reverse for the response (stateful NAT64). This works without the program ever knowing that the flow wasn't IPv6 the entire way.
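
A sketch of that DNS64 decision logic, with getaddrinfo standing in for real DNS queries; an actual DNS64 resolver operates on the records themselves, and public resolvers may synthesize with their own prefixes:

    import ipaddress
    import socket

    PREFIX = int(ipaddress.IPv6Address("64:ff9b::"))   # well-known /96

    def dns64_lookup(hostname: str):
        """Prefer real AAAA records; otherwise synthesize them from A records."""
        try:
            # If the name has genuine AAAA records, return them untouched.
            infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
            return [info[4][0] for info in infos]
        except socket.gaierror:
            pass
        # IPv4-only name: place each A record's 32 bits in the low bits
        # of the prefix to synthesize an AAAA answer.
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
        return [str(ipaddress.IPv6Address(PREFIX | int(ipaddress.IPv4Address(info[4][0]))))
                for info in infos]

    print(dns64_lookup("github.com"))   # e.g. ['64:ff9b::8c52:7103']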

RFC 6146: Stateful NAT64: Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers

1. Introduction
This document specifies stateful NAT64, a mechanism for IPv4-IPv6 transition and IPv4-IPv6 coexistence. Together with DNS64 [RFC6147], these two mechanisms allow an IPv6-only client to initiate communications to an IPv4-only server. They also enable peer-to-peer communication between an IPv4 and an IPv6 node, where the communication can be initiated when either end uses existing, NAT- traversal, peer-to-peer communication techniques, such as Interactive Connectivity Establishment (ICE) [RFC5245]. Stateful NAT64 also supports IPv4-initiated communications to a subset of the IPv6 hosts through statically configured bindings in the stateful NAT64.

The problem for my IPv6-only Linux VM is that while I can install something like clatd, or install unbound and point it at a DNS64 resolver, there's no NAT64 translator already in Vultr's network. So we have to provide that piece ourselves. Fortunately, the translation doesn't have to use the well-known prefix: we can use the prefix of an external NAT64 gateway, which serves both as the route to the gateway and as the prefix that will be stripped off. Then we just route the packet out to the returned IPv6 address, Vultr passes it off to eventually reach the prefix owned by the NAT64 gateway, and that gateway takes the embedded IPv4 address and routes it to the final destination. The response is able to get back because the source address is our VM's IPv6 address, which Vultr advertises.

https://nat64.xyz maintains a list of public NAT64 services. We can see that they provide their IPv6 DNS64 resolver, and then a globally routable /96 prefix that is used in place of the well-known /96, which would require an in-network translator.

Again though, the host and/or access network needed configuration. We had to have a DNS64 resolver configured in the DNS resolution path that knew about a NAT64 device. On an enterprise network, or your home network, where this can be configured for all devices, that might be sufficient. However, a non-configurable IPv6-only network is common out in the wild, for instance a phone on some cellular networks, or a laptop on a hotspot that only provides IPv6. To solve this, platform providers like Apple and Google have equipped their hosts with client-side translators to do this first part of the translation. This is referred to as the customer-side translator, or CLAT.

There are a couple of ways a host can determine if it's in an IPv6-only situation. One is that when it first connects to the network, the network can tell it that IPv4 is not available. Most modern operating systems like macOS/iOS [4] expect that they may be on an IPv6-only network, so they bundle a CLAT implementation. This is activated by DHCP option 108, which instructs the client that there is no IPv4 networking available. Similar to tunneling tools, the CLAT sets up a gateway on the host for IPv4 destinations and translates each packet to an IPv6 packet that goes to the router set up via the DHCP configuration.

RFC 8925: IPv6-Only Preferred Option for DHCPv4: Section 5

This document specifies a DHCPv4 option to indicate that a host supports an IPv6-only mode and is willing to forgo obtaining an IPv4 address if the network provides IPv6 connectivity.

...

The IANA has assigned a new DHCPv4 option code for the IPv6-Only Preferred option from the "BOOTP Vendor Extensions and DHCP Options" registry, located at https://www.iana.org/assignments/bootp-dhcp-parameters/.

Tag:  108
Name:  IPv6-Only Preferred
Data Length:  4
Meaning:  Number of seconds that DHCPv4 should be disabled
Reference:  RFC 8925
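
To make the option format just quoted concrete, here's a sketch of walking a DHCP reply's options field looking for tag 108 (the options fragment at the bottom is hypothetical):

    import struct

    def find_ipv6_only_preferred(options: bytes):
        """Walk DHCP options (tag, length, value) looking for option 108."""
        i = 0
        while i < len(options) and options[i] != 255:   # tag 255 = end of options
            if options[i] == 0:                         # tag 0 = pad, no length byte
                i += 1
                continue
            tag, length = options[i], options[i + 1]
            if tag == 108:
                # 4-byte value: number of seconds DHCPv4 should stay disabled.
                return struct.unpack("!I", options[i + 2:i + 2 + length])[0]
            i += 2 + length
        return None

    # Hypothetical fragment: option 108 carrying 1800 seconds, then end marker.
    opts = bytes([108, 4]) + struct.pack("!I", 1800) + bytes([255])
    print(find_ipv6_only_preferred(opts))   # 1800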

There seems to be basically one Tweet with a screenshot of macOS in this setup that is authoritative -- it's the single citation on Wikipedia. So I took the liberty of recreating the setup on one of my own networks via OPNSense.

Setting option 108 to disable IPv4 in the DHCP(v4) settings:

OPNSense DHCPv4 Option 108 setting

Configuring Tayga [5] -- a NAT64 implementation -- to use the well-known /96 prefix:

Tayga IPv6 well-known prefix

Here's the ethernet interface on my laptop after the DHCP request.

    en0: flags=88e3 mtu 1500
        options=6463
        ether 5c:e9:1e:69:30:04
        inet6 fe80::40c:5be1:1a3b:5137%en0 prefixlen 64 secured scopeid 0xf
        inet6 2603:7081:702:2860:cad:9c33:471f:81ab prefixlen 64 autoconf secured
        inet6 2603:7081:702:2860:c1d0:7356:827:53bb prefixlen 64 autoconf temporary
        inet 192.0.0.2 netmask 0xffffffff broadcast 192.0.0.2
        inet6 2603:7081:702:2860:8f9:eaf5:1d7:b1e7 prefixlen 64 clat46 👈
        nat64 prefix 64:ff9b:: prefixlen 96 👈
        nd6 options=201
        media: autoselect
        status: active

With unbound configured for DNS64, our synthesized AAAA record is the well-known IPv6 prefix plus a hex-encoded version of the IPv4 A record answer.

    kerby@tycho % dig +short github.com
    140.82.113.3
    kerby@tycho % dig +short github.com AAAA
    64:ff9b::8c52:7103

Another way for clients to detect IPv6-only connectivity is by using the special domain ipv4only.arpa. arpa is a special top-level domain reserved for infrastructure purposes like this. Devices make a request for this name to determine if DNS64 is available in the network.

RFC 7050: Discovery of the IPv6 Prefix Used for IPv6 Address Synthesis

A node requiring information about the presence (or absence) of NAT64, and one or more Pref64::/n used for protocol translation SHALL send a DNS query for AAAA resource records of the Well-Known IPv4-only Name (WKN) "ipv4only.arpa.". The node MAY perform the DNS query in both IPv6-only and dual-stack access networks. ... A DNS reply with one or more AAAA resource records indicates that the access network is utilizing IPv6 address synthesis

With DNS being provided by unbound on my router:

    kerby@tycho % dig ipv4only.arpa +short AAAA
    kerby@tycho %

Here I've told my laptop to use the nameservers provided by nat64.net.

    kerby@tycho % dig ipv4only.arpa +short AAAA
    2a01:4f8:c2c:123f:64:5:c000:ab
    2a00:1098:2c::5:c000:ab
    2a00:1098:2b::1:c000:aa
    2a00:1098:2c::5:c000:aa
    2a00:1098:2b::1:c000:ab
    2a01:4f8:c2c:123f:64:5:c000:aa

There are a bunch of rules about how the client should use these addresses, in which order, etc.
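
A sketch of the discovery itself, assuming /96 prefixes (RFC 7050 also allows shorter prefix lengths): the fixed A records for ipv4only.arpa are 192.0.0.170 and 192.0.0.171, so finding either one embedded in a returned AAAA and stripping it recovers the prefix:

    import ipaddress
    import socket

    WELL_KNOWN_V4 = {0xC00000AA, 0xC00000AB}   # 192.0.0.170 / 192.0.0.171

    def discover_pref64():
        """RFC 7050: any AAAA answer for ipv4only.arpa must be synthesized."""
        prefixes = set()
        try:
            infos = socket.getaddrinfo("ipv4only.arpa", None, socket.AF_INET6)
        except socket.gaierror:
            return prefixes   # empty: no DNS64 in the resolution path
        for info in infos:
            addr = int(ipaddress.IPv6Address(info[4][0]))
            # For a /96, the low 32 bits should be one of the well-known answers.
            if addr & 0xFFFF_FFFF in WELL_KNOWN_V4:
                prefixes.add(ipaddress.IPv6Network((addr & ~0xFFFF_FFFF, 96)))
        return prefixes

    # Behind DNS64 this prints e.g. {IPv6Network('64:ff9b::/96')}; else set().
    print(discover_pref64())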

    kerby@tycho % dig github.com +short AAAA
    2a00:1098:2b::1:8c52:7903
    2a01:4f8:c2c:123f:64:5:8c52:7903
    2a00:1098:2c::5:8c52:7903

As we know though, this is only half of the equation. The CLAT translates to IPv6, but this relies on a translator to listen for that IPv6 packet and translate it to IPv4. Therefore networks that set DHCP option 108 must also provide the translator. This translator is considered to be on the provider side, or PLAT.

These two things together, CLAT and PLAT, are referred to as a system called 464XLAT.

464XLAT

RFC 6877: 464XLAT: Combination of Stateful and Stateless Translation

1. Introduction

This document describes an IPv4-over-IPv6 solution as one of the techniques for IPv4 service extension and encouragement of IPv6 deployment. 464XLAT is not a one-for-one replacement of full IPv4 functionality. The 464XLAT architecture only supports IPv4 in the client-server model, where the server has a global IPv4 address. This means it is not fit for IPv4 peer-to-peer communication or inbound IPv4 connections. 464XLAT builds on IPv6 transport and includes full any-to-any IPv6 communication.

...

2. Terminology

PLAT: PLAT is provider-side translator (XLAT) that complies with [RFC6146]. It translates N:1 global IPv6 addresses to global IPv4 addresses, and vice versa.

CLAT: CLAT is customer-side translator (XLAT) that complies with [RFC6145]. It algorithmically translates 1:1 private IPv4 addresses to global IPv6 addresses, and vice versa. The CLAT function is applicable to a router or an end-node such as a mobile phone.

As we can see from the RFC, there are some limitations, which make sense now that we know the parts that are required to make it work. It's not peer-to-peer because the PLAT has to NAT the traffic and is therefore an intermediary -- all of the traffic goes through this translator (though only routing is required, no packet inspection, which as we'll see next is another benefit of being IPv6 by default). No direct connection is possible. And inbound IPv4 isn't possible because there is no IPv4 address on one of the peers; the network told the client it couldn't give it an IPv4 address over DHCP.

4 to 6

What about the other direction? What if the program/host is in an IPv4-only network but needs to communicate with IPv6-only peers? IPv6-only destinations are still rare in practice, because everything we've covered shows how IPv4 hasn't gone away at all. IPv6 implementations are generally additive, especially on origins for things like a website, where we have a lot of control over network and infrastructure choices -- a website origin is generally dual-stack. For peer-to-peer this isn't the case: we might well want to connect to a single device that only has an IPv6 address. And to get further along in the transition to IPv6, the best approach is to use IPv6 by default and treat this specific case -- a device on an IPv4-only network -- as the exception that we patch over until it can go away.

In the case of an IPv4-only host, all we can do is add a proxy somewhere that translates IPv4 to IPv6. We're out of tricks, because there's no way to represent a 128-bit IPv6 address inside a 32-bit IPv4 address. So unlike DNS64 and NAT64, we can't route these packets using just the addresses. If we can only use an IPv4 address for routing, we have to smuggle the 128 bits that represent where we actually want the packet to end up somewhere else in our packet. And that means we need a new protocol running on our translator that knows how to extract those bits from the data portion of the packet and build a new packet with an IPv6 header using those bits. And do the reverse when it gets a response.

In terms of implementation, this is actually a pretty easy case, but there's no way to configure it all automatically. Most tunneling software fits in this category -- the tunnel software, like cloudflared or Nebula, configures tunnel interfaces for both IPv4 and IPv6 on the host, making it appear dual-stack to programs. Then when an IPv6 packet goes into the tunnel, it is forwarded over IPv4 to a dual-stack host on the other end of the tunnel, which understands the tunnel protocol and can therefore extract the IPv6 address that the tunnel software put into the packet.
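
Here's a toy sketch of the smuggling idea: prepend the 16-byte IPv6 destination to the payload and ship it inside an ordinary IPv4 UDP datagram to a dual-stack relay. The relay address, port, and framing are all hypothetical; real tunnel software like Nebula adds encryption, session state, and its own framing on top:

    import ipaddress
    import socket

    RELAY = ("203.0.113.10", 4789)   # hypothetical dual-stack relay, IPv4-reachable

    def encapsulate(dest_v6: str, payload: bytes) -> bytes:
        """Prepend the 128-bit IPv6 destination so the relay knows where to forward."""
        return ipaddress.IPv6Address(dest_v6).packed + payload

    def decapsulate(datagram: bytes):
        """What the relay does: peel off 16 bytes, forward the rest over IPv6."""
        dest = ipaddress.IPv6Address(datagram[:16])
        return dest, datagram[16:]

    # IPv4-only side: an "IPv6" payload rides inside a plain IPv4 UDP datagram.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encapsulate("2001:db8::1234", b"hello"), RELAY)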

And if we don't want to run the tunnel software on every host, we can do it at the edge of the IPv4 network -- for example a router that is doing NAT for all the devices behind it can tunnel to a translator and convert packets for everyone.

These tunnels by themselves are just providing one-way connectivity though. If all we have is the tunnel to a dual-stack host, we're in a similar situation to our 464XLAT system above, where other clients can't reach us, because they have no way to know that we're behind the translator. The translator can't advertise our IPv4 address, because to the translator that IPv4 address is just the address our ISP handed us -- the ISP is already advertising it, so the translator can't also advertise it (well, it could, but things break badly when you do this over BGP, so providers have agreements to try to prevent hijacking). Instead, for the case of something like cloudflared, we can introduce DNS records that point a hostname to our particular tunnel. The hostname resolves to A and AAAA records, and Cloudflare's system makes sure that traffic to those addresses gets routed to the correct tunnel (via the hostname that was requested). We can do something similar with our Nebula network by publicly advertising the lighthouse address. Interestingly, my understanding is there were some attempts to build a more generic system that works similarly to help with the IPv4 to IPv6 transition.

Teredo

The Teredo protocol defines a way to connect peers with mixed IPv4 and IPv6 capabilities by defining two additional types of systems that operators run as general infrastructure -- servers and relays (Nebula uses the term relay for this as well). Relays are just like the cloudflared translator -- they are reachable at a known IPv4 address, which either the operators of the host or the network configure some or all IPv4 traffic to go over. Once at the dual-stack relay, the relay connects to a Teredo server, which is reachable at a well-known, globally-routable IPv6 prefix. My understanding is that this routable IPv6 prefix is meant to be advertised as general infrastructure, and practitioners can then implement their own relays that use the shared servers. The traffic goes from the relay to the server, and the server's job is not to forward any traffic but to inform the relays (and IPv6-native peers) of each peer's globally routable IPv6 Teredo address. We're back to 128-bit tricks, because the Teredo protocol embeds lots of information into the IPv6 address it assigns to the Teredo peer. In this way, the peers can check in with the server (possibly via a relay) and then connect directly.
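
A sketch of unpacking those embedded bits from a Teredo address. The address below is constructed for illustration; the mapped port and client address fields are stored bit-inverted so that NATs don't rewrite what looks like an embedded address:

    import ipaddress
    import struct

    def parse_teredo(addr: str):
        """Unpack a Teredo IPv6 address (prefix 2001::/32) into its fields."""
        raw = ipaddress.IPv6Address(addr).packed
        prefix, server, flags, port, client = struct.unpack("!4s4sHH4s", raw)
        assert prefix == bytes([0x20, 0x01, 0x00, 0x00]), "not a Teredo address"
        return {
            "server": str(ipaddress.IPv4Address(server)),
            "flags": flags,
            "mapped_port": port ^ 0xFFFF,   # stored inverted
            "mapped_ipv4": str(ipaddress.IPv4Address(
                int.from_bytes(client, "big") ^ 0xFFFFFFFF)),   # stored inverted
        }

    # Hypothetical: server 192.0.2.45, client NAT-mapped to 198.51.100.1:40000.
    print(parse_teredo("2001:0:c000:22d:0:63bf:39cc:9bfe"))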

The most well-known use of Teredo to my knowledge was the Xbox network. Due to the number of devices, IPv6 was the best way to uniquely identify each Xbox, but you have to make it work over many IPv4-only home networks, and it has to use peer-to-peer semantics. So by utilizing Teredo servers and running Xbox-specific relay servers that the Xbox can be hardcoded to reach out to if it only has IPv4 connectivity, you can connect every Xbox over unique IPv6 addresses.

Nebula

And this out-of-network setup is essentially what we can build with Nebula. Our lighthouses can be configured as relays for a node, and then as long as you can reach the lighthouse, the lighthouse will, if necessary, convert between IPv4 and IPv6 internally and relay over the appropriate interface that can reach the target. For hosts that don't share an IP protocol, this removes the peer-to-peer aspect, which may matter if you're pushing a lot of bandwidth one way or the other (for example, backups between two Nebula clients count against the relay's bandwidth), but it does facilitate connectivity, which may be more important.

Without a relay, connecting to an IPv6-only client (100.100.0.50) fails:

    root@vultr-worker-ipv4:~# ping 100.100.0.50
    PING 100.100.0.50 (100.100.0.50) 56(84) bytes of data.
    ^C
    --- 100.100.0.50 ping statistics ---
    16 packets transmitted, 0 received, 100% packet loss, time 15283ms

And connecting to an IPv4-only host (100.100.0.51) from an IPv6-only host (vultr-worker-ipv6) fails:

    root@vultr-worker-ipv6:~# ping 100.100.0.51
    PING 100.100.0.51 (100.100.0.51) 56(84) bytes of data.
    ^C
    --- 100.100.0.51 ping statistics ---
    7 packets transmitted, 0 received, 100% packet loss, time 6141ms

These are the configuration settings needed to utilize a Nebula lighthouse as a relay:

Lighthouses:

    relay:
      am_relay: true
    ...
    listen:
      # To listen on both any ipv4 and ipv6 use "[::]"
      host: "[::]"
    ...

Clients:

    relay:
      relays:
        - ${lighthouse-ip}
        - ${lighthouse-ip}
    ...
    listen:
      # To listen on both any ipv4 and ipv6 use "[::]"
      host: "[::]"
    ...

With a lighthouse acting as a relay from IPv4 (vultr-worker-ipv4) to IPv6 (100.100.0.50), we have connectivity:

    root@vultr-worker-ipv4:~# ping 100.100.0.50
    PING 100.100.0.50 (100.100.0.50) 56(84) bytes of data.
    64 bytes from 100.100.0.50: icmp_seq=1 ttl=64 time=6.86 ms
    64 bytes from 100.100.0.50: icmp_seq=2 ttl=64 time=6.84 ms
    64 bytes from 100.100.0.50: icmp_seq=3 ttl=64 time=6.86 ms
    64 bytes from 100.100.0.50: icmp_seq=4 ttl=64 time=6.83 ms
    ^C
    --- 100.100.0.50 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3005ms

And with a lighthouse acting as a relay from IPv6 (vultr-worker-ipv6) to IPv4 (100.100.0.51) we have connectivity:

    root@vultr-worker-ipv6:~# ping 100.100.0.51
    PING 100.100.0.51 (100.100.0.51) 56(84) bytes of data.
    64 bytes from 100.100.0.51: icmp_seq=1 ttl=64 time=6.97 ms
    64 bytes from 100.100.0.51: icmp_seq=2 ttl=64 time=6.80 ms
    64 bytes from 100.100.0.51: icmp_seq=3 ttl=64 time=6.49 ms
    64 bytes from 100.100.0.51: icmp_seq=4 ttl=64 time=23.4 ms
    64 bytes from 100.100.0.51: icmp_seq=5 ttl=64 time=14.8 ms
    ^C
    --- 100.100.0.51 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4006ms


  1. Can anyone purchase an IPv6 block? ARIN has some requirements, like needing to peer with two ISPs, so you probably don't qualify, and you probably don't want to pay for that anyway. It's also not scalable for everyone to do, or practical for provisioning all the devices in the world.
  2. The IPv6 prefixes line up on the nibble (4 bits) boundary, because each IPv6 hexadecimal (base16) character represents 4 bits (2^4=16), which is why you won't see a /59, for example.
  3. At this point, Github is providing a public service for every blog post and tutorial that uses it as a non-IPv6-enabled hostname. Think of the blog posts it will break once it's dual stack!
  4. On iOS, apps are required to work on IPv6-only networks. iOS handles this by providing APIs that can do the CLAT translation, like NSURLSession. This is just the stateless CLAT part of the translation and requires a PLAT on the network. [v6ops] Apple and IPv6, a few clarifications
  5. A future project is going to be an IPv6-only homelab. I'm not quite there yet for all devices, but this is a good start at understanding how to transition legacy devices. Tayga, a NAT64 implementation available as a plugin for OPNSense, should be a good part of that setup, but I had some issues getting the gateways configured for it, so I'll have to try again to get that working.