(If you’ve not seen the movie, Spoilers! Also none of this post will make any sense without knowing the movie. But mostly spoilers.)
Episode 1: The Signal
Cold Open
A silent, eerie shot of the buried lunar monolith, half‑revealed by excavation lights. No explanation. Cut to black.
Main Plot
Dr. Heywood Floyd travels to Space Station V amid growing secrecy and political tension. His briefing on the lunar discovery hints at something ancient and potentially dangerous.
The journey to the Moon is procedural, grounded, and quietly unsettling. Floyd and the team descend into the excavation site.
Cliffhanger
The monolith awakens. A piercing radio signal erupts. Cut to black.
Episode Function
Establishes the mystery and the tone: humanity has found something older and more powerful than itself.
Episode 2: The Glitch
Cold Open
A serene shot of Discovery One gliding through deep space. HAL’s calm voice runs diagnostics.
Main Plot
Life aboard Discovery. Routines, tensions, and HAL’s subtle behavioural shifts. Introduce Bowman and Poole as professionals, not victims‑in‑waiting.
HAL is charming, helpful, and almost too perfect. A few subtle glitches, a misreported sensor reading, a hesitation in speech. Flashbacks show HAL’s creation, training and the conflicting directives that seed his paranoia.
Cliffhanger
HAL reports the AE‑35 unit failure. The first crack in the façade.
Episode Function
Introduce main characters. Build sympathy for HAL but establish him as a machine.
Episode 3: The Malfunction
Cold Open
Heywood Floyd tells a computer operator, “I’ve recorded a video message to play to the crew when they reach Jupiter. Make sure HAL knows that the details of the mission are top secret.”
Main Plot
The EVA sequence to retrieve the AE‑35 unit, with the crew unable to find any fault in it. The “human error” scene and the lip‑reading scene follow. HAL’s behaviour becomes defensive, then paranoid.
The hibernating crew and Frank Poole’s deaths unfold with slow‑burn dread. Bowman’s confrontation with HAL becomes the emotional core. Bowman’s attempt to reason with HAL is framed as a genuine dialogue between two intelligences. The shutdown sequence is the emotional climax of the episode.
Cliffhanger
Bowman plays the secret prerecorded message. The Jupiter mission was never about exploration; it was about the monolith all along.
Episode Function
Turns the story into a conspiracy‑tinged thriller and deepens HAL’s tragedy.
Episode 4: The Dawn of Humanity
Cold Open
A scientist from Episode 1 lectures about early hominid evolution. “We still don’t know what triggered the leap.” Cut to prehistoric Africa.
Main Plot
The original Dawn of Man sequence plays as a revelation, not a prologue. The monolith’s appearance is now understood as the first intervention in a long chain.
The audience sees the evolutionary spark that began humanity’s ascent.
Cliffhanger
A match‑cut from the bone thrown skyward to a modern spacecraft, but instead of continuing the original film’s cut, we land on Frank Poole’s drifting body.
Episode Function
Recontextualises the monolith as a recurring cosmic agent and sets up the next millennium‑jump.
Episode 5: The Future
Cold Open
It is the year 3001. A salvage crew discovers Poole’s frozen body in deep space.
Main Plot
Poole is revived on Earth a thousand years after he was killed by HAL. He struggles with trauma, dislocation and the revelation of Bowman’s disappearance.
Humanity has advanced but the monolith remains an enigma. Poole learns of the monolith’s repeated interventions across history.
Cliffhanger
Poole is shown the last known transmission from Bowman: “Something’s going to happen. Something wonderful.” Cut to the Jupiter monolith.
Episode Function
This episode summarises the fourth novel, but without the computer‑virus ending. Jupiter igniting into a star is treated as a matter of fact that doesn’t get explored, maybe a hook for a second TV series.
Provides emotional grounding before the metaphysical finale and bridges Clarke’s broader universe into the series. Also gives Frank Poole an episode to balance Dave Bowman’s final episode.
Episode 6: The Infinite
Cold Open
Bowman’s pod approaches the monolith orbiting Jupiter.
Main Plot
The Stargate sequence unfolds as a full‑episode experiential journey. Bowman’s transformation into the Star Child becomes the culmination of the monolith’s long arc. The alien intelligence remains unseen, preserving the cosmic mystery.
Final Scene
A silent epilogue. Poole, now fully recovered, looks up at the sky as the Star Child appears above Earth.
Series Function
Ends on awe, ambiguity, and the sense that humanity’s story has only just begun.
This post was inspired by an episode of the Skeptics With a K podcast, where the hosts expressed a wish that movies were re-cut as episodic TV shows, allowing a slow enjoyment of the story.
How would you re-cut your favourite movies into TV shows? Am I a philistine for even writing this in the first place? Leave a comment.
You hear this a lot. Someone wants to make a point about censorship, cryptography, or the futility of banning a particular blob of data. The argument goes like this: a file is just a sequence of 1s and 0s, and numbers are also sequences of 1s and 0s when written in binary. Therefore, banning a file is like banning a number. What next? Will the government outlaw 85?
It’s a neat rhetorical trick. But every time I see it, I wince just a little, because here’s the awkward truth…
Sometimes, digital files aren’t numbers.
How long has this been going on?
Start with the simplest case, the empty file. A file of length zero. That’s a perfectly valid file and I have many copies of it, but what number is it?
You might say “zero”.
Fine — but then what number is a file containing a single zero byte? It can’t also be zero; that slot is taken.
Even if you carve out a special rule for the empty file (perhaps negative 1), you’re still not done. Suppose you want to encode a number as a file. As a 32‑bit integer? A 64‑bit integer? A 128‑bit integer? All of these are different files but they’re the same number.
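You can watch the ambiguity happen in Python. One number, three different files, depending on the width you pick:

# The same number, 42, encoded at three widths gives three distinct files.
for width in (4, 8, 16):  # 32-bit, 64-bit and 128-bit encodings
    print(width, "bytes:", (42).to_bytes(width, "little").hex())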
Numbers are infinite objects. Even the humble 42 has infinitely many zeros to the left and infinitely many zeros to the right after the decimal point. All numbers do, not just answers to metaphysical questions.
Files are finite objects. They have a length. Numbers don’t.
That difference matters.
Leading me along…
A favourite stunt in crypto‑activist circles is to turn a small text file into a prime number, but the trick only works because of a convenient accident.
Almost no real‑world file formats begin with a zero byte.
So when you convert that prime number back into the file, you simply chop off all the infinite leading zeros and start at the first non‑zero byte. It works because the file didn’t need those zeros anyway. But this is a convention, not a truth of nature.
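Two lines of Python show the collapse. Two different files, one number:

with_zero = b"\x00hello"  # a file that begins with a zero byte
without = b"hello"        # the same file without it
# int.from_bytes silently absorbs the leading zero, so the two agree:
print(int.from_bytes(with_zero, "big") == int.from_bytes(without, "big"))  # True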
The only widely‑encountered counterexample is UTF‑16 text. If the first character is ASCII, the first byte will be 0x00. Beyond that, you can find raw data formats such as uncompressed pixel dumps, PCM samples, and similar. These might begin with zero bytes if the first value happens to be zero. But structured formats almost never do.
And that’s deliberate. Most file formats begin with a header or “magic number” that identifies the type. These signatures are chosen to be distinctive and printable, which means the humble zero byte is out. Those file prefixes aren’t chosen to enable the prime number trick but rather for more mundane practical reasons.
Channelling Gödel
If you really want to treat every possible file as a number, you can, but you must choose a convention that encodes both the bits and the length. Only then do you get a true one‑to‑one mapping.
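If you’re curious what such a convention looks like, here’s a minimal Python sketch of one possibility, bijective base‑256, where each “digit” runs from 1 to 256 so the length is baked into the number itself:

def file_to_number(data: bytes) -> int:
    # Bijective base-256: digits run 1..256, so leading zero bytes count.
    n = 0
    for b in data:
        n = n * 256 + b + 1
    return n

def number_to_file(n: int) -> bytes:
    out = bytearray()
    while n > 0:
        n, r = divmod(n - 1, 256)
        out.append(r)
    return bytes(reversed(out))

assert number_to_file(file_to_number(b"")) == b""          # the empty file is 0
assert number_to_file(file_to_number(b"\x00")) == b"\x00"  # a single zero byte is 1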
And once you’ve done that, you’ve quietly admitted the point. A file is not just a number. A file is a number plus its length.
Which is exactly the thing the slogan tries to ignore. To do that you need a little help from Gödel, but that’s a story for another day.
Years ago, I was watching a feel‑good segment on the local news about a garden centre that had launched a recycling scheme for their used plastic plant pots. Customers were encouraged to bring back the empties once they’d transplanted their new purchases into the garden. A wholesome little initiative, I thought.
My warm glow lasted about thirty seconds.
In my head, those pots were being returned to the supplier, washed, de‑labelled, and stacked up ready for another round of seedlings. A simple loop. A sensible loop.
But the report cut to a conveyor belt feeding perfectly good pots into a machine that crushed them into plastic pellets. They can be used to make fleece jackets!
That was the moment I realised the clue had been there all along. They never said the pots would be reused, but recycled. I’d filled in the rest with wishful thinking.
Some of the pots on that conveyor were pristine. I can understand melting down the broken ones, but a pot that’s held a plant for all of three weeks surely has a few more seasons left in it. Instead, it was being sacrificed to the great fleece‑jacket economy.
Loop the Loop!
When I was a child in the 70s and 80s, drinks came in glass bottles with a deposit. You paid a little extra, returned the empties, and got the deposit back. The bottles went back to the plant, were washed, and went straight back into circulation. No shredding. No melting. No existential questions about the global fleece surplus.
There are plans to revive that deposit‑and‑return system today and I genuinely hope they succeed. I just can’t shake the suspicion that somewhere, in a boardroom, someone is pitching the exciting potential of turning all those bottles into yet more fleece jackets.
“When it’s burning hot on summer days, she’s exactly what I need. She’s soothing like the ocean rushing on the sand. She takes care of me.”
Credits 📸 “Norfolk” by me. 🀄 Thanks to Heather and Hilde for their feedback.
Wilf’s Programmers’ Workshop, PC Plus, November, 1991.
This was my first ever publication. Wilf Hey, writer of the Programmers’ Workshop column in PC Plus, had run a contest to write what we’d now call a quine, but which he described as a “self‑creating program”, one that tells you what it does by producing its own source.
By coincidence, I had just been experimenting with PKZIP’s ability to create self‑extracting EXE files. ZIP archives fused with the decompressor into a single executable. It occurred to me that if I took a few liberties with the definition of “programming language”, I could use that mechanism to produce a rather cheeky entry.
I wrote a batch file that performed the trick, copied it onto a 3.5‑inch floppy, and posted it off.
When the issue finally came out, I was thrilled to see my name in print! I proudly showed it to all my friends in the sixth‑form lounge. But I also winced a little. I hadn’t given any thought to the “source code”, and Wilf quite rightly pointed out that I wasn’t the author of PKZIP. At the time, I’d simply bundled PKZIP.EXE itself as the “source” because I needed something to be the source code and that seemed good enough. A quite inconsequential decision at the time.
After reading his comment, I started working on a revised version that included a small text file to act as the actual source. But I stopped. He wasn’t going to publish the same joke again!
Decades later, having lost my copy, I resolved to find it again. I knew it must have been during my sixth-form years because I remembered showing the magazine around the sixth-form lounge at school. That gave me a rough window but not the exact issue.
I got close when I discovered that archive.org had scans of Programmers’ Workshop from around that period. I found the edition that announced the quine contest, which gave me a lower bound, but none of the twenty or so scanned issues contained my entry. Still, at least I now knew which ones it wasn’t in.
Armed with the eight remaining issue dates, I posted on various retro‑computing forums to see if anyone might have a copy. A few people were selling old issues on eBay, but I wasn’t keen on paying for a one‑in‑eight chance.
What finally bore fruit was posting on Hacker News. A very helpful man, Paul Robinson, offered to go to the British Library, which holds archive copies of every issue of PC Plus, including the eight I was hunting for.
Paul’s trip to the British Library finally closed the loop. After decades of half‑memories and dead ends, there it was. My first published line of code‑adjacent mischief. Reading it again, I could see both the charm and the flaw. I’d tried to be clever, and succeeded — just not in the way the contest intended. In hindsight, the headline fits better than ever. I wasn’t just making a self‑creating program. I was being, in every sense, too clever by half.
Many thanks to archive.org, The British Library, Hacker News and Paul Robinson.
Oh yes, and congratulations to PKWare for inadvertently writing my entry to the contest.
In these environmentally conscious times, we are constantly exhorted to recycle. This is a tedious ritual involving coloured bins, soggy cardboard and the faint smell of moral superiority. Yet I submit that recycling is, in fact, the least responsible thing we could do with our waste paper.
Instead, I propose a bold, innovative, and unquestionably sustainable alternative.
Burn it. All of it. Immediately. Preferably with gusto.
“He’s kinda sexy and mysterious about it. Hot eyes, all my judgement is clouded. I’m in your city, can you show me round it? You can put your money right where my mouth is.”
“But burning paper releases CO₂!” you cry. Ah, but here is the true genius of the plan.
Once we stop recycling paper, industries that rely on recycled pulp will face a sudden, catastrophic shortage. Their only recourse will be to turn to lumber mills for virgin fibre. Lumber mills, in turn, will be forced to plant vast new forests to meet this demand. All that CO₂ we released earlier will be taken up by all those newly planted trees and taken out of the atmosphere!
Compare that to recycling paper. Send slow-moving, fossil-fuel-burning trucks to everyone’s house to pick up waste paper. Pulp it, bleach it and flatten it, and you’ve got recycled paper that’s not quite as good as wood paper. Why not cut out the middleman and let nature grow the wood for us?
By burning paper, we create a powerful economic incentive for growing more trees, bigger forests and a greener planet. In short, every time you set fire to a stack of old utility bills, you are personally contributing to reforestation.
Furthermore, the lumber industry will enjoy a renaissance. Rural economies will flourish. Entire regions will be revitalised by the sudden need to plant millions of trees to replace the ones being fed into the paper mills at industrial speed.
🔥 Imagine the community cohesion created by weekly neighbourhood paper bonfires. 🔥 Imagine the joy of watching junk mail fulfil its highest purpose. 🔥 Imagine the catharsis of consigning tax forms to cleansing flame.
Recycling never gave anyone that. ♻️
“Daisy’s bare naked, I was distraught. He loves me not, he loves me not. Penny’s unlucky. I took him back and then stepped on a crack and the black cat laughed.”
But seriously…
Yes, I am joking, but only a little bit.
I had the idea when I was reminded of a TV advert from some years ago by a toilet paper company. Their bold promise was that for every tree they used to make toilet paper, they would replant three. Sounds good, but I had a slightly more cynical rewording of that slogan. “We’ve taken steps to secure the future supply of the raw materials we need.”
I mean, those three trees they planted are going to end up being cut down and turned into more toilet paper, right?
But I can’t be really cynical about toilet paper manufacturing. Even if you apply maximum cynicism, the worst I could say is that their industry is neutral. They grow trees which take carbon out of the air, but then they turn those trees into toilet paper which gets used and decomposes, releasing that carbon. Which then becomes more trees. The circle of life.
And honestly, I could hope to be neutral too. I buy potatoes, made from carbon that was taken out of the air, but then I cook and eat those potatoes which means that carbon ends up in the air again. The paper industry has more in common with a potato farmer than a plastic factory. Agriculture, even at industrial scale, is the application of the natural carbon cycle.
Recycling plastic and metals makes sense, but recycling paper? Sure, I’m going to keep dropping my cardboard packaging into the recycling bin, but if I were to suggest we should recycle potatoes after we eat them, you’d call me insane. (And I must apologise for the disgusting image I put in your head just now.)
The real issue isn’t how many trees we plant, but what we do with them. Cutting down a tree and planting three saplings doesn’t remove carbon from the atmosphere; it just keeps the short carbon cycle spinning. If we actually want to lower atmospheric CO₂, we need trees that grow, mature, and stay standing. Planting three trees only matters if at least some of them are allowed to become forests rather than future toilet rolls.
And if that means lighting the odd ceremonial blaze in the name of carbon sequestration, well, that’s just responsible citizenship.
Credits 📸 “Bonfire” by Jonas Bengtsson. (Creative Commons.) 📸 “Cat on Laptop” by Doug Woods. (Creative Commons)
Thanks to my sis and sis-in-law Heather and Hilde for their insightful review. Thanks also to The YIMBY Pod for the inspiration to write this.
Lately, I’ve been posting on flat earth social media, mostly asking why ships appear to sink behind the horizon when they sail away. Why? Because of XKCD number 386.
One thing I’ve noticed is the flat‑earther’s need to deny gravity. I’m not sure why because gravity is one of the easiest things to demonstrate. You drop something, it falls towards the ground instead of hanging in mid-air. That’s it.
Flat earthers seem to tie themselves in knots trying to replace gravity. They’ll invoke buoyancy without realising that it depends on gravity to work. They’ll invoke electrostatic forces, which don’t depend on gravity but break down when you ask where the like-charges-repel effect has gone.
One even told me that Beyoncé was responsible for objects staying on the earth, but I’m quite sure that was an autocorrect error.
All I Want for Christmas is Glue
So, as a friendly glober with a GCSE in high school physics, I thought I’d help. Let’s see if we can integrate some form of gravity into a flat earth.
“Many times I’ve been alone and many times I’ve cried. Anyway, you’ll never know the many ways I’ve tried.”
Introducing Flat‑Earth Gravity™.
Put away your books on Newtonian motion or relativistic physics. This isn’t that.
This is the force that pulls objects towards the ground. Exactly what you see when you pick up a ball and drop it. This force behaves exactly as required for a flat earth to function and not at all like any force known to Glober physics.
What direction? It pulls towards the ground. Perpendicular to the surface of the flat Earth at a constant 9.8 m/s² everywhere. The same acceleration you can measure for yourself with tennis balls and a stopwatch.
Drop a ball in London: down. Drop a ball in Brazil: also down. Drop a ball on the ice wall at the edge of the earth: Get down tonight!
These “down” directions are parallel, because this force is uniform across the entire disc of the earth. All perpendicular to the flat earth surface. This conveniently avoids the need for the Earth to be infinitely large, which is what Glober physics would require.
And crucially, Flat‑Earth Gravity™ does not pull toward the centre of mass. This force is always downwards, never sideways. This is why the flat earth doesn’t collapse into a sphere and why walking outwards towards the edge isn’t like climbing a hill. That’s what you’d get with Glober Gravity, so don’t get them mixed up.
There you go. Gravity fixed. Now flat earthers can stop arguing about density and start arguing about why their new custom‑built force doesn’t also pull the Sun and Moon into the ground. Maybe that’s because of Beyoncé. Or buoyancy.
And to those who point out that “All I Want for Christmas” is by Mariah Carey. Shuddup.
Credits
📸 “Daventry Ducks” by me.
Thanks to “Ozteric Oz” for the inspiration.
IP address exhaustion is still with us. Back in the early 1990s, when the scale of the problem first became obvious, the projections were grim. Some estimates had us running out of IPv4 addresses as early as 2005 and even the optimistic ones didn’t push much beyond the following decade.
As the years passed, we got clever. Or perhaps, more desperate.
We managed to put off that imminent exhaustion for decades, breaking the Internet in the process in small, subtle ways. Want multiple home machines behind a single IP? No problem — we invented NAT. ISPs want in on that “multiple devices, one address” trick? Sure, have some Carrier‑Grade NAT. Want to run a server on your home machine? Er… no.
And through all of this, IPv6 has been sitting there patiently. It fixes the address shortage. It fixes autoconfiguration. It fixes fragmentation. It fixes multicast. It fixes almost everything people complain about in IPv4.
But hardly anyone wants it.
The problem isn’t that IPv6 is bad, but that deploying it means spending money before your neighbours do and no one wants to be the first penguin off the ice shelf. So we’ve ended up in a long, awkward stalemate. IPv6 is waiting for adoption and IPv4 is stretched far beyond what anyone in 1981 imagined.
But what if it hadn’t gone that way? What if the “IP Next Generation” team that designed IPv6 had chosen a different path? One that extended IPv4 instead of replacing it.
Let’s take a visit to that parallel universe.
“Images of broken light which dance before me like a million eyes, they call me on and on across the universe. Thoughts meander like a restless wind inside a letter box, they tumble blindly as they make their way…”
1993 — The Birth of IPv4x
It’s 1993, and the IP‑Next‑Generation working group has gathered to decide the future of the Internet. The mood is earnest, a little anxious, and very aware that the world is depending on them.
One engineer proposes a bold idea: a brand‑new version of IP with 128‑bit addresses. It would need a new version number but it would finally give the Internet the address space it deserved. Clean. Modern. A fresh start. IPv6!
Another engineer pushes back. A brand‑new protocol sounds elegant but IPv4 is already everywhere. Routers, stacks, firmware, embedded systems, dial‑up modems, university labs, corporate backbones. Replacing it outright would take decades and no one wants to be the first to deploy something incompatible with the rest of the world.
So the group settles down and agrees on what that would look like. People want the same IP they’re used to, but with more space. But if this idea is going to have legs, one requirement is unavoidable.
The new protocol must work across existing IPv4 networks from day one.
The Version field must remain 4.
The destination must remain a globally routable 32‑bit IPv4 address.
The packet must look, smell, and route like IPv4 to any router that doesn’t understand the new system.
And so IPv4x is born.
In a nutshell, an IPv4x packet is a normal IPv4 packet, just with 128‑bit addresses. The first 32 bits of both the source and target address sit in their usual place in the header, while the extra 96 bits of each address (the “subspace”) are tucked into the first 24 bytes of the IPv4 body. A flag in the header marks the packet as IPv4x, so routers that understand the extension can read the full address, while routers that don’t simply ignore the extra data and forward it as usual.
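For the curious, here’s a minimal Python sketch of what that imagined encapsulation might look like on the wire. Everything about it is invented for this parallel universe; I’ve arbitrarily pressed IP protocol number 253 (genuinely reserved for experimentation) into service as the IPv4x flag.

import struct

def build_ipv4x_packet(src128: bytes, dst128: bytes, payload: bytes) -> bytes:
    # A fictional IPv4x packet: an ordinary IPv4 header carrying the top
    # 32 bits of each address, with the remaining 96 bits of each (the
    # "subspace") tucked into the first 24 bytes of the body.
    assert len(src128) == 16 and len(dst128) == 16
    body = src128[4:] + dst128[4:] + payload  # 12 + 12 subspace bytes, then data
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,    # version is still 4, standard 20-byte header
        0,               # type of service
        20 + len(body),  # total length
        0, 0,            # identification, flags/fragment offset
        64,              # TTL
        253,             # hypothetical IPv4x marker (experimental protocol number)
        0,               # checksum, left as zero in this sketch
        src128[:4],      # legacy 32-bit source, routable by old routers
        dst128[:4],      # legacy 32-bit destination
    )
    return header + body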
Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree. It has to work this way because any router that doesn’t understand IPv4x will still route purely on the old 32‑bit address. There’s no point assigning part of your subspace to someone else — their packets will still land on your router whether you like it or not.
An IPv4 router sees a normal IPv4 packet and routes according to the 32‑bit target address in the header, while an IPv4x router sees the full 128‑bit target address and routes according to that instead.
This does mean that an ordinary home user with a single IPv4 address will suddenly find themselves in charge of 96 bits of address space they never asked for nor will ever use, but that’s fine. There are still large regions of the IPv4 space going unused.
“If you’re still there when it’s all over, I’m scared I’ll have to say that a part of you has gone.”
1996 — The First IPv4x RFC
By 1996, the IPv4x design had stabilized enough for the working group to publish its first formal RFC. It wasn’t a revolution so much as a careful extension of what already worked.
DNS received a modest update. A normal query still returned the familiar A record, but clients could now set a flag indicating “I understand IPv4x”. If the server had an extended address available, it could return a 128‑bit IPv4x record alongside the traditional one. Old resolvers ignored it. New resolvers used it. Nothing broke.
DHCP was updated in the same spirit. Servers could hand out either 32‑bit or 128‑bit addresses depending on client capability.
Dial‑up stacks were still distributed with modem software, not the OS, which turned out to be a blessing: the major dial‑up packages all added IPv4x support within a year.
Adoption was slow but steady. The key advantage was that the network didn’t have to change. If your machine spoke IPv4x but the next router didn’t, the packet still flowed. Old routers forwarded based on the top 32 bits. New routers used the full 128.
MIT and the First Large‑Scale Deployment
The first major adopter was MIT. They had been allocated the entire 18.0.0.0/8 block in the early ARPANET era and they were famously reluctant to give it up. Stories circulated about individual buildings — some zoned for fewer than a dozen residents — sitting on entire /24s simply because no one had ever needed to conserve addresses.
IPv4x gave them a way forward and to show their credentials as responsible stewards. Every IPv4 address in their /8 became the root of a 96‑bit subspace. MIT deployed IPv4x experimentally across their campus backbone and the results were encouraging. Nothing broke. Nothing needed to be replaced overnight. It was the perfect demonstration of the “no flag day” philosophy the IPng group had hoped for.
Their success reassured everyone else that IPv4x was not only viable, but practical. Other large networks began making small updates during their weekend maintenance windows.
Buoyed by this success, IANA announced a new policy. All /8 blocks that are currently unused are reserved for IPv4x only.
“Hypnotizing, mesmerizing me. She was everything I dreamed she’d be.”
2006 — Ten Years of IPv4x
By 2006, IPv4x had firmly established itself. Dial‑up was fading, broadband was everywhere, and homes with multiple computers were now normal. IPv4 hadn’t vanished — some ISPs and server farms stuck to an “if it ain’t broke” philosophy.
“IPv4x when we can. NAT when we must.”
Windows XP embodied this mindset. It always asked DNS for an IPv4x address first, falling back to IPv4 when necessary and relying on NAT only as a last resort.
Residential ISPs began deploying IPv4x in earnest. Customers who wanted a dedicated IPv4 address could still have one — for a fee. Everyone else received an IPv4x /64, giving them 64 bits for their own devices. ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony.
The old IPv4 pool finally ran dry in 2006, just in time for the anniversary. There were still plenty of unused /8 blocks, but these had all been earmarked for IPv4x, and IANA refused to budge. If you wanted more addresses, they would have to be the IPv4x kind.
Peer‑to‑Peer and the IPv4x Backlash
IPv4x had its fans, but it also had one determined opponent: the music industry.
Under IPv4 and NAT, peer‑to‑peer networking had always been awkward, especially if you weren’t a nerd who understood IP addresses. If you wanted to participate in peer-to-peer, you needed to log into your router’s admin panel and mess with arcane configurations which every router manufacturer had a different name for. Many gave up and simply decided music piracy wasn’t for them.
IPv4x removed all that friction. Every device had a stable and globally reachable address. File‑sharing exploded as peer-to-peer software became simple to use. You didn’t need to poke about with your router; it all just worked.
One trade group identified IPv4x as the cause of the growth in music file sharing and ran with the slogan “The X is for exterminating your favorite bands.” That stung a little but it didn’t stick. Cheap international calls, multiplayer games, chat systems and early collaboration tools all flourished. IPv4x didn’t just make peer‑to‑peer easier; it made direct communication the default again.
“They’re Justified, and they’re Ancient and they drive an ice cream van. They’re Justified and they’re Ancient, with still no master plan. The last train left an hour ago, they were singing All Aboard, All bound for Mu Mu Land.”
2016 — The Tipping Point
By 2016, IPv4x was the norm. The only major holdouts were the tier‑1 backbones, but that had always been part of the plan. They didn’t need IPv4x at all as the top 32 bits were enough for global routing between continents. But mostly, their eye‑wateringly expensive hardware didn’t really need replacing.
A few websites still clung to IPv4, forcing ISPs to maintain CGNAT systems, until one popular residential ISP broke ranks.
“IPv4x, or don’t.”
For customers of this ISP, those last few IPv4‑only sites simply failed. Support staff were given a list of known‑broken websites and trained to offer an alternative plan if a customer insisted the fault lay with the ISP. Most customers just shrugged and moved on. As far as they were concerned, those websites were simply malfunctioning.
Eventually, a technically minded customer pieced together what was happening and blew the whistle. A few dogs barked, but almost no one cared. The ISP spun the story that these websites were using “out‑of‑date” technology, but not to worry, they had an option for customers who really needed CGNAT support, provided they were willing to pay for it.
2020 — The Pandemic and IPv4x’s Quiet Triumph
When the world locked down in 2020, IPv4x quietly proved its worth.
The most popular video‑conferencing platforms had long since adopted a hybrid model. The operators centralized authentication and security, but handed the actual media streams over to peer‑to‑peer connections. Under IPv4/NAT, that had always been fragile but under IPv4x it was effortless.
Remote desktop access surged as well. People had left their office machines running under their desks and IPv4x made connecting to them trivial. It simply worked.
“You run to catch up with the sun, but it’s sinking. Racing around to come up behind you again. The sun is the same in a relative way, but you’re older. Shorter of breath, and one day closer to death.”
2026 — Thirty Years of IPv4x
By 2026, as the world celebrated the thirtieth anniversary of IPv4x, only a few pain points remained. The boundary between the first 32 bits and the remaining 96 was still visible.
If you were a serious network operator, you wanted one of those 32‑bit IP addresses to yourself, one you could attach your own router to. If you weren’t important or wealthy enough for one of those, you were at the mercy of whoever owned the router connected to those top 32 bits. But it wasn’t a serious problem. The industry understood the historical baggage and lived with it.
Public DNS resolvers were still stuck on IPv4. They didn’t want to be — DNS clients had spoken IPv4x for years — but the long tail of ancient DHCP servers kept handing out 32‑bit addresses. As long as those relics survived in wiring cupboards and forgotten branch offices, DNS had to pretend it was still 1998.
MIT still held onto their legendary 18.0.0.0/8 allocation, but their entire network now lived comfortably inside 18.18.18.18/32. They remained ready to release the rest of the block if the world ever needed it.
It was around this time that a group of engineers rediscovered an old, abandoned proposal from the 1990s: a clean‑slate protocol called IPv6, which would have discarded all legacy constraints and started fresh with a new address architecture. Reading it in 2026 felt like peering into a parallel universe.
Some speculated that, in that world, the Internet might have fully migrated by now, leaving IPv4 behind entirely. Others argued that IPv4 would have clung on stubbornly, with address blocks being traded for eye‑watering sums and NAT becoming even more entrenched.
IPv4x had avoided both extremes. It hadn’t replaced IPv4 but absorbed it. It hadn’t required a revolution but enabled an evolution. In doing so, it had given the Internet a smooth transition that no one noticed until it was already complete.
“Bells will ring. Sun will shine. I’ll be his and he’ll be mine. We’ll love until the end of time and we’ll never be lonely anymore.”
Back in the Real World
Of course, none of this really happened.
IPv4x was never standardized, no university ever routed a 96‑bit subspace beneath its legacy /8, and the world never enjoyed a seamless, invisible transition to a bigger Internet. Instead, we built IPv6 as a clean break, asked the world to deploy it alongside IPv4, and then spent the next twenty‑five years waiting for everyone else to go first.
And while imagining the IPv4x world is fun, it’s also revealing. That universe would have carried forward a surprising amount of legacy. IPv6 wasn’t only a bigger address space but a conscious modernization of the Internet. In our IPv4x world, NAT would fade, but ARP and DHCP would linger. The architecture would still be a patchwork of 1980s assumptions stretched across a 128‑bit address space.
IPv6, for all its deployment pain, actually fixes these things. It gives us cleaner auto-configuration, simpler routing, better multicast, and a control plane designed for the modern Internet rather than inherited from the ARPANET. The road is longer, but the destination is better.
Still, imagining the IPv4x world is useful. It reminds us that the Internet didn’t have to fracture into two incompatible address families. It could have grown incrementally, compatibly, without a flag day. It could have preserved end‑to‑end connectivity as the default rather than the exception.
And yet, the real world isn’t a failure but a different story. IPv6 is spreading while IPv4 is receding. The transition is messy, but it is happening. And perhaps the most important lesson from our imaginary IPv4x universe is this.
The Internet succeeds not because we always choose the perfect design, but because we keep moving forward anyway.
Epilogue
It was while writing this speculative history that the idea for SixGate finally clicked for me. In this alternate timeline, there’s a moment when an old IPv4‑only router hands off to a router that understands IPv4x. The handover is seamless because there’s always a path from old to new. The extended IPv4x subspace lives under the IPv4 address and the transition is invisible.
In our real world — the one split between IPv4 and IPv6 — we don’t have that luxury. But it led me to realize that if only an IPv4 user had a reliable way to reach into the IPv6 world, the transition could be smoother and more organic.
That’s where SixGate steps in. A user stuck on IPv4 can ask the IPv6 world for a way in. By returning a special SRV record from the ip6.arpa space, the user receives a kind of magic portal, provided by someone who wants them to be able to connect. Not quite as seamless as the router handover in our parallel universe, but impressively close given the constraints we live with.
So I hope SixGate can grow into something real — something that helps us get there a little faster. Maybe it will give IPv6 the invisible on‑ramp that IPv4x enjoyed in that parallel world.
Either way, imagining the road not taken has made the road we’re on feel a little clearer, and a little more hopeful.
And yes, this whole piece was a sneaky way to get you to read my SixGate proposal. Go on. You know you want to.
“The Watusi. The Twist. El Dorado.”
Credits 📸 “Chiricahua Mountains, Portal, AZ” by Karen Fasimpaur. (Creative Commons) 📸 “Splitting Up” by Damian Gadal. (Creative Commons) 📸 “Reminder Note” by Donal Mountain. (Creative Commons) 📸 “Cat on Laptop” by Doug Woods. (Creative Commons) 📸 “Up close Muscari” by Uroš Novina. (Creative Commons) 🎨 “Cubic Rutabaga” generated by Microsoft Copilot. 📸 “You say you want a revolution” by me. 🌳 Andrew Williams for inspiring me to pick the idea up. 🤖 Microsoft Copilot for helping me iron out technical details and reviewing my writing.
In brief, if an IPv4‑only client wishes to connect to an IPv6‑only service, SixGate bridges the gap by looking up a special SRV record in the service’s ip6.arpa zone, normally used for reverse‑DNS. This record points to a gateway service provided by the operator of that IPv6 network. Using that gateway, any machine that only has IPv4 can connect to the world of IPv6 — all working silently, without the user needing to know what’s going on.
This page is a security analysis of the risks of publishing and reading that SRV record. As of writing, SixGate is incomplete because I’ve chosen not to specify how the gateway will work, leaving that to a later discussion. As such, this analysis focuses solely on the mechanism of the SRV record. Once a gateway mechanism is proposed, further security analysis will be needed. For now, the concerns are entirely about DNS ownership, DNS integrity, and the correctness of the reverse‑DNS hierarchy.
Here are the relevant concerns.
Reverse‑DNS becomes more important — but not dangerously so
The ip6.arpa tree has historically been used for PTR lookups and the occasional policy check. SixGate gives it a more operational role by using it to publish the gateway location for a given IPv6 address.
This does not introduce a new trust model. It simply means that whoever legitimately controls the IPv6 block also legitimately controls the corresponding ip6.arpa delegation where the SRV record is found. This is exactly how reverse‑DNS is supposed to work.
DNS integrity matters
If an attacker can spoof or poison the SRV record, they can redirect IPv4‑only clients to a malicious gateway. This is not a new class of attack — it is the same DNS‑spoofing problem that already affects PTR records, MX records, and everything else in DNS.
The mitigation is the same. DNSSEC is strongly recommended. Operators should sign their reverse‑DNS zones, and clients should validate signatures when possible.
SixGate does not require DNSSEC, but it benefits from it in exactly the same way as any other DNS‑based discovery mechanism. Even without DNSSEC, TLS establishes security at a higher layer, just as it does in the face of any other DNS‑based attack.
Reverse‑DNS delegation must be correct
If the ip6.arpa delegation for an IPv6 block is mis‑assigned or mis‑configured, the wrong party could publish the SRV record. This is not a SixGate‑specific vulnerability — it is simply the consequence of incorrect DNS delegation.
The fix is straightforward. IPv6 allocations should always include the correct ip6.arpa delegation. RIRs and LIRs already have established processes for this, and operators should ensure their reverse‑DNS matches their actual address ownership.
SixGate does not introduce a new dependency here; it relies on the same delegation correctness that every IPv6 deployment already needs.
Multiple gateways are fine — DNS just needs to point to them
The SRV record may legitimately resolve to an anycast IPv4 address, a hostname with multiple A records, or a load‑balanced cluster.
DNS already supports all of these patterns. SixGate does not impose any new constraints on how operators structure their gateway fleet. The only requirement is that the SRV record accurately reflects the operator’s intended gateway endpoints.
(Any operational requirements about how those gateways behave internally belong to the later “gateway mechanism” discussion, not to the SRV record itself.)
No new trust anchors
SixGate does not introduce any new certificate authorities, new trust hierarchies, new cryptographic material, or new DNS record types.
It uses SRV, which is already widely deployed, and the existing ip6.arpa delegation structure. The trust model is exactly the same as for MX records, SIP SRV records, Kerberos SRV records, and other DNS‑based service discovery mechanisms.
Summary
At this stage of the proposal — focusing only on the SRV record — the security considerations are simple and familiar:
The SRV record must be published in the correct ip6.arpa zone.
DNSSEC is recommended to prevent spoofing.
Reverse‑DNS delegation must match actual IPv6 address ownership.
Multiple gateways are fine; DNS already supports that pattern.
No new trust anchors or cryptographic systems are introduced.
Everything else — packet formats, gateway behaviour, statefulness, return paths — can be discussed later, once the community agrees that the SRV‑based discovery mechanism is sound.
Please raise any additional security concerns you may have and I will consider them and update this page accordingly.
Coming Soon: What about IPv6‑over‑UDP‑over‑IPv4?
🤖 Thanks to Microsoft Copilot for discussing these concerns with me — and for promising not to use Sixgate to hunt for Sarah Connor’s homepage.
The internet is inching toward IPv6, but not nearly fast enough. Server operators are increasingly comfortable running services on IPv6‑only infrastructure, yet a stubborn reality remains: many users are still stuck behind IPv4‑only networks. Legacy ISPs, old routers, and long‑lived hardware keep a surprising number of clients stranded on the old protocol.
That creates a familiar kind of stalemate. Just as the lack of SNI support in Windows XP once discouraged sites from adopting HTTPS, today’s pockets of IPv4‑only clients discourage operators from going IPv6‑only. No one wants to cut off part of their audience, so IPv4 lingers on long after its sell‑by date.
IPv6 was meant to break us free from IPv4’s scarcity, but the slowest movers in the ecosystem are holding the rest of the internet back.
“Ulysses, Ulysses, soaring through all the galaxies. In search of Earth, flying into the night.”
🌉 What Is Sixgate?
Sixgate is a lightweight mechanism that allows IPv4-only clients to reach IPv6-only servers without requiring IPv4 infrastructure. It works by letting clients automatically discover and use a gateway operated by the server network to tunnel IPv6 packets over IPv4.
Here’s how Sixgate bridges the gap:
The client attempts to connect to a website and receives only an IPv6 address from DNS. The client knows that it cannot connect, either due to a missing IPv6 configuration or a previously failed attempt.
The client performs a second DNS query, asking for a special SRV record associated with the IPv6 address at “_sixgate._udp.<addr>.ip6.arpa”, in the same zone used for reverse DNS. (See the lookup sketch after these steps.)
The SRV record points to an IPv4‑reachable gateway, operated by the server’s network, which will forward traffic to and from the IPv6‑only service.
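Here’s a minimal sketch of that lookup in Python, assuming the third‑party dnspython library. The helper name is mine, not part of any standard:

import ipaddress
import dns.resolver  # third-party: pip install dnspython

def discover_sixgate_gateway(ipv6_literal: str):
    # Build the "_sixgate._udp.<addr>.ip6.arpa" name from the nibble form.
    addr = ipaddress.IPv6Address(ipv6_literal)
    name = "_sixgate._udp." + addr.reverse_pointer
    try:
        answer = dns.resolver.resolve(name, "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no gateway published for this address
    # Honour SRV priority (lower is preferred); return the gateway host and port.
    best = min(answer, key=lambda rr: rr.priority)
    return str(best.target).rstrip("."), best.port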
What does the client do with that gateway IP? That’s intentionally left open. There are several mature tunnelling technologies that could sit behind a SixGate gateway, and choosing one is a separate design decision.
An earlier draft of this proposal bundled the discovery mechanism together with a specific tunnel protocol, but that turned out to be premature. SixGate is the discovery step. If we can agree that this step is useful, we can standardise the transport that follows it later.
🛠 Practical Realities
Even an IPv6-only cluster will need a single IPv4 address to operate the gateway. That’s a small concession. Far less costly than maintaining IPv4 across all services. The gateway becomes a focused point of compatibility, not a sprawling legacy burden. The gateway itself need not be part of the server farm it supports, but should be close to the same network path that normal IPv6 traffic takes.
Additionally, DNS itself must remain reachable over IPv4. Clients need to resolve both the original IPv6 address and the SRV record for the gateway. Fortunately, DNS infrastructure is already deeply entrenched in IPv4, and this proposal doesn’t require any changes to that foundation.
The SRV lookup is per IPv6 address, but I imagine that in practice there will be a single SixGate service covering a large range of IPv6 addresses, perhaps for a whole data center. In this case, the DNS service would respond with the same SRV record for all the IPv6 addresses in that block.
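One hypothetical way to publish such a blanket record is a DNS wildcard, which matches any number of labels beneath it. A fragment of the reverse zone for 2001:db8::/32 might look like this, with the gateway name and port invented for illustration:

; Matches _sixgate._udp.<nibbles> for every address under 2001:db8::/32.
*.8.b.d.0.1.0.0.2.ip6.arpa. 3600 IN SRV 10 5 4433 gateway4.example.net.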
Delegation of the reverse-DNS ip6.arpa space can be messy. I considered alternatives, but it was clear that this is the only place the all-important SRV record could go. Any new domain or alternative distributed lookup system would bring along those same problems.
🚀 Why Sixgate Matters
The beauty of Sixgate is that it shifts the burden away from ISPs and toward software. Updating an operating system or browser to support this fall-back logic is vastly easier than convincing hundreds of ISPs to overhaul their networks. Software updates can be rolled out in weeks. ISP transitions take years — sometimes decades.
By giving server operators a graceful way to drop IPv4, we accelerate the transition without leaving legacy clients behind. It’s a bridge, not a wall.
⛄ The Server’s Problem
You might be thinking that there are already tunnel services that allow IPv4-only users to access the IPv6 world. Those are great but they’re fixing the wrong problem.
Technologies like CGNAT have made IPv4 “good enough” for ISPs. Moving to IPv6 would require a whole bunch of new equipment, and no one other than a few nerds is actually asking for it. This has been the status quo for decades.
As a server operator, I’m not going to deploy new services on IPv6‑only infrastructure until I can be sure my customers can access them, which means I’m going to have to keep IPv4 around, with all the costs that implies.
From the user’s perspective, their internet connection “just works”. They don’t know what IPv4 or IPv6 is and they shouldn’t have to. If they try to connect to my service and it fails, they won’t start thinking they should sign up for a tunnel, they’ll think, quite reasonably, that my website is broken.
Tunnels put the burden on the least‑equipped party: the end‑user. They require sign‑ups, configuration and payment. They assume technical knowledge that most customers simply don’t have. They create friction at exactly the wrong place.
Telling a potential customer to “go fix your internet” is not a viable business model.
🥕 The Tipping Point
Over time, as more clients can reach IPv6‑only services—either natively or through Sixgate—the balance shifts. Once a meaningful share of users can connect without relying on IPv4, the economic pressure on server operators changes. New deployments can finally consider dropping IPv4 entirely, because the remaining IPv4‑only clients aren’t left stranded; they’re automatically bridged.
That’s the real goal. As long as servers must maintain IPv4 for compatibility, the whole ecosystem stays anchored to the past. But if software can route around the ISPs that refuse to modernise, IPv6 can start to stand on its own merits. Sixgate doesn’t replace IPv4 overnight; it simply removes the last excuse for keeping it alive.
🔭 What Comes Next?
This is a sketch, but it’s one that could be prototyped quickly. A browser extension, a client library, or even a reference implementation could demonstrate its viability. Once the action of the gateway itself has been agreed, it could be standardized, adopted, and quietly become part of the internet’s connective tissue.
If you’re a server operator dreaming of an IPv6-only future, this might be your missing piece. And if you’re a protocol designer or systems thinker, I’d love to hear your thoughts.
Let’s build the bridge and let IPv6 finally stand on its own.
“Your smile is like a breath of spring. Your voice is soft like summer rain. I cannot compete with you.”
Credits: 📸 “Snow Scot” by Peeja. (With permission.) 📸 “El Galeón” by Dennis Jarvis. (Creative Commons.) ☕ “Lenny S” for reminding me I need to write something about tunnel services. 🤖 Microsoft Copilot for the rubber-ducking. 📂 Previous version at archive.org
Python lets you use any value in if and while conditions, not just Booleans. This permissiveness relies on a concept known as “truthiness”. Values like None, 0, "", and empty collections are treated as false and everything else is true.
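If you’ve never seen the rules laid out, a two‑line experiment makes them concrete. Note that the string "0" and the list [0] are both truthy:

for value in (None, 0, "", [], 42, "0", [0]):
    print(repr(value), "->", bool(value))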
It’s convenient. It’s idiomatic. It’s also, I argue, a code smell.
The Bug That Changed My Mind
Here’s a real example from one of my side projects:
from typing import Optional

def final_thing_count() -> Optional[int]:
    """Returns an integer count of things, but only
    if the number of things is complete. If there may
    yet be things to come, returns None to indicate
    the count isn't final."""
    ...  # Code redacted.

current_thing_count: Optional[int] = final_thing_count()
if current_thing_count:
    ...  # Process the final count of things.
Looks fine, right? But it’s broken.
If final_thing_count() returns 0, a perfectly valid and reasonable number of things, the condition fails. The code treats it the same as None, which was meant to signal “not ready yet”. The fix?
if current_thing_count is not None:
    ...  # Process the final count of things.
Now the intent is clear. We care whether the value is None or not, not whether it’s “truthy.” Zero is no longer misclassified.
Even if the function never returns zero, I’d still argue the revised version is better. It’s explicit. It’s readable. It doesn’t require the reader to know Python’s rules for truthiness.
I’ve looked at a wide variety of Python code, and every time I see truthiness used, I ask whether it would be clearer if it were explicit. The answer is almost always yes. The one exception I’ve observed is this use of the or operator…
# Read string, replacing None with an empty string.
my_string = read_optional_string() or ""
This is elegant and expressive. If Python ever adopted stricter Boolean rules, I’d hope this idiom would be preserved or replaced with a dedicated None-coalescing operator.
“I used to know who I was. Now I look in the mirror and I’m not so sure. Lord, I don’t want to listen to the lies anymore!”
C# Chose Clarity Over Convenience
In C, you could write terse, expressive loops like this:
while (*p++) { /* do something */ }
This worked because C, similar to Python, treated any non-zero value as true, and NULL, 0, '\0', and 0.0 as false. It was flexible but also fragile. A mistyped condition or misunderstood return value could silently do the wrong thing.
C# broke from that tradition. It introduced a proper Boolean type and made it a requirement for conditional expressions. If you try to use an int, string, or object directly in an if statement, the compiler rejects it.
Yes, we lost the ability to write one-line loops that stop on a null. But we gained something more valuable, a guardrail against a whole class of subtle bugs. The language nudges you toward clarity. You must say what you mean.
I believe that was the right trade.
A Modest Proposal (and a Practical One)
I’d love it if Python could raise an error when a non-Boolean value is used in a Boolean context. It would need to be an opt-in feature via a “__future__” import or similar, as it would break too much otherwise.
This would allow developers to opt into a stricter, more explicit style. It would catch bugs like the one I described and make intent more visible to readers and reviewers.
As a secondary proposal, linter applications could help bridge the gap. They could flag uses of implicit truthiness, especially in conditionals involving optional types, and suggest more explicit alternatives. This would nudge developers toward clarity without requiring language changes.
Until then, I’ll continue treating truthiness as a code smell. Not always fatal, but worth sniffing.
“I saw your picture in the magazine I read. Imagined footlights on the wooden fence ahead. Sometimes I felt you were controlling me instead.”
Credits 📸 “100_2223” by paolo. (Creative Commons) 📸 “Elephant” by Kent Wang. (Creative Commons)