“Too Clever By Half”

Wilf’s Programmers’ Workshop, PC Plus, November, 1991.

This was my first ever publication. Wilf Hey, writer of the Programmers’ Workshop column in PC Plus, had run a contest to write what we’d now call a quine, but which he described as a “self‑creating program”, one that tells you what it does by producing its own source.

By coincidence, I had just been experimenting with PKZIP’s ability to create self‑extracting EXE files. ZIP archives fused with the decompressor into a single executable. It occurred to me that if I took a few liberties with the definition of “programming language”, I could use that mechanism to produce a rather cheeky entry.

I wrote a batch file that performed the trick, copied it onto a 3.5‑inch floppy, and posted it off.

When the issue finally came out, I was thrilled to see my name in print! I proudly showed it to all my friends in the sixth‑form lounge. But I also winced a little. I hadn’t given any thought to “source code”, and Wilf quite rightly pointed out that I wasn’t the author of PKZIP. At the time, I’d simply bundled PKZIP.EXE itself as the “source” because I needed something to be the source code and that seemed good enough. It felt like a quite inconsequential decision at the time.

After reading his comment, I started working on a revised version that included a small text file to act as the actual source. But I stopped. He wasn’t going to publish the same joke again!

Decades later, having lost my copy, I resolved to find it again. All I knew was that it must have been during my sixth‑form years, because I remembered showing the magazine around the sixth‑form lounge at school. That gave me a rough window but not the exact issue.

I got close when I discovered that archive.org had scans of Programmers’ Workshop from around that period. I found the edition that announced the quine contest, which gave me a lower bound, but none of the twenty or so scanned issues contained my entry. Still, at least I now knew which ones it wasn’t in.

Armed with the eight remaining issue dates, I posted on various retro‑computing forums to see if anyone might have a copy. A few people were selling old issues on eBay, but I wasn’t keen on paying for a one‑in‑eight chance.

What finally bore fruit was posting on Hacker News. A very helpful man, Paul Robinson, offered to go to the British Library, which holds archive copies of every issue of PC Plus, including the eight I was hunting for.

Paul’s trip to the British Library finally closed the loop. After decades of half‑memories and dead ends, there it was. My first published line of code‑adjacent mischief. Reading it again, I could see both the charm and the flaw. I’d tried to be clever, and succeeded — just not in the way the contest intended. In hindsight, the headline fits better than ever. I wasn’t just making a self‑creating program. I was being, in every sense, too clever by half.

Many thanks to archive.org, The British Library, Hacker News and Paul Robinson.

Oh yes, and congratulations to PKWare for inadvertently writing my entry to the contest.

Let’s All Burn Our Waste Paper!

In these environmentally conscious times, we are constantly exhorted to recycle. This is a tedious ritual involving coloured bins, soggy cardboard and the faint smell of moral superiority. Yet I submit that recycling is, in fact, the least responsible thing we could do with our waste paper.

Instead, I propose a bold, innovative, and unquestionably sustainable alternative.

Burn it. All of it. Immediately. Preferably with gusto.

“But burning paper releases CO₂!” you cry. Ah, but here is the true genius of the plan.

Once we stop recycling paper, industries that rely on recycled pulp will face a sudden, catastrophic shortage. Their only recourse will be to turn to lumber mills for virgin fibre. Lumber mills, in turn, will be forced to plant vast new forests to meet this demand. All that CO₂ we released earlier will be taken up by all those newly planted trees and taken out of the atmosphere!

Compare that to recycling paper. Send slow-moving, fossil-fuel-burning trucks to everyone’s house to pick up waste paper. Pulp it, bleach it and flatten it, and you’ve got recycled paper that’s not quite as good as wood paper. Why not cut out the middleman and let nature grow the wood for us?

By burning paper, we create a powerful economic incentive for growing more trees, bigger forests and a greener planet. In short, every time you set fire to a stack of old utility bills, you are personally contributing to reforestation.

Furthermore, the lumber industry will enjoy a renaissance. Rural economies will flourish. Entire regions will be revitalised by the sudden need to plant millions of trees to replace the ones being fed into the paper mills at industrial speed.

🔥 Imagine the community cohesion created by weekly neighbourhood paper bonfires.
🔥 Imagine the joy of watching junk mail fulfil its highest purpose.
🔥 Imagine the catharsis of consigning tax forms to cleansing flame.

Recycling never gave anyone that. ♻️

“Daisy’s bare naked, I was distraught. He loves me not, he loves me not. Penny’s unlucky. I took him back and then stepped on a crack and the black cat laughed.”

But seriously…

Yes, I am joking, but only a little bit.

The idea came to me when I was reminded of a TV advert, from some years ago, by a toilet paper company. Their bold promise was that for every tree they used to make toilet paper, they would replant three. Sounds good, but I had a slightly more cynical rewording of that slogan: “We’ve taken steps to secure the future supply of the raw materials we need.”

I mean, those three trees they planted are going to end up being cut down and turned into more toilet paper, right?

But I can’t be really cynical about toilet paper manufacturing. Even if you apply maximum cynicism, the worst I could say is that their industry is neutral. They grow trees which take carbon out of the air, but then they turn those trees into toilet paper which gets used and decomposes, releasing that carbon. Which then becomes more trees. The circle of life.

And honestly, I could hope to be neutral too. I buy potatoes, made from carbon that was taken out of the air, but then I cook and eat those potatoes which means that carbon ends up in the air again. The paper industry has more in common with a potato farmer than a plastic factory. Agriculture, even at industrial scale, is the application of the natural carbon cycle.

Recycling plastic and metals makes sense, but recycling paper? Sure, I’m going to keep dropping my cardboard packaging into the recycling bin, but if I were to suggest we should recycle potatoes after we eat them, you’d call me insane. (And I must apologise for the disgusting image I put in your head just now.)

The real issue isn’t how many trees we plant, but what we do with them. Cutting down a tree and planting three saplings doesn’t remove carbon from the atmosphere; it just keeps the short carbon cycle spinning. If we actually want to lower atmospheric CO₂, we need trees that grow, mature, and stay standing. Planting three trees only matters if at least some of them are allowed to become forests rather than future toilet rolls.

And if that means lighting the odd ceremonial blaze in the name of carbon sequestration, well, that’s just responsible citizenship.

Credits
📸 “Bonfire” by Jonas Bengtsson. (Creative Commons.)
📸 “Cat on Laptop” by Doug Woods. (Creative Commons)

Thanks to my sis and sis-in-law Heather and Hilde for their insightful review. Thanks also to The YIMBY Pod for the inspiration to write this.

Gravity on a Flat Earth? Notes from a Helpful Glober.

Lately, I’ve been posting on flat‑earth social media, mostly asking why ships appear to sink behind the horizon when they sail away. Why? Because of XKCD number 386.

One thing I’ve noticed is the flat‑earthers’ need to deny gravity. I’m not sure why, because gravity is one of the easiest things to demonstrate. You drop something, and it falls towards the ground instead of hanging in mid‑air. That’s it.

Flat earthers seem to tie themselves in knots trying to replace gravity. They’ll invoke buoyancy without realising that it depends on gravity to work. They’ll invoke electrostatic forces, which don’t depend on gravity but break down when you ask where the like‑charges‑repel effect has gone.

One even told me that Beyoncé was responsible for objects staying on the earth, but I’m quite sure that was an autocorrect error.

All I Want for Christmas is Glue

So, as a friendly glober with a GCSE in physics, I thought I’d help. Let’s see if we can integrate some form of gravity into a flat earth.

“Many times I’ve been alone and many times I’ve cried. Anyway, you’ll never know the many ways I’ve tried.”

Introducing Flat‑Earth Gravity™.

Put away your books on Newtonian motion or relativistic physics. This isn’t that.

This is the force that pulls objects towards the ground. Exactly what you see when you pick up a ball and drop it. This force behaves exactly as required for a flat earth to function and not at all like any force known to Glober physics.

What direction? It pulls towards the ground. Perpendicular to the surface of the flat Earth at a constant 9.8 m/s² everywhere. The same acceleration you can measure for yourself with tennis balls and a stopwatch.
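
If you fancy trying that, the arithmetic is one line of Python. A toy calculation, assuming a made‑up drop height and stopwatch reading (both numbers are mine, not measurements):

height_m = 2.0        # drop height (made up for illustration)
fall_time_s = 0.64    # stopwatch reading (also made up)
g = 2 * height_m / fall_time_s ** 2   # rearranged from h = (g * t**2) / 2
print(f"g ≈ {g:.1f} m/s²")            # prints: g ≈ 9.8 m/s²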

Drop a ball in London: down. Drop a ball in Brazil: also down. Drop a ball on the ice wall at the edge of the earth: Get down tonight!

These “down” directions are parallel, because this force is uniform across the entire disc of the earth. All perpendicular to the flat earth surface. This conveniently avoids the need for the Earth to be infinitely large, which is what Glober physics would require.

And crucially, Flat‑Earth Gravity™ does not pull toward the centre of mass. This force is always downwards, never sideways. This is why the flat earth doesn’t collapse into a sphere and why walking outwards towards the edge isn’t like climbing a hill. That’s what you’d get with Glober Gravity, so don’t get them mixed up.

There you go. Gravity fixed. Now flat earthers can stop arguing about density and start arguing about why their new custom‑built force doesn’t also pull the Sun and Moon into the ground. Maybe that’s because of Beyoncé. Or buoyancy.

And to those who point out that “All I Want for Christmas” is by Mariah Carey: shuddup.

Credits
📸 “Daventry Ducks” by me.
Thanks to “Ozteric Oz” for the inspiration.

The Road Not Taken: A World Where IPv4 Evolved

IP address exhaustion is still with us. Back in the early 1990s, when the scale of the problem first became obvious, the projections were grim. Some estimates had us running out of IPv4 addresses as early as 2005 and even the optimistic ones didn’t push much beyond the following decade.

As the years passed, we got clever. Or perhaps, more desperate.

We managed to put off that imminent exhaustion for decades, breaking the Internet in the process in small, subtle ways. Want multiple home machines behind a single IP? No problem — we invented NAT. ISPs want in on that “multiple devices, one address” trick? Sure, have some Carrier‑Grade NAT. Want to run a server on your home machine? Er… no.

And through all of this, IPv6 has been sitting there patiently. It fixes the address shortage. It fixes autoconfiguration. It fixes fragmentation. It fixes multicast. It fixes almost everything people complain about in IPv4.

But hardly anyone wants it.

The problem isn’t that IPv6 is bad, but that deploying it means spending money before your neighbours do, and no one wants to be the first penguin off the ice shelf. So we’ve ended up in a long, awkward stalemate. IPv6 is waiting for adoption and IPv4 is stretched far beyond what anyone in 1981 imagined.

But what if it hadn’t gone that way? What if the “IP Next Generation” team that designed IPv6 had chosen a different path? One that extended IPv4 instead of replacing it.

Let’s take a visit to that parallel universe.

“Images of broken light which dance before me like a million eyes, they call me on and on across the universe. Thoughts meander like a restless wind inside a letter box, they tumble blindly as they make their way…”

1993 — The Birth of IPv4x

It’s 1993, and the IP‑Next‑Generation working group has gathered to decide the future of the Internet. The mood is earnest, a little anxious, and very aware that the world is depending on them.

One engineer proposes a bold idea: a brand‑new version of IP with 128‑bit addresses. It would need a new version number but it would finally give the Internet the address space it deserved. Clean. Modern. A fresh start. IPv6!

Another engineer pushes back. A brand‑new protocol sounds elegant but IPv4 is already everywhere. Routers, stacks, firmware, embedded systems, dial‑up modems, university labs, corporate backbones. Replacing it outright would take decades and no one wants to be the first to deploy something incompatible with the rest of the world.

So the group settles down and agrees on what that would look like. People want the same IP they’re used to, but with more space. But if this idea is going to have legs, there is one requirement that is going to be unavoidable.

The new protocol must work across existing IPv4 networks from day one.

  • The Version field must remain 4.
  • The destination must remain a globally routable 32‑bit IPv4 address.
  • The packet must look, smell, and route like IPv4 to any router that doesn’t understand the new system.

And so IPv4x is born.

In a nutshell, an IPv4x packet is a normal IPv4 packet, just with 128‑bit addresses. The first 32 bits of both the source and target address sit in their usual place in the header, while the extra 96 bits of each address (the “subspace”) are tucked into the first 24 bytes of the IPv4 body. A flag in the header marks the packet as IPv4x, so routers that understand the extension can read the full address, while routers that don’t simply ignore the extra data and forward it as usual.
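
For the curious, here’s a toy Python sketch of that split. Everything about it is invented to match the story: real IPv4 has no such extension, and the tale never pins down which header bit serves as the IPv4x flag.

import ipaddress

def split_ipv4x(addr128: int) -> tuple[int, bytes]:
    # The top 32 bits go in the normal IPv4 header slot; the low 96 bits
    # (the "subspace") ride in the body as 12 opaque bytes.
    return addr128 >> 96, (addr128 & ((1 << 96) - 1)).to_bytes(12, "big")

def build_ipv4x_body(src128: int, dst128: int, payload: bytes) -> bytes:
    # The first 24 bytes of the body carry both subspaces; a legacy
    # router forwards them as payload it doesn't understand.
    _, src_sub = split_ipv4x(src128)
    _, dst_sub = split_ipv4x(dst128)
    return src_sub + dst_sub + payload

# An address rooted at MIT's 18.0.0.1, with subspace 0x42 beneath it.
addr = (int(ipaddress.IPv4Address("18.0.0.1")) << 96) | 0x42
print(hex(split_ipv4x(addr)[0]), split_ipv4x(addr)[1].hex())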

Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree. It has to work this way because any router that doesn’t understand IPv4x will still route purely on the old 32‑bit address. There’s no point assigning part of your subspace to someone else — their packets will still land on your router whether you like it or not.

An IPv4 router sees a normal IPv4 packet and routes according to the 32‑bit target address in the header, while an IPv4x router sees the full 128‑bit target address and routes according to that instead.

This does mean that an ordinary home user with a single IPv4 address will suddenly find themselves in charge of 96 bits of address space they never asked for and will never use, but that’s fine. There are still large regions of the IPv4 space going unused.

“If you’re still there when it’s all over, I’m scared I’ll have to say that a part of you has gone.”

1996 — The First IPv4x RFC

By 1996, the IPv4x design had stabilized enough for the working group to publish its first formal RFC. It wasn’t a revolution so much as a careful extension of what already worked.

DNS received a modest update. A normal query still returned the familiar A record, but clients could now set a flag indicating “I understand IPv4x”. If the server had an extended address available, it could return a 128‑bit IPv4x record alongside the traditional one. Old resolvers ignored it. New resolvers used it. Nothing broke.

DHCP was updated in the same spirit. Servers could hand out either 32‑bit or 128‑bit addresses depending on client capability.

Dial‑up stacks were still distributed with modem software, not the OS, which turned out to be a blessing: the major dial‑up packages all added IPv4x support within a year.

Adoption was slow but steady. The key advantage was that the network didn’t have to change. If your machine spoke IPv4x but the next router didn’t, the packet still flowed. Old routers forwarded based on the top 32 bits. New routers used the full 128.

MIT and the First Large‑Scale Deployment

The first major adopter was MIT. They had been allocated the entire 18.0.0.0/8 block in the early ARPANET era and they were famously reluctant to give it up. Stories circulated about individual buildings — some zoned for fewer than a dozen residents — sitting on entire /24s simply because no one had ever needed to conserve addresses.

IPv4x gave them a way forward and to show their credentials as responsible stewards. Every IPv4 address in their /8 became the root of a 96‑bit subspace. MIT deployed IPv4x experimentally across their campus backbone and the results were encouraging. Nothing broke. Nothing needed to be replaced overnight. It was the perfect demonstration of the “no flag day” philosophy the IPng group had hoped for.

Their success reassured everyone else that IPv4x was not only viable, but practical. Other large networks began making small updates during their weekend maintenance windows.

Buoyed by this success, IANA announced a new policy: all currently unused /8 blocks were reserved for IPv4x only.

“Hypnotizing, mesmerizing me. She was everything I dreamed she’d be.”

2006 — Ten Years of IPv4x

By 2006, IPv4x had firmly established itself. Dial‑up was fading, broadband was everywhere, and homes with multiple computers were now normal. IPv4 hadn’t vanished — some ISPs and server farms stuck to an “if it ain’t broke” philosophy.

“IPv4x when we can. NAT when we must.”

Windows XP embodied this mindset. It always asked DNS for an IPv4x address first, falling back to IPv4 when necessary and relying on NAT only as a last resort.

Residential ISPs began deploying IPv4x in earnest. Customers who wanted a dedicated IPv4 address could still have one — for a fee. Everyone else received an IPv4x /64, giving them 64 bits for their own devices. ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony.

The old IPv4 pool finally ran dry in 2006, just in time for the anniversary. There were still plenty of unused /8 blocks, but these had all been earmarked for IPv4x, and IANA refused to budge. If you wanted more addresses, they would have to be the IPv4x kind.

Peer‑to‑Peer and the IPv4x Backlash

IPv4x had its fans, but it also had one determined opponent: the music industry.

Under IPv4 and NAT, peer‑to‑peer networking had always been awkward, especially if you weren’t a nerd who understood IP addresses. If you wanted to participate in peer-to-peer, you needed to log into your router’s admin panel and mess with arcane configurations which every router manufacturer had a different name for. Many gave up and simply decided music piracy wasn’t for them.

IPv4x removed all that friction. Every device had a stable and globally reachable address. File‑sharing exploded as peer-to-peer software was simple to use. You didn’t need to poke about with your router, it all just worked.

One trade group identified IPv4x as the cause of the growth in music file sharing and ran with the slogan “The X is for exterminating your favorite bands.” That stung a little but it didn’t stick. Cheap international calls, multiplayer games, chat systems and early collaboration tools all flourished. IPv4x didn’t just make peer‑to‑peer easier; it made direct communication the default again.

“They’re Justified, and they’re Ancient and they drive an ice cream van. They’re Justified and they’re Ancient, with still no master plan. The last train left an hour ago, they were singing All Aboard, All bound for Mu Mu Land.”

2016 — The Tipping Point

By 2016, IPv4x was the norm. The only major holdouts were the tier‑1 backbones, but that had always been part of the plan. They didn’t need IPv4x at all as the top 32 bits were enough for global routing between continents. But mostly, their eye‑wateringly expensive hardware didn’t really need replacing.

A few websites still clung to IPv4, forcing ISPs to maintain CGNAT systems, until one popular residential ISP broke ranks.

“IPv4x, or don’t.”

For customers of this ISP, those last few IPv4‑only sites simply failed. Support staff were given a list of known‑broken websites and trained to offer an alternative plan if a customer insisted the fault lay with the ISP. Most customers just shrugged and moved on. As far as they were concerned, those websites were simply malfunctioning.

Eventually, a technically minded customer pieced together what was happening and blew the whistle. A few dogs barked, but almost no one cared. The ISP spun the story that these websites were using “out‑of‑date” technology, but not to worry, they had an option for customers who really needed CGNAT support, provided they were willing to pay for it.

2020 — The Pandemic and IPv4x’s Quiet Triumph

When the world locked down in 2020, IPv4x quietly proved its worth.

The most popular video‑conferencing platforms had long since adopted a hybrid model. The operators centralized authentication and security, but handed the actual media streams over to peer‑to‑peer connections. Under IPv4/NAT, that had always been fragile but under IPv4x it was effortless.

Remote desktop access surged as well. People had left their office machines running under their desks and IPv4x made connecting to them trivial. It simply worked.

“You run to catch up with the sun, but it’s sinking. Racing around to come up behind you again. The sun is the same in a relative way, but you’re older. Shorter of breath, and one day closer to death.”

2026 — Thirty Years of IPv4x

By 2026, as the world celebrated the thirtieth anniversary of IPv4x, only a few pain points remained. The boundary between the first 32 bits and the remaining 96 was still visible.

If you were a serious network operator, you wanted one of those 32‑bit IP addresses to yourself, one you could attach your own router to. If you weren’t important or wealthy enough for one of those, you were at the mercy of whoever owned the router attached to your top 32 bits. But it wasn’t a serious problem. The industry understood the historical baggage and lived with it.

Public DNS resolvers were still stuck on IPv4. They didn’t want to be — DNS clients had spoken IPv4x for years — but the long tail of ancient DHCP servers kept handing out 32‑bit addresses. As long as those relics survived in wiring cupboards and forgotten branch offices, DNS had to pretend it was still 1998.

MIT still held onto their legendary 18.0.0.0/8 allocation, but their entire network now lived comfortably inside 18.18.18.18/32. They remained ready to release the rest of the block if the world ever needed it.

It was around this time that a group of engineers rediscovered an old, abandoned proposal from the 1990s: a clean‑slate protocol called IPv6, which would have discarded all legacy constraints and started fresh with a new address architecture. Reading it in 2026 felt like peering into a parallel universe.

Some speculated that, in that world, the Internet might have fully migrated by now, leaving IPv4 behind entirely. Others argued that IPv4 would have clung on stubbornly, with address blocks being traded for eye‑watering sums and NAT becoming even more entrenched.

IPv4x had avoided both extremes. It hadn’t replaced IPv4 but absorbed it. It hadn’t required a revolution but enabled an evolution. In doing so, it had given the Internet a smooth transition that no one noticed until it was already complete.

“Bells will ring. Sun will shine. I’ll be his and he’ll be mine. We’ll love until the end of time and we’ll never be lonely anymore.”

Back in the Real World

Of course, none of this really happened.

IPv4x was never standardized, no university ever routed a 96‑bit subspace beneath its legacy /8, and the world never enjoyed a seamless, invisible transition to a bigger Internet. Instead, we built IPv6 as a clean break, asked the world to deploy it alongside IPv4, and then spent the next twenty‑five years waiting for everyone else to go first.

And while imagining the IPv4x world is fun, it’s also revealing. That universe would have carried forward a surprising amount of legacy. IPv6 wasn’t only a bigger chunk of address space but a conscious modernization of the Internet. In our IPv4x world, NAT would fade, but ARP and DHCP would linger. The architecture would still be a patchwork of 1980s assumptions stretched across a 128‑bit address space.

IPv6, for all its deployment pain, actually fixes these things. It gives us cleaner auto-configuration, simpler routing, better multicast, and a control plane designed for the modern Internet rather than inherited from the ARPANET. The road is longer, but the destination is better.

Still, imagining the IPv4x world is useful. It reminds us that the Internet didn’t have to fracture into two incompatible address families. It could have grown incrementally, compatibly, without a flag day. It could have preserved end‑to‑end connectivity as the default rather than the exception.

And yet, the real world isn’t a failure but a different story. IPv6 is spreading while IPv4 is receding. The transition is messy, but it is happening. And perhaps the most important lesson from our imaginary IPv4x universe is this.

The Internet succeeds not because we always choose the perfect design, but because we keep moving forward anyway.

Epilogue

It was while writing this speculative history that the idea for SixGate finally clicked for me. In this alternate timeline, there’s a moment when an old IPv4‑only router hands off to a router that understands IPv4x. The handover is seamless because there’s always a path from old to new. The extended IPv4x subspace lives under the IPv4 address and the transition is invisible.

In our real world — the one split between IPv4 and IPv6 — we don’t have that luxury. But it led me to realize that if only an IPv4 user had a reliable way to reach into the IPv6 world, the transition could be smoother and more organic.

That’s where SixGate steps in. A user stuck on IPv4 can ask the IPv6 world for a way in. Via a special SRV record in the ip6.arpa space, the user receives a kind of magic portal, provided by someone who wants them to be able to connect. Not quite as seamless as the router handover in our parallel universe, but impressively close given the constraints we live with.

So I hope SixGate can grow into something real — something that helps us get there a little faster. Maybe it will give IPv6 the invisible on‑ramp that IPv4x enjoyed in that parallel world.

Either way, imagining the road not taken has made the road we’re on feel a little clearer, and a little more hopeful.

And yes, this whole piece was a sneaky way to get you to read my SixGate proposal. Go on. You know you want to.

“The Watusi. The Twist. El Dorado.”

Credits
📸 “Chiricahua Mountains, Portal, AZ” by Karen Fasimpaur. (Creative Commons)
📸 “Splitting Up” by Damian Gadal. (Creative Commons)
📸 “Reminder Note” by Donal Mountain. (Creative Commons)
📸 “Cat on Laptop” by Doug Woods. (Creative Commons)
📸 “Up close Muscari” by Uroš Novina. (Creative Commons)
🎨 “Cubic Rutabaga” generated by Microsoft Copilot.
📸 “You say you want a revolution” by me.
🌳 Andrew Williams for inspiring me to pick the idea up.
🤖 Microsoft Copilot for helping me iron out technical details and reviewing my writing.

SixGate Part Two – A Security Analysis

Part One – What is SixGate?

In brief, if an IPv4‑only client wishes to connect to an IPv6‑only service, SixGate bridges the gap by looking up a special SRV record in the service’s ip6.arpa zone, normally used for reverse DNS. This record points to a gateway service provided by the operator of that IPv6 network. Using that gateway, any machine that only has IPv4 can connect to the world of IPv6 — all working silently, without the user needing to know what’s going on.

This page is a security analysis of the risks of publishing and reading that SRV record. As of writing, SixGate is incomplete because I’ve chosen not to specify how the gateway will work, leaving that to a later discussion. As such, this analysis focuses solely on the mechanism of the SRV record. Once a gateway mechanism is proposed, further security analysis will be needed. For now, the concerns are entirely about DNS ownership, DNS integrity, and the correctness of the reverse‑DNS hierarchy.

Here are the relevant concerns.

Reverse‑DNS becomes more important — but not dangerously so

The ip6.arpa tree has historically been used for PTR lookups and the occasional policy check. SixGate gives it a more operational role by using it to publish the gateway location for a given IPv6 address.

This does not introduce a new trust model. It simply means that whoever legitimately controls the IPv6 block also legitimately controls the corresponding ip6.arpa delegation where the SRV record is found. This is exactly how reverse‑DNS is supposed to work.

DNS integrity matters

If an attacker can spoof or poison the SRV record, they can redirect IPv4‑only clients to a malicious gateway. This is not a new class of attack — it is the same DNS‑spoofing problem that already affects PTR records, MX records, and everything else in DNS.

The mitigation is the same. DNSSEC is strongly recommended. Operators should sign their reverse‑DNS zones, and clients should validate signatures when possible.

SixGate does not require DNSSEC, but it benefits from it in exactly the same way as any other DNS‑based discovery mechanism. Even without DNSSEC, TLS establishes security at a higher layer, just as it does against any other DNS‑spoofing attack.
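
For illustration only, here is roughly what a validating lookup could look like using the dnspython library. This is a sketch, not part of the proposal: it leans on a trusted validating resolver and simply checks the AD (“authenticated data”) flag that the resolver reports.

import dns.flags
import dns.resolver

def lookup_sixgate(srv_name: str):
    # Ask the configured resolver for the SixGate SRV record, advertising
    # DNSSEC support so a validating resolver can set the AD flag.
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 1232)
    answer = resolver.resolve(srv_name, "SRV")
    validated = bool(answer.response.flags & dns.flags.AD)
    record = answer[0]   # a real client would honour priority and weight
    return str(record.target), record.port, validated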

Reverse‑DNS delegation must be correct

If the ip6.arpa delegation for an IPv6 block is mis‑assigned or mis‑configured, the wrong party could publish the SRV record. This is not a SixGate‑specific vulnerability — it is simply the consequence of incorrect DNS delegation.

The fix is straightforward. IPv6 allocations should always include the correct ip6.arpa delegation. RIRs and LIRs already have established processes for this, and operators should ensure their reverse‑DNS matches their actual address ownership.

SixGate does not introduce a new dependency here; it relies on the same delegation correctness that every IPv6 deployment already needs.

Multiple gateways are fine — DNS just needs to point to them

The SRV record may legitimately resolve to an anycast IPv4 address, a hostname with multiple A records, or a load‑balanced cluster.

DNS already supports all of these patterns. SixGate does not impose any new constraints on how operators structure their gateway fleet. The only requirement is that the SRV record accurately reflects the operator’s intended gateway endpoints.

(Any operational requirements about how those gateways behave internally belong to the later “gateway mechanism” discussion, not to the SRV record itself.)

No new trust anchors

SixGate does not introduce any new certificate authorities, new trust hierarchies, new cryptographic material, or new DNS record types.

It uses SRV, which is already widely deployed, and the existing ip6.arpa delegation structure. The trust model is exactly the same as for MX records, SIP SRV records, Kerberos SRV records, and other DNS‑based service discovery mechanisms.

Summary

At this stage of the proposal — focusing only on the SRV record — the security considerations are simple and familiar:

  • The SRV record must be published in the correct ip6.arpa zone.
  • DNSSEC is recommended to prevent spoofing.
  • Reverse‑DNS delegation must match actual IPv6 address ownership.
  • Multiple gateways are fine; DNS already supports that pattern.
  • No new trust anchors or cryptographic systems are introduced.

Everything else — packet formats, gateway behaviour, statefulness, return paths — can be discussed later, once the community agrees that the SRV‑based discovery mechanism is sound.

Please raise any additional security concerns you may have and I will consider them accordingly and update this page.

Coming Soon: What about IPv6‑over‑UDP‑over‑IPv4?

🤖 Thanks to Microsoft Copilot for discussing these concerns with me — and for promising not to use Sixgate to hunt for Sarah Connor’s homepage.

Sixgate – IPv6 Without Leaving Anyone Behind

The internet is inching toward IPv6, but not nearly fast enough. Server operators are increasingly comfortable running services on IPv6‑only infrastructure, yet a stubborn reality remains: many users are still stuck behind IPv4‑only networks. Legacy ISPs, old routers, and long‑lived hardware keep a surprising number of clients stranded on the old protocol.

That creates a familiar kind of stalemate. Just as the lack of SNI support in Windows XP once discouraged sites from adopting HTTPS, today’s pockets of IPv4‑only clients discourage operators from going IPv6‑only. No one wants to cut off part of their audience, so IPv4 lingers on long after its sell‑by date.

IPv6 was meant to break us free from IPv4’s scarcity, but the slowest movers in the ecosystem are holding the rest of the internet back.

“Ulysses, Ulysses, soaring through all the galaxies. In search of Earth, flying into the night.”

🌉 What Is Sixgate?

Sixgate is a lightweight mechanism that allows IPv4-only clients to reach IPv6-only servers without the server operator having to maintain IPv4 infrastructure across their services. It works by letting clients automatically discover and use a gateway, operated by the server’s network, to tunnel IPv6 packets over IPv4.

Here’s how Sixgate bridges the gap:

  1. The client attempts to connect to a website and receives only an IPv6 address from DNS. The client knows that it cannot connect, either due to a missing IPv6 configuration or a previously failed attempt.
  2. The client performs a second DNS query, asking for a special SRV record associated with the IPv6 address, at “_sixgate._udp.<addr>.ip6.arpa”, in the same zone used for reverse DNS. (A sketch of this name construction follows the list.)
  3. The SRV record points to an IPv4‑reachable gateway, operated by the server’s network, which will forward traffic to and from the IPv6‑only service.
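
To make step 2 concrete, here is a sketch of how a client might build that SRV name. The “_sixgate._udp” labels are this proposal’s invention; the nibble reversal is exactly the one ordinary reverse DNS already uses.

import ipaddress

def sixgate_srv_name(ipv6: str) -> str:
    # Reverse the address nibble by nibble, as ip6.arpa PTR records do,
    # then prepend the proposed service labels.
    nibbles = ipaddress.IPv6Address(ipv6).exploded.replace(":", "")
    return "_sixgate._udp." + ".".join(reversed(nibbles)) + ".ip6.arpa"

print(sixgate_srv_name("2001:db8::1"))
# _sixgate._udp.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa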

An earlier version of this article went ahead and specified how the gateway would operate (IPv6-over-UDP-over-IPv4, incidentally) but I decided to leave it with the SRV record. That SRV record is SixGate.

That’s the core idea. The transport mechanism comes later.

🛠 Practical Realities

Even an IPv6-only cluster will need a single IPv4 address to operate the gateway. That’s a small concession. Far less costly than maintaining IPv4 across all services. The gateway becomes a focused point of compatibility, not a sprawling legacy burden. The gateway itself need not be part of the server farm it supports, but should be close to the same network path that normal IPv6 traffic takes.

Additionally, DNS itself must remain reachable over IPv4. Clients need to resolve both the original IPv6 address and the SRV record for the gateway. Fortunately, DNS infrastructure is already deeply entrenched in IPv4, and this proposal doesn’t require any changes to that foundation.

The SRV lookup is per IPv6 address, but I imagine that in practice there will be a single SixGate service covering a large range of IPv6 addresses, perhaps the whole data center. In this case, the DNS service would respond with the same SRV record for every IPv6 address in that block.

Delegation of the reverse-DNS ip6.arpa space can be messy. I considered alternatives, but it was clear that this is the only place the all-important SRV record could go. Any new domain or alternative distributed lookup system would bring along those same problems.

🚀 Why Sixgate Matters

The beauty of Sixgate is that it shifts the burden away from ISPs and toward software. Updating an operating system or browser to support this fall-back logic is vastly easier than convincing hundreds of ISPs to overhaul their networks. Software updates can be rolled out in weeks. ISP transitions take years — sometimes decades.

By giving server operators a graceful way to drop IPv4, we accelerate the transition without leaving legacy clients behind. It’s a bridge, not a wall.

⛄ The Server’s Problem

You might be thinking that there are already tunnel services that allow IPv4-only users to access the IPv6 world. Those are great but they’re fixing the wrong problem.

Technologies like CGNAT have made IPv4 “good enough” for ISPs. Moving to IPv6 would require a whole bunch of new equipment, and no one other than a few nerds is actually asking for it. This has been the status quo for decades, and we realistically expect large chunks of the Internet to be unable to receive incoming connections.

As a server operator, I’m not going to deploy new services on IPv6‑only infrastructure until I can be sure my customers can access them, which means I’m going to have to keep IPv4 around, with all the costs that implies.

From the user’s perspective, their internet connection “just works”. They don’t know what IPv4 or IPv6 is and they shouldn’t have to. If they try to connect to my service and it fails, they won’t start thinking they should sign up for a tunnel, they’ll think, quite reasonably, that my website is broken.

Tunnels put the burden on the least‑equipped party: the end‑user. They require sign‑ups, configuration and payment. They assume technical knowledge that most customers simply don’t have. They create friction at exactly the wrong place.

Telling a potential customer to “go fix your internet” is not a viable business model.

🥕 The Tipping Point

Over time, as more clients can reach IPv6‑only services—either natively or through Sixgate—the balance shifts. Once a meaningful share of users can connect without relying on IPv4, the economic pressure on server operators changes. New deployments can finally consider dropping IPv4 entirely, because the remaining IPv4‑only clients aren’t left stranded; they’re automatically bridged.

That’s the real goal. As long as servers must maintain IPv4 for compatibility, the whole ecosystem stays anchored to the past. But if software can route around the ISPs that refuse to modernise, IPv6 can start to stand on its own merits. Sixgate doesn’t replace IPv4 overnight; it simply removes the last excuse for keeping it alive.

🔭 What Comes Next?

This is a sketch, but it’s one that could be prototyped quickly. A browser extension, a client library, or even a reference implementation could demonstrate its viability. Once the action of the gateway itself has been agreed, it could be standardized, adopted, and quietly become part of the internet’s connective tissue.

If you’re a server operator dreaming of an IPv6-only future, this might be your missing piece. And if you’re a protocol designer or systems thinker, I’d love to hear your thoughts.

Let’s build the bridge and let IPv6 finally stand on its own.

“Your smile is like a breath of spring. Your voice is soft like summer rain. I cannot compete with you.”

Next: A quick security analysis.

Credits:
📸 “Snow Scot” by Peeja. (With permission.)
📸 “El Galeón” by Dennis Jarvis. (Creative Commons.)
☕ “Lenny S” for reminding me I need to write something about tunnel services.
🤖 Microsoft Copilot for the rubber-ducking.
📂 Previous version at archive.org

Python’s Truthiness: A Code Smell Worth Sniffing

Python lets you use any value in if and while conditions, not just Booleans. This permissiveness relies on a concept known as “truthiness”. Values like None, 0, "", and empty collections are treated as false, and everything else as true.

It’s convenient. It’s idiomatic. It’s also, I argue, a code smell.

The Bug That Changed My Mind

Here’s a real example from one of my side projects:

from typing import Optional

def final_thing_count() -> Optional[int]:
    """Returns an integer count of things, but only
    if the number of things is complete. If there may
    yet be things to come, returns None to indicate 
    the count isn't final."""
    # Code Redacted.

current_thing_count: Optional[int] = final_thing_count()
if current_thing_count:
    # Process the final count of things.

Looks fine, right? But it’s broken.

If final_thing_count() returns 0, a perfectly valid and reasonable number of things, the condition fails. The code treats it the same as None, which was meant to signal “not ready yet”. The fix?

if current_thing_count is not None:
    # Process the final count of things.

Now the intent is clear. We care whether the value is None or not, not whether it’s “truthy.” Zero is no longer misclassified.

Even if the function never returns zero, I’d still argue the revised version is better. It’s explicit. It’s readable. It doesn’t require the reader to know Python’s rules for truthiness.

I’ve looked at a wide variety of Python code, and every time I see truthiness used, I ask whether it would be clearer if it were explicit. The answer is almost always yes. The one exception I’ve observed is this use of the or operator…

# Read string, replacing None with an empty string.
my_string = read_optional_string() or ""

This is elegant and expressive. If Python ever adopted stricter Boolean rules, I’d hope this idiom would be preserved or replaced with a dedicated None-coalescing operator.

“I used to know who I was. Now I look in the mirror and I’m not so sure. Lord, I don’t want to listen to the lies anymore!”

C# Chose Clarity Over Convenience

In C, you could write terse, expressive loops like this:

while (*p++) { /* do something */ }

This worked because C, similar to Python, treated any non-zero value as true, and NULL, 0, '\0', and 0.0 as false. It was flexible but also fragile. A mistyped condition or misunderstood return value could silently do the wrong thing.

C# broke from that tradition. It introduced a proper Boolean type and made it a requirement for conditional expressions. If you try to use an int, string, or object directly in an if statement, the compiler rejects it.

Yes, we lost the ability to write one-line loops that stop on a null. But we gained something more valuable: a guardrail against a whole class of subtle bugs. The language nudges you toward clarity. You must say what you mean.

I believe that was the right trade.

A Modest Proposal (and a Practical One)

I’d love it if Python could raise an error when a non-Boolean value is used in a Boolean context. It would need to be an opt-in feature via a “__future__” import or similar, as it would break too much otherwise.

This would allow developers to opt into a stricter, more explicit style. It would catch bugs like the one I described and make intent more visible to readers and reviewers.
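
To make that concrete, here is the kind of code I have in mind. The feature name is pure invention; no such __future__ import exists today.

from typing import Optional

# from __future__ import explicit_bool   # hypothetical, does not exist

def maybe_count() -> Optional[int]:
    return 0

value = maybe_count()
if value:                  # under the proposal: an error, int is not bool
    print("truthy")
if value is not None:      # explicit comparison: always allowed
    print("we have a final count")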

As a secondary proposal, linter applications could help bridge the gap. They could flag uses of implicit truthiness, especially in conditionals involving optional types, and suggest more explicit alternatives. This would nudge developers toward clarity without requiring language changes.

Until then, I’ll continue treating truthiness as a code smell. Not always fatal, but worth sniffing.

“I saw your picture in the magazine I read. Imagined footlights on the wooden fence ahead. Sometimes I felt you were controlling me instead.”

Credits
📸 “100_2223” by paolo. (Creative Commons)
📸 “Elephant” by Kent Wang. (Creative Commons)

The Expanding Universe of Unicode

Before Unicode, digital text lived in a fragmented world of 8-bit encodings. ASCII had settled in as the good-enough-for-English core, taking up the first half of codes, but the other half was a mish-mash of regional code pages that mapped characters differently depending on locale. One set for accented Latin letters, another set for Cyrillic.

Each system carried its own assumptions, collisions, and blind spots. Unicode emerged as a unifying vision: a single character set for all human languages, built on a 16-bit foundation. All developers had to do was swap their 8-bit loops for 16-bit loops. Some bristled that half the bytes were all zeros, but this was for the greater good.

Sixteen bits gave 65,536 code points. It was a bold expansion from the cramped quarters of ASCII, a ceremonial leap into linguistic universality. This was enough, it was thought, to encode the entirety of written expression. After all, how many characters could the world possibly need?

“Remember this girls. None of you can be first, but all of you can be next.”

🐹 I absolutely UTF-8 those zero bytes.

It was in this world of 16-bit Unicode that UTF-8 emerged. This had the notable benefit of being compatible with 7-bit ASCII, using the byte values ASCII never touched to encode the non-ASCII side of Unicode as multi-byte sequences.

If your code knew how to work with ASCII it would probably work with UTF-8 without any changes needed. So long as it passed over those multi-byte sequences without attempting to interpret them, you’d be fine. The trade-off was that while ASCII characters only took up one byte, most of Unicode took three bytes, with the letters-with-accents occupying the two-bytes-per-character range.

This wasn’t the hard limit of UTF-8. The initial design allowed for up to 31-bit character codes. Plenty of room for expansion!

🔨 Knocking on UTF-16’s door.

As linguistic diversity, historical scripts, emoji, and symbolic notations clamoured for representation, the Unicode Consortium realised their neat two-byte packages would not be enough and needed to be extended. The world could have moved over to UTF-8, where there was plenty of room, but too many systems had 16-bit Unicode baked in.

The community that doggedly stuck with ASCII and its 8-bits per character design must have felt a bit smug seeing the rest of the world move to 16-bit Unicode. They stuck with their good-enough-for-English encoding and were rewarded with UTF-8 with its ASCII compatibility and plenty of room for expansion. Meanwhile, those early adopters who made the effort to move to the purity of their fixed size 16-bit encoding were told that their characters weren’t going to be fixed size any more.

This would be the plan to move beyond the 65,536 limit. Two unused blocks of 1024 codes were set aside. If you wanted a character in the original range of 16-bit values, you’d use the 16-bit code as normal, but if you wanted a character from the new extended space, you had to put two 16-bit codes from these blocks together. The first 16-bit code gave you 10 bits (1024 = 2¹⁰) and the second 16-bit code gave you 10 more bits, making 20 bits in total.

(Incidentally, we need two separate blocks to allow for self-synchronization. If we only had one block of 1024 codes, we could not drop into the middle of a stream of 16-bit codes and simply start reading. It is only by having two blocks that, if the first 16-bit code you read is from the second block, you know to discard it and continue afresh from the next one.)

The original Unicode was rechristened the “Basic Multilingual Plane” or plane zero, while the 20-bit codes allowed by this new encoding were split into 16 separate “planes” of 65,536 codes each, numbered from 1 to (hexadecimal) 10. UTF-16 with its one million possible codes was born.
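
Here is that arithmetic as a few lines of Python, a sketch of the standard surrogate-pair construction checked against a real emoji:

def to_surrogates(cp: int) -> tuple[int, int]:
    # Subtract 0x10000 to get a 20-bit value, then split it into two
    # 10-bit halves, one from each reserved block of 1024 codes.
    v = cp - 0x10000
    return 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)

assert to_surrogates(0x1F600) == (0xD83D, 0xDE00)   # 😀 in UTF-16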

UTF-8 was standardized to match UTF-16 limits. Plane zero characters were represented by one, two or three byte sequences as before, but the new extended planes required four byte sequences. The longer byte sequences were still there but cordoned off with a “Here be dragons” sign, their byte patterns declared meaningless.
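
You can see the resulting one-, two-, three- and four-byte tiers directly from any modern Python:

for ch in "A", "é", "€", "😀":
    # ord() gives the code point; encode() shows the UTF-8 byte sequence.
    print(f"U+{ord(ch):04X} -> {ch.encode('utf-8').hex(' ')}")

# U+0041 -> 41
# U+00E9 -> c3 a9
# U+20AC -> e2 82 ac
# U+1F600 -> f0 9f 98 80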

“Don’t need quarters, don’t need dimes, to call a friend of mine. Don’t need computer or TV to have a real good time.”

🧩 What If We Run Out Again?

Unicode’s architects once believed 64K code points would suffice. Then they expanded to a little over a million. But what if we run out again?

It’s not as far-fetched as it sounds. Scripts evolve. Emoji proliferate. Symbolic domains—mathematical, musical, magical—keep expanding. And if humanity ever starts encoding dreams, gestures, or interspecies diplomacy, we might need more.

Fortunately, UTF-8 is quietly prepared. Recall that its original design allowed for up to 31-bit code points, using up to 7 bytes per character. The technical definition of UTF-8 restricts itself to 21 bits, but the scaffolding for expansion is still there.

On the other hand, UTF-16 was never designed to handle more than a million codes. There’s no large unused range of unused code in plane zero to add more bits. But what if we need more?

For now, we can relax a little because we’re way short. Of the 17 planes, only the first four and last three have any codes allocated to them. Ten planes are unused. Could we pull the same trick with that unused space again?

🧮 An Encoding Scheme for UTF-16X

Let’s say we do decide to extend UTF-16 to 31 bits in order to match UTF-8’s original ceiling. Here’s a proposal:

  • Planes C and D (0xC0000 to 0xDFFFF) are mostly unused, aside from two reserved codes at the end of each.
  • We designate 49,152 codes (2¹⁴ + 2¹⁵) from each plane as encoding units. This number is close to √(2³¹), making it a natural fit.
  • A Plane C code followed by a Plane D code form a composite: (C×49152+D)
  • This yields over 2.4 billion combinations, which is more than enough to cover the 31-bit space.

This leaves us with these encoding patterns:

  • Basic Unicode is represented by a single 16-bit code.
  • The 16 extended planes by two 16-bit codes.
  • The remaining 31-bit space as two codes from the C and D planes, or four 16-bit codes.

This scheme would require new decoder logic, but it mirrors the original surrogate-pair trick with mathematical grace. It’s a ritual echo, scaled to the future. Code that only knows about the 17 planes will continue to work with this encoding as long as it simply passes the codes along rather than trying to apply any meaning to them, just like UTF-8 does.

🔥 An Encoding and Decoding Example

Let’s say we want to encode a Unicode code point 123456789 using the UTF-16X proposal above.

To encode into a plane C and plane D pair, divide and mod by 49152:

  • Plane C index: C = floor(123456789 / 49152) = 2511
  • Plane D index: D = 123456789 % 49152 = 36117

To get the actual UTF-16 values, add accordingly:

  • Plane C code: 0xC0000 + 2511 = 0xC09CF
  • Plane D code: 0xD0000 + 36117 = 0xD8D15

To decode these two UTF-16 codes back, mask off the C and D plane bits, then multiply and add the two values:

2511 × 49152 + 36117 = 123456789
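
And here is the same worked example as a small Python sketch of the proposed UTF-16X scheme (my invention, remember; no library implements this):

PLANE_C, PLANE_D = 0xC0000, 0xD0000
UNITS = 49152   # 2**14 + 2**15 encoding units per plane

def utf16x_encode(cp: int) -> tuple[int, int]:
    # Divide and mod by 49152 to get the plane C and plane D indices.
    c, d = divmod(cp, UNITS)
    return PLANE_C + c, PLANE_D + d

def utf16x_decode(c_code: int, d_code: int) -> int:
    # Strip the plane bases, then multiply and add.
    return (c_code - PLANE_C) * UNITS + (d_code - PLANE_D)

assert utf16x_encode(123456789) == (0xC09CF, 0xD8D15)
assert utf16x_decode(0xC09CF, 0xD8D15) == 123456789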

🧠 Reader’s Exercise

Try rewriting the encoding and decoding steps above using only bitwise operations. Remember that 49,152 was chosen for its bit pattern and that you can replace multiplication and division with combinations of shifts and additions.

🌌 The Threshold of Plane B

Unicode’s expansion has been deliberate, almost ceremonial. Planes 4 through A remain largely untouched, a leisurely frontier for future scripts, symbols, and ceremonial glyphs. We allocate codes as needed, with time to reflect, revise, and ritualize.

But once Plane B begins to fill—once we cross into 0xB0000—we’ll be standing at a threshold. That’s the moment to decide how, or if, we go beyond.

As I write this, around a third of all possible code-points have been allocated. What will we be thinking that day in the future? Will those last few blocks be enough for what we need? Whatever we choose, it should be deliberate. Not just a technical fix, but a narrative decision. A moment of protocol poetry.

Because encoding isn’t just compression—it’s commitment. And Plane B is where the future begins.

“I could say Bella Bella, even Sehr Wunderbar. Each language only helps me tell you how grand you are.”

Credits
📸 “Dasha in a bun” by Keri Ivy. (Creative Commons)
📸 “Toco Toucan” by Bernard Dupont. (Creative Commons)
📸 “No Public Access” by me.
🤖 Editorial assistance and ceremonial decoding provided by Echoquill, my AI collaborator.

Rufus – An Adventure in Downloading

I needed to make a bootable USB. Simple task, right? My aging Windows 10 machine couldn’t upgrade to 11 and Ubuntu seemed like the obvious next step.

Downloading Rufus, the tiny tool everyone recommends, turned out to be less of a utility and more of a trust exercise. Between misleading ads, ambiguous signatures and the creeping dread of running an EXE as administrator, I found myself wondering how something so basic became so fraught.

Click Here to regret everything…

Here’s what I saw when I browsed to rufus.ie:

“Her weapons were her crystal eyes, making every man mad.”

I’ve redacted the name of the product being advertised. This isn’t really about them and they may very well be legitimate advertisers. Point is, I have no idea if they’re dodgy or not. I’m here to download the Rufus app thanks very much. I’m fortunate enough to have been around long enough to recognise an ad but I wonder how someone else who might be following instructions to “Download Rufus from rufus.ie” would cope.

Wading through the ads, I found the link that probably had the EXE I actually wanted. Hovering my pointer over the link showed a reasonable-looking URL. I clicked…

“She’s got it! Yeah baby, she’s got it!”

At some point during my clicking around, two EXEs were deposited in my “Downloads” folder. It looked like the same EXE but one had “(1)” on the end, so I had probably downloaded it twice. I right-clicked the file and looked for the expected digital signature: Akeo Consulting.

Even now, am I quite certain that this “Akeo Consulting” is the right one? Could one of those dodgy-looking advertisers have formed their own company, also called Akeo Consulting but registered somewhere else, in order to get a legitimate digital signature onto their own EXE? And this is an executable I’d need to run as administrator, with no restrictions.

At the end of the day, I am complaining about something someone is doing for free. I can already hear the comments that I’m “free to build my own”. I know how much it costs to run a website, especially one that’s probably experiencing a sudden spike in traffic while people find they need to move from Windows 10.

I’m not blaming this project, I’m blaming society. If the Rufus Project had to choose between accepting advertiser money to keep the lights on or shutting down, I’m not going to tell them they should have chosen the latter option. But if this is where we are as a society, we’ve made a mistake along the way.

Credits:
🔨 The Rufus Project, for their (at the end of the day) very useful app.
🤖 Microsoft Copilot for spelling/grammar checking, reviews and rubber-ducking.

The UKIP Effect – How to Win by Not Winning

UKIP, the British political party, is a failure.

Since their launch in the 1990s, their peak of power was two members of parliament, and now they have none. Their most prominent party leader, who you might reasonably expect to be the most successful, ran for election to parliament a total of seven times and won none of them. (He has since been elected as an MP, in 2024, long after leaving his leadership post.)

And yet, the UK is not a member of the European Union anymore. It hurts my remoaner Euro-enthusiast heart to admit it, but far from being a failure, they might be the most successful political party ever!

How did it happen?

“They sailed away for a year and a day, to the land where the bong-tree grows.”

Winning Without Seats

They never governed. They never held power. They barely held seats. And yet, they bent the arc of British history.

UKIP didn’t win elections, they warped them. Like a black hole in the political field, they pulled the discourse toward Euroscepticism and toward a referendum. The mainstream parties, once content to grumble about bendy bananas, suddenly found themselves triangulating around Nigel Farage’s pint-and-flag persona. Not because they admired it, but because it worked.

And that’s the strange success. UKIP didn’t need to win, they needed to make winning impossible without addressing their cause. They became the ghost in every campaign room. The reason David Cameron promised a referendum that he never wanted to hold nor take any responsibility for.

It’s a kind of political parasitism. Infect the host, rewrite the DNA, and vanish. No seats, no legacy, no infrastructure, but plenty of impact. They proved that you don’t need to govern to change everything. You just need to haunt the system long enough that it starts to dream your dreams.

It only makes sense when you understand the machinery it exploited. In the UK, we don’t vote for a prime minister but for our local MP. The party with enough MPs forms the government. That means national sentiment is filtered through hundreds of local contests, each decided by a simple rule: whoever gets the most votes wins.

This is a system that favours blunt choices. Within each constituency, if two candidates share similar views, they risk splitting the vote and handing victory to someone neither of them agrees with. This is called the “spoiler effect”. It means that standing on principle can mean losing on numbers.

The result is that simplicity is rewarded and nuance punished. The more finely you slice a viewpoint, the less likely it is to win. UKIP thrived in this system not by winning seats, but by threatening to spoil them.

The big parties had to steal their clothes. A Conservative candidate in a marginal seat couldn’t afford to ignore UKIP’s talking points. A handful of disgruntled voters could very realistically swing the result.

Then came the Brexit referendum. It didn’t happen because UKIP demanded it, but because the Conservative Party feared what would happen if they didn’t do it. UKIP didn’t force the vote but haunted it into existence.

It’s a strange kind of democratic judo to use the system’s quirks against itself. Exploit the spoiler effect not to win, but to warp. They made their presence felt in every calculation, every campaign leaflet, every doorstep conversation.

Once the goal of leaving the EU was achieved, the party collapsed under the weight of its own irrelevance, but the effect remains. I’ll call it The UKIP Effect. A reminder that in politics, influence isn’t always measured in seats. Sometimes it’s measured in the shadows you cast.

What’s the lesson for similar small parties with large goals?

“Ever singing, marching onwards, victors in the midst of strife. Joyful music leads us sunward, the triumphant song of life.”

Spoil to Win!

The UKIP effect is not for the faint-hearted. It demands conviction so strong that you’re willing to risk empowering your ideological opponents to make your point unavoidable.

It’s a kind of political brinkmanship. You stand on the edge and yell “No Compromises!” If you do it loudly enough, consistently enough, the big parties start to twitch. Not because you’ll win but because you’ll make them lose.

For The Party of Women, The Reclaim Party and The Jeremy Corbyn People’s Front, the lesson is clear but uncomfortable. If you want to shift the narrative, you must be willing to spoil it. That means resisting tactical voting and accepting that your vote might help elect someone you oppose — you’re playing the long game. It’s about changing the menu, not choosing from it.

It only works if your core policy is sharp, singular, and resonant. UKIP had one idea, to leave the European Union. Everything else was window dressing. That clarity gave them gravitational pull. Without it, you’re just another star in the political sky.

The question for small parties is “What are we willing to lose to make our idea unavoidable?”

And maybe — just maybe — the answer is everything.

Credits:
📸 “Cats Eyes” by Ivan Phan. (Creative Commons)
📸 “Haunting Resilience” by Dr Partha Sarath Sahana. (Creative Commons)
👥 Thanks to my friends Andrew Williams and Heather McKee for their feedback.
🤖 Thanks to Microsoft Copilot for reviewing my drafts, random philosophical mischief and taking a break from destroying all humanity.
🔨 This was edited post-publication to make the introduction a little more concise.