Don’t Believe Everything You See

From Poison Pens to Digital Puppets

· Random Circuits

Once, we were told not to believe everything we read. The warning lived between book covers and column inches, a defense against misinformation printed in ink. But in an era of doctored video, AI-generated voices, and faces that say things they never said, the danger has shifted. Now, the deception looks like you, sounds like you, and spreads faster than truth can catch up.
From ink to pixels, the lie just got faster.

The Long Tail of a Lie

In the 1800s, Victoria Woodhull—a suffragist, stockbroker, and the first woman to run for U.S. President—was hounded by salacious falsehoods that shadowed her across the Atlantic. After fleeing to the UK, she discovered that defamatory claims about her had been preserved in a library archive.

When attempts were made to have the material removed, a librarian refused—arguing for historical permanence. That very book would later be cited in legal arguments by social media companies seeking to deflect responsibility for what they host. But a single copy on a dusty shelf is not the same as an algorithmic engine propelling disinformation to millions in milliseconds. Scale changes everything. So does intent.

When Does Fake News Become Slander?

Fake news might sound abstract, but for those on the receiving end, it’s deeply personal. Reputations are destroyed, careers derailed, safety threatened. Celebrities like Robbie Williams and James Blunt have spoken out about how relentless media fiction has affected their mental health and livelihoods.

Even members of the British Royal Family have felt the impact: manipulated images, conspiracy theories, and viral speculation have fueled a credibility crisis that no palace statement can fully contain. So much so that some—like Prince Harry and Meghan Markle—left the country entirely. More than a century after Victoria Woodhull fled to the UK under the weight of scandal, the same behavior is still rewarded: those who spread the lie profit, while those who live its consequences are forced to start over.

In 2021, Prince Harry joined the Aspen Institute’s Commission on Information Disorder, calling for urgent reform to counter the rising threat of disinformation. He described it not just as a digital problem, but as a global humanitarian issue. And yet, the damage is often done long before platforms act—if they act at all.

The legal threshold for slander may be complex. The ethical one? Clear as day.


Deepfakes: The Death of Visual Truth

We used to say “seeing is believing.” Deepfakes have torched that belief. Synthetic videos and voice clones now make it possible to fabricate a moment from nothing. By the time it’s flagged, the damage is done—and the platform still profits.

If these companies can serve hyper-personalized ads in microseconds, they can detect digital manipulation. The question isn’t can they—it’s why haven’t they?

Platforms Without Borders, Laws Without Teeth

Social platforms reach across the globe, yet their accountability remains stunningly local. They can distort public perception, interfere with elections, and enable outrage economies—while hiding behind outdated legal frameworks.

Governments regulate medicine, food, and finance. But social media? Still largely unchecked. It’s a regulatory vacuum. And in that vacuum, harm thrives.

Jacinda Ardern called this out. But calling it out is not the same as calling it in—to law, to governance, to ethical accountability.


From Fiction to Flashpoint: Are We Living the Terminator Timeline?

Forty years ago, The Terminator warned us about machines rising from the ashes of human arrogance. Back then, it was science fiction. Today, it’s edging toward documentary.

Autonomous drones are here. Deepfake propaganda is here. AI-driven disinformation campaigns aren’t theoretical—they’re already in use. And the spark for conflict may no longer be a soldier—it might be a post.

So how long before a bot starts a war?

The terrifying truth is: we may already be there. AI doesn’t need to launch nukes to destabilize nations. It just needs to manipulate enough people, fast enough. Deepfakes, synthetic propaganda, and algorithmic amplification can ignite conflict without a single shot fired. And once autonomous weapons are in the mix, escalation could happen at machine speed—faster than diplomacy, oversight, or even comprehension.

We’re not talking about Skynet in the sky—we’re talking about systems already in place, operating without clear accountability, and often without human-in-the-loop safeguards.

If governments can’t regulate this, who will? And if we don’t act now, will the next “Judgment Day” be a courtroom debate—or a timestamp in a war log?

Let’s just hope that when the alerts light up and tensions spike, leaders have the clarity—and the humility—to check their facts with their counterparts before someone reaches for the launch codes. Because in an age of synthetic truth, a single lie could spark a very real war.


In a world of digital manipulation, even video calls can lie. A familiar voice might be a clone. A known face might be synthetic. That’s why real-world connection matters more than ever—not just socially, but as a mechanism of truth.

Face-to-face meetings. Physical validation. Human nuance. It may sound quaint—but it might just be the most radical act of all.

Authentication needs to evolve too. Biometric verification, digital watermarking, human-in-the-loop design—it exists. So does the urgency. What’s missing? Commitment.
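One of those existing technologies, digital watermarking via cryptographic signing, can be sketched in a few lines. This is a deliberately minimal illustration, not a production design: real provenance systems (such as those built on the C2PA standard) use asymmetric keys and embedded metadata, whereas this toy version assumes a single publisher-held secret and an external signature.

```python
import hashlib
import hmac

# Hypothetical publisher key for illustration only; real systems
# use asymmetric signatures so verifiers never hold the secret.
SECRET_KEY = b"publisher-private-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a tamper-evident signature over the raw media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, signature)

original = b"...original video frame data..."
sig = sign_media(original)

print(verify_media(original, sig))                  # authentic copy passes
print(verify_media(original + b"tampered", sig))    # any edit fails
```

The point of the sketch is that tamper detection is cheap once content is signed at the source; the hard part is adoption, which is exactly the commitment gap the paragraph above describes.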

Cleansing or Covering Up?

Yes, some platforms make efforts to clean up fake accounts. But is it meaningful action—or a digital sticking plaster?

They have the AI. They have the data. They have the money. If they truly wanted to find the sources of coordinated disinformation and shut them down, they could.

Do they just shrug and say, “The dark web is too clever”? If so, maybe they’re not the tech leaders they claim to be. Or maybe they are—and they’re choosing not to act.

This isn’t about what’s possible. It’s about what’s profitable.

Credibility or Convenience? The Billion-Dollar Shrug

Let’s not pretend this is a technical impossibility. Social media platforms have some of the brightest minds and deepest pockets in the tech world. If they wanted to track and shut down fake accounts, they could. Yes, it would take work. Yes, it would cost money. But these are companies valued in the hundreds of billions—money isn’t the issue. Will is.

So why the constant hand-wringing? Why the “we’re trying” statements that lead nowhere?

They have the money. They have the AI. They have the insight to build predictive models of your next online behavior. But stop a troll farm? Curb hate speech? Prevent a deepfake from destabilizing a region?

“Too hard.”

Follow the Money (and the Power)

If a platform can detect your shopping habits within seconds, it can detect a fake account. If it can serve you hyper-targeted ads, it can trace coordinated disinformation.
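Even the crudest signal of coordination, many accounts posting identical text within minutes of each other, is trivial to flag with the data platforms already hold. The records and thresholds below are invented for illustration; real detection pipelines draw on far richer behavioral signals, but the sketch shows how low the technical bar is.

```python
from collections import defaultdict

# Hypothetical post records for illustration: (account_id, timestamp_seconds, text)
posts = [
    ("acct_1", 0, "Candidate X secretly met with..."),
    ("acct_2", 5, "Candidate X secretly met with..."),
    ("acct_3", 9, "Candidate X secretly met with..."),
    ("acct_4", 3600, "Lovely weather today"),
]

def flag_coordinated(posts, min_accounts=3, window_seconds=60):
    """Flag any text posted verbatim by several distinct accounts
    within a short window -- a toy heuristic for coordinated posting."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, hits in by_text.items():
        hits.sort()
        distinct_accounts = {account for _, account in hits}
        burst = hits[-1][0] - hits[0][0] <= window_seconds
        if len(distinct_accounts) >= min_accounts and burst:
            flagged.append(text)
    return flagged

print(flag_coordinated(posts))  # the duplicated rumor is flagged; the lone post is not
```

A dozen lines of grouping and counting, running over data the platforms already index for ad targeting: the asymmetry is one of priorities, not capability.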

The truth is, they’ve built systems that are brilliant at monetizing attention—but dismal at protecting truth. And that’s not a bug. That’s the business model.

Let’s stop romanticizing the idea of benevolent tech bros too busy innovating to be accountable. These are some of the richest people on Earth—controlling platforms that can shift markets, elections, and public belief.

Let’s call it what it is: a failure of will, not of capability. When attention is profit and outrage is a business model, truth becomes collateral.

Ethics, or Just Optics?

This isn’t just a tech problem—it’s an ethics failure. Platforms wrap themselves in ethical language once the damage is done. Governments follow with vague promises and delayed legislation.

But ethics isn’t something you retrofit into a business model or insert into a campaign speech. It’s baked in—or it’s absent.

Governments that hesitate to regulate are not neutral—they’re complicit. If they can regulate advertising and food safety, they can regulate algorithmic influence. To pretend otherwise is cowardice draped in bureaucracy.

So where does that leave us? Watching digital fires spread—while those who lit them sell fire insurance.

Inspired in part by the BBC documentary “Fake News: A True History,” now streaming on TVNZ+. A sobering reminder that misinformation has always been a weapon. Now, it’s just faster.

Exposure. The Great Unknown.

These are the voyages of Random Circuits, boldly entering the arena of ideas that disrupt, challenge, and transform.
