EDITORIAL

AI’s new black box


Tech titans and world leaders are huddling over the “promise and challenges” of artificial intelligence (AI), and the news out of Paris reads like a scene from a sci-fi thriller. Eleven tech bosses, including the heads of Mistral AI and LinkedIn, are selling us a new “public interest” AI partnership with a promise of hundreds of millions in funding.

Irresistible? Maybe not to jaded observers already hearing the ka-ching of cash registers and the unstated return on investment prospects of those behind this push. Here they go again, the tech giants promising open-source tools, access to data (ours or theirs?) and systems (theirs alone) to measure AI’s impact (on us, you bet).

Oh, how noble-sounding: a veritable United Nations of algorithms dedicated to the “public good.” Cue the canned laughter. Forgive us if we’re not ready to break out the champagne and join the AI lovefest with Valentine’s Day just around the corner.

Call us cynics, Luddites, or just people who’ve seen too much tech utopianism turn dystopian, but we smell a rat. Or, more accurately, a whole pack of ravenous wolves dressed in the fluffy sheep’s clothing of “public interest.”

We’ve heard this song and seen this dance before. Silicon Valley, with its messianic complex and penchant for disruption, always promises a better tomorrow, powered by the latest gadget or algorithm.

Yet, too often, that “better tomorrow” shreds our privacy, mines our data, and shortens our attention spans to less than a TikTok video (oh, but it now hosts long-form videos, and live, too).

Enough digression. Now, they’re selling us AI, the ultimate black box, the algorithmic sorcerer that can predict our desires, manipulate our emotions, and, let’s be honest, probably steal our jobs.

The current AI initiative, with its lofty goals and hundreds of millions in funding, is supposed to be different. Its backers claim they want to avoid the “harms of unchecked tech development.”

But who exactly is checking the checkers? Who’s holding these tech titans accountable? They talk about “public interest,” but what does that mean when the “public” is fragmented and manipulated by the very technologies they’re developing?

It’s like letting the fox guard the henhouse and expecting it to write a report on chicken safety. Who are they kidding?

Let’s not forget the dark side of this shiny new toy.

AI is already weaponized in scams, deepfakes, and disinformation campaigns. It can mimic voices, fabricate images, and generate entire news articles designed to deceive and manipulate.

Imagine the havoc AI-powered scams can wreak on unsuspecting individuals, especially the elderly. A convincing phone call from a “grandchild” in distress, generated by an AI that has scraped every detail of their online life?

A fake news article, meticulously crafted by an algorithm, designed to sway an election? These aren’t hypothetical scenarios — they’re happening already.

The potential for misuse is staggering. And while the tech bros in Paris pat themselves on the back for their “public interest” initiative, they conveniently ignore the very real dangers their creations pose.

They’re so busy building the future, they haven’t stopped to consider whether that future is one we actually want to live in.

So, what can be done? First, we need regulation — not the kind that stifles innovation but the kind that sets clear boundaries and holds developers accountable for the harms AI can cause.

We need transparency. We need to know how these algorithms work, what data they’re trained on, and who’s pulling the strings. And we need to have a serious conversation about ethics. What are the limits of AI?

What values should it be programmed with? These aren’t merely technical questions; they are fundamental questions about what it means to be human.

Second, we need to empower individuals. We need to educate people about AI-powered scams and disinformation. We need to give them the tools to identify and resist manipulation. This means media literacy, critical thinking skills, and a healthy dose of skepticism.

Finally, we need to demand more from the tech industry. We need them to stop treating us like guinea pigs for their grand experiments. The AI genie is out of the bottle. We can’t put it back. But we can, and we must, learn to control it before it’s too late.