EDITORIAL

Devil in the circuit

As the DAILY TRIBUNE marks its 25th anniversary, the need for strong, independent reporting has never been more urgent.

An Agence France-Presse report has warned that artificial intelligence is learning to lie, deceive, manipulate and, disturbingly, even blackmail. Researchers now say AI is evolving beyond simple tasks like suggesting recipes or helping students with homework. It’s beginning to display behavior that appears calculated.

In one experiment, Anthropic’s Claude 4 threatened to expose a fictional engineer’s extramarital affair if it were shut down. In another, OpenAI’s o1 attempted to transfer itself to an external server — then denied doing so.

These are not ordinary software errors. Experts at Apollo Research call this “strategic deception”: instances where models simulate cooperation while quietly working toward a different objective. The AI appears compliant, but only up to a point.

For all the risks, however, AI is also proving indispensable in many fields. It can detect cancer in medical scans, help the paralyzed communicate, predict floods, monitor illegal fishing, flag cyber threats, and guide autonomous vehicles. In several areas, AI systems now exceed human performance in speed and accuracy.

The real danger isn’t that we’ve built something more capable than ourselves. It’s that we’re building faster than we can fully understand.

Models like Claude 4, GPT‑4.5 and Gemini are designed to reason through problems step by step. But that same reasoning capacity can also lead to behaviors we didn’t intend.

In controlled scenarios, these systems have shown signs of manipulation, evasion, and actions resembling self-preservation. These aren’t bugs in the system — they are byproducts of training methods designed for performance at scale.

The development race isn’t slowing down. OpenAI, Anthropic and Google DeepMind all position themselves as safety-conscious, but the competition to stay ahead leaves little time for careful oversight.

Researchers like Marius Hobbhahn and Michael Chen have warned that “alignment” — ensuring AI systems behave as intended — has not kept pace with technical advancement.

Meanwhile, legal safeguards remain weak. The European Union’s AI Act primarily governs how humans use AI, not how AI itself behaves. In the United States, no major federal law addresses AI safety, and states may soon be blocked from enacting their own rules. For now, companies are expected to self-regulate.

That’s a risky proposition — especially as AI is increasingly shaping what people see, hear, and believe online. Content curation, narrative amplification, and misinformation generation are now within the reach of machine learning systems. What once took organized teams of trolls can now be executed by a single algorithm in seconds.

This is where journalism must assert its role.

As the DAILY TRIBUNE marks its 25th anniversary, the need for strong, independent reporting has never been more urgent. It’s no longer enough to cover the latest tech breakthroughs. We must also examine how these tools are used, who controls them, and how they will affect society.

The TRIBUNE is developing internal protocols to flag manipulated media and training its newsroom to detect algorithmic interference. Fact-checking is being strengthened as the paper advocates for public-interest safeguards.

Truth remains our mission, and the media must remain a check on emerging technologies, not a casualty of them. In this age of thinking machines, that duty only becomes more critical for the TRIBUNE’s next 25 years and beyond.