In the age of artificial intelligence, what we see can no longer be taken at face value. As AI-generated content becomes more convincing, the line between reality and deception grows increasingly blurred, leaving journalists, researchers, and the general public racing to keep up.
Recent events in our country have made it painfully clear how high the stakes are. The so-called “polvoron video,” a low-quality recording that showed President Ferdinand “Bongbong” Marcos Jr. supposedly snorting an illegal substance, was exposed through a House inquiry as a deepfake intended to provoke public indignation and destabilize the administration.
Social media personality Vincent “Pebbles” Cunanan testified that the doctored video had been circulated on social media both before and after it went viral, purportedly as part of an attempt to destabilize the government. The Philippine National Police and independent fact-checkers subsequently confirmed that the video had been manipulated with AI, turning an already poor-quality recording into a weapon of disinformation.
A Senate inquiry a few days later, on 25 April, revealed another troubling development: an agreement between a Philippines-based PR firm and the Chinese Embassy to deploy so-called “keyboard warriors” on social media. These operatives, posing as ordinary citizens, were well paid to propagate stories favorable to Beijing’s interests, sowing confusion and mistrust among Filipinos.
Although they are not deepfakes in the traditional sense, these fabricated personas and coordinated online operations represent the same chilling phenomenon — the deliberate bending of the truth to manipulate public belief.
These two incidents, so close in time, sound a loud alarm: disinformation is evolving rapidly, driven by technologies that allow propagandists to spread falsehoods to vast audiences at the speed of a tweet. Deepfakes, once confined to sophisticated research labs, are now within reach of anyone with modest technical skill and a motive to deceive.
The question, then, is this: how can you tell what is real?
Identifying deepfakes without specialized tools is daunting. Tools like the DeepFake-O-Meter, developed by Siwei Lyu and his team at the University at Buffalo, help by examining videos and images and assigning a probability score that the content was manipulated by AI. These tools are not infallible, however. As Lyu notes, different detection algorithms can produce widely varying results, and a false sense of reliability can be more dangerous than no detection at all.
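For readers curious what a “probability score” amounts to in practice, here is a minimal sketch in Python. It does not reproduce the DeepFake-O-Meter’s actual interface; the detector names, the placeholder scores, and the run_detector and assess functions are all hypothetical, meant only to show why a single confidence score should be read as a hint rather than a verdict.

```python
from statistics import mean, pstdev

def run_detector(detector_name, video_path):
    # Placeholder scores for illustration only; a real system would run each
    # detection model on the file and return its estimated probability
    # that the content is AI-manipulated.
    placeholder_scores = {"detector_a": 0.91, "detector_b": 0.48, "detector_c": 0.73}
    return placeholder_scores[detector_name]

def assess(video_path):
    names = ("detector_a", "detector_b", "detector_c")
    scores = {name: run_detector(name, video_path) for name in names}
    for name, p in scores.items():
        print(f"{name}: {p:.0%} chance of manipulation")
    # Wide disagreement between detectors is itself a warning sign:
    # treat any single score as a hypothesis to verify, not a verdict.
    spread = pstdev(scores.values())
    if spread > 0.15:
        return "detectors disagree; verify by other means"
    return f"detectors roughly agree (average {mean(scores.values()):.0%})"

print(assess("suspicious_clip.mp4"))
```

With the illustrative numbers above, the three scores range from 48 to 91 percent, and the sketch concludes that the detectors disagree, which is exactly the situation Lyu warns about: the human reviewer, not the score, must make the final call.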
Ultimately, the most powerful detection tool remains the human mind — trained, skeptical and methodical.
Media experts, including researchers at the MIT Media Lab, emphasize the need for critical observation. Subtle clues often betray even the most polished deepfakes: unnaturally smooth skin, inconsistent lighting, odd blinking rates, mismatched facial hair. Deepfakes frequently get the physics of light and shadow wrong, or the natural movement of muscles under the skin. A stare that lingers too long, or eyeglasses that fail to reflect glare where they should, can suggest synthetic origins.
And yet, spotting these is becoming increasingly difficult as AI technology improves. This is why vigilance must extend beyond the screen. In a media landscape saturated with potential fabrications, every suspicious video and overly perfect image demands the mindset of a researcher. Just as a cautious online shopper compares prices across multiple platforms before making a purchase, information seekers must cross-check sources, verify details, and check — then recheck — information before trusting it as real.
It is no longer enough to read one article or watch one video. Verification requires scouring multiple news sources, cross-examining timelines, comparing statements, and sometimes seeking out the original, unedited footage. Every piece of information must be treated not as a fact to be consumed but as a hypothesis to be tested.
Experts agree that the fight against disinformation must be a partnership between humans and machines. No AI tool alone can protect us. The challenge is political as well as technical, and it demands skepticism, patience, and, perhaps most importantly, a refusal to be passive.
We now live in a time when seeing is not believing. When a false video can ignite political crises and when foreign actors can orchestrate mass deception campaigns in a neighboring country’s backyard, the responsibility to discern truth from fiction falls increasingly on every one of us.
In the newsroom, on social media, and in everyday conversations, trust must now be replaced with verification. To adapt to this new reality, we must think not as passive consumers of information, but as scientists — probing, questioning, testing, and discarding what might be false.
Because in a world where anything can be faked, only critical thinking remains real.