
Deepfakes were once confined to sci-fi films and Hollywood effects. Today, anyone with a smartphone and an internet connection can create them. While the technology fascinates some, it has quickly become a weapon for spreading misinformation, fake endorsements, and fabricated news.
In the Philippines, where millions rely on social media for their daily news, AI-generated disinformation threatens both individual safety and the integrity of democratic processes. The May 2025 midterm elections highlighted this danger, with manipulated videos and fake content circulating widely online.
Senator Raffy Tulfo has urged the Cybercrime Investigation and Coordinating Center (CICC) to strengthen its defenses against deepfakes, warning that they could be weaponized to mislead voters and smear political rivals. The risk, he pointed out, is that by the time a fake video goes viral, the reputational damage is often irreversible.
President Ferdinand Marcos Jr. has also raised concerns.
In response, the Department of Information and Communications Technology (DICT) is working with telecom providers on a real-time dashboard to track websites hosting harmful content.
DICT Secretary Henry Aguda described the project as both technical and regulatory. Telecom firms will handle the infrastructure, while oversight will come from the National Telecommunications Commission, the National Privacy Commission, and the CICC.
Aguda went further in July, writing directly to Meta founder Mark Zuckerberg.
In his letter, he urged Meta to take more aggressive action in removing deepfakes and fake news from its platforms. He noted that Facebook remains dominant in the Philippines, with more than 80 million daily users, yet the company relocated its content moderation teams to Singapore in 2019, leaving the Philippines without on-the-ground oversight.
Aguda also pointed to examples abroad. In Brazil, regulators warned Meta of possible prosecution over the harm caused by disinformation. Other countries have suspended social media platforms altogether when they failed to meet local standards.
His message implied that similar steps could be considered in the Philippines if Meta continues to fall short.
Meta has maintained that it works with fact-checkers and labels manipulated media when necessary. Critics argue, however, that the company’s response has been largely reactive, often removing harmful content only after it has already spread widely.
In an era when AI-generated content spreads faster than fact-checkers can respond, delays can prove disastrous. The threat extends beyond politics. Deepfakes are now fueling online scams, bogus product endorsements, and misleading advertisements that use the likeness of celebrities and influencers.
Beyond confusing voters, such tactics erode public trust in information and in legitimate businesses. The challenge is no longer about awareness but about coordinated action.
Governments, technology companies, and watchdog groups need stronger detection tools, local moderation teams, and greater transparency in how manipulated content is handled.
The Philippines may soon become a test case. If deepfakes remain unchecked in the next election cycle, the consequences will extend far beyond politics. Citizens risk being misled, defrauded, or silenced in ways that remain difficult to fully predict.
Social media has the power to inform and connect. Without stronger safeguards, however, it can just as easily deceive, divide, and destroy trust. The tools to combat deepfakes exist — the question is whether companies and regulators will use them before the damage becomes irreversible.