
As "fake news" continues to proliferate in the rapidly evolving digital landscape, misleading the public and undermining democratic institutions, Meta detailed the company's efforts to safeguard users and ensure responsible content moderation, particularly for materials related to political campaigns and social issues.
Meta representatives Rob Abrams and Rafael Frankel told the House of Representatives session on Tuesday that the most important place to start is the company's Community Standards.
"Meta has a single, global set of Community Standards that governs how we moderate content and what is allowed on our platforms around the world,” Abrams explained, emphasizing that Meta enforces stricter rules when it comes to moderating paid content, especially those generated through artificial intelligence (AI) and those that carry political messages.
“Obviously, if we’re taking money [for content], the responsibility is even greater,” he said.
Abrams added that all paid content of a political nature, whether referring to a campaign, a politician, or a social issue, is subject to Meta’s policies on political and issue-based advertising.
Under these policies, advertisers are required to clearly disclose who is funding and sponsoring the content. They must also complete an authorization process, which typically involves submitting a government-issued ID and verifying that they are based in the Philippines, before they can run such ads in the country.