
The ethics of AI and protecting privacy

Carl Magadia

Artificial Intelligence (AI) is reshaping the way people live and work, from advancing medical research to strengthening business operations.

As these systems grow more powerful, they also raise pressing questions: How can society guard against bias? Who ensures privacy is protected? And what rules should govern the use of AI in the Philippines?

Experts warn that while AI holds promise, it can also magnify risks if left unchecked.

Poorly trained models may produce biased or harmful results.

Malicious applications, such as deepfakes or fake news, can erode trust and threaten public safety.

This is why ethical use, human oversight, and transparency are critical in guiding AI forward.

One of the debates gaining traction is how to safeguard the data that fuels AI.

Data sovereignty refers to the principle that information is subject to the laws of its country of origin, wherever it is stored.

The Philippines does not yet have legislation enforcing this, which makes the discussion even more urgent.

Closely related is data localization, which calls for certain types of sensitive or highly confidential information to remain within national borders.

Beyond security, localization can also reduce latency, since data accessed locally travels a shorter network distance, an important factor in making AI systems more responsive.

For countries like the Philippines, these principles will shape how AI develops.

Building local AI models that draw on domestic data, while ensuring strong privacy protections, is one way to maximize innovation while minimizing exposure to external risks.

It also ensures that algorithms and the insights they generate remain safe from undue foreign influence.

Ultimately, ethical AI is not the responsibility of one group alone. Regulators must create frameworks that protect citizens, companies must design systems that are transparent and fair, and individuals must be given the tools to understand and control how their data is used.

International examples such as Europe's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) show what stronger privacy safeguards can look like, but adaptation to the Philippine context is essential.

Globe has begun laying down its own guardrails to ensure AI adoption is both innovative and responsible.

The company has formed an AI Council and an AI Advocates Guild that oversee how AI is designed and deployed across the business.

These groups ensure that decisions are guided by clear governance principles and that teams are trained to spot risks, limit bias, and maintain human oversight.

By formalizing these structures, Globe is creating accountability from the ground up.

At the same time, Globe applies privacy-by-design standards in every AI initiative, from customer service to network management to cybersecurity.

This means data protection is built into systems from the start, not treated as an afterthought.

The company also advocates for reforms that strengthen informed consent, giving Filipinos more control over how their information is collected and used.

These steps align Globe with global best practices while staying rooted in the local context.

The path forward for AI in the Philippines will depend on how responsibly the technology is developed and governed. Globe’s commitment is clear: to help build a digital future where progress never comes at the expense of human dignity.

By combining foresight with accountability, the company is working to ensure that AI empowers Filipinos, protects their privacy, and strengthens trust in the technologies that are shaping tomorrow.