
We continue with insights from Day 2 of “Beyond the Algorithm: Exploring the Cybersecurity and AI Revolution,” a two-day webinar hosted by the Institute of Corporate Directors’ Technology Governance Committee.
Dr. Erika Fille Legara, managing director and chief AI and data officer of the Education Center for AI Research, emphasized that boards must understand the technologies they seek to govern. “You cannot govern what you do not understand,” she said, pointing out that fairness and explainability lie at the core of ethical AI. She offered a striking example: if a model penalizes people from Leyte because of typhoon history, it effectively punishes them twice. She also noted the critical need for human oversight. In a survey she conducted of over 150 compliance officers on who should be held accountable if AI fails, 40 percent pointed to the algorithm. “That tells us we have more work to do,” she said.
Carmelo Alcala, who leads MIS, Compliance, Risk and Audit, as well as Data Privacy at Visaya Knowledge Process Outsourcing Corp., highlighted the hidden risks of third-party GenAI use. “Your vendors might be using ChatGPT to summarize your board decks,” he warned. “That’s proprietary data you don’t control anymore.” To address this, his team established clear boundaries: if it’s not Office365 Copilot, it’s blocked, regardless of whether it’s used by employees or vendors. He added that cyber governance can’t be siloed anymore. “Your cyber posture is only as strong as your weakest endpoint — and that might be a partner’s intern using a GenAI plugin.”
Manuel Joey Regala, president and CEO of CyberViser Inc., said his organization created a formal model governance committee, with all AI deployments requiring its review. “You want to empower innovation,” he said, “but within a gated sandbox, not an open playground.” His company rolled out mandatory GenAI literacy training across the bank — no exceptions — because, in his words, “organization-wide AI literacy is non-negotiable.”
Romeo Fernando Aquino Jr., chair of the ICD’s Technology Governance Committee, challenged boards to define AI’s value in business terms. “If the answer is, ‘It will make things faster,’ that’s not enough. Quantify it.” He encouraged companies to bring in independent directors with tech backgrounds to help decode complex discussions. “AI is not just a CIO issue,” he stressed. “Cybersecurity is not just an IT problem. These are strategic board issues.”
At this point, it helps to look back in order to move forward.
A quarter-century ago, boardrooms were just beginning to confront the risks of the internet and the occasional rogue floppy disk — not to mention Y2K, which was less “end of days” and more “have you tried turning it off and on again?” Cybersecurity then often meant installing antivirus software once a year. AI belonged to the realm of science fiction or Hollywood, not strategy meetings.
And yet, the core dilemmas remain: how do boards exercise oversight if they don’t fully understand the technology? How do we balance speed with safety, innovation with accountability?
What’s changed is the speed and the stakes. AI is no longer just a department-level experiment; it’s a strategic lever across entire conglomerates. Cyber risk no longer lives in the IT closet — it sits squarely on the board’s agenda. The technology may be new, but the governance challenge has simply evolved.
My takeaway is this: we need to go back to basics. That means designing governance frameworks with clear objectives and well-defined guardrails. We must build working knowledge of technology at the board level — no need to code, but we must ask the right questions. We must trust our teams and let governance be a guide, not a bottleneck. Feedback must be timely, constructive, and focused on alignment. Above all, our decisions must center on human impact.
In the end, AI and all other technologies are just tools. Governance exists to ensure those tools serve people — not the other way around.
We’ve seen this before — with desktops, the internet, even email (remember when “reply all” was a revolution?). AI won’t erase the fundamentals — it will magnify them.