GLOBAL GOALS

UN principles and AI safety

António Guterres

The gap between AI and its governance is wide and growing.

AI-associated risks are many and varied. Like AI itself, they are still emerging, and they demand new solutions.

But let's be clear: They do not demand new principles. The principles for AI governance should be based on the United Nations Charter and the Universal Declaration of Human Rights.

We urgently need to incorporate those principles into AI safety.

I see three areas for action.

First, we are playing catch-up on today's threats. We need to get ahead of the wave.

In the past year, we have seen the release of powerful AI models with little consideration for the safety and security of users.

Every time this happens, it increases the risk that technology will be used maliciously by criminals or even terrorists; that it will undermine security or information integrity; that people could lose control of it; and that it could develop in unintended directions. 

We urgently need frameworks to deal with these risks, so that both developers and the public are safe and can have confidence in AI.

The second area for action concerns AI's possible long-term negative consequences.

These include disruption to job markets and economies, and the loss of cultural diversity that could result from algorithms that perpetuate biases and stereotypes.

The concentration of AI in a few countries and companies could increase geopolitical tensions.

Right now, the vast majority of advanced AI chips are made in one of the most geopolitically sensitive places on earth.

Longer-term harms extend to the potential development of dangerous new AI-enabled weapons, the malicious combination of AI with biotechnology, and threats to democracy and human rights from AI-assisted misinformation, manipulation and surveillance.

We need frameworks to monitor and analyze these trends, in order to prevent them.

The third concern is that without immediate action, AI will exacerbate the enormous inequalities that already plague our world.

This is not a risk; it is a reality.

One recent report found that no African country is in the top 50 for AI preparedness, and that 21 of the 25 lowest-scoring countries were African.

AI has huge potential to help developing economies still recovering from the COVID-19 pandemic and struggling with a mountain of debt.

It can help governments to budget, help businesses to expand, and help climate scientists to predict droughts and storms.

It can help ordinary people access vital healthcare and education. 

It can be a huge accelerator and enabler for the 17 Sustainable Development Goals.

But for that to happen, every country and every community must have access to AI — and to the digital and data infrastructure it requires. 

Right now, AI technologies are limited to a few countries and companies.

So, we need a systematic effort to change that.

In response to these three areas of concern, different stakeholders have developed over 100 sets of ethical principles for AI — which have much in common.

There is broad agreement that AI applications must be reliable, transparent, accountable, overseen by humans, and capable of being shut down.

But without global oversight, there is a real risk of incoherence and gaps.

We need a sustained, structured conversation around risks, challenges and opportunities.

The United Nations — an inclusive, equitable and universal platform for coordination on AI governance — is now fully engaged in that conversation.   

We need a united, sustained, global strategy, based on multilateralism and the participation of all stakeholders.

The United Nations is ready to play its part.

Excerpts from the UN Secretary-General's statement at the UK AI Safety Summit, 2 November 2023.