AI takedown requests seen rising

Scientists are forging a new field called ‘machine unlearning’ that tries to train algorithms to ‘forget’ offending chunks of data.
SCREENS in Toulouse, southwestern France, show the logos of Bard AI, a conversational artificial intelligence software application developed by Google. Google this week infused its Bard chatbot with a new-generation artificial intelligence model called Gemini, which it touts as being able to reason better than ChatGPT and other rivals. | Lionel BONAVENTURE/Agence France-Presse
When Australian politician Brian Hood noticed ChatGPT was telling people he was a convicted criminal, he took the old-fashioned route and threatened legal action against the AI chatbot's maker, OpenAI.

His case raised a potentially huge problem with such AI programs: What happens when they get stuff wrong in a way that causes real-world harm?

Chatbots are based on AI models trained on vast amounts of data, and retraining them is hugely expensive and time-consuming, so scientists are looking at more targeted solutions.

Hood said he talked to OpenAI, whose staff "weren't particularly helpful."

But his complaint, which made global headlines in April, was largely resolved when a new version of the software was rolled out and did not return the same falsehood — though he never received an explanation.

"Ironically, the vast amount of publicity my story received actually corrected the public record," Hood, mayor of the town of Hepburn in Victoria, told AFP this week.

OpenAI did not respond to requests for comment.

Hood might have struggled to make a defamation claim stick, as it is unclear how many people saw the false results in ChatGPT, or whether they all saw the same output.

But firms like Google and Microsoft are rapidly rewiring their search engines with AI technology. It seems likely they will be inundated with takedown requests from people like Hood, as well as complaints over copyright infringement.

While they can delete individual entries from a search engine index, things are not so simple with AI models.

To respond to such issues, a group of scientists is forging a new field called "machine unlearning" that tries to train algorithms to "forget" offending chunks of data.

One expert in the field, Meghdad Kurmanji from Warwick University in Britain, told AFP the topic had started getting real traction in the last three or four years.

Among those taking note has been Google DeepMind, the AI branch of the trillion-dollar Californian behemoth.

Google experts co-wrote a paper with Kurmanji published last month that proposed an algorithm to scrub selected data from large language models — the algorithms that underpin the likes of ChatGPT and Google's Bard chatbot.

Google also launched a competition in June for others to refine unlearning methods, which so far has attracted more than 1,000 participants.

Kurmanji said unlearning could be a "very cool tool" for search engines to manage takedown requests under data privacy laws, for example.

He also said his algorithm had scored well in tests for removing copyrighted material and fixing bias.

However, Silicon Valley elites are not universally excited.

Yann LeCun, AI chief at Facebook-owner Meta, which is also pouring billions into AI tech, told AFP the idea of machine unlearning was far down his list of priorities.

"I'm not saying it's useless, uninteresting, or wrong," he said of the paper authored by Kurmanji and others. "But I think there are more important and urgent topics."

LeCun said he was focused on making algorithms learn quicker and retrieve facts more efficiently rather than teaching them to forget.

Daily Tribune
tribune.net.ph