John Henry Dodson

AI Frankensteins

Now comes the company behind the wildly successful ChatGPT admitting last week that it has been dealing with covert influence operations the past three months, leveraging generative artificial intelligence for “deceptive activities.”

OpenAI says with a hint of a boast that it has disrupted five such operations that originated from what many in the free world would consider an axis of rogue states, namely Russia, China and Iran. Throw into the mix a private company based in Israel that may or may not have links to the Israeli government.

With elections around the world expected to be heavily influenced by the use of AI, ChatGPT and its main rival, Google’s Gemini, will have to be on guard against threat actors using their powerful language models to sow disinformation and misinformation.

As for the threats fended off by ChatGPT, OpenAI claimed they did not achieve success rates to crow about; the misuse it detected involved tasks like generating comments, articles and social media profiles.

Those are routine tasks for large language models, so the challenge for generative AI operators will be detecting the misuse of their platforms for dubious operations, including the debugging of code for bots and websites.

According to a report by Agence France-Presse (AFP), “companies like OpenAI are [coming] under close scrutiny over fears that apps like ChatGPT or image generator Dall-E can generate deceptive content within seconds and in high volumes.”

“This is especially a concern with major elections about to take place across the globe and countries like Russia, China and Iran known to use covert social media campaigns to stoke tensions ahead of polling day,” AFP said.

It went on to cite a disrupted operation, “Bad Grammar,” which created political comments in Russian and English on Telegram to target Moldova, the United States and, of course, Ukraine.

“The well-known Russian ‘Doppelganger’ operation employed OpenAI’s artificial intelligence to generate comments across platforms like X in languages including English, French, German, Italian and Polish,” the French news service said.

“OpenAI also took down the Chinese ‘Spamouflage’ influence op which abused its models to research social media, generate multi-language text, and debug code for websites like the previously unreported revealscum.com,” it added.

As for the Iranian group dubbed the International Union of Virtual Media and the Israel-based company STOIC, their operations involved generating content for propagation on state-linked websites, with the intent to appear legitimate.

OpenAI said ChatGPT was used by the bad actors to generate content across Instagram, Facebook, X and similar social media sites, but that “none managed to engage a substantial audience.”

This early in the AI game, those inhabiting the underbelly of the internet, the so-called Dark Web, are seen using large language models to generate high volumes of text and images, mixing them with traditional content, and then “faking engagement via AI replies.”

Meta, the company behind Facebook, maintains that it, too, has racked up successes in stopping coordinated disinformation campaigns created through generative AI, saying it has put in place mechanisms to red flag “inauthentic behavior.” Specifically, it cited the upcoming United States presidential election as an area of concern, as AI may be used to trick or confuse people.

For Meta, according to its director for disruption policy David Agranovich, systems are being trained not just to look for coordinated inauthentic content, but also for questionable behavior of bots and machine learning apps.

This is nothing new, as it is an established fact that Russian operatives used Facebook and other US-based social media platforms to try to shape the result of the 2016 election that was won by Donald Trump.

AI has taken quantum leaps since then, and so the sophistication of fake content and deep fakes can only add up to a global concern about AI-created Frankensteins.

Daily Tribune
tribune.net.ph