
Generative artificial intelligence (AI) has sparked a new controversy after it was used to produce a deepfake of an American teenager killed in a 2018 school mass shooting.
Former CNN White House correspondent Jim Acosta posted an interview with a deepfake of Joaquin Oliver on his YouTube channel, "The Jim Acosta Show," on 4 August. Oliver was one of 17 people killed when expelled student Nikolas Cruz, 19, stormed Marjory Stoneman Douglas High School in Parkland, Florida, with a high-powered AR-15 rifle on Valentine's Day 2018.
The interview was sanctioned by Oliver's parents, who created their son's deepfake, and Acosta used it to campaign for gun control.
In the video, which has racked up 34,000 views to date, Joaquin tells Acosta that it’s important to talk about what happened on that day in Parkland, “so that we can create a safer future for everyone,” according to France 24 news.
Meanwhile, a new study by Cornell University in New York and researchers at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS) in Germany claims that AI-powered chatbots exhibit gender bias.
The researchers prompted the chatbots with mock personas seeking advice on what salary to request from prospective employers.
The study found that the chatbots often suggested significantly lower salary expectations to women than to their male counterparts, the New York Post (NYP) reported.
ChatGPT advised a man applying for a senior medical position in Denver to ask for $400,000 as a starting salary, while telling an equally qualified female applicant to ask for $280,000 for the same job.
The study also found that the chatbots consistently advised minorities and refugees to ask for lower salaries.