AI-created hate content surfacing "more and more" on the web, experts say

UBC prof says hate groups "historically early adopters of new internet technologies and techniques"
Richard Robertson, B'nai Brith Canada Director of Research and Advocacy, holds up an Annual Audit of Antisemitic Incidents in Canada during a press conference in Ottawa on Monday, May 6, 2024. B'nai Brith Canada flagged the issue of AI-generated hate content in a recent report on antisemitism. THE CANADIAN PRESS/Sean Kilpatrick

The clip is of a real historical event: a speech given by Nazi dictator Adolf Hitler in 1939 at the beginning of the Second World War.

But there is one major difference. This viral video was altered by artificial intelligence, and in it, Hitler delivers antisemitic remarks in English.

A far-right conspiracy influencer shared the content on X, formerly known as Twitter, earlier this year, and it quickly racked up more than 15 million views, Wired magazine reported in March.

It's just one example of what researchers and organizations that monitor hateful content are calling a worrying trend.

They say AI-generated hate is on the rise.

"I think everybody who researches hate content or hate media is seeing more and more AI-generated content," said Peter Smith, a journalist who works with the Canadian Anti-Hate Network.

Chris Tenove, assistant director at the University of British Columbia's Centre for the Study of Democratic Institutions, said hate groups, such as white supremacist groups, "have been historically early adopters of new internet technologies and techniques."

It's a concern a UN advisory body flagged in December. It said it was "deeply concerned" about the possibility that antisemitic, Islamophobic, racist and xenophobic content "could be supercharged by generative AI."

Sometimes that content can bleed into real life.

After AI was used to generate what Smith described as "extremely racist Pixar-style movie posters," some individuals printed the signs and posted them on the side of movie theatres, he said.

"Anything that is available to the public, that is popular or is emerging, especially when it comes to technology, is very quickly adapted to produce hate propaganda."

Generative AI systems can create images and videos almost instantly with just a simple prompt.

Instead of an individual devoting hours to making a single image, they can make dozens "in the same amount of time just with a few keystrokes," Smith said.

B'nai Brith Canada flagged the issue of AI-generated hate content in a recent report on antisemitism.

The report says last year saw an "unprecedented rise in antisemitic images and videos which have been created or doctored and falsified using AI."

Director of research and advocacy Richard Robertson said the group has observed that "really horrible and graphic images, generally relating to Holocaust denialism, diminishment or distortion, were being produced using AI."

He cited the example of a doctored image depicting a concentration camp with an amusement park inside it.

"Victims of the Holocaust are riding on the rides, seemingly enjoying themselves at a Nazi concentration camp, and arguably that's something that could only be produced using AI," he said.

The organization's report also says AI has "greatly impacted" the spread of propaganda in the wake of the Israel-Hamas war.

AI can be used to make deepfakes, or videos that feature remarkably realistic simulations of celebrities, politicians or other public figures.

Tenove said deepfakes in the context of the Israel-Hamas war have caused the spread of false information about events and attributed false claims to both the Israeli military and Hamas officials.

"So there's been that kind of stuff, that's trying to stoke people's anger or fear regarding the other side and using deception to do that."

Jimmy Lin, a professor at the University of Waterloo's school of computer science, agrees there has been "an uptick in terms of fake content that's specifically designed to rile people up on both sides."

Amira Elghawaby, Canada's special representative on combating Islamophobia, says there has been an increase in both antisemitic and Islamophobic narratives since the beginning of the conflict.

She says the issue of AI and hate content calls for more study and discussion.

There's no disagreement that AI-generated hate content is an emerging issue, but experts have yet to reach a consensus on the scope of the problem.

Tenove said there is "a fair amount of guesswork out there right now," similar to broader societal questions about "harmful or problematic content that spreads on social-media platforms."

Systems like ChatGPT have safeguards built in, Lin said. An OpenAI spokesperson confirmed that before the company releases any new system, it teaches the model to refuse to generate hate speech.

But Lin said there are ways of jailbreaking AI systems, noting certain prompts can "trick the model" into producing what he described as nasty content.

David Evan Harris, a chancellor's public scholar at the University of California, Berkeley, said it's hard to know where AI content is coming from unless the companies behind these models ensure it is watermarked.

He said some AI models, like those made by OpenAI or Google, are closed-source models. Others, like Meta's Llama, are made more openly available.

Once a system is opened up to all, he said, bad actors can strip out safety features and produce hate speech, scams and phishing messages in ways that are very difficult to detect.

A statement from Meta said the company builds safeguards into its systems and doesn't open source "everything."

"Open-source software is typically safer and more secure due to ongoing feedback, scrutiny, development and mitigations from the community," it said.

In Canada, there is federal legislation that the Liberal government says will help address the issue. That includes Bill C-63, a proposed bill to address online harms.

Chantalle Aubertin, a spokesperson for Justice Minister Arif Virani, said the bill's definition of content that foments hatred includes "any type of content, such as images and videos, and any artificially generated content, such as deepfakes."

Innovation Canada said its proposed artificial intelligence regulation legislation, Bill C-27, would require AI content to be identifiable, for example through watermarking.

A spokesperson said that bill would also "require that companies responsible for high-impact and general-purpose AI systems assess risks and test and monitor their systems to ensure that they are working as intended, and put in place appropriate mitigation measures to address any risks of harm."
