Facebook CEO Mark Zuckerberg said Wednesday that the company is reevaluating its policies around “deepfakes” after a doctored video of House Speaker Nancy Pelosi went viral on the platform in May, an episode the tech executive called an “execution mistake.”
Zuckerberg, speaking at the Aspen Ideas Festival in Colorado, said Facebook is considering creating a newer, clearer definition for deepfakes in its policies, to better distinguish the seemingly real, AI-manipulated videos from traditional misinformation and address them accordingly.
The House Intelligence Committee is examining the problem of “deepfakes” – fabricated images or videos that have been manipulated with machine learning to appear real.
Law professor Danielle Citron pointed in her hearing testimony to the case of journalist Rana Ayyub, who had to go into hiding for her safety after an online mob spread a fake pornographic video of her.
California lawmakers are doing more than putting the final touches on the nation’s most aggressive data-privacy law.
A new bill proposed late Monday by Assemblyman Marc Berman (D., Palo Alto) would ban the distribution of so-called deepfake videos or photos in the 60 days before an election.
The technology, which uses artificial intelligence to produce misleading images such as a recent video of House Speaker Nancy Pelosi (D., Calif.) apparently drunk, has bedeviled security experts at Facebook Inc. and alarmed public figures. The bogus Pelosi video went viral and was viewed more than 3 million times.
New York (CNN Business) Instagram head Adam Mosseri said the photo-sharing platform is still figuring out how to address doctored videos, also known as deepfakes.
“We don’t have a policy against deepfakes currently,” Mosseri told CBS This Morning co-host Gayle King in an interview that aired Tuesday. “We’re trying to evaluate if we wanted to do that and if so, how you would define deepfakes.”
Machines can now look for visual inconsistencies to identify AI-generated dupes, a lot like humans do.
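As a loose illustration of this idea, the sketch below flags video frames whose change from the previous frame is anomalously abrupt. This is a toy stand-in, not a real deepfake detector: production systems learn far subtler cues (lighting, blinking, face-boundary artifacts), and the function name, threshold rule, and synthetic "video" here are all invented for the example.

```python
import numpy as np

def flag_inconsistent_frames(frames, k=1.0):
    """Flag frames whose pixel-level change from the previous frame is
    anomalously large -- a crude stand-in for the visual-inconsistency
    cues real detectors learn. `frames` is a list of equal-shaped
    grayscale arrays."""
    diffs = np.array([
        np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        for i in range(1, len(frames))
    ])
    threshold = diffs.mean() + k * diffs.std()
    # Frame i+1 is suspicious if its difference score exceeds the threshold.
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Mostly static synthetic "video" with one abruptly altered frame:
rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(8, 8))
frames = [base + rng.integers(0, 2, size=(8, 8)) for _ in range(10)]
frames[6] = rng.integers(0, 255, size=(8, 8))  # the tampered frame
print(flag_inconsistent_frames(frames))
```

Because the tampered frame differs sharply from both its neighbors, both transitions around it stand out, so the check flags frames 6 and 7.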
Here’s a scenario that’s becoming increasingly common: you see that a friend has shared a video of a celebrity doing or saying something on social media.
You watch it, because you’re only human, and something about it strikes you as deeply odd. Not only is Jon Snow from Game of Thrones apologizing for the writing on the show’s last season, but the way his mouth is moving just looks off.
The technology to create believable deepfakes — computer-generated media depicting real people doing or saying things that never occurred — is already here and widely accessible.
And yet, as America moves toward its 2020 presidential election, Axios reports that not a single candidate can point to measures they have taken to prevent the spread of this potentially dangerous media.
Fake news, the Momo hoax and reality shows that are anything but — in a world where it’s getting pretty difficult to tell fact from fiction, a new artificial intelligence bot might make it even harder.
OpenAI, a nonprofit backed by Elon Musk, developed a language algorithm called GPT-2. It has been described as deepfakes for text: feed it a single sentence and it will continue the paragraph, or write a full essay, matching your tone and using proper syntax. This YouTube video shows that the algorithm can even write a shockingly convincing news article.
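GPT-2 itself is a large neural network, but the core idea it shares with simpler models can be illustrated with a toy: continue text one step at a time, sampling each next character from what has historically followed the current context. The character-level Markov chain below is a deliberately primitive stand-in for the neural approach, with a made-up corpus and function names.

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def continue_text(model, seed, length=60, order=3):
    """Extend `seed` one character at a time, sampling each next character
    from those observed after the current context during training."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: nothing to sample from
            break
        out += random.choice(choices)
    return out

corpus = ("the algorithm continues the text. the algorithm matches the tone. "
          "the algorithm writes the essay.")
model = train(corpus)
random.seed(0)
print(continue_text(model, "the "))
```

The toy model can only parrot fragments of its tiny corpus; GPT-2's leap is doing the same kind of continuation with a model trained on vast amounts of web text, which is what makes its output read like a human wrote it.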