Generative adversarial networks, the algorithms responsible for deepfakes, have developed a bit of a bad rap of late. But their ability to synthesize highly realistic images could also have important benefits for medical diagnosis.
Deep-learning algorithms are excellent at pattern-matching in images; they can be trained to detect different types of cancer in a CT scan, differentiate diseases in MRIs, and identify abnormalities in an X-ray.
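At their core, these networks pit two models against each other: a generator that synthesizes samples and a discriminator that tries to tell them apart from real data. Here is a minimal, purely illustrative sketch of that adversarial loop in plain NumPy, on a one-dimensional toy problem (every name, distribution, and hyperparameter below is my own choice for the sketch, not drawn from any production system):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator g(z) = w*z + b, with noise z ~ N(0, 1).
w, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(a*x + c): probability that x is real.
a, c = 0.0, 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr_d, lr_g, batch = 0.05, 0.05, 64
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, size=batch)
    xf = w * z + b
    dr, df = sigmoid(a * xr + c), sigmoid(a * xf + c)
    grad_a = np.mean((dr - 1.0) * xr) + np.mean(df * xf)
    grad_c = np.mean(dr - 1.0) + np.mean(df)
    a -= lr_d * grad_a
    c -= lr_d * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, size=batch)
    xf = w * z + b
    df = sigmoid(a * xf + c)
    grad_w = np.mean((df - 1.0) * a * z)
    grad_b = np.mean((df - 1.0) * a)
    w -= lr_g * grad_w
    b -= lr_g * grad_b

# The generator's output distribution drifts toward the real mean of 4.
fake_mean = float((w * rng.normal(0.0, 1.0, size=5000) + b).mean())
print(fake_mean)
```

The same push-and-pull, scaled up to deep convolutional networks trained on photographs, is what lets GANs produce the highly realistic imagery described above.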
To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on call to answer reporters’ queries about whether a piece of content has been manipulated.
Created by YouTuber Ctrl Shift Face, the video seems harmless enough. It’s kind of fun to see such iconic performers spliced into such a classic cinema moment. (For action movie buffs, it’s also a nice nod to a moment in Schwarzenegger’s Last Action Hero where we see a T2 poster with Stallone in the starring role).
Facebook CEO Mark Zuckerberg said Wednesday that the company is reevaluating its policies around “deepfakes” after a doctored video of House Speaker Nancy Pelosi went viral on the platform in May, an episode the tech executive called an “execution mistake.”
Zuckerberg, speaking at the Aspen Ideas Festival in Colorado, said Facebook is considering creating a newer, clearer definition for deepfakes in its policies, to better distinguish the seemingly real, AI-manipulated videos from traditional misinformation and address them accordingly.
The House Intelligence Committee is examining the problem of ‘deepfakes’ – false images or videos that have been heavily edited with machine learning to seem real.
In her hearing testimony, law school professor Danielle Citron pointed to the case of journalist Rana Ayyub, who had to go into hiding for her safety after an online mob spread a fake pornographic video of her.
California lawmakers are doing more than putting the final touches on the nation’s most aggressive data-privacy law.
A new bill proposed late Monday by Assemblyman Marc Berman (D., Palo Alto) would ban the distribution of so-called deepfake videos or photos within 60 days of an election.
The technology, which uses artificial intelligence to produce misleading images, such as a recent video that made House Speaker Nancy Pelosi (D., Calif.) appear drunk, has bedeviled security experts at Facebook Inc. and alarmed public figures. The bogus Pelosi video went viral and was viewed more than 3 million times.
New York (CNN Business) Instagram head Adam Mosseri said the photo-sharing platform is still figuring out how to address doctored videos, also known as deepfakes.
“We don’t have a policy against deepfakes currently,” Mosseri told CBS This Morning co-host Gayle King in an interview that aired Tuesday. “We’re trying to evaluate if we wanted to do that and if so, how you would define deepfakes.”
Machines can now look for visual inconsistencies to identify AI-generated dupes, a lot like humans do.
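One simple version of that idea can be sketched in code. The toy NumPy example below (entirely illustrative; the function names, the residual filter, and the synthetic data are my own, not any platform's actual detector) compares high-frequency noise statistics inside a candidate face region against the frame as a whole, since a spliced or synthesized region often carries a different noise fingerprint than the surrounding camera footage:

```python
import numpy as np

def highfreq_energy(patch):
    """Variance of a Laplacian-style residual: each interior pixel
    minus the mean of its four neighbors. Unnaturally smooth regions
    tend to score lower than real sensor noise."""
    p = patch.astype(float)
    resid = p[1:-1, 1:-1] - 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                    + p[1:-1, :-2] + p[1:-1, 2:])
    return float(resid.var())

def inconsistency_score(frame, box):
    """Gap between the noise energy inside a region (e.g. a detected
    face) and the frame overall; a large gap is one cue that the
    region was pasted in or generated separately."""
    y0, y1, x0, x1 = box
    return abs(highfreq_energy(frame[y0:y1, x0:x1]) - highfreq_energy(frame))

# Synthetic demo: a noisy "frame", then a copy with a suspiciously
# smooth patch pasted where a face would be.
rng = np.random.default_rng(1)
frame = rng.normal(0.0, 1.0, size=(64, 64))
tampered = frame.copy()
tampered[16:48, 16:48] = rng.normal(0.0, 0.05, size=(32, 32))

clean_score = inconsistency_score(frame, (16, 48, 16, 48))
fake_score = inconsistency_score(tampered, (16, 48, 16, 48))
```

Real detection systems learn far subtler cues than this single statistic, but the structure is the same: compute features of a suspect region and flag it when they fail to match the rest of the image.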
Here’s a scenario that’s becoming increasingly common: you see that a friend has shared a video of a celebrity doing or saying something on social media.
You watch it, because you’re only human, and something about it strikes you as deeply odd. Not only is Jon Snow from Game of Thrones apologizing for the writing on the show’s last season, but the way his mouth is moving just looks off.