California lawmakers are doing more than putting the final touches on the nation’s most aggressive data-privacy law.
A new bill proposed late Monday by Assemblyman Marc Berman (D., Palo Alto) would ban the distribution of so-called deepfake videos or photos in the 60 days before an election.
The technology, which uses artificial intelligence to produce misleading images, such as a recent video of House Speaker Nancy Pelosi (D., Calif.) made to appear drunk, has bedeviled security experts at Facebook Inc. and alarmed public figures. The bogus Pelosi video went viral and was viewed more than 3 million times.
New York (CNN Business) Instagram head Adam Mosseri said the photo-sharing platform is still figuring out how to address doctored videos, also known as deepfakes.
“We don’t have a policy against deepfakes currently,” Mosseri told CBS This Morning co-host Gayle King in an interview that aired Tuesday. “We’re trying to evaluate if we wanted to do that and if so, how you would define deepfakes.”
Machines can now look for visual inconsistencies to identify AI-generated dupes, much as humans do.
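The article does not describe how such detectors work internally, but one family of cues is temporal: consecutive frames of genuine video change smoothly, while crude manipulations can introduce abrupt jumps. As a minimal toy sketch (treating frames as grayscale pixel grids, with a threshold chosen arbitrarily for illustration), a detector along these lines might flag frames whose change from the previous frame is abnormally large:

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    total = sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return total / (len(a) * len(a[0]))

def flag_inconsistent_frames(frames, threshold=30.0):
    """Return indices of frames that differ abnormally from the frame before them."""
    flags = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            flags.append(i)
    return flags
```

Real forensic systems use learned features rather than raw pixel differences, but the principle is the same: look for statistics that genuine footage rarely produces.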
Here’s a scenario that’s becoming increasingly common: you see that a friend has shared a video of a celebrity doing or saying something on social media.
You watch it, because you’re only human, and something about it strikes you as deeply odd. Not only is Jon Snow from Game of Thrones apologizing for the writing on the show’s last season, but the way his mouth is moving just looks off.
The technology to create believable deepfakes — computer-generated media depicting real people doing or saying things that never occurred — is already here and widely accessible.
And yet, as America moves toward its 2020 presidential election, Axios reports that not a single candidate can point to measures they’ve taken to prevent the spread of this potentially dangerous media.
Fake news, the Momo hoax and reality shows that are anything but — in a world where it’s getting pretty difficult to tell fact from fiction, a new artificial intelligence bot might make it even harder.
OpenAI, a nonprofit backed by Elon Musk, developed a language algorithm called GPT-2. Sometimes described as deepfakes for text, it can take a single sentence and continue the paragraph, or write a full essay, matching your tone and using proper syntax. This YouTube video shows that the algorithm can even write a shockingly convincing news article.
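GPT-2 itself is a large neural network, but the core idea it scales up, predicting the next word from the words so far, can be illustrated with a far simpler bigram model. The sketch below is a toy analogue, not GPT-2: it records which word follows which in a training text, then extends a seed by repeatedly sampling a plausible next word.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def continue_text(model, seed, length=10, rng=None):
    """Extend the seed word-by-word by sampling a next word seen in training."""
    rng = rng or random.Random(0)
    out = seed.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Where this toy looks back one word, GPT-2 conditions on hundreds of preceding tokens, which is what lets it sustain tone and syntax across a whole essay.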
DENVER — When Peter Cushing turned to face the camera in Rogue One, Star Wars fans were as excited as they were confused. After all, the actor had died more than 20 years earlier, and yet, there was no mistaking him.
For a major Hollywood movie, this is a clever trick. But not everyone is trying to entertain us, and you don’t need a million-dollar budget to deceive.
“You take the face of one person and put it on the body of another,” said Jeff Smith, associate director at the National Center for Media Forensics at the University of Colorado Denver.
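Production deepfakes perform that substitution with learned encoder–decoder networks that align, warp, and blend the face. None of that is in this toy, but the core operation the quote describes, replacing one region of an image with another, can be sketched naively on pixel grids:

```python
def paste_region(dst, src, top, left):
    """Copy src (a 2D pixel grid) into dst at (top, left).

    A crude stand-in for a face swap: real systems align and blend
    the region; this just overwrites pixels.
    """
    out = [row[:] for row in dst]  # copy so the original frame is untouched
    for r, src_row in enumerate(src):
        for c, pixel in enumerate(src_row):
            out[top + r][left + c] = pixel
    return out
```

The gap between this and a convincing fake, matching pose, lighting, and skin tone frame after frame, is exactly what the AI models supply.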
A new form of online disinformation has some government officials uneasy about its potential effects on upcoming political campaigns and elections, but policy efforts to address it are sparse.
“Deepfakes” — videos altered with the help of AI that can make people (typically celebrities or politicians) appear to do and say things they actually did not — are not only weird, uncanny manifestations of a new era of technological progress; according to some officials, they are also a national security threat.