The chairman of the House Intelligence Committee wants to know how social media will handle deepfake videos ahead of the next presidential election.
Rep. Adam Schiff, chairman of the House Intelligence Committee, has sent letters asking Facebook, Google and Twitter how they plan to deal with deepfakes ahead of the 2020 presidential election. Schiff’s concerns follow the disinformation campaigns that spread across social media during the 2016 presidential campaign, according to a statement released Monday.
Generative adversarial networks, the algorithms responsible for deepfakes, have developed a bit of a bad rap of late. But their ability to synthesize highly realistic images could also have important benefits for medical diagnosis.
Deep-learning algorithms are excellent at pattern-matching in images; they can be trained to detect different types of cancer in a CT scan, differentiate diseases in MRIs, and identify abnormalities in an x-ray.
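The pattern-matching described above boils down to learned convolutional filters that respond to visual structures in a scan. As a toy illustration (not from any of the articles here, and vastly simplified compared to a real diagnostic model), the sketch below hand-codes a single vertical-edge filter and slides it over a synthetic 8×8 "scan"; a trained network learns thousands of such filters automatically.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D 'valid' cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "scan": a dark field with one bright vertical stripe (the "abnormality").
scan = np.zeros((8, 8))
scan[:, 4] = 1.0

# Hand-crafted vertical-edge detector; a CNN learns filters like this from data.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

response = convolve2d(scan, kernel)
# Strong positive/negative responses flank the stripe, flagging its location.
print(response[0])
```

The filter fires strongly only where the stripe's edges sit, which is the basic mechanism behind both detecting anomalies in an x-ray and, in a GAN's discriminator, judging whether a synthesized image looks real.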
To combat the growing threat of misinformation ahead of the 2020 U.S. general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on call to answer reporters’ queries about whether a piece of content has been manipulated.
Created by YouTuber Ctrl Shift Face, the video seems harmless enough. It’s kind of fun to see such iconic performers dropped into such a classic cinema moment. (For action movie buffs, it’s also a nice nod to a moment in Schwarzenegger’s Last Action Hero where we see a T2 poster with Stallone in the starring role.)
Facebook CEO Mark Zuckerberg said Wednesday that the company is reevaluating its policies around “deepfakes” after a doctored video of House Speaker Nancy Pelosi went viral on the platform in May, an episode the tech executive called an “execution mistake.”
Zuckerberg, speaking at the Aspen Ideas Festival in Colorado, said Facebook is considering creating a newer, clearer definition for deepfakes in its policies, to better distinguish the seemingly real, AI-manipulated videos from traditional misinformation and address them accordingly.
The House Intelligence Committee is examining the problem of ‘deepfakes’ – fabricated images or videos that have been manipulated with machine learning to seem real.
Law school professor Danielle Citron cited the case of journalist Rana Ayyub in her hearing testimony. Ayyub had to go into hiding for her own safety after an online mob spread a fake pornographic video of her.