There may be a new weapon in the war against misinformation: mice.
As part of the evolving battle against “deepfakes” – videos and audio of famous figures, created with machine learning and designed to look and sound genuine – researchers are turning to new methods in an attempt to get ahead of the increasingly sophisticated technology.
And it’s at the University of Oregon’s Institute of Neuroscience where one of the more outlandish ideas is being tested. A research team is training mice to detect irregularities in speech, a task the animals can perform with remarkable accuracy.
This is a snippet from an article by the BBC; you can read the full article here…
Turn selfies into classical portraits with the AI that fuels deepfakes
It’s the same AI technique behind deepfakes, but also a $432,500 artwork.
The news: The tool lets users upload their photos and, a few seconds later, view a faux classical-style watercolour, oil, or ink portrait based on them. Each one is unique. You can give it a go here.
The threat deepfake audio poses to businesses cannot be overstated. Tricking a company’s accounting department into wiring $1 million by impersonating the CEO in an “emergency” is one thing, but the tech could also be used for sabotage.
What if a rival – or even a nation-state – wanted to sink Apple’s stock price? A well-timed deepfake audio clip that purports to capture Tim Cook privately telling someone that iPhone sales are tanking could do just that, wiping billions off the company’s market value in seconds.
This is a snippet from an article by Fast Company; you can read the full article here…
The chairman of the House Intelligence Committee wants to know how social media will handle deepfake videos ahead of the next presidential election.
Rep. Adam Schiff, chairman of the House Intelligence Committee, has sent letters asking Facebook, Google and Twitter how they plan to deal with deepfakes ahead of the 2020 presidential election. Schiff’s concerns follow the disinformation campaigns that spread across social media during the 2016 presidential campaign, according to a statement released Monday.
This is a snippet from an article by CNET; you can read the full article here…
The PetSwap tool by Nvidia uses an algorithm similar to those behind so-called deepfakes to transform an image of your beloved pet into an image of another animal.
Ever wanted to know what your pet would look like if it were a different animal?
Well, whether you have or not, there’s now a new web tool that allows you to find out.
This is a snippet from an article by The Star; you can read the full article here…
Generative adversarial networks, the algorithms responsible for deepfakes, have developed a bit of a bad rap of late. But their ability to synthesize highly realistic images could also have important benefits for medical diagnosis.
Deep-learning algorithms are excellent at pattern-matching in images; they can be trained to detect different types of cancer in a CT scan, differentiate diseases in MRIs, and identify abnormalities in an x-ray.
This is a snippet from an article by MIT Technology Review; you can read the full article here…
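The snippet above names generative adversarial networks as the algorithms behind deepfakes. As a rough illustration of the adversarial game they play – and not code from any of the cited articles – here is a minimal toy sketch in which a two-parameter generator learns to mimic samples from a 1-D Gaussian while a logistic discriminator tries to tell real samples from generated ones. All names and hyperparameters are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: samples from a Gaussian centred at 4
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: z -> a*z + b (shifts and scales noise toward the data)
a, b = 1.0, 0.0
# Discriminator: logistic regression on a scalar input
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(500):
    # --- discriminator step: ascend log D(x) + log(1 - D(G(z))) ---
    x = real_samples(batch)
    z = rng.normal(size=(batch, 1))
    g = a * z + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    w += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- generator step: ascend log D(G(z)) (non-saturating loss) ---
    z = rng.normal(size=(batch, 1))
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should have drifted toward the data
fake = a * rng.normal(size=(1000, 1)) + b
```

Real deepfake systems replace these two scalar-parameter models with deep convolutional networks over images or spectrograms, but the alternating two-player training loop is the same idea.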
To combat the growing spread of misinformation ahead of the 2020 U.S. general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 staff from across its newsroom to form the committee. Each of them is on call to answer reporters’ queries about whether a piece of content has been manipulated.
This is a snippet from an article by Digiday; you can read the full article here…