Deepfake Porn Is Still a Threat, Particularly for K-Pop Stars

When we talk about deepfakes, the term used to describe a type of digitally manipulated video, most of the discussion focuses on the implications of deepfake technology for spreading fake news and potentially even destabilizing elections, particularly the upcoming 2020 U.S. election.

A new study from Deeptrace Labs, a cybersecurity company that detects and monitors deepfakes, suggests, however, that the biggest threat posed by deepfakes has little to do with politics at all, and that women all over the world may be at risk.

This is a snippet from an article by Rolling Stone; you can read the full article here…

Deepfake videos ‘double in nine months’

New research shows an alarming surge in the creation of so-called deepfake videos, with the number online almost doubling in the last nine months. There is also evidence that the production of these videos is becoming a lucrative business.

And while much of the concern about deepfakes has centred on their use for political purposes, the evidence is that pornography accounts for the overwhelming majority of the clips.

This is a snippet from an article by the BBC; you can read the full article here…

The Next Wave of Digital Paranoia: Full-Body Deepfakes Are Now Here

So, we’ve already warned you of the dangers of deepfakes.

Security experts have warned that deepfakes could play a sinister role in the 2020 election. And we’ve already seen the mayhem that erupted when a Nancy Pelosi video was slowed down to make it appear as though she were drunk.

Though not a deepfake, the footage showcased how fast an altered video can go viral and make people question the validity of what they are seeing.

This is a snippet from an article by the Observer; you can read the full article here…

Google makes deepfakes to fight deepfakes

Google has released a database of 3,000 deepfakes – videos that use artificial intelligence to alter faces or to make people say things they never did.

The videos are of actors and use a variety of publicly available tools to alter their faces.

The search giant hopes it will help researchers build the tools needed to take down “harmful” fake videos.

There are fears such videos could be used to promote false conspiracy theories and propaganda.

This is a snippet from an article by the BBC; you can read the full article here…

Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

The CEO of an energy firm based in the UK thought he was following his boss’s urgent orders in March when he transferred funds to a third party. But the request actually came from the AI-assisted voice of a fraudster.

The Wall Street Journal reports that the mark believed he was speaking to the CEO of his business’s parent company, based in Germany. The German-accented caller told him to send €220,000 ($243,000 USD) to a Hungarian supplier within the hour. The firm’s insurance company, Euler Hermes Group SA, shared information about the crime with the WSJ but would not reveal the names of the targeted businesses.

This is a snippet from an article by Gizmodo; you can read the full article here…

Deepfake evidence so realistic ‘innocent people will go to jail’ warns expert

Deepfake content is getting ‘really good, really fast’ warns Shamir Allibhai

Deepfake material including fabricated evidence will become so realistic it will land innocent people in jail, an expert has warned.

Shamir Allibhai, CEO of video verification company Amber, believes content including CCTV and voice recordings will be subject to gross manipulation.

He spoke amid mounting alarm over deepfake technology online, including a recent viral video of comedian Bill Hader morphing into actor Tom Cruise.

This is a snippet from an article by the Daily Star; you can read the full article here…

In fighting deepfakes, mice may be great listeners

There may be a new weapon in the war against misinformation: mice.

As part of the evolving battle against “deepfakes” – videos and audio featuring famous figures, created using machine learning and designed to look and sound genuine – researchers are turning to new methods in an attempt to get ahead of the increasingly sophisticated technology.

And it’s at the University of Oregon’s Institute of Neuroscience where one of the more outlandish ideas is being tested. A research team is working on training mice to detect irregularities in speech, a task the animals can perform with remarkable accuracy.

This is a snippet from an article by the BBC; you can read the full article here…