Deep Video Portraits are way better (and worse)

Deep Video Portraits

The strange, creepy world of “deepfakes,” videos (often explicit) with the faces of the subjects replaced by those of celebrities, set off alarm bells just about everywhere early this year.

And in case you thought that sort of thing had gone away because people found it unethical or unconvincing, the practice is back with the highly convincing “Deep Video Portraits,” which refines and improves the technique.

You can read the full article on TechCrunch…

PSA in which fake Obama warns about ‘deep fakes’ goes viral

A viral video now viewed by millions appears to show former President Obama – but it turns out actually to be an effort by “Get Out” director Jordan Peele to educate the public about “deep fakes” made with artificial intelligence. NBC’s Gadi Schwartz reports for TODAY.

You can watch the discussion on Today

and here’s the full video they talk about


YouTube shooting surfaces new deepfake conspiracy theories

New conspiracies over YouTube shooting are dangerously built on a real threat

The video in question, “Y Does the Youtube Shooter Looks Like An A.I. Computer Program?”, has only garnered around 86,000 views, but it’s still quite a weird one, suggesting that Aghdam is actually an AI creation.

And this is where a conspiracy can often turn dangerous, by including a kernel of truth. Because these face-swapped/fake AI videos — called deepfakes — are, indeed, a real thing and a real problem.

You can read the full article on Mashable

DeepFakes Explained

Video by Siraj Raval

There’s a new trend on the interwebs called ‘Deepfakes’: a machine learning technique that can be trained to paste one person’s face onto another person’s body, complete with facial expressions.

The effect isn’t yet more convincing than conventional computer graphics techniques, but it could democratize Hollywood-level special effects fakery — and, potentially, lead to a flood of convincing hoaxes.

I’ll explain how DeepFakes works, both programmatically and theoretically, in this video. It’s essentially two autoencoders that share a single encoder but have separate decoders, each trained on its own image dataset; because the latent space is shared, reconstructing person A’s frames with person B’s decoder renders B’s face with A’s expression and pose.

Code for this video (with coding challenge): https://github.com/llSourcell/deepfakes
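
To make that shared-encoder / two-decoder idea concrete, here is a minimal sketch in PyTorch. It is not the code from the linked repo; the layer sizes, loss, and learning rate are illustrative assumptions.

# Minimal sketch of the deepfakes idea: one shared encoder, two decoders
# (assumed PyTorch; layer sizes, loss, and learning rate are illustrative,
# not the architecture from the linked repo).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: renders a face from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on face crops of person A
decoder_b = Decoder()  # trained only on face crops of person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

def train_step(batch_a, batch_b):
    """Each decoder learns to reconstruct its own person from the shared latent."""
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(batch_a)), batch_a) \
         + loss_fn(decoder_b(encoder(batch_b)), batch_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# The face swap: encode a frame of person A, decode it with B's decoder,
# yielding B's face with A's expression and pose.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for a real aligned face crop
    fake_b = decoder_b(encoder(frame_a))

Because both decoders only ever see latents produced by the same encoder, that latent code ends up capturing pose, lighting, and expression rather than identity, which is what makes the decoder swap work.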