Doing a Double Take

Four Deepfake Scenarios That Mess With Our Minds

By Zoe Harwood


Photo by YouTube / Matthias Niessner

As if it weren’t hard enough to separate truth from lies on the internet, more and more so-called "deepfakes" are surging onto the scene. Fake news is nothing new, but deepfakes have been making recent headlines for their scary ability to spread misinformation that’s extremely hard to detect.

Simply put, a deepfake (sometimes referred to as artificially synthesized media) is a kind of Photoshop for video: artificial intelligence is used to manipulate images, audio and video into altered content. You may have seen an example of a deepfake two years ago, when a fake video of Mark Zuckerberg made national news. Or maybe you saw a video of Jordan Peele impersonating former president Barack Obama. But these only scratch the surface of how deepfakes can be used. The following four scenarios highlight the various ways deepfakes are having an impact, and not just on politics.

A friend texts you a video of a politician talking about an inflammatory topic, but something about it seems strange. First off, the title seems designed to make you angry. Also, in the video, the politician isn’t blinking and seems to be stuttering a little. That’s not normal, is it? But the voice sounds real enough, and this definitely isn’t an impostor ... so you can’t quite decide. Is this video real, and can you trust the information you’re getting from it?

You get a phone call from one of your coworkers at a really late hour. You don’t recognize the number they’re calling from (maybe they got a new phone?), but you recognize their voice immediately, even though it’s a little distorted. They’re calling to tell you that the company needs you to put $1,000 into this coworker’s bank account for “business purposes.” When you ask for details, they don’t say anything more, instead asking again for the $1,000. Do you put the money in the bank account?

An aspiring actor, you land your first big part in the newest installment of a franchise. Upon close inspection of the paperwork, you read that the production company can “digitally resurrect” both your likeness and your voice if you were to be injured or worse during production. The company claims your character is essential to the plot and that they wouldn’t want to anger fans if, god forbid, anything happened to you. Time is running out, and if you don’t sign soon, they’ll replace you with someone else. Do you accept the job?

You’re listening to the radio and hear a new song that sounds like Tupac but you sure as hell know that it’s not. The radio host announces that the song you just heard was a synthesized version of Tupac’s voice, music and sound. New technology like this is making it easier to imitate people, dead or alive, so you might be hearing new Tupac songs from now on. Do you think it’s okay this song was made without Tupac’s permission or involvement?

So, how can you spot a deepfake to prevent some of these scenarios from happening? Unfortunately, because this technology is rapidly getting better, detecting a deepfake is a bit like playing whack-a-mole: as soon as someone finds a telltale sign of a deepfake, someone else will find a way to make deepfakes more convincing. For example, as soon as one researcher noted that subjects in deepfake videos didn’t blink, or didn’t blink often, people started creating deepfakes whose subjects blinked.

As of right now, some signs you can look for to detect deepfakes include:

  • Skin discoloration
  • Poor lip synchronization
  • A mismatched voice
  • Facial features that don’t match (e.g., a nose or mouth that’s too small for the face)

However, here’s something even more important when you’re out to spot deepfakes: thinking critically. If you’re watching a video, ask yourself: “Does what this person is saying or doing make sense for them? Does this match up with what I know about this person?” Remember, deepfakes are getting smarter all the time, but so are you.


YR Media has been investigating artificial intelligence and making stories, apps and learning resources about A.I. through an equity lens. Check out our other content here. We are grateful for support in this work from the National Science Foundation. Views expressed here are our own and do not necessarily reflect those of the NSF.

Zoe Harwood
Marjerrie Masicat, Lissa Soep
Zoe Harwood, Dante Ruberto, Bayani Salgado, Elisabeth Guta, Nimah Gobir, Devin Glover
Marjerrie Masicat
Radamés Ajna
GIPHY / Tones And I, Marc Rodriguez