

We are consuming more online media than ever. A recent Pew Research Center survey found that 85% of U.S. adults get news on a mobile device and 67% get at least some of their news from social media.[1] Still fresh in the minds of most Americans are the internet propaganda campaigns run by Russian hackers, whose sensational headlines significantly shaped what people thought and believed. What could a hacker accomplish if they could create videos of our heroes and celebrities performing any act they choose? What if we couldn’t distinguish real from fake? What if you or your family were targeted?

Deepfake Attacks Hollywood Celebrities

In December 2017, a Reddit user known as “deepfakes” posted a series of pornographic videos featuring Scarlett Johansson, Gal Gadot, Taylor Swift, and Aubrey Plaza. Using a process called human image synthesis, he created photorealistic images and video renditions of the celebrities’ faces that were nearly indistinguishable from the real thing.

To do this, he compiled multiple photos and videos of his victims and fed them into specialized software. An artificial intelligence (AI) algorithm then ran the data through multiple computations, training itself to perform the task: convincingly swapping the celebrities’ faces with those of pornographic video actors. Voila! A Hollywood scandal was born.
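For the technically curious, here is a rough sketch of what that first “compile the photos” step can look like in code. It uses OpenCV’s stock face detector to crop faces out of ordinary photos so they can be fed to a training algorithm; the file names and output folder are illustrative placeholders, not part of any real deepfake toolkit.

```python
# A minimal sketch of the data-gathering step described above: pulling face
# crops out of ordinary photos so they can be fed to a training algorithm.
# Uses OpenCV's bundled Haar-cascade face detector; paths are placeholders.
import os
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_faces(image_paths, out_dir="face_crops", size=(128, 128)):
    """Detect faces in each image and save resized crops for training."""
    os.makedirs(out_dir, exist_ok=True)
    count = 0
    for path in image_paths:
        image = cv2.imread(path)
        if image is None:
            continue  # unreadable file, skip it
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            crop = cv2.resize(image[y:y + h, x:x + w], size)
            cv2.imwrite(os.path.join(out_dir, f"face_{count:05d}.png"), crop)
            count += 1
    return count

# Example (hypothetical files): extract_faces(["photo1.jpg", "photo2.jpg"])
```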

How in the …

Computer-generated imagery (CGI) has been a staple of Hollywood special effects for decades. It has been used to bring cartoon toys to life in Toy Story and to turn people into wholly different creatures in The Lord of the Rings. The software and expertise that let big studios put someone’s face onto a toy or a Hobbit used to be incredibly expensive and labor-intensive. Now, anyone with a few thousand dollars can buy the computer and software needed for near-Hollywood-quality special effects.

After the scandal, the deepfake community worked hard and fast to make face-swapping technology available to the masses.

In January 2018, only a month after deepfakes’ videos appeared online, an app called FakeApp was publicly released. FakeApp is built on TensorFlow, a machine-learning framework developed by Google, and it is free and relatively easy to use if you have a powerful enough computer. That means we are likely to see more victims and increasingly dangerous scenarios.
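For readers who want a peek under the hood, here is a minimal TensorFlow/Keras sketch of the general idea behind this kind of face swapping, not FakeApp’s actual code: one shared encoder learns a generic “face” representation, a separate decoder is trained for each person, and swapping means encoding person A’s face and decoding it with person B’s decoder. The layer sizes and training data here are illustrative assumptions.

```python
# A minimal sketch of the shared-encoder / two-decoder idea behind face
# swapping. Layer sizes are illustrative, not taken from any real tool.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)  # small images keep the sketch readable

def build_encoder():
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    latent = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(latent)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(latent, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")
decoder_b = build_decoder("decoder_person_b")

# Each autoencoder learns to reconstruct its own person's faces.
inp = layers.Input(shape=IMG_SHAPE)
autoencoder_a = Model(inp, decoder_a(encoder(inp)))
autoencoder_b = Model(inp, decoder_b(encoder(inp)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# Training (faces_a / faces_b would be arrays of face crops, not provided here):
# autoencoder_a.fit(faces_a, faces_a, epochs=..., batch_size=...)
# autoencoder_b.fit(faces_b, faces_b, epochs=..., batch_size=...)
# The swap: encode a face of person A, then decode it with person B's decoder.
# swapped = decoder_b(encoder(face_of_person_a))
```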

How to Implant False Memories

Not only can hackers fabricate an event to trick us, they can also alter how we remember real ones. Memory isn’t a black-and-white retrieval system in which information is accurately recorded and later pulled intact from the brain’s database. Instead, memory is a reconstructive process. The original memory is shaped by environmental and perceptual factors before it is consolidated for storage, and our brains modify it again each time we retrieve it. When false details introduced after an event get woven into the memory, researchers call it post-event misinformation, and it can dramatically affect attitudes and behavioral intentions.[2]

Post-event misinformation can be planted invisibly and intentionally. In 2010, Slate showed a series of political photos, some real and some doctored, to approximately 1,000 of its readers and later asked whether they remembered the events pictured. The results were alarming. Readers inaccurately recalled the events in the faked photos 50% of the time, and 15% of the time they even recalled emotions associated with the faked events. Readers were especially likely to “remember” a faked photo when it fit their political views.[3]

Hollywood Magic Impacts World Security

Even before the recent celebrity deepfake scandal and Russian election meddling, faked video was already circulating online with dangerous political consequences.

In September 2017, an Iranian video was released claiming the country had successfully launched a new ballistic missile. In fact, the footage showed a failed launch filmed several months earlier. President Trump took the video at face value and condemned Iran for a test it had not actually carried out; Iran responded that it would not tolerate any threats from the president. The faked launch further divided the two nations. Luckily, the mistake did not trigger a military response, but it clearly could have!

Given how sophisticated digital technology has become, will we be able to tell real from fake quickly enough to prevent a global catastrophe?

Government Intervention

The United States government is reportedly working on it. The research institute SRI International has been awarded three contracts by the Defense Advanced Research Projects Agency (DARPA) to develop tools that can identify whether a video or image has been altered and how the manipulation was performed.[4]
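The DARPA and SRI tools themselves are not public, but one long-standing forensic trick, error level analysis (ELA), gives a feel for what automated detectors look for. The toy sketch below (file names are placeholders) re-saves a JPEG at a known quality and brightens the regions that recompress differently, which can be a clue that something was pasted in. ELA alone is far from conclusive, which is exactly why more sophisticated tools are being funded.

```python
# Toy illustration of error level analysis (ELA): re-save a JPEG at a known
# quality and see how strongly each region changes. Areas pasted in from
# another source often recompress differently from the rest of the picture.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    """Return an ELA image plus the largest per-channel difference found."""
    original = Image.open(path).convert("RGB")
    # Re-compress the image in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # Pixel-wise difference between the original and the re-saved copy.
    diff = ImageChops.difference(original, resaved)
    max_diff = max(hi for _, hi in diff.getextrema())
    # Brighten the difference image so uneven regions are easy to see.
    ela = ImageEnhance.Brightness(diff).enhance(255.0 / max(max_diff, 1))
    return ela, max_diff

# Example (hypothetical file):
# ela_image, strength = error_level_analysis("suspect_photo.jpg")
# ela_image.save("suspect_photo_ela.png")  # inspect for patchy bright regions
```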

Another step that could reduce the danger of deepfakes is to embed photos and videos with a digital code that proves their authenticity. Websites are also increasingly watching for fraudulent images and videos and making special efforts to identify and remove them.
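As a rough illustration of what such a “digital code” could look like, here is a minimal sketch that computes a keyed fingerprint (HMAC-SHA256) of a photo or video file when it is published and re-checks later copies against it. The secret key, file names, and workflow are assumptions for illustration; real provenance systems rely on public-key signatures and signed metadata rather than a shared secret.

```python
# Minimal sketch of an authenticity fingerprint: tag a file with a keyed hash
# at publication time, then re-check copies later to confirm the bytes have
# not been altered. Key and file names are illustrative placeholders.
import hmac
import hashlib

def fingerprint(path, key: bytes) -> str:
    """Return a hex HMAC-SHA256 fingerprint of the file's contents."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            mac.update(chunk)
    return mac.hexdigest()

def is_authentic(path, key: bytes, published_fingerprint: str) -> bool:
    """True if the file still matches the fingerprint issued at publication."""
    return hmac.compare_digest(fingerprint(path, key), published_fingerprint)

# Example (hypothetical files and key):
# tag = fingerprint("family_video.mp4", key=b"publisher-secret")
# ...later, after the video has circulated...
# print(is_authentic("downloaded_copy.mp4", b"publisher-secret", tag))
```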

Unsure whether an image, video, or news report is fake? Get in the habit of checking the fact-checking website Snopes before you jump to conclusions or forward suspect content to friends or social media.

Your Legal Rights

If you become the victim of a faked video or image that uses your likeness, you have the legal right to act. Here are a few ways the legal system may apply to cases involving deepfakes.

  • Extortion – using a deepfake to threaten or coerce someone into handing over money, property, or favors.
  • Harassment – using a deepfake to pressure or intimidate someone.
  • False Light – invasion of privacy by using a deepfake to portray someone in a misleading, offensive way.
  • Defamation – damage to someone’s reputation caused by a deepfake presented as real.
  • Intentional Infliction of Emotional Distress – severe emotional distress caused by a deepfake.
  • Right of Publicity – a deepfake using someone’s likeness was produced and distributed without their consent.
  • Copyright Infringement – the deepfake was built from copyrighted photos or footage.

Thank you to CSUCI intern Dylan Smithson for giving us factual, interesting information to share with our kids during a screen-free dinner. Haven’t implemented that best-practice family habit yet?

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

[1] Kristen B. & Katerina M. (2017). Key trends in social and digital news media. Pew Research Center. http://www.pewresearch.org/fact-tank/2017/10/04/key-trends-in-social-and-digital-news-media/

[2] Dario S., Franca A., & Elizabeth L. (2007). Changing History: Doctored Photographs Affect Memory for Past Public Events. Applied Cognitive Psychology. doi:10.1002/acp.1394. https://webfiles.uci.edu/eloftus/Sacchi_Agnoli_Loftus_ACP07.pdf

[3] William S. (2010). The Ministry of Truth. Slate. http://www.slate.com/articles/health_and_science/the_memory_doctor/2010/05/the_ministry_of_truth.html

[4] Taylor H. (2018). DARPA is funding new tech that can identify manipulated videos and ‘deepfakes’. TechCrunch. https://techcrunch.com/2018/04/30/deepfakes-fake-videos-darpa-sri-international-media-forensics/

Photo Credits

M U Opening The Objectivist Drug Party – Zach Blas & Genomic Intimacy – Heather Dewey-Hagborg. CC BY-NC-ND 2.0

Mike MacKenzie Fake News – Computer Screen Reading Fake News CC BY 2.0

Dave 109 / 365 It’s definitively a candlestick holder CC BY-NC 2.0
