The daunting technology of deepfakes

By Kyle Egner

You’ve probably seen videos of Tobey Maguire as Tom Holland, or Nicolas Cage in famous movie scenes. These videos are all examples of the new deepfake technology, and while they are mostly played for laughs, the implications of this technology may change how we see others on the internet.

Image of a Nicolas Cage deepfake. Uploaded by: Nick Cage DeepFakes, Jan 29, 2018

First, what is a deepfake? A deepfake, a portmanteau of “deep learning” and “fake,” uses AI to scan someone’s face and map its motions onto another person’s. It essentially photoshops every frame of a video with remarkable speed and quality.
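To picture the “every frame” part, here is a minimal conceptual sketch in Python using OpenCV. The file names are placeholders, and the swap_face function is just a hypothetical stand-in for the trained deep-learning model that does the actual face replacement; the point is only that the whole video gets processed one frame at a time.

import cv2

def swap_face(frame):
    # Hypothetical placeholder: a real deepfake would pass the frame
    # through an encoder/decoder network trained on the target face.
    return frame

capture = cv2.VideoCapture("input_video.mp4")   # placeholder input file
fps = capture.get(cv2.CAP_PROP_FPS)
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output_video.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = capture.read()
    if not ok:
        break                        # end of the video
    writer.write(swap_face(frame))   # every single frame gets edited

capture.release()
writer.release()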

Many film companies see potential in this new technology and are looking into deepfakes to de-age or even replace actors. One example is Stan Lee, whose face was scanned by Disney and may soon be used in movies.

Actors have even started copyrighting their own faces to protect themselves from deepfakes. Notable as that is, it is only a small part of the technology’s potential danger.

A worrying possibility: someone could videotape themselves saying “I committed 9/11” and then graft their facial movements onto George W. Bush. This could lead thousands of uninformed people to believe George Bush caused 9/11.

It’s fake news on steroids, and most people, especially those not adept with the internet, could be tricked into believing it. With this technology, entire presidential campaigns could be ruined by a few bogus videos.

Deepfakes of Sarah Palin and Donald Trump mapped onto their SNL counterparts

Professionally made deepfakes aren’t the only concern. Someone with a good PC can make a deepfake that looks real to an untrained eye. While these could be disproven upon closer inspection, the potential flood of such videos could make footage of actual problems unreliable, leading to an internet where nothing is certain.

It gets even worse when you consider the psychological impact of deepfakes. Seeing someone commit an atrocious act sticks in the brain, even if the video is later proven fake.

It’s not all doom and gloom, though. Convincingly faking someone’s voice is still extremely difficult, which at least keeps deepfakes from completely imitating a person.

Also, because of these concerns, the Defense Advanced Research Projects Agency (DARPA) has been working to counteract deepfakes. Tech giants such as Microsoft and Facebook have also helped lead the crusade against them.

The hope is that researchers will eventually build an algorithm that can automatically detect a deepfake, making any controversial fake video easily disprovable.

While some are hopeful that DARPA’s efforts will pay off, rapid advances in deepfake technology and the potential for political gain make that seem unlikely. For now, we can only hope none of us become victims of a deepfake.

Sources

CNN, “The Pentagon’s race against deepfakes” - https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/

CNBC, “What ‘deepfakes’ are and how they may be dangerous” - https://www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html

CSO, “How and why deepfake videos work, and what is at risk” - https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html

Kyle Egner is a freshman student in the Warrior Word club. He is also an active member of the Speech and Debate team.
