The Effects of Deep Fakes on the World
Daniel Schudlich
George Mason University
IT 104-A01
May 27, 2023
"By placing this statement on my webpage, I certify that I have read and understand the GMU Honor Code on https://catalog.gmu.edu/policies/honor-code-system/ and as stated, I as a student member of the George Mason University community pledges not to cheat, plagiarize, steal, or lie in matters related to academic work. In addition, I have received permission from the copyright holder for any copyrighted material that is displayed on my site. This includes quoting extensive amounts of text, any material copied directly from a web page, and graphics/pictures that are copyrighted. This project or subject material has not been used in another class by me or any other student. Finally, I certify that this site is not for commercial purposes, which is a violation of the George Mason Responsible Use of Computing (RUC) Policy posted on https://universitypolicy.gmu.edu/policies/responsible-use-of-computing/ website."
Introduction
Imagine you are scrolling through Twitter or Facebook and a video of yourself appears on your feed. How would you feel? Shocked? Surprised? Now, what if in that video you were saying things that could potentially ruin your life, even though you never said them? Would you think this is a far-off fantasy? Well, it's not; the possibility of this happening is quite high. People can now make fake videos of anyone they want and make that person appear to say anything; this is known as a "deep fake". Deep fakes are a new technology that has been around for only a few years, yet it has already taken the world by storm. It is a very useful but dangerous technology with many positive but even more negative applications. In this paper, we will cover the effects of deep fakes on the world, from their current use to their security aspects, ethical and social applications, and future use. With each passing year it becomes easier to make deep fakes, and soon it could be done with just an app. Even though the technology has only been around for a few years, there are already many examples of its consequences. For example, consider this video of Morgan Freeman (Diep Nep, 2021). While it appears to be Morgan Freeman and sounds like him, the person in the video is not Morgan Freeman but a deep fake. The video is proof of how convincing this technology can be; now imagine, as more time passes, how hard it could become to tell the difference.
Current Use
Today, deep fakes are used mostly by individuals, though there have been some cases of organizations starting to adopt the technology. Use of the technology has skyrocketed over the years: John Letzing of the World Economic Forum (WEF) reported that "from 2019 to 2020 the number of deep fakes online rose from 14,678 to 145,277" (Letzing, 2021), an increase of nearly 900% ((145,277 - 14,678) / 14,678 ≈ 8.9). Individuals most commonly use deep-fake technology to get reactions and responses from those who view their videos, and many of these videos could be used to influence other people's thoughts and beliefs. For example, this video (Peretti & Sosa, 2018) can make people think that former President Obama is saying weird and uncharacteristic things you would normally not expect from him. While the video was made to spread awareness of the threat of deep fakes, someone who took a clip of it out of context could still get reactions from people. This is the true threat of deep fakes; as Mustak et al. state, "they can be used for the purpose of widespread marketplace deception" (Mustak et al., 2023). For example, the "CEO of Pepsi (Indra Nooyi) was deliberately misquoted as saying that Donald Trump supporters should 'take their business elsewhere.' This prompted boycott calls and a 3.75 percent decline in PepsiCo's stock price" (Mustak et al., 2023). This shows how catastrophic a deep fake can be: if something similar happened to Pepsi now, a 3.75 percent drop in PepsiCo's market value, which is on the order of 240 billion dollars, would cost the company almost nine billion dollars.
Security Aspects
As the examples above show, deep fakes can cause a lot of issues. Because the technology can almost completely mimic someone, it creates many security problems: people cannot tell the difference between what is fake and what is real. Among the potential risks of deep fakes are "disinformation, division, fraud, and extortion" (Letzing, 2021). A simple comparison is to a more common issue currently plaguing people: phishing and scam emails, texts, and phone calls. These are similar to deep fakes because in both cases someone tries to act and seem like someone or something they are not, and both aim to deceive people for gain or simply to cause chaos. Now that malicious use of the technology has increased, many organizations and governments have started working on ways to distinguish real videos from deep fakes. One method is forensic identification, an approach being developed by the US Department of Defense. Abigail Summerville describes what forensic identification means: "The idea behind the forensic approach is to look for inconsistencies in pictures and videos that serve as clues to whether the images have been manipulated—for example, inconsistent lighting, shadows, and camera noise" (Summerville, 2019). The way the forensic approach works gives a good sense of how hard deep fakes are to identify: analysts have to search for the smallest of details to determine whether a video is genuine. Another method mentioned in Summerville's article is to geotag videos and pictures so that viewers can tell when and where they were taken. While all these methods help, it is almost inevitable that some deep fakes will still slip through the cracks. Because of this risk, a definitive way to identify deep fakes is still needed, since the consequences of missing one could be drastic, especially as the videos become more and more realistic.
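To make these detection ideas a little more concrete, below is a minimal Python sketch of two toy checks inspired by Summerville's description: reading an image's geotag from its EXIF metadata, and looking for camera-noise inconsistencies across regions of a frame. This is only an illustration under loose assumptions; it is not how the Department of Defense's forensic tools actually work. It assumes the Pillow, NumPy, and SciPy libraries are installed, the filename is hypothetical, and a missing geotag or an uneven noise estimate is at most a weak clue, never proof.

```python
# Toy forensic checks, for illustration only; real deep-fake detection
# is far more involved. Assumes Pillow, NumPy, and SciPy are installed.
import numpy as np
from PIL import Image
from PIL.ExifTags import GPSTAGS
from scipy.ndimage import median_filter

def read_geotag(path):
    """Return an image's GPS EXIF fields, or None if it has no geotag."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo EXIF tag
    # Map numeric GPS tag IDs to readable names (GPSLatitude, GPSTimeStamp, ...)
    return {GPSTAGS.get(tag, tag): val for tag, val in gps_ifd.items()} or None

def noise_spread(path, grid=4):
    """Crude stand-in for a camera-noise check: split the frame into a grid,
    estimate each cell's noise as the residual after median filtering, and
    report the spread. Spliced or synthesized regions often carry a noise
    level different from the rest of the frame."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    residual = gray - median_filter(gray, size=3)   # high-frequency noise
    h, w = residual.shape
    stds = [residual[i * h // grid:(i + 1) * h // grid,
                     j * w // grid:(j + 1) * w // grid].std()
            for i in range(grid) for j in range(grid)]
    return max(stds) / (min(stds) + 1e-9)  # a large ratio is suspicious

# "suspect_frame.jpg" is a hypothetical file name for a frame under review.
print(read_geotag("suspect_frame.jpg"))
print(noise_spread("suspect_frame.jpg"))
```

Real forensic systems combine many signals like these, often with trained models, precisely because any single cue is easy to fake or can appear by accident.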
Ethical and Social Applications
There are many ethical and social applications for deep fakes, most of them bad and some of them good. With the ability to mimic people, this technology could be used to damage reputations and to make people distrust videos altogether, because they will not know for sure whether what they are seeing is real. Cassandra Cross mentions in her article that romance fraud is an issue where scammers try to extract money from a romantic partner, hence the name "romance fraud." Currently, the main way to combat these scams is to search the images used in a profile to find out whether the profile is fake. Cross states that the creation of deep fakes, which "allows individuals to produce and promote images that are either altered or entirely synthetic in their creation, jeopardizes societal understandings of trust and authenticity. However, when combined, this takes the threat posed by romance fraud to an entirely new level of risk" (Cross, 2022). These statements show why deep-fake technology can be so dangerous: if scammers can make entirely synthetic images, there will be no way for anyone to know whether a profile is real or fake, making more people susceptible to this type of scam. This example shows just one of the ethical risks that come with the technology and some of its applications for society as a whole. If this type of scam becomes commonplace, it could cause people to start distrusting videos, which could lead to many more problems. Now imagine you were watching TV and saw a commercial featuring a dead celebrity. How would you react? Confused? Shocked? As Jan Kietzmann states, "Bringing back Michael Jackson for more Pepsi commercials is possible, and probably a very lucrative consideration" (Kietzmann et al., 2021). This is one potential rabbit hole of ethical and social issues that could arise from the use of deep fakes. In Kietzmann's example, if Pepsi were to bring Michael Jackson back for more ads through deep fakes, would it be ethical to take advantage of people's emotional attachment to the music star? Using the technology this way could spark thousands of debates around the single issue of bringing someone back from the dead for the sole purpose of gaining more sales. And commercials are not the only possible use of bringing people back from the dead; it could also be done in movies and pictures. All these potential uses raise questions about whether it is acceptable to use the technology for those purposes. Another potential scenario is using deep fakes of living celebrities to promote a product. Normally, someone who promotes a product gets paid to do it, but with a deep fake you would not need to pay the person anything unless legislative or legal action is taken to deal with the issue. Going further with this idea, imagine a company used your likeness in a video, or just your voice, to convince your family and friends to buy something; it would not need your permission. This highlights even more potential social and ethical issues that could arise, with people getting scammed all around and organizations taking advantage of them. It could happen soon, as everyone's lives are online now more than ever. These scenarios all raise major questions about whether it would be right for organizations to do this, or whether they should even be able to. The social implications of this technology could cause major issues because people would stop trusting videos; if an important message needed to go out, how would people trust that it was real? This could shake the very core of society, with people doubting whether to follow instructions simply because they could not be sure the instructions were genuine.
Future Use
There are many potential uses for this technology in the future. For example, if the technology gets good enough, you could make videos of deceased people as a way to remember them, or you could use it to make movies faster and cheaper by having the technology make the actors say lines or perform certain actions. As Rebecca Roberts writes, "the likelihood of indistinguishably lifelike digital clones also increases. Today, a phone call with a deceased loved one is not entirely out of the question" (Roberts, 2023). This statement highlights just one of the many possibilities for how the technology could be used in the future. Imagine being able to have a conversation with a deceased relative or friend. Some version of this already exists, with the main caveat that the person in question needs to provide many of the details; in the future, you may need only a few audio clips and access to a person's social media or other online presence, as everything moves online. Other potential uses of deep-fake technology are in making videos and movies, where instead of green-screening a person into a scene, filmmakers could use deep fakes to digitally add them in. Since not all the people would need to be on location, you would need only enough footage and audio of a person to realistically recreate them. This could allow for more seamless transitions and edits in videos and movies, improving quality while decreasing production time.
Conclusion
While deep fakes are still a relatively new technology, their impact can be felt around the world by everyone. Deep fakes can be used for good, but in the wrong hands they can be disastrous. There are many different applications for deep-fake technology, from bringing back the dead to manipulating people and even just humor, and many different aspects need to be taken into account. The use of this technology carries serious security risks, from increasing the success rates of scams to destroying people's reputations and causing financial harm. And security is not the only concern; there are also social and ethical issues, from using people's likenesses without permission to taking advantage of their feelings. As long as the technology is monitored and kept in check, and there are reliable ways to tell whether a video is real or fake, its potential for good can truly blossom. It could allow people to have videos of loved ones, dead or alive, as a simple reminder to ease pain and suffering. It could also lead to better movies, shows, and videos, and level the playing field so that even everyday individuals could make blockbuster movies. In the end, if the use of this technology walks a tightrope that keeps it in check, the benefits could outweigh the risks.
References
Roberts, R. J. (2023). You're Only Mostly Dead: Protecting
Your Digital Ghost from Unauthorized Resurrection. Federal Communications Law
Journal, 75(2), 273-296. http://mutex.gmu.edu/login?url=https://www.proquest.com/scholarly-journals/youre-only-mostly-dead-protecting-your-digital/docview/2780917268/se-2
Annotation: The reason this reference is relevant to my research is that it talks about how deep-fake technology has been used to bring dead people back to life. This helps my research because the example provided in the journal alone raises multiple ethical, moral, and social issues, along with potential benefits the technology could bring. The journal also brings up legal issues around deep fakes, such as how anyone could make a deep fake of you without your consent or knowledge and there would be nothing you could do about it. It also lists possible solutions to some of the issues it outlines.
Sylvester, S. (2021). Don't Let Them Fake You Out: How
Artificially Mastered Videos Are Becoming the Newest Threat in the
Disinformation War and What Social Media Platforms Should Do About It. Federal
Communications Law Journal, 73(3), 369-392. http://mutex.gmu.edu/login?url=https://www.proquest.com/scholarly-journals/dont-let-them-fake-you-out-how-artificially/docview/2584566077/se-2
Annotation: The reason this reference is relevant to my research is that it opens by presenting another example of the use of deep-fake technology and some of the potential issues that can arise when digitally altered media is so convincing that people cannot tell whether what they are seeing is real. The journal also covers some of the history of deep fakes and how the technology emerged, as well as some of the pros and cons of the tech. It also discusses the difficulties of regulating the technology and how companies can manage deep fakes. Finally, it lists actions that social media companies can take to combat deep fakes.
Cross, C. (2022). Using artificial intelligence (AI) and deep
fakes to deceive victims: the need to rethink current romance fraud prevention
messaging. Crime Prevention and Community Safety, 24(1), 30-41. https://doi.org/10.1057/s41300-021-00134-w
Annotation: The reason this reference is relevant to my research is that this journal covers a very specific situation in which the use of deep fakes can harm people financially and emotionally, along with the potential risks and issues if that situation were to occur. It also covers different methods for preventing the use of this technology in romance fraud, as well as issues that would need to be resolved for these problems to be prevented in the first place. Finally, the journal considers what would happen if deep fakes and romance fraud were to become intertwined.
Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A., & Dwivedi, Y. K. (2023). Deep fakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368. https://doi.org/10.1016/j.jbusres.2022.113368
Annotation: The reason this reference is relevant to my research is that this article covers the threats posed by deep fakes, ways to reduce those threats, and the potential opportunities that arise from the technology. The article presents both positive and negative examples of deep fakes in use, discusses the threats that firms and clients would face along with a way to deal with the issues in each case, and finally covers positive uses of the technology and how firms and clients could turn it to their benefit.
Kietzmann, J., Mills, A. J., & Plangger, K. (2021). Deep
fakes: perspectives on the future “reality” of advertising and branding.
International Journal of Advertising, 40(3), 473–485. https://doi.org/10.1080/02650487.2020.1834211
Annotation: The reason this reference is relevant to my research is that this article covers how deep fakes would impact advertising and what negatives and positives would arise from their use in that field. The article lists multiple examples of deep fakes of famous people, then goes over the basics of deep fakes, how they would affect certain concepts of advertising, and how the many opportunities to use deep fakes in advertising come with potential downsides and risks.
Summerville, A. (2019). "Deep fakes" Trigger a Race to Fight
Manipulated Photos and Videos; Startups and government agencies are researching
ways to combat doctored images ahead of the 2020 election. The Wall Street
Journal. Eastern Edition.
Annotation: The reason this reference is relevant to my research is that this newspaper article covers how different organizations, agencies, and people are trying to figure out how to identify deep fakes before the technology becomes so good that it is almost impossible to tell whether a video is real or fake. It also presents a real-world example and describes how Facebook decided to deal with it.
Letzing, J. (2021, April 21). How to tell reality from a deep fake? World Economic Forum. https://www.weforum.org/agenda/2021/04/are-we-at-a-tipping-point-on-the-use-of-deep-fakes/
Annotation: The reason this reference is relevant to my
research is that this article goes over specific statistics on deep fakes like
how the use and study of them have increased dramatically over the years. This
article also mentions how much it would cost to make a “professional” deep fake
and it gave an example of the use of the technology in popular media. It also
mentions how different organizations and governments have tried to deal with deep
fakes.
Diep Nep. (2021). This is not Morgan Freeman - A Deepfake Singularity [Video]. YouTube. Retrieved June 5, 2023, from https://youtu.be/oxXpB9pSETo
Annotation: The reason this reference is relevant to my research is that this video demonstrates one potential use of the technology, shows how effective it can be, and suggests how it could still improve in the future. This source also gave me a better understanding of how the technology works and what is needed to create a deep fake, including how someone can be made to say anything just by having their voice online.
Peretti, J., & Sosa, J. (2018). You Won't Believe What Obama Says In This Video [Video]. BuzzFeed. Retrieved June 5, 2023, from https://www.buzzfeed.com/watch/video/52602
Annotation: The reason this reference is relevant to my research is that this video shows another example of how convincing a video can be made to look in order to deceive people, and it gave me a better understanding of some of the potential risks posed by this technology. Through a well-known example, it shows how anyone can be made to say anything, proving how well a person can be mimicked with nothing more than video and audio files of them.