Deepfakes: What You Should Know
Abraham Lincoln—or was it George Washington?—said, “You shouldn’t believe everything you see on the Internet.” Among the many areas of concern on the Internet is the developing field of deepfake video and audio. We will explore what deepfakes are, how they are used, and the dangers they pose, and then offer some suggestions on how to spot them.
“Deepfake” is a term used to describe “synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive.” (https://en.wikipedia.org/wiki/Deepfake). “Deepfakes are modern tools of manipulation.” (https://deepfakenow.com/what-are-deepfakes-used-for/)
An early example of a deepfake video appears in the movie Forrest Gump (1994). In that movie, Forrest purportedly meets three different presidents, including one scene of Forrest receiving the Medal of Honor from President Johnson. These scenes were synthetically created from existing video of those presidents, with the overlay of Forrest added to make it appear that he actually had met with the presidents. (Note: we always recommend discretion when viewing movies or watching any video online).
When the movie studio created Forrest Gump, the technology needed to create deepfake videos was in its infancy. Today, 30 years later, anyone with a devious mind and enough computing power can download the tools necessary to create a deepfake video or audio file.
“Deepfakes are created by computers using machine learning and Artificial Intelligence (AI) algorithms. Special software is used to apply these algorithms to existing media content. A clever whizzkid (sic) with a simple laptop and an internet connection can, in theory, use the technology to alter an electoral result, or become a millionaire through fraud.” (https://deepfakenow.com/what-are-deepfakes-used-for/)
Consider for a moment the ease with which a person can change his or her background on a video conferencing platform such as Skype, Microsoft Teams, or Zoom. Most of us have likely played with changing the wall of our home office to appear as though we are somewhere else: a different office, a different city, even a different planet. The ability to present a different background on these platforms demonstrates how easily a deepfake video can be produced.
Deepfakes are often used to exact revenge or inflict harm on another person. DeepFakeNow.com suggests five ways deepfakes may be used to manipulate, deceive, or inflict real harm:
- Political gain or political manipulation;
- Financial fraud through phishing or “vishing” (voice phishing);
- Deepnudes and revenge pornography;
- Deepfake identity fraud;
- Fake news and hoaxes.
Business Insider suggests another popular use for deepfakes: “Deepfakes are not limited to just videos. Deepfake audio is a fast-growing field that has an enormous number of applications. Realistic audio deepfakes can now be made using [sophisticated] algorithms with just a few hours (or in some cases, minutes) of audio of the person whose voice is being cloned, and once a model of a voice is made, that person can be made to say anything, such as when fake audio of a CEO was used to commit fraud last year.” In that case, the fraudsters stole almost $250,000.
EuroNews suggests that deepfake technology could also be used by state actors to alter satellite imagery.
The technology to create a deepfake, whether from scratch or by manipulating an existing video or audio file, is becoming more widely available and requires fewer computing resources all the time.
Deepfake videos and audio files succeed largely because they are viewed by so many people so quickly; audiences like to forward and share content they find on the internet, especially if it confirms their personal viewpoints, even when the content is fake.
Some deepfakes are created with no malicious intent, such as the scenes in Forrest Gump. During Christmas 2020, Channel 4 in the UK produced and aired a deepfake video showing the Queen purportedly talking about controversial topics that she would not normally address in a public forum. The broadcaster publicly acknowledged that the video was a deepfake, named the voice-over actress, and described it as “a stark warning about the advanced technology that is enabling the proliferation of misinformation and fake news in a digital age.” (https://www.channel4.com/press/news/deepfake-queen-deliver-channel-4s-alternative-christmas-message)
As the technology to create deepfakes develops and becomes more sophisticated, so does the technology to detect them. An article by Norton Security suggests that deepfakes can be identified:
“Poorly made deepfake videos may be easy to identify, but higher-quality deepfakes can be tough. Continuous advances in technology make detection more difficult.”
The article lists several syncing, lighting, audio, and coloration inconsistencies that can help determine whether a video is fake.
“Researchers are developing technology that can help identify deepfakes. For example, researchers at the University of Southern California and University of California, Berkeley are using machine learning that looks at soft biometrics such as how a person speaks along with facial quirks. Detection has been successful 92 to 96 percent of the time.”
The article also offers 15 ways to spot deepfake videos.
Be aware; you shouldn’t believe everything you see, especially if it is on the internet.
How can you reduce the chances of being deceived by, or propagating, deepfakes?
- Research the content to see if anyone has reported it as being fake.
- Watch the video or listen to the audio very carefully to see if you can detect anything that doesn’t seem right. (You may not.)
- Evaluate if the content aligns with what you already know about the speaker(s) and what they would say.
- Ask yourself if you agree with what you see or hear simply because it confirms what you already believe.
- Never forward or share any content unless you are certain of its validity.