Deepfake technology has been called a powerful feat of artificial intelligence and machine learning at its best, and unsettling — even sinister — at its worst.
Deepfakes are media — usually videos, audio recordings or photographs — that have been doctored through artificial intelligence (AI) software to fabricate a person’s facial or body movements. They can spread easily when shared on social media platforms and other websites.
One well-known example is a video that circulated in August 2019, in which actor Bill Hader does an impersonation of Tom Cruise. The video is edited so Hader’s face morphs into a realistic image of Cruise, giving the impression that it’s the latter talking.
Beyond that, deepfake circulation could be damaging in 2020 and future election cycles. Along with celebrities, government leaders are the most common subjects of deepfakes, according to a February Science and Tech Spotlight from the U.S. Government Accountability Office (GAO).
“Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare,” per the report. It also notes that much of deepfake content online “is pornographic, and deepfake pornography disproportionately victimizes women.”
In 2018, Reddit shut down r/deepfakes, a forum that distributed videos of celebrities whose faces had been superimposed on actors in real pornography. The computer-generated fake pornography was banned because it was “involuntary,” or created without consent.
Much of the same technology used to make those videos could be used to exploit women running for office, according to a GAO official.
“We can’t speak to intent, but the result is definitely that the majority of these do target women,” said Karen Howard, a director on GAO’s Science, Technology Assessment and Analytics (STAA) team.
More than 90% of generated deepfakes are pornographic in nature, Howard said.
“They’re used to create non-consensual pornographic videos, where somebody’s face is inserted into a video, implying that they participated in making this video, when, in fact, they didn’t,” Howard said.
The practice has also evolved into a form of election meddling in other countries, Howard said.
“Looking at the election angle, one of the things we’ve seen in the media — thankfully, this has not touched the U.S. in any significant way yet in terms of election meddling — but one of the things that’s been seen in other countries is taking women politicians and putting their faces into pornographic videos to try and shame them into dropping out,” she said.
In May 2019, U.S. House Speaker Nancy Pelosi (D-Calif.) became the subject of a doctored viral video that made the rounds on Facebook. The clip of Pelosi speaking was slowed down to give the impression that she was drunk. It’s not technically a deepfake, as it wasn’t created with AI software, but the intent is the same: to spread false information or deceive.
Facebook refused to remove the video, saying it does not require posts to be true. Company executives said they slowed the video’s distribution once they deemed it false. Pelosi said the ordeal contributed to the spread of misinformation — something the social network has been under fire for since the 2016 election.
Deepfakes can be created by people who possess only basic computer skills, as AI software becomes widely available at lower costs. But there’s a range of sophistication at play, according to Howard: the more advanced someone’s computing and technical skills are, the more convincing their deepfake can turn out to be.
But there are several ways to detect if a video is real or altered. If there’s inconsistent eye blinking, lack of defined facial features or blurriness, it’s likely doctored, per the GAO study.
“There are perfectly legitimate uses for this: e-commerce, entertainment and other settings where it makes a lot of sense to do this,” Howard said.
For instance, deepfake software could be used to translate somebody’s voice so that it appears they’re speaking another language when they’re conducting business with an entity in another country that doesn’t speak English, she said.
“It’s like so many other tools that were initially created for some beneficial purpose, but have been twisted by those who have nefarious intent,” she said.
Laura Holliday, assistant director on the spotlight, said efforts to more efficiently detect deepfakes are underway.
“There is some work being done by the Defense Advanced Research Projects Agency, DARPA, to better automate the detection of deepfakes,” Holliday said. “It’s supposed to scan the internet more broadly to try to detect deepfakes, and then, on top of that, assign an integrity rating to the videos it identifies.”
Tech industry response
Major players in the tech industry are also getting involved in advancing detection efforts.
“Microsoft, Google, Facebook and others are aware of this. They are working on technologies to detect them,” Howard said. “They’re just not quite there yet, to the point where they’re automatically scanning the entire horizon of internet content to identify and pull these down.”
Policymakers in the United States and abroad now have to weigh the potential negative impacts of realistic deepfakes and what the legal consequences might be if they’re used for exploitation, humiliation or election meddling, Howard added.
“What rights do individuals have to their privacy and likeness? What options exist regarding the use of these videos to humiliate or exploit? Those are questions that policymakers in the U.S. and elsewhere are going to have to grapple with,” Howard said.
U.S. Rep. Haley Stevens (D-Rochester Hills) joined the conversation around deepfake technology last year.
In September, she introduced a bipartisan bill directing the National Science Foundation (NSF) and National Institute of Standards and Technology (NIST) to accelerate research on technologies that could pinpoint deepfakes.
“In recent years, the development of deepfake technology has made it easier to create convincing fake videos, which have already been used for malicious purposes,” Stevens said. “The Identifying Outputs of Generative Adversarial Networks Act will help us better understand deepfakes and learn how to prevent the proliferation of fake news, hoaxes, and other harmful applications of video manipulation technology.”
So what’s a key way for public audiences to detect deepfakes right now, while those options are still being fleshed out? Check the source it’s coming from, Holliday said.
“Pay close attention to the source of information,” Holliday said. “Depending on the source, it might give you some clue as to whether it might be credible or not.”