The Threat and Promise of Deepfakes

Though often seen as a dangerous tool, deepfake technology can be used for good.

V.S. Subrahmanian is a faculty fellow at Northwestern’s Buffett Institute for Global Affairs. Image: University of Maryland Institute for Advanced Computer Studies

By V.S. Subrahmanian
Fall 2023
Voices

In recent years, deepfake videos — seemingly realistic digital representations created with sophisticated artificial intelligence (AI) — have been used to demand ransom, distribute revenge porn and influence elections. With the clamor for AI regulation growing louder every day, it is time to reflect on the threats posed by deepfakes — as well as potential benefits. 

In 2019, the app DeepNude let users upload a photo of a fully clothed person and automatically generated a synthetic image of that individual in the nude. Unsurprisingly, some used the app to harm and humiliate their ex-lovers. Many U.S. states have banned such apps, but the threat of misusing this sophisticated technology persists. 

In a completely different setting, Turkey's May 2023 presidential election, deepfakes showed candidate Kemal Kilicdaroglu side by side with Kurdistan Workers' Party militants at rallies. And closer to home, deepfakes were used in the 2023 Chicago mayoral election, when an image and accompanying audio depicted candidate Paul Vallas making racially insensitive comments about police shootings. 

More recently, criminals reportedly demanded ransom from a mother using deepfake audio of her daughter claiming she had been kidnapped. Imagine the combination of a ChatGPT-powered chatbot with deepfake audio or video that can, in real time, carry on a seemingly real conversation in the child's voice with her mother. The prospect of malicious deepfakes is frightening indeed. 

But not all uses of deepfakes are nefarious. Imagine, for instance, a surgery simulator that generates a deepfake model of a patient’s body that can react in real time to surgeons’ actions. Years from now, such simulators could generate realistic 3D models of many different ailments, training surgeons to treat conditions that they have never seen in real life.  

Now imagine that a company files a patent for an invention and, to protect its intellectual property, uses deepfake AI to generate 99 fake versions of its invention documentation. A thief who steals the whole set will struggle to determine which of the 100 documents is the real thing (a simple sketch of the idea follows). 
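
To make the decoy idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the parameter names, values and perturbation ranges are illustrative stand-ins, not NSAIL's actual method, which rests on far more sophisticated document-generation techniques.

```python
import random

# Hypothetical "real" invention parameters; names and values are illustrative only.
REAL_SPEC = {
    "electrolyte_molarity": 1.2,
    "anode_thickness_um": 45,
    "sintering_temp_c": 820,
    "dopant_fraction": 0.035,
}

def make_decoy(spec, rng):
    """Return a plausible variant: each numeric value nudged by up to +/-20%."""
    return {key: round(value * rng.uniform(0.8, 1.2), 4) for key, value in spec.items()}

def build_document_set(real_spec, n_decoys=99, seed=7):
    """Mix one real spec with n_decoys fakes and shuffle the result, so a thief
    who exfiltrates the whole set cannot tell which of the 100 is genuine."""
    rng = random.Random(seed)
    docs = [make_decoy(real_spec, rng) for _ in range(n_decoys)]
    docs.append(dict(real_spec))
    rng.shuffle(docs)
    return docs

if __name__ == "__main__":
    documents = build_document_set(REAL_SPEC)
    print(f"{len(documents)} candidate specs; the real one is hidden among them.")
```

The design hinges on each decoy being plausible on its own: if the fake values were obviously out of range, the thief could discard them and the deception would fail.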

Finally, imagine the CIA trying to destabilize a terrorist network by generating a deepfake of a terrorist leader consorting with a known CIA operative or criticizing other leaders within his group. This might sow distrust and bring down the network. Some of our adversaries are already using deepfakes against the U.S. 

The new Northwestern Security and AI Lab (NSAIL), a partnership between the McCormick School of Engineering and the Buffett Institute for Global Affairs, is already developing deepfake techniques that can be used responsibly. The Terrorism Reduction with AI Deepfakes system developed at NSAIL, for example, can generate deepfake video intended to destabilize terrorist networks. NSAIL researchers also have built on existing AI techniques to generate fake documents and fake databases to deter data breaches and intellectual property theft.  

A knee-jerk reaction to ban all deepfakes risks throwing out the baby with the bathwater: the innumerable benefits of synthetically generated media would be lost. Instead, we must develop tools that help us defend against malicious deepfakes, such as detection technology (one simple form is sketched below) and human fact-checking. And existing legislation can and should be used, and adapted when necessary, to prosecute those who use deepfakes for malicious purposes. 
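
For a sense of what detection technology can look like in practice, here is a minimal sketch of a common baseline: sample frames from a video and average the fake-probability scores of a per-frame classifier. The trained model file (detector.pt), its output format and the input clip are all assumptions made for illustration; production detectors also draw on audio, temporal and physiological cues.

```python
import cv2
import torch
import torchvision.transforms as T

# Hypothetical trained binary classifier that outputs one "fake" logit per frame.
model = torch.jit.load("detector.pt")  # assumed to exist; the path is illustrative
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def fake_probability(video_path, sample_every=30):
    """Average the per-frame fake probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else float("nan")

if __name__ == "__main__":
    p = fake_probability("clip.mp4")  # hypothetical input file
    print(f"Estimated probability the clip is fake: {p:.2f}")
```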

V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science in the McCormick School of Engineering. 
