Fact or Fiction: Two UVA Students Tackle Deepfakes

Alexandra Angelich / UVA University Communications

Machine learning has allowed video producers to create images that look real but are actually fakes, and viewers often can't tell when they're being tricked. Now, two students from the University of Virginia have come up with a way to spot bogus video.

Star Wars fans were delighted when a new film – The Rise of Skywalker – was released.  To their amazement, the producers had brought the late actress Carrie Fisher back to life for a brief but convincing scene.

“What is it they’ve sent us?  Hope.”

Equally amazing was a video crafted by comedian and director Jordan Peele, who does a remarkable imitation of Barack Obama's voice and used it to manipulate an existing video of the former president speaking from the Oval Office.

“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.  For instance, they could have me say things like, ‘President Trump is a total and complete (bleep)!’  Now you see I would never say these things, but someone else would – someone like Jordan Peele.”

Peele used the video to warn the public about so-called deepfakes – videos manipulated by computers and crafty programmers to look like the real deal. It's a subject that also concerns UVA engineering major Zachary Yahn, who says people may be able to spot some fakes on their own.

“The computer algorithms that are generating these things aren’t perfect, but these things happen so quickly," Yahn says. "You’re at the gym watching the news or you’re just scrolling through your social media feed. You don’t really have the attention to sit there and really stare at this video or this image even for the 30 seconds it would take.”

And some frauds are much harder to detect.

“Some of them are really, really good, and even the experts can’t really tell,” he explains.

So when he and fellow engineering major Ahmed Hussain heard about a contest to devise a way to automatically flag deepfakes, they jumped at the chance.  They spent about three weeks doing research, conferring with Professor Mircea Stan and post-doc Samiran Ganguly. Hussain says the time passed quickly.

“It was a competition that I thought would take a lot longer, and would be a lot more difficult, but as soon as we got into it I got so interested and passionate about the subject that I lost track of time and had a lot of fun with it,” he says.

Hussain says the approach they came up with is very different from what others have tried, but so far it exists only on paper.

“The way that research goes is you sort of create something.  You kind of fidget with it, and you say, ‘Okay, I’ve got it.  This is how it’s going to work.’  And then you publish your findings," explains Hussain.  "Turning it into a full end-to-end product is a much bigger task.”

But their proposal was strong enough to beat 16 other entries in the competition, landing them a $6,000 prize and bragging rights. Again, Zach Yahn.

“The second place team was composed of four PhD students," Yahn says.  "We were pretty proud to have beaten them out.”

The two plan to further develop their concept and maybe start a company to make the model available, but Hussain says their goal is not to get rich.

“We don’t want cybersecurity, we don’t want checking the legitimacy of whether or not a video is real to be trapped within the realm of the corporate world," he explains.  "We want it to just be available to everybody.”

He notes that deepfakes and widespread distrust of the media have been branded the fifth generation of cyber warfare, and he’s hopeful the model he devised with Yahn will protect the world from that angle of attack.

Editor's Note: The University of Virginia is a financial supporter of Radio IQ.

Sandy Hausman is Radio IQ's Charlottesville Bureau Chief.