Emerj CEO Spoke About Deepfakes On Channel News Asia TV

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Event Title: Channel News Asia TV Spot

Event Host: Channel News Asia TV

Date: July 3, 2019

Team Member: Daniel Faggella, Emerj Founder and CEO

What Happened

I was in Singapore for INTERPOL World 2019 with several media opportunities lined up, one of which was a spot on Channel News Asia (CNA) TV. The channel wanted me to do a primetime feature on the state of deepfake technology, how to spot deepfakes, and where the technology is going.

Watch the full TV spot below:

What Was Covered

  • How deepfakes can be detected: Although it will be difficult for the average person to detect the deepfakes they encounter in everyday life on their smartphones, organizations that depend on trust in their brand, such as news media, will employ technology to distinguish real videos and images from fake ones. The average person could in theory pause a video and look for markers of manipulation, such as anomalies in a person's face, eyes, or mouth, but most people won't do this. Those with technical know-how may also be able to inspect the file itself for evidence of tampering (see the sketch after this list); with streaming video, this is much more difficult.
  • The biggest near-term threat of deepfakes: Faking audio is easier than faking video. Convincing fake video requires a lot of visual and audio data, plus the ability to combine them. With enough audio of a person, it is relatively easy to synthesize their voice and make them appear to say something they never said. This has already been done, and as such, audio manipulation is a bigger near-term threat than video. Plausible deniability is another near-term threat: a politician could say something they regret and then claim the video of them saying it is a deepfake, even if it wasn't. Perhaps a bigger threat than the populace being fooled by a deepfake is the ability of people in the limelight to deny having done something they actually did.
  • Where deepfakes are ultimately taking us: Deepfakes will eventually give us the ability to enter a programmatically generated world tailored to our preferences. Instead of watching a movie that has already been made, someone will be able to ask to learn about the French Revolution from the perspective of one of its key players, skipping certain sections, and AI will generate a movie just for them.
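
To make the point about inspecting a file for evidence of tampering more concrete, here is a minimal sketch in Python using the Pillow library. It reads an image's EXIF metadata, where an editing tool named in the Software tag (or metadata that has been stripped entirely) can be a weak hint of manipulation. The file name is hypothetical, and this is just one coarse signal, not a reliable deepfake detector.

```python
# Minimal sketch: reading EXIF metadata for weak signs of editing.
# "suspect_frame.jpg" is a hypothetical file name used for illustration.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        # Stripped metadata isn't proof of tampering, but it is worth noting.
        print("No EXIF metadata found (it may have been stripped).")
        return
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, str(tag_id))
        print(f"{tag_name}: {value}")
        # An editing tool named in the Software tag is one coarse clue.
        if tag_name == "Software":
            print(f"  -> file was last written by: {value}")

inspect_metadata("suspect_frame.jpg")
```

The detection systems mentioned above, such as those a news organization might deploy, go much further than this, analyzing frame-level visual artifacts and compression inconsistencies rather than metadata alone.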
