
Emerj CEO Spoke About Deepfakes On Channel News Asia TV

Event Title: Channel News Asia TV Spot

Event Host: Channel News Asia TV

Date: July 3, 2019

Team Member: Daniel Faggella, Emerj Founder and CEO

What Happened

I was in Singapore for INTERPOL World 2019 and had some media opportunities lined up, of which a spot on Channel News Asia (CNA) TV was one. The channel wanted me to do a primetime feature about the state of deepfake technology, how to spot deepfakes, and where the technology is going.

Watch the full TV spot below:

What Was Covered

  • How deepfakes can be detected: While it will be difficult for the average person to detect the deepfakes they come across in everyday life on their smartphones, organizations that rely on trust in their brand, such as news media, will employ technology to distinguish real videos and images from fake ones. The average person could in theory pause a video and look for markers of manipulation, such as anomalies in a person’s face, eyes, or mouth, but most people won’t do this. Those with technical know-how may also be able to examine the file itself for evidence of tampering, though this is much more difficult with streaming video.
  • The biggest near-term threat of deepfakes: Faking audio is easier than faking video. Convincing video requires a large amount of visual and audio data and the ability to combine them. With enough audio of a person, it is relatively easy to synthesize their voice and make them appear to say something they never said. This has already been done, and as such, audio manipulation is a bigger near-term threat than video. Plausible deniability is another near-term threat: a politician could say something they didn’t mean to and then claim the video of them saying it is a deepfake, even if it wasn’t. Perhaps a bigger threat than the populace being fooled by a deepfake is the ability of people in the limelight to deny things they actually did.
  • Where deepfakes are ultimately taking us: Deepfakes will eventually give us the ability to enter a programmatically generated, preference-driven world. Instead of watching a movie that’s already been made, someone will be able to say they’d like to learn about the French Revolution through the perspective of one of its key players, skipping certain sections, and AI will generate a movie just for them.
