
Analyzing a Trump Video for Deepfake Potential

Oct 28, 2020 10:19:49 AM

With the presidential election upon us, the looming threat of deepfake videos is most certainly on everyone's minds. 

While the malicious use of this ever-evolving technology has not yet reached the point where most companies need to dedicate extensive resources to its detection and defense, Nisos took a look at the current state of deepfake detection technologies.

Using a real-world example, we analyzed a recent video of President Donald Trump to determine what current detection technology can tell us.

Doing so allowed us to understand what we can and cannot detect and also whether these tools and techniques can stand up to the challenges ahead.


Deepfake video technology relies on tools that perform face-swapping and lip-syncing operations in an attempt to present someone in a negative or embarrassing light, or to deliver disinformation. 

These operations, however, currently leave detectable traces.

Asymmetries in the frames of the video, jerkiness in the eye and head movements, inconsistency in the color, lighting, and shading of the video, and other related pixel-encoded artifacts betray the fact that the video or image has been manipulated. 

These artifacts are among the signals that deepfake researchers leverage when they build detection tools that label videos and images as real or fake.

Detection Process

We obtained early access to the deepfake detection technology offered by Sensity (formerly Deeptrace Labs), which allowed us to analyze images and videos likely created with the same technology used by sites such as ThisPersonDoesNotExist and deepfake apps like FaceSwap.

Queries can reference local as well as remote image and video resources. The Sensity API returns a 'real' or 'fake' label as the deciding verdict, coupled with a confidence score that allows the caller to decide how much trust to place in the assessment.
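A consumer of such a verdict typically combines the label with the confidence score before acting on it. The sketch below illustrates that pattern; the field names ("label", "confidence") and the 0.8 threshold are our assumptions for illustration, not Sensity's actual API schema.

```python
# Minimal sketch of interpreting a detection verdict.
# NOTE: the response fields and the threshold are hypothetical,
# not the real Sensity API contract.

def interpret_verdict(response, min_confidence=0.8):
    """Map a label + confidence pair to an actionable decision."""
    label = response["label"]            # expected: "real" or "fake"
    confidence = response["confidence"]  # expected: 0.0 - 1.0
    if confidence < min_confidence:
        return "inconclusive"            # defer to a human analyst
    return label

print(interpret_verdict({"label": "real", "confidence": 0.97}))  # real
print(interpret_verdict({"label": "fake", "confidence": 0.55}))  # inconclusive
```

Thresholding this way means a low-confidence "fake" is escalated for manual review rather than trusted outright.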

We decided to test this technology against a video allegedly produced by the Trump administration where the president addressed his health and potential COVID diagnosis. 

We chose this particular video because it was recorded in a way that raised doubts about its authenticity; the background, in particular, appeared to be a digital backdrop or greenscreen.

To test the veracity of the video, we provided the original source and a collection of extracted video frames to the GAN and face manipulation endpoints of the Sensity API.

After a few minutes, we received responses from both endpoints that indicated the video, which initially appeared to have been manipulated at least for the backdrop, was labelled as real with a high level of confidence.  
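When two endpoints each return a verdict, the caller needs a rule for merging them into one assessment. A conservative option is to flag the content if either endpoint flags it. The endpoint names and the aggregation rule below are illustrative assumptions, not part of the Sensity API.

```python
# Hypothetical sketch of merging verdicts from two detection
# endpoints (e.g., a GAN check and a face-manipulation check).

def combine_verdicts(verdicts):
    """Return 'fake' if any endpoint flags the content, else 'real'."""
    if any(v == "fake" for v in verdicts.values()):
        return "fake"
    return "real"

# In our test, both endpoints labeled the video real:
print(combine_verdicts({"gan": "real", "face_manipulation": "real"}))  # real
```

An any-flag rule trades precision for recall: it will surface more candidates for review at the cost of more false positives.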

We successfully integrated a third-party vendor capability into our image analysis pipeline, and the speed of analysis and quality of detection results yielded a workflow more than able to handle the queries we provided.


As a methodology, deepfake detection provides some measure of defense against digital disinformation when videos, images, or audio recordings are the mechanism for delivery. 

Because deepfake production and detection rely on the same methods, techniques, and hardware, there will be an ongoing cat-and-mouse game, with the winner changing continuously as one side outpaces the other.

Should this continue, detection services could sustain an entire industry of services and tooling, possibly indefinitely. In fact, detractors of the deepfake detection industry, who are also knowledgeable about the techniques used to both create and detect deepfakes, argue that, as in any arms race, the detection strategies and algorithms that work today will not necessarily work tomorrow.

While each new advance in technologies such as cuDNN, GANs, and other neural network architectures could signal that deepfake creation tools will always outpace the same technology used for detection, partnerships across the cybersecurity industry offer a formidable challenge to this thinking.

Written by Justin Simms
