Weaponization for Disinformation

Nov 9, 2020 | Blog

Continuing our series on the adversarial mindset, we focus on how actors weaponize narratives for disinformation operations.

In a previous blog post, we wrote about the reconnaissance steps that disinformation actors take prior to launching their operations, including recruitment of individuals with native language proficiency.

After setting up the infrastructure and recruiting content creators, influence campaign actors next turn to creating a compelling narrative that they seek to amplify through defined outlets.

The Narrative

The narrative is the message conveyed through disinformation, ranging from half-truths and conspiracy theories to outright lies. Regardless of the content, the intention is to manipulate popular opinion for political, financial, or other ends.

Narratives that are more likely to go viral often exploit primal emotions in their target audiences, such as fear, anger, and uncertainty.

The Platform

“Platform” in the context of disinformation operations often refers to the media – whether traditional or social – through which disinformation reaches its intended audience.

Regardless of a narrative’s origin – whether first posted on a fake news website or provided to a legitimate news outlet – disinformation actors use both illegal and legal means to spread their message across multiple platforms.

Some disinformation actors use illegal or underhanded techniques to manipulate platform algorithms.

Such actors create bot networks to artificially increase their clickthrough rates and thus create the illusion of high activity and popularity across multiple platforms at once, gaming recommendation and rating algorithms.
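As an illustration, here is a minimal Python sketch of how synthetic bot clicks can flip a naive engagement-based ranking. All post names and click counts are hypothetical, and real recommendation systems are far more sophisticated; the point is only to show why inflated engagement signals create a false impression of popularity.

```python
def rank_by_engagement(posts):
    """Rank posts by raw click count, as a naive recommender might."""
    return sorted(posts, key=lambda p: p["clicks"], reverse=True)

# Hypothetical organic engagement numbers.
organic = [
    {"id": "legit-news", "clicks": 1200},
    {"id": "disinfo-story", "clicks": 40},
]

# A bot network adds synthetic clicks to the disinformation post,
# creating the illusion of high activity and popularity.
BOT_CLICKS = 5000
boosted = [dict(p) for p in organic]
for post in boosted:
    if post["id"] == "disinfo-story":
        post["clicks"] += BOT_CLICKS

print(rank_by_engagement(organic)[0]["id"])  # legit-news
print(rank_by_engagement(boosted)[0]["id"])  # disinfo-story
```

Because the ranking function cannot distinguish organic clicks from synthetic ones, the fabricated engagement is enough to push the disinformation post to the top, which is exactly the algorithmic manipulation described above.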

Disinformation actors often do not need to resort to these tactics to spread their message. Instead, they use the tools available to all marketers who seek to increase narrative reach.

The Audience

Arguably the most important element in a disinformation campaign is the audience.

No number of bots would be effective if the disinformation narrative did not exploit fundamental human behavior and biases in a target audience.

Actors will often infiltrate a closed group that is ostensibly unrelated to the narrative they later intend to amplify.

Friendly dialogue ensues; once the purveyor of disinformation has identified the behaviors and biases at work within the group, they introduce and amplify a narrative that exploits those biases.

Using marketing tools for mass broadcast in addition to engaging directly with people and groups, a disinformation actor does not require many resources to achieve massive scale, especially when people believe and share information that adheres to their worldview.

In our next blog, we will discuss methodologies for combating disinformation at the technological level while accounting for the converging factors of technology, media, and human behavior.