Coordinated Inauthentic Behavior (CIB) is a phrase commonly heard in news coverage of disinformation, misinformation, and influence operations; but what exactly does it mean?
First, let’s define our terms: “inauthentic behavior” and “coordinated.”
Inauthentic behavior in the context of disinformation refers to an online entity (a social media account or a website) that misrepresents itself for the purpose of deceiving readers or followers.
For example, this could be a political news site that claims to be based in America but is actually run from Macedonia, or it could be a Russian-created social media account that uses a fake name and photos to pretend to be an American posting about US politics.
"Coordinated" behavior is when multiple inauthentic accounts / personas work together in order to push a certain item or media theme.
In our previous work on Macedonian fake-news actors, we found that they coordinated with an American company to push their for-profit fake-news empire.
So how does Nisos identify CIB?
We previously listed the four parts of disinformation research that are critical to attributing disinformation actors: narrative, outlet, account, and signature.
Many companies and journalists specialize in identifying the narrative and outlet from which disinformation is being propagated.
We excel at identifying the accounts and signatures that provide the tell-tale signs of coordinated inauthentic behavior.
Important elements of identifying accounts and signatures include the following:
- Initial Signature Discovery
- Mapping Signature Telemetry
- Pattern Identification
- Location and Identity Attribution
How does Nisos discover CIB?
Initial Signature Discovery
When you start investigating coordinated inauthentic behavior, you want to find technical signatures that are unique to the actor (some analysts call these ‘signals’).
These signatures can include website domain registration data, shared ad trackers, information about who or which group first shared the content or URL, the languages in which the content spreads, region-specific phrases or colloquial language, and more.
When we start to find several accounts that share the same unique signatures, we can start peeling back the onion to determine whether they are part of a CIB network.
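As an illustration of how shared signatures link entities together, the step above can be sketched as a simple inverted index from signatures to the accounts and sites they appear on. This is a minimal, hypothetical example; every entity name and signature value below is invented, and real analysis would also weigh how unique each signature is before treating it as a link.

```python
from collections import defaultdict

# Invented example data: each observed entity (account or site) mapped to
# the technical signatures collected for it (ad tracker IDs, registrant
# emails, etc.). None of these values are real indicators.
observations = {
    "site-a.example": {"ua-12345", "registrant@example.net"},
    "site-b.example": {"ua-12345", "registrant@example.net"},
    "persona-1": {"ua-12345"},
    "unrelated.example": {"ua-99999"},
}

def shared_signature_index(obs):
    """Invert entity -> signatures into signature -> entities, keeping
    only signatures seen on more than one entity (candidate CIB links)."""
    index = defaultdict(set)
    for entity, sigs in obs.items():
        for sig in sigs:
            index[sig].add(entity)
    return {sig: ents for sig, ents in index.items() if len(ents) > 1}

links = shared_signature_index(observations)
# "ua-12345" links three entities and "registrant@example.net" links two,
# while "ua-99999" links nothing and drops out.
```

Entities connected by several independent, genuinely unique signatures are the ones worth "peeling back" further.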
Mapping Signature Telemetry
If our initial signature discovery research provides sufficient starting points for further analysis, we can utilize proprietary datasets such as mobile data, netflow telemetry, and domain registration data to map the technical infrastructure associated with these unique signatures.
If a foreign government, a digital marketing firm, or a private intelligence company is spreading disinformation, it needs infrastructure, and that infrastructure creates a telemetry footprint that can be used not only to identify a current CIB effort but also to identify future or parallel, previously unknown efforts.
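To illustrate the idea of a reusable footprint, a minimal sketch might screen newly observed infrastructure against indicators catalogued from a known campaign. All indicators below are invented placeholders, and this set-overlap check stands in for the proprietary telemetry analysis described above.

```python
# Invented catalogue of infrastructure tied to a known campaign:
# IP addresses, domains, name servers, etc.
known_footprint = {"203.0.113.7", "site-a.example", "ns1.bulk-host.example"}

# Invented new observations: entities mapped to the infrastructure
# indicators seen alongside them.
new_observations = {
    "fresh-site.example": {"198.51.100.4", "ns1.bulk-host.example"},
    "benign.example": {"192.0.2.55"},
}

# Keep only entities whose indicators overlap the known footprint;
# these become leads for parallel or follow-on efforts.
overlaps = {
    entity: sigs & known_footprint
    for entity, sigs in new_observations.items()
    if sigs & known_footprint
}
# fresh-site.example surfaces because it shares a name server with the
# known footprint; benign.example does not.
```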
Pattern Identification
In CIB identification efforts, originating IP addresses that correlate with disinformation communications signals can also be mapped to physical locations.
Using proprietary datasets, we can associate those locations with additional previously unknown telemetry footprints that provide further leads for investigation. These physical locations can also help us develop patterns of activity associated with online infrastructure and key personnel.
Overlaying the signatures, telemetry, and infrastructure and real-life activity can help us see patterns that assist in identifying and attributing CIB.
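One crude example of such a pattern is timing: many accounts sharing the same URL within seconds of its first appearance. The sketch below is an illustrative heuristic only; the accounts, URL, and timestamps are all invented, and the 30-second window is an arbitrary assumption.

```python
from datetime import datetime, timedelta

# Invented posting records: (account, URL, timestamp of share).
posts = [
    ("persona-1", "http://site-a.example/story", datetime(2021, 3, 1, 12, 0, 5)),
    ("persona-2", "http://site-a.example/story", datetime(2021, 3, 1, 12, 0, 9)),
    ("persona-3", "http://site-a.example/story", datetime(2021, 3, 1, 12, 0, 12)),
    ("bystander", "http://site-a.example/story", datetime(2021, 3, 2, 8, 30, 0)),
]

def burst_posters(posts, window=timedelta(seconds=30)):
    """Return, per URL, the accounts that shared it within `window` of
    the first observed share -- a simple coordination heuristic."""
    by_url = {}
    for account, url, ts in posts:
        by_url.setdefault(url, []).append((ts, account))
    bursts = {}
    for url, entries in by_url.items():
        entries.sort()
        first = entries[0][0]
        group = {acct for ts, acct in entries if ts - first <= window}
        if len(group) > 1:
            bursts[url] = group
    return bursts
```

Here the three personas fall inside the burst window while the bystander, posting a day later, does not; in practice timing would be just one signal overlaid with the others.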
Location and Identity Attribution
Our next step is attribution: determining who is responsible for the campaign.
There are multiple qualitative metrics we can use to make an informed assessment regarding the capabilities and intent (financial or ideological) behind a suspected CIB campaign.
However, Nisos’ method of rigorously pursuing and analyzing technical leads allows us to back those qualitative assessments with solid technical data.
This methodology gives our clients the peace of mind that our assessments are based not on “appearance,” which can easily be spoofed, but on empirical evidence derived from deep technical expertise.