The Cyber5 Podcast

EP80: The DISARM Framework Helps Bring Focus to the Disinformation Problem

Episode 80 | July 27, 2022

In episode 80 of The Cyber5, we are joined by Executive Director of the DISARM Foundation, Jon Brewer.

We discuss the mission of the DISARM Framework, a common framework for combating disinformation. Much as the MITRE ATT&CK framework is used to combat cyber attacks, the DISARM framework addresses what Jon calls “cognitive security”: all the tactics, techniques, and procedures used to craft disinformation attacks and influence someone’s mind. This includes the narratives, accounts, outlets, and technical signatures used to influence a large population. We also chat about what success looks like for the foundation and the specific audiences it wants to reach so the public can understand how disinformation actors work.

 

Here are the 3 Topics We Cover in This Episode:

 

1) What is the DISARM Framework?

DISARM is the open-source master framework for fighting disinformation through the coordination of effective action. It was created by cognitive security expert SJ Terp. It helps communicators, from whichever discipline or sector, gain a clear, shared understanding of disinformation incidents and immediately identify the countermeasure options available to them. It is similar to the MITRE ATT&CK framework, which provides a list of TTPs that malicious actors use to conduct cyber attacks.
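To make the countermeasure-lookup idea concrete, here is a minimal sketch of how a responder might go from an observed technique to candidate responses. The technique and countermeasure names are invented placeholders for illustration, not entries from the actual DISARM catalogue.

```python
# Minimal sketch: map an observed red-side technique to candidate blue-side
# countermeasures. Names below are illustrative placeholders, not the actual
# DISARM catalogue entries.

COUNTERMEASURES = {
    "create inauthentic news sites": [
        "report the domain to registrars and hosting providers",
        "share indicators with fact-checking partners",
    ],
    "amplify via inauthentic accounts": [
        "report the account network to the platform",
        "publish a counter-narrative through trusted outlets",
    ],
}

def countermeasures_for(observed_technique: str) -> list[str]:
    """Return candidate responses for an observed red-side technique."""
    return COUNTERMEASURES.get(observed_technique.lower(), [])

if __name__ == "__main__":
    for option in countermeasures_for("Create inauthentic news sites"):
        print("-", option)
```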

 

2) Similarities Between DISARM and MITRE ATT&CK Frameworks: Cognitive Security vs Cyber Security:

Cognitive security and the DISARM framework are analogous to cyber security and the MITRE ATT&CK framework. Cognitive security covers the TTPs actors use to influence minds, while cyber security covers actors’ attempts to steal data from networks. MITRE ATT&CK’s list covers the different TTPs of the cyber kill chain:

    1. Reconnaissance
    2. Resource Development
    3. Initial Access
    4. Execution
    5. Persistence
    6. Privilege Escalation
    7. Defense Evasion
    8. Credential Access
    9. Discovery
    10. Lateral Movement
    11. Collection
    12. Command and Control
    13. Exfiltration
    14. Impact

DISARM’s list covers the different TTPs of the disinformation chain (a short worked sketch follows the list):

    1. Plan Strategy
    2. Plan Objectives
    3. Target Audience Analysis
    4. Develop Narratives
    5. Develop Content
    6. Establish Social Assets
    7. Establish Legitimacy
    8. Microtarget
    9. Select Channels and Affordances
    10. Conduct Pump Priming
    11. Deliver Content
    12. Maximize Exposure
    13. Drive Online Harms
    14. Drive Offline Activity
    15. Persist in Information Environment
    16. Assess Effectiveness
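As an illustration of how the chain above can structure an incident write-up, the sketch below orders analyst observations by the DISARM stage they were tagged with. The stage names come from the list above; the observations and their stage assignments are invented for illustration.

```python
# Sketch: lay out a hypothetical incident as a "disinformation chain" by
# ordering tagged observations against the 16 DISARM stages listed above.

DISARM_STAGES = [
    "Plan Strategy", "Plan Objectives", "Target Audience Analysis",
    "Develop Narratives", "Develop Content", "Establish Social Assets",
    "Establish Legitimacy", "Microtarget", "Select Channels and Affordances",
    "Conduct Pump Priming", "Deliver Content", "Maximize Exposure",
    "Drive Online Harms", "Drive Offline Activity",
    "Persist in Information Environment", "Assess Effectiveness",
]

# Observations from a hypothetical incident, tagged by analysts with a stage.
observations = [
    ("Deliver Content", "coordinated posting into regional community groups"),
    ("Develop Content", "cloned news-site template with fabricated articles"),
    ("Maximize Exposure", "paid boosting of the most engaged posts"),
    ("Establish Social Assets", "batch of aged sock-puppet accounts reactivated"),
]

def incident_chain(obs):
    """Order tagged observations by where they sit in the DISARM chain."""
    order = {stage: i for i, stage in enumerate(DISARM_STAGES)}
    return sorted(obs, key=lambda pair: order[pair[0]])

for stage, detail in incident_chain(observations):
    print(f"{stage:35s} {detail}")
```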

 

3) Disinformation: A Whole of Society Problem:

 

While MITRE ATT&CK is mostly a business-to-business framework that enterprises use to defend against cyber attacks, the DISARM framework is both a B2B framework for companies in sectors like technology and journalism and, more broadly, a framework for consumers. Reaching consumers will take much more support from non-profits and public-sector organizations like police forces and education systems.

 


 
Transcript

LANDON: Welcome to the “Cyber5,” where security experts and leaders answer five burning questions on one hot topic in actionable intelligence for the enterprise. Topics include adversary research and attribution, digital executive protection, supply chain risk, brand reputation and protection, disinformation, and cyber threat intelligence. I’m your host, Landon Winkelvoss, co-founder of Nisos, a managed intelligence company.

In this episode, I talk with the executive director of the DISARM Foundation, Jon Brewer. Jon talks about the mission of the DISARM framework, which is a common framework for combating disinformation. Much like the MITRE ATT&CK Framework is used for combating cyber attacks, the DISARM Framework is used to identify what Jon calls cognitive security, meaning all the tactics, techniques, and procedures used in crafting disinformation attacks and influencing someone’s mind. This includes the narratives, accounts, outlets, and technical signatures used to influence a large population. We chat about what success looks like for the foundation and the specific audiences that can help the population understand how disinformation actors work. Stay with us.

Jon, sir, welcome to the show. Would you mind sharing a little bit your background with our listeners, please?

JON: Well, thank you Landon, great to be with you. I’m Jon Brewer. I’m based in the UK as your listeners can probably tell from my voice, and I’ve had sort of three careers so far, really. I’ve been in communications and marketing, in government and nonprofit, and then I’ve had another 10 years of B2B tech and sales and marketing. And then another decade, the recent one has been more in sort of well tech for good, you might say. So it’s sort of like apps that do good things and working with tech companies to help charities and that sort of thing. Three years I’ve had actually with the Global Cyber Alliance. So I’ve just been fundraising and development director there 2018 to 2021, but we’re gonna talk about disinformation misinformation. And for me, that sort of kicked off when Brexit happened here in the UK, and I could tell something was a bit amiss, that things were awry. So I started looking into it back in 2016, and I met the wonderful SJ Terp and Pablo Breuer as well. So they’re the two people who are the design authorities on the framework that we’re gonna talk about. And basically I just got alongside them and have been helping them set up this foundation. The website is disarm.foundation and I’m Jon without an H, jon.brewer@disarmfoundation.

So I’m delighted to hear from anybody. We need partnerships. We’ve got some good funding conversations on the go, but we’re always looking for more funders, especially we are kicking off a piece of work with OASIS Open, which is gonna help us with our governance. And so there’s not a great gap to close, but there is a bit of a gap to close there. So we’re looking out for sponsors as well. So anyone that’s interested to know more or to see if they can help in some way, I’d be delighted to talk.

LANDON: Jon, I think a lot of people, they know what happened. Certainly a lot of people in the United States know what happened within the 2016 election, and Russia’s influence in that. Can you just provide a brief overview as well how the Brexit issue and how disinformation shaped the Brexit issue? Just so our listeners are kind of also seeing and are well aware of how disinformation has affected the UK.

JON: I mean, I suppose we had our very own Steve Bannon over here and there was a guy Dominic Cummings who got pretty pivotal from the ruling party’s side and he came across. I’m sure that someone put it across his desk, the Cambridge Analytica tools, these were things that were tried and tested in military context. And then they were taken out to some other nation states and successfully manipulated elections in say Trinidad and Tobago. And all of this is well documented. You can see all the clips on YouTube. And in many ways, while some of those other countries were a test bed before we got to Brexit, Brexit was itself a test bed before it came to the U.S. and the 2016 election in the U.S.

So there’s loads of documentation on this. The DCMS, which is the Department for Digital, Culture, Media and Sport, ran a select committee in the House of Commons, and all of this came out there; we had Brittany Kaiser and all sorts actually coming and testifying within our committee. Still, even though that was happening down one end of the Houses of Parliament, none of that was actually being seen within the main chamber and therefore on the BBC. All of this sort of manipulation was going on in the background. Some of it was very much in the mainstream media as well, because of course there was Rupert Murdoch, and then there were other media owners who were also on the same side. And then the influence of Putin in this was undoubted. The Russian ambassador, when the Brexit vote went their way, said, “My work is done.” So, I mean, there’s loads more I could say on the subject, but there’s a few.

LANDON: That’s fantastic. And it’s not every day that I come across and meet folks that are passionate about disinformation and trying to frame it in a way that the world can really think about everything that happens from the narratives, the outlet, the accounts, the signature technical analysis, all these technical nuances in a way, and then kind of have it lay out very systematically of how you can have a variety of different, everything from analysis to disruption chains around disinformation, because it is really truly the challenge of our day in a lot of ways.

Starting out, provide an overview of the DISARM Framework. I’m sure a lot of folks have not heard of that term extensively, but certainly they will in the coming weeks and months and years ahead. So I’m just kinda curious: let’s discuss the framework, and then what exactly is the DISARM Foundation trying to accomplish?

JON: I think, yeah, what we’re trying to accomplish, I probably would wanna just preface that with what you were saying about how it is such an important issue for us right now. I mean, I think that’s the rationale for what we’re trying to accomplish and you need to sort of get that context. I think most people now, well, many people, if not most, are now aware of just how the internet scale is a whole new problem when it comes to disinformation, misinformation, and influence operations. So none of these things are new. It’s just the fact that we’ve now got internet scale and it’s not just internet scale. It’s also the ad model and it’s AI and data and behavioral analytics.
And we got hyper targeting and all these things that have come together just in very recent years, meaning that bad actors have got more tools at their disposal than they’ve just ever had before. So that’s really the context. I think most people get that. The threats are really existential for humanity, ‘cuz how are we gonna survive if we can’t all agree who’s won an election or whether greenhouse gases are a real problem? Or your vaccinations, if you can’t agree that they’re actually gonna be good for you or not, we are sort of stuck. And there’s some pretty catastrophic things that are gonna happen if we can’t all have a common view of what we believe. So that’s sort of part of the context.

The other big bit of the context from a DISARM point of view is what we see is the asymmetry between the attacker and the defense and this is nothing new as well, obviously is to cyber security. But the attacker can attack at will, can do what he or she wants anywhere around the world, but if we’re gonna respond to it and defend against it on our side, we’ve all gotta be joined up. So we’ve all gotta get joined up. And it’s a whole of society problem. You need people working together across languages and disciplines and sectors and all that sort of stuff. So there are some real challenges in getting joined up and here’s the rub for us. It’s just that if you haven’t got a common language and you haven’t got a common framework, a common way of actually looking at the problem together, then you’re stuck.

So really, that’s what the DISARM Framework is all about. It’s here to provide that common language and that common shared approach for disparate responders and defenders to work together effectively.

LANDON: This is exactly where cyber threat intelligence was seven or eight years ago, right? Lockheed Martin comes out with the cyber kill chain and that just gives you a methodology of how an attacker gains a foothold in a network. And this is really against that CIA triad, right? The confidentiality, integrity, and availability of data, systems, and networks. So, how an attacker gains reconnaissance, gains a foothold, moves laterally, escalates privileges, and exfiltrates data. That’s the cyber kill chain. So of course, MITRE comes out with their ATT&CK framework that provides the laundry list of ways in which attackers can gain reconnaissance, the laundry list of ways they gain a foothold, et cetera, et cetera, right?

That’s how I understood it seven or eight years ago, and now the MITRE ATT&CK Framework is used by most enterprises when they’re doing threat hunting and network security. Is that similar to the way the DISARM framework is set up? Because when I think of when we do disinformation investigations, it’s almost kind of the same type of thing. Actors are creating fake websites. They’re interpreting SSL certificates and reusing those. They’re doing a lot of different technical things to ultimately create a narrative that gets to a population. So I guess it’s a long way of saying, am I thinking about that the right way? And then who do you want to use the DISARM framework?

JON: Well, you mentioned MITRE ATT&CK. I mean, let’s definitely talk about ATT&CK. Is it like ATT&CK? Yes, it absolutely is, and totally by design. A lot of the people that actually came together with SJ and Pablo to create the framework in the first place, back in 2019, were a multidisciplinary team that came together under the MisinfoSec working group of the wonderful Credibility Coalition. They had a number of people in their ranks from cyber security. So when they started thinking about what framework to use as a base, looking around the world for a model, the obvious one was MITRE ATT&CK. There were so many good reasons for actually basing the approach on MITRE ATT&CK. Partly it’s because of how we see cognitive security. I dunno if that’s the term that’s gonna be well-shared around the world, but you’ve got three aspects of information security. You’ve got your physical security, where you’ve gotta make sure that the boxes are protected. You’ve got your cyber security, where people are trying to get into your networks and your applications. But cognitive security is where people are trying to actually hack people. Yeah, you’ve actually got people trying to hack brains, the brains inside people’s heads around the world, connected within networks.

And so there, you start to see that there’s a very similar landscape, if you like, to which you can apply a framework similar to MITRE ATT&CK. And so, yeah, they came together around the similarities with ATT&CK as well. I mean, it’s using tactics, techniques and procedures, TTPs. It helps codify data that can be shared at scale, machine to machine. You’ve got a framework also in stages, so there’s that kill chain thinking. Similarly, there’s like a red framework and a blue framework. So we can see here are the red things that are happening. They’ve been set out, as you said earlier; you can actually sort of create that list that people can refer to. And then you can see, if they’ve done that on the red side, what are the blue things that we can do in response? And all of that obviously speeds up your ability to identify relevant actions. Then there are loads of other similarities as well. In fact, it’s built on STIX/TAXII, so you can leverage all of these protocols from cyber, which is so much more mature. So why not learn the lessons, use the assets and all that sort of stuff. And then just a final point I’ll probably make is that influence operations sit alongside kinetic warfare. If we’re gonna talk about nation state stuff, you’ve virtually got all of this sitting within what people are calling hybrid or blended warfare. There’s something kinetic happening, but then there could also be a cyber attack happening. There could also be a big influence operation happening. And for all of those three things to be taken in the round is also helpful. So, for all those sorts of reasons, we’ve got a real sort of marriage with the MITRE ATT&CK framework and cybersecurity.
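To make the STIX/TAXII point concrete: because DISARM follows the ATT&CK pattern, a technique can be expressed as a STIX 2.1 attack-pattern and shared machine to machine. Below is a minimal sketch using the Python stix2 library; the “DISARM” source name and the placeholder technique ID are illustrative assumptions, not the official DISARM STIX objects, which a real deployment would pull from the published data rather than hand-rolling.

```python
# Sketch: express a DISARM-style technique as STIX 2.1 so it can travel over
# TAXII or be loaded into a CTI platform. Requires the `stix2` package.
from stix2 import AttackPattern, Campaign, Relationship, Bundle

technique = AttackPattern(
    name="Develop Content",
    description="Creation of text, images, and video for the operation.",
    external_references=[{"source_name": "DISARM", "external_id": "T0000"}],  # placeholder ID
)

campaign = Campaign(
    name="Example influence operation",
    description="Hypothetical incident used to illustrate tagging.",
)

uses = Relationship(campaign, "uses", technique)

# The resulting bundle is plain STIX JSON, shareable machine to machine.
print(Bundle(campaign, technique, uses).serialize(pretty=True))
```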

LANDON: The way that you laid out the DISARM framework, thinking about the MITRE framework: that very much applied to almost any cyber security or information security organization that’s starting to gain the maturity to do threat hunting, right? So it had a vast pool of potential users who could ultimately set themselves up and measure themselves against metrics that were gonna be successful. I’m curious how you’re thinking about the audience for the DISARM framework. I can certainly see the relevance, of course, to the social media companies. I certainly see the relevance to any and all journalism that is amplifying a message.

What else are we not thinking of in terms of the audience and potential stakeholders that are going to be using the DISARM framework, from an enterprise or even a political or government view?

JON: It is a whole of society problem. I’ve heard that said a number of times. There’s all sorts of people that need to get involved here, a little bit beyond cybersecurity. You’re looking at fact checkers, you’re looking at people who have domain expertise around, say, race issues or health issues. So all of these other things come into play, and then there’s marketers as well, ‘cuz there’s no one silver bullet; there’s a lot of things needed. And one of them is sort of counter narratives, and people actually engaging with false narratives online. There’s all these sorts of people that are needed to get involved. We see the framework being used by groups coordinating large-scale responses to geopolitical and health-related events.

As I said, also, analysts who wanna sort of tag reports and share intelligence around. We’re seeing aggregated data sets, so that people can identify patterns and work on actionable intelligence when they find it. So there’s a number of different ways. There’s obviously red to blue, which can be used for real-life incidents, but you can also set up team simulations so that team leaders can make sure their teams are ready for when the fight comes. There are huge numbers of different ways we can do this, and all sorts of people that are potentially gonna get involved there.

LANDON: How are you thinking, from a foundation perspective, about how you measure success? There’s no shortage of challenges when you think about, I like the term you used, cognitive security, right? Because a nation state can outsource and pay some marketing company to propagate a message that is blatantly false, just blatantly not true, and they’re paying them a lot of money.

In today’s world, there’s very little recourse for that, very little for that marketing company to say, “You know what? Maybe I shouldn’t do that.” Whereas if a nation state goes and pays a set of hackers to go exfiltrate data, there’s the Computer Fraud and Abuse Act; there’s at least some leverage against it. That doesn’t really exist if we’re talking about cognitive security, to use your parlance there. What do you see as overall success, short term and long term, for what you want to accomplish with the DISARM Foundation?

JON: Well, I suppose, because it is a whole of society problem, success for us is actually getting the framework used by as many different sets of people as possible. I mean, I didn’t call out the likes of the intelligence community or the police or other groups, or the pretty obvious other ones as well. But actually, could we even get members of the public using a tool like this? That’s a bit of a stretch.

So when you say, what does success look like? This is long term success. Long term success looks like us having a tool that is really nice and simple, could be on a mobile and could actually help people to spot a piece of dis/misinformation and call it out. I mean, we’ve had this with email, so people can report spam with a click of a button and that’s had a big impact. And so, I think sort of mobilizing, democratizing, and actually helping interested individuals that wanna play a part in this around the world, that would be great success. So I think that’s it really. I mean, it’s just basically, for us success will be the widest possible, broadest possible take up of its use.

And then for us, success also means that we as an organization wanna actually remain relatively small. So success for us doesn’t mean ending up being a big organization. So we are talking open source here. We’re not talking about anything that we actually wanna make money out of. What we wanna do is enable the wider world to use something that is free to use. So open standard, open source. Give it out, but just have some governance around that. And for us, we don’t wanna get in other people’s way. We wanna enable that to happen.

And then I suppose, personally, success for me goes back to when I first met SJ and Pablo. I just felt like this is just so important for the world, and yet it was still in its cradle. It hadn’t really got out, because everything that SJ and Pablo had done to date was with zero funding. It’s phenomenal what they’d done with no funding, absolutely phenomenal. And it’s just a lot of hours on the side, also working with an amazing team. They were an inspiration to a bunch of other people who came together in the CogSec Collab, a bunch of other volunteers. So, I’ve sort of seen this thing and I’ve just thought, success is actually getting this thing out of the cradle, getting its nappies on, getting it out to kindergarten, getting it off to early years’ school. And I think that we are there. We’re just about to hopefully start packing it off to early years’ school. And then real success will be when it’s an adult with a life of its own out in the world, recognized as one of the world’s weapons in the armory against disinformation.

LANDON: So wrapping up here, and I’m just trying to think big picture: MITRE ATT&CK, cyber security, a B2B problem, a business-to-business problem. What you’re talking about is almost even more so a true B2C. If we’re using business terminology, you have to get a tangible thing to consumers to say, this is not real; this is fundamentally inaccurate in terms of the information source they’re reviewing. And you’ve gotta be able to combat almost that disinhibition effect that people can get into online when they’re talking to people from behind a computer. How do you get this to the broader public to be consumed? And is it a technology problem, is it an education problem? Kind of just curious how this looks in execution, almost.

JON: Yeah, well, I mean, part of us not being too big an organization is, we’ve just gotta be realistic about our role here. There is no silver bullet to this. It’s a massive problem. And it needs to be tackled in hundreds of different ways: with the platforms, with government policy, with tech, AI fighting AI, and we need big data sets. So the framework piece is the common language piece. But I think the thing that you’ve just laid out there is another big bit of the picture that is not for us, but is critical: education. And so, I know plenty of folks that are working on that side of things. It can start in the schools, it can start in the churches, it can start all over. From my personal point of view, I’ve always seen this problem as something that needs to be very much top down as well as bottom up.

So I think there’s a real need for education of leaders around the world. Be they leaders in churches, leaders in politics, leaders in business. I’d love to see more funding, more effort going into that, because if we can then actually have local pastors actually seeing things being said within the congregation, and they actually understood how social media is working, they could say, “Well, actually I think I recognize what that problem is. I think I know where it came from. I think that I could probably tell my lot on a Sunday morning that actually we need to be aware of, we need to wise up to some of the things that are going on.” All of that education is gonna be a massive part in what we need to do to get on top of this problem.

LANDON: If you don’t mind, what’s next for the foundation?

JON: We have just finished a major update of the framework, and that’s met with really good approval from the community. That’s on top of a merge that we’ve done with MITRE’s SPICE framework. So actually, MITRE took a fork of the original AMITT framework, and many of the enhancements that they created, with their partners at Florida International University, have been merged back in with DISARM to help create this update. So we’re really grateful to MITRE and Florida International University for that collective effort. There are more enhancements coming: an Explorer app that’s gonna help folks get acquainted with the framework, and also spinning up OpenCTI.

I’m not sure if folks are aware of the OpenCTI movement, but basically there’s a platform there that’s gonna help us get the data tagged and shared. So we’re starting on that piece of work as well. And then importantly too, there is the governance aspect. So as I said a bit earlier, we’re working on a relationship with OASIS Open; we want to become one of their projects and take advantage of their smarts, ‘cuz they’ve got long experience in the governance of open standards and open source. So we’re looking forward to working with them. When people take forks of the framework and they’ve created something good that could get rolled back up into the master version for the good of everybody, OASIS can help manage that process.
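For readers wondering what the OpenCTI piece Jon mentions might look like in practice, here is a minimal sketch using the pycti client. It assumes pycti’s OpenCTIApiClient and its report.create helper; exact method and parameter names vary between OpenCTI versions, and the URL, token, and DISARM stage tags are placeholders, so treat this as a shape rather than a recipe.

```python
# Sketch: push a DISARM-tagged analyst report into OpenCTI via pycti.
# URL and token are placeholders; parameters may differ across versions.
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.org", "REPLACE_WITH_API_TOKEN")

report = client.report.create(
    name="Coordinated narrative push around a health topic",
    description=(
        "Hypothetical analyst write-up tagged with DISARM stages: "
        "Develop Content, Deliver Content, Maximize Exposure."
    ),
    published="2022-07-27T00:00:00Z",
)

print(report["id"] if report else "creation failed")
```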

So we just want one framework that’s managed by the community and for the community, agreeing which enhancements should be politely declined and which should be taken on board. That’s the sort of governance process we need. And so we’re looking forward to working with OASIS on that. I suppose the final piece is, we’re actually expecting a few reports to be released in the next few weeks. I’d better not say exactly who they’re from; I’m not sure I’m at liberty to say exactly who is writing those, but one or two are quite high profile. So of course, we are hoping they’ll be supportive of what we are doing, and we’re looking forward to that as well. But yeah, those are some of the things that are upcoming for us.

LANDON: Any details on what the reports will be about?

JON: They’re reports around how the DISARM framework has been used in certain scenarios. There is one report where a really significant body has actually tasked someone with an analysis of the framework, and to actually really look at it from a due diligence point of view and say, does it do what it says it’s doing? How does it work? Will it work? Should we adopt it? And we are hoping that they’re gonna come out and say, “Yes, not just we should adopt it, but the world. The world should adopt it.” And if we get their endorsement, that would be quite a significant thing for us as well. So looking forward to that.

LANDON: Jon, like I said, I can’t thank you enough for joining the show. I absolutely love what you guys are doing and look forward to certainly helping.

LANDON: For the latest subject matter expertise around managed intelligence, please visit us at www.nisos.com. There, we feature all the latest content from Nisos experts on solutions ranging from supply chain risk, adversary research and attribution, digital executive protection, merger and acquisition diligence, brand protection and disinformation, as well as cyber threat intelligence. A special thank you to Nisos teammates who engage with our clients to solve some of the world’s most challenging security problems on the digital plane and conduct high-stakes security investigations. Without the value the team provides day in, day out, this podcast would not be possible. Thank you for listening.