- Despite the recent publicity around the SolarWinds attacks by Russian espionage agencies, supply chain attacks are nothing new, and have been happening for years. They take careful, low-and-slow, methodical reconnaissance and persistence that occurs over weeks, months, and sometimes years. But how have foreign nation state adversaries evolved and escalated supply chain attacks over the years? In this episode, we detail an attack from roughly five years ago, in the 2013 to 2015 timeframe, that attempted to exploit a security company's certificates, the ones used for the company's software releases. Five years before that, in 2007, there was an attempted attack that was very similar in nature. Think about the ability to compromise a security software company and create backdoors in its products. Not only could that be the keys to the kingdom, it's also the keys to the armory that guards the kingdom. Listen to how supply chain attacks start, how foreign adversaries have evolved over the years, with a particular focus on compromising certificate authorities, and understand how certain movements in a network really provide context about what an attacker is trying to access and steal. Finally, and probably the most important lesson, one still relevant to the current security landscape, listen to how compliance checks ensure the right controls are in place to recover quickly, that is, to verify that what was observed during the incident was complete, truthful, and accurate, because oftentimes these are the gaps that compromise disclosures to customers. This is the sixth episode of Know Your Adversary. - We were in the middle of too many projects, a SIEM upgrade, and switching over our threat intelligence to another product when we got a note from one of our subsidiary companies that they saw activity through a router that was connected internally to a VPN concentrator that provided ingress access. - This is Joel Fulton. Joel was a security practitioner before starting his own technology company, Lucidum, an asset discovery company that uncovers blind spots across cloud, security, and IT operations. Setting the stage a bit here, many years ago, he was working for a security company that produced preventative software. Think about the access a security software company has. Its software is supposed to protect enterprises, so naturally it needs the highest level of access into networks, and it gets updates just like any other piece of software. One day a subsidiary flagged an anomalous alert: someone was accessing the company's routers. These events can be normal, but they can also be the bad guys trying to gain unauthorized access. So which one was it? - So the question that presented itself, in the form of an alert rather than an incident initially, is: who was accessing this system? We had done a maintenance outage in order to bounce this router, and provide software upgrades for it. And it was during a time of day, during the week, when the users who were authorized to access it did not typically, read: ever, access that VPN concentrator during the hours we were going to bounce it for maintenance. So we reviewed the logs the next day, and they indicated that somebody had accessed the router, and then had made several attempts to access the inside-facing VPN concentrator. When we first went to pull the logs from the system, the logs had been wiped. That was the point at which we decided this was not an alert. Somebody had taken an affirmative action to remove those logs. - Let's start unpacking this, so it makes some sense.
To understand how backdoored software attacks start, consider that software made by technology or security companies deploys upstream to thousands of enterprise clients, so that software has to be exploited, and this usually happens through the certificates used for updates. Imagine you want to steal a safe from a specific house, and in that safe is a big prize, say, a million dollars, and imagine that safe sits behind five different choke points. So think locked doors, alarm systems, a safe within a safe, you get the idea. If you knew you were going to encounter five choke points, you probably wouldn't break in through the front door in the morning when there's heavy foot traffic. You'd conduct long-standing reconnaissance that would take weeks and months, and learn when your targets were going to be away. You'd determine an entry point that looked completely normal, say, tailgating behind a cleaning lady, and you'd learn where a hidden key was kept, or surveil a key code entry, all the while gaining intelligence on how to get past the five choke points. That's the essence of a supply chain attack, and what Joel is at the beginning stages of describing. Someone gained access to this company's environment by gaining initial access to a router that was connected laterally to the VPN concentrator, which is how remote employees access the environment. So this was an ideal place to be in the network, because it could look completely normal. Further, they hid their tracks by erasing all the logs, so this is really suspicious. - We were able to recover some of the logs that included keystrokes captured from the access to a host behind that router, in front of the VPN concentrator. And from those, we started walking through: what are we looking at here? What did this person, if it was a person, attempt to access? Were they successful? What's at risk? How far might this lateral movement have gone? What do we see from this? And as we're doing this, like most companies' security teams, there's not a lot of longevity among the folks on staff. There was, however, a longstanding member of the team who, interestingly, was no longer part of the SOC, incident response, threat intelligence, or even security architecture, but this person had moved over and was doing the e-discovery work. And as we were walking through this incident, somebody thought, I don't recall who, to bring this person in, and say, "This reminds me of something that happened five years ago." And sure enough, from that person's work papers and their private recollection, once we brought them into the room, this was identical to the pattern of behavior, the access, and the steps taken to attempt to establish lateral movement in an incident that had happened approximately five years earlier, through that same router, during an upgrade window. Inferring from it, I think reasonably, we had a repeat action from somebody who was paying attention. How would they know when we bounce the router? That's where the investigation went after that. The fact that someone attempted to gain access to this system, and was it coincidental that that access attempt was made during this maintenance window? It's maybe a big leap, but your mind likes to make narrative connections, so you want to believe that. But when we uncovered this other historical connection, and in reviewing the logs, the tactics taken once access to that router had been established were identical to what had been documented five years earlier.
We looked into that issue, and that issue elicited a lot of context that caused us an enormous amount of concern. - The steps Joel is outlining here are an actor's initial access and foothold attempts. In any attack, they have to gain initial access and keep that access, meaning they have to establish a foothold so they can survive a reboot of, or logoff from, a workstation or server. So it's unlikely in these early hours of an incident that this could be categorized as a supply chain attack yet. It just looks like any normal unauthorized access attempt to be resolved, but the additional context from the former security team employee now doing e-discovery indicates this could be something bigger. Let's listen to Joel break this down from the top, starting with the malicious router access. - That router connected our internal systems to the internet. The purpose of that connection was to allow remote workers to access internal systems, but to protect those internal systems, we created what's typically called a DMZ, right? We steal a lot from war and the battlefield, so this demilitarized zone ought to have low-sensitivity systems on it, and act as a buffer, where we can pick up logs, we can watch what's happening, kind of like the main corridor in a shopping mall. It's where I can watch the flow of traffic, and then the stores themselves can discriminate who can access them. So too, in this DMZ, once you were through that front door, or router, then whether or not you gained access to any of the storefronts, the other networks that were connected, was monitored, and access was granted separately from that router. So we watched this bad guy walk in through the front door when that front door should have been locked. The front door was under maintenance, and they knew it was under maintenance, so they knew the locks weren't working. They knew that router had a vulnerability on boot. So at the moment, like in a heist movie, when they flip the power and replace the images in the cameras with a static image so the security guards don't know the bad guys are walking through, they knew when that router rebooted, they had a moment where they could access that system. And in that brief moment, they accessed the system, established connectivity to that router, and then, through that router, tried the door handles of all of the stores in the mall. They tried getting into the interior networks that that router was connected to. And when they did this, we were able to see the kinds of things they tried. Those sorts of things gave us a picture that helped us identify the footprint of this attacker. - What Joel is describing is a slightly more technical version of my earlier example of breaking into the house. Understanding what this actor tried to access, and the context of those actions, is ever so critical to understanding your adversary, because your defenses have to keep up with a high standard to decrease the mean time to alert and mean time to respond. So how did they gain the initial access to the router? - It was an exploitation of a vulnerability that was firmware related, so it was early in the boot process. So you've got to shoot a gap, right? You've got to hit this vulnerability in a very tight window during the reboot, and that indicated to us that they knew some things they shouldn't have known. How did they know when this maintenance window was? How did they know when that system went down? How did they know the model of the firmware that was there?
Those were hints that we had been under observation for longer than a drive-by shooter would require. - Oftentimes in these cyber threat hunting and attribution engagements, you have to make a call early on whether you're a target of opportunity or a target of attack. Watching for a reboot of a router during a maintenance window to exploit a vulnerability that will let them basically slide into the crack of the door, so to speak, means the adversary has to be reasonably sophisticated. And this is certainly a targeted attack, not a drive-by shooting, as Joel puts it. But now that the attacker has accessed the router, they need to be able to get off the router, and be able to do more sophisticated actions, steal more sensitive information, or gain more access. This is usually called the lateral movement effort. So how exactly did the attackers get off the router and move laterally in the network? - The actions they took once they had access to the router stopped us from monitoring their access to the router. To continue the analogy that I've started, it's as if you were able to break in that front door of a mall, but you couldn't see anything, and you had to feel your way around. They started feeling their way around, and the way you do that technically tends to be: dump the routing table for me. Show me what networks this router connects to. Show me what the IP addresses are. Dump the table, the ARP cache, so that I can see the MAC addresses that are out there. So the MAC address is very much like a physical address of your home. You might know the zip code, but the physical address gives me information even about your home. And I start understanding, by dumping those tables, what is it that is connected to the thing I'm on? Because the thing they're on isn't the goal. It isn't that valuable to break into a mall when all the stores are locked. What I really want to know is which one's the Hot Topic, and which one's the bank, because one of those is far more valuable to me, and then I need to enter that. I need another break-in attempt. So that was the position that they were in, doing that secondary reconnaissance. Now that I have a foothold, what can I see from this foothold? And then logically, once I see what my potential next targets are, now I'm looking for means of accessing them, either exploiting vulnerabilities, or trying commonly used username and password combinations. I'm looking for a means to access that secondary and more important target as I move laterally. - Now that lateral movement is taking place, they needed to get somewhere more sensitive. This usually happens by escalating to administrative privileges in the network, because those carry the elevated access. In physical security parlance, it's like getting to the building maintenance person who carries around the keys to every door. - The first thing we saw was their attempts to establish network connectivity. Remember, they wiped the memory of that router, so a lot of what I'm about to say is inference. And we presume they had dumped that ARP table, because they didn't do a ping sweep of all the connected networks. Now, there's a couple of reasons for that. One is that a ping sweep is pretty noisy: if you ran through the mall and rattled all the doorknobs to see if any of them were open, that's very noisy. You're going to get picked up by security cameras in those stores in the mall. You're going to make a lot of noise.
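- To make the quiet alternative concrete, here is a rough sketch of the passive look-around Joel is describing. It's purely illustrative: the addresses are invented, and the text is only loosely modeled on the ARP output an enterprise router might return. The point is that nothing new gets sent onto the network; the attacker simply reads the neighbor state the router already holds.

```python
import re

# Hypothetical ARP cache text as an attacker might capture it from a
# compromised router (addresses invented, format loosely Cisco-style).
arp_cache = """
Internet  10.10.1.1     5   0050.56ab.1c01  ARPA  GigabitEthernet0/0
Internet  10.10.1.22   12   0050.56ab.2f44  ARPA  GigabitEthernet0/0
Internet  10.20.5.10    3   0050.56ab.9a77  ARPA  GigabitEthernet0/1
"""

# Pull out (ip, mac) pairs. This is purely passive: nothing is transmitted,
# so there is no ping-sweep noise for a defender to alert on.
neighbours = re.findall(r"Internet\s+(\S+)\s+\S+\s+(\S+)", arp_cache)

# Group the neighbours by /24 to see which connected networks exist, and
# therefore which "storefronts" might be worth a second break-in attempt.
subnets = {}
for ip, mac in neighbours:
    prefix = ".".join(ip.split(".")[:3]) + ".0/24"
    subnets.setdefault(prefix, []).append(ip)

for prefix, hosts in sorted(subnets.items()):
    print(prefix, "->", hosts)
```

A ping sweep of those same ranges would touch every address, and that is exactly the doorknob-rattling kind of noise Joel is talking about.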
- An intelligent attacker won't make that kind of noise if their goal is to successfully move to the next stage in that lateral movement. And lateral movement really just means moving from one hop to another. What's connected to me? Does it have something more valuable? Can I move to it from where I am? So they were targeted. And we didn't see this from the router, so we don't know that they dumped the ARP table. We don't know that they pulled information about connectivity, but from the logs of another system that they didn't access, and because of their selectivity in accessing other connected systems, we believe that they had done that. They targeted certain systems, and the systems that they targeted included systems that managed digital certificates. Digital certificates can be a particularly valuable asset for an attacker, because they're a particularly valuable asset for most companies. It's the way I prove I am who I am when you're connecting to me. So when you go to a website and it says HTTPS, and there's that little lock, there's that little key up in the bar, and you double click it, and Mozilla says, "Yes, you're absolutely connected to the person you think you're connected to." Or when you go to a site and it says, "Warning, this site may be an imposter." What they're saying is there's no certificate for the site. But the other place we use this is to sign, to authenticate, software releases, and that is just as important as authenticating the website that you're going to. And that certificate system was used for that. - As I said earlier, clearly this is a targeted attack, so the attackers are pretty sophisticated. The attackers went right after the digital certificate authorities. They didn't attempt to escalate to domain administrator, which is common in nation state attacks because it gives the attacker God-mode access to a network, controlling all authentication to sensitive servers, data stores, and databases. Instead, they went right for the certificate authorities, which could mean they were thinking upstream to the security company's clients, not just sensitive information lying on the security company's databases or file servers. So the next question is, are these certificates used for the company's clients, or are they just used for internal systems access? - The purposes of the certs were for internal systems, and they should not have been accessible, but everybody does a workaround, and remote workers needed access to something, and had skipped the security review process, making them available on a system they felt was secure, but that was not behind the layers of defense we had intended. So the question you ask at this point: did that router really know enough to give that attacker insight into the contents of a connected system? That's not only unlikely, it's probably impossible. Routers don't store that type of data. But this other member of the team, who again had moved to a different position, we brought him in owing to his historical knowledge of a similar type of movement. So same router, same maintenance window, this guy had been on that incident. We brought him in on this, and this prior move had been connected to an incident that had taken place simultaneously across many software companies, including security software companies.
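- Before Joel explains where that earlier incident led, it's worth pausing on what a signing certificate actually buys, because it's the crux of both attacks. When a vendor ships a release, it signs the bits with a private key, and every customer's machine verifies that signature before trusting the update. Here is a minimal sketch of that check, using the common Python cryptography package purely for illustration; no vendor's actual release pipeline looks exactly like this.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Stand-in for a vendor's code-signing key. In practice this lives in an
# HSM or a tightly controlled signing service, never in application code.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = signing_key.public_key()

def sign(release: bytes) -> bytes:
    return signing_key.sign(release, padding.PKCS1v15(), hashes.SHA256())

def verify(release: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, release, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

legit_update = b"vendor product update 4.2.1"
backdoored_update = b"vendor product update 4.2.1 plus attacker implant"

# A tampered file fails verification as long as the key stays out of reach...
print(verify(backdoored_update, sign(legit_update)))        # False

# ...but an attacker who can reach the signing key (or the signing system)
# produces "official" updates that every downstream client will install.
print(verify(backdoored_update, sign(backdoored_update)))   # True
```

The verification step can only answer one question, whether this was signed with the vendor's key, so whoever controls that key, or the certificate authority behind it, can ship whatever they like.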
- That was, at the time, I don't know that we called them APTs yet, but it was attributed to a foreign country, and it involved the insertion of malicious code into products and product updates that were signed by the software vendor releasing them. So the goal, certainly part of the goal, was obvious, and that is: if I can put a back door into software that you trust to protect you, then I not only have the keys to the kingdom, I've got the keys to the armory you're using to guard the gate. - For anyone that's been following cybersecurity news lately, this is very similar to the beginnings of the SolarWinds attack that ultimately culminated in certificates being used to push legitimate software release updates by Russian intelligence agencies, only it's the bad guys' code that gets pushed with the software updates, giving them the initial access vector. This can be the holy grail, because now an attacker doesn't need to try to phish a user via email and trick them into installing malware from a link. That's pretty common for initial access through email. Using the digital certificate method as a back door is much more disguised, and less noisy. So what was the difference between the attack that happened in 2007, and the attack that happened five years later with the security company? - The first difference is the attack that happened previously was very widespread and simultaneous. I guess I can't assert that that's different than SolarWinds, because SolarWinds disclosed. Who knows who did not disclose? But what had happened previously is a number of companies got together, and shared the attack that they were experiencing, brought law enforcement in, and it was apparent to all of us that we were all being simultaneously attacked, using similar methodology, with a similar end goal: to compromise certificate systems, to insert false certificates, so that a non-authorized entity could aver that official software was being produced, and no systems could tell the difference. You have an official Windows update. I don't think Windows was involved in this. You have an official Windows update, you click install, and it's the bad guy's software running. That's the holy grail of pushing: you don't need a phishing attack, I can push it through an update system. So that's what had occurred five years previously, and the same approach, the same access point, the same methodology had been repeated in this attempt five years later. - Now we see what Joel was talking about when he says not only do they have the keys to the kingdom, but they also have the keys to the armory that guards the kingdom, as this was security software that was supposed to protect all these different companies. And the attackers' ability to reproduce the certificates on an update that is now malicious, but looks completely normal, is frankly the ultimate backdoor, one that won't raise security alerts. You think this is any less of a threat now? Of course not; it's only going to get worse. Supply chain attacks like SolarWinds, which, by the way, had security functions, get all the headlines, but it's also clear these types of attacks were happening well before 2010. However, we can assume that the nation state attackers' capabilities weren't at the same level then, since they were caught so early in the process. - Think about the principle behind it. We've long had Kerberos attacks. We've long had admin replacement, admin look-alike attacks.
We've long had sudo attacks, and it's the basic principle of: I don't have to exploit a system if I have authorization to use it. So it's an old, old attack and methodology. My guess is they were looking to compromise a certificate-signing authority, so that they could move upstream, and instead of attacking software vendor A's certificates, I want to attack the entity that issues not only theirs, but all the other certificates. - To clarify what Joel was saying, Kerberos and sudo are network authentication and administrative functions that allow a user to run with the security privileges of another user. These are common IT administrative functions, but they're also used by hackers to blend in with normal traffic. And they're often employed after an attacker gets in the front door, usually by a phishing attempt. Compromising the certificates is just another way to get in the front door, and that's what's happening here. So how did Joel detect and respond? As we often say in the security industry, it's not about how a company is compromised, it's how they respond in a manner that keeps the security alerts and incidents from becoming breaches. - What had happened earlier had put in place, and this is going to sound terrible, compliance checks. And we had been in the process of doing both of the updates for those certifications. And as a consequence, the owners of the systems had been required to validate the controls that were in place, including privileged user account updates, hardening of the system, uniform deployment of security software, and matching of the technical controls on a system with the requirements. And there had been, like there always are, a number of deficiencies, but all of those systems had been tightened. And in fact, the logging system that was in that DMZ, that wasn't compromised, because it was listen-only, a packet capture of the wire traffic going through. That system had been put in place owing to a gap in the prior controls process. So although the router was being bounced, and I think there's probably some fault on the compliance side too, but I'll share that in a minute, the attempts to access the adjacent systems immediately triggered alerts, and that's what spun up the incident. And so the first thing we did after figuring out this is more than an alert, this is an incident, is we killed connectivity. After getting executive approval, and understanding the disruption to the business, it was agreed that the business disruption was worth the isolation of this incident, the investigation, and then the recovery to normal business operations. And I think that was accomplished in something like four hours, which, on a relative scale, was reasonable. And we were able to verify that there was no other activity. We were able to match up the logs of access to internal systems with the packet capture logs. And because there wasn't any deviation between the two, we were able to infer that logs had not been deleted from internal systems, which means we could trust the logs to tell us who did or didn't access those systems. And so based on our investigation, that incident stopped, went no further than the breach of that router. And because it happened during off hours, it stood out. There was very little access to systems other than by service accounts, that is, accounts that run automatically and are used for maintenance activity. There weren't interactive user accounts. So kind of like in that mall example, there wasn't any foot traffic, and so you could see the one guy that's walking around the mall.
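- That off-hours point deserves a quick illustration. A maintenance window should contain service-account activity and essentially nothing interactive, so even a single interactive logon stands out. The sketch below is hypothetical; the accounts, timestamps, and working hours are invented, but the filter is the whole idea.

```python
from datetime import datetime, time

# Hypothetical authentication events: (timestamp, account, logon type).
# During a maintenance window you expect service-account activity and
# essentially no interactive logons at all.
events = [
    (datetime(2012, 3, 10, 2, 14), "svc-backup",   "service"),
    (datetime(2012, 3, 10, 2, 31), "jdoe-admin",   "interactive"),
    (datetime(2012, 3, 10, 9, 5),  "build-runner", "service"),
]

BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)

def off_hours(ts: datetime) -> bool:
    return not (BUSINESS_START <= ts.time() <= BUSINESS_END)

# The one guy walking around the empty mall: interactive access at a time
# when only automated service accounts were expected.
suspicious = [
    (ts, account)
    for ts, account, logon_type in events
    if logon_type == "interactive" and off_hours(ts)
]
print(suspicious)
```

In a busy window that filter would drown in legitimate hits; in a quiet one, it flags exactly the foot traffic that shouldn't be there.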
- So it was fortuitous that those things were in place, but ironically, the reason they were patching that router is that it was one of the control deficiencies found in the compliance audit. So they would not have bounced the router, at that time, had the compliance process not required it, but they also would not have had the controls in place to mitigate the subsequent steps of that breach had the compliance audit not been there. - Keying in on what Joel was saying here, all the compliance checks that had been put in place were critical to understanding and verifying that they could see all the malicious activity that was happening. This is why so many breaches are in the news. For every breach that's in the news, the organization almost certainly did not have the level of visibility that Joel is methodically describing here. The attackers either cleaned the logs or, frankly, the organization didn't collect them to begin with. Further, listeners probably don't have an appreciation for four hours as a response time, not only to triage an alert but to respond and remediate it, back then, as security technology was not as advanced as it is today. Attacks that have occurred over the last 10 years have sometimes remained undetected for months at a time. The critical metrics to keep incidents from becoming breaches are low mean time to alert, to respond, and to remediate. So how long did the attackers have access to the environment? - Matching up the reboot time from the change control logs to the time the incident started, to the time we pulled the plug on it, it was something like 47 minutes; it was under an hour. I think that mean time to resolution was something like six hours, so from alert to return to normal business operations. Mean time to respond was minutes, because the SOC caught it, triggered an alert, did a review, but as you know, reviews of alerts don't happen contemporaneously with the alert. You've got a backlog of alerts, typically, but because of the sensitive nature of this subsidiary, it was a higher priority alert. So mean time to quarantine was that: less than an hour. The lessons learned, I thought, were interesting, and that is that people diminish the value of compliance. And what's interesting to me is if you take the underlying work of compliance seriously, you actually build some better foundations. It's the other aspects of compliance that seem bureaucratic, policy oriented, et cetera, that people tend to diminish. But in this case, it actually served as a valid health check that prompted some controls that, had they not been in place, we wouldn't have known, not just that it happened, but where that lateral movement went. We wouldn't have had confidence in the analysis, because we wouldn't be able to tell whether these logs were or were not valid, whether they had integrity. Had that been the case, had we not been able to do a comparison of logs captured from packets with logs on systems, to be able to validate that I know what I'm looking at is truthful, valid, and complete, then we would have had to notify tons of customers that you can't trust your certificates, and we have to rebuild a whole new process. It would have been devastating to the business. - This is a critical aspect of security, as so many breaches occur because of visibility gaps in what has actually happened inside of a network.
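What gave Joel's team that confidence was having two independent records of the same activity: the logs on the systems themselves, which an intruder could in principle tamper with, and the record reconstructed from the listen-only packet capture in the DMZ, which the intruder never touched. If the two agree, the host logs can be trusted. Here is a minimal sketch of that comparison; the entries and their format are invented for illustration.

```python
# Access records reconstructed from the listen-only packet capture in the
# DMZ: (source, destination, rough timestamp). Invented for illustration.
capture_log = {
    ("10.10.1.22", "cert-mgmt-01", "2012-03-10T02:31"),
    ("10.10.1.22", "cert-mgmt-02", "2012-03-10T02:44"),
}

# The same accesses as reported by the internal systems' own logs.
host_logs = {
    ("10.10.1.22", "cert-mgmt-01", "2012-03-10T02:31"),
    ("10.10.1.22", "cert-mgmt-02", "2012-03-10T02:44"),
}

# Anything seen on the wire but missing from a host's own log suggests that
# host's log was wiped or altered, which is what happened on the router and
# what did not happen anywhere behind it.
missing_from_hosts = capture_log - host_logs
unexplained_on_hosts = host_logs - capture_log

if not missing_from_hosts and not unexplained_on_hosts:
    print("Host logs match the independent capture; treat them as trustworthy.")
else:
    print("Discrepancy:", missing_from_hosts or unexplained_on_hosts)
```

It is a simple set comparison, but it is the difference between saying "we think they went no further" and being able to show it.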
How do you know what you were looking at is truthful, valid, complete? That's really the bottom line. Never mind the security tools you put in place to detect malware or malicious lateral movement. Do you have the ability to see and detect what systems were accessed, even by legitimate users? If you don't, how are you going to see what the bad guys are doing, especially if they clean and wipe the logs? This is the critical aspect of so many governance, risk, and compliance programs within security teams. Does it really matter to attribute your adversary if you can identify, neutralize, and shut them out in less than six hours, because you have controls in place through a combination of compliance checks and amazing subject matter expertise? My argument would be probably not. However, in the current state of cybersecurity, so much different tooling and alerting is needed, because companies small and large have a hard time doing the basics to even have visibility into what the legitimate activity looks like. - There are occasions where you let people play in a sandbox. This was not that. This was right after our crown jewels, and those crown jewels carried significant complications and consequences. So it was more important to us to get enough attribution to foreshadow where we should scrutinize than it was to name the attribution. This was one of the major building blocks that motivated me to rely on the NIST framework, for a specific reason. There are five phases, or steps, in NIST. You identify, you protect what you've identified, you detect if something goes wrong with what you're protecting, you respond to what you've detected, you recover. So identify, protect, detect, respond, recover. And what I learned through pain, what I learned through this incident, was the importance of being able to rely on what you're detecting. And it sounds obvious when I say it, but the realization mid-incident that if our logging system has been breached, I am completely at sea, and I've got to go to the executives and say, "I don't know, and I can't know, and no one can know." That is the worst position to be in as a defender during a security incident. And so the fact that we had identified where logs were possible, and we'd protected this other logging system, meant our investigation was coherent and had integrity. We could actually decide: yes, they have gone this far; no, they haven't gone farther. When people talk about incidents, they focus on jumping in front of the bullet, but a little more thought reveals it's often harder and more important to delimit what the attackers were able to do, what they tried, and whether they were successful. Nobody wants to go to a customer and say, "We think we stopped somebody that was trying to access your information." Well, are you sure? How do you know you're sure? Being able to answer that question is paramount. - For background, the NIST that he's describing is the National Institute of Standards and Technology, under the Department of Commerce. They have a compliance framework often used by security teams. So what are the bigger lessons learned, and where could threat intelligence have played a part? Threat intelligence wasn't used extensively in the 2007, 2008 time frame, and it has evolved under the primary premise that a preventative strategy, security software implemented alongside primary IT functions, is in and of itself incomplete.
Organizations have to go outside of the firewall, interact with the adversaries, and gain visibility through external threat hunting to really gain the necessary context for what is actionable against an organization. - If we had known who that actor was five years ago, and what their tactics, techniques, and procedures were, we would have been better prepared to alert on signs of their reconnaissance. We would have known they like to jump in when a router bounces, like, even that specificity. So therefore notification of a maintenance update needs to be confidential. Like, there are things we would have done materially differently if we had a good picture of that attacker, but you always, no matter who you are, succumb to the assumption that they won't notice. The fact they noticed us bouncing a router, and caught it in the window of opportunity, opened my eyes. So that, I think, was one important, significant lesson learned. And the second is, I know it's not cool anymore to talk about moats. It's not cool to talk about zones of control. We talk about molecularizing the controls, and controlling data wherever it goes, but that didn't work to control the coronavirus curve, and it doesn't work to control data exfil. If I can't see that bad guy surveilling my system, then my first awareness that he's there is when he makes contact with my system, successful or unsuccessful. And that ability to monitor, and alert, and notice early, use the MITRE ATT&CK framework. That ability to notice early does yield too much noise to your SOC, to your SIEM, to your alerting team. But over time, with algorithms marinating in the data, you'll be able to tune that. And it's better to turn that noise up, and then later turn the gain down, than to say, I'm only going to look at things that are further right in that attack chain, because often it's too late. Getting outside expertise, having somebody on retainer, having somebody that brings in that outside intelligence is crucial. If you've ever been pulled over for a traffic infraction and tried to talk your way out of a ticket: you may do that once every two years, but that officer, she's been pulling people over 30 times a day. She's got far more experience than you do with talking out of tickets. So too, when you have these incidents, the value of expertise that has seen hundreds, thousands, tens of thousands of these alerts, incidents, issues, to be able to know where to look, to point the team, to have that fingerprint, that TTP collection, in their short-term memory or in their work papers, that gives you an advantage. You know, we talk a lot about the OODA loop. Often we talk about it without understanding it, but that is what you are engaged in. That attacker got the first move, and you were blind to it. If you can bring in outside, broader, seasoned expertise that can know this is where they're going next, that's how you get ahead of it, and cut that loop short. - The MITRE ATT&CK framework is produced by the MITRE Corporation. This is another framework that gives very technical details of how the attack chain often happens. The first steps are reconnaissance, and the last steps are exfiltration, when an attacker achieves his collection objectives and steals the information. So in this reconnaissance case, this is when the attackers were surveilling that router to study when it was going to be bounced, so they could exploit the vulnerability and gain access to the network.
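Laid against that framework, the steps in Joel's story map roughly like this. This is editorial shorthand, a loose illustrative mapping rather than an official ATT&CK annotation of the incident.

```python
# A loose, illustrative mapping of the steps described in this episode to
# MITRE ATT&CK tactic names; not an official annotation of the incident.
incident_chain = [
    ("Surveilling the router and the maintenance schedule", "Reconnaissance"),
    ("Exploiting the firmware flaw during the reboot",      "Initial Access"),
    ("Dumping routing and ARP tables to find neighbours",   "Discovery"),
    ("Probing the adjacent certificate-management systems", "Lateral Movement"),
    ("Abusing signing certificates to backdoor updates",    "Defense Evasion"),
    ("Pushing implants downstream and pulling data out",    "Exfiltration"),
]

for step, tactic in incident_chain:
    print(f"{tactic:>16}: {step}")
```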
And then once they gained access to the network, they moved laterally, then they went after the certificate authorities, and then they would ultimately go after the clients, which is the exfiltration phase. Threat intelligence is absolutely critical to picking up this type of reconnaissance, and to working with outside experts, like VSOs, that keep incidents from becoming breaches. Moving on, how about documentation, so the past can be retold in an expeditious manner? - We had short-term memory. We had forgotten what happened, the significance of it, the practices they used, and the fact that they had done this work to simultaneously attack several software companies. They're not going to give up easily, but we missed that. And if we hadn't, we would have then realized: therefore we need to monitor things on the outside, and we need to time our maintenance updates, and they need to be confidential. So we need to both protect them, but also identify who's attempting access when we do this. We didn't do either one, and therefore we had six hours of a breath-holding moment. - So where does it all end, and how is the current skill set gap meeting the current threat? - I think it will get worse before it gets better. I think that procurement happens too promiscuously. I think that cloud has made things more difficult to identify, protect, detect, and respond than it has made things easy, and I think that we're losing a lot of our technical chops in most places. When we were on prem, I had a Unix guy. I had a Windows gal. There were people that were experts in these areas. And now the push is everyone's becoming a generalist, and as people become more general in their skills, they're not being replaced with people with specific, technical knowledge. And I'm talking in broad sweeps here, but as a generality, that's what I see. And as a consequence, these supply chain attacks, these sub-components to your software that you bring in, who's evaluating the open source components that you integrate into your product? Then go to SolarWinds. All of those I think are going to continue to become a more manifest problem. We'll throw products at them, products that need to be run by specialists, so those will not be largely successful, and I think pride in craftsmanship is something that was a big deal that has become less of one as we've gotten more abstract in our use of technology, and that I think will point us towards a solution. - So what is the solution to problems like supply chain attacks? - I think it's boring and foundational: documentation of systems, patching, maintenance, updates, segregation of systems, understanding the data that are on various systems. Those are really foundational, and skipping them is going to continue to allow ransomware to devastate us, because we aren't identifying what systems hold, where they are, segregating them by zones, and dealing with security commensurate with the risk of those systems. - Thank you to Joel Fulton of Lucidum for joining us on Know Your Adversary. Cybersecurity is hard. It's really hard, as so many technical nuances, in collaboration with information technology, go into the day-to-day efforts of defending a network. It's not just the investigations and advanced tooling that catch advanced persistent threats. It's really about the basics of compliance, patch management through rigorous updates, segregation of systems, logging, and asset management that keeps security incidents just that, incidents, not breaches.
When the basics are not done correctly, and a security team does not have confidence that what it's seeing is accurate, truthful, and complete, breach notifications will be inevitable. Thank you for listening to Know Your Adversary. Every other week, we will bring you a new cybercrime attribution investigation that is representative of the work of MISO software operators, past, present, and future. If you have any good stories to pitch, please reach out, as no two investigations are the same, and it is simultaneously fascinating how clues come together to bring context to crimes that victimize enterprises. For more information, please visit www.misos.com. Thanks for listening.