A Postcard From the Future - An Interview with Dr. Ron Ross
Show Notes:
On today’s episode we welcome Dr. Ron Ross, who heads up the Computer Security Division at NIST. Dr. Ross has something of a rockstar reputation in the cyber security world, as his work at NIST and the resiliency framework his team created have pushed the field forward notably. In our conversation, we look at Dr. Ross’ role at NIST and how it relates to the current particulars of the cyber crime world. We also cover the increasingly popular topic of ‘resiliency’, which now seems to trump the idea of ‘security’ in common parlance. Dr. Ross details how this concept plays out in the systems he is helping to create, and how resiliency can strengthen all cyber fortifications. He then comments on the three prongs of diversity, deception, and dynamism, and how they are configured against threats. We finish off by looking at virtualization and spreading information on cyber resiliency. The work Dr. Ross has been spearheading has made leaps for the field and will guide the practices of many for years to come, even as tactics evolve and change, so for a direct line to a leading expert, make sure to listen in to this episode.
Key Points From This Episode:
Dr. Ross’ job specifics and NIST’s role in cyber security.
The current climate of cyber danger and how this relates to the Internet of Things.
Cyber resiliency as compared with the idea of cyber security.
Countermeasures and tactics that typify cyber resiliency.
The characteristics of diversity and homogeneity in security systems.
The idea of deception as a tactic in defense.
Dynamism and reconfiguration in the ongoing battle against adversaries.
Minimizing the time that a cyber criminal has to operate within a system.
Utilizing virtualization and shielding in the framework.
Accelerating dissemination of the information available on cyber security.
And much more!
Links Mentioned in Today’s Episode:
Dr. Ron Ross — https://www.nist.gov/people/ronald-s-ross
Dr. Ron Ross on Twitter — https://twitter.com/ronrossecure
Dr. Ron Ross on LinkedIn — https://www.linkedin.com/in/ronross-cybersecurity/
NIST SP 800-160 Vol. 2 (Draft), Systems Security Engineering: Cyber Resiliency Considerations for the Engineering of Trustworthy Secure Systems — https://csrc.nist.gov/publications/detail/sp/800-160/vol-2/draft
NIST — https://www.nist.gov/
NIST on LinkedIn — https://www.linkedin.com/company/nist/
NIST Cybersecurity Framework — https://www.nist.gov/cyberframework
SINET — https://www.security-innovation.org/
OPM Breach — https://www.opm.gov/cybersecurity/cybersecurity-incidents/
Cambridge Analytica and Facebook — https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
Introduction:
Welcome to another edition of Cyber Security Dispatch. This is your host, Andy Anderson. In this episode, A Postcard From the Future, we talk with Dr. Ron Ross, a cyber rockstar and member of the cyber security hall of fame. I'm truly humbled to have him on the show, and it has been one of my favorite conversations so far. We spoke just after NIST released a draft of the new cyber resiliency framework, which gives an incredibly lucid and compelling vision of how systems can be designed to move beyond merely being secure to being resilient: to survive, respond, and adapt. I do not think it is an overstatement to say that this document will chart the future of cyber defense, likely for the next decade.
TRANSCRIPT
[0:00:08.5] Ron Ross: Okay. My name is Ron Ross. I work at the National Institute of Standards and Technology, and my particular division is the Computer Security Division. Our basic mission is to produce security standards and guidelines for the federal government. A lot of folks in the private sector are choosing to use our guidance, because the protections we espouse apply equally to private sector systems and to those in the federal government.
It's a really exciting place to work, and this is a very exciting time to live in, with all of the new technologies we're experiencing. Of course, we also have an incredibly difficult and challenging problem ahead of us in this country. We want to take advantage of this great technology to the maximum extent, whether we're talking about smartphones, or tablets, or those devices in your house now that can listen to your requests and actually make things happen.
That includes the autonomous vehicles we're seeing now and all the automation going into medical devices. We're living in an exciting time, and we're seeing the full convergence of cyber and physical systems, and that's a pretty important thing. A lot of people hear the term Internet of Things; to me, that's described by one simple phrase: we're pushing computers to the edge today. That edge can be everything from power plants, to Wall Street financial systems, to Fortune 500 companies and tech companies out in Silicon Valley. We're pushing computers into medical devices, into sensors, into almost any place. The typical new automobile has dozens of computers and millions of lines of code. All of those computers are helping to bring greater safety to the car, through the sensors that control the braking system and the critical navigation systems.
It's just a great time, but the central issue we have to deal with in this new world we're building is complexity. We're literally talking about billions of lines of code and hundreds of millions, even billions, of devices in the Internet of Things, and all these things are hooked together through the Internet in many cases. That's what we've been dealing with.
How do we deal with all that complexity when we have to understand, very clear-eyed, that we have adversaries out there who want to steal our intellectual property and our national secrets? We live in a world where the technology can benefit us; we can take maximum advantage of it to be productive and have a strong economy and a strong military, but at the same time, how do we protect ourselves in cyberspace?
That's really what the NIST standards and guidelines are focusing on, starting in 2018 and beyond. We have a series of publications on systems security engineering: how do we build security, and privacy for that matter, into the lifecycle? The analogy I use is the automobile industry. My first car, and this goes back a long, long time ago, as I just had my 67th birthday, barely had a seatbelt. Then of course we had airbags, and for many years they were optional; you could have a choice between the airbag or the 8-track tape player, which is how we used to listen to music back in the 70s.
Eventually, the airbags became standard equipment, and then there were steel-reinforced doors, and then improvements to the engine compartment. In current modern automobiles, we have all of those sensors and safety systems. Those things get built into the automobile, and the objective is to make a safer car, so if you are in an accident, the damage to the vehicle and to you as a driver or passenger is minimized. That's the same thing we're trying to do with our systems security engineering publications: put out guidance so that when industry builds IT components or systems, or when you're on the federal side letting contracts for new systems, we can have a greater level of assurance that we get the trustworthiness we need in those systems, especially in critical applications where they have to work under stress.
It doesn't happen automatically. You have to work to get better security and privacy features into these systems. That's what we're doing with the engineering work. The most recent publication follows our flagship engineering guidance document, 800-160 Volume 1, which came out in 2016.
We recently put out Volume 2, on cyber resiliency, for public review; in fact, almost every document we put out goes through a public vetting process. What we're really trying to say there is that not only do we want to build security into these systems at the start of the lifecycle, and actually all the way through the lifecycle, but what do we do with the systems that are already deployed and in operation out there?
Ninety-five percent or more of our systems are legacy, installed-base systems. What do we do with those systems, and how do we make sure they're as resilient as they need to be? The idea is that you do the best you can to put up a set of defenses, whether you're using the classic safeguards of two-factor authentication, or encryption, or access controls, whatever it might be. But what happens when the adversaries get through that initial line of defense? We know that happens, because we have a lot of evidence that even people who do this right can get hit at the opportune time for the adversary. What do you do when you can't stop them at the front door? You harden the target first, and then the next objective is to try to limit the damage the adversary can do once they're inside. That's been a very sad story, because you've heard everything from the OPM breach, where we lost 22 and a half million records, to story after story about breaches where they don't just get a little data, they get a lot of data.
We can do better than that. The cyber resiliency objectives in our new publication, 800-160 Volume 2, talk about the goals and objectives of a cyber resilient system, and they can even be applied to legacy systems. Then, what techniques and approaches can you actually use that can be effective in making those systems more resilient? Limiting the damage, and making sure they're survivable, so you can carry out your critical missions and business functions after the attack has occurred.
That's what we're trying to do. It's really a clear-eyed view of reality, understanding there are very sophisticated threats out there. We call those the advanced persistent threat: they try to get into your system, establish a long-term presence, and then continue to steal information from you. That could be anything from the next-generation design of a military aircraft to some cutting-edge technology that Silicon Valley is working on for the tech industry.
This all goes back to one central idea. You have to defend yourself in cyberspace in order to have a strong economy and be able to defend the country, because those things are all tied together. You can't have this massive bleed of intellectual property and state secrets that are part of the national security community and survive long-term as a country. That's why we're so passionate about this work.
We have a lot of different tools now that our customers within the federal government can use, and the private sector has access to those same tools and guidelines and standards on a voluntary basis. It's a big problem, but we've had challenges like this before. I recall back in the 60s, when we were in that missile race with the old Soviet Union. President Kennedy gave a very famous speech that challenged the country and said we're going to go to the moon and do the other things by the end of this decade, not because they are easy, but because they are hard.
That was in 1961, when that missile race was going hot and heavy. Eight short years later, on July 20th, 1969, we landed the first man on the moon. The important thing about that story is that when the president challenged the country, he didn't just challenge the government; he challenged what I call the essential partnership: the government, industry, and the academic community.
That's when NASA was born, and that's when we had this incredible coming together of the best and brightest minds in the academic community, fueled by industry, who could actually take those ideas and build the huge rockets and spacecraft for the Mercury, Gemini, and Apollo programs, and then the shuttle.
The government was providing leadership and direction and had the vision of where we had to go to meet that existential threat. I don't think people today view cyber threats as existential, largely because they fly below the radar. You really can't see them when people are stealing things from you, when that rootkit has been installed in your operating system and you think everything's going just fine, but they're stealing stuff from you every day.
Most of these attacks occur and we don't discover them for a long time. They can do a lot of damage in a couple of minutes, and it can be six or eight months, or even a year or two, before that malicious code is detected.
If you can't see it, it's flying below the radar, and that's the real danger to the country. It isn't a kinetic attack like the World Trade Center or the Pentagon on 9/11, where you could see the actual destruction and damage. A lot of this stuff goes on in the invisible space of computers. It's a lot of geek speak, and most average folks don't need to know all of that, but they do need to understand what the net long-term effects of failing to act and defend ourselves in cyberspace would be. That's why we do what we do.
[0:10:27.1] Andy Anderson: Yeah, there's so much there. I was jumping up and down in the background, because there's so much I agree with. Just to take a few steps back, I'd love to start with the different view of cyber resiliency versus cyber security, because in Volume 2 you start to lay some of those ideas out. For those who haven't read it yet, as a teaser for the document: what factors make resiliency different from security?
[0:11:12.0] RR: Well, I think the major difference to me is that there's an assumption built in. For many years, the cyber security community's strategy was penetration resistance: harden the target, stop them at the front door. That was what we did. You tried to rely on deploying the strongest mechanisms and safeguards you possibly could, and there were a lot of people in the early days who thought we could stop these folks totally.
We discovered otherwise, and this got worse with the complexity of systems; as complexity grew, the job of protection grew harder. You can probably remember in the early days of antivirus protection that some of the vendors made claims they would stop 95% of the malicious code. Well, that number dropped precipitously, not because they weren't doing a good job of developing signatures for the malicious code they were discovering, but because the growth rate of the malware was outstripping their ability to write the signatures.
They had to come up with new techniques to enhance the signature-based antivirus products being produced. Very quickly we learned, and this comes from empirical data, because you can see what people are doing. Sometimes people don't do a good enough job at what I call the blocking and tackling of cyber security. We know the fundamental things we should be doing; sometimes they don't get done, and that's where the attacks typically start, at the low-hanging fruit.
For me, it was more interesting to look at the people who were doing this well and still getting hit. That's really what makes cyber resiliency a little different from classic cyber security: it assumes that some things are going to happen. I took a page from the military playbook. I was in the Army for 20 years, and the military always develops a battle plan, and that plan rarely survives the first day of combat operations, because you're working against a determined adversary.
When they do something, you have to react. The same thing holds true here. The notion of static defenses has, I think, gone by the wayside, because we have to realize that our systems are in a constant state of evolution, they're getting more complicated, and adversaries are going to be coming at us 24/7 with different techniques, tactics, and procedures.
We have to figure out ways to be more clever than they are. I use the phrase: you want to be able to operate on your terms, not on their terms. We want to have the tactical advantage, instead of giving them the tactical advantage. Resiliency, cyber resiliency, means you've got to do things that are not traditional.
Some of those things are obvious, but the one I thought was interesting was this: when malicious code first makes its way into a system, one of the bad things that happens, if you're not paying attention to the architecture and engineering of how your systems are constructed, is that the malicious code can spread from one part of the system to another, and then actually go from one system to another. We call this a transitive attack, and the adversary can keep hopping until they find the target of opportunity.
Cyber resiliency would say, “Well, okay, what happens if I quarantine that malicious code? Maybe I want to observe it; maybe I want to let them get a little ways into the system, but not all the way, and see what they do when they get there.” You want to observe what they're doing, because you're interested in developing countermeasures for their tactics, techniques, and procedures.
On the other hand, people would say, “No. As soon as I understand they're there, I'm going to reimage the system. I'm going to flush that part. I'm going to start with a clean image.” There are ways to approach the problem when you know they're going to get in eventually. Then the question is, how do you limit the damage they can do?
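To make the quarantine-versus-reimage choice concrete, here is a minimal Python sketch of the two response postures Dr. Ross contrasts. The detection fields, thresholds, and names are invented for illustration; they are not taken from NIST guidance.

```python
# A toy responder: observe a contained intrusion to learn the adversary's
# tactics, or flush it immediately when critical assets are at risk.
from dataclasses import dataclass
from enum import Enum


class Response(Enum):
    OBSERVE = "quarantine and watch"   # learn the adversary's TTPs
    REIMAGE = "restore clean image"    # evict immediately


@dataclass
class Detection:
    host: str
    touches_sensitive_data: bool
    lateral_movement_seen: bool


def choose_response(d: Detection) -> Response:
    # If the intrusion is nowhere near critical assets and isn't hopping
    # between systems, containment plus observation yields intelligence.
    if not d.touches_sensitive_data and not d.lateral_movement_seen:
        return Response.OBSERVE
    # Otherwise limit damage first: flush and start from a clean image.
    return Response.REIMAGE


if __name__ == "__main__":
    for det in [
        Detection("kiosk-07", touches_sensitive_data=False, lateral_movement_seen=False),
        Detection("db-primary", touches_sensitive_data=True, lateral_movement_seen=True),
    ]:
        print(det.host, "->", choose_response(det).value)
```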
Another thing a cyber resilient system may have, and this goes back to the architecture and engineering things you do to make a stronger system: a lot of databases are in flat files. The databases are huge and they're all in one place; if there is a compromise, the adversary can get not only the first record but the last record, and that could be as many as 22 and a half million, as we saw in the case of OPM, or any of these large databases.
A strategy in the cyber resiliency world would be: let's separate our data, and only make data available, on a real-time or near real-time basis, to the people who need it to do their job for the task they're doing today, or in a very limited capacity. If there is a cyber-attack and they get that data, that's not good, but at least it's not the entire treasure trove of corporate information that could be highly critical, or sensitive, or intellectual property.
You can build these things called security domains, where you move the most sensitive information to a more secure domain, just like you do with a safe deposit box. You have a lot of important stuff in your house, and you have locks on your doors and maybe your windows, but you don't necessarily have a strong enough security system to prevent professional burglars from getting in and stealing things like jewelry or coin collections. So you get a safe deposit box: a separate, much stronger domain. You've got to go to the bank, get into the vault, and open it with the two keys, and it's much safer for those really critical possessions.
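The safe-deposit-box idea can be sketched in a few lines: keep the master data in a stronger domain and hand the operational system only the slice a task needs today, so a breach exposes the working set rather than the whole trove. A minimal Python illustration, with invented names and sizes:

```python
# The "vault" stands in for a stronger security domain; the working set is
# what the day-to-day operational system holds, and what a breach exposes.
SECURE_VAULT = {f"record-{i}": f"pii-{i}" for i in range(100_000)}


def checkout(task_record_ids, vault=SECURE_VAULT):
    """Return only the slice of data a task needs, not the whole trove."""
    return {rid: vault[rid] for rid in task_record_ids if rid in vault}


# Today's task touches 250 records; a compromise of the operational
# system exposes those, not all 100,000.
working_set = checkout([f"record-{i}" for i in range(250)])
print(f"exposed on breach: {len(working_set)} of {len(SECURE_VAULT)} records")
```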
There are things like that that are different in the cyber resiliency realm. We talk about deception and things like that. When you're reading the document, some of the terminology seems strange in a cyber security context, but we take those ideas from things that have been around for many, many years, just not necessarily applied to our systems. They do work in the larger context of defense and defense in depth.
[0:17:32.7] AA: Yeah, I think it's partly a maturation of the space overall. If you look at the length of time we've been thinking about cyber conflict, it's not as long as we've been thinking about other forms of conflict and other forms of engineering, protection, and intelligently designed systems. It's interesting to see that.
Having read the document, you walk through these 14 different techniques that underlie resiliency. It seems, and I hope I'm not putting words in your mouth, that they group into a couple of categories: one is diversity, having segmentation and differences across your system; deception would be another overarching category covering some of those techniques; and the third would be change, or dynamism, in terms of time and space.
For those who haven't read the document yet, or haven't thought about it, walk through those three categories and how you think they play together.
[0:18:49.2] RR: Yeah. I just want to go back and use an analogy first, and then we'll go through each of those. In general, I look at cyber resiliency for systems like the human body. The human immune system is tremendously powerful and fairly complicated.
When bad things are introduced into your system, whether it's a cold or a more serious disease, your immune system goes into action and tries to isolate and fix the problem. It works most of the time; in fact, it's amazing how well it works, but occasionally your immune system is going to be up against an adversary that may be too powerful. For example, cancer cells in many cases.
Your immune system can deal with a very low level of cancer cells, but eventually, when those things start to multiply at a certain doubling rate, they will overwhelm the immune system, and that's why we're still looking for the big cure for cancer.
Our systems can act the same way. It may never be possible to protect these systems to an absolute 100% degree of certainty, where you have that high level of assurance, because they are so complicated. But just as with the body, we can apply these resiliency techniques to a large degree and be very successful at reducing our susceptibility to a devastating cyber attack, by limiting the damage that can actually be done and making the system, just like the human body, resilient and survivable in many cases. That's the analogy we're trying to use with the guidance in this document.
Go ahead and give me those three categories again, maybe one at a time, and I'll try to address each one.
[0:20:41.0] AA: Yeah. Diversity. Just the way genetic diversity in populations is valuable for stopping diseases, diversity seems like one of the categories that touches on a couple of those different techniques.
[0:20:58.1] RR: Right. Diversity is a pretty important characteristic and technique, and there are a lot of ways to achieve it. Just think about this: there's a lot of discussion today in enterprises, where we're moving to the cloud in a lot of cases, about how, from a systems administration point of view, most sysadmins would rather deal with all Windows systems than a combination of Windows, Linux, and maybe Apple systems with their different operating systems.
They want to have a homogeneous base of stuff, because it's easier to administer. But think about it: that lack of diversity can be a single point of failure. If one particular vulnerability in a Windows system is discovered, say in a zero day, and that exploit is launched against an organization, not only will it bring down that particular workstation or device, but it can propagate, and that same vulnerability will occur in every one of those homogeneous elements across the entire system, and maybe the system of systems.
Diversity, and we talk about this a lot, can be more complicated for the sysadmins, but that lack of homogeneity can be a benefit. A heterogeneous network with a mix and match of things can be very beneficial, because a single attack is not necessarily going to succeed everywhere; these systems are different in that respect. Every component that's different may buy you a little time, or prevent the devastation from being total.
We see that a lot. Scanning tools are another example, or antivirus products. A lot of organizations will deploy not just one particular vendor's antivirus product, but several. Or they use different types of scanning tools, because they're all built a little differently; they all have different scopes, if you will, of what they can see and touch as they do their work in the scanning space. Again, that diversity is good, because you're going to pick up more things. It's like having more eyes on target. A simple example: when you're writing a document like we do at NIST, we have people who look at it from a tech editing point of view, and reviewers during our public review period. Just think about all the people out there who look at our documents and provide us public comments.
There's a lot of diversity there. People are really smart. They're looking at the document from different perspectives, and all of that diversity of comments and opinions comes in to us, and we can make a stronger document based upon it. Diversity is one of those things that's really important.
It may be a little more expensive sometimes to do that, but again, when you look at the expense of applying cyber resiliency techniques, and we say this in the document, every organization is going to be a little different in how they do it, because you have corporate missions and business objectives you're trying to achieve, you have budgets you have to live within, and you have to weigh all of those in a risk-based decision-making process.
How much do I need? How much is going to make me more effective? If I spend another 10% or 20%, does it get me any closer to being cyber resilient? Those are trade-offs that are made every day within organizations. That's a little bit about diversity, and then you can go to the next category.
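The single-point-of-failure argument for diversity can be shown with back-of-the-envelope arithmetic: a platform-specific zero-day reaches every node of a homogeneous fleet, but only the matching share of a mixed one. A toy Python sketch, with made-up fleet sizes:

```python
# Compare how much of a fleet a single platform-specific exploit can reach.
from collections import Counter


def exposure(fleet, exploited_platform):
    """Fraction of nodes a single platform-specific exploit can reach."""
    hits = sum(1 for platform in fleet if platform == exploited_platform)
    return hits / len(fleet)


homogeneous = ["windows"] * 1000
mixed = ["windows"] * 500 + ["linux"] * 300 + ["macos"] * 200

for name, fleet in [("homogeneous", homogeneous), ("mixed", mixed)]:
    print(name, dict(Counter(fleet)),
          "windows zero-day exposure:", exposure(fleet, "windows"))
# The homogeneous fleet is 100% exposed; the mixed fleet caps the blast
# radius at its Windows share (50% here), buying time and limiting totality.
```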
[0:24:27.8] AA: Yeah, so the second was deception, right? That's an idea that has been in favor, went out of favor, and is now back in favor a little bit in the cyber security community.
[0:24:49.0] RR: Yeah, deception. A lot of people may raise their eyebrows at that, but deception has always been important. Of course it's important in the kinetic world of warfighting, but deception in systems can also be important, because when the adversaries come in, they're looking for the target of opportunity.
They have something in mind. Sometimes they'll come in and just window-shop and see what's available, but a lot of times they understand what's in that system and what they want to get out of it. There are ways to implement deception: databases that appear to be one thing, where maybe the data is intentionally corrupted. If they get to that target, they take out what they think is the good data, and they're really getting corrupted data.
Those kinds of things can really delay and confuse the adversary as to what they're getting and how much trust they can put in the data. One of the things that's interesting about all of these resiliency techniques: our documents are posted publicly, so our adversaries have access to the same things that we do.
There's been a debate for a long, long time about publishing all this stuff in the open, because you're giving your playbook away to everybody, the bad guys and the good guys. We've always made the decision to err on the side of providing maximum guidance to our customers so they can defend themselves, whether that's the federal agencies or our customers in the private sector who use our guidance on a voluntary basis.
With that playbook on cyber resiliency out there, the adversaries now know that all of these things are possible. In some sense, it may plant that little seed: “Okay, I got all this data, but am I in a deception net here? Is this the real stuff?” It just plants that seed of doubt, and sometimes that's all you need to do. You bring every possible tool, technique, strategy, and tactic to make it absolutely as difficult as you can on the bad guys to get into your stuff.
Because our stuff is valuable. It can be valuable for national security purposes, and it can be valuable for economic security; I mentioned the strong economy and our great R&D community that makes huge investments in research. Deception and those techniques and approaches are not used a lot. Like you said, they were talked about years ago.
What we're saying now is, don't rule out any one of these things, because when you bring in a combination of the cyber resiliency techniques to achieve some of the objectives we talk about, the benefit has a multiplier effect. Each one of these things individually may not be that effective, but when you have many of these techniques and approaches deployed, they have reinforcing characteristics. That's why I like deception. It's a good strategy you can bring forward.
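One common way to plant that seed of doubt is the honeytoken: decoy records seeded among the real data that no legitimate workflow ever reads, so any touch is a high-signal alarm, and an exfiltrated dump is known to be tainted. A minimal Python sketch; the record layout and names are invented for illustration:

```python
# Seed decoy records among real ones; reading a decoy trips an alarm.
import secrets

real = {f"acct-{i}": {"balance": 100 * i} for i in range(1, 6)}
decoys = {f"acct-{secrets.token_hex(3)}": {"balance": 0} for _ in range(3)}
table = {**real, **decoys}  # what an intruder enumerating the store sees


def read_record(key):
    if key in decoys:  # no legitimate code path ever reads a decoy
        print(f"ALERT: honeytoken {key} touched, possible intrusion")
    return table.get(key)


# An adversary sweeping the whole table inevitably hits a decoy.
read_record(next(iter(decoys)))
```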
[0:28:02.5] AA: Yeah. I didn't get a chance to go to SINET, but I was reading the notes from it, and one of the key themes that came out of the most recent one was that those who had begun using deception, perhaps again, were surprised by both its minimal cost and its effectiveness as a strategy. Then the last, and each of them is somewhat tied to the others, is dynamism, or just the idea of changing the underlying system. Go ahead.
[0:28:35.3] RR: Sure. That's an important one. Dynamic reconfiguration and things like that are important. If you assume the adversary is going to get in at some point, then you're going to have a system that's corrupted. The level of corruption really depends on the malicious code. We've seen adversaries try to insert themselves throughout the entire stack of the system. When I talk about the system stack, I mean from the application, to middleware, to the operating system, down to the firmware.
The basic input/output system is an example of the firmware that boots up the operating system, and then down into the integrated circuit and the supply chain; that's the ultimate. The lower they get in the stack, the more control they have. If an adversary is able to establish a rootkit within your system, they have complete control of the operating system, and everything above that is subject to question and untrustworthy.
For dynamic reconfiguration, I like to use an analogy. In the old days, when you bought a laptop computer, you would get the master disks with that laptop. Then you could go back and reimage the operating system. If you got some malicious code into your system, you could actually reinstall the operating system and bring it back to the factory state.
Sometimes you can still do that; a lot of it's done online now through downloading, rather than the original master disks, but we call that reimaging. The idea is that adversaries can do damage in two ways. We limit the damage by limiting their movement laterally across the system or from system to system, which we talked about earlier, but we can also limit the damage by limiting the time they have on target.
Just visualize: if an adversary is in your system for seven months or three months, they can do an awful lot of damage. If they're only in there for minutes, or an hour, you reduce that window of time to exploit. We can do that with a couple of techniques. With dynamic reconfiguration, we can re-image certain components; virtualization is an example of that. When you bring up a new virtual machine, you're bringing up a clean image, so to speak, of that virtual machine operating on top of a hypervisor.
There are some new techniques currently being explored, called micro-virtualization, where you're basically turning over the infrastructure so fast that even if the adversary gets in, they don't have time to do damage. There's a whole bunch of things you can do under that umbrella of dynamic reconfiguration, as opposed to a static view.
You're making it hard for the adversary to understand the environment they're working in, or the environment collapses around them so quickly that they don't have time to do damage. We have that technology now. I predict that in the future the cyber security problems will become a whole lot less daunting, because these kinds of virtualization techniques and dynamic reconfigurations will just be part of the way systems are developed and brought into an organization. There's a lot of hope. We're not there yet, obviously, because most of our systems are still highly exposed and highly vulnerable to a lot of these bad attacks.
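The time-on-target point lends itself to a toy calculation: if the platform is rebuilt from a clean image every N minutes, as the micro-virtualization idea Dr. Ross mentions aims to do, an implant's dwell time is capped at N regardless of how long detection takes. A Python sketch with made-up numbers:

```python
# Dwell time of an implant: on a static system it lives until detected;
# on a frequently rebuilt system it is capped by the rebuild interval.
def dwell_time(detection_lag_min, reimage_interval_min=None):
    """Minutes an implant survives: detection lag, capped by the rebuild cycle."""
    if reimage_interval_min is None:
        return detection_lag_min  # static system: implant lives until found
    return min(detection_lag_min, reimage_interval_min)


lag = 6 * 30 * 24 * 60  # roughly six months before detection, in minutes
print("static system dwell:", dwell_time(lag), "minutes")
print("rebuilt hourly dwell:", dwell_time(lag, reimage_interval_min=60), "minutes")
```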
[0:32:04.6] AA: Yeah, you're preaching to the choir on the virtualization stuff. When I'm not running the podcast, that's where I spend most of my time, and the applications of it are manifold.
[0:32:20.2] RR: We didn't talk a whole lot about the adversary compromising the supply chain. It's not always bad guys; when you're designing a new chip, chipset, or new software, a lot of these vulnerabilities surface during the development process. They're weaknesses and deficiencies in the code, or in the design of the chip, that can later be exploited. You're familiar with that recent thing with the Intel chips, and the operating systems, and the vulnerabilities that were discovered there.
Those things have been around a long time. When we discover them, or when they become public, you worry about the attack surface that makes those particular weaknesses and deficiencies visible to the adversary. Weaknesses and deficiencies that can't be exploited aren't really vulnerabilities; we worry about the ones that can be targeted by adversaries, because those become vulnerabilities we have to deal with at some point.
[0:33:19.9] AA: Yeah. Just to switch gears, that's an interesting point you bring up and something I have a lot of conversations about. You touched on it when you started this conversation: the majority of those legacy systems aren't going to be replaced in the near term; 95% of the stuff is not necessarily new.
One of the challenges I think and talk about a lot is that the usable life of equipment, whether that's cars, or power generation, or military equipment, is measured in decades, whereas IT infrastructure, or computing, is measured in a matter of several years.
You've got this disconnect in terms of lifecycle. Walk us through how you think about that. One of the approaches I've heard discussed is to use some of the virtualization pieces at that interface to wrap and protect the older, potentially more vulnerable systems with newer, more modern stuff.
[0:34:30.5] RR: I think you can do that. There are all kinds of engineering and architectural things you can do with virtualization to create those wrappers. That may be the best technique to use in the near term. The good news, and you mentioned the lifecycle, is that these systems do churn at a pretty high rate.
Maybe the whole system doesn't go away. Look at the old B-52 bomber. It's been around for over 60 years, I believe, and it's still flying. It's older than all the pilots flying it, but it's not the same aircraft that hit the streets 60 years ago. It's in a constant state of evolution. They're redoing the electrical system and all the computers inside, the exterior gets a work-over, and they make engineering changes to the aircraft. It looks like the same aircraft they first rolled out, but inside it really isn't. All the guts have been replaced.
That's what happens with our systems. They never really go away, but you're upgrading the operating system, or bringing in new network components, or firewalls, or new software applications. There is an opportunity over time, and this is why we published the 800-160 engineering guideline, Volume 1: to tell our customers, “Look, when you've got a legacy system you're upgrading, or you're buying a new system, you need to make sure you start having the discussions about security and privacy at the very first process step in that lifecycle, where the mission and business analysis is taking place.”
What is our core mission? What's our business? What are our opportunities here? What are the critical things we have to do for our customers? Then very quickly the stakeholders get to weigh in, and knowing they're going to be using this very powerful technology to accomplish their missions and business functions, they have to understand that their success really depends on those systems and components being dependable.
That doesn't happen without inserting themselves into the lifecycle process, so those discussions can take place in the trade space dialogue. Every system has all the functional things it has to do, and then you layer in the security and privacy requirements. Those things have to be traded off, because you never get everything you want in a system, and that includes the security stuff. There are going to be those risk-based decisions.
The problem today is that the stakeholders are not involved in that process early enough. The systems go forward in the development cycle, and a lot of the security requirements never get discussed, or you don't have an opportunity for the trade space discussions that engineers always have during the requirements engineering process; that's the heart of the lifecycle.
You end up with systems that are largely indefensible. They can't be defended, because you didn't take the time to build those critical security features into each of the critical components, the pieces of the system and the stack where they're necessary. I do agree that we can bring a lot of this into a temporary state of goodness by doing some of the things you mentioned with virtualization and wrappers, trying to shield those more vulnerable system components until we can actually make them stronger and more resilient over time.
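One way to picture the wrapper idea is a small validating shim in front of a legacy component, so only well-formed requests ever reach the old code. In practice the wrapper might be a virtualized gateway; this Python sketch and its allow-list are purely illustrative:

```python
# A validating wrapper in front of a legacy component that trusts its input.
import re

ALLOWED = re.compile(r"(READ|STATUS) [A-Za-z0-9_-]{1,32}")  # tiny allow-list


def legacy_device(command: str) -> str:
    # Stand-in for a decades-old component with no input validation of its own.
    return f"legacy executed: {command}"


def wrapper(command: str) -> str:
    # Malformed or unexpected input is dropped before the legacy code sees it.
    if not ALLOWED.fullmatch(command):
        return "rejected by wrapper"
    return legacy_device(command)


print(wrapper("READ sensor_12"))           # passes validation
print(wrapper("WRITE boot_firmware; rm"))  # blocked by the shim
```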
The lifecycle will take care of this. The natural evolution of technology through enterprises will take care of this problem. The question I have is, is it going to happen quickly enough, before we have a self-destructing moment? Just think how highly dependent we are on all the computer systems in the energy sector and the power plants, and we really don't know what we don't know as far as the level of penetration of our adversaries into the infrastructure, where you can pre-position malicious code and not pull the trigger until the opportune moment.
Those are the things that really keep a lot of people up at night, people a lot smarter than I am, and I worry about it too. We have a lot of people trying to bring our best, our A-game, to this problem space.
[0:38:46.2] AA: I think the work you've been doing is hugely impactful, because I see it as two sides of the coin, right? Security on one side and resiliency on the other, and if you're only thinking about one, you're missing half the picture. When you start to add resiliency, you start to get a much more complete picture.
Like you talked about, thinking beyond that moment of breach: it's not just how we stop a breach, but what happens after it, how we recover, how we respond. It's interesting that you brought up the power space, because the day after I saw you at Billington, I went to a conference for power, particularly on the distribution side, and I was struck by the approach that industry has taken to natural disasters. Thinking about how they respond to those may also inform how we think about cybersecurity.
No one thinks we're going to stop having hurricanes, or that we can completely prevent them, but you can change building standards, and there's a whole process in place to respond to, recover from, and survive them in the kinetic space. I'd be curious about your thoughts on that analogy and what's happening in the cyber world.
[0:40:33.0] RR: I think it's a very good analogy, because it reflects a clear-eyed view of the problem we're facing in that kinetic space. Hurricanes are always going to happen. There are going to be power outages. As for the resilient part of the power sector, there are a lot of different discussions you can have, but I know that in a lot of the new developments being built, the power lines are underground, so they're not susceptible to trees coming down during ice storms and things like that.
The threat space is fairly broad. It can cover everything from natural disasters, to cyber-attacks, to good old-fashioned failures of devices that happen occasionally. Then there are the errors of omission and commission that come with the software development community in general. If nothing more comes of 800-160 Volume 2, we want to have a national dialogue. We want to start this conversation.
I have this conversation with a lot of my military friends. When I was in the Army, we used a map and compass when we were navigating around in the wilderness. Today they have GPS, and I ask, “What happens when the GPS goes down? What's the backup?” You can ask that question not just there, but of any organization that relies on information technology and high-tech today. What's the backup plan?
We devote a whole family of controls in our security control catalog to contingency planning, and a lot of it is ignored. People focus on the things they think are most important, like the access control mechanisms, or encryption, or two-factor. Taking the view of 800-160 Volume 2 on cyber resiliency, you start out under the assumption that there is going to be a breach or some event.
To me, it's a pretty important question: what are we going to do? You get the blank stare back a lot, because I don't think people have thought this through. We've been so focused on the technology, it's almost an addiction, or it's so compelling, that it's given us this huge blind spot, because everyone loves their smartphone and the hundreds of apps that are just awesome.
When you download that app on your smartphone and it says, “Hey, this app needs access to 55 things. Press yes,” well, nobody knows what that app is really getting access to, because you just don't have the visibility below the line. It's not on your radar.
I think that's an important discussion. We've got to figure out: do I want a world that is highly functional and has all this great high-tech stuff and whiz-bang gadgets, but is so untrustworthy that I can't be sure someone won't take us down with a well-placed asymmetric cyber-attack? That's the stuff I think we as a nation have to really focus on, because critical infrastructure is not just a name. It actually is critical for the survival of the country, whether it's energy, the financial systems, or the first responders. All of those things in the 18 critical sectors are important, and that's why they've been deemed critical.
We're going to have to do a little soul-searching and find what I call a balance point. We all like the tech, but do I need all that stuff? Am I willing to pull back a little to make sure I can get a more trustworthy component or system? It can't do everything. In my old organization, we had a joke about what we called the trusted garage door opener.
It only does one thing: it opens and closes your garage door, but it does it with a high degree of trust and assurance. In other words, it's never going to fail, because the software is a small amount of code, highly trusted, built with secure coding techniques. I use that as an example of how you can design things that are highly trustworthy and have a high degree of assurance, but you're not going to be able to do 10,000 things on that device.
In military systems, the B-1 bomber actually has an operating system, and it's not Windows, surprisingly enough. It's a small kernel-based operating system that's much simpler and has a higher degree of assurance. They put that operating system there because the complexity of something like Windows wasn't appropriate for a weapon system, especially a tactical aircraft. Those are design decisions we're going to have to make and confront.
As consumers, we have to understand the problem first and be engaged. Then, as good consumers, we can push industry toward making the more trustworthy components, systems, and services that we all depend on. I call this the essential partnership, going back to industry, government, and the academic community. We're all in this together. The common denominator goes back to one simple thing: computers driven by software and firmware. Those are going into cyber-physical systems and things we care about, and I think we need to have a discussion about how trustworthy those things should actually be.
[0:45:45.5] AA: Yeah. You don't have to be reading the trade press, or be immersed in the space, for this to be on your radar screen, perhaps like never before, given the last couple of weeks, whether that's Cambridge Analytica and Facebook, or even places like Atlanta and Baltimore getting hit with ransomware. When my 95-year-old grandmother asks me about what's going on, I know it has reached another level.
As we close, and thank you so much for taking the time: how do we make sure the work you've done and this document get out broadly? What's the way that happens, and what are the impediments?
[0:46:41.5] RR: The standards and guidelines we produce are not regulations, and they're not mandatory for the private sector, obviously. We'd like people, first of all, to have some visibility into the general guidance we're publishing in these important areas. That's not to say that everybody has to be a systems security engineer, but even people in the C-suite, the CEOs and CFOs and CIOs and all those folks, should know that there is guidance out there that deals with cyber resiliency, and with systems security through an engineering-based approach, as well as our Cybersecurity Framework, which was published in 2014 and is starting to be adopted across the country, though it's a high-level framework.
Eventually, we have to engage industry as consumers. We want to treat our information systems with the same expectations we have for bridges and airplanes. When you get in an airplane and fly from point A to point B, most people have the expectation that the plane is going to get there safely. Same thing when you cross a bridge. The reason we have that level of confidence and assurance is that those bridges and airplanes are designed with best practices in engineering, math, science, and physics, and built with good materials. That's what gives us that confidence.
We want to have that same type of discussion with industry, so we can get that same level of confidence in our systems, especially the critical systems and applications we all use. I think understanding the threat space is job one for everybody: understanding what is out there to help make things better, and then taking that knowledge and understanding how it can make your organization better and what you have to do. There's almost a supply and a demand part of this problem.
I've talked to a lot of industry folks, and they say, well, the federal government may talk a lot about trustworthiness and assurance, but we don't see any policy; there's no regulation, nothing that makes industry comply with these things. That's just the way our country works. We value the free market. There are certain things that are regulated; the energy sector is one of those, nuclear power plants being one big example.
For the most part, we want to encourage industry to use some of these techniques and concepts to improve their products for their customers, and that's a win-win for everybody. A vendor that can substantially improve the security or trustworthiness of the products they're offering, I believe that can be a market differentiator. But customers have to value that as well. Industry is not just going to do it if customers don't demand it on the other side.
There's a supply and demand issue here, and it's a question of who's going to go first. In the federal government, we have the ability, through the things we produce coming out of OMB, to make statements about valuing trustworthiness; it's a value proposition. You may see some of those things starting to emerge in 2018, where we start to talk about the value proposition of having trustworthy components and systems in the federal government and in our critical infrastructure, where those things really can make a difference. Without the demand and supply curves meeting at the same time, it can be difficult to get this thing off the ground, so to speak.
[0:50:13.5] AA: Yeah. You've certainly set a great standard and a beginning for adoption of these ideas, and started to kick off the conversation. My background is not in engineering, but I was able to follow the document throughout. I look forward to diving into the others as well, and would certainly recommend it; if you're running a major organization, or are involved anywhere in this space, this is something you should be digesting for sure.
[0:50:46.8] RR: Yeah, and one last point: there's always the notion of return on investment and cost. That's a key factor for every organization. People don't have unlimited budgets and funds to do a lot of these types of things, but there have been a lot of debates and studies about good code, secure coding techniques, and bringing those best practices into the software development lifecycle.
It may cost a little more to do that, but on the other hand, building better software may produce huge savings on the other end, because there will be fewer vulnerabilities that end up having to get fixed later on. It's like going to the cloud environment. People think they can just take their current legacy applications and data sets and throw them into the cloud.
A lot of times, you have to do some re-engineering of those mission and business processes to take advantage of what the cloud offers. With that re-engineering, there's sometimes a little bit of a cost bubble initially, climbing up that hill, but then the benefits on the other side are very, very large.
Again, we have to look at all those factors and make a credible argument for why doing some of these things may be a little more expensive now, but in the long run will save a lot of money, protect your reputation and your assets, and allow you to be a company that can flourish in a very dangerous cyberspace going into the future.
I appreciate the time today and your helping us get the word out to all of our customers. I'm looking forward to hearing the podcast, and maybe seeing it when it comes out in article form.
[0:52:23.7] AA: Yeah, this was great. Terrific. I'll make sure we link on the site to these standards, and some of the others you mentioned as well. I wasn't even aware of all the other material you guys have been putting out recently. That's part of diving into the space and having guides like yourself to show us around. Thank you so much. This was great.