Bank's security chief focuses on targeting risk

31.10.2005
The constant need to patch vulnerable systems on its vast global networks has been driving London-based Standard Chartered Bank toward a more risk-based approach to vulnerability management. Rather than rushing to patch every flaw as soon as it's announced, the goal is to implement an approach that helps the bank identify the problems that matter most and prioritize its responses based on the value of the assets at risk. In this interview, John Meakin, Standard Chartered's group head of information security, explains what the bank is doing to identify the most urgent threats to its networks and decide which IT assets get priority for protection.

What's driving this whole effort? Deploying patches across a global network is a big challenge. There are lots of potential difficulties, and they are all magnified every time a vulnerability appears for which an exploit is being deployed across the Internet. Given that we have already invested in automated [patch] distribution across the network, and given that we think we have a very efficient way of capturing the initial information about a vulnerability and a patch, we were looking to see what other scope we had for making this problem less intractable.

How have you gone about doing that? We really have said the only way of solving this problem is to truly target where we deploy patches, and when. Clearly, some of the servers on our network are more important than others in terms of the impact on our business. Equally, some of those servers are subject to a greater likelihood of any vulnerability on them being exploited. By measuring these two factors across the whole asset inventory on our network, we are able to know which of our high-value boxes are most exposed when new patches are released.
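To make that concrete, here is a minimal sketch of how the two factors could be combined to rank an inventory for patching. This is not Standard Chartered's actual tooling; the asset fields and the 1-to-5 scales are illustrative assumptions.

```python
# Illustrative sketch: rank assets by business value x exploit exposure.
# Field names and the 1-5 scales are assumptions, not the bank's schema.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    value: int     # business impact if compromised, 1 (low) to 5 (high)
    exposure: int  # likelihood a vulnerability gets exploited, 1 to 5

def patch_priority(inventory: list[Asset]) -> list[Asset]:
    """Highest value-at-risk first: the high-value, highly exposed boxes."""
    return sorted(inventory, key=lambda a: a.value * a.exposure, reverse=True)

inventory = [
    Asset("payments-db", value=5, exposure=2),
    Asset("web-frontend", value=3, exposure=5),
    Asset("test-server", value=1, exposure=1),
]
for asset in patch_priority(inventory):
    print(asset.name, asset.value * asset.exposure)
```

The product of the two scores is just one plausible way to order the work; any monotonic combination of value and exposure would support the same targeting idea.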

How big of a challenge has this been? It's very simple, very logical and very easy when you put it that way. But actually doing it is a challenge in itself. First of all, it presupposes that you have a very accurate asset inventory. We've already made some investments in our network that have given us the beginnings of that asset inventory.

Secondly, we have also invested in tools that scan for the existence of vulnerabilities across the network. The third piece of the puzzle, as an add-on to the asset inventory, is a measure of just how valuable each box is, based on the data and the application it supports.

The last piece of the picture is the ability to model, in a repeatable way, how easy it is for a vulnerability on a particular box in a particular place in the network to be exploited. A trivial example: a Web-facing box on your network boundary that contains a vulnerability is at higher risk of having that vulnerability exploited than a box buried deep inside your network.
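As a rough illustration of that kind of positional scoring, exposure could be derived from where a box sits relative to the network boundary. The zone names and weights below are assumptions invented for this sketch, not the model Meakin describes.

```python
# Toy exposure heuristic based on network position; zone names and
# weights are illustrative assumptions, not a real threat model.
ZONE_EXPOSURE = {
    "internet-facing": 5,  # on the boundary, directly reachable from the Web
    "dmz": 4,
    "internal": 2,
    "isolated": 1,         # buried deep inside the network
}

def exposure_score(zone: str, has_known_exploit: bool) -> int:
    """Raise the score when an exploit is already circulating."""
    base = ZONE_EXPOSURE.get(zone, 2)
    return min(5, base + (1 if has_known_exploit else 0))

print(exposure_score("internet-facing", has_known_exploit=True))  # 5
print(exposure_score("internal", has_known_exploit=False))        # 2
```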

If you add this piece, all of a sudden what was once an impossible task becomes a manageable one. So for us, it's not about being slicker in getting the [patches] out there, and it is not about being slicker in testing them. It's about targeting the limited effort we have on the [most important] boxes.

How effective has this whole strategy been in helping you deal with vulnerabilities so far? We haven't finished this yet. We've taken the risk-driven approach over the last two to three years, really starting from SQL Slammer forward. We started focusing our efforts first of all by scanning for where the vulnerabilities are, and we have certainly deployed our proprietary risk inventory to target the work of patching.

Going forward, we are adding this final threat-modeling piece from Skybox [Security], which we believe will give us a further refinement of our patch deployment process. It will allow us to go from patching 100 percent of systems down to 35 percent if we target just the most valuable boxes, and down to about 20 percent if we target just the most valuable resources that are also most at risk.

But you still will be patching all the other systems, right? Exactly. We would have a patch cycle where we first deal with value high/exposure high systems, then value medium/exposure high, value medium/exposure medium and so on. The leisurely approach, if you can call it that, is to leave the residual population of boxes, those of lower value that are not really exposed to an exploit, until we can just bundle the patches into a regular software release.
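A sketch of that cycle might bucket systems by tier and defer everything outside the named tiers to the regular release. The banding thresholds here are assumed; only the tier order comes from the description above.

```python
# Illustrative patch scheduling by value/exposure tier, following the
# cycle described above; banding thresholds are assumed cutoffs.
def tier(value: int, exposure: int) -> str:
    def band(x: int) -> str:
        return "high" if x >= 4 else "medium" if x >= 2 else "low"
    return f"value {band(value)}/exposure {band(exposure)}"

# Tier order per the interview: high/high first, "and so on" down the list.
ORDER = [
    "value high/exposure high",
    "value high/exposure medium",
    "value medium/exposure high",
    "value medium/exposure medium",
]

def schedule(systems: list[tuple[str, int, int]]) -> None:
    for name, value, exposure in systems:
        t = tier(value, exposure)
        if t in ORDER:
            print(f"{name}: patch in cycle position {ORDER.index(t) + 1}")
        else:
            print(f"{name}: bundle into next regular software release")

schedule([("payments-db", 5, 4), ("web-frontend", 3, 5), ("test-server", 1, 1)])
```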

Who decides how valuable a system is, and what is the process for doing that? We have encapsulated our risk model in an automated tool which we built to our own design. The tool is used by business guys, development project managers and by us, the security experts.

The business guys are the ones who are the experts in the value that's in the system and in the data. They sit down with this tool and are asked a series of structured questions, using a sort of wizard-driven approach, about how bad it would be in financial and reputational terms if certain characteristics of the application and the data were put at risk. What that does, obviously, is capture information about asset value, which is used to produce an overall rating [based on] confidentiality, integrity and availability.
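A minimal sketch of such a wizard might map structured answers onto confidentiality, integrity and availability scores. The question wording and scale below are invented for illustration; the bank's tool is proprietary and built to its own design.

```python
# Toy valuation wizard: structured questions -> C/I/A ratings on 1-5.
# Question text and the 1-5 scale are illustrative assumptions only;
# the strings document the prompts a business owner would answer.
QUESTIONS = {
    "confidentiality": "How bad, financially and reputationally, would disclosure be? (1-5)",
    "integrity": "How bad would undetected corruption of the data be? (1-5)",
    "availability": "How bad would a one-day outage of the application be? (1-5)",
}

def run_wizard(answers: dict[str, int]) -> dict[str, int]:
    """Clamp each answer to the 1-5 scale and return the C/I/A profile."""
    return {prop: max(1, min(5, answers[prop])) for prop in QUESTIONS}

# Example: a business owner's answers for a hypothetical payments app.
profile = run_wizard({"confidentiality": 5, "integrity": 5, "availability": 4})
print(profile)  # {'confidentiality': 5, 'integrity': 5, 'availability': 4}
```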

The development project managers sit down with the tool and enter information about how the system is built, and that gives us basic information on how vulnerable, in broad terms, that application would be.

Within our tool is a body of knowledge that allows us to rank the various security controls we could deploy and maps them to particular vulnerabilities in design. In other words, a security measure such as encryption will protect the value that is associated with confidentiality, but it won't do anything for availability. The requirements are then pushed out into the development life cycle, where they are hopefully fulfilled.
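The encryption example suggests a simple mapping from controls to the security properties they protect. In this sketch, only the encryption entry echoes the interview; the other control names and mappings are assumptions added for illustration.

```python
# Illustrative mapping of security controls to the C/I/A properties they
# protect; apart from encryption, the entries are assumed examples.
CONTROL_PROTECTS = {
    "encryption": {"confidentiality"},          # protects secrecy, not uptime
    "digital signatures": {"integrity"},
    "clustering/failover": {"availability"},
    "access control": {"confidentiality", "integrity"},
}

def controls_for(property_at_risk: str) -> list[str]:
    """List candidate controls that address a given at-risk property."""
    return [c for c, props in CONTROL_PROTECTS.items() if property_at_risk in props]

print(controls_for("confidentiality"))  # ['encryption', 'access control']
print(controls_for("availability"))     # ['clustering/failover']
```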

So, what is the final rating based on? We rate the application by giving it a value for each of the key security characteristics: availability, integrity and confidentiality. It is then given an aggregate overall value on a scale of 1 to 5, where a 5-rated application is of very high value to the bank and a 1-rated application has relatively low value.
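The interview doesn't spell out the aggregation rule. One plausible sketch, assuming the overall rating is driven by the most critical of the three properties, is:

```python
# Hypothetical aggregation of C/I/A values into the overall 1-5 rating;
# taking the maximum is an assumed rule, since the article doesn't say.
def overall_rating(confidentiality: int, integrity: int, availability: int) -> int:
    """An application is as valuable as its most critical security property."""
    return max(confidentiality, integrity, availability)

print(overall_rating(5, 3, 2))  # 5: very high value to the bank
print(overall_rating(1, 1, 2))  # 2: relatively low value
```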

What about legacy applications? We are still building up our coverage of legacy applications. Even that we are doing in a sort of risk-prioritized order, because you can make a very high-level, notional, gut-feel assessment of the importance of a particular [legacy] application just by knowing how the business works.

How easy has it been getting business owners to participate? Not easy. I would still say that we get 50-50 direct participation of business users. Sometimes it is a business-aligned IT guy who is engaged in the evaluation process.

The evaluation process itself is sort of self-correcting and is really quite sensitive to overvaluation. If I walked into the office of one of our business line heads and asked how valuable a system is, his natural reaction is to say "high." They always say "high," and that experience has been confirmed with our wizards as well. What we've found gets better business involvement is when we go back and say, "OK, you have gone through this valuation process and come out with a 5 for confidentiality, integrity and availability. You do realize that means you've got to make the maximum investment in securing those systems?"