I let myself get caught up in a pointless twitwar yesterday, during which I took much abuse from my opponent for disagreeing with the assertion that you can just walk into an organization and "know" what is and is not important without doing some degree of assessment. His later point was that you don't need to do a "full" assessment, which is correct, but that was never my point of contention.
My point, quite simply, was this: dowsing (or "divining") is no way to assess or manage enterprise risk. Dowsing is the ancient mystical practice of using a dowsing rod to find water hidden underground. To this day, well water is a very important commodity. In olden dayes, the technology for finding sources did not exist, and so divining came into practice. Using the divining rod (or dowsing rod), a skilled individual could walk around an area, feeling mild tremors through the rod, and move around until those tremors were maximized and the rod pointed down to the source of water.
In many ways, risk assessment today is exactly like dowsing. We walk into organizations with some mystical methodology that assesses pseudo-risk, and then we act as if we've done something that is truly legitimate and well-founded. The problem, of course, is one of repeatability. The INFOSEC Assurance Methodology (IAM) tries to address this concern specifically by setting up the System Criticality Matrix, but there are potential weaknesses in this approach. Similarly, FAIR leverages Bayes to provide reasonable modeling in the absence of real data. [6/4 correction: Bayes requires data; it just provides a model based on the knowledge-state instead of the nature-state.]
Both approaches are challenged, however, and are at best "science" in the way of the "social sciences" (the so-called "soft" sciences). The problem, quite simply, is that there is no reliable way (today, anyway) to quantify a qualitative value. As such, we're stuck with gut instinct when assessing risk ratings, struggling to come up with a consistent, reliable, and accurate method. If the method cannot withstand rigor, then it's not particularly sound or scientific.
This problem is one that is being actively researched. Notable figures like Alex Hutton (formerly of RMI and currently of Verizon Business) talk about this frequently: enterprise risk management is a broken field that lacks scientific rigor. In my mind, this is spot on, and fully analogous to the state of the security industry. Gunnar Peterson, I think, captures this perfectly in his comment that "Its too bad but assumptions of yesteryear lead to building things on shaky foundations." His notable chart tells the story:
Just as infosec suffers from a lack of innovation and growth, with the world still revolving around firewalls and SSL, risk management revolves around pseudo-quantitative risk assessment: qualitative ratings of varying reliability that are converted to numbers or otherwise averaged out. Dowsing risk in the enterprise is no way to live, and a good way to get completely off-track. Let's hope the future reveals a better way to exist.
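To make the "pseudo-quantitative" complaint concrete, here's a toy sketch of the pattern I'm describing. The assets, labels, and numeric scales below are all made up for illustration; the point is only that the apparent precision comes from whichever mapping you happen to pick, not from any measurement.

```python
# Toy illustration of "pseudo-quantitative" risk scoring: qualitative
# ratings get mapped to numbers and averaged. The ratings and the two
# scales below are invented purely for illustration.

ratings = {"web server": "H", "file share": "M", "printer": "L", "HR app": "M"}

# Two equally arbitrary ways to turn labels into numbers.
scale_a = {"L": 1, "M": 2, "H": 3}
scale_b = {"L": 1, "M": 5, "H": 9}

def average_score(ratings, scale):
    """Convert each label to a number and average them."""
    scores = [scale[label] for label in ratings.values()]
    return sum(scores) / len(scores)

print(average_score(ratings, scale_a))  # 2.0
print(average_score(ratings, scale_b))  # 5.0

# Same labels, different arbitrary scale, different "quantitative" result.
# The precision is an artifact of the mapping, not of the underlying risk.
```

Same labels, different arbitrary scale, different "quantitative" answer - which is about as repeatable as walking around with a stick.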
Comments (12)
I've been doing lots of reading on this topic lately. I'm uncomfortable with the current state of risk management too, but find myself willing to play along with the idea of going with what we have (as bad/good as it may be) and refining it as we gain experience and knowledge based on our failures and successes. Info sec, if it can be called a science, is a science in its infancy. We have lots of great instruments, but we're still waiting for our Galileo.
What approach do you advocate?
Posted by Dave Hull | May 19, 2009 11:48 AM
@Dave -
I don't know that I favor any one approach over another too much. Each has its place. As I failed to get across yesterday in the twitwar, I like OCTAVE for certain environments, but it requires heavy involvement from higher-ups. IAM has a nice base approach for assessing "System Criticality," so long as you can lock in a good definition for each risk level/rating (H/M/L, whatever). Of course, if those change, then you have to go back and start over in many cases. Some of the maturity models are very interesting, but then they aren't really doing risk assessment and management in the same way. And, of course, FAIR does perhaps the best job of moving toward a scientifically sound approach, though Bayes makes many people uneasy (it's just not taught in basic statistics classes - at least not yet), and of course there are concerns about having any useful data to work with, either in forming or refining your model.
So, overall, I don't know that there is a good answer. We keep slogging through the best we can with what we have. I think this is really where people like Alex Hutton become very, very important. His understanding of the field is so wonderfully deep, I'm confident that he'll find a solution given enough time.
In the meantime, I think we need to be careful not to fully lock into any single assessment method, but rather leverage a couple that are compatible yet different enough to perform that "checks and balances" role to sanity check everything we do.
fwiw.
-ben
Posted by Ben | May 19, 2009 12:45 PM
It seems that we have plenty of people reading/researching/trying to find a way to do these things better. I agree with everything you said in your post. Wondering if we could try to put together a group to work on this common goal...
Posted by Augusto Paes de Barros | May 20, 2009 4:41 PM
@Augusto -
Thanks, I think a working group is an excellent idea. Logistically, no idea how to make that work, but it's definitely worth exploring!
-ben
Posted by Ben | May 20, 2009 4:57 PM
Hey Ben,
Sorry it took me so long to catch up here. So:
"Similarly, FAIR leverages Bayes for providing reasonable modeling in the absence of real data."
I would say is uncharacteristic of what FAIR does.
First, FAIR is a taxonomy and so whether you're using Bayes or Frequentist methods, FAIR is just the modeling construct within which to frame the "math-ering".
Second, Bayes doesn't compensate for the "absence of real data". Bayes operates in the knowledge-state (vs. frequentist approaches that focus on the nature-state). So it's a different but (many times) appropriate manner of looking at *noisy* data (that is, data with uncertainty that has useful information but would otherwise be jettisoned by a frequentist approach). So it would be more accurate (giggle) to say that "FAIR allows us to use information that we might otherwise be unable to use if we combine the taxonomy with a Bayesian approach."
Just sayin'.
Posted by Alex | May 26, 2009 4:49 PM
@Alex -
Thanks for the corrections. One of these days I'll finally understand what the heck you're talking about. ;)
I think my main point here is that the absence of data doesn't really help us, regardless of what methods you use. The current "gut" approach to qualitatively assessing "risk" seems neither useful nor satisfying.
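If I'm following Alex at all, here's a rough, toy-sized sketch of what I understand the knowledge-state idea to mean - a simple beta-binomial update with invented numbers, and emphatically not FAIR itself or anything Alex has endorsed:

```python
# Toy Bayesian update: estimating how often a control fails, starting
# from a prior belief (knowledge-state) and updating on sparse, noisy
# observations. All numbers are invented for illustration.

# Prior belief about the failure rate, expressed as a Beta(alpha, beta)
# distribution: roughly "we think failures are rare, but we're not sure."
alpha, beta = 2.0, 18.0             # prior mean = 2 / (2 + 18) = 0.10

# A small, imperfect sample: 12 audited changes, 3 of which failed review.
failures, successes = 3, 9

# The posterior is still a Beta distribution; the data shifts the belief.
alpha_post = alpha + failures       # 5.0
beta_post = beta + successes        # 27.0
posterior_mean = alpha_post / (alpha_post + beta_post)

print(f"prior mean:     {alpha / (alpha + beta):.2f}")   # 0.10
print(f"posterior mean: {posterior_mean:.2f}")           # ~0.16

# We never claim to know the "true" rate (nature-state); we state what we
# believe and let limited data move that belief.
```

We start from a stated belief, let a small amount of imperfect data move it, and end up with a distribution rather than a single gut-feel number.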
-ben
Posted by Ben | May 26, 2009 5:25 PM
Hi Ben. Here are a few of my thoughts.
1. I do think it is possible to walk into an organization and quickly determine which assets or processes would create great exposure for the company should they be disrupted.
2. However, determining with a low degree of uncertainty the exact vulnerabilities each of those assets or processes has will require some "formal assessment".
3. Things become more complicated when you factor in regulatory / standards non-compliance. I would submit that flushing out this information (depending on the role one is fulfilling and the culture of the company) is going to take time.
4. From my perspective, risk assessment methodologies can be compared to the traditional Capability Maturity Model. If dowsing or divining is where an organization is today, that may be fine for that organization. I would never tell any of the executives at my employer that what we do is the equivalent of divining, even if that were our level from a CMM perspective.
5. There are numerous ways to quantify qualitative risk. Even a qualitative label is really a mask for some numerical value. :-)
6. Until one embraces the fact that risk will always have an element of uncertainty, individual or organizational progress in this space will be hindered.
Keep your eyes open for a blog post about risk, uncertainty, and clarity.
Posted by Chris Hayes | May 28, 2009 8:37 AM
@Chris -
Thanks for the response. Perhaps what we need in risk assessment is the inclusion of an uncertainty factor. It would act kind of like a standard deviation and would flag for the reader just how reliable the numbers are. If the uncertainty factor is high, then the reliability of the assessed risk values is low, whereas if the uncertainty factor is low, the reliability of the assessed risk values is much higher.
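As a rough sketch of what I mean (completely made-up numbers, and not any established methodology), imagine every assessed value being reported alongside its spread rather than as a single point:

```python
import random
import statistics

# Rough sketch of reporting an "uncertainty factor" alongside a risk
# estimate. The event frequency and loss range below are invented.

random.seed(42)

def simulate_annual_loss(trials=10_000):
    """Monte Carlo over an assumed event frequency and per-event loss range."""
    results = []
    for _ in range(trials):
        events = random.randint(0, 4)                  # assumed 0-4 events/yr
        loss = sum(random.uniform(10_000, 250_000)     # assumed loss per event
                   for _ in range(events))
        results.append(loss)
    return results

losses = simulate_annual_loss()
mean_loss = statistics.mean(losses)
spread = statistics.stdev(losses)

# Report the estimate together with its spread, so the reader can judge
# how much to trust the point value.
print(f"estimated annual loss: ~${mean_loss:,.0f}")
print(f"uncertainty (std dev): ~${spread:,.0f}")
print(f"relative uncertainty:  {spread / mean_loss:.1%}")
```

A high relative uncertainty would tell the reader to treat the point estimate with suspicion; a low one would lend it some credibility.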
I agree that we should never assert to mgmt that what we're doing is tantamount to divining, but... within our own practice, we certainly need to realize when what we're doing is nowhere near scientific, repeatable, or reliable. The fact that the conventional practice is assigning numbers to words is proof positive that what we're doing is anything but scientific or empirical.
cheers,
-ben
Posted by Ben | May 28, 2009 9:48 AM
Firstly, if the discussion on Twitter was "pointless," why are you discussing it further? It was hardly a "war" and quite honestly, if it were, I don't know which side you're standing on at this point.
Secondly, you're misstating both the premise of my argument and what both of us said. Claiming I suggest that risk management and risk assessment are based on "gut" is ridiculous, considering the entire premise of my argument was using an empirical methodology for doing so and then backing it up with rigor from analysis tools.
Your desire for Vulcan-like "un-fuzzy" input data is simply unrealistic. It's silly to suggest that an organization at square zero is going to be able to get anywhere past "qualitative" measurement initially to help guide the process as part of maturing toward a quantitative result.
H, M and L are useless? Seems to me that we used them quite a bit in the NSA IAM courseware to evaluate impact...
Thirdly, and as I pointed out, there are multiple versions of OCTAVE, each with various levels of "upper management involvement"; OCTAVE-S is one example. But that's neither here nor there. The point is that you can walk into an organization and, by asking a few questions, understand what the most important assets are (people, process, technology, etc.).
Starting there and performing the discovery associated with a structured risk assessment is a fantastic way of ITERATIVELY cycling through this process, no matter what RA framework one chooses.
Finally, the reason you got my hackles up is that you're making declarative statements suggesting that things "can't be done" when I can clearly demonstrate the opposite, only to be dismissed by you as being argumentative or preaching from on high.
If you don't want to debate, don't enter into one.
That being said, after all your complaining, what is your solution besides "...hope the future reveals a better way to exist"? <-- that doesn't sound very scientific to me.
/Hoff
Posted by Christofer Hoff | June 4, 2009 12:27 PM
@Hoff -
I think you take unnecessary umbrage at the reference to the twit thing.
The fact is, my experience varies from yours. There are many cases where I've walked into orgs and had them say "BLAH is most important," only to find out later that, while it is relatively important, it's not the most important thing in the grand scheme.
I truly hate the H/M/L ratings. IAM at least does a good job defining what each one means, but that seems to be relatively rare in the biz. Overall, though, it's highly subjective. Hence my comparison to dowsing. Subjectivity leads to projecting bias, which leads to pet projects and bad decisions.
Is there currently a known way to be truly quantitative about this stuff? Clearly not - if there were, I wouldn't have written this incomplete thought. The point is that we really need a better way to do things. I'm hopeful it will be found by people far smarter than me. Such a future may lie in a complete change in thinking - this "state" discussion Alex brings up a lot (knowledge vs. nature). TBD.
-ben
Posted by Ben | June 4, 2009 1:06 PM
Look, I'm not trying to be argumentative for bitcount's sake, but your example simply makes my point.
Risk assessment (and thus management) is an iterative process. OCTAVE-based or not, you've done your job when you're able to point out that the asset(s) listed are or are not the most important based upon prevailing business conditions. Then you figure out -- within realistic constraints -- how you're going to deal with managing the risk(s) associated with them.
Rinse, lather, repeat...
Of course this stuff is subjective; business is run by humans. Ignoring that will get you awesome empirical data that is flawed the moment one of the monkeys varies from the binary.
It's a process. It takes time. It's subjective and messy. It would be great if it weren't, but criticizing reality as unscientific and then "wishing" for a brighter future is just an oxymoron; you're simply advocating the "dowsing/divining" method you're railing against in the first place.
There are tools and frameworks that allow you to get closer to making better decisions regarding risk; they still require the uncertainty of a human contributing subjective data to them.
In the grand scheme of things, I bet you all the $$$ in my pocket I can demonstrate how you can take an organization and derive "...a better way to do things."
Sadly, I don't think I'm any closer to really understanding your point beyond the fact that you "hate xyz," "wish for a better way," and don't offer any.
I'm not saying that to be a dick, I'm just missing the point completely, I guess.
/Hoff
Posted by Christofer Hoff | June 4, 2009 1:21 PM
@Hoff -
The points, in short, are this:
1) Risk assessment is subjective and loaded with uncertainty, and thus is challenged in being truly repeatable. (yes, I know, not earth-shattering, but I liked the analogy)
2) There is no reliable way today to quantify a qualitative value. ("soft" vs "hard" science)
3) Too much of the discussion in risk assessment and risk management seems to be around "getting better data" (such as to reduce uncertainty) without looking at the core problem: how the data itself is created and collected ("reliability").
fwiw.
-ben
Posted by Ben | June 4, 2009 1:44 PM