I recently gave a talk based on my "Scaling Risk Management" blog post (and an upcoming article). The talk was generally well received, but there was one question in particular that I didn't get a chance to answer, so I thought I'd elaborate on it a bit here.
During the talk I cover some "fundamentals" in order to baseline the conversation. I go through common terms and their generally accepted definitions, highlighting discrepancies between a few industry definitions (including "risk"). Part of that discussion covers risk tolerance vs. risk capacity vs. risk appetite. Oftentimes these terms get used interchangeably, but they are in fact distinctly different.
It may be a bit over-simplistic, but I think of these terms as follows (see the sketch after this list):
* Risk Tolerance: the "hard limit" on the amount of risk liability your organization is willing to carry.
* Risk Capacity: the "soft limit" on the amount of risk liability your organization is willing to carry.
* Risk Appetite: the amount of risk liability your organization will actively seek out.
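To make the distinction concrete, here's a minimal sketch in Python, with invented names and figures (none of this comes from any standard), treating the three terms as thresholds against a measured risk exposure:

```python
# Hypothetical sketch: appetite (target) <= capacity (soft limit)
# <= tolerance (hard limit), checked against a measured exposure.
# All names and dollar figures are illustrative only.

def classify_exposure(exposure, appetite, capacity, tolerance):
    """Compare a risk exposure (e.g., annualized loss expectancy
    in dollars) against the three limits."""
    if exposure > tolerance:
        return "over hard limit - must remediate"
    if exposure > capacity:
        return "over soft limit - escalate"
    if exposure < appetite:
        return "under target - possibly over-controlled"
    return "within managed range"

print(classify_exposure(exposure=600_000, appetite=200_000,
                        capacity=500_000, tolerance=1_000_000))
# -> over soft limit - escalate
```

The ordering is the point: the target sits at or below the soft limit, which sits at or below the hard limit.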
One thing that should jump out at you is that risk appetite is a different kind of categorization than the other two, and in my opinion it does not really apply to InfoRisk (or IT Risk, or InfoSec Risk - whichever version you prefer as a subset/component of OpRisk). The reason I don't think it fits is that risk appetite implies an actively risk-seeking posture, and I cannot conceive of a situation where InfoRisk would be managed with a risk-seeking attitude.
The counter-example offered on this topic was that of managing multiple risk areas. To illustrate, imagine that you have 5 risk areas tracked on a (green-yellow-red) heat-mapped dashboard. 3 of those areas are floating in the yellow zone, 1 is green, and 1 is red. The business may decide to reallocate resources (funding, people) away from the green risk area to the red risk area. It was argued that in this case the business is demonstrating risk-seeking behavior. I disagree with that characterization.
Ultimately, this may reduce to academic and semantic haggling, but to me the scenario describes a modification of risk tolerance (and possibly risk capacity), and does not speak to appetite. Given unlimited resources, the business would want to push all of those risk areas into the green. However, reality says that resources are (increasingly) limited, and thus the spend must be allocated smartly. In the example, the green risk area has a risk tolerance set too low, while the red area has one set too high. The business is thus deciding to increase the risk tolerance for the green risk while reducing the tolerance for the red risk, and then reallocating resources accordingly to better fit its strategy. As an aside, it would seem that the yellow risks demonstrate the risk capacity of the business - an amount of risk liability borne that is managed within the overall risk tolerance.
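Sketching that scenario in code may help show why I read it as a tolerance adjustment rather than risk-seeking. Everything here (area names, exposures, thresholds, the 0.5 cutoff) is invented for illustration:

```python
# Hypothetical sketch of the dashboard scenario: each area's color is a
# function of its exposure relative to its tolerance, so reallocating
# resources amounts to adjusting tolerances, not seeking risk.

def color(exposure, tolerance):
    ratio = exposure / tolerance
    if ratio < 0.5:
        return "green"
    return "yellow" if ratio <= 1.0 else "red"

areas = {
    "area_1": (40, 100),   # green: its tolerance was arguably set too low
    "area_2": (70, 100),   # yellow
    "area_3": (80, 100),   # yellow: risk capacity in action
    "area_4": (90, 100),   # yellow
    "area_5": (130, 100),  # red: its tolerance was set too high
}

for name, (exposure, tolerance) in areas.items():
    print(name, color(exposure, tolerance))

# Raising area_1's tolerance (accepting more exposure there) frees the
# resources used to drive area_5's exposure down under a lowered tolerance.
```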
Worth a Mention: Gordon-Loeb
Related to this topic, there's been some interesting discussion in the SIRA community about an article in the WSJ, titled "You May Be Fighting the Wrong Security Battles," by Drs. Gordon and Loeb, which talks about their research melding risk management and economic modeling (they published a book on the topic in 2002). In the article (and their book), they talk about the point of diminishing returns, described as: "The amount a firm should spend to protect information is generally no more than one-third or so of the projected loss from a breach."
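For what it's worth, the published result behind that rule of thumb is slightly more precise: in their 2002 model, the optimal security investment never exceeds 1/e (roughly 36.8%) of the expected loss. A quick sketch of that ceiling, with made-up inputs:

```python
import math

# Gordon-Loeb ceiling: optimal security spend never exceeds 1/e (~36.8%)
# of expected loss, which is where "one-third or so" comes from.
# The vulnerability and loss figures below are invented.

vulnerability = 0.25        # estimated probability the threat succeeds
potential_loss = 4_000_000  # estimated dollar loss if the breach occurs

expected_loss = vulnerability * potential_loss
spending_ceiling = expected_loss / math.e

print(f"Expected loss:    ${expected_loss:,.0f}")     # $1,000,000
print(f"Spending ceiling: ${spending_ceiling:,.0f}")  # ~$367,879
```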
It's an interesting theory, and one that likely makes sense. Unfortunately, the rest of their article goes on to describe a "risk assessment" technique that doesn't make much sense. As seen in numerous other places (e.g., the OWASP Risk Rating Methodology), they want to start with the "risk" (which oftentimes isn't "risk" so much as a threat or weakness) and somehow reverse-engineer the necessary component values (and, by extension, either ignore or reverse the actual derived "risk" value). This approach is illogical: there are simply too many unknown variables to work backward from a single rating.
To summarize: while I find their results interesting and plausible, I greatly disagree with their proposed approach to implementing their findings. Rather, I think you must start with a solid (preferably quantitative) risk assessment methodology that gives you estimated costs for breach scenarios (like what FAIR does). Once you have that information, you can determine your risk tolerance and risk capacity, as discussed above. Under such an approach, and applying Gordon-Loeb, I'd then say that your risk tolerance would max out at roughly a 37% spend against estimated breach costs (the 1/e ceiling from their model), and that your risk capacity would float lower than that (possibly in the 20-25% range, or maybe even much lower). All of this is caveated with the thought that if there are cheap and easy remediations that give you a bigger bang for the buck, then you'd choose those first.
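As a hypothetical illustration of that budgeting logic (the percentages are the ones suggested above; the breach-cost figure is invented):

```python
# Hypothetical caps derived from a quantitative (FAIR-style) breach-cost
# estimate. The scenario cost is invented for illustration.

estimated_breach_cost = 2_500_000  # output of a quantitative risk assessment

tolerance_cap = 0.37 * estimated_breach_cost  # hard ceiling on security spend
capacity_cap = 0.22 * estimated_breach_cost   # softer target (20-25% band)

print(f"Tolerance cap: ${tolerance_cap:,.0f}")  # $925,000
print(f"Capacity cap:  ${capacity_cap:,.0f}")   # $550,000
```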
One other piece that's missing here is the need for iterative analysis. In applying Gordon-Loeb, you still need to re-assess your risk factors on a regular basis to measure the impact of mitigations. A purely economic model is unlikely to truly reflect the realities of a technical environment. As such, while their guidance provides a reasonable starting point, it is not in any way the be-all and end-all. More importantly, I don't think allocating resources purely based on Gordon-Loeb would be adequate from a legal defensibility perspective. You still need data and analysis to back up the decisions you've made. In this regard, Gordon-Loeb really just provides an up-front estimate on expenditure; it doesn't speak to where your spend will actually end up (which they seem to acknowledge in the article). It all goes to show once again that a formal risk analysis is ultimately going to be necessary.