Risk = consequences × likelihood … or does it?

For those of us who like formal methodologies for translating threat models into action, the NIST framework and its accompanying presentations are a good place to get lost in. You can apply the approach to just about anything.

One formula you’ll come across a lot is Risk = consequences × likelihood, where likelihood = threat ranking × asset attractiveness × remaining vulnerabilities.
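To make that concrete, here’s a minimal sketch in Python. The threats, the 1–5 scoring scale and every number below are illustrative assumptions on my part, not values from NIST.

```python
# Hypothetical scores on a 1-5 scale; the threat names and numbers are
# invented for illustration, not taken from any real assessment.
threats = {
    # threat: (consequences, threat_ranking, asset_attractiveness, remaining_vulnerabilities)
    "unpatched web server": (4, 5, 4, 3),
    "phishing of finance team": (5, 4, 3, 4),
    "lost unencrypted laptop": (3, 3, 2, 2),
}

def risk(consequences, threat_ranking, attractiveness, vulnerabilities):
    # Risk = consequences x likelihood, where likelihood is the product of
    # threat ranking, asset attractiveness and remaining vulnerabilities.
    likelihood = threat_ranking * attractiveness * vulnerabilities
    return consequences * likelihood

# Highest risk first: this is the naive prioritised list.
for name, scores in sorted(threats.items(), key=lambda t: -risk(*t[1])):
    print(f"{name}: risk score {risk(*scores)}")
```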

That makes it sound conceptually easy to come up with a prioritised risk management programme, but two things screw up this process in the real world.

The first complication is that mitigating a particular threat (for example, patching a server) will probably mitigate others too, so you’ll want a measure of “bang per buck” for any action, as the sketch below shows.
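One way to capture that, again with made-up figures: credit each mitigation with the risk it removes across every threat it touches, then rank by total risk removed per unit of cost. The mitigation names, costs and reduction numbers are all hypothetical.

```python
# mitigation: (cost, {threat it helps with: risk score it removes})
# All figures are invented for illustration.
mitigations = {
    "patch web servers": (10, {"unpatched web server": 240, "lateral movement": 80}),
    "roll out MFA":      (25, {"phishing of finance team": 320, "credential stuffing": 150}),
    "disk encryption":   (5,  {"lost unencrypted laptop": 36}),
}

def bang_per_buck(cost, reductions):
    # Total risk removed across all affected threats, per unit of cost.
    return sum(reductions.values()) / cost

for name, (cost, reductions) in sorted(
        mitigations.items(), key=lambda m: -bang_per_buck(*m[1])):
    print(f"{name}: {bang_per_buck(cost, reductions):.1f} risk removed per unit cost")
```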

The second, and probably the biggest real-world factor, is the ease of implementing any mitigation. When you wonder why Windows XP, Server 2008 and lots of other unsupported old kit are still running, “too difficult” is usually the real reason.
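One crude way to fold that into the ranking, with invented numbers and a difficulty scale of my own devising: discount each mitigation’s bang per buck by how hard it is to implement.

```python
def priority(risk_reduced, cost, difficulty):
    # Risk removed per unit cost, discounted by implementation difficulty
    # (1 = trivial, 5 = practically impossible). This formula is an
    # illustrative assumption, not part of the NIST methodology.
    return risk_reduced / (cost * difficulty)

# Replacing an unsupported XP fleet has a big payoff on paper, but its
# difficulty drags it below cheaper, easier wins.
candidates = [
    # (mitigation, total risk removed, cost, difficulty)
    ("replace Windows XP fleet", 500, 40, 5),
    ("roll out MFA",             470, 25, 2),
    ("network segmentation",     300, 30, 3),
]
for name, reduced, cost, diff in sorted(candidates, key=lambda c: -priority(*c[1:])):
    print(f"{name}: priority {priority(reduced, cost, diff):.2f}")
```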

I did some work with a space agency on cyber risk. They explained that a mission control centre rarely gets updated because nobody can be sure an update won’t affect a satellite or a manned mission, so you sometimes find 30-year-old tech still in use today.

Fancy going down the rabbit hole? Take a look at the discussion of some of the tech used on the Shuttle missions.