The linked article is a blatant plug for a scorecard platform (no, I don’t get paid for linking to it), but reading it gave me pause for thought. I can think of dashboards that show the number of threats and how many have been mitigated; dashboards that count ‘attacks’ (where an automated scan gets the same weighting as a sophisticated attack, and insider jobs don’t get included at all); risk registers (I’m looking at you, RSA Archer) that amount to a manual best guess at the risks; and all sorts of other metrics.
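The weighting problem is easy to make concrete: if a dashboard counts every event as 1, a flood of automated scans swamps the one event that matters. A minimal sketch — the event categories and weights here are illustrative assumptions of mine, not any standard taxonomy:

```python
# Severity-weighted event scoring: an illustrative sketch.
# The categories and weights are assumptions for demonstration only,
# not an industry standard.

WEIGHTS = {
    "automated_scan": 0.001,      # background internet noise
    "phishing": 1.0,
    "targeted_intrusion": 100.0,
    "insider": 100.0,             # often missing from dashboards entirely
}

def raw_count(events):
    """What many dashboards report: every event counts as 1."""
    return len(events)

def weighted_score(events):
    """Weight each event by a rough severity instead."""
    return sum(WEIGHTS.get(kind, 1.0) for kind in events)

events = ["automated_scan"] * 1_000_000 + ["targeted_intrusion"]
print(raw_count(events))       # 1000001 -- the scans dominate the headline number
print(weighted_score(events))  # ~1100   -- now the intrusion actually registers
```

The raw count rewards exactly the behaviour the article criticises: repelling a million trivial probes looks better than catching one targeted attack.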
Just how do organisations measure the effectiveness of their cyber security programs? From the article:
[…] McKinsey’s James Kaplan and Jim Boehm offer the example of reports sent by the security team to senior management. Those reports feature references to “the millions of attacks the organization faces per week or per day.” While “millions of attacks” sounds impressive, those incidents are likely not from skilled cyber criminals, and are probably pretty easy to repel.
Focusing on just the number of deflected incidents can provide management with a false sense of security. Executives might think they’ve got a robust cyber security program — after all, they’re catching and resolving millions of attacks a week — when in fact the real threats are flying under the radar.
Another pitfall in cyber security management is static reporting. Organizations may be relying on metrics — like security ratings — that are only issued periodically. Those reports are snapshots capturing just one moment in time. A vendor that’s in compliance when a questionnaire is filled out may be out of compliance the next day.
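That snapshot problem is worth spelling out: a stored compliance flag tells you about the day the questionnaire was filled in, not about today. A toy sketch — the `Vendor` record, the 30-day staleness rule, and the vendor name are all hypothetical, invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical vendor record; the fields and the staleness rule are
# illustrative assumptions, not part of any real compliance framework.

@dataclass
class Vendor:
    name: str
    questionnaire_passed: bool  # the answer recorded when the form was filled in
    assessed_on: date           # when that snapshot was taken

def snapshot_says_compliant(v: Vendor) -> bool:
    """What a static report shows: the stored answer, forever."""
    return v.questionnaire_passed

def currently_trustworthy(v: Vendor, today: date, max_age_days: int = 30) -> bool:
    """Treat a stale snapshot as unknown: require a recent assessment too."""
    return v.questionnaire_passed and (today - v.assessed_on).days <= max_age_days

v = Vendor("Acme", questionnaire_passed=True, assessed_on=date(2024, 1, 2))
print(snapshot_says_compliant(v))                  # True  -- looks fine on paper
print(currently_trustworthy(v, date(2024, 6, 1)))  # False -- the snapshot is stale
```

The point is not the particular cut-off: it is that any metric consumed months after it was measured should be treated as a question to re-ask, not an answer to rely on.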