Before we go any further,
I'd like to discuss what I mean by risk,
because it's a term that means different things to different people.
When I use the term risk,
I mean something that threatens things we value:
finances, health,
social status, or psychological well-being.
A risk is measured on two dimensions.
The first is a dimension of uncertainty regarding whether something bad will happen.
We measure uncertainty with probabilities.
The second dimension concerns the severity of that something bad.
We measure the magnitude of severity,
how bad the possible consequences might be,
in terms of some kind of potential loss:
lives, money, environmental damage, reputation.
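One standard way to combine these two dimensions, though the transcript doesn't name it explicitly, is an expected-loss calculation: probability times potential loss. A minimal sketch in Python, using hypothetical probabilities and dollar figures:

```python
# Risk has two dimensions: the probability that something bad happens,
# and the severity (potential loss) if it does.
# Expected loss multiplies the two into a single comparable number.
# All probabilities and dollar losses below are hypothetical.

def expected_loss(probability: float, loss: float) -> float:
    """Combine the two dimensions of a risk into one expected-loss figure."""
    return probability * loss

# Two hypothetical risks: a likely small loss vs. an unlikely large loss.
minor_outage = expected_loss(probability=0.30, loss=10_000)       # → 3000.0
rare_disaster = expected_loss(probability=0.001, loss=5_000_000)  # → 5000.0
```

Note how the rare disaster carries the larger expected loss despite its tiny probability; this is exactly the kind of comparison intuition tends to get wrong.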
Why do we care about biases and limitations in our risk perceptions and risk decisions?
Because when organizations experience some kind of disaster,
after-the-fact investigations often blame bad decision making and poor risk management.
For example, the presidential commission that investigated
the launch and explosion of the Challenger space shuttle in 1986,
"... concluded that there was a serious flaw in
the decision making process leading up to the launch of flight 51-L."
"A well structured and managed system emphasizing safety
would have flagged the rising doubts about the Solid Rocket Booster joint seal."
After those events and the commission's report,
the entire space shuttle program was shut down for nearly three years.
Now, your organization may or may not need to manage risks on that scale.
Regardless, in all organizations,
people make risk decisions that can be of great consequence to the company.
As a manager, you want to maximize the ability of
the people in your organization to accurately identify risks,
and to properly evaluate,
communicate and address them.
The problem is that there's a lot of research showing that
our intuitive decision processes and risk perceptions are often flawed,
and that the conclusions we arrive at often deviate systematically from
those we would reach if we conducted formal risk analyses.