I want to talk about the potential for what we call
a compatibility bias when we're pricing out.
Consequences that are already in dollar terms,
or that are readily converted to dollars,
tend to get more weight in the decisions, in the evaluations.
You can see how that could affect the decision you'd make.
Now, it turns out that it's possible to compensate,
either through careful assessment of the relative prices in the first place,
or through a method for adjusting for the compatibility bias after the fact.
That is, you make those pricing-out judgments,
and then we come in and adjust them slightly,
knowing that the compatibility bias exists.
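The lecture doesn't give a formula for that after-the-fact adjustment, but one way to picture it is as shrinking the weight on dollar-denominated attributes and renormalizing. The attribute names and the correction factor below are purely illustrative assumptions, not anything from the lecture:

```python
# Hypothetical sketch of an after-the-fact compatibility-bias adjustment.
# Attributes already in dollar terms tend to be over-weighted, so we
# shrink their elicited weights by an illustrative correction factor.

def adjust_for_compatibility_bias(weights, in_dollar_terms, correction=0.85):
    """Down-weight dollar-denominated attributes, then renormalize.

    weights         : dict of attribute -> elicited weight
    in_dollar_terms : set of attributes already expressed in dollars
    correction      : illustrative shrinkage factor (< 1); the real
                      adjustment would come from calibration studies
    """
    shrunk = {
        attr: w * correction if attr in in_dollar_terms else w
        for attr, w in weights.items()
    }
    total = sum(shrunk.values())
    return {attr: w / total for attr, w in shrunk.items()}

weights = {"cost": 0.5, "safety": 0.3, "comfort": 0.2}
adjusted = adjust_for_compatibility_bias(weights, {"cost"})
# "cost" now carries relatively less weight than it did before
```

The point is only the shape of the operation: judge first, then nudge the dollar-compatible attributes down by a known, pre-studied amount.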
Let's talk about probability assessment.
Typically, in these complicated decisions,
whether they're your decisions or an organization's decisions,
you want to go to somebody who knows more than you do to get information.
Now, an expert may not know everything, right?
So, the expert falls somewhere between ignorance and perfect information.
Let's call it imperfect knowledge.
Well, that imperfect knowledge the expert does have can be valuable,
and it's our job to figure out how to use it.
We're going to talk about probability as a judgment here.
Realize that eliciting probability judgments
from experts is not the same as collecting data.
These judgments can be based on models,
on data,
or on experience;
and expert judgments, of course, may be subject to biases,
like anchoring on a number you've just been exposed to, or overconfidence.
Overconfidence is one we often have to worry about in decision analysis:
we get intervals that are too narrow.
So, instead of the statistical rules you'd follow
when collecting data in an experiment,
we have to pay attention to psychological rules to
do a good job of eliciting expert probability judgments.
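The lecture doesn't specify an elicitation protocol, but a common decision-analysis device, assumed here for illustration, is to ask the expert for a few percentiles and then discretize them with 0.3/0.4/0.3 weights (sometimes called Swanson's rule). The psychological rules govern how you ask; the arithmetic afterwards is simple:

```python
def discretize_elicited(p10, p50, p90):
    """Turn three elicited percentiles into a small discrete distribution
    using 0.3/0.4/0.3 weights (the so-called Swanson's rule).

    Which percentiles to ask for, and in what order, is where the
    psychological rules come in; this only handles the arithmetic.
    """
    if not (p10 <= p50 <= p90):
        raise ValueError("elicited percentiles must be monotone")
    return [(p10, 0.3), (p50, 0.4), (p90, 0.3)]

def expected_value(points):
    """Probability-weighted average of the discretized values."""
    return sum(value * weight for value, weight in points)

# Hypothetical elicited 10th/50th/90th percentiles for an uncertain cost:
points = discretize_elicited(100, 150, 250)
ev = expected_value(points)  # 0.3*100 + 0.4*150 + 0.3*250 = 165.0
```

Note the monotonicity check: a coherence test like this is one simple guard the analyst can run on the expert's raw judgments before using them.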