Showing posts with label psychology. Show all posts

Saturday, July 19, 2008

Predictably Irrational

As regular readers have surely noticed by now, I've been on a bit of a behavioral psychology kick lately. Some of this reflects long-standing personal interest and my latest reading. But I also feel increasingly concerned that researchers in information seeking--especially those working on tools--have neglected the impact of cognitive bias.

For those who are unfamiliar with the last few decades of research in this field, I highly recommend a recent lecture by behavioral economist Dan Ariely on predictable irrationality. Not only is he an informative and entertaining speaker, but he chooses concrete and credible examples, starting with his contemplation of how we experience pain, drawn from his own experience of suffering third-degree burns over 70 percent of his body. I promise you, the lecture is an hour well spent, and the time will fly by.

A running theme through this and my other posts on cognitive bias is that the way information is presented to us has dramatic effects on how we interpret that information.

This is great news for anyone who wants to manipulate people. In fact, I once asked Dan about the relative importance of people's inherent preferences vs. those induced by presentation on retail web sites, and he all but dismissed the former (i.e., you can sell ice cubes to Eskimos, if you can manipulate their cognitive biases appropriately). But it's sobering news for those of us who want to empower users to evaluate information objectively to support decision making.

Tuesday, July 15, 2008

Beyond a Reasonable Doubt

In Psychology of Intelligence Analysis, Richards Heuer advocates that we quantify expressions of uncertainty: "To avoid ambiguity, insert an odds ratio or probability range in parentheses after expressions of uncertainty in key judgments."

His suggestion reminds me of my pet peeve about the unquantified notion of reasonable doubt in the American justice system. I've always wanted (but never had the opportunity) to ask a judge what probability of innocence constitutes a reasonable doubt.

Unfortunately, as Heuer himself notes elsewhere in his book, we human beings are really bad at estimating probabilities. I suspect (with a confidence of 90 to 95%) that quantifying our uncertainties as probability ranges will only convey a false sense of precision.

So, what can we do to better communicate uncertainty? Here are a couple of thoughts:
  • We can calibrate estimates based on past performance. It's unclear what will happen when people realize that their estimates are being recalibrated, but, at worst, it feels like good fodder for research in judgment and decision making.

  • We can ask people to express relative probability judgments. While these are also susceptible to bias, at least they don't demand as much precision. And we can always vary the framing of questions to try to factor out the cognitive biases they induce.
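The first idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the forecast history below is invented, and a real calibration study would use a logged record of (stated probability, actual outcome) pairs.

```python
from collections import defaultdict

def calibration_table(forecasts, bins=10):
    """Group forecasts into probability bins and compare each bin's
    average stated probability with its observed frequency of success."""
    grouped = defaultdict(list)
    for stated, outcome in forecasts:
        grouped[min(int(stated * bins), bins - 1)].append((stated, outcome))
    table = {}
    for b, pairs in sorted(grouped.items()):
        avg_stated = sum(s for s, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        table[b] = (round(avg_stated, 2), round(observed, 2))
    return table

# Hypothetical history: an analyst who says "90%" but is right only 60%
# of the time -- exactly the overconfidence the literature predicts.
history = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.9, 1),
           (0.5, 1), (0.5, 0), (0.5, 0), (0.5, 1)]
print(calibration_table(history))  # bin 9: stated 0.9, observed 0.6
```

Once you have such a table, "90%" from this analyst can be read as roughly 60% going forward, which is the translation step whose effect on the analyst is the open question.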

Also, when we talk about uncertainty, it is important that we distinguish between aleatory and epistemic uncertainty.

When I flip a coin, I am certain it has a 50% chance of landing heads, because I know the probability distribution of the event space. This is aleatory uncertainty, and forms the basis of probability and statistics.

But when I reason about less contrived uncertain events, such as estimating the likelihood that my bank will collapse this year, the challenge is my ignorance of the probability distribution. This is epistemic uncertainty, and it's a lot messier.
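The distinction can be made concrete with a toy simulation. In this sketch (the bias value is an invented placeholder), the observer knows the fair coin's distribution exactly, but must estimate the hidden coin's bias from data; only the second kind of uncertainty shrinks as evidence accumulates.

```python
import random

random.seed(42)

# Aleatory: the distribution is known (p = 0.5 exactly); the only
# uncertainty is the irreducible randomness of each individual flip.
known_p = 0.5

# Epistemic: the observer never sees the true bias, only flips of a
# coin with a hidden parameter, and must estimate it from evidence.
hidden_p = 0.7  # unknown to the observer in this thought experiment
flips = [1 if random.random() < hidden_p else 0 for _ in range(1000)]

# The estimate converges toward the hidden bias as data accumulates --
# epistemic uncertainty is reducible, aleatory uncertainty is not.
estimate = sum(flips) / len(flips)
print(f"known p = {known_p}, estimated p = {estimate:.2f}")
```

For my bank-collapse example, the situation is worse still: there is no repeatable experiment to run, so even this estimation strategy is unavailable.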

If you'd like to learn more about aleatory and epistemic uncertainty, I recommend Nassim Nicholas Taleb's Fooled by Randomness (which is a better read than his better-known The Black Swan).

In summary, we have to accept the bad news that the real world is messy. As a mathematician and computer scientist, I've learned to pursue theoretical rigor as an ideal. Like me, you may find it very disconcerting to not be able to treat all real-world uncertainty in terms of probability spaces. Tell it to the judge!

Friday, July 11, 2008

Psychology of Intelligence Analysis

In the course of working with some of Endeca's more interesting clients, I started reading up on how the intelligence agencies address the challenges of making decisions, especially in the face of incomplete and contradictory evidence. I ran into a book called Psychology of Intelligence Analysis by former CIA analyst Richards Heuer. The entire book is available online, or you can hunt down a hard copy of the out-of-print book from your favorite used book seller.

Given the mixed record of the intelligence agencies over the past few decades, you might be wondering if the CIA is the best source for learning how to analyze intelligence. But this book is a gem. Even if the agencies don't always practice what they preach (and the book makes a good case as to why), the book is an excellent tour through the literature on judgment and decision making.

If you're already familiar with work by Herb Simon, Danny Kahneman, and Amos Tversky, then a lot of the ground he covers will be familiar--especially the third of the book that enumerates cognitive biases. I'm a big fan of the judgment and decision making literature myself. But I still found some great nuggets, particularly Chapter 8 on Analysis of Competing Hypotheses. Unlike most of the literature, which focuses exclusively on demonstrating our systematic departures from rationality, Heuer hopes to offer at least some constructive advice.

As someone who builds tools to help people make decisions using information that not only may be incomplete and contradictory, but also challenging to find in the first place, I'm very sensitive to how people's cognitive biases affect their ability to use these tools effectively. One of the HCIR '07 presentations by Jolie Martin and Michael Norton (who have worked with Max Bazerman) showed how the manner in which information was partitioned on retail web sites drove decisions, i.e., re-organizing the same information affected consumers' decision processes.

It may be tempting for us on the software side to wash our hands of our users' cognitive biases. But such an approach would be short-sighted. As Heuer shows in his well-researched book, people not only have cognitive biases, but are unable to counter those biases simply by being made aware of them. Hence, if software tools are to help people make effective decisions, it is the job of us tool builders to build with those biases in mind, and to support processes like Analysis of Competing Hypotheses that try to compensate for human bias.
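To make the ACH idea concrete, here is a minimal sketch of its core scoring step. The hypotheses and evidence scores below are invented placeholders; the point is the ranking rule Heuer emphasizes: favor the hypothesis with the least inconsistent evidence, not the one with the most consistent evidence.

```python
# Evidence scores per hypothesis: +1 consistent, 0 neutral, -1 inconsistent.
# Each row holds one hypothesis's scores against the same four evidence items.
matrix = {
    "H1: server bug":     [+1, -1, +1, 0],
    "H2: network outage": [+1, +1, -1, -1],
    "H3: operator error": [0, +1, +1, +1],
}

def rank_by_inconsistency(matrix):
    """Rank hypotheses by their count of inconsistent evidence items,
    fewest first -- the hypothesis hardest to refute comes out on top."""
    scores = {h: sum(1 for s in row if s < 0) for h, row in matrix.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

for hypothesis, inconsistencies in rank_by_inconsistency(matrix):
    print(f"{hypothesis}: {inconsistencies} inconsistent item(s)")
```

A tool supporting this process would let users maintain such a matrix explicitly, which counters the natural tendency to seek only confirming evidence for a favored hypothesis.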