Tag Archives: probability

Micromorts and Understanding the Probability of Death

Understanding probabilities is hard (viz.), and it’s especially hard when we try to understand, and make rational decisions based on, very small probabilities, such as one-in-a-million chance events. How, then, can we communicate risks of that order?

The answer is to use a more understandable scale, such as micromorts: “a unit of risk measuring a one-in-a-million probability of death”. Some activities that increase your risk of death by one micromort (according to, among other sources, the Wikipedia entry; a short calculation sketch follows the list):

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • living 2 days in New York or Boston (air pollution)
  • living 2 months in Denver (cancer from cosmic radiation)
  • living 2 months with a smoker (cancer, heart disease)
  • living 150 years within 20 miles of a nuclear power plant (cancer from radiation)
  • drinking Miami water for 1 year (cancer from chloroform)
  • eating 100 charcoal-broiled steaks (cancer from benzopyrene)
  • eating 40 tablespoons of peanut butter (liver cancer from Aflatoxin B)
  • eating 1000 bananas (cancer from the radioactive potassium-40 they contain; about 1 kBED)
  • travelling 6 miles (10 km) by motorbike (accident)
  • travelling 16 miles (26 km) on foot (accident)
  • travelling 20 miles (32 km) by bike (accident)
  • travelling 230 miles (370 km) by car (accident)
  • travelling 6000 miles (9656 km) by train (accident)
  • flying 1000 miles (1609 km) by jet (accident)
  • flying 6000 miles (9656 km) by jet (cancer from cosmic radiation)
  • taking 1 ecstasy tablet
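
As a rough illustration of the arithmetic behind the unit, here is a minimal Python sketch (the function names and the way the example combines figures from the list above are my own framing, not taken from the cited sources) that converts a probability of death into micromorts and combines several small, independent risks:

```python
MICROMORT = 1e-6  # one micromort = a one-in-a-million probability of death


def to_micromorts(probability_of_death: float) -> float:
    """Express a probability of death in micromorts."""
    return probability_of_death / MICROMORT


def combined_risk(exposures_in_micromorts: list[float]) -> float:
    """Probability of death from several independent exposures.

    Computed exactly as 1 - prod(1 - p_i); for risks this small,
    simply summing the micromorts gives almost the same answer.
    """
    prob_survive = 1.0
    for m in exposures_in_micromorts:
        prob_survive *= 1.0 - m * MICROMORT
    return 1.0 - prob_survive


# Example: a 230-mile car journey (1 micromort), 1.4 cigarettes (1 micromort)
# and a single charcoal-broiled steak (1/100 of the 100-steak figure).
day = [1.0, 1.0, 0.01]
print(to_micromorts(combined_risk(day)))  # ~2.01 micromorts
```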

Issue fifty-five of Plus magazine looked at micromorts in more detail, thanks to David Spiegelhalter (the Winton Professor of the Public Understanding of Risk at the University of Cambridge) and Mike Pearson, both of Understanding Uncertainty.

via Schneier on Security

The Numbers in Our Words: Words of Estimative Probability

Toward the end of this month I will almost certainly post the traditional Lone Gunman Year in Review post. Exactly how likely am I to do this? Am I able to quantify the probability that I’ll do this? By using the phrase “almost certainly”, I already have.

To provide unambiguous, quantitative odds of an event occurring based solely on word choice, the “father of intelligence analysis”, Sherman Kent, developed and defined the Words of Estimative Probability (WEPs): words and phrases we use to suggest probability, each paired with the numerical probability range it is meant to convey.

Kent’s idea has had a mixed reception in the intelligence community, and disregard for the practice has been blamed, in part, for the intelligence failings that led to 9/11.

The words by decreasing probability:

  • Certain: 100%
  • Almost Certain: 93% ± 6%
  • Probable: 75% ± 12%
  • Chances About Even: 50% ± 10%
  • Probably Not: 30% ± 10%
  • Almost Certainly Not: 7% ± 5%
  • Impossible: 0%
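
To make those ranges concrete, here is a minimal Python sketch (my own illustration, not part of Kent’s work) that maps a numerical probability to the nearest of Kent’s phrases, using the central values listed above and ignoring the ± ranges:

```python
# Kent's Words of Estimative Probability as (phrase, central probability) pairs,
# taken from the list above; the quoted ranges are omitted for simplicity.
WEPS = [
    ("Certain", 1.00),
    ("Almost Certain", 0.93),
    ("Probable", 0.75),
    ("Chances About Even", 0.50),
    ("Probably Not", 0.30),
    ("Almost Certainly Not", 0.07),
    ("Impossible", 0.00),
]


def estimative_word(probability: float) -> str:
    """Return the Kent phrase whose central value lies closest to the given probability."""
    return min(WEPS, key=lambda pair: abs(pair[1] - probability))[0]


print(estimative_word(0.85))  # "Almost Certain"
print(estimative_word(0.35))  # "Probably Not"
```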

The practice has also gained some advocates in medicine, who have formed the following list of definitions:

  • Likely: Expected to happen to more than 50% of subjects
  • Frequent: Will probably happen to 10-50% of subjects
  • Occasional: Will happen to 1-10% of subjects
  • Rare: Will happen to less than 1% of subjects

It would be nice if there were such definitions for the many other ambiguous words we use daily.

Learn Statistics, Damn You!

Thanks to my moderate knowledge of statistics, I know that I have a lot more to learn in the field and should never make assumptions about data or analyses (even my own).

Because of this, I share a grievance with Zed Shaw, who says that “programmers need to learn statistics or I will kill them all”. Required reading, and advice not just for programmers but for everyone who looks at data, creates models, or even reads a newspaper.

I have a major pet peeve that I need to confess. I go insane when I hear programmers talking about statistics like they know shit when it’s clearly obvious they do not. I’ve been studying it for years and years and still don’t think I know anything. This article is my call for all programmers to finally learn enough about statistics to at least know they don’t know shit. I have no idea why, but their confidence in their lacking knowledge is only surpassed by their lack of confidence in their personal appearance.

My recommendation? Read this article to realise that you know nothing, and then pick up a copy of John Allen Paulos’ Innumeracy and Darrell Huff’s How to Lie with Statistics in order to realise that you know even less than you thought (but a hell of a lot more than the average person).

Being Rational About Risk

Leonard Mlodinow, a physicist at Caltech and author of The Drunkard’s Walk (a highly praised book on randomness and our inability to take it into account), has an interview in The New York Times about understanding risk. Some choice quotes:

I find that predicting the course of our lives is like predicting the weather. You might be able to predict your future in the short term, but the longer you look ahead, the less likely you are to be correct.

I don’t think complex situations like [the current financial crisis] can be predicted. There are too many uncontrollable or unmeasurable factors. Afterwards, of course, it will appear that some people had gotten it just right: since there are many people making many predictions, no doubt some of them will get it right, if only by chance. But that doesn’t mean that, if not for some unforeseen random turn, things wouldn’t have gone the other way. […]

In some sense this idea is encapsulated in the cliché that “hindsight is always 20/20,” but people often behave as if the adage weren’t true. In government, for example, a “should-have-known-it” blame game is played after every tragedy.

As someone who has taken risks in life I find it a comfort to know that even a coin weighted toward failure will sometimes land on success. Or, as I.B.M. pioneer Thomas Watson said, “If you want to succeed, double your failure rate.”

I haven’t had a chance to watch it, but in May 2008 Mlodinow spoke for the Authors@Google series.

The Birthday Problem

I’ve heard of this ‘problem’ numerous times before, as I’m sure many others have too. Nonetheless, every time I hear it, it fascinates me.

The birthday problem (or paradox, as it’s often called) looks at the probability that, in a randomly chosen group of people, at least two share a birthday.

In a group of at least 23 randomly chosen people, there is more than 50% probability that some pair of them will both have been born on the same day. For 57 or more people, the probability is more than 99%, and it reaches 100% when the number of people reaches 367[…]. The mathematics behind this problem leads to a well-known cryptographic attack called the birthday attack.
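
The figures in that quote are easy to verify: the probability that n people all have distinct birthdays is 365/365 × 364/365 × … × (365 − n + 1)/365, and the shared-birthday probability is its complement. A quick Python sketch (ignoring leap years, as the standard treatment does):

```python
def shared_birthday_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday (365-day year)."""
    if n > 365:
        # Pigeonhole principle; counting 29 February the bound becomes 367, as in the quote.
        return 1.0
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct


print(shared_birthday_probability(23))  # ~0.507
print(shared_birthday_probability(57))  # ~0.990
```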