Tag Archives: statistics

Micromorts and Understanding the Probability of Death

Understanding probabilities is hard (viz.), and it’s especially hard when we try to understand and take rational decisions based on very small probabilities, such as one-in-a-million chance events. How, then, can we communicate risks at that scale?

The answer is to use a more understandable scale, such as micromorts: “a unit of risk measuring a one-in-a-million probability of death”. Some activities that increase your risk of death by one micromort (according to, among other sources, the Wikipedia entry):

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • living 2 days in New York or Boston (air pollution)
  • living 2 months in Denver (cancer from cosmic radiation)
  • living 2 months with a smoker (cancer, heart disease)
  • living 150 years within 20 miles of a nuclear power plant (cancer from radiation)
  • drinking Miami water for 1 year (cancer from chloroform)
  • eating 100 charcoal-broiled steaks (cancer from benzopyrene)
  • eating 40 tablespoons of peanut butter (liver cancer from Aflatoxin B)
  • eating 1000 bananas (cancer from radioactive potassium-40; a dose of 1 kBED)
  • travelling 6 miles (10 km) by motorbike (accident)
  • travelling 16 miles (26 km) on foot (accident)
  • travelling 20 miles (32 km) by bike (accident)
  • travelling 230 miles (370 km) by car (accident)
  • travelling 6000 miles (9656 km) by train (accident)
  • flying 1000 miles (1609 km) by jet (accident)
  • flying 6000 miles (9656 km) by jet (cancer from cosmic radiation)
  • taking 1 ecstasy tablet
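
To make the unit concrete, here is a minimal Python sketch (the function names and the example trip are mine, not from any of the sources above) of how independent one-micromort exposures accumulate into a probability of death:

```python
# A rough sketch, assuming each exposure is an independent one-in-a-million
# risk of death; the example figures are illustrative only.

MICROMORT = 1e-6  # one micromort = a one-in-a-million probability of death


def death_probability(micromorts: float) -> float:
    """Probability of death after accumulating `micromorts` independent
    one-micromort exposures: 1 - (1 - 1e-6)^n."""
    return 1.0 - (1.0 - MICROMORT) ** micromorts


def micromorts_for_distance(miles: float, miles_per_micromort: float) -> float:
    """Micromorts accrued by travelling `miles`, given a miles-per-micromort
    figure like those quoted in the list above."""
    return miles / miles_per_micromort


if __name__ == "__main__":
    # Hypothetical 1,000-mile road trip at roughly 230 miles per micromort:
    trip = micromorts_for_distance(1000, 230)
    print(f"Car trip: {trip:.2f} micromorts, "
          f"P(death) ≈ {death_probability(trip):.2e}")
```

At these magnitudes the arithmetic is effectively linear: 4.35 micromorts is, to a very good approximation, a 4.35-in-a-million chance of death.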

Issue fifty-five of Plus magazine looked at micromorts in more detail, thanks to David Spiegelhalter (the Winton Professor of the Public Understanding of Risk at the University of Cambridge) and Mike Pearson, both of Understanding Uncertainty.

via Schneier on Security

Psychic Numbing and Communicating on Risk and Tragedies

I’ve been preoccupied lately with the developing aftermath of the Tōhoku earthquake. Unlike other disasters on a similar or greater scale, I’m finding it easier to grasp the real human cost of the disaster in Japan because my brother lives in Kanagawa Prefecture, and therefore there are fewer levels of abstraction between me and those directly affected. You could say that this feeling is related to what Mother Teresa was referring to when she said, “If I look at the mass I will never act. If I look at the one, I will.”

If I had no direct connection with Japan I assume the dry statistics of the sizeable tragedy would leave me mostly unaffected — this is what Robert Jay Lifton termed “psychic numbing”. As Brian Zikmund-Fisher, a risk communication expert at the University of Michigan, introduces the topic:

People are remarkably insensitive [to] variations in statistical magnitude. Single victims or small groups who are unique and identifiable evoke strong reactions. (Think, for example, the Chilean miners or “baby Jessica” who was trapped in the well in Texas in 1987.) Statistical victims, even if much more numerous, do not evoke proportionately greater concern. In fact, under some circumstances, they may evoke less concern than a single victim does. […]

To overcome psychic numbing and really attach meaning to the statistics we are hearing […] we have to be able to frame the situation in human terms.

Zikmund-Fisher links heavily to Paul Slovic’s essay on psychic numbing in the context of genocide and mass murder (pdf): an essential read for those interested in risk communication. It looks at the psychology behind why we are so often inactive in the face of mass deaths (part of the answer: our tendency to rely on affect and experiential thinking rather than analytical thinking).

The Numbers in Our Words: Words of Estimative Probability

Toward the end of this month I will almost certainly post the traditional Lone Gunman Year in Review post. Exactly how likely am I to do this? Am I able to quantify the probability that I’ll do this? By using the phrase “almost certainly”, I already have.

To provide unambiguous, quantitative odds of an event occurring based solely on word choice, the “father of intelligence analysis”, Sherman Kent, developed and defined the Words of Estimative Probability (WEPs): the words and phrases we use to suggest probability, each paired with the numerical probability range it should convey.

Kent’s idea has had a mixed reception in the intelligence community, and disregard of the practice has been blamed, in part, for the intelligence failings that led to 9/11.

The words by decreasing probability:

  • Certain: 100%
  • Almost Certain: 93% ± 6%
  • Probable: 75% ± 12%
  • Chances About Even: 50% ± 10%
  • Probably Not: 30% ± 10%
  • Almost Certainly Not: 7% ± 5%
  • Impossible: 0%
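
As a rough illustration (my own encoding in Python, not part of Kent’s original work), the table above can be treated as a simple lookup from phrase to probability range:

```python
# A sketch of Kent's Words of Estimative Probability as a lookup table;
# the (low, high) bounds are derived from the centre ± spread values above.

WEP_RANGES = {
    "certain": (1.00, 1.00),
    "almost certain": (0.87, 0.99),        # 93% ± 6%
    "probable": (0.63, 0.87),              # 75% ± 12%
    "chances about even": (0.40, 0.60),    # 50% ± 10%
    "probably not": (0.20, 0.40),          # 30% ± 10%
    "almost certainly not": (0.02, 0.12),  # 7% ± 5%
    "impossible": (0.00, 0.00),
}


def phrase_for(probability: float) -> str:
    """Return the first estimative phrase whose range covers the probability."""
    for phrase, (low, high) in WEP_RANGES.items():
        if low <= probability <= high:
            return phrase
    return "no matching phrase"


print(phrase_for(0.95))  # -> almost certain
```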

The practice has also gained some advocates in medicine, where the following definitions have been proposed:

  • Likely: Expected to happen to more than 50% of subjects
  • Frequent: Will probably happen to 10-50% of subjects
  • Occasional: Will happen to 1-10% of subjects
  • Rare: Will happen to less than 1% of subjects

It would be nice if there were such definitions for the many other ambiguous words we use daily.

The Statistics of Wikipedia’s Fundraising Campaign

Yesterday, 15th January 2011, Wikipedia celebrated its tenth birthday. Just over two weeks before that, Wikipedia also celebrated the close of its 2010 fundraising campaign, in which over sixteen million dollars was raised from over half a million donors in just fifty days.

The 2010 campaign was billed as being data-driven, with the Wikipedia volunteers “testing messages, banners, and landing pages & doing it all with an eye on integrity in data analysis”.

Naturally, all of the test data, analyses and findings are available, providing a fascinating overview of Wikipedia’s large-scale and effective campaign. Of particular interest:

If you’re ever involved in any form of fundraising (online or off), this dataset is essential reading, as will be the planned “Fundraising Style Guide” that I hope will be released soon.
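
For a sense of what that data-driven testing involves in practice, comparing two banners usually comes down to comparing conversion rates. Here is a minimal sketch of a two-proportion z-test; the impression and donation counts below are invented for illustration and are not taken from the Wikipedia dataset:

```python
# Hypothetical comparison of two banners' donation rates; the counts are
# made up and not from the Wikipedia campaign data.
from math import erf, sqrt


def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for the difference between
    two conversion rates, using the pooled normal approximation."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # standard normal CDF
    return z, p_value


# Invented example: banner A drew 420 donations from 100,000 impressions,
# banner B drew 370 from 100,000.
z, p = two_proportion_z(420, 100_000, 370, 100_000)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```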

My favourite banner, which was eliminated toward the beginning of the campaign, has to be:

One day people will look back and wonder what it was like not to know.

And if you’re interested in what Jimmy Wales had to say about his face being featured on almost every Wikipedia page for the duration of the campaign, the BBC’s recent profile of the Wikipedia founder will satisfy your curiosity.

via @zambonini

Outliers, Regression and the Sports Illustrated Myth

By appearing on the cover of Sports Illustrated, sportsmen and women become jinxed and shortly thereafter experience bouts of bad luck, or so goes the Sports Illustrated Cover Jinx myth.

‘Evidence’ of the myth comes in the form of many individuals and teams who have died or, more commonly, simply experienced bad luck in their chosen vocation shortly after appearing on the cover of the magazine.

The Wikipedia entry for the Sports Illustrated Cover Jinx has a thorough list of some “notable incidences” and also provides a concise, scientific explanation of the phenomenon:

The most common explanation for the perceived effect is that athletes are generally featured on the cover after an outlier performance; their future performance is likely to display regression toward the mean and be less impressive by comparison. This decline in performance would then be misperceived as being related to, or even possibly caused by, the appearance on the magazine cover.
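
That explanation is easy to demonstrate. Below is a minimal simulation sketch (my own, assuming each season’s performance is simply skill plus luck, which is an assumption rather than a claim from the Wikipedia entry): the athlete selected for an outlier season almost always declines the following season, jinx or no jinx.

```python
# Regression toward the mean: pick the best performer of season one (the
# "cover athlete") and count how often they do worse in season two.
import random

random.seed(0)
N_ATHLETES, N_TRIALS = 500, 1000
declines = 0

for _ in range(N_TRIALS):
    skill = [random.gauss(0, 1) for _ in range(N_ATHLETES)]
    season1 = [s + random.gauss(0, 1) for s in skill]        # skill + luck
    star = max(range(N_ATHLETES), key=lambda i: season1[i])  # cover athlete
    season2 = skill[star] + random.gauss(0, 1)               # fresh luck draw
    if season2 < season1[star]:
        declines += 1

print(f"Cover athlete declined in {100 * declines / N_TRIALS:.0f}% of simulations")
```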

Related: The Madden NFL Curse.

via Ben Goldacre’s Bad Science