Tag Archives: risk

Micromorts and Understanding the Probability of Death

Understanding probabilities is hard (viz.), and it is especially hard when we try to understand and make rational decisions based on very small probabilities, such as one-in-a-million chance events. How, then, can we communicate risks at this level?

The answer is to use a more understandable scale, such as micromorts: “a unit of risk measuring a one-in-a-million probability of death”. Some activities that increase your risk of death by one micromort (according to, among other sources, the Wikipedia entry):

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • living 2 days in New York or Boston (air pollution)
  • living 2 months in Denver (cancer from cosmic radiation)
  • living 2 months with a smoker (cancer, heart disease)
  • living 150 years within 20 miles of a nuclear power plant (cancer from radiation)
  • drinking Miami water for 1 year (cancer from chloroform)
  • eating 100 charcoal-broiled steaks (cancer from benzopyrene)
  • eating 40 tablespoons of peanut butter (liver cancer from Aflatoxin B)
  • eating 1000 bananas (cancer from radioactive potassium-40; a dose of 1 kBED)
  • travelling 6 miles (10 km) by motorbike (accident)
  • travelling 16 miles (26 km) on foot (accident)
  • travelling 20 miles (32 km) by bike (accident)
  • travelling 230 miles (370 km) by car (accident)
  • travelling 6000 miles (9656 km) by train (accident)
  • flying 1000 miles (1609 km) by jet (accident)
  • flying 6000 miles (9656 km) by jet (cancer from cosmic radiation)
  • taking 1 ecstasy tablet
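
Because a micromort is simply a one-in-a-million probability of death, the figures above can be combined with straightforward arithmetic. The following Python snippet is a minimal sketch of my own (it does not come from any of the sources above); the activity names and rates are illustrative values taken loosely from the list:

    # A minimal sketch of micromort arithmetic (illustrative only).
    # One micromort = a one-in-a-million (1e-6) probability of death.

    MICROMORT = 1e-6  # probability of death represented by one micromort

    # Approximate micromorts per unit of activity, taken loosely from the list above
    RISK_PER_UNIT = {
        "miles_by_motorbike": 1 / 6,   # one micromort per 6 miles
        "miles_by_car": 1 / 230,       # one micromort per 230 miles
        "miles_by_jet": 1 / 1000,      # one micromort per 1000 miles (accident risk only)
        "cigarettes_smoked": 1 / 1.4,  # one micromort per 1.4 cigarettes
    }

    def micromorts(activity: str, amount: float) -> float:
        """Micromorts accumulated by doing `amount` units of `activity`."""
        return RISK_PER_UNIT[activity] * amount

    def probability_of_death(total_micromorts: float) -> float:
        """Convert an accumulated micromort total into a plain probability."""
        return total_micromorts * MICROMORT

    # Example: a 460-mile round trip by car
    trip = micromorts("miles_by_car", 460)
    print(f"460 miles by car = {trip:.1f} micromorts "
          f"(probability of death about {probability_of_death(trip):.0e})")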

Issue fifty-five of Plus magazine looked at micromorts in more detail, thanks to David Spiegelhalter (the Winton Professor of the Public Understanding of Risk at the University of Cambridge) and Mike Pearson, both of Understanding Uncertainty.

via Schneier on Security

Psychic Numbing and Communicating on Risk and Tragedies

I’ve been preoccupied lately with the developing aftermath of the Tōhoku earthquake. Unlike with other disasters of a similar or greater scale, I’m finding it easier to grasp the real human cost of the disaster in Japan: my brother lives in Kanagawa Prefecture, and so there are fewer levels of abstraction between me and those directly affected. You could say that this feeling is related to what Mother Teresa was referring to when she said, “If I look at the mass I will never act. If I look at the one, I will”.

If I had no direct connection with Japan, I assume the dry statistics of this sizeable tragedy would leave me mostly unaffected: this is what Robert Jay Lifton termed “psychic numbing”. As Brian Zikmund-Fisher, a risk communication expert at the University of Michigan, introduces the topic:

People are remarkably insensitive [to] variations in statistical magnitude. Single victims or small groups who are unique and identifiable evoke strong reactions. (Think, for example, the Chilean miners or “baby Jessica” who was trapped in the well in Texas in 1987.) Statistical victims, even if much more numerous, do not evoke proportionately greater concern. In fact, under some circumstances, they may evoke less concern than a single victim does. […]

To overcome psychic numbing and really attach meaning to the statistics we are hearing […] we have to be able to frame the situation in human terms.

Zikmund-Fisher links heavily to Paul Slovic’s essay on psychic numbing in the context of genocide and mass murder (pdf): an essential read for those interested in risk communication, looking at the psychology behind why we are so often inactive in the face of mass deaths (part of the answer: our tendency to rely on affect and experiential thinking rather than on analytical thinking).

Political Risk Assessments

“Safety is never allowed to trump all other concerns”, says Julian Baggini; without saying as much, governments must consistently put a price on lives and determine how much risk to expose the public to.

In an article for the BBC, Baggini takes a comprehensive look at how governments make risk assessments and in the process discusses a topic of constant intrigue for me: how much a human life is valued by different governments and their departments.

The ethics of risk is not as straightforward as the rhetoric of “paramount importance” suggests. People talk of the “precautionary principle” or “erring on the side of caution” but governments are always trading safety for convenience or other gains. […]

Governments have to choose on our behalf which risks we should be exposed to.

That poses a difficult ethical dilemma: should government decisions about risk reflect the often irrational foibles of the populace or the rational calculations of sober risk assessment? Should our politicians opt for informed paternalism or respect for irrational preferences? […]

In practice, governments do not make fully rational risk assessments. Their calculations are based partly on cost-benefit analyses, and partly on what the public will tolerate.
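
As a concrete illustration of the cost-benefit half of that calculation, here is a hypothetical sketch of my own (it is not from Baggini’s article); the “value of a statistical life” figure and the project numbers are invented for the example:

    # A hypothetical cost-benefit test of the kind governments run on safety measures.
    # The value-of-a-statistical-life (VSL) figure below is invented for illustration;
    # real figures vary widely between governments and departments.

    VSL = 2_000_000  # hypothetical monetary value placed on one statistical life

    def worth_doing(cost: float, lives_saved_per_year: float, years: int) -> bool:
        """Crude test: does the monetised benefit of expected lives saved exceed the cost?"""
        benefit = lives_saved_per_year * years * VSL
        return benefit >= cost

    # Example: a safety scheme costing 50m, expected to save 0.8 lives a year for 30 years
    print(worth_doing(cost=50_000_000, lives_saved_per_year=0.8, years=30))  # False: benefit 48m < cost 50m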

via Schneier on Security

Anchoring Our Beliefs

The psychological principle of anchoring is most commonly discussed in terms of our irrational decision making when purchasing items. However, Jonah Lehrer stresses that anchoring is more wide-ranging than this and is in fact “a fundamental flaw of human decision making”.

As such, Lehrer believes that anchoring also affects our beliefs, such that our first reaction to an event ‘anchors’ our subsequent thoughts and decisions, even in light of more accurate evidence.

Consider the ash cloud: After the cloud began drifting south, into the crowded airspace of Western Europe, officials did the prudent thing and canceled all flights. They wanted to avoid a repeat of the near crash of a Boeing 747 in 1989. […]

Given the limited amount of information, anchoring to this previous event (and trying to avoid a worst case scenario) was the only reasonable reaction. The problems began, however, when these initial beliefs about the risk of the ash cloud proved resistant to subsequent updates. […]

My point is absolutely not that the ash cloud wasn’t dangerous, or that the aviation agencies were wrong to cancel thousands of flights, at least initially. […] Instead, I think we simply need to be more aware that our initial beliefs about a crisis – those opinions that are most shrouded in ignorance and uncertainty – will exert an irrational influence on our subsequent actions, even after we have more (and more reliable) information. The end result is a kind of epistemic stubbornness, in which we’re irrationally anchored to an outmoded assumption.

The same thing happened with the BP oil spill.

Why We Should Trust Driving Computers

In light of recent suggestions of technical faults and the ensuing recall of a number of models from Toyota’s line, Robert Wright looks at why we should not worry about driving modern cars.

The reasons: the increased risks are negligible; the systems that fail undoubtedly save more lives than they cost; and this is simply the nature of car ‘testing’.

Our cars are, increasingly, software-driven — that is, they’re doing more and more of the driving.

And software, as the people at Microsoft or Apple can tell you, is full of surprises. It’s pretty much impossible to anticipate all the bugs in a complex computer program. Hence the reliance on beta testing. […]

Now, “beta testing” sounds creepy when the process by which testers uncover bugs can involve death. But there are two reasons not to start bemoaning the brave new world we’re entering.

First, even back before cars were software-driven, beta testing was common. Any car is a system too complex for designers to fully anticipate the upshot for life and limb. Hence decades of non-microchip-related safety recalls.

Second, the fact that a feature of a car can be fatal isn’t necessarily a persuasive objection to it. […]

Similarly, those software features that are sure to have unanticipated bugs, including fatal ones, have upsides. Electronic stability control keeps cars from flipping over, and electronic throttle control improves mileage.