Tag Archives: risk

Micromorts and Understanding the Probability of Death

Understanding probabilities is hard (viz.) – and it’s especially hard when we try to understand, and take rational decisions based on, very small probabilities such as one-in-a-million chance events. How, then, can we communicate risks at that scale?

The answer is to use a more understandable scale, such as micromorts: “a unit of risk measuring a one-in-a-million probability of death”. Some activities that increase your risk of death by one micromort (according to, among other sources, the Wikipedia entry):

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • living 2 days in New York or Boston (air pollution)
  • living 2 months in Denver (cancer from cosmic radiation)
  • living 2 months with a smoker (cancer, heart disease)
  • living 150 years within 20 miles of a nuclear power plant (cancer from radiation)
  • drinking Miami water for 1 year (cancer from chloroform)
  • eating 100 charcoal-broiled steaks (cancer from benzopyrene)
  • eating 40 tablespoons of peanut butter (liver cancer from Aflatoxin B)
  • eating 1000 bananas (cancer from radioactive Potassium-40; roughly 1 kBED)
  • travelling 6 miles (10 km) by motorbike (accident)
  • travelling 16 miles (26 km) on foot (accident)
  • travelling 20 miles (32 km) by bike (accident)
  • travelling 230 miles (370 km) by car (accident)
  • travelling 6000 miles (9656 km) by train (accident)
  • flying 1000 miles (1609 km) by jet (accident)
  • flying 6000 miles (9656 km) by jet (cancer from cosmic radiation)
  • taking 1 ecstasy tablet
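To make the unit concrete, here is a minimal sketch (my own illustration, not from the article) of how the per-activity figures above can be turned into a simple risk tally; the activity names and the example day are hypothetical:

```python
# Rough micromort tally: a minimal sketch using the per-activity rates listed
# above. Everything here (activity names, the example day) is illustrative.

# Micromorts added per unit of each activity
# (1 micromort = a one-in-a-million probability of death).
MICROMORTS_PER_UNIT = {
    "miles_by_motorbike": 1 / 6,    # 1 micromort per 6 miles by motorbike
    "miles_by_car": 1 / 230,        # 1 micromort per 230 miles by car
    "miles_on_foot": 1 / 16,        # 1 micromort per 16 miles walked
    "cigarettes": 1 / 1.4,          # 1 micromort per 1.4 cigarettes smoked
}

def total_micromorts(activities):
    """Sum the micromorts accumulated across a day's activities."""
    return sum(quantity * MICROMORTS_PER_UNIT[name]
               for name, quantity in activities.items())

# Hypothetical day: a 40-mile round trip by car plus a 2-mile walk.
day = {"miles_by_car": 40, "miles_on_foot": 2}
risk = total_micromorts(day)
print(f"{risk:.2f} micromorts, i.e. roughly a {risk / 1_000_000:.7f} chance of death")
```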

Issue fifty-five of Plus magazine looked at micromorts in more detail, thanks to David Spiegelhalter (the Winton Professor of the Public Understanding of Risk at the University of Cambridge) and Mike Pearson, both of Understanding Uncertainty.

via Schneier on Security

Psychic Numbing and Communicating on Risk and Tragedies

I’ve been preoccupied lately with the developing aftermath of the Tōhoku earthquake. Unlike other disasters on a similar or greater scale, I’m finding it easier to grasp the real human cost of the disaster in Japan, as my brother lives in Kanagawa Prefecture and there are therefore fewer levels of abstraction between me and those directly affected. You could say that this feeling is related to what Mother Teresa was referring to when she said “If I look at the mass I will never act. If I look at the one, I will”.

If I had no direct connection with Japan, I assume the dry statistics of the sizeable tragedy would leave me mostly unaffected – this is what Robert Jay Lifton termed “psychic numbing”. As Brian Zikmund-Fisher, a risk communication expert at the University of Michigan, introduces the topic:

People are remarkably insensitive [to] variations in statistical magnitude. Single victims or small groups who are unique and identifiable evoke strong reactions. (Think, for example, the Chilean miners or “baby Jessica” who was trapped in the well in Texas in 1987.) Statistical victims, even if much more numerous, do not evoke proportionately greater concern. In fact, under some circumstances, they may evoke less concern than a single victim does. […]

To overcome psychic numbing and really attach meaning to the statistics we are hearing […] we have to be able to frame the situation in human terms.

Zikmund-Fisher links heavily to Paul Slovic’s essay on psychic numbing in terms of genocide and mass murder (pdf): an essential read for anyone interested in risk communication, which looks at the psychology behind why we are so often inactive in the face of mass deaths (part of the answer: our reliance on affect and experiential thinking over analytical thinking).

Political Risk Assessments

“Safety is never allowed to trump all other concerns”, says Julian Baggini, and, without saying as much, governments must consistently put a price on lives and determine how much risk to expose the public to.

In an article for the BBC, Baggini takes a comprehensive look at how governments make risk assessments and in the process discusses a topic of constant intrigue for me: how much a human life is valued by different governments and their departments.

The ethics of risk is not as straightforward as the rhetoric of “paramount importance” suggests. People talk of the “precautionary principle” or “erring on the side of caution” but governments are always trading safety for convenience or other gains. […]

Governments have to choose on our behalf which risks we should be exposed to.

That poses a difficult ethical dilemma: should government decisions about risk reflect the often irrational foibles of the populace or the rational calculations of sober risk assessment? Should our politicians opt for informed paternalism or respect for irrational preferences? […]

In practice, governments do not make fully rational risk assessments. Their calculations are based partly on cost-benefit analyses, and partly on what the public will tolerate.
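To illustrate the bare arithmetic behind such a cost-benefit calculation, here is a minimal sketch; the value-of-a-statistical-life figure and the road-upgrade numbers are hypothetical illustrations of mine, not figures from Baggini’s article or any government:

```python
# Minimal cost-benefit sketch: is a safety measure worth funding?
# All figures below are hypothetical illustrations, not real government values.

VALUE_OF_STATISTICAL_LIFE = 2_000_000  # hypothetical valuation of one life, in pounds

def worth_funding(cost, people_protected, risk_reduction_per_person):
    """Compare a measure's cost with the value of the lives it is expected to save."""
    expected_lives_saved = people_protected * risk_reduction_per_person
    return cost <= expected_lives_saved * VALUE_OF_STATISTICAL_LIFE

# Hypothetical example: a £50m road upgrade that cuts each of one million road
# users' annual risk of death by 30-in-a-million (i.e. 30 micromorts each).
print(worth_funding(cost=50_000_000,
                    people_protected=1_000_000,
                    risk_reduction_per_person=30 / 1_000_000))  # True (benefit ≈ £60m)
```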

via Schneier on Security

Anchoring Our Beliefs

The psychological principle of anchoring is most commonly discussed in terms of our irrational decision making when purchasing items. However, Jonah Lehrer stresses that anchoring is more wide-ranging than this and is in fact “a fundamental flaw of human decision making”.

As such, Lehrer believes that anchoring also affects our beliefs, such that our first reaction to an event ‘anchors’ our subsequent thoughts and decisions, even in light of more accurate evidence.

Consider the ash cloud: After the cloud began drifting south, into the crowded airspace of Western Europe, officials did the prudent thing and canceled all flights. They wanted to avoid a repeat of the near crash of a Boeing 747 in 1989. […]

Given the limited amount of information, anchoring to this previous event (and trying to avoid a worst case scenario) was the only reasonable reaction. The problems began, however, when these initial beliefs about the risk of the ash cloud proved resistant to subsequent updates. […]

My point is absolutely not that the ash cloud wasn’t dangerous, or that the aviation agencies were wrong to cancel thousands of flights, at least initially. […] Instead, I think we simply need to be more aware that our initial beliefs about a crisis – those opinions that are most shrouded in ignorance and uncertainty – will exert an irrational influence on our subsequent actions, even after we have more (and more reliable) information. The end result is a kind of epistemic stubbornness, in which we’re irrationally anchored to an outmoded assumption.

The same thing happened with the BP oil spill.

Why We Should Trust Driving Computers

In light of recent suggestions of technical faults and the ensuing recall of a number of models from Toyota’s line, Robert Wright looks at why we should not worry about driving modern cars.

The reasons: the increased risks are negligible, the systems that fail undoubtedly save more lives than they cost, and this is simply the nature of car ‘testing’.

Our cars are, increasingly, software-driven — that is, they’re doing more and more of the driving.

And software, as the people at Microsoft or Apple can tell you, is full of surprises. It’s pretty much impossible to anticipate all the bugs in a complex computer program. Hence the reliance on beta testing. […]

Now, “beta testing” sounds creepy when the process by which testers uncover bugs can involve death. But there are two reasons not to start bemoaning the brave new world we’re entering.

First, even back before cars were software-driven, beta testing was common. Any car is a system too complex for designers to fully anticipate the upshot for life and limb. Hence decades of non-microchip-related safety recalls.

Second, the fact that a feature of a car can be fatal isn’t necessarily a persuasive objection to it. […]

Similarly, those software features that are sure to have unanticipated bugs, including fatal ones, have upsides. Electronic stability control keeps cars from flipping over, and electronic throttle control improves mileage.