This is part 1 in an ongoing series that uses data to show selective media coverage. A version of this post was originally published on Priceonomics. Sign up to be notified of future posts.
    

Making Better Decisions in a Fearful World

"I think where I would hide my kids from shooters every time I am in public. No matter where. Not just movies or public events. I was in the grocery store last weekend with my four year old. I found myself scouting places I could hide my little boy."
– Kevin Bloxom (of Louisiana) quoted after the San Bernardino attacks on the front page of the New York Times

With alarming regularity, there seems to be another terrorist incident, mass shooting, or police shooting in the news – and it’s hard to know how scared we should be. How accurate is the news in covering what’s going on in the world? Is using the media a way to make good decisions?

Even when fair-minded journalists report events or thoughtful social media engineers craft newsfeeds, we’ll see how media (Facebook, Twitter, newspapers, 24-hour news networks) can negatively impact personal and societal decision-making.

We’ll look at two effects throughout this series:

  1. Media covers or distributes content that is newsworthy (traditional media) or relevant (social network feeds), because it is focused on readership and engagement from its audience
  2. Media consumers use this coverage to form opinions and make life decisions (like voting or judging personal safety) without realizing how selective the coverage is; this distorts how we view the world and leads us to make flawed decisions

In short: media is data, and selective media is selective data. As a result, selective media can lead to bad inferences and bad decisions, even if it is all factual.

But first, let’s begin in 1986, when one of America’s great tragedies occurred.

Part 1: The Space Shuttle Challenger Explosion and the O-ring

On January 28, 1986, the Space Shuttle Challenger exploded 73 seconds after liftoff, killing seven crewmembers and traumatizing a nation (See a video of the launch).

Source: Space Shuttle Challenger in 1983, by US Department of Defense (image link) [Public domain], via Wikimedia Commons.

Millions of viewers (including many schoolchildren) watched the launch live, partly because Christa McAuliffe, a social studies teacher who was to become the first teacher in space, was on board.

The Final Crew of the Space Shuttle Challenger
Source: By NASA (NASA Human Space Flight Gallery (image link)) [Public domain], via Wikimedia Commons

The cause of the disaster was traced to an O-ring, a circular gasket that sealed the right rocket booster. This had failed due to the low temperature (31°F / -0.5°C) at launch time – a risk that several engineers noted, but that NASA management dismissed. NASA’s own pre-launch estimates were that there was a 1 in 100,000 chance of shuttle failure for any given launch – and poor statistical reasoning was a key reason the launch went through.

Before and After Shuttle Explosion
(first visible signs of danger on left, just after explosion on right)
Sources: See page for author [Public domain], via Wikimedia Commons (left), by NASA image link [Public domain], via Wikimedia Commons (right)

In 1989, Siddhartha Dalal (my father), Edward Fowlkes, and Bruce Hoadley wrote a paper (“Risk Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure”) to determine whether the failure could have been predicted before launch. Using standard statistical techniques on data from previous launches, they found overwhelming evidence that launching at 31°F carried a substantial risk of failure. They estimated a ~13% probability of O-ring failure at 31°F, compared to NASA's general shuttle failure estimate of 0.001% and a 1983 US Air Force study that put the probability of failure at 3-6%. Would you have let the shuttle launch knowing there was a 1 in 8 chance it would fail? What about a 1 in 100,000 chance?
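To make the approach concrete, here is a minimal sketch of the kind of analysis the paper describes: a logistic regression of O-ring distress on launch temperature, fit to the 23 shuttle flights before Challenger and then extrapolated to 31°F. The numbers below are an approximate reconstruction of the publicly available dataset, the statsmodels library is my choice rather than the paper's tooling, and the paper's ~13% figure comes from a more detailed per-joint model – so treat this as an illustration of the technique, not a reproduction of their result.

```python
# A minimal sketch (not the paper's exact model): fit a logistic regression of
# "any O-ring thermal distress" on launch temperature for the 23 shuttle
# flights before Challenger, then extrapolate to the 31°F launch temperature.
# Data are an approximate reconstruction of the publicly available dataset.
import numpy as np
import statsmodels.api as sm

temps = np.array([66, 70, 69, 68, 67, 72, 73, 70, 57, 63, 70, 78,
                  67, 53, 67, 75, 70, 81, 76, 79, 75, 76, 58], dtype=float)
distress = np.array([0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
                     0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1])  # 1 = at least one incident

X = sm.add_constant(temps)                  # intercept + temperature
fit = sm.Logit(distress, X).fit(disp=False) # maximum-likelihood logistic fit

# Predicted probability of at least one O-ring incident at two temperatures
p_31 = fit.predict([[1.0, 31.0]])[0]
p_70 = fit.predict([[1.0, 70.0]])[0]
print(f"P(any O-ring distress at 31°F) ≈ {p_31:.2f}")  # close to 1.0 with this fit
print(f"P(any O-ring distress at 70°F) ≈ {p_70:.2f}")  # roughly 0.2 with this fit
```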

Both the post-disaster presidential commission report and Risk Analysis of the Space Shuttle highlighted that, before launch, NASA management looked at data showing only the number of O-ring failure incidents versus temperature.

Look at the graph below, and see if you can spot any pattern between temperature and failure rate. If you were the decision maker for launch and only had this graph, would you have allowed the space shuttle to launch at 31°F?

Number of O-Ring incidents vs. Joint Temperature
(Incidents when O-Rings failed)
Source: Report of the Presidential Commission on the Space Shuttle Challenger Accident, 6 June 1986, Volume 1, Page 145, (link) Color added.

NASA management used the data behind this first graph (among many other pieces of information) to justify their view, the night before launch, that temperature had no effect on O-ring performance (despite the objections of the most knowledgeable engineers, who had run many other experiments). In this graph, it’s hard to find any consistent relationship between temperature and failure rate.

But NASA management made one catastrophic mistake: they looked only at the times the O-rings failed and excluded the times the O-rings performed successfully. Had the successes been included, the contrast between temperature ranges (many successful launches in one range, none in the other) would have quickly shown the danger.

Look at a graph of the full data set (this time including successes, rather than just the failures). Do you now see any pattern between temperature and failure rate? Would you still allow the space shuttle to launch at 31°F if this was the only information you had?

Number of O-Ring incidents vs. Joint Temperature
(failures AND successes)
Source: Report of the Presidential Commission on the Space Shuttle Challenger Accident, 6 June 1986, Volume 1, Page 145, (link) Color added

Successful launches (those with no failure incidents) were not included in the first data set, and including them would have led most people to conclude there was a clear temperature effect. Of the many launches at high temperatures (>65°F) in the second graph, only a small percentage (about 15%) had O-ring problems. Of the very few launches at low temperatures, 100% had O-ring problems (and these were only between 50°F and 65°F, not the even colder 31°F at launch). If NASA management had used the data behind this second graph, the launch would more likely have been postponed (though some think even this wouldn’t have been enough).
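To see the selection effect in miniature, here is a toy comparison using the same approximate dataset as above; the 65°F cutoff and the two-bucket grouping are my own simplification for illustration, not NASA's analysis. The failure-only view throws away the denominators, which is exactly what hides the temperature effect.

```python
# Toy illustration of the selection effect: the "failure-only" view has no
# denominators, so it cannot show that cold launches failed at a far higher
# *rate* than warm ones. Same approximate dataset as the sketch above.
import numpy as np

temps = np.array([66, 70, 69, 68, 67, 72, 73, 70, 57, 63, 70, 78,
                  67, 53, 67, 75, 70, 81, 76, 79, 75, 76, 58])
distress = np.array([0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
                     0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1])

cold = temps < 65   # rough cutoff chosen only for illustration
warm = ~cold

# Failure-only view: raw incident counts per range, no denominators
print("Incidents below 65°F:   ", distress[cold].sum())   # 4
print("Incidents at/above 65°F:", distress[warm].sum())   # 3 -> looks similar

# Full-data view: incident *rates* per range
print("Rate below 65°F:   ", distress[cold].mean())       # 4/4  = 1.00
print("Rate at/above 65°F:", distress[warm].mean())       # 3/19 ≈ 0.16
```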

This comparison highlights just how important having a full set of data is – and how it might literally mean the difference between life and death.

We might visualize the difference between the two graphs as follows, with the data leading to inferences that impact the decision about whether the space shuttle should have launched:

| Dataset | Inference from data | Potential decisions | Result |
| --- | --- | --- | --- |
| Selective data (what actually happened) | Low temperatures have little to no effect on O-ring failure rate | Allow space shuttle launch at low temperatures | Space Shuttle Challenger explosion, leading to 7 deaths |
| All data | Low temperatures have a substantial effect on O-ring failure rate, especially as all launches below 65°F had O-ring issues | Don’t allow space shuttle launch at low temperatures, due to the substantially elevated risk of failure | Unknown; likely a significantly lower risk of O-ring failure and subsequent explosion |

I drew a key lesson from the Challenger disaster: looking at selective data that excludes critical information leads to ineffective decision-making. As humans, when we seek to make sense of a large number of data points, we construct a limited data set: we observe or include only a subset of all data points, a process known as sampling. If that subset misrepresents the underlying data, the result is a selective, biased sample. In the Challenger’s case, it was selecting for failures alone that produced a flawed graph and, in turn, a flawed decision.

This selectivity is not confined to NASA; it applies to everything from assembly line defects to academic research. Selective data has been particularly troublesome in media, especially with the growth of social media. In this multi-part series, we’ll look at the challenges of selective media coverage and see how it distorts the decisions we make.

We’ll see the inferences we would make with selective media coverage and compare that to the inferences with the entire dataset, just like with the Space Shuttle Challenger.

Next time, we’ll look at media coverage of one topic that’s top of mind and widely covered: terrorism. (Read part 2)


