The Monty Hall Problem: Improve Your Predictions with Bayes’ Rule

We predict outcomes and infer conclusions based on information that comes to us many times each day.  Doctors call this diagnosis; financial planners call this investment advice; attorneys call this counsel; executives call this strategic planning.  Good decision-making requires that we use as accurate a baseline as the available data will allow.  If we operate in dynamic and shifting environments, however, probabilities and outcomes are in flux and will change as new events occur and new data becomes available.  Is there a responsible way for us to update our predictions with the availability of new data?

I should start with a very necessary caveat: I am not a mathematician.  I have never taken a statistics or probability course.  I often do work in the legal field, though, where making a probabilistic (even if imperfect) assessment of many potential outcomes in my cases is necessary to my advising clients.  Recently, I have begun to investigate whether there are lessons contained in the concrete world of mathematics and statistics that I might apply even to non-scientific questions.

Bayes’ Rule is one such lesson.  Bayes’ Rule is named after Rev. Thomas Bayes (1701-1761) and demonstrates how to update beliefs or predictions based on the occurrence of a new event or the availability of new data.  Here is the theorem in its mathematical form:

p(A|B) = p(B|A) × p(A) / p(B)

Defining terms, p(A) is the probability of event A occurring, and p(B) is the probability of event B occurring.  p(B|A) is the probability of event B occurring assuming that event A has occurred, while p(A|B) represents the probability that event A will occur assuming that event B has occurred.  What does this mean?  Bayes’ idea was to represent mathematical changes in the probability of an event based on conditions related to that event.  In other words, if a doctor is interested in determining whether a patient has cancer, and cancer is known to have some relationship to the age of a patient, Bayes’ rule sets out to predict the change in the probability that the patient has cancer if she is of a certain age.
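As a quick sketch of how the formula works in practice, here is a small Python helper applied to that cancer example. The numbers below are made up purely for illustration; they are not real medical statistics.

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' Rule: p(A|B) = p(B|A) * p(A) / p(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers for illustration only:
# 1% of patients have cancer (p(A) = 0.01),
# 20% of all patients are over 65 (p(B) = 0.20), and
# 50% of cancer patients are over 65 (p(B|A) = 0.50).
p_cancer_given_over_65 = bayes(0.50, 0.01, 0.20)
print(round(p_cancer_given_over_65, 3))  # 0.025
```

Under these made-up numbers, learning that the patient is over 65 raises the estimate from 1% to 2.5%: the new data point shifts, but does not replace, the baseline probability.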

Let’s look at another classic example, known as the Monty Hall Problem.  On the classic game show Let’s Make a Deal, host Monty Hall often gave contestants a chance to win a car by picking one of three doors.  One door hid a car; the other two hid nothing.  At this point, the contestant had a one-in-three chance of winning.  Once the contestant picked a door, Hall opened one of the other two doors, but never a door hiding the car.  Hall then gave the contestant a final choice: stick with his original door or switch to the remaining door.  What is the right move for the contestant?  Bayes’ Rule provides an answer that is not immediately obvious: the contestant actually doubles his chances of winning by switching doors, from about 33% to 67%.

How can this be?  The answer lies in updating our prediction based on new information.  Assume the contestant first picks Door 1.  He has a 33% chance of having picked correctly, and a corresponding 67% chance that the car is behind Door 2 or Door 3.  Hall will then open Door 2 or Door 3, but will not open a door with a car behind it.  If Hall opens Door 2, that 67% chance now lies entirely with Door 3.  The probability that the car is behind Door 1 is still 33%, but the probability that it lies behind Door 3 has doubled.  Switching is the smart move.
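If the arithmetic feels suspicious, the game is easy to simulate. The sketch below plays many rounds under the rules described above (Hall always opens a losing door the contestant did not pick) and compares the two strategies:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Hall opens a door that is neither the pick nor the car
        opened = random.choice([d for d in range(3)
                                if d != pick and d != car])
        if switch:
            # switch to the one remaining unopened door
            pick = next(d for d in range(3)
                        if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.2f}")   # ~0.33
print(f"switch: {play(switch=True):.2f}")    # ~0.67
```

Run it and the win rates settle near one in three for staying and two in three for switching, matching the Bayesian argument.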

Notice how the increase in probability from the new information of Hall opening Door 2 comes only after the contestant has already picked Door 1.  Suppose instead that the contestant was kept offstage, unaware of what Hall was doing.  In this alternate scenario, before the contestant is invited to join the game, Hall opens Door 2 and shows the audience that no car lies behind it.  Hall then brings out the contestant and tells him he can win a car if he correctly guesses whether the car lies behind Door 1 or Door 3.  What are his odds?  In this scenario, he has a 50% chance of winning, and his choice makes no difference.  His probability does not change because he has only one data point (the car is behind one of two doors) instead of the two data points in the original scenario (the car is behind one of three doors, and Hall then opened one that did not hide it).  It is in the cumulative collection of data over time that the wise contestant can observe a change in probability and leverage it to his advantage in the game.

This theorem has application beyond game shows, of course.  Consider a doctor who is called to see a sick child in a rural area without sophisticated diagnostic equipment. The doctor knows based on word of mouth that 90% of sick children in that neighborhood have the flu, while the other 10% are sick with measles.  Based on this, one would assume that there is a 90% chance that the child has the flu and a 10% chance that she has measles.

Assume that the child also has a rash, which the doctor knows shows up in 95% of measles patients but only 8% of flu patients.  Does this change the doctor’s assessment?  Is the chance that the girl has measles now 95%?

No.  The probabilities of these events influence one another and must be considered together.

Let F stand for the event that a child is sick with flu, M for the event that a child is sick with measles, and R for the event that a child has a rash. Assume for simplicity that there are no other maladies in that neighborhood.

Using the formula above, the probability of the girl having measles given her rash is p(M|R) = p(R|M)p(M) / (p(R|M)p(M) + p(R|F)p(F)) = (.95 × .10) / (.95 × .10 + .08 × .90) ≈ 0.57.  So the girl has a 57% chance of having measles, a far cry from the 95% likelihood that her rash might first suggest, but substantially more than the 10% chance predicted by the doctor’s initial data.
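The same arithmetic is a few lines of Python, using the numbers from the scenario above:

```python
# Prior probabilities from the neighborhood data
p_m, p_f = 0.10, 0.90                 # measles, flu
# Likelihood of a rash under each diagnosis
p_r_given_m, p_r_given_f = 0.95, 0.08

# Bayes' Rule, with the denominator p(R) expanded over both diagnoses
p_m_given_r = (p_r_given_m * p_m) / (p_r_given_m * p_m +
                                     p_r_given_f * p_f)
print(round(p_m_given_r, 2))  # 0.57
```

Note the denominator: because flu and measles are the only two possibilities here, p(R) is the weighted sum of the rash probability under each diagnosis.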

Stripped of the equations and technical analysis, the point here is a simple and elegant one: adjust your predictions based on new information.  Do not fall into the trap of an anchoring bias, the all-too-human tendency to focus on one piece of information in making a decision.

Broadly viewed, Bayes’ rule will affect how you view and relate to new information in your life, and can change your decision-making process.  Our beliefs grow out of our experiences and the information we process day-to-day.  We should continue to test, update, and, if necessary, adjust our personal views.  Julia Galef makes a great case for this type of growth in this Big Think video that I’ll leave you with.
