False Positives and the Base Rate Fallacy

This weekend was The Amaz!ng Meeting! If you haven’t been, it is an “amazingly” fun gathering of critical thinkers with tons of excellent talks on skeptical, scientific, and thought-provoking topics. I wasn’t able to attend this year, but lived somewhat vicariously through the Facebook posts of a friend and fellow skeptic. The post below inspired this blog entry.

[Embedded Facebook post about Harriet Hall's TAM talk on the risks of over-screening]

Although I have not seen Harriet Hall speak on this topic, I presume she intelligently articulated her reasons for this stance as she has in her writing (e.g. here and here).  I thought I’d take a moment to explain the math behind screening tests in support of the argument that it is risky to over screen for disease.

In classical measurement theory, sensitivity and specificity are statistical measures of the performance of any binary (i.e. two-outcome) test. Let's consider medical tests in terms of a positive result (diagnosis of the disease/condition) and a negative result (free of the disease/condition). Sensitivity is the "true positive rate": the proportion of people who have the condition who get a correct positive result. Specificity is the "true negative rate": the proportion of people who do not have the condition who get a correct negative result. Consider a test for pregnancy. There are four possible results, as shown in the table below: a. a pregnant woman gets the correct positive result, b. a non-pregnant woman gets an incorrect positive result, c. a pregnant woman gets an incorrect negative result, and d. a non-pregnant woman gets a correct negative result. The sensitivity of a pregnancy test is the percentage of the time a woman who IS actually pregnant will get the correct positive result. The specificity of a pregnancy test is the percentage of the time a woman who IS NOT actually pregnant will get the correct negative result.

                         Actually pregnant        Not pregnant
Positive test result     A. true positive         B. false positive
Negative test result     C. false negative        D. true negative

We calculate sensitivity as the proportion of people with the condition who correctly get positive test results. If each cell represents the number of people in the sample with the corresponding test result and actual condition status, then Sensitivity = A/(A+C). Notice that sensitivity only incorporates data from people who actually have the condition.

We calculate specificity as the proportion of people without the condition who correctly get negative test results. When each cell represents the count of people from the sample with a particular test result and condition status, Specificity = D/(B+D). Notice that specificity only incorporates data from people who don't have the condition.
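To make these two formulas concrete, here is a minimal sketch in Python; the counts for the four cells are invented purely for illustration:

# Hypothetical counts for the four cells of the table above
A = 95    # true positives: have the condition, test positive
B = 10    # false positives: don't have the condition, test positive
C = 5     # false negatives: have the condition, test negative
D = 990   # true negatives: don't have the condition, test negative

sensitivity = A / (A + C)   # only uses people who have the condition
specificity = D / (B + D)   # only uses people who don't have the condition

print(f"Sensitivity: {sensitivity:.1%}")   # Sensitivity: 95.0%
print(f"Specificity: {specificity:.1%}")   # Specificity: 99.0%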

Ideally, we want all medical tests to have 100% sensitivity and specificity, but the reality is that most medical tests do not have that level of accuracy. Tests must therefore be developed by balancing sensitivity and specificity, as an increase in one often means a decrease in the other. For example, consider an HIV test. Sensitivity is particularly important because you don't want to miss diagnosing someone with HIV; you want to prevent further spread of the disease and start the patient's treatment promptly. On the other hand, if the test gives too many false positives, it causes a great deal of unnecessary emotional distress. That brings us to the math! If a test has 99% sensitivity, then for every 100 people with the disease, 99 would be correctly diagnosed and 1 would get a false negative and a missed diagnosis. If the same test has 99% specificity, then for every 100 HIV-negative people, 99 would correctly get a negative result and one person would get a false positive. Both balancing and maximizing sensitivity and specificity are considerations in the design of diagnostic tests.

Base Rate Fallacy

The so-called "base rate fallacy" comes into play in medical diagnoses. Broadly, the base rate fallacy occurs when a person judges the overall likelihood of an event based on easily accessible knowledge (here: the values of sensitivity and specificity) without taking into consideration the prevalence, or base rate, of the event. Here, we commit the base rate fallacy when we take the results of the test and fail to incorporate the actual rate of the disease/condition: Rate of Disease = (A + C)/(A + B + C + D)
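As a minimal sketch, and reusing the same invented counts from the earlier example, the base rate is simply:

# Same hypothetical counts as in the earlier sketch
A, B, C, D = 95, 10, 5, 990

# Everyone who actually has the condition, divided by everyone in the sample
rate_of_disease = (A + C) / (A + B + C + D)
print(f"Rate of disease: {rate_of_disease:.1%}")   # Rate of disease: 9.1%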

Let's consider a fictitious, deadly disease, Disease X. Disease X is very rare and afflicts only .01% of the population. Your doctor has a cheap and very accurate test for it: 99.5% sensitivity and 99.5% specificity. Because the test is inexpensive, your doctor has decided to administer it to all of his or her patients (should this scenario be true, it would be time to find another doctor). You think, "This is great! I'd love to know for sure that I don't have Disease X." The doctor runs the test and it comes back positive. You panic, hire a lawyer to write your will, and start saying your goodbyes because death is surely imminent. After all, the test is 99.5% accurate; doesn't that mean you most likely have the disease (a 99.5% chance!)? But is death imminent? What is the probability that you actually have the disease? After the initial panic subsides, you take a step back, think about the actual probability that you have the disease, and discover you have committed the base rate fallacy.

Let's walk through this. We need to calculate the probability that you have the disease given the positive test result. Conventional notation expresses the probability of disease given a positive test result as P(Disease|Positive). Recall that sensitivity is the probability that a person gets a positive test result given that they have the disease; this is expressed as P(Positive|Disease). The base rate fallacy is the assumption that P(Disease|Positive) = P(Positive|Disease). However, the actual probability of the disease given a positive result can be calculated, taking the disease's low base rate into account (assuming no other risk factors or variables are known). NOTE: This is about to get very "mathy". If math isn't your cup of tea, just scroll down for the results. The results illustrate the point, and it is not crucial that you understand the math.

Using Bayes’ Theorem, the probability of the disease given a positive diagnosis is expressed below.

P(Disease|Positive) = P(Positive|Disease) × P(Disease) / P(Positive)

The probability of a positive result, P(Positive), can be expanded using the rules of probability, so the equation above is equivalent to the following:

P(Disease|Positive) = P(Positive|Disease) × P(Disease) / [P(Positive|Disease) × P(Disease) + P(Positive|NoDisease) × P(NoDisease)]

Although the equation above may seem a bit intimidating, we know the values to plug in:

P(Positive|Disease) = Sensitivity = 99.5% = .995

P(Disease) = Base Rate = .01% = .0001

P(Positive|NoDisease) = 1 – Specificity = .005

P(NoDisease) = 1 – P(Disease) = .9999

Plugging these values into the equation, we get:

P(Disease|Positive) = (.995 × .0001) / (.995 × .0001 + .005 × .9999) = .0000995 / .005099 ≈ .0195 ≈ 1.95%
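If you would rather let a computer do the arithmetic, here is a minimal sketch of the same Bayes' theorem calculation in Python (the function name is mine, purely for illustration). It also computes the 99% sensitivity/specificity variant mentioned in the results below:

def p_disease_given_positive(sensitivity, specificity, base_rate):
    """Bayes' theorem: P(Disease|Positive)."""
    p_pos_given_disease = sensitivity          # P(Positive|Disease)
    p_pos_given_no_disease = 1 - specificity   # P(Positive|NoDisease)
    p_positive = (p_pos_given_disease * base_rate
                  + p_pos_given_no_disease * (1 - base_rate))
    return p_pos_given_disease * base_rate / p_positive

# Disease X: 99.5% sensitivity and specificity, .01% base rate
print(f"{p_disease_given_positive(0.995, 0.995, 0.0001):.2%}")   # about 1.95%

# A slightly less accurate test: 99% sensitivity and specificity
print(f"{p_disease_given_positive(0.99, 0.99, 0.0001):.2%}")     # about 0.98%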

MATH IS OVER – HERE ARE THE RESULTS. 

Based on the math, given a test with 99.5% sensitivity and specificity and a .01% base rate, a person with a positive test result has less than a 2% chance of having the disease!!! And with just a slight decrease in accuracy, the probability drops further (with 99% sensitivity and specificity, there is less than a 1% chance of actually having the disease, given a base rate of .01%). Although no one wants to hear that they have a 2% chance of having a deadly condition, this is certainly better than the original panic over the 99.5% accurate test! Given that tests for low base rate diseases can produce high rates of false positives, it is prudent for doctors not to over test and cause unnecessary distress.

Although the example I've given for the base rate fallacy is in terms of medical diagnostic testing, the base rate fallacy is broadly applicable to many types of scenarios. For example, consider the very low base rate of terrorists in the US: any broad screening procedure will produce many false positives. Psychologists have also studied this phenomenon and found that humans tend to fall prey to the base rate fallacy in their decisions and judgments (e.g. Kahneman & Tversky, 1985). Anytime you make a judgment about the probability of an event based on the information at hand while ignoring the base rate of the event, you are committing the base rate fallacy. In conclusion, I hope that you will 1) be prudent about interpreting the results of any medical tests and ask questions regarding sensitivity, specificity, and rate of disease, and 2) be on a keen lookout for ways the base rate fallacy may influence your own decision making or be used to manipulate you in advertising and the media.

2 Responses to “False Positives and the Base Rate Fallacy”

  1. DaynaJD says :

    Harriet Hall illustrated the same points in her talk. She also spoke of the harm of misdiagnosing a disease where the treatment is harsh on the body. One example she used was prostate cancer, where treatment side effects include impotence and loose bowels. The main point of her talk was to inform people that if they are not in a high risk group (i.e. there is no history of cancer/disease), they should always talk with their doctor about the benefits and risks of screening. Great post!

    • Amanda B. says :

      Absolutely! A few years ago, there were changes in breast cancer screening guidelines that outraged many women's health activists. For women with no increased risk factors, they raised the age for the first mammogram screening from 40 to 50 and recommended that mammograms be done every two years instead of every year. Part of the reason for this change has to do with the reasons you just mentioned: people were unnecessarily treating "abnormal" test results, causing physical and emotional side effects that could have been avoided.
