Specificity And Sensitivity


Medical journals often report the Specificity and Sensitivity of tests for things like HIV or the Zika virus. These measures describe the rates of Type I and Type II errors.

  • Specificity, or sPecificity, concerns the rate of false Positives.
  • Sensitivity, or seNsitivity, concerns the rate of false Negatives.

This is where it gets confusing: False Positive results are related to True Negative results, and False Negative results are related to True Positive results. Huh?

  • A false positive result means the truth is negative. If the truth is negative, the result must be either a false positive or a true negative; those events are disjoint, and the result couldn't possibly be categorized any other way.
  • If the truth is negative, the test could still go either way, so PROBABILITY(False Positive) + PROBABILITY(True Negative) = 1
  • Knowing one probability, you can find the other.
  • The Specificity is the Probability of a True Negative, assuming the truth is Negative.
  • Knowing the sPecificity, and applying the formula, you can figure out the probability of a False Positive.

Similarly...

  • If the truth is positive, the test could still go either way, so PROBABILITY(False Negative) + PROBABILITY(True Positive) = 1.
  • The Sensitivity is the Probability of a True Positive, assuming the truth is Positive.
  • Knowing the seNsitivity, and applying the last formula, you can figure out the probability of a False Negative.
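Both complement rules are easy to check in a few lines. Here is a minimal Python sketch; the variable names are mine, and the 0.90 values are chosen to match the example in the next section:

  # A minimal sketch of the two complement rules above; names and numbers are mine.
  specificity = 0.90  # PROBABILITY(True Negative), given the truth is negative
  sensitivity = 0.90  # PROBABILITY(True Positive), given the truth is positive

  p_false_positive = 1 - specificity  # PROBABILITY(False Positive), truth negative
  p_false_negative = 1 - sensitivity  # PROBABILITY(False Negative), truth positive

  print(p_false_positive, p_false_negative)  # 0.1 0.1, up to floating-point rounding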

Something to be aware of with imperfect tests

You get a test for some terrible rare disease. How terrible? Let your imagination run with this. How rare? Only 1 in a million people have the disease. Let's say the test has a seemingly decent specificity and sensitivity, 0.90 for each.

Uh oh, you test positive for the disease! Should you be worried?

The answer is no, you should not be worried, not if the disease is that rare.

If the disease were common, then yes, you should be worried. (Not that worrying is going to help.)

Let's say you run the test on all 10 million inhabitants of a large city. Only 10 or so will have the disease; on average, 9 of them will get a true positive whereas 1 will get a false negative. There is, of course, some statistical uncertainty here, so in practice the numbers may come out slightly differently each time.

Of the approximately 10 million people who don't have the disease, about 9,000,000 get true negative results, whereas about 1,000,000 get false positives.

So if you are one of the crowd who has a positive test result, your odds are only about 9 in a million of having the disease (9 true positives among roughly 1,000,009 positive results). Pretty good odds!
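The head count above is Bayes' rule in disguise. Here is a hedged Python sketch of the same computation; the function and its argument names are my own:

  # Bayes' rule version of the head count above; the function name is mine.
  def p_disease_given_positive(sensitivity, specificity, prevalence):
      """Probability of truly having the disease, given a positive test."""
      true_positive = sensitivity * prevalence               # diseased and tests positive
      false_positive = (1 - specificity) * (1 - prevalence)  # healthy but tests positive
      return true_positive / (true_positive + false_positive)

  # The rare disease: prevalence 1 in a million, 0.90 sensitivity and specificity.
  print(p_disease_given_positive(0.90, 0.90, 1e-6))  # about 9e-06, i.e. 9 in a million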

Moral: don't test for rare conditions unless your test is perfect, or unless there are other indications that the disease is present.

What if the condition were common? The news would be much worse!

If 1 in 10 people had the disease, how would that change things?

There would be 1,000,000 people with the disease, with 900,000 true positives and 100,000 false negatives.

There would be 9,000,000 disease-free with 8,100,000 true negatives and 900,000 false positives.

You have a 50-50 chance of having the disease! Much worse odds. But still a worthless test. You want the specificity to be much higher.
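Running the same Bayes computation with the higher prevalence confirms the 50-50 figure:

  # Same Bayes computation as the sketch above, now with prevalence 1 in 10.
  sensitivity, specificity, prevalence = 0.90, 0.90, 0.10
  true_positive = sensitivity * prevalence                 # 0.09
  false_positive = (1 - specificity) * (1 - prevalence)    # 0.09
  print(true_positive / (true_positive + false_positive))  # about 0.5: a coin flip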