It's 2021! Time for a crash course in four terms that I often see mixed up when people talk about testing: sensitivity, specificity, positive predictive value, negative predictive value.

These terms help us talk about how accurate a test is, but from different viewpoints. 1/

Viewpoint 1 is about the status of the person taking the test. Are they infected, or not infected? How good is the test at identifying these people? That's sensitivity/specificity. 2/
A test that is very *sensitive* is very good at correctly identifying people who are infected.

A test that is very *specific* is very good at correctly ruling out infection in people who are not infected. 3/
Viewpoint 2 is about the result of the test itself. It says positive or negative (or detected or not detected). How much can those results be trusted? Did the positive or negative actually "predict" the situation correctly? 4/
A test that has a high *positive predictive value* means you can really trust a positive. Most of the positives that come out do really mean that person is infected. 5/
A test that has a high *negative predictive value* means you can really trust a negative. Most of the negatives that come out do really mean that person is not infected. 6/
Let's think about a population of 100 people. 10 are infected with SARS-CoV-2 (the coronavirus that causes Covid-19). All take a test.

Of the 10 people infected, 8 test + (true +), 2 test - (false -).
Of the 90 people uninfected, 89 test - (true -), 1 tests + (false +). 7/
The sensitivity of the test is true positives/(true positives + false negatives): 8/10. The denominator, again, comes from the status of the person: out of all the infected people, how many did the test catch? 8 out of 10, so the test has a sensitivity of 80%. 8/
The specificity of the test is true negatives/(true negatives + false positives): 89/90. Out of all the uninfected people, the test correctly identified 98.9% of them. The test has a specificity of 98.9%. 9/
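(If code is your thing, here's a tiny Python sketch of that arithmetic, using the counts from the example above. The variable names are mine, not anything official.)

```python
# Counts from the hypothetical 100-person example above
true_pos = 8    # infected and tested positive
false_neg = 2   # infected but tested negative
true_neg = 89   # uninfected and tested negative
false_pos = 1   # uninfected but tested positive

sensitivity = true_pos / (true_pos + false_neg)  # 8/10
specificity = true_neg / (true_neg + false_pos)  # 89/90

print(f"sensitivity: {sensitivity:.1%}")  # 80.0%
print(f"specificity: {specificity:.1%}")  # 98.9%
```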
The denominator changes for the predictive values.

The test's positive predictive value is true positives/(true positives + false positives): 8/9, or 88.9%. Out of all the positives, it's the proportion that were accurate. 10/
The test's negative predictive value is true negatives/(true negatives + false negatives): 89/91, or 97.8%. Out of all the negatives, it's the proportion that were accurate. 11/
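(And the same sketch for the predictive values; notice that only the denominators change.)

```python
# Same hypothetical counts as before
true_pos, false_pos = 8, 1
true_neg, false_neg = 89, 2

ppv = true_pos / (true_pos + false_pos)  # 8/9
npv = true_neg / (true_neg + false_neg)  # 89/91

print(f"PPV: {ppv:.1%}")  # 88.9%
print(f"NPV: {npv:.1%}")  # 97.8%
```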
So when you see that a test's specificity was 98.9%, for example, that doesn't actually, by itself, tell you the number of false positives that occurred.

It's also why a test's specificity and negative predictive value can be totally different numbers! 12/
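(One way to see why: sensitivity and specificity are properties of the test, but predictive values also depend on how common infection is. A rough sketch, assuming the same test is used at two made-up prevalence levels:)

```python
# Same test (80% sensitivity, 98.9% specificity) on 100,000 people,
# at two different hypothetical infection rates
sens, spec = 0.80, 0.989

for prevalence in (0.10, 0.01):
    infected = 100_000 * prevalence
    uninfected = 100_000 - infected
    true_pos = sens * infected            # infected people the test catches
    false_pos = (1 - spec) * uninfected   # uninfected people flagged anyway
    ppv = true_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.1%}")

# prevalence 10%: PPV = 89.0%
# prevalence 1%:  PPV = 42.3%
```

Same test, same specificity, but when infection is rare, most of the positives are false positives.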
That's it! Ta for now! 13/13
