
Accuracy and precision - theory


But how do sampling error and bias relate to precision and accuracy, terms which are often confused?

In short, a measurement (or in our case, the estimate from a survey) is precise if it obtains similar results with repeated measurement (or repeated surveys).

A measurement is accurate if it is close to the truth with repeated measurement (or repeated surveys).

Last's Dictionary of Epidemiology says it best:

A faulty measurement may be expressed precisely but may not be accurate. Measurements should be both accurate and precise, but the two terms are not synonymous.

Let's imagine a dart board with the centre representing the true population value, as pictured above. Each of the three darts is a repeated survey using the same methodology and sample size. Of course, you wouldn't do three surveys to measure the same indicator or outcome in the same population at the same time, but just imagine that this was done.

The situation above shows great precision but very poor accuracy. The result of NONE of these three surveys is anywhere close to the true population value. What happened here?


The situation above shows poor precision (the three darts (or surveys) are far apart), but if we threw many more darts (or did many more surveys with the same methodology), the average of all the results from all the darts (or all the surveys) would be close to the truth. What happened with these three surveys?


Of course, the best situation is pictured above. The survey results are both precise and accurate, so the darts are clustered (the survey results are similar) and they are close to the true population value. What happened here?

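These dart-board scenarios can be mimicked with a small simulation. This is a hypothetical sketch: the 20% true prevalence, the sample sizes, and the 10-percentage-point bias are invented for illustration.

```python
import random

random.seed(1)

TRUE_PREVALENCE = 0.20  # the bullseye: hypothetical true population value

def survey(n, bias=0.0):
    """Simulate one survey of n people. `bias` shifts every person's
    chance of being (mis)classified as a case, e.g. a faulty instrument."""
    p = TRUE_PREVALENCE + bias
    cases = sum(1 for _ in range(n) if random.random() < p)
    return round(cases / n, 3)

# Scenario 1: precise but inaccurate -- large samples (a tight cluster
# of darts), but a systematic +10 percentage-point measurement bias
precise_biased = [survey(n=2000, bias=0.10) for _ in range(3)]

# Scenario 2: imprecise but accurate -- small samples (scattered darts),
# no bias, so the results land around the truth
imprecise_unbiased = [survey(n=50) for _ in range(3)]

print("Precise but biased:  ", precise_biased)
print("Imprecise, unbiased: ", imprecise_unbiased)
```

The biased surveys cluster tightly around the wrong value, while the unbiased ones scatter around (and, with many repetitions, average out to) the truth.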

So far, so good.

But when do we do many surveys to check precision? Right, never!

When do we know the true population value of what we're trying to measure? Right, never!

The usual situation is that you do one survey in a population in which you do not know the true value. You end up with a single dart (the result from a single survey), but no dartboard showing the true population value, like this:

But this one dart doesn't tell us much. Where is the true population value? Is this a useful estimate? Can we make programmatic decisions based on this survey result?

1

What one thing can you always calculate from survey data which will help you get some idea of how close your estimate is to the true population value?

a) A quantitative measure of bias
b) The 95% confidence intervals

Answer a is incorrect: you cannot calculate a quantitative measure of most forms of bias. Answer b is correct: you can calculate the amount of precision you have by calculating the 95% confidence intervals.

If we calculate the 95% confidence interval, we can at least get some idea of the precision of our estimate, as shown below:
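For instance, here is a minimal sketch of the calculation, assuming a simple random sample and a hypothetical result of 120 cases among 600 people surveyed, using the normal-approximation formula for a proportion:

```python
import math

# Hypothetical survey result: 120 cases among 600 people sampled
cases, n = 120, 600
p_hat = cases / n                        # point estimate: 0.20

# Normal-approximation (Wald) 95% confidence interval for a proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimate: {p_hat:.1%}, 95% CI: {lower:.1%} to {upper:.1%}")
```

Note that cluster-sampled surveys need a design-effect adjustment (a wider interval); this sketch assumes simple random sampling.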

2

True or false?

This result showing good precision means that we are absolutely certain that the survey result is very close to the true population value.

a) True
b) False

The answer is false. Having precision does NOT necessarily mean that the result is accurate (that is, that there is little bias). If this survey had lots of bias, the result may still be far from the true population value.

This point is very, very important.

Narrow confidence intervals (good precision) DO NOT NECESSARILY mean that the survey result is close to the true population value. If there is bias producing inaccuracy, the dartboard might look as shown below. In this case, you would draw very poor conclusions if you assumed that the dart indicated the true population value.

How do we prevent bias?

  • Be sure all measurement instruments are functioning well
  • Be sure to use correct technique in all measurements
  • Carry out the sampling correctly and randomly

How do you do these things?

  • Seek expert advice and have others review your survey plans
  • Provide good, thorough training to all survey workers
  • Provide good field supervision by choosing good team leaders
  • Do quality checks on the data as they come in

In short, BE CAREFUL, DO THINGS RIGHT, and DON'T MAKE MISTAKES!

Because if you aren't careful and you have bias in your survey, you may never know it. You then make inappropriate decisions about programmes based on invalid results. Then you could either fail to provide needed services or waste resources on providing unneeded services.

At the very least, you have wasted the resources and time put into the survey because the results do not reflect the true situation in the population.

In sum,

  • A larger sample size increases precision
    • It does NOT guarantee absence of bias
    • Bias may produce very incorrect results
    • If there is little sampling error, some people may place inappropriate faith in this wrong estimate
  • Quality control is more difficult with larger sample sizes
    • More teams are needed, requiring more supervisors and forcing you to make some unqualified people team leaders
    • More teams also mean more team members, forcing you to use people who would not be hired if you had fewer teams

Therefore, you may be better off with a smaller sample size, a bit less precision, but much less bias. Bias may lead you to grossly wrong conclusions, while having not quite enough precision may only decrease your confidence in the survey results.
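This trade-off can be illustrated with another hedged sketch (the 20% truth and the 8-point bias are invented numbers): as the sample size grows, the confidence interval shrinks, but it shrinks around the wrong value.

```python
import math
import random

random.seed(7)
TRUE_VALUE = 0.20   # hypothetical true prevalence
BIAS = 0.08         # hypothetical systematic measurement bias

def biased_survey(n):
    """One simulated survey of n people with a constant upward bias.
    Returns the estimate and the half-width of its 95% CI."""
    cases = sum(1 for _ in range(n) if random.random() < TRUE_VALUE + BIAS)
    p_hat = cases / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, half_width

for n in (200, 2000, 20000):
    p_hat, hw = biased_survey(n)
    covers = p_hat - hw <= TRUE_VALUE <= p_hat + hw
    print(f"n={n:6d}  estimate={p_hat:.3f} +/- {hw:.3f}  CI covers truth: {covers}")
```

As n grows the interval narrows (more precision), yet it keeps missing the true value: a bigger sample buys precision, not accuracy.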

It is almost always better to be vaguely right than precisely wrong.