Over-interpreting results

Interpreting data from surveillance, surveys, or any other source to formulate conclusions and recommendations is often difficult. Nonetheless, a few principles apply to this process.

Conclusions must be based on the data presented, whether in an oral presentation or a written report.

Too often, recommendations are stated that reach far beyond the data. Sometimes the conclusions even directly contradict data presented elsewhere in the same presentation or report. When reviewing conclusions, ask yourself, "Do the data presented in this report, whether collected by the survey itself or drawn from other sources, lead me to this conclusion and to no other?" If not, the conclusion may need to be reformulated.

Do not over-interpret minor, statistically insignificant differences

Many people make the mistake of seeing important differences where none exist. This is especially true when comparing two surveys to look for changes over time.

For example, let us assume that two surveys were done 1 year apart. Both surveys weighed and measured children 6-59 months of age. The design effect for acute protein-energy malnutrition was 1.5 in both surveys. To detect with statistical significance a decline in the prevalence of acute protein-energy malnutrition from 6.5% to 4.5%, each of the surveys would have to have data on more than 3,060 children.
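The figure above can be checked with the standard two-proportion sample-size formula, inflated by the design effect. The significance level and power are not stated in the text; assuming a two-sided 5% significance level and 80% power (an assumption, but a conventional one) reproduces a figure close to 3,060:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_survey_sample_size(p1, p2, deff=1.0, alpha=0.05, power=0.80):
    """Per-survey sample size needed to detect a change from
    prevalence p1 to prevalence p2 (two-sided test), inflated
    by the design effect of the cluster sample."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    n_srs = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
              + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
    return ceil(n_srs * deff)

# Decline from 6.5% to 4.5% malnutrition, design effect 1.5
print(two_survey_sample_size(0.065, 0.045, deff=1.5))  # about 3,060 children
```

Under these assumptions the formula gives roughly 3,060 children per survey, consistent with the text.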

If the crude mortality rate dropped from 2.0 deaths/10,000/day in the first survey to 1.7 deaths/10,000/day in the second, and the design effect for mortality was 1.5, each survey would have needed data on 12,366 individuals for this difference to be statistically significant. If each selected household contained on average 5 people, each survey's sample would have to include 2,473 households for this change in crude mortality to reach statistical significance.
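The final step, converting a required number of individuals into a number of households, is simple division by the average household size. A hypothetical helper (rounding up to whole households, which lands within one household of the figure in the text):

```python
from math import ceil

def households_needed(individuals, avg_household_size):
    """Number of households to sample in order to cover the
    required number of individuals, given the average household
    size; rounds up to the next whole household."""
    return ceil(individuals / avg_household_size)

# 12,366 individuals at about 5 people per household
print(households_needed(12366, 5))
```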

Few surveys in humanitarian emergencies have such large sample sizes, so small differences between surveys are unlikely to be statistically significant. For this reason, whenever differences between surveys are presented, a p value should also be presented showing the likelihood that the observed difference occurred only as a result of sampling error.
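One way to produce such a p value is a two-proportion z-test run on effective sample sizes (the actual sample divided by the design effect). The survey sizes below are hypothetical, chosen to show how an apparently meaningful drop can fail to reach significance:

```python
from math import sqrt
from statistics import NormalDist

def survey_diff_p_value(p1, n1, p2, n2, deff=1.0):
    """Two-sided p value for the difference between two survey
    proportions, using effective sample sizes (n / design effect)
    to account for cluster sampling."""
    e1, e2 = n1 / deff, n2 / deff                 # effective sample sizes
    p_pool = (p1 * e1 + p2 * e2) / (e1 + e2)      # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / e1 + 1 / e2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical pair of surveys of 900 children each: a drop in
# malnutrition from 6.5% to 4.5% with a design effect of 1.5
p = survey_diff_p_value(0.065, 900, 0.045, 900, deff=1.5)
print(round(p, 3))  # about 0.13 -- the drop is not statistically significant
```

With 900 children per survey, the apparent two-point decline yields p ≈ 0.13, so it could easily be explained by sampling error alone.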