Understanding Confidence Intervals
A confidence interval is a range above and below an observed rate within which we would expect the "true" rate to lie a certain percentage of the time. The width of the confidence interval is influenced both by the degree of certainty sought (e.g., 95% versus 99% certainty) and by the standard error. A higher degree of certainty (e.g., 99%) widens the confidence interval. For estimates based on sampling, such as surveys, the sample size also influences the width: smaller samples produce wider intervals. The wider the confidence interval, the less reliable the point estimate, and the more caution its interpretation requires.
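The effects described above can be sketched with a short calculation. This is a minimal illustration, assuming the common normal-approximation (Wald) interval for a proportion; the function name `wald_ci` and the sample counts are hypothetical examples, not values from any actual survey.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Approximate confidence interval for a proportion (Wald / normal method).

    z = 1.96 corresponds to 95% confidence; z = 2.576 to 99%.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Same observed rate (20%), two different sample sizes:
lo_small, hi_small = wald_ci(20, 100)     # n = 100
lo_large, hi_large = wald_ci(200, 1000)   # n = 1,000

# The smaller sample yields a wider, less reliable interval.
print(hi_small - lo_small)   # width at n = 100
print(hi_large - lo_large)   # narrower width at n = 1,000

# Raising the degree of certainty from 95% to 99% also widens the interval.
lo99, hi99 = wald_ci(20, 100, z=2.576)
print(hi99 - lo99)           # wider than the 95% interval at the same n
```

Running the sketch shows both influences at once: at the same observed rate, the interval for n = 100 is wider than the interval for n = 1,000, and the 99% interval is wider than the 95% interval for the same sample.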
The confidence interval also tells you about the stability of the point estimate. A stable estimate is one that would be close to the same value if the survey were repeated. An unstable estimate is one that would vary considerably from one sample to another. A wider confidence interval around the estimate indicates greater instability and thus less reliability.
When making comparisons between estimates, how do I determine if the differences are statistically significant?
Confidence intervals are similar to margins of error. When the confidence intervals of two estimates of the same indicator from different groups do not overlap, the estimates may be said to be statistically significantly different; that is, the difference is unlikely to be due to chance and can be considered a true difference. Note that the converse does not always hold: intervals that overlap slightly do not guarantee that the difference is not significant.
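The non-overlap rule can be sketched as a simple comparison of interval endpoints. The group rates below are hypothetical numbers chosen for illustration; `intervals_overlap` is not part of any standard library.

```python
def intervals_overlap(ci_a, ci_b):
    """Return True if two confidence intervals (lo, hi) overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical 95% confidence intervals for the same indicator in two groups:
group_a = (0.12, 0.18)   # e.g., an estimate of 15% with a 3-point margin
group_b = (0.22, 0.30)   # e.g., an estimate of 26% with a 4-point margin

if not intervals_overlap(group_a, group_b):
    print("The difference may be considered statistically significant.")
else:
    print("The intervals overlap; do not conclude a true difference from this test alone.")
```

Because `group_a` tops out at 18% while `group_b` begins at 22%, the intervals do not overlap, so the difference between the two groups would be considered statistically significant under this rule.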