Introduction
Standard Error (SE) and Standard Deviation (SD) are both measures of spread, but they answer fundamentally different questions. Confusing them is one of the most common mistakes in statistics.
Common Confusion
Many people use SD when they should use SE, especially when reporting the precision of sample means. This can lead to incorrect conclusions about statistical significance.
The Key Difference
Standard Deviation
Measures the spread of individual data points around the mean.
"How much do individual values vary?"
Standard Error
Measures the precision of the sample mean as an estimate of the population mean.
"How accurate is our sample mean?"
Standard Error Formula
Standard Error of the Mean
SE = s / √n
Where s is the sample standard deviation and n is the sample size.
Example Calculation
A sample of 25 students has mean test score = 75, SD = 10
- Standard Deviation (s) = 10 points
- Sample Size (n) = 25
- Standard Error = 10 / √25 = 10 / 5 = 2 points
Interpretation: The sample mean of 75 estimates the population mean with a standard error of 2 points, i.e., roughly ±2 points of uncertainty.
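The worked example above can be checked in a few lines of Python (the values are taken directly from the example):

```python
import math

# Worked example: a sample of 25 students, mean score 75, SD 10
s = 10.0   # sample standard deviation
n = 25     # sample size

se = s / math.sqrt(n)  # standard error of the mean
print(se)  # 2.0
```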
When to Use Each
- Use Standard Deviation when: describing the variability of individual observations, characterizing a population or sample, setting normal ranges (e.g., clinical reference ranges), or monitoring quality control (acceptable variation in manufacturing)
- Use Standard Error when: reporting the precision of a sample statistic, constructing confidence intervals, comparing means between groups, or performing hypothesis tests
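As a sketch of the confidence-interval use case, the example below builds an approximate 95% interval for the mean from the earlier worked example (mean 75, SD 10, n = 25). The critical value 1.96 is the normal approximation; for small samples a t critical value would be more exact:

```python
import math

mean, s, n = 75.0, 10.0, 25
se = s / math.sqrt(n)          # standard error of the mean

# Approximate 95% CI: mean ± 1.96 × SE (normal approximation)
lower = mean - 1.96 * se
upper = mean + 1.96 * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # (71.08, 78.92)
```

Note that the interval's width depends on SE, not SD: a wider SD or a smaller sample both widen the interval.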
Effect of Sample Size
A crucial difference: SD stays roughly the same as sample size increases (it estimates a fixed property of the population), while SE shrinks in proportion to 1/√n.
| Sample Size (n) | SD | SE = s/√n |
|---|---|---|
| 25 | 10 | 2.00 |
| 100 | 10 | 1.00 |
| 400 | 10 | 0.50 |
| 10,000 | 10 | 0.10 |
Key Insight
To halve the standard error, you need to quadruple the sample size. This is why very precise estimates require large samples.
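The quadrupling rule follows directly from the √n in the denominator, and the table above can be reproduced with a short loop:

```python
import math

# With SD fixed at 10, quadrupling n halves the standard error
sd = 10.0
for n in [25, 100, 400, 10_000]:
    se = sd / math.sqrt(n)
    print(f"n = {n:>6}  SE = {se:.2f}")
```

Going from n = 25 to n = 100 (4×) halves SE from 2.00 to 1.00; reaching SE = 0.10 requires n = 10,000.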