
Repeatability vs Reproducibility: What Standard Deviation Measures

Learn the difference between repeatability and reproducibility, how each uses standard deviation, and how to report precision correctly in lab, manufacturing, and measurement workflows.

By Standard Deviation Calculator Team · Data Science Team

What Repeatability and Reproducibility Mean

Repeatability and reproducibility are both precision concepts. They answer the same statistical question under different operating conditions: how much do repeated results vary when the underlying item being measured is supposed to be the same?

Under IUPAC-style definitions, repeatability refers to repeated results generated with the same method, same operator, same apparatus, same laboratory, and short time interval. Reproducibility refers to repeated results generated with the same method but different conditions, such as different operators, instruments, laboratories, or times.

Core distinction

Repeatability asks whether the method is stable under tightly controlled conditions. Reproducibility asks whether the method still behaves consistently after real-world sources of variation are allowed to change.

That difference matters because a process can look excellent in one lab on one day and still perform poorly across shifts, sites, or instrument platforms. If you need the percentage version of precision, continue with Relative Standard Deviation (RSD) and Coefficient of Variation.

Same Idea, Different Conditions

The statistic behind both concepts is still a standard deviation. What changes is the data-generating setup. In practice, repeatability standard deviation is usually smaller because fewer variation sources are allowed to move.

| Aspect | Repeatability | Reproducibility |
| --- | --- | --- |
| Operator | Held constant | May change |
| Instrument or apparatus | Held constant | May change |
| Laboratory or site | Held constant | May change |
| Time interval | Short and tightly controlled | Can span longer periods |
| What the SD reflects | Within-condition variation | Within-condition plus between-condition variation |
| Typical use | Method precision or instrument precision | Interlaboratory validation or cross-site robustness |

Quick intuition

If one analyst injects the same sample six times on the same HPLC system, you are studying repeatability. If three labs each run the same material on their own systems, you are studying reproducibility.
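The repeatability half of that intuition is easy to compute directly. A minimal sketch using Python's standard library, with invented peak results for the six injections (the values and units are illustrative, not from any real study):

```python
import statistics

# Hypothetical results from six injections of the same sample by one
# analyst on one HPLC system in one session (repeatability conditions).
injections = [101.2, 100.8, 101.5, 100.9, 101.1, 101.3]  # mg/L

mean = statistics.mean(injections)
s_r = statistics.stdev(injections)      # sample SD (n - 1 denominator)
rsd_percent = 100 * s_r / mean          # scale-free relative version

print(f"repeatability SD: {s_r:.3f} mg/L, %RSD: {rsd_percent:.2f}%")
```

Because only one analyst, instrument, and session contributed, this standard deviation supports a repeatability claim and nothing broader.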

How Standard Deviation Is Used

Standard deviation gives a unit-based measure of precision. If assay results are reported in mg/L, then repeatability SD and reproducibility SD are also reported in mg/L. When labs want a scale-free version, they divide by the mean and report a relative value such as percent RSD.

Precision under fixed conditions

s_r = standard deviation of repeated results under repeatability conditions

Precision across broader conditions

s_R = standard deviation of repeated results under reproducibility conditions

A useful mental model is that reproducibility includes everything repeatability includes plus extra between-operator, between-instrument, between-lab, or between-day variation. When those extra sources are material, the broader standard deviation grows.

Variance decomposition idea

s_R^2 ≈ s_r^2 + s_between^2
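Rearranging that relation gives a quick estimate of the extra between-condition component. A minimal sketch with illustrative SD values (the numbers are invented for demonstration):

```python
import math

# Illustrative SDs from a hypothetical two-stage precision study (% units)
s_r = 0.42   # repeatability SD: operator, instrument, lab held constant
s_R = 1.35   # reproducibility SD: those conditions allowed to vary

# From s_R^2 ≈ s_r^2 + s_between^2, the between-condition component is:
s_between = math.sqrt(s_R**2 - s_r**2)

print(f"between-condition SD ≈ {s_between:.2f}%")
```

Here most of the reproducibility spread comes from between-condition sources, which is exactly the signal that transfer conditions, not the method itself, deserve attention.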

What this means operationally

If reproducibility is much worse than repeatability, the method itself may be fine, but transfer between people, instruments, or labs is introducing instability.

You can compute the raw spread with the site's sample standard deviation calculator, mean and standard deviation calculator, or relative standard deviation calculator. For process monitoring after validation, pair this topic with Control Charts and Outlier Detection.

Worked Example

Suppose a tablet assay method is tested in two stages. First, one analyst runs six replicates on the same instrument. Second, three laboratories each run the same material using the same method. The goal is to separate tight local precision from broader deployment precision.

| Study stage | Mean result | Standard deviation | Interpretation |
| --- | --- | --- | --- |
| Single analyst, same instrument, same day | 100.1% | 0.42% | Repeatability is strong under controlled conditions |
| Three labs, different analysts and instruments | 99.8% | 1.35% | Reproducibility is weaker because more variation sources are active |
1. Start with the repeatability study: use replicate results from the same local setup to estimate the within-condition standard deviation.
2. Expand the design: introduce realistic changes such as operator, laboratory, or instrument to estimate the broader reproducibility standard deviation.
3. Compare the two numbers: a large jump from repeatability to reproducibility indicates that transfer effects, calibration practices, or local operating differences need attention.
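The steps above can be sketched on toy data. The three labs' results below are invented for illustration; the within-lab spread stands in for repeatability-style precision and the pooled spread for reproducibility-style precision:

```python
import statistics

# Hypothetical assay results (%) from three labs running the same
# material with the same method on their own instruments.
labs = {
    "lab_A": [99.5, 100.1, 99.8],
    "lab_B": [98.2, 98.9, 98.5],
    "lab_C": [101.0, 101.4, 101.2],
}

# Average within-lab SD: variation with lab, operator, instrument fixed.
s_r = statistics.mean(statistics.stdev(runs) for runs in labs.values())

# SD of all results pooled together: within-lab plus between-lab variation.
all_results = [x for runs in labs.values() for x in runs]
s_R = statistics.stdev(all_results)

print(f"within-lab SD ≈ {s_r:.2f}%, across-lab SD ≈ {s_R:.2f}%")
```

The pooled spread is several times the within-lab spread, mirroring the pattern in the worked example: each lab is internally precise, but the labs disagree with one another.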

In this example, the reproducibility SD is more than three times the repeatability SD. That does not automatically mean the method failed. It means the method is more sensitive to deployment conditions than the repeatability experiment alone suggested.

Good repeatability, weak reproducibility

Common causes include operator training gaps, site-specific sample preparation, calibration drift, environmental differences, and inconsistent SOP execution.

Good repeatability and good reproducibility

This is the stronger outcome for a method intended to move across shifts, sites, or contract labs because the precision holds after operational complexity increases.

How to Report Results

A good report does not stop at one isolated standard deviation. It states which conditions were held fixed, which conditions were allowed to vary, and whether the result is an absolute SD or a relative metric such as percent RSD.

  • State the measurement conditions explicitly: operator, instrument, laboratory, and time window.
  • Report the sample size or number of replicate results behind the estimate.
  • Say whether the number is a repeatability SD, reproducibility SD, or percent RSD under one of those conditions.
  • Include the mean so readers can interpret whether absolute spread or relative spread is more relevant.
  • If method transfer is the concern, compare both repeatability and reproducibility instead of reporting only one.
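As a minimal sketch, the checklist above can be captured in a small reporting helper. All field names here are illustrative choices, not taken from any standard:

```python
import statistics

def precision_report(results, condition, operator="held constant",
                     instrument="held constant", laboratory="held constant"):
    """Summarize a precision study; `condition` names the study type."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    return {
        "condition": condition,          # "repeatability" or "reproducibility"
        "n": len(results),               # replicate count behind the estimate
        "mean": round(mean, 2),          # lets readers judge relative spread
        "sd": round(sd, 3),              # absolute SD, same units as the data
        "rsd_percent": round(100 * sd / mean, 2),  # scale-free version
        "operator": operator,
        "instrument": instrument,
        "laboratory": laboratory,
    }

report = precision_report([100.3, 99.9, 100.2, 100.0, 100.1, 99.8],
                          condition="repeatability")
print(report)
```

Carrying the conditions alongside the statistic makes it hard to quote the SD later without its context.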

Do not mix terminology

Precision, repeatability, reproducibility, standard deviation, standard uncertainty, and RSD are related but not interchangeable labels. Name the statistic and the conditions together so readers know exactly what was measured.

For closely related theory, see Standard Error vs Standard Deviation, Pooled Standard Deviation, and Combining Standard Deviations.

Common Mistakes

  • Mistake 1: Calling a same-operator, same-day study reproducibility. That design only supports repeatability.
  • Mistake 2: Comparing SDs across methods without checking whether the means differ enough that percent RSD would be more informative.
  • Mistake 3: Treating a strong repeatability result as proof that a method will transfer well across laboratories.
  • Mistake 4: Reporting a single precision number without describing the conditions behind it.

The practical rule is simple: repeatability is precision in a tightly controlled local setup; reproducibility is precision after you widen the setup. The standard deviation framework is the same, but the experiment defines what that standard deviation means.

Sources

References and further authoritative reading used in preparing this article.

  1. IUPAC Gold Book: repeatability (IUPAC)
  2. NIST/SEMATECH Engineering Statistics Handbook, Chapter 2: Measurement Process Characterization (NIST)
  3. ICH Q2(R2): Validation of Analytical Procedures (FDA)