Why Accuracy Is Not the Same as Resolution
Pick up almost any digital caliper and you'll see a display that reads to 0.01mm or 0.0005in. That number — resolution — is the smallest increment the display can show. It tells you nothing about whether that displayed increment reflects the true dimension. Accuracy is the specification that addresses that question, and it is almost always a larger number with a ± in front of it.
A typical mid-range caliper might read: accuracy ±0.02mm. That means any single reading can be off by up to 0.02mm in either direction from the true value. On a display that shows two decimal places, that ±0.02mm uncertainty swallows the last digit entirely. You're reading 12.34, but the true value could be anywhere from 12.32 to 12.36. The display shows certainty; the spec sheet tells a different story.
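The arithmetic is worth making explicit. A minimal sketch, using the example reading and accuracy figures above:

```python
# The interval a single reading actually guarantees, given the
# stated accuracy from the spec sheet (example values from above).
reading = 12.34   # mm, what the display shows
accuracy = 0.02   # mm, the ± accuracy spec

low, high = reading - accuracy, reading + accuracy
print(f"display {reading:.2f} mm -> true value in [{low:.2f}, {high:.2f}] mm")
# display 12.34 mm -> true value in [12.32, 12.36] mm
```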
Repeatability — also called repeatability error — is a third specification. This is how much your reading changes when you measure the same artefact five times, fully releasing and repositioning between each measurement. A caliper can be acceptably accurate on average but have poor repeatability: the mean of ten readings is close to true, but any single reading is unreliable. If you're making accept/reject decisions on individual parts, repeatability matters as much as accuracy. Our guide to reading a digital caliper covers the foundational technique for getting repeatable results from any caliper.
These three specs — resolution, accuracy, repeatability — are distinct. Professional buyers compare accuracy specs. Experienced users check repeatability before every critical session. Resolution is mostly a display feature. Understanding the difference is the first step to knowing when your caliper is telling you the truth.
The Four Main Sources of Caliper Error
Even on a perfectly calibrated caliper, four factors introduce measurable error in workshop conditions. Knowing which ones affect your work tells you which mitigations are worth the effort.
Thermal expansion is the largest uncontrolled source of error in most workshops. Steel expands approximately 11μm per metre per degree Celsius, so a 150mm steel caliper at 28°C reads approximately 0.013mm smaller than the true dimension at the standard reference temperature of 20°C. In an unheated workshop that warms through the day, this is a systematic drift — every reading is wrong in the same direction. Most manufacturers specify accuracy at 20°C and give a temperature coefficient (typically 0.01mm/°C for quality instruments). Budget calipers often don't specify a coefficient at all.
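That estimate is easy to sanity-check. A minimal sketch of the linear-expansion arithmetic, using the 11μm/m/°C figure for steel quoted above:

```python
def thermal_error_mm(length_mm: float, coeff_um_per_m_per_c: float, delta_t_c: float) -> float:
    """Linear thermal expansion error in mm over a given length and temperature offset."""
    return (length_mm / 1000) * coeff_um_per_m_per_c * delta_t_c / 1000

# 150mm steel caliper used 8°C above the 20°C reference temperature
print(f"{thermal_error_mm(150, 11, 28 - 20):.4f} mm")  # 0.0132 mm
```

The same function works for any material once you swap in its coefficient, which matters when the workpiece and the instrument expand at different rates.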
Jaw parallelism is a geometry error. The measuring faces of the outside jaws should be perfectly parallel when closed. On lower-quality calipers, the sliding jaw assembly has play that manifests as inconsistent readings depending on where the workpiece contacts the jaw face. You can detect this by measuring a gauge pin at three positions along the depth of the jaws: near the jaw root at the beam, at mid-jaw, and near the tip. A spread of more than one graduation across these positions indicates jaw geometry error.
Jaw deflection under pressure is a stiffness problem. As you close the jaws on a workpiece, the sliding jaw assembly bends slightly under hand pressure. On cheap calipers, the main beam flexes too. This error is proportional to pressure — the harder you press, the more the jaws deflect and the further the reading shifts. Consistent light pressure is the mitigation. Some machinists use a dedicated pressure fixture for critical measurements, essentially a spring-loaded jaw closure that applies the same force every time.
Encoder contamination affects digital calipers specifically. The linear encoder strip inside the slider housing accumulates dust, oil, and metal fines over time. Early contamination causes intermittent reading glitches — the display jumps or skips. Later, it causes systematic drift as the encoder sensor reads position with degraded fidelity. IP65-rated calipers resist this significantly. For machinists working near coolant or fine chips, this is not a minor concern. See our digital caliper spec guide for what IP ratings actually mean in practice.
Reading the Spec Sheet: What the Numbers Actually Guarantee
Most caliper spec sheets list between three and eight accuracy-related numbers. Here's what each one means, and which ones are worth caring about.
Stated accuracy (e.g., ±0.02mm for Mitutoyo CD-C series) is the manufacturer's guarantee that any single reading falls within this band of the true value when measured under reference conditions. This is the number to compare across instruments. It applies at a specific temperature — usually 20°C — and at a specific measurement position (often the centre of travel). The further you are from those conditions, the less this number means.
Repeatability (e.g., ≤0.01mm) is usually stated separately. This is the maximum spread of multiple readings on the same artefact under the same conditions. A caliper with ±0.02mm accuracy and ≤0.005mm repeatability is a better instrument than one with the same accuracy but ≤0.02mm repeatability — because with the first instrument, the mean of several readings is reliably within ±0.02mm, while with the second, even the mean can be unreliable.
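To see why, here is a minimal sketch comparing two hypothetical instruments on the same artefact; the reading sets are invented purely for illustration:

```python
from statistics import mean

# Five readings of the same feature (hypothetical data)
tight = [12.012, 12.009, 12.011, 12.010, 12.013]  # good repeatability
loose = [11.998, 12.021, 12.003, 12.019, 11.995]  # poor repeatability

for name, readings in (("tight", tight), ("loose", loose)):
    spread = max(readings) - min(readings)
    print(f"{name}: mean = {mean(readings):.3f} mm, spread = {spread:.3f} mm")
# The tight instrument's mean is trustworthy; no single loose reading is.
```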
Maximum measurement force (sometimes listed in N) is the force required to close the jaws fully. Higher is generally better — it means the jaw mechanism is stiffer and less prone to deflection under hand pressure. Most manufacturers don't list this spec explicitly for budget calipers, which is itself informative.
Temperature coefficient (e.g., 0.01mm/°C) tells you how much the reading drifts per degree away from the reference temperature. Use this to correct for workshop temperature if your tolerance band is tight. If your caliper doesn't specify a coefficient, assume 0.02–0.05mm/°C and be conservative.
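Because spec sheets rarely state which direction the drift runs, a conservative approach is to widen the accuracy band by the worst-case drift rather than attempt a signed correction. A sketch, using the example figures above:

```python
def effective_accuracy_mm(stated_accuracy_mm: float, coeff_mm_per_c: float,
                          workshop_temp_c: float, reference_temp_c: float = 20.0) -> float:
    """Stated accuracy widened by worst-case thermal drift away from the reference temperature."""
    return stated_accuracy_mm + coeff_mm_per_c * abs(workshop_temp_c - reference_temp_c)

# ±0.02mm instrument with a 0.01mm/°C coefficient, used at 26°C
print(f"±{effective_accuracy_mm(0.02, 0.01, 26.0):.2f} mm")  # ±0.08 mm
```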
Calibration interval is the recommended period between traceable calibrations — typically one year for production use, two years for general workshop use. This is a manufacturer recommendation, not a guarantee. High-use instruments may need more frequent verification. Instruments that have been dropped or stored in adverse conditions should be checked immediately, regardless of schedule.
Environmental Controls: The Workshop Factors That Compromise Accuracy
The accuracy specification assumes controlled conditions. Your workshop is not controlled. The gap between laboratory conditions and workshop conditions is where calibrated instruments go wrong, and most of the corrective measures are simple.
Temperature management is the primary concern. The practical fix is not to build a climate-controlled room — it's to let your caliper equilibrate to the workshop before using it. A caliper that has been in a drawer overnight reads differently from one that has been in your hand for ten minutes. The instrument's thermal mass is small, so it reaches equilibrium quickly: keep it in the workshop rather than a pocket, and wait five minutes after retrieving it from a cold car or a warm cabinet before zeroing or measuring.
Direct sunlight creates localized heating on one side of the caliper, causing differential expansion that bends the beam. This is a systematic error that isn't captured by any spec coefficient because it's environmental, not intrinsic to the instrument. Keep calipers away from south-facing windows and direct lamp light when performing critical measurements. This is especially relevant in photography or detailed assembly workspaces where bright task lighting is common.
Workpiece temperature matters as much as instrument temperature. A machined aluminium part measured immediately after cutting is significantly warmer than the surface plate you're using to calibrate your caliper. Aluminium expands at roughly 23μm/m/°C — about twice the rate of steel. A 100mm aluminium part at 30°C reads about 0.023mm larger than its dimension at 20°C. If you're measuring aluminium to ±0.05mm tolerances, part temperature alone can consume nearly half your tolerance band. Let critical workpieces cool to workshop temperature before measuring.
Vibration isn't usually discussed in caliper contexts but matters for the most precise work. The encoder in a digital caliper is a contact sensor — vibration that causes the slider to micro-move relative to the beam introduces noise into the reading. For measurements below 0.02mm, use a solid bench or surface plate, not a surface shared with a running machine.
The Stepwise Verification Test: How to Check Your Caliper Before It Checks Your Work
Run this before any session where the measurement outcome matters. It takes three minutes and tells you whether your caliper is performing within its spec or developing a problem.
Step 1: Clean the jaw faces with a dry cloth. Inspect for chips, burrs, or visible wear. Any contamination between the jaws when you close them introduces a systematic offset.
Step 2: Close the jaws fully, wait two seconds, press zero. The caliper should read 0.000 with no drift. Any reading that slowly moves from zero after zeroing indicates encoder or electronics instability — retire the instrument pending service.
Step 3: Take a reference artefact — a gauge pin, a hardened steel disk, or a micrometer setting ring. Measure it five times, fully releasing the jaws between each reading. Record each result. The spread (max minus min) should be ≤ one display graduation (the sketch after this list automates this check together with step 4). If it's larger, the caliper has a mechanical or contamination problem.
Step 4: Measure at three positions across the travel: near the bottom of the range (10–15mm), mid-range (70–80mm), and near maximum (130–140mm for a 150mm caliper). On a well-made caliper, the readings at each position will be consistent with each other and with the known dimension. On a worn or low-quality caliper, you may see a progressive drift across positions — e.g., reading correctly at the low end and mid-range but reading 0.03–0.05mm small at maximum travel. Note this and correct for it in your measurements.
Step 5: If you have two calipers of the same model, cross-check them on the same artefact. Agreement between instruments is not proof of accuracy, but disagreement tells you that at least one is wrong.
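A minimal sketch of the step 3 and step 4 checks, assuming a 0.01mm display graduation; the reading values below are placeholders for your own recorded numbers:

```python
GRADUATION_MM = 0.01  # display resolution of the caliper under test

def spread_mm(readings: list[float]) -> float:
    """Max-minus-min spread across a set of readings."""
    return max(readings) - min(readings)

# Step 3: five readings of the same artefact, jaws released between each
repeat = [25.42, 25.43, 25.42, 25.42, 25.43]
if spread_mm(repeat) <= GRADUATION_MM:
    print("repeatability OK")
else:
    print("spread exceeds one graduation: suspect mechanics or contamination")

# Step 4: deviation from known dimensions at three travel positions;
# each position needs its own reference artefact of roughly that size
position_checks = {
    "low (10-15mm)":    (12.70, 12.70),   # (reading, known dimension) in mm
    "mid (70-80mm)":    (76.21, 76.20),
    "high (130-140mm)": (133.32, 133.35),
}
for position, (reading, known) in position_checks.items():
    print(f"{position}: deviation {reading - known:+.2f} mm")
```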
Thermal Equilibrium: The Step Most People Skip
Thermal equilibrium is the single most neglected factor in workshop measurement accuracy. The concept is straightforward: your caliper and your workpiece must be at the same temperature before you measure. The practice is simple: wait. But in a production environment, waiting feels like lost time, and it gets skipped.
The magnitude is not trivial. A steel caliper moving from a 16°C storage room to a 24°C workshop absorbs enough heat to change its reading by approximately 0.013mm at full 150mm travel. An aluminium workpiece measured immediately after cutting can be 5–8°C above ambient; that same 100mm aluminium part reads 0.012–0.018mm large over that range. For parts with ±0.05mm tolerances — common in aerospace, automotive, and precision engineering — these two errors together consume more than half the tolerance band before any instrument error is counted. You can be measuring correctly and still be wrong.
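Using the same linear-expansion arithmetic as the sketch earlier, the combined budget is easy to put in front of you; the coefficients are the ones quoted above:

```python
def thermal_error_mm(length_mm, coeff_um_per_m_per_c, delta_t_c):
    return (length_mm / 1000) * coeff_um_per_m_per_c * delta_t_c / 1000

caliper = thermal_error_mm(150, 11, 24 - 16)  # steel caliper, cold from storage
part = thermal_error_mm(100, 23, 7)           # aluminium part, 7°C above ambient
tolerance = 0.05                              # mm, ± tolerance band

total = caliper + part
print(f"caliper {caliper:.3f} mm + part {part:.3f} mm = {total:.3f} mm "
      f"against a ±{tolerance} mm band")  # 0.029 mm, over half the band
```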
The practical solution in a workshop is to establish a consistent baseline: keep the caliper in the workspace, not in a case or drawer, for at least an hour before the session. For production environments, the standard practice is to specify measurement at ambient workshop temperature after a defined soak period — often two hours for critical work. The 20°C reference temperature itself is defined in ISO 1, and thermal soak practice for dimensional measurement is covered in standards such as ASME B89.6.2.
For hobbyists and makers working to looser tolerances (±0.1mm or more), thermal effects are rarely the limiting factor. But once you approach ±0.05mm, temperature management matters, and it is often the primary source of unexplained measurement variation between morning and afternoon sessions.
Calibration vs Verification: What Each Term Means and When You Need Which
Calibration and verification are not the same thing, and confusing them is a common source of measurement error.
Calibration is the process of comparing your instrument against a known reference standard and determining the correction factor — the offset between your instrument's reading and the true value. A calibration certificate from an ISO 17025 accredited laboratory provides traceability: it links your instrument's reading to a national measurement standard, giving you documented confidence that your measurements are accurate within the stated uncertainty.
Verification is the process of checking that your instrument is still performing within its specified accuracy — without necessarily determining a correction factor. The stepwise test in the previous section is a verification. It tells you whether the instrument is still functioning correctly. It doesn't tell you what the correction factor is.
For hobby use, annual verification against gauge blocks on a surface plate is sufficient. For production machining or any work where dimensional documentation matters, you need a traceable calibration certificate. The calibration lab will measure your caliper at multiple points across its range, at controlled temperature, against standards with known uncertainties smaller than your caliper's accuracy spec. The certificate they provide is your evidence that your measurements are accurate.
Between calibration cycles, the in-house verification is what tells you whether the calibration is still valid. If your verification shows drift beyond the stated accuracy, the instrument needs recalibration before further use. For more on what traceable calibration means for precision instruments, see our dial indicator calibration guide, which covers the same concepts for a related instrument family.
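A minimal sketch of that drift check, assuming verification against an artefact of known dimension; the values are placeholders for your own records:

```python
from statistics import mean

STATED_ACCURACY_MM = 0.02  # from the spec sheet or calibration certificate
reference_mm = 25.400      # certified dimension of the verification artefact
readings = [25.41, 25.42, 25.41, 25.42, 25.41]  # your five recorded values

drift = mean(readings) - reference_mm
if abs(drift) > STATED_ACCURACY_MM:
    print(f"drift {drift:+.3f} mm exceeds ±{STATED_ACCURACY_MM} mm: recalibrate before further use")
else:
    print(f"drift {drift:+.3f} mm is within the stated accuracy")
```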
Maintenance Practices That Preserve Accuracy Over Years
A well-maintained quality caliper holds calibration for years. A neglected caliper of the same quality degrades within months. The difference is almost entirely in handling and storage habits.
Keep it clean. After every session involving oil, coolant, or metal fines, wipe the jaw faces and the beam with a dry cloth. These contaminants migrate into the sliding mechanism and the encoder area. Budget calipers with open encoder designs are more susceptible to this than higher-end instruments with sealed slider housings, but no caliper is immune. A monthly cleaning with a lint-free cloth and a brief application of light machine oil (wiped off, not left as a film) on the beam rail maintains smooth operation without attracting excessive contamination.
Store it properly. The original case is designed for the instrument and is worth keeping. If you've lost it, a padded micrometer case or a dedicated drawer insert works well. Don't store calipers loose in a drawer with other tools — contact with other steel items chips the jaw faces. Don't store them with the jaws fully closed — jaws resting in light contact is fine, but full closure parks the slider at one end of its travel, leaving the beam exposed and the encoder at its most vulnerable position.
Handle it with intent. Calipers are not robust in the way that a hammer is robust. Dropping a caliper onto a hard surface can shock-load the encoder mechanism enough to introduce intermittent reading errors that aren't immediately obvious. A caliper that has been dropped should be verified before further use. Some manufacturers specifically note that shock loading voids the warranty.
Don't lubricate the encoder or electronics. The slider mechanism on some calipers accepts a tiny amount of fine machine oil on the rail — check your manual. But never oil the encoder strip or the battery compartment seal. These seals are designed to keep contamination out, and oil attacks the elastomer over time, compromising the seal.
When a Caliper Is the Right Tool and When It Isn't
Digital calipers are the most versatile measurement tool in any workshop. They are not the most accurate. That distinction belongs to the micrometer, the gauge block, and the interferometer, in ascending order. Understanding where the caliper sits in this hierarchy determines whether your measurement results are fit for purpose.
A caliper is the right tool when: measuring dimensions where ±0.05mm or looser tolerance is acceptable; measuring multiple dimension types (outside, inside, depth, step) with one instrument; taking spot measurements in a production environment where speed matters; doing layout and setup work where range is more important than precision.
A caliper is not the right tool when: tolerance is ±0.02mm or tighter — use a micrometer; measuring bore diameters to better than 0.02mm — use a bore gauge; measuring depth in a deep hole to better than 0.02mm — use a depth micrometer; measuring thread pitch diameter — use a thread micrometer or go/no-go gauge; measuring surface finish — use a roughness meter. For a direct comparison between calipers and micrometers on the same measurement tasks, see our digital vs dial micrometer comparison.
The caliper's genuine advantage is convenience. One instrument covers 0–150mm with four measurement modes. That flexibility is worth the accuracy trade-off for everything except the most precise work. Know what you're trading, and buy and maintain your caliper accordingly.
The One-Paragraph Summary
Caliper accuracy is a function of the instrument's quality, its calibration state, the thermal conditions of the measurement, and the technique of the user. Resolution tells you what the display shows; accuracy tells you how close that reading is to the true value; repeatability tells you whether you can trust any single reading. Verify your caliper before every critical session using a stepwise test, let it thermally equilibrate before measuring, and store it clean and protected. A well-made caliper maintained this way will hold calibration for years and serve across every measurement scenario from rough layout to close-tolerance inspection. When accuracy requirements exceed what a caliper can deliver, you'll know — and that's the right time to reach for a micrometer instead.