Measurement Systems

Measurement and inspection are a critical part of ensuring outgoing quality, and the quality engineer must ensure that her measurement equipment has been chosen properly and is calibrated to perform the task.

Measurement System Analysis is also very important in the effort to choose the correct Measurement System, or to analyze an existing one.

Additionally, your Calibration System is vital in ensuring that your measurements & testing are accurate and reliable when making quality decisions.

When we talk about Measurement & Testing Equipment or Measurement Systems, there are a few important concepts that the CQE must understand and apply. These are shown below and will be discussed in detail:

  • Accuracy & Bias
  • Trueness
  • Precision
  • Repeatability
  • Reproducibility
  • Linearity

Accuracy & Bias are interchangeable terms for the same concept, which is defined as the closeness of agreement between an observed value and an accepted reference value.

The Accuracy of a test unit is the Accepted Reference Value minus the Average Observed Value.

Accuracy = Accepted Reference Value (Gold Standard) – Average of Multiple Measurements

Accuracy (and bias) is a cumulative sum of the systematic errors of the measurement system.

That is to say, the difference between the reference standard and the average measurement is due to all the different errors of the measurement equipment.

There are a few common sources of Accuracy/Bias error, including an inadequate description of the measurement or calibration system, variation in the timing of measurements, and errors of omission.

The ISO 5725-1 standard splits this concept into trueness & precision, with trueness being the ability of a measurement system to give a correct result (how close the average is to the reference value) and precision being the ability of the measurement system to replicate a given result.

Precision is often described as the ability of a measurement method to replicate a given result. Repeatability and Reproducibility are subsets of precision, and both help categorize the sources of variability in a measurement system.

Accuracy & Precision

Repeatability describes the minimum variability in results and is commonly used as a system's precision for measurements made within a restricted set of conditions.

This restricted set of conditions holds constant things like the operator, calibration, environment (humidity, temperature, etc.), and the time elapsed between measurements. By eliminating these variables, you are able to accurately challenge the repeatability of a measurement system.

Reproducibility describes the maximum variability in the results of a measurement system. It is the measure of agreement between different test results made on the same object by two different, independent measurement systems or laboratories.

Unlike with repeatability, when measuring reproducibility you want to introduce sources of variation such as the operator, calibration, environment & time elapsed between measurements.

As seen here, operators can contribute to different levels of reproducibility. Ideally, you'd like to improve your reproducibility by reducing the impact of the operator on the overall measurement.

Linearity is defined as the variation in a measurement system's performance throughout the range of expected measurements.

For example, let's say you're measuring 2 different critical dimensions on a particular component: the first critical dimension is the thickness (.025″) of the ball bearing and the other is the diameter (12.000″).

Linearity is the concept that at the .025″ measurement you may get better accuracy and precision from your measurement system (calipers) than when you make a different measurement with the same tool at a different dimension (12.000″).
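To make the idea concrete, here is a minimal Python sketch that compares the bias of the same caliper at those two reference dimensions; the readings are made up for illustration. A noticeably larger bias at one end of the range points to a linearity problem.

```python
# Minimal linearity check: compare the bias of one instrument at two
# reference points. The readings below are hypothetical.

def bias(reference, readings):
    """Bias per the formula above: accepted reference value minus the average observed value."""
    return reference - sum(readings) / len(readings)

# Hypothetical repeat readings at each end of the caliper's range
bias_small = bias(0.025, [0.0251, 0.0250, 0.0252])      # ball-bearing thickness
bias_large = bias(12.000, [12.0080, 12.0075, 12.0090])  # component diameter

print(f"Bias at  0.025 in: {bias_small:+.4f} in")
print(f"Bias at 12.000 in: {bias_large:+.4f} in")
```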

Precision Versus Accuracy

Repeatability & Reproducibility (Precision) – Means that multiple measurements taken from a measurement system are consistent around the same value. This does not mean they are accurate!

Accuracy – The difference between the average value taken by your measurement system and the reference value.

Example 1: You measure a part with a known length 2.0000″ (Reference Value) 5 times and get the following values

2.0101″  2.0102″ 2.0099″  2.0102″  2.0098″

These values are all Precise (~2.0100″), but they are not accurate.

Example 2: You measure a part with a known length 2.0000″ (Reference Value) 5 times and get the following values

1.7100″  2.0150″  2.0610″  1.8930″  2.3250″

These values are all Accurate, in that their average is approximately 2.0000″, but they are not precise.
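A short Python sketch makes the distinction explicit using the two sets of readings above: the average (compared against the 2.0000″ reference, per the accuracy formula earlier) captures accuracy, while the sample standard deviation captures precision.

```python
import statistics

reference = 2.0000  # known length of the part

example_1 = [2.0101, 2.0102, 2.0099, 2.0102, 2.0098]  # precise, not accurate
example_2 = [1.7100, 2.0150, 2.0610, 1.8930, 2.3250]  # accurate, not precise

for name, readings in [("Example 1", example_1), ("Example 2", example_2)]:
    accuracy = reference - statistics.mean(readings)  # accuracy/bias
    precision = statistics.stdev(readings)            # spread of the readings
    print(f"{name}: accuracy = {accuracy:+.4f} in, std dev = {precision:.4f} in")
```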

Repeatability Versus Reproducibility (R&R)

While both repeatability and reproducibility represent the precision of a system, people often confuse these terms as synonyms or equivalents, which is incorrect. Each of these terms represents a different source of variation in your measurement system.

  • Repeatability is the error/variation observed when measuring the same part/feature a number of times. This type of error includes part setup, part size, linearity of the equipment, and the variation associated with the measurement equipment itself.
  • Reproducibility is the error/variation observed when measuring the same item utilizing different operators. This variation captures the human factor associated with the measurement error.

A measurement system must first be repeatable before it can be reproducible.

Gauge R&R Study

A Gauge Repeatability and Reproducibility (R&R) Study is an effective and recommended method for determining the variation (or error) of a Measurement System (MS).

Both of these values, when collected, are expressed as standard deviations.

There are a number of ways to perform a gauge R&R study; below is a general description of the 2 most common types: the Range Method and the Average & Range Method.

Beyond these 2 methods, a full ANOVA (Analysis of Variance) can be performed on a given MS to fully quantify and classify each contributor to variance (error).

In addition to this, we will discuss how to interpret your results, the acceptance criteria, and some potential actions you can take to reduce variation.

The Range Method

The first is called the Range Method, which is a quick and dirty way to quantify your Measurement System's R&R. The downside to the Range Method is that it does not reveal the individual contributions to error from the 2 primary sources – operators & equipment.

The Range Method can be performed with one or multiple operators measuring the same set of parts only once. The advantage to the Range Method is that it is quick and inexpensive.
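Here is a minimal sketch of a Range Method calculation. It assumes the common short-form setup of two appraisers each measuring the same five parts once; the readings and tolerance are made up, the d2* constant (~1.19) is the value typically published for that setup, and the 6-sigma spread used to compare against the tolerance is just one common convention.

```python
# Short-form Range Method sketch: two appraisers, five parts, one reading each.
appraiser_a = [2.0005, 2.0010, 1.9995, 2.0015, 2.0000]
appraiser_b = [2.0010, 2.0005, 2.0000, 2.0010, 2.0005]
tolerance = 0.020   # total tolerance of the dimension (hypothetical)
D2_STAR = 1.19      # typical published constant for 2 appraisers x 5 parts

ranges = [abs(a - b) for a, b in zip(appraiser_a, appraiser_b)]
r_bar = sum(ranges) / len(ranges)

grr = r_bar / D2_STAR                         # combined R&R as a standard deviation
pct_of_tolerance = 100 * (6 * grr) / tolerance

print(f"GRR (std dev):  {grr:.5f}")
print(f"% of tolerance: {pct_of_tolerance:.1f}%")
# Note: the Range Method lumps repeatability and reproducibility together;
# it cannot separate the operator and equipment contributions.
```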

Average & Range Method

The second method for performing a Gauge R&R study is called the Average & Range Method. The advantage of this method is that it reveals the individual contributions of the operator & equipment to the variability and also determines the total variability of the MS.

The Average & Range Method requires that multiple operators (usually 3) each measure multiple parts (usually 10) multiple times (usually 3 trials each), using samples that are known to represent the full range of the process.
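Below is a sketch of the core Average & Range calculation under that 3-operator x 10-part x 3-trial setup. The data layout is hypothetical, and the K1 (~0.5908 for 3 trials) and K2 (~0.5231 for 3 operators) values are the constants usually published for this configuration.

```python
import math

K1 = 0.5908  # repeatability constant for 3 trials
K2 = 0.5231  # reproducibility constant for 3 operators

def gauge_rr(data, n_parts, n_trials):
    """data[operator][part] is the list of trial readings for that operator/part."""
    # Repeatability (EV, equipment variation): average range within each operator/part cell
    cell_ranges = [max(trials) - min(trials) for operator in data for trials in operator]
    r_double_bar = sum(cell_ranges) / len(cell_ranges)
    ev = r_double_bar * K1

    # Reproducibility (AV, appraiser variation): spread of operator averages, corrected for EV
    operator_means = [sum(sum(trials) for trials in operator) / (n_parts * n_trials)
                      for operator in data]
    x_diff = max(operator_means) - min(operator_means)
    av_squared = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = math.sqrt(av_squared) if av_squared > 0 else 0.0

    grr = math.sqrt(ev ** 2 + av ** 2)  # total gauge R&R
    return ev, av, grr
```

Each of the returned values is expressed as a standard deviation, which can then be compared against the total tolerance using the acceptance criteria below.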

Acceptance Criteria

Below is some general guidance on the Acceptance Criteria that you could utilize when performing your study. Remember that the measurement error should always be compared with the total tolerance of the system when determining if the results are acceptable.

Outcome #1 – Total Measurement Error of <10% of Total Tolerance.

Result – Acceptable Measurement System.

 

Outcome #2 – Total Measurement Error of >30% of Total Tolerance.

Result – Generally Unacceptable Measurement System. Effort should be made to identify and eliminate the potential sources of variation in the measurement system.

 

Outcome #3 – Total Measurement Error of 10-30% of Total Tolerance.

Result –  Generally acceptable Measurement System based on the importance of the application & cost of the equipment and maintenance. An example where this might be unacceptable would be if the measurement system was responsible for a critical measurement that had a direct correlation with customer safety.
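As a quick illustration of this guidance, here is a small, hypothetical helper that maps a measurement error, already expressed as a percentage of the total tolerance, onto the three outcomes above.

```python
def classify_measurement_system(pct_of_tolerance):
    """Apply the acceptance criteria above to a %-of-tolerance measurement error."""
    if pct_of_tolerance < 10:
        return "Acceptable measurement system"
    if pct_of_tolerance <= 30:
        return "Conditionally acceptable (depends on application criticality and cost)"
    return "Unacceptable: identify and eliminate sources of variation"

print(classify_measurement_system(8))   # Acceptable measurement system
print(classify_measurement_system(22))  # Conditionally acceptable ...
print(classify_measurement_system(45))  # Unacceptable ...
```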

 

Further Interpretations of the Results of a Gauge R&R Study

There are 2 general interpretations of a Gauge R&R study, both based on a comparison of Repeatability (Equipment) error versus Reproducibility (Operator) error.

Outcome #1 – Reproducibility error is large when compared to Repeatability error.

Actions to Reduce Reproducibility Error: Operator Training on how to use, operate and read the measurement equipment. Create a “Visual Factory” for operating the measurement equipment. Improve the measurement process to eliminate error associated with manual part setup.

Outcome #2 – Repeatability error is large when compared to Reproducibility error.

Actions to Reduce Repeatability Error: Perform Maintenance on Measurement Equipment. Redesign Equipment for more Tooling/Fixturing Rigidity. Improve the process for locating the part in the equipment.

Measurement Tools

At some point as a CQE, you will be involved in the process of selecting a new piece of measurement equipment.

When performing any task like this, it is always good to keep the end in mind.

In this case the goal is to purchase a piece of equipment that produces reliable data (measurements) so that you can make good quality decisions about the product or process being measured.

So as you go through the selection process there are a number of factors that you should consider which include all the topics discussed above along with a few new topics:

  • Precision
  • Accuracy
  • Resolution
  • Repeatability
  • Reproducibility
  • Linearity
  • Stability & Consistency
  • Shape, Material & Dimensions of the part to be measured
  • Capabilities of your Metrology Lab

Many of these topics were discussed above, so let me just touch on a few of the new items below.

Resolution – To ensure that the equipment you’ve chosen has the proper resolution, you must be familiar with the Rule of 10. This states that your measurement/inspection system must have a resolution that is 10 times finer than the tolerance of the dimension being measured.

Example: A critical dimension on Widget A is 10.00 ± .01″

If you were to follow the Rule of Ten here, you’d want to select a measurement system with 10x the resolution relative to the dimension tolerance (.01″); it would therefore have to resolve to .001″.

This Rule of 10 can also be applied to your metrology department where your calibration standards should be 10x better than your measurement instrumentation.
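A tiny sketch of the Rule of 10 using the Widget A numbers above: the gauge should resolve to one-tenth of the tolerance, and the calibration standard should in turn be roughly 10x better than the gauge.

```python
def required_resolution(tolerance):
    """Rule of 10: the finest increment the instrument must be able to read."""
    return tolerance / 10

dimension_tolerance = 0.01  # Widget A: 10.00 +/- .01 in
gauge_resolution = required_resolution(dimension_tolerance)
standard_accuracy = required_resolution(gauge_resolution)  # same 10x rule applied to the standard

print(f"Gauge must resolve to:            {gauge_resolution:.4f} in")
print(f"Calibration standard better than: {standard_accuracy:.5f} in")
```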

In the list above I mentioned a few new topics, like Stability & Consistency and the Capabilities of your Metrology Lab. The measurement equipment you choose must be stable and consistent enough to make accurate readings throughout a reasonable calibration interval.

Again this is very much aligned with your goal to select a measurement system that reliably produces good data.

In the end you should be able to firmly answer the question: can my measurement system accurately and precisely distinguish between conforming and nonconforming product? If you can, you probably selected the correct measurement system.

Linear Measurement Tools

Below is a high-level overview of the different types of Linear Measurement Systems, starting with a short list of the ones I’ll be covering:

  • Calipers
  • Micrometers
  • Rulers

Calipers are a linear measurement device used to measure the distance between 2 points.

There are many different types of calipers, but in today’s world you should probably be most familiar with digital calipers, which, unlike their predecessors, display the measured distance digitally on a screen instead of on a dial or requiring a 2nd measurement against a known standard.

Caliper

Micrometers are another type of measurement device that use a fixed anvil and a rotating spindle to collect a linear dimension.

These are fairly similar to calipers in that they come in different styles: digital, vernier, etc.


Rulers are the third, and probably most common type of linear measurement system.

Chances are one of your first experiences with taking a linear measurement happened in kindergarten using a ruler.

Ruler

With this type of measurement system, the end of the ruler is aligned with the component to be measured and the resulting dimension is then read off the ruler.

Destructive & Non-destructive Testing

Another key topic for the Certified Quality Engineer is Destructive & Non-Destructive Testing. At this point you’ve learned that your new or existing widget must be inspected for quality, but how?

First you must understand the different available test methods and their distinction as either Destructive or Non-destructive Testing.

Below I will go over a few different test methods for each of these types of testing along with the pros and cons of each, but first a few definitions.

Destructive Testing

Destructive Testing is any test technique that inflicts damage on, or permanently alters, the form, fit and function of the Device Under Test (DUT).

A major downside to destructive testing is that it renders the part unsaleable.

Therefore, using a destructive test method requires the development of a sampling plan to accept or reject lots, which of course carries some risk (see Producer’s Risk & Consumer’s Risk).

Destructive Testing

Additionally, because these samples must be destroyed, destructive testing is more economical to use when mass producing. If you’re only building 3 widgets, don’t destroy one of them!

Examples of destructive testing include cross-sectioning of parts to measure critical dimensions, or performing tensile testing on material to determine tensile strength or other material properties.

Compression testing & hardness testing are other types of test methods that are considered destructive.

Typically destructive test methods are used to determine the material properties of the components you are using.

Non-Destructive Testing

Non-Destructive Testing is any test technique that does not permanently alter the form, fit, function or appearance of your product. Basically, you’re inspecting without destroying or damaging the DUT (Device Under Test).

As opposed to destructive testing, non-destructive testing does not require you to discard the DUT, and therefore you can utilize any form of sampling plan you want, including 100% inspection.

non-destructive testing

Some notable examples of non-destructive testing include Electromagnetic Testing, Radiographic (X-ray) Testing, Ultrasonic Testing, Leak Testing, and Visual Inspection, which is the oldest form of non-destructive testing, is still common today, and is my least favorite form of inspection.