Measurement Principles and Error Analysis in Metrology
1. Definition of Measurement
The measurement process involves experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity.
2. Uncorrected Result, Corrected Result, Final Result
- Uncorrected result: Result of a measurement before correction for systematic error.
- Corrected result: Result of a measurement after correction for systematic error.
- Final result: Result of a measurement after correction for systematic error, accompanied by estimated uncertainty from random errors.
3. Principle of Measurement, Measurement Method
- Principle of the measurement: Phenomenon serving as a basis of a measurement.
- Measurement method: Generic description of a logical organization of operations used in a measurement.
Measurement methods may be qualified as direct (e.g., reading a length with a caliper) or indirect (e.g., determining density from measured mass and volume).
4. True Value, Conventional True Value
- True value: Value consistent with the definition of a given particular quantity.
Notes:
- This is a value that would be obtained by a perfect measurement.
- True values are by nature indeterminate.
- Conventional true value: Value attributed to a particular quantity and accepted, sometimes by convention, as having an uncertainty appropriate for a given purpose.
5. Measurement Error: Definition and Absolute Error Formula
Measurement error reflects the fact that all measurement results, including those obtained with very high-precision instruments and high experimental accuracy, are approximate rather than exact.
Absolute error formula: Δx = x – x0, where x is the measured value and x0 is the true (or conventional true) value.
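For example (illustrative values), if a gauge block with conventional true length x0 = 10.000 mm is measured as x = 10.020 mm, the absolute error is Δx = 10.020 mm – 10.000 mm = +0.020 mm.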
6. Classification of Measuring Errors
- Systematic measurement error is that component of measurement error that, in replicate measurements, remains constant or varies in a predictable manner.
- Random measurement error is that component of measurement error that varies randomly (in sign and magnitude) for repeated measurements of one and the same quantity.
- Gross measurement error (failure) refers to measurement error significantly exceeding the error expected under the given conditions.
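A small simulation, with hypothetical values, illustrating the first two components: a constant systematic offset (bias) plus random scatter around it.

```python
# Illustrative simulation of error components (all values hypothetical):
# a constant systematic bias plus random scatter.
import numpy as np

rng = np.random.default_rng(1)
true_value = 50.0
systematic = +0.12                      # assumed constant instrument bias
readings = true_value + systematic + rng.normal(0.0, 0.03, size=1000)

print(f"mean error  = {readings.mean() - true_value:+.3f}")  # ~ +0.12 (systematic)
print(f"scatter (s) = {readings.std(ddof=1):.3f}")           # ~ 0.03 (random)
```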
7. What is the Probability Density Function (PDF)?
The probability density function is used to specify the probability of the random variable falling within a particular range of values. This probability is given by the integral of this variable’s PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range.
8. Properties of PDF
- Non-negativity: f(x) ≥ 0 for every x.
- Normalization: the integral of f(x) over the whole real axis equals 1.
Both properties, together with the range probability from section 7, are checked numerically below.
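A minimal sketch, assuming Python with numpy and scipy available, that checks both properties for a standard normal distribution and computes a range probability as the area under the PDF:

```python
import numpy as np
from scipy import integrate, stats

f = stats.norm(loc=0.0, scale=1.0).pdf  # standard normal PDF

# Property 1: f(x) >= 0 everywhere (checked on a grid).
xs = np.linspace(-10, 10, 1001)
assert np.all(f(xs) >= 0.0)

# Property 2: the PDF integrates to 1 over the whole real axis.
total, _ = integrate.quad(f, -np.inf, np.inf)
print(f"integral of f over R = {total:.6f}")  # ~1.000000

# Probability of falling within a range = area under the PDF over that range;
# here P(-1 <= X <= 1), which also equals CDF(1) - CDF(-1).
p, _ = integrate.quad(f, -1.0, 1.0)
print(f"P(-1 <= X <= 1) = {p:.4f}")  # ~0.6827
```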
9. Main Gauss Theory Assumptions
- If we measure the same quantity repeatedly under unchanged conditions, the most probable value of the measured quantity is the arithmetic mean of the observations.
- The accuracy of the measurement is affected by many small factors acting simultaneously, so negative and positive errors are equally probable.
- The probability of an error decreases as its magnitude increases, i.e., with its distance from the mean value.
A small simulation below illustrates these assumptions.
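A sketch on synthetic repeated measurements; the true value (10.0) and spread (0.05) are illustrative choices, not data from the text.

```python
# Simulated repeated measurements of one quantity under unchanged conditions.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0                                            # illustrative
readings = true_value + rng.normal(0.0, 0.05, size=10_000)   # random errors
errors = readings - true_value

print(f"mean of readings    = {readings.mean():.4f}")     # ~10.0 -> best estimate
print(f"share of errors > 0 = {(errors > 0).mean():.3f}") # ~0.5  -> symmetry
# Large errors are rarer than small ones:
print(f"P(|error| < 0.05)   = {(np.abs(errors) < 0.05).mean():.3f}")  # ~0.68
print(f"P(|error| < 0.15)   = {(np.abs(errors) < 0.15).mean():.3f}")  # ~1.00
```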
10. Normal Gaussian Distribution
The normal PDF with mean μ and standard deviation σ is:
f(x) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²))
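As a cross-check of the formula above, a short sketch (assuming scipy is available) that implements it directly and compares it with scipy.stats.norm.pdf:

```python
import math
from scipy import stats

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(x - mu)**2 / (2*sigma**2))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(normal_pdf(0.2, mu=0.0, sigma=1.0))       # ~0.39104
print(stats.norm.pdf(0.2, loc=0.0, scale=1.0))  # same value
```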
11. Confidence Interval: Definition
Confidence interval: A range of values so defined that there is a specified probability p = 1 – α that the value of a measurement lies within it.
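A sketch of a two-sided confidence interval for the mean of a small series, using Student's t-distribution; the readings here are hypothetical.

```python
import numpy as np
from scipy import stats

readings = np.array([9.98, 10.02, 10.01, 9.99, 10.03, 10.00])  # hypothetical
alpha = 0.05                                 # p = 1 - alpha = 0.95
n = readings.size
mean = readings.mean()
s_mean = readings.std(ddof=1) / np.sqrt(n)   # standard deviation of the mean
t = stats.t.ppf(1 - alpha / 2, df=n - 1)     # two-sided coverage factor

print(f"{mean - t * s_mean:.4f} .. {mean + t * s_mean:.4f}  (p = {1 - alpha})")
```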
12. Measurement Uncertainty
Measurement uncertainty: An interval, symmetric about the measurement result (the mean value), in which the true value of the measured quantity is included with a specified probability p.
Standard uncertainty is expressed as a standard deviation and is evaluated in one of two ways:
- Type A evaluation of standard uncertainty: Method of evaluation of uncertainty by the statistical analysis of a series of observations. This method primarily includes random errors.
- Type B evaluation of standard uncertainty: Method of evaluation of uncertainty by means other than the statistical analysis of series of observations. This method includes systematic errors and any other uncertainty factors that the experimenter believes are important.
Expanded uncertainty: the product of a standard uncertainty u and a coverage factor k larger than one:
U = ku
Where: k is the coverage factor, typically from the range 2-3, and depends on the assumed confidence level.
According to the ISO recommendations, the measurement result can be given with either standard or expanded uncertainty; a numerical sketch follows.
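A minimal sketch of the definitions above: a Type A standard uncertainty taken as the standard deviation of the mean, then expanded with k = 2 (the readings are hypothetical).

```python
import numpy as np

readings = np.array([4.512, 4.508, 4.515, 4.510, 4.509, 4.513])  # hypothetical
n = readings.size
mean = readings.mean()
u = readings.std(ddof=1) / np.sqrt(n)  # Type A standard uncertainty
k = 2                                  # coverage factor (~95 % for normal data)
U = k * u                              # expanded uncertainty

print(f"mean = {mean:.4f}, u = {u:.4f}, U = {U:.4f} (k = {k})")
```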
13. How Can the Final Result Be Written?
- Mean value
- Correction for systematic errors
- Uncertainty
Example (illustrative values): l = (25.042 ± 0.004) mm, k = 2, i.e., the corrected mean accompanied by its expanded uncertainty, as computed in the sketch below.
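A short sketch assembling the final result from the three parts listed above; all numbers are hypothetical.

```python
# Final result = corrected mean ± expanded uncertainty (hypothetical values, mm).
mean = 25.0462          # uncorrected mean
correction = -0.0040    # correction for a known systematic error
U = 0.004               # expanded uncertainty, k = 2

corrected = mean + correction
print(f"l = ({corrected:.3f} ± {U:.3f}) mm  (k = 2)")
```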
14. Definitions in Dimensional Metrology
A) Basic size: The size used when the nominal size is converted to decimal form, and from which deviations are applied to produce the limit dimensions.
B) Limit dimension: The lower and upper permitted sizes for a single feature dimension. For example, 0.500-0.506 inch, where 0.500 inch is the lower limit and 0.506 inch is the upper limit dimension.
C) Tolerance: The allowable variation in a given size needed to achieve proper function. Tolerance equals the difference between the upper and lower limit dimensions (see the worked sketch after this list).
D) Maximum material condition: In this condition, a hole is at its smallest limit dimension and a shaft is at its largest limit dimension. This condition exists at minimum clearance or maximum interference.
E) Fit: The general term fit describes the range of tightness designed into parts that assemble one into another.
F) The fit can be classified under three categories: clearance fit, interference (force) fit, and transition fit.
G) Allowance: An alternative expression for the tightest possible fit, which is minimum clearance or maximum interference.
H) Deviation: The algebraic difference between a size and the corresponding basic size. The basic size is the size to which the limits of deviation are assigned; it is the same for both parts of a fit.
I) Lower deviation: Difference between the minimum limit of the part's size and the corresponding basic size. It is designated "EI" for a hole and "ei" for a shaft.
J) Upper deviation: Difference between the maximum limit of the part's size and the corresponding basic size. It is designated "ES" for a hole and "es" for a shaft.
K) Fundamental deviation: That one of the two deviations (upper or lower) that is closest to the basic size; it fixes the position of the tolerance zone relative to the basic size.
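A worked sketch of these definitions for a hypothetical 30 mm hole/shaft pair (limits chosen purely for illustration): tolerance, the ES/EI and es/ei deviations, allowance, and the fit category.

```python
# Hypothetical hole/shaft limit dimensions in mm (illustrative only).
basic = 30.000
hole_min, hole_max = 30.000, 30.021     # hole limit dimensions
shaft_min, shaft_max = 29.980, 29.993   # shaft limit dimensions

hole_tol = hole_max - hole_min                 # tolerance = upper - lower limit
ES, EI = hole_max - basic, hole_min - basic    # hole deviations
es, ei = shaft_max - basic, shaft_min - basic  # shaft deviations

min_clearance = hole_min - shaft_max    # allowance (tightest possible fit)
max_clearance = hole_max - shaft_min

if min_clearance > 0:
    fit = "clearance fit"
elif max_clearance < 0:
    fit = "interference (force) fit"
else:
    fit = "transition fit"

print(f"hole tolerance = {hole_tol:.3f}, ES = {ES:+.3f}, EI = {EI:+.3f}")
print(f"es = {es:+.3f}, ei = {ei:+.3f}")
print(f"allowance = {min_clearance:+.3f}  ->  {fit}")
```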