What is Calibration and why is it important?

Calibration is the process of comparing inspection, measuring, and test instruments to a recognized reference standard of known certified accuracy and precision, noting the difference, and adjusting the instrument, where possible, to agree with the standard.
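As a minimal sketch of this compare-and-adjust step (all values here are hypothetical), the instrument reads a certified standard, and the difference becomes the error to be adjusted out or recorded as a correction:

```python
standard_value = 25.000      # certified value of the reference standard (e.g., mm)
instrument_reading = 25.012  # what the instrument reports for that standard

error = instrument_reading - standard_value
correction = -error          # applied by adjustment where possible, else recorded
print(f"error = {error:+.3f}, correction = {correction:+.3f}")
```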

Fundamental to a systematic program of instrument calibration and periodic recalibration is the recognition that instrument performance is not constant. Extended use, wear, design, environment, and time are among the factors that degrade an instrument's performance and accuracy.

A calibration system is designed to assure the verification, maintenance, and validation of the instrument’s desired accuracy and precision.

Selection of appropriate inspection, measuring, and test equipment is an integral part of inspection planning, and success depends on such factors as the measurements to be made and the accuracy requirements. Included are hardware items such as instruments, fixtures, gauges, and templates; software for computer-aided inspection; and process instrumentation.

Also included is all testing equipment used in the development, manufacture, installation, and servicing of a product.

ELEMENTS OF A CALIBRATION PROGRAM

A calibration program is designed to maintain control over all the inspection and measurement systems. The elements of the program include:

• Selection and acquisition of equipment appropriate to the need

• Validation of equipment (hardware, software, and procedures) for accuracy and precision prior to first use

• Suitable environmental conditions for calibration, inspection, testing, and measurement

• Traceability to a reference standard of known accuracy and stability

• Accuracy ratio

• Frequency of calibration

• Handling, preserving, and storage

• Recall system for periodic maintenance, repairs, adjustments, and recalibration

• Establishment of procedures for periodic recalibrations

• Documentation

Selection and acquisition of appropriate equipment and validation of its accuracy and precision are part of the inspection or quality planning decision. The remaining elements listed above are the primary aspects of a calibration system.

TRACEABILITY

Traceability is the chain of measurements and accuracy transfers that connects the nation's measurement standards, as maintained by the National Institute of Standards and Technology (NIST), with the measurements made in research, manufacturing, and the marketplace.

It is important to provide evidence that the chain exists and is intact, and that at each link in the chain, that is, at each transfer from the primary reference standards at NIST or from secondary standards, consideration is given to the measurement errors associated with that particular transfer. This brings us to the concepts of accuracy, precision, and accuracy ratio.
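One way to see why each transfer matters: if the transfer errors are independent, their standard uncertainties are commonly combined by root-sum-square, so every link in the chain can only add to the total. A short sketch, with invented numbers:

```python
import math

# Hypothetical standard uncertainties (micrometers) at each transfer in a
# traceability chain, from the primary standard down to the shop-floor gauge.
transfer_uncertainties_um = [0.05, 0.2, 0.8, 2.0]

# Root-sum-square combination, assuming independent transfer errors;
# each additional link can only increase the combined uncertainty.
combined_um = math.sqrt(sum(u**2 for u in transfer_uncertainties_um))
print(f"combined uncertainty: {combined_um:.2f} um")  # about 2.16 um
```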

ACCURACY VERSUS PRECISION

These are important concepts in calibration. Accuracy is defined as the agreement between a measured value and the true value.

When this agreement is within an acceptable range, the measurement is said to be within tolerance. Precision, on the other hand, is the closeness of agreement among repeated measurement values. Thus, an instrument can be precise but not accurate.
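A quick numeric sketch of the distinction (the readings are hypothetical): the offset of the mean from the true value reflects accuracy, while the scatter of repeated readings reflects precision.

```python
import statistics

true_value = 10.000  # known reference value (hypothetical units)
readings = [10.021, 10.019, 10.022, 10.020, 10.021]  # repeated measurements

bias = statistics.mean(readings) - true_value  # accuracy: offset from truth
spread = statistics.stdev(readings)            # precision: scatter of repeats

# Tiny spread but a clear bias: precise, yet not accurate.
print(f"bias = {bias:+.4f}, spread = {spread:.4f}")
```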

ACCURACY RATIO

The accuracy ratio is the relationship between the accuracy of the measurement standard and the accuracy of the equipment or instrument being calibrated. To assign a stated accuracy to a particular characteristic of a measuring device, it is necessary to have a device of somewhat better accuracy with which to compare it.

The further an instrument sits from the primary standard in the traceability chain, the poorer its achievable accuracy. Depending on the accuracy ratio in instrument calibration, compensation may have to be provided in the recalibration.

These methods may include setting tolerance bands for accuracy, correction factors, statistical methods, replicated testing, and other more sophisticated techniques.
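As a concrete sketch of the accuracy-ratio check (the 4:1 threshold is a common metrology rule of thumb, not something the text above specifies, and the values are hypothetical):

```python
def accuracy_ratio(uut_tolerance: float, standard_uncertainty: float) -> float:
    """Ratio of the unit-under-test tolerance to the standard's uncertainty."""
    return uut_tolerance / standard_uncertainty

# Hypothetical values: a gauge with a +/-0.010 mm tolerance calibrated
# against a standard good to +/-0.002 mm yields a 5:1 ratio.
ratio = accuracy_ratio(0.010, 0.002)

# Below roughly 4:1, the compensation methods above (tolerance bands,
# correction factors, and so on) become necessary.
print(f"{ratio:.0f}:1 -> {'adequate' if ratio >= 4 else 'needs compensation'}")
```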

FREQUENCY OF CALIBRATION

The next important concept is the frequency of calibration, or the interval between recalibrations. Most recalibration programs are based on some function of time. Intervals between recalibrations should be based on such factors as the instrument's purpose, the stability of its measurements (under different conditions), observed drift (slow variation over time), and the degree of usage.

The interval between calibrations must be shortened when the prior calibration history shows this is necessary to assure the required accuracy; conversely, it may be possible to lengthen the interval if historical calibration data indicate no degradation of the instrument's accuracy.

A fixed-interval recalibration program for all measuring instruments is the simplest to administer, but it fails to recognize differences in instrument type and application. A slight variation of this approach involves setting recalibration intervals by instrument type or group.

This method recognizes the differences among types of instruments but takes no account of variations in application or of differences among individual instruments within a group. A more elaborate version of this method adjusts the intervals from time to time, based on analysis of recalibration results.

The most sophisticated method is the one that takes into account differences between instrument types, between instruments of the same type, and between applications. It starts by establishing an initial interval for a type or group of instruments and then adjusts the interval of each instrument independently based upon the analysis of its own recalibration history.

The analysis is relatively simple and consists of determining whether the instrument was "in" or "out" of accuracy tolerances at the time of each recalibration. If the instrument is found in tolerance, the interval may be lengthened; conversely, the interval must be shortened if the instrument is found out of tolerance at recalibration.
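A minimal sketch of this in/out adjustment rule (the growth and shrink factors and the interval bounds are illustrative assumptions, not values from the text):

```python
def next_interval(current_days: int, in_tolerance: bool,
                  grow: float = 1.25, shrink: float = 0.5,
                  min_days: int = 30, max_days: int = 730) -> int:
    """Adjust one instrument's recalibration interval from its last result.

    Lengthen the interval when the instrument was found in tolerance,
    shorten it when it was found out of tolerance.
    """
    factor = grow if in_tolerance else shrink
    return max(min_days, min(max_days, round(current_days * factor)))

interval = 180
for in_tol in [True, True, False, True]:  # hypothetical calibration history
    interval = next_interval(interval, in_tol)
    print(interval)  # 225, 281, 140, 175
```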

The underlying assumption of this approach is that each instrument has an individually unique ability to remain within tolerances for a certain period, barring breakdowns.

Although it is technically the most sophisticated, cost-effective, and dynamic method, its administrative complexity requires a high degree of support from an automated data processing system to maintain the calibration history of each piece of equipment.

DOCUMENTATION

Documentation provides evidence of compliance with the program requirements in case of an internal or an external quality assurance audit. In addition to the documentation of the overall program requirements, one also needs to establish, document, and maintain calibration procedures for all of the equipment covered by the program requirements.

This also includes documentation of the recall protocol and of each instrument's prior calibration history, which supports decisions on recalibration frequency.
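One way such per-instrument records might be structured (the field names are illustrative, not prescribed by any standard cited here):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CalibrationEvent:
    performed_on: date
    standard_id: str    # traceable reference standard used
    in_tolerance: bool  # found "in" or "out" at recalibration
    notes: str = ""

@dataclass
class InstrumentRecord:
    instrument_id: str
    procedure_ref: str  # the documented calibration procedure
    next_due: date      # drives the recall system
    history: list[CalibrationEvent] = field(default_factory=list)
```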

Documentation should also include the procedures and precautions for handling, preserving, and protecting the equipment to ensure its accuracy and precision under varying environmental conditions during storage, transportation, handling, and use.
