By John R. Gyorki
Calibration costs money, but so do inaccurate instruments.
One of the facts of life in working with sensors, meters, and other measurement devices is that eventually they drift out of tolerance and have to be recalibrated. The question is, how often do you really have to get this done? Calibration is an expensive process, whether it is performed within the organization or contracted out to a vendor. The user’s manual that comes with the device may recommend annual calibration, but is once a year really an appropriate period?
For some devices, governmental regulations dictate calibration frequency, particularly for medical and pharmaceutical instruments or other devices that can affect public safety. In these cases, there is no flexibility in setting up a calibration cycle. In other situations, however, it can be more profitable to find the best balance between calibrating too often (costly) and calibrating too seldom (risking inaccurate measurements).
All instruments degrade with time
Most measuring devices drift out of tolerance, and some need more frequent calibration than others. The reasons depend on the technologies used in the device and on where it is being used. When the device is primarily electronic, the resistors, capacitors, and solid-state components that make it up deteriorate with time and with exposure to heat, cold, and radiation. When the device is composed of mechanical or hydraulic components, it can degrade over time due to temperature, wear, oxidation, or exposure to chemicals. As a result, the accuracy of the measurements made by the device also degrades over time until its specifications are exceeded. Usually, the calibration process can compensate for this degradation through electrical or mechanical adjustments to the device. When calibration cannot bring the instrument back into specification, repairs or part replacements may be needed.
One solution – calibration verification tests
So, how can you determine an appropriate calibration frequency? One good place to start is to obtain test data from the instrument manufacturer on the stability of the instrument over time, that is, how long it takes, on average, for the device to drift out of specification. Some manufacturers make this kind of data readily available to their customers, while others may not have it or may consider it proprietary.
If the stability information is not forthcoming from the manufacturer, you must develop it for your own organization. One way to do this is to perform periodic calibration verification tests (sometimes called “interim checks”) to determine whether the instrument is currently operating within specifications.
When these tests are performed in-house, certain costs are involved, both in hours spent and in the equipment needed to provide known, accurate inputs to the device, sometimes called a “calibration reference.” In some cases, the instrument manufacturer may offer a calibration verification service that is less costly than a full calibration. Regardless of who runs the tests, keep a log of the test results over a period of months and years. This log provides the information needed to develop a calibration cycle that is appropriate for the instrument.
Example of a spreadsheet tool
One way to run a calibration verification test is to apply known inputs to the instrument or device under test and then keep track of the actual values that the instrument produces. A computerized spreadsheet can be helpful in collecting the test data.
In this example, the spreadsheet tracks tests performed on a data acquisition device that measures voltages in several ranges (such as 0 to 1 V, 0 to 2 V, 0 to 5 V, and 0 to 10 V). For each range, the tester applies two known voltages, one at the lower end of the range and the other at the upper end, and notes the readings actually produced by the device. The spreadsheet calculates the deviation between the test input voltage and the measured voltage, and compares the deviation to that allowed by the accuracy specifications for the device:
Deviation = |Test Input – Reading|
Pass if Deviation <= Specification
If the deviation is within the specification, the spreadsheet notes that the device has passed the test for that particular input voltage and range. If the deviation exceeds the specification, the device fails the test for that input voltage and range. Usually, the device must pass all of the input tests if it is to pass the overall test.
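For illustration, here is a minimal Python sketch of the pass/fail logic such a spreadsheet implements. The ranges, applied voltages, readings, and accuracy specifications shown are made-up example values, not those of any particular device.

```python
# Sketch of the per-point pass/fail logic of a calibration verification test.
# All ranges, test inputs, readings, and allowed deviations are illustrative.

# (range name, applied test input [V], instrument reading [V], allowed deviation [V])
test_points = [
    ("0-1 V",  0.100, 0.1004, 0.0010),
    ("0-1 V",  0.900, 0.9008, 0.0010),
    ("0-10 V", 1.000, 1.0030, 0.0100),
    ("0-10 V", 9.000, 9.0060, 0.0100),
]

overall_pass = True
for rng, applied, reading, spec in test_points:
    deviation = abs(applied - reading)       # Deviation = |Test Input - Reading|
    point_pass = deviation <= spec           # pass if within the accuracy spec
    overall_pass = overall_pass and point_pass
    print(f"{rng}: applied {applied} V, read {reading} V, "
          f"deviation {deviation:.4f} V -> {'PASS' if point_pass else 'FAIL'}")

# The device passes the overall verification only if every point passes.
print("Overall:", "PASS" if overall_pass else "FAIL")
```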
This calibration verification test can be performed on a periodic basis (say, once every three months), or it can be performed just before the instrument is actually used for a production test. In general, it is better to perform periodic tests so that the calibration history is more complete.
Develop a calibration trend history
The value of any one calibration verification test is that it confirms the current ability of the instrument to perform within specifications. However, if the test data are logged over time, the trend of the data can help determine an appropriate calibration cycle for the instrument.
In this example, the last row in the calibration spreadsheet computes the worst-case deviation of the measured value from the input value, expressed as a percentage of the allowed deviation, across all voltage ranges and all applied test voltages. This worst-case number is a measure of the device’s overall adherence to its published specifications: as long as it stays below 100%, the instrument as a whole is within specification.
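That last-row calculation can be sketched in a few lines; the (deviation, allowed deviation) pairs below are again made-up values for illustration.

```python
# Worst-case deviation for one verification test, expressed as a percentage
# of the deviation allowed at each test point. Pairs are illustrative only.
deviations_and_specs = [
    (0.0004, 0.0010),
    (0.0008, 0.0010),
    (0.0030, 0.0100),
    (0.0060, 0.0100),
]

worst_case_pct = max(dev / spec for dev, spec in deviations_and_specs) * 100
print(f"Worst-case deviation: {worst_case_pct:.0f}% of the allowed deviation")
# A result below 100% means every range is still within its accuracy spec.
```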
Using this information, create a plot of the worst-case deviation number as a function of time over the course of several calibration verification tests. In one example of such a plot, monthly tests over 11 months produced a trend indicating that the instrument was likely to go out of calibration (exceed its accuracy specifications) about 15 months after its previous calibration. That result suggests an annual calibration cycle is quite appropriate for this particular instrument. On the other hand, a trend line showing the instrument going out of calibration after only nine months would indicate that a shorter calibration cycle is more appropriate.
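As a sketch of that extrapolation, the following Python snippet fits a straight trend line to made-up monthly worst-case numbers (constructed only to mirror the kind of trend described above) and estimates when the instrument would cross the 100% threshold.

```python
import numpy as np

# Illustrative (made-up) monthly worst-case deviation results, in percent of
# the allowed deviation, logged over 11 months following a calibration.
months = np.arange(1, 12)                 # months since the last calibration
worst_case_pct = np.array([21, 26, 33, 37, 44, 49, 54, 61, 66, 71, 78])

# Fit a straight trend line: worst_case_pct ~= slope * month + intercept.
slope, intercept = np.polyfit(months, worst_case_pct, 1)

# Estimate when the trend crosses 100% of the allowed deviation,
# i.e. when the instrument is likely to drift out of specification.
months_to_out_of_spec = (100 - intercept) / slope
print(f"Projected out-of-spec point: about {months_to_out_of_spec:.0f} months "
      f"after the last calibration")
```

A simple linear fit like this is only a first approximation; drift that accelerates or levels off would call for more data points or a different trend model.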
The bottom line
The calibration of instruments, sensors, and other measuring devices is an expensive process. However, using an instrument to collect critical data and then finding out later that it is out of calibration is even more expensive in terms of wasted time and effort. Setting up a procedure for performing periodic calibration verification tests is one good way to gather the data needed to establish a calibration cycle that minimizes these expenses.
:: Design World ::