
Ensuring an accurate result in an analytical instrumentation system

In many analytical instrumentation systems, the analyser does not provide an absolute measurement. Rather, it provides a relative response based on settings established during calibration, which is a critical process subject to significant error.

To calibrate an analyser, a calibration fluid of known contents and quantities is passed through the analyser, producing measurements of component concentration. If these measurements are not consistent with the known quantities in the calibration fluid, the analyser is adjusted accordingly.

Later, when process samples are analysed, the accuracy of the analyser’s reading will depend on the accuracy of the calibration process. It is therefore imperative that we understand how error or contamination can be introduced through calibration; when calibration can — and cannot — address a perceived performance issue with the analyser; how atmospheric pressure or temperature fluctuations can undo the work of calibration; and when and when not to calibrate.

System design

One common problem in calibration is incorrect system configuration. In many cases, the calibration fluid is mistakenly introduced downstream of the stream selection valve system and without the benefits of a double block and bleed configuration (see figure 1).

A better place to introduce the calibration fluid would be through the sample stream selection system, as in Figure 2. The purpose of a sample stream selection system is to enable rapid replacement of sample streams without the risk of cross contamination.

In figures 1 and 2, each stream in the sample stream selection system is outfitted with two block valves and a bleed valve (to vent) to ensure that one stream — and only one stream — is making its way to the analyser at one time.

Over the years, stream selection systems have evolved from double block and bleed (DBB) configurations composed of conventional components to modular, miniaturised systems (New Sampling/Sensor Initiative, ANSI/ISA 76.00.02). The most efficient systems provide fast purge times, low valve actuation pressures, and enhanced safety characteristics, together with high flow capacity and consistent pressure drop from stream to stream for a predictable delivery time to the analyser.

A stream selection system provides the greatest insurance against the possibility of the calibration fluid leaking into a sample stream. Nevertheless, some technicians will bypass this assembly and locate the calibration fluid as close as possible to the analyser with the intent of conserving this expensive fluid.

If only a single ball valve is employed, as in Figure 1, the attempt to conserve calibration gas may result in biased analyser readings. The analyser may be properly calibrated, but there is always the risk that a small amount of calibration gas could leak into the sample stream and throw off the measurements.

Limitations of calibration

To effectively calibrate an analyser, the operator, technician or engineer should understand, theoretically, what calibration is, what it can correct and what it cannot. Let’s start with the difference between precision and accuracy. A shooter’s target is a good metaphor for explanatory purposes.

In Figure 3, the shooter has produced a series of hits (in red) on the target. Since the hits are tightly clustered, it can rightly be said that the shooter is precise: time and again, the shots land in the same place. Precision yields repeatable outcomes. However, the shooter is not hitting the centre of the target and, therefore, is not accurate. If the shooter makes an adjustment and lands all of the hits in the centre of the target, the result is both precise and accurate.

The same terms can be applied to analysers. An analyser must first be precise. It must yield repeatable results when presented with a known quantity in the form of a calibration fluid. If it does not, then the analyser is malfunctioning or the system is not maintaining the sample at constant conditions. Calibration cannot correct for imprecision.

If the analyser produces consistent results but the results are not the same as the known composition of the calibration fluid, then the analyser is said to be inaccurate. This situation can and should be addressed through calibration. This is called correcting the bias.
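The distinction between repeatability (precision) and a correctable bias (accuracy) can be sketched numerically. The readings and the 5.00 mol% reference value below are hypothetical, purely for illustration:

```python
import statistics

# Hypothetical repeated readings from an analyser fed a calibration
# gas with a known component concentration of 5.00 mol%.
known = 5.00
readings = [5.21, 5.19, 5.22, 5.20, 5.21]

spread = statistics.stdev(readings)       # repeatability (precision)
bias = statistics.mean(readings) - known  # systematic offset (accuracy)

print(f"std dev: {spread:.3f}  bias: {bias:+.3f}")
# A small spread with a non-zero bias is the correctable case:
# the analyser is precise, and calibration can remove the bias.
# A large spread is imprecision, which calibration cannot fix.
```

The rule of thumb follows directly: adjust the analyser only when the spread is small and the offset is consistent.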

Even if the analyser is found to be precise and accurate when tested with calibration fluids, it is still possible that it will yield inaccurate results when analysing the sample stream. If the analyser is asked to count red molecules and it encounters pink ones, what does it do?

The pink molecules look red to the analyser so it counts them as red, resulting in an inflated red count. This is called positive interference: A molecule that should not be counted is counted because, to the analyser, it looks similar to the molecule that should be counted. For example, in a system designed to count propane molecules, propylene molecules may show up. It’s possible that the analyser will count them as propane because it was not configured to make a distinction between the two.

No analyser is perfect, but they all strive for ‘selectivity’, which means they respond only to the molecules you want them to and to nothing else. Some analysers are more complex and are programmed to chemically inhibit certain types of interference. For example, a total organic carbon (TOC) analyser is designed to measure the carbon content of waste water so it can be determined whether hydrocarbons are being disposed of inappropriately.

In order to do so accurately, the analyser removes a source of positive interference — inorganic carbons, like limestone, which are present in hard water. Then, it measures the organic carbons only. Without this initial step, the analyser would measure both organic and inorganic carbon, confusing hydrocarbons with hard water.

Another type of interference is negative interference: A molecule that should be counted isn’t counted because another molecule is hiding it. For example, in fluoridated drinking water, an electrode is used to analyse the amount of fluoride in the water. However, hydrogen ions, which are common in drinking water, hide the fluoride so the count is inaccurately low. The analyser may read 1 ppm, which is a standard dose but, in fact, the water may contain 10 ppm. The solution is to remove the source of interference. By introducing a buffer solution, the hydrogen ions are removed and the electrode can accurately measure the fluoride.

Controlling for atmospheric changes

Gas analysers are essentially molecule counters. When they are calibrated, a known concentration of gas is introduced, and the analyser’s output is checked to ensure that it is counting correctly. But what happens when the atmospheric pressure changes by 5 to 10 percent as it is known to do in some climates? The number of molecules in a given volume will vary with the change in atmospheric pressure and as a result the analyser’s count will change.

There is a common misperception that atmospheric pressure is a constant 14.7 psia (1.01 bar(a)), but, based on the weather, it may fluctuate as much as 1 psi (0.07 bar) up or down. In order for the calibration process to be effective, absolute pressure in the sampling system during calibration and during analysis of samples must be the same. Absolute pressure may be defined as the total pressure above a perfect vacuum. In a sampling system, it is the system pressure as measured by a gauge plus atmospheric pressure.
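A quick illustration of this definition, using hypothetical gauge and atmospheric values:

```python
# Absolute pressure = gauge reading + local atmospheric pressure.
# Illustrative values only.
gauge_psig = 30.0   # hypothetical sampling-system gauge reading
atmos_psia = 14.7   # nominal atmosphere; weather can shift it ~±1 psi

absolute_psia = gauge_psig + atmos_psia   # 44.7 psia
# If the weather drops atmospheric pressure by 1 psi while the gauge
# still reads 30.0, the true absolute pressure is 43.7 psia -- a
# change the analyser "sees" even though the gauge did not move.
print(absolute_psia)
```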

To understand the degree of fluctuation in measurement that may be brought about by changes in absolute pressure, let’s refer to the perfect gas law:

PV = nRT

where P = pressure, psia; V = volume, cubic in.; n = number of moles (molecules); R = gas constant; and T = absolute temperature, °R (Rankine). Rearranging this equation to read

n = PV/RT

shows that as temperature and pressure change, the number of molecules present in a fixed volume also changes. Pressure changes are more critical than temperature fluctuations. One atmosphere of pressure is defined as 14.7 psi. Therefore, a 1 psi variation in pressure changes the number of molecules in the analyser volume by about 7 percent.

Temperature, on the other hand, is measured on the absolute scale, keeping in mind that absolute zero is -460°F (-273°C), so a 1°F (0.56°C) temperature variation changes the number of molecules by only about 0.2%. In short, a large percentage change in pressure is probable; a large percentage change in absolute temperature is not.
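Both sensitivities fall out of n = PV/RT with a back-of-the-envelope calculation. The baseline pressure and temperature below are illustrative assumptions (one standard atmosphere, roughly room temperature):

```python
# Relative change in molecule count n = PV/RT for small changes in
# absolute pressure and absolute temperature (illustrative baseline).
P = 14.7        # absolute pressure, psia (one standard atmosphere)
T = 77 + 460    # 77 degF expressed on the absolute (Rankine) scale

# A 1 psi swing in atmospheric pressure changes n in proportion:
dn_pressure = 1.0 / P   # ~0.068, i.e. about 7 percent

# A 1 degF swing in temperature changes n by only:
dn_temp = 1.0 / T       # ~0.002, i.e. about 0.2 percent

print(f"pressure: {dn_pressure:.1%}  temperature: {dn_temp:.2%}")
```

This is why an uncorrected 1 psi weather swing between calibration and analysis can bias a gas analyser by roughly 7 percent, while ordinary temperature drift is a second-order concern.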

If pressure is so critical, how does one control for it? Some analysers, especially infrared and ultraviolet, allow atmospheric pressure to affect the reading but then later correct for it electronically. However, many analysers, including nearly all gas chromatographs, do not correct for atmospheric pressure fluctuations; most systems do not correct for it; and many system engineers or operators are satisfied to ignore it.

Some believe that atmospheric fluctuations are not significant. Others maintain that any atmospheric fluctuations are compensated for by other related or unrelated variables affecting the analyser, and that it all comes out in the wash. Nevertheless, atmospheric fluctuations can be extremely significant. Suppose that when you calibrate your analyser, the atmospheric pressure is X, but, later, when you inject the process gas, the atmospheric pressure is X + 1 psi (0.07 bar). The reading may be off by as much as 7 percent.

Conclusion

Calibration is an important process and an absolute requirement in analytical systems, but care must be taken to perform this process properly. The operator, technician or engineer should understand how best to introduce the calibration gas into the system (i.e., through a DBB configuration so the possibility of cross-stream contamination is minimised) and how to control for atmospheric fluctuations in gas analysers (i.e., through an absolute pressure regulator).

Further, the technician or operator should understand the limitations of calibration — what problems it can address and what problems it cannot — and how frequent adjustments to the analyser based on incomplete data can introduce error. If the analyser is regularly validated with an automated system and is properly calibrated when a statistical analysis justifies it, then calibration will function as it should, and provide an important service in enabling the analyser to provide accurate measurements.

[Doug Nordstrom and Tony Waters are analytical instrumentation specialists at Swagelok in the US.]
