Friday, May 8, 2009

Low Pressure Sensing and Control

The demand to sense and quantify low “draft pressures” accurately within the fractional psi and water column ranges has prompted sensor manufacturers to revisit this niche measurement requirement. Many technologies are in use that convert a sensed force into a corresponding output signal. When addressing pressures in the tens to hundreds of psi, the forces available are substantial, and thus detection of this force is simplified. For a given pressure range, a prescribed sensing-element area is selected to optimize the characteristics of the required pneumatic-to-force conversion.

The sensing element’s cross-sectional area is ratiometric to the magnitude of the pressure to be measured by the force-sensing diaphragm. Viable resolution and accuracy in the low-pressure domain require a proportionally more sensitive element.
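To put the force conversion in perspective, the short C sketch below compares the force developed across an assumed 0.2 sq. in. diaphragm at a 100 psi range point versus a 0.5 in. WC draft pressure. The area and conversion constant are illustrative assumptions, not values from any particular sensor.

#include <stdio.h>

#define IN_WC_TO_PSI 0.0361  /* 1 inch of water column is about 0.0361 psi */

int main(void)
{
    double area_in2   = 0.20;               /* assumed diaphragm area, in^2 */
    double p_high_psi = 100.0;              /* conventional mid-range point */
    double p_low_psi  = 0.5 * IN_WC_TO_PSI; /* 0.5 in. WC draft pressure */

    printf("Force at 100 psi:   %.4f lbf\n", p_high_psi * area_in2);
    printf("Force at 0.5 in WC: %.6f lbf\n", p_low_psi * area_in2);

    /* Recovering the same working force at the draft pressure requires
     * the element area to grow by the full pressure ratio: */
    printf("Area ratio required: %.0f : 1\n", p_high_psi / p_low_psi);
    return 0;
}

The ratio printed (several thousand to one) is why the low-pressure domain demands a proportionally more sensitive element.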

A new series of concerns evolves with this higher level of sensitivity. An idealized “pressure” sensor should detect only pneumatic pressure changes. That is, the change in the sensor’s output signal should reflect only pressure effects.

In the real world, this is not the case. All pressure sensors have varying susceptibility to a variety of external stimuli other than pressure alone. This susceptibility is compounded when a sensor is characterized to accommodate the low-pressure domain.

Intelligent Signal Processing Techniques

Certain stimuli such as thermal effects on null and span can be compensated for, as these become repeatable, quantifiable knowns for a specific primary sensor. With the advent of intelligent, microprocessor-based signal processing, the three key attributes required are repeatability, thermal retrace, and long-term stability.

Given the known uncertainty of the intelligent signal processing, and assuming multipoint linearization is in the offing, the greatest sources of error become repeatability, thermal retrace, and sensor long-term drift rates, combined with signal-processing input/output translation errors. Linearity typically is not a concern with such a signal-processing approach.

Accurate measurement of ambient temperature and of the primary sensor’s output signal is paramount in optimizing overall pressure-measurement certainty.

Sources of Uncertainties

An overview of induced signal processing errors and signal translation accuracy is presented in Figure 1. By assuming an idealized situation exists prior to the analog FET switch used for parameter selection, we may limit our discussions to signal processing error considerations.

The configuration lends itself to typical analog-type sensing devices where A/D (analog to digital) conversion is required, rather than the more exotic vibrating cylinder or diaphragm approaches that may bypass the initial A/D stage. The end result is analog in and analog out, with scale factor, zero, linearity, sample rate, transfer function, and signal filtering courtesy of the microprocessor and its associated components. The errors are cumulative, and the signal processing performance of Figure 1 may be expressed as follows:

A/D errors + uP errors + D/A errors + E/I errors = Signal Processing Uncertainty

Two dominant factors limit signal-processing resolving ability: the combined limitations of the A/D and D/A (digital to analog) conversions and, to a lesser magnitude, the math performed within the micro block.
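As a minimal sketch of this error budget, the following C fragment simply sums assumed per-stage uncertainties; the figures are illustrative and are not taken from Figure 1.

#include <stdio.h>

int main(void)
{
    /* Assumed per-stage uncertainties in %FS, for illustration only */
    double adc_err = 0.025;  /* dual-slope A/D over temperature */
    double up_err  = 0.005;  /* math and rounding in the micro block */
    double dac_err = 0.025;  /* D/A conversion */
    double ei_err  = 0.030;  /* E/I (4-20 mA) output translation */

    /* The errors are cumulative, per the relation above */
    printf("Signal processing uncertainty: %.3f %%FS\n",
           adc_err + up_err + dac_err + ei_err);
    return 0;
}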

A/D and D/A Conversion Errors

Both the A/D and D/A conversion are resolution limited by the number of bits they may resolve. Their basic uncertainty over temperature is also a consideration. The A/D converter, for example, may be of a 12-bit architecture, but yields only 10 bits of certainty over temperature. This performance is typical of most SAR (Successive Approximation Register)-type A/D converters found on board single-chip microprocessors. When conversion speed is critical, the SAR type has the advantage, but at a lesser accuracy.

Most instrumentation applications (where speed is not a factor) use industry-standard “dual slope” A/D converters with commutating auto-zero amplifiers in their front ends. These converters are slower, but here a typical 12-bit A/D converter may yield a certainty of 12 full bits, +/- 1 bit, over a desired temperature range. This translates into performance better than +/- 0.025% of scale versus the 12-bit SAR type, which may provide certainties only marginally better than +/- 0.1% of scale. Again, the tradeoff in accuracy between the two is inverse to conversion speed. While numerous other A/D conversion techniques exist (such as flash, half flash, and composites thereof), they are beyond the scope of this discussion. The key consideration when defining A/D errors is, therefore, not how many bits one is moving around, but the absolute certainty of conversion over a prescribed operational temperature range.

D/A converters are less complex by comparison. Beyond their bit-resolving capability, linearity, and desired monotonicity, performance over a defined temperature range could prove the most critical concern. Their resulting output signal may provide either a proportional current or voltage. In the case of the D/A converter, the norm is to find that a 10-bit converter yields 10 bits of resolution with +/- 1/2 bit of certainty.

The final analog stage depicted provides the traditional 4-20 mA output signal common in most process monitoring and control applications. Translation errors over temperature, while driving varying load compliances, typically add less than an additional 0.025 to 0.035 %FS (full scale) uncertainty for this configuration.

Therefore, as an example, assuming equal bit uncertainties for both A/D and D/A conversions, the following %FS translation errors would be typical:

8 bit  - 0.82%
10 bit - 0.24%
12 bit - 0.08%
14 bit - 0.05%
16 bit - 0.04%
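These figures can be approximated by assuming +/- 1 LSB of uncertainty in each of the A/D and D/A stages plus a fixed analog (E/I) term of roughly 0.03 %FS. The fixed term is an assumption, and the sketch below only approximately reproduces the table:

#include <stdio.h>

int main(void)
{
    const double analog_term = 0.03;  /* assumed fixed E/I contribution, %FS */
    int bits[] = { 8, 10, 12, 14, 16 };

    for (int i = 0; i < 5; i++) {
        long counts = 1L << bits[i];
        /* +/-1 LSB in each of the A/D and D/A stages */
        double err_fs = 100.0 * 2.0 / (double)counts + analog_term;
        printf("%2d bit: %.2f %%FS\n", bits[i], err_fs);
    }
    return 0;
}

Note how quickly the converter contribution shrinks below the fixed analog term: beyond roughly 12 bits, adding resolution buys little overall certainty.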

Math Processing and Calibration Limitations

The microprocessor block provides the “digital to digital” interface and is responsible for absolute input (A/D) and output (D/A) calibration factors. It also performs the math required for linearity correction, transfer functions (square root, linear, etc.), and digital filtering, enabling features such as auto zero. Unlike the directly bit-limited A/D and D/A converters, the microprocessor is not bound by its bus width: in software, even a 4-bit architecture may move 12 bits of throughput as three sets of 4 bits. Floating-point math routines running within the micro block extend resolving capability beyond 32 bits, which exceeds any real-world requirement. Calibration factors are stored in nonvolatile memory allocations, such as battery-backed RAM or electrically erasable (E2) memories.
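As an illustration of the nibble-oriented throughput described above, the sketch below adds two 12-bit values as three 4-bit operations with carry. It is a plain C model of the idea, not firmware for any particular 4-bit device.

#include <stdio.h>
#include <stdint.h>

/* Add two 12-bit values one 4-bit nibble at a time, propagating carry. */
static uint16_t add12(uint16_t a, uint16_t b)
{
    uint16_t result = 0;
    unsigned carry = 0;

    for (int n = 0; n < 3; n++) {            /* three 4-bit nibbles */
        unsigned na = (a >> (4 * n)) & 0xF;
        unsigned nb = (b >> (4 * n)) & 0xF;
        unsigned sum = na + nb + carry;      /* 4-bit add with carry */
        carry = sum >> 4;
        result |= (uint16_t)(sum & 0xF) << (4 * n);
    }
    return result & 0x0FFF;                  /* wrap at 12 bits */
}

int main(void)
{
    printf("0x7A5 + 0x123 = 0x%03X\n", add12(0x7A5, 0x123)); /* 0x8C8 */
    return 0;
}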

Auto Zero in Low Pressure Applications

A natural progression from the microprocessor signal-processing configuration discussed is the implementation of an intelligent sensor “auto zero.” Historically, null stability is perhaps the greatest source of error in most low-pressure primary elements. When using “gauge” or “differential” primary sensing elements, zero pressure is one reference point that is easily obtainable.

By incorporating electromechanical pneumatic control valves, the microprocessor periodically may reference a true “zero pressure” and correct for drift rates and temperature effects on sensor null or zero.

Figure 2 provides a schematic representation of this approach as used in a low-pressure, differential measurement mode. Two modes of operation occur in this configuration. Normally, a direct pneumatic path is provided via the auto zero valves directly to the appropriate side of the sensing element. Differential modes require two valves versus the one for gauge applications (with the reference port PL vented to atmosphere). Upon electrical activation of the auto zero valve, pneumatic input pressure to the sensor is dead-ended, and the pressure previously acting upon the measurement element is vented to atmosphere. This provides the “zero pressure reference” that enables the microprocessor to routinely correct sensor zero. In low-pressure measurement situations, this eliminates the greatest source of uncertainty; the processor is then required only to perform linearity, span, and temperature compensation of FS.
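Reduced to a sketch, the auto zero sequence looks like the following; the valve and converter calls are hypothetical placeholders standing in for real hardware drivers:

#include <stdio.h>

static double zero_offset = 0.0;  /* last measured null, in counts */

/* Hypothetical hardware hooks (assumed, not from the article) */
static void valve_energize_auto_zero(void)   { /* dead-end input, vent element */ }
static void valve_restore_measure_path(void) { /* reconnect process pressure */ }
static double adc_read_counts(void)          { return 3.2; /* stand-in sample */ }

/* Periodic auto zero: with the element vented, any reading is drift. */
void auto_zero(void)
{
    valve_energize_auto_zero();      /* true "zero pressure reference" */
    zero_offset = adc_read_counts(); /* capture null drift */
    valve_restore_measure_path();
}

/* Normal measurement path, null-corrected by the stored offset. */
double read_pressure_counts(void)
{
    return adc_read_counts() - zero_offset;
}

int main(void)
{
    auto_zero();
    printf("corrected reading: %.2f counts\n", read_pressure_counts());
    return 0;
}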

The schematic representation of sensor auto zero is typical of low-pressure applications in that the auto zero valves vent to atmosphere. For higher pressure ranges, this configuration would instead tie both vent outputs together pneumatically, establishing common mode pressures at the prevailing static line pressure. Because low pressure ranges occur in close proximity to barometric, venting directly to atmosphere is acceptable, and common mode sensor errors are not a factor.

Adding Intelligence to Auto Zero

The microprocessor may be used to determine auto zero intervals. This enables the longest possible time between auto zero actuations, making the intelligent sensor less busy. Time and temperature would be the determining criteria by which the intervals are established.

Upon initial power-up, a warm-up period may exist while semiconductor junctions reach their working internal temperatures at the prevailing ambient. If we assume, for example, that this requires five minutes to normalize to 99.9%, we could initiate three auto zero cycles within this period. A typical initial auto zero routine may be based solely on time, with intervals of 1 minute, 2 minutes, and 5 minutes. This would serve to minimize sensor null shift during device warm-up.

At this point the sensor’s ambient temperature would be stored in a resident memory allocation for reference. In software, a window would be established whereby auto zero is prompted by temperature variations beyond a given threshold, defined as the temperature change expected to shift the sensor’s null by a predetermined %FS.

Should an ambient temperature change prompt an auto zero prior to the system-prescribed time interval, the intelligent signal processing’s time-out interval is reset.

Once two or more auto zero cycles are performed in sequence without actuation of the thermal threshold, the microprocessor can characterize the sensor’s historic null variation over time. This situation is probable after device warm-up and in a constant-temperature environment.

A simple interpolation of time versus rate of change (%FS deviation) enables continuous updating of the auto zero timer interval.
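A sketch of such a learning routine appears below. The temperature window, zero budget, warm-up interval, and back-off factor are all assumed values chosen for illustration:

#include <stdio.h>
#include <math.h>

#define TEMP_WINDOW_C  2.0   /* assumed: delta-T that forces an auto zero */
#define ZERO_BUDGET_FS 0.02  /* assumed: allowable null drift, %FS */

static double last_temp_c  = 25.0;
static double last_null_fs = 0.0;
static double interval_min = 1.0;   /* warm-up schedule starts at 1 minute */

/* Called after each auto zero with the newly measured null (%FS) and
 * ambient temperature; returns minutes until the next auto zero. */
double next_interval(double null_fs, double temp_c)
{
    double drift_rate = fabs(null_fs - last_null_fs) / interval_min;

    if (fabs(temp_c - last_temp_c) > TEMP_WINDOW_C) {
        interval_min = 1.0;                  /* thermal override: reset timer */
    } else if (drift_rate > 0.0) {
        /* interpolate: time to consume the zero budget at the observed rate */
        interval_min = ZERO_BUDGET_FS / drift_rate;
    } else {
        interval_min *= 2.0;                 /* no drift seen: back off */
    }

    last_null_fs = null_fs;
    last_temp_c  = temp_c;
    return interval_min;
}

int main(void)
{
    /* 0.005 %FS of drift observed over the first 1-minute interval */
    printf("next auto zero in %.1f min\n", next_interval(0.005, 25.3));
    return 0;
}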

This learning algorithm provides a level of sophistication that assures that a prescribed zero certainty, as a %FS, will be maintained over both time and temperature.

DP Synthesis for Secondary Calibration

Generating accurate, low differential pressures for the purpose of secondary calibration requires both highly accurate measurement and exacting control. By incorporating the techniques described previously (appropriate A/D certainty, auto zero, and intelligent signal processing), reliable low-pressure measurement with accuracies on the order of 0.05% is obtainable. Combined with a balanced differential piston array actuated via a linear-drive stepping servo, precise microprocessor synthesis of differential reference pressures is possible.

When both the Hi and Lo ports of the unit under test (UUT) are pneumatically terminated to the differential piston, true differential pressures are realized. Low-pressure sensors are highly susceptible to the pitot-static-induced background noise that results if the Lo port is referenced to atmosphere. The pressure synthesis of Figure 3 provides a “closed loop” true differential pressure to the UUT. Pneumatic background noise and pitot-static-induced errors are not a factor with this configuration, enabling extremely low-pressure generation.

Since the pressures are truly differential (for example, PH = +0.5” WC with PL = -0.5” WC yields 1.0” WC differential), common mode errors due to static line effects also are nonexistent. Pressure is governed under the direct digital control of the microprocessor and is considered “synthesized” since a direct bit-per-pressure term is employed. This “see ahead” closed-loop approach minimizes overshoot potential in the control mode. The resulting monotonicity is key in quantifying hysteresis characteristics of the UUT.
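A minimal sketch of this synthesized, monotonic control approach follows; the steps-per-unit scale factor and the simulated piston are assumed placeholders rather than the hardware of Figure 3:

#include <stdio.h>

#define STEPS_PER_IN_WC 2000.0   /* assumed linear-drive scale factor */

static double plant_in_wc = 0.0; /* simulated differential pressure */

/* Hypothetical hardware hooks standing in for the piston drive */
static void stepper_move(long steps) { plant_in_wc += steps / STEPS_PER_IN_WC; }
static double read_dp_in_wc(void)    { return plant_in_wc; }

/* Approach the setpoint from below in shrinking moves, preserving the
 * monotonicity needed to quantify UUT hysteresis. */
void synthesize_dp(double setpoint_in_wc)
{
    for (int i = 0; i < 40; i++) {
        double error = setpoint_in_wc - read_dp_in_wc();
        long steps = (long)(error * STEPS_PER_IN_WC * 0.5); /* "see ahead" half-move */
        if (steps < 1)
            break;                     /* within one step of target: stop */
        stepper_move(steps);
    }
    printf("settled at %.4f in WC\n", read_dp_in_wc());
}

int main(void)
{
    synthesize_dp(1.0);  /* e.g., PH = +0.5 with PL = -0.5 in WC */
    return 0;
}

Commanding only a fraction of the remaining error on each move is one simple way to guarantee the approach never overshoots the setpoint.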

The level of sophistication in intelligent signal processing has evolved to a degree that enables the implementation of compensation factors that were not viable in the analog domain. Despite exacting multipoint linearity correction, precise temperature compensation, and the auto zero capabilities of micro-driven signal processing, the key limitation to performance lies in both the thermal retrace and repeatability of the primary element.
