Zero percent humidity

William R. Brohinsky onlyocelot@joimail.com
Wed, 07 Jan 2004 12:07:28 -0500


hi again, Ray here. (Not william, btw: that's my legal first name, but 
no one who knows me uses it, except maybe my parents when I've been bad 8^),

Hygrometry is an inexact art at best. Among all the measurements we 
take, only hygrometry and altitude remain so poorly understood by their 
users. Well, maybe I should qualify that, since a) I haven't been a 
calibration technician for 30 years, and b) the advent of GPS adds two 
more dimensions to the altitude problem.

Anyway. It is always good to remember the 10% rule, especially for those 
of us who tune pianos, and therefore are calibration and repair 
technicians without even knowing it. (Also fundamental to the world of 
metrology is the fact that you _measure_ something by _changing_ it. 
Temperature of the air is measured by changing the temperature of 
something else: the glass and liquid of a thermometer, the sensor in an 
electronic thermometer, and so on.)

The 10% rule expresses the difficulty of transferring a certifiably 
calibrated quantity from one device to another. Generally stated, it 
says that if you want to calibrate a device to one part in 10^N, you'd 
better be doing it from a standard that's good to one part in 10^(N+1), 
i.e. ten times more accurate.
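
If it helps to see that in numbers, here's a little Python sketch of 
how the rule compounds down a calibration chain (the figures are 
purely illustrative, not anyone's actual specs):

    # The 10% rule down a calibration chain: each generation is
    # certified to only a tenth the accuracy of the standard it was
    # calibrated against. Illustrative numbers only.
    top_standard = 1e-9  # fractional uncertainty, hypothetical top standard

    uncertainty = top_standard
    for generation in range(1, 5):
        uncertainty *= 10  # lose a factor of ten per hand-off
        print(f"generation {generation}: certified to about {uncertainty:.0e}")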

An example would be tuning A=440 on the piano. We all feel that we can 
match a piano string's fundamental frequency to an A=440 source with 
flabbergasting accuracy... but that A=440 source may be far from 440Hz. 
Here's a short description of the trip that frequency standards make 
from NIST to your tuning fork: NIST determines what A=440 is from a set 
of rubidium clocks. To decide what their real standard is, they take 10 
of them, throw out the extreme values, and average the rest. Then they 
take the closest clock and use that as The Standard, usually for The 
Day. (That means that it is possible for The Standard of The Day's 
frequency to vary from day to day, but fear not, these guys are good to 
a gross number of decimal places, and if the TV broadcasters don't sweat 
it, we sure don't have to at 440Hz!) This frequency is transferred to a 
frequency standard through a calibration process combined with an 
adjustment process, using a phase comparator which works exactly the 
same way the rotating displays on Accutuners do. The measurement is 
taken over many days, which increases the accuracy of the measurement, 
and after adjustment, another couple of days of measurement are taken. 
This frequency transfer standard is adjusted according to its measured 
aging rate: if it 'goes flat' with time, it is adjusted 'sharp', and 
vice versa. Over the time of the certified calibration cycle, the transfer 
standard ages, and its output frequency changes from one side of the 
actual claimed frequency to the other. Again, this is to many more 
decimal places of accuracy than we need. But the frequency transfer 
standard is certified to be only 1/10th as accurate as the rubidium 
standard of the day. Any further frequency standards which are 
calibrated against this standard will be 1/10th as accurate. And any 
standards calibrated against those third-generation standards will be 
1/10th less accurate than their 'parent' standards.
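
To picture that aging trick in code, here's my sketch of the idea, 
with made-up numbers (a hypothetical 10MHz transfer standard and an 
invented aging rate), not any real instrument's figures:

    # Aging compensation: if the oscillator 'goes sharp' with time at a
    # known rate, set it flat at calibration so it drifts *through*
    # nominal and ends the cycle about as far sharp as it started flat.
    nominal_hz = 10_000_000.0  # hypothetical 10MHz transfer standard
    aging_hz_per_day = 2e-4    # invented aging rate (drifting sharp)
    cycle_days = 365

    start_offset = -aging_hz_per_day * cycle_days / 2
    for day in (0, cycle_days // 2, cycle_days):
        freq = nominal_hz + start_offset + aging_hz_per_day * day
        print(f"day {day:3d}: {freq:.4f}Hz ({freq - nominal_hz:+.4f}Hz)")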

Few tuning fork manufacturers have a short direct line between NIST and 
their forks. (It'd be silly to bother, since that direct line is very 
expensive and waaaaaaa(insert another 23 'a's here)y too over-accurate 
for their purpose.) Instead, they purchase a standard that is (probably) 
a thousand times more accurate than they certify their forks for, and 
which does have a certification which shows its pedigree all the way 
back to NIST. This means that it is probably good to six decimal places 
(I'm guessing here, because I have no direct experience with tuning fork 
manufacturers, just with time/frequency calibration), which may, in 
fact, be much more accurate than that (by accident or design) but is 
_certified_ to 10^-6Hz. They then tune their forks to 10^-3Hz (throwing 
away a factor of 1000 in accuracy, as it were) under specific conditions, 
and sell them to us branded as accurate to two decimals (10^-2Hz). We then 
tune a piano and claim (unscientifically) A=440, but probably think in 
our heads that it's going to be A=440.00. It isn't, because the fork 
isn't that accurate under changing conditions of temperature and 
pressure. So even though we probably have a good enough phase comparator 
system (beats, ears and brain) to match up the pitches to four or five 
decimals of Hz (although physiologists will swear it's a tenth at best), 
that's only a match to the temporally-changing state of the fork.
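
Putting those fork numbers side by side (the tolerances are the ones I 
guessed at above, with a cents conversion added for us piano types):

    import math

    # The fork's pedigree, using the figures above: maker's standard
    # certified to 1e-6Hz, forks tuned to 1e-3Hz, branded to 1e-2Hz.
    a4 = 440.0
    chain = [
        ("maker's certified standard", 1e-6),
        ("fork as tuned",              1e-3),
        ("fork as branded",            1e-2),
    ]
    for name, tol_hz in chain:
        cents = 1200 * math.log2((a4 + tol_hz) / a4)
        print(f"{name:28s} +/-{tol_hz}Hz  (about {cents:.5f} cents)")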

So along comes electronics, and we think we're doing better, because 
after all, it's electronic. But unless a tuning machine has a quartz, 
oven-controlled clock to produce its beginning standard frequency, from 
which everything else is divided out (and no, even computers don't have 
that), it's only 'better' because it starts with a very high frequency 
and divides down. And that's just for digital systems: analog systems 
(like my faithful old (and I do mean old) Hale sight-o-tuner, from long 
before SAT (-1) days) have even more ways of reacting to temperature 
and pressure and the phase of the moon.
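
Here's a sketch of that divide-down business, using a hypothetical 
(but commonplace) 4.194304MHz crystal; even before the crystal drifts 
with temperature, the integer divider by itself can't land exactly on 
440:

    import math

    # Divide-down synthesis: pick the integer divider closest to 440Hz.
    # The divider's granularity is an error source all on its own.
    clock_hz = 4_194_304               # hypothetical crystal frequency
    divider = round(clock_hz / 440.0)  # nearest integer divider
    actual_hz = clock_hz / divider
    cents_off = 1200 * math.log2(actual_hz / 440.0)
    print(f"divide by {divider}: {actual_hz:.4f}Hz ({cents_off:+.3f} cents)")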

However, for frequency, most of this is well-defined and Old Hat, I'm 
sure, for all of you. Hygrometry is a different story for most of us, 
though. We know that we can get relative humidity with a drybulb-wetbulb 
swing hygrometer, but may not have a clue (or want one) about the 
accuracy of the thermometers involved. But I'd bet no one uses a swing 
hygrometer anymore anyway. Instead, we use these neat electronic digital 
hygrometers, and think that, because the specs say "3%" or "+/-3%", 
we're getting accuracy, or even have a clue what we're reading.

Unfortunately, all digital readouts have built-in problems in the last 
digit. You have no clue if a reading of 50.0 is actually 50.0000000 or 
50.099999 (i.e., effectively 50.1). This is a fundamental problem, but 
it's not a big one, because we intrinsically should distrust readings of 
1/10th of a percent in hygrometry anyway!
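
A quick sketch of that last-digit problem (assuming a readout that 
truncates; one that rounds has a blind spot of the same width, just 
centered differently):

    # Three quite different 'true' values, one identical display.
    def displayed(true_value, digits=1):
        scale = 10 ** digits
        return int(true_value * scale) / scale  # truncate to one decimal

    for true_rh in (50.000, 50.049, 50.099):
        print(f"true {true_rh:.3f} -> display {displayed(true_rh):.1f}")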

Further, the +/-3% spec doesn't tell you whether the readings will be 
the same amount off or a varying amount off from one end to the other. 
This is a common misconception about meters: that if they're one percent 
low at the left end of the scale, they're one percent low at the right 
end too. Analog meters are always more accurate on the high 
(right) end of the scale than on the left side... but digital display 
meters can be as off as their twisted calibration curves show... or they 
might be very linearly off, or that line might be a nice smooth curve. 
All of this hides behind those reassuring numbers.
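
To make that concrete, here are three imaginary hygrometers that all 
honestly meet a +/-3% spec while erring in completely different shapes; 
the number on the spec sheet can't tell you which one you own:

    import math

    def offset_error(rh):
        return 2.5  # a flat +2.5% everywhere

    def gain_error(rh):
        return 0.03 * rh - 1.5  # -1.5% at 0%Rh, +1.5% at 100%Rh

    def curved_error(rh):
        # zero at both ends, +3% at mid-scale
        return 3.0 * math.sin(math.pi * rh / 100.0)

    print("  Rh  offset   gain  curved")
    for rh in (10, 50, 90):
        print(f"{rh:4d}   {offset_error(rh):+.2f}  {gain_error(rh):+.2f}"
              f"   {curved_error(rh):+.2f}")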

But with hygrometry, it's far worse than all that. First of all, 
hygrometry relies on both temperature and humidity measurements. These 
two measurements might be built into the same structure, but each will 
have a built-in 'calibration curve' describing the inaccuracies at 
different values. The two errors may add or cancel, or do either or 
both at various points along the scale... And the manufacturers will 
claim that the sensors are accurate despite changes of temperature and 
pressure. But they don't say a thing about how the sensor's pores (more 
on those below) might get filled by cigarette smoke (or wood smoke from 
home fires, oil from cooking, or perfumes and colognes).

The swing hygrometer was one of the earliest indirect measures of relative 
humidity. Other physical forms involved human hairs or other 
moisture-sensitive physical bodies which contracted or expanded with 
moisture. These moved needles on bearings... and like tuning pins, those 
physical rotating joints suffered from friction. This is why people 
reading dials are always tapping them: to 'knock loose' the stiction in 
the bearings. But what are the 'modern' forms of hygrometry? They are 
usually thin-film silicon circuits, where an insulating layer is used to 
form a capacitor, and the moisture in the pores of the thin film 
structure determines the Rh reading. Unfortunately, reading out 
capacitors is also a complicated and inexact art. Oh, the science is all 
there. And using high frequencies and carefully built dividers and 
counters can make for accurate readings... but they're still capacitors. 
And the capacitance curves still have flat areas at one end, where 
telling one percentage reading from the next one up means resolving an 
extremely small change in capacitance.
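
Here's a toy version of that flat-end problem; the curve is entirely 
made up, shaped only to saturate at one end the way I just described:

    import math

    def cap_pf(rh):
        # invented saturating curve: ~180pF bone dry, ~220pF at 100%Rh
        return 180.0 + 40.0 * (1.0 - math.exp(-rh / 40.0))

    # How much capacitance change does one 1%Rh step buy you?
    for rh in (10, 50, 90):
        step_ff = (cap_pf(rh + 1) - cap_pf(rh)) * 1000.0  # femtofarads
        print(f"{rh:3d}%Rh: {cap_pf(rh):.2f}pF, next 1%Rh step = {step_ff:.0f}fF")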

But the most fun part of Rh readings is that they are affected by so 
many things. The good old swing hygrometer required a specific period of 
swinging to get its measurement, and this averaged out all sorts of 
things... which these loverly new thin-film circuits respond to quite 
readily.

Here is an illustration of this: I was tasked to take Rh, temperature 
and air-movement measurements on a compartment. The actual application 
isn't important. In fact, the Rh measurements were to be made to +/-20%, 
temperature to +/-1degreeF, and air movement was just relative (fan on 
high, fan off, etc). I chose my sensors carefully: thermocouples with a 
proper reader for temperature, Omega duct-mount Rh sensors (the 
thermocouple reader had settings for reading out this sensor as well), 
and a Ford Escort motor airflow meter! I set it all up in my 
office... and discovered that the thermometric and Rh readings would vary 
by as much as 10 degrees and 20%Rh depending on whether we were being 
visited by the secretary, or if my office-mate left and brought back a 
hot cuppa. (I left it running, and yes, I affected the readings by my 
presence and absence too. I'm not a vampire or anything!) While the 
measurements I took on the compartment turned up the problem we were 
after, I never mentioned that I could have told the person in charge 
whether the people who said they'd been in the compartment had actually 
been there when they said they were, where they stood, and when!

Anyway, that means that it is likely that you are affecting your Rh 
readings by how you take them. And even if you leave your Rh meter at a 
distance from people, just going over to read it can very well change 
the reading.

All of which is to say that some of our measurement tools should be 
taken with a grain of salt, and some of them should be taken with 
Morton's Salt Mines.

By the way, for information, 'calibration' means 'measuring accuracy 
of'. If you take your torque wrench to be calibrated, and they measure 
its accuracy and hand it to you and say, "throw it out, it's 
inaccurate", they've calibrated it. If they take your Rh meter, measure 
it, and adjust it to match its readings with reality to some level of 
accuracy, that is 'calibration and adjustment'.

I hope that sheds some light on the Rh discussion, and may I someday be 
able to speak as usefully about Piano repair!

raybro

