Monday, March 21, 2011

Teaching real statistics at TU Delft

A smart student of sorts taught a seminar at TU Delft in 1958 on the skew frequency distribution of ore assay values. It was none other than young Agterberg. And did he teach real statistics in those days! Scores of students at the University of Utrecht grew up with real statistics, and Agterberg was no exception. I only found out when I read his 2000 eulogy for Professor Dr Georges Matheron. He brought up that Professor H J de Wijs thought the ratio of element concentration values to be constant regardless of the volume of the block. Here, verbatim, is Agterberg’s criticism of what Professor H J de Wijs taught at TU Delft in 1958: “…it would be better to apply the conventional method of serial correlation to series of assay values.” That was Agterberg’s 1958 point of view on spatial dependence between measured values in ordered sets. Why then did he swallow Matheron’s spatial statistics hook, line, and sinker?

Neither Agterberg nor de Wijs knew in 1958 that Dr Jan Visman’s 1947 PhD thesis on coal sampling was on file at TU Delft. I, too, was unaware of Visman’s work when I was a teaching assistant and a mature student in the early 1960s. I had been chief chemist with Dr Verwey but wanted to know more about sampling and statistics. The exchange of test samples and test results between trading partners was a thankless task, to say the least. I knew all about the analytical variance but didn’t know how to estimate the variance of the primary sampling stage. I thought TU Delft would teach me what I wanted to know about sampling and statistics. One professor was H J de Wijs; the other was a coal mining engineer. Neither knew of Visman’s work or of the properties of variances. In fact, H J de Wijs was Rector Magnificus when a student of his defended a thesis in 1965 in which transformation matrices played a key role. So I left TU Delft, went to work for SGS in the Port of Rotterdam, and found out in 1967 about Visman’s 1947 sampling theory. Agterberg and I both knew that mathematical statistics was shortchanged at TU Delft in those days. I don’t know why matrix and vector analysis were taught while sampling and statistics were ignored.

Dr Frederik P Agterberg had all but forgotten in this century what he had taught in 1958 at TU Delft. Here, verbatim, is the very first paragraph of his eulogy: “Professor Georges Matheron (1930-2000) made fundamental contributions to science by establishing new theoretical frameworks in spatial statistics, random sets, mathematical morphology and the physics of random media”. Matheron was a French geologist who was called the creator of geostatistics and the founder of spatial statistics. I would never have praised Matheron’s surreal geostatistics, let alone Journel's assumed spatial dependence. Whatever minute contribution Matheron made to science fell far short of real statistics. Surely, it did add up to surreal geostatistics. How ironic that he never got around to testing for spatial dependence between measured values in ordered sets. Why then did Dr Frederik P Agterberg see fit to praise Matheron’s work? He is Emeritus Scientist with the Geological Survey of Canada. Most of his 1974 textbook on Geomathematics has passed the test of time, and most of it will last much longer than Matheron’s magnum opus. What a shame that the real statistics behind his 1970 and 1974 figures crumble under scrutiny.


Figure 1 - Geologic prediction problem in 1970
Figure 64 - Typical kriging problem in 1974

Agterberg solved his geologic prediction problem by linear prediction in time series, assuming a two-dimensional autocorrelation function among his set of five (5) points. He has yet to explain how it came to pass that a geologic prediction problem in 1970 turned into a typical kriging problem by 1974. That was the very year that Elsevier published Agterberg’s Geomathematics. Crammed between its covers are some 600 pages of mostly real statistics, a lot of sound mathematics, and a dash of Matheron’s surreal geostatistics. But why did Agterberg add Stationary random variables and kriging to an otherwise readable textbook?

What I see in each figure is a set of five (5) measured values in a two-dimensional sample space. If each of Agterberg’s points were equidistant from P0, then the central value of his set would be its arithmetic mean. David's “famous Central Limit Theorem” defines the functional relationship between a set of measured values with identical weights and its central value.
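The point bears a one-line illustration. Here is a minimal sketch in Python, with five hypothetical assay values standing in for Agterberg's five points: for identical weights the central value is the arithmetic mean, and the Central Limit Theorem endows that central value with a variance of var(x)/n.

```python
# Sketch: for a set of measured values with identical weights, the central
# value is the arithmetic mean, and the Central Limit Theorem gives that
# mean a variance of var(x)/n. The five values below are hypothetical.
import statistics

values = [12.1, 11.8, 12.4, 12.0, 11.7]  # hypothetical assays at five points
n = len(values)
mean = statistics.fmean(values)          # central value: arithmetic mean
var = statistics.variance(values)        # unbiased, n - 1 degrees of freedom
var_of_mean = var / n                    # the central value's own variance

print(mean, var_of_mean)
```

A central value without this variance is exactly the kind of unqualified estimate the text objects to.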

Agterberg refers to the Central Limit Theorem on pages 166, 206, 207 and 231. The number of degrees of freedom is a positive integer for a set of measured values with identical weights but a positive irrational for a set of measured values with variable weights. Agterberg refers to degrees of freedom on pages 174, 190 and 254. The Central Limit Theorem and the concept of degrees of freedom do not play a role in Chapter 10 - Stationary random variables and kriging. Dr Frederik P Agterberg, the author of Geomathematics and Emeritus Scientist with Natural Resources Canada, ought to explain why his Central Limit Theorem is not blessed with a variance and why his set of measured values is not blessed with degrees of freedom.
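The contrast between integer and non-integer counts of degrees of freedom can be sketched in a few lines. The weights and the effective-sample-size construction (Kish's n_eff) below are my illustrative assumptions, not Agterberg's formulas nor any formula from the text; they merely show why the count ceases to be a whole number once weights vary.

```python
# Sketch: degrees of freedom for identical vs variable weights. For identical
# weights df = n - 1, a positive integer. For variable weights, one common
# construction (the Kish effective sample size, used here purely as an
# illustration) yields a non-integer count.
weights = [0.35, 0.25, 0.20, 0.12, 0.08]  # hypothetical variable weights

n = len(weights)
df_equal = n - 1                          # identical weights: integer df
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
df_weighted = n_eff - 1                   # variable weights: non-integer df

print(df_equal, df_weighted)
```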

Thursday, March 10, 2011

ISO erred on trueness

When ISO was set up in April 1947 in Paris, France, it was all about nuts and bolts. As a matter of fact, ISO/TC1 Screw threads came first and ISO/TC2 Fasteners came second. Ever since, ISO has been setting up a broad range of standards while the world puts those standards to the test. But I wonder why ISO erred on trueness. Here’s what ISO announced in its Technical Corrigendum 1 on 2005-08-15.

Accuracy (trueness and precision) of
measurement methods and results
Part 5: Alternative methods for the determination of
the precision of a standard measurement method


ISO/TC69, Applications of Statistical Methods, Subcommittee SC6, Measurement methods and results, published the above Technical Corrigendum 1. So what error was SC6 to correct? Of course, trueness and precision should never have been between brackets! What ought to be between brackets are precision and bias! A true test for bias would need, first of all, an unbiased variance estimate. Those who have kriged and smoothed cannot possibly test for bias or estimate precision. Neither can they test for spatial dependence by applying Fisher’s F-test to the variance of a bogus data set and the first variance term of the ordered set. So much for kriging and smoothing when we study climate change on our planet!

What I would want between brackets is precision and bias. Derive the variance and then test for bias if enough degrees of freedom are available. Bias detection limits (BDLs) and probable bias ranges (PBRs) for Type I risks and for combined Type I & II risks are intuitive and powerful measures for an observed bias. Ignorance of precision and bias has irked me as long as central values without variances have. Surely, Matheron and his disciples have brought a big catch of bad science to our world.

I have juxtaposed precision and bias since 1974. That’s when I became a member of CAC/ISO/TC102-Iron ore. I am also a Member of ISO/TC27-Solid mineral fuels, of ISO/TC69-Applications of statistical methods, and of ISO/TC183-Copper, lead, zinc and nickel ores and concentrates. Much of what I have written on sampling and weighing of bulk solids became part of ISO/TC183. My son and I have written a software module on Precision and Bias for Mass Measurement Techniques. ISO has published it as an ISO Standard. I was told Canadian Copyright was not violated. Merks and Merks found it easy to work with precision and bias. What’s more, we are pleased to be encumbered with Fischerian (sic!) statistics.

The International Organization for Standardization was much on my mind when I posted false and true tests for bias. ISO takes its name from the Greek word isos, which means “equal”. Scores of countries have set up national institutions to interface with ISO. The Standards Council of Canada Act received Royal Assent in 1970. That’s when the CAC prefix was placed before ISO. I have nothing but praise for the Standards Council of Canada. CAC/ISO/TC69 Applications of statistical methods has played a key role in my work. I have always juxtaposed precision and bias. But it’s a long story, and it’s bound to get longer while I’m trying to keep it short. I do want to kill two nutty practices with the same stone. The first is to assume spatial dependence between measured values in an ordered set. The second is to fail to apply Fisher’s F-test to the variance of a set of measured values and the first variance term of the ordered set. Geostatistocrats assume, krige, smooth, and select the least biased subset of an infinite set of kriged estimates. It may well have dazzled those who have never scored a passing grade in Statistics 101. I still find it funny how so few could write so much about so little.
Biased but high degree of precision

Here’s where ISO created the error to be corrected. Are true and false antonyms or not? Wouldn’t the antonym of trueness be falseness, or perhaps falsehood? Of course, I would call this ISO document Trueness (precision and bias) of measurement methods and results. Surely, a significant degree of spatial dependence between measured values in an ordered set does impact precision. But I upped the odds of finding a false positive. I did so by inserting David’s “famous Central Limit Theorem” between each pair of measured values. Pop in more kriged estimates between measured values, and bogus spatial dependence may make the odd mind spin. Is it a minor miracle or Matheronian madness?
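What popping functionally dependent values into an ordered set actually does can be shown in a few lines. The data below are hypothetical, and simple midpoints stand in for kriged estimates; the first variance term is taken here as half the mean squared difference between adjacent values in the ordered set.

```python
# Sketch: inserting functionally dependent values (midpoints, standing in
# for kriged estimates) between measured values deflates the first variance
# term of the ordered set and inflates the F-ratio, manufacturing the
# appearance of spatial dependence. All data are hypothetical.
import statistics

measured = [10.0, 13.0, 9.0, 14.0, 8.0, 12.0]

def first_variance_term(ordered):
    # half the mean squared difference between adjacent values
    d = [b - a for a, b in zip(ordered, ordered[1:])]
    return sum(x * x for x in d) / (2.0 * len(d))

def f_ratio(ordered):
    return statistics.variance(ordered) / first_variance_term(ordered)

# pop a midpoint in between each pair of measured values
padded = []
for a, b in zip(measured, measured[1:]):
    padded += [a, (a + b) / 2.0]
padded.append(measured[-1])

print(f_ratio(measured), f_ratio(padded))  # the ratio jumps after padding
```

With these numbers the F-ratio roughly doubles, even though not a single new measurement was made.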

Spatial dependence between measured values in an ordered set ought to be verified by applying Fisher’s F-test to the variance of the set and the first variance term of the ordered set. When I applied it to sets of test results for single boreholes, I came to call it fingerprinting boreholes. SME’s reviewers liked it a lot. And so will members of ISO/TC69/SC6 once the impact of spatial dependence on confidence limits for central values is clear. Assuming spatial dependence between measured values, and interpolating by inserting functionally dependent values between them, has made a mess of the study of climate change. Surely, CAC/ISO/TC69/SC6 has a role to play in selecting the most fitting statistical methods.

Tuesday, March 01, 2011

True test for bias

Every scientist and engineer ought to grasp the properties of variances. Those who don’t should not even try to apply a true t-test for bias, and nobody can apply one without counting degrees of freedom. That’s why geologists ought to question the validity of geostatistics. The more so since the author of the very first textbook predicted that "…statisticians will find many unqualified statements…" What David didn’t predict was that he would blow a fuse if somebody did. By the way, keep your copy in a safe place. It may well become a collector’s treasure before this millennium is history. Counting degrees of freedom comes to mind as a topic that does not get the respect it so richly deserves. But I’m getting off my train of thought! Here’s a simple but true test for bias applied to an ancient set of paired data.

Observed t-value significant at 99.9% probability

Scientists and engineers ought to apply Student’s t-test in the same way as W S Gosset himself did. All textbooks on applied statistics teach the t-test. It was Volk’s Applied Statistics for Engineers that taught me all about it. Those who want to apply Student’s t-test for bias the proper way should study Chapter Six, The t Test: not only Section 6.1.4, The Null Hypothesis, but even more so Section 6.1.3, Degrees of Freedom. Here’s what Volk wrote: “…we accept a risk of a 5 per cent chance of being in error”. Next, he pointed out: “This error, of falsely rejecting the null hypothesis, is called an error of Type 1”. What I have done in my work is avoid the term “error” without some appropriate adjective. Risk analysis and loss control have played a key role in my work. That’s why I report the Type I risk and the combined Type I & II risks as intuitive measures for the power of the t-test.
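A sketch of such a true t-test for bias on paired data, with its degrees of freedom counted. The six pairs are hypothetical; 2.571 is the tabulated two-sided t(0.05) for 5 degrees of freedom.

```python
# Sketch: Student's t-test for bias applied to paired data, counting its
# n - 1 degrees of freedom. The six pairs are hypothetical; 2.571 is the
# tabulated two-sided t(0.05; df = 5).
import math

lab_a = [10.2, 11.0, 9.8, 10.6, 10.1, 10.4]
lab_b = [ 9.8, 10.5, 9.5, 10.0,  9.7, 10.0]
diffs = [a - b for a, b in zip(lab_a, lab_b)]

n = len(diffs)
df = n - 1                              # degrees of freedom: count them!
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / df
t_observed = mean_d / math.sqrt(var_d / n)

t_tabulated = 2.571                     # two-sided t(0.05; df = 5)
biased = abs(t_observed) > t_tabulated
print(t_observed, biased)
```

With these numbers the null hypothesis of no bias is rejected: the paired data carry a statistically significant bias.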


Bias Detection Limits and Probable Bias Ranges

A simple analogy exists between those types of risks and the role of a fire alarm. The Type I statistical risk refers to the event that the alarm sounds but no fire occurs: the null hypothesis of no bias is falsely rejected. The Type II statistical risk refers to the event that a fire does occur but the alarm fails to sound. Guarding against both risks at once ensures that the alarm sounds when, and only when, a fire occurs. Simple comme bonjour! The next step was to define Probable Bias Ranges. It is true that PBRs may sound counterintuitive to those who are not used to working with applied statistics. But surely, PBRs fit the observed bias to a t!
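The alarm analogy translates into numbers roughly as follows. This is the standard detectable-difference construction, offered as an illustration of what a bias detection limit measures rather than as the exact published BDL/PBR formulas; the standard error and the tabulated t-values (for 5 degrees of freedom) are assumptions.

```python
# Sketch: bias detection limits in the fire-alarm sense. Guarding against
# a Type I risk alone gives t(alpha) x SE; guarding against a Type II risk
# as well widens the limit to (t(alpha) + t(beta)) x SE. Illustrative only,
# not the exact published BDL/PBR formulas; all numbers are assumed.
se = 0.042          # hypothetical standard error of the paired differences
t_alpha = 2.571     # two-sided t(0.05; df = 5): 5% Type I risk
t_beta = 2.015      # one-sided t(0.05; df = 5): 5% Type II risk

bdl_type1 = t_alpha * se                 # smallest bias the alarm can flag
bdl_type1and2 = (t_alpha + t_beta) * se  # bias detected with 95% power

print(bdl_type1, bdl_type1and2)
```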

Dr Pierre Gy’s view on accuracy is spelled out on page 17 of his 1979 Sampling of Particulate Materials. Here’s his take: “Accurate: when the absolute value of the bias is not larger than a certain standard of accuracy”. He could have, but didn’t, mention Standard Reference Materials. SRMs play an important role in calibrating analytical methods and systems. Analytical chemists need SRMs with confidence intervals for one or more constituents. Gy’s so-called sampling constant is, in fact, a function of a set of four (4) variables. As such, it has a variance of its own. What’s more, Gy’s own Student t-Fisher test has raised more eyebrows than interest.
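That a function of four variables has a variance of its own follows from first-order error propagation. Reading the four variables as Gy's shape, granulometric, mineralogical and liberation factors (the usual reading of his C = f · g · c · l) is my assumption, as are all the numbers below.

```python
# Sketch: Gy's sampling constant read as a product of four factors
# (C = f * g * c * l -- an assumed reading, with hypothetical values).
# For independent factors, first-order propagation gives the constant's
# relative variance as the sum of the factors' relative variances.
import math

# factor: (value, standard deviation) -- all hypothetical
factors = {"f": (0.5, 0.05), "g": (0.25, 0.03), "c": (2.0, 0.20), "l": (0.8, 0.08)}

C = math.prod(v for v, _ in factors.values())
rel_var = sum((s / v) ** 2 for v, s in factors.values())
var_C = (C ** 2) * rel_var               # the constant's own variance

print(C, math.sqrt(var_C))
```

A "constant" carrying a variance of this size is no constant at all, which is the point being made above.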

Posted on my website are scores of statistical tests and techniques. I used to attach them as an appendix to my reports long before Professor Dr Michel David took such a dim view of applied statistics. In 1992 my son and I put together Precision and Bias for Mass Measurement Techniques, Part 1 of a series on Metrology in Mining and Metallurgy. I hold Canadian copyrights for Metrology in Mining and Metallurgy and for Behind Bre-X, the Whistleblower’s Story. I have scores of blogs to write before I make up my mind what to complete first.