Monday, January 25, 2010

Whatever happened to Setting New Standards

The Bre-X fraud brought about an orgy of hand-wringing but not even a token search for the truth, the whole truth, and nothing but the truth. The Ontario Securities Commission and the Toronto Stock Exchange set up a Mining Standards Task Force. Morley P Carscallen, OSC’s Commissioner, and John W Carson, TSE’s Senior Vice President, called on Canadian mining experts to set new standards. Of course, the old standards were dreadfully wrong. All it took was to assume gold between salted boreholes. That’s how Bre-X’s bogus grades and Busang’s barren rock added up to a phantom gold resource! So what did the Mining Standards Task Force do? It wrote a lot but little else. Here's why!

Hardcore krigers and cocksure smoothers were silent after Bre-X had gone bust. So much so that none served on the task force. They would have had a tough time explaining why kriging variances rise first and then fall, or justifying why spatial dependence may be assumed without proof. Without genuine geostatisticians on board, the task force was in limbo. The more so since I had proved that Bre-X was a salting scam. My son and I had shown in 1992 how to verify spatial dependence by applying Fisher’s F-test to the variance of test results for gold determined in bulk samples taken from a set of rounds in a drift, and to the first variance term of the ordered set. Stanford’s Journel wrote to Professor Dr R Ehrlich, Editor, Journal of Mathematical Geology (in those days!), that I was “… too encumbered with Fischerian (sic) statistics.” I confess to having worked with Fisher’s F-test most of my life. So what?
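That verification can be sketched in a few lines of Python. This is a minimal sketch, not the original 1992 computation: the grades are hypothetical, and the observed F-ratio would still have to be compared against a tabulated F value for the applicable degrees of freedom.

```python
import statistics

def first_variance_term(ordered):
    """First variance term of an ordered set: half the mean squared
    difference between consecutive test results."""
    diffs = [(b - a) ** 2 for a, b in zip(ordered, ordered[1:])]
    return sum(diffs) / (2 * (len(ordered) - 1))

def f_ratio(ordered):
    """Observed F-ratio: the variance of the set over the first variance
    term of the ordered set. A ratio exceeding the tabulated F value for
    the applicable degrees of freedom points to significant spatial
    dependence between measured values in the ordered set."""
    return statistics.variance(ordered) / first_variance_term(ordered)

# Hypothetical gold grades (g/t) for rounds in a drift, in mining order
grades = [4.2, 4.5, 4.9, 5.1, 5.6, 5.4, 6.0, 6.3, 6.1, 6.8]
observed_f = f_ratio(grades)
```

A trending series such as this one gives an F-ratio well above unity; a randomly shuffled series gives a ratio near unity, which is exactly what randomly salted boreholes would produce.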

The Mining Standards Task Force was put to work in July 1997. MSTF released its Interim Report in June 1998, and published its Final Report in January 1999. MSTF’s Final Report is high on verbiage but low on sound sampling practices and proven statistical methods.

It took a while to find out that Setting New Standards had done nothing to improve sampling practices in mineral exploration. The task force could have shown, but did not, how to derive unbiased confidence limits for metal contents and grades of mineral inventories. Sadly, geostatistics was very much alive when I looked at CIM’s website under APCOM 2009. The program for this event set the stage for another krige-and-smooth bash. But this time the stage was set on my home turf. The scientific fraud behind the Bre-X fraud turned out to be alive ten years after MSTF’s Final Report had been released. It is as much alive as it was on Journel’s watch in 1992. So much for setting new standards!

I dug into my database and retrieved test results for gold and silver determined in pairs of interleaved bulk samples taken from 1 m³ volumes of crushed gossan ore mined from a vertical pit. I had designed this sampling program to test for spatial dependence, to derive confidence limits for gold and silver contents and grades, and to estimate the intrinsic variances of gold and silver. The same test proved that the intrinsic variance of gold in Bre-X’s gold resource was statistically identical to zero. My son and I submitted to APCOM 2009 for review a paper on Metrology in Mineral Exploration. It was accepted as “a highly specialized topic reserved for the advanced geostatistician.” How about that!
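Deriving confidence limits for a mean grade is classical statistics, not advanced geostatistics. A minimal sketch, with hypothetical grades; the value 2.262 is the tabulated two-sided 95% Student's t for 9 degrees of freedom:

```python
import math
import statistics

def confidence_limits(grades, t_crit):
    """95% confidence limits for the mean grade of a set of test
    results; t_crit is the tabulated two-sided Student's t value
    for n - 1 degrees of freedom."""
    n = len(grades)
    mean = statistics.mean(grades)
    sem = math.sqrt(statistics.variance(grades) / n)  # standard error of the mean
    return mean - t_crit * sem, mean + t_crit * sem

# Hypothetical gold grades (g/t) for ten interleaved bulk samples
gold = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2, 1.7, 2.3, 2.1, 1.9]
lower, upper = confidence_limits(gold, t_crit=2.262)  # tabulated t, 9 df
```

The same arithmetic applied to paired interleaved samples is what makes the precision of contents and grades verifiable rather than assumed.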

My coauthor was talking about EMF in Europe. His presentation was also of interest at L’Ecole des Mines in Nantes. So, his mom and my partner for life listened to my APCOM 2009 talk in Vancouver, BC. I asked again why the variance of Agterberg’s distance-weighted average point grade had gone missing. The question was met with solemn silence. My spouse got some kind of revised textbook on a CD. Long ago I had bought a copy of the original edition. What it taught me was not to mess around with sloppy semi-variograms. That's why I took a systematic walk, tested for spatial dependence between hypothetical uranium concentrations, and counted degrees of freedom properly.

NRCan’s Emeritus Scientist is loath to bring back the long-lost variance of his distance-weighted average point grade. But then, how could JMG’s Editor-in-Chief possibly do what Rio Tinto wants him to do if each and every weighted average point grade were to have its own variance? He may need but a few boreholes. But what he does need most of all are infinite sets of distance-weighted average point grades to play with by hook or by crook. I really don’t give a fiddle about JMG’s Editor-in-Chief and his models. What I want is a world free of Matheron’s mad science of geostatistics.

I agree with H G Wells. I like statistical thinking. And I like to write about it. A good grasp of statistics is needed to bridge the gap between sampling theory and sampling practice. I have written a great deal about spatial dependence in sample spaces and sampling units. I want to write much more. My website gets a load of traffic. I blog for fun and play mind games when I do. I found out in 2007 that geostatistics plays a role in the study of climate change. It was some Canadian hockey stick that struck a panic button around the world. The study of climate change is much more relevant to the world than unbiased mineral inventories are to mining investors. Securities commissions ought to set rules and regulations that protect the public at large against all sorts of scientific frauds. The kriging machine will be shredded as soon as the ugly factoids are clear to investors. Surely, geoscientists should apply classical statistics when they study climate change. After all, functions without variances are as dead as dodos. CRIRSCO does not think so but I know!

Friday, January 01, 2010

What if our world were free of geostatistics

A world free of surreal geostatistics is long overdue. Geostatistics was called a new science in the 1960s but it turned out to be an insidious scientific fraud. Real statistics would have nipped the infamous Bre-X fraud in the bud but CIM and IAMG ruled in favor of surreal geostatistics in the early 1990s. Matheron’s so-called new science of geostatistics did make a mess of the study of climate change. That’s why our world ought to get rid of surreal geostatistics. And fast! Come frostbite or sunburn!
Thanks to all those who read my blogs. More than two million have done so. But I got fewer than ten comments. So, what’s the matter? Is it the way I write? All I do is put in plain words why geostatistics is a scientific fraud. Here’s what I have been writing for more than twenty years. Each weighted average has its own variance. Could I have put it any other way? It is a truism in real statistics. The Central Limit Theorem is bound to stand the test of time. Why then was the variance of the weighted average done away with in Matheron's novel science? It was G Matheron in the early 1960s who called the weighted average "a kriged estimate" to honor D G Krige. Matheron never derived the variance of his own kriged estimate. Neither did any of his docile disciples.
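The truism is easy to put in code. A minimal sketch, assuming independent measurements with known variances and weights that sum to one:

```python
def weighted_average(values, variances, weights):
    """Weighted average of independent measurements, together with its
    own variance: var(sum w_i * x_i) = sum w_i**2 * var(x_i)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    wavg = sum(w * x for w, x in zip(weights, values))
    wvar = sum(w * w * v for w, v in zip(weights, variances))
    return wavg, wvar

# Three hypothetical point grades with equal weights
avg, var = weighted_average([1.0, 2.0, 3.0],
                            [0.1, 0.1, 0.1],
                            [1 / 3, 1 / 3, 1 / 3])
```

Distance-weighted averaging only changes the weights; the variance term does not vanish.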
What happened in the 1970s defies common sense and sound science. Was it Matheron himself or one of his disciples who thought that every set of kriged estimates ought to have its own kriging variance? Stanford’s Journel was Matheron’s most astute student. He figured out that an infinite set of kriged estimates gives a zero kriging variance. Wow! Here’s what he taught Stanford's neophytes in a nutshell. Assume spatial dependence between measured values in ordered sets, interpolate by kriging, and smooth a little but not a lot. Stanford’s finest geostatistical mind never took to testing for spatial dependence, or to counting degrees of freedom.
Some readers may want to study the odd opus on geostatistics. I suggest a paper on kriging small blocks. It was put together by genuine geostatisticians from the Centre de Géostatistique in France. Professor Dr Margaret Armstrong and Normand Champigny were the first scholars to caution against reckless over-smoothing by careless mine planners.

I messed up my own copy of Armstrong and Champigny’s A Study on Kriging Small Blocks. The Canadian Institute of Mining, Metallurgy, and Petroleum has not yet posted this study on its website. It would have passed David’s review at CIM Bulletin with flying colors. Elsevier published Professor Dr Michel David’s 1988 Handbook of Applied Advanced Geostatistical Ore Reserve Estimation. It’s by far the worst textbook I’ve ever read. Yet, universities all over the world have added this work of geostatistical fiction to their libraries.
It was early in October 1989 when Precision Estimates for Ore Reserves ended up on David’s desk. That's when we found out that geostatistical peer review is a shamelessly self-serving sham. Too many geoscientists do not know that measured values do give degrees of freedom, and that functionally dependent values (calculated values!) do have variances. If the difference between calculated and measured is a bit of a mystery, buy Moroney’s Facts from Figures, read Abuse of Statistics, or take Statistics 101.
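Here is one hypothetical example of a calculated value with its own variance: metal content as the product of grade and tonnage. A first-order (delta-method) sketch, assuming the two estimates are independent:

```python
def metal_content(grade, var_grade, tonnage, var_tonnage):
    """Metal content calculated as grade * tonnage, with a first-order
    (delta-method) variance for the calculated value, assuming the
    grade and tonnage estimates are independent."""
    content = grade * tonnage
    var_content = tonnage ** 2 * var_grade + grade ** 2 * var_tonnage
    return content, var_content

# Hypothetical: 2.0 g/t with variance 0.01, on 1000 t with variance 100
content, var_content = metal_content(2.0, 0.01, 1000.0, 100.0)
```

The calculated content inherits a variance from both of its measured inputs; pretending otherwise is exactly the difference between calculated and measured that Moroney spells out.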
So, who’s to blame for the rise of Matheron’s new science of surreal geostatistics? What comes to mind first and most of all is the Canadian Institute of Mining, Metallurgy, and Petroleum and its APCOM appendix. The International Association for Mathematical Geosciences and institutions of higher learning such as McGill, Stanford, and UBC, among scores of others, are close seconds.
Thank goodness I still have plenty of geostats and stats stuff to write about. Every night I fall asleep in my straight-thoughts jacket and figure out what to do next. Tonight there's a full moon over Vancouver. I feel really good about real statistics!