
Disputed para from TDC

(William M. Connolley 09:52, 2004 Mar 3 (UTC)) TDC added 2 paras:

The problem many critics point to with mathematical modeling of climate systems is the lack of validation and verification of the climate models used. Climate modeling was used unsuccessfully on several occasions to predict near- and long-term climate effects from large-scale environmental events such as the Kuwaiti oil field fires after the first Gulf War and the eruption of Mt. Pinatubo. The discrepancies between the predicted impact and the actual impact were so large that the models showed little if any use as a long-range predictive tool.
Although scientists have revised climate modeling software with the knowledge gained from earlier experiences, modeling software has still failed to accurately predict near- and long-term weather trends, such as the El Nino and La Nina effects.

There are several problems with these.

  • they are more about weather than climate
  • the "lack of validation" is just ignorance - see eg the IPCC reports
  • the Pinatubo predictions were quite good, as I recall
  • nothing but nothing in the above is sourced

So I've moved the paras here for repair.


Response

  • they are more about weather than climate
They were not more about weather than climate. They were attempts to predict near- to long-term impacts, three months to two years, on global climate trends, and they were totally off the mark.
(William M. Connolley 18:10, 2004 Mar 3 (UTC)) Predicting the impact of Pinatubo on long-term climate trends was and is easy: zero. It was a short-term effect. Climate isn't a few years; decades is more like it.
  • the "lack of validation" is just ignorance - see eg the IPCC reports
I have seen the IPCC reports; nowhere in them do they lay out in any kind of detail a verification and validation methodology for their models that is in line with any of the generally accepted standards for modeling V&V. They continually tweak their models with information gained from past predictive failures, like the Kuwaiti oil fires or El Nino, but they have never been able to correct them enough to accurately predict large-scale / long-term events, and the total error range is unknown.
(William M. Connolley 18:10, 2004 Mar 3 (UTC)) Then you haven't looked very hard. Try http://www.grida.no/climate/ipcc_tar/wg1/309.htm for the exec summary of TAR ch 8. If you want to know what they mean by evaluation, try: http://www.grida.no/climate/ipcc_tar/wg1/311.htm
The report goes to great lengths and into painstaking detail to inform the reader what was used to construct the climate models, but it does not mention what was done to validate the data that came out of them. All data coming from this is meaningless unless they can show that the models can repeatedly predict various known climate scenarios within an acceptably low error [under 0.5%].
(William M. Connolley 18:27, 2004 Mar 4 (UTC)) You haven't looked, have you? See, e.g., http://www.grida.no/climate/ipcc_tar/wg1/317.htm for validation of the data and/or http://www.grida.no/climate/ipcc_tar/wg1/326.htm#861 for replication of 20th century climate. As for 0.5%, you've plucked that out of thin air.
Evaluation and validation are not the same thing. You should look them up in a dictionary some time. TDC 16:44, 4 Mar 2004 (UTC)
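To make concrete what an acceptance test of the kind being argued about here might look like, here is a minimal Python sketch: a hindcast (a model run over a period with known outcomes) is compared against observations and required to stay within the 0.5% figure quoted above. The model values, the observations, and the threshold itself are purely illustrative, not drawn from any actual climate model.

```python
# Illustrative sketch only: a hindcast acceptance test in which model output
# must match observations to within a chosen relative-error threshold.
# The 0.5% figure and all numbers below are invented for illustration.

def relative_error(predicted, observed):
    """Absolute relative error of one prediction against one observation."""
    return abs(predicted - observed) / abs(observed)

def passes_hindcast(predictions, observations, threshold=0.005):
    """True if every prediction is within the threshold of its observation."""
    return all(relative_error(p, o) <= threshold
               for p, o in zip(predictions, observations))

# Hypothetical modelled vs. observed global-mean temperatures (kelvin).
modelled = [287.1, 287.3, 287.6]
observed = [287.0, 287.4, 287.5]

print(passes_hindcast(modelled, observed))  # True: every error is under 0.5%
```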
Imagine it is like this: I tell you to measure the length of your coffee table and I give you a ruler to do so. Let us also say that I constructed and graduated this ruler in millimetres myself. How could you be certain that the ruler I gave you is accurate? If you measured your coffee table to be 900mm long, what would this mean? If the accuracy of the ruler I gave you is off by a large margin, your table might really be 200mm or 20000mm, but without any known standard quantity to measure it against, you have absolutely no idea how accurate your measurement was. So naturally, you would go out and buy some calibrated gauge blocks, determine the margin of error in the measuring instrument I provided, and make a correction. Predictive modeling software V&V is a very similar process.
I didn't find that terribly illuminating.
Somehow that does not surprise me. It is a simple illustration of why verification and validation are important steps when using predictive modeling software. TDC 16:44, 4 Mar 2004.
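As a rough way to put the ruler analogy above into code, here is a small sketch (the numbers are made up, not real measurements): the suspect ruler is checked against gauge blocks of known length, an average scale error is estimated, and the raw coffee-table reading is then corrected.

```python
# Illustrative sketch of the ruler analogy: calibrate a suspect ruler against
# gauge blocks of known length, estimate its average scale error, and correct
# a raw measurement. All numbers are invented for the example.

def calibration_factor(ruler_readings_mm, true_lengths_mm):
    """Average ratio of true length to ruler reading over the gauge blocks."""
    ratios = [true / read for read, true in zip(ruler_readings_mm, true_lengths_mm)]
    return sum(ratios) / len(ratios)

# Gauge blocks of known size, and what the suspect ruler reads for each.
true_lengths = [100.0, 200.0, 500.0]
ruler_reads  = [102.0, 204.1, 510.2]

factor = calibration_factor(ruler_reads, true_lengths)

raw_table_reading = 900.0               # the coffee-table measurement
corrected = raw_table_reading * factor  # roughly 882 mm once corrected
print(f"scale factor {factor:.4f}, corrected length {corrected:.1f} mm")
```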
I am sure you have heard of the acronym GIGO, garbage in = garbage out: unless researchers are using climate modeling software that has been V&V’d, the results are completely meaningless on a scientific level. In other words, computer models that show a warming trend due to a buildup of CO2 in the atmosphere are meaningless.
(William M. Connolley 18:10, 2004 Mar 3 (UTC)) No. What's happening is that you're coming in from a totally different field and trying to use your old knowledge in a new area. And so you're making obvious mistakes.
Different area, true. But this does not mean that the methodology I use to generate, run, evaluate, and interpret predictive modeling software is any different from the methodology used in any other field. Whether it's finite element analysis, structural modeling, reaction modeling [like CHEMKIN], or CFD, all users and programmers must go through a standardized process of writing and testing the predictive modeling software, and users must also verify through a standardized process that the inputs used and results generated are within the known errors and deviations of the modeling software. No one knows what the errors and deviations of climate modeling are, because no one has taken the steps to prove that it works.
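For what a minimal version of that two-step process could look like in code, here is a hedged sketch: the "model" is a toy numerical decay solver, "verification" checks it against the exact analytical solution, and "validation" checks it against a fictional measurement within a stated uncertainty. The model, the measurement, and the tolerances are all invented for this example.

```python
# Illustrative sketch of verification vs. validation in the usual V&V sense:
# "verification" checks that the code solves its own equations correctly
# (here, against a known analytical solution), while "validation" checks its
# output against an independent measurement within a stated uncertainty.

import math

def toy_model(decay_rate, t, dt=0.001):
    """Stand-in 'predictive model': exponential decay integrated numerically."""
    value = 1.0
    for _ in range(int(t / dt)):
        value -= decay_rate * value * dt
    return value

k, t = 0.5, 2.0

# Verification: the numerical answer should reproduce the exact solution exp(-k*t).
assert abs(toy_model(k, t) - math.exp(-k * t)) < 1e-3

# Validation: the answer should also agree with a (fictional) field measurement
# to within that measurement's known uncertainty.
measured, uncertainty = 0.37, 0.02
assert abs(toy_model(k, t) - measured) < uncertainty

print("verification and validation checks passed")
```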
So what obvious mistake am I making? Is it a mistake to question the validity of climate data derived from unverified and unvalidated climate models?
Failure to read the existing literature, such as the IPCC report.
You are right, I am coming from a totally different field. My “field” forces me to validate my work against known quantities and to be able to back up and verify my work through lab and field testing of my analyses. I think the difference is that if I make a mistake, I get fired; if you make a mistake, you get a bigger grant. TDC 16:44, 4 Mar 2004 (UTC)
Based on what I have seen and heard about the climate modeling software used to make global warming predictions, this V&V does not exist, and in fact researchers are modifying key input parameters so as to drive the model toward the result their intuition tells them it should reach. Predictive modeling is supposed to tell an individual what to expect, not the other way around.
  • the Pinatubo predictions were quite good, as I recall
The initial results for the Pinatubo eruption were dramatically revised months in, as more data became available and researchers could see that the developing trends were not following what had been predicted earlier.
(William M. Connolley 18:10, 2004 Mar 3 (UTC)) Well, this is all rather sourceless. Try http://www.grida.no/climate/ipcc_tar/wg1/449.htm for some Pinatubo results that are well sourced.
As far as sources go, I have performed many V&V’s on all kinds of predictive software, as well as writing standards and procedures for the use and interpretation of results, and I consider myself very knowledgeable on the subject.
TDC 16:26, 3 Mar 2004 (UTC)

(William M. Connolley 18:10, 2004 Mar 3 (UTC)) If you're so knowledgeable about this, you'll understand the importance of properly citing your sources. Not your qualifications. So... where are they?
