Channel: Comments for Climate Etc.

Comment on Assessing climate model software quality by Steven Mosher


Oreskes did peer review. How weird is that?


Comment on Assessing climate model software quality by Steven Mosher


Have you even looked at input parameters? I guess not.

Comment on Assessing climate model software quality by Steven Mosher


Yes, I have seen some sensitivity analyses run. But a full parameter test grid is not feasible due to runtimes. The approach is to run fractional factorials and create an emulation, then run the emulation on the full space, then check the emulation with additional runs of the full model.
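
A minimal sketch of that workflow in Python – a cheap toy function stands in for the expensive climate model, and the design size, the Gaussian-process emulator, and the number of check runs are illustrative assumptions, not any modeling group's actual setup:

    import numpy as np
    from itertools import product
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    def expensive_model(x):
        """Toy stand-in for a full model run over 3 input parameters."""
        return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

    # 1. Fractional design: a subset of the full 5^3 grid (a random fraction
    #    here; a real study would use a structured fractional factorial).
    levels = np.linspace(0.0, 1.0, 5)
    full_grid = np.array(list(product(levels, repeat=3)))   # 125 points
    design = rng.choice(len(full_grid), size=25, replace=False)
    X_design = full_grid[design]
    y_design = expensive_model(X_design)                    # the "model runs"

    # 2. Fit the emulator on the fractional design.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)
    gp.fit(X_design, y_design)

    # 3. Run the cheap emulator over the full parameter space.
    y_emulated = gp.predict(full_grid)

    # 4. Check the emulator with a few additional full-model runs.
    check = rng.choice(len(full_grid), size=10, replace=False)
    err = np.abs(gp.predict(full_grid[check]) - expensive_model(full_grid[check]))
    print(f"max emulator error on check runs: {err.max():.4f}")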

Comment on Week in review 4/13/12 by Captain Kangaroo

Comment on Assessing climate model software quality by Philip Lee


DocMartyn quoted me in an unfair way – giving only part of the statement while leaving off the full quote: “Second, it is a mistake in modeling to expect computer models to reveal new knowledge. They may, but it is likely they will reveal only our scientific expectations.”

Because of the errors (beyond the misquotation), I’d like to elaborate on the point I made. One great question of mathematical physics is whether the point-mass model of the solar system, a Newtonian N-body problem, is stable. If that could be established, it would represent new knowledge about the N-body problem and open new areas of research on the motion of the physical solar system. That new knowledge could never be produced by any computer model. Nor could a computer model disprove stability.

Any computer model of this problem isn’t even the math model of the problem: it involves approximations of non-linear differential equations whose errors grow because of the non-linearity, so the computed solutions eventually diverge from the mathematical one.
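
A minimal sketch of that divergence, assuming a toy planar two-body problem and a deliberately crude explicit-Euler integrator: the same orbit computed at two step sizes drifts measurably apart, and no finite step size makes the computed orbit the mathematical one.

    import numpy as np

    def accel(r):
        """Point-mass gravity toward the origin (GM = 1)."""
        return -r / np.linalg.norm(r) ** 3

    def integrate(dt, n_steps):
        r = np.array([1.0, 0.0])   # initial position
        v = np.array([0.0, 1.1])   # slightly eccentric bound orbit
        for _ in range(n_steps):
            # Explicit Euler: its truncation error feeds on the nonlinearity.
            r, v = r + dt * v, v + dt * accel(r)
        return r

    t_end = 50.0
    coarse = integrate(0.01, int(t_end / 0.01))
    fine = integrate(0.001, int(t_end / 0.001))
    print("separation between the two computed orbits:",
          np.linalg.norm(coarse - fine))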

Even though we can build SW that can guide us around the moon and planets, that SW creates no new knowledge about the mathematical N-body problem. But SW orbital models have helped us understand our lumpy earth through the deviations the earth produced in satellite orbits; we demonstrated we understood those deviations by modeling the lumps and matching prediction against experience with nature.

GCMs can’t predict the climate next year — nature is telling us that we don’t understand it well even if the SW has zero defects. There are a lot of reasons for that failure — failure to understand the science is the concern now, not SW quality.

Comment on Psychological(?) effects of global warming by Erica


Timg56
Do you have some references on this Ozone Hole stuff?

Comment on Assessing climate model software quality by Philip Lee


My opening paragraph just posted was truncated — it should have been: [DocMartyn quoted me in an unfair way – giving only part of the statement: “Second, it is a mistake in modeling to expect computer models to reveal new knowledge”
That is the most insane thing I have read in a while.
This is precisely what models in science-based fields do all the time. Indeed, the major difference between a model and a fit is that models are predictive in multiple dimensions.]

Comment on Week in review 4/13/12 by Erica


“I’ve become convinced that many of the editors of the high impact journals are inclined to cast opinion pieces”
What was that bit by a Climate-gater about ‘redefining peer review’ again?
The whole idea was to make science journals like newspapers, wasn’t it, i.e. pushing a particular line to which the funder and editor are precommitted?


Comment on Lindzen et al.: response and parry by Bart R


John S. | April 16, 2012 at 9:50 pm |

One is amazed by what you believe you can deduce about a stranger on the Internet, when you so disparage readings ‘mainly from the Internet’.

Alas, that you use so little of the Internet in forming the foundation of your deductions, else you might have noticed that Mosher and I have a longstanding and rather bitter difference of opinion on exactly the views of his that you attribute to me. I’ve called some of his work “Harry Potter Statistics,” and excoriated his practice of using snippets of impossible-to-verify Canadian data without understanding the limitations of his source, in a manner not dissimilar to what you say BEST does. (Considering Steve Mosher’s close connections to a small part of the project, it’s not hard to see why you may be confusing him with the whole project.)

Note, that’s his statistics, his practices, his opinions and his views. As a human being, I’ve never met Mr. Mosher. I don’t know him personally and have nothing against him; for all that there is a wide chasm of beliefs and attitudes between us, I regard highly his determination, cleverness and resourcefulness, and respect his professional achievements and substantial body of support for the advancement of science.

Though he is a polemicist (and worse.. he’s good at it), he reminds scientists to take seriously their obligations, rather than to take themselves too seriously.

You, on the other hand, appear to seriously believe you’ve set out your case in enough detail that we can all psychically guess what you mean without you being under the least onus to say what exactly that is.

How is one to take such voodoo argumentation as anything but superstition, or less?

Now that you’ve supplemented your hand-waving with actual detail, the epithet ‘superstition’ is clearly inappropriate.

‘Elitism’, it appears, is more apt.

Internet sources aren’t good enough. Seven levels of statistical validation of methods and open peer review aren’t good enough. The idea that there may be other experienced statisticians in the world who might have handled larger and more complex datasets than your particular favorite ‘experienced geoscientists’ (who remain nameless?) never occurs to you. The idea that there may be other factors than were considered by Oke some four decades ago, with less than one percent of the data currently available to BEST, and by other qualified geoscientists lately before BEST, never occurs to you. (While Socrates, who also originated the UHI idea, somewhat before the 1800s and not in London, goes uncredited by you.) The idea that your personal reading, which apparently ended when Jerry Maguire shouted, “Show me the money!”, might have been supplanted by more recent developments never occurs to you.

If incisive analysis is required for science, then your argument is doomed.

Rejecting mathematical truths and replacing them with outdated opinion is the act of an old guard that cannot come to grips with time passing them by.

Comment on Assessing climate model software quality by Jim2


Subpar usability or bad coding practices aren’t bugs; bugs WILL cause bad output. A project to improve usability or ease maintenance of the code would be an enhancement. A project to fix what truly are bugs would be, well, bug fixes.

Comment on UQ by David Wojick


The ORNL statement seems to imply that the Liang et al. approach will not work for climate-type models, and I agree. Liang et al. are looking at relatively simple mechanics models that are applied many, many times to actual situations, so they are extensively tested. This means, for example, that one can do Monte Carlo analysis. Climate models cannot be tested even once, much less many times, for the reasons given by ORNL.
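
A minimal sketch of the contrast, assuming a hypothetical projectile model and input distributions: a cheap, well-understood mechanics model can be run a hundred thousand times for Monte Carlo UQ, which is exactly what a climate model’s runtime rules out.

    import numpy as np

    rng = np.random.default_rng(1)
    g = 9.81  # m/s^2

    def projectile_range(v0, theta):
        """Range of an ideal projectile on flat ground."""
        return v0 ** 2 * np.sin(2 * theta) / g

    # Uncertain inputs: launch speed and angle with measurement scatter.
    n = 100_000
    v0 = rng.normal(100.0, 2.0, size=n)                        # m/s
    theta = rng.normal(np.radians(40), np.radians(1), size=n)  # radians

    ranges = projectile_range(v0, theta)
    print(f"range = {ranges.mean():.0f} m +/- {ranges.std():.0f} m (1 sigma)")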

Also, most of the Liang analysis and UQ method seems to be focused on what they call numerical error, as opposed to model form error. But in the climate case it is arguably model form error that is most significant, by far, especially the twin terrors of feedback and natural variability.

In short, UQ is unlikely to be possible in cases where one does not understand the underlying physical processes, especially if they are likely nonlinear. Premature UQ of climate models may well cause more problems than it solves, just as we see with quantitative CO2 sensitivity estimates. If you don’t know what you are doing, it is probably not possible to quantify by how much.

Comment on Assessing climate model software quality by Steve Milesworthy


A lot of warming (compared with the temperature record), estimated to within a few tenths of a degree, sustained for over 30 years, and observationally supported by detailed analysis of the oceans, the atmosphere, and the cryosphere, is not a one-in-three chance event.

Comment on UQ by Bernie Schreiver

The attention of Climate etc. readers is directed to the National Agency for Finite Element Methods and Standards (NAFEMS, http://www.nafems.org/tech/), which despite the word “national” in its name is the world’s premier standards authority for engineering simulation software and training. The word “training” is important because, generally speaking, a skilled simulationist running buggy software will consistently obtain predictions that are more accurate and more reliable than an unskilled simulationist running perfect software. The common-sense point of NAFEMS (and its journals) is that experience has established that devoting resources to software V&V is futile unless the V&V is accompanied by matching investments in human training in the software’s foundations in physics and mathematics.

Comment on Assessing climate model software quality by Jim2


Last time I checked, not one of Hansen’s three scenarios was tracking the global atmospheric temperature.

Comment on Letter to the dragon slayers by Pete Ridley


I assume from Dougy’s earlier comments (e.g. 16th February at 4:52 pm) that when he talked about “ .. taking any legal action against perpetrators of this criminal hoax .. ” (ref. 16th April at 8:36 pm) he was referring to those who are deliberately supporting the Catastrophic Anthropogenic Climate Change (CACC) hypothesis for personal gain, financial or otherwise. There appear to be several hoaxes running alongside each other, and they involve not only those perpetrators who support CACC but also those who reject it.

Although there are far more reliable sources, Wikipedia appears to be Doug’s favourite source of information (scientific or otherwise), so here’s a link to the definition there of “hoax” (http://en.wikipedia.org/wiki/Hoax). Note those words “deliberately fabricated falsehood made to masquerade as truth”, because in my opinion there are relatively few, on both sides of the CACC debate, who are deliberately fabricating falsehoods.

In the comment here on 15th Oct. at 2:46 pm, MattStat’s QUOTE: .. The word “hoax” would be unfortunate .. UNQUOTE almost hits the nail on the head with “ .. interested parties are demanding the transfer of monies to themselves, their companies .. and so forth .. ”. I say almost because of that word “demanding”, which I would replace with “inviting”. There is a web-site “Hoax-Slayer” (http://www.hoax-slayer.com/about.html) run by Australian Brett M. Christensen, and he has a page of “Charity Hoaxes” (http://www.hoax-slayer.com/charity-hoaxes.html). Charitable appeals, usually made along the lines of “ .. Please help .. We need funds .. ”, can be just as much a hoax as any demand for money.

I doubt very much that anyone who has been reading the comments here since 26th Oct. (http://judithcurry.com/2011/10/15/letter-to-the-dragon-slayers/#comment-128005) would have imagined that “ .. PSI would have anything to do with taking any legal action against perpetrators of this criminal hoax .. ”. As I said on 7th Nov., on 4th Jan. (2011) John O’Sullivan (head “Slayer”, PSI CEO and “Legal Consultant”) said “ .. The Slayers know we are a team operating a book authoring and publishing business and that I’m running this aspect for profit .. my concerns are for the book publishing core of the group .. ” (http://judithcurry.com/2011/10/15/letter-to-the-dragon-slayers/#comment-134549).

John was claiming in Dec. 2010/Jan. 2011 that “ .. beating the AGW fraud in the courts-its the only serious game in town .. My legal associates and I are asking your support to help raise funds for our next objective: defeating NASA GISS, GHCN and NOAA in the federal court in Washington D.C. .. what is needed is an effective fund raising strategy .. ”. As I pointed out to Bryan here on 14th Dec., PSI is now simply another publishing organisation, and an insignificant one at that (http://judithcurry.com/2011/10/15/letter-to-the-dragon-slayers/#comment-149933).

Maybe, in the spirit of transparency that is claimed to be so important to PSI, dear old Dougy will enlighten us as to who he understands will be taking legal action against which perpetrators of which criminal hoax.

Best regards, Pete Ridley


Comment on UQ by Jim2


So, Bernie, what does the skilled simulationist have to do to obtain good results? Can you give some specific scenarios, and say what would be done to the simulation to obtain “more accurate” predictions?

Comment on UQ by jbmckim


I don’t think there’s anything in David’s post to improve on. That is exactly the problem I have with the climate science community’s approach to modeling. They seem generally to give it weight that is disproportionate to what it can reasonably be expected to provide.

Comment on UQ by robin


I think it would be good to first define the level of software quality required. Not all software needs to be bulletproof, and there are a lot of competing requirements beyond quality – time to build, cost, speed, runtime size, usability, ease of iteration, maintenance, ease of extension, use by others, accuracy, language, target machines, etc.

It may well be that climate software doesn’t need to be high quality. That should be discussed first, decided, and stated in the spec (and even if low quality is fine, there should still be at least a ‘goals’-type document!). High quality is a bit of a misunderstood concept in software – people assume good software contains no errors, or that software can’t be useful unless it is high quality, but that isn’t the case. Would I step onto a plane running on code like the climate models? No I wouldn’t, but I’m not stepping onto a plane. Is my home camera lens good enough for the Hubble? No again. Your goals are determined by your project. That said, the two climate code bases I’ve looked at could benefit from a more methodical approach to design and testing; as they stand, they read more like prototype code.
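
As a minimal sketch of the sort of methodical testing meant here (the trapezoid-rule kernel and the tolerance are hypothetical stand-ins, not taken from any climate code base): check a numerical kernel against a known analytic answer so regressions are caught automatically.

    import numpy as np

    def trapezoid_integrate(f, a, b, n):
        """Stand-in numerical kernel: composite trapezoid rule."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    def test_trapezoid_against_analytic():
        # The integral of sin on [0, pi] is exactly 2; the tolerance matches
        # the method's known O(h^2) error for n = 1000 panels.
        approx = trapezoid_integrate(np.sin, 0.0, np.pi, 1000)
        assert abs(approx - 2.0) < 1e-5

    test_trapezoid_against_analytic()
    print("kernel matches the analytic result")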

What is counterproductive is when the code is clearly not ‘high quality’ and someone does a ‘study’ that claims it is. I understand that there will be lots of grief in explaining how lower quality may still suit their needs, but making false claims about it just throws trust in the garbage.

Comment on UQ by David Wojick


I agree that getting the math and science right is far more important than getting the code right. But in the case of climate this is not a matter of training; rather, it is a matter of science. The models are no good because we do not yet understand how climate works – the math as well as the physics. The ORNL fact sheet is pretty good in this regard.

Comment on UQ by jbmckim


I’m having trouble disagreeing with anyone today. That is usually not my problem. I was the lead architect and one of the engineers on an AI project about 15 years ago. The software eventually proved quite helpful when used by users who understood the way the problem (telephony sales configuration) had to be approached. However, only a segment of the user community could ever use it effectively, because their management (and sadly mine) assumed the software would take care of everything. Actually it did, but you had to understand the limitations the software had in framing the problem.
