Channel: Comments for Climate Etc.

Comment on Industry funding and bias by justinwonder

“…past climates…”

Sure, today’s fossil fuels come from life in the past. Whatever the motivation, I am all for paleoclimate research.


Comment on Industry funding and bias by Ron Graf

Yes. It seems we only demand that science be careful when mistakes lead to immediate catastrophic danger to life, as there is in pharma research. If we demanded the same kind of quality out of all science as we do of medicine, and, as Dr. Michael Crichton suggested, had competing interests concurrently do the same study in open competition, it would be very hard to broadcast junk conclusions.

The losers in those competitions would have dings against them that would impair their future funding, kind of like the records used to pair boxing matches.

Comment on Industry funding and bias by Stanton Brown

Crichton’s concern is for quality control. Scientists don’t understand quality control. Totally foreign concept.

Comment on Industry funding and bias by Daniel E Hofford

The Voters spoke.
Prop 39 passed.
The money flowed.
And Hilarity ensued.

We have no data by which to measure anything, and no plan to collect any, but I’m absolutely sure everything is peachy keen and anyone who doubts it is just a hater who is against children learning science.

“Only in California” used to be the saying in the rest of the country. Thanks to Global Warming, the California model is now global in the US.

No meeting in three years, because there is no pretense that this is a serious program with any interest in whether it works. Typical of the Left: for them the core of reality lies in appearance and in how it makes them feel about themselves. But I guarantee you that the next meeting, about continuing the funding, will be held.

Tracking data to measure results would mean they didn’t BELIEVE in the program, and that would cause great emotional dissonance. Better not to look and KNOW FOR SURE that it’s working.

‘Why should I give you my data,’ said the Climate Scientist,
‘if you’re just going to show me I’m wrong?’

‘Why should we bother with tracking data or follow up studies,’ asks the Progressive Puke, ‘if there’s a chance it will prove our ideas are just utter slop?’

Comment on Industry funding and bias by David Wojick

Indeed, Reality Check. According to UAH, the 1970s-to-1990s warming that is the basis for AGW never happened. The warmers simply ignore this fact and rely on the questionable surface statistical models, from HadCRU to BEST, which all use convenience samples. Another fine case of paradigm protection, just as Kuhn described: ignore the data that does not support the paradigm.

Comment on Industry funding and bias by David Wojick

Interesting point, Verdeviewer, but I am not sure much of this money comes from the various USGCRP agency appropriations. There is a huge amount of climate-related money sloshing around in the Federal budget that is not part of the science budget. I have seen estimates of 20 to 30 billion dollars. But then, Federal funding is an unbelievably complex dark passage.

Comment on Industry funding and bias by jddohio

As someone who practiced workers’ compensation law for 17 years and frequently deposed expert witnesses (mostly specialist physicians), my opinion is that bias is intrinsic to human beings and is always present. The question is not whether there is bias, but whether a researcher is aware of his or her own biases and has the self-awareness and honesty to put them aside when the science comes down in a manner that challenges the researcher’s innate bias. It can be put aside, but it is almost always there.

I should also add that in trying cases before juries, it becomes apparent that people from different walks of life have highly predictable biases. For instance, in all my years of workers comp trials, I never had a farmer or insurance agent decide in favor of a worker. On the other hand, people with back injuries tend to be highly sympathetic to workers with back injuries.

Of course, government scientists and university scientists quite often have huge biases, but a large number of them are so simple minded and lacking in self-awareness that they are oblivious to their own bias.

JD

Comment on Week in review – science edition by Berényi Péter

“No Consensus: Earth’s Top of Atmosphere Energy Imbalance in CMIP5-Archived (IPCC AR5) Climate Models”
http://ift.tt/1J174nB

The 21 computational climate models included in CMIP5 clearly show that most model pairs are mutually inconsistent. It cannot be the case that both members of an inconsistent pair are “true”, so the ensemble average includes false representations, which makes it utterly meaningless. The fact that climate scientists are still unable to identify the inadequate members of the ensemble does not make this average any less meaningless, for we do know that such a subset exists. This is pure logic so far, and it shows how weak the logical relation “consistent with” is; it is not even transitive (https://en.wikipedia.org/wiki/Transitive_relation). Consequently one can still construct a “reality” which is consistent with every computational model included in the intercomparison, provided the error bars are made large enough. Unfortunately actual reality is not like that: some observations are well constrained, because observational datasets are not infinitely flexible, however hard one tries to make them so. Therefore some computational models in CMIP5 are already falsified by past observations. If that is so, they should be identified clearly, excluded from further evaluation, and the projects behind them discontinued and defunded.

It would still make sense to construct an “inconsistency matrix” of climate models, irrespective of observations; in this case projections for the future can also be included. The idea is to run each model multiple times under the same set of “scenarios”, that is, the same history of “forcings” such as well-mixed greenhouse gases, aerosols, airborne black carbon, volcanoes, solar variability, etc. Consider not only a single variable like the global average surface temperature anomaly, but many: actual average surface temperatures in several broad latitudinal bands, over both ocean and land; cloud fraction; average tropospheric humidity and temperature at different elevations; absorbed solar and outgoing longwave radiation flux; average wind speed; rate of evaporation at the surface and vertical turbulent mixing in the oceans; interhemispheric differences of said quantities; and so on. Each run yields a trajectory (a crooked line) in the multi-dimensional phase space of these variables, and multiple runs of the same model under the same scenario from slightly different initial conditions yield a (rather loose) bundle of such trajectories, because the system is chaotic. The cross section of the bundle at a particular instant is a set of points. Cover it with a multi-dimensional blob that contains their convex hull (https://en.wikipedia.org/wiki/Convex_hull) and some more, for the sake of safety. If these blobs are joined along the bundle, we get a multi-dimensional “tube” specific to a (model; scenario) pair.

Now we are in a position to compare two models under the same scenario. If at any instant their “blobs” are disjoint sets, the models are said to be inconsistent with each other under that scenario; otherwise they are consistent under that particular scenario. If a pair of models is consistent under all scenarios, it is a consistent pair.

Please note it can still be the case that a set of models contains only consistent pairs while the set itself is inconsistent, because the intersections of three sets can be non-empty pairwise while their full intersection is empty (like {A;B}, {B;C}, {C;A}). With that in mind we can define a maximal consistent set of models as a set for which, under all scenarios and at all instants, the intersection of the models’ “blobs” is non-empty, but the addition of one more model destroys this property.

It is important to note that under this definition a model is necessarily consistent with itself. If the procedure described above is carried out for a pair of models that are identical in everything but name and the result is “inconsistent”, then the models were not run enough times to get a clear picture of their behavior; that is, their bundles are sparse. In that case the model should be run more times from slightly perturbed initial conditions. If that cannot be done for lack of computational resources, the model is too complex from a computational point of view and should be disqualified, because its characteristic behavior cannot be determined, so it is useless.

I wonder which models are disqualified in CMIP5, what the maximal consistent sets are, and what their respective ensemble averages are (or, even better, the intersections of their tubes for each scenario, because those intersections are continuous tubes themselves by definition). I am quite sure several models included in CMIP5 should be disqualified, and that the rest do not form a single maximal consistent set, so no ensemble average makes sense. What is more, I am afraid that under this definition every maximal consistent set has but a single member, which would make all averages meaningless. However, one cannot be sure until the job is done. I wonder what the CMIP folks (http://cmip-pcmdi.llnl.gov/) are doing if none of the tasks outlined above has been accomplished so far; or, if it has been done, why it is not advertised. The CMIP5 experiment design document (http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf) is not encouraging.
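
As a concreteness check, here is a minimal Python sketch of the bookkeeping this procedure requires. It uses padded axis-aligned boxes per time step as a simple stand-in for the padded convex hulls described above, and the model names and run arrays below are synthetic placeholders, not CMIP5 output.

    import numpy as np
    from itertools import combinations

    def blobs(runs, pad=0.1):
        # runs: array (n_runs, n_times, n_vars) from slightly perturbed initial conditions.
        # Returns padded per-time-step bounding boxes (lo, hi), each (n_times, n_vars),
        # as a box stand-in for the "convex hull plus some more" described above.
        lo, hi = runs.min(axis=0), runs.max(axis=0)
        margin = pad * (hi - lo + 1e-12)
        return lo - margin, hi + margin

    def pair_consistent(blob_a, blob_b):
        # Two models are consistent (under one scenario) if their blobs overlap
        # at every instant in every variable.
        (lo_a, hi_a), (lo_b, hi_b) = blob_a, blob_b
        return bool(((lo_a <= hi_b) & (lo_b <= hi_a)).all())

    def jointly_consistent(blob_list):
        # A set of models is jointly consistent if the intersection of all their
        # blobs is non-empty at every instant.
        lo = np.max([lo for lo, _ in blob_list], axis=0)
        hi = np.min([hi for _, hi in blob_list], axis=0)
        return bool((lo <= hi).all())

    def maximal_consistent_sets(model_blobs):
        # Brute-force search for maximal jointly consistent subsets;
        # fine for an ensemble of a couple of dozen models.
        names = list(model_blobs)
        found = []
        for r in range(len(names), 0, -1):
            for subset in combinations(names, r):
                if any(set(subset) <= s for s in found):
                    continue  # already contained in a larger consistent set
                if jointly_consistent([model_blobs[n] for n in subset]):
                    found.append(set(subset))
        return found

    # Synthetic example: three fake "models", 10 runs each, 50 time steps, 4 variables.
    rng = np.random.default_rng(0)
    fake_runs = {name: rng.normal(loc=i, scale=1.0, size=(10, 50, 4))
                 for i, name in enumerate(["modelA", "modelB", "modelC"])}
    fake_blobs = {name: blobs(r) for name, r in fake_runs.items()}
    print(pair_consistent(fake_blobs["modelA"], fake_blobs["modelC"]))
    print(maximal_consistent_sets(fake_blobs))

With real archive data, each runs array would hold the ensemble members of one model under one scenario, and a set would only count as consistent if the joint-intersection test passed under every scenario, exactly as the definition above requires.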

Comment on Industry funding and bias by Stephen Segrest

Comment on Industry funding and bias by David Wojick

The political appointees certainly play a role, but in the climate science case much of the paradigm-protecting bias exists at the career level. Gore in particular staffed up the USGCRP agency program offices with CAGW alarmists. We skeptics were appalled that when Bush came in he left most of those people in place, as he had no interest in the climate science issue.

Comment on Week in review – science edition by Joel Williams

The “hockey-stick” data relative to the Vostok Ice Core data for the last 1000 years is shown in the figure below. This is one of the four plots for the time period 0-10000 ybp. Only the 0-10000 ybp regression of the Ice Core data gives a downward trend for the last 1000 years. The Ice Core data is also overlaid on a graph with many models for the NH. There is reverse behavior around 1600 AD! Has the “hockey-blade” been smoothed over long time periods, as the “stick” has been?
The plots can be found at: http://gsjournal.net/Science-Journals/%7B$cat_name%7D/View/6170

Comment on Week in review – science edition by Joel Williams

Comment on Week in review – science edition by anng

You need words to explain anything. Maths is merely symbols and calculations. When the maths gets complicated, e.g. in particle physics, the words just muddy the waters as people take them literally instead of as very approximate analogies.

Comment on Week in review – science edition by jim2

Language is composed of mere symbols.

Comment on Industry funding and bias by Arch Stanton


Comment on Week in review – science edition by anng

http://ww2.kqed.org/mindshift/2015/08/04/seeing-struggling-math-learners-as-sense-makers-not-mistake-makers/
“My goal is for them to become the truthmakers,” Wees said. “I’m trying to build a mathematical community where something is true when everyone agrees it’s true.” To do that, he asks students to talk through mathematical ideas, struggle with them and give one another feedback. “A major goal of math classrooms should be to develop people who look for evidence and try to prove that things are true or not true,” Wees said. “You can do that at any age”

So he turns maths into people-based consensus!

Not a good start to the children’s maths career …

Comment on Industry funding and bias by aaron

Or, looking at it another way, another 57 ppm CO2-equivalent for the forcing to catch up with current temperatures.

Or, another 28 years for the forcing to catch up with current temperatures at current CO2 growth rates.

(Of course this ignores other man-made forcings to date (other gases, land use…) and any feedbacks.)
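
For what it is worth, the arithmetic behind those two numbers can be reproduced with the widely used simplified CO2 forcing approximation; note that the ~400 ppm baseline and ~2 ppm/yr growth rate in this sketch are my assumptions, not figures taken from the comment.

    import math

    baseline_ppm = 400.0          # assumed present-day concentration (not from the comment)
    extra_ppm = 57.0              # CO2-equivalent gap quoted in the comment
    growth_ppm_per_year = 2.0     # assumed recent growth rate (not from the comment)

    # Simplified CO2 forcing approximation: dF = 5.35 * ln(C / C0)  [W/m^2]
    extra_forcing = 5.35 * math.log((baseline_ppm + extra_ppm) / baseline_ppm)
    years_to_close = extra_ppm / growth_ppm_per_year

    print(f"Extra forcing from +{extra_ppm:.0f} ppm: about {extra_forcing:.2f} W/m^2")
    print(f"Years to add it at {growth_ppm_per_year:.0f} ppm/yr: about {years_to_close:.0f}")

The ~28-year figure falls out directly from 57 ppm divided by the assumed 2 ppm/yr growth rate.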

Comment on Industry funding and bias by Michael

“Huge swathes of ‘climate science’ are drivel.” – Lati

Is there anything other than your personal opinion here?

Comment on Mark Steyn’s new book on Michael Mann by captdallas2 0.8 +/- 0.3

John Sidels: “Conclusion: The world’s mathematicians, scientists, and engineers regard a great portion of anti-Mann rhetoric to be ill-founded faux-mathematical bafflegab.”

Nice, it only takes one. You have read MM05, right? Here is another:
http://www.clim-past-discuss.net/6/659/2010/cpd-6-659-2010-print.pdf

That paper suggests comparing uni-proxy reconstructions to multi-proxy reconstructions.

Those are the Marcott et al. close-to-uni-proxies, plus whatever “cap” reconstructions I could find to extend the lower-frequency members; I did include two TEX86, though. Kind of hard to see the average.

Now I welcome you to actually read about the issues, but I find it refreshing that you have such faith in your peers. Reminds me of my childhood.

Comment on Industry funding and bias by fulltimetumbleweed/tumbleweedstumbling

Absolutely agree. Proper, real peer review and full disclosure of all data, raw and otherwise, with no hiding behind “intellectual property rights,” would allow everyone else to evaluate a paper and decide how valid the research is. The current climate, in which some research money is considered bad, has been fostered by the IPCC crowd of scientists: many of them won’t share data fully and hide their methodology in “black box” models because they know their work is inadequate and won’t stand up to real scrutiny, yet some of the research that the “skeptics” and “deniers” have done is properly done, everything is shared with anyone who asks, and the results are easily replicable. I think those who cry about the foul nature of industry funding need to be looked at just as carefully as the people doing research on industry funding. Those who screech about industry funding may well be using it as an excuse to shield their own inadequate work.
