Channel: Comments for Climate Etc.

Comment on Week in review by R. Gates aka Skeptical Warmist

Jim D. asks the Big Money question:

“What would cause a Holocene recovery when the downward trend due to Milankovitch was expected to continue past the LIA?”

—-
This is exactly the point of the general validity of the hockey stick. Since the Holocene climate optimum, the general trend has been a slow cooling, with periods that varied up (Roman Warm Period, MWP) and down (LIA), but the general shaft of the hockey stick had been trending slowly downward. Now we have a period that is trending up in a way that basic physics, climate models, and the vast majority of scientists say is at least partially an anthropogenically forced upward “blade” on that hockey stick. This doesn’t seem to sit well with certain memeplexes.


Comment on Unforced variability and the global warming slow down by Frederick Colbourne (@FredColbourne)

The problem as understood from polynomial cointegration tests is that the combined walk seems to be that of a drunkard and her dog.
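Murray’s classic “drunkard and her dog” picture of cointegration is easy to simulate. The sketch below is illustrative only: the leash strength and noise scales are arbitrary choices, not estimates from any climate series.

```python
import numpy as np

# Murray's (1994) "drunkard and her dog" picture of cointegration:
# each series wanders like a random walk, but an error-correction
# "leash" keeps the difference between them stationary.
rng = np.random.default_rng(0)
n = 2000
drunkard = np.zeros(n)
dog = np.zeros(n)
leash = 0.2  # illustrative pull strength, not an estimated parameter

for t in range(1, n):
    drunkard[t] = drunkard[t - 1] + rng.normal()
    # the dog wanders too, but is pulled back toward the drunkard
    dog[t] = dog[t - 1] + leash * (drunkard[t - 1] - dog[t - 1]) + rng.normal()

gap = drunkard - dog
# Each walk drifts without bound; the gap between them stays bounded.
print(f"spread of drunkard: {drunkard.std():.1f}, spread of gap: {gap.std():.1f}")
```

Individually, both series fail stationarity tests; it is only the leash (the error-correction term) that makes their difference stationary, which is what cointegration tests look for.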

In terms of this blog, the leash appears to exist, but appears to be damped in some way, so that global warming requires acceleration of the external forcing. This means the production of greenhouse gases must accelerate, not merely increase linearly.

Since the production of greenhouse gases is related to the growth of the world economy, there would have to be acceleration in the rate of economic growth. However, as has been well established, economic growth accelerates at the takeoff stage of an economy and gradually slows in accordance with the phenomenon of diminishing returns.

China, Russia and India are the major economies where economic growth is expected to accelerate and then slow down. If Beenstock and colleagues are correct, external forcing may well cause the climate to warm for a couple of generations, say 50 years.

If the author of this blog is correct, the oceans will absorb most of the energy and the Earth’s temperature will settle down a fraction of a degree warmer than now.

This suggests that the sky is not falling and there is no cause for alarm.

Polynomial cointegration tests of anthropogenic impact on global warming
M. Beenstock, Y. Reingewertz, and N. Paldor
http://www.earth-syst-dynam.net/3/173/2012/esd-3-173-2012.html

Comment on Warsaw Loss and Damage Mechanism: A climate for corruption? by mosomoso

I dunno, tonyb. One of the most severe of all El Nino events occurred within a year or two of the British settling around Port Jackson in 1788. As repeated monsoon failures brought on the Great Deccan Famine, the birds were dropping dead from the air in Sydney and at Parramatta. Coincidence?

It took us another hundred years before we could shake off the colonial yoke and have our own super-drought. (Can’t blame the poms for 1902, I s’pose, although it would be just like you imperialists to pinch all the rain on the way out and leave us with a bunch of useless CO2.)

Comment on Warsaw Loss and Damage Mechanism: A climate for corruption? by Antonio (AKA "Un físico")

Talking about the UN is talking about corruption. If anyone does not see this, it is because they are blind. What disappoints me is that most of this world’s state of corruption relies on “scientific” claims not based on actual science:
https://docs.google.com/file/d/0B4r_7eooq1u2VHpYemRBV3FQRjA
It is just a science invented by the UN, in order to generate political action led by the UN.
I guess that a hypothetical trial should include (in addition to Gunnar Myhre (CICERO), Drew Shindell (NASA), Gregory Flato (Env. Can.), Jochem Marotzke (MPI Met.), Matthew Collins (U. Exeter), Reto Knutti (ETH Zurich), and all IPCC members that “supervise” the six cited above) Christiana Figueres and all her supervisors in the UN.

Comment on Warsaw Loss and Damage Mechanism: A climate for corruption? by Mark Goldstone

Each time there is a natural disaster of any kind, developed countries pull together aid packages to assist. This might include expertise, military manpower, medics, food packages, blankets, temporary housing, etc. They package it up and ship it at their own expense to countries whose governments have not prepared in any way for the probable risks from natural disaster.

The aid suppliers never ask awkward questions like: Why did you have so many poor people living in slums in disaster-prone areas while your leaders live in palaces? Or how is it that your military is only good at suppressing the masses and has no training whatsoever in disaster relief? They simply get on with bringing relief to those that really need it.

On top of this, there are the countless millions that are given in charity funding each year so that relief can be directed to those who need it.

I am assuming that this constant flow of aid was ignored in the calculations.

To be honest, I know that there are some poor countries that are really struggling and have good leadership. However, for the most part, I couldn’t think of a better way of wasting money than to give it to some of these governments and why on Earth would we ever give it to the UN? So that they can form yet another committee filled with the pampered elite and celebrities?

To my way of thinking the only sensible form of aid is one that ensures that every last cent gets to the people who really need it and I don’t see how that would happen with so many parasites along the way.

Comment on Warsaw Loss and Damage Mechanism: A climate for corruption? by Herman Alexander Pope

Everyone should pay their Fair Share.

Since there is no actual DATA that shows there is a problem and since there is no actual DATA that shows manmade CO2 caused any of the damage that has not happened, the fair share is zero, or, there is a balance due to be repaid.

Comment on Data corruption by running mean ‘smoothers’ by Vaughan Pratt

@NiV: <i>A further improvement could to use a sinc-shaped filter to get a square-topped frequency response, passing through all the data for which signal is greater than noise. It depends.</i>

Quite right. The exact width and rolloff of each band determines the rate at which the side lobes decay. If you want a modestly square octave-wide frequency response you can get this using the <a href="https://en.wikipedia.org/wiki/Mexican_hat_wavelet" rel="nofollow">Mexican hat wavelet</a>, aka the Ricker wavelet, which decays the side lobes relatively quickly. Although it's far from obvious from the way I described the filters in my AGU spreadsheet, this is in fact how the four filters (serving to separate four octaves) actually worked in cascade.

To see this, download <a href="http://clim.stanford.edu/hadcrut3.xls" rel="nofollow">my spreadsheet</a> and set columns AN52:AN212 to zero. (Enter 0 in AN52, select the range AN52:AN212, and type ^D.) Now enter 0.4 in AN130. This will serve as the impulse (scaled to 0.4 so as not to require rescaling the plots), allowing you to observe the impulse response at each of the subsequent filters. There are four outputs to observe, which can be seen graphically two at a time in Figures 9 (SOL, accessed via the tabs at the bottom) and 12 (DEC) respectively. All four plots are approximate Mexican hat wavelets of progressively decreasing width as you go up in frequency, an octave (more or less) at a time. Nowadays I skip all this cascading and just work directly with the wavelets needed for any given band.

@NiV: <i>But this application makes no sense unless you have clearly defined what you mean by ‘signal’ and ‘noise’.</i>

A point that I had hoped would be evident from my poster, but which in hindsight clearly wasn't, is that it made no distinction between signal and noise but treated all frequency bands in the signal as equally important. <i>I discarded none.</i> The last figure, <a href="http://clim.stanford.edu/Fig11.jpg" rel="nofollow">Figure 11</a>, collects all the bands into three main groups, namely MUL (multidecadal), SOL (solar), and DEC (decadal). Their sum is <i>exactly</i> HadCRUT3. I discarded nothing!
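For readers without the spreadsheet, the band-separation idea can be sketched directly in numpy. The widths below are illustrative octave-spaced values, not the parameters of the filters in the poster:

```python
import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet with width parameter a, over `points` samples."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

# Band-separate a series with octave-spaced wavelet widths (illustrative values).
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=512))  # stand-in for a temperature record
for a in (4, 8, 16, 32):  # width doubles per band: roughly one octave each
    band = np.convolve(series, ricker(8 * a + 1, a), mode="same")
    print(f"width {a:2d}: band std {band.std():.2f}")
```

Because the wavelet has zero mean, each band passes no DC: each convolution extracts fluctuations around the scale set by its width, which is the sense in which cascaded filters of doubling width separate octaves.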

Comment on Data corruption by running mean ‘smoothers’ by Vaughan Pratt

In your CSALT interface, Web, why "Window"? Wouldn't "Filter" be more descriptive?

The uses of "triple running mean" that I've seen in the literature prior to 2013 refer either to a single running mean with a window size of 3 samples (what WoodForTrees gives when you select Mean and enter 3) or to three moving averages with quite different window sizes, e.g. 5, 10 and 20, such as <a href="http://www.stockdisciplines.com/triple-crossover-system" rel="nofollow">here</a>.

Meanwhile I ran across something puzzling on Wikipedia. <a href="https://en.wikipedia.org/wiki/Moving_average" rel="nofollow">The article on Moving Average</a> was edited on Nov. 10 by an anonymous editor with IP address 82.149.161.108 (in Georgsmarienhütte, Germany, perhaps?) who cited Goodman's blog as the source for the claim that "This solution [triple running mean] is often used in real-time audio filtering since it is compuationally quicker than other comparable filters such as a gaussian kernel." However, Goodman's blog does not mention audio, and the first time Goodman claimed any such thing here was on Nov. 22. It would be interesting to know where this editor got the information about real-time audio if not from Goodman personally. I seriously doubt it appears anywhere else older than Goodman's May 2013 blog post. It seems very likely Goodman simply made this up.

Wikipedia also gives Goodman's number of 1.3371 as the right ratio by which to decrease the window size each time. Clearly neither Goodman nor the editor understood the spreadsheet supporting my poster, which gives 0.4% as the maximum passed by my F3 past the cutoff (see cell Y279 on the main sheet); 1.3371 gives 0.9%, which is considerably worse. More on this in my reply just above to NiV.

If there were any truth to Goodman's claim that my technique is well known to audio engineers, he would not have had to guess at the right numbers, since the audio engineers would long ago have figured them out and he could simply have looked them up. Instead he tried to work them out himself, which turned out to be above his pay grade.
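For the record, here is what a "triple running mean" in Goodman's sense looks like as code. This is a minimal sketch: the 1.3371 window ratio is the disputed figure from the discussion above, and the starting window and test frequency are arbitrary illustrations, not anyone's published settings.

```python
import numpy as np

def running_mean(x, w):
    """Centered moving average of window w; edges trimmed ('valid' mode)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_running_mean(x, w1, ratio=1.3371):
    """Cascade three running means, shrinking the window by `ratio` each pass.

    1.3371 is Goodman's ratio from the thread above; Pratt argues other
    choices leak less, so treat it as one candidate, not the answer.
    """
    w2 = max(1, round(w1 / ratio))
    w3 = max(1, round(w2 / ratio))
    y = x
    for w in (w1, w2, w3):
        y = running_mean(y, w)
    return y

# A single 12-sample box leaks an 8-sample oscillation through a side lobe;
# the cascade's staggered nulls knock that leakage down dramatically.
t = np.arange(600)
x = np.sin(2 * np.pi * t / 8)  # pure oscillation in the stopband
single = running_mean(x, 12)
cascade = triple_running_mean(x, 12)
print(f"single-box leakage: {abs(single).max():.4f}, "
      f"cascade leakage: {abs(cascade).max():.4f}")
```

On a pure stopband sinusoid, the three windows place their frequency-response nulls at staggered positions, so the product of the three sinc-shaped responses is far smaller than any single box filter's side lobe.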

Comment on Reflection on reliability of climate models by Herman Alexander Pope

This model would be mildly useful for policy because it would tell policy makers that at least 20% of the temperature increase was beyond human control.

When the models show no skill, there is nothing to tell policy makers anything about what is beyond human control. Mainly, all of the temperature control is natural and beyond human control.

There is no actual data that shows humans have any control over Earth’s temperature or sea level.

Comment on Reflection on reliability of climate models by Wagathon

Don’t be silly–we know that climate over the period was not related to changes in atmospheric greenhouse gases so, irrespective of what the data may have shown–when looked at with a skeptical eye–the amount of ‘climate forcing’ supposedly due to changes in atmospheric greenhouse gases was obviously overstated or countervailing forces such as clouds were simply ignored by GCMs. And, we know that of the two the latter is what is happening: GCMs fail to account for changes in the Earth’s albedo due to clouds and do not account for the effect that clouds have on the amount of solar energy that is absorbed by the Earth.

Comment on Reflection on reliability of climate models by Jim Cripwell

Mi Cro you write “Now I expect you’ll protest some more,”

It is not that I am protesting. Everything you write is correct. There is empirical data we can use. The issue is that this data cannot prove whether or not the hypothesis of CAGW is correct.

We could only prove whether CAGW is true or not if we could keep all other conditions the same, increase how much CO2 there is in the atmosphere, and measure how much temperatures rise.

That controlled experiment we cannot do.

Comment on The 52% ‘consensus’ by A Climate Of Fear, Cash, And Correctitude | EPA Abuse

[…] worded survey. It also ignores the 700 climate scientists, 31,000 American scientists and 48% of US meteorologists who say there is no evidence that humans are causing dangerous climate […]

Comment on Reflection on reliability of climate models by Herman Alexander Pope

“Even if CO2 can warm the planet, we have no idea whether that effect overwhelms the centennial and millennial scale natural changes, or whether that effect is overwhelmed by natural changes.”

David, we do know that effect is overwhelmed by natural changes. We are much like a flea on an elephant.

Comment on Reflection on reliability of climate models by Mi Cro

@ Jim C,
But we’ve already done that part of the experiment, though our samples have to be taken over 50-60 years. And we have the data to do this. I have the data.
I’ve partially done this already, but unless I put the time in to publish this, which I don’t really have, it’s dismissed by Mosh as wrong :)

Actually I think he’d say it was wrong even if I did publish it, but that’s another topic.

Comment on Reflection on reliability of climate models by Herman Alexander Pope

“All we know for sure is that the process releasing the CO2 (burning fossil fuels) is so hugely beneficial it’s not unwarranted to say modern civilization would not be here without it nor can it continue without it.”

David, YES!!!!!!!!!!!!!!!!!!!!!!


Comment on Reflection on reliability of climate models by AK

<blockquote>the amount of ‘climate forcing’ supposedly due to changes in atmospheric greenhouse gases was obviously overstated or countervailing forces such as clouds were simply ignored by GCMs.</blockquote>

We don't know anything of the sort: the most probable explanation is that one or more <i>"countervailing force"</i> was <b>calculated wrong</b>.

<blockquote>And, we know that of the two the latter is what is happening: GCMs fail to account for changes in the Earth’s albedo due to clouds and do not account for the effect that clouds have on the amount of solar energy that is absorbed by the Earth.</blockquote>

No, we don't know anything of the sort.

Comment on Reflection on reliability of climate models by A Lacis

This seems again to be a case of the professional tree experts coming by to expound their statistical ‘relevant dominant uncertainty’ (RDU) analysis of tree physiology and tree psychology. Rigorous statistical analysis and systematic investigation of data uncertainties and the perplexity of natural ongoing variability of various climate effects is of course a good thing and can lead to a better understanding of the complex climate system physical processes – and, all the better to confuse decision-makers with.

Decision-makers need to be looking more at what is happening to the forest, and not be so pre-occupied with what individual trees may or may not be doing. There are many uncertainties of climate variability that are never going to become ‘predictable’ in any preemptive sense. So, decision-makers (and the general public) will just have to learn how to deal with the consequences of those unpredictable climate events when they do happen.

Decision-makers need instead to understand the Relevant Dominant Certainty (RDC) of global warming and the changing climate system. The basic facts and physics are very clear. There is virtual certainty that atmospheric CO2 is increasing (now at 400 ppm) because of fossil fuel burning. There is also certainty that atmospheric CO2 is the principal non-condensing greenhouse gas which acts to prop up the terrestrial greenhouse effect, and that water vapor and clouds are the fast feedback effects that simply multiply the CO2 greenhouse warming by a factor of three or four.
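The factor-of-three-to-four multiplier can be turned into back-of-envelope numbers. The inputs below (3.7 W/m² of forcing per CO2 doubling, ~3.2 W/m² per K for the no-feedback Planck response) are standard textbook values assumed for illustration, not figures taken from the comment:

```python
# Back-of-envelope version of the fast-feedback multiplier described above.
forcing_2xco2 = 3.7      # W/m^2 radiative forcing per CO2 doubling (textbook value)
planck_response = 3.2    # W/m^2 per K, no-feedback restoring strength (textbook value)

no_feedback_warming = forcing_2xco2 / planck_response  # roughly 1.2 K per doubling
for multiplier in (3, 4):  # the factor-of-3-to-4 water-vapor/cloud feedback gain
    print(f"x{multiplier} feedback: {no_feedback_warming * multiplier:.1f} K per doubling")
```

With a three-to-four-fold feedback gain, the ~1.2 K no-feedback response becomes a sensitivity of a few degrees per doubling, which is the order of magnitude at issue in the surrounding discussion.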

The bottom line is that atmospheric CO2 is the principal control knob that governs the global temperature of Earth. Decision-makers need to understand that there really is no significant uncertainty in the basic cause-and-effect relationship of atmospheric CO2 and global warming, and the impending consequences of sea level rise and environmental disruption.

The decision-makers should keep the basic facts and physics in mind, and act accordingly.

Comment on Reflection on reliability of climate models by Joshua

Pekka -

What we have evidence of is that people are inclined to filter information so as to confirm biases. If you read the work of Kahan, you will see that he presents solid evidence that as people display more expertise on these types of controversial issues, the more polarized they tend to be in their views (not on an individual basis, but across groups). His studies show that people assess the value of “expert” input in ways that are strongly associated with ideology and group identification. “Skeptics” and “realists” alike, on the whole, fail to account for the evidence that he presents. The picture painted by his carefully controlled and assessed evidence is quite different from the phenomenon that the authors “predict” is “likely.”

I don’t doubt that what they describe exists to some extent – the question, however, is how relevant what they describe is to understanding the larger picture. My opinion is that in the way they outline their “prediction,” they have oversold and over-interpreted the evidence. They have some anecdotal evidence of the sort you describe. They present no, none, zilch, nada, niente, bupkis evidence that supports their “prediction.” Despite the fact that we all believe the phenomenon they describe exists to some extent, they have not presented actual evidence. They are making unsupported assumptions.

Rather ironic given their areas of expertise.

Comment on Data corruption by running mean ‘smoothers’ by Vaughan Pratt

Goodman deserves recognition for the discovery of his constant 1.3371... I suggest naming it the Goodman constant, by analogy with the Euler-Mascheroni constant 0.57721... The Wikipedia article Moving Average should refer to it as such. Goodman's constant is related to Santmann's constant 1.43029665312 which can be found at the bottom of <a href="http://answers.yahoo.com/question/index?qid=20070826003951AAKqyN6" rel="nofollow">this page</a>. Neither one is optimal for a cascade of two box filters, though their mean, 1.384, is close to the optimum 1.3937 that I mentioned above in the context of optimum values for other numbers of box filters cascaded in this way.
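Whether any of these constants is "right" depends on the leakage criterion. As a rough cross-check, a crude continuous approximation (box-filter frequency response = sinc(f·w)) lets one compare the worst-case stopband leakage of a two-box cascade at each ratio. The numbers this sketch produces will not match the spreadsheet percentages quoted earlier, which were computed for a different (three-filter) cascade on discrete data:

```python
import numpy as np

def cascade_leakage(ratio, w1=1.0):
    """Worst-case stopband gain of two cascaded box filters.

    Continuous-time approximation: a box of width w has frequency
    response sinc(f*w) (numpy's normalized sinc).  Leakage is the largest
    combined gain at any frequency beyond the first null of the wider
    box, i.e. for f > 1/w1.
    """
    w2 = w1 / ratio
    f = np.linspace(1.0 / w1, 20.0 / w1, 200_000)
    return np.abs(np.sinc(f * w1) * np.sinc(f * w2)).max()

# Compare Goodman's 1.3371, the 1.3937 mentioned above, and Santmann's value.
for r in (1.3371, 1.3937, 1.43029665312):
    print(f"ratio {r:.4f}: worst-case leakage {cascade_leakage(r):.4f}")
```

In all three cases the cascade's worst-case leakage sits far below the ~21.7% first side lobe of a single box filter; the differences among the ratios come down to how the second filter's nulls line up with the first filter's side lobes.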

Comment on Reflection on reliability of climate models by Harold

“1. C02 will warm the planet
2. C02 is inert and doesnt interact with radiation
3. C02 will cool the planet”

Bonk bonk bonk.

Error Error. False trichotomy.

4. CO2 (btw, that’s “O” as in oxygen, not “0″ as in zip) has a finite warming effect that’s low enough and overall beneficial enough to

a) not be an immediate problem, and
b) possibly not be a problem in the future, and
c) we have enough time to evaluate adaptations without making panicked and stupid policy decisions now.

Not quite slick enough, Slick.
