Channel: Comments for Climate Etc.

Comment on Understanding adjustments to temperature data by Jim D


McIntyre has recently commented on Paul Homewood’s page and tried to lead him in the right direction on TOBS, ending with:
“I got results that look similar to those shown in the NOAA graphic. I think that most of your comments and allegations are incorrect and should be withdrawn.”
Homewood, you may recall, is one of those trying to criticize the NOAA temperature record.
You can read through McIntyre’s comments to Homewood here:
http://notalotofpeopleknowthat.wordpress.com/2014/07/01/temperature-adjustments-in-alabama-2/
Other comments include:
“I certainly do not think that the evidence that you have adduced warrants the somewhat overheated rhetoric on this topic and urge you to dial back your language.”
This is fairly strong coming from McIntyre, and these comments show his thinking within the last four days. A lot of skeptics listen to what McIntyre says, so maybe they will take this into account.


Comment on Phunny Physics by Rob Ellison


Nitrogen and oxygen absorb and emit blue and violet light – technically thermal radiation. You just need to define your terms a little more closely.

Comment on Understanding adjustments to temperature data by GaryM


Steven,

Either one. I would be interested in the answers for any data set.

Comment on Understanding adjustments to temperature data by Jan P Perlwitz


And that is all fake skeptics have to offer.

Comment on Open thread by Rob Ellison


Testing a cool new button I found

Comment on Understanding adjustments to temperature data by johanna


+1.

The BoM’s mysterious and freshly minted “national temperature” metric is a case in point.

Doing some sort of calculation based on where weather stations were historically located, across an entire continent, is BS which is still in the air before hitting the ground.

Comment on Understanding adjustments to temperature data by GaryM


Why not test estimations against actual data? I asked a similar question above.

If there is a change in equipment, leave the old equipment in place for a year at, say, 100 sites. Keep parallel series of measurements from both sets of equipment. Have one researcher prepare the correction based on the difference between the two types of equipment at 50 sites. Apply that correction to the old-equipment data at the other 50 sites, and then compare the result to the actual measurements from the new equipment there. If they match, you know you can have real confidence in your correction. The same could be done with time-of-observation changes, station moves, etc.
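A minimal sketch of that split-sample test, with all station counts, offsets, and noise levels made up purely for illustration (this is not any agency’s actual procedure):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical paired annual-mean readings at 100 co-located sites: the "new"
    # sensor reads a constant offset plus noise relative to the old one.
    true_offset = -0.4                      # deg C, unknown to the analyst
    old = rng.normal(12.0, 3.0, size=100)   # old-equipment means
    new = old + true_offset + rng.normal(0.0, 0.1, size=100)

    # Derive the correction from the first 50 sites only.
    calib, holdout = slice(0, 50), slice(50, 100)
    correction = np.mean(new[calib] - old[calib])

    # Apply it to the old-equipment data at the other 50 sites and compare
    # against what the new equipment actually measured there.
    predicted = old[holdout] + correction
    rmse = np.sqrt(np.mean((predicted - new[holdout]) ** 2))
    print(f"estimated correction: {correction:+.3f} C, holdout RMSE: {rmse:.3f} C")

A small holdout error would be the kind of direct, data-against-data check described above.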

The real, essential problem I have with temperature reports, GCMs, paleoclimate, and much of the rest of climate science is that they rely on statistics for validation rather than comparison to actual data. If you test a proposition, correction or model against actual data even once, and it proves accurate and precise enough, that inspires a lot more confidence when you use it elsewhere.

(Actually, I should correct myself in one instance. We are now able to test the consensus’ GCMs against 17 years of the consensus’ temperature data, and the results are not impressive. That divergence is starting to make the one hidden by Mike’s Nature trick look like a hiccup, policy-wise.)

But statistics is full of assumptions, Bayesian priors, estimated trends and the like. Such a process might well provide results useful for some purposes. But climate science is being used to push for massive public policy initiatives with enormous costs and negative economic impacts.

“Trust us, we compared our results against our synthetic data,” is not good enough under those circumstances. The policy question isn’t whether your corrections and algorithms are the best available. It’s whether they are as precise as you claim, for the purpose for which you offer them.

Comment on Understanding adjustments to temperature data by Don Monfort


I don’t have time to waste on pause deniers, perlie. That’s all you get.


Comment on Understanding adjustments to temperature data by Greg Goodman


Scottish sceptic says: “I realised HACRUT couldn’t be trusted, when I found out that phil Jones couldn’t use a spreadsheet”

And why would a competent programmer want or need to use a spreadsheet for data processing?

Spreadsheets are for accountants. It is pretty amateurish to use one for data processing. However, most amateurs who manage to lash up a “chart” in a spreadsheet for some reason think they are then qualified to lay into anyone who is capable of programming and has never needed to rely on point-and-click, cut-and-paste tools to process data.

You’d also look a lot more credible if you could at least get the name of the dataset right and realise that it is the work of two separate groups.

There’s plenty to be criticised at CRU; at least try to make credible criticisms.

Comment on Understanding adjustments to temperature data by KTM


I like your handwritten record.

I noticed that NONE of the Tmax values recorded during the month were duplicated from one day to the next.

Since the entire basis for making TOBS adjustments that reduce the Tmax values recorded back in years like 1934 is that a very high reading might get double-counted across two days when temperatures are recorded in the afternoon, doesn’t the lack of a single duplicated value utterly refute that rationale?

If there are no duplicate values, there is no need for a TOBS adjustment to “correct” the data.
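As context for the mechanism being discussed, a toy simulation on a purely synthetic hourly series (the diurnal cycle, variability, and observation hours are all invented; this is not NOAA’s TOBS procedure): it compares the monthly mean Tmax obtained with a midnight reset of the max thermometer against one reset at 17:00.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic hourly temperatures for 31 days: a diurnal cycle peaking
    # mid-afternoon plus day-to-day weather variability (values illustrative).
    hours = np.arange(31 * 24)
    daily_level = np.repeat(rng.normal(25.0, 4.0, size=31), 24)
    diurnal = 8.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)   # peak near 15:00
    temps = daily_level + diurnal

    def monthly_mean_tmax(obs_hour):
        """Mean daily Tmax when the max thermometer is read and reset at obs_hour."""
        maxima = []
        for day in range(1, 31):                  # skip the first partial window
            start = (day - 1) * 24 + obs_hour
            maxima.append(temps[start:start + 24].max())
        return np.mean(maxima)

    print("midnight reset:", round(monthly_mean_tmax(0), 2))
    print("17:00 reset   :", round(monthly_mean_tmax(17), 2))

On a series like this, the afternoon reset tends to report a somewhat higher monthly mean Tmax, because the warm hours just after a hot afternoon’s reset carry into the next day’s window even when no two recorded values are exactly equal.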

Comment on Understanding adjustments to temperature data by GaryM


Windchasers,

“And let’s say (for the sake of argument) that the temperature is perfectly correlated across this area – everywhere in this square mile, the temperatures move in lockstep up or down.”

Average temperature for an area is hard, but average anomaly for the same area is easy? If I am reading you correctly.

Your answer parallels what the NOAA site I linked to earlier says. (Item 7 in the list.)

http://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php

The fact that it is easier does not convince me it is more accurate. What you and the NOAA site both indicate is that my concern, that determining average temperature locally, let alone globally, is extremely difficult, is correct.

I have a lot of difficulty accepting the argument that a trend in anomalies gives you the same result with more accuracy, just as I have difficulty accepting the consensus argument that it is easier to accurately predict temperatures 100 years out than 10 years out.

To calculate an anomaly, you need an average to start with. I don’t see how you avoid dealing with the difficulties in finding an average temperature, when calculation of your anomalies requires you to do so as a first step.
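For reference, a minimal sketch of the arithmetic as it is usually described (hypothetical stations and numbers, not any agency’s actual code): each station’s anomaly is taken against that station’s own baseline mean before anything is averaged across stations.

    import numpy as np

    # Hypothetical July means (deg C) at three stations over six years.
    station_temps = np.array([
        [21.0, 21.4, 20.8, 21.9, 22.1, 22.3],   # valley station
        [14.2, 14.5, 14.0, 15.0, 15.3, 15.4],   # mountain station
        [27.9, 28.1, 27.7, 28.6, 28.9, 29.0],   # coastal station
    ])

    # Each station's baseline is its own mean over a reference period
    # (the first three years here).
    baselines = station_temps[:, :3].mean(axis=1, keepdims=True)

    # Anomalies are departures from each station's own baseline, so stations
    # at very different absolute temperatures become comparable.
    anomalies = station_temps - baselines
    regional_anomaly = anomalies.mean(axis=0)
    print(np.round(regional_anomaly, 2))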

If it were one station, I don’t suppose it would make much difference. Even if you got the initial average wrong, at least you would be comparing future data against a consistent norm.

But for numerous stations over a wide area, your initial average must be based on numerous assumptions about the average temp in the first place. And the average would certainly be different in different areas. Which brings us back to the same place we started at. If it is so difficult to determine average temperature for a single location, how is it “easier” to determine the appropriate average for a larger area to compute anomalies from that average?

The fact that the statistics work out does not convince me that the process is accurate or precise. In fact, Mosher has made the statement in the past that it doesn’t matter if you subtract warming stations, or cooling stations, or stations based on seemingly any other factor. The trend in anomalies stays the same.

To this layman, this sounds remarkably similar to the fact that Mann’s original model always gave a hockey stick, no matter what data was input.

The primary problem is that the entire global warming movement is being sold based on telling people that the global average temperature of the Earth is increasing at a dangerous rate. And that this rate is detectable to within tenths of a degree per year, per decade, per century.

You write “If we mostly care about the trend – and we do – then we don’t need the average temperature.” But average temperature is what is sold to the public. And average temperature is what you need to calculate anomalies, and therefore a trend in anomalies.

There seem to be just too many assumptions in the whole process to claim that precision.

I would have no problem if the public were told that “we estimate that the country’s average of interpolated, estimated, kriged, infilled anomalies is increasing by one tenth of a degree per decade,” because then it would be clear that there is a lot more than measurement of temperature going on. And the argument would properly be over the validity of the various assumptions, corrections and estimations. Just as is occurring in this thread.

But that has not been the public debate. They are told simply that “the average temperature of the US has increased by x tenths of a degree per decade.” Or “this year’s average temperature is one tenth a degree higher than last year’s.” And anyone who dissents from the claims of precision is labelled a denier.

Comment on Understanding adjustments to temperature data by Mi Cro


GaryM,
“To calculate an anomaly, you need an average to start with.”
I don’t know how they do it, but I determine a daily anomaly for each station from its min/max temps. Once I do that, I don’t have to calculate an average until I aggregate my station list.
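That description is terse, so the following is only one plausible reading of it, with entirely made-up numbers: a per-station, per-day change computed from that station’s own min/max readings first, with averaging deferred until the station list is aggregated.

    import numpy as np

    # Hypothetical daily Tmin/Tmax for two stations over five days (deg C).
    tmin = np.array([[10.0, 11.2,  9.8, 10.5, 11.0],
                     [ 2.0,  2.5,  1.8,  2.2,  3.0]])
    tmax = np.array([[21.0, 22.4, 20.9, 21.8, 22.5],
                     [12.0, 13.1, 11.7, 12.6, 13.4]])

    # One reading of "a daily anomaly for each station on min/max temps":
    # the day-over-day change at each station, computed before any averaging.
    dmin = np.diff(tmin, axis=1)
    dmax = np.diff(tmax, axis=1)
    daily_anomaly = (dmin + dmax) / 2.0          # per station, per day

    # Only now aggregate across the station list.
    aggregated = daily_anomaly.mean(axis=0)
    print(np.round(aggregated, 2))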

Comment on Understanding adjustments to temperature data by Steven Mosher


You have no cause to assume bad faith. None.
The original TOBS work was done in 1986.
The first skeptic to look at it vindicated the work.
Then others attacked.
Another verification was done. Successful.
Then more attacks, basically people saying “I don’t understand.”
More explanation.
More verification.

There is bad faith, but not by the guys who did the work in 1986.
The bad faith is here and now, practiced by the likes of you.

Comment on Understanding adjustments to temperature data by Wagathon


Convert PDFs to Excel:

Fortunately, Acrobat 9.1 offers a couple of different ways to export to Excel.

1. Select table and open in Excel
This allows you to select a portion of a page and open it in Excel.

2. Export as Tables in Excel
This method uses some artificial intelligence to convert multiple page PDF documents to multiple worksheets in an XML-based spreadsheet file. It works best on files which were converted directly from Excel to PDF.
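For anyone who would rather script the same conversion than click through Acrobat, a rough alternative sketch (this assumes the third-party tabula-py package, which needs a Java runtime, plus openpyxl for the Excel output, and a hypothetical report.pdf; it is not part of Acrobat):

    # Assumption: tabula-py and openpyxl are installed; "report.pdf" is hypothetical.
    import tabula

    # read_pdf returns a list of pandas DataFrames, one per detected table.
    tables = tabula.read_pdf("report.pdf", pages="all")
    for i, df in enumerate(tables):
        df.to_excel(f"report_table_{i}.xlsx", index=False)

As with Acrobat’s export, results are best on PDFs whose tables were generated directly from a spreadsheet rather than scanned.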

Comment on Understanding adjustments to temperature data by mosomoso


I’m sure there is good faith. But, just as we know that MacGyver is going to get out of his fix and Maid Marian will elude the Sheriff of Nottingham’s grasp, we know which way the adjustments of temperature will go. As sure as Kojak will crack his case, we know.

We just know, don’t we?


Comment on Understanding adjustments to temperature data by Nickels


Claes had good references for a posteriori error estimation, which is a useful tool for exploring the computability of differential equations. I don’t know much about skyyyyypedragon because I’m waiting till the third movie comes out so I can see them all together.

Comment on Skeptical of skeptics: is Steve Goddard right? by Obamacare’s part time economy–America in Full Time Repression — California Political Review


[…] then there is the pesky little problem where it appears that someone may have been fudging the temperature data by estimating results and adding in corrections which skew them toward higher […]

Comment on Understanding adjustments to temperature data by Steven Mosher


“If there is a change in equipment, leave the old equipment in place for a year at say 100 sites. Keep series of measurements from both sets of equipment. ”

That was done for the MMTS change. It’s also being done for CRN.

Also, you don’t understand what holding out data means.

And you don’t understand why you have to test with synthetic data AS WELL.
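For what the synthetic-data point looks like in practice, a toy sketch (not NOAA’s pairwise homogenization code; the break size, location, and detection rule are all invented): impose a known artificial break on a series you generated yourself, run a detection step, and check whether it recovers the break you put in.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic monthly anomalies with a known, deliberately imposed -0.5 C step
    # at month 240 (e.g. a station move). Because the series is synthetic, the
    # "right answer" is known exactly, which real observations never provide.
    n, break_at, true_step = 480, 240, -0.5
    series = rng.normal(0.0, 0.3, size=n)
    series[break_at:] += true_step

    # Toy detection: pick the changepoint that maximizes the mean difference
    # between the two segments (real homogenization uses neighbor comparisons
    # and significance tests; this is only the skeleton of the idea).
    scores = [abs(series[:k].mean() - series[k:].mean()) for k in range(24, n - 24)]
    k_hat = int(np.argmax(scores)) + 24
    step_hat = series[k_hat:].mean() - series[:k_hat].mean()

    print(f"imposed break at {break_at}, size {true_step:+.2f}")
    print(f"detected break at {k_hat}, size {step_hat:+.2f}")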

Comment on Understanding adjustments to temperature data by Steven Mosher


I think what Zeke means is clear.

Skeptics start with a belief that adjustments DEFINED IN 1986 are somehow suspect because of Climategate.

Good faith means assuming no evil intention.

You have no evidence that these guys, these PARTICULAR GUYS, had evil intentions.

So look at the work, not at who did it.

Good faith.

Comment on Understanding adjustments to temperature data by WebHubTelescope (@WHUT)


Eggman and vikingboy, Neckles knows what I am referring to, but you two obviously don’t.

SkyyyyDragons is a banned word here, and ClaesJohnson is the king of the dragoons, and really not worth referencing as any kind of sane authority.
