Steven Mosher writes:
“the vast majority of data comes from GHCN Daily unadjusted.
daily data doesnt get adjusted.”
Does BEST then apply its own TOBS and UHI adjustments?
I look at Willis’ comment above and ask why he is unable to process the data himself, to get raw number answers as opposed to pretty pictures. That goes back to transparency, something that was promised with “BEST”.
Brandon, quite why stations should want to run away from people is open to speculation. If it were obvious why stations moved, it would be easy to model. In biological studies we like to explain things we can’t see as “invisible”.
I think that land prices driving the sensors away from populations, with population encroachment then driving up prices, is a reasonable mechanism. I have no idea if it is true, but it would be bloody difficult to deconvolute.
The conference on Global Warming and Climate Change, scheduled to be held in Boston on Tuesday, February 10th, was cancelled yesterday on account of the weather: it was snowing! Please make a note of it.
Steven Mosher,
I have no problem with the algorithms; I think the discontinuities are properly assessed. The problem is that these discontinuities are biased towards negative values. The issue is attribution: the method has never found any other answer than that of Hansen in his 2001 paper.
Conclusion: BEST curves, like the others, are strongly biased; they greatly overestimate the warming throughout the twentieth century. This is confirmed by satellite data and by proxies.
AK
>”….the effects of “BEST’s homogenization” is to offer a tiny bit more credence to the LIA, and interpretations of recent warming as part of the rebound from it.”
Interesting perspective AK. That hadn’t crossed my mind until reading it.
If we consider the present as “fixed” as per the homogenization process due to better quality control, better sites, AWS, etc i.e. the reference level is the present, then adjustments that make the past cooler are in effect indicating that the LIA was cooler than we thought as you point out. That’s if we subscribe to the process being valid.
This perspective reverses the warmist argument. In effect they’re asserting that the LIA was real, and was much cooler than the present.
Another term for “the raw data” is “the only data”.
But they don’t say that.
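The reference-level point above can be sketched in a few lines. This is my own toy illustration, not BEST code: if the correction for each breakpoint is pushed into the segment before the break, the present stays fixed and every adjustment reshapes the past.

```python
# Toy illustration of "the present is the reference level": breakpoint
# corrections are applied to the segment BEFORE each break, so the most
# recent values are untouched and every correction reshapes the past.
def homogenize(series, breaks):
    """breaks: list of (index, offset) steps to remove; each correction is
    pushed into the earlier segment so the present stays fixed."""
    out = list(series)
    for idx, offset in breaks:
        for i in range(idx):
            out[i] += offset   # a negative offset cools the past
    return out

raw = [10.0, 10.0, 10.0, 10.0]
adjusted = homogenize(raw, [(2, -0.3)])
print(adjusted)   # [9.7, 9.7, 10.0, 10.0] -- present unchanged, past cooler
```

With this convention, a record full of negative offsets necessarily deepens the implied pre-instrumental cold, which is the LIA reading suggested above.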
Blue line: no breakpoints = no adjustments.
Green line: stations are treated as new stations where the metadata indicates
a) a different location,
b) a different sensor, or
c) a different time of observation (TOB).
Red line: stations are broken into new stations where their data is empirically divergent from that of their neighbors, i.e. station A has a jump that hundreds of its neighbors don’t have.
Empirical breakpoints say “something changed here that is unique to this station”: an undocumented move, an undocumented sensor change, somebody paving the surrounding area with asphalt. The change is undocumented, but it shows up in the data.
Once you understand this you will see why it is hard to “explain” any given adjustment without stepping through the code. And if you do that once, then you’ll have 40,000 more requests to step through the code.
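The neighbor-comparison logic behind empirical breakpoints can be sketched as follows. This is only an illustration of the idea, not BEST’s actual scalpel; the function names, threshold, and toy data are all made up.

```python
# Sketch of an empirical breakpoint: compare a station against the mean
# of its neighbors. A step that appears only in the difference series
# flags an undocumented change (move, sensor swap, new asphalt nearby).
import numpy as np

def empirical_breakpoint(station, neighbors, threshold=0.5):
    """Index of the largest mean-shift in (station - neighbor mean),
    or None if no shift exceeds `threshold` degrees."""
    diff = station - neighbors.mean(axis=0)  # removes the shared regional signal
    best_idx, best_step = None, 0.0
    for i in range(1, len(diff)):
        step = abs(diff[i:].mean() - diff[:i].mean())
        if step > best_step:
            best_idx, best_step = i, step
    return best_idx if best_step > threshold else None

# Toy data: 10 neighbors share a warming trend; station A has the same
# trend plus an undocumented +1 degree jump at month 60.
rng = np.random.default_rng(0)
trend = np.linspace(0.0, 0.5, 120)
neighbors = trend + rng.normal(0.0, 0.1, size=(10, 120))
station_a = trend + rng.normal(0.0, 0.1, 120)
station_a[60:] += 1.0

print(empirical_breakpoint(station_a, neighbors))  # the jump near month 60
```

Note that subtracting the neighbor mean removes the warming trend shared by all stations, so only the station-unique jump survives to be detected.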
I guess you were not smart enough to identify this artifact.
Don
Don’t you think we need to build up gradually to the collaboration by first having a series of scoping meetings in first-class hotels in increasingly agreeable locations that, to simulate the weather conditions the improved database will cover, should range from beach resorts to ski locations?
I’m in. Someone had better tell Rud and Harvard. I will leave that to Mosh. He has a way with words, some of them not profane.
Tonyb
phi
‘The problem is that these discontinuities are biased towards negative values. The issue is attribution. It has never found any other answer than that of Hansen in his 2001 paper.”
1. you assume that corrections will be NEUTRAL.
2. That is your prior, or your hypothesis.
3. you build an algorithm to adjust.
4. you test this algorithm against synthetic data.
5. you prove it is not biased.
6. You apply your validated algorithm to LIVE data.
7. The adjustments are slightly negative.
#1, your assumption, your theory, your prior, is wrong.
I start with a different prior
1. Corrections will never be neutral, but I don’t know whether they will be positive or negative.
…
7. The adjustments are slightly negative.
cool.
Then I look at the ocean.
7. The adjustments are large positive.
cool.
The difference?
You start with an assumption that the adjustments will be neutral.
My experience in all fields says the opposite: raw data has always been biased high or biased low, and therefore I never expect adjustment code to produce neutral results.
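The steps above can be sketched numerically. This is an illustration of the reasoning, not anyone’s actual code, and all numbers are made up: an unbiased step-corrector’s net adjustment is just minus the mean of the breaks it removes, so it validates as neutral on symmetric synthetic breaks yet comes out non-neutral on live data whose true breaks skew one way.

```python
# Sketch: a validated, unbiased corrector can still produce a non-zero
# net adjustment on live data -- that indicts the "corrections will be
# neutral" prior, not the algorithm. All numbers here are synthetic.
import random

def net_adjustment(break_sizes):
    """Average correction applied, assuming each found break is removed."""
    return -sum(break_sizes) / len(break_sizes)

random.seed(1)

# Steps 4-6: synthetic breaks drawn symmetrically around zero.
synthetic = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
print(round(net_adjustment(synthetic), 3))  # near 0.0: algorithm validated

# Step 7: live data where real-world changes skew the breaks positive,
# so the corrections come out slightly negative.
live = [random.uniform(-0.8, 1.0) for _ in range(10_000)]
print(round(net_adjustment(live), 3))       # slightly negative
```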
Rob Ellison: See our group’s webpage “Slaying the Slayers: …
You basically have 3 parameters you can iterate over:
1. size of the drop (you specify various drop sizes)
2. slope of the gradual rise
3. length of the rise
Do all possible combinations of these to create the detector.
turn your computer on.
watch 60 episodes of a korean historical drama.
count the fish in the bucket.
sigh…
realize that you’ve just wasted more time answering hypotheticals
rather than improving the areas you know need improvement.
vow to ignore certain people.
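The brute-force search in the three steps above can be sketched like this; the names, scoring, and toy data are illustrative, not any group’s production code. Build a template for every (drop, slope, length) combination, slide each along the series, and keep the best fit.

```python
# Grid search over (drop size, rise slope, rise length): build every
# template and slide each over the series, keeping the best match.
import itertools
import numpy as np

def make_template(drop, slope, length):
    """A sudden drop followed by a gradual linear rise."""
    return np.concatenate(([0.0], -drop + slope * np.arange(length)))

def best_match(series, drops, slopes, lengths):
    """Try all (drop, slope, length) templates at all positions; return
    the parameter combination and position with the smallest error."""
    best = (np.inf, None, None)
    for d, s, L in itertools.product(drops, slopes, lengths):
        t = make_template(d, s, L)
        for i in range(len(series) - len(t) + 1):
            err = np.sum((series[i:i + len(t)] - series[i] - t) ** 2)
            if err < best[0]:
                best = (err, (d, s, L), i)
    return best[1], best[2]

# Toy series: flat, then a 1.0-unit drop at index 50 recovering at 0.05/step.
series = np.zeros(100)
series[50:] = -1.0 + 0.05 * np.arange(50)
params, pos = best_match(series,
                         drops=[0.5, 1.0, 1.5],
                         slopes=[0.01, 0.05, 0.1],
                         lengths=[20, 40, 49])
print(params, pos)  # drop 1.0 and slope 0.05 recovered, located at the break
```

This is exactly the “turn your computer on and wait” shape of computation: the cost is the product of the three parameter-grid sizes times the series length.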
the data was created by a third party.
richardcfromnz:
>”If we consider the present as “fixed” as per the homogenization process due to better quality control, better sites, AWS, etc i.e. the reference level is the present, then adjustments that make the past cooler are in effect indicating that the LIA was cooler than we thought as you point out. That’s if we subscribe to the process being valid.”
One problem with this perspective is BEST hasn’t really shown what its adjustments do in the portion of the LIA it covers. The figures in this post only go back to 1850, approximately when the LIA is said to have ended.
It would be interesting to see how the nearly 100 years of the BEST record which overlap with the LIA are affected by BEST’s adjustments. Maybe BEST could be convinced to show that.
Steven Mosher,
It is a matter of magnitude. A systematic bias of 0.05 °C per decade is of the order of magnitude of the signal itself. One does not correct such a bias without clearly understanding its origin.
Fan
Sea level pause?
I seem to remember you getting very excited about the acceleration you thought you saw in 2012 that you believed vindicated Dr Hansen
What happened to that?
Tonyb
Reply to Steve Mosher ==> So you are saying “Blue Line = numbers as originally recorded by volunteer co-op weather station operators in their log books” (or electronically from sensors) ?
Correct?
Well, yes, fair exchange; but he didn’t have to prove the absence of the device caused a mortality.
This is, of course, something hard to double-blind.
Here’s a bias. What’s the easiest to study prospectively, double-blinded, and placebo controlled? Why, pharmaceuticals, natch. A puffed up dominant treatment modality.
=================
This is precious
“But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. ”
Red is Africa.
Blue is the US.
When we talk about the adjustments being inconsequential GLOBALLY, we mean the BLACK line. The red line is Africa, 20% of the land; the blue line is the US, 5% of the land. So yes, if you look at 5% of the data (blue) you see a .2 to .3 degree difference.
the POINT of showing people continents and how they differ is so that people will AVOID the kind of mistake Brandon just made.
GLOBALLY (we are estimating the GLOBAL average) the adjustments are mousenuts.
BUT because people can cherry-pick (the US) they can show BIG differences, BUT they also IGNORE big differences in the other direction.
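The arithmetic behind this is plain area weighting. The 5% and 20% land shares are from the comment above; the adjustment values below are hypothetical, chosen only to show how a .25-degree regional difference can nearly vanish in the global mean.

```python
# Area-weighted global mean of regional adjustments. Shares for the US
# and Africa are from the discussion; all adjustment values are made up.
regions = {
    # name: (share of land area, early-record adjustment in degrees C)
    "US":     (0.05, -0.25),   # the cherry-pickable .2 to .3 difference
    "Africa": (0.20, +0.05),
    "rest":   (0.75, +0.01),
}

global_adjustment = sum(share * adj for share, adj in regions.values())
print(round(global_adjustment, 4))  # far smaller than the US-only figure
```

A region covering 5% of the land contributes only 5% of its adjustment to the global line, which is why the black line can be near zero while a single continent shows a large shift in either direction.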