JimD,
What do you mean by Nic’s short period?
His own paper is an extension of the work of Forest et al that covers the whole 20th century. The paper of Otto et al, to which he contributed, covers the period from 1860 to 2009, or more precisely compares the first 20 years of that period with the last four decades, both separately and as a whole.
The authorship of Otto et al is really impressive. The approach is simple, but that is well justified, at least for TCR. Some questions remain on the best way of expressing the outcome, but only at a level of little significance. More refined methods lead to somewhat different results. Corrections can be attempted for specific short-term effects, but the applicability of these corrections to this particular comparison must be studied carefully. I believe that the reference period has been chosen to minimize the influence of such corrections.
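For readers unfamiliar with the simple approach referred to here: Otto et al use an energy-budget estimate. A hedged sketch of the standard formulas (written in common notation, not quoted from the paper itself):

```latex
% Energy-budget estimates, standard form:
% \Delta T, \Delta F, \Delta Q are the changes in global mean surface
% temperature, radiative forcing, and heat uptake between the reference
% period and the recent period; F_{2\times} is the forcing from doubled CO2.
\mathrm{TCR} \approx F_{2\times}\,\frac{\Delta T}{\Delta F},
\qquad
\mathrm{ECS} \approx F_{2\times}\,\frac{\Delta T}{\Delta F - \Delta Q}.
```

The TCR estimate needs no correction for ocean heat uptake, which is one reason the simple approach is better justified for TCR than for ECS.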
The paper that Nic wrote alone is a separate issue. He is perfectly right in his criticism of the interpretation of the outcome of the Forest et al papers. My own preference would be that the results were not presented as a PDF at all, since such a PDF can be formed only by combining the likelihood function, which is the actual result, with a prior. They are, however, commonly combined with a uniform prior in climate sensitivity.
The uniform prior in climate sensitivity is, indeed, suspect, and several arguments have been presented against it. Reaching high climate sensitivities requires a positive feedback close to the point of instability, and a uniform prior in CS corresponds to a prior in feedback strength that diverges when approaching that point. Something along the lines of Jeffreys’s thinking on the noninformative prior for a scale parameter seems to be required for a parameter of the type of climate sensitivity.
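The divergence is easy to make explicit in the standard linear-feedback picture (a sketch in common notation, not drawn from any of the papers discussed): if the sensitivity is written as the no-feedback sensitivity divided by one minus the feedback factor, a change of variables gives

```latex
% Linear feedback relation: S_0 is the no-feedback sensitivity,
% f the feedback factor, instability at f = 1.
S = \frac{S_0}{1 - f}, \qquad \frac{dS}{df} = \frac{S_0}{(1-f)^2}.
% A uniform prior in S, p_S(S) = const, transforms to
p_f(f) = p_S(S)\left|\frac{dS}{df}\right| \propto \frac{S_0}{(1-f)^{2}},
% which diverges as f \to 1^-, i.e. at the point of instability.
```

So a prior that looks innocuous in S places ever more weight on feedback strengths arbitrarily close to instability.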
Jewson, Rowlands and Allen wrote a paper on using Jeffreys’s prior, which is invariant under changes between equivalent sets of parameters of a single model, in climate forecasts, and Nic used this proposal to reanalyze the case of Forest et al 2006. Jeffreys’s prior has the nice property that the results are the same regardless of how a fixed model is formulated mathematically, as long as it is the same model. That leads, e.g., automatically to a specific and often plausible way of handling scale parameters. Jeffreys’s prior commonly leads to results that are not badly implausible, but it replaces the dependence on the choice of parameters with a dependence on the model used to define it, in the case of Nic’s paper the MIT 2DCM.
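The invariance property can be stated briefly; this is the standard textbook argument, not taken from the Jewson, Rowlands and Allen paper itself. For a one-parameter model with Fisher information $I(\theta)$, Jeffreys’s prior is

```latex
p(\theta) \propto \sqrt{I(\theta)}, \qquad
I(\theta) = E\!\left[\left(\frac{\partial \ln L(x;\theta)}{\partial \theta}\right)^{\!2}\right].
% Under a reparameterization \phi = g(\theta):
I(\phi) = I(\theta)\left(\frac{d\theta}{d\phi}\right)^{\!2}
\;\Rightarrow\;
\sqrt{I(\phi)} = \sqrt{I(\theta)}\left|\frac{d\theta}{d\phi}\right|,
```

which is exactly the Jacobian rule for transforming a density. The prior, and hence the posterior, is therefore the same whichever parameterization of the same model is chosen; but because $I(\theta)$ is computed from the model’s likelihood, a different model yields a different prior.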
As I wrote, Jeffreys’s prior is commonly plausible. It is probably the best alternative we have when great inherent value is placed on objectivity, in the sense that a decision made once removes further subjectivity from the analysis. That does not, however, guarantee at all that the original decision to select Jeffreys’s prior leads to the most correct results. At best it can prevent the introduction of really bad priors, when the goal is to get the best estimate for a parameter like ECS rather than to minimize subjectivity above all other considerations.
For a parameter that has meaning outside of any single model, as ECS has, each separate analysis based on a different model would have its own Jeffreys’s prior. That does not really make sense.