Hi Judith – From your quote of the Verheggen et al. paper, it appears they used the **exact same search in WoS** that Cook et al. used. Oh snap. I only skimmed that paper when a commenter on my blog mentioned it, and never noticed the search they used. That search returns a ton of non-climate papers, including all the psychology papers, surveys of the general public, papers analyzing TV coverage, etc. that I found in the Cook scam.
If Verheggen et al. did not screen out social science papers, or all the other non-climate papers like the irrelevant engineering papers, and it appears they did not, then their results are voided: their respondent pool includes the authors of those papers. We can’t do anything with their study or their numbers if they’re polling the authors of those papers. I wish I’d caught this earlier…
(The engineering paper phenomenon I discovered was many, many iterations of: Hey guys, you all know about global warming… let me tell you about this new membrane I’ve developed, or this new diesel engine design, or this new atomic layer deposition technique.)
All those papers would go into their “mitigation” category if they used the same scheme as Cook et al. The quality of their questionnaire becomes moot if that is the case. Now you would think that engineers, psychologists, pollsters, sociologists, and what have you, polled as part of a study of the scientific consensus on anthropogenic climate change, would be confused and tell the researchers “Yo, I’m not a climate scientist, or even in a related field. I don’t even study the natural world. You don’t want me.” But after Cook I’ve lost my power to be surprised by consensus studies.
The mitigation and impacts categories also create a structural bias that invalidates such studies. There is no disconfirming counterpart to mitigation or impacts papers/authors. There is no opposite of a mitigation paper, which will almost always be counted as endorsement. The same goes for impacts, unless there is an explicit category for Minimized Impacts or Disputed Impacts, or, like your hurricanes paper, for disputing the evidence of causation. Cook’s Impacts category description (Table 1) does not contemplate minimization, nor do the available guidelines in the rater forums (forums which violated their stated method).
To illustrate: if talking about climate gets an engineering paper counted as endorsement, how does an engineering paper get counted as rejection? Most won’t talk about climate at all. Could an engineering paper say “Yeah, we’re not talking about climate” and count as rejection? There’s no way. It becomes even worse with social science papers. If a paper that analyzes TV coverage of AGW counts as scientific endorsement of AGW, does a paper that analyzes Taco Bell commercials count as rejection?
The use of “mitigation” papers invalidates the method completely. (The TV paper was counted as mitigation by Cook.) It’s trickier here because they’re polling the authors of such papers, but from the Methods section they didn’t exclude non-climate scientists, or even psychologists. They said this: “By also soliciting responses from signatories of public statements who are not necessarily publishing scientists, it is likely that viewpoints that run counter to the prevailing consensus are somewhat magnified in our results.”
This will turn out to be false if they included a bunch of the unrelated papers’ authors. The results will need to be recomputed excluding all the non-climate science respondents.
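To make the recomputation concrete, here is a minimal sketch of what filtering the respondent pool and recomputing the headline percentage would look like. All of the data, field labels, and numbers below are invented purely for illustration; they are not Verheggen et al.’s actual survey data, categories, or results.

```python
# Hypothetical illustration: recompute a consensus percentage after
# excluding respondents whose field is unrelated to climate science.
# All records below are made up for the sake of the example.

respondents = [
    {"field": "climate science", "endorses_agw": True},
    {"field": "climate science", "endorses_agw": True},
    {"field": "climate science", "endorses_agw": False},
    {"field": "engineering",     "endorses_agw": True},
    {"field": "psychology",      "endorses_agw": True},
    {"field": "media studies",   "endorses_agw": True},
]

def consensus_pct(pool):
    """Share of respondents endorsing AGW, as a percentage."""
    return 100.0 * sum(r["endorses_agw"] for r in pool) / len(pool)

# Headline number computed over everyone, including non-climate fields.
all_pct = consensus_pct(respondents)

# Recomputed number after dropping the unrelated-field respondents.
climate_only = [r for r in respondents if r["field"] == "climate science"]
climate_pct = consensus_pct(climate_only)

print(f"All respondents:    {all_pct:.1f}%")    # 83.3%
print(f"Climate-field only: {climate_pct:.1f}%")  # 66.7%
```

The point of the sketch is only that the two numbers can diverge once the pool is cleaned; which direction the real numbers would move is exactly what a recomputation would have to show.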