Matt Ridley is a columnist who writes generally insightful material for the Wall Street Journal about science and the culture of scientists. Over the last three weeks, he has published a three-part series about confirmation bias, the tendency of people to give too much weight to evidence that agrees with their preconceived notions and too little weight to evidence that disagrees with them. Confirmation bias is absolutely real and part of the human condition. Climate change skeptics have loudly accused climate scientists of confirmation bias in their interpretation of both data and modeling results. The skeptics claim that people like James Hansen will twist facts unrelentingly to support their emotion-based conclusion that climate change is real and caused by humans.
Generally Mr. Ridley writes well. However, in his concluding column today, Ridley says something that makes it hard to take him seriously as an unbiased observer in these matters. He says: "[A] team led by physicist Richard Muller of the University of California, Berkeley, concluded 'the carbon dioxide curve gives a better match than anything else we've tried' for the (modest) 0.8 Celsius-degree rise.... He may be right, but such curve-fitting reasoning is an example of confirmation bias."
Climate science debate aside, that last statement is just flat-out wrong. First, Muller was a skeptic - if anything, Muller's alarm at the result of his own study shows that the conclusion runs directly against his bias. Second, and more importantly, "curve-fitting reasoning" in the sense of "best fit" is at the very heart of physical modeling.

To put things in Bayesian language, a scientist wants to test the consistency of observed data with several candidate models or quantitative hypotheses. The scientist assigns prior probabilities to the models - the degree of belief, going in, that each model is correct. An often-used approach is "flat priors", where the initial assumption is that each of the models is equally likely to be correct. Then the scientist does a quantitative comparison of the data with the models, essentially asking the statistical question, "Given model A, how likely is it that we would see this data set?"

Doing this right is tricky. Whether a fit is "good" depends on how many "knobs" or adjustable parameters the model has and on the size of the data set - if you have 20 free parameters and 15 data points, a good curve fit tells you essentially nothing. After doing this analysis correctly across the different models, the scientist comes up with what Bayesians call posterior probabilities that the models are correct. (In this case, Muller may well have assigned the "anthropogenic contributions to global warming are significant" hypothesis a low prior probability, since he was a skeptic.)
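To make this concrete, here is a minimal, purely illustrative sketch of that Bayesian model-comparison logic in Python. The data are synthetic, the two toy models (a response proportional to a forcing variable versus an oscillating driver) are hypothetical stand-ins, and the noise level and prior range are assumptions chosen for illustration - this is not Muller's analysis, just the shape of the reasoning: flat priors over the candidate models, a likelihood that asks "given this model, how probable is this data set?", and posterior probabilities at the end.

```python
import numpy as np
from scipy.stats import norm

# --- Synthetic, purely illustrative data (not real climate observations) ---
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)                 # stand-in "forcing" variable, rescaled to [0, 1]
y = 0.8 * x + rng.normal(0.0, 0.1, x.size)    # observed "anomaly" with noise

sigma = 0.1  # assumed measurement noise level (a modeling choice, not a measured value)

# Two candidate models, each with one adjustable amplitude "knob":
def model_a(x, a):   # response proportional to the forcing variable
    return a * x

def model_b(x, a):   # response proportional to a different, oscillating driver
    return a * np.sin(2 * np.pi * x)

def log_likelihood(model, x, y, a):
    # "Given this model (with this parameter value), how likely is this data set?"
    return norm.logpdf(y, loc=model(x, a), scale=sigma).sum()

def marginal_log_likelihood(model, x, y, a_grid):
    # Average the likelihood over a flat prior on the parameter (crude grid integration),
    # so a model is not rewarded just for having more wiggle room.
    logs = np.array([log_likelihood(model, x, y, a) for a in a_grid])
    return np.log(np.exp(logs - logs.max()).mean()) + logs.max()

a_grid = np.linspace(-2.0, 2.0, 401)          # flat prior over each model's amplitude
log_ev = np.array([marginal_log_likelihood(m, x, y, a_grid)
                   for m in (model_a, model_b)])

# Flat priors over the two models themselves, so the posterior probabilities
# follow directly from the (marginal) likelihoods.
posterior = np.exp(log_ev - log_ev.max())
posterior /= posterior.sum()
print("P(model A | data) =", posterior[0])
print("P(model B | data) =", posterior[1])
```

Note that marginalizing over each model's parameter prior is what keeps a model with many knobs from winning automatically: the extra flexibility is penalized unless the data actually demand it, which is the quantitative version of the "20 free parameters and 15 data points" warning above.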
The bottom line: when done correctly, "curve-fitting reasoning" is exactly the way that scientists distinguish the relative likelihoods that competing models are "correct". Provided the set of models considered is fair and the analysis is quantitatively correct, calling "best fit among alternative models" confirmation bias is just false.