For scientists, or those who aspire to think like one, the National Geographic piece by Joel Achenbach titled "Why Do Many Reasonable People Doubt Science?" is likely a painful read. But, as Achenbach discusses, there's a scientific basis for scientific doubt, one that not even scientists can fully avoid.

The gist is that we're social animals. (Yes, even the introverts.) We belong to tribes—composed of our circle of friends, our family, our colleagues, our community or church members, and the like—whether we choose to recognize it or not. Tribal affinity can trump objective interpretation and unbiased presentation or dissemination of evidence.

Let me be concrete: My tribe recycles. Its members believe climate change is real and human-driven. We think parents should vaccinate their children. We're not fans of processed food. And so on.

It's not perfect, but I like my tribe. I was born into it, and I've made a comfortable life among its people, some of whom I care about deeply. I don't want to upset them. I don't want them to think badly of me either. So, what do I do with evidence on the matters above, among others?

It's natural—even rational—that I'm predisposed to accept, highlight, and discuss evidence that is consistent with my tribe's identity. Sure, maybe I explore and entertain counterpoints. Maybe I even see merit in them (well, some of them). But the field is tilted in what I'm likely to reveal to members of my tribe, if not more publicly: where I stand, and even which counterpoints I entertain only to reject. I'm probably even tricking myself as to what's true, at least some of the time and on some issues.

This is Dan Kahan's "identity-protective cognition" at work. Even if I could avoid it, can I be sure that I am avoiding it? Can you? I really doubt it. Not even deeper engagement with the evidence is certain to remove the bias.

Studies back that up. For example, in one study, Kahan and colleagues found that political orientation can overwhelm correct interpretation of data, even among highly numerate individuals. Such individuals could correctly interpret data presented as pertaining to a topic that isn't politically charged, like how well a skin cream works. But when the same data were said to represent something politically charged, like the relationship between a gun ban and crime, these individuals suddenly misinterpreted them in ways that aligned with their political views. You'll find other work on this theme at The Cultural Cognition Project.

This is enough to make one despair and overreact: If people are going to believe whatever they want, no matter the data, are science and evidence irrelevant? The answer is surely "no." Other work shows that facts can attenuate bias, but only so much.

But there's another reason to value science. It has clearly affected the human condition enormously, and largely for the better (in my view). The fruits of science have helped more than double our achievable lifespans and have provided levels of control over our environment (think agriculture and transportation) and ease of interaction (think the Internet and cell phones) that could not have been dreamed of decades ago. At a less grand level, honest debate over evidence does inform policy. I see it happen at the Comparative Effectiveness Public Advisory Council meetings I participate in, for example.

Moreover, it may be possible to protect oneself, at least somewhat, from one's tribal instincts. To improve the chances that one's view is defensible, Bill Gardner suggests that we seek out the best arguments from worthy adversaries in other tribes:

[Y]ou have to tackle the best version of the argument you hope to defeat. [...] You should also seek identity threat. You need to expose yourself to people who fight by the rules, and who can and will beat the intellectual crap out of you.

This is useful advice, so long as we understand that, to get at the truth, the outcome of interest is the light of knowledge, not the heat of battle. And yet, heat can be alluring and is often highlighted in the media.

In a recent paper, Matthew Nisbet and Declan Fahy examined the media's role in conflicts over the interpretation of scientific evidence. The tendency for media to cover the "political angle" or to "balance" stories with opposing views—without resolving which is more consistent with the body of evidence—swiftly moves attention from light to heat. Such politicization of scientific subject areas encourages the public to heed their tribal instincts even beyond their subconscious tendencies. Nisbet and Fahy take heart in the rise of explanatory, data-driven journalism now found at The Upshot, 538, Vox, and elsewhere, suggesting that the more knowledge-based products these outlets provide might help attenuate tribalism. Whether they're right or not, a focus on knowledge as opposed to debate is probably a necessary condition.

Better still is greater participation by researchers in disseminating knowledge directly on such sites. (Aaron and I write regularly for The Upshot on health care research and policy, for instance, along with other scholars who write on other subjects.) As Dahlia Remler pointed out, such arrangements can disseminate knowledge more cheaply than financially struggling media outlets could on their own, because academics do not require a salary and benefits (just a relatively modest fee) to produce occasional pieces.

Quality journalism is a public good. It was once cross-subsidized by sales of sports coverage and classified ads, but the Internet and other technology put a stop to that. Now, like most public goods, quality journalism is poorly sustained by the market. So it makes sense to use higher education, an already financed public good, to support quality journalism, an underfinanced public good. That idea was just one of the reasons Don Waisanen, Andrea Gabor, and I advocated Academic Journalism—a lot more journalism done by people with full-time academic salaries.

If scholars partner with media organizations to disseminate knowledge and do so by engaging the best alternative, identity-threatening interpretations, we might just learn something. It won't change the world overnight, but it isn't likely to make it worse.

Austin B. Frakt, PhD, is a health economist with the Department of Veterans Affairs and an associate professor at Boston University’s School of Medicine and School of Public Health. He blogs about health economics and policy at The Incidental Economist and tweets at @afrakt. The views expressed in this post are those of the author and do not necessarily reflect the position of the Department of Veterans Affairs or Boston University.
