Credulity, Skepticism and Cynicism
You’ve met them. “Oh, those scientists. They get their funding from the government/industry/political think tanks. They’re just producing the results needed to keep their money flowing. They’ll say anything it takes. Besides, it’s not like they don’t make mistakes. Even Newton and Einstein had it wrong.”
You’ve met the others, too. “My friend told me about an Oprah show where she talked to a writer who explained how the universe really works. I always knew it was a special place made just for me.”
There’s no polite way to say it, but it can be said simply. They’re both doing it wrong.
Any of us who present complicated or contentious information to the rest of the world–bloggers, podcasters, journalists, interviewees, teachers–have an opportunity to help people figure out how to interact with it. We can model critical thinking. We can tell others why we trust those we do. We can….
Well, there has to be a fair number of things we can do. If I knew what they all were, I wouldn’t have proposed this topic at ScienceOnline ’10.
Description: Lay audiences often lack the resources (access to studies, background knowledge of fields and methods) to evaluate the trustworthiness of scientific information as another scientist or a journalist might. Are there ways to usefully promote critical thinking about sources and presentation as we provide information? Can we teach them to navigate competing claims? And can we do it without promoting a distrust of science itself?
In addition to the crew who’ve signed on to the session, I expect we’ll get lots of good ideas from the session attendees. That’s the grand thing about an unconference. Well, that and the fact that we can start early and finish late, with input on the blogosphere even from people who can’t afford the time or travel to the event itself.
We’re dealing with a spectrum of trust, of course, among other things. See my examples at the top of the post. Trusting everyone to trusting no one. Credulity to cynicism. And not to indulge in reflexive centrism, but the healthiest point in this spectrum is somewhere between the two ends.
It’s easy to spot what’s wrong with each extreme. The credulous can’t account for fraud or for the fact that our brains are only good at some kinds of impulsive (gut) decision-making. The documentation of cognitive biases and fallacies is not just a creative venture. The cynics can’t account for anyone who doesn’t do science for mercenary reasons (and how many people do?) or for the continuous advance of knowledge. We really do understand more about how the universe works than we ever have, even if we have much, much more to learn.
The problem in getting to that healthy point is two-fold. First off, we need to teach the credulous how to identify the professionally sympathetic. We also need to help the cynical identify sources of information that they can trust. However, we also need to do this without swinging the pendulum too far and making cynics of the credulous and vice versa.
That may sound like two problems, but it isn’t. It’s teaching people how to sort information and sources. We can still cause a broad swing, nonetheless, if we’re not careful. Finding out that the positive evidence for parapsychology was mostly based on bad research design and not reporting negative results certainly made me cynical for a time, although it mostly now gives me ideas on what to look for in good research design.
The second part of the problem is that, barring severe brain dysfunction, neither the cynics nor the credulous really exist. The spectrum isn’t a spectrum but a rugged terrain. Those people who don’t trust scientists believe the people who tell them where the conflicts of interest arise and those who poke holes in (or near) methodology. The one who trusts all of Oprah’s guests is deeply suspicious of pronouncements from faceless governments, universities and corporations.
Whether we’re right or wrong on a particular topic, we’re all partly credulous and partly skeptical. There is too much information required to make reasonable decisions in modern life for us to evaluate it all. Instead, we trust some sources and distrust others and trust still others only on some subjects. We accept some evidence as valid and reject some as flawed or irrelevant. We decide when consensus has been reached among the experts who “count.” And often, we do all that without examining how or why, even if we think of ourselves as skeptics.
Does that mean we’re doing it all wrong or that it’s impossible to do it right? No, or we’d live in the postmodern nightmare my stock cynic at the top of this post thinks we’re in. It does mean there’s plenty of work to be done, because the problem isn’t a simple one of teaching people how much to believe, but teaching them how to figure out what to believe, instead of basing their decisions on who is saying the things they want to hear or the things that get their attention. It isn’t even necessarily the case that credulity or cynicism aren’t occasionally called for.
So, skeptical and scientific interwebs, share your tricks. What do you do to promote critical thinking? How do you help others figure out who to trust when they aren’t experts in the field? And maybe more importantly, help us learn from an even wider group. What have people done to help you understand what you can trust and what you can’t?
This entry was posted on Monday, December 14th, 2009 at 12:36 am and is filed under Science, Stephanie Zvan.