Trust and Critical Thinking in Science Reporting: A Case Study

If you’ve been paying attention, you’ve heard me say before that I’m not a science blogger. Over the weekend, however, I wrote a guest post that was not merely science blogging but blogging about a peer-reviewed publication. I wasn’t thinking about it at the time, but it was an opportunity to apply some of my thoughts on my upcoming ScienceOnline session on Trust and Critical Thinking, which seeks ideas on how to report science in a way that teaches readers to engage with information skeptically.

Given that, I thought I’d capture what I set out to do in my post. Mind you, all these strategies involve modeling critical thinking. I have no data on how effective modeling may be, but it’s the best idea I have right now and it’s fairly easy to do as a writer.

Use the Controversy
This is something that a lot of science writers do. Controversy is conflict, and conflict is the basis of story. Stories order information, making it more accessible, and stories get remembered. I only hope I did it right.

I used the conflict between those who want us to believe that IQ testing differences between racial categories are indicative of some underlying, immutable, fundamental difference between races and those who find the concept abhorrent. I also used the conflict between a researcher who showed up to tell a bunch of science geeks and some scientists that they were incompetent to understand his field and all the people he stepped on in the process. I used the first so that people would know they were dealing with competing claims that would have to be analyzed, and the second to find a study that people would be interested in analyzing.

What I didn’t do: I didn’t generate controversy where none existed (except by writing one of the posts to which said researcher objected). I didn’t report on the controversy (they say this, but they say that). I didn’t suggest the researcher had any political reason to produce the results he did, because I don’t have any way of knowing his motivation.

Check the Tools
The first thing I noticed when the author recommended his paper was that he was using a tool (reaction time testing) that I had seen used for intrapersonal testing (looking at the effect of situational variables) but not interpersonal testing (looking at the effect of variables intrinsic to the person). So I read up on the tool.

It turns out that I was mostly correct. The majority of uses for the tool involve things like attentional priming and measures of distraction, although some trends in individual differences due to age and sex have been found. A good chunk of my post gives the reader a summary of the background needed to understand the use of this tool, along with resources for further understanding.

Check the Controls
Once I understood how the tool was used and what results it had produced in the past, I understood what variables affected it. I saw that age and sex had been controlled for and said so in the post. I also identified some variables that could plausibly vary with race and noted that they hadn’t been measured, much less controlled for.

Check the Claims
This was where things really fell apart and where I think much reporting of scientific findings falls apart. The researcher was making claims online about what his study proved that weren’t part of the Discussion section of the paper and weren’t supported by the citations in the paper. That didn’t mean they were wrong, but it did mean they were well worth investigating.

In the end, I contrasted the study’s findings with the researcher’s assertions by setting them next to each other. I presented the strongest support I could find in the literature for the leap being made by the researcher and explained where and why it still fell short of bridging the gap.

Untangle the Logic
This one came up in the discussion on my post. Someone asked me to evaluate the overall evidence for there being genetic differences between races that lead to differences in intelligence as measured by IQ tests. I think this person was looking for a simple summation.

Instead, they got an explanation of why it isn’t a simple question, as I broke the large hypothesis down into smaller hypotheses that would each, individually, need to be proved before the large one could be. I identified six, but there are almost certainly more. Making the steps explicit, I hope, exposed some of the leaps of logic required by those who still say that “of course” these differences are real and genetic.

Identify the Biases
Also in the comments, someone noted that I was setting a high bar for evidence on this particular topic. I agreed, noting it wasn’t entirely an academic question, but I also pointed out that I was wary of accepting weak evidence because we’ve identified cognitive biases that make us more likely to believe the race/IQ hypothesis instead of the appropriate null hypothesis, which is that there is no connection.

We make a whole host of attributional errors on a regular basis. That is to say, we are much better at seeing how environment affects us than how it affects others, and better at seeing it for groups we belong to than for groups we don’t. In each case, we’re more likely to look at “the other” and ascribe behavior to fundamental features of the other instead of to environmental factors. Race is one of the most highly “othering” factors in our society, and I pointed out that counteracting that bias (not even a one-race-good, other-race-bad bias) requires a great deal of skepticism.

Easter Eggs
All right, despite what I said above, this one doesn’t involve modeling critical thinking. There is a statement toward the end of the Discussion of the paper I blogged on that is pure assertion without experimental support. Nothing in the study addressed the question, and there was no citation.

I didn’t point it out. I don’t know how many people will read the paper in full, but those who do will have enough information after my post to have a little moment of discovery of their own when they read that. They will have figured out for themselves that something is wrong. I hope they find that as rewarding as I do and that it offers encouragement to continue thinking critically.

Okay, that’s it for my ideas. For those of you who read my guest post, were these strategies effectively modeled? And more importantly, did you identify the Easter egg statement in the original paper?


19 Responses to “Trust and Critical Thinking in Science Reporting: A Case Study”

  1. December 29th, 2009 at 1:59 pm

    Bryan says:

    Stephanie– sorry if I am interrupting the chirping crickets here.

    I am sometimes slow to pick up on things when we interact; this has created confusion and perhaps some lols for you.

    To clarify, it seems to me that for this science conference: You will be featuring me (at least in part) in your presentation of how not to do science (i.e., lay people with no scientific expertise will be spoon fed by scientists / science educators out of field as to why my in-field peer-reviewed publication is pseudo-scientific, and not worthy of trust).

    If that’s true, then in the spirit of presenting both sides (and of academic integrity, professional courtesy, etc.) would you allow me to present a one page statement that lets me rebut what I predict you will claim?

    I am only requesting this if I am correct that my research will be featured in your panel session (that inference seems very reasonable on my part given your OP here). You and Greg would obviously see my statement before presenting it (perhaps as a power point slide, or in whatever format you use to present your talk). We can agree on a word count, but I would ask only that it’s presented to your audience long enough for them to read it. That’s it.

    If any costs are incurred (to make copies of my statement, or whatever) because of this, I will gladly pay them.

    I mentioned not posting here anymore, and yet here I am (go figure). My explanation: I was hoping to post this request on the conference website. But, I have never done something like “edit a wiki-type” page. I was reluctant to post there until I was sure that I am allowed to (I did register as a user there). I saw no other place to comment on your panel session, but it’s possible I missed the link where my comment could go (if indeed I am allowed to comment on it).

    Yes, I am parsing my words carefully here for reasons I hope are obvious given our interactions. Here is a link to the panel session where I would like to also post my request:


  2. December 29th, 2009 at 2:29 pm

    Stephanie Zvan says:

    sorry if I am interrupting the chirping crickets here.

    Nice gratuitous insult, Bryan.

    I have no intention of discussing your study at the conference. The topic of the session is finding ways to promote critical thinking among readers of science reporting. This post here is an exploration of modeling critical thinking in that kind of writing. You’ll note that it doesn’t even mention your name or the topic of your study. Nobody at the session is going to care about the contents of a single study or who the author is. They will be too busy coming up with strategies they can incorporate in their own writing and arguing over which ones work.

    If you post a defense of your paper there, yes, I (or someone else) will delete it. It is off the topic of the session. I suggest you post it, should you feel the need, on my guest post itself. People are notorious for not following links, and I haven’t gotten nearly the traffic on Almost Diamonds where you commented as I have on the original post at Greg’s blog. A statement anywhere else is simply likely to go unread.

  3. December 29th, 2009 at 4:34 pm

    Bryan says:

    Thanks for the clarification. It was a ding– the crickets chirping thing, but I thought it was funny. We have too much a history together now, so I thought we might be at the stage in our relationship where that’d be ok.

    I won’t need a statement, then, and perhaps it was paranoia / ego (or any other undesirable trait we could ascribe to me), but I thought my work would be featured there given this post here.

    Can I recommend here some suggested readings– having nothing at all to do with IQ– that I think science educators must know but might not be familiar with (because they’re mostly psych articles). Not at all implying these should be used for your conference, but I think you would appreciate them / find them useful.

    I’ll post one here: Still in my opinion the single best journal article I have ever read:

  4. December 30th, 2009 at 12:36 am

    BioinfoTools says:

    Regarding Bryan’s reference, those with limited time might prefer a summary of Mook, D.G. (1983). In Defense of External Invalidity. American Psychologist, 38, 379–387.

    Besides, the PDF he links is a 7.4 Mb download! (Probably because it’s a scanned image rather than the article.)

  5. December 30th, 2009 at 9:47 am

    Scotlyn says:


    You will be featuring me (at least in part) in your presentation of how not to do science (i.e., lay people with no scientific expertise will be spoon fed by scientists / science educators out of field as to why my in-field peer-reviewed publication is pseudo-scientific, and not worthy of trust).

    Let’s connect the dots, here.
    Suppose there was a journal devoted to doing this:
    Bryan: There’s no excuse for not knowing the IQ / job performance relationship (no excuse for any scientist claiming to have scientific knowledge in this area). It’s not a secondary issue; it’s the single biggest thing the field’s done over the last *38* years (year after year, in direct response to the supreme court’s ruling in griggs v. duke power).

    in order to achieve this:

    Consider though re cultural bias, IQ tests create massive adverse impact (using them in employee selection results in far fewer minorities being hired, relative to the ratios of black and white applicants). This is a form of employment discrimination under the civil rights act. However, the employer can successfully defend AI by showing that the test is job related and consistent with business necessity. IQ tests meet these criteria. So, the courts and EEOC recognize that even though IQ tests show group differences and result in restricted employment opportunities for minorities, they are legal to use in selection because they’re proven as job-related / unbiased predictors of job success.

    can you please differentiate this “expert” field from pseudoscience?

  6. December 30th, 2009 at 9:50 am

    Scotlyn says:

    I lost some formatting – a blockquote on the second statement of Bryan’s and a bold on the in direct response to the supreme court’s ruling in griggs v. duke power.

    Bryan’s journal appears to be part of an industry effort to undermine a particular line of argument which once succeeded in overturning discriminatory practices.

    Are you ok about that Bryan?

  7. December 30th, 2009 at 12:01 pm

    Bryan says:

    Nah; I think many companies don’t even realize that IQ predicts job performance– much like many people don’t. Instead they use other things without realizing they’re really selecting for IQ (“why are manhole covers round” as an interview question is a classic example).

    The ignorance is so bad a study was conducted recently published in an elite journal. The title is something like: 7 examples where research in I/O psych is totally different from what HR practitioners believe and do.

    At least 4 of those examples centered on how IQ is the best predictor, and how things like personality pale in comparison. Most of these poor bastard scientists are in psych departments not business colleges. I had a brief convo with the founder of the journal intelligence last year. He liked the fact that people from business colleges were starting to attend his conference. We then got into a discussion of salaries (a new ph.d. in accounting can command about 200k in starting salary at a decent university! Psych people start at around 55/60k). He shook his head and walked away…

    And, I’ve been impressed by the brilliance of federal judges. They mull over scientific data far more complex than IQ/JP relationships in other fields. You cannot BS them that IQ tests are fair, valid and non-biased unless they are.

  8. December 30th, 2009 at 12:51 pm

    Ben Zvan says:

    Funny…I have never been asked why manhole covers are round in a job interview. I have been asked many questions about the details of my work history and my accomplishments and failures and my approach to challenges and all these questions have always related to my ability to apply my experience to the position for which I was interviewing. I think that asking interviewees trivia questions is counter-productive and could easily introduce (un)intended bias into the hiring process. (Unless you’re hiring a utility worker, then you can ask them about manhole covers.)

    I have also not been impressed by the brilliance of the legal system when it comes to questions of science. The patent office routinely grants legal rights to corporations over samples of nature (rather than extraction processes or uses). Fingerprint, ballistic, and DNA identification have never been put through rigorous scientific testing that I am aware of, but are still used to convict “criminals.” Even worse, polygraphy has been shown inadequate and inaccurate in many studies and is still being used in trials.

    From reading Stephanie’s article at Greg Laden’s Blog, I think the most important conclusion in your paper is “Whether these differences might arise from differences in environment, nutritional levels, genes or some other factor is an issue in need of further study.”

  9. December 30th, 2009 at 3:54 pm

    Scotlyn says:

    When courts have to deal with scientific matters, they frequently seek “expert” evidence. Your comments tell a very interesting story, which becomes clear when you look up the Griggs v Duke Power decision. Justice Burger summarised the case thus:

    We granted the writ in this case to resolve the question whether an employer is prohibited by the Civil Rights Act of 1964, Title VII, from requiring a high school education [401 U.S. 424, 426] or passing of a standardized general intelligence test as a condition of employment in or transfer to jobs when (a) neither standard is shown to be significantly related to successful job performance, (b) both requirements operate to disqualify Negroes at a substantially higher rate than white applicants, and (c) the jobs in question formerly had been filled only by white employees as part of a longstanding practice of giving preference to whites.

    In your comments you clearly state that the correlation between IQ and job performance (“the single biggest thing the field’s done”) was painstakingly documented “in direct response to the supreme court’s ruling in griggs v. duke power.” The aim of establishing such a correlation (which is otherwise fairly ho-hum/meaningless – in the absence of further work to establish causation, correlations are not particularly useful entities), could only be to reverse the Supreme Courts decision in that case, and hand the forbidden discriminatory employee selection tools back to employers to freely use, by:
    1) positioning those in the “field” of “psychometrics” and “intelligence” – such as Rushton, Jensen, possibly yourself – as the particular scientific “experts” to whose advice court decisions of the future might reasonably defer.
    2) to create a new, “academically validated” cover – (the copiously documented, but utterly meaningless correlation between IQ and Job Performance) for the exact same discriminatory employee selection practices the Supreme court had denied. You say “the employer can successfully defend AI [Adverse Impact] by showing that the test is job related and consistent with business necessity. IQ tests meet these criteria.” But this is circular, as you already stated the “field” has striven for 38 years to ensure this is so – in response to the Griggs v Duke Power decision.
    The very terms “job related” and “consistent with business necessity” arise from the decision itself, and the necessity to show that IQ testing is “job-related” obviously has provided the agenda and the focus for this whole body of research to which you have devoted yourself.

    So, I’m asking you again,
    1) can a body of “research” focussed so clearly on reversing a particular decision of the Supreme Court, be distinguished from pseudoscience, or if you wish a kinder term, from lobbying/marketing?
    2) are you happy to know that your own “research” efforts are being applied to the clear aim (as substantiated within the body of your own comments) of reducing equal access to the workplace for a certain sub-grouping of your fellow citizens?

  10. December 30th, 2009 at 6:25 pm

    Bryan says:


    The manhole cover question was famous at Microsoft. Supposedly it was Gates’ idea as his philosophy was to hire smart people above any other quality. He did an ok job growing his company on this philosophy.

    My point would be why ask a stupid question like this when you can give an IQ test instead?

    Google is also famous for this. If I recall, they had a billboard with a math puzzle on it. Solving it led you to a website where they then invited you to apply for a job.

    I did cite somewhere the classic meta analysis on what predicts job performance. Believe it or not, experience is not as good as the 12 minute IQ test.

    I also strongly agree that most employers’ interviews are wholly invalid.

    I hear google too has grown nicely as a company.

    Finally, my comment was only about federal judges– especially SCOTUS. I think their writings are often as beautiful as many things in classical literature (read the decision and dissent in steel workers v. weber, 1979– the case legalizing affirmative action in employment settings. I wish I could write so well:(

  11. December 30th, 2009 at 6:28 pm

    Bryan says:

    Sorry, Ben, for posting twice: I agree 100% with your assessment of the best conclusion coming from my 1 study.

    Let’s study it. Let’s not dismiss people who do as cranks just because they think the topic is of fundamental importance.

  12. December 30th, 2009 at 6:37 pm

    Bryan says:

    You got it Scott, but the dates are off.

    The case was in 1971. After the ruling you cite, what did every company with half a brain do: Throw out their IQ tests.

    Academics / college professors then started studying it in exhaustive depth. In the decades after Griggs the validity data became so massive that the civil rights act was amended. The surreal irony here is that Linda Gottfredson actually played small role in this!

    What bothered congress was the idea of race norming (and banding; now illegal, but very popular last century). Here’s the idea: Wow, this 12 minute test has incredible utility and validity. Fuck, it creates adverse impact. We can defend that legally, but I don’t want to spend 6 figures in court proving I am right, nor do I want to use a test that adversely affects my company’s diversity.

    Solution: Add 15 points to black applicants scores. This removes the race difference, but then lets the company benefit from IQ tests without being sued!

    Problem: Congress changed the civil rights act to make this illegal!

    There was a high profile case last year where a content valid test was given to promote fire fighters (this is currently a massive problem in personnel psych. No one can create a test of anything mental that doesn’t measure g, so no one can create an employment test w/o also creating adverse impact). As the test was a mental test, it was g-loaded and showed race differences.

    20 or so people scored high enough for promotion. IIRC, 18 were white 2 were hispanic, none were black.

    The city threw the test scores out and didn’t promote these people. Law suits ensued, and the supreme court ruled the city was wrong.

    That’s extreme, if mental tests are biased crap.

  13. December 30th, 2009 at 6:48 pm

    Bryan says:

    I really didn’t address your concerns, Scott– in looking back. Sorry. I will. Eating dinner now.

  14. December 30th, 2009 at 8:47 pm

    khan says:

    Holy frakkin crap Bryan. Do you have unresolved issues? Why do you crap all over other blogs instead of setting up your own?

    As has been mentioned: you can set up a blog for free.

  15. December 31st, 2009 at 7:44 am

    Mike Haubrich says:

    Even worse, polygraphy has been shown inadequate and inaccurate in many studies and is still being used in trials.

    Polygraphs are pseudoscientific and used during investigations, whether criminal or background checks, to coerce or trick innocent people into confessing. They should be banned, not praised. I once refused a job interview because they wanted a polygraph test, after having been false positive on a polygraph test in a criminal matter. I told the potential employer I had been bonded, and no claim had ever been filed against me.

  16. January 1st, 2010 at 8:35 am

    Scotlyn says:


    The case was in 1971. After the ruling you cite, what did every company with half a brain do: Throw out their IQ tests. Academics / college professors then started studying it in exhaustive depth. In the decades after Griggs the validity data became so massive that the civil rights act was amended. The surreal irony here is that Linda Gottfredson actually played small role in this!

    Yes, and I don’t doubt she did – what you have usefully confirmed (thank-you very much) is how much of your “decades” worth of “validity data” has simply been gathered in the service of spin.

    I have carefully read your comments and your paper and this is all you have come up with so far.
    1. Correlation: IQ test performance and Job performance (ie – being good at performing is a portable skill)
    2. Correlation: IQ test performance and speed performance (ie – being good at tests is a portable skill)
    3. Correlation: IQ test (+other test) performance and self-defined socially constructed “racial identities” [which you falsely persist in conflating with “race,” and thence, with notional discrete genetic populations] (ie – people who have grown up knowing themselves to be expected to perform poorly often fulfill such expectations in various test and performance conditions).

    Such “correlations” are the sum total of what you’ve got. Enough for a bit of creative spin when lobbying lawmakers, or providing “expert” testimony in court, but not enough in themselves to be either interesting, or to add new and useful knowledge to the social sciences.

    You have not shown one ounce of interest in pursuing the meaning of such correlations. As every epidemiologist knows, establishing a correlation is only the beginning of your work. You can document a correlation until the cows come home, but if all you have is a correlation, you’ve got nothing. Correlations may be artifacts, or they may be real. If real, they may result from further, as yet undiscovered causes.

    You have shown no interest in precision in your definition of “race”. Someone’s selection of an identity off of a list of predetermined choices may have little or no demonstrable bearing on their genetic similarity or difference to people who have made different selections on that same list.

    Bryan, you are a spin doctor – possibly a good one. But you are not a scientist, and your work is not science – it is marketing.

  17. January 1st, 2010 at 12:21 pm

    Bryan says:

    I am in a business college.

    Yes, gottfredson is so clever, she pulled one over on all them stoopid congress peoples and advanced her racist agenda under their noses. We are all celebrating now because of the glass ceiling.

    I think studying ECTs is de facto research on causality.

  18. January 4th, 2010 at 12:48 pm

    Lou FCD says:

    I am in a business college.

    shocker, there.

  19. January 5th, 2010 at 1:44 pm

    Ben Zvan says:

    Google using complex problem solving as an interview question is valid since complex problem solving skills will most likely be a job requirement. Microsoft asking why manhole covers are round is not valid since opening a manhole is unlikely to be a job requirement.
