on science journalism

Adam Ruben has created a bit of a kerfuffle with his opinion piece in Science, “The Unwritten Rules of Science Journalism.” And for once, I find myself siding with the journalists. His piece comes across as condescending and outright mean (maybe it was supposed to be funny?). And the ironic thing is I’m not generally all that forgiving of science writing. But Ruben and I seem to vastly disagree on where it goes wrong.

Ruben is pretty scathing about the techniques journalists use to make science accessible to laymen. Really, the average person (scientists included) is pretty bad at estimating units of measure unless it’s something they use or see often, so “shortcuts” like comparing a length to football fields, or the head of a pin, or a human hair, are quite valuable. He’s also pretty condescending about journalists’ tendency to add personal anecdotes or little details about their meeting with the scientist in question. My thinking there is…why wouldn’t we want a journalist trying to humanize us? Otherwise, in a lot of people’s eyes, we’re still that buck-toothed nerd with horrible communication skills and the safety glasses taped together at the nose (oh wait….).

So here’s where Ruben and I are closer to being on the same page: factual errors and far-fetched “applications” make us a little nuts. I don’t have much to say about factual errors except…well, yeah. If a “fact” in a story is outright wrong, it just looks bad.

But on far-fetched conclusions, I place the blame equally on the scientists and the journalists. See, here’s the thing: scientists as a whole are really, really bad at explaining their research, and Ruben’s comment about “dumbing down” the science is a perfect example of how that skill isn’t even valued. Yes, there are exceptions, but not many. Researchers tend to assume a knowledge base about their topic that isn’t “common knowledge” even among scientists in a related field, and that’s just bad communication. If you’re not capable of explaining why your research is amazing and cool to an intelligent, reasonably educated person (which is not “dumbing down”; it’s a valuable skill that’s far too rare), then don’t blame journalists for grasping for the “coolness” factor on their own (or going to other researchers to try to gain some perspective).

I think a lot more effort needs to go into this area of communication, and we also need to be able to explain the research process as a whole to laypeople. For example, a scientist, even one in a different field, would probably know that being able to reverse a disease process in frogs is not an immediate precursor to doing the same thing in humans, right? But when a researcher is trying to get the word out, or get something published, not a whole lot of time is spent explaining that the experiment needs to be repeated in other animals, that even changing the species of frog might give different results, and that instead of being horrible, a different result offers a new opportunity to figure out why, and what’s different between the two species that might tell us more about the disease process overall.

In a related rant, I tend to be frustrated by the lack of context in the reporting of new research. Take, for example, this article that came across my feed this morning. It’s a great explanation of a neat new finding that portions of the auditory brainstem (specifically in the cochlear nucleus) are highly organized by “plasticity,” by which they mean how sensitive a neuron is to change. But one particular sentence, early in the article, makes me a little crazy: “The major finding: The synapses in question are not grouped randomly. Instead, like orchestra musicians sitting in their own sections, the synapses are bundled together by a key trait: plasticity.”

Well, duh. Even a quick wiki search will show you that we’ve known for a long time that the cochlear nucleus is highly organized: by neural projection destinations, by tonotopic sensitivity (which means what frequency, or pitch, the neurons respond to best), and by other sound characteristics. What this finding does for us, as far as I can tell (as of this writing, the actual Journal of Neuroscience article has not yet been released; I may need to go back and edit this after it is!), is raise a chicken-and-egg question: does the organization by tonotopy arise from this inherent ability of neurons to self-organize by plasticity, or are they independent processes?

So, what would I like to change about science reporting? I’d like to see greater effort put into expressing the joy of discovering new things, independent of whether they have immediate clinical applications. And I’d like to see a much greater effort to explain how any given finding relates to the larger body of research out there. I think the public in general is (rightfully) frustrated by research reporting that seems mixed up and contradictory, and if we did a better job of explaining how changing small variables in studies can lead to large differences in results, people might better understand what it is we do, and be more supportive.
