rcvf  ·  3793 days ago  ·  post: We need to consider revising our system of scholarly research and publication

I'm glad that The Economist stays excited about science, but I think there's a misconception here. Replication is the core of science, and I believe it's impossible to do experiments without replicating a lot of what's been done before.

Disclaimers: I'm a practicing scientist, so replication and the reputation of science are things I'm concerned about. Also, this is my first post on hubski, hi everybody. Short version of my argument: replication lives in the Methods sections of papers, and doing a good experiment takes years.

For me, the scientific process is a lot like art-making: riffing off an arcane literature mostly unreadable to all but specialists; pushing the limits of a medium (e.g., imaging, protein purification, electrophysiological recordings from single cells), such that 'cutting-edge' means working right at the noise floor with equipment that barely works at all; spending months or, more frequently, years to produce a single cultural text or object that adds just a bit to that specialist-only literature. As such, performing an experiment is about discovering something that is possible, not what is necessarily always the case, and sometimes not even what is all that plausible; only what happened to work under a very specific set of conditions.

So there's usually no reason to perform what, to non-scientists, looks like full replication: an exact duplication of the same study to independently verify the results. Not to be too much of an art nerd, but it'd be like Matisse painting an exact copy of Picasso's "Les Demoiselles d'Avignon" in 1910 rather than producing "La Danse". To put it bluntly: it's been done, and there's no reason to just make a copy. Excitement and novelty carry as much weight in science as they do in the arts. (And of course artists copy each other to learn technique in the first place, even though those copies might never see the light of day; the same is true in science.)

But replication occurs all the time in a more covert way. For example, in the lab I've studied a particular biological mechanism for changing how synapses work in the brain. Let's call it process X (it's actually called 'spike-timing-dependent plasticity', for the detail-oriented). About two decades ago, scientists had only less precise ways of changing synapses, so when process X was first reported, it seemed like a big deal (to the field). However, several major labs tried to replicate process X, failed, and went back to their own projects. Then a new crop of graduate students started working, and about five years later a few labs reported success with something that looked like process X, but at different synapses and induced in different ways. Gradually, more labs 'figured out' how to get it to work, and as it became more common, process X moved from being a sensational result to being a conventional result, and then it stopped being a result and became a 'Method'. In some papers today you have to read very carefully to even realize they're studying or using process X at all. I'm sure the same is true in other fields, where big findings move from being highlighted in the Abstract and Discussion, to being reported in the Results, to being relegated to the Methods (in some journals, just deposited online and not even part of the paper proper).

I'll also assert three other points. One: statistics are important as a common currency in science, but they aren't the absolute value of the work. I urge my students not to game the system by working toward, or fetishizing, p-values of 0.05 (the quick simulation below shows why that's dangerous). Nice results can stand independent of statistical analysis. Two: peer review is not designed to catch most kinds of mistakes, including fraud. Other researchers getting excited about a paper, students and postdocs working on a particular topic and finding what happens to hold under their specific conditions: that's the normal mode of science that keeps the field clean, and it operates on a time scale of years. Three, and most importantly: reputation is all we have as individual scientists. A result is only as important as the other researchers in the field who pick it up and move it forward; not replicating it per se, but leveraging it to do something potentially even more outrageous.
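To make point one concrete, here's a minimal simulation, purely my own toy sketch rather than anything from a real study, of what 'working to p = 0.05' does. If you keep adding data and re-testing until the p-value dips below 0.05, you'll declare a 'significant' effect far more often than the nominal 5% rate, even when there's no effect at all:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_trial(start_n=10, max_n=100, alpha=0.05):
    """One experiment with optional stopping: re-test after every new
    observation and stop as soon as p < alpha."""
    # Both groups are drawn from the SAME distribution: no true effect.
    a = list(rng.normal(0, 1, start_n))
    b = list(rng.normal(0, 1, start_n))
    while len(a) < max_n:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True  # "significance" declared, experiment stops here
        a.append(rng.normal(0, 1))
        b.append(rng.normal(0, 1))
    return False

hits = sum(peeking_trial() for _ in range(1000))
print(f"'Significant' findings with peeking: {hits / 1000:.1%}")
# Prints a rate several times the nominal 5%, despite zero true effect.
```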

So much for a defense of the current system. Can we do better? Yes, I think so. For one, I think that open access to raw data would help. I'm not suggesting getting rid of the journal/peer-review system. Rather, after publication, all the raw data that went into a paper, including the statistical analyses as the researchers actually ran them (Excel spreadsheets, Matlab code, etc.), could be made available online, linked directly from the publication itself (a rough sketch of what such a link might point to follows below). The storage space available today makes this kind of data transparency possible in a way it previously wasn't.
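As a rough illustration, and to be clear, the layout and field names here are my own invention rather than any existing standard, a publication could ship with a machine-readable manifest listing every data file and analysis script behind it, with checksums so readers can verify they have exactly the bytes the authors used:

```python
import hashlib
import json
from pathlib import Path

def make_manifest(doi, data_dir):
    """Fingerprint every file that went into a paper and tie the list
    to the paper's DOI."""
    entries = []
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append({"file": str(path), "sha256": digest})
    return {"publication_doi": doi, "files": entries}

# "10.1234/example" is a placeholder DOI; "raw_data/" is a hypothetical
# folder holding the spreadsheets, Matlab scripts, and recordings
# behind one paper.
manifest = make_manifest("10.1234/example", "raw_data/")
print(json.dumps(manifest, indent=2))
```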