
Pretending to hypothesize

JUN 21, 2012

DOI: 10.1063/PT.5.010181

For Physics Today’s February 2003 issue, I wrote a news story about a paper in Physical Review Letters. In the paper, Dieter Braun and Albert Libchaber described how DNA molecules in solution, if confined in a small vessel and subjected to a steep temperature gradient, would form local concentrations that are 1000 times higher than in the rest of the vessel. The topic is potentially of monumental importance. High concentrations are needed for a primordial soup to beget the self-replicating molecular precursors of life—at least as we know it.

Now if you’d read the PRL paper before my PT story, you might have formed the impression that Braun and Libchaber set out to elucidate a physical mechanism that could have promoted the origin of life. Their clear, methodical description suggested, but did not state, that they were testing a hypothesis.

In fact, as I found out when I interviewed him, Braun stumbled on the effect as a byproduct of a quite different experiment to do with the nonequilibrium heating of reactants. The accidental nature of the discovery is absent from the PRL, where it might have been a distraction, but present in my story, where it added dramatic interest.


Among McNeill Alexander’s research interests is the storage and release of energy in the muscles and tendons of kangaroos and other mammals. CREDIT: Chris Samuel

Braun and Libchaber’s discovery came to mind yesterday when I encountered a paper by Darrell Rowbottom and McNeill Alexander, which appears in the latest issue of Science in Context. Rowbottom is an associate professor of philosophy at Lingnan University, a public liberal arts college in Hong Kong. Alexander is a professor emeritus of biology at Leeds University in England. Together they sought to determine how often research papers in Alexander’s field, biomechanics, are framed as tests of hypotheses.

Why would anyone embark on such an investigation, you might ask. The paper’s introduction hints at an answer. Alexander recounts what comes across—at least to me—as a troubling remark about funding from a fellow biomechanist. The unnamed colleague told Alexander that he did hypothesis-driven research because that’s what the UK’s Biotechnology and Biological Sciences Research Council favors. “No hypothesis, no money” was the implication.

Some philosophers of science and some scientists regard the testing of hypotheses as the epitome of the scientific method, especially when it entails predicting a previously unmeasured phenomenon. A prime example is Arthur Eddington’s 1919 verification of the bending of starlight by the Sun’s gravity, a prediction of Albert Einstein’s theory of general relativity.

On the other hand, physicists and other scientists value curiosity-driven research. Indeed, the list of physics Nobel laureates abounds in people who were looking for something that they thought might be interesting but who weren’t testing a carefully formulated hypothesis. The most recent laureates, Saul Perlmutter, Adam Riess, and Brian Schmidt, were surprised by their discovery of dark energy.

Presentational hypotheses

Biologists, note Rowbottom and Alexander, tend to favor hypothesis testing, whereas physicists are more tolerant of open-ended investigations or, to use the pejorative term, “fishing expeditions.” Which group would biomechanists, who apply physics to biology, most resemble?

To find out, Rowbottom and Alexander looked at 50 papers each from the Journal of Experimental Biology and the Journal of Biomechanics. All the papers were drawn from single volumes published in 2007 and 2008. They classified the papers as H (actually testing a hypothesis), E (exploratory; not testing a hypothesis), P (presenting a hypothesis but not really testing one), and S (suspected of presenting a hypothesis but not really testing one). A fifth category O (for “other”) accounted for papers that couldn’t be clearly assigned to one of the other four categories.

If the P category seems odd, consider this example. In their Journal of Experimental Biology paper, Maria Almbro and Cecilia Kullberg sought to test “whether the flight performance of an insect . . . is affected by variation in body mass due to feeding.” But according to Rowbottom and Alexander, the authors, by their own admission, already knew that sated and starving insects (butterflies, in fact) fly differently. Even though Almbro and Kullberg presented their research as hypothesis testing, what they were actually doing, argue Rowbottom and Alexander, was measuring a known effect.

In all, Rowbottom and Alexander found that 58% of the papers purported to test hypotheses, of which two-thirds really did. The remaining third used, or were suspected of using, hypothesis testing solely as a presentational device. Not one of the 100 papers was classified as E for exploratory. Summarizing their findings, Rowbottom and Alexander write:

“Overall, therefore, it is reasonable to conclude that biomechanists have a bias towards presenting their research as testing hypotheses, and (especially) prefer not to present their research as if it bears no relation to hypothesis testing. Needless to say, this could be mainly pragmatic, rather than reflect widespread agreement on what counts as good scientific practice (or genuine scientific activity). If biomechanists suspect that their chances of publication (and/or funding) will be increased by presenting their work in a particular way, then many will do so even if doing so is inaccurate.”

I find Rowbottom and Alexander’s findings somewhat shocking. Although I can’t be sure, I think that Braun and Libchaber omitted the serendipitous nature of their research for the sake of clarity. Their presentation certainly helped me understand what they’d measured. However presented, their results stand by themselves.

But it would damage science if a bias against exploratory research in biomechanics and in the rest of biology stifled not only the presentation of research but also its practice. Before Rowbottom and Alexander started their investigation, Alexander asked 11 “well-regarded biomechanists with a range of experience” to categorize what they considered to be their best three papers. Fifteen percent were fishing expeditions.

