For the Love of Bees and Beekeeping

August 2014

Mice, Old Combs, and the Reliability of Bee Science

by Keith Delaplane


If one writes a monthly column on science-based recommendations, it shouldn’t be a surprise when the idea of science-based recommendations itself comes up for discussion. This happened to me recently.

A reader emailed me with objections to my May column on screened bottom boards. How, this person asked, could I recommend screened bottom boards for Varroa control when no study has shown them to be statistically different [there was actually one that did] and all I could cite were “trends” toward beneficial effects? “We must accept research based on significantly different levels to have any reasonable assurance that the effects of the study are true or accurate.” Could it be possible there were errors in the experiments, sample sizes that were too small, or other unaccounted-for factors? “Maybe,” he concludes, “the test should be repeated to verify the results.”

This is a lot to think about. These are indeed questions that should concern every practitioner or consumer of science. They strike at the heart of how we (especially in the developed world) make judgments about knowledge claims and use the time-tested scientific method.

I think most readers of this magazine would agree that scientific explanations for beekeeping practices are to be preferred over other contenders like tradition or hearsay. But I hasten to add that just because an explanation is “scientific” doesn’t necessarily mean it’s right. The strength of a scientific explanation is greater or lesser depending on the quantity, quality, and consistency of the evidence, and the best kind of evidence is numerous independent studies, each directly asking the question and each finding results consistent with the others. Repeatability is a good thing, and it increases our confidence that the conclusion is in fact true.

Sometimes the evidence is “scientific” only in the sense that it was collected in a scientific context, secondary to some other purpose. Let me give an example: I’m pretty sure there’s a correlation between the solidness of a queen’s brood pattern and her overall brood quantity. In other words, a queen that lays a solid pattern will make more brood than a queen that lays a spotty pattern, even if the spotty brood is spread all over the place. This hunch is based on years’ worth of collecting both types of data – but always in the context of other experiments and questions. The data are certainly good in the sense that I am confident of their accuracy, but they have never been used to directly test the hypothesis, so I should stay tentative on that count.

The literature is rich in “orphaned” data like this – hence the term “data mining” and the statistical technique of “meta-analysis,” which is nothing more than a question catching up with pre-existing data to save the investigator time, enlarge the statistical sample size, and broaden one’s inferences across time and space. But for meta-analysis one must take more than the usual precautions against bias and advocacy. We are all human, and I think it’s a fiction to think that any scientist can be truly unbiased. I admit that every time I do an experiment, deep down inside me (or not so deep down!) there’s an answer I want to be right. One can imagine a ...