One of the major strengths of science, in my eyes, is that it does not rely on authority. In other words, nothing is true just because a well-respected person says it’s true; there should always be evidence and a rational basis supporting an argument. The constant progress of science means that old ideas are continually challenged, tweaked and replaced – and I think that’s great. However, because ‘science’ is inextricably linked to the people who carry it out, building on scientific knowledge is prone to error and all the fallibility and foibles of being human. One measure now integral to research, put in place to help iron out such problems, is the peer review process.
In essence, the peer review process means that a scientific manuscript is sent out to (often anonymous) experts in the field, who advise the publishing journal whether or not to accept the paper, as well as offering criticisms and comments on its data, arguments and conclusions. This makes poorly substantiated science less likely to be published, while honing the quality of what does get through. Different journals can take very different approaches to the process, however, and individual reviewers also vary in their attitude to critiquing papers. On occasion, good papers are rejected because a reviewer misinterprets the manuscript or simply disagrees with it. On the flip side, poor science can make it through review if a multi-part paper contains one important finding, with other more dubious research shoe-horned in alongside it (or for other reasons).
Another major part of science is that claims of fact must be substantiated as well as possible, which is generally done by producing data in support of the claim, or by citing (referencing) another paper which backs it up. This can be tedious and difficult, but again, is of overall benefit to the quality of research published. At regular intervals, review papers are published on a specific topic, pulling together new research and updating the old consensus to ensure everyone is ‘on the same page’ in the field. These papers are very useful, as entire areas of research can then be narrowed down to a few convenient, catch-all references by future researchers. But danger lies therein!
Steven Greenberg has just published a paper studying citation networks and how unfounded authority can be created by highly influential papers within a field.
“Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone.”
To break that quote down: a particular hypothesis regarding a muscular disease was strengthened not by data for or against it, but by citation bias. Papers presenting data contrary to the hypothesis tended not to be cited, while review papers gave the hypothesis more air time, and partially substantiated conclusions were quoted or cited as fact. This may be harmless if the hypothesis is indeed correct, but it reveals a worrying pathway for ‘attractive’ hypotheses to gain excessive authority over alternatives.
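The mechanism is easy to see with a toy model. The sketch below builds a small, entirely invented citation network (paper names, stances and links are all hypothetical, not from Greenberg’s data) and counts how often papers with each stance get cited. Because the refuting paper is never cited, a naive tally of the literature sees apparent consensus:

```python
# A toy model of citation bias. All papers and citation links below are
# invented for illustration; they are not real publications.

# paper id -> stance of the data it presents
papers = {
    "P1": "supports",  # original hypothesis paper
    "P2": "refutes",   # contradictory data, never cited
    "P3": "none",      # review: presents no new data
    "P4": "none",      # another review, citing the first
    "P5": "supports",
}

# citing paper -> list of papers it cites (biased: P2 is ignored)
citations = {
    "P3": ["P1", "P5"],
    "P4": ["P1", "P3"],
    "P5": ["P1"],
}

def citation_counts(papers, citations):
    """Count how often papers with each stance are cited."""
    counts = {"supports": 0, "refutes": 0, "none": 0}
    for cited_list in citations.values():
        for cited in cited_list:
            counts[papers[cited]] += 1
    return counts

counts = citation_counts(papers, citations)
print(counts)  # {'supports': 4, 'refutes': 0, 'none': 1}
```

Note that the ‘supports’ count grows partly through reviews that add no data at all, which is exactly the amplification Greenberg describes.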
It’s something I’d like to look at further; a recent paper by Marie-Claude Roland in the Journal of Science Communication criticised the mainstream approach to scientific publishing for its tendency to continually present data as “maybe” or “potentially” “indicating” a conclusion. Sure, expressing a degree of uncertainty is fine, but when inconclusive data is cushioned by such words, it becomes easier for future papers to swallow (and cite) the conclusions as facts, which can cause unresolved lines of enquiry to be closed prematurely. So come on, fellow scientists, let’s call a spade a spade when we’re writing!