Regarding studies

I love the idea of science

Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. It is an ideal, and if you have paid any attention to what is happening on these web pages, the application of a science-based process is what I have been harping on about.

John Bohannon noted:

John Bohannon is an American scientist, science journalist, and Harvard University biologist. His investigative journalism includes:

-critiquing the Lancet surveys of Iraq War casualties (2006)
-uncovering serious problems with the peer review process at a large number of journals that charge fees to authors (2013)
-showing how uncritically the mass media report claims made in fake scientific papers (2015)

His article on the vanity press of peer review, “Who’s Afraid of Peer Review?”, can be found here.

Stribling, Aguayo and Krohn pranked:

In 2005, three MIT graduate students, Stribling, Aguayo and Krohn, wrote the program SCIgen to generate fake papers. In their sting, they submitted a SCIgen paper to the 2005 World Multiconference on Systemics, Cybernetics and Informatics.

The paper was accepted, and the three authors went to the conference, where they exposed the hoax. SCIgen is freely available on the internet for anyone to download and use. As recently as 2013, at least 16 SCIgen papers had been found in Springer journals.
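
How does a program write a paper at all? SCIgen expands a hand-written context-free grammar: start from a top-level symbol and keep replacing nonterminals with randomly chosen alternatives until only words remain. The sketch below is a toy illustration of that technique with a tiny made-up grammar of my own; SCIgen’s real grammar is far larger and also generates figures, graphs and citations.

```python
import random

# A toy context-free grammar in the spirit of SCIgen (this grammar is
# invented for illustration). Uppercase keys are nonterminals; each
# maps to a list of alternative productions to pick from at random.
GRAMMAR = {
    "TITLE": [["ADJ", "NOUN", "for", "ADJ", "NOUN"]],
    "ADJ": [["scalable"], ["decentralized"], ["probabilistic"], ["amphibious"]],
    "NOUN": [["methodologies"], ["epistemologies"], ["archetypes"], ["models"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a symbol; anything not in GRAMMAR is a plain word."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

if __name__ == "__main__":
    # Prints a random, official-sounding title, e.g.
    # "Decentralized epistemologies for amphibious models"
    print(expand("TITLE").capitalize())
```

Run it a few times and you get plausible-sounding titles every time, which is exactly the point: grammatical fluency with no meaning behind it.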

Fiona Godlee audited:

The fact that journals accepted a junk article is not interesting. What is interesting is that journals run by Sage, Elsevier and Wolters Kluwer all accepted Bohannon’s bogus paper. And these sorts of findings are not new: Fiona Godlee ran a similar test on British Medical Journal reviewers back in 1998.

In 2008, a similar study was done, again involving Fiona Godlee and again with BMJ reviewers. This time the test paper contained 9 “major errors” and 5 “minor errors” and was sent to 607 reviewers. The paper, “What errors do peer reviewers detect, and does training improve their ability to detect them?” (Godlee et al., 2008), can be found here. The takeaway is that the peer review process does not guarantee that a study is relevant, useful or true.

Emerson, Warme and Wolf called out “pollyanna” bias:

Another major problem is the bias toward positive results. In the November 2010 paper “Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial”, the researchers sent two versions of a test manuscript to 238 reviewers for The Journal of Bone and Joint Surgery and Clinical Orthopaedics and Related Research.

Each reviewer was randomly assigned a paper on the effect of giving an antibiotic after surgery. The two versions were identical in everything EXCEPT the conclusions. The version that showed no effect for the antibiotic was accepted 80% of the time, whereas the version that concluded a positive effect was accepted 97.3% of the time.

In addition, the reviewers of the no-effect version found more methodological errors, even though the methods sections of the two versions were identical. Scientists find more methodological problems with ideas they don’t find appealing, all else being equal.
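
To put that gap in perspective, here is a back-of-the-envelope significance check. The counts below are my assumption, not figures from the paper: I assume the 238 reviewers split evenly between the two versions and round the reported acceptance rates to whole reviewers.

```python
from math import sqrt, erfc

# Hypothetical counts: 238 reviewers assumed split 119/119, with the
# reported acceptance rates (97.3% positive, 80% no-effect) rounded.
n_pos, acc_pos = 119, 116   # ~97.3% accepted the positive version
n_neg, acc_neg = 119, 95    # ~80% accepted the no-effect version

p_pos, p_neg = acc_pos / n_pos, acc_neg / n_neg
pooled = (acc_pos + acc_neg) / (n_pos + n_neg)
se = sqrt(pooled * (1 - pooled) * (1 / n_pos + 1 / n_neg))
z = (p_pos - p_neg) / se
p_value = erfc(abs(z) / sqrt(2))   # two-sided normal p-value

print(f"z = {z:.2f}, two-sided p ~ {p_value:.1e}")
# With these assumed counts: z ~ 4.3, p well below 0.001 --
# the acceptance gap is very unlikely to be a chance fluctuation.
```

In other words, even under rough assumptions, an 80% vs 97.3% split across 238 reviewers is not noise; it is a systematic preference for positive results.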

Robert Wilson penned this:

“Every fact of science was once damned.
Every invention was considered impossible.
Every discovery was a nervous shock to some orthodoxy.
Every artistic innovation was denounced as fraud and folly.
Everything that is man-made and not given to us by nature, is the concrete manifestation of some man’s refusal to bow to authority.
We would be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”
~Robert Anton Wilson, 1932–2007~

Scott Armstrong noted:

Strangely, studies showing that most studies aren’t replicable have themselves been replicated many times. Scott Armstrong of The Wharton School wrote a scathing evaluation of peer review in his paper “Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation”. Scott listed a number of problems, all of which suggest that the process of generating peer reviewed papers lacks capability maturity:

-Reviewers lack relevant credentials
-Reviewers often work anonymously
-Reviewers often receive no remuneration
-Reviewers on average spend only two to six hours reviewing a paper
-Yet they often wait for months before doing their reviews
-Reviewers seldom use structured processes
-Reviewers are not accountable for following proper scientific procedures
-Reviewers’ recommendations often differ (Cicchetti, 1991)

Lacking structured processes, lacking metrics, lacking training, peer review is operating at the Initial stage of capability maturity.

Lipstick on a pig

What can be done? It is amazing how many recommendations a Google Scholar search for “peer review reform” turns up. I will leave the recommendations to someone more qualified, and the implementation to someone who really cares; suffice it to say, people who do not understand the importance of capability maturity in workflow process control, measurement and improvement will not understand its importance in research. No surprise.
