ACS Industry


Blog Post created by ACS Industry on Apr 24, 2015

“When the FDA finds significant departures from good clinical practice, those findings are seldom reflected in the peer-reviewed literature, even when there is evidence of data fabrication or other forms of research misconduct.”  Ouch. That is the conclusion of a study by the U.S. Food and Drug Administration published this month.


Another report mentioning scientific misconduct and another example where the peer-review system is letting us down.  Important societal and personal decisions involve technology.  What should we do about climate change? What should be done about invasive species? How do we save the bees?  What kind of light bulbs should I buy?  We need those decisions to be based on a sound foundation, yet the foundation is showing itself to have cracks.  At a time when the credibility of science is very important, science is letting us down.


I can’t find data that shows whether we simply have a constant percentage of bad actors, leading to more incidents of fraud as the number of total publications rises, or whether the fraction of scientists falsifying data is increasing. But I certainly hear about fraud more these days.  A cornerstone of academic science is peer review.  Credibility is gained by having experts cull incorrect, plagiarized, and fraudulent work before it is published.  Flaws in the peer review process have been clearly exposed.  We have examples of gibberish papers being selected for publication after supposed peer review; examples of clearly flawed papers clearing the hurdle of peer review; and, most recently, authors who gamed the system by peer reviewing their own papers.  No longer are retractions just for honest mistakes. Some are calling for a revamp, others for elimination.  The scandals don’t seem to be letting up, and the system is being assaulted from all sides.


The first assault on peer review I really internalized was at an ACS National Meeting several years ago.  I got invited to be on a panel discussion about new media and chemistry.  The group of talks before I took the stage was about blogging.  This was soon after claims of arsenic-based life had been called into question.  The blogging community was taking credit for correcting that record and, flush with its perceived victory, was taking shots at peer review.  To its collective eye, the slow pace of academic publication was an antiquated relic in need of an overhaul.  Scrap the current system and just let the results flow; the collective intelligence of the web would surely sort out the issues, open-source style.


This was blasphemy to me.  I had always believed that a pillar of modern science was the offering of work to peers for review, with committed editors and reviewers entrusted with ensuring the process remained robust.  That process was integral to how science progressed.  We had the debacle of cold fusion, in part, because peer review was skipped and science was taken directly to society.


As an industrial researcher, I certainly rely on the academic literature as a foundation to build upon.  I want that foundation to be trustworthy and accurate.  Industrial work, if it is noteworthy, will be immediately reproduced by others.  Taking a discovery from the lab to commercial product is an exercise in successive, repetitive reproduction of results.  There isn’t really room for honest mistakes or falsification of results, because either will be immediately detected.  It is a very different world from an academic lab, where the output is publication.  The output of the industrial lab is a product, and the trip from lab bench to product requires that others make it work at ever-increasing scale.


Some industrial work gets published in the academic literature, but most doesn’t.  The patent literature is the publication outlet for discoveries with commercial potential. It is a trove that must be greeted with skepticism.  Patents aren’t peer reviewed at all.  Ironically, as important as patents can be in protecting commercially interesting technology, the “first to file” system adds a fragility that leaves many patents actually worth very little.  Unexpected results are extrapolated in patents that seek the widest coverage.  Metrics that seek to measure patent strength look for patents that reference few others but are widely referenced.  Companies build on important technologies, surrounding them with what is sometimes called a picket fence of extending patents.  Important patents form the foundation for other patents.  What you find when you look at patents is that many are not referenced by others, meaning either that they aren’t commercially important or that they are wrong.  Unlike the academic literature, where retractions are intended to remove flawed conclusions, flawed patents just accumulate in the patent literature.


I myself have fallen victim to claims that could never be reproduced.  One of the biggest projects I ever worked on was started on the basis of a patent where an apparently small change to a catalyst made a big change in its performance. Our work and the work of others all confirmed that analytical abnormalities were to blame.  Use a GC that resolves the two compounds, and the catalyst system falls squarely in with the rest of the pack.  Nothing special.


I had fallen victim to the trash that exists in the global patent system.  It is like space junk: useless pieces of information whirling around, waiting to hit some unsuspecting researcher on a quest for the next great discovery.



Mark Jones has been Executive External Strategy and Communications Fellow at Dow Chemical since September 2011. He spent most of his career developing catalytic processes after joining Dow in 1990. He received his Ph.D. in Physical Chemistry at the University of Colorado-Boulder doing research unlikely to lead to an industrial career and totally unrelated to his current responsibilities.