
Bad stats sabotaging drug discovery?

One of the most widely read college textbooks of the 1960s and ’70s was How to Lie with Statistics by Darrell Huff. Despite its humorous title, the book (written by a journalist, not a statistician) had a serious intent: to illustrate how common errors in the use of statistics lead to misleading conclusions.

Though popular, the book may not have done the job it was intended to do. Decades later, John Allen Paulos revisited the subject in Innumeracy: Mathematical Illiteracy and Its Consequences, and problems with the proper use of statistics remain a serious and widespread concern. A recent paper by statistician Valen Johnson of Texas A&M University in College Station suggests that as many as one in four published scientific studies draws false conclusions because it employs weak statistical standards.
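To see how a weak significance threshold can translate into so many false conclusions, consider a simple false-discovery calculation. This is an illustrative sketch, not Johnson's actual Bayes-factor analysis; the prior probability and average power used below are assumed values chosen only to show the mechanism.

```python
# Illustrative false-discovery arithmetic (assumed inputs, not Johnson's method)
alpha = 0.05        # conventional significance threshold
power = 0.5         # assumed average power of a typical study
prior_true = 0.1    # assumed fraction of tested hypotheses that are actually true

false_positives = (1 - prior_true) * alpha   # null effects that still reach p < 0.05
true_positives = prior_true * power          # real effects correctly detected
false_discovery_rate = false_positives / (false_positives + true_positives)

print(f"Fraction of 'significant' findings that are false: {false_discovery_rate:.0%}")
```

Under these assumed inputs, nearly half of the findings that clear the p < 0.05 bar are false positives, which is why a threshold that sounds strict can still flood the literature with unreliable results.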

The idea that many researchers simply aren't running and analyzing their experiments properly appeared in yet another article, this one focused on understanding why mouse models often fail to predict drug responses in human diseases. A survey of 76 influential animal studies found that half used five or fewer animals per group, and that many failed to properly randomize mice into control and treated groups. In a similar vein, a recent study compared data from two research groups that were testing cancer cell lines for their susceptibility to anti-cancer drugs. While some of the drugs gave similar results in both studies, the majority did not.
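A quick simulation shows why five animals per group is so problematic. The sketch below (a Monte Carlo estimate under assumed conditions: normally distributed responses and a fairly large true effect of one standard deviation) estimates how often a two-sample t-test at p < 0.05 would actually detect that effect with n = 5 per group.

```python
import random
import statistics

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled * (1/na + 1/nb)) ** 0.5

def estimated_power(n, effect_size, trials=20000, t_crit=2.306):
    """Monte Carlo power estimate; t_crit is the two-sided 5% critical
    value for df = 2n - 2 = 8 when n = 5."""
    random.seed(0)
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect_size, 1.0) for _ in range(n)]
        if abs(two_sample_t(control, treated)) > t_crit:
            hits += 1
    return hits / trials

print(f"Power with n=5 per group, effect = 1 SD: {estimated_power(5, 1.0):.0%}")
```

Even with a large true effect, such a study detects it only around a third of the time, far below the 80% power conventionally considered adequate; smaller, more realistic effects fare much worse.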

Read more in this January 6, 2013 Xconomy article by Stewart Lyman.

