
The equivalence test: A new way for scientists to tackle so-called negative results

A paleontologist returns to her lab from a summer dig and sets up a study comparing tooth length in two dinosaur species. She and her team work meticulously to avoid biasing their results. They remain blind to the species while measuring, the sample sizes are large, and the data collection and the analysis are rigorous.

The scientist is surprised to find no significant difference in canine tooth length between the two species. She realizes that these unexpected results are important and sends a paper off to the appropriate journals. But journal after journal rejects the paper, since the results aren’t significantly different. Eventually, the scientist gives up, and the paper with its so-called negative results is placed in a drawer and buried under years of other work.

This scenario and many others like it have played out across all scientific disciplines, leading to what has been dubbed “the file drawer problem[1].” Research journals and funding agencies are often biased toward research that shows “positive” or significantly different results. This unfortunate bias contributes to many other issues in the scientific process, such as confirmation bias[2], in which data are interpreted incorrectly to support a desired outcome.

A new method: Equivalence

Unfortunately, publication bias issues have been prevalent in science for a long time. Due to the structure of the scientific method[3], scientists often focus only on differences between groups – like the dinosaur teeth from two different species, or a public health comparison of two different neighborhoods. This leaves studies that focus on similarities completely hidden.

However, pharmaceutical trials[4] have found a solution for this problem. In these trials, researchers sometimes use a test known as the TOST, or two one-sided tests, to look for equivalence between treatments.

For example, say a company develops a generic drug that is cheaper to produce than the name-brand drug. Before the new drug can be sold, researchers need to demonstrate that it functions in a statistically equivalent manner to the name-brand version. That’s where equivalence testing comes in. If the test shows equivalence between the effects of the two drugs, then the FDA can approve the new drug for the market.
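To make the idea concrete, here is a minimal sketch of a basic two one-sided test for two independent samples, written in Python. The drug-response numbers and the equivalence margin are invented for illustration, and this is the standard TOST for a single variable, not the multivariate extension described later in this article.

```python
# A minimal sketch of a two one-sided test (TOST) for equivalence of the
# means of two independent samples, assuming roughly normal data and equal
# variances. All numbers below are made up for illustration.
import numpy as np
from scipy import stats

def tost_ind(x, y, margin):
    """Return the TOST p-value for equivalence of the means of x and y
    within +/- margin (in the units of the measurements)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Pooled standard error of the difference in means.
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    # One-sided test 1: is the difference greater than -margin?
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)
    # One-sided test 2: is the difference less than +margin?
    p_upper = stats.t.cdf((diff - margin) / se, df)
    # Equivalence is supported only if both one-sided tests reject,
    # so the overall TOST p-value is the larger of the two.
    return max(p_lower, p_upper)

# Hypothetical responses to a generic and a name-brand drug.
generic = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0])
brand   = np.array([5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.9, 5.1])

# A small p-value here supports equivalence within +/- 0.5 units.
print(tost_ind(generic, brand, margin=0.5))
```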

While traditional equivalence testing is very helpful for preplanned, controlled pharmaceutical trials, it isn’t versatile enough for other types of studies. The original TOST cannot be used to test equivalence in experiments where the same individuals are in multiple treatment groups[5], nor does it work if the two test groups have different sample sizes.

Additionally, the TOST used in pharmaceutical testing does not typically address multiple variables simultaneously. For example, a traditional TOST could analyze similarities in biodiversity at several river locations before and after a temperature change. Our new TOST, however, allows researchers to test for similarities in multiple variables – such as biodiversity, water pH, water depth and water clarity – at all of the river sites simultaneously.

The limitations of the traditional TOST and the pervasiveness of the “file drawer problem” led our team to develop a multivariate equivalence test[6], capable of addressing similarities in systems with repeated measures and unequal sample sizes.

Our new equivalence test, published in October[7], flips the traditional null hypothesis framework on its head. Now, rather than assuming similarity, a researcher starts with the assumption that the two groups are different. The burden of proof now lies with evaluating the degree of similarity, rather than the degree of difference.
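In symbols (using μ1 and μ2 for the two group means and δ for the researcher-chosen equivalence margin, notation not used in the original article), the contrast looks roughly like this:

```latex
% Traditional difference test: the null hypothesis assumes similarity.
H_0\colon \mu_1 = \mu_2 \qquad \text{vs.} \qquad H_1\colon \mu_1 \neq \mu_2

% Equivalence (TOST) framework: the null hypothesis assumes the groups
% differ by at least the margin \delta; rejecting it supports equivalence.
H_0\colon |\mu_1 - \mu_2| \ge \delta \qquad \text{vs.} \qquad H_1\colon |\mu_1 - \mu_2| < \delta
```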

Our test also allows researchers to set their own acceptable margin for declaring similarity. For example, if the margin were set to 0.2, then the results would tell you whether the means of the two groups were similar within plus or minus 20 percent.
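As a toy illustration, assuming the margin is expressed as a fraction of a reference group’s mean (the exact scaling is a modeling choice, not something spelled out here), a margin of 0.2 maps to bounds of plus or minus 20 percent:

```python
# Toy example: converting a relative equivalence margin into absolute bounds.
# The reference mean is a made-up value, not from any study in the article.
reference_mean = 10.0   # mean of the comparison group
margin = 0.2            # researcher-chosen relative margin (20 percent)

lower, upper = -margin * reference_mean, margin * reference_mean
print(f"Means are declared equivalent if their difference falls within "
      f"[{lower:.1f}, {upper:.1f}]")  # [-2.0, 2.0] in the measurement units
```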

A step in the right direction

Our modification means that equivalence testing can now be applied across a wide range of disciplines. For example, we used this test to demonstrate equivalent acoustic structure in the songs of male and female eastern bluebirds. Equivalence testing has also already been used in some areas of engineering[8] and psychology[9].

The method could be applied even more broadly. Imagine a group of researchers who want to compare two different teaching methods. In one classroom there is no technology, and in another all of the students’ assignments are done online. Equivalence testing might help a school district decide whether it should invest more in technology or whether the two methods of teaching are equivalent.

The development of a broadly applicable equivalence test represents what we think will be a huge step forward in scientists’ long struggle to present real and unbiased results. This test provides another avenue for exploration and allows researchers to examine and publish the results from studies on similarities that have not been published or funded in the past.

The prevalence of publication bias, including the file drawer problem[10], confirmation bias and accidental false positives[11], is a major stumbling block for scientific progress. In some fields of research, up to half of results are missing from the published literature.

Equivalence testing provides another tool in the toolbox for scientists to present “positive” results. If the scientific community takes hold of this test and utilizes it to its full potential, we think it may help mitigate one of the major limitations in the way science is currently practiced.

References

  1. the file drawer problem (undark.org)
  2. confirmation bias (dx.doi.org)
  3. the structure of the scientific method (onlinestatbook.com)
  4. pharmaceutical trials (doi.org)
  5. where the same individuals are in multiple treatment groups (blog.minitab.com)
  6. to develop a multivariate equivalence test (doi.org)
  7. published in October (www.sciencedaily.com)
  8. engineering (doi.org)
  9. psychology (doi.org)
  10. file drawer problem (dx.doi.org)
  11. false positives (projects.fivethirtyeight.com)

Author: Evangeline Rose, Ph.D. candidate in Biological Sciences, University of Maryland, Baltimore County

Read more http://theconversation.com/the-equivalence-test-a-new-way-for-scientists-to-tackle-so-called-negative-results-106041

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more
