Small, low-powered studies produce unreliable research in neuroscience

New research has questioned the reliability of neuroscience studies, suggesting that published conclusions could be misleading because of small sample sizes.

A team led by academics from the University of Bristol reviewed 48 neuroscience meta-analyses published in 2011 and concluded that the studies they covered had an average statistical power of around 20 per cent, meaning the chance that a typical study detects the effect it is investigating is only one in five.

The paper, being published in Nature Reviews Neuroscience today [10 April], reveals that small, low-powered studies are 'endemic' in neuroscience, producing unreliable research which is inefficient and wasteful.

It focuses on how low statistical power - caused by small sample sizes, small effect sizes, or both - can mislead and produce more false scientific claims than high-powered studies.

It also illustrates how low power reduces a study's ability to detect true effects, and shows that when discoveries are claimed, they are more likely to be false or misleading.

The paper claims there is substantial evidence that a large proportion of research published in scientific literature may be unreliable as a consequence.

Another consequence is that effect sizes are overestimated, because smaller studies consistently give more positive results than larger ones. This was found to be the case for studies using a diverse range of methods, including brain imaging, genetics and animal studies.

Kate Button, from the School of Social and Community Medicine, and Marcus Munafò, from the School of Experimental Psychology, led a team of researchers from Stanford University, the University of Virginia and the University of Oxford.

Button said: "There's a lot of interest at the moment in improving the reliability of science. We looked at neuroscience literature and found that, on average, studies had only around a 20 per cent chance of detecting the effects they were investigating, even if the effects are real. This has two important implications - many studies lack the ability to give definitive answers to the questions they are testing, and many claimed findings are likely to be incorrect or unreliable."
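To see what 20 per cent power means in practice, the following is a minimal sketch (an illustration, not a calculation from the paper) that approximates the power of a two-sided, two-sample comparison using a normal approximation. The effect size (Cohen's d = 0.5) and group size (10 per group) are assumed values chosen here to reproduce a roughly one-in-five detection rate:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n_per_group):
    """Approximate power of a two-sided, two-sample test at alpha = 0.05.

    Uses the normal approximation and ignores the negligible far tail;
    d is the standardised effect size (Cohen's d).
    """
    z_crit = 1.959964  # two-sided critical value for alpha = 0.05
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - z_crit)

# A "medium" effect (d = 0.5) with only 10 subjects per group:
print(round(approx_power(0.5, 10), 2))  # → 0.2, a one-in-five chance
```

The same function shows why the paper's remedies help: raising the group size to 64 per group pushes the power above the conventional 80 per cent target for the same effect size.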

The study concludes that improving the standard of results in neuroscience, and enabling them to be more easily reproduced, is a key priority and requires attention to well-established methodological principles.

It notes that existing scientific practices can be improved with small changes or additions to methodologies, such as acknowledging limitations in the interpretation of results; disclosing methods and findings transparently; and working collaboratively to increase total sample size and power.

Paper

'Power failure: why small sample size undermines the reliability of neuroscience' by Katherine Button, John Ioannidis, Claire Mokrysz, Brian Nosek, Jonathan Flint, Emma Robinson and Marcus Munafò in Nature Reviews Neuroscience.
