A hidden "universe of uncertainty" may underlie most scientific findings, especially in the social sciences, a new study suggests.
When dozens of research teams used the same data set to test a single hypothesis, namely that immigration reduces support for social policy, they produced completely different results, according to a new study published Oct. 28 in the journal Proceedings of the National Academy of Sciences.
The result suggests it may be very hard to trust conclusions in some of these fields, since even small changes in an analyst's initial choices can yield dramatically different results.
In the new study, Nate Breznau, a postdoctoral researcher at the University of Bremen in Germany, and colleagues asked 161 researchers in roughly six dozen research teams to test a common hypothesis: that immigration reduces support for government social policy. This question has been asked hundreds of times in the social science literature, and the results have been all over the map, Breznau told Live Science.
As a baseline, the researchers gave the teams responses to six questions about government policy from the International Social Survey Programme, a broad cross-national survey covering 44 countries.
Then, they asked the teams to use logic and prior knowledge to develop models to explain the relationship between immigration and support for government social services.
For example, one group might predict that an increased flow of immigrants to a country raises competition for scarce resources, which, in turn, decreases support for social services. The research teams then had to decide what types of data to use to answer that question (for instance, the net influx of immigrants to a country, the gross domestic product, or the average or median income in different regions), as well as what types of statistical analyses they would use.
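To get a feel for why those choices matter, here is a minimal sketch in Python, a toy construction rather than code or data from the study. Every variable name, coefficient and data point below is invented purely for illustration; the point is that three reasonable-sounding specifications of the same question, run on the same synthetic data, can yield a positive, a negative and a null estimate of immigration's effect.

```python
# A hypothetical sketch (not code from the study) of how equally defensible
# modeling choices applied to the same data can point in different directions.
# All variables, coefficients and data here are synthetic and invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # pretend country-year observations

gdp = rng.normal(0, 1, n)                        # standardized GDP per capita
net_migration = 0.8 * gdp + rng.normal(0, 1, n)  # immigration tends to track wealth
foreign_born = rng.normal(0, 1, n)               # an alternative immigration measure
support = 0.5 * gdp - 0.1 * net_migration + rng.normal(0, 1, n)  # support for social policy

def slope(y, predictors):
    """OLS coefficient on the first predictor, with an intercept included."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("net migration, no controls:    %+.2f" % slope(support, [net_migration]))
print("net migration, GDP controlled: %+.2f" % slope(support, [net_migration, gdp]))
print("foreign-born share, GDP ctrl:  %+.2f" % slope(support, [foreign_born, gdp]))
```

With this invented data, the first specification suggests immigration raises support, the second suggests it lowers it, and the third finds essentially no effect, a toy version of the split the real research teams produced.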
The research groups' findings mirrored the literature overall: 13.5% said it wasn't possible to draw a conclusion, 60.7% said the hypothesis should be rejected and 28.5% said the hypothesis was correct.
Breznau’s team then used their own statistical analysis to try to understand why different groups came up with such different conclusions.
They found that neither bias nor inexperience could explain the variance. Rather, hundreds of different, seemingly minor decisions may have shifted the conclusions one way or the other. Even more surprising, no particular set of variables seemed to tip the outcomes in a consistent direction, possibly because there simply wasn't enough data to compare the different models. (One limitation: the authors' analysis is itself a statistical model and thus subject to uncertainty as well.)
It's not clear to what extent this universe of uncertainty plagues other sciences; it may be that astrophysics, for example, is simpler to model than human interactions on a grand scale, Breznau said.
For instance, there are 86 billion neurons in the human brain and 8 billion people on the planet, and those people are all interacting in complex social networks.
"It might be the case that there are fundamental laws that would govern human social and behavioral organization, but we definitely don't have the tools to identify them," Breznau told Live Science.
One takeaway from the study is that researchers should spend time honing their hypotheses before jumping to data collection and analysis, Breznau said, and the hypothesis the teams tested is a case in point.
"Does immigration undermine support for social policy? It's a very typical social science hypothesis, but it's probably too vague to really just get a concrete answer to," he said.
A more specific or targeted question could potentially yield better results, Breznau said.
If you want to explore how different variables and modeling choices affected each team's results, you can do so via the researchers' interactive Shiny app.