Research Experiment Exposes Key Problem with Anti-gun Social Science

Social science is in the midst of a replication crisis. This means the findings of many published social science papers cannot be reproduced and are likely invalid. A new paper published in the Proceedings of the National Academy of Sciences (PNAS) sheds light on the scale of the problem and calls into question the veracity of social science research in general, including that which anti-gun advocates use to push for gun control.

The replication crisis came to prominence in 2015. That year, the journal Science published the findings of a team of 270 scientists, led by University of Virginia Professor Brian Nosek, who attempted to replicate 98 studies published in some of psychology’s most prestigious journals. In the end, according to a Science article accompanying the study, “only 39% [of the studies] could be replicated unambiguously.”

In an article on the team’s findings, the journal Nature noted, “John Ioannidis, an epidemiologist at Stanford University in California, says that the true replication-failure rate could exceed 80%, even higher than Nosek’s study suggests.”

At the time, the New York Times explained how researchers’ incentives can lead to the perversion of science, noting,

The report appears at a time when the number of retractions of published papers is rising sharply in a wide variety of disciplines. Scientists have pointed to a hypercompetitive culture across science that favors novel, sexy results and provides little incentive for researchers to replicate the findings of others, or for journals to publish studies that fail to find a splashy result.

For better or worse, given the political climate, “scientific” results involving guns are inherently “splashy.” Add to that research funding from wealthy gun control advocates like Michael Bloomberg and expressly anti-gun jurisdictions like California, and there is an obvious incentive for “sexy results” at any cost.

More recently, Reason magazine did an excellent job of exposing almost all “gun violence” social science for the junk science it is by producing an accessible video explainer on the topic.

Drawing on the expertise of Aaron Brown, a statistician who has taught at New York University and the University of California at San Diego, and on a 2020 analysis by the RAND Corporation, the video explained that the vast majority of gun violence research is not conducted in a manner sufficient to offer meaningful conclusions. An article accompanying the video, written by Brown and Reason producer Justin Monticello, noted,

A 2020 analysis by the RAND Corporation, a nonprofit research organization, parsed the results of 27,900 research publications on the effectiveness of gun control laws. From this vast body of work, the RAND authors found only 123 studies, or 0.4 percent, that tested the effects rigorously.

Reason and Brown examined those 123 studies from the RAND analysis and offered the following,

We took a look at the significance of the 123 rigorous empirical studies and what they actually say about the efficacy of gun control laws.

The answer: nothing. The 123 studies that met RAND’s criteria may have been the best of the 27,900 that were analyzed, but they still had serious statistical defects, such as a lack of controls, too many parameters or hypotheses for the data, undisclosed data, erroneous data, misspecified models, and other problems.

Moreover, the authors noted that there appears to be an inverse relationship between a “gun violence” study’s rigor and the media attention it receives. The piece explained,

Tellingly, the studies that have gotten the most media or legislative attention aren’t among the 123 that met RAND’s approval. The best studies made claims that were too mild, tenuous, and qualified to satisfy partisans and sensationalist media outlets. It was the worst studies, with the most outrageous claims, that made headlines.

The PNAS paper further undermines the validity of social science research, even in cases where attempts are made to control for bias. Titled “Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty,” the paper shows that researchers given the exact same data and hypothesis come to wildly different conclusions as a result of their own idiosyncratic analytical decisions.

To construct their experiment, the authors assembled 161 researchers in 73 teams and provided them with the same data and hypothesis to be tested. In this case, the researchers were asked to determine from the data whether “greater immigration reduces support for social policies among the public.” To attempt to control for the bias towards “splashy” findings, the researchers were promised co-authorship of a final paper on the topic regardless of their conclusions.

Explaining the results of the experiment, the authors reported,

Results from our controlled research design in a large-scale crowdsourced research effort involving 73 teams demonstrate that analyzing the same hypothesis with the same data can lead to substantial differences in statistical estimates and substantive conclusions. In fact, no two teams arrived at the same set of numerical results or took the same major decisions during data analysis.

Even highly skilled scientists motivated to come to accurate results varied tremendously in what they found when provided with the same data and hypothesis to test.

Our findings suggest reliability across researchers may remain low even when their accuracy motivation is high and biasing incentives are removed.

In other words: Much of social science is of dubious value, even when its practitioners aren’t politically or financially biased.

In attempting to explain the wide variation in results, the authors state,

Researchers must make analytical decisions so minute that they often do not even register as decisions. Instead, they go unnoticed as nondeliberate actions following ostensibly standard operating procedures. Our study shows that, when taken as a whole, these hundreds of decisions combine to be far from trivial.

This concept is sometimes presented as the “garden of forking paths.” Each minute decision a researcher makes in working with data or constructing a statistical model can lead to different sets of decisions down the road. These different decisions compound, resulting in extreme variations in results among even well-meaning researchers using the same data.
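To make the “forking paths” idea concrete, here is a minimal, hypothetical sketch in Python. Everything in it, the simulated data, the variable names, and the particular choices, is an illustrative assumption, not material from the PNAS paper. Just two routine decisions, an outlier cutoff and whether to control for a covariate, create four different “paths” through the very same data, each producing a different estimate of the same effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dataset (purely illustrative; not data from the PNAS study):
# outcome y depends on predictor x and on a confounding covariate z.
n = 500
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 0.2 * x + 0.6 * z + rng.normal(size=n)

def estimate(outlier_cutoff, control_for_z):
    """Fit y ~ x under one combination of routine analytic choices
    and return the estimated effect of x."""
    keep = np.abs(x) < outlier_cutoff        # choice 1: which outlier rule?
    cols = [np.ones(keep.sum()), x[keep]]
    if control_for_z:                        # choice 2: covariate in or out?
        cols.append(z[keep])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[1]

# Walk every path in a tiny "garden": 2 x 2 = 4 analyses of the same data.
for cutoff in (2.0, 3.0):
    for control in (False, True):
        print(f"cutoff={cutoff}, control_for_z={control}: "
              f"estimated effect = {estimate(cutoff, control):+.3f}")
```

With only two binary choices, this toy garden already contains four paths; the hundreds of small choices in a real analysis multiply into an enormous number of possible results, which is exactly the variation the PNAS authors observed.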

In summarizing the implications of their findings for social science, the PNAS authors note,

Considering this variation, scientists, especially those working with the complexities of human societies and behavior, should exercise humility and strive to better account for the uncertainty in their work.

Anti-gun social science certainly involves the “complexities of human societies and behavior” and should therefore be treated with the utmost skepticism. Moreover, this call for humility should be extended to journalists and policymakers who trumpet such questionable social science research with the goal of curtailing Americans’ fundamental rights.

Article by NRA-ILA
