Should we test for statistical significance?


Post by mtngun » Sat Mar 23, 2019 9:13 pm

Using statistics to analyze shooting data is a good thing, but there is growing pushback against using statistics to provide "yes or no" answers.

Scientists Rise Up Against Statistical Significance (Amrhein, Greenland & McShane, Nature, March 2019):

"We should never conclude there is ‘no difference’ or ‘no association’ just because a P value is larger than a threshold such as 0.05 or, equivalently, because a confidence interval includes zero. Neither should we conclude that two studies conflict because one had a statistically significant result and the other did not."

"We ... call for the entire concept of statistical significance to be abandoned ... we should not treat them categorically ... We must learn to embrace uncertainty."

I have often felt that I could "see" a difference in a load, yet I could not prove the difference statistically. Maybe I could prove it if I fired a thousand shots, or a million, but most of us don't have time for that. There's nothing wrong with saying "I think this load might be a little better, but I can't prove it mathematically, at least not without more data."
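Here's a minimal simulation sketch of why that happens (my own illustration, not from the Nature article, with made-up numbers): two hypothetical loads whose true dispersions differ by 20%, compared with an F-test on the sample variances at 10 shots per load. The difference is real by construction, yet the test rarely clears the 0.05 threshold at that sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

TRUE_SD_A = 1.0    # load A's true dispersion (arbitrary units)
TRUE_SD_B = 1.2    # load B is genuinely 20% worse
SHOTS = 10         # shots fired per load, per trial
TRIALS = 10_000

rejections = 0
for _ in range(TRIALS):
    a = rng.normal(0.0, TRUE_SD_A, SHOTS)
    b = rng.normal(0.0, TRUE_SD_B, SHOTS)
    # Two-sided F-test on the ratio of sample variances.
    f = np.var(b, ddof=1) / np.var(a, ddof=1)
    p = 2 * min(stats.f.sf(f, SHOTS - 1, SHOTS - 1),
                stats.f.cdf(f, SHOTS - 1, SHOTS - 1))
    rejections += (p < 0.05)

print(f"power at {SHOTS} shots per load: {rejections / TRIALS:.1%}")
# Comes out under 10% here: a real 20% difference almost always
# fails the 0.05 threshold at ordinary shot counts.
```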

And of course some people will disagree with you and say "you don't have enough data to prove your load is better," and that's fine, too. We can all agree that it would be great to have more data. But while we are waiting for more data, it's OK to make cautious observations based on the data that we do have.
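One hedged way to state such a cautious observation, sticking with the made-up numbers from the sketch above: report a confidence interval for the ratio of the two loads' standard deviations instead of a bare pass/fail verdict.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 10)   # hypothetical 10-shot string, load A
b = rng.normal(0.0, 1.2, 10)   # hypothetical 10-shot string, load B

n_a, n_b = len(a), len(b)
ratio = np.std(b, ddof=1) / np.std(a, ddof=1)

# 95% CI for sigma_B / sigma_A, using the F distribution of the
# variance ratio: (s_B^2/s_A^2) / (sigma_B^2/sigma_A^2) ~ F(n_b-1, n_a-1).
lo = ratio / np.sqrt(stats.f.ppf(0.975, n_b - 1, n_a - 1))
hi = ratio / np.sqrt(stats.f.ppf(0.025, n_b - 1, n_a - 1))
print(f"SD ratio = {ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# A wide interval straddling 1.0 is the honest summary: "load B looks
# a bit worse, but this data can't rule out no difference."
```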