A Neo-Jeffreys Theory of Statistical Inference

# Abstract

Supporters of Bayesian statistical inference have (almost) always vociferously denied that Bayesianism is any less objective than its rivals, notably so-called Neyman-Pearson Frequentism. I evaluate their arguments, and find that they are right to charge that

1. Frequentism requires many arbitrary choices to be made by the statistical analyst, and

2. consequently, Frequentism is at least in the same ballpark of subjectivity as Bayesianism.

Why then do applied scientists continue to imagine that Bayesianism is considerably more subjective than Frequentism?

I believe there is one good reason for this (along with many bad reasons, which I do not consider here [for more details on those, see forthcoming book]). It is that applied scientists’ arbitrary rules for conducting Frequentist inference have been codified into a form which the statistical community does not allow to be varied analysis by analysis. An important result of this is that (almost) no subjectivity is applied in the analysis of any particular data set.

The fact that very little subjectivity enters into the application of these codified procedures explains why applied scientists perceive Frequentism as objective.

Up to this point, my argument amounts to a prima facie case in favour of using Frequentist rather than Bayesian methods, or at least rather than orthodox, subjectivist Bayesian methods. However, there is no reason why Bayesians cannot avail themselves of the same advantages, simply by moving arbitrary (and apparently arbitrary) elements of their procedures from the analysis-of-data stage into the very definitions of their procedures.

I discuss two existing systems of inference, due to Jeffreys and Jaynes, which have done precisely this. I argue that they suffer from (perhaps relatively minor) deficiencies. I propose a system which fixes those deficiencies. My proposed system has all the objectivity of Frequentism and some (although, necessarily, not all) of the many advantages of orthodox Bayesianism.

---

In a moment, I will briefly summarise a number of famous and compelling arguments for preferring Bayesian statistical inference to Frequentist statistical inference. Despite these arguments, Frequentism is by far the dominant methodology in current science.

[fill in from book manuscript]

# Conclusion

Frequentists have benefitted from the fact that finding a theory which scientists can use to make (token) non-arbitrary inferences does not require finding a (type) non-arbitrary theory of inference.

Jason Grossman

---

See http://ba.stat.cmu.edu/vol01is03.php on Objective Bayesianism.

---

from draft book:

Any theory of statistical inference can be made objective by revising it so that what were subjective choices become determinate. A slight modification of Jeffreys’s theory, for example, can be seen as an objectification of Subjective Bayesianism: the decisions about prior probability distributions which a Subjective Bayesian makes subjectively, a neo-Jeffreys Bayesian can make by using one of the priors which Jeffreys specifies (see ). This, however, is not what Jeffreys’s theory actually is: his theory is meant to prescribe prior probabilities only in cases in which the agent doing the analysis is actually ignorant about the parameters in question. Also, it would be a misreading of history to think of Jeffreys as attempting to make Subjective Bayesianism more objective, at least initially, since early versions of his theory predate any statement of Subjective Bayesianism, as far as I can tell. But we need not worry about Jeffreys’s actual theory for the moment. All I am doing is borrowing his mathematics in order to devise a Bayesian theory which is completely objective in the sense I am currently dealing with: the sense of not allowing any subjectivity in its application.
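To make the idea of a determinate prior assignment concrete, here is a minimal sketch for the simplest case. (This illustration is mine, not drawn from Jeffreys’s text; I assume only the standard result that the Jeffreys prior for a binomial proportion is Beta(1/2, 1/2).) Because the prior is fixed by the definition of the procedure, any two analysts with the same data obtain the same posterior:

```python
# Jeffreys prior for a binomial proportion theta: Beta(1/2, 1/2).
# It is fixed by the procedure, not chosen by the analyst.
# Observing k successes in n trials gives the posterior
# Beta(k + 1/2, n - k + 1/2), by standard Beta-binomial conjugacy.

def jeffreys_posterior_mean(k: int, n: int) -> float:
    """Posterior mean of theta under the Jeffreys Beta(1/2, 1/2) prior."""
    a = k + 0.5          # posterior alpha parameter
    b = (n - k) + 0.5    # posterior beta parameter
    return a / (a + b)   # mean of a Beta(a, b) distribution

# Two analysts with the same data must report the same number:
# no subjective prior choice enters at the analysis-of-data stage.
print(jeffreys_posterior_mean(7, 10))
```

The point of the sketch is only that the "prior choice" step has been moved out of the analysis and into the definition of the procedure, which is the sense of objectivity at issue here.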

Of course there is something wrong with making a subjective theory objective by modifying it in an arbitrary way. I imagine everyone agrees that there is no virtue in doing so (Howson 1993, p. 12 and passim).

In this one important respect Frequentist theories are more objective than Subjective Bayesianism. For example, by making the significance level required to reject a hypothesis always 5%, the form of Neyman’s theory which has become standard in biomedicine has stopped experimenters from using the arbitrariness of that cut-off to reject any hypotheses they happen not to like. Similarly, although Neyman’s theory of confidence intervals does not adequately
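The codified decision rule just described can be sketched in a few lines. (The two-sided z-test here is my illustrative choice, not taken from Neyman; the only assumption is a significance level fixed at 5% in the procedure’s definition rather than chosen per analysis.)

```python
from math import erf, sqrt

ALPHA = 0.05  # fixed in the definition of the procedure, not chosen per analysis

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic z."""
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def reject_null(z: float) -> bool:
    # The analyst has no discretion over the cut-off, so the
    # arbitrariness of the 5% level cannot be exploited case by case.
    return two_sided_p(z) < ALPHA

print(reject_null(2.5), reject_null(1.5))  # True False
```

Whatever one thinks of 5% as a threshold, fixing it once and for all is what removes the subjective step from each individual analysis.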