## Bernoulli’s Fallacy by Aubrey Clayton

Statistical Illogic and the Crisis of Modern Science.

Page 40:

This same process will apply to any problem of inference among multiple hypotheses. In fact, this will be the only procedure we’ll ever need to use to do probabilistic inference – for the rest of the book and for the rest of our lives. To help keep the pieces organized, we’ll arrange them in a table, which I’ll refer to from now on as an inference table. The following are the steps for any problem; the corresponding inference table is shown in table 1.1.

1. Enumerate all the possible hypotheses, H1, …, Hn, and consider their probabilities not including any observation, P[H1], …, P[Hn]. These are the prior probabilities, or priors for short.

2. For a given observation of data, D, compute the probability of that observation assuming each hypothesis is true, in turn. These are the sampling probabilities for the data given the hypothesis.

3. Compute the probability of arriving at the observation D by means of any one of the hypotheses by multiplying the prior by the sampling probability: for example, P[H1] * P[D|H1] and so on. We’ll call these the pathway probabilities. Summing them gives the total probability of the data:

P[D] = P[H1] * P[D|H1] + … + P[Hn] * P[D|Hn]

4. Once this calculation is accomplished, the inferential probability for each hypothesis is easy to find, since it is just the relative proportion of that term in the preceding sum. These are the posterior probabilities, which, according to Bayes’ theorem, are given by

P[Hi|D] = P[Hi] * P[D|Hi] / P[D]
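The four steps above can be sketched in a few lines of code. This is a minimal illustration, not from the book: the coin example and all numbers in it are hypothetical, chosen only to show the mechanics of the pathway probabilities and the normalization by P[D].

```python
def posterior(priors, sampling):
    """Run the inference-table procedure for one observation D.

    priors:   [P[H1], ..., P[Hn]]      (step 1)
    sampling: [P[D|H1], ..., P[D|Hn]]  (step 2)
    Returns:  [P[H1|D], ..., P[Hn|D]]  (step 4)
    """
    # Step 3: pathway probabilities P[Hi] * P[D|Hi]
    pathways = [p * s for p, s in zip(priors, sampling)]
    # Total probability of the data, P[D], is the sum of the pathways.
    total = sum(pathways)
    # Step 4: each posterior is that pathway's share of the total.
    return [w / total for w in pathways]

# Hypothetical example: a coin is fair, heads-biased, or tails-biased,
# with equal priors; the observed data D is a single flip landing heads.
priors = [1/3, 1/3, 1/3]
sampling = [0.5, 0.9, 0.1]  # P[heads | each hypothesis]
print(posterior(priors, sampling))  # posteriors; they sum to 1
```

Note that the priors and posteriors each sum to 1, but the sampling probabilities need not: they are probabilities of the same data under different hypotheses, not a distribution over hypotheses.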