This approach may seem on the surface to be completely different from the approach taken with the t-test, but it is actually based on identical logic. The denominator of the t-test is based on the variability within the groups, just as it is in the ANOVA. But what about the numerator of the t-test? Surely a mean difference has nothing to do with the variability of means. If the null hypothesis is true, then the means were drawn from the same population with the same population mean. We know that, due to sampling error, the mean of each sample drawn will be close to the population mean, but is unlikely to be exactly equal to it. In fact, we know that if we sample repeatedly from a population, we will obtain a distribution of means, which we called the sampling distribution of means. The standard deviation of this distribution was our standard error, which was the denominator of the t-test. If we draw two samples from the distribution of means, we know that they are not likely to be identical, even though, under the null hypothesis, they were sampled from the same population. Therefore, the difference between these sample means is unlikely to be zero. How big that difference would be by chance, if the null hypothesis is true, is predictable, because it is a function of how variable the sampling distribution of the means is. If this sampling distribution is very narrow, the two means will always be close to one another, and so the mean difference necessarily must be small. But as the variability of the sampling distribution increases, the possibility of the means being different from one another increases. The math is beyond the scope of this text, but it can be shown that there is a direct mathematical relationship between the variability of a sampling distribution of means and the average size of the difference between two means randomly drawn from that sampling distribution.
In other words, the difference between the means (the numerator of the t-test) really is just a different way of measuring the variability between the two means.
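This relationship is easy to see in a short simulation. The Python sketch below (the standard errors and number of trials are arbitrary values chosen for illustration) draws pairs of means from a sampling distribution and shows that the average absolute difference between them grows in direct proportion to the standard error:

```python
import random
import statistics

def mean_abs_diff(se, trials=20000, seed=1):
    """Average absolute difference between two means drawn at random
    from a sampling distribution whose standard error is se."""
    rng = random.Random(seed)
    diffs = [abs(rng.gauss(0, se) - rng.gauss(0, se)) for _ in range(trials)]
    return statistics.mean(diffs)

for se in (1.0, 2.0, 4.0):
    print(se, round(mean_abs_diff(se), 2))
```

Doubling the standard error doubles the average difference between the two means; for a normal sampling distribution the average absolute difference works out to about 1.13 times the standard error.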
The formulas for ANOVA convert the variability of the means (the numerator) to the same scale as the variability of the scores within the groups (the denominator). Therefore, if the null hypothesis is true, both the numerator and the denominator of the F-ratio estimate the same quantity (the variance of scores in the population), and the F-ratio will be approximately equal to 1.00. Of course, sometimes it will be greater than 1.00 and sometimes it will be less than 1.00, so we need to know how much greater than 1.00 the F-ratio must be before we suspect that the null hypothesis is false. Just as with the t-test, there is a table of critical F values that you can consult to answer this question. We will show you exactly how to do that shortly.
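A simulation can illustrate this behavior of the F-ratio when the null hypothesis is true. The sketch below (the group sizes, population mean, and standard deviation are arbitrary illustrative values) repeatedly draws three groups from one population, computes the one-way F-ratio by hand, and averages the results:

```python
import random
import statistics

def f_ratio(groups):
    """One-way ANOVA F-ratio: mean square between over mean square within."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [statistics.mean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

rng = random.Random(42)
# Null hypothesis true: all three groups come from the SAME population.
fs = [f_ratio([[rng.gauss(50, 10) for _ in range(20)] for _ in range(3)])
      for _ in range(5000)]
print(round(statistics.mean(fs), 2))  # near 1.00
```

Individual F-ratios bounce above and below 1.00, but their average stays close to 1.00, because numerator and denominator both estimate the population variance.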
For a fixed sample size, that is, when the number of observations is decided in advance, the distribution of the p value is uniform when the null hypothesis is true.
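This property of the p value can also be checked by simulation. The sketch below uses a simple two-tailed z-test with a known population standard deviation (chosen so it needs only the standard library; the sample size and trial count are arbitrary). With the null hypothesis true, about 5% of the p values fall below .05, exactly as a uniform distribution predicts:

```python
import random
from statistics import NormalDist, mean

rng = random.Random(0)
std_normal = NormalDist()
pvals = []
for _ in range(20000):
    # Two-tailed z-test of H0: mu = 0, with known sigma = 1 and n = 25.
    sample = [rng.gauss(0, 1) for _ in range(25)]
    z = mean(sample) / (1 / 25 ** 0.5)  # (x-bar - 0) / standard error
    pvals.append(2 * (1 - std_normal.cdf(abs(z))))

frac_below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
print(round(frac_below_05, 3))  # close to 0.05
```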
Now, if the calculated t is more extreme than the critical value, we say, "The chance of getting this t, by sheer chance, when the null hypothesis is true, is so small that I would rather say the null hypothesis is false, and accept the alternative that the means are not equal." When the calculated value is less extreme than the critical value, we say, "I could get this value of t by sheer chance, so I will retain the null hypothesis."
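As a worked example of that decision rule, the sketch below (the two groups are small made-up data sets) computes an independent-samples t by hand and compares it against the tabled two-tailed critical value for df = 8 at the .05 level, which is 2.306:

```python
import statistics

def independent_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se_diff = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se_diff

group_a = [24, 27, 31, 29, 26]
group_b = [35, 33, 38, 31, 36]
t_calc = independent_t(group_a, group_b)
t_crit = 2.306  # tabled two-tailed critical t, alpha = .05, df = 8
decision = "reject" if abs(t_calc) > t_crit else "retain"
print(round(t_calc, 2), decision)  # -4.21 reject
```

Because the calculated t (about -4.21) is more extreme than the critical value, we reject the null hypothesis for these illustrative data.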
In a one-way classification ANOVA, when the null hypothesis is false, the probability of obtaining an F-ratio exceeding the critical value reported in the F table at the .05 level of significance is greater than .05.
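A simulation makes this point concrete. The sketch below (the group means, sample sizes, and standard deviation are arbitrary illustrative values) first estimates the .05 critical value of F empirically from samples drawn with the null hypothesis true, then shows that when the null hypothesis is false the F-ratio exceeds that cutoff far more than 5% of the time:

```python
import random
import statistics

def f_ratio(groups):
    """One-way ANOVA F-ratio: mean square between over mean square within."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [statistics.mean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def simulate(mus, trials=5000, n=10, sd=10, seed=7):
    """F-ratios for groups of size n drawn from populations with means mus."""
    rng = random.Random(seed)
    return [f_ratio([[rng.gauss(mu, sd) for _ in range(n)] for mu in mus])
            for _ in range(trials)]

# Null hypothesis true: the empirical .05 critical value is the 95th percentile.
null_fs = sorted(simulate([50, 50, 50]))
f_crit = null_fs[int(0.95 * len(null_fs))]

# Null hypothesis false: the rejection rate is well above .05.
alt_fs = simulate([45, 50, 55], seed=8)
rejection_rate = sum(f > f_crit for f in alt_fs) / len(alt_fs)
print(round(f_crit, 2), round(rejection_rate, 2))
```

The rejection rate when the population means truly differ is the power of the test, and it is comfortably larger than the .05 rate that holds when the null hypothesis is true.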