
After you do a statistical test, you are either going to reject or accept the null hypothesis. Rejecting the null hypothesis means that you conclude that the null hypothesis is not true; in our chicken sex example, you would conclude that the true proportion of male chicks, if you gave chocolate to an infinite number of chicken mothers, would be less than 50%.

This number, 0.030, is the *P* value. It is defined as the probability of getting the observed result, or a more extreme result, if the null hypothesis is true. So "*P*=0.030" is a shorthand way of saying "The probability of getting 17 or fewer male chickens out of 48 total chickens, *IF* the null hypothesis is true that 50% of chickens are male, is 0.030."
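The definition above can be checked directly. The following is a minimal sketch (standard-library Python, not from the original text) that sums the exact binomial probabilities of getting 17 or fewer males out of 48, assuming each chick is male with probability 0.5 under the null hypothesis:

```python
from math import comb

# Exact one-tailed binomial P value: the probability of observing 17 or
# fewer males out of 48 chicks, if each chick is male with probability 0.5.
n, observed, p_male = 48, 17, 0.5
p_value = sum(comb(n, k) * p_male**k * (1 - p_male)**(n - k)
              for k in range(observed + 1))
print(round(p_value, 3))  # → 0.030
```

Summing the tail of the binomial distribution like this reproduces the 0.030 figure quoted above.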


A Bayesian would insist that you put in numbers just how likely you think the null hypothesis and various values of the alternative hypothesis are, before you do the experiment, and I'm not sure how that is supposed to work in practice for most experimental biology. But the general concept is a valuable one: as Carl Sagan summarized it, "Extraordinary claims require extraordinary evidence."

Now instead of testing 1000 plant extracts, imagine that you are testing just one. If you are testing it to see if it kills beetle larvae, you know (based on everything you know about plant and beetle biology) there's a pretty good chance it will work, so you can be pretty sure that a *P* value less than 0.05 is a true positive. But if you are testing that one plant extract to see if it grows hair, which you know is very unlikely (based on everything you know about plants and hair), a *P* value less than 0.05 is almost certainly a false positive. In other words, *if you expect that the null hypothesis is probably true, a statistically significant result is probably a false positive.* This is sad; the most exciting, amazing, unexpected results in your experiments are probably just your data trying to make you jump to ridiculous conclusions. You should require a much lower *P* value to reject a null hypothesis that you think is probably true.
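The arithmetic behind this point can be made explicit. As a rough sketch (the alpha and power values here are assumptions chosen for illustration, not from the original text), the probability that a significant result is a true positive depends strongly on the prior probability that the alternative hypothesis is true:

```python
# Positive predictive value of a significant result: the probability that
# a P value below alpha reflects a real effect, given a prior probability
# that the alternative hypothesis is true. Alpha and power are illustrative.
def ppv(prior, alpha=0.05, power=0.8):
    true_pos = prior * power          # real effects that get detected
    false_pos = (1 - prior) * alpha   # true nulls that get wrongly rejected
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.5), 2))    # plausible hypothesis (beetle-killing extract) → 0.94
print(round(ppv(0.001), 2))  # far-fetched hypothesis (hair-growing extract) → 0.02
```

With a plausible hypothesis, a significant result is almost certainly real; with a far-fetched one, it is almost certainly a false positive, exactly as the paragraph above argues.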

A related criticism is that a significant rejection of a null hypothesis might not be biologically meaningful, if the difference is too small to matter. For example, in the chicken-sex experiment, having a treatment that produced 49.9% male chicks might be significantly different from 50%, but it wouldn't be enough to make farmers want to buy your treatment. These critics say you should estimate the effect size and put a confidence interval on it, not estimate a *P* value. So the goal of your chicken-sex experiment should not be to say "Chocolate gives a proportion of males that is significantly less than 50% (*P*=0.015)" but to say "Chocolate produced 36.1% males with a 95% confidence interval of 25.9 to 47.4%." For the chicken-feet experiment, you would say something like "The difference between males and females in mean foot size is 2.45 mm, with a confidence interval on the difference of ±1.98 mm."
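One common way to put a confidence interval on a proportion is the Wilson score interval. The sketch below (standard-library Python; the counts reuse the 17-of-48 chocolate example from earlier, since the sample behind the 36.1% figure isn't given here) shows the calculation:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(17, 48)  # male chicks from the earlier chocolate example
print(f"{17/48:.1%} males, 95% CI {lo:.1%} to {hi:.1%}")
```

Reporting the interval, rather than just a *P* value, tells the reader how large the effect plausibly is, which is the critics' point.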


In the olden days, when people looked up *P* values in printed tables, they would report the results of a statistical test as "*P*<0.05", "*P*<0.01", "*P*>0.10", etc. Nowadays, almost all computer statistics programs give the exact *P* value resulting from a statistical test, such as *P*=0.029, and that's what you should report in your publications. You will conclude that the results are either significant or they're not significant; they either reject the null hypothesis (if *P* is below your pre-determined significance level) or don't reject the null hypothesis (if *P* is above your significance level). But other people will want to know if your results are "strongly" significant (*P* much less than 0.05), which will give them more confidence in your results than if they were "barely" significant (*P*=0.043, for example). In addition, other researchers will need the exact *P* value if they want to combine your results with others into a meta-analysis.

You should decide whether to use the one-tailed or two-tailed probability before you collect your data, of course. A one-tailed probability is more powerful, in the sense of having a lower chance of false negatives, but you should only use a one-tailed probability if you really, truly have a firm prediction about which direction of deviation you would consider interesting. In the chicken example, you might be tempted to use a one-tailed probability, because you're only looking for treatments that decrease the proportion of worthless male chickens. But if you accidentally found a treatment that produced 87% male chickens, would you really publish the result as "The treatment did not cause a significant decrease in the proportion of male chickens"? I hope not. You'd realize that this unexpected result, even though it wasn't what you and your farmer friends wanted, would be very interesting to other people; by leading to discoveries about the fundamental biology of sex-determination in chickens, it might even help you produce more female chickens someday. Any time a deviation in either direction would be interesting, you should use the two-tailed probability. In addition, people are skeptical of one-tailed probabilities, especially if a one-tailed probability is significant and a two-tailed probability would not be significant (as in our chocolate-eating chicken example). Unless you provide a very convincing explanation, people may think you decided to use the one-tailed probability *after* you saw that the two-tailed probability wasn't quite significant, which would be cheating. It may be easier to always use two-tailed probabilities. **For this handbook, I will always use two-tailed probabilities, unless I make it very clear that only one direction of deviation from the null hypothesis would be interesting.**
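The chocolate-eating chicken example above can be verified numerically. Because the null (50% males) is symmetric, the two-tailed exact binomial *P* is twice the one-tailed *P*; the sketch below (standard-library Python) shows how the 17-of-48 result is significant one-tailed but not two-tailed:

```python
from math import comb

# One- vs two-tailed exact binomial P values for 17 males out of 48 chicks.
# With a symmetric null (p = 0.5), the two-tailed P is twice the one-tailed P.
n, observed = 48, 17
p_one = sum(comb(n, k) for k in range(observed + 1)) / 2**n
p_two = min(1.0, 2 * p_one)

print(round(p_one, 3), round(p_two, 3))  # one-tailed significant, two-tailed not
```

At the conventional 0.05 level, the one-tailed *P* (about 0.030) rejects the null while the two-tailed *P* (about 0.060) does not, which is exactly why a reader might suspect the tail was chosen after the fact.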

In this case there is no "theory" that gives us an obvious null hypothesis. For example, we have no reason to suppose that 55% or 75% or any other percentage of large spores will produce multiple outgrowths. So the most sensible null hypothesis is that both the large and the small spores will behave similarly and that both types of spore will produce 50% multiple outgrowths and 50% single outgrowths. In other words, we will test against a 1:1:1:1 ratio. Then, if our data do not agree with this expectation we will have evidence that spore size affects the type of germination.
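Testing observed counts against a 1:1:1:1 expectation is usually done with a chi-square goodness-of-fit test. The following is a minimal sketch (standard-library Python; the spore counts are hypothetical, invented only to illustrate the calculation):

```python
# Chi-square goodness-of-fit test against a 1:1:1:1 expectation for the four
# spore categories (large/multiple, large/single, small/multiple, small/single).
# The observed counts below are hypothetical, for illustration only.
observed = [120, 80, 90, 110]
total = sum(observed)
expected = total / len(observed)  # equal counts under a 1:1:1:1 ratio
chi_sq = sum((o - expected)**2 / expected for o in observed)

# Critical value for alpha = 0.05 with df = 3 (4 categories - 1) is 7.815.
print(chi_sq, "significant" if chi_sq > 7.815 else "not significant")
```

If the statistic exceeds the critical value, the data disagree with the 1:1:1:1 expectation, giving evidence that spore size affects the type of germination.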

A fairly common criticism of the hypothesis-testing approach to statistics is that the null hypothesis will always be false, if you have a big enough sample size. In the chicken-feet example, critics would argue that if you had an infinite sample size, it is impossible that male chickens would have *exactly* the same average foot size as female chickens. Therefore, since you know before doing the experiment that the null hypothesis is false, there's no point in testing it.

Does a probability of 0.030 mean that you should reject the null hypothesis, and conclude that chocolate really caused a change in the sex ratio? The convention in most biological research is to use a significance level of 0.05. This means that if the *P* value is less than 0.05, you reject the null hypothesis; if *P* is greater than or equal to 0.05, you don't reject the null hypothesis. There is nothing mathematically magic about 0.05; it was chosen rather arbitrarily during the early days of statistics, and people could have agreed upon 0.04, or 0.025, or 0.071 as the conventional significance level.
