α = probability of a Type I error = P(Type I error) = probability of rejecting the null hypothesis when the null hypothesis is true.

β = probability of a Type II error = P(Type II error) = probability of not rejecting the null hypothesis when the null hypothesis is false.

You can get a nonsignificant result when there is truly no effect present. This is correct — you don’t want to claim that a drug works if it really doesn’t. (See the upper-left corner of the outlined box in the figure.)

You can get a significant result when there truly is some effect present. This is correct — you do want to claim that a drug works when it really does. (See the lower-right corner of the outlined box in the figure.)

You can get a significant result when there’s truly no effect present. This is a Type I error — you’ve been tricked by random fluctuations that made a truly worthless drug appear to be effective. (See the lower-left corner of the outlined box in the figure.)

Your company will invest millions of dollars in the further development of a drug that will eventually be shown to be worthless. Statisticians use the Greek letter alpha (α) to represent the probability of making a Type I error.
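You can see α in action with a simple simulation. The sketch below (illustrative only; the drug-trial scenario, sample size of 30, and use of a two-sample t-test are assumptions, not taken from the text) draws both "treatment" and "placebo" groups from the same population, so the null hypothesis is true by construction. Any significant result is a Type I error, and the rate of such results should hover near the chosen α of 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05        # significance level the tester chooses
trials = 5000       # number of simulated experiments
false_alarms = 0

for _ in range(trials):
    # Null hypothesis is TRUE: both groups come from the same N(0, 1) population,
    # i.e., the "drug" has no effect at all.
    treatment = rng.normal(loc=0.0, scale=1.0, size=30)
    placebo = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(treatment, placebo)
    if p_value < alpha:
        false_alarms += 1  # significant result with no real effect: a Type I error

print(f"Estimated Type I error rate: {false_alarms / trials:.3f}")
```

The estimated rate lands near 0.05 because that is exactly what α promises: if the null hypothesis is true, about 5% of experiments will still cross the significance threshold by chance alone.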

You can get a nonsignificant result when there truly is an effect present. This is a Type II error (see the upper-right corner of the outlined box in the figure) — you’ve failed to see that the drug really works, perhaps because the effect was obscured by the random noise in the data.

Further development will be halted, and the miracle drug of the century will be consigned to the scrap heap, along with the Nobel Prize you’ll never get. Statisticians use the Greek letter beta (β) to represent the probability of making a Type II error.
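β can be simulated the same way. In this sketch (again illustrative; the effect size of half a standard deviation and the sample size of 30 per group are assumptions) the drug genuinely works, so every nonsignificant result is a Type II error. The fraction of missed effects estimates β, and 1 − β is what statisticians call the power of the test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
trials = 5000
misses = 0

for _ in range(trials):
    # Null hypothesis is FALSE: the drug shifts the mean by 0.5 standard
    # deviations, so there really is an effect to detect.
    treatment = rng.normal(loc=0.5, scale=1.0, size=30)
    placebo = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(treatment, placebo)
    if p_value >= alpha:
        misses += 1  # nonsignificant result despite a real effect: a Type II error

beta = misses / trials
print(f"Estimated Type II error rate (beta): {beta:.3f}")
print(f"Estimated power (1 - beta): {1 - beta:.3f}")
```

Notice that with only 30 subjects per group and a modest effect, the test misses the real effect roughly half the time: the random noise in small samples can easily obscure a genuine drug benefit, which is why researchers increase sample size to drive β down.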