Results drawn from samples are susceptible to two types of error: type I error and type II error. A type I error is one in which we conclude that a difference exists when in truth it does not (the observed difference was due to chance). From probability distributions, we can estimate the probability of making this type of error, which is referred to as alpha. This is also the P value: the probability that we have made a type I error. Such an error has occurred when a P value reaches statistical significance even though no true difference or association exists. In a given study, we may conduct many tests of comparison and association, and each time we are willing to accept a 5% chance of a type I error. The more tests we perform, the more likely we are to make a type I error, since by definition about 5% of our tests can be expected to cross the significance threshold by chance alone.
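As a rough illustration (not part of the original text), if each of m comparisons is tested independently at alpha = 0.05, the chance of at least one type I error across the family of tests is 1 - (1 - 0.05)^m. The short Python sketch below, which assumes the tests are independent, shows how quickly this probability grows with the number of comparisons.

```python
# Sketch: family-wise chance of at least one type I error when m
# independent comparisons are each tested at alpha = 0.05.
# (Assumes independence between tests, which real data may not satisfy.)

alpha = 0.05

for m in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests -> P(at least one type I error) = {p_at_least_one:.2f}")
```

For example, with 10 independent tests the probability of at least one spurious "significant" result is roughly 0.40, which is why multiple comparisons inflate the risk of a type I error.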