In statistical hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (a "false positive"), while a type II error is the failure to reject a false null hypothesis (a "false negative"). More simply stated, a type I error is detecting an effect that is not present, while a type II error is failing to detect an effect that is present. The terms "type I error" and "type II error" are often used interchangeably with the general notion of false positives and false negatives in binary classification, such as medical testing, but narrowly speaking they refer specifically to statistical hypothesis testing in the Neyman–Pearson framework, as discussed in this article.
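The two error rates can be made concrete with a small simulation, a minimal sketch rather than a standard recipe: the test, sample size, effect size, and significance level below are all illustrative choices. A two-sided z-test of H0: mean = 0 (with known standard deviation 1) is run many times, first with H0 true (so every rejection is a type I error) and then with H0 false (so every non-rejection is a type II error).

```python
import random
import math

random.seed(0)
Z_CRIT = 1.959963985      # two-sided critical value for alpha = 0.05
N, TRIALS = 30, 10_000    # sample size per test, number of simulated tests

def rejects_null(true_mean):
    """Run one z-test of H0: mean = 0, assuming known sigma = 1.

    Returns True when H0 is rejected at the 5% level.
    """
    sample = [random.gauss(true_mean, 1.0) for _ in range(N)]
    z = (sum(sample) / N) / (1.0 / math.sqrt(N))
    return abs(z) > Z_CRIT

# Type I error rate: H0 is true (mean = 0), so any rejection is a false positive.
type_i = sum(rejects_null(0.0) for _ in range(TRIALS)) / TRIALS

# Type II error rate: H0 is false (true mean = 0.3), so failing to reject
# is a false negative.
type_ii = sum(not rejects_null(0.3) for _ in range(TRIALS)) / TRIALS

print(f"empirical type I rate:  {type_i:.3f} (nominal alpha = 0.05)")
print(f"empirical type II rate: {type_ii:.3f}")
```

The empirical type I rate should land near the chosen significance level of 0.05, while the type II rate depends on the true effect size and sample size; shrinking the effect or the sample raises the type II rate even though the type I rate stays pinned at alpha.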