An important part of statistics is to make decisions about whether the groups in a study are similar or different. The most basic example in the sciences would be to compare the outcomes of an experimental group and a control group. Scientists use inferential statistics as an objective method for comparing groups and interpreting the outcomes.
When two groups are equal or very similar, the difference between them is called nonsignificant. This means that the two groups are close enough to be treated as essentially the same. Large differences between groups that appear to be nonrandom are called statistically significant differences.
The classic approach to inferential testing works by calculating a test score, such as z (from standard scores), t (from t-tests), or F (from analysis of variance). The score is then compared against a probability distribution to obtain what is informally called a p value: the probability of observing a result at least as extreme as the one found if the groups did not truly differ. A p > .05 is usually judged nonsignificant, whereas a p < .05 is judged statistically significant.
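As a minimal sketch of this logic, the example below computes a one-sample z test: a z score is calculated from a (hypothetical) sample mean, and a two-tailed p value is obtained from the standard normal distribution. All of the numbers here are invented for illustration; real analyses would use actual data and, more often, a t-test.

```python
import math

def z_test(sample_mean, pop_mean, pop_sd, n):
    """Two-tailed one-sample z test: returns (z score, p value)."""
    se = pop_sd / math.sqrt(n)           # standard error of the mean
    z = (sample_mean - pop_mean) / se    # standard score for the sample mean
    # Two-tailed p from the standard normal CDF, computed with math.erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical values: sample of 36 with mean 104, against a
# population with mean 100 and standard deviation 15
z, p = z_test(sample_mean=104, pop_mean=100, pop_sd=15, n=36)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = 1.60, p = 0.1096
```

Because p here is greater than .05, the difference would be judged nonsignificant under the usual convention described above.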
The following information covers traditional or classical approaches to inferential statistics that a beginning statistician might use. These approaches are sometimes called null hypothesis testing or frequentist statistics.
It is important to be aware that the interpretation of classic inferential statistics can be problematic. Null hypothesis testing approaches have come under increasing criticism in recent years, and newer approaches, such as Bayesian statistics, are gaining in popularity. However, these newer inferential methods are beyond the current goal of helping the beginning statistician get started. Readers interested in newer approaches to hypothesis testing should consult other sources for more information.
This work is licensed under a Creative Commons Attribution 4.0 International License that allows sharing, adapting, and remixing.