Khan Academy on a Stick
Inferential statistics
Making inferences based on sample data. Confidence intervals. Margin of error. Hypothesis testing.
-
Introduction to the normal distribution
Exploring the normal distribution
-
Normal distribution excel exercise
(Long: 26 minutes) Spreadsheet presentation showing that the normal distribution approximates the binomial distribution for a large number of trials.
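If you'd rather check this claim with a few lines of code than a spreadsheet, here is a minimal Python sketch. It assumes SciPy is available (not part of the original materials), and the trial count and success probability are just illustrative values.

    # Compare binomial probabilities to the matching normal approximation (illustrative values).
    from scipy.stats import binom, norm

    n, p = 100, 0.5                      # many trials, fair-coin success probability
    mu = n * p                           # binomial mean
    sigma = (n * p * (1 - p)) ** 0.5     # binomial standard deviation

    for k in (40, 50, 60):
        exact = binom.pmf(k, n, p)
        # Normal approximation with continuity correction: P(k - 0.5 < X < k + 0.5)
        approx = norm.cdf(k + 0.5, mu, sigma) - norm.cdf(k - 0.5, mu, sigma)
        print(k, round(exact, 4), round(approx, 4))

The exact and approximate probabilities agree closely once n is large, which is exactly what the presentation demonstrates.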
-
ck12.org normal distribution problems: Qualitative sense of normal distributions
Discussion of how "normal" a distribution might be
-
ck12.org normal distribution problems: Empirical rule
Using the empirical rule (or 68-95-99.7 rule) to estimate probabilities for normal distributions
-
ck12.org normal distribution problems: z-score
Z-score practice
-
ck12.org exercise: Standard normal distribution and the empirical rule
Using the Empirical Rule with a standard normal distribution
-
ck12.org: More empirical rule and z-score practice
More Empirical Rule and Z-score practice
Normal distribution
The normal distribution (often referred to as the "bell curve") is at the core of most of inferential statistics. By assuming that the outcomes of most complex processes are normally distributed (we'll see why this is reasonable), we can gauge the probability of an observed result happening by chance. To best enjoy this tutorial, come to it understanding what probability distributions and random variables are. You should also be very familiar with the notions of population and sample mean and standard deviation.
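If you like to check the numbers yourself, here is a tiny Python sketch (using SciPy, which is an assumption about your tooling) that recovers the empirical 68-95-99.7 figures from the standard normal distribution.

    # Verify the empirical (68-95-99.7) rule for a normal distribution.
    from scipy.stats import norm

    for k in (1, 2, 3):
        # Probability of landing within k standard deviations of the mean
        prob = norm.cdf(k) - norm.cdf(-k)
        print(f"within {k} sd: {prob:.4f}")
    # within 1 sd: 0.6827, within 2 sd: 0.9545, within 3 sd: 0.9973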
-
Central limit theorem
Introduction to the central limit theorem and the sampling distribution of the mean
-
Sampling distribution of the sample mean
The central limit theorem and the sampling distribution of the sample mean
-
Sampling distribution of the sample mean 2
More on the Central Limit Theorem and the Sampling Distribution of the Sample Mean
-
Standard error of the mean
Standard Error of the Mean (a.k.a. the standard deviation of the sampling distribution of the sample mean!)
-
Sampling distribution example problem
Figuring out the probability of running out of water on a camping trip
Sampling distribution
In this tutorial, we experience one of the most exciting ideas in statistics: the central limit theorem. Without it, it would be a lot harder to make inferences about population parameters given sample statistics. It tells us that, regardless of what the population distribution looks like, the distribution of the sample means (you'll learn what that is) will be approximately normal. It's a good idea to understand a bit about normal distributions before diving into this tutorial.
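You can also watch the central limit theorem happen by simulation: draw repeated samples from a clearly non-normal population and look at the distribution of the sample means. The sketch below assumes NumPy is available; the population, sample size, and repetition count are arbitrary choices for illustration.

    # Simulate the sampling distribution of the sample mean for a skewed population.
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.exponential(scale=2.0, size=100_000)    # decidedly non-normal

    sample_means = np.array([
        rng.choice(population, size=40).mean()               # mean of one sample of 40
        for _ in range(5_000)
    ])

    print("population mean:", population.mean())
    print("mean of sample means:", sample_means.mean())      # close to the population mean
    print("standard error:", sample_means.std(ddof=1))       # roughly population sd / sqrt(40)

A histogram of sample_means looks bell-shaped even though the population itself is heavily skewed.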
-
Confidence interval 1
Estimating the probability that the true population mean lies within a range around a sample mean.
-
Confidence interval example
Confidence Interval Example
-
Small sample size confidence intervals
Constructing small sample size confidence intervals using t-distributions
Confidence intervals
We all have confidence intervals ("I'm the king of the world!!!!") and non-confidence intervals ("No one loves me"). That is not what this tutorial is about. This tutorial takes what you already know about the central limit theorem, sampling distributions, and z-scores and uses these tools to dive into the world of inferential statistics. It may seem magical at first, but from our sample, we can now make inferences about the probability of our population mean actually being in an interval.
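As a concrete sketch of the z-based interval this tutorial builds (with made-up sample numbers and SciPy assumed as tooling), a 95% confidence interval is the sample mean plus or minus about 1.96 standard errors:

    # 95% confidence interval for a population mean using a z critical value.
    import math
    from scipy.stats import norm

    x_bar, s, n = 7.1, 1.2, 36            # hypothetical sample mean, sample sd, and sample size
    se = s / math.sqrt(n)                 # estimated standard error of the mean
    z = norm.ppf(0.975)                   # about 1.96 for a 95% interval

    lower, upper = x_bar - z * se, x_bar + z * se
    print(f"95% CI: ({lower:.2f}, {upper:.2f})")

For small samples, swap the z critical value for the appropriate t critical value, as the small-sample video above explains.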
-
Mean and variance of Bernoulli distribution example
Mean and Variance of Bernoulli Distribution Example
-
Bernoulli distribution mean and variance formulas
Bernoulli Distribution Mean and Variance Formulas
-
Margin of error 1
Finding the 95% confidence interval for the proportion of a population voting for a candidate.
-
Margin of error 2
Finding the 95% confidence interval for the proportion of a population voting for a candidate.
Bernoulli distributions and margin of error
Ever wondered what pollsters are talking about when they say there is a 3% "margin of error" in their results? Well, this tutorial will not only explain what it means, but give you the tools and understanding to be a pollster yourself!
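As a rough sketch with an invented poll, the 95% margin of error for a sampled proportion is about 1.96 times the standard error of that proportion:

    # 95% margin of error for a polled proportion (illustrative numbers).
    import math

    p_hat, n = 0.52, 1000                          # hypothetical poll: 52% of 1000 respondents
    se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error of the sample proportion
    margin = 1.96 * se                             # z critical value for 95% confidence

    print(f"margin of error: ±{margin:.1%}")       # about ±3.1%, the familiar "3% margin of error"

So a headline "3% margin of error" usually just means a poll of roughly a thousand people reported a 95% confidence interval.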
-
Hypothesis testing and p-values
Hypothesis Testing and P-values
-
One-tailed and two-tailed tests
One-Tailed and Two-Tailed Tests
-
Type 1 errors
Type 1 Errors
-
Z-statistics vs. T-statistics
Z-statistics vs. T-statistics
-
Small sample hypothesis test
Small Sample Hypothesis Test
-
T-statistic confidence interval
T-Statistic Confidence Interval (for small sample sizes)
-
Large sample proportion hypothesis testing
Large Sample Proportion Hypothesis Testing
Hypothesis testing with one sample
This tutorial helps us answer one of the most important questions not only in statistics, but in all of science: how confident are we that a result from a new drug or process reflects an actual impact rather than random chance? If you are familiar with sampling distributions and confidence intervals, you're ready for this adventure!
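Here is a minimal sketch of a one-sample test in Python, assuming SciPy and using invented measurements; the videos above walk through the same logic by hand with z- and t-statistics.

    # One-sample t-test: is the mean measurement different from a claimed value of 5?
    from scipy.stats import ttest_1samp

    data = [5.4, 4.9, 6.1, 5.8, 5.2, 6.0, 5.5, 4.7, 5.9, 5.6]   # hypothetical measurements
    result = ttest_1samp(data, popmean=5.0)

    print("t statistic:", round(result.statistic, 3))
    print("p-value:", round(result.pvalue, 4))   # a small p-value is evidence against H0: mu = 5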
-
Variance of differences of random variables
Variance of Differences of Random Variables
-
Difference of sample means distribution
Difference of Sample Means Distribution
-
Confidence interval of difference of means
Confidence Interval of Difference of Means
-
Clarification of confidence interval of difference of means
Clarification of Confidence Interval of Difference of Means
-
Hypothesis test for difference of means
Hypothesis Test for Difference of Means
-
Comparing population proportions 1
Comparing Population Proportions 1
-
Comparing population proportions 2
Comparing Population Proportions 2
-
Hypothesis test comparing population proportions
Hypothesis Test Comparing Population Proportions
Hypothesis testing with two samples
You're already familiar with hypothesis testing with one sample. In this tutorial, we'll go further by testing whether the difference between the means of two samples is unlikely to be due purely to chance.
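A minimal two-sample sketch, again assuming SciPy and made-up data (here using Welch's unequal-variance t-test, one common choice):

    # Two-sample t-test for a difference of means (Welch's test, unequal variances).
    from scipy.stats import ttest_ind

    group_a = [12.1, 11.8, 13.0, 12.6, 11.5, 12.9]   # hypothetical control measurements
    group_b = [13.4, 13.1, 12.8, 14.0, 13.6, 13.2]   # hypothetical treatment measurements

    result = ttest_ind(group_a, group_b, equal_var=False)
    print("t statistic:", round(result.statistic, 3))
    print("p-value:", round(result.pvalue, 4))   # a small p-value suggests the means differ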
-
Chi-square distribution introduction
Chi-Square Distribution Introduction
-
Pearson's chi square test (goodness of fit)
Pearson's Chi Square Test (Goodness of Fit)
-
Contingency table chi-square test
Contingency Table Chi-Square Test
Chi-square probability distribution
You've gotten good at hypothesis testing when you can make assumptions about the underlying distributions. In this tutorial, we'll learn about a new distribution (the chi-square one) and how it can help you (yes, you) infer what an underlying distribution even is!
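A small goodness-of-fit sketch in the same spirit as Pearson's chi-square test above, with invented die-roll counts and SciPy assumed:

    # Pearson's chi-square goodness-of-fit test: is a die fair?
    from scipy.stats import chisquare

    observed = [18, 22, 16, 25, 19, 20]      # hypothetical counts from 120 rolls
    expected = [20] * 6                      # expected counts under a fair die

    stat, p_value = chisquare(observed, f_exp=expected)
    print("chi-square statistic:", round(stat, 3))
    print("p-value:", round(p_value, 4))     # a large p-value gives no evidence the die is loaded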
-
ANOVA 1: Calculating SST (total sum of squares)
Analysis of Variance 1 - Calculating SST (Total Sum of Squares)
-
ANOVA 2: Calculating SSW and SSB (total sum of squares within and between)
Analysis of Variance 2 - Calculating SSW and SSB (Total Sum of Squares Within and Between)
-
ANOVA 3: Hypothesis test with F-statistic
Analysis of Variance 3 - Hypothesis Test with F-Statistic
Analysis of variance
You already know a good bit about hypothesis testing with one or two samples. Now we take things further by making inferences based on three or more samples. We'll use the very special F-distribution to do it (F stands for "fabulous").
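To close with a sketch of where the videos above are headed, here is a one-way ANOVA in Python, assuming SciPy and three invented groups of measurements:

    # One-way ANOVA: do three groups share a common mean?
    from scipy.stats import f_oneway

    group_1 = [3.1, 2.9, 3.5, 3.0, 3.2]      # hypothetical measurements, group 1
    group_2 = [3.8, 4.1, 3.9, 4.3, 4.0]      # group 2
    group_3 = [2.7, 2.5, 3.0, 2.8, 2.6]      # group 3

    f_stat, p_value = f_oneway(group_1, group_2, group_3)
    print("F statistic:", round(f_stat, 3))
    print("p-value:", round(p_value, 6))     # a small p-value suggests at least one mean differs

The F statistic is the ratio of between-group to within-group variability, which is exactly what the SST, SSW, and SSB calculations above build up to.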