Stat Simulations

"All The Mathematical Methods I Learned In My University Math Degree Became Obsolete In My Lifetime" - Keith Devlin


Random Sampling:  Enter a probability distribution (using relative frequencies) into the list D. The blue histogram is your population distribution. Let n be the size of your sample. See how different samples of size n (orange) result in different means, sample standard deviations, and confidence intervals. Change n to see its effect.  (https://www.desmos.com/calculator/36ez2nxepy)
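
A rough R sketch of the same experiment (the outcomes and relative frequencies below are made up; swap in your own list D):

# Sample n values from a discrete distribution given by relative frequencies,
# then summarize the sample with its mean, sd, and a 95% t-interval.
vals  <- 0:5                                 # hypothetical outcomes
freqs <- c(1, 2, 4, 6, 4, 3)                 # hypothetical relative frequencies (the list D)
probs <- freqs / sum(freqs)

n    <- 30                                   # sample size; change it to see its effect
samp <- sample(vals, size = n, replace = TRUE, prob = probs)

xbar <- mean(samp)
s    <- sd(samp)
ci   <- xbar + c(-1, 1) * qt(0.975, df = n - 1) * s / sqrt(n)
c(mean = xbar, sd = s, lower = ci[1], upper = ci[2])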






Central Limit Theorem:  Enter a probability distribution (using relative frequencies) into the list D. See how repeatedly sampling D n times produces a normal-shaped curve of sample means.  (https://www.desmos.com/calculator/lknxztzijt)
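
The same idea as a short R sketch (reusing the made-up distribution from the previous example): draw many samples of size n, keep only the sample means, and look at their histogram.

# Repeatedly sample the discrete distribution and histogram the sample means;
# the histogram comes out roughly normal even though the population isn't.
vals  <- 0:5
freqs <- c(1, 2, 4, 6, 4, 3)
probs <- freqs / sum(freqs)

n       <- 30
numsamp <- 5000
means   <- replicate(numsamp, mean(sample(vals, n, replace = TRUE, prob = probs)))
hist(means, breaks = 50, main = "Sample means are approximately normal")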






Hypothesis Testing Simulation:  Enter the sample size, mu_0, alpha, and the type of alternative hypothesis. See how different samples allow you to accept/reject the null. Also see (using E) whether the rejection or acceptance is an error, and which type.  (https://www.desmos.com/calculator/8ghubd3zvu)
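
A hedged R version of the same experiment, using a one-sample t-test (the true mean mu_true, the sd of 10, and the other numbers below are placeholders, not values from the Desmos sheet):

# Generate a sample, run a one-sample t-test against mu_0, and check whether
# the decision is correct or a Type I / Type II error.
n       <- 25
mu_0    <- 50          # null value
mu_true <- 50          # true mean used to generate the data (plays the role of E)
alpha   <- 0.05

samp   <- rnorm(n, mean = mu_true, sd = 10)
test   <- t.test(samp, mu = mu_0, alternative = "two.sided")
reject <- test$p.value < alpha

if (reject && mu_true == mu_0)   cat("Rejected a true null: Type I error\n")
if (!reject && mu_true != mu_0)  cat("Accepted a false null: Type II error\n")
if (reject == (mu_true != mu_0)) cat("Correct decision\n")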






Archer Problem:  An archer has a magic bow that makes their arrows fly straight forever. They stand in front of an infinitely tall wall, pick a random angle, and fire an arrow at it. How high up the wall does the arrow hit? What are the mean and variance of the height?

The truth is, Desmos lacks the firepower to show just how wild and unpredictable this variable is. Still, we can see that even at n=1000 we get nowhere near converging to respectable values. This is an example of a fat-tailed distribution: a distribution with no mean or variance, against which the Central Limit Theorem is powerless.  (https://www.desmos.com/calculator/zhlpmkjsug)
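
For the curious, here is an R sketch of the problem. If we assume the archer stands one unit from the wall and picks an angle uniformly between 0 and pi/2, the hit height is tan(theta), which follows a (half-)Cauchy distribution with no mean or variance, so the running mean never settles down.

# Simulate n shots and plot the running mean of the hit heights.
set.seed(1)
n      <- 1000
theta  <- runif(n, 0, pi/2)       # random firing angle
height <- tan(theta)              # height where the arrow hits the wall

running_mean <- cumsum(height) / seq_len(n)
plot(running_mean, type = "l",
     main = "Running mean of arrow heights never converges")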





95% Confidence Intervals and Central Limit Theorem in R:  This bit of R code samples the uniform distribution from 0 to 100 with a sample of size n (given your choice of n). It does so numsamp times and generates numsamp 95% confidence intervals. You can mess with the code to build different intervals, sample a different distribution, or take more samples. It then plots the sample means so we can observe the approximate normality of their distribution.
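
The original script isn't reproduced here, but a minimal sketch of the idea might look like this (n, numsamp, and the plotting choices are placeholders):

# Draw numsamp samples of size n from Uniform(0, 100), build a 95% t-interval
# from each, and histogram the sample means.
n       <- 30
numsamp <- 1000

samples <- matrix(runif(n * numsamp, min = 0, max = 100), nrow = numsamp)
means   <- rowMeans(samples)
sds     <- apply(samples, 1, sd)
lower   <- means - qt(0.975, df = n - 1) * sds / sqrt(n)
upper   <- means + qt(0.975, df = n - 1) * sds / sqrt(n)

mean(lower <= 50 & 50 <= upper)   # roughly 95% of intervals cover the true mean, 50
hist(means, breaks = 40, main = "Sample means from Uniform(0, 100) look normal")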

95% Confidence Intervals and Central Limit Theorem in R, BUT WITH ARCHERS:  If we try to do the same thing as above, but with the Archer problem, nothing so nice happens.
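
A sketch of that experiment, swapping in archer heights (tan of a uniform angle) for the uniform draws: the sample means are dominated by occasional huge shots and never take on a normal shape, so the usual t-intervals can't be trusted.

# Same setup as before, but each observation is an archer's hit height.
n       <- 30
numsamp <- 1000

samples <- matrix(tan(runif(n * numsamp, 0, pi/2)), nrow = numsamp)
means   <- rowMeans(samples)
hist(means, breaks = 100, main = "Archer sample means: wild outliers, no bell shape")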

Repeated Sampling:  If we have a normal variable with mean m and sd s, how many times do we need to sample it before, purely by chance, we get a "significant" result that the mean isn't m?
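
One way to play with this in R (a sketch, using a one-sample t-test and placeholder values for m, s, n, and alpha): keep drawing fresh samples and testing H0: mean = m until one test comes back significant purely by chance. With alpha = 0.05 this takes about 20 tests on average.

# Count how many samples we take before a spurious rejection of a true null.
m <- 10; s <- 2; n <- 25; alpha <- 0.05

tests <- 0
repeat {
  tests <- tests + 1
  samp  <- rnorm(n, mean = m, sd = s)
  if (t.test(samp, mu = m)$p.value < alpha) break
}
tests   # number of samples taken before a "significant" result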

What is a probability:  If we claim that 1/6 of die rolls come up 6, then as we roll more and more dice, shouldn't the proportion of sixes approach 1/6?
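
A quick R sketch of that intuition (the law of large numbers): roll a fair die many times and watch the running proportion of sixes drift toward 1/6.

# Running proportion of sixes over 10,000 simulated rolls.
set.seed(1)
rolls    <- sample(1:6, size = 10000, replace = TRUE)
prop_six <- cumsum(rolls == 6) / seq_along(rolls)
plot(prop_six, type = "l", ylab = "Proportion of sixes so far")
abline(h = 1/6, lty = 2)        # the claimed probability 1/6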