Newton Used Averages and So Should You

A recent article at Mother Jones mentions Isaac Newton's (unusual at the time) use of averages in his experimental work.

The man who 'invented the calculus'1 was apparently ahead of his time in using averages. While studying his eponymous rings, instead of picking and choosing the measurements that seemed best, he took all of his measurements and averaged them.

This only works if the measurement error is 'unbiased,' which in probabilistic lingo means that the error has zero mean. That is (since I can't avoid introducing some notation), suppose \(r\) is the true value you would like to measure, but in the process of measuring it you introduce some error. Call this error \(\epsilon\) (because that's what statisticians like to call these disturbances). Then your actual measurement on any go \(i\) at the instrument is \[M_{i} = r + \epsilon_{i}.\] As long as the \(\epsilon_{i}\) have zero expected value (\(E[\epsilon_{i}] = 0\)), each measurement is unbiased for the true value \(r\). So averaging will help: the average is also unbiased for \(r\), and with \(n\) (uncorrelated) samples its variance decreases like \(\frac{1}{n}\). Take \(n\) large enough and the variance goes to zero, and the law of large numbers tells us we'll get the right answer with very high probability.
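If you want to see that shrinking variance on your own machine, here's a minimal simulation sketch. The true value, the noise level, and the choice of Gaussian noise are all made up for illustration; the only thing that matters is that the errors have zero mean.

```python
import numpy as np

rng = np.random.default_rng(0)

r = 2.5       # "true" value we want to measure (made up for illustration)
sigma = 0.3   # standard deviation of the zero-mean measurement error (also made up)

for n in [10, 100, 1_000, 10_000]:
    # Each measurement is M_i = r + eps_i, with E[eps_i] = 0.
    eps = rng.normal(loc=0.0, scale=sigma, size=n)
    avg = (r + eps).mean()
    # The variance of the average is sigma^2 / n, so its std shrinks like 1/sqrt(n).
    print(f"n={n:6d}  average={avg:.4f}  theoretical std of average={sigma / np.sqrt(n):.4f}")
```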

We don't even need the errors to be uncorrelated (though correlations in the errors may slow down the convergence to the true value).
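As a quick (again, made-up) check of that claim, here's a sketch where the errors are still zero-mean but consecutive errors are positively correlated, following an AR(1) process. The average still homes in on the true value; it just takes more samples to reach the same precision as in the uncorrelated case.

```python
import numpy as np

rng = np.random.default_rng(1)

r = 2.5       # same made-up true value as before
sigma = 0.3   # marginal standard deviation of each error
rho = 0.9     # correlation between consecutive errors (assumed for illustration)

def correlated_errors(n):
    """Zero-mean AR(1) noise: eps_i = rho * eps_{i-1} + a fresh innovation."""
    eps = np.empty(n)
    eps[0] = rng.normal(0.0, sigma)
    for i in range(1, n):
        eps[i] = rho * eps[i - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return eps

for n in [100, 1_000, 10_000]:
    avg = (r + correlated_errors(n)).mean()
    print(f"n={n:6d}  average={avg:.4f}")
```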

The intuition here, which Newton probably had, is that if we make mistakes to the left and right of a value about equally often, averaging will cause those mistakes to wash each other out. I'm sure this is also how averaging was explained to me in my lab courses in high school and college. But at this point, I'd rather just abstract things away to unbiasedness and decreasing variance. I haven't (yet) come to a point in my career where this level of abstraction has hurt me. And I feel that many times it has helped me.

Averages may have been all the rage in the 17th century, but many other approaches exist now. We also have a nice framework for thinking about what sorts of errors these different statistics minimize. Newton did not have the language of \(L_{p}\) norms when he did science. But I have no doubt he would have picked up on them quickly.
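To make that framework concrete: the sample mean is the constant that minimizes the squared (\(L_{2}\)) error against the data, while the sample median minimizes the absolute (\(L_{1}\)) error, \[\bar{M} = \arg\min_{c} \sum_{i=1}^{n} (M_{i} - c)^{2}, \qquad \operatorname{median}(M_{1}, \dots, M_{n}) \in \arg\min_{c} \sum_{i=1}^{n} \lvert M_{i} - c \rvert.\] Pick a different \(L_{p}\) norm and you get a different notion of a 'central' value.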


  1. Though not really. It's more that he (and Gottfried Leibniz) discovered the connection between derivatives (slopes) and integrals (areas). Obviously, people had been dealing with these two tools in isolation since, well, since the first person had to portion out a plot of land. What Leibniz and Newton did was bring them into a unified framework (the calculus) by introducing nice results like the Fundamental Theorem of Calculus.