# A Menagerie of Power Laws

I read this post today over at *God Plays Dice*. Michael points to a post by Robert Krulwich (of Radiolab fame). In this post, Krulwich discusses work done by Geoffrey West on how power laws seem to show up in the relationship between metabolic rates and things like body mass.

Michael ends with a caveat that power laws are overreported in the scientific literature (especially, in my opinion, in the 'complex systems' literature). But he then points to Shalizi's^{1} wonderful *So You Think You Have a Power Law -- Well Isn't that Special?*, which links to a paper by Shalizi, Aaron Clauset, and Mark Newman that demolishes a lot of the supposed power laws reported in large parts of the complex systems literature.

There's a problem with connecting these two things (the West paper and the Shalizi paper), however. I posted a comment on Michael's post, so I'll just quote a part of that comment, with fancier MathJax to make some of the equations look nicer:

Second, I'm going to be a bit pedantic (being an (applied) mathematician-and-complex-systems-scientist-in-training myself) about the difference between Shalizi, et al.'s power law paper and the research Krulwich has linked to.

West's research is a question of regression: given that we observe a particular mass, what is the expected lifespan? That is, we seek to estimate \(E[L | M = m]\), where \(L\) is the lifespan and \(M\) is the mass of the organism. In this case, trying to fit a regression function that happens to take the form of a power (i.e. something like \(m^{-\alpha}\)) is perfectly appropriate. The regression function \(E[L | M = m]\) may well be approximated by this. West's research indicates this parametric form for the regression function is at least a good candidate.
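To make the regression view concrete, here's a minimal sketch with synthetic data (the scaling constants are made up for illustration, not West's actual measurements): regressing \(\log L\) on \(\log M\) recovers the exponent as the slope.

```python
import numpy as np

# Hypothetical data: suppose lifespan scales as L = c * M^alpha,
# observed with multiplicative (lognormal) noise.
rng = np.random.default_rng(0)
true_alpha, true_c = 0.25, 10.0

mass = rng.uniform(1, 1000, size=200)  # body masses, arbitrary units
lifespan = true_c * mass**true_alpha * rng.lognormal(0.0, 0.1, size=200)

# Taking logs linearizes the model:
#   log L = log c + alpha * log M + noise,
# so ordinary least squares on the log-log data estimates alpha.
slope, intercept = np.polyfit(np.log(mass), np.log(lifespan), 1)
print(f"estimated exponent: {slope:.3f}")  # should land near true_alpha
```

This is an ordinary regression problem: a linear fit in log-log coordinates, with the usual least-squares theory behind it, which is why a power-form regression function is unobjectionable here.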

Shalizi and his coauthors are addressing a different question, something more like 'What is the distribution of outbound links for websites on the internet?' In this case, we seek to estimate the probability distribution itself (either the mass function or the density function). We could do this parametrically by assuming that, say, the number of outbound links to a page is power law distributed, and thus the mass function takes the form \[p(x) = C x^{-\alpha}\] where \(x\) is the number of outbound links and \(C\) is a normalization constant. What Shalizi, et al. are warning against is then fitting this model by plotting the empirical distribution function, tracing a line through it, and deriving an estimate of \(\alpha\) from the slope of the fitted line. This is a silly approach with no nice theoretical guarantees. We've known of a better way (maximum likelihood estimation) since at least the early 1900s, and maximum likelihood estimators do have provably nice properties.
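For the continuous power law, the maximum likelihood route even has a closed form: \(\hat{\alpha} = 1 + n\left[\sum_i \ln(x_i/x_{\min})\right]^{-1}\), the estimator analyzed by Clauset, Shalizi, and Newman. Here's a sketch on synthetic samples (the sample size and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_min = 2.5, 1.0

# Draw power-law samples by inverse transform sampling: the CDF is
#   F(x) = 1 - (x / x_min)^(-(alpha - 1))  for x >= x_min,
# so if U ~ Uniform(0, 1), then x_min * (1 - U)^(-1/(alpha - 1))
# has density proportional to x^(-alpha).
u = rng.uniform(size=5000)
x = x_min * (1 - u) ** (-1 / (alpha_true - 1))

# Closed-form MLE for the continuous power law:
#   alpha_hat = 1 + n / sum(log(x_i / x_min))
alpha_hat = 1 + len(x) / np.sum(np.log(x / x_min))
print(f"MLE exponent: {alpha_hat:.3f}")  # should land near alpha_true
```

Unlike the line-through-the-log-log-plot recipe, this estimator comes with the standard maximum likelihood guarantees (consistency, asymptotic normality), so you also get honest standard errors for \(\hat{\alpha}\).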

The two problems (regression and parametric estimation) are clearly related, but they're asking different questions, and those questions admit different statistical tools. When I first heard about these problems, I conflated them as well. The fact that both uses of the monomial \(x^{-\alpha}\) are called 'power laws' certainly doesn't help. And why the name 'law' anyway? We usually don't invoke 'Gaussian laws' when we observe that certain things are typically normally distributed...

Of course, we also have *models* for why the power laws show up in the West-sense. Because if you can't tell a story, you're not doing science!

**Full disclaimer:** Shalizi is one of my scientific / mathematical / statistical heroes, so I'll probably be linking to a *lot* of things by him. The fact that he showed up in Michael's post was an added bonus.↩