Randomness for Self-Improvement

A quick Google search of 'random reinforcement self help' and 'using randomness self help' returns nothing useful. Actually, the first set of terms returns this paper as its first hit, something I'm interested in reading and first discovered through Shalizi. Most of the other hits involve types of reinforcement schedules, including random ones.

But nothing in particular about actively designing a random reinforcement schedule. Which seems odd, since one of the best reinforcement schedules I've ever been subjected to (subject myself to?) has been Facebook's. The arrival of wall posts can surely be modeled as some sort of random process (that would be an interesting project, in fact, and would only require mining my inbox). And their arrivals have induced in me a behavior, common to most Facebook users (I would imagine), of returning to Facebook several times a day. Interestingly, it doesn't seem to matter how infrequent the arrivals of these posts are; in the language of Poisson processes, the rate parameter \(\lambda\) seems to be inconsequential. Though I do find myself increasing my visits to the site immediately after I post something, obviously because that increases \(\lambda\) (as such, I suppose I should be talking about a time-varying rate, \(\lambda(t)\)) and thus my anticipation of a response.
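If I ever do mine that inbox, a first pass might be as simple as simulating a homogeneous Poisson process and checking whether the simulated gaps between wall posts look anything like the real ones. A rough sketch, in Python (the rate of three posts per day is an invented number, not data):

```python
import random

def simulate_arrivals(rate_per_day, n_events, seed=None):
    """Simulate a homogeneous Poisson process: inter-arrival times are
    i.i.d. Exponential(rate), accumulated into arrival times (in days)."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_events):
        t += rng.expovariate(rate_per_day)  # exponential gap until the next post
        arrivals.append(t)
    return arrivals

# An invented rate: three wall posts per day, simulated for 20 posts.
posts = simulate_arrivals(rate_per_day=3.0, n_events=20, seed=42)
print(['%.2f' % t for t in posts])
```

Comparing the empirical inter-arrival times from my inbox against an exponential distribution would be a natural first test of the Poisson assumption.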

I have a few ideas for behaviors that might be diminished by random reinforcement. But not many that would be improved by it. This isn't a new idea, either, as evidenced by this entry. But the idea might warrant further investigation.

And probably a more careful literature review than googling.

Digital Tsundoku

I've just collected all of the books I've ever bookmarked from Amazon (and have yet to purchase) into a single text file. There are 230 of them.

At the rate of six hours per book (a decent estimate of the average time I spend reading a book I manage to finish), that's 1380 hours of reading. Or 57 complete days.
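Spelled out:

\[ 230 \text{ books} \times 6 \,\frac{\text{hr}}{\text{book}} = 1380 \text{ hr} = \frac{1380}{24} \text{ days} = 57.5 \text{ days}. \]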

A digital form of 'tsundoku', the Japanese word for 'buying books and not reading them; letting books pile up unread on shelves or floors or nightstands'.

Randomness for fun and profit

I now spend a part of Fridays going through my RSS reader (NetNewsWire), collecting a list of articles I would like to read. I then save all of those articles in a text file (called links.txt) and use a Python script, referenced via the bash alias lniks, to randomly choose a link and open it in Safari. Now (and this is relatively new), if the article is longer than a paragraph or so, I push it off to Instapaper, so that I can read it on my Nexus 7.
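For the curious: the script behind lniks doesn't need to be more than a few lines. Here is a minimal sketch of the idea, not the actual lniks.py; the one-URL-per-line format and the macOS `open -a Safari` call are my assumptions:

```python
import random
import subprocess

LINKS_FILE = 'links.txt'  # one URL per line

# Read the saved links, skipping blank lines.
with open(LINKS_FILE) as f:
    links = [line.strip() for line in f if line.strip()]

# Pick one uniformly at random and hand it to Safari (macOS 'open').
link = random.choice(links)
subprocess.call(['open', '-a', 'Safari', link])
```

The bash alias is then something like `alias lniks='python ~/path/to/lniks.py'`.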

This is an example of actively incorporating randomness into my life. Before I started using the lniks script, I found that certain articles would go unread for a very long time. Admittedly, this meant that I got through the articles I most wanted to read first. But it left the others wasting away, never to be read.

Actually, I invented this system completely by accident. Before, I kept the articles in folders sorted by topic (MISC, SCI, TECH, META, etc.). Once, I failed to save these folders and had to reconstruct my weekly dose of articles through the 'History' feature in Safari. It turned out that when I saved the articles from 'History' into my 'Bookmarks' folder, Safari shuffled all of the articles around. "What a bother!" was my first thought. But as I started working my way through the articles, I found that my pace of consumption greatly increased, as I felt more comfortable deleting articles I didn't really have an interest in reading.

So, for a while, I recreated this 'failure to save bookmarks' experience, until I realized that it was the randomness of the bookmarks' presentation that I really wanted to capture. Thus, lniks.py, and my new habit of weekly article consumption.

Testing footnotes with MultiMarkdown

Here is a preface to a footnote[^1].

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc sed arcu hendrerit libero rutrum egestas. Ut ut dolor a justo pretium porttitor. Aliquam pharetra lacinia elit, egestas scelerisque metus volutpat sit amet. Pellentesque magna sapien, egestas eget dapibus at, sollicitudin nec dolor. Donec orci turpis, posuere eget consequat sit amet, pellentesque nec turpis. Nullam mauris eros, consequat nec dapibus in, hendrerit lobortis enim. Pellentesque et enim sit amet nisi elementum malesuada. Integer eu orci erat. Integer eu semper libero. Fusce sed sapien at neque rutrum elementum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Pellentesque auctor elit et sapien pharetra nec luctus eros semper. Duis commodo molestie felis, et pellentesque lacus sagittis rhoncus. Donec tempus tristique nibh, adipiscing rhoncus sapien dignissim non. Quisque sed neque leo. Sed sapien diam, fringilla et rutrum eget, aliquet vel velit. Donec sed augue justo, ac ultricies justo. Pellentesque in dui nibh. Aenean dictum rhoncus purus, ut lobortis elit vehicula at. Nulla et ligula lacus. Sed a scelerisque tellus. Suspendisse potenti. Nullam ac fermentum orci. In vestibulum orci vitae turpis auctor non viverra tellus gravida. Curabitur sodales cursus nunc et varius. In hac habitasse platea dictumst. Nullam lobortis ornare nisi in pharetra. Fusce eu iaculis quam. Aenean blandit semper nunc nec facilisis. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin at libero sit amet lacus tincidunt ultrices sit amet sed enim. Pellentesque tristique sodales blandit. Etiam tortor sapien, cursus in accumsan at, accumsan ac dui. Pellentesque eu dui metus. Sed eu risus id risus pellentesque egestas vestibulum in sapien. In auctor luctus orci et fringilla. Duis feugiat venenatis felis sed posuere. Pellentesque vitae tortor arcu, ut euismod sapien. Suspendisse placerat leo leo, nec imperdiet diam. Maecenas ipsum dui, condimentum et elementum id, consectetur sit amet nisl. Etiam sit amet est sapien, vitae molestie orci. Aenean pellentesque rhoncus dapibus. Donec bibendum orci eu libero vulputate pellentesque. Pellentesque est ante, consequat in consectetur sed, tempus et mauris. Ut non ultrices nunc. Suspendisse commodo porttitor sodales. Suspendisse viverra, elit non posuere tincidunt, elit lectus ullamcorper tellus, in convallis sem urna vitae sapien. Quisque ultricies posuere odio, quis porttitor orci dapibus vel. Sed tincidunt consequat felis, eget convallis augue vehicula sit amet. Aliquam dignissim augue sit amet sapien scelerisque vel volutpat arcu molestie. Donec velit metus, rhoncus at porta at, molestie et nunc. Quisque semper egestas ullamcorper. Suspendisse dapibus dictum tellus id hendrerit. Nulla et lorem mi. Cras porta elit vitae lorem feugiat faucibus. Praesent massa sapien, facilisis quis gravida et, volutpat a velit. Nam eros odio, pretium ut convallis at, interdum ut eros. Quisque egestas massa sit amet nunc consectetur fringilla. Integer ultricies felis in elit venenatis interdum. Nullam elit nisl, imperdiet a gravida et, semper id ligula. Sed dui lectus, pulvinar sed sollicitudin at, fermentum eu quam. Duis a metus neque, in imperdiet felis. Aliquam ac nibh orci, hendrerit tincidunt tellus.


[^1]: And here is the footnote text itself.
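For reference, that's MultiMarkdown's reference-style footnote syntax, where the label can be any identifier, not just a number:

```
Here is a preface to a footnote[^mynote].

[^mynote]: And here is the footnote text itself.
```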

Reading wasserman

I intentionally didn't capitalize Wasserman's name in the title of this post. Why? Because I've been reading a lot of his writing lately (see, for instance, here, where he misspells 'hear' as 'here' at one point), and I have noticed that his posts have a lot of typos. I've also found several typos in his text All of Statistics (and even made it into the acknowledgements of his errata for the second edition, for noticing where he misspelled Mahalanobis).

My point isn't to critique his spelling. Actually, my point is the opposite of that. Wasserman is a brilliant statistician. And he has better things to do than proofread all of his blog posts and textbooks for trivial mistakes like these. He's too busy, you know, doing amazing statistics. That's a good indication of what I should be doing if I want to become even a slightly-above-average scientist. Push out more material. Write more. Formulate more ideas. Spelling mistakes be damned.

Which is the long-winded explanation for why I didn't capitalize Wasserman's name in the title of this post.

All of that said, if you have any interest in mathematics, statistics, machine learning, or just generally cool stuff, you should check out Wasserman's blog at Normal Deviate. (Great pun!)

Let me do the math...

One of my least favorite phrases. But let's see if this works:

\[ f_{X}(x) = \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{1}{2 \sigma^2} (x - \mu)^2}\]

Yep. That's the probability density function for a normal random variable, \( X \sim N(\mu, \sigma^{2}) \). The most math-y thing I could think of at the moment. Which shows you where my head is at.
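And since I'm claiming that's the normal density, here's a quick numerical sanity check in Python, comparing a hand-coded version of the formula against scipy's implementation (assuming scipy is installed):

```python
import math
from scipy.stats import norm

def normal_pdf(x, mu=0.0, sigma=1.0):
    """The N(mu, sigma^2) density, straight from the formula above."""
    coef = 1.0 / math.sqrt(2.0 * math.pi * sigma ** 2)
    return coef * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The two should agree to floating-point precision.
print(normal_pdf(1.3, mu=0.5, sigma=2.0))  # hand-coded
print(norm.pdf(1.3, loc=0.5, scale=2.0))   # scipy's version
```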

PS - The formulas in this post were generated using MathJax. Not quite as pretty as LaTeX, but it at least uses the same syntax.