Posts

It’s common for modelers to want to estimate uncertainty over fractions or probabilities. For example, we might want to estimate the fraction of people who support a political candidate or the propensity for someone to cooperate. In these cases and many others, we often use a beta distribution (or a Dirichlet distribution when there are more than two categories) to model the plausibility of fractions or probabilities. But why? The following physical analogy to sampling balls from an urn may provide helpful motivation.
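As a minimal sketch of that analogy (an illustration, not the post’s own code, and assuming SciPy is available with made-up counts): an urn holds red and white balls in an unknown proportion, and after n draws with replacement in which k balls come up red, a uniform Beta(1, 1) prior over the unknown fraction updates to Beta(1 + k, 1 + n − k).

```python
# Sketch of the urn analogy: conjugate beta-binomial update of a uniform prior.
# The counts below (7 red out of 20 draws) are hypothetical.
from scipy.stats import beta

k, n = 7, 20                          # hypothetical draws: 7 red out of 20
posterior = beta(1 + k, 1 + n - k)    # Beta(1, 1) prior updated by the draws

lo, hi = posterior.interval(0.95)     # central 95% credible interval
print(f"posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The posterior density over the unknown fraction is exactly the beta distribution’s job: it assigns a plausibility to every candidate value between 0 and 1, given the balls drawn so far.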

In the fall of 2021, I taught a seminar on good reasoning. A central aim of the course was to teach students to think about probability as an extension of propositional logic. Although I failed miserably at teaching this perspective, I continue to believe that there’s value in the approach, which was championed by the physicist Edwin Thompson Jaynes, but goes back to the origins of probability theory with Laplace.

A friend of mine once told me about a colleague of his with a controversial opinion: it’s a good thing that elite colleges preferentially admit legacy students. Why? We should expect the children of alumni to share many of the impressive qualities of their parents. For example, the children of Barack Obama are more likely to be exceptionally intelligent and determined than the average applicant to Columbia. Given this, colleges should use what they know about legacy status to predict who will succeed.

I recently read a book about how Big Data can help you “get what you really want in life” and avoid over-relying on your gut instincts when making important choices like whom to marry or what hobbies to take up. As the author suggests in this interview, the goal of the book is not to tell people to completely ignore their intuitions, but to get them to adopt more of a Moneyball approach to life decisions based on the latest rigorous Big Data analyses.

All of the code for this workshop is available on GitHub. We encourage you to play around with the examples as you follow along. Bayesian inference gives us a recipe for optimally revising uncertain beliefs after receiving evidence. Often, we have good models of how hidden states of the world produce different outcomes, but what we really want to know is which hidden state produced the outcomes we observed. Bayes’ rule tells us how to solve such inverse problems.
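To make the inverse-problem framing concrete, here is a small sketch (an illustration under assumed values, not the workshop’s own code): the forward model says how likely a sequence of coin flips is under each of two hidden coins, a fair one and a biased one, and Bayes’ rule inverts it to give a posterior over which coin produced the flips we saw.

```python
# Sketch: Bayes' rule inverting a simple generative model of coin flips.
# The biases, prior, and flip sequence below are made up for illustration.
import numpy as np

biases = np.array([0.5, 0.8])        # P(heads) under "fair" and "biased"
prior  = np.array([0.5, 0.5])        # uniform prior over the hidden coin

flips = [1, 1, 0, 1, 1]              # observed data: 1 = heads, 0 = tails
heads, tails = sum(flips), len(flips) - sum(flips)
likelihood = biases**heads * (1 - biases)**tails   # forward model

posterior = likelihood * prior       # Bayes' rule: posterior ∝ likelihood × prior
posterior /= posterior.sum()         # normalize so the posterior sums to one

print(dict(zip(["fair", "biased"], posterior.round(3))))
```

The same recipe scales up: whatever the hidden states are, multiply each one’s prior plausibility by how well it predicts the evidence, then renormalize.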