Riding my Peloton is one of my favorite hobbies. Sometimes, though, my real-life friends don’t want to be my Peloton friends. They prefer to keep their profiles private because they’re embarrassed by their output and think that I’ll judge them.
Well, let me be the first to say that I won’t judge you! But that’s beside the point. Suppose I did want to judge you. You couldn’t escape my judgment anyway. For starters, what makes you think that I — a perfect Bayesian reasoner — have inflated expectations of your output? If my estimates are well calibrated, then, prior to learning your true output, I should be just as likely to underestimate your fitness as I am to overestimate it. And, presumably, your expected emotions should track my expected surprise when I learn about your performance. That is, your expected delight if I had underestimated your strength should mirror your expected embarrassment if I had overestimated it. Therefore, you shouldn’t expect to be embarrassed ex ante.
But there’s a bigger problem with your potential refusal to friend me. If you reject my request, this decision will reveal new and unflattering information about your skill level.
Imagine that, before I ask you to friend me, I think that your output is around average for a Peloton rider in your demographic, and you know that I believe this. If you then tell me you don’t want to be my friend, I’ll reason as follows. First, if you were above the average skill level for your demographic, you would want to share that information with me, since it would lead me to positively update my belief about your fitness. Since you don’t share it, I’ll conclude that you must be below average for your demographic. But you should also anticipate that your rejection will lead me to update my belief in this way, so if you were better than my new, lower estimate, you would want to prove it. Your continued silence therefore reveals that you’re below even my new guess. But then I can take this logic one step further, and drop my expectations even more as you remain silent. And so on, ad infinitum.
In other words, if you reject my invitation, you’ll out yourself as the worst Peloton rider imaginable! Isn’t that more embarrassing than just accepting my friend request?
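To make the recursion concrete, here is a small sketch of the unraveling argument in Python. The watt numbers are invented for illustration: I start from the population mean as my guess, treat your silence as evidence that you’re below it, and repeat.

```python
# A sketch of information unraveling. Types are 30-minute average
# outputs in watts, drawn from a hypothetical uniform grid.
types = list(range(60, 181))  # possible outputs: 60 to 180 watts

guess = sum(types) / len(types)  # my prior estimate: the population mean
for step in range(10):
    # Anyone above my current guess would reveal, so silence implies "below".
    silent = [t for t in types if t <= guess]
    new_guess = sum(silent) / len(silent)
    if abs(new_guess - guess) < 1e-9:
        break
    types, guess = silent, new_guess

print(round(guess))  # the estimate collapses to the worst type: 60
```

Each pass conditions on one more round of silence, and the estimate ratchets downward until only the very worst type remains consistent with staying quiet.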
When unraveling fails
Okay, maybe I don’t actually believe that those who spurn my friend requests are the worst of the worst riders. For one thing, there could be other reasons for you to keep your profile private, such as the desire to hide the fact that you’re spinning during work hours (I do this too). Even if you only worried about my judgment of your fitness, though, the logic of “information unraveling,” which I facetiously outlined above, breaks down under most realistic conditions. Let’s see why.
First, a key assumption of the unraveling logic is that you respond in a deterministically ‘optimal’ way to my inferred beliefs. So, even before I do any fancy recursive theory of mind, if I think that you expect me to estimate your ability as average for your demographic, I assume that you will definitely reveal your quality if you’re above this average and definitely not reveal your quality if you’re below this average. But this assumption is unrealistic for a number of reasons.
For one thing, we each have limited information about what the other believes. That is, in most cases, I can’t perfectly estimate what you think I initially believe your skill level is. Suppose I believe that the typical rider in your demographic outputs an average of 120 watts over a 30-minute ride, and I incorrectly assume that you know this is my initial guess for your output. In reality, though, you think that I would estimate your output at 150 watts. If your true output is 130 watts, you might not want to share that information, because you believe you’re below my supposed estimate of 150 watts, even though you’re actually above my true estimate of 120 watts.
Even if we do share the same estimate, though, I can’t expect you to be thinking all that carefully about how you compare to my baseline belief about you. While it’s probably the case that — putting humility aside — you’d rather share a surprisingly impressive number than a surprisingly unimpressive one, it’s less obvious how you might respond if your output is close to what we both agree would be my initial guess — especially if you’re a bit shy or overconfident.
There’s a second major issue with the unraveling logic: I must assume that you’re simulating my thought process through many levels of recursion. But this may be cognitively infeasible. Perhaps if you’re just a little below my baseline belief about you, you expect me to lower my expectations about you if you don’t accept my friend request, but you don’t consider the further possibility that I will expect you to make a new decision in light of my lowered expectations. If I can’t assume that you’ll be running through a long chain of recursive inference, then I also can’t assume that you’re at the bottom of the distribution of Peloton riders.
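One simple way to capture bounded recursion is to cap the depth of the inference: if you only simulate me through \(k\) rounds, my estimate only falls \(k\) steps rather than collapsing to the bottom. A sketch, again using a made-up grid of watt outputs:

```python
def estimate_after_k_rounds(types, k):
    """Depth-limited unraveling: carry the 'silence means below my current
    guess' inference through only k rounds, rather than ad infinitum."""
    guess = sum(types) / len(types)
    for _ in range(k):
        silent = [t for t in types if t <= guess]
        types, guess = silent, sum(silent) / len(silent)
    return guess

types = list(range(60, 181))              # hypothetical 30-minute watt outputs
print(estimate_after_k_rounds(types, 1))  # 90.0: one step of inference
print(estimate_after_k_rounds(types, 7))  # 60.0: full collapse to the worst type
```

With only one level of recursion, silence costs you a single downward update; the jump from there to “worst rider imaginable” requires many more levels than most people plausibly compute.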
To simplify the math, we can suppose that I want to minimize squared error, and therefore I want to report the expected value (average) of my posterior belief.↩︎
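For completeness, the standard one-line derivation: the guess \(g\) that minimizes expected squared error under my posterior over your output \(X\) is the posterior mean, since

\[
\frac{d}{dg}\,\mathbb{E}\!\left[(X-g)^2\right] \;=\; 2g - 2\,\mathbb{E}[X] \;=\; 0 \quad\Longrightarrow\quad g^{*} = \mathbb{E}[X].
\]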
I’m assuming here, somewhat unrealistically, that \(\tau\) is common knowledge. I don’t expect the results to change much if it’s not, so long as it’s generally known that you’re relatively more or less random in your decisions.↩︎
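The exact noise model isn’t pinned down here, but a common choice for \(\tau\)-style decision noise is a logistic (softmax) reveal rule; treat the functional form and the numbers below as assumptions for illustration, not the analysis’s actual specification.

```python
import math

def reveal_probability(output, guess, tau):
    """Logistic decision noise: the further your output sits above my
    current guess, the more likely you are to reveal. The temperature
    tau controls randomness; as tau -> 0 this approaches the
    deterministic threshold rule."""
    return 1.0 / (1.0 + math.exp(-(output - guess) / tau))

print(round(reveal_probability(130, 120, 5), 3))  # well above the guess: 0.881
print(round(reveal_probability(120, 120, 5), 3))  # exactly at the guess: 0.5
```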
Of course, the ability to apply Bayes’ rule is itself an assumption of rationality, which could be relaxed, but I will not be exploring this question further.↩︎
For this analysis, I assume that you (the signaler) have one more level of theory of mind than I (the guesser) do. That is, you use my \(k\)-level belief to decide whether to reveal your card, but I don’t further update my guess by modeling your decision at level \(k+1\).↩︎
Note that this might not be true under different conditions. For example, if it were particularly bad to be outed as having an extremely low number, perhaps decision noise could be more helpful in that type of situation.↩︎
You could also imagine playing this game in a big group where everyone had a different number. In that case, the people at the top would reveal their numbers first, prompting the next best to reveal their numbers, and so on. No sophisticated theory of mind is required in this case.↩︎
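That group dynamic is easy to simulate: each round, anyone whose (hypothetical) number beats the average of the still-hidden players reveals, and only the very worst player never does. No recursion over beliefs is needed; each player only compares against the current crowd.

```python
# Group unraveling: players hold distinct numbers (made up for
# illustration) and reveal whenever they beat the average of the
# players who are still hidden.
hidden = list(range(1, 11))  # ten players holding numbers 1..10
rounds = []
while len(hidden) > 1:
    avg = sum(hidden) / len(hidden)
    revealed = [n for n in hidden if n > avg]
    if not revealed:
        break
    rounds.append(sorted(revealed, reverse=True))
    hidden = [n for n in hidden if n <= avg]

print(rounds)  # the best numbers come out first, round by round
print(hidden)  # only the single worst player never reveals
```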