We’re often willing to undergo unpleasant short-term experiences for long-term benefits. We exercise to promote longevity, study hard to get better exam scores, and moderate vices like alcohol or gambling to avoid the risks of harmful addiction.
We also sometimes engage in such behaviors to signal desirable features about ourselves to other people. We’d like others to know — or, at least, believe — that we’re fit, industrious, temperate, and so on. Because of these signaling motives, we might push ourselves harder than we normally would to help secure a new job or romantic partner.
Might we also have a motive to signal these desirable features to ourselves? That is, even when nobody else is around or the long-term benefits are slight, might it make sense to endure some short-term suffering just to prove to ourselves that we’re the kind of person we want to be? The logic of doing this hinges on whether our behaviors can act as genuine self-signals of desirable traits. Critics of this approach instead argue that these behaviors are the product of self-deception. Who is right?
Self-signaling to a longer life
Consider the following example loosely inspired by a classic study from Quattrone and Tversky. There’s a device in front of you that’s known to deliver an extremely painful electric shock for up to five seconds. People who can withstand the full five-second shock are expected to live several years longer than those who quit early. Moreover, people’s tolerance for this shock is idiosyncratic and not easily predicted by basic demographic features or known health conditions. Would you want to hook yourself up?
You might first probe the mechanism that links the shock to longevity. If you have reason to believe that withstanding the full shock causes you to live longer, then your decision is straightforward. It’s worth trying to withstand all five seconds: the worst that happens is that you give up early and suffer some temporary pain, while there’s at least a small chance that you substantially lengthen your life, a benefit that outweighs the short-term cost. Your decision in this case is much like the decision to do vigorous exercise: painful in the moment, but ultimately beneficial.
To understand the logic of self-signaling, however, we need to consider a different situation. Suppose it’s known that the shock has no causal impact on health; it’s merely diagnostic of longevity. Being able to withstand five seconds of this unusual kind of pain indicates — in a way that’s not easily verifiable through other means — that you have good longevity genes. For the sake of the example, let’s further suppose that you can’t act on this information to, say, pursue special gene therapies or exercise regimes. In other words, while it would be appealing to learn that you have these good longevity genes, you won’t change your future behavior after learning whether or not you can tolerate the shock. So the only reason to shock yourself is to potentially learn good news about your health; otherwise, you sign up for intense pain that you’d rather avoid.
Should you do it, with the hopes of learning good news? Despite finding evidence that people seem to subject themselves to this kind of diagnostic suffering, Quattrone and Tversky don’t even entertain the idea that self-signaling could be rational. They interpret their participants’ behavior as straightforward self-deception, especially in light of the fact that very few of these participants acknowledged that they adjusted their behavior to appear healthier. The implication is that, if these participants accurately accounted for the fact that they had a motive to self-signal, their pain-inducing behavior would no longer be diagnostic of longevity, and there would be no signal. Hence, these participants are self-deceiving.
But those who are familiar with the game theory of costly signaling will recognize this to be a weak criticism. Even if people knowingly push themselves to tolerate more pain in light of a self-signaling motive, pain tolerance can still signal good health.1 Standard costly signaling models assume that signalers have a signaling motive when sending signals to other people, yet it can still be rational for receivers of the signals to update their beliefs about the signalers. Concretely, in our electric shock example, we can imagine that the pain of the shock is extreme enough that even those with a strong motive to withstand the experience sometimes fail. In that case, if somebody observes you make it through the full five seconds, this observer should positively update their belief about your lifespan, as not everyone could withstand the shock even if they really wanted to succeed.
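To see the arithmetic, here’s a minimal sketch in Python (every number below is invented for illustration). Even if everyone hooked up to the machine is fully motivated to endure the shock, success is more likely for those with good longevity genes, so observing a success still shifts the observer’s belief upward:

```python
# Minimal Bayesian update for the shock example (illustrative numbers).
p_good = 0.5          # observer's prior that you have good longevity genes
p_succ_good = 0.8     # P(withstand full shock | good genes)
p_succ_bad = 0.3      # P(withstand full shock | bad genes)

# Marginal probability of success, then Bayes' rule for P(good | success).
p_succ = p_good * p_succ_good + (1 - p_good) * p_succ_bad
p_good_given_succ = p_good * p_succ_good / p_succ

print(f"Prior P(good genes):          {p_good:.3f}")             # 0.500
print(f"Posterior P(good | success):  {p_good_given_succ:.3f}")  # 0.727
```

Failure is informative in the other direction, of course: with these numbers, the posterior after quitting early drops to about 0.22.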
This same logic can be applied to self-signaling, just with the observer and signaler as the same person. You the ‘observer’ are uncertain about whether you can withstand the full shock ahead of time. Thus, if you observe yourself succeed (even in light of a signaling motivation to succeed), it’s rational for you to positively update your belief about your lifespan.
In sum, even if a behavior doesn’t cause you to have desirable traits, it can still provide genuine evidence of these traits. Moreover, as I argue in an earlier post on Newcomb’s problem, evidence, rather than causality, is what matters for decision making. If you value a long life, you should make choices that give you the best evidence that you’ll live a long life, regardless of whether your behavior causes a longer life or merely predicts a longer life. As I further argue in that post, though, it’s nearly impossible for non-causal behaviors to provide new evidence, once you account for the information you already have. Have we found an exception in self-signaling behaviors?
No pain, no (information) gain
To recap, in order to understand when self-signaling could be rational, we need to understand when standard (other-person) signaling is rational. In essence, signaling is about transmitting flattering information to an observer. In our electric shock example, you might want to convey to an observer that you’re expected to live a long life. You can’t simply tell a skeptical observer that you’re going to live a long life, as this is cheap talk that could be a self-serving lie. As we’ve established, though, you could send a valid signal of your health by showing the observer that you can withstand the full five-second shock. This comes at the cost of pain, but can be worthwhile if the reputation benefits are large enough to outweigh this cost.
There’s a critical further assumption of signaling: the observer needs to learn something new from your behavior. If the observer knows ahead of time that you will successfully withstand the shock, your behavior doesn’t teach them anything new; they already know that you likely have a good health profile, and it would be foolish for you to suffer the pain of the shock for no reason.
The same lesson applies to self-signaling: if you know with certainty that you can endure the shock, you already have all the flattering information about your health that you could possibly get, and there’s no reason to suffer through the pain.2
However, we’ve stipulated that you don’t know ahead of time whether you could make it through the full five seconds of this shock. So neither you nor a potential observer can perfectly predict your success (or failure). Could this make self-signaling rational?
To simplify the problem, let’s imagine that the information you’d like to signal (either to yourself or an observer) is that you have a high probability of surviving to (at least) 90 years old. Letting \(L\) stand for your lifespan in years, let’s denote this outcome as \(L \geq 90\). Because your ability to withstand the shock is correlated with longevity, you and the observer both know that
\[ P(L \geq 90 | D = S) > P(L \geq 90 | D = \neg S), \]
where \(D=S\) indicates that you successfully endure the full five-second shock, and \(D = \neg S\) denotes failure. In other words, you know that attempting to withstand the shock will yield information about your probability of surviving to 90. You’ll be more optimistic about your chances of living to 90 if you succeed than if you fail.
Given that you don’t know whether you’ll succeed, though, the question facing a (self-)signaler is whether it’s worth trying to withstand the shock for five seconds in order to signal flattering information about your health.3 As we’ve established, it can only be worth doing this if you expect the (self-)observer to hold a higher estimate of your survival odds after you attempt the shock.4 That is, if we let \(P_{obs}(L \geq 90)\) denote the observer’s belief prior to your trying the machine, and \(E_{signaler}[P_{obs}(L \geq 90 | D)]\) denote your expectation of the observer’s future belief after you make your attempt, it’s necessary (though not sufficient) for rational signaling that
\[ E_{signaler}[P_{obs}(L \geq 90 | D)] > P_{obs}(L \geq 90). \]
When signaling to another person, for example, you might have private information that you tend to have high pain tolerance or insensitivity to electrical sensations. In that case, you expect the observer to be pleasantly surprised by your attempt to endure the shock, even if you’re not certain that you’ll succeed.
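Here’s a sketch of this asymmetric-information case in Python, again with invented numbers. The observer’s posterior depends only on the outcome they see, but you, the signaler, average over outcomes using your own better-informed success probability:

```python
# Asymmetric-information signaling sketch (illustrative numbers).
p_live_given_success = 0.6   # P_obs(L >= 90 | D = S)
p_live_given_failure = 0.3   # P_obs(L >= 90 | D = not S)

p_succeed_obs = 0.5          # observer's estimate of P(D = S)
p_succeed_sig = 0.8          # signaler's private estimate of P(D = S)

# The observer's prior equals their own expected posterior
# (law of total probability over the two outcomes).
prior_obs = (p_succeed_obs * p_live_given_success
             + (1 - p_succeed_obs) * p_live_given_failure)

# The signaler's expectation of the observer's future belief instead
# weights the same posteriors by the signaler's success probability.
expected_posterior = (p_succeed_sig * p_live_given_success
                      + (1 - p_succeed_sig) * p_live_given_failure)

print(f"Observer's prior:                       {prior_obs:.3f}")          # 0.450
print(f"Signaler's expected observer posterior: {expected_posterior:.3f}") # 0.540
```

Because you think success is more likely than the observer does (0.8 versus 0.5), your expectation of their posterior (0.54) exceeds their prior (0.45), satisfying the necessary condition above.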
But what happens when you are the observer (i.e., you’re sending a signal to yourself)? The inequality above simplifies to
\[ E[P(L \geq 90 | D)] > P(L \geq 90). \]
That is, you must expect to have a higher estimate of your lifespan upon observing your shock tolerance than the estimate you have now.
But this is incoherent: your expected future belief is computed entirely from beliefs you already hold, namely your current estimate of your probability of enduring the shock, \(P(D = S)\), together with what you’d conclude after each possible outcome. Common sense therefore suggests that this expected future belief should already be reflected in your prior belief, \(P(L \geq 90)\).
Indeed, it follows from the axioms of probability that
\[ \begin{align} P(L \geq 90) &= E[P(L \geq 90 | D)] \\ &= P(D = S)P(L \geq 90 | D = S) + (1-P(D = S))P(L \geq 90|D = \neg S). \end{align} \]
In words, your current estimate of your odds of living to 90 should be exactly equal to your expected future belief, which is a weighted average of what you’d believe if you succeed and what you’d believe if you fail. In your current belief, the good news you hope to receive is perfectly balanced out by the bad news you hope not to receive, so you can’t expect to learn good news.
The above formula is an instance of the more general martingale property of rational beliefs, which likewise implies that
\[ E[L]=P(D=S)E[L|D=S]+(1-P(D=S))E[L|D=\neg S]. \]
Translation: the lifespan you expect before observing your shock tolerance should equal what you expect your future expected lifespan to be. And this equality holds even when the quantity you care about is the expected utility of possible lifespans, which need not increase linearly in the number of years lived.
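To check this concretely, here’s a sketch in Python built on a toy joint model (all numbers invented) in which genes drive both shock tolerance and lifespan. However you set the numbers, the expected posterior lands exactly on the prior, both for the probability of reaching 90 and for expected lifespan:

```python
# Toy joint model (illustrative numbers): genes G drive both shock
# tolerance D and lifespan L, so D is informative about L only through G.
p_good = 0.5                              # P(G = good)
p_succ = {"good": 0.8, "bad": 0.3}        # P(D = S | G)
p_live = {"good": 0.7, "bad": 0.2}        # P(L >= 90 | G)
e_life = {"good": 88.0, "bad": 74.0}      # E[L | G] in years

p_s = p_good * p_succ["good"] + (1 - p_good) * p_succ["bad"]  # P(D = S)

def posterior_good(success: bool) -> float:
    """P(G = good | D) by Bayes' rule."""
    lg = p_succ["good"] if success else 1 - p_succ["good"]
    lb = p_succ["bad"] if success else 1 - p_succ["bad"]
    return p_good * lg / (p_good * lg + (1 - p_good) * lb)

def mix(table: dict, pg: float) -> float:
    """Average a gene-conditional quantity when P(G = good) = pg."""
    return pg * table["good"] + (1 - pg) * table["bad"]

for name, table in [("P(L >= 90)", p_live), ("E[L]", e_life)]:
    prior = mix(table, p_good)
    expected_posterior = (p_s * mix(table, posterior_good(True))
                          + (1 - p_s) * mix(table, posterior_good(False)))
    print(f"{name}: prior = {prior:.4f}, "
          f"expected posterior = {expected_posterior:.4f}")
# Both pairs match exactly: you can't expect to learn good news from yourself.
```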
In short, rational self-signaling seems impossible. Unless…
The multiple selves within you
So far, we’ve assumed that you the signaler are identical to you the observer; that is, there’s a single you. While this may seem like a trivial assumption, there’s considerable evidence that the mind is modular. The conscious agent within you who is deciding whether to take an action for its signaling benefit may have access to different information than the unconscious module ‘selves’ within your mind. Moreover, these modular processes are somewhat cognitively impenetrable — your conscious thoughts cannot directly communicate with them. Under these conditions, perhaps the only way to transmit flattering information about yourself to these unconscious selves is via costly actions.
Why bother? Suppose your conscious self relies on unconscious modules to accomplish tasks that your conscious self desires. For example, perhaps some unconscious processes in your mind control essential aspects of your motivation and energy levels. When these unconscious processes are in a ‘pessimistic’ state, it’s difficult for you to muster the energy to do productive things like exercise or write. Your unconscious ‘self’ is akin to an external observer who’s skeptical that your actions can make a difference to your life, and therefore, this ‘self’ doesn’t want to expend resources for no expected benefit.
But perhaps your conscious self has a slightly more sanguine attitude about what you can accomplish. If only you (the conscious self) could get some motivation and energy back, you could start building up to that marathon you want to run or that book you want to write. To build back to this motivation, you first need to convince your unconscious self that you can accomplish simple tasks.
Here’s where self-signaling could play a role. With the limited energy that you have, you might try to go for a walk or write a short blog post, simply to prove to the unconscious gatekeeper of your motivation that you can accomplish things. Although you’re not guaranteed to succeed at even these basic tasks, your signaling attempt could be worthwhile if your conscious self is more optimistic than the unconscious observer. Hopefully, you do succeed, and some of your motivation returns. Your conscious self is now in a better position to accomplish more ambitious goals.5
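If this modular picture is right, the math reduces to the two-party case from earlier, with the unconscious module cast as the skeptical observer. A small sketch with invented numbers:

```python
# Multiple-selves version of the earlier asymmetric-information sketch
# (illustrative numbers). The unconscious 'observer' doubts you'll finish
# a small task; the conscious self is privately more optimistic.
p_capable_if_finish = 0.7   # module's belief in your efficacy if you finish
p_capable_if_quit = 0.3     # ... if you quit partway

p_finish_module = 0.4       # unconscious module's estimate you'll finish
p_finish_conscious = 0.7    # conscious self's (private) estimate

prior = (p_finish_module * p_capable_if_finish
         + (1 - p_finish_module) * p_capable_if_quit)
expected = (p_finish_conscious * p_capable_if_finish
            + (1 - p_finish_conscious) * p_capable_if_quit)

print(f"Module's prior belief in your efficacy: {prior:.2f}")     # 0.46
print(f"Expected belief after you attempt:      {expected:.2f}")  # 0.58
```

Because the conscious self is more optimistic about finishing than the module is, attempting the task raises the module’s opinion in expectation, so the attempt can be a rational self-signal.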
See, for example, the “true interpretation” models from Bodner and Prelec.↩︎
This logic only applies when your shock tolerance is merely diagnostic of longevity. If enduring the shock actually caused better health outcomes, like vigorous exercise does, you would need to take action.↩︎
Recall that we’re ignoring possible benefits of mere uncertainty reduction. What you care about is learning good news and avoiding bad news about your lifespan — not learning any news.↩︎
For simplicity, I’m assuming (perhaps unrealistically) that the observer can’t learn anything from your decision not to use the machine.↩︎
Thanks to Matt Cashman for helpful discussion and feedback.↩︎