Friday, October 2, 2015

Part II: Consciousness, or The Agency of Multiple Selves

"It used to be simpler. According to the traditional view, a single, long-term-planning self—a you—battles against passions, compulsions, impulses, and addictions. We have no problem choosing, as individuals or as a society, who should win, because only one interest is at stake—one person is at war with his or her desires. And while knowing the right thing to do can be terribly difficult, the decision is still based on the rational thoughts of a rational being."
The Atlantic:
The question “What makes people happy?” has been around forever, but there is a new approach to the science of pleasure, one that draws on recent work in psychology, philosophy, economics, neuroscience, and emerging fields such as neuroeconomics. This work has led to new ways—everything from beepers and diaries to brain scans—to explore the emotional value of different experiences, and has given us some surprising insights about the conditions that result in satisfaction.

But what’s more exciting, I think, is the emergence of a different perspective on happiness itself. We used to think that the hard part of the question “How can I be happy?” had to do with nailing down the definition of happy. But it may have more to do with the definition of I. Many researchers now believe, to varying degrees, that each of us is a community of competing selves, with the happiness of one often causing the misery of another. This theory might explain certain puzzles of everyday life, such as why addictions and compulsions are so hard to shake off, and why we insist on spending so much of our lives in worlds—like TV shows and novels and virtual-reality experiences—that don’t actually exist. And it provides a useful framework for thinking about the increasingly popular position that people would be better off if governments and businesses helped them inhibit certain gut feelings and emotional reactions.

But there is no consensus about the broader implications of this scientific approach. Some scholars argue that although the brain might contain neural subsystems, or modules, specialized for tasks like recognizing faces and understanding language, it also contains a part that constitutes a person, a self: the chief executive of all the subsystems. As the philosopher Jerry Fodor once put it, “If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”

More-radical scholars insist that an inherent clash exists between science and our long-held conceptions about consciousness and moral agency: if you accept that our brains are a myriad of smaller components, you must reject such notions as character, praise, blame, and free will. Perhaps the very notion that there are such things as selves—individuals who persist over time—needs to be rejected as well.

The view I’m interested in falls between these extremes. It is conservative in that it accepts that brains give rise to selves that last over time, plan for the future, and so on. But it is radical in that it gives up the idea that there is just one self per head. The idea, instead, is that within each brain, different selves are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.

The notion of different selves within a single person is not new. It can be found in Plato, and it was nicely articulated by the 18th-century Scottish philosopher David Hume, who wrote, “I cannot compare the soul more properly to any thing than to a republic or commonwealth, in which the several members are united by the reciprocal ties of government and subordination.” Walt Whitman gave us a pithier version: “I am large, I contain multitudes.”

More-recent experiments with adults find that subtle cues can have a surprising effect on our actions. Good smells, such as fresh bread, make people kinder and more likely to help a stranger; bad smells, like farts (the experimenters used fart spray from a novelty store), make people more judgmental. If you ask people to unscramble sentences, they tend to be more polite, minutes later, if the sentences contain positive words like honor rather than negative words like bluntly. These findings are in line with a set of classic experiments conducted by Stanley Milgram in the 1960s—too unethical to do now—showing that normal people could be induced to give electric shocks to a stranger if they were told to do so by someone they believed was an authoritative scientist. All of these studies support the view that each of us contains many selves—some violent, some submissive, some thoughtful—and that different selves can be brought to the fore by different situations.

The population of a single head is not fixed; we can add more selves. In fact, the capacity to spawn multiple selves is central to pleasure. After all, the most common leisure activity is not sex, eating, drinking, drug use, socializing, sports, or being with the ones we love. It is, by a long shot, participating in experiences we know are not real—reading novels, watching movies and TV, daydreaming, and so forth.

Enjoying fiction requires a shift in selfhood. You give up your own identity and try on the identities of other people, adopting their perspectives so as to share their experiences. This allows us to enjoy fictional events that would shock and sadden us in real life. When Tony Soprano kills someone, you respond differently than you would to a real murder; you accept and adopt some of the moral premises of the Soprano universe. You become, if just for a moment, Tony Soprano.


Some imaginative pleasures involve the creation of alternative selves. Sometimes we interact with these selves as if they were other people. This might sound terrible, and it can be, as when schizophrenics hear voices that seem to come from outside themselves. But the usual version is harmless. In children, we describe these alternative selves as imaginary friends. The psychologist Marjorie Taylor, who has studied this phenomenon more than anyone, points out three things. First, contrary to some stereotypes, children who have imaginary friends are not losers, loners, or borderline psychotics. If anything, they are slightly more socially adept than other children. Second, the children are in no way deluded: Taylor has rarely met a child who wasn’t fully aware that the character lived only in his or her own imagination. And third, the imaginary friends are genuinely different selves. They often have different desires, interests, and needs from the child’s; they can be unruly, and can frustrate the child. The writer Adam Gopnik wrote about his young daughter’s imaginary companion, Charlie Ravioli, a hip New Yorker whose defining quality was that he was always too busy to play with her.

Long-term imaginary companions are unusual in adults, but they do exist—Taylor finds that many authors who write books with recurring characters claim, fairly convincingly, that these characters have wills of their own and have some say in their fate. But it is not unusual to purposefully create another person in your head to interact with on a short-term basis. Much of daydreaming involves conjuring up people, sometimes as mere physical props (as when daydreaming about sports or sex), but usually as social beings. All of us from time to time hold conversations with people who are not actually there.

This is not the traditional view of human frailty. The human condition has long been seen as a battle of good versus evil, reason versus emotion, will versus appetite, superego versus id. The iconic image, from a million movies and cartoons, is of a person with an angel over one shoulder and the devil over the other.

The alternative view keeps the angel and the devil, but casts aside the person in between. The competing selves are not over your shoulder, but inside your head: the angel and the devil, the self who wants to be slim and the one who wants to eat the cake, all exist within one person. Drawing on the research of the psychiatrist George Ainslie, we can make sense of the interaction of these selves by plotting their relative strengths over time, starting with one (the cake eater) being weaker than the other (the dieter). For most of the day, the dieter hums along at his regular power (a 5 on a scale of 1 to 10, say), motivated by the long-term goal of weight loss, and is stronger than the cake eater (a 2). Your consciousness tracks whichever self is winning, so you are deciding not to eat the cake. But as you get closer and closer to the cake, the power of the cake eater rises (3 … 4 …), the lines cross, the cake eater takes over (6), and that becomes the conscious you; at this point, you decide to eat the cake. It’s as if a baton is passed from one self to another.
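
To make the crossing-curves picture concrete, here is a minimal sketch in Python, loosely in the spirit of Ainslie’s work on hyperbolic discounting. Every number in it (the dieter’s steady strength, the cake’s reward size, the discount rate) is an illustrative assumption, not a figure from the article.

```python
# A minimal sketch of the crossing curves described above, loosely based on
# Ainslie's hyperbolic discounting. All numbers here (the dieter's steady
# strength, the cake's reward size, the discount rate k) are illustrative
# assumptions, not figures from the article.

def discounted_strength(reward, delay, k=1.0):
    """Subjective pull of a reward that is `delay` time units away."""
    return reward / (1.0 + k * delay)

DIETER = 5.0        # long-term goal: roughly constant through the day
CAKE_REWARD = 12.0  # immediate payoff of eating the cake (at delay 0)

for delay in (8, 6, 4, 2, 1, 0.5, 0.1):
    cake_eater = discounted_strength(CAKE_REWARD, delay)
    winner = "dieter" if DIETER > cake_eater else "cake eater"
    print(f"cake {delay:>4} units away: dieter={DIETER:.1f}, "
          f"cake eater={cake_eater:.1f} -> {winner}")
```

Run it and the baton pass shows up in the output: while the cake is far away the dieter’s steady strength wins, but as the delay shrinks the cake eater’s discounted pull climbs past it and becomes the self in charge.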

Sometimes one self can predict that it will later be dominated by another self, and it can act to block the crossing—an act known as self-binding, which Thomas Schelling and the philosopher Jon Elster have explored in detail. Self-binding means that the dominant self schemes against the person it might potentially become—the 5 acts to keep the 2 from becoming a 6. Ulysses wanted to hear the song of the sirens, but he knew it would compel him to walk off the boat and into the sea. So he had his sailors tie him to the mast. Dieters buy food in small portions so they won’t overeat later on; smokers trying to quit tell their friends never to give them cigarettes, no matter how much they may later beg. In her book on gluttony, Francine Prose tells of women who phone hotels where they are going to stay to demand a room with an empty minibar. An alarm clock now for sale rolls away as it sounds the alarm; to shut it off, you have to get up out of bed and find the damn thing.


You might also triumph over your future self by feeding it incomplete or incorrect information. If you’re afraid of panicking in a certain situation, you might deny yourself relevant knowledge—you don’t look down when you’re on the tightrope; you don’t check your stocks if you’re afraid you’ll sell at the first sign of a downturn. Chronically late? Set your watch ahead. Prone to jealousy? Avoid conversations with your spouse about which of your friends is the sexiest.

Contradictions between selves arise all the time. If you ask people which makes them happier, work or vacation, they will remind you that they work for money and spend the money on vacations. But if you give them a beeper that goes off at random times, and ask them to record their activity and mood each time they hear a beep, you’ll likely find that they are happier at work. Work is often engaging and social; vacations are often boring and stressful. Similarly, if you ask people about their greatest happiness in life, more than a third mention their children or grandchildren, but when they use a diary to record their happiness, it turns out that taking care of the kids is a downer—parenting ranks just a bit higher than housework, and falls below sex, socializing with friends, watching TV, praying, eating, and cooking.

Pretty much no matter how you test it, children make us less happy. The evidence isn’t just from diary studies; surveys of marital satisfaction show that couples tend to start off happy, get less happy when they have kids, and become happy again only once the kids leave the house. As the psychologist Daniel Gilbert puts it, “Despite what we read in the popular press, the only known symptom of ‘empty-nest syndrome’ is increased smiling.” So why do people believe that children give them so much pleasure? Gilbert sees it as an illusion, a failure of affective forecasting. Society’s needs are served when people believe that having children is a good thing, so we are deluged with images and stories about how wonderful kids are. We think they make us happy, though they actually don’t.

The theory of multiple selves offers a different perspective. If struggles over happiness involve clashes between distinct internal selves, we can no longer be so sure that our conflicting judgments over time reflect irrationality or error. There is no inconsistency between someone’s anxiously hiking through the Amazon wishing she were home in a warm bath and, weeks later, feeling good about being the sort of adventurous soul who goes into the rain forest. In an important sense, the person in the Amazon is not the same person as the one back home safely recalling the experience, just as the person who honestly believes that his children are the great joy in his life might not be the same person who finds them terribly annoying when he’s actually with them.

A natural extension of self-binding is what the economist Richard Thaler and the legal scholar Cass Sunstein describe as “libertarian paternalism”—a movement to engineer situations so that people retain their choices (the libertarian part), but in such a way that these choices are biased to favor people’s better selves (the paternalism part).

For instance, many people fail to save enough money for the future; they find it too confusing or onerous to choose a retirement plan. Thaler and Sunstein suggest that the default be switched so that employees would automatically be enrolled in a savings plan, and would have to take action to opt out. A second example concerns the process of organ donation. When asked, most Americans say that they would wish to donate their organs if they were to become brain-dead from an accident—but only about half actually have their driver’s license marked for donation, or carry an organ-donor card. Thaler and Sunstein have discussed a different idea: people could easily opt out of being a donor, but if they do nothing, they are assumed to consent. Such proposals are not merely academic musings; they are starting to influence law and policy, and might do so increasingly in the future. Both Thaler and Sunstein act as advisers to politicians and policy makers, most notably Barack Obama.
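
The mechanics of a default switch are simple enough to sketch in a few lines of Python; the function and labels below are hypothetical illustrations, not anything from Thaler and Sunstein’s proposals. The point is that the menu of options never changes, only what happens when someone does nothing.

```python
# A hedged sketch of the "default switch": an explicit choice always wins
# (the libertarian part); the default only decides what happens to people
# who never choose (the paternalism part). Names here are hypothetical.

def outcome(explicit_choice, default):
    """An explicit choice overrides the default; otherwise the default applies."""
    return explicit_choice if explicit_choice is not None else default

# Opt-in regime: doing nothing leaves you out.
print(outcome(None, default="not enrolled"))        # -> not enrolled
# Opt-out regime: doing nothing enrolls you, but you can still decline.
print(outcome(None, default="enrolled"))            # -> enrolled
print(outcome("not enrolled", default="enrolled"))  # -> not enrolled
```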

It’s more controversial, of course, when someone else does the binding. I wouldn’t be very happy if my department chair forced me to take Adderall, or if the government fined me for being overweight and not trying to slim down (as Alabama is planning to do to some state employees). But some “other-binding” already exists—think of the mandatory waiting periods for getting a divorce or buying a gun. You are not prevented from eventually taking these actions, but you are forced to think them over, giving the contemplative self the chance to override the impulsive self. And since governments and businesses are constantly asking people to make choices (about precisely such things as whether to be an organ donor), they inevitably have to provide a default option. If decisions have to be made, why not structure them to be in individuals’ and society’s best interests?

The main problem with all of this is that the long-term self is not always right. Sometimes the short-term self should not be bound. Of course, most addictions are well worth getting rid of. When a mother becomes addicted to cocaine, the pleasure from the drug seems to hijack the neural system that would otherwise be devoted to bonding with her baby. It obviously makes sense here to bind the drug user, the short-term self. On the other hand, from a neural and psychological standpoint, a mother’s love for her baby can also be seen as an addiction. But here binding would be strange and immoral; this addiction is a good one. Someone who becomes morbidly obese needs to do more self-binding, but an obsessive dieter might need to do less. We think one way about someone who gives up Internet porn to spend time building houses for the poor, and another way entirely about someone who successfully thwarts his short-term desire to play with his children so that he can devote more energy to making his second million. The long-term, contemplative self should not always win.

This is particularly true when it comes to morality. Many cruel acts are perpetrated by people who can’t or don’t control their short-term impulses or who act in certain ways—such as getting drunk—that lead to a dampening of the contemplative self. But evil acts are also committed by smart people who adopt carefully thought-out belief systems that allow them to ignore their more morally astute gut feelings.

I wouldn’t want to live next door to someone whose behavior was dominated by his short-term selves, and I wouldn’t want to be such a person, either. But there is also something wrong with people who go too far in the other direction. We benefit, intellectually and personally, from the interplay between different selves, from the balance between long-term contemplation and short-term impulse. We should be wary about tipping the scales too far. The community of selves shouldn’t be a democracy, but it shouldn’t be a dictatorship, either.


Continue to Part III: Creativity, or Self-Organized Criticality, or the Brain's Cosmic Web.
