A LITTLE BACKGROUND
I wrote The Loop to warn readers about the impending collision of AI, capitalism, and our psychological vulnerabilities. This chapter describes several of the life-changing experiences that set that quest in motion, beginning with the firsthand realization that a whole world of PhDs were actively working at tech companies to make products addictive, and ending with a deep investigation into the ways that companies like Meta actively prey on lonely, elderly women.
I’m including it here also because it just happens to be a place in the book where I use the phrase “rip current.” Here it’s to describe a form of psychological autopilot that deprives us of the ability to make rational choices for ourselves.
Chapter 5
Guidance Systems
When we were kids, my parents took me and my sister to Disneyland in California a few times, usually as an adjunct to a visit with my grandparents, and I remember each occasion vividly. I loved the obscure narratives, and being jerked this way and that by the overlooked Mr. Toad’s Wild Ride, and while I would never have admitted it then, I found a real bliss in the placid predictability and intricate miniature landscapes of It’s a Small World. But my favorite experience by far, the one that stuck with me in my dreams and my drawings when we returned home to Connecticut, was Autopia, a winding set of roadways where anyone over thirty-two inches tall was allowed to drive a car.
I remember the volcano of anticipation as we waited in line, and when I got the signal I sprinted out onto the track to choose among the various beach-buggy-like numbers, their early-1980s bright green and purple and yellow metal-flake paint jobs glittering in the Anaheim sun. We’d all climb in and I’d grip the steering wheel, anticipating power and freedom as the attendant fired up the incredibly loud and undoubtedly terribly polluting little car. And I can remember in my body, as I write this, the half-second delay between pressing down on the accelerator and feeling and hearing the two-stroke engine roar to life and pull my family and their nine-year-old driver (me!) wherever I decided to go.
But of course I wasn’t actually deciding anything. I always misjudged the first corner—it turns out there’s a reason we don’t let kids drive—and subjected everyone on board to the horrific thump of striking the hard guidance spine that ran down the center of the roadway, a sort of solitary curb that coursed the length of the track and violently forced the front wheels left or right if the person at the wheel didn’t keep the car centered in the lane. Because of the spine, I wasn’t driving at all, in the strict sense of the word. I was, in effect, only steering, and even that part wasn’t necessary, strictly speaking. If I’d taken my hands off the wheel, that guidance spine would have just forced the car in the proper direction. My efforts had more to do with trying to keep the front wheels parallel to the spine to avoid the thumped reminder of its existence than with actually choosing where to go. But all of that didn’t matter. I felt like I was driving. I felt powerful. I felt free. And that experience hit my brain so hard it created for me a lifetime’s fascination with cars and planes and boats and the mechanized pursuit of speed and freedom, even though I was really only in charge of the accelerator.
Over and over again, we see that our behavior, which feels to us like free will and clear choices, is actually the result of guidance systems we cannot help but obey, whether those systems are our own brains or something external like a guidance spine. And yet the factors I’ve tried to describe in the earlier chapters allow our brains not only to make us believe we’re making our own choices, but also to fill in a narrative of power and independence, in the same way that Autopia infused me with the feeling that I was on the open road.
Let’s talk first about what I view as the largest obstacle standing in the way of our seeing this dynamic clearly: not only do we lack the control over ourselves that we think we have, but for some reason our brains also make us resentful of other people wriggling in the grip of forces beyond their control. It’s an ongoing reason we’re so easily manipulated by each other and the systems we build. Maya Bar-Hillel, who studied alongside Amos Tversky and did pioneering work on inaccuracies in human reasoning, is now a professor emerita of psychology at the Hebrew University of Jerusalem. She asked me: “Why is it that when we encounter perceptual illusions, we get all smiling and excited?” Fun-house mirrors, hollow heads—all of that is somehow enjoyable. “But when we hear about cognitive tendencies, we get all tense and defensive!”
“All these things that we do,” she told me, “it’s not because we’re stupid. It’s because we’re human.”
Sendhil Mullainathan, a professor of computational science and behavioral science at the University of Chicago, who has worked on everything from behavioral psychology to applying artificial intelligence to a whole-body understanding of cancer, has done groundbreaking work on the cognitive burden of poverty. And not only has he revealed that poverty is vastly more crippling than we knew, he’s also shown that we unfairly misjudge people suffering under its weight.
Mullainathan was drawn to the subject in part by reading about the infamous Minnesota Starvation Study, conducted in 1944, at the height of America’s involvement in World War II. Huge swaths of the world were starving at that time, but little formal research had ever been done on the biological and physical effects of going without enough food. Researchers selected thirty-six males from a group of more than two hundred volunteers, many of them conscientious objectors who had agreed to serve the war effort but wouldn’t engage in violence. They were chosen for their good physical and mental health, as well as their collegiality under pressure, because they were about to be placed under tremendous amounts of it.
The plan was that every man—they were all men—would lose 25 percent of his body weight. For three months researchers fed them a standard diet, 3,200 calories a day, mostly potatoes, pasta, bread, and other foods widely available in Europe. Then they suffered through six months at only 1,570 calories per day. The before and after photos are a nightmare: men with the already-lean physiques of a more constrained era transformed into walking skeletons. And they weren’t allowed to just sit and suffer: they walked twenty-two miles a week, performed physical and written work, and were constantly interviewed. The negative consequences were clear: their resting heart rates slowed, their sex drives evaporated, and they reported irritability, depression, antipathy.
Something subtler leapt out at Mullainathan, though: starvation also transformed their unconscious lives. The researchers noted, in an offhand footnote, that many of the men took to reading cookbooks during the study—pretty uncommon for your average man in 1940, and a torturous choice for men consuming so little food. And a third of these men—drawn from all professions and walks of life—reported that when the study ended, they planned to open a restaurant. When the study ended, and the men resumed eating a healthy diet, that ambition evaporated. No one followed through.
Mullainathan went on to determine that poverty similarly reprograms the human mind. He and his colleagues spent five years investigating how poverty and hunger can affect our mental abilities as well as our bodies. And they found that just as being hungry makes people preoccupied with food, being poor will make them preoccupied with money. He and his researchers asked hundreds of people outside of grocery stores how much they’d paid for the items in their bags. People with enough money to easily purchase groceries admitted they couldn’t remember. People who could barely afford what they needed turned out to know exactly what a tube of toothpaste cost. It shows, he says, that traditional assumptions about persistent poverty being a function of laziness, or disorganization, or some other personal failing are wrong. And it shows that someone experiencing poverty is intensely distracted at all times. They are literally impaired by it. He went on to write a book about this work with psychologist Eldar Shafir, Scarcity: Why Having Too Little Means So Much.
Other researchers have discovered similar hard, guiding dynamics at work in other parts of our lives. A study by Anuj Shah, an associate professor of behavioral science at the University of Chicago’s Booth School of Business, found that when two groups of people—those with money and those without—were given the same instructions from a doctor for treating an illness, those with money tended to recall the instructions clearly. Those without money recalled only that the treatment called for medication and what the pills would cost. Remember Pötzl’s work with Obszut, and everything that came after it? Our brains assemble a secondhand version of reality from the raw feed? Well, the research also suggests that people with and without enough material resources are experiencing distinct realities—literally seeing the world differently. Our brains are disabled by our unmet needs, and yet political rhetoric, self-help authors, and the plots of countless popular movies try to convince us that not only can we muscle our way to an independent life, but also that any claim to the contrary is a betrayal of us all.
“You would never ask the starved guy in the Minnesota study to lift a heavy weight, right?” Mullainathan asks me. “And you’d never blame him for not being able to do it. So why do we blame people who are poor for having a hard time doing certain things?”
All of this makes it that much more miraculous that we, as a society, have been able to recognize the power certain invented systems hold over our behavior. Just as, according to Banaji, the human brain needs huge numbers of counterfactual examples to stop associating certain qualities with certain races or genders, our society requires enormous amounts of evidence to begin blaming our behavior on a guidance system. It tends to happen only once the science has clearly established the way those systems interact with our biology to grab hold of us, so clearly that it can be proved in court.
At the age of fourteen, Sean David imagined becoming a doctor. Growing up in the 1970s, watching medical dramas after school, “I wanted to be Marcus Welby MD,” he remembers. And so he took the first hospital job he could find after graduating from college. He became an orderly, scrubbing out surgical theaters after each procedure.
“You go in, you pick up all the biological waste and dispose of it. You put this sudsy soap on the floor and mop and hose it down, go room to room,” he remembers. “The orthopedic cases were really rough, they use hammers and saws, they leave a lot of stuff behind.” But he was building his mental capacity for observing, up close, the human body in distress, and the work was satisfying, in its way.
Then, in 1991, his supervisor found him as he was wheeling a bucket between rooms and took him aside. “Your father is in the catheter wing,” he told David.
Cigarettes had always been part of David’s memories of his father. “We’d go fishing and he’d be smoking cigarettes. I remember him putting bait on the line with a cigarette in his mouth. It wasn’t unusual in those days.” But now his dad was lying, ghostly pale, weak, frightened, in this hospital. He’d come in with chest pains. It was the first time David had ever seen him this way.
The cardiac surgeon told David and his mother that David’s father had a heart lesion, likely from his decades of smoking, that would require a bypass. And as the surgeon walked back into surgery to get started, David realized “I had just finished cleaning that room.”
Luckily, David’s father survived the procedure. And the experience taught him what he wanted to do with his medical training. “It became clear that smoking is life and death,” he says. “So many people don’t survive that first heart attack. So emotionally, I’m very driven by this.” After medical school, David began a research career studying why it is that patients who show every outward sign of wanting to quit smoking simply cannot do it.
Ultimately, he says he believes there are a few basic facts of cigarette addiction that anyone hoping to fight it off must understand. The first is that it plays perfectly on our cognitive shortcomings: when we are offered a cigarette outside a party or after a concert, we are terrible at understanding the odds. “People consistently underestimate the risk of smoking,” he says. “It’s a 50 percent chance of premature death. That’s one in two. A one-in-six chance of lung cancer. The risks are huge, but somehow risk has become attractive.” Human beings simply lack whatever it is we need to truly internalize the danger, he says, and to avoid the deadly gamble altogether.
So people need to be sold on avoiding tobacco and fighting off the grip of addiction, just as the cigarettes themselves are sold. “There has to be a key promise, something to get you out of smoking,” he argues. “We need formative research that finds out what motivates different audiences, because I haven’t seen a sustained public health campaign that resonates with every segment of the population. Maybe if we could talk about lung cancer—but the trouble is, there aren’t a lot of survivors, so there isn’t a good advocacy group out there.”
And there is no natural mechanism that will simply cause humanity to stop smoking. “People live to reproductive age while smoking, so evolution can’t touch it, can’t wipe it out,” he says. In creating cigarettes—a consumer technology that delivers a consistently satisfying experience in a portable package—we’ve invented a deadly vice that is immune to natural selection.
This is the difficulty of the modern world. We invent technologies upon which we build new businesses and, eventually, entire industries. And yet those technologies and the money we make from them usually outpace our understanding of their risks and rewards. They often prey on our psychological frailties, excite us when they should revolt us, and, cloaked in marketing and social acceptability, evade our natural ability to recognize that something is bad for us and should be avoided. We go ahead and form businesses without a clear understanding of just how completely they may control the customers they serve, and how that manipulation may affect everyone else. In the case of cigarettes, which first entered industrialized mass production in 1845 under a French state tobacco monopoly, it was another 119 years before the US Surgeon General Luther Terry issued a report that showed tobacco causes lung cancer, and another fifty-five years (2019!) before the US government finally required a national minimum age of twenty-one to purchase tobacco products. And all that time, secondhand smoke, which, it turns out, can slowly poison people physically separated from the smoker (in an adjacent apartment, for example), was affecting all of us, even if we managed to avoid picking up cigarettes ourselves.
We live in what Thomas Friedman calls the Age of Acceleration, when breakthroughs in fields like mathematics and computation are arriving every day, and entrepreneurs are just as quickly turning those breakthroughs into products. But this acceleration is dangerous. Like cigarettes, even the most manipulative and harmful systems we build are typically allowed to run free for generations before we sort out whether or not they’re hurting us.
The behavioral science that’s being outpaced is what we discussed in the first chapters of this book. And that science has really only just begun. Think about it this way: We’ve barely arrived at the kind of obvious, in-your-face data that tells us smoking is bad. So we are only at the very beginning of revealing all sorts of uncomfortable truths about common mistakes of judgment and estimation, our allergy to uncertainty, our desperate desire for assurance, our amnesia about the lessons of the past, the way we outsource important decisions to our emotions. And further complicating things is that these are, in fact, cognitive gifts, part of an evolutionary inheritance that has worked wonderfully at keeping us alive for the vast majority of human history. But we’re now engineering products to appeal to those inner workings without fully understanding just how susceptible we are to being guided and manipulated. And while I believe it’s clear that the mental and physical health of entire generations could be at stake, I also believe that capitalism, culture, and our conviction that we are in charge of our own destinies are all blinding us to the threat.
Today’s entrepreneurs, including the ones who believe in the power of enlightened businesses to change the world for the better, seem to understand that our unconscious tendencies make us compliant users and customers. But they don’t seem to recognize (or, perhaps, care) that we have almost no insight into the long-term effects of playing on those tendencies, even in the service of a noble goal. We’re just barely able to spot the obvious, pernicious, short-term effects, such as the role of something like YouTube’s recommendation algorithms in helping to radicalize a lonely teenager like Mak Kapetanovic. But we simply don’t have the data or the accepted methodology yet to measure possible negative effects across nations and generations. And at the same time, we’re deeply resistant to admitting when we’re being manipulated: whether we just can’t see it, we’re too busy enjoying the product, or we’re blinded by our resentment of those we consider somehow weaker than we are. (I think of the men starved by researchers in Minnesota returning to construction work after the study ended and trying to explain to a foreman that they won’t be able to lift what they used to.) Meanwhile, companies continue to rapidly refine their understanding of our behavior and how they can influence it.
Perhaps it’s encouraging that cigarettes, one of the first modern examples of a product that openly plays on our unconscious behavioral systems, are slowly being regulated, and that researchers are closing in on an understanding of their effects. But while the carcinogenic effects of cigarettes are clearly understood, the effects on our behavior are still not. Sean David—now a physician and researcher at Stanford—is working with psychiatrist and addiction specialist Keith Humphreys on creating new ways to measure our dependence on cigarettes. They’re looking for biological indications of addiction and recovery. They’ve already found a few potential genetic indications that may help addiction programs tailor their treatments for particular patients.
This is good work made possible by the latest technology. And we need it. Because even now, when medical consensus has finally, clearly determined the health threat posed by cigarettes, medical science doesn’t even have a clear way of measuring the extent to which someone is addicted, nor their capacity to quit. We made cigarettes before we had any sense of how deadly they’d be. We certainly didn’t know how we’d measure whether people were addicted to them. Billions of dollars were made in that industry before those questions even came up.
Until 2016, I was the kind of person who preferred to blame these sorts of universal human vulnerabilities on some individual failure, much in the way we often blame those in poverty for their situation. That year, I stood on a street corner in San Francisco, listening to a man in his early twenties named Patrick tell me about his heroin overdose. (He didn’t give me his last name.) He had a faint wisp of a mustache, and if it weren’t for his gray skin and sunken cheeks, I’d have mistaken him for a software engineer or the bass player in a band.
We met in the Tenderloin neighborhood, where the city’s heroin and methamphetamine problem holds its most visible market. That summer, a deluge of fentanyl, a fast-acting opioid a hundred times stronger than morphine, had swept through the city. It caused nearly one thousand overdose deaths in just three months.
I was there with a camera crew filming a report about an overdose antidote drug called Narcan, and it was a desperate, stressful environment. People kept peeking through the windows of our car, evaluating its contents, even though it was parked directly next to us. We had to time our questions so that Patrick responded during the breaks in shouting, when disoriented passersby weren’t arguing in the background. The manager of a harm-reduction program watched over us, and that seemed to be what gave us the occasional stretches of relative silence we needed to complete the interview. The whole scene was tense. And yet when Patrick began speaking, he did so easily and honestly, without any embarrassment or defensiveness, even as he talked about taking a drug that nearly killed him.
“About six months ago I was in Minneapolis living with my girlfriend,” he told me. “I acquired some heroin out there that was a lot stronger than what I was used to in San Francisco.” He described this the way I might describe being caught in the rain. “So when I used it, I OD’d.”
We paused to wait out a shouting match down the street. Then he turned back to me.
“I was out completely. I didn’t have any time to prepare for it or try to save myself or do anything. And my girlfriend luckily had trained in how to use Narcan.” He shook his head gratefully. “She revived me in about five or ten minutes.”
This was the first of roughly a dozen conversations I went on to have with people addicted to heroin over the next few weeks, but that first conversation with Patrick thoroughly dented my instincts about the kind of person that gets hooked. It helped, of course, that he reminded me of myself. He was white, male, neutral accent, like me. (And harm-reduction experts have long complained, and rightly, that journalists only recognized the depth of the opioid crisis when it began to kill people that resembled them. I’m caught in The Loop just like anyone.) But beyond that, he also talked about heroin in a way that I’ve come to understand is the most accurate picture of its power: as an irresistible force, like a storm front or a tide. In years of speaking with people in its grip, their feelings and their language on the subject are the same. It is cause and effect. When it rains, you get wet.
Like many people—maybe most people—I spent most of my life thinking of heroin as a mistake other people make: a poisonous vice whose marketing gimmicks somehow just work more strongly on a weaker kind of person. I figured the only folks who would fall into using it would be those without the right upbringing, those with an addictive personality, those who just can’t control themselves. But as I listened to Patrick, who had a girlfriend and a place to stay in one of the most expensive cities in the world while carrying his terrible addiction, I realized I may not understand other people, or myself, as well as I thought I did.
My inflated opinion of myself, my utter blindness to my own chances of becoming addicted to heroin, the way I reassure myself about my place in life by ascribing Patrick’s circumstances to some sort of failure of character—all of that is a terrible misjudgment, and also an outgrowth of a survival strategy humans have pursued for generations. Evolution has built us into beings with tremendously high self-regard and an enormous capacity to either ignore or rationalize in ourselves what we decry in other people.
First of all, we’re incredibly upbeat. In 1969, a pair of researchers published the “Pollyanna Hypothesis”—the idea that human beings are simply more likely to remember positive information than negative information. A 2014 analysis of 100,000 words across ten languages confirmed that vastly more of the words we use describe good things than bad things.
And we like our own opinions. In the 1990s, the social psychologist Jonathan Haidt established the “social intuitionist” theory of moral judgment, which posits that people develop their moral intuitions about the world almost automatically, and then essentially reverse engineer a system of reasoning that supports those moral intuitions. His theory is that any conscious weighing of morality we do usually winds up being used to justify the intuitions we already had.
We’re also self-congratulatory. In 2004, a meta-analysis of 266 separate studies firmly established what researchers call the “self-serving bias.” When something goes my way, my brain tells me it’s because of who I am. When something doesn’t go my way, my brain tells me it’s because of some external factor beyond my control. (In people with depression and anxiety disorders, very often this sort of attribution is reversed—a terrible curse.) This is why my first instinct is to assume I haven’t become addicted to opioids because I’m somehow of stronger moral character than other people, even though it’s a scourge afflicting millions of Americans across the nation. But if I were to become addicted, I’d undoubtedly blame the drug, my circumstances, and bad luck. Meanwhile, you’d be more likely to consider my addiction a personal failure. And so on.
All of this makes perfect evolutionary sense. My wife and I often joke that perhaps there was once a branch of the human evolutionary tree that was great at remembering and talking about how painful it is to give birth to a child. “Wow, that was terrible!” we imagine one woman of that branch telling her mate. “Let’s never do that again!” She and her kind are long dead, the joke goes. Only us upbeat amnesiacs are left thousands of years later. (A Swedish study of 1,300 women found that a small number of women who experienced the worst kind of pain during childbirth did, in fact, remember it quite well. But in the rest of them, who didn’t report the worst kind of suffering, the memory of the pain consistently faded over time.)
And so we wander around, optimistic, forgetful, self-serving. It’s a wonderfully useful set of traits to possess, or at least it once was. But it gets us in trouble now. It means we have a sort of built-in immunity to accurately recognizing and analyzing our modern habits and cravings—our inability to put down the phone and get a good night’s sleep, our opioid epidemic, our shared difficulties and vulnerabilities—and how they define our modern selves.
In the 1970s, psychologists began experimenting with the implications of our highly evolved feeling of self-confidence. The psychologist Richard Nisbett identified, for instance, an “actor-observer asymmetry”: a difference between how we explain our own behavior to ourselves versus how we explain the behavior of other people. I wish I’d known about it before meeting Patrick. According to Nisbett’s findings, when I make a mistake (I forgot my house keys!), I tend to blame that behavior on the situation I’m in (It’s been a long week, I’ve got my kids waiting in the car, anyone might do this.). But if I observe that behavior in someone else (My wife forgot the house keys!), I tend to blame it on that person’s disposition (My wife is so forgetful, my God, this is always happening.).
In the years following Nisbett’s paper, psychologists performed dozens of independent studies that supported his findings, until another psychologist named Bertram Malle surveyed all the studies and found that it’s more complicated than that. The way we describe intentional behavior to ourselves, for instance—like the decision to buy a particular car—has to do with a complex web of reasons and beliefs and desires. But Malle did find that when it came to unintentional behavior—locking ourselves out of the car—the actor-observer asymmetry held up.
What behavior could more perfectly fit the actor-observer asymmetry than our national attitude toward drug addiction? There are enough opioid painkiller prescriptions written in the United States each year to give every adult their own bottle of pills. These pills have the same chemical effect as heroin you buy on the street. And that, along with a complicated combination of welfare reform, malpractice law, and predatory marketing, is why so many thousands of people find themselves unintentionally developing dependence. Then, just as unintentionally—perhaps they run out of the prescription, perhaps they can’t afford the legal stuff any longer—they wind up in the grip of heroin or whatever pills they can score on the street.
And yet from parents to presidents, the dominant attitude toward drug addiction has been what I believed for most of my life: that it’s a matter of character. But if we look at the science of habit-formation and persuasion, that’s clearly not the case.
* * *
Our internal mechanisms of compulsion, along with the gauzy layers we drape over them to make them more attractive to our self-image, have created an extraordinary opportunity for taking advantage of our fellow humans by marketing to our involuntary decision-making systems and the ways we rationalize our involuntary choices. Those mechanisms are a powerful yet largely unexamined part of what makes capitalism go. And the top people in marketing and persuasion often start out as researchers studying those mechanisms.
In 1965, Robert Cialdini was an undergraduate at the University of Wisconsin, doing terrible things to earthworms. Then Cialdini’s humanity kicked in. Not his empathy for the invertebrates themselves—he talks about his time torturing worms for science with the same enthusiastic intensity with which he describes the rest of his work—but another emotion. “I had a mad crush on Marilyn Rapinski at the time, who was taking a social psychology class, and there was an empty seat next to her, so I sat in on the class.” Soon Cialdini was transfixed by the subject and went to graduate school to study it. In the end, his college crush lured Cialdini down a path that led to being published in thirty-six languages and traveling the world as an expert on how we persuade one another to do certain things.
As a graduate student Cialdini quickly grew tired of the same old methodology. “The way I’d been doing things with college sophomores on college campuses was not representative of the factors that make the biggest difference outside of that controlled environment. So I decided to go see what the professionals do.” Cialdini went on a bizarre three-year journey into the maw of American chicanery. “I presented myself as an applicant to various sales training programs. I learned to sell used cars, portrait photography over the phone, encyclopedias door to door, that sort of thing.” He began to discover that a lot of the tactics taught in these various programs were junk. But the fundamental principles that worked were consistent no matter the industry, and so he began to compile those principles.
His manifesto, Influence, remains one of the most-read business books of all time. It argues that six principles underlie any successful pitch: reciprocity (“Here, have a free sample”), commitment and consistency (“You’ve put so much into this process, don’t back out now”), social proof (“This one is our best seller, everyone loves it”), authority (“JD Power says it’s a best buy”), liking (“You have a lovely family”), and scarcity (“This two-door is the last one on the lot!”).
Cialdini’s work is devoured by readers in marketing and sales, and he is in constant demand as a consultant to multinationals, governments, you name it. Yet his work is also cited by academics studying human behavior and decision making. A 2008 paper published in Science explored the “broken windows theory” that assumes people observing signs of disorder are more likely to break rules themselves. The paper’s authors, Dutch social psychologists, set up their findings by describing the well-established idea that messages asking people to refrain from an activity (like littering) are more effective in a setting where that thing hasn’t already been done (a clean, unlittered park). “In honor of the individual who first described this process,” the authors wrote, “we call this the Cialdini effect.”
Even the patterns of Cialdini’s success offer him useful information for analyzing his own work. Publishers in Thailand recently began printing his books. He has published papers in Polish psychology journals. His consultancy, Influence at Work, has worked on tax-collection strategies for the British government. His findings seem to be culturally universal. Well-established and young companies alike draw on his work. Articles about making social media more effective, extending the reach of email marketing campaigns, and improving sales numbers for online stores all regularly reference Influence and his six principles. “But when that book was written, there was no Internet, there was no e-commerce,” Cialdini says. “That’s what’s validating.”
I asked Cialdini whether he worries that he has inadvertently released into the wild what is essentially a user’s manual for manipulating humans. He says he doesn’t see it that way. According to Cialdini, he’s attempting to inoculate us against these tactics. He says it’s valuable to explain the work as widely as possible so that people are forewarned against manipulation.
But that doesn’t seem to have been the effect, for two reasons. First, his buying audience isn’t everyday people looking to protect themselves. It’s people in marketing, an industry that doesn’t need to read the peer-reviewed journals or think through the ethical implications to use our ancient wiring to sell modern products.
Second, and more important, being forewarned doesn’t seem to do us much good anyway. I no longer drink, after a prolific twenty-five-year relationship with alcohol, and in spite of being acutely conscious of the fact that drinking for me sets off a cycle of poor sleep and low moods that ultimately pulls me into terrible depression, I still feel the attraction to booze whenever I’m around it. It’s the sensory reminders of drinking that really do it to me. A dark bar, seated at the end where it wraps around and meets the wall, so I can watch the bartender work. The first antiseptic sting of a Manhattan on the rocks. Fishing the dark, boozy cherry out with a toothpick when the drink is finished. And this, I’ve learned, is why someone like me should essentially never set foot in a bar again.
The USC psychologist Wendy Wood once broke it down for me in front of a taqueria near her campus. Wood studies how we form good and bad habits, and she was trying to explain to me why I constantly return to the same order—carnitas burrito, guacamole, no sour cream—whenever I’m hit by the sights and smells of a taqueria. “We outsource a lot of our decision making to the environments we live in,” she told me. Once upon a time this wasn’t a vulnerability, it was an evolutionary advantage. We could quickly escape the bear, spot the outsider, grab the fruit, by simply following the environmental cues around us. (“Growling! Stranger! Ooh, shiny berries!”) It saved us enormous amounts of time and cognitive power, and saved our lives as a result. But in modern life, that same circuitry sends ex-drinkers like me into a personal odyssey of self-evaluation every time I walk past the dark doorway of a bar. It’s carrying all sorts of signals into our brains, too. And when the signals arrive—as emotions, instincts, things our “gut” is telling us—we evaluate them, not the merits of the actual decision.
In fact, science has begun to establish that our unconscious decision-making systems become that much more irresistible when we let our emotions make our choices for us. Back in the 1980s, Paul Slovic was approached by the city of Las Vegas and asked to find out whether potential visitors to Sin City would still be inclined to arrive in town ready to party and gamble if they knew, say, that the federal government had put the nation’s largest repository of nuclear waste at Yucca Mountain, roughly eighty miles away.
“At first I thought we could ask people ‘hey, if we build this thing, will you still come?’ But I discovered pretty quickly that you couldn’t trust their answers,” says Slovic. Their reactions were instinctive, and emotionally driven, and not at all reflective of any rational analysis of risk. Essentially, he found that no matter what other reassurances were offered, if tourists were told that Yucca Mountain was nearby, the information poisoned the relationship. “If you mention nuclear waste and Las Vegas in the same sentence, they immediately say ‘no, of course, I’ll never visit again.’”
At the same time, Slovic had come into possession of a trove of early documents from the tobacco industry, in which the industry described hiring scientists who determined more or less the same thing. “They were told by their consultants who were doing research, ‘take all the rational reasons for smoking your brand out of there,’” he says. Instead, “make it all about good feelings, you know—in beautiful environments, doing exciting things with your cigarette.” A positive emotional association with a particular brand of cigarettes essentially overrode any other, more analytical form of judgment. And a negative emotional association with nuclear waste overrode any reassuring information tourists might have had about Vegas.
Slovic believes that the ancient, instinctive system I use to make the same burrito order every time is in fact wonderfully useful. It got us here. It gets us through most of our day. “Most of the time, we’re doing it all with our feeling and experiential system,” Slovic says. “And it works pretty well.” But it lacks something important, he says. “This very sophisticated system can’t count.” When presented with information about modern, abstract risk—the distance to Yucca Mountain, the statistical probability of developing cancer from smoking, things out of our line of sight—we’re trying to use an emotion- and instinct-based system built for avoiding spiders and picking berries. And what’s more, Slovic says, our ancient systems are easy to manipulate as more and more behavioral research like Cialdini’s is adopted by business. “If they all become sophisticated in the System 1 and System 2 dynamics, which they clearly are,” he says, “people will be using this for their own purposes, and our empathy, our feeling system will be hijacked.”
What are the specific human instincts being used to manipulate us? What’s the way in? As early as the 1970s, Kahneman and Tversky were working on a new concept for measuring human instincts around risk and uncertainty.
In their 1979 paper “Prospect Theory: An Analysis of Decision Under Risk,” Kahneman and Tversky identified a tendency that upended decades of thinking about risk analysis. Rather than making rational decisions based on a thorough consideration of the straight probability of a good or bad thing taking place, they theorized, humans instead weigh gains and losses on a strange curve: our hatred of loss makes us wary of even statistically smart gambles, and we’ll do almost anything to convert a tiny bit of uncertainty into a sure thing. This was the beginning of what Slovic would go on to show was an outsourcing of decisions to our emotions.
Kahneman and Tversky deployed a devilish sort of question to tease out this tendency.
Choose one of the following gambles:
(A) A 50% chance of winning $1,000
(B) $450 for sure
They found that an overwhelming majority of respondents from across the world—Israel, Sweden, and the United States—chose the sure thing, although statistically the value of the first gamble works out to a higher payoff. And over the course of their careers, Kahneman and Tversky discovered that this tendency manifests itself in all sorts of other human irrationality.
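For readers who want the arithmetic spelled out, here is a minimal sketch. The figures come from the gamble above; the rest is simple expected-value bookkeeping:

```python
# Expected value of the gamble vs. the sure thing.
p_win, payout = 0.5, 1000  # option A: 50% chance of winning $1,000
sure_thing = 450           # option B: $450 guaranteed

expected_value = p_win * payout  # 0.5 * 1000 = 500
print(expected_value)               # 500.0
print(expected_value > sure_thing)  # True: on average, the gamble pays more
```

Most respondents take option B anyway; prospect theory’s point is precisely that the extra $50 of average value is outweighed by our aversion to the 50 percent chance of walking away with nothing.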
Now, personally, I can relate to this hatred of uncertainty, and to being terrible at dealing with it. I’m a very terrible but very devoted surfer, and I’m often having to talk myself into a dark, cold patch of ocean that offers no visibility beneath the waves. My fears while in the ocean are various—drowning, being injured by the board, being contorted to the point of breaking my back in a wave, as has happened to a friend—but sharks are for some reason my most vivid fear. The incidence of shark attack in the United States is astonishingly small, with sixteen or fewer attacks per year, and at most two fatalities every two years. Even if I spend an hour or two in the ocean every day, my chances of being attacked by a shark are statistically smaller than were my chances of being crushed by a vending machine or choking to death on a champagne cork when I lived in New York City. And yet the few times a year I go surfing, my eyes will instinctively glom onto any sign of the circular upwelling of water that I’ve read indicates a column of water is being pushed ahead of a rapidly surfacing shark. I panic multiple times per session that something is about to grab me with a torso-sized maw full of knives and drag me under. And that’s why I’ve developed an unconscious determination to never look under the surface. Whether I’m forced under by a wave or falling off the board, I close my eyes fiercely while I’m submerged and scramble for the board when I come back up, fighting panic. This is my brain’s effort to negotiate with my fears, and to wipe out the uncertainty of my situation.
That horror of uncertainty, and the tendency to find reassurance in empty gestures, is now the basis of whole industries, as the economist Richard Thaler explained to me. Thaler won the Nobel Prize for carrying Kahneman and Tversky’s prospect theory forward into new domains, formulating along the way notions like the endowment effect, the theory that when evaluating two identical items, the one that belongs to me is magically endowed with greater value in my estimation. Thaler originated, with the legal scholar Cass Sunstein, the notion of “nudges,” small design features within a system that can gently guide us toward choices that benefit us. For instance, whereas a decade ago it was common practice to ask new employees to opt in to their 401(k), Thaler’s work helped inspire a new standard: it’s now common practice at most major companies to enroll employees automatically, which has hugely increased the numbers of people saving for retirement. (The work has been so successful that an entire governmental agency, a nudge unit, was created in the United Kingdom to study and implement tactics for improving British life.)
But Thaler despairs of the human allergy to uncertainty. We sat together in the atrium of the Booth School at the University of Chicago, where he teaches, and he began by explaining to me what he and others have found: that humans are hopeless with probability. “In fact, there’s really only five probabilities the average human can handle,” he said. “Want to guess what they are?”
I shrugged.
“They’re 99 percent, one percent, 100 percent, zero, and 50-50. That’s it.” He laughed.
“And we’ll do anything to push 99 percent to 100,” he said, leaning toward an open laptop. We sat together scrolling through page after page of sites offering insurance on the various things one can buy through the Internet. We looked at a pendant, for instance, that cost only $34.99. And yet for $35.00, an outside company would insure it against loss or theft. “You could just buy another one!” Thaler said. “That’s got to be the dumbest of dumb insurance.”
But what particularly irks him is travel insurance. In a 2015 New York Times op-ed, he railed against the fact that airlines like United routinely require us to actively turn down travel insurance even when we’re buying a fully refundable airline ticket, and, in a larger sense, against the idea that these dark nudges—what Robert Shiller and George Akerlof called phishing in their 2015 book Phishing for Phools—are being deployed by the private sector. Critics of his and Sunstein’s work have complained that government policies that nudge us to make better decisions are paternalistic, maybe even unconsciously dictatorial. But, he wrote, we at least have a voice in what a representative government does. If a manipulative tactic makes a company money, not only can’t we vote that company out of office, its competitors are likely to copy it.
This blindness to probability, combined with our inability to grasp the true risks of a situation, means we’re unaware of the real risks we’re taking when we put our heads underwater.
I encountered that dynamic most vividly in 2021 when I spoke with Kathleen Wilkinson. Wilkinson lived for forty-three years in Prairie du Chien, Wisconsin. It’s a small town, popular with tourists who come for the hunting and fishing, or perhaps for the Father’s Day weekend Rendezvous, full of buckskin costumes and antler mugs, that reenacts Prairie du Chien’s history as a fur-trading camp. But that town, where the Mississippi meets the Wisconsin River, was not a happy place for Wilkinson. She survived a toxic twenty-four-year marriage there. And she suffered a spinal injury in the course of her job as a medical technician in 2016, which isolated her further. Finally, as she watched her mother’s home go to pay the gambling debts of a family member, she decided she had to get out.
Wilkinson’s exit was accidentally paved by her first husband. He created a profile for her on a dating website, in a perverse act of digital control, and one day she got a message that read simply “hello.”
“There was a picture of this cute blond-haired man in a wheelchair, with blue eyes, surrounded by snow,” Wilkinson remembers. She wrote back, and soon they were in regular contact. “That man is now my husband.”
Today Wilkinson lives in the mountains of Montana, in Kalispell, on the edge of Glacier National Park, on property she shares with her new husband and his father. It’s been a paradise, she says, but a dark compulsion followed her from Wisconsin. Back in Prairie du Chien, while she was laid up with her spinal injury, she got a pop-up on Facebook for a game. “It looked like fun, so I tried it,” she recalls.
The game was what people in the business call a social casino app, one that combines the classic casino experience—slots, poker, blackjack—with social activities like text, audio, and video chat between players, and often the ability to form teams and clubs. The companies that make these games are often the same companies—or are owned by the same companies—that own and operate real-world casinos. Early studies of the social casino business found evidence that people who play the games were sometimes transitioning to real-world gaming, making it a logical “customer acquisition” tactic on the part of a traditional casino business. But in the end, the companies didn’t need to bring these players to a real casino. It turns out social casino games can become deeply habitual and costly to people who would never physically walk up to a slot machine or a card dealer. And as the pandemic set in, these companies discovered that while their real-world casino machines were suffering, the apps were thriving.
Wilkinson was already in over her head by the time the pandemic hit. She remembers telling her husband that she thought she might have a gambling problem. She simply couldn’t stop playing this little game. But you’re not playing for real money, he reassured her. That’s not gambling.
Experts in addiction and machine gambling could have told her the truth, if only she’d had one around to ask, not that it likely would have made a difference. Natasha Dow Schüll, an associate professor at NYU, spent twenty years with players of slot machines, with executives and designers at the companies that make them, and with regulators struggling with the right legal framework for analyzing them, to write her 2012 book Addiction by Design: Machine Gambling in Las Vegas. “What I learned over my many years of fieldwork,” she says, “and it took me a long time for it to really sink in, is that the gamblers are not there to win money, they’re there to spend time.” If anything, winning an unexpected amount of money can detract from their experience. “Gamblers used to tell me, ‘I get irritated and frustrated and sometimes angry when I win a jackpot.’”
She came to realize “they were not there to win a jackpot necessarily, but to keep going. So what they’re sitting there for is the kind of escape that they describe as ‘the machine zone.’ It’s a zone where time falls away, money falls away. And you just keep going.”
That zone, she’s found, is the basis of the gaming industry. Once upon a time, slot machines were built as a quick distraction, a place where you’d throw in a quarter and walk away, either a little richer or feeling a little foolish. “Old slot machines didn’t even used to have seats in front of them,” Schüll points out. “Not so today’s games. They’ve got ergonomic seats, and they’re really meant for you to sit and spend a ton of time at and not realize that you’re losing money as you’re sitting there.”
There is something in human programming that predisposes us, to varying degrees, to fall into the zone, she says. “I have talked to very rational people who show me their bank statements. And there will be a whole page of withdrawals, 20 minutes, two hours, the next 30 minutes of $100, $20, $20, $40. And clearly, that otherwise rational person is thinking something new each time they take money out, and not learning that this isn’t the way to go.” And, she says, the industry has a gauge for that. “Time on device,” or T.O.D., has become the way machine-gaming companies measure success. “T.O.D. really is the revenue metric of the industry.”
Schüll points out that if we want to begin quantifying the effect these products are having on the human mind, the data is already there—we just need subpoena power to get it. And there are signs that it could happen. In Massachusetts, for instance, a 2011 law that permits commercial casino operators to open three casinos and one slot parlor in the state has an often overlooked provision buried inside it. Section 97, which Schüll submitted, and which lawmakers, to her immense surprise, adopted, reads, in part:
Gaming operations shall supply the Massachusetts gaming control board . . . with customer tracking data collected or generated by loyalty programs, player tracking software, player card systems, online gambling transactions and any other such information system. . . . The commission shall convey the anonymized data to a research facility which shall make the data available to qualified researchers.
One point of optimism for fighting back against behavior-shaping technology, that we’ll see again and again moving forward, is that if companies like these are using data to prey on us, then much of what we need to know about our tendencies and how they show up inside a company’s algorithms and tactics is there for the taking, in court. If lawmakers actually forced gaming companies to provide the data they use to track player behavior, Schüll says, we’d know just how well they understand the effect they’re having on the minds and decisions of the players at the slot machines and card tables. “It’s absolutely the case that you can see the signature of addiction in the data that is being so robustly collected by these companies.”
How does she know? Back in 2011, Schüll discovered a Canadian company, iView, which took an algorithm built from two and a half years’ worth of player data from casinos in Saskatchewan and used it to detect problem gamblers in real time. The system uses all the means by which a casino would otherwise ensnare someone in the grip of compulsion to try to rip them free from it. The iView software records five hundred variables in player behavior, from the number of machines played to the days of the week, and calculates a risk score. When that score is combined with on-the-floor observations by trained casino staff, the system has been found to be 95 percent accurate in spotting people who are no longer in control of themselves. At that point, all advertising to that player freezes, the player’s funding card freezes, and the casino’s facial-recognition system tags them to casino managers as a risk alert.
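The details of iView’s algorithm aren’t public, but the shape of such a system is easy to sketch. The toy score below is purely hypothetical: the variable names, weights, and threshold are invented for illustration, and the real system draws on some five hundred variables rather than three.

```python
# Hypothetical sketch of a threshold-triggered gambling risk score.
# The weights and inputs are invented; they only show the structure:
# several behavioral signals, normalized, combined, and compared to
# an alert threshold.
def risk_score(hours_per_day, machines_played, late_night_sessions):
    # Each signal is capped at 1.0 so no single variable dominates.
    return (0.4 * min(hours_per_day / 8, 1.0)
            + 0.3 * min(machines_played / 20, 1.0)
            + 0.3 * min(late_night_sessions / 10, 1.0))

def should_alert(score, threshold=0.8):
    # Above the threshold: freeze advertising, freeze the funding
    # card, and flag the player to floor managers.
    return score >= threshold

print(should_alert(risk_score(9, 25, 12)))  # True
print(should_alert(risk_score(1, 2, 0)))    # False
```

In the real system, as Schüll describes it, the score is also cross-checked against on-the-floor observations by trained staff before the interventions fire.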
It is easy, sitting as you probably are, somewhere far from a casino, to shake your head at all of this and lament the poor souls who have lost so much on the gaming floor. But what Schüll and others have been trying to communicate for so many years is that there is no difference between an addiction to slots or poker and an addiction to a game you play on your phone.
The companies that make those games would, and do, argue that there’s an immense difference: gambling for real money is a highly regulated activity. Playing a simulator that offers no chance of winning real money is not considered gambling.
And the game that Kathleen Wilkinson was playing, DoubleDown Casino, didn’t offer the chance to cash out. With social casino games, money only flows one way: into the game, as it offers the chance to pay for expanded play time, for the chance to communicate with friends, for higher tiers of play. When she began playing it, Wilkinson considered it a diversion during a lonely phase of her life, and the $2.99 she spent here and there to connect with friends as coplayers and access new games seemed like a small price to pay for a little fun and virtual company.
By the time she told her husband that she was concerned about her compulsive behavior, she’d lost track of how much she was spending on the game. But when they looked more closely, she realized she’d spent roughly $50,000 on the game over four years. Wilkinson had planned to retire into a life of caring for her husband in Montana. But she now realized she’d have to work, with a spinal injury, well into the last phase of their life together. “I wish to God I had not spent that money. Because I’m worrying about being able to take care of my husband and I have no income coming in right now,” she told me, fighting tears. “I just wish they would do the right thing and give back the money.”
Wilkinson is now part of a class-action lawsuit against DoubleDown Interactive. When I describe this effort to most people, they laugh, and mock the plaintiffs for being gullible. “People who wasted fortunes on those apps have the problem, not the app companies,” a retired physician wrote to tell me after I published an interview with Kathleen Wilkinson.
But it turns out there are a lot of people like Wilkinson, and the companies have been doing an uncanny job finding them. In fact, according to Wilkinson’s lawyers, there are thousands of people in the United States who have spent at least $10,000 and as much as $400,000 on these games. “These social casinos are targeting people who are most likely to be addicts, and then taking their life savings,” says Jay Edelson, Wilkinson’s attorney. “These aren’t rich people. This isn’t Michael Jordan going to a casino losing $2 million and it’s a bad night and he moves on. These are ordinary, hardworking people who’ve saved up all of their money, they’re ready for retirement. And then they get sucked up into this world that nobody really talks about.”
I’ve seen that world, and the kind of thinking that built it, firsthand. Here’s what that looks like.
A few months after Patrick described his heroin overdose to me, I was invited to join a loose dinner party elsewhere in San Francisco thrown by a group that called itself BTech, short for Behavioral Tech. It was a monthly gathering of young neuroscientists, behavioral economists, and other specialists in human decision making who now work in the tech industry. The gathering of perhaps fifteen or so that night included someone from Lyft; someone from Fitbit; the creator of a financial-advice chatbot; and Nir Eyal, a Stanford MBA who had sold two companies, written a best-selling book called Hooked: How to Build Habit-Forming Products, and was now a consultant successful enough to have built out the white-walled former storefront in which we were all seated. I wrote to the group ahead of time to inform them that I’d be there, that I was writing a book, and that whatever they said might wind up in it. Over Coronas and Indian food, we listened to a pair of newly graduated neuroscience PhDs, T. Dalton Combs and Ramsay Brown, talk about their company, called Dopamine Labs, through which they were selling their academic knowledge of dopamine release to help make apps more addictive. They were there that night to talk about the market potential, and to offer their services.
Combs and Brown began the evening showing off an app they’d built, called Space, which could interrupt the feedback loop one gets from opening social media apps by enforcing a few seconds of mindfulness before the app would open. But by the end of the evening, it was clear they were happy to sell these same principles in the service of almost any company. As Combs told us, what’s so great about the human mind is that if you can just manipulate someone’s habits, their consciousness will invent a whole story that explains the change of behavior you tricked them into, as if it had been their idea all along. It makes what they were calling “behavior design” so easy, he told us.
Now, the room, I should point out, was full of good intentions. Many of the apps had to do with fitness and saving money and healthy meal plans. But there was an open way of discussing neuroscience and behavior change and habit formation that felt like a loaded gun. And when one member of the group asked the Dopamine Labs founders whether there was any company they wouldn’t be willing to work for, Combs said, “We don’t think we should be the thought police of the Internet.”
Dopamine Labs’ presentation centered on the idea that companies can use our hardwired habit system—the same stuff studied by the likes of Kahneman, Tversky, Slovic, and Santos, what the two neuroscientists described as a chain of triggers and actions and feedback and rewards that can be manipulated the way one might pull levers to drive a crane—to get people to turn over information about themselves through apps. These were folks trying to understand how to manipulate—or, hell, create—the habits of their customers. And eventually we wound up talking about drug addiction.
At one point in his presentation, Combs said that humans form habits through contextual triggers. “If you take a habituated drug user who has recovered, let’s say it’s someone who used to do cocaine in nightclubs—” He checked that we were with him. Forks and beer bottles were frozen mid-journey. “—Let’s say you bring them to a nightclub, and you show them baking soda, and you even tell them it’s baking soda, they’ll still want to snort a line of baking soda.” (That’s me outside the taqueria, I know now. That’s me inside any dark, wood-paneled bar.)
Someone in the room asked whether everyone is equally vulnerable to drug addiction. I felt relieved that it had come up. After all, we’d been talking for nearly three hours about the universal mechanisms of the human mind and how to take advantage of them. At this point there seemed to be a collective realization that we were playing with live ammunition. We’d clearly begun to push against the ethical boundaries of all this behavior design. But somehow the people in the room at that point managed to elevate themselves above the everyday people they imagined would use their apps, drawing distinctions that conveniently left those of us gathered in the room above the water line. And Eyal jumped in with a sweeping statement. “Let’s be honest: the people in this room aren’t going to become addicted. If you were injected with heroin 100 times—” he gestured at Combs, and then at the rest of the room “—you’re not going to become addicted to heroin.” He seemed utterly convinced of this. “Unless you’ve got some sort of deep-seated pain in you,” he said, “it’s just not going to happen.”
I didn’t share my opinion that pretty much any mammal injected with heroin one hundred times would likely become addicted, but I did mention to him that opioid overdose kills more people than car accidents these days, that at that time it was the number-one cause of death by accidental injury in the United States, that people from all walks of life were falling into opioid addiction at a rate never seen before in American history. But he was unshakeable. “The people who become addicted have pain in their lives. They return from war, or have some other trauma, and that’s what gets them hooked,” he told me calmly. “Only two to five percent of people are going to become addicts.” Forks and bottles began moving again.
Combs and Brown went on to appear on 60 Minutes a few months after I met them, and shortly after that they renamed their company Boundless Mind. Soon Arianna Huffington’s Thrive Global bought them, and Combs was their head of behavioral science until October 2020. He now runs a metabolic-fitness company. Dopamine Labs does not appear on his LinkedIn profile.
Eyal went on to write a book called Indistractable: How to Control Your Attention and Choose Your Life. It seems like a dramatic reversal for a guy who wrote a marketing handbook called Hooked, but it’s not. His thesis in Indistractable is that it’s up to individuals to learn self-discipline, that technology is not addictive, that self-control, not regulation, is our pathway forward. This is someone who used his understanding of psychology to establish and sell large-scale tactics of persuasion and habit formation. And yet he had somehow convinced himself—and, seemingly, most of the people there that night—that the people making this tech are somehow not only able to resist what we had just spent the evening establishing is an eons-old tendency of the brain, but that it was all right for that rarefied group to deploy products that played on those unconscious tendencies in the rest of us. Eyal went on to tell Ezra Klein in a 2019 podcast appearance, “The world is bifurcating into two types of people: people who allow their attention to be manipulated and controlled by others, and those who stand up and say, ‘No, I am indistractable. I live with intent.’”
When I watched the presentation from Dopamine Labs and heard Eyal’s theories that night, I hadn’t yet met Robert Cialdini or Paul Slovic. I hadn’t yet met Wendy Wood, and she was still a year away from publishing a 2018 review of habit-formation science with the business professor Lucas Carden, in which they looked across the field and concluded that people are not in control of their choices. Wood and Carden compared our vulnerability to forming unconscious habitual associations (between the smell of a grill and ordering a burger; between the feeling of taking our work shoes off at the door and the first pull of an after-work beer) to the inexorable pull of a rip current. The trick, they wrote, is to not get in the water. “We now know that self-control involves a wide range of responses beyond willpower. To be successful, people high in self-control appear to play offense, not defense, by anticipating and avoiding self-control struggles.”
And when I sat there with those young entrepreneurs, earnestly talking through the ethics of addiction and design, I hadn’t yet met Kathleen Wilkinson, who lost herself in an innocent diversion, costing her and her husband their last years together. I hadn’t seen the ways that our most ancient instincts were being built into emotion-shaping and decision-guidance systems for vast profit. I hadn’t yet discovered that our unconscious tendencies are being amplified and shaped and mutated for profit, by accident and sometimes even on purpose. But even back then, I was frightened by the conversation I witnessed that evening, and, knowing what I know now, I still am.