Rationality - by Steven Pinker

Read: 2023-11-11

Recommend: 6/10

This book explains what rationality is and why rational people can behave irrationally.

Notes

Here are some passages that I highlighted in the book:

  1. Yet as a cognitive scientist I cannot accept the cynical view that the human brain is a basket of delusions. Hunter-gatherers—our ancestors and contemporaries—are not nervous rabbits but cerebral problem solvers. A list of the ways in which we’re stupid can’t explain why we’re so smart: smart enough to have discovered the laws of nature, transformed the planet, lengthened and enriched our lives, and, not least, articulated the rules of rationality that we so often flout.

  2. The San also engage in critical thinking. They know not to trust their first impressions, and appreciate the dangers of seeing what they want to see. Nor will they accept arguments from authority: anyone, including a young upstart, may shoot down a conjecture or come up with his own until a consensus emerges from the disputation.

  3. Three quarters of Americans believe in at least one phenomenon that defies the laws of science, including psychic healing (55 percent), extrasensory perception (41 percent), haunted houses (37 percent), and ghosts (32 percent)—which also means that some people believe in houses haunted by ghosts without believing in ghosts.

  4. The starting point is to appreciate that rationality is not a power that an agent either has or doesn’t have, like Superman’s X-ray vision. It is a kit of cognitive tools that can attain particular goals in particular worlds. To understand what rationality is, why it seems scarce, and why it matters, we must begin with the ground truths of rationality itself: the ways an intelligent agent ought to reason, given its goals and the world in which it lives.

  5. System 1 means snap judgments; System 2 means thinking twice.

  6. If anything lies at the core of rationality, it must surely be logic.

  7. Echoing a famous argument by the philosopher Karl Popper, most scientists today insist that the dividing line between science and pseudoscience is whether advocates of a hypothesis deliberately search for evidence that could falsify it and accept the hypothesis only if it survives.

  8. But probabilities are not about the world; they’re about our ignorance of the world. New information reduces our ignorance and changes the probability. If that sounds mystical or paradoxical, think about the probability that a coin I just flipped landed heads. For you, it’s .5. For me, it’s 1 (I peeked). Same event, different knowledge, different probability. In the Monty Hall dilemma, new information is provided by the all-seeing host.

  9. Vos Savant invited her readers to imagine a variation of the game show with, say, a thousand doors. You pick one. Monty reveals a goat behind 998 of the others. Would you switch to the door he left closed? This time it seems clear that Monty’s choice conveys actionable information. One can visualize him scanning the doors for the car as he decides which one not to open, and the closed door is a sign of his having spotted the car and hence a spoor of the car itself.
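
A quick way to convince yourself of both highlights above is to simulate the game. My own sketch (not from the book), with the number of doors as a parameter so vos Savant's thousand-door variant falls out of the same code:

```python
import random

def monty_hall(n_doors=3, switch=True, trials=100_000):
    """Estimate the win rate of sticking vs. switching.

    Monty opens every unpicked door except one, never revealing the car,
    so switching wins exactly when the first pick was wrong.
    """
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print(monty_hall(3, switch=False))    # ~0.333
print(monty_hall(3, switch=True))     # ~0.667
print(monty_hall(1000, switch=True))  # ~0.999
```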

  10. Once we get into the habit of assigning numbers to unknown events, we can quantify our intuitions about the future.

  11. A class of events described by a conjunction of statements can be more vivid, especially when they spell out a story line we can watch in the theater of our imagination. Intuitive probability is driven by imaginability: the easier something is to visualize, the likelier it seems. This entraps us into what Tversky and Kahneman call the conjunction fallacy, in which a conjunction is more intuitively probable than either of its elements.
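
The fallacy is worth pinning down with numbers of my own (any numbers work): a conjunction can never be more probable than either of its parts, because P(A and B) = P(A) × P(B given A), and P(B given A) is at most 1.

```python
# Hypothetical probabilities for the classic "Linda" example.
p_teller = 0.05                 # P(Linda is a bank teller), assumed
p_feminist_given_teller = 0.90  # P(feminist | teller), assumed high
p_both = p_teller * p_feminist_given_teller
print(p_both, p_both <= p_teller)  # 0.045 True -- holds for any inputs
```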

  12. The shape and shading illusions are not bugs but features.

  13. A mind capable of interpreting the intent of a questioner in context is far from unsophisticated. That’s why we furiously hit “0” and scream “Operator!” into the phone when the bot on a help line reiterates a list of useless options and only a human can be made to understand why we called.

  14. rational pilots know when to discount them and turn their perception over to instruments.

  15. And as excellent as our cognitive systems are, in the modern world we must know when to discount them and turn our reasoning over to instruments—the tools of logic, probability, and critical thinking that extend our powers of reason beyond what nature gave us.

  16. A definition that is more or less faithful to the way the word is used is “the ability to use knowledge to attain goals.” Knowledge in turn is standardly defined as “justified true belief.”

  17. A rational agent must have a goal, whether it is to ascertain the truth of a noteworthy idea, called theoretical reason, or to bring about a noteworthy outcome in the world, called practical reason (“what is true” and “what to do”).

  18. With this definition the case for rationality seems all too obvious: do you want things or don’t you? If you do, rationality is what allows you to get them.

  19. Perfect rationality and objective truth are aspirations that no mortal can ever claim to have attained. But the conviction that they are out there licenses us to develop rules we can all abide by that allow us to approach the truth collectively in ways that are impossible for any of us individually.

  20. The rules are designed to sideline the biases that get in the way of rationality: the cognitive illusions built into human nature, and the bigotries, prejudices, phobias, and -isms that infect the members of a race, class, gender, sexuality, or civilization. These rules include the principles of critical thinking and the normative systems of logic, probability, and empirical reasoning that will be explained in the chapters to come. They are implemented among flesh-and-blood people by social institutions that prevent people from imposing their egos or biases or delusions on everyone else. “Ambition must be made to counteract ambition,” wrote James Madison about the checks and balances in a democratic government, and that is how other institutions steer communities of biased and ambition-addled people toward disinterested truth. Examples include the adversarial system in law, peer review in science, editing and fact-checking in journalism, academic freedom in universities, and freedom of speech in the public sphere. Disagreement is necessary in deliberations among mortals. As the saying goes, the more we disagree, the more chance there is that at least one of us is right.

  21. And ultimately even relativists who deny the possibility of objective truth and insist that all claims are merely the narratives of a culture lack the courage of their convictions. The cultural anthropologists or literary scholars who avow that the truths of science are merely the narratives of one culture will still have their child’s infection treated with antibiotics prescribed by a physician rather than a healing song performed by a shaman. And though relativism is often adorned with a moral halo, the moral convictions of relativists depend on a commitment to objective truth.

  22. One of our goals can be incompatible with the others. Our goal at one time can be incompatible with our goals at other times. And one person’s goals can be incompatible with others’. With those conflicts, it won’t do to say that we should serve and obey our passions. Something has to give, and that is when rationality must adjudicate. We call the first two applications of reason “wisdom” and the third one “morality.” Let’s look at each.

  23. Indeed, some of our apparent goals are not even really our goals—they are the metaphorical goals of our genes. The evolutionary process selects for genes that lead organisms to have as many surviving offspring as possible in the kinds of environments in which their ancestors lived. They do so by giving us motives like hunger, love, fear, comfort, sex, power, and status. Evolutionary psychologists call these motives “proximate,” meaning that they enter into our conscious experience and we deliberately try to carry them out. They can be contrasted with the “ultimate” motives of survival and reproduction, which are the figurative goals of our genes—what they would say they wanted if they could talk.

  24. The psychologist Walter Mischel captured the conflict in an agonizing choice he gave four-year-olds in a famous 1972 experiment: one marshmallow now or two marshmallows in fifteen minutes. Life is a never-ending gantlet of marshmallow tests, dilemmas that force us to choose between a sooner small reward and a later large reward.

  25. As Homer Simpson said to Marge when she warned him that he would regret his conduct, “That’s a problem for future Homer. Man, I don’t envy that guy.”

  26. The preference reversal is called myopic, or nearsighted, because we see an attractive temptation that is near to us in time all too clearly, while the faraway choices are emotionally blurred and (a bit contrary to the ophthalmological metaphor) we judge them more objectively.
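
The reversal drops out of a toy hyperbolic discounting model (my sketch; the discount rate and amounts are invented): value a reward of size A arriving after delay D as A / (1 + kD).

```python
def present_value(amount, delay_days, k=0.05):
    """Hyperbolically discounted value of a delayed reward (toy model)."""
    return amount / (1 + k * delay_days)

# Far away, the later-larger reward wins...
print(present_value(100, 358), present_value(110, 365))  # 5.29 < 5.71
# ...but up close the preference flips: take the $100 now.
print(present_value(100, 0), present_value(110, 7))      # 100.0 > 81.48
```

An exponential discounter (amount × δ^delay) would rank the pair the same at any distance, which is why the reversal is diagnostic of myopia.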

  27. In fact, Odyssean self-control can step up a level and cut off the option to have the option, or at least make it harder to exercise. Suppose the thought of a full paycheck is so tempting that we can’t bring ourselves to fill out the form that authorizes the monthly deduction. Before being faced with that temptation, we might allow our employers to make the choice for us (and other choices that benefit us in the long run) by enrolling us in mandatory savings by default: we would have to take steps to opt out of the plan rather than to opt in. This is the basis for the philosophy of governance whimsically called libertarian paternalism by the legal scholar Cass Sunstein and the behavioral economist Richard Thaler in their book Nudge.

  28. rational agents choose to be ignorant to game their own less-than-rational biases. But sometimes we choose to be ignorant to prevent our rational faculties from being exploited by rational adversaries—to make sure they cannot make us an offer we can’t refuse.

  29. According to the Madman Theory in international relations, a leader who is seen as impetuous, even unhinged, can coerce an adversary into concessions.

  30. The availability bias may affect the fate of the planet. Several eminent climate scientists, having crunched the numbers, warn that “there is no credible path to climate stabilization that does not include a substantial role for nuclear power.” Nuclear power is the safest form of energy humanity has ever used. Mining accidents, hydroelectric dam failures, natural gas explosions, and oil train crashes all kill people, sometimes in large numbers, and smoke from burning coal kills them in enormous numbers, more than half a million per year. Yet nuclear power has stalled for decades in the United States and is being pushed back in Europe, often replaced by dirty and dangerous coal. In large part the opposition is driven by memories of three accidents: Three Mile Island in 1979, which killed no one; Fukushima in 2011, which killed one worker years later (the other deaths were caused by the tsunami and from a panicked evacuation); and the Soviet-bungled Chernobyl in 1986, which killed 31 in the accident and perhaps several thousand from cancer, around the same number killed by coal emissions every day.

  31. Things that happen suddenly are usually bad—a war, a shooting, a famine, a financial collapse—but good things may consist of nothing happening, like a boring country at peace or a forgettable region that is healthy and well fed. And when progress takes place, it isn’t built in a day; it creeps up a few percentage points a year, transforming the world by stealth. As the economist Max Roser points out, news sites could have run the headline 137,000 People Escaped Extreme Poverty Yesterday every day for the past twenty-five years. But they never ran the headline, because there was never a Thursday in October in which it suddenly happened. So one of the greatest developments in human history—a billion and a quarter people escaping from squalor—has gone unnoticed.
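
Roser's headline checks out arithmetically (my one-liner):

```python
print(137_000 * 365 * 25)  # 1,250,125,000: about a billion and a quarter
```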

  32. How can we recognize the genuine dangers in the world while calibrating our understanding to reality? Consumers of news should be aware of its built-in bias and adjust their information diet to include sources that present the bigger statistical picture: less Facebook News Feed, more Our World in Data. Journalists should put lurid events in context. A killing or plane crash or shark attack should be accompanied by the annual rate, which takes into account the denominator of the probability, not just the numerator.
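
Putting the denominator back is one line of arithmetic. A sketch with invented round numbers:

```python
deaths_per_year = 1          # hypothetical: fatal shark attacks in the news
population = 330_000_000     # hypothetical country
print(deaths_per_year / population)  # ~3e-09: a three-in-a-billion annual risk
```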

  33. During the 1995 trial of O. J. Simpson, the football star accused of murdering his wife, Nicole, a prosecutor called attention to his history of battering her. A member of Simpson’s “Dream Team” of defense attorneys replied that very few batterers go on to kill their wives, perhaps one in 2,500. An English professor, Elaine Scarry, spotted the fallacy. Nicole Simpson was not just any old victim of battering. She was a victim of battering who had her throat cut. The relevant statistic is the conditional probability that someone killed his wife given that he had battered his wife and that his wife was murdered by someone. That probability is eight out of nine.
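
The eight-out-of-nine figure is easiest to see in natural frequencies. A sketch following Gigerenzer's well-known reconstruction of this case (his illustrative counts, not Pinker's text): imagine 100,000 battered women in a given year.

```python
battered = 100_000
killed_by_batterer = battered // 2_500  # 40, using the defense's own 1-in-2,500
killed_by_others = 5                    # assumed background murder count
p = killed_by_batterer / (killed_by_batterer + killed_by_others)
print(p)  # 0.888...: about 8 of every 9 murdered battered women
```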

  34. confusing prior with post hoc judgments (also called a priori and a posteriori). The confusion is sometimes called the Texas sharpshooter fallacy, after the marksman who fires a bullet into the side of a barn and then paints a bull’s-eye around the hole.

  35. One exception was Bill Miller, anointed by CNN Money in 2006 as “The Greatest Money Manager of Our Time” for beating the S&P 500 stock market index fifteen years in a row. How impressive is that? One might think that if a manager is equally likely to outperform or underperform the index in any year, the odds of that happening by chance are just 1 in 32,768 (2¹⁵). But Miller was singled out after his amazing streak had unfolded. As the physicist Len Mlodinow pointed out in The Drunkard’s Walk: How Randomness Rules Our Lives, the country has more than six thousand fund managers, and modern mutual funds have been around for about forty years. The chance that some manager had a fifteen-year winning streak sometime over those forty years is not at all unlikely; it’s 3 in 4. The CNN Money headline could have read Expected 15-Year Run Finally Occurs: Bill Miller Is the Lucky One. Sure enough, Miller’s luck ran out, and in the following two years the market “handily pulverized him.”
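
The per-manager figure is just 0.5¹⁵. For the "someone, sometime" probability, here is a crude Monte Carlo of my own (every one of 6,000 managers flipping fair coins for all 40 years, all independent); under these generous assumptions a streak somewhere is near-certain, and Mlodinow's more conservative career model still yields about 3 in 4.

```python
import random

def some_manager_streaks(n_managers=6_000, n_years=40, streak=15, trials=100):
    """Fraction of simulated histories containing at least one 15-year run."""
    hits = 0
    for _ in range(trials):
        found = False
        for _ in range(n_managers):
            run = 0
            for _ in range(n_years):
                run = run + 1 if random.random() < 0.5 else 0
                if run >= streak:
                    found = True
                    break
            if found:
                break
        hits += found
    return hits / trials

print(0.5 ** 15)              # 3.05e-05: 1 in 32,768 for one fixed window
print(some_manager_streaks()) # close to 1 under this crude model
```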

  36. It’s not that the investigators faked their data. It’s that they engaged in what is now known as questionable research practices, the garden of forking paths, and p-hacking (referring to the probability threshold, p, that counts as “statistically significant”). Imagine a scientist who runs a laborious experiment and obtains data that are the opposite of “Eureka!” Before cutting his losses, he may be tempted to wonder whether the effect really is there, but only with the men, or only with the women, or if you throw out the freak data from the participants who zoned out, or if you exclude the crazy Trump years, or if you switch to a statistical test which looks at the ranking of the data rather than their values down to the last decimal place. Or you can continue to test participants until the precious asterisk appears in the statistical printout, being sure to quit while you’re ahead. None of these practices is inherently unreasonable if it can be justified before the data are collected. But if they are tried after the fact, some combination is likely to capitalize on chance and cough up a spurious result. The trap is inherent to the nature of probability and has been known for decades; I recall being warned against “data snooping” when I took statistics in 1974. But until recently few scientists intuitively grasped how a smidgen of data snooping could lead to a boatload of error. My professor half-jokingly suggested that scientists be required to write down their hypotheses and methods on a piece of paper before doing an experiment and safekeep it in a lockbox they would open and show to reviewers after the study was done. The only problem, he noted, was that a scientist could secretly keep several lockboxes and then open the one he knew “predicted” the data. With the advent of the web, the problem has been solved, and the state of the art in scientific methodology is to “preregister” the details of a study in a public registry that reviewers and editors can check for post hoc hanky-panky.
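
A minimal simulation of the garden of forking paths (my sketch: pure noise in, "findings" out): even with no real effect anywhere, letting yourself test a couple of post hoc subgroups pushes the false-positive rate well past the nominal 5 percent.

```python
import random
from statistics import mean, stdev

def t_stat(a, b):
    """Crude Welch-style two-sample statistic; |t| > 2 ~ 'significant'."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return abs(mean(a) - mean(b)) / se

def forked_experiment(n=40):
    a = [random.gauss(0, 1) for _ in range(n)]  # null: no effect exists
    b = [random.gauss(0, 1) for _ in range(n)]
    tests = [t_stat(a, b),                       # the planned test
             t_stat(a[: n // 2], b[: n // 2]),   # "only the men"
             t_stat(a[n // 2 :], b[n // 2 :])]   # "only the women"
    return any(t > 2.0 for t in tests)           # report the best path

rate = sum(forked_experiment() for _ in range(5_000)) / 5_000
print(rate)  # roughly double the 0.05 a single preregistered test would give
```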

  37. Everyone complains about his memory, and no one complains about his judgment.

  38. The theory of rational choice goes back to the dawn of probability theory and the famous argument by Blaise Pascal (1623–1662) on why you should believe in God: if you did and he doesn’t exist, you would just have wasted some prayers, whereas if you didn’t and he does exist, you would incur his eternal wrath. It was formalized in 1944 by the mathematician John von Neumann and the economist Oskar Morgenstern. Unlike the pope, von Neumann really might have been a space alien—his colleagues wondered about it because of his otherworldly intelligence. He also invented game theory (chapter 8), the digital computer, self-replicating machines, quantum logic, and key components of nuclear weapons, while making dozens of other breakthroughs in math, physics, and computer science.

  39. Within rational choice theory, although the outcome of a chancy option cannot be predicted, the probabilities are fixed, like in a casino. This is called risk, and may be distinguished from uncertainty, where the decider doesn’t even know the probabilities and all bets are off. In 2002, the US defense secretary Donald Rumsfeld famously explained the distinction: “There are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” The theory of rational choice is a theory of decision making with known unknowns: with risk, not necessarily uncertainty.

  40. Simon suggested that a flesh-and-blood decider rarely has the luxury of optimizing but instead must satisfice, a portmanteau of “satisfy” and “suffice,” namely settle for the first alternative that exceeds some standard that’s good enough. Given the costs of information, the perfect can be the enemy of the good.
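
Satisficing is a one-line algorithm. My sketch (the names and threshold are invented):

```python
def satisfice(options, evaluate, good_enough):
    """Return the first option that clears the bar; don't evaluate the rest."""
    for option in options:        # options may be costly to generate or judge
        if evaluate(option) >= good_enough:
            return option
    return None                   # nothing was good enough

# Optimizing would be max(options, key=evaluate), which must evaluate every
# option. Satisficing stops early, trading optimality for search cost.
```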

  41. Weighing risks and rewards can, with far greater consequences, also inform medical choices. Doctors and patients alike are apt to think in terms of propensities: cancer screening is good because it can detect cancers, and cancer surgery is good because it can remove them. But thinking about costs and benefits weighted by their probabilities can flip good to bad. For every thousand women who undergo annual ultrasound exams for ovarian cancer, 6 are correctly diagnosed with the disease, compared with 5 in a thousand unscreened women—and the number of deaths in the two groups is the same, 3. So much for the benefits. What about the costs? Out of the thousand who are screened, another 94 get terrifying false alarms, 31 of whom suffer unnecessary removal of their ovaries, of whom 5 have serious complications to boot. The number of false alarms and unnecessary surgeries among women who are not screened, of course, is zero. It doesn’t take a lot of math to show that the expected utility of ovarian cancer screening is negative. The same is true for men when it comes to screening for prostate cancer with the prostate-specific antigen test (I opt out).
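
The passage's counts per 1,000 screened women can be tallied directly. My sketch: the event counts come from the quote, but the utility weights are invented placeholders.

```python
# Per 1,000 women screened vs. unscreened (counts from the passage).
extra_diagnoses = 6 - 5   # one additional cancer detected
deaths_averted = 3 - 3    # zero: mortality is identical in both groups
false_alarms, needless_surgeries, complications = 94, 31, 5

benefit = extra_diagnoses * 1 + deaths_averted * 100   # assumed weights
harm = false_alarms * 0.1 + needless_surgeries * 5 + complications * 20
print(benefit - harm)  # about -263: negative under any plausible weighting
```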

  42. Even when exact numbers are unavailable, there is wisdom to be had in mentally multiplying probabilities by outcomes. How many people have ruined their lives by taking a gamble with a large chance at a small gain and a small chance at a catastrophic loss—cutting a legal corner for an extra bit of money they didn’t need, risking their reputation and tranquility for a meaningless fling? Switching from losses to gains, how many lonely singles forgo the small chance of a lifetime of happiness with a soul mate because they think only of the large chance of a tedious coffee with a bore?

  43. Every statistics student is warned that “statistical significance” is a technical concept that should not be confused with “significance” in the vernacular sense of noteworthy or consequential. But most are misinformed about what it does mean.

  44. “Statistical significance” is a Bayesian likelihood: the probability of obtaining the data given the hypothesis (in this case, the null hypothesis). But each of those statements [the common misreadings of p-values] is a Bayesian posterior: the probability of the hypothesis given the data.

  45. The scientist cannot use a significance test to assess whether the null hypothesis is true or false unless she also considers the prior—her best guess of the probability that the null hypothesis is true before doing the experiment. And in the mathematics of null hypothesis significance testing, a Bayesian prior is nowhere to be found.

  46. Most social scientists are so steeped in the ritual of significance testing, starting so early in their careers, that they have forgotten its actual logic. This was brought home to me when I collaborated with a theoretical linguist, Jane Grimshaw, who tutored herself in statistics and said to me, “Let me get this straight. The only thing these tests show is that when some effect doesn’t exist, one of every twenty scientists looking for it will falsely claim it does. What makes you so sure it isn’t you?” The honest answer is: Nothing.
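
Grimshaw's question is Bayes' rule in disguise. A sketch with made-up but typical numbers: if only 10 percent of the hypotheses a field tests are true, power is 80 percent, and α is .05, then over a third of "significant" findings are false alarms.

```python
prior = 0.10  # assumed share of tested hypotheses that are actually true
power = 0.80  # assumed P(significant | real effect)
alpha = 0.05  # P(significant | null is true)

p_true_given_sig = power * prior / (power * prior + alpha * (1 - prior))
print(p_true_given_sig)  # 0.64 -- the posterior the p-value alone can't give
```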

  47. replicability imbroglio

  48. Many of our conventions and standards are solutions to coordination games, with nothing to recommend them other than that everyone has settled on the same ones. Driving on the right, taking Sundays off work, accepting paper currency, adopting technological standards (110 volts, Microsoft Word, the QWERTY keyboard) are equilibria in coordination games. There may be higher payoffs with other equilibria, but we remain locked into the ones we have because we can’t get there from here. Unless everyone agrees to switch at once, the penalties for discoordination are too high.
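
The lock-in is visible in a two-player toy (my payoffs, not the book's): both conventions are equilibria, so even when one pays more, no individual can profitably switch alone.

```python
# (row payoff, column payoff); suppose "left" is the better convention.
payoff = {("right", "right"): (1, 1), ("left", "left"): (2, 2),
          ("right", "left"): (-10, -10), ("left", "right"): (-10, -10)}
flip = {"right": "left", "left": "right"}

def is_equilibrium(r, c):
    """Neither player gains by unilaterally deviating."""
    return (payoff[(r, c)][0] >= payoff[(flip[r], c)][0]
            and payoff[(r, c)][1] >= payoff[(r, flip[c])][1])

print([cell for cell in payoff if is_equilibrium(*cell)])
# [('right', 'right'), ('left', 'left')]: both stick, even the worse one
```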

  49. The rational strategy in the midst of an Escalation Game is to cut your losses and bow out with a certain probability at each move, hoping that the other bidder, being equally rational, might fold first. It’s captured in the saying “Don’t throw good money after bad” and in the First Law of Holes: “When you’re in one, stop digging.” One of the most commonly cited human irrationalities is the sunk-cost fallacy, in which people continue to invest in a losing venture because of what they have invested so far rather than in anticipation of what they will gain going forward. Holding on to a tanking stock, sitting through a boring movie, finishing a tedious novel, and staying in a bad marriage are familiar examples.

  50. Prisoner’s Dilemmas are common tragedies. A divorcing husband and wife hire legal barracudas, each fearing the other will take them to the cleaners, while the billable hours drain the marital assets. Enemy nations bust their budgets in an arms race, leaving them both poorer but no safer. Bicycle racers dope their blood and corrupt the sport because otherwise they would be left in the dust by rivals who doped theirs. Everyone crowds a luggage carousel, or stands up at a rock concert, craning for a better view, and no one ends up with a better view.
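
Each of these is the same 2×2 payoff structure. A sketch with the standard textbook numbers (not Pinker's): defecting is each player's best move no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation.

```python
# (my payoff, their payoff): C = cooperate, D = defect
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

for theirs in ("C", "D"):
    best = max(("C", "D"), key=lambda mine: payoff[(mine, theirs)][0])
    print(f"If they play {theirs}, my best reply is {best}")
# D is best either way, yet mutual D pays (1, 1) -- worse than mutual C's (3, 3).
```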

  51. In a poignant environmental version called the Tragedy of the Commons, every shepherd has an incentive to add one more sheep to his flock and graze it on the town commons, but when everyone fattens their flock, the grass is grazed faster than it can regrow, and all the sheep starve. Traffic and pollution work the same way.

  52. Evading taxes, stinting when the hat is passed, milking a resource to depletion, and resisting public health measures like social distancing and mask-wearing during a pandemic, are other examples of defecting in a Public Goods game: they offer a temptation to those who indulge, a sucker’s payoff to those who contribute and conserve, and a common punishment when everyone defects.

  53. the Tragedy of the Carbon Commons. The players can be individual citizens, with the burden consisting of the inconvenience of forgoing meat, plane travel, or gas-guzzling SUVs. Or they can be entire countries, in which case the burden is the drag on the economy from forgoing the cheap and portable energy from fossil fuels.

  54. In 2020 Jeff Bezos bragged, “All of my best decisions in business and in life have been made with heart, intuition, guts . . . not analysis,” implying that heart and guts lead to better decisions than analysis. But he did not tell us whether all of his worst decisions in business and life were also made with heart, intuition, and guts, nor whether the good gut decisions and bad analytic ones outnumbered the bad gut decisions and good analytic ones.

  55. These are epiphenomena, also known as confounds or nuisance variables: they accompany but do not cause the event. Epiphenomena are the bane of epidemiology. For many years coffee was blamed for heart disease, because coffee drinkers had more heart attacks. It turned out that coffee drinkers also tend to smoke and avoid exercise; the coffee was an epiphenomenon.
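
The coffee story reconstructs in a few lines (my sketch, with invented effect sizes): smoking causes both coffee drinking and heart disease, coffee causes nothing, yet the raw comparison makes coffee look dangerous until you stratify by the confounder.

```python
import random

def person():
    smokes = random.random() < 0.3
    coffee = random.random() < (0.7 if smokes else 0.3)     # smoking -> coffee
    disease = random.random() < (0.20 if smokes else 0.05)  # smoking -> disease
    return smokes, coffee, disease   # note: no arrow from coffee to disease

people = [person() for _ in range(100_000)]

def disease_rate(group):
    return sum(d for _, _, d in group) / max(len(group), 1)

print(disease_rate([p for p in people if p[1]]),      # coffee drinkers: ~0.125
      disease_rate([p for p in people if not p[1]]))  # abstainers:      ~0.073

for s in (True, False):  # stratify by smoking and the coffee "effect" vanishes
    strata = [p for p in people if p[0] == s]
    print(disease_rate([p for p in strata if p[1]]),
          disease_rate([p for p in strata if not p[1]]))
```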

  56. Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure. —George Carlin

  57. many superstitions originate in overinterpreting coincidences, failing to calibrate evidence against priors, overgeneralizing from anecdotes, and leaping from correlation to causation.

  58. As Upton Sinclair pointed out, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

  59. In a recent diagnosis, a team of social scientists concluded that the sides are less like literal tribes, which are held together by kinship, than religious sects, which are held together by faith in their moral superiority and contempt for opposing sects. The rise of political sectarianism in the United States is commonly blamed (like everything else) on social media, but its roots lie deeper. They include the fractionation and polarization of broadcast media, with partisan talk radio and cable news displacing national networks; gerrymandering and other geographic distortions of political representation, which incentivize politicians to cater to cliques rather than coalitions; the reliance of politicians and think tanks on ideologically committed donors; the self-segregation of educated liberal professionals into urban enclaves; and the decline of class-crossing civil-society organizations like churches, service clubs, and volunteer groups.

  60. Any fair-weather friend can say the world is round, but only a blood brother would say the world is flat, willingly incurring ridicule by outsiders.

  61. Unfortunately, what’s rational for each of us seeking acceptance in a clique is not so rational for all of us in a democracy seeking the best understanding of the world. Our problem is that we are trapped in a Tragedy of the Rationality Commons.

  62. As the novelist Philip K. Dick wrote, reality is that which, when you stop believing in it, doesn’t go away.

  63. A scientific education is supposed to stifle these primitive intuitions, but for several reasons its reach is limited. One is that beliefs that are sacred to a religious or cultural faction, like creationism, the soul, and a divine purpose, are not easily surrendered, and they may be guarded within people’s mythology zone. Another is that even among the highly educated, scientific understanding is shallow. Few people can explain why the sky is blue or why the seasons change, let alone population genetics or viral immunology. Instead, educated people trust the university-based scientific establishment: its consensus is good enough for them.

  64. Universities have turned themselves into laughingstocks for their assaults on common sense (as when a professor was recently suspended for mentioning the Chinese pause word ne ga because it reminded some students of the racial slur).

  65. These are some of the reasons to believe that failures of rationality have consequences in the world. Can the damage be quantified? The critical-thinking activist Tim Farley tried to do that on his website and Twitter feed named after the frequently asked question “What’s the Harm?” Farley had no way to answer it precisely, of course, but he tried to awaken people to the enormity of the damage wreaked by failures of critical thinking by listing every authenticated case he could find. From 1970 through 2009, but mostly in the last decade in that range, he documented 368,379 people killed, more than 300,000 injured, and $2.8 billion in economic damages from blunders in critical thinking. They include people killing themselves or their children by rejecting conventional medical treatments or using herbal, homeopathic, holistic, and other quack cures; mass suicides by members of apocalyptic cults; murders of witches, sorcerers, and the people they cursed; guileless victims bilked out of their savings by psychics, astrologers, and other charlatans; scofflaws and vigilantes arrested for acting on conspiratorial delusions; and economic panics from superstitions and false rumors.

  66. The prodding of cognitive reflection by analogizing a protected group with a vulnerable one is a common means by which moral persuaders have awakened people to their biases and bigotries. The philosopher Peter Singer, an intellectual descendant of Bentham and today’s foremost proponent of animal rights, calls the process “the expanding circle.”