Joshua Greene. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (2013)

My notes on the book.


In a paragraph

Our moral instincts have evolved for cooperation in groups but cause conflict between groups. We have both automatic moral emotions and the capacity to employ utilitarian reasoning, and we should use utilitarianism as a common currency to resolve disputes. Rather than rationalizing our intuitive moral convictions, we should transcend the limitations of our tribal gut reactions.

 

Key points

  • There are two major kinds of moral problems, Me versus Us and Us versus Them. The moral machinery in our brains solves the first problem and creates the second problem. 

 

  • The result is The Tragedy of Commonsense Morality rather than The Tragedy of the Commons. The book starts with the Parable of the New Pastures, which shows conflict between different moral tribes, especially between the individualistic Northern Tribe and the collectivist Southern Tribe. The tribes fight because they selfishly favour themselves, because their thinking is biased, and because they see the world through different moral lenses.

 

  • The moral brain is like a dual-mode camera with both automatic settings and a manual mode. The automatic settings are the moral emotions while manual mode uses a general capacity for practical reasoning. This “dual-process” morality reflects the general structure of the human mind.  We should think fast about Me-Us and slow about Us-Them.

 

  • We need a Common Currency, a metamorality, a global moral philosophy that can adjudicate among competing tribal moralities. This should be utilitarianism, a good idea with a poor name – a more apt name would be Deep Pragmatism. There are intuitively compelling arguments against utilitarianism, but utilitarianism becomes more attractive the better we understand our dual-process moral brains. Utilitarianism, properly understood, bears little resemblance to its many caricatures. 

 

  • It’s plausible that the goodness and badness of everything ultimately cashes out in terms of the quality of people’s experience. Happiness is the value that gives other values their value. Utilitarianism is not a decision procedure but a theory about what matters. Utilitarianism combines the Golden Rule’s impartiality with the common currency of human experience. This yields a moral system that can acknowledge moral trade-offs and adjudicate among them, and it can do so in a way that makes sense to members of all tribes.

 

  • Morality is a collection of devices, a suite of psychological capacities and dispositions that together promote and stabilize cooperative behaviour. This includes non-aggression and friendship. Love is a highly specialized piece of psychological machinery, an emotional straitjacket to create commitment and enable cooperative parenting.

 

  • Automatic settings are heuristic devices that are rather inflexible, and therefore likely to be unreliable, at least in some contexts. Greene’s psychological studies around the Trolley Problem suggest that an intuitive alarm goes off when personal force is used as a means.  The Doctrine of Double Effect and the Acts/Omissions Distinction are attempts to codify such intuitions.  It’s good that we’re alarmed by acts of violence, but the automatic emotional gizmos in our brains are not infinitely wise. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. 

 

  • Invoking rights stops discussion and enquiry. Rights attempt to turn intuition into something objective. They cannot be justified, but they are helpful once a matter is settled.

 

  • Applying utilitarianism to abortion suggests grey areas as a foetus becomes more like a baby. But there is no evidence for “ensoulment” at conception beyond faith. And the argument that we should create more babies applies to abstinence as well.

 

  • Greene follows Haidt, but considers that the liberal moral taste, though narrower, is more refined, while the right’s moral taste is more tribal.

 

  • Morality can climb the ladder of evolution and then kick it away. Donating to faraway strangers and using birth control are good, even though from a biological point of view they are glitches.

 

  • We can rationalize our intuitive moral convictions, or we can transcend the limitations of our tribal gut reactions. Aristotle and Kant are both tribal philosophers while Bentham and Mill used reasoning to transcend our intuitions. We have a marvellous ability to question the moral laws written in our hearts and replace them with something better.  

 

Comments 

I loved this book.  It is beautifully written and presented and full of memorable phrases.  It may be the best argument I have seen for an idea close to my heart – that we should use more utilitarian-style reasoning.

The book title is rather misleading. While understanding conflict between moral groups is a theme, the deeper issue is the dual nature of our moral functioning, which combines automatic moral emotions with manual-mode reasoning about achieving good outcomes. Greene argues that our automatic emotions, although important, are not infallible, and that we should not treat our intuitions about unusual circumstances as reasons to dismiss utilitarianism. Instead, we should use utilitarian reasoning when slow thinking is required. I think this is a general point applicable to all moral decisions, not just inter-group disputes. Indeed, it applies to all practical thinking, not just morality – in deciding what to do, sometimes we should use reasoning to transcend our instincts.

 

Notes and Highlights from the Book

 

Introduction: The Tragedy of Commonsense Morality

The Tragedy of Commonsense Morality, illustrated by this book’s first organizing metaphor, the Parable of the New Pastures.

Eastern tribe. Each family gets the same number of sheep.

Western tribe. The size of a family’s flock is determined by the family’s size.

Northern tribe.  Each family has its own plot of land, surrounded by a fence.

Southern tribe.  Share not only their pasture but their animals, too. Their council of elders is very busy.

Conflict, partly value-related, over the New Pastures.

One tribe claimed the new pastures as a gift to them from their god.

Tribes with rules and customs that seemed to outsiders rather strange, if not downright ridiculous.

Yet most people are moral and want the same things.

The central tragedy of modern life

An attempt to understand morality from the ground up.

Understanding the deep structure of moral problems as well as the differences between the problems that our brains were designed to solve and the distinctively modern problems we face today.

Taking this new understanding of morality and turning it into a universal moral philosophy that members of all human tribes can share.

I started developing these ideas in my late teens, and they’ve taken me through two interwoven careers.

But Congressman, are you saying that society should just let him die?

I refer to contemporary liberals as “more Southerly.”

There is nobody in this country who got rich on their own. (Elizabeth Warren)

Some tribes grant special authority to specific gods, leaders, texts, or practices — what one might call “proper nouns.”

For all but the past ten thousand years of our existence, it didn’t look like we’d amount to much. Yet here we are, sitting in our climate-controlled, artificially illuminated homes, reading and writing books about ourselves.

We’ve plenty of room for improvement. The twentieth century was the most peaceful on record (controlling for population growth), yet its wars and assorted political conflicts killed approximately 230 million people.

Two problems that may severely disrupt, or even reverse, our trend toward peace and prosperity: the degradation of our environment and the proliferation of weapons of mass destruction.

We can improve our prospects for peace and prosperity by improving the way we think about moral problems.  Over the past few centuries, new moral ideas have taken hold in human brains.

Some tribes have become a lot less tribal.

Steven Pinker asks: What are we doing right? And how can we do better? What we lack, I think, is a coherent global moral philosophy, one that can resolve disagreements among competing moral tribes. The idea of a universal moral philosophy is not new. It’s been a dream of moral thinkers since the Enlightenment. But it’s never quite worked out. What we have instead are some shared values, some unshared values, some laws on which we agree, and a common vocabulary.

There are two major kinds of moral problems.  Me versus Us and Us versus Them.

The moral machinery in our brains solves the first problem and creates the second problem.

This book’s second organizing metaphor: The moral brain is like a dual-mode camera with both automatic settings and a manual mode.

The moral brain’s automatic settings are the moral emotions.

Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems.

Moral thinking is shaped by both emotion and reason and this “dual-process” morality reflects the general structure of the human mind.

The third and final organizing metaphor: Common Currency. Our search for a metamorality, a global moral philosophy that can adjudicate among competing tribal moralities.

A philosophy known (rather unfortunately) as utilitarianism.

Intuitively compelling arguments against utilitarianism.

Utilitarianism becomes more attractive the better we understand our dual-process moral brains.

Give it a better name. A more apt name for utilitarianism is Deep Pragmatism.

We can rationalize our intuitive moral convictions, or we can transcend the limitations of our tribal gut reactions.

I’ll make the case for transcendence, for getting beyond point-and-shoot morality, and for changing the way we think and talk about the problems that divide us.

 

Part I. Moral Problems

Garrett Hardin, “The Tragedy of the Commons” (1968).

Problem of cooperation.  Accomplish things together that they can’t accomplish by themselves.

This principle has guided the evolution of life on earth from the start. Approximately four billion years ago, molecules joined together to form cells.

Humans, by cooperating with one another, have become the earth’s dominant species.

Nearly all cooperative enterprises involve at least some tension between self-interest and collective interest, between Me and Us. And thus, nearly all cooperative enterprises are in danger of eroding, like the commons in Hardin’s parable.

Even the most basic form of decency, nonaggression, is a form of cooperation, and not to be taken for granted.

The tension between individual and collective interest arises not only between us but within us.  Some of the cells in an animal’s body start pulling for themselves instead of for the team, a phenomenon known as cancer.

After Darwin, human morality became a scientific mystery.

Morality evolved as a solution to the problem of cooperation

Morality is a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation.

Our moral brains evolved for cooperation within groups

Universal cooperation is inconsistent with the principles governing evolution by natural selection. I wish it were otherwise, but there’s no escaping this conclusion.

Cooperation evolves, not because it’s “nice” but because it confers a survival advantage.

Morality can climb the ladder of evolution and then kick it away.  As an analogy, consider the invention of birth control. In the same way, we can take morality in new directions that nature never “intended.” We can, for example, donate money to faraway strangers without expecting anything in return. From a biological point of view, this is just a backfiring glitch, much like the invention of birth control. But from our point of view, as moral beings who can kick away the evolutionary ladder, it may be exactly what we want. Morality is more than what it evolved to be.

The tribes themselves are divided by their moral ideals.

Morality did not evolve to promote universal cooperation. On the contrary, it evolved as a device for successful intergroup competition.

What we in the modern world need, then, is something like morality but one level up. We need a kind of thinking that enables groups with conflicting moralities to live together, a metamorality.

Identifying universal moral principles has been a dream of moral philosophy since the Enlightenment.

The problem, I think, is that we’ve been looking for universal moral principles that feel right, and there may be no such thing. What feels right may be what works at the lower level (within a group) but not at the higher level (between groups).

We may need to think in new and uncomfortable ways.

Morality is a collection of devices, a suite of psychological capacities and dispositions that together promote and stabilize cooperative behaviour.

The Prisoner’s Dilemma, like the Tragedy of the Commons, involves a tension between individual interest and collective interest.
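The tension can be made concrete with the standard Prisoner’s Dilemma payoff structure. A minimal Python sketch, using the conventional illustrative payoff numbers (my own addition, not figures from the book): whatever the other player does, each player does better individually by defecting, yet mutual cooperation beats mutual defection collectively.

```python
# Payoff to "me" given (my_move, their_move); C = cooperate, D = defect.
# Standard illustrative values: temptation 5 > reward 3 > punishment 1 > sucker 0.
payoffs = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

# Individual interest: defection dominates, whatever the other player does.
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]

# Collective interest: both cooperating beats both defecting.
both_cooperate = payoffs[("C", "C")] + payoffs[("C", "C")]  # 6
both_defect = payoffs[("D", "D")] + payoffs[("D", "D")]     # 2
assert both_cooperate > both_defect
```

Each player's dominant strategy (defect) leads to the outcome that is worse for everyone — the same logic that erodes Hardin's commons.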

Nature’s purposes need not be revealed in our experience. Sex, for example, is primarily about making babies, but that’s not necessarily what motivates people to do the deed.

If the idea of friendship as a cooperation device seems strange, that may be because of the unusually good times in which we live. In the feast-and-famine world of our hunter-gatherer ancestors, having friends who were willing to have you over for dinner wasn’t just a nicety but a matter of life and death.

Many soldiers couldn’t bring themselves to shoot at strangers, even ones who were trying to kill them. From this experience, the U.S. military concluded that soldiers need to have their reluctance to kill trained out of them — the birth of modern military training.

We shudder at the thought of behaving violently toward innocent people.

Our basic decency extends beyond nonaggression to positive acts of kindness.

Watching another person experience pain, for example, engages the same emotion-related neural circuits that are engaged when one experiences pain oneself, and the brains of people who report having high levels of empathy toward others exhibit this effect more strongly.

Maternal care. Oxytocin.  Caring in nonhuman primates.

Because we care about one another, because our individual payoffs are not the only ones that matter to us, we can more easily get ourselves into the magic corner.

Vengeful behaviours are, or can be, a kind of rational irrationality.

Love as a mechanism.  You’re a great catch, but there is bound to be someone out there who’s got everything you’ve got plus a little more. Knowing that your partner might someday meet such a person, you’d be reassured by the knowledge that your partner isn’t going to leave you as soon as something better comes along.  Only love provides the kind of loyalty you need in order to take the parenting plunge. Love appears to be more than just an intense form of caring. It’s a highly specialized piece of psychological machinery, an emotional straitjacket that enables cooperative parenting by assuring our parenting partners that they won’t be abandoned.

Good foot soldiers have the virtues of loyalty and humility.

Reputations. Gossiping.  Gossip seems to happen automatically. For many people, not gossiping requires great effort.

Embarrassment was designed to play precisely this kind of signalling role, restoring one’s social standing by signalling a genuine desire to behave differently in the future.

Judgmental as babies. All twelve six-month-olds reached out for the helpful toy.

To ferret out Ephraimite refugees, the Gileadite guards employed a simple test: They asked travellers seeking passage to pronounce the Hebrew word shibboleth.

According to the Bible, forty-two thousand Ephraimites were killed because they couldn’t say sh.

Humans are predisposed from an early age to use the original shibboleths — linguistic cues — as markers of group identity and as a basis for social preference. Race, far from being an innate trigger, is just something that we happen to use today as a marker of group membership. Gender-based categorization, as compared with race-based categorization, should be harder to change.

Our brains are wired for tribalism. We intuitively divide the world into Us and Them, and favour Us over Them. We begin as infants, using linguistic cues, which historically have been reliable markers of group membership. We discriminate based on race (among other things), but race is not a deep, innate psychological category. Rather, it’s just one among many possible markers for group membership.

Our brains include many different circuits that compete for control of behaviour, some of which are more modifiable than others.

There are several complementary strategies for getting otherwise selfish individuals into the magic, cooperative corner.

The faster people decided, the more they cooperated, consistent with the idea that cooperation is intuitive

Our social instincts are great at averting the Tragedy of the Commons but this is not the only tragedy.

In calling our brains’ psychological tools for cooperation “moral machinery,” I’m not saying that this machinery is used exclusively to promote cooperation.

In calling this psychological machinery “moral,” I am not endorsing it, at least not all of it. On the contrary, as we’ll see shortly, I believe that our moral machinery gets us into a lot of unnecessary trouble.

These features of our psychology, many of which are not especially admirable, are parts of an organic whole — a suite of psychological adaptations that evolved to enable cooperation.

Out of evolutionary dirt grows the flower of human goodness.

Some genuine moral disagreements are essentially matters of emphasis.

The tribes of the new pastures fight in part because each tribe selfishly favours Us over Them and in part because different tribes see the world through different moral lenses.  Tribes that are moral, but differently moral.

Little has been more pernicious in international politics than excessive righteousness.

Philosophical dilemmas that pit “the heart” against “the head.”

 

Part II. Morality Fast and Slow

People who generally favour effortful thinking over intuitive thinking are more likely to make utilitarian judgments

The trade-off between efficiency and flexibility.

We humans, in contrast, lead much more complicated lives, which is why we need a manual mode.

There are two distinct systems at work in the snacking brain. There is a more basic appetitive system that says “Gimme! Gimme! Gimme!” (automatic settings) and then a more controlled, deliberative system that says “Stop. It’s not worth the calories” (manual mode).

The controlled system, manual mode, considers the big picture, including both present and future rewards, but the automatic system cares only about what it can get right now.

When the manual mode is occupied with other business, the automatic response gets its way more easily.

Having both a decent appetite and the capacity for restraint.

For white people who don’t want to be racist, interacting with a black person imposes a kind of cognitive load

For most of the things that we do, our brains have automatic settings that tell us how to proceed. But we can also use our manual mode to override those automatic settings.

Our automatic settings can be, and typically are, very smart

The brain’s automatic settings work best when they have been “manufactured” based on lessons learned from past experience.

 

Part III.  Common Currency

Cooperation between groups is thwarted by tribalism (group-level selfishness), disagreements over the proper terms of cooperation (individualism or collectivism?), commitments to local “proper nouns” (leaders, gods, holy books), a biased sense of fairness, and a biased perception of the facts.

Even if the relativist is right about the nonexistence of moral truth, there is no escape from moral choice.

The herders should simply do whatever works best. Deep pragmatism.

The original utilitarians refused to accept practices and policies as right simply because of tradition, or because they seemed intuitively right to most people, or because it was “the natural order of things.” They argued that slavery is wrong not because God opposes it, but because whatever good it may yield (for example, in terms of economic productivity) is vastly outweighed by the misery it produces.

Utilitarianism is widely misunderstood. The trouble begins with its awful name, suggesting a preoccupation with the mundanely functional. (The “utility room” is where one does the laundry.) Replacing “utility” with “happiness” is a step in the right direction, but this, too, is misleading. What utilitarian philosophers mean by “happiness” is far broader than what we think of when we think about “happiness.”

It seemed to me there was further conceptual tidying to be done.

It’s plausible that the goodness and badness of everything ultimately cashes out in terms of the quality of people’s experience.

If it doesn’t affect someone’s experience, then it doesn’t really matter.

 

To value happiness is to value everything that improves the quality of experience, for oneself and for others — and especially for others whose lives leave much room for improvement.

It’s not that happiness beats out the other values on the list. Happiness, properly understood, encompasses the other values. Happiness is the ur-value, the Higgs boson of normativity, the value that gives other values their value.

It’s not unreasonable to think of happiness as occupying a special place among our values, as more than just another item on the list.

Happiness is the common currency of human values.

In defending the noble life, Mill appealed to immediate self-interest (“Really, it’s a much better high!”), when he should have instead appealed to the greater good.

Thus, even if you reject the utilitarian idea that happiness is all that ultimately matters, as long as you think that happiness matters to some extent, you need to measure it, too!

Let our instincts carry us past the moral temptations of everyday life (Me vs. Us) but to engage in explicit utilitarian thinking when we’re figuring out how to live on the new pastures (Us vs. Them).

Utilitarianism is not, at the most fundamental level, a decision procedure. It is, instead, a theory about what matters at the most fundamental level, about what’s worth valuing and why.

Utilitarianism combines the Golden Rule’s impartiality with the common currency of human experience. This yields a moral system that can acknowledge moral trade-offs and adjudicate among them, and it can do so in a way that makes sense to members of all tribes.

The Tragedy of Commonsense Morality is a tragedy of moral inflexibility.

There seems to be a connection between manual-mode thinking and utilitarian thinking.

Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode.

First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality.

The original utilitarians took the famously ambiguous Golden Rule  — which captures the idea of impartiality — and gave it teeth by coupling it with a universal moral currency, the currency of experience.

We all have (or can be made to have) an “if all else is equal” commitment to increasing happiness — this has profound moral implications. It means that we have some very substantial moral common ground.

If we drop the “if all else is equal” qualifier, we get utilitarianism.

Should we drop the “if all else is equal” clause and simply aim for maximum happiness?

It’s just the philosophical tip of a much deeper psychological and biological iceberg.

We’re not all utilitarians — far from it. But we all “get” utilitarianism.

It’s as if there’s one part of our brain that thinks utilitarianism makes perfect sense and other parts of our brain that are horribly offended by it.

Utilitarianism is the native philosophy of the human manual mode

A problem solver begins with an idea (a representation) of how the world could be and then operates (behaves) on the world so as to make the world that way

The human manual mode, housed in the PFC, is a general-purpose problem solver, an optimizer of consequences.

Recognizing that there is no objective reason to favour oneself over others does not entail abandoning one’s subjective reasons for favouring oneself

Human empathy is fickle and limited, but our capacity for empathy may provide an emotional seed that, when watered by reasoning, flowers into the ideal of impartial morality.

The ideal of impartiality has taken hold in us (we who are in on this conversation) not as an overriding ideal but as one that we can appreciate.

First, the human manual mode is, by nature, a cost-benefit reasoning system that aims for optimal consequences. Second, the human manual mode is susceptible to the ideal of impartiality.

Maximize happiness impartially.  Is it a misguided oversimplification of morality?

Me: Perhaps it is less a morality than an axiology and an attitude

 

Part IV. Moral Convictions

Is the problem with utilitarianism or with us?

The classic objections to utilitarianism are very intuitive. Unfortunately, the best replies to these objections are not at all intuitive.

Our negative reaction to pushing comes from a gut reaction

Automatic settings are heuristic devices that are rather inflexible, and therefore likely to be unreliable, at least in some contexts

Doctrine of Double Effect. Aquinas. Ignore side effects, collateral damage. If you harm someone as a means and you use personal force, the action seems wrong to most people.

The doctrine is just an (imperfect) organizing summary of those intuitive judgments

Modular myopia hypothesis.  The module monitors our behavioural plans and sounds an emotional alarm bell when we contemplate harming other people.  This alarm system is “myopic,” because it is blind to harmful side effects

These limitations make us emotionally blind — but not cognitively blind — to certain kinds of harm

We acquired manual-mode reasoning and planning.

A creature that can dream up new ways of achieving its goals, is a very dangerous kind of creature, especially if that creature can use tools

Branching action plans.

Doctrine of Doing and Allowing

Representations of actions are more basic and accessible.

Infants could grasp the idea of “choosing the blue mug,” but they couldn’t grasp the idea of “not choosing the blue mug.”

Representations of omissions are inherently abstract

Harmful omissions don’t push our emotional moral buttons in the same way that harmful actions do

Once again, it seems that a hallowed moral distinction may simply be a cognitive by-product

People who are less willing to trust their intuition (more “cognitively reflective”)

It responds to harms that are specifically intended. It responds more to harm caused actively, rather than passively. It responds more to harm caused directly by personal force. It responds negatively to prototypically violent acts, independent of whatever benefits those acts may produce

How seriously should we take the advice we get from this gizmo?

In general, I think we should take its advice very seriously.  Without the gizmo, we would all be more psychopathic. This alarm system also provides a good hedge against overconfidence and bias.

It makes little sense to regard it as infallible and to elevate its operating characteristics into moral principles.

It makes sense for each of us to take special responsibility for our own actions

Omissions vastly outnumber actions

The alarm draws the line in the wrong place, treating foreseen side effects as if they were accidents

There are many good reasons to be glad that we have this antiviolence gizmo in our brains. But our key question is this: Should we let the gizmo determine our overarching moral philosophy?

This dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good

The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good

Our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible

I would approve of this action, although I might be suspicious of the person

Just as visual illusions reveal the structure of visual cognition, bizarre moral dilemmas reveal the structure of moral cognition.

If harming the environment felt like pushing someone off a footbridge, our planet might be in much better shape

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise.

It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy

Sometimes we’ll accommodate, arguing that maximizing happiness in the real world doesn’t have these absurd conclusions.

Other times we’ll argue for reform, using our cognitive and evolutionary understanding of moral psychology to cast doubt on our intuitive sense of justice.

If you want to save someone’s life, you can probably do it for about $2,500

Even if the world’s best humanitarian organizations were to frivolously waste half of their money (which they don’t), we’d still be on the hook, because merely doubling the cost of helping doesn’t fundamentally change the math.

Today, there’s no denying that you can — if you want to — use your money to help people who desperately need help

Being a perfect utilitarian requires forsaking almost everything you want out of life and turning yourself into a happiness machine.

At least I have the decency to admit that I’m a hypocrite! 

Once we’ve taken care of our own needs — broadly construed — we must face up to our moral opportunities

Identifiable victim effect. Statistical death.

Our capacity for empathy may be the most quintessentially moral feature of our brains

It would be foolish to let the inflexible operating characteristics of our empathy gizmos serve as foundational moral principles

Homo selfishus. Homo justlikeus. Homo utilitus.

When we imagine Homo utilitus, we should imagine heroes, not drones

Man mistakes his species-typical moral limitations for ideal values

Our intuitive sense of justice is a set of heuristics: moral machinery that’s very useful but far from infallible. We have a taste for punishment. This taste, like all tastes, is subtle and complicated

Sometimes our tastes make fools of us. Our tastes for fat and sugar make us obese

Remarkable coincidence: how strange if the true principles of justice just happen to coincide with the feelings produced by our punishment gizmos.

We see punishment as inherently worthy and not just a means to better behaviour, much as we experience food as inherently tasty and not just a means to nutrition. The enjoyment we get from food is typically harmless, but making people suffer is never harmless. Thus, we should be wary of punishment that tastes good but does more harm than good

The first key utilitarian idea is the primacy of experience.  The second idea, once again, is that everyone’s experience counts equally.

Experience — happiness and unhappiness, broadly construed — is the utilitarian currency. But the word utility suggests something more like “useful stuff.”

It’s easy to mistake utilitarianism for “wealthitarianism.”

We don’t ordinarily quantify the quality of our experiences.  [Important, why?]

People have a very hard time thinking clearly about utility. On the one hand, people understand that stuff and utility are different.

 

Part V. Moral Solutions

Our laws have to say something. We have to choose, and unless we’re content to flip coins, or allow that might makes right, we must choose for reasons.

There are utilitarian reasons to reject all of these naive, pseudo-utilitarian ways.

Utilitarianism, properly understood, bears little resemblance to its many caricatures

Properly understood, and wisely applied, utilitarianism is deep pragmatism

It’s our second moral compass, and our best guide to life on the new pastures

Our moral emotions — our automatic settings — are generally good at restraining simple selfishness, at averting the Tragedy of the Commons

When it’s the Tragedy of Commonsense Morality — when it’s Us versus Them — it’s time to stop trusting your gut feelings and shift into manual mode

For a straightforward moral transgression, such as fraud or murder, there is a moral problem, but there is no moral controversy

When tribes disagree, it’s almost always because their automatic settings say different things

Here we can’t get by with common sense, because our common sense is not as common as we think

When we think about divisive moral problems, our first instinct is to think of all the ways in which We are right and They are wrong

“The illusion of explanatory depth.” “Confabulation.”

Kant had the same automatic settings as his surrounding tribespeople. But Kant, unlike them, felt the need to provide esoteric justifications for their “popular prejudices.”

Rationalization is the great enemy of moral progress, and thus of deep pragmatism

Appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant.

The rights and the duties follow the emotions.  Nonnegotiable, reflecting the inflexibility of our automatic settings.

Such feelings can be overridden, but the feeling itself is, so to speak, unwilling to negotiate

Rights and duties are absolute — except when they’re not

We embattled moralists love the language of rights and duties because it presents our subjective feelings as perceptions of objective facts.

Our subjective feelings often feel like perceptions of things that are “out there,” even when they are not

Sexiness is in the mind of the beholder.  It’s natural to describe someone as “sexy,” rather than as “provoking sexual desire in people like me.”

Rights and duties are the manual mode’s attempt to translate elusive feelings into more object-like things

It represents such feelings as perceptions of external things. The feelings get nounified.

Common doesn’t mean universal. It means common enough for practical, political purposes

When dealing with moral matters that truly have been settled, it makes sense to talk about rights. As a deep pragmatist, I’m happy to join the chorus: Slavery violates fundamental human rights!

We have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist

But when it’s not worth arguing — either because the question has been settled or because our opponents can’t be reasoned with — then it’s time to stop arguing and rally the troops

Unless you think that morality requires us to make as many happy babies as possible, you can’t argue that abortion is wrong on the grounds that it prevents human lives from getting lived.

Fertilization is amazing, a pivotal moment in the development of a new human being. But so far as we can tell, it’s not magic. What’s more, the fertilization of a human egg appears to be no more or less magical than the fertilization of a mouse egg or a frog egg. There is no evidence whatsoever for the occurrence of “ensoulment” at fertilization, or at any other point in development

Even if life begins in that horrible situation of rape, it is something that God intended to happen. The Silent Scream. Abortion doesn’t feel wrong until the fetus starts to look human.

The pro-lifer’s life-saving utilitarian argument is a good one. The problem is that it’s too good. If we’re opposed to abortion because it denies people their existence, then we should be opposed to contraception and abstinence, too.

This deeply pro-life argument is, in fact, analogous to the utilitarian argument in favour of extreme altruism – pump out more happy people – and it asks too much of nonheroic people

Pro-lifers will have to admit that they have no evidence-based answers to these questions, while nevertheless insisting that their faith-based answers dictate the law of the land

An explicit theory that we can write out in words — our gut reactions were not designed to be organized, and they weren’t necessarily designed to serve truly moral ends. Automatic settings are heuristics.

If we use our scientific self-knowledge to debunk our biased intuitions, where will we end up? I believe that we’ll end up with something like utilitarianism.

Why? First, as I explained in chapter 8, utilitarianism makes a whole lot of sense — not just to me and you, but to every non-psychopath with a manual mode. The only truly compelling objection to utilitarianism is that it gets the intuitively wrong answers in certain cases, especially hypothetical cases.

Our non-debunkable moral intuitions are “rule utilitarian”

At some point, it dawns on you: Morality is not what generations of philosophers and theologians have thought it to be. Morality is not a set of freestanding abstract truths that we can somehow access with our limited human minds. Moral philosophy is a manifestation of moral psychology.

Once you’ve understood this, your whole view of morality changes. Figure and ground reverse, and you see competing moral philosophies not just as points in an abstract philosophical space but as the predictable products of our dual-process brains.

These three schools of thought are, essentially, three different ways for a manual mode to make sense of the automatic settings with which it is housed.

We can use manual-mode thinking to explicitly describe our automatic settings (Aristotle). We can use manual-mode thinking to justify our automatic settings (Kant). And we can use manual-mode thinking to transcend the limitations of our automatic settings (Bentham and Mill).

As an ethicist, Aristotle is essentially a tribal philosopher. Read Aristotle and you will learn what it means to be a wise and temperate ancient Macedonian-Athenian aristocratic man. Aristotle’s virtue-based philosophy, with its grandfatherly advice, simply isn’t designed to answer these kinds of questions.

Philosophers have failed to find a metamorality that feels right. (Because our dual-process brains make this impossible.)

Faced with this failure, one option is to keep trying. (See above.) Another option is to give up — not on finding a metamorality, but on finding a metamorality that feels right. (My suggestion.) And a third option is to just give up entirely on the Enlightenment project, to say that morality is complicated, that it can’t be codified in any explicit set of principles.

When confronted with the morass of human values, modern Aristotelians simply bless the mess.

Kant crosses the line from reasoning to rationalization, and that’s why Nietzsche is chuckling

I’m a deep pragmatist first, and a liberal second.

Contrast my understanding of morality and politics with that of Jonathan Haidt, whose work we’ve discussed throughout this book, and who’s been a major influence on my own thinking.

Morality is a suite of psychological capacities designed by biological and cultural evolution to promote cooperation.  At the psychological level, morality is implemented primarily through emotional moral intuitions, gut reactions that cause us to value the interests of (some) others and encourage others to do the same. Different human groups have different moral intuitions, and this is a source of great conflict. Conflicts arise in part from different groups’ emphasizing different values and in part from self-serving bias, including unconscious bias. When people disagree, they use their powers of reasoning to rationalize their intuitive judgments.

This is the central message of Haidt’s wonderful book The Righteous Mind, which I summarize as follows: To get along better, we should all be less self-righteous. We should recognize that nearly all of us are good people, and that our conflicts arise from our belonging to different cultural groups with different moral intuitions. We’re very good at seeing through our opponents’ moral rationalizations, but we need to get better at seeing our own. More specifically, liberals and conservatives should try to understand one another, be less hypocritical, and be more open to compromise.

It’s one thing to acknowledge that one’s opponents are not evil. It’s another thing to concede that they’re right, or half right, or no less justified in their beliefs and values than you are in yours. Agreeing to be less self-righteous is an important first step, but it doesn’t answer the all-important questions: What should we believe? and What should we do?

None of this answers the most critical question: Are liberals morally deficient? I think the answer is no. Quite the opposite, in fact.

One might say, as Haidt does, that liberals have narrow moral tastes. But when it comes to moral foundations, less may be more. Liberals’ moral tastes, rather than being narrow, may instead be more refined.

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists — loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal.

The rights-based fundamentalist argument for libertarian policies is a nonstarter.  The pragmatic argument for libertarian policies is that they serve the greater good.

When faced with the ultimate question — What should we do? — it seems that the autistic philosopher was right all along.

Some of us defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like “rights,” someone had to do it with thinking.

Nearly all of our biggest problems are caused by, or at least preventable by, human choice.

We’ve made enormous progress in reducing human enmity, replacing warfare with gentle commerce, autocracy with democracy, and superstition with science.

How can we do better?  When it’s Me versus Us, think fast. When it’s Us versus Them, think slow. Modern herders need to think slower and harder.

Bentham and Mill turned this splendid idea into a systematic philosophy, and gave it an awful name. We’ve been misunderstanding and underappreciating their ideas ever since.

Our gut reactions were not designed to form a coherent moral philosophy. Any truly coherent philosophy is bound to offend us, sometimes in the real world but especially in the world of philosophical thought experiments, in which one can artificially pit our strongest feelings against the greater good.

We’ve mistakenly assumed that our gut reactions are reliable guides to moral truth

Rule 1. In the face of moral controversy, consult, but do not trust, your moral instincts

Rule 2. Rights are not for making arguments; they’re for ending arguments

Rule 3. Focus on the facts, and make others do the same; acknowledge our ignorance

Rule 4. Beware of biased fairness

Rule 5. Use common currency

Rule 6. Give

We can argue about rights and justice forever, but we are bound together by two more basic things. First, we are bound together by the ups and downs of the human experience. We all want to be happy. None of us wants to suffer. Second, we all understand the Golden Rule and the ideal of impartiality behind it. Put these two ideas together and we have a common currency, a system for making principled compromises.

We can agree, over the objections of our tribal instincts, to do whatever works best, whatever makes us happiest overall.

We are marvelous in many ways, but the moral laws within us are a mixed blessing.  More marvelous, to me, is our ability to question the laws written in our hearts and replace them with something better.

 

Links

Amazon UK

Joshua Greene website

Joshua Greene on Sean Carroll podcast