
The Tribal Mind: Moral Reasoning and Public Discourse

Thursday, April 26, 2012

If we want to get along better and resolve differences more easily, it will take conscious effort to overcome tribal behavioral instincts.

Editor’s note: Books discussed in this essay include Jonathan Haidt’s The Righteous Mind; Daniel Kahneman’s Thinking, Fast and Slow; Bruce Schneier’s Liars and Outliers; and Jim Manzi’s Uncontrolled.

Moral reasoning is often used to intensify partisan loyalty. In that respect, it can actually harm public discourse.

In this essay, I examine the problem of moral reasoning and offer three proposals for mitigating its damaging effects. The first is to take opposing points of view at face value, rather than attempt to analyze them away reductively. A second proposal is to police your own side, meaning that one should attempt, contrary to instinct, to examine more critically the views of one's allies than the views of one's opponents. The third proposal is to “scramble the teams” by creating situations in which people of differing political views must work together to achieve a goal requiring cooperative effort.

This essay is inspired in large part by reading Jonathan Haidt's new book, The Righteous Mind. It also draws on a number of other works that look at the role of moral reasoning for both the individual and society.

What I take away from Haidt is the hypothesis that our capacity to think about moral and social problems evolved from an ability to rationalize our actions. Thus, our capacity to rationalize our moral and political beliefs is much greater than we realize; conversely, our capacity for detached reasoning about moral and political issues is much less than we realize. The fact that we rationalize more readily than we reason helps to sustain political polarization.

Political polarization is unfortunate for at least two reasons. First, some issues, notably the unsustainable fiscal path of the United States budget, require compromise.

Second, the environment for political discourse is very unpleasant. Rather than try to engage in constructive argument, partisans make the most uncharitable interpretations possible of what their opponents intend.

For example, earlier in 2012, an issue arose concerning contraception and health insurance. Conservatives described the liberal position as a “war on religion” and liberals described the conservative position as a “war on women.” Rather than attempt to clarify the difference of opinion, the public discourse on each side was aimed at portraying the opposition in the worst possible light. I believe that this is true in general, and I believe that such behavior is distasteful and harmful.

Experimental psychologists, including Haidt and also Daniel Kahneman in Thinking, Fast and Slow, see evidence that human moral decisions are often based on rapid, intuitive judgments, what Kahneman refers to as System One. When System Two comes into play, with its more measured reasoning, it is often to justify the responses of System One, rather than to control or correct them.

The psychological literature suggests that any moral argument, including this one, can be interpreted on three levels.

First, there is a strategic level. People make moral arguments in order to justify their behavior to themselves and to others, to compete for status, and to manipulate the behavior of others.

The second level of interpretation might be called dispositional. People have a disposition to take certain moral positions based on genetic personality traits and life experiences. For example, Haidt finds that conservatives tend to score lower than liberals and libertarians on the personality trait of “openness to new experience.”

The final level of interpretation is face value. Regardless of the strategic or psychological analysis, one may evaluate a moral argument on its merits. It is important not to get so caught up in neuroscience that one forgets the face-value level.

Our moral beliefs are grounded in intuition. However, this intuition is modified by our reasoning, experiences, and responses to the moral arguments made by others.

What is the best set of moral beliefs? Haidt does not directly answer this question, but he does favor what he calls Durkheimian utilitarianism. By this he means utilitarianism that includes a value for community attachment.

Haidt uses the metaphor that humans are 90 percent chimpanzee, 10 percent bee. The chimp represents the individualistic, status-seeking competitor. The bee represents the sociocentric cooperator. The bee needs to feel that he or she is serving a higher purpose. Durkheimian utilitarianism is an attempt to balance the needs of the chimp and the bee in all of us. If you allow too much chimp, trust and order break down. On the other hand, suppressing the chimp and trying to make humans behave entirely like bees degenerates into a totalitarian project.

Moral Systems and Social Pressures

Given a view of the good society—Haidt's Durkheimian utilitarianism is certainly a fair approximation—one can proceed to look at moral codes from a functional standpoint. Along such lines, Haidt writes:

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible.

It is interesting to compare this perspective with what one finds in Liars and Outliers, a recent book by Bruce Schneier on the social problem of trust and security. Schneier, a security consultant, views our lives from the perspective of game theory. Every day, we must decide whether to cooperate or to defect. Do I try to arrive at work on time, or do I show up late? Do I drive safely or aggressively? Do I support the goals of my department, or do I work for myself? Does my department support the goals of the larger organization, or does it pursue its own interests? Does the larger organization work to support the goals of the society to which it belongs, or does it pursue its own goals?

He says that there are four "societal pressures" that induce cooperation: moral pressures (internalized desires to cooperate); the value of reputation; institutional and legal incentives; and security systems. He points out that in small groups (think of a band of hunter-gatherers), the pressures of morals and reputation are often sufficient. Larger societies need institutional and legal incentives. Security systems are in some sense a last resort.
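Schneier's cooperate-or-defect framing can be made concrete as a prisoner's dilemma in which societal pressures raise the cost of defection. The following Python sketch is illustrative only; the payoff numbers and the single aggregate `pressure` parameter are my assumptions, not figures from the book:

```python
# Each interaction is a prisoner's dilemma; "societal pressures"
# (moral, reputational, institutional, security) act together as a
# penalty on defection.  Payoff values are illustrative assumptions.

# Payoffs to the row player: (my move, other's move) -> my payoff.
BASE_PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,   # the temptation to defect
    ("defect", "defect"): 1,
}

def best_move(other_move, pressure):
    """Pick the payoff-maximizing move when defection carries a
    societal penalty of `pressure` (guilt, lost reputation, fines)."""
    def payoff(my_move):
        p = BASE_PAYOFF[(my_move, other_move)]
        if my_move == "defect":
            p -= pressure
        return p
    return max(("cooperate", "defect"), key=payoff)

# With no pressure, defection dominates regardless of the other's move.
assert best_move("cooperate", pressure=0) == "defect"
assert best_move("defect", pressure=0) == "defect"

# A sufficiently strong combined penalty makes cooperation rational.
assert best_move("cooperate", pressure=3) == "cooperate"
assert best_move("defect", pressure=3) == "cooperate"
```

The point of the toy model is Schneier's: when the penalty on defection is large enough, cooperating becomes the self-interested choice, which is what moral systems accomplish at scale.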

Schneier's concept of “social pressures” seems to have much in common with Haidt's concept of “moral systems.” As individuals, we are, like chimpanzees, prone to defect rather than to cooperate with one another. However, unlike chimpanzees, we have communication skills that have enabled us to develop societal pressures that punish defection and reward cooperation in sophisticated, highly tuned ways. These moral systems facilitate, particularly in the sphere of production and trade, the emergence of highly complex, interdependent human interactions involving hundreds of millions of people.

Within this framework, tribalism plays an ambivalent role. On the one hand, Haidt would argue that tribalism is the basis for our bee instincts. We have a willingness to sacrifice, but that willingness is strongest relative to the 150 or so people that we know well.

However, the bee instinct is much weaker with respect to people outside of that circle. Indeed, tribal instincts tend to make it difficult for large groups of people to cooperate.

In Schneier's terminology, we are unlikely to defect from our immediate circle. As he points out, this can have a downside. Loyalty to a criminal gang, or simply the unwillingness to question a dubious practice within a corporate entity, represents cooperation at a local level but defection from the standpoint of the larger society.

Tribalism tends to foster economic exchange within groups, because people trust other members of their tribe. The “social pressures” within a tribe are strong. In economic history, there are a number of well-known examples of minority groups that were important commercially. They were able to sustain trust in the process of trading among themselves because of strong within-group enforcement of ethical norms. The challenge is to go beyond within-group trade to broader commercial activity.

Schneier points out that Quakers played a role in the development of capitalism in the West because they developed a reputation for fair dealing. I would argue that one unusual feature of Quakerism is the importance of the belief that “there is that of God in everyone.” This means that Quakers would expect one another to obey their strongest moral codes even when dealing with non-Quakers. An extended capitalist order requires that individuals treat outsiders as moral equals in the context of economic transactions.

However, for the most part, the scaling up of cooperation beyond small groups requires legal incentives and institutions. In a large, complex society, in the absence of laws and enforcement mechanisms, individuals or groups would be too prone to defect.

Tribal Minds in the Modern World

We no longer live in tribes. Instead, we are embedded in complex, large-scale social institutions. However, our basic tribal instincts have remained. They have formed the basis of national rivalries. Within nations, they form the basis of political conflict.

Haidt quotes political scientist Donald Kinder: "In matters of public opinion, citizens seem to be asking themselves not 'What's in it for me?' but rather 'What's in it for my group?'" Political opinions function as "badges of social membership."

Haidt comments, “The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive.”

Confirmation bias plays an important role. As Haidt puts it,

[W]hen we want to believe something ... we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking ... In contrast, when we don't want to believe something ... we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it.

Political beliefs are harder to change than scientific beliefs. That is because propositions in social science are much more difficult to disprove than propositions in natural science. James Manzi, in his book Uncontrolled, suggests that this is because social science deals with phenomena in which there is high causal density. That is, the possible causal relationships are so numerous and so complex that we cannot arrive at any final truth.

As an economist, I see this play out in the controversies over Keynesian economics. Proponents of Keynesian stimulus argue that President Obama's stimulus program worked, and that we need more of it. Opponents argue the contrary. By focusing on different possible causal mechanisms, each side can make a plausible case.

David McRaney, who writes about psychology, observes that people “are driven to create and form groups and then believe others are wrong just because they are others.” Even worse, our capacity for rationalization and our instinct for confirmation bias create a distorted view of how much we know about the moral beliefs of ourselves and others.

McRaney describes surveys conducted by Emily Pronin, Lee Ross, Justin Kruger, and Kenneth Savitsky. What these surveys show is that we believe we understand our political opponents better than they understand themselves.

In a political debate, you feel like the other side just doesn’t get your point of view, and if they could only see things with your clarity, they would understand and fall naturally in line with what you believe. They must not understand, because if they did they wouldn’t think the things they think.

By contrast, you believe you totally get their point of view and you reject it. You see it in all its detail and understand it for what it is: stupid. You don’t need to hear them elaborate. So, each side believes they understand the other side better than the other side understands both their opponents and themselves.

Haidt examines this belief that we understand our opponents and he finds it to be incorrect. We are not very good at predicting the moral reasoning of our opponents.

Moderates and conservatives were most accurate in their predictions, whether they were pretending to be liberals or conservatives. Liberals were the least accurate, especially those who described themselves as "very liberal."

One may speculate as to why liberals might show the least understanding of their ideological opponents. However, the important point is that neither side understands the other very well. Hardly anyone could pass what economist Bryan Caplan calls the “ideological Turing test.”

In Caplan's test, I would have to appear on a panel with several of my ideological opponents. My goal would be to articulate their point of view so sympathetically that an audience of ideological opponents could not distinguish my views from those of the other panelists.

What the psychological research shows is that most partisans would be extremely unlikely to pass such a test, because we fail to understand the nuances of others' points of view as well as we think we do.

The psychology of moral reasoning leads me to question my own partisanship. The arguments I make for my point of view are likely to be rationalizations. I am likely to value my group identity, leading me to scrutinize opposing points of view to find errors while I overlook flaws in my allies' reasoning.

When I make a case for my point of view, I am likely to reinforce the bonds with my allies but only alienate further those with whom I disagree. When we encounter opposing points of view, we are unlikely to maintain an open mind; instead, our instinct is to look for weaknesses and to make the least charitable interpretations possible.

Elevating the Debate

In the remainder of this essay, I propose some techniques to check this tendency toward extreme partisanship. I think that adopting them would improve the atmosphere for political debate.

Take opposing points of view at face value.

It is more comfortable to treat opposing points of view reductively. That is, rather than deal with a different viewpoint, we prefer to explain it away. “They just want power.” “They just serve special interests.” “They don't believe in science.” “They are socialists.”

Taking opposing points of view at face value means that we try to pass the ideological Turing test. Could my characterization of another ideology allow me to pass as a proponent of that ideology? Could an opponent's characterization of my ideology allow that person to pass as someone like me?

Police your own side.

In political debates, we put a lot of energy into pointing out the errors of our opponents. When somebody writes an op-ed exposing the “myths” that surround an issue, the purpose is to debunk the other side, almost never to question one's own allies.

Basically, the “myth-busting” process works like this. You create a straw-man caricature of the other side's point of view. You knock down that straw man. Your allies applaud your brilliant insight. Your opponents dismiss what you have to say. Both sides come away with their partisan views reinforced.

Accusing the other side of an intellectual foul seems like a much better idea than it really is.

First of all, chances are that you are not correctly interpreting the position that you are criticizing. Remember, we have poor empathy for ideological opponents. There is a high probability that we are attacking a straw man rather than a real position.

Second, even if we are correct, the other side may not be persuaded.

Finally, even if we are correct on this one point, there probably are other arguments that the other side can use to bolster its case. As much as we may take pleasure in "not letting them get away with saying X," in the grand scheme of things, we probably are not changing anyone's mind.

Imagine instead an environment in which we primarily tried to expose intellectual error on our own side. In street basketball terms, you “call your own fouls.” The onus of calling liberals' intellectual fouls would fall on liberals. The onus of calling conservatives' intellectual fouls would fall on conservatives.

Policing your own side would require a conscious effort to reverse the tendency toward confirmation bias. We would have to search as hard for holes in our allies' arguments as if they were opponents' arguments. If the goal is to improve public discourse by removing improper arguments, we are much more likely to succeed by having each side call its own fouls than by having people call fouls on the other side.

Street basketball with teams calling fouls on one another would probably degenerate into unsettled arguments. That is, it would start to resemble politics.

Scramble the teams.

Many years ago, some men in our neighborhood started a pickup softball game on Sundays. We quickly realized that if we formed regular teams, antagonisms would fester. Instead, each week we formed new teams on a different basis, such as odd-numbered birthdays vs. even-numbered birthdays. Scrambling the teams kept the games friendly.

Much of our partisanship reflects emotional loyalty to the ideological group with which we identify. To scramble the teams, we would need to foster situations in which liberals develop emotional bonds with conservatives.

Emotional bonds develop when people work towards a common goal. Thus, in the past, military service and foreign threats have served to break down ideological differences. Historians view World War II as a period in which American unity was strong. Likewise, many pundits believe that external threats help to hold together Israeli society, which otherwise is extremely fragmented, particularly between religious and secular Jews.

As Haidt and others have noted, the September 11 attacks produced a burst of patriotism and unity in the United States. This dissipated as no subsequent attack resulted in mass casualties. Overall, the end of the Cold War, which reduced the sense of common threat, may account for some of the rise in partisanship within the United States in recent decades.

We need to find a substitute for external threats as a social bonding agent. Maybe some ideological peace could be bought by having liberals and conservatives who both root for the same sports team get together during important games. Perhaps liberals and conservatives could actively participate in charitable endeavors that both can endorse.

The work of Haidt and other psychologists is persuasive and disturbing. It exposes a tendency to form ideological tribes that use moral arguments as rationalizations. Tribes will go out of their way to misunderstand one another. If we want to get along better and resolve differences more easily, it will take conscious effort to overcome tribal behavioral instincts.

Arnold Kling is a member of the Financial Markets Working Group at the Mercatus Center of George Mason University. He writes for EconLog.

FURTHER READING: Kling also writes “The Challenge of Achieving a Liberal Order,” “The Case for an Executive Re-Organization,” and “The Political Implications of Ignoring Our Own Ignorance.” Andrew G. Biggs asks “Liberals or Conservatives: Who’s Really Close-Minded?” Alex J. Pollock says “'Shared Delusions' Run Deep With Risks.” Sally Satel contributes “Ordering Disorder.”

