Evolution, Psychology, and Ethics

See here for the Finnish version: Evoluutio, psykologia ja etiikka

In April, I wrote a text that I suppose is now technically the first one of mine to be published by anyone else: an article (in Finnish) for the scholarly journal Hybris, 1/2012. Page limitations made it a little more restricted than I would have liked (I should just write a book), but I think it presents its limited topic reasonably well. It took a long time for me to produce an English version for several reasons, but it is finally done. The translation can be found below, complete with citations and hyperlinked endnotes.

Evolution, Psychology, and Ethics

Originally published in Finnish in the scholarly journal Hybris, 1/2012.

How can we know what is right? If we accept at all the view that some deeds are right and others wrong, it is important for us to be able to tell these apart. Plenty of answers have been offered, along with reasons to accept each. Some think that god commands this; others, that maximising utility requires that. Once we become aware of the multiplicity of options, we cannot claim to act responsibly if, in our moral choices, we only stick to what we have been taught or to what first comes to mind. We also bear responsibility for which moral principles we choose to adopt as our own. We must be able to weigh different options, understand where they come from and why they are supported, and then decide whether there is reason to accept them.

Science deals with facts — how things are. It is generally thought — and I agree — that how things should be cannot be derived from how they are. One must always make a choice about what to consider valuable in itself and worth aiming at. However, understanding facts is one important part of building ethical understanding. Ethics is a field of philosophy, and one problem of philosophy is that its tool, human reason, is limited and sometimes blind to its own weaknesses. Things that now seem obvious may later appear as the most questionable of assumptions. For this reason, it is helpful to obtain knowledge of the world and of the origins of human notions before proceeding to philosophical conclusions, especially ones meant to guide action. In this article I look at what biology and psychology have to say about the origin and nature of morality, and at how this can in part help us decide what to think of different moral views and which things to choose as ends in themselves.

Natural Selection and Altruism

According to modern biology, current living organisms, humans included, have been produced by evolution guided by natural selection. The DNA molecules in the nuclei of our cells contain our genes, which in complex, indirect ways determine what we grow to be like. Offspring inherit part of the DNA of each of their parents, and in addition there are errors in copying, making for random variation. For this reason, a population contains various alternative genes (or pieces of DNA code[1]) that compete with each other in the sense that some prevail at the expense of others. The pressure of natural selection acts through individuals: they survive or don’t survive, breed or don’t breed. However, what can survive through the generations is the hereditary information contained in the gene, so natural selection properly applies to genes[2]. Genes are successful and spread if they can produce something in their bearers that advances their getting passed on — which usually means the bearer’s being good at surviving and reproducing.

Because the process by which the organism is constructed is guided by the information in the DNA, and because genes can modulate each other’s functioning, even a small change in the DNA may cause a noticeable change in the entire organism.[3] This can also have psychological consequences. Genes cannot force the persons carrying them to take any actions more complicated than reflexes, but they are capable of creating notable statistical tendencies, such as certain natural appetites that people then have a tendency to act on. It is because of this that the effects of evolution can also be seen in human thought and behaviour.[4]

Given this, how can natural selection lead to the emergence of morality? Morality as it appears in practice intimately involves altruism, behaviour in which the individual acts for the good of another rather than of themselves. How can natural selection have favoured such a thing? The first answer is kin selection: genes that make a person inclined to help close relatives survive may prosper, because a close relative is reasonably likely to carry a copy of the same gene, in which case the gene is only helping itself. This works equally well between siblings as between parents and their offspring.[5]

This does not work between unrelated members of the same species, however; their genes are too different. Natural selection isn’t about groups, such as herds or species; rather, different genes in the same group compete with each other.[6] The ironic thing about this is that it can make genes “short-sighted”. Cooperation often ultimately pays off better than competition, and social animals such as humans are faced with this problem. It has usually been presented through the “Prisoner’s Dilemma”[7] — a situation involving two parties where cooperation would be a good option, but where, by acting rationally, that is to say by maximising their own benefit, both end up refusing it, which is worse for both than cooperating. If one chooses to cooperate, the other benefits the most by refusing to reciprocate, in which case he can profit from the other’s work without having to make any effort himself. If one defects, the other should also defect, lest he waste his effort helping the other for nothing. Yet this logic ends with neither cooperating, which is much worse for both than mutual cooperation. This is also relevant for cooperation between members of a species. In the long term, cooperation is the best way, but in a Prisoner’s Dilemma, defecting always pays off a little better. If someone were to try to play fair, the defectors could simply suck her dry, even though the long-term result is worse for everyone.
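To make the logic concrete, here is a minimal sketch of a one-shot Prisoner’s Dilemma in Python. The payoff numbers are illustrative assumptions of my own, not taken from any of the cited sources; all that matters for the argument is their ordering (temptation > mutual cooperation > mutual defection > being exploited).

```python
# A minimal one-shot Prisoner's Dilemma sketch. The payoffs are illustrative;
# only the ordering T > R > P > S matters for the argument.

# payoffs[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect
PAYOFFS = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff for being exploited
    ("D", "C"): 5,  # T: temptation to defect against a cooperator
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_reply(their_move: str) -> str:
    """The move that maximises my own payoff against a fixed move by the other."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Whatever the other player does, defecting pays better for me...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ...yet mutual defection (1 each) is worse for both than mutual cooperation (3 each).
print(PAYOFFS[("D", "D")], "<", PAYOFFS[("C", "C")])
```

Defection is the best reply to either move, which is precisely why two “rational” players end up in the outcome that is worse for both.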

Fortunately, evolution does not always operate in a pure Prisoner’s Dilemma.

The field studying gains and losses and different strategies in such situations, or “games”, is called game theory. In studying unselfish behaviour in the context of evolution, it has produced some observations that I will condense here in simplified form, without their mathematical side. First, it should be noted that animals in a community do not simply find themselves in a single Prisoner’s Dilemma; the “game” is played over and over again. Nevertheless, it is still true that if the community consisted of individuals who always play fair and of defectors, the defectors would do so much better against the fair players that they would take over the population. If there were a population consisting entirely of fair players, the same thing would happen as soon as a single defector appeared in it, say as the result of a mutation; its genes would replace the fair genes. But there are other strategies, and one type can usually beat the defectors: one that starts off cooperating but, when running into a defector, gets “angry” and punishes the other for cheating. Punishment takes up resources as well, so it is not a rational strategy either; it means accepting a worse outcome for oneself in order to keep the other party from benefiting. But the punishers get the good result of cooperation when playing with each other (and with the entirely nice types), whereas the defectors get the worst result against them. This allows the punishers to prosper, corner the defectors, and become dominant themselves. And when this type of strategy becomes dominant, it is usually stable, meaning that other strategies cannot unseat it.[8]
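The following sketch runs a small round-robin tournament with the three kinds of strategies just described: unconditional cooperators, unconditional defectors, and punishers that cooperate until the other side defects (essentially the well-known tit-for-tat strategy). The payoffs, the number of rounds, and the population mix are arbitrary illustrative choices; the point is only that once punishers are common, they score best on average and the defectors worst.

```python
# Iterated Prisoner's Dilemma tournament sketch (illustrative, not from the cited works).

R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment; T > R > P > S

def play(strategy_a, strategy_b, rounds=50):
    """Play an iterated Prisoner's Dilemma and return the total payoffs (a, b)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        if move_a == "C" and move_b == "C":
            score_a, score_b = score_a + R, score_b + R
        elif move_a == "C" and move_b == "D":
            score_a, score_b = score_a + S, score_b + T
        elif move_a == "D" and move_b == "C":
            score_a, score_b = score_a + T, score_b + S
        else:
            score_a, score_b = score_a + P, score_b + P
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_cooperate(own, other):
    return "C"

def always_defect(own, other):
    return "D"

def punisher(own, other):
    """Cooperate at first, then copy the other's previous move (tit for tat)."""
    return "C" if not other or other[-1] == "C" else "D"

def tournament(population):
    """Round-robin tournament; return the average score per strategy name."""
    totals, games = {}, {}
    for i, (name_a, strat_a) in enumerate(population):
        for name_b, strat_b in population[i + 1:]:
            a, b = play(strat_a, strat_b)
            totals[name_a] = totals.get(name_a, 0) + a
            totals[name_b] = totals.get(name_b, 0) + b
            games[name_a] = games.get(name_a, 0) + 1
            games[name_b] = games.get(name_b, 0) + 1
    return {name: round(totals[name] / games[name], 1) for name in totals}

# A population where punishers are already common: they come out on top,
# while the defectors, held down by retaliation, do worst on average.
population = ([("cooperator", always_cooperate)] * 3
              + [("defector", always_defect)] * 3
              + [("punisher", punisher)] * 9)
print(tournament(population))
```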

This is of course only a partial picture and does not encompass all of morality. After all, morality does not only apply when the parties have explicitly committed themselves to a particular deal. It is not possible to explore the matter comprehensively here, but let it be said that a similar principle applies more broadly. Malcolm Murray[9] has argued that strategies that work from the point of view of evolution lead to the following principle of consent:

Any act is morally permissible if and only if all competent, suitably informed, concerned parties voluntarily, or can reasonably be predicted to voluntarily, consent. Put negatively, Any act by people that negatively affects others who have not voluntarily agreed to being so affected is an immoral act. [10]

But what does this all mean on a concrete level, for example when speaking of human beings? It follows from the effectiveness of the punishing strategy that it can be expected that most of us have a genetic tendency to want to play fair and, on the other hand, to become angry at and punish those who don’t. This indeed appears to be the case. This has been shown for example in ultimatum bargaining experiments, where one party is offered, say, a sum of money, which he is to divide between himself and another person as he wishes[11]. The other test subject gets to choose: either accept the offer, or refuse, in which latter case neither gets anything. If both were rational, the first person would keep as much as possible for himself, and the second would accept whatever’s offered to her, no matter how little. But in actual practice, slightly depending on culture, the first person usually divides the money about evenly — and if he doesn’t but rather takes much more for himself, the other is offended and refuses to take the offer, leading to both gaining nothing.
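As a toy illustration of this contrast (a sketch of my own; the stake and the 30% rejection threshold are arbitrary assumptions, not figures from the cited experiments):

```python
# Ultimatum game sketch: a "rational" responder versus a fairness-sensitive one.

STAKE = 100  # the sum of money to be divided (illustrative)

def rational_responder(offer: int) -> bool:
    """A payoff-maximising responder accepts any positive offer, however small."""
    return offer > 0

def fairness_sensitive_responder(offer: int, threshold: float = 0.3) -> bool:
    """A responder closer to real behaviour: rejects offers felt to be too unfair,
    even though rejection leaves both players with nothing."""
    return offer >= threshold * STAKE

for offer in (50, 30, 10, 1):
    print(f"offer {offer}: rational accepts {rational_responder(offer)}, "
          f"fairness-sensitive accepts {fairness_sensitive_responder(offer)}")
```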

Evil and Justice?

At this point, the picture we have built looks rather positive: evolution has predisposed us to be fair. But if we look at the other things psychology reveals about modern humans, we come to see obvious downsides. Insofar as our tendencies are those of punishers, we should have a desire to harm other people if they have done something wrong. Such an inclination clearly exists, and it is a much less pleasant thing in practice than in game theory or in bargaining experiments.

When someone harms another person in a way that is perceived as unjustified, the logic of punishment should be activated in the victim as well as in bystanders who are on their side. The perpetrator should become an object of punishment in their eyes. And this indeed happens. The desire to punish is an urge that demands fulfilment[12]. In the mental state of revenge, sympathy for the perpetrator is inhibited, in fact reversed, so that their suffering causes pleasure to the avenger.[13] In addition, when looking at a harmful act from the point of view of the victim, people usually see it as senseless and unjustified. They think that the perpetrator is acting entirely wilfully, at best from selfish motives, but more likely just for the joy of doing evil.[14] Especially if the perpetrator is not otherwise well known, and can thus come to be defined on the basis of this one act alone, they are pushed into an entirely different category from the one we nowadays[15], at least in principle, think all people belong to by default: instead of being another person whom we should be compassionate towards, they are now a malicious being, completely unlike ourselves, who rightfully should be harmed. And there is naturally no difference in this regard between vengeance and just punishment — though each can also be given motives that are unrelated to the evolutionary-psychological need to punish, and these motives do differ from each other[16].

The irony of this tendency is completed by how the “evildoers” themselves tend to see the situation. People who harm others do not all belong to some special class of psychopaths; most of them are quite ordinary[17]. Hardly anyone is driven by a desire to do evil, and people don’t usually think that the means they use for other ends are evil, even if they would appear so to an impartial observer[18]. Where the victim’s point of view exaggerates the perpetrator’s evil, perpetrators have a tendency to downplay the wrongness of their actions and their own responsibility for them.[19] This isn’t even a question of conscious distortion on either side; rather, it seems to be an automatic way of thinking. In one experiment, subjects were asked to remember a certain fictional story from the point of view of either the victim or the one harming them, or from an outside point of view, and this alone was enough to affect how they later remembered the contents of the story.[20] The reason for this could be that a slightly exaggerated positive view of oneself may carry an evolutionary benefit.[21] Note that in this description anyone is potentially the victim or the perpetrator — the same person may think either way in different situations, in fact even in both ways in the same situation.[22] It’s no wonder, then, that there can be cycles of revenge in which the previous act of revenge always appears excessive and demands further vengeance in return[23]; or that, in many situations, both parties think that the other “started it”.[24]

The results of always seeing evil as something alien and different from oneself can be very ugly. The most common reason why people do bad things even though they want to do what is right is probably just that they cannot conceive that something they themselves might want to do could be evil: because I’m a perfectly normal, good person, and I have good reasons for this, I’m not like those evil people. Seeing others as purely evil also causes a lot of damage in other contexts besides private revenge — in wars, just to start with[25].

Other Moral Emotions

Morality is usually seen as a matter of reason.[26] After the foregoing, it should be no surprise that it has lately been claimed to be based more on emotion.[27] In his article “The Moral Emotions”[28], Jonathan Haidt describes several morality-related emotions. What makes an emotion “moral” in his intended sense is that it can be triggered by actions not directed at the self (disinterestedness) and that it causes prosocial acts, which either benefit other people or uphold the social order;[29] or, alternatively, moral emotions can be characterised as ones that would not dictate the actions of the purely rational agent mentioned above in the context of game theory. (Whether an action is guided by reason or by emotions is irrelevant to rationality or its lack in the sense intended here; emotions could just as well direct one to maximise one’s own gains.) So classified, different emotions are moral to different degrees, not simply either–or. It may be that some emotions are inherent and biologically determined while others are culturally formed variants of these, and the moral emotions can also be classified on this basis.[30]

Moral anger has already been discussed above. There are also other “other-condemning” moral emotions. The emotion of disgust seems originally to have evolved mostly for purposes of hygiene, keeping us away from literally unclean things. It has, however, expanded to apply in a “moral” sense to outcast social groups as well as to various culture-dependent condemned concepts, from hypocrisy to homosexuality. This is prosocial in the sense of maintaining the social order. The third other-condemning emotion is contempt, which mainly causes an emotionally cold attitude and is only a mildly moral emotion.[31]

Self-conscious emotions (those condemning the self) include shame, embarrassment, and guilt. Shame and embarrassment appear to be variations of the same basic emotion, mainly appearing as separate in Western cultures, where a clear line is drawn between breaking social norms (embarrassment) and doing something morally wrong in the proper sense (shame). Whereas these two are based on the social order, guilt is mostly related to attachment relations and the sense of having hurt someone one cares about. There is also a difference in that shame is directed at oneself as a whole, guilt at a particular act.[32]

On the more positive side, the moral emotions also include ones relating to others favourably. Gratitude causes helpfulness and a positive attitude towards its object, if not so much towards other people in general. Elevation follows from witnessing “moral beauty”, impressive acts of morality, and besides tendencies similar to gratitude, it causes a general desire to be a better person oneself. Elevation also appears in many ways to be the opposite of disgust.[33] In this context we can also mention one more emotion, placed by Haidt in a different class from the previous two (they were “other-praising”, this one “other-suffering”): compassion. It is a reaction to pain or sadness perceived in another person and causes a desire to help the other and alleviate the suffering. Compassion is naturally aimed mostly at people close to oneself.[34] In fact, the gradual broadening of its application to all people, and also to animals, appears to have been a historical and cultural process.[35]

Ethical Conclusions

We have briefly explored some explanations and descriptions offered by science that shed light on the origin and nature of moral practices. This has most certainly not been all that the different fields could reveal; social practices, for example, have only been referred to in passing, and even the discussion of the perspectives of evolution and psychology has been limited. In addition, it would take a great deal of philosophical background work to create a complete theory of what is right and wrong. The following is only a brief survey of what conclusions can be drawn from what has been stated above.

The kinds of game-theoretical models discussed above are significant even apart from their role in evolution. They support and illuminate one kind of reasoning given for why moral rules are necessary even if they have not somehow been dictated from above. Without game-theoretical considerations, this can be expressed as follows: it is in the best interests of self-interested actors to agree to act by certain rules when dealing with each other. Put this way, and supported only by verbal arguments about how things appear to be, this is not at all necessarily true. Game theory, including the mathematics I have left out here, can give much more rigorous grounds for its claims. It cannot be established that it would always be rational to follow some particular rule limiting the pursuit of immediate gain; nevertheless, in the long term, following the right kind of rules is statistically more profitable[36]. Moreover, the knowledge that people as a group have a statistical tendency to act fairly and to demand the same of others tilts the odds even further towards morality. Besides fairness and cooperation paying off in principle, others willing to act the same way should be easy to find, and defection is likely to be separately penalised.
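One standard way to make the “statistically more profitable in the long term” claim precise (a textbook-style calculation, not anything taken directly from Murray’s book) is to compare expected payoffs against a punishing reciprocator when each round is followed by another with probability w, using the same payoff ordering T > R > P > S as in the sketches above:

```latex
% Expected long-run payoffs against a reciprocating "punisher" partner,
% when each round is followed by another with probability w (illustrative model).
\begin{aligned}
V_{\text{cooperate}} &= R + wR + w^{2}R + \dots = \frac{R}{1-w} \\
V_{\text{defect}}    &= T + wP + w^{2}P + \dots = T + \frac{wP}{1-w} \\
V_{\text{cooperate}} > V_{\text{defect}} &\iff w > \frac{T-R}{T-P}
\end{aligned}
```

With the illustrative payoffs used earlier (T = 5, R = 3, P = 1), cooperation becomes the more profitable course as soon as the probability of meeting the same partner again exceeds one half; no appeal to anything but self-interest is needed for that conclusion.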

Though the idea of moral rules as usually beneficial to each individual is significant, there is another idea that I consider a more important basis for morality. The model of common benefit is based on each subject’s aiming at their own benefit. To seek pleasure and to avoid pain need no further motivation; they are naturally motivating things for virtually anyone. There is no reason, however, for everyone to care only about their own benefit. To the extent that we understand another person, we know that they are in many respects just like ourselves. Even though we cannot directly feel it, their suffering is as bad and their pleasure as good a thing as our own. In addition, they are a subject in the same sense as we are. The tendency towards compassion makes us to some extent see things from this perspective — but if we happen to feel no compassion towards someone, that does not change the facts in any way. Likewise, the other-condemning emotions do nothing to change the situation in reality, even though they can do so in our minds. If a person despises or is disgusted by another, they can see them as insignificant or as simply an impurity, but this grants no justification to act accordingly. There is a view called moral intuitionism according to which the basis for what is right, and at the same time the source of knowledge about it, is our own intuition: what we feel is right is what is right[37]. This view is highly suspect to begin with and seems arbitrary, and scientifically examining the origin and nature of our moral intuitions makes it even more questionable. Nothing about them suggests that they reflect some higher, objective basis of morality.

There is one clear contradiction that sometimes comes up in moral reflections, and it does so here as well: the contradiction between punishment and compassion. I argued above that moral anger can make one see another person very strongly as something other than a subject and wish ill on them. This clearly contradicts the requirement always to recognise another’s worth and to heed their interests. Yet, it has also been seen that punishment has a positive effect on the common welfare. Even without this, it is easy to see what kinds of arguments can be given for the necessity of punishment in our societies. Without punishments, criminals could freely go on taking advantage of others and would spoil society for everyone else. In addition, punishments of course also act as deterrents beforehand.

There is a view according to which meting out the right punishment for a morally wrong act is a moral duty in its own right. This view is called retributivism. Based on the above, it is easy to make a likely guess about where this view at least often comes from: humans simply have a tendency to want to punish. However, we also see that the reason for this tendency appears to be that it has been useful in the past. Insofar as no separate reasons are given for it that are not based, even covertly, on intuition, we can abandon the idea of punishment as an end in itself along with other intuitionist views.[38]

Feelings of moral anger are evolution’s way of playing against defectors, but from the individual’s point of view they are delusional, negative, and often unreasonable, and they poison relationships between people. Luckily, there already exists a well-known alternative to them, one that does the same job much more effectively: a just police and court system. A neutral judge viewing the situation from the outside can take care of the punishment necessary for the population’s welfare evenly, reasonably, and without causing any more damage than necessary. Fortunately, it also appears that this is not a mere naïve theoretical ideal. Steven Pinker, in his book about the history and causes of violence, has examined its realisation from many angles, and he summarises his results as follows[39]:

When bands, tribes and chiefdoms came under the control of the first states, the suppression of raiding and feuding reduced their rates of violent death fivefold (chapter 2). And when the fiefs of Europe coalesced into kingdoms and sovereign states, the consolidation of law enforcement eventually brought down the homicide rate another thirtyfold (chapter 3)[40]. Pockets of anarchy that remained beyond the reach of government retained their violent cultures of honor[41], such as the peripheral and mountainous backwaters of Europe, and the frontiers of the American South and West (chapter 3). The same is true of the pockets of anarchy in the socioeconomic landscape, such as the lower classes who are deprived of consistent law enforcement and the purveyors of contraband who cannot avail themselves of it (chapter 3). When law enforcement retreats, such as in instant decolonization, failed states, anocracies, police strikes, and the 1960s, violence can come roaring back (chapters 3 and 6).

In other words, judging from history and the present day, the closer we get to the ideal of a just state, justice system, and law enforcement, the less violence there is within the community. There is also a clear connection to whether people feel the need to dispense justice personally. For once, the theoretical ideal is compatible with reality.

So, we should leave retribution to an official, neutral party — which we should, of course, also keep on developing instead of accepting it in just any form — and in our personal relationships with other people aim at really understanding them instead of letting negative emotions make them into objects or monsters to us and maybe lead us into evil deeds of our own. We should judge actions, not persons. We need consistent principles for this and other moral action. Just because the behaviour generally regarded as moral is usually dictated by emotions rather than reason, it does not follow that it should be.

Sources

AIRAKSINEN, TIMO (1987). Moraalifilosofia. Juva: WSOY.

BAUMEISTER, ROY F. (1999). Evil. Inside Human Violence and Cruelty. New York: Holt Paperbacks.

BEARDEN, JOSEPH NEIL (2001). “Ultimatum Bargaining Experiments: The State of the Art.” Retrieved on March 26, 2012.

DAWKINS, RICHARD (1993/1989). The Selfish Gene. References are to the Finnish translation: Geenin itsekkyys. Translated by Kimmo Pietiläinen. Jyväskylä: Gummerus.

DONALDSON, THOMAS (1986). Issues in Moral Philosophy. New York: McGraw-Hill.

FERGUSON, NIALL (2006). The War of the World. History’s Age of Hatred. London: Allen Lane.

HAIDT, JONATHAN (2002). “The Moral Emotions”. In Davidson, Richard J.; Scherer, Klaus R.; and Goldsmith, H. Hill (eds.) (2002): Handbook of Affective Sciences. Oxford: Oxford University Press. Pages 852–870.

MURRAY, MALCOLM (2007). The Moral Wager. Evolution and Contract. Dordrecht: Springer Science+Business Media B.V.

PINKER, STEVEN (2011). The Better Angels of Our Nature. The Decline of Violence in History and Its Causes. London: Penguin Books.

Notes

[1] See Dawkins 1993, 47 on defining the gene more precisely.

[2] Ibid., 48.

[3] Ibid., 51.

[4] Ibid., e.g. chapter 13.

[5] Ibid., 105–125.

[6] Ibid., e.g. 21–25. See also Murray 2007, 136–140.

[7] See e.g. Pinker 2011, 532–533.

[8] Dawkins 1993, chapter 12; Pinker 2011, 533–537.

[9] Murray 2007.

[10] Ibid., 170–171. Strictly speaking, this principle leaves unexplained the intuition of a responsibility to help those in need; this is discussed in chapter 8 of the same work. It should also be noted that in this model, those who do not follow this principle are excluded from its protection, as if they had consented to anything anyone wants to do to them (ibid., 32), so the model does not contradict punishment even though it contains no reference to it as such.

[11] Bearden 2001.

[12] Pinker 2011, 530.

[13] Pinker 2011, 577.

[14] Baumeister 1999, e.g. 72–75.

[15] On how differently, perhaps surprisingly so, people used to think about this, see Pinker (2011) in its entirety, for example chapters 1, 4, and 7.

[16] On the motives of revenge, see Baumeister 1999, 162–167. On just punishments, see below.

[17] Baumeister 1999, e.g. 5.

[18] Baumeister 1999, chapter 2.

[19] Baumeister 1999, 38–43.

[20] Pinker 2011, 489–490.

[21] Pinker 2011, 491.

[22] Baumeister 1999, 47–48.

[23] Baumeister 1999, 160–161.

[24] Baumeister 1999, 52–57.

[25] Baumeister 1999, 84–88. Ferguson 2006, which is about the two world wars, presents this chillingly well — among other things, how German and Japanese soldiers in the Second World War, for example, were taught to consider their enemies sub-human, and many acted accordingly, but also how their opponents, such as British and American soldiers, came to take a similar attitude towards them after seeing what they had done.

[26] Haidt 2002, 852.

[27] Ibid., 856–866.

[28] Still Haidt 2002.

[29] Ibid., 852–854.

[30] Ibid., 855.

[31] Ibid., 857–858.

[32] Ibid., 859–861.

[33] Ibid., 862–864.

[34] Ibid., 861–862.

[35] Pinker 2011, e.g. 175, 580–581, chapter 7.

[36] Murray 2007; this is perhaps the most central claim of the work.

[37] E.g. Airaksinen 1987, 114.

[38] On utilitarian reasons for punishment and on retributivist views, see e.g. Donaldson 1986, 307–310.

[39] Pinker 2011, 681. The references to the book’s chapters that I have left in the quoted text also serve as pointers for anyone who wants to look into the claims in more detail.

[40] If these figures sound surprising or even nonsensical, I can only recommend reading the whole book.

[41] It should be noted that honour mainly means (at least here) that “insults” have to be avenged.

