Sunday, February 26, 2012

Moral Hypocrisy: Definition and a Demonstration

Valdesolo and DeSteno (2007) studied moral hypocrisy - judging your own actions to be more moral than the same actions performed by another person in similar circumstances. They operationalized moral hypocrisy by examining differences in attributions of fairness or unfairness to the same act when it was performed by the self, by dissimilar others, and by similar others.

Following Batson et al. (1997), “[i]n one condition, subjects were required to distribute a resource (i.e., time and energy) to themselves and another person, and could do so either fairly (i.e., through a random allocation procedure) or unfairly (i.e., selecting the better option for themselves). They were then asked to evaluate the morality, or fairness, of their actions. In another condition, subjects viewed a confederate acting in the unfair manner, and subsequently evaluated the morality of this act.”

Valdesolo and DeSteno (2007) divided their participant pool into four groups. The first group was asked to decide whether to allocate a difficult task to themselves and an easy task to another person, or vice versa. They were given two options: to decide using a randomizer or to allocate the tasks however they wished. All allocations would be anonymous. All but two participants in this first group allocated the easy task to themselves and the difficult task to the other person (whom they had never met and who did not, in fact, exist).

A second group of participants was asked to watch someone else (a confederate of the experimenter) make the allocation. This confederate, like nearly everyone in the first group, allocated the easy task to himself. A third group also watched a confederate make the allocation, but was told that they differed from the confederate on one trait: being an Underestimator or an Overestimator. The fourth group was told that they were similar to the confederate on one trait (being an Underestimator or an Overestimator).

All groups were asked to rate how fair the decision was (either their own decision or the confederate's). The group that rated themselves tended to see their own actions as more fair than the group that rated the actions of a confederate. Of the two groups that rated similar and dissimilar confederates, the group that rated an arbitrarily similar confederate saw his actions as more fair than the group that rated an arbitrarily dissimilar confederate. In other words, people saw their own selfishness as more fair than another's selfishness. They also saw a dissimilar other's selfishness as less fair than a similar other's.

Source:
Valdesolo, P., & DeSteno, D. (2007). Moral hypocrisy: Social groups and the flexibility of virtue. Psychological Science, 18(8), 689-690. http://socialemotions.org/page5/files/Valdesolo.DeSteno.2007.pdf

Thursday, February 23, 2012

Defining Morality: My Current Perspective

As discussed previously, the moral psychology literature often operates from a folk psychology definition of morality. This habit has always frustrated me because the folk psychology definition seems, in my experience at least, to assume that there is a single system of attitudes, judgments, and behaviors that can be perceived, studied, and labeled as "morality." Some researchers argue that morality can be divided into different domains, characterized by distinct cognitions, which may vary in certain ways from culture to culture but are ultimately biologically constrained. These researchers tend to look at a variety of domains, including decisions to cause death or to allocate resources. Other researchers argue that emotions (which have a cognitive component) both underlie moral attitudes and judgments in different domains and motivate behaviors. Still other researchers ask participants to identify whether an attitude is "a reflection of your core moral beliefs and convictions." Yet others examine morality as a dimension used in social judgments, often looking at traits like trustworthiness, honesty, and fairness that can be distinguished from warmth-related traits like friendliness.


What is immediately evident (to me at least) is that these researchers are examining potentially related but definitely distinct objects of study. They lump attitudes, judgments, and decisions in different domains together as "moral attitudes," "moral judgments," and "morally-relevant behaviors." They do so because these domains are traditionally considered morally relevant. However, as Shweder argues from research contrasting explicit discussions of moral and immoral domains in Hyde Park, IL and in Bhubaneswar, Orissa, India, moral domains may vary from culture to culture. This could be either because there are more domains of moral inquiry in other cultures, or because more behaviors (etc.) are considered relevant to each domain.


Notice that I am using the term "morality" but still have not defined it. This is partially because some variation on "moral," "ethical," "right" or "wrong," "good" or "evil" is commonly used in every culture that I know of (although I really need to find a citation for this). It should be noted, however, that these terms may not be equivalent. The terms "ethics" and "morality," for example, may be used to distinguish rules that apply to individuals in certain roles from rules that apply to all individuals at all times. They may also be used to distinguish rules that people would prefer that you follow from rules that you must follow. It should be noted that these are explicit distinctions; a study of implicit associations with the terms may yield yet other similarities and distinctions.


At this point, you might scoff. You might say, "But I can use the word in a sentence. I can talk about some things being moral and immoral. I can talk about how some things are worse than others and about how some things are better than others, morally speaking." I can do this too. My question is, what allows us to do this? What goes on when we do this?


When I think about morality, I understand it in terms of goals, implicit and explicit. These goals are various: for pleasure, for meaning, to avoid uncertainty, to avoid social dissolution, etc. I argue that individuals may judge something--a belief, an affective reaction, a behavior--to be moral or immoral when it facilitates or hinders the achievement of goals that are defined by the approach or avoidance of states of being that are intensely positive or negative and that are, more importantly, foundational for the approach or avoidance of other goals.


These foundational goals include, for example, the maintenance of meaning. Meaning is here defined as the experience of being able to understand and consistently react to a variety of phenomena. The pursuit of meaning is an active goal that is threatened by a) threats to the self, including threats to self-esteem, b) perceptions of randomness and the experience of uncertainty, c) threats to affiliation with others, d) threats to symbolic immortality (e.g., reminders of mortality), and e) categories of meaning-threats (and affirmations) that have not yet been clearly delineated in the social psychological literature. I should note that meaning affirmations, as well as meaning threats, should have an important role in a moral system. For example, affirming one's self-concept may lead one to be more open to otherwise threatening experiences--for example, people arguing against one's political position.


Interestingly, in one experiment, people responded both to being primed with nonsense phrases like "bull left" and "turn-frog" and to mortality salience primes by assigning higher bail to a fictional prostitute than people who were not primed with either meaning-threat (Randles, Proulx, & Heine, 2011). I argue that this is evidence of moralization. Perhaps, in response to meaning-threat, the fictional prostitute's actions were considered even more damaging to existing moral schemas. Alternatively, in response to meaning-threat, the logic of the moral schema was more appealing. If the second perspective is accurate, I could see emotions and their associated and/or constituent cognitions helping to restore meaning. Emotions provide a clear, structured, and motivating interpretation of events. However, I should note that, in response to meaning-threat, explicit changes in affect are rarely reported (Randles, Proulx, & Heine, 2011).


People, I should note, can explicitly self-regulate against negatively evaluating things--beliefs, affective reactions, behaviors--that hinder the achievement of meaning, and they can also explicitly self-regulate against positively evaluating things that facilitate the achievement of meaning. This self-regulation occurs when individuals know that achieving meaning is difficult and that any attempt to achieve meaning may have only ephemeral success. Interestingly, in a situation of powerlessness and intense threat, attempts to restore meaning may be less successful, leading to a rapidly changing set of moral cognitions. An American who was being attacked by a member of al-Qa'ida, for example, would rarely call al-Qa'ida's actions moral if those actions led to negative outcomes. Why not? Well, a variety of affectively-laden goals might be salient at the time of attack: the preservation of one's continued existence; a need to avoid attack because of one's impression that the attack is not fair, that one is being randomly targeted rather than punished for one's wrongdoing; or a variety of other goals.


The attack would challenge meaning because a) a controlled environment is only positively evaluated if it allows you the possibility of achieving positive goals, b) staying alive is fundamental to the achievement of a variety of goals, and c) one does not automatically find oneself worthy of attack if one believes that one is worthy of continued existence, worthy of being able to continue to pursue a variety of goals. If the attack continued, however, one might seek meaning by justifying the attack as moral; by imagining a future where people like you stop future attacks and punish the perpetrators; by avoiding thoughts of death and imagining a reward in the afterlife; by avoiding thoughts of death and focusing on being a moral person before dying; by thinking of death as an acceptable consequence of meaning-establishing positive acts, like saving the lives of others; and so on.


Does this account challenge the existing moral psychology literature? No, but it may help integrate it and may lead to new predictions. Skitka and colleagues argue that moral identity is distinct from social identity, at least for individuals in their samples. This would make a degree of sense: the moral identity may be related to aspects of self that are fundamental to the pursuit of a variety of goals, while social identities and personal identities may be fundamental to the pursuit of goals that are marked as non-moral, as varying, as not worthy of strict control. At the same time, implicit evaluations of how worthy something is of stability and of control may vary. Someone could have an implicit moral reaction but not explicitly acknowledge this reaction.


Haidt and colleagues argue that morality is linked to emotions, emotions which contain both an affective component and a cognitive one. The cognitive component is a story about responsibility (control), about the event that caused the negative or positive affect (the disruption of stable perceptions of the environment through the mixing of categories, the loss of valued things, attack, the superiority or inferiority of other selves, the establishment of control and value through care), and about whether one should approach or avoid the moral actor or the person acted upon. Scholars of person-perception argue that we distinguish moral traits from social traits, with traits like trustworthiness being moral and traits like friendliness being social. Friendliness is a general, positively-evaluated "thing," but it has few implications for other goals. Trustworthiness, on the other hand, has implications for multiple goals.

Harder questions remain: why do hierarchy-maintenance goals and group-boundary-maintenance goals get moralized? They may be fundamental to meaning in that they influence one's self-understanding and one's ability to understand other people.


So far, I've presented a theory. Ways to test it would include correlating reactions identified in the literature as "moral" with challenges to goal-pursuit, correlations the literature has not yet examined. Testing it would also involve establishing that the motivations behind phenomena as diverse as judgments on trolley problems and reactions to same-sex marriage are tied to certain, specifiable goals, and it would involve establishing the context-sensitive relationships among those goals. If we did all that work, however, we might reach my particular goal: predicting and being able to influence cooperation and conflict in groups with diverse ways of applying the labels "moral" and "immoral," "right" and "wrong."