
Thursday, February 23, 2012

Defining Morality: My Current Perspective

As discussed previously, the moral psychology literature often operates from a folk psychology definition of morality. This habit has always frustrated me because the folk psychology definition seems, in my experience at least, to assume that there is a single system of attitudes, judgments, and behaviors that can be perceived, studied, and labeled as "morality." Some researchers argue that morality can be divided into different domains, characterized by distinct cognitions that may vary in certain ways from culture to culture but that are ultimately biologically constrained. These researchers tend to look at a variety of domains, including decisions to cause death or to allocate resources. Other researchers argue that emotions (which have a cognitive component) both underlie moral attitudes and judgments in different domains and motivate behaviors. Still other researchers ask participants to identify whether an attitude is "a reflection of your core moral beliefs and convictions." Yet others examine morality as a dimension used in social judgments, often looking at traits like trustworthiness, honesty, and fairness that can be distinguished from warmth-related traits like friendliness.


What is immediately evident (to me at least) is that these researchers are examining potentially related but definitely distinct objects of study. They lump attitudes, judgments, and decisions in different domains together as "moral attitudes," "moral judgments," and "morally relevant behaviors." They do so because these domains are traditionally considered morally relevant. However, as Shweder argues from research contrasting explicit discussions of moral and immoral domains in Hyde Park, IL and in Bhubaneswar, Orissa, India, moral domains may vary from culture to culture. This could be either because there are more domains of moral inquiry in other cultures or because more behaviors (and so on) are considered relevant to each domain.


Notice that I am using the term "morality" but still have not defined it. This is partially because some variation on "moral," "ethical," "right" or "wrong," "good" or "evil" is commonly used in every culture that I know of (although I really need to find a citation for this). It should be noted, however, that these terms may not be equivalent. The terms "ethics" and "morality," for example, may be used to distinguish rules that apply to individuals in certain roles from rules that apply to all individuals at all times. They may also be used to distinguish rules that people would prefer that you follow from rules that you must follow. These are explicit distinctions; a study of implicit associations with the terms may yield yet other similarities and distinctions.


At this point, you might scoff. You might say, "But I can use the word in a sentence. I can talk about some things being moral and immoral. I can talk about how some things are worse than others and about how some things are better than others, morally speaking." I can do this too. My question is, what allows us to do this? What goes on when we do this?


When I think about morality, I understand it in terms of goals, both implicit and explicit. These goals are various: for pleasure, for meaning, to avoid uncertainty, to avoid social dissolution, and so on. I argue that individuals may judge something--a belief, an affective reaction, a behavior--to be moral or immoral when it facilitates or hinders the achievement of goals that are defined by the approach or avoidance of intensely positive or negative states of being and that are, more importantly, foundational for the approach or avoidance of other goals.


These foundational goals include, for example, the maintenance of meaning. Meaning is here defined as the experience of being able to understand and consistently react to a variety of phenomena. The pursuit of meaning is an active goal that is threatened by a) threats to the self, including threats to self-esteem, b) perceptions of randomness and the experience of uncertainty, c) threats to affiliation with others, d) threats to symbolic immortality (e.g., reminders of mortality), and e) categories of meaning-threats (and affirmations) that have not yet been clearly delineated in the social psychological literature. I should note that meaning affirmations, as well as meaning threats, should have an important role in a moral system. For example, affirming one's self-concept may lead one to be more open to otherwise threatening experiences, such as people arguing against one's political position.


Interestingly, in one experiment, people who were primed either with nonsense phrases like "bull left" and "turn-frog" or with mortality salience assigned higher bail to a fictional prostitute than people who were not primed with either meaning-threat (Randles, Proulx, & Heine, 2011). I argue that this is evidence of moralization. Perhaps, in response to meaning-threat, the fictional prostitute's actions were considered even more damaging to existing moral schemas. Alternatively, in response to meaning-threat, the logic of the moral schema became more appealing. If the second perspective is accurate, I could see emotions and their associated and/or constituent cognitions helping to restore meaning. Emotions provide a clear, structured, and motivating interpretation of events. However, I should note that, in response to meaning-threat, explicit changes in affect are rarely reported (Randles, Proulx, & Heine, 2011).


People, I should note, can explicitly self-regulate against negatively evaluating things--beliefs, affective reactions, behaviors--that hinder the achievement of meaning, and they can also explicitly self-regulate against positively evaluating things that facilitate the achievement of meaning. This self-regulation occurs when individuals know that achieving meaning is difficult and that any attempt to achieve meaning may have only ephemeral success. Interestingly, in a situation of powerlessness and intense threat, attempts to restore meaning may be less successful, leading to a rapidly changing set of moral cognitions. An American who was being attacked by a member of al-Qa'ida, for example, would rarely call al-Qa'ida's actions moral if those actions led to negative outcomes. Why not? Salient at the time of attack might be a variety of affectively laden goals: the preservation of one's continued existence, a need to avoid attack because of one's impression that the attack is not fair (that one is being randomly targeted rather than punished for wrongdoing), or a variety of other goals.


The attack would challenge meaning because a) a controlled environment is positively evaluated only if it allows you the possibility of achieving positive goals, b) staying alive is fundamental to the achievement of a variety of goals, and c) one does not automatically find oneself worthy of attack if one believes that one is worthy of continued existence, worthy of being able to continue to pursue a variety of goals. If the attack continued, however, one might seek meaning by justifying the attack as moral; by imagining a future where people like you stop future attacks and punish the perpetrators; by avoiding thoughts of death and imagining a reward in the afterlife; by avoiding thoughts of death and focusing on being a moral person before dying; by thinking of death as an acceptable consequence of meaning-establishing positive acts, like saving the lives of others; and so on.


Does this account challenge the existing moral psychology literature? No, but it may help integrate it and may lead to new predictions. Skitka and colleagues argue that moral identity is distinct from social identity, at least for individuals in their samples. This would make a degree of sense: the moral identity may be related to aspects of self that are fundamental to the pursuit of a variety of goals, while social identities and personal identities may be fundamental to the pursuit of goals that are marked as non-moral, as varying, as not worthy of strict control. At the same time, implicit evaluations of how worthy something is of stability and of control may vary. Someone could have an implicit moral reaction but not explicitly acknowledge this reaction.


Haidt and colleagues argue that morality is linked to emotions, which contain both an affective component and a cognitive one. The cognitive component is a story about responsibility (control), about the event that caused the negative or positive affect (the disruption of stable perceptions of the environment through the mixing of categories, the loss of valued things, attack, the superiority or inferiority of other selves, establishing control and value through care), and about whether one should approach or avoid the moral actor or the person acted upon. Scholars of person-perception argue that we distinguish moral from social traits, with traits like trust being moral traits and friendliness being social ones. Friendliness is a general, positively evaluated trait, but it has few implications for other goals. Trust, on the other hand, has implications for multiple goals.
A harder question is why hierarchy-maintenance goals and group-boundary-maintenance goals get moralized. They may be fundamental to meaning in that they influence one's self-understanding and one's ability to understand other people.


So far, I've presented a theory. Testing it would involve correlating reactions identified in the literature as "moral" with challenges to goal-pursuit, correlations the literature has not yet computed. It would also involve tracing phenomena as diverse as judgments on trolley problems and reactions to same-sex marriage back to certain, specifiable goals, and establishing the context-sensitive relationships among those goals. If we did all that work, however, we might reach my particular goal: predicting, and being able to influence, cooperation and conflict in groups with diverse ways of applying the labels "moral" and "immoral," "right" and "wrong."

Wednesday, October 5, 2011

Defining Morality as an Object of Study


All entries tagged with this label examine different scholars and their approaches to studying morality.

Friday, June 24, 2011

Moral Convictions


According to Skitka and colleagues, people who have identified moral convictions will likely believe that these convictions apply to others and will likely be intolerant of those who do not share them. They operationalize moral conviction by asking participants whether their “feelings about X are a reflection of my core moral beliefs and convictions” or by asking them “to what extent is your attitude about X a reflection of your core moral beliefs and convictions.” Moral convictions, then, are defined by the participant and not by the experimenter’s own moral or scientific theories. While I would imagine that the beliefs participants label moral convictions vary in their structure, function, and origins, Skitka and colleagues have established that, at least for their samples, when participants identify beliefs as central to their core moral beliefs and convictions, they are identifying beliefs that have similar effects on social perceptions, similar strengths, and similar effects on behavior.

According to Bauman and Skitka’s 2009 chapter, moral convictions tend to automatically inform an individual’s perception of their environment (physical and social) and of themselves. Stimuli that are relevant to moral convictions will be considered salient, and these stimuli will be judged to have a moral significance that is objective, independent of the mind of the perceiver. Some individuals may challenge this automatic assumption, but the objectivity of this moral salience is an implicit, automatic, and perhaps unexamined belief, and overcoming it can require more deliberate thought or the activation of another, contradictory, automatic goal (Moskowitz & Li, 2010).

Without the interference of what we can quickly term moral-subjectivity goals, individuals will consider any motivations, behaviors, and justifications for these behaviors to be both natural and normative responses. They will believe that all people should naturally share these motivations, behaviors, and justifications. As Bauman and Skitka (2009, p. 342) argue, “[P]eople experience morals as if they were readily observable, objective properties of situations, or as facts about the world . . . Unlike facts, however, morals carry prescriptive force . . . moral judgments both motivate and justify consequent behaviors.”

Moral convictions, Skitka and colleagues argue, are a special type of attitude that is imperfectly captured by previous attitude-strength research. Correlations between attitude extremity and moral conviction in Skitka and Bauman (2008), for example, were high enough that the two could be tapping the same construct, but not so high as to make this likely. The behavioral implications of attitude extremity and moral conviction were also different, with the latter, unlike the former, being associated with social distancing from people who do not share the same attitude (Skitka et al., 2005).

Bauman and Skitka (2009) distinguish their research from other moral psychology research by arguing that those other experiments require that participants judge whether a person’s behavior, or the participant’s own decision, is moral or immoral, right or wrong. Such experiments are therefore unable to say whether their participants would have spontaneously made moral judgments. Participants who spontaneously make moral judgments may behave differently from those who require experimenter prompting. If so, these experiments may have limited application outside of the laboratory. Many experiments, for example, involve trolley problems, yet Bauman (2008) confirmed that “there is considerable variability in the extent that people perceive the dilemma to be a situation that involves a moral choice” (Bauman & Skitka, 2009, p. 347).

Thursday, July 29, 2010

Morality Based in Emotional and Affective Appraisals

Some scholars believe that emotions are the basis of moral judgments. Emotions may arise from cognitive appraisals, but they may also influence cognitive appraisals. The majority of research to date has dealt with the latter influence. Emotions should be distinguished from affect, which is general positivity or negativity. Some research has looked at the influence of positive versus neutral emotions on moral judgments.


For example, Valdesolo and DeSteno (2006) examined the moderating role of positive affect on participant responses to the footbridge dilemma. In the footbridge dilemma, the experimenter presents the participant with a fictional scenario: the brakes and steering on a train have failed, and the train is hurtling toward track workers, who will all be killed if it is not redirected onto an empty track. A bystander on a bridge above the track notices a large switch that would redirect the train. Unfortunately, the switch can only be moved by a great force. The bystander looks around and sees a heavyset man who, if pushed onto the switch, would move it and divert the train. The bystander herself is not heavy enough to move the switch. She must decide whether to push the heavyset man onto the switch, killing him in the process, or let the train hit the track workers.

Many participants respond to this situation by refusing to push the heavyset man to his death, dooming the workers in the process. However, Valdesolo and DeSteno (2006) were able to get more participants to agree to push the man. They did this by having participants watch "a comedy video immediately before completing a questionnaire on which they judged the appropriateness of pushing a man to his (useful) death" (Haidt & Kesebir, 2009). The positive affect, the researchers believe, counteracted any negative affect aroused by the footbridge dilemma.

Positive affect (including positive emotions) can also increase helpful action. Experiments have examined the roles of "[g]ood weather (Cunningham, 1979), hearing uplifting or soothing music (Fried & Berkowitz, 1979; North, Tarrant & Hargreaves, 2004), remembering happy memories (Rosenhan, Underwood & Moore, 1974), eating cookies (Isen & Levin, 1972), and smelling a pleasant aroma such as roasted coffee (R. A. Baron, 1997)" (Haidt & Kesebir, 2009) on helping behaviors.

Other experiments have examined specific emotions more directly. One particularly influential emotion is disgust. Wheatley and Haidt (2005) "used post-hypnotic suggestion to implant an extra flash of disgust whenever participants read a particular word (“take” for half of the participants; “often” for the other half). Participants later made harsher judgments of characters in vignettes that contained the hypnotically enhanced word, compared to vignettes with the nonenhanced word. Some participants even found themselves condemning a character in a story who had done no wrong--a student council representative who “tries to take” or “often picks” discussion topics that would have wide appeal" (Haidt & Kesebir, 2009). Schnall, Haidt, Clore, and Jordan (2008) "extended these findings with three additional disgust manipulations: seating participants at a dirty desk (vs. a clean one), showing a disgusting video clip (vs. a sad or neutral one), and asking participants to make moral judgments in the presence of a bad smelling “fart spray” (or no spray)" (Haidt & Kesebir, 2009).

The influence of disgust was moderated by participant awareness of disgust, as measured by the "private body consciousness" scale (Miller, Murphy, & Buss, 1981). This scale measures "the degree to which people attend to their own bodily sensations. This finding raises the importance of individual differences in the study of morality: Even if the ten literatures reviewed here converge on a general picture of intuitive primacy, there is variation in the degree to which people have gut feelings, follow them, or override them (see Bartels, 2008; Epstein, Pacini, Denes-Raj, & Heier, 1996). For example, individual differences on a measure of disgust sensitivity (Haidt, McCauley, & Rozin, 1994) has been found to predict participants’ condemnation of abortion and gay marriage, but not their stances on non-disgust-related issues such as gun control and affirmative action (Inbar, Pizarro, & Bloom, in press). Disgust sensitivity also predicts the degree to which people condemn homosexuals, even among a liberal college sample, and even when bypassing self-report by measuring anti-gay bias using two different implicit measures (Inbar, Pizarro, Knobe, & Bloom, in press)" (Haidt & Kesebir, 2009).

It should be noted that the exact appraisal underlying the relationship between disgust and negative moral judgments is not well understood. For example, in Inbar, Pizarro, Knobe, & Bloom (2009), disgust sensitivity predicted both attributions of intentionality to a situation that resulted in more gay men kissing in public and implicit negative attitudes towards gay men as measured by an IAT. However, it is not clear whether disgust moderates sensitivity to moral purity and pollution or, more generally, sensitivity to the violation of social conventions.

Tapias, Glaser, Keltner, Vasquez, & Wickens (2007) further explore the relationship between disgust and attitudes towards gay people. In this study, the researchers primed participants with words related to homosexuality. They then tested participant reactions in an ostensibly unrelated experiment. Participants reacted to this experiment as if they had been primed directly with disgust, or so the authors argue. Participants who reported being more likely to experience disgust in their daily lives also reported higher levels of prejudice towards gay people.

Disgust is not the only emotion studied in the moral psychology literature. Anger, for example, has a powerful influence on moral judgments. Tapias et al. (2007) used the same methodology to demonstrate a relationship between anger and prejudice towards African Americans. 

Wednesday, July 14, 2010

Morality as Cognition


Marc Hauser's Moral Minds argues that an evolved moral faculty generates moral judgments. Moral judgments are defined as judgments of right and wrong, permitted and forbidden. This faculty applies innate principles that are biologically determined. However, these principles allow for variation. Just as some principles of language are immutable and some vary, so moral judgments can vary across cultures. Culture, in Hauser's terminology, sets the parameters. Nature gives us the principles.

Departing from other researchers in the field, Hauser argues that cognitive appraisals that may precede or be driven by emotion are distinct from the appraisals made by the moral faculty.  However, for Hauser, emotions may directly result from or accompany these appraisals and remain, in his account, an important area of inquiry.

Methodology: Supporting evidence comes from social-psychology-based laboratory experiments and surveys, behavioral economics, neuroscience, and evolutionary biology.  Hauser argues that while little evidence contradicts his thesis, more research is needed to ensure that it is the most parsimonious explanation.

Morality vs. Convention


Shweder et al. (1987) contrast social conventions and moral rules cross-culturally. They conclude that the category of social convention emerges from the moral concept of individual rights and is not found in cultures that lack this concept.

Shweder et al. would give a more complete analysis if they discussed informal social rules. These rules are unmarked by culture and are learned through mimicry.  People may feel uncomfortable upon witnessing another person violating these informal rules. However, they are unlikely to know why.

Shweder et al. support their conclusions using data collected via structured interviews in Hyde Park, IL and India.