Showing posts with label Moral Selves. Show all posts

Sunday, February 26, 2012

Moral Hypocrisy: Definition and a Demonstration

Valdesolo and DeSteno (2007) studied moral hypocrisy: judging your own actions to be more moral than the same actions performed by another person in similar circumstances. They operationalized moral hypocrisy by examining differences in attributions of fairness or unfairness to the same act when it was performed by the self, by dissimilar others, and by similar others.

Following Batson et al. (1997), “[i]n one condition, subjects were required to distribute a resource (i.e., time and energy) to themselves and another person, and could do so either fairly (i.e., through a random allocation procedure) or unfairly (i.e., selecting the better option for themselves). They were then asked to evaluate the morality, or fairness, of their actions. In another condition, subjects viewed a confederate acting in the unfair manner, and subsequently evaluated the morality of this act.”

Valdesolo and DeSteno (2007) divided their participant pool into four groups. The first group was asked to decide whether to allocate a difficult task to themselves and an easy task to another person, or vice versa. They were given two options: to decide using a randomizer or to allocate the tasks however they wished. All allocations would be anonymous. All but two participants in this first group allocated the easy task to themselves and the difficult task to the other person (whom they had never met and who did not, in fact, exist).

A second group of participants was asked to watch someone else (a confederate of the experimenter) make the allocation. This confederate, like people in the first group, allocated the easy task to himself. A third group also watched a confederate make the allocation, but were told that they differed from the confederate on one trait, being an Underestimator or an Overestimator. The fourth group was told that they were similar to the confederate on one trait (being an Underestimator or an Overestimator).

All groups were asked to rate how fair the decision was (either their own or the decision of the confederates). The group that rated themselves tended to see their own actions as more fair than the group that rated the actions of a confederate. Of the two groups that rated similar and dissimilar confederates, the group that rated an arbitrarily similar confederate saw his actions as more fair than the group that rated an arbitrarily dissimilar confederate. In other words, people saw their own selfishness as more fair than another's selfishness. They also saw a dissimilar other's selfishness as less fair than a similar other's.

Source:
Valdesolo, P., & DeSteno, D. (2007). Moral hypocrisy: Social groups and the flexibility of virtue. Psychological Science, 18(8), 689-690. http://socialemotions.org/page5/files/Valdesolo.DeSteno.2007.pdf

Thursday, February 23, 2012

Defining Morality: My Current Perspective

As discussed previously, the moral psychology literature often operates from a folk psychology definition of morality. This habit has always frustrated me because the folk psychology definition seems, in my experience at least, to assume that there is a single system of attitudes, judgments, and behaviors that can be perceived, studied, and labeled as "morality." Some researchers argue that morality can be divided into different domains, characterized by distinct cognitions which may vary in certain ways, from culture to culture, but that are ultimately biologically constrained. These researchers tend to look at a variety of domains, including decisions to cause death or to allocate resources. Other researchers argue that emotions (which have a cognitive component) both underlie moral attitudes and judgments in different domains and motivate behaviors. Still other researchers ask participants to identify whether an attitude is "a reflection of your core moral beliefs and convictions." Yet other researchers examine morality as a dimension used in social judgments--often looking at traits like trustworthiness, honesty, and fairness that could be distinguished from warmth-related traits like friendliness.


What is immediately evident (to me at least) is that these researchers are examining potentially related but definitely distinct objects of study. They lump attitudes, judgments, and decisions in different domains together as "moral attitudes," "moral judgments," and "morally-relevant behaviors." They do so because these domains are traditionally considered morally relevant. However, as Shweder argues from research contrasting explicit discussions of moral and immoral domains in Hyde Park, IL and in Bhubaneswar, Orissa, India, moral domains may vary from culture to culture. This could either be because there are more domains of moral inquiry in other cultures, or because more behaviors (and so on) are considered relevant to each domain.


Notice that I am using the term "morality" but still have not defined it. This is partially because some variation on "moral," "ethical," "right" or "wrong," "good" or "evil" is commonly used in every culture that I know of (although I really need to find a citation for this). It should be noted, however, that these terms may not be equivalent. The terms ethics and morality, for example, may be used to distinguish rules that apply to individuals in certain roles from rules that apply to all individuals at all times. They may also be used to distinguish rules that people would prefer that you follow from rules that you must follow. It should be noted that these are explicit distinctions; a study of implicit associations with the terms may yield yet other similarities and distinctions.


At this point, you might scoff. You might say, "But I can use the word in a sentence. I can talk about some things being moral and immoral. I can talk about how some things are worse than others and about how some things are better than others, morally speaking." I can do this too. My question is, what allows us to do this? What goes on when we do this?


When I think about morality, I understand it in terms of goals, implicit and explicit. These goals are various: for pleasure, for meaning, to avoid uncertainty, to avoid social dissolution, and so on. I argue that individuals may judge something--a belief, an affective reaction, a behavior--to be moral or immoral when it facilitates or hinders the achievement of goals that are defined by the approach or avoidance of states of being that are intensely positive or negative and that are, more importantly, foundational for the approach or avoidance of other goals.


These foundational goals include, for example, the maintenance of meaning. Meaning is here defined as the experience of being able to understand and consistently react to a variety of phenomena. The pursuit of meaning is an active goal that is threatened by a) threats to the self, including threats to self-esteem, b) perceptions of randomness and the experience of uncertainty, c) threats to affiliation with others, d) threats to symbolic immortality, e.g. reminders of mortality, and e) categories of meaning-threats (and affirmations) that have not yet been clearly delineated in the social psychological literature. I should note that meaning affirmations, as well as meaning threats, should have an important role in a moral system. For example, affirming one's self-concept may lead one to be more open to otherwise threatening experiences, such as people arguing against one's political position.


Interestingly, in one experiment, people responded both to being primed with nonsense phrases like "bull left" and "turn-frog" and to mortality salience primes by assigning higher bail to a fictional prostitute than people who were not primed with either meaning-threat (Randles, Proulx, & Heine, 2011). I argue that this is evidence of moralization. Perhaps, in response to meaning-threat, the fictional prostitute's actions were considered even more damaging to existing moral schemas. Alternatively, in response to meaning-threat, the logic of the moral schema was more appealing. If the second perspective is accurate, I could see emotions and their associated and/or constituent cognitions helping to restore meaning. Emotions provide a clear, structured, and motivating interpretation of events. However, I should note that, in response to meaning-threat, explicit changes in affect are rarely reported (Randles, Proulx, & Heine, 2011).


People, I should note, can explicitly self-regulate against negatively evaluating things--beliefs, affective reactions, behaviors--that hinder the achievement of meaning, and they can also explicitly self-regulate against positively evaluating things that facilitate the achievement of meaning. This self-regulation occurs when individuals know that achieving meaning is difficult and that any attempt to achieve meaning may have only ephemeral success. Interestingly, in a situation of powerlessness and intense threat, attempts to restore meaning may be less successful, leading to a rapidly changing set of moral cognitions. An American who was being subjected to an attack by a member of al-Qa'ida, for example, would rarely call al-Qa'ida's actions moral if those actions led to negative outcomes. Why not? Well, salient at the time of attack might be a variety of affectively-laden goals: the preservation of one's continued existence, a need to avoid attack because of one's impression that the attack is not fair--that one is being randomly targeted rather than being punished for one's wrongdoing--or a variety of other goals.


The attack would challenge meaning a) because a controlled environment is only positively evaluated if it allows you the possibility of achieving positive goals, b) because staying alive is fundamental to the achievement of a variety of goals, and c) because one does not, automatically, find oneself worthy of attack if one believes that one is someone who is worthy of continued existence, worthy of being able to continue to pursue a variety of goals. If the attack continued, however, one might seek meaning by justifying the attack as moral; by imagining a future where people like you stop future attacks and punish the perpetrators; by avoiding thoughts of death and imagining a reward in the afterlife; by avoiding thoughts of death and focusing on being a moral person before you do die; by thinking of death as an acceptable consequence of meaning-establishing positive acts, like saving the lives of others; and so on.


Does this account challenge the existing moral psychology literature? No, but it may help integrate it and may lead to new predictions. Skitka and colleagues argue that moral identity is distinct from social identity, at least for individuals in their samples. This would make a degree of sense - the moral identity may be related to aspects of self that are fundamental to the pursuit of a variety of goals, while social-identities and personal-identities may be fundamental to the pursuit of goals that are marked-as-non-moral, as varying, as not worthy of strict control. At the same time, implicit evaluations of how worthy something is of stability and of control may vary. Someone could have an implicit moral reaction but not explicitly acknowledge this reaction. 


Haidt and colleagues argue that morality is linked to emotions, emotions which contain both an affective component and a cognitive one. The cognitive component is a story about responsibility (control), the event that caused the negative or positive affect (the disruption of stability of perceptions of the environment through the mixing of categories, the loss of valued things, attack, the superiority or inferiority of other selves, establishing control and value through care), and whether one should approach or avoid the moral actor or the person he, she, etc. has acted upon. Scholars of person-perception argue that we distinguish moral from social traits, with traits like trust being moral traits and friendliness being social ones. Friendliness is a general, positively-evaluated "thing" but it has few implications for other goals. Trust, on the other hand, has implications for multiple goals.
A harder question would be: why do hierarchy-maintenance goals and group-boundary-maintenance goals get moralized? They may be fundamental to meaning in that they influence one's self-understanding and one's ability to understand other people.


So far, I've presented a theory. Ways to test this theory would include correlating reactions identified in the literature as "moral" with challenges to goal-pursuit--correlations that the literature has not yet examined. It would also involve establishing the motivation behind phenomena as diverse as judgments on trolley-problems and reactions to same-sex marriage with certain, specifiable goals. It would further involve establishing the context-sensitive relationships among goals. If we did all that work, however, we might reach my particular goal: predicting and being able to influence cooperation and conflict in groups with diverse ways of applying the labels "moral" and "immoral," "right" and "wrong."

Sunday, October 9, 2011

Non-conformity and Counter-conformity to Group Norms - An Exploration Using Gay Marriage and a Government Apology to Australian Aborigines


Matthew Hornsey, Louise Majkut, Deborah Terry, and Blake McKimmie's 2003 article examines the conditions under which University of Queensland students in favor of legal recognition of gay couples or in favor of a government apology to the Aborigines would act on these attitudes either publicly or privately. They specifically analyzed the roles of moral conviction, perceived societal support, and perceived support by the rest of the student body. They found that, in general, a strong moral basis for the attitude, perceived societal opposition, and perceived group support correlated positively with intention to act, both privately and publicly. Interestingly, intention to act publicly was sometimes greater than intention to act privately, and group support sometimes had no effect at all.

Using Emotions to Motivate Action and Constrain Cognition - A Speculative Perspective on ACT UP's Tactics


In Gould’s Moving Politics: Emotion and ACT UP’s Fight Against AIDS, different emotions privilege different acts of contention. Gould, for example, directly links anger to ACT UP’s use of nonviolent direct action. In Gould’s account, members of ACT UP believed that anger would inspire direct action and explicitly encouraged anger in order to sustain their own and others' participation. Gould argues that members of ACT UP embraced these tactics and the anger that inspired them because doing so provided a more effective alternative to less confrontational tactics already in use by other organizations. This anger, sustained by the emotion-work of ACT UP members, served to sustain both ACT UP and its cause.

However, as Gould discusses, emotions exist within a set of frames, including political ideology, social-normative assumptions, and identity. Throughout Gould’s account, these frames shape not only which emotions are relevant but the actions that these emotions privilege and the targets of these actions. It is not clear, however, how emotions interact with these frames—whether they reflect them, amplify them, or transform them. Underlying this ambiguity in Gould’s text is an ambiguous account of the indeterminacy of emotions. Focusing on anger and emotions that moderate the effects of anger, I will here suggest a definition of emotion as modular frame that both fits Gould’s evidence and generates a set of testable hypotheses that could render her model applicable to a range of other contentious movements.

In Gould’s account, emotions reflect existing frames and help motivate but do not restrict action. I argue that emotions both create new frames and directly cause action, although they do so in coordination with wider frames. Specifically, emotions frame personal and social goals. Emotions are intimately bound to—and may even be partially constituted by—appraisals of whether a goal can or should be achieved, how much control the actor has over goal achievement, and the actor’s ability to cope with achievement or non-achievement of these goals.

Anger, for example, tends to arise when another individual’s goals are interfering with yours, when you believe that they have control over their actions, and when you believe that you cannot tolerate (cope with) interference. This pattern tends to hold even when individuals are experiencing anger as group members rather than simply as individuals.

Last, anger, like all emotions, can be instantaneous and, to a degree, transferable. If you see an angry person, you form a theory about what they are thinking and feeling that is based on the appraisals described above. If you identify with that person, you can adopt those appraisals as your own even before you have experienced their situation for yourself.

ACT UP deliberately framed events in terms of the core narrative of anger, creating an amplifying resonance between sociopolitical beliefs and instantaneous emotion. ACT UP, for example, shifted blame for the epidemic from the gay community to the government, arguing that the gay community had responded to the virus by developing safe sex practices while government actors had callously refused to act or acted in a way that further threatened the health and safety of the gay community. Further, they made anger normative, encouraging its display and privileging demonstrations of anger over demonstrations of other, potentially frame-threatening, emotions. It was impossible to be a member of ACT UP and not feel anger, impossible to be a member of ACT UP and not see the epidemic, and actors in the epidemic, through the lens of anger.

By choosing to embrace anger (over alternatives), ACT UP members were driven to direct action. Anger involves the appraisal that another person's actions (or your own past actions, or a group's actions, or the actions of God or fate) are intolerable. Both nonviolent and violent resistance are possible results of anger, but an emphasis on resistance is almost assured if anger is given free rein.

Hope and anger interact when an individual or group is deciding when, where, and whom to resist. Anger establishes threat and the intentionality of the offending actor, but hope provides a sense of the angry person's agency, their ability to effect change. ACT UP managed hope, framing nonviolent direct action as the most effective action and framing other tactics as hopeless. When hope was lost, some people surrendered anger, finding numbness or renewed compassion. Numbness shifted focus to inaction, leading activists both to leave ACT UP and to resist re-involving themselves in the fight to manage the AIDS crisis. Renewed compassion shifted focus away from political activism and instead emphasized helping others to cope with the epidemic, leading former ACT UP members to volunteer to care for AIDS victims. The loss of hope also brought a redirection of anger, emphasizing the perceived failings of individual ACT UP members. This new anger, expressed as a sense of betrayal, both amplified existing divisions and spurred the emergence of new social identities.

Moralization was evident in both attitudes towards government actors and, later, attitudes toward ACT UP members. By emphasizing the intentional, negative, and intolerable actions of others, anger can, when appropriately framed, create the perception that an individual’s bad behavior reflects essential badness. It is associated with increased stereotyping, both automatic and more deliberate.

Anger, however, only tends to moralize around justice and fairness, not other markers of "badness" like pollution or weakness. Interestingly, anger may not only have focused the attention of ACT UP members on justice and fairness, but may have focused governmental attention on these issues as well. When medical and scientific officials paid attention, even defensively, to ACT UP's anger, they may have been more attentive to ACT UP's justice and fairness frames, and it is possible that a combination of anger and hope encouraged frame bridging between these officials and members of ACT UP.

References:

Gould, Deborah. Moving Politics: Emotion and ACT UP's Fight Against AIDS. Introduction: "New Curves in the Emotional Turn"; "Affect, Feelings, and Emotions."
Gould, Deborah. Moving Politics. Chapter 2: "New Feelings and an Expanding Political Horizon After Hardwick"; "Individuals and the Social Space for Militancy"; "The Affects and Emotions of Framing"; "AIDS as Genocide: Linking Fear, Grief, and Anger to Action."
Gould, Deborah. Moving Politics. Chapter 4: "ACT UP and a New Emotional Habitus"; "Grief into Anger."
Gould, Deborah. Moving Politics. Chapter 6: "Moralism."
Gould, Deborah. Moving Politics. Chapter 7: "From Despair to Activism"; "ACT UP's Antidote to Despair"; "What Despair Does"; "Forbidding Despair."
Kuppens, Peter, Van Mechelen, Iven, Smits, Dirk J. M., & De Boeck, Paul. "The Appraisal Basis of Anger: Specificity, Necessity, and Sufficiency of Components." Emotion, 2003, 3(3), 254-269.
Mackie, Diane M., Devos, Thierry, & Smith, Eliot R. "Intergroup Emotions: Explaining Offensive Action Tendencies in an Intergroup Context." Journal of Personality and Social Psychology, 2000, 79(4), 602-616.
Smith, Eliot R., Seger, Charles R., & Mackie, Diane M. "Can Emotions Be Truly Group Level? Evidence Regarding Four Conceptual Criteria." Journal of Personality and Social Psychology, 2007, 93(3), 443.
Snow, David A., Rochford, E. Burke, Worden, Steven K., & Benford, Robert D. "Frame Alignment Processes, Micromobilization, and Movement Participation." American Sociological Review, 1986, 51(4), 477.

What Biases Judgments of the Self?


According to Gilbert and Malone, there are four main factors that can lead to inaccurate beliefs about others and oneself: low levels of awareness of situational forces, unrealistic expectations, inflated categorizations, and incomplete corrections for perceived errors (1995, p. 8). Low levels of awareness may be due to the invisibility of those causal influences on human behavior that are temporally and spatially distant (1995, p. 10). Even if the causal influences are proximal, the observer must have a theory of influences on human behavior in order to interpret these influences. Here, Gilbert and Malone's first and second factors interact. In their account, individuals regularly underestimate incentives, basic social pressures, and egocentric biases (1995, pp. 11-12). Egocentric biases occur when individuals do not fully understand why they perceive a certain situation in a specific way. Unable to fully comprehend the reasons for their evaluations, individuals may fall back on a naïve realism in which their impressions of objects are treated as qualities of the objects themselves. Alternatively, they may simply be unable, through introspection, to understand the origin of their own reactions and thus may actively apply a theoretical perspective that they believe to be accurate but are unable to test (1995, pp. 11-12).

Individual self-understandings tend to be highly biased. In 1993, Sedikides demonstrated that, for his participants, self-enhancement and consistency motives were more influential than diagnosticity motives (cited in Baumeister, 2010, p. 149). Acknowledging situational influences could undermine both positive accomplishments and the consistency with which one believes that one can achieve these accomplishments.

At the same time, it is not clear how these biases in self-judgment would bias judgments of other people. Judgments of others are typically more realistic. For example, in 1988 Taylor and Brown demonstrated that "people overestimate their successes and good traits . . . underestimate their failures and bad traits . . . overestimate how much control they have over their lives and their fate . . . [and are] unrealistically optimistic, believing that they are more likely than other people to experience good outcomes and less likely to experience bad ones" (cited in Baumeister, 2010, p. 150). Further, as Zuckerman demonstrated in 1979, people tend to look to situational causes to explain their own failures (cited in Baumeister, 2010, p. 150) but do not extend the same courtesy to others (Gilbert and Malone, 1995). When individuals do admit to having negative traits, they "persuade themselves that their good traits are unusual whereas their bad traits are widely shared" (Baumeister, 2010, p. 150).

It is possible that judgments of self influence lay theories that are applied to both the self and the other. However, Gilbert and Malone point out other cognitive reasons for unrealistic expectations: the availability bias leads to inaccurate judgments of the typicality of certain behaviors (1995, p. 13), and lay theories of situational influence can lead individuals to underestimate even their own dispositions (1995, p. 14). Inflated categorization of behavior occurs because individuals seek to resolve ambiguity and thus see behavior as more strongly conforming to expectation than it actually does (1995, p. 14). Last, individuals tend to make either situational or dispositional attributions based on the motives of the moment, and correcting for these attributions can be difficult (1995, pp. 15-16).

What Moderates Attitude-Behavior Consistency?


Attitude structure, as mapped through different measures of attitude strength, influences attitude-behavior consistency. For example, the accessibility of an attitude, the ease with which it comes to mind, has been positively correlated with voting behavior, consumer product choices, puzzle completion, and the choice to donate to charity (Fabrigar and Wegener, 2010, pp. 187-188). To take another example, just believing that you've thought about something can lead to greater certainty and, with that certainty, greater attitude-behavior consistency (Petty & Brinol, 2010, p. 241). Other, more general features of attitude structure, such as the content of knowledge structures and the valence of evaluations, have been correlated with influence on instrumental and consummatory behaviors, respectively (2010, p. 188). In both cases there is a match between the content of knowledge structures linked to the attitude and the motivations in that particular situation.

Ambivalence is usually negatively correlated with attitude-behavior consistency (2010, pp. 188-189). This is the case even for complex attitudes, which can include multiple evaluations of different valences. Normally, these attitudes may be considered "informative guides even when the goal of the behavior has little direct relevance to any of the dimensions of knowledge" (2010, p. 195), perhaps because they have been tested across a variety of situations and are considered generally relevant. However, ambivalence decreases confidence in these attitudes, their perceived situational relevance, and, in some cases, willingness to act. Less-complex attitudes may be even more affected (2010, p. 195).

According to Petty et al.'s Meta-Cognitive Model (MCM), individual evaluations and knowledge structures may be "tagged" with meta-evaluations of their likelihood, the confidence with which they should be held, their accuracy, and their certainty (Petty & Brinol, 2010, pp. 218-219). Ambivalence can cause clashes between attitudes, which may be experienced as discomfort and prompt an adjustment of these tags (2010, p. 219). Attitudes with meta-cognitive tags indicating certainty and confidence should be better correlated with behaviors both deliberative and automatic. Attitudes with tags indicating their weakness and personal lack of confidence in them should, eventually, cease to affect behavior. However, they may persist as implicit attitudes that affect automatic reactions (2010, p. 219).

Because attitudes can form by many routes, including evaluative conditioning, heuristic processing, and elaborative processing (2010), it is possible for behaviors to change attitudes. For example, "[a]ttitude self-reports filled out in front of a mirror . . . better predict subsequent behavior" (Baumeister, 2010, p. 143), presumably because the mirror biases attention toward the reflexive self. Further, participants who have recalled "extraverted versus introverted tendencies" (2010, p. 146) typically begin to think of themselves as introverted or extraverted, which can lead to the expression of introverted and extraverted behaviors (2010, p. 146). Role playing can also induce attitude change (Petty & Brinol, 2010, p. 221).

Wednesday, October 5, 2011

The Private and Public Moral Self


According to Baumeister, the heart of the self is reflexive consciousness (Baumeister, 2010, p. 142). The object of this reflection, however, is somewhat mysterious. Who we are and what we will do is not a given. We learn about ourselves by observing past and current behaviors and making predictions (Baumeister, 2010, p. 142). However, only a few of these beliefs about the self "are active in focal awareness at any given time" (Baumeister, 2010, p. 145). The presence of specific beliefs in focal awareness is moderated both by internal processes such as self-regulation (Baumeister, 2010, p. 143) and by the extent to which awareness is focused on the self to the exclusion of other objects of awareness (Baumeister, 2010, p. 143). For example, as Duval and Wicklund found in 1972, attitude "self-reports filled out in front of a mirror are more accurate (in the sense that they better predict subsequent behavior) than those filled out with no mirror present" (as cited in Baumeister, 2010, p. 143).

Self-awareness can bring ongoing mental processes into the forefront of consciousness, intensifying, as Scheier and Carver found in 1977, either awareness of emotional reactions or emotional reactions themselves (as cited in Baumeister, 2010, p. 143).  Increased self-awareness is also positively correlated to successful self-regulation (Baumeister, 2010, p.  143).  However, if individuals are engaged in behaviors that are at odds with their self-concept, self-awareness may be avoided (Baumeister, 2010, p. 144) often through effortful and biased self-justification and selective recall.   

The self, then, can be acted upon. Rather than being a given, it is a flexible store of self-knowledge of varying degrees of accuracy, subject to revision or minimization in a variety of situations, social and non-social. For humans, at least, social situations predominate, and, Baumeister writes, "[t]he first job of the self is to garner social acceptance" (as cited in Baumeister & Finkel, 2010, p. 140). Social acceptance requires "self-understanding on things that connect [the self] to other people, including family, groups, country, and other relationships" (Baumeister, 2010, p. 140). When the self identifies with each of these categories, relationships and roles are made salient (Baumeister, 2010, p. 140). At another level of identification, the individual may look to the group in order to learn about herself. Not only must an individual seeking greater status, or seeking to maintain status within the group, understand the standards against which she is being judged, but she must further internalize these standards so that self-monitoring proceeds automatically (Baumeister, 2010, p. 140).

People effortfully defend their private moral self-concept. As Haidt and Kesebir (2009) argue, "[w]hen people behave selfishly, they judge their own behavior to be more virtuous than when they watch the same behavior performed by another person." However, this pattern is not exhibited when participants are under cognitive load, suggesting that individual reappraisal of selfish action as virtuous is effortful and deliberate.

The presence of an audience, however, does increase prosocial behavior (Baumeister, 1982). Even the presence of security cameras, acting, perhaps, like the mirror in Duval and Wicklund (1972), increased helping behaviors (Van Rompay, Vonk, & Fransen, in press, cited in Haidt & Kesebir, 2009). Indeed, if participants play a dictator game on a computer that has been given "stylized eyespots on the desktop background" they will give more generously (Haidt & Kesebir, 2009). 

Curiously, when the audience can be deceived, individuals will often fall back on behaviors that are selfish, not prosocial, and dishonest. For example, Batson, Thompson, Seuferling, Whitney, & Strongman (1999) "asked participants to decide how to assign two tasks to themselves and another participant. One of the tasks was much more desirable than the other, and participants were given a coin to flip, in a sealed plastic bag, as an optional decision aid. Those who did not open the bag assigned themselves the more desirable task 80-90% of the time. But the same was true of participants who opened the bag and (presumably) flipped the coin. Those who flipped may well have believed, before the coin landed, that they were honest people who would honor the coin’s decision: A self-report measure of moral responsibility, filled out weeks earlier, correlated with the decision to open the bag, yet it did not correlate with the decision about task assignment" (Haidt & Kesebir, 2009).

Privacy, then, can reduce the incidence of prosocial behaviors, even when these behaviors are supported by strong cultural mandates. In Japan, for example, "when participants are placed in lab situations that lack the constant informal monitoring and sanctioning systems of real life, cooperation rates in small groups are low, even lower than those of Americans (Yamagishi, 2003)" (Haidt & Kesebir, 2009). It could be argued, however, that the social context of the laboratory would predictably be differently marked in context-sensitive collectivist cultures. 

Moral Selves

Posts tagged with the "Moral Selves" label discuss the private and public moral self.