Are We Ready for a ‘Morality Pill’?

Last October, in Foshan, China, a 2-year-old girl was run over by a van. The driver did not stop. Over the next seven minutes, more than a dozen people walked or bicycled past the injured child. A second truck ran over her. Eventually, a woman pulled her to the side, and her mother arrived. The child died in a hospital.

The entire scene was captured on video and caused an uproar when it was shown by a television station and posted online. A similar event occurred in London in 2004, as have others, far from the lens of a video camera. Yet people can, and often do, behave in very different ways.

A news search for the words “hero saves” will routinely turn up stories of bystanders braving oncoming trains, swift currents and raging fires to save strangers from harm. Acts of extreme kindness, responsibility and compassion are, like their opposites, nearly universal. Why are some people prepared to risk their lives to help a stranger when others won’t even stop to dial an emergency number? Scientists have been exploring questions like this for decades.

In the 1960s and early ’70s, famous experiments by Stanley Milgram and Philip Zimbardo suggested that most of us would, under specific circumstances, voluntarily do great harm to innocent people. During the same period, John Darley and C. Daniel Batson showed that even some seminary students on their way to give a lecture about the parable of the Good Samaritan would, if told that they were running late, walk past a stranger lying moaning beside the path. More recent research has told us a lot about what happens in the brain when people make moral decisions. But are we getting any closer to understanding what drives our moral behavior?

Here’s what much of the discussion of all these experiments missed: Some people did the right thing. A recent experiment (about which we have some ethical reservations) at the University of Chicago seems to shed new light on why. Researchers there took two rats that shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could have eaten a supply of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate first. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well. Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

If continuing brain research does in fact show biochemical differences between the brains of those who help others and the brains of those who do not, could this lead to a “morality pill” — a drug that makes us more likely to help? Given the many other studies linking biochemical conditions to mood and behavior, and the proliferation of drugs to modify them that have followed, the idea is not far-fetched. If such a drug were developed, would people choose to take it? Could criminals be given the option, as an alternative to prison, of a drug-releasing implant that would make them less likely to harm others?

Might governments begin screening people to discover those most likely to commit crimes? Those who are at much greater risk of committing a crime might be offered the morality pill; if they refused, they might be required to wear a tracking device that would show where they had been at any given time, so that they would know that if they did commit a crime, they would be detected. Fifty years ago, Anthony Burgess wrote “A Clockwork Orange,” a futuristic novel about a vicious gang leader who undergoes a procedure that makes him incapable of violence. Stanley Kubrick’s 1971 movie version sparked a discussion in which many argued that we could never be justified in depriving someone of his free will, no matter how gruesome the violence that would thereby be prevented.

No doubt any proposal to develop a morality pill would encounter the same objection. But if our brain’s chemistry does affect our moral behavior, the question of whether that balance is set in a natural way or by medical intervention will make no difference in how freely we act. If there are already biochemical differences between us that can be used to predict how ethically we will act, then either such differences are compatible with free will, or they are evidence that at least as far as some of our ethical actions are concerned, none of us have ever had free will anyway. In any case, whether or not we have free will, we may soon face new choices about the ways in which we are willing to influence behavior for the better.

Peter Singer, a professor of bioethics at Princeton University and a laureate professor at the University of Melbourne, is the author, most recently, of “The Life You Can Save.” Agata Sagan is a researcher.

Why do people defend unjust, inept, and corrupt systems?

Why do we stick up for a system or institution we live in -- a government, company, or marriage -- even when anyone else can see it is failing miserably? Why do we resist change even when the system is corrupt or unjust? A new article in Current Directions in Psychological Science, a journal published by the Association for Psychological Science, illuminates the conditions under which we're motivated to defend the status quo -- a process called "system justification."

System justification isn't the same as acquiescence, explains Aaron C. Kay, a psychologist at Duke University's Fuqua School of Business and the Department of Psychology & Neuroscience, who co-authored the paper with University of Waterloo graduate student Justin Friesen. "It's pro-active. When someone comes to justify the status quo, they also come to see it as what should be."

Reviewing laboratory and cross-national studies, the paper identifies four situations that foster system justification: system threat, system dependence, system inescapability, and low personal control.

When we're threatened, we defend ourselves -- and our systems. Before 9/11, for instance, President George W. Bush was sinking in the polls. But as soon as the planes hit the World Trade Center, the president's approval ratings soared. So did support for Congress and the police.

During Hurricane Katrina, America witnessed FEMA's spectacular failure to rescue the hurricane's victims. Yet many people blamed those victims for their fate rather than admitting the agency flunked and supporting ideas for fixing it. In times of crisis, say the authors, we want to believe the system works. We also defend systems we rely on. In one experiment, students made to feel dependent on their university defended a school funding policy -- but disapproved of the same policy if it came from the government, which they didn't perceive as affecting them closely.

However, if they felt dependent on the government, they liked the policy originating from it, but not from the school.

When we feel we can't escape a system, we adapt. That includes feeling okay about things we might otherwise consider undesirable. The authors note one study in which participants were told that men's salaries in their country are 20% higher than women's. Rather than implicate an unfair system, those who felt they couldn't emigrate chalked up the wage gap to innate differences between the sexes. "You'd think that when people are stuck with a system, they'd want to change it more," says Kay. But in fact, the more stuck they are, the more likely they are to explain away its shortcomings.

Finally, a related phenomenon: The less control people feel over their own lives, the more they endorse systems and leaders that offer a sense of order.

The research on system justification can enlighten those who are frustrated when people don't rise up in what would seem to be their own best interests. Says Kay: "If you want to understand how to get social change to happen, you need to understand the conditions that make people resist change and what makes them open to acknowledging that change might be a necessity."

Story Source: Reprinted from materials provided by the Association for Psychological Science.