For Immediate Release
New research sheds light on how people decide whether behavior is moral or immoral. The findings could provide a framework for informing the development of artificial intelligence (AI) and other technologies.
“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, lead author of the study and a researcher at North Carolina State University who studies the cognitive neuroscience of ethics.
“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model – and now we have the first experimental results that offer strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”
“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”
Moral judgment is a tricky subject. For example, most people would agree that lying is immoral. However, most people would also agree that lying to Nazis about the location of Jewish families would be moral.
To address this, the ADC model posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.
“This approach allows us to explain not only the variability in the moral status of lying, but also the flip side: that telling the truth can be immoral if it is done maliciously and causes harm,” Dubljević says.
To test the model's ability to account for this complexity, the researchers developed a series of scenarios that were logical, realistic and easily understood by both laypeople and professional philosophers. All of the scenarios were evaluated by a group of 141 professional philosophers with training in ethics.
In one part of the study, a sample of 528 study participants from the U.S. also evaluated different scenarios in which the stakes were consistently low, meaning that the possible outcomes were not dire.
In a second part of the study, 786 study participants evaluated more drastic scenarios – including situations that could result in severe injury or death.
In the first part, when the stakes were lower, the nature of the deed was the strongest factor in determining whether an action was moral. Whether the agent was lying or telling the truth mattered the most, rather than whether the outcome was bad or good. But when the stakes were high, the nature of the consequences was the strongest factor. The results also show that when the outcome was good (the survival of an airplane's passengers), the difference between a good and a bad deed, although still relevant to the moral evaluation, mattered less.
“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.
“The findings from the study showed that philosophers and the general public made moral judgments in similar ways. This indicates that the structure of moral intuition is the same, regardless of whether one has training in ethics,” Dubljević says. “In other words, everyone makes these snap moral judgments in a similar way.”
While the ADC model helps us understand how we make judgments about what is good or bad, it may have applications beyond informing debates about moral psychology and ethics.
“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”
The paper, “Deciphering Moral Intuition: How Agents, Deeds, and Consequences Influence Moral Judgment,” is published in the open access journal PLOS ONE. The paper was co-authored by Sebastian Sattler of the University of Cologne and Eric Racine of the Institut de recherches cliniques de Montréal.
Note to Editors: The study abstract follows.
“Deciphering Moral Intuition: How Agents, Deeds, and Consequences Influence Moral Judgment”
Authors: Veljko Dubljević, North Carolina State University; Sebastian Sattler, University of Cologne; and Eric Racine, Institut de recherches cliniques de Montréal
Published: Oct. 1, PLOS ONE
Abstract: Moral evaluations occur quickly following heuristic-like intuitive processes without effortful deliberation. There are several competing explanations for this. The ADC-model predicts that moral judgment consists in concurrent evaluations of three different intuitive components: the character of a person (Agent-component, A); their actions (Deed-component, D); and the consequences brought about in the situation (Consequences-component, C). Thereby, it explains the intuitive appeal of precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism), and the flexible yet stable nature of moral judgment. Insistence on single-component explanations has led to many centuries of debate as to which moral precepts and theories best describe (or should guide) moral evaluation. This study consists of two large-scale experiments and provides a first empirical investigation of predictions yielded by the ADC model. We use vignettes describing different moral situations in which all components of the model are varied simultaneously. Experiment 1 (within-subject design) shows that positive descriptions of the A-, D-, and C-components of moral intuition lead to more positive moral judgments in a low-stakes situation. Also, interaction effects between the components were discovered. Experiment 2 further investigates these results in a between-subject design. We found that the effects of the A-, D-, and C-components vary in strength in a high-stakes situation. Moreover, sex, age, education, and social status had no effects. However, preferences for precepts in certain moral theories (PPIMT) partially moderated the effects of the A- and C-component. Future research on moral intuitions should consider the simultaneous three-component constitution of moral judgment.