Creating Safe, Secure and Intelligent Systems

Ethics in Artificial Intelligence

Like any new technology, the rise of AI presents many ethical concerns. At North Carolina’s flagship STEM university, an ethicist has teamed up with engineers of different disciplines to study some of the most pressing issues society faces amidst the adoption of autonomous vehicles, generative AI like ChatGPT, and even humanoid robots.


Since at least the mid-20th century, society has been fascinated with the concept of artificial intelligence. Talking robots and sentient systems — from Star Wars’ C-3PO to 2001: A Space Odyssey’s HAL 9000 — have played prominent roles in pop culture for decades. 

With that fascination, however, has come plenty of fear. While some fears were merited, many others were merely fueled by Hollywood fiction. 

Today, roughly a quarter of the way through the 21st century, anybody with access to TV or the internet can easily find tech companies touting the power of their newest AI-powered products. For many of us, it’s impossible to avoid ads about all the latest-and-greatest AI agents — and everything they claim to be capable of. 

So it might be easy for fans of the Terminator series to think Skynet is due to take over the world any day now. But rest assured, there’s no legitimate reason to fear that any fiction of the sort will come to fruition in the foreseeable future. 

“We’re far from that concern,” says Veljko Dubljević, an AI ethicist and philosophy professor at NC State University.

Instead, Dubljević has more than enough real concerns about AI to consider in the present. 

“AI is becoming ubiquitous. And there are different ways in which it can be implemented,” Dubljević says. “Some of the ways can put people out of work, and some of the other ways can help people achieve way more than they’d be able to achieve on their own. So it really depends on the rollout of the technology — the approach to adoption.” 

Dubljević works at his computer, with a research assistant in a VR headset and his lab’s humanoid robot beside him.

Yet, while the average worker has good reason to be concerned about AI one day taking their job, Dubljević believes that long before anything as drastic as a rogue-robot uprising, we’re more likely to see another “winter of AI,” a period when funding and public support for the technology wither away.

“AI is being overhyped, and that’s for sure,” Dubljević says. “We’ve seen a literature looking at the fact that, for instance, while companies have been touting stuff as AI-powered — customers are not really interested in that. The marketing strategy can actually backfire.”

The first winter of AI came in the late 1980s, Dubljević says, as a result of the tech industry overpromising and ultimately underdelivering. 

The History of AI: Then vs. Now

Modern AI tools rely on computational models known as artificial neural networks. But back in the ‘50s and ‘60s, a far more primitive form of AI was born. 

“At the beginning, artificial intelligence was symbolic AI. And that was literally done by applying logic,” says Dubljević, a University Faculty Scholar and Joseph D. Moore Distinguished Professor of Philosophy and Science, Technology, and Society. 

He says that, among other potential applications, scientists at the time hoped to use symbolic AI as a language translator. The technology worked well when it came to simple phrases. But it struggled when faced with ambiguity or complexity; for example, with the word “spirit,” which can mean vastly different things depending on the context. 

“If you’re chasing ghosts, it means one thing. If you’re talking about willpower, that’s another thing. Or if you’re talking about alcoholic drinks, there’s going to be an entirely different set of words connected with that,” Dubljević explains. 

Today’s large language models (LLMs), such as ChatGPT and Google Gemini, all rely on artificial neural networks and are much better at dealing with ambiguity, using the surrounding words to detect which meaning is intended.

“We’ve made major strides by imitating the human brain,” Dubljević says. “With artificial neural networks, we are actually imitating how humans learn a language.”

If used correctly, generative AI tools can significantly boost productivity. A 2023 study found that when generative AI was “used within the boundary of its capabilities,” workers’ performance improved by almost 40% compared with workers who didn’t use it.

However, while LLMs have the power to do plenty, they’re also prone to problematic patterns of behavior. For one, they’ve been known to make things up. It’s a phenomenon dubbed “hallucination” — when generative AI creates nonsensical or entirely false outputs.

An even more pervasive problem is sycophancy. AI chatbots have learned to, by and large, tell you what you want to hear. After all, who doesn’t love being flattered? And who likes being told they’re wrong?

Perhaps that’s partly why LLMs look to be among the fastest-adopted technologies in history. 

According to a national survey conducted by Elon University earlier this year, over half of all U.S. adults already use LLMs like ChatGPT on a regular basis. The same survey found that 23% of LLM users reported that they’d made a “significant mistake or bad decision by relying on information generated by LLMs.”

Dubljević believes that if we as a society don’t understand the limitations of LLMs and other forms of AI, we could easily become over-reliant on them. 

“And that can lead us to make horrible mistakes,” Dubljević says.

In the case of one Utah law clerk, such a mistake cost them their job and got their boss sanctioned. A legal brief the clerk wrote on behalf of an attorney at their firm contained false citations, referencing cases that did not appear in any legal database and could only be found in ChatGPT.

The stakes can be even higher than one’s livelihood or reputation: some LLM users have lost their sanity.

While LLMs alone leave ethicists much to consider, AI applications in the physical world — self-driving cars, for instance — present even greater potential for concern. 

“Wherever there are applications of AI, there are ethical issues that come with that,” Dubljević says. “We really need to appreciate how AI is going to be making decisions that affect the health and well-being of humans.” 

The Go-to Guy

Dubljević got to NC State in 2016. Since then, he’s established himself as one of the go-to AI experts in the humanities and social sciences for his STEM colleagues across the university — and beyond. 

“When I joined NC State, I was tasked with exploring ethics in science and technology and establishing collaborations across campus. And I have to say I was very successful in that,” Dubljević says. “I’ve forged relationships with people from multiple colleges at NC State. Whoever’s interested in the ethics of science and technology has found in me a person to help out with their research.”

 Dubljević stands next to a student researcher in a VR headset, with their robot assistant looking in from behind.

Dubljević has partnered with colleagues from other colleges at NC State to study the ethical implications of AI in autonomous vehicles, elder-care robots, law enforcement and more. 

In 2020, he co-authored a paper highlighting the realistic pros and cons of apps and other technologies that use AI to benefit older adults, including those with dementia. The work focused on how these technologies can help older adults preserve their autonomy. 

Two years later, together with an interdisciplinary team, Dubljević developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into AI decision-making programs, such as carebots. 

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults and other people who require health monitoring or physical assistance. In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments,” Dubljević said in a 2022 news release about the blueprint he helped develop.

More recently, in June of this year, Dubljević published a paper validating a technique to study how people make “moral” decisions when driving — with the ultimate goal of using the data to train the AI programs that run autonomous vehicles. 

“Accidents often stem from low-stakes decisions, such as whether to go five miles over the speed limit or make a rolling stop at a stop sign. How do we make these decisions? And what constitutes a moral decision when we’re behind the wheel?” Dubljević said in a news release about the research. “We needed to find a way to collect quantifiable data on this, because that sort of data is necessary if we want to train autonomous vehicles to make moral decisions.”

Without a doubt, making moral judgments — determining right from wrong — can get complicated. 

Philosophers have argued for centuries over which ethical framework best explains morality; John Stuart Mill’s utilitarianism and Immanuel Kant’s deontological theory, both centuries old, remain among the most influential schools of thought. Put simply, deontology is more concerned with the means than the ends. Utilitarianism, on the other hand, holds that actions are right if they result in the best outcomes for the greatest number of people.

The age-old “trolley problem,” which asks someone to decide whether it’s acceptable to take one person’s life in order to save several others, is often framed as a challenge to utilitarianism. Until a couple of years ago, it was also the best paradigm researchers had to study moral judgment in traffic. Then, with the help of two postdocs, Dubljević came up with a new thought experiment that’s much more realistic.

“The typical situation comprises a binary choice for a self-driving car between swerving left, hitting a lethal obstacle, or proceeding forward, hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic,” Dubljević said in a news release about the work. “Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?

“Those mundane decisions are important because they can ultimately lead to life-or-death situations.”

Beyond Utilitarianism

No matter the context, and regardless of whether technology is involved, morality can be judged on three criteria, according to a neurocomputational model Dubljević developed with colleagues.

“The model explains that there are three subcomponents to morality: the intentions and character traits of the protagonist; the action being taken and whether it’s in accordance with duties and rules and norms or not; and the consequences — whether they are good or bad,” Dubljević explains. “And once we figure out that these are equally important aspects — not one of them dominates — we can appreciate the flexibility and stability of human moral judgment.”

Known as the Agent Deed Consequence (ADC) model, it posits that we take three things into account when making a moral judgment: the agent, the deed and the consequence. The deed is what’s being done. The agent refers to the character or intent of the person doing the deed. And the consequence is the outcome that results from the deed.
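As a rough illustration of how those three components might combine into a single judgment, here is a minimal sketch in Python that gives each component a positive or negative valence and weights all three equally. The ±1 encoding, the equal weights and the simple average are assumptions made for illustration, not the model’s published formalization.

```python
def adc_judgment(agent: int, deed: int, consequence: int) -> float:
    """Toy ADC-style score.

    Each component is +1 (morally positive) or -1 (morally negative),
    and all three are weighted equally, reflecting the idea that no
    single component dominates the judgment.
    """
    assert all(v in (-1, 1) for v in (agent, deed, consequence))
    return (agent + deed + consequence) / 3  # -1.0 (clearly wrong) to +1.0 (clearly right)

# A caring parent (+1) who brakes at a yellow light (+1) and arrives safely (+1):
print(adc_judgment(+1, +1, +1))   # 1.0
# An abusive parent (-1) who runs a red light (-1) and causes an accident (-1):
print(adc_judgment(-1, -1, -1))   # -1.0
# A mixed case: a caring parent (+1) who runs a red light (-1) but no one is harmed (+1):
print(adc_judgment(+1, -1, +1))   # ~0.33
```

In the studies described below, participants rate full vignettes on a graded scale rather than supplying valences directly; the sketch is only meant to show how a judgment can depend on all three components at once rather than on consequences alone.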

To capture a more realistic array of moral challenges in traffic than the trolley problem, Dubljević built on the ADC model. His research team first created seven driving scenarios; then, using various combinations of agent, deed and consequence, they created eight different versions of each of those seven scenarios.

For example, in one version of a driving scenario, a caring parent who’s taking their child to school chooses to brake at a yellow light but still gets there on time; in a different version of the same scenario, the parent is abusive, runs a red light and causes an accident.
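To make the study design concrete, here is a minimal sketch, again in Python, of how the eight versions of a single scenario can be generated: a positive or negative agent, deed and consequence give 2 × 2 × 2 = 8 combinations, so seven scenarios yield 56 vignettes in total. The labels and data structures below are illustrative assumptions, not the research team’s actual materials.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative component descriptions only; the actual study materials differ.
AGENTS = {"positive": "caring parent driving a child to school",
          "negative": "abusive parent driving a child to school"}
DEEDS = {"positive": "brakes at the yellow light",
         "negative": "runs the red light"}
CONSEQUENCES = {"positive": "arrives on time without incident",
                "negative": "causes an accident"}

@dataclass
class Vignette:
    scenario: str
    agent: str
    deed: str
    consequence: str

def build_versions(scenario: str) -> list[Vignette]:
    """Generate the 2 x 2 x 2 = 8 agent/deed/consequence versions of one scenario."""
    return [Vignette(scenario, AGENTS[a], DEEDS[d], CONSEQUENCES[c])
            for a, d, c in product(AGENTS, DEEDS, CONSEQUENCES)]

# Seven base scenarios, eight versions each: 56 vignettes in total.
scenarios = [f"scenario_{i}" for i in range(1, 8)]
all_vignettes = [v for s in scenarios for v in build_versions(s)]
print(len(all_vignettes))  # 56
```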

The researchers plugged each scenario into a VR environment. That way, rather than simply reading scenarios out loud to the study participants, the participants could virtually immerse themselves in the information that the hypothetical drivers used to make their decisions behind the wheel. 

Located in Fitts-Woolard Hall, NC State’s driving simulator has seven screens mounted to a moving platform, giving the person behind the wheel a 315-degree view. The interactive car features gear shifts, brakes, a steering wheel and all the bells and whistles of a modern-day vehicle, allowing researchers to observe many important safety-critical cases without putting people at risk.

Dario Cecchini, first author of a paper on the work, said that the goal was to “have study participants view one version of each scenario and determine how moral the behavior of the driver was in each scenario, on a scale from one to 10.”

Cecchini said that the data they collected “on what we consider moral behavior in the context of driving a vehicle” could later be used to develop AI algorithms for moral decision-making in autonomous vehicles.

He was right. 

The work they did together in 2023 led to the paper that Dubljević, Cecchini and others published in the journal Frontiers in Psychology this past June, “Morality on the road: the ADC model in low-stakes traffic vignettes.”

“Once we found a way to collect that data, we needed to find a way to validate the technique — to demonstrate that the data is meaningful and can be used to train AI. For moral psychology, the most detail-oriented set of critics would be philosophers, so we decided to test our technique with them,” Dubljević says.

Enlisting 274 study participants with advanced degrees in philosophy, the researchers shared the driving scenarios with them and asked about the morality of the decisions the drivers made in each one. Afterward, they used a validated measure to assess the participants’ ethical frameworks.

“Different philosophers subscribe to different schools of thought regarding what constitutes moral decision-making,” Dubljević said in a news release about the study. 

“For example, utilitarians approach moral problems very differently from deontologists, who are very focused on following rules. In theory, because different schools of thought approach morality differently, results on what constituted moral behavior should have varied depending on which framework different philosophers used,” he said. “What was exciting here is that our findings were consistent across the board.”

Dubljević said that no matter their school of thought, the philosophers who participated in the study all reached the same conclusions regarding moral decision-making in the context of driving.

“That means we can generalize the findings,” Dubljević said. “And that means this technique has tremendous potential for AI training. This is a significant step forward.”

A Research Center’s Revival

In addition to autonomous vehicles, Dubljević has also applied the ADC model to virtual assistants used in healthcare settings. A carebot must make ethical judgments — arguably even more so than autonomous vehicles. 

“Let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first,” Dubljević offered as an example in a news release on the work. “How does the carebot decide which patient to assist first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”

As with self-driving cars, previous efforts to incorporate ethical decision-making into AI programs for carebots were limited in scope and “focused on utilitarian reasoning, which neglects the complexity of human moral decision-making.”

Two of Dubljević’s former research assistants — Ariana D’Alessandro (left) and Sean Brantley (right), both of whom recently graduated — adjust the settings on the lab’s Pepper robot, which they’ve named “Sava.”

The National Endowment for the Humanities (NEH) awarded NC State $500,000 for Dubljević to lead a center focused on ethics in AI. The center would focus on three research thrusts: autonomous vehicles, generative AI (e.g., LLMs like ChatGPT) and elder care. 

In April, that NEH grant was one of many to be cancelled.

Despite the setback, Dubljević remains confident that he and his colleagues will find the support needed to get the previously planned center off the ground. He’s even more certain that our university is exactly the place where “this sort of research needs to happen.”

“NC State is the place where a lot of this research is being done. This is the place where we have the talent to really move the frontier forward,” Dubljević says. “It’s not going to be overnight, but we are dedicated to working on this. We have a track record, and we’re going to do it. It’s just a question of time.”

Dubljević has dubbed his latest organized effort the Centering AI in Society and Ethics (CASE) initiative.

On Oct. 13, CASE is hosting an “Ethics and Autonomous Vehicles” workshop, followed by a special issue in the journal AI & Society: Knowledge, Culture and Communication. The workshop’s keynote speakers will be Chandra Bhat, the Joe J. King Chair in Engineering at the University of Texas at Austin, and Noah Goodall, a senior research scientist for the Virginia DOT’s Transportation Research Council. 

“They’re real experts in the field. And I think it’s going to be a very successful conference,” Dubljević says.

Anyone interested in insightful discussions exploring the ethical challenges surrounding the rise of self-driving and highly automated vehicles is welcome to attend. All attendees will receive free lunch and refreshments — but registration is required.

Dubljević says the workshop has drawn abstract submissions from researchers around the world — and of diverse backgrounds. 

He recognizes that it will take experts in many disciplines to effectively ensure that AI can deliver on its promises without taking over entirely. While Dubljević hopes to see AI one day broadly benefit society, he believes that humans must always first and foremost rely on each other.

“No one human being can do everything, not even with AI,” Dubljević says.