How to Make ‘Smart City’ Technologies Behave Ethically
For Immediate Release
As local governments adopt new technologies that automate many aspects of city services, there is an increased likelihood of tension between the ethics and expectations of citizens and the behavior of these “smart city” tools. Researchers are proposing an approach that will allow policymakers and technology developers to better align the values programmed into smart city technologies with the ethics of the people who will be interacting with them.
“Our work here lays out a blueprint for how we can both establish what an AI-driven technology’s values should be and actually program those values into the relevant AI systems,” says Veljko Dubljević, corresponding author of a paper on the work and Joseph D. Moore Distinguished Professor of Philosophy at North Carolina State University.
At issue is the “smart city,” a catch-all term covering a variety of technological and administrative practices that have emerged in cities in recent decades. Examples include automated technologies that dispatch law enforcement when they detect possible gunfire, or technologies that use automated sensors to monitor pedestrian and vehicle traffic and control everything from streetlights to traffic signals.
“These technologies can pose significant ethical questions,” says Dubljević, who is part of the Science, Technology & Society program at NC State.
“For example, if an AI technology determines it has detected a gunshot and sends a SWAT team to a place of business, but the noise was actually something else, is that reasonable?” Dubljević asks. “Who decides to what extent people should be tracked or surveilled by smart city technologies? Which behaviors should flag someone for escalated surveillance? These are reasonable questions, and at the moment there is no agreed-upon procedure for answering them. And there is certainly no clear procedure for how we should train AI to answer these questions.”
To address this challenge, the researchers looked to something called the Agent Deed Consequence (ADC) model. The ADC model holds that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
In their paper, the researchers demonstrate that the ADC model can capture not only how humans make value judgments and ethical decisions, but can do so in a way that can be programmed into an AI system. This is possible because the ADC model uses deontic logic, a type of modal logic concerned with obligation and permission.
“It allows us to capture not only what is true, but what should be done,” says Daniel Shussett, first author of the paper and a postdoctoral researcher at NC State. “This is important because it drives action, and can be used by an AI system to distinguish between legitimate and illegitimate orders or requests.”
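To make that idea concrete, deontic logic extends ordinary logic with operators such as O (“it is obligatory that”), P (“it is permitted that”) and F (“it is forbidden that”), so a rule can state what ought to be done rather than only what is the case. In a deliberately simplified notation, offered here as an illustration rather than the exact formalism used in the paper, an ADC-style rule for the traffic scenario described below might be written as:

\[
(A^{+} \wedge D^{+} \wedge C^{+}) \rightarrow O(\text{grant priority}), \qquad (A^{-} \vee D^{-}) \rightarrow F(\text{grant priority})
\]

Here A, D and C stand for the system’s positive or negative evaluation of the agent, the deed and the consequence, and “grant priority” is a placeholder action; none of these symbols is drawn directly from the paper.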
“For example, if an AI system is tasked with managing traffic and an ambulance with flashing emergency lights approaches a traffic light, the AI may take that as a signal that the ambulance should have priority and alter traffic signals to help it travel quickly,” says Dubljević. “That would be a legitimate request. But if a random vehicle puts flashing lights on its roof in an attempt to get through traffic more quickly, that would be an illegitimate request, and the AI should not give it a green light.
“With humans, it is possible to explain things in a way where people learn what should and shouldn’t be done, but that doesn’t work with computers. Instead, you have to be able to create a mathematical formula that represents the chain of reasoning. The ADC model allows us to create that formula.”
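As a loose illustration of what such a formula can look like once it is turned into code, the sketch below encodes the traffic example in Python. It is not the authors’ implementation: the class, function and scoring scheme are hypothetical stand-ins, and a real smart city system would need far richer evaluations of agent, deed and consequence.

```python
# Illustrative sketch of ADC-style gating for a traffic-signal AI.
# NOT the authors' implementation; the names and scoring scheme below
# are hypothetical and chosen only to mirror the example in the text.

from dataclasses import dataclass

@dataclass
class ADCEvaluation:
    agent: int        # +1 if the requester's character/intent is judged positive, -1 otherwise
    deed: int         # +1 if the act (requesting priority) conforms to the rules, -1 otherwise
    consequence: int  # +1 if the expected outcome is beneficial, -1 otherwise

def priority_is_permitted(e: ADCEvaluation) -> bool:
    """Grant signal priority only when agent, deed, and consequence are all
    judged positive -- a deliberately strict, purely illustrative rule."""
    return e.agent > 0 and e.deed > 0 and e.consequence > 0

# A dispatched ambulance running its emergency lights: a legitimate request.
ambulance = ADCEvaluation(agent=+1, deed=+1, consequence=+1)

# A private car with improvised flashing lights: an illegitimate request.
impostor = ADCEvaluation(agent=-1, deed=-1, consequence=-1)

print(priority_is_permitted(ambulance))  # True  -> alter signals to clear its route
print(priority_is_permitted(impostor))   # False -> keep normal signal timing
```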
“These emerging smart city technologies are being adopted around the world, and the work we’ve done here suggests the ADC model can be used to address the full scope of ethical questions these technologies pose,” says Shussett. “The next step is to test a variety of scenarios across multiple technologies in simulations to ensure the model works in a consistent, predictable way. If it passes those tests, it would be ready for testing in real-world settings.”
The paper, “Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics,” is published open access in the journal Algorithms.
This work was supported by the National Science Foundation under grant number 2043612.
-shipman-
Note to Editors: The study abstract follows.
“Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics”
Authors: Daniel Shussett and Veljko Dubljević, North Carolina State University
Published: October 3, 2025, Algorithms
DOI: 10.3390/a18100625
Abstract: Smart cities are an emerging technology that is receiving new ethical attention due to recent advancements in artificial intelligence. This paper provides an overview of smart city ethics while simultaneously performing novel theorization about the definition of smart cities and the complicated relationship between (smart) cities, ethics, and politics. We respond to these ethical issues by providing an innovative representation of the agent-deed-consequence (ADC) model in symbolic terms through deontic logic. The ADC model operationalizes human moral intuitions underpinning virtue ethics, deontology, and utilitarianism. With the ADC model made symbolically representable, human moral intuitions can be built into the algorithms that govern autonomous vehicles, social robots in healthcare settings, and smart city projects. Once the paper has introduced the ADC model and its symbolic representation through deontic logic, it demonstrates the ADC model’s promise for algorithmic ethical decision-making in four dimensions of smart city ethics, using examples relating to public safety and waste management. We particularly emphasize ADC-enhanced ethical decision-making in (economic and social) sustainability by advancing an understanding of smart cities and human-AI teams (HAIT) as group agents. The ADC model has significant merit in algorithmic ethical decision-making, especially through its elucidation in deontic logic. Algorithmic ethical decision-making, if structured by the ADC model, successfully addresses a significant portion of the perennial questions in smart city ethics, and smart cities built with the ADC model may in fact be a significant step toward resolving important social dilemmas of our time.