Artificial Intelligence in Veterinary Medicine Raises Ethical Challenges

Boxer. Photo by Lucie Helešicová on Unsplash

For Immediate Release

Use of artificial intelligence (AI) is increasing in the field of veterinary medicine, but veterinary experts caution that the rush to embrace the technology raises some ethical considerations.

“A major difference between veterinary and human medicine is that veterinarians have the ability to euthanize patients – which could be for a variety of medical and financial reasons – so the stakes of diagnoses provided by AI algorithms are very high,” says Eli Cohen, associate clinical professor of radiology at NC State’s College of Veterinary Medicine. “Human AI products have to be validated prior to coming to market, but currently there is no regulatory oversight for veterinary AI products.”

In a review for Veterinary Radiology & Ultrasound, Cohen discusses the ethical and legal questions raised by veterinary AI products currently in use. He also highlights key differences between veterinary AI and AI used by human medical doctors.

AI is currently marketed to veterinarians for radiology and imaging, largely because there aren’t enough veterinary radiologists in practice to meet demand. However, Cohen points out that AI image analysis is not the same as a trained radiologist interpreting images in light of an animal’s medical history and unique situation. While AI may accurately identify some conditions on an X-ray, users need to understand its potential limitations. For example, the AI may not be able to identify every possible condition, and it may not accurately discriminate between conditions that look similar on X-rays but call for different treatments.

Currently, the FDA does not regulate AI in veterinary products the way that it does in human medicine. Veterinary products can come to market with no oversight beyond that provided by the AI developer and/or company.

“AI and how it works is often a black box, meaning even the developer doesn’t know how it’s reaching decisions or diagnoses,” Cohen says. “Couple that with a lack of transparency from companies about how the AI was developed, trained and validated, and you’re asking veterinarians to use a diagnostic tool with no way to appraise whether or not it is accurate.

“Since veterinarians often get a single visit to diagnose and treat a patient and don’t always get follow-up, AI could be providing faulty or incomplete diagnoses and a veterinarian would have limited ability to identify that, unless the case is reviewed or a severe outcome occurs,” Cohen says.

“AI is being marketed as a replacement or as having similar value to a radiologist interpretation, because there is a market gap. The best use of AI going forward, and certainly in this initial phase of deployment, is with what is called a radiologist in the loop, where AI is used in conjunction with a radiologist, not in lieu of one,” Cohen says. “This is the most ethical and defensible way to employ this emerging technology: leveraging it to get more veterinarians and pets access to radiologist consults, but most importantly to have domain experts troubleshooting the AI and preventing adverse outcomes and patient harm.”
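To make the “radiologist in the loop” pattern concrete, here is a minimal Python sketch. It is purely illustrative: the model output, the `review` function and all names are assumptions, not part of any product described in this release. The key property is structural: no AI finding reaches the report until a radiologist has confirmed it.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AIFinding:
    """One condition flagged by a hypothetical AI image-analysis model."""
    condition: str
    confidence: float  # model's self-reported score in [0, 1]

@dataclass
class ReviewedFinding:
    """An AI finding after radiologist review."""
    condition: str
    confidence: float
    confirmed: bool
    notes: str

def radiologist_in_the_loop(
    findings: List[AIFinding],
    review: Callable[[AIFinding], Tuple[bool, str]],
) -> List[ReviewedFinding]:
    """Route every AI finding through a radiologist before reporting.

    The AI output is treated as a draft, never a final diagnosis:
    `review` stands in for a radiologist who confirms or rejects each
    finding in light of the animal's history and clinical picture.
    """
    reviewed = []
    for f in findings:
        confirmed, notes = review(f)
        reviewed.append(ReviewedFinding(f.condition, f.confidence, confirmed, notes))
    # Only radiologist-confirmed findings enter the report.
    return [r for r in reviewed if r.confirmed]

if __name__ == "__main__":
    findings = [AIFinding("pleural effusion", 0.91),
                AIFinding("pneumothorax", 0.55)]
    # A stand-in reviewer for demonstration purposes only.
    def reviewer(f: AIFinding) -> Tuple[bool, str]:
        if f.condition == "pleural effusion":
            return True, "confirmed on review"
        return False, "not supported by history"
    for r in radiologist_in_the_loop(findings, reviewer):
        print(r)
```

The design choice the sketch encodes is the one Cohen describes: AI narrows and accelerates the radiologist’s work rather than substituting for it.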

Cohen recommends that veterinary experts partner with AI developers to ensure the quality of the data sets used to train the algorithm, and that third-party validation testing be done before AI tools are released to the public.
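As a rough illustration of what third-party validation testing might report, the sketch below assumes a held-out, expert-labeled test set that the developer never saw; the data and function are hypothetical. It computes sensitivity and specificity, two of the figures an independent validator would typically publish for each condition.

```python
def validate(predictions, labels):
    """Score binary AI predictions against independent expert labels
    (1 = condition present, 0 = absent) from a held-out test set."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: AI calls vs. radiologist consensus labels.
preds = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
sens, spec = validate(preds, truth)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```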

“Nearly everything a veterinarian could diagnose on radiographs has the potential to be medium-to-high risk, meaning it could lead to changes in medical treatment, surgery, or euthanasia, either due to the clinical diagnosis or to client financial constraints,” Cohen says. “That risk level is the threshold the FDA uses in human medicine to determine whether there should be a radiologist in the loop. We would be wise as a profession to adopt a similar model.

“AI is a powerful tool and will change how medicine is practiced, but the best practice going forward will be using it in conjunction with radiologists to improve access to and quality of patient care, as opposed to using it to replace those consultations.”

The paper, “First, Do No Harm. Ethical and Legal Issues of Artificial Intelligence and Machine Learning in Veterinary Radiology and Radiation Oncology,” appears in the Dec. 13 issue of Veterinary Radiology & Ultrasound. Cohen is the corresponding author. Ira Gordon, DVM, is co-author.

-peake-

Note to editors: An abstract follows.

“First, Do No Harm. Ethical and Legal Issues of Artificial Intelligence and Machine Learning in Veterinary Radiology and Radiation Oncology”

DOI: 10.1111/vru.13171

Authors: Eli Cohen, North Carolina State University; Ira Gordon, DVM
Published: Dec. 13, 2022 in Veterinary Radiology & Ultrasound

Abstract:
Artificial intelligence and machine learning are novel technologies that will change the way veterinary medicine is practiced. Exactly how this change will occur is yet to be determined and, as is the nature of disruptive technologies, will be difficult to predict. Ushering in this new tool in a conscientious way will require knowledge of the terminology and types of AI, as well as forward thinking regarding the ethical and legal implications for the profession. Developers as well as end users will need to consider the ethical and legal components alongside the functional creation of algorithms in order to foster acceptance and adoption, and most importantly to prevent patient harm. There are key differences in the deployment of these technologies in veterinary medicine, namely our ability to perform euthanasia and the lack of regulatory validation required to bring these technologies to market. These differences, along with others, create a much different landscape than AI use in human medicine and necessitate proactive planning to prevent catastrophic outcomes, encourage development and adoption, and protect the profession from unnecessary liability. The authors suggest that deploying these technologies before considering the larger ethical and legal implications, and without stringent validation, is putting the AI cart before the horse and risks putting patients and the profession in harm’s way.