For Immediate Release
A recent study finds that users have trouble using images from unmanned aerial systems (UASs), or drones, to determine the position of objects on the ground. The finding highlights challenges facing the use of UAS technology for emergency operations and other applications, while offering guidance for future technology and training development.
“Because UASs operate at heights that most conventional aircraft do not, we are getting new aerial perspectives of our surroundings,” says Stephen Cauffman, a Ph.D. student at North Carolina State University and lead author of a paper describing the work. “We wanted to know how good people are at integrating these perspectives into their perception of the real-world environment – which can be relevant in situations such as security or emergency response operations.
“For example, if we’re using UASs to identify a trouble spot, how good are we at using visual information from UASs to point to the correct spot on a map?”
To address this, the researchers had a group of 18 study participants evaluate different views of an urban environment that included multiple objects. In one scenario, participants were shown an aerial view of the environment, then a ground-level view of the same environment with one object missing. Participants were then asked to indicate where the missing object had been located. The study also had participants perform similar tasks comparing two aerial images, two ground images, and a ground image followed by an aerial image.
The researchers found that participants performed best when comparing two aerial views, and that switching from an aerial view to a ground view posed the biggest challenge. When shown an aerial view followed by a ground view, participants took at least a second longer to estimate where the missing object was – and their estimates were four times farther from the object’s correct placement than when they compared two aerial views.
“This tells us that incorporating UASs into some situations, such as emergency response, may not necessarily be as useful as one might think,” says Doug Gillan, a professor of psychology at NC State and co-author of the paper.
“It also offers insights into how we can modify training or interface design to improve performance for UAS users,” Cauffman says.
“A lot of work remains to be done in this area,” Cauffman adds. “We’ve already conducted additional work on the role of landmarks and perspective in how people are able to process aerial visual information.”
The paper, “Eye In The Sky: Investigating Spatial Performance Following Perspective Change,” will be presented at the Annual Meeting of the Human Factors and Ergonomics Society, being held Oct. 9-14 in Austin, Tex.
Note to Editors: The study abstract follows.
“Eye In The Sky: Investigating Spatial Performance Following Perspective Change”
Authors: Stephen J. Cauffman and Douglas J. Gillan, North Carolina State University
Presented: Annual Meeting of the Human Factors and Ergonomics Society, Oct. 9-14 in Austin, Tex.
Abstract: Unmanned Aerial Systems (UASs) are becoming more prevalent in civilian use, such as emergency response and public safety. As a result, UASs pose issues of remote perception for human users (Eyerman, 2013). The purpose of this experiment was to test the effects of combining aerial and ground perspectives on spatial judgments of object positions in an urban environment. Participants were shown randomly ordered image pairs of aerial and ground views of objects in a virtual city and were asked to make judgments about where a missing object had been in the second image of the pair. Response times and error were collected, with error calculated using the Euclidean distance formula. The results were consistent with previous research and showed that congruent trials (aerial-aerial and ground-ground) resulted in lower error and shorter response times. There was also a significant four-way interaction between stimulus image, response image, object density, and stimulus duration. The results of this study are intended to provide the basis for future work in understanding the underlying reasons behind spatial errors that might occur during use of UASs and lead to design implementations for interfaces to reduce these errors.
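The abstract notes that spatial error was calculated with the Euclidean distance formula. As an illustration only, the sketch below shows how such an error score could be computed between a participant’s estimated object position and the object’s true position; the coordinates and function name are invented for this example, not taken from the study.

```python
import math

def euclidean_error(estimated, actual):
    """Straight-line (Euclidean) distance between two 2-D points,
    e.g., pixel coordinates of an estimated vs. true object position."""
    dx = estimated[0] - actual[0]
    dy = estimated[1] - actual[1]
    return math.hypot(dx, dy)

# Made-up coordinates: the object's true position vs. a participant's estimate.
true_position = (120.0, 85.0)
participant_estimate = (150.0, 125.0)
print(euclidean_error(participant_estimate, true_position))  # 50.0
```

A larger value indicates a participant’s placement was farther from the object’s actual location; averaging these distances across trials yields the kind of error measure the abstract describes.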