Understanding Human Perception Through IVEs
Immersive Virtual Environments (IVEs) have been described as “the next best thing to being there.” They are digital worlds in which users can immerse themselves by wearing headsets and other hardware. A research team in NC State’s Center for Geospatial Analytics is taking advantage of the realism and sense of presence provided by IVEs to better understand human perceptions, preferences, and emotions in response to different environments. These insights can help us to design better built environments.
At the Coffee & Viz seminar in Hunt Library on Friday, April 7, at 9:30 a.m., the team will present an overview of IVE technology and describe its uses for environmental design research. There will be a hands-on demonstration of IVE applications following the talk. Presenters will include Payam Tabrizian, Perver Baran, Makiko Shukunube and Saeed Ahmadi Oloonabadi.
Payam Tabrizian, a Ph.D. student in the College of Design and a research assistant at the Center for Geospatial Analytics, talks about this work.
What sort of work does your group do?
In short, we evaluate people’s experience of landscapes; that is, natural and built environments. We integrate immersive virtual environments and geospatial analytics to understand the impact of landscape change on people’s perceptions and affective responses, such as moods, feelings and attitudes.
What kinds of projects do you do that involve immersive virtual environments?
IVE is useful in three types of projects: landscape assessment, landscape development and theory development. In landscape assessment projects, IVE helps us to quantify and map the experiential qualities of a landscape. In other words, it allows us to evaluate a landscape beyond its ecological or monetary value. For example, we can map perceived safety in a neighborhood park, the perceived historic or spiritual qualities of a large-scale urban park, or the perceived authenticity of farmscapes.
In landscape development projects, we use IVE to display different future design scenarios in a realistic, high-fidelity manner, to identify the solutions that are most compatible with stakeholders’ preferences. We see great value in integrating IVE into the design process as a tool for public engagement, as well as a decision-support tool.
In theory development, we delve into theories of environmental psychology and environmental design to see what types of landscapes, components of landscapes, or configurations of these elicit the most positive or negative responses. For example, we might examine the restorative potential of various vegetation components of urban green spaces.
Why are IVEs important for these projects? Can you give examples?
Research on landscape perceptions has utilized a variety of methods. Taking participants to a site, or to a variety of sites, for on-site experiences provides high ecological validity, but this method is relatively difficult and expensive to employ. Alternatively, researchers have utilized photos and videos, which are limited in representing the in-situ experience. In comparison with these conventional methods, IVEs elicit a high degree of immersion and presence, meaning that people can feel physically present in the environment under study. IVE headsets continuously stream visual information linked to users’ head and body movements, so that they can actively explore all facets of the environment. For example, being aware of the environment behind you, something static photos cannot convey, is very important in the perception of danger, and consequently in feelings of fear or personal safety.

In addition, virtual environments are flexible and can be programmed to enable higher experimental control and more rigorous data collection. For instance, we program them so that participants can respond to surveys using a joystick controller while they are immersed in an environment, which mimics capturing responses in a real setting. We also program the order of the settings being experienced, as well as the duration of the experiences.
What can or should be done to improve the IVE experience in the context of your work?
IVE is a fledgling technology and there are many aspects to improve. In the context of our work, improvements in 360-degree video capture technology would be immensely helpful. Other areas are eye tracking and haptic interaction. So far, integrating eye tracking with the Oculus headset is costly and not readily available. We are currently experimenting with Leap Motion technology so users can interact with the environments using hand gestures, thus improving the experience. Another area of interest to us is integrating a walk-through experience, that is, moving through the environment.
How long does it take you to design an IVE experience for your research? How does that compare to the visual presentations you would otherwise use?
It depends on the methods we use. In the case of realistic photos and in-situ photography, the time needed to capture and prepare an IVE environment is similar to that for 2-D images. It takes five to seven minutes to capture a set of photos for an IVE scene and about 30 minutes to stitch the photos and prepare an IVE presentation. If photo manipulation is involved, the procedure can take from 30 minutes to a few hours, depending on the type and extent of the changes and the designer’s proficiency.
In the case of synthetic, 3D-modelled environments, it can take from a day to a month, depending on the level of detail and the expected degree of realism. We have recently automated the entire process and brought the 3D modelling time down to a second by coupling a tangible user interface with open-source 3D modelling and game engine software. Users can change a physical model of a landscape with their hands and then see the changes in real time through an IVE headset.