Simulation-based training is becoming increasingly widespread. One reason is that more powerful computers allow more complex, and thus more realistic, training scenarios to be simulated. Another is that the demand for training is growing in many areas, from equipment operators to warfighters, as skillsets continually change and ever-higher levels of human performance and efficiency are demanded.
Trainers and instructors are therefore continually searching for new ways to apply technology to their training programs, making them as effective as possible at preparing students for real-life situations: a well-prepared student can more quickly match a real situation to a training experience and respond as trained.
Virtual Environment (VE) technology has emerged as one of the most promising new training tools. Its flexible configurability allows training in a wide variety of environments and situations, including those too complex, risky, or expensive to recreate for training, and allows very specific scenarios to be practiced as needed. Yet the effectiveness of training in a virtual environment is proportional to the quality of the virtual experience, which depends directly on the sense of presence and immersion one feels within the environment.
While high-resolution graphical displays are one important component of effective virtual environments, a multi-sensory representation of an environment must be presented to maintain a convincing illusion. The sensory displays must be accurate, synchronized with each other, and calibrated to the user’s biophysical receptors. One way of increasing sound field perception accuracy is to calibrate the perceptual cues to each individual. Calibration allows a trainee to tune a spatial auditory display system to work best with their own ears. This would be the acoustic equivalent of generating a custom eyeglass prescription for a person based on the functional state of their own eyes.
The most significant factor in accurately presenting a sound field is the selection of aural cues. Head-Related Transfer Functions (HRTFs) encompass the cues the auditory system uses for spatial sound perception. HRTFs capture three directionally dependent cues: 1) time of arrival, 2) intensity at each ear, and 3) spectral coloration. Because human bodies and ears differ in shape and size, these characteristics vary between individuals, in effect acting as an "ear-print". Presenting audio cues that are not customized, called foreign cues, can cause perceptual distortions that do not occur when the user is presented with customized, or natural, cues. The main failing of non-individualized HRTFs is hemispheric confusion: using foreign HRTFs increases front-back confusions by a factor of four and vertical confusions by a factor of seven, and induces images to be perceived inside the head. The result is poor sound-localization performance, which can in turn lead to spatial disorientation.
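To make the first cue concrete, a minimal sketch of the time-of-arrival cue is shown below using the classic Woodworth spherical-head approximation. This is a textbook model for illustration only, not AuSIM's measurement method; the function name, head radius, and speed of sound are assumed values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (ITD), in seconds, for a
    source at the given azimuth (0 = straight ahead, 90 = directly to
    one side), using the Woodworth spherical-head model:
        ITD = (a / c) * (theta + sin(theta))
    where a is the head radius and c is the speed of sound.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) arrives at the far ear
# roughly 0.66 ms after the near ear for a typical head radius.
print(round(woodworth_itd(90.0) * 1000, 2))  # ITD in milliseconds
```

Note that this simple model depends only on an assumed head radius; measured HRTFs additionally capture the individual intensity and spectral-coloration cues that such geometric approximations cannot.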
AuSIM's HRTF measurement system, HeadZap™, allows measurement of individual HRTFs with a high degree of precision. These datasets can then be used with AuSIM3D™ technology to present sound fields that create a highly realistic aural environment, complementing the immersive visual environments desirable for simulation-based training. What's more, once an individual's HRTF is measured, the dataset can be kept and re-used indefinitely across a wide variety of training scenarios.
Sample application: Driving Simulator
© AuSIM Inc. 1998-2011.