Musical recordings have generally been ignorant of the listener. The
producer creates a static mix that optimizes the listener's initial experience of
the piece. Thereafter the piece is stagnant, producing the same experience
every time. Additionally, the production is created for an optimal
listening environment, which the listener seldom has available.
Audio simulation through digital computation provides technologies to include
the listener in the listening experience. First, audio simulation can
entirely synthesize the listening environment, making it optimal for every
experience. Second, audio simulation can react to the listener, adapting not
only the aural environment but the content as well. The
result is that no two experiences of the piece are identical, and that the
listener has some control to optimize for their listening pleasure. While
this does not replicate the live performance, where the artist and audience work
together to create a unique experience, it is a decisive step in that direction.
One of the principal components of audio simulation technologies is the application
of the head-related transfer function (HRTF) to a monaural sound stream
to produce a binaural pair that yields synthetic cues to the directionality of a
sound relative to the listener. The HRTF, however, is only the last piece of a very
complex simulation of sound waves' emission from a source and their propagation
around and through an environment. In creating its audio simulation technology
AuSIM3D®, AuSIM pushed the technology in three areas:
- Real-time techniques for changing HRTFs smoothly were enhanced, allowing
head-tracked listeners to consistently perceive a virtual environment through
motion parallax (a simplified rendering sketch follows this list).
- Dynamic, physically based environmental and source models were developed to
complement the HRTF technology.
- Multi-participant technologies were developed in which each listener not only
absorbs the sounds of the environment and of the other participants, but also
contributes a dynamically positioned sound source with their own voice.
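To make the binaural rendering step concrete, the following is a minimal sketch,
assuming a block of a mono source is convolved with a left/right head-related
impulse response (HRIR) pair and that a per-block crossfade between the previous
and the newly selected pair smooths head-tracked direction changes. The names,
the plain time-domain convolution, and the crossfade strategy are illustrative
assumptions, not AuSIM3D®'s actual implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Left/right head-related impulse responses (HRIRs) for one source direction.
struct Hrir {
    std::vector<float> left;
    std::vector<float> right;
};

struct BinauralBlock {
    std::vector<float> left;
    std::vector<float> right;
};

// Plain time-domain convolution of one block; a real-time renderer would use
// partitioned FFT convolution instead.
static std::vector<float> convolve(const std::vector<float>& x,
                                   const std::vector<float>& h) {
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

// Render one block of a mono source. 'previous' is the HRIR pair used for the
// last block, 'current' the pair chosen from the head-tracked direction; the
// two renderings are linearly crossfaded so a direction change is heard as
// smooth motion rather than a click. Both HRIR pairs are assumed to have the
// same length.
BinauralBlock renderBlock(const std::vector<float>& mono,
                          const Hrir& previous, const Hrir& current) {
    const std::vector<float> oldL = convolve(mono, previous.left);
    const std::vector<float> oldR = convolve(mono, previous.right);
    const std::vector<float> newL = convolve(mono, current.left);
    const std::vector<float> newR = convolve(mono, current.right);

    const std::size_t len = std::min(oldL.size(), newL.size());
    BinauralBlock out;
    out.left.resize(len);
    out.right.resize(len);
    for (std::size_t n = 0; n < len; ++n) {
        const float w = static_cast<float>(n) / static_cast<float>(len);  // 0 -> 1 fade
        out.left[n]  = (1.0f - w) * oldL[n] + w * newL[n];
        out.right[n] = (1.0f - w) * oldR[n] + w * newR[n];
    }
    return out;
}
```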
The simulation within
InTheMix maintains most of the information needed to generate a fully immersive
visual 3D world, and the audio simulation technology works very well when tightly
integrated with a wide field-of-view head-mounted display (HMD). But InTheMix is
deliberately sans visuals, to emphasize the aural senses of humans, who have become
visually dominant.
InTheMix employs AuSIM's AuTrak™ software, which generically
supports most popular 6D (six-degree-of-freedom) tracking systems based on
electromagnetic, visual, inertial, and ultrasonic technologies. In development,
InTheMix has used Polhemus Fastrak™ electromagnetic instrumentation coupled with
the Polhemus LongRanger™ transmitter. At SIGGRAPH 2000, the InTheMix team may
partner with any of several tracking suppliers to demonstrate their
leading-edge technology.
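As a rough illustration of the kind of abstraction such a tracker layer provides,
the sketch below reduces any tracking backend to a common six-degree-of-freedom
head pose and derives the head-relative direction used to select an HRTF. The
interface and names are hypothetical and do not reflect the actual AuTrak™ API.

```cpp
#include <cmath>

// A generic six-degree-of-freedom pose: position plus orientation.
struct Pose6D {
    float x, y, z;           // position (e.g. metres)
    float yaw, pitch, roll;  // orientation (radians)
};

// Interface a hardware backend (electromagnetic, visual, inertial, or
// ultrasonic) would implement so the renderer never sees device details.
class HeadTracker {
public:
    virtual ~HeadTracker() = default;
    virtual Pose6D headPose() const = 0;
};

// Direction of a source relative to the listener's head, used to pick an HRTF.
struct Direction {
    float azimuth;    // radians, left/right
    float elevation;  // radians, up/down
};

// Compute the head-relative direction of a world-space source position.
// Only yaw is applied here for brevity; a full implementation would use the
// complete orientation.
Direction sourceDirection(const Pose6D& head, float sx, float sy, float sz) {
    const float dx = sx - head.x;
    const float dy = sy - head.y;
    const float dz = sz - head.z;
    const float c = std::cos(head.yaw);
    const float s = std::sin(head.yaw);
    const float hx =  c * dx + s * dy;   // rotate world vector into head frame
    const float hy = -s * dx + c * dy;
    Direction d;
    d.azimuth   = std::atan2(hy, hx);
    d.elevation = std::atan2(dz, std::sqrt(hx * hx + hy * hy));
    return d;
}
```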
InTheMix represents a Point Of Departure for listeners, for artists, and
for active participants. It encourages collaboration by bringing people together
to share experiences in natural ways. It also promotes mobility by maintaining
high-fidelity interaction from remote locations. And it challenges humans to use
more of their senses in a world increasingly dominated and overwhelmed by the visual.
Finally, it employs artificial life, giving artists a proxy for delivering their art.
Many more applications of the technologies that make up InTheMix lie in our future.