3D Virtual Sound Project: Results

ejhong@cs.stanford.edu

Results

Informal observation showed that externalization and localization of sound sources were achieved using the MIT HRTF set and plug-in style headphones. The ability to move one's head and have the sound adjust to that movement greatly enhanced localization. Even the interpolated sound files provided adequate localization when combined with head movement.
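
To make the per-block processing concrete, the following is a minimal C sketch of head-tracked HRTF convolution: an impulse-response pair is re-selected from the measured set as the head-relative source direction changes, and each block of mono input is convolved in the time domain into stereo output. The hrtf_lookup function, the 128-tap filter length, and the block interface are assumptions for illustration, not the system's actual code.

    #include <stddef.h>

    #define HRIR_LEN 128              /* assumed impulse-response length */

    typedef struct {
        float left[HRIR_LEN];         /* left-ear impulse response  */
        float right[HRIR_LEN];        /* right-ear impulse response */
    } Hrir;

    /* Hypothetical: returns the measured HRIR pair nearest the current
       head-relative source direction. */
    const Hrir *hrtf_lookup(float azimuth_deg, float elevation_deg);

    /* Convolve one block of mono input into stereo output.  `history`
       holds the previous HRIR_LEN-1 input samples so the convolution is
       continuous across block boundaries; assumes n >= HRIR_LEN-1. */
    void spatialize_block(const float *in, size_t n,
                          float *out_l, float *out_r,
                          float history[],
                          float azimuth_deg, float elevation_deg)
    {
        const Hrir *h = hrtf_lookup(azimuth_deg, elevation_deg);
        size_t i, k;

        for (i = 0; i < n; i++) {
            float acc_l = 0.0f, acc_r = 0.0f;
            for (k = 0; k < HRIR_LEN; k++) {
                /* input sample i-k: from this block or the history tail */
                float s = (i >= k) ? in[i - k]
                                   : history[HRIR_LEN - 1 + i - k];
                acc_l += h->left[k]  * s;
                acc_r += h->right[k] * s;
            }
            out_l[i] = acc_l;
            out_r[i] = acc_r;
        }

        /* keep the last HRIR_LEN-1 input samples for the next block */
        for (k = 0; k < HRIR_LEN - 1; k++)
            history[k] = in[n - (HRIR_LEN - 1) + k];
    }

With the assumed 128-tap filters, direct convolution costs about 2 x 128 multiply-adds per output sample per source, which suggests why only a small number of sources can be convolved simultaneously in real time and why the pre-processed path described next trades storage for computation.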

The system allows simultaneous convolution of two sound sources or interpolation of four pre-processed interleaved sound files (or a suitable combination of the two). One reason the pre-processed method is not faster is that four times as much data must be accessed, which can increase file read time or cause delays due to memory swapping (if the sound is loaded into memory). Its major drawback is the large increase in file size required. Pre-processed interleaved sound files are therefore appropriate for very short sounds, while direct convolution is appropriate for longer sound files. In addition, a client can specify that a sound be pre-loaded into memory from a sound file to reduce file read time. Together, the options of direct convolution, interleaved files, and direct memory access let a client get the most out of the system given the set of sounds that will play simultaneously.
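
By contrast, the pre-processed path does no convolution at run time. The sketch below assumes each frame of an interleaved file carries four stereo samples of the sound pre-convolved at the four measured directions surrounding the target, blended with bilinear weights; the frame layout and weighting scheme are assumptions, since this report does not specify them.

    /* Blend one frame of four pre-convolved stereo samples.  `fa` and
       `fe` are the source's fractional position between the two nearest
       measured azimuths and elevations (both in [0,1]); the bilinear
       weighting is an assumption, not taken from this report. */
    void interpolate_frame(const float pre[4][2],   /* 4 stereo samples */
                           float fa, float fe,
                           float *out_l, float *out_r)
    {
        float w[4];
        int i;

        w[0] = (1.0f - fa) * (1.0f - fe);   /* low az,  low el  */
        w[1] = fa * (1.0f - fe);            /* high az, low el  */
        w[2] = (1.0f - fa) * fe;            /* low az,  high el */
        w[3] = fa * fe;                     /* high az, high el */

        *out_l = *out_r = 0.0f;
        for (i = 0; i < 4; i++) {
            *out_l += w[i] * pre[i][0];     /* left channel  */
            *out_r += w[i] * pre[i][1];     /* right channel */
        }
    }

Per output sample this is only a handful of multiply-adds, but the interleaved file holds four times the data of a single pre-convolved stereo file, which is the file-size penalty noted above.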

Future Work

Possible future work includes using compressed files with fast run-time decompression to alleviate the large file sizes required by interleaved sound files, automatically loading appropriate sound files into memory while the system is idle, and modeling the environmental context more realistically at run time.


Last modified: March 20, 1996 by Eugene Jhong