RASPUTIN is a fundamental collaborative research project (PRCE) at the intersection of « Sciences et Technologies Numérique » and « Psychologie ». It aims to reduce the cognitive complexity of navigation by visually impaired people in new interior surroundings, using digital simulations and virtual auditory reality explorations as preparation and mental-map construction exercises. The objective promotes access to information for all, and from anywhere. With training, it is possible to judge acoustically the distance to walls or other sound-reflective objects, a skill learned by many visually impaired individuals. RASPUTIN investigates the use of perceptually realistic room acoustic simulations to assist visually impaired individuals in preparing for indoor navigation.
Simulation algorithms have steadily improved in their ability to predict acoustic metrics. In real-time VR systems, however, where the sound source, listener, and room architecture vary in unpredictable ways, investigations of perceptual quality and realism have been hindered by algorithmic simplifications.
RASPUTIN addresses fundamental questions of spatial perception, memory, acoustics, and signal processing related to functional VR room simulations, while examining the psychoacoustic and cognitive impacts of rendering quality. Evaluations will consider added benefits in terms of navigation speed, precision, improved self-confidence, and sense of security. The research goals of RASPUTIN are fourfold: first, a significant advancement in understanding the fundamental capacities of spatial architectural perception and memory through auditory experience; second, the improvement and integration of a real-time room acoustic simulation algorithm into an open-source research virtual reality platform; third, the evaluation of interactive room acoustic simulations as a planning/training aid for visually impaired individuals; and fourth, the improvement of the autonomy of visually impaired people. The RASPUTIN project advances Human-Computer Interaction through the development of virtual auditory environments that improve training and learning for people with disabilities, as well as improving access to, and understanding of, public sites.
The aim of the project is in full accordance with the joint challenge « Société de l’information et de la communication / Sociétés innovantes, intégrantes et adaptative - La révolution numérique : rapports aux savoirs et à la culture » concerning « Education et formation », with a clear fundamental research collaboration between « Sciences et Technologies Numérique » and « Psychologie », thereby addressing « Orientation n° 33 (Innovations sociales, éducatives et culturelles) ».
The project is organized in the following principal research workpackages:
WP 1 will examine the fundamental acoustic, psychoacoustic, and cognitive facets of spatial perception and representation resulting from various presentation means (navigation, tactile maps, etc.). Evaluation methods include quality assessments of mental spatial models (topological organization and conservation of metric relations), which measure the stability and accuracy of the mental maps relative to the real environment. Results of these studies will be incorporated into WP 2 and evaluated in detail in WP 4.
WP 2 addresses the development of the room acoustic simulation engine. This research task initially concerns the improvement of a previously developed method (Noisternig, 2008), an Iterative Image Source Method (IISM). This involves, first and foremost, the integration of the IISM into a geometrical scene engine that allows for the definition of a unified visual and acoustical geometrical model. The open-source multiplatform architecture Blender has previously been identified as a good candidate for this integration.
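To illustrate the core idea behind an image source method, the minimal sketch below computes first-order image sources for a rectangular ("shoebox") room. This is only an illustration: the IISM of Noisternig (2008) handles arbitrary geometry iteratively and to higher reflection orders, and all function names and parameter values here (wall absorption, room dimensions) are hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 °C

def first_order_image_sources(src, room):
    """Six first-order image sources for a shoebox room of size (Lx, Ly, Lz).

    Each wall reflection is modeled by mirroring the real source across
    that wall; the reflection then behaves like a direct path from the
    mirrored (image) position.
    """
    x, y, z = src
    Lx, Ly, Lz = room
    return [
        (-x, y, z), (2 * Lx - x, y, z),  # walls at x = 0 and x = Lx
        (x, -y, z), (x, 2 * Ly - y, z),  # walls at y = 0 and y = Ly
        (x, y, -z), (x, y, 2 * Lz - z),  # floor (z = 0) and ceiling (z = Lz)
    ]

def arrival(image, listener, absorption=0.1):
    """Delay (s) and amplitude of one first-order reflection.

    Amplitude combines 1/r spherical spreading with one wall bounce
    attenuated by a (hypothetical) frequency-independent absorption.
    """
    d = math.dist(image, listener)
    return d / SPEED_OF_SOUND, (1.0 - absorption) / d
```

Collecting the delay/gain pairs of all image sources (direct path included) yields a sparse early-reflection impulse response, which is the part of the room response the IISM is responsible for; the later diffuse tail is handled separately, as described below.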
The audio rendering will be implemented with the powerful real-time spatialization library Spat~ (Carpentier, 2015). The quality of the simulated room acoustic responses will be further improved through the addition of spatially coherent statistical reverberation models for the later parts of the acoustic response. These statistical models will be developed based on recent work on hybrid reverberators (Carpentier, 2014), which combine a high-spatial-resolution convolution engine with feedback delay networks and appear particularly well suited for pairing with room acoustic simulation software. The model parameterizations will be based on simplified approximations of the room geometry obtained from the unified geometrical visual-acoustic model. WP 2 and WP 3 concern implementations and evaluations that can be carried out in parallel after the completion of WP 1.
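A feedback delay network (FDN) models the late, statistically diffuse part of a reverberant response with a few delay lines coupled through an energy-preserving mixing matrix. The sketch below is not the Spat~ implementation; it is a generic, hypothetical 4-line FDN (delay lengths, feedback gain, and the Hadamard mixing matrix are illustrative choices) showing only the structural principle.

```python
import numpy as np

def fdn_reverb(x, delays=(149, 211, 263, 293), g=0.85):
    """Process a mono signal through a minimal 4-line FDN; returns the wet tail.

    The Hadamard matrix scaled by 1/2 is orthogonal, so with feedback
    gain g < 1 the loop is guaranteed stable and energy decays smoothly.
    Mutually prime delay lengths help avoid audible periodicity.
    """
    H = 0.5 * np.array([[1,  1,  1,  1],
                        [1, -1,  1, -1],
                        [1,  1, -1, -1],
                        [1, -1, -1,  1]])
    lines = [np.zeros(d) for d in delays]   # circular delay-line buffers
    ptrs = [0] * 4
    y = np.zeros(len(x))
    for n in range(len(x)):
        outs = np.array([lines[i][ptrs[i]] for i in range(4)])
        y[n] = outs.sum()                   # simple sum as the wet output
        fb = g * (H @ outs)                 # mix and attenuate the feedback
        for i in range(4):
            lines[i][ptrs[i]] = x[n] + fb[i]
            ptrs[i] = (ptrs[i] + 1) % len(lines[i])
    return y
```

In a hybrid reverberator of the kind cited above, the early reflections come from a convolution engine fed by the geometric simulation, while an FDN stage of this general form supplies the dense late tail, with its decay parameters fitted to the simplified room geometry.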
WP 3 involves the development of a proof-of-concept prototype for the informative exploration of virtual acoustic environments. Working in conjunction with a selected user group panel that will remain engaged for the duration of the project, several test cases of interest will be identified for integration into the prototype and evaluation in WP 4.
Evaluations by a recruited panel of visually impaired potential users will establish the degree of benefit of the VR training relative to traditional preparatory means, such as tactile maps and verbal descriptions. Benefits will be quantified with respect to spatial mental-model accuracy, speed of navigation, autonomy, success rate in reaching destinations, and level of self-confidence.