Abstract:
Accurately perceiving the structure of biochemical molecules is key to understanding their function in biological systems. Visualization software has given the scientific and medical communities a means to study these structures in great detail; however, these tools lack an intuitive means to convey this information to persons with visual impairment. Advances in spatial audio technology have allowed sound to be perceived in three-dimensional space when played over headphones. This work presents the development of a novel computational tool that uses spatial audio to convey the three-dimensional structure of biochemical molecules.
Abstract:
Prime III is a multimodal voting system that allows users to make selections on their ballot by touch or voice. This paper discusses an experiment that evaluated the system's speech recognition at various levels of background noise. An approach to simulating realistic background noise in a controlled environment, mimicking a voter voting in a precinct, is described. The goal of the experiment was to establish a threshold at which distortion occurs and speech recognition accuracy declines. The signal-to-noise ratios (SNRs) between the speech and background-noise volumes were recorded, and the system's accuracy was tested. The result was a suggested threshold of an SNR of 1.44 to attain 90% system accuracy. The next phase of this project is to test the level of system interference from ambient noise in an actual voting precinct.
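As a rough illustration of the quantity behind that threshold (not the paper's actual measurement pipeline, and with made-up sample windows standing in for real recordings), a linear SNR can be computed as the ratio of the speech signal's RMS amplitude to the noise's:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def linear_snr(speech, noise):
    """Linear (non-dB) SNR: ratio of speech RMS to noise RMS."""
    return rms(speech) / rms(noise)

# Hypothetical sample windows; real values would come from precinct recordings.
speech_window = [0.30, -0.28, 0.31, -0.29]
noise_window = [0.20, -0.21, 0.19, -0.20]

snr = linear_snr(speech_window, noise_window)
# Per the suggested threshold, recognition accuracy holds near 90%
# only when this ratio is at least 1.44.
usable = snr >= 1.44
```

The 1.44 figure is quoted from the abstract; whether it was defined over RMS amplitudes or power is an assumption here.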
Abstract:
Many people with visual impairments actively play soccer; however, making the game accessible presents significant challenges. These challenges include the need for players to constantly talk to signify their location and the difficulty of detecting the positions of silent objects on the field. Our work aims to discover methods to help persons with visual impairments play soccer more efficiently and safely.
Abstract:
Since the United States federal law, the Help America Vote Act, was passed in 2002, it has been widely recognized that all Americans should have equal access to voting with privacy and security. However, current electronic voting technologies have failed to provide barrier-free access for blind and visually impaired voters to write in and verify their desired candidates' names without assistance. Attempts have been made to create audio voting systems that read letters aloud for visually impaired voters to select, but these systems lack features for checking and correcting the user's typing. This paper describes a new technology that allows users to navigate a virtual keyboard to type, check, and modify their desired candidates' names using mouse movement and clicking. This new voting system was recently tested at Clemson University, South Carolina, on 16 subjects who were blindfolded to simulate the experience of visually impaired voters. The results show that blindfolded users have difficulty finding the keys they want on a virtual keyboard with a mouse, no matter how the keys are sorted. This research is expected to reveal differences between human muscle memory and spatial memory, and to provide a new reference for future human-computer interaction research.
Abstract:
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not previously exist. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns must be addressed. First, head-related transfer functions (HRTFs) must be created inexpensively for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training listeners to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was derived from the characterization of successful search strategies in prior auditory search experiments, and search accuracy significantly improved after listeners completed it. Next, to investigate auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required storing sound source configurations in memory. To assess the impact of practical scenarios, the present work measured the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. By investigating these concerns, the present study takes initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
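As a minimal sketch of the core HRTF rendering idea (using toy impulse responses rather than any database from the study), spatializing a mono source for headphone playback amounts to convolving it with left- and right-ear head-related impulse responses (HRIRs):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal as a binaural stereo pair by convolving it
    with the left- and right-ear head-related impulse responses.
    Both HRIRs must have the same length so the channels stack."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Toy HRIRs: a louder, earlier-arriving left ear suggests a source to
# the listener's left. Real HRIRs are measured (or, as in the study,
# subjectively selected from a public database per listener).
hrir_left = np.array([0.9, 0.3, 0.1, 0.0])
hrir_right = np.array([0.0, 0.5, 0.2, 0.05])

mono = np.sin(2 * np.pi * 440 * np.arange(256) / 44100.0)
binaural = spatialize(mono, hrir_left, hrir_right)
```

In a real-time system the convolution would run in the frequency domain and the HRIR pair would be interpolated as the listener's head moves; this sketch shows only the static single-source case.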