Hearing biochemical structures: molecular visualization with spatial audio

Article
Arce, Terek R and McMullen, Kyla A
ACM SIGACCESS Accessibility and Computing (117): 9–13, 2017
Publication year: 2017

Abstract:

Accurately perceiving the structure of biochemical molecules is key to understanding their function in biological systems. Visualization software has given the scientific and medical communities a means to study these structures in great detail; however, these tools lack an intuitive means to convey this information to persons with visual impairment. Advances in spatial audio technology have allowed sound to be perceived in three-dimensional space when played over headphones. This work presents the development of a novel computational tool that utilizes spatial audio to convey the three-dimensional structure of biochemical molecules.
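
The abstract does not detail how a molecule's geometry is turned into audio parameters, so the following is only a minimal sketch of one plausible mapping, assuming each atom's Cartesian coordinates (for example, as read from a PDB file) are converted to the azimuth, elevation, and distance a spatial-audio renderer would use to place a sound source around the listener. The function and coordinate conventions here are assumptions for illustration, not the paper's implementation.

```python
import math

def atom_to_spatial_params(x, y, z):
    """Convert an atom's position (listener at the origin, x = right,
    y = up, z = forward, units in angstroms) into spherical coordinates
    a spatial-audio renderer could use to place a sound source."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, z))   # left/right angle
    elevation = math.degrees(math.asin(y / distance)) if distance else 0.0
    return azimuth, elevation, distance

# Example: an atom 2 Å to the right, slightly above, and 3 Å in front of the listener
print(atom_to_spatial_params(2.0, 0.5, 3.0))
```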

To Start Voting, Say Vote: Establishing a Threshold for Ambient Noise for a Speech Recognition Voting System

Article
Jackson, France and Solomon, Amber and McMullen, Kyla and Gilbert, Juan E
Procedia Manufacturing, 3: 5512–5518, 2015
Publication year: 2015

Abstract:

Prime III is a multimodal voting system that allows users to make selections on their ballot by touch or voice. This paper discusses an experiment that evaluated the system’s speech recognition at various levels of background noise. An approach to simulating realistic background noise in a controlled environment, mimicking a voter voting in a precinct, is described. The goal of the experiment was to establish a threshold at which distortion occurs and speech recognition accuracy declines. The signal-to-noise ratios (SNRs) between the volumes were recorded, and the system’s accuracy was tested. The result was a suggested threshold SNR of 1.44 to attain 90% system accuracy. The next phase of this project is to test the level of system interference from ambient noise in an actual voting precinct.
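
The paper reports the threshold as a plain ratio (1.44) rather than in decibels; the snippet below is a hedged sketch of how such a figure might be computed, assuming the SNR is taken as the ratio of speech RMS amplitude to ambient-noise RMS amplitude. The sample values and the threshold check are illustrative only.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr(speech, noise):
    """Signal-to-noise ratio as a linear amplitude ratio (not dB)."""
    return rms(speech) / rms(noise)

# Illustrative check against the suggested threshold of 1.44
speech_samples = [0.30, -0.28, 0.35, -0.31]   # placeholder speech excerpt
noise_samples = [0.20, -0.19, 0.21, -0.18]    # placeholder ambient noise
if snr(speech_samples, noise_samples) >= 1.44:
    print("Above threshold: ~90% recognition accuracy expected")
else:
    print("Below threshold: recognition accuracy likely declines")
```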

Using virtual spatial audio to aid visually impaired athletes

Article
Becwar, R and Sieron, D and McMullen, K and Gardner, C
2014
Publication year: 2014

Abstract:

Many people with visual impairments actively play soccer; however, making the game accessible presents significant challenges, including the need to talk constantly to signify location and the difficulty of detecting the positions of silent objects on the field. Our work aims to discover methods that help persons with visual impairments play soccer more efficiently and safely.

Audio Voting for the Visually Impaired: Virtual Keyboard Navigation

Article
Ongsarte, Ashley and Jiang, Youxuan Lucy and McMullen, Kyla
2014
Publication year: 2014

Abstract:

Since the United States federal Help America Vote Act was passed in 2002, it has been widely recognized that all Americans should have equal access to vote with privacy and security. However, current electronic voting technologies have failed to provide barrier-free access for voters who are blind or visually impaired to enter and check their desired candidates' names without assistance. Attempts have been made to create audio voting systems that read letters aloud for visually impaired voters to choose from, but these systems lack features for checking and correcting the user's typing. This paper describes a new technology that allows users to navigate a virtual keyboard with mouse movement and clicking to type, check, and modify their desired candidates' names. The voting system was recently tested at Clemson University, South Carolina, on 16 subjects who were blindfolded to simulate the experience of visually impaired voters. The results show that blindfolded users have difficulty finding the keys they want on a virtual keyboard with a mouse, no matter how the keys are sorted. This research is expected to reveal the difference between human muscle memory and spatial memory, and to provide a new reference for future human-computer interaction research.
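
The abstract describes the interaction only at a high level; the sketch below shows one hypothetical way mouse coordinates could be mapped onto a virtual key grid and announced to the voter. The grid layout, cell size, and the speak callback are assumptions for illustration, not the system described in the paper.

```python
# Hypothetical alphabetical key grid; the paper tested multiple key orderings.
KEY_ROWS = [
    "ABCDEFG",
    "HIJKLMN",
    "OPQRSTU",
    "VWXYZ_<",   # "_" = space, "<" = backspace for corrections
]
CELL_WIDTH, CELL_HEIGHT = 80, 80   # assumed pixels per virtual key

def key_at(mouse_x, mouse_y):
    """Return the key under the cursor, or None if outside the grid."""
    row, col = mouse_y // CELL_HEIGHT, mouse_x // CELL_WIDTH
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return None

def on_mouse_move(mouse_x, mouse_y, speak=print):
    """Announce the hovered key so a non-visual user can locate it."""
    key = key_at(mouse_x, mouse_y)
    if key is not None:
        speak(f"Key {key}")

on_mouse_move(170, 90)   # announces "Key J"
```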

Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments.

Article
McMullen, Kyla A
2012
Publication year: 2012

Abstract:

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment had not previously been explored. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments, and search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work evaluated the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners’ ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
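
The abstract discusses HRTF-based rendering at a high level; the following is a minimal sketch of how a single virtual source is commonly rendered binaurally, assuming a pair of head-related impulse responses (HRIRs) for the listener's selected HRTF and the desired direction is available as NumPy arrays. The HRIR values below are placeholders, not data from the study.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left- and
    right-ear head-related impulse responses for one direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # shape: (samples, 2)

# Placeholder HRIRs; real ones would come from a measured HRTF database.
hrir_l = np.array([0.0, 0.6, 0.3, 0.1])
hrir_r = np.array([0.0, 0.2, 0.5, 0.2])
mono_source = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)   # 0.1 s, 440 Hz tone
stereo = render_binaural(mono_source, hrir_l, hrir_r)
```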