Research

Dr. McMullen leads the SoundPAD Laboratory at the University of Florida, which focuses on the Perception, Application, and Development of 3D audio systems. This research area sits at the intersection of human-centered computing, electrical engineering, and perception. As virtual reality (VR) and augmented reality (AR) become more commonplace, as evidenced by the recent proliferation of consumer-grade VR devices, significant research is needed to investigate 3D audio's role in the systems of the future. Broadly, the goal of Dr. McMullen's research is to elucidate the human factors that must be taken into account in the design and use of 3D audio systems. In addition, the lab's work in 3D audio has strengthened its signal processing capacity, which it also applies to brain-computer interface (BCI) research.

The lab is composed of undergraduate, master's, and PhD students researching topics related to 3D sound and/or sound signal processing. Headphone-rendered 3D sound can be used in a wide variety of applications, such as compensating for the visual channel when it becomes overloaded or visual cues are not present, increasing immersion in virtual and augmented reality, helping to overcome the narrow field-of-view (FOV) issues in augmented reality, aiding persons with visual impairments, and increasing sound intelligibility in complex acoustic scenes.

Research Interests

  • Assistive technology
  • Auditory User Interfaces
  • Augmented Reality
  • Auditory Display
  • Boundary Element Methods for Acoustics
  • Brain-Computer Interface
  • Head-Related Transfer Functions
  • Human-Centered Computing
  • Multimodal Display
  • Psychoacoustics
  • Virtual Reality

Current and Past Projects

  • CAREER: Audio RESCUE: Realistic Audio for RESponders in Complex Undetermined Environments

    Abstract

The role of first responders is more important than ever. For firefighters, the greatest challenges in search and rescue tasks are disorientation, tracking team members, and low visibility. Despite existing methods to overcome these challenges, firefighters are still plagued by getting lost in buildings, disorientation, miscommunication of locations and spaces, and the inability to identify paths. Using 3D sound in this context is the most promising solution because responders' hands are often occupied and visual displays can increase cognitive load.

This project pursues a novel research program to build a fundamental understanding of how to move 3D audio research from the laboratory into the real world. Working closely with Gainesville Fire Rescue (GFR) as the testbed application domain, the project team will evaluate the use of 3D audio to support search and rescue missions. The PI and team will evaluate firefighters' performance and use the results to produce data-driven, empirically validated 3D sound design guidelines that address the following challenges:

    • Realistic 3D sound rendering: Rendering 3D sound is challenging due to the individualistic nature of 3D sound filters, sound delivery hardware, and the perceptual qualities of sounds.
    • Change detection: In real-world contexts, a user must be able to quickly detect changes in 3D sounds; however, limited literature exists in this area.
    • Effects of competing sounds: Real-world scenarios are filled with background noise that may hinder the listener from accurately distinguishing a 3D sound of interest.
  • Effects of 3D Audio on Prostate Biopsy Training

    Medical task simulators provide a safe environment in which practitioners can rehearse procedures without impacting patient safety. Augmented reality (AR) emerges in this field as an ideal method of display, in which a user can directly interact with a scene aligned with their body in natural space without losing sight of the natural environment. However, current AR displays have a narrow field of view (FOV), making it challenging for a user to immediately attend to an object outside of their periphery. This research investigates how the addition of 3D audio cues in narrow-FOV AR contexts aids users in perceiving and interacting with points of interest outside the FOV.

  • 3D Audio for Narrow Field of View, Augmented Reality Contexts

    In general, AR is plagued by the narrow-FOV challenge. The lab conducts research that focuses on quantifying the degree to which these challenges can be mitigated using 3D audio combined with visual cues.

  • Generation NEXT (Need-based, EXtensive Support Through Degree Completion) Project

    An NSF-supported project that provides computer science graduate students at UF with scholarships, strategic advisement, and support, and works to change the department climate. The project will advance knowledge concerning computer science identity (CSI) and research identity (RI) while addressing gaps in the literature regarding CSI measurement for graduate students.

  • Numerical Computation of HRTFs from a 3D Mesh, Using Boundary Element Methods

    In a further effort to arrive at a customized HRTF without the need for costly measurement, we are developing a solution that scans a user's head and torso and mathematically computes their HRTF. The standard formulation behind this kind of solver is sketched below.
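
    For background, this is the textbook boundary element formulation for exterior acoustics, not a description of the lab's specific solver:

    ```latex
    % The sound pressure p around the rigid head-and-torso mesh satisfies the
    % Helmholtz equation, with wavenumber k = omega / c:
    \[
      \nabla^2 p(\mathbf{x}) + k^2 p(\mathbf{x}) = 0 .
    \]
    % BEM recasts this as the Kirchhoff--Helmholtz integral over the mesh
    % surface S, using the free-space Green's function
    % G(x, y) = e^{-ikr} / (4 \pi r), with r = |x - y|:
    \[
      c(\mathbf{x})\, p(\mathbf{x}) =
      \int_S \left[ G(\mathbf{x},\mathbf{y})\,
                    \frac{\partial p}{\partial n}(\mathbf{y})
                  - p(\mathbf{y})\,
                    \frac{\partial G}{\partial n}(\mathbf{y}) \right]
      dS(\mathbf{y}),
    \]
    % where c(x) = 1/2 for x on a smooth part of S. Discretizing S into
    % boundary elements turns this into a dense linear system solved once per
    % frequency; evaluating p at each ear for every source direction yields
    % the HRTF.
    ```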

  • Brain-Computer Interface for Measuring Musical Affect

    Music has long been known to affect a person's emotional state. The lab uses BCI and machine learning to investigate whether the affect elicited by music has a measurable neurological basis.

  • 3D Audio Interfaces for Sensing Museum Exhibits

    In an effort to make the museum experience more immersive and interactive, we have partnered with the UF Library and Museum Studies Department to develop a 3D audio system that allows museum visitors to hear content (such as oral histories or ambient sounds) as they approach a display. The 3D audio is updated in real time as the user moves around the library, as sketched below.
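
    A minimal sketch of the per-update computation such a system needs each time the visitor's tracked position changes. The coordinate frame, yaw convention, and attenuation rule here are illustrative assumptions, not the deployed system's design:

    ```python
    import math

    def relative_azimuth_and_distance(listener_pos, listener_yaw, exhibit_pos):
        """Return (azimuth_deg, distance_m) of an exhibit relative to the listener.

        listener_pos / exhibit_pos: (x, y) floor coordinates in meters
        (hypothetical values from indoor-positioning sensors); listener_yaw:
        heading in degrees, 0 = +y axis, increasing clockwise.
        """
        dx = exhibit_pos[0] - listener_pos[0]
        dy = exhibit_pos[1] - listener_pos[1]
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy))         # world-frame bearing
        azimuth = (bearing - listener_yaw + 180) % 360 - 180  # head-relative
        return azimuth, distance

    def gain_for_distance(distance, ref=1.0, min_gain=0.05):
        """Simple inverse-distance attenuation so nearer exhibits sound louder."""
        return max(min_gain, min(1.0, ref / max(distance, ref)))

    # On each tracking update, feed azimuth/gain to the binaural renderer:
    az, dist = relative_azimuth_and_distance((2.0, 3.0), 90.0, (5.0, 3.0))
    print(f"azimuth {az:.0f} deg, distance {dist:.1f} m, "
          f"gain {gain_for_distance(dist):.2f}")
    ```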

  • Head-Related Transfer Function (HRTF) Subjective Selection

    Head-related transfer functions (HRTFs) allow us to hear accurate 3D audio over headphones. These functions depend highly on each person's anthropometric proportions. As such, we have conducted many studies that allow a user to select the "best" HRTF from a database of publicly available HRTFs, without the need for a costly measurement apparatus.
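
    Once an HRTF is selected, headphone rendering for a single source direction reduces to a pair of convolutions. A minimal sketch, assuming the selected head-related impulse responses (HRIRs) are already loaded as NumPy arrays; the placeholder filters below stand in for real measured ones:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def binauralize(mono, hrir_left, hrir_right):
        """Spatialize a mono signal by convolving it with the left/right HRIRs
        selected for one source direction.

        mono: 1-D float array; hrir_left/right: 1-D impulse responses at the
        same sample rate. Returns an (N, 2) stereo array for headphone playback.
        """
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        out = np.stack([left, right], axis=1)
        peak = np.max(np.abs(out))
        return out / peak if peak > 0 else out   # normalize to avoid clipping

    # Toy example: a 0.5 s noise burst and placeholder HRIRs (real ones would
    # come from a public HRTF database, as in the selection studies).
    fs = 44100
    noise = np.random.randn(fs // 2)
    hrir_l = np.zeros(256); hrir_l[0] = 1.0    # identity filter, placeholder
    hrir_r = np.zeros(256); hrir_r[30] = 0.7   # delayed, attenuated right ear
    stereo = binauralize(noise, hrir_l, hrir_r)
    print(stereo.shape)  # (22305, 2): signal length + filter length - 1
    ```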

  • Brain-Controlled Drum Interface

    We developed a proof-of-concept system that allows a person to control an electronic drum using only facial movements, such as blinking, winking, raising eyebrows, and smiling, as detected by the BCI.
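
    The core of such a system is a dispatch from detected expression events to percussion triggers. A sketch of that mapping, assuming a MIDI output and hypothetical expression labels; the real system's event names and sound engine may differ:

    ```python
    import mido  # assumes a MIDI backend such as python-rtmidi is installed

    # Hypothetical expression labels -> General MIDI percussion note numbers.
    EXPRESSION_TO_DRUM = {
        "blink": 38,        # snare
        "wink_left": 42,    # closed hi-hat
        "wink_right": 46,   # open hi-hat
        "raise_brow": 49,   # crash cymbal
        "smile": 36,        # bass drum
    }

    def play_expression(port, expression, velocity=100):
        """Trigger the drum sound mapped to a detected facial expression."""
        note = EXPRESSION_TO_DRUM.get(expression)
        if note is None:
            return
        # Channel 10 (index 9) is the General MIDI percussion channel.
        port.send(mido.Message("note_on", channel=9, note=note,
                               velocity=velocity))
        port.send(mido.Message("note_off", channel=9, note=note, velocity=0))

    if __name__ == "__main__":
        with mido.open_output() as port:  # default system MIDI output
            # Stand-in for the stream of events the BCI would deliver.
            for event in ["blink", "smile", "raise_brow"]:
                play_expression(port, event)
    ```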

  • Mindtrack: Using Brain-Computer Interface to Translate Emotions into Music

    In this exploratory project, a user wore an EMOTIV Insight electroencephalogram (EEG) headset. The raw EEG data was converted into brain wave components, from which high-level EEG characteristics were extracted to control the music's tempo and key signature: tempo and key were calculated from the emotion detected in the EEG, while other musical parameters, such as harmony, rhythm, and melody, were specified by the user. In Mindtrack, the brain is used as the sole instrument for translating emotions into music.
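
    One plausible reading of that pipeline in code: estimate standard EEG band powers, then map them to tempo and key. The band definitions are conventional; the specific arousal-to-tempo and band-to-key rules below are illustrative assumptions, not Mindtrack's actual mapping:

    ```python
    import numpy as np
    from scipy.signal import welch

    # Standard EEG frequency bands (Hz).
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg, fs):
        """Estimate power in each EEG band from one channel via Welch's method."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        powers = {}
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            powers[name] = float(np.trapz(psd[mask], freqs[mask]))
        return powers

    def music_parameters(powers):
        """Illustrative mapping: beta/alpha ratio (often read as arousal) sets
        tempo; relative alpha vs. theta picks major vs. minor key."""
        arousal = powers["beta"] / max(powers["alpha"], 1e-12)
        tempo_bpm = int(np.clip(60 + 60 * arousal, 60, 180))
        key = "C major" if powers["alpha"] > powers["theta"] else "A minor"
        return tempo_bpm, key

    fs = 128                           # the EMOTIV Insight samples at 128 Hz
    eeg = np.random.randn(fs * 10)     # stand-in for 10 s of one raw channel
    tempo, key = music_parameters(band_powers(eeg, fs))
    print(tempo, key)
    ```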

  • Interaction for Interfaces of the Future

    Gestures are currently used to facilitate user interaction in a wide array of technologies. Gestural interaction is beneficial in that it allows the user to interact more expressively with a system. One major challenge in designing gestural interfaces is that neither a standardized gestural taxonomy nor gesture design guidelines exist. This project contributes a taxonomy for characterizing human gestures. Its second contribution is a set of design implications informed by a user study in which participants performed individualistic gestures to execute 25 different television commands.

  • Soccer for Players with Visual Impairments

    Many people with visual impairments actively play soccer; however, making the game accessible presents significant challenges, such as detecting the positions of players, the ball, and targets on the field. This project aims to discover methods that help persons with visual impairments play soccer more efficiently and safely, using headphone-rendered spatial audio, an on-person computer, and sensors to create 3D sound that represents the objects on the field in real time.

  • Interfaces for Learning in Introductory Computing Courses

    Undergraduate computer science enrollment continues to soar in universities across the nation. To meet the growing demand for personalized computing instruction, educational software must be designed with that demand in mind. It is critical to let research inform the design of programming interfaces, because fully understanding and realizing the potential of educational technology requires participatory design and cooperation among technology providers, learning and technology researchers, instructors, and learners. The lab also performs research to determine best practices for using e-books to teach introductory computer science courses.

My Awesome Lab Members

Ziqi Fan

Research Assistant, PhD Student

LinkedIn
Terek Arce

Research Assistant, Graduate Student Instructor, PhD Student

Terek Arce is a PhD student in the Computer & Information Science & Engineering Department at the University of Florida. His research focuses on developing standardized psychoacoustic experiments for 3D audio in virtual and mixed reality systems. In his free time, he enjoys surfing and swimming. GitHub Webpage
Yunhao (Chris) Wan

Research Assistant, Teaching Assistant, PhD Student

Yunhao "Chris" Wan is a CISE PhD student at the University of Florida. His research focuses on machine listening (using machine learning for sound source localization) and spatial audio. He does not admit to having any spare time. GitHub
Chenshen (Jason) Lu

Research Assistant, Master's Student

Chenshen Lu is a master's student majoring in Mechanical Engineering at the University of Florida, currently serving as a volunteer in the SoundPAD Lab. He has been designing psychoacoustic experiments and helping with data collection and analysis. In his free time, he likes to take photos of landscapes and portraits with his DSLR camera. GitHub
Eduardo J. Santos González

Undergraduate Student, Undergraduate Research Assistant

Eduardo J. Santos González is a third-year undergraduate electrical engineering student. His research interests are brain-machine interfaces, affective computing, and artistic expression. Outside of school, he enjoys playing music and losing chess matches. Personal Website
Tiffany Scott

Undergraduate Student, Undergraduate Research Assistant

Stanley Celestin

Undergraduate Student, Undergraduate Research Assistant

Anyu Guo

Research Assistant, Master's Student

Amazing Students!

The SoundPAD Lab would not be as great as it is today without the efforts and hard work of all current and past students. I am grateful for their dedication and perseverance.