The Design of an Algorithmic Modal Music Platform for Eliciting and Detecting Emotion

Conference paper
Eduardo J. Santos, Kyla McMullen
8th IEEE International Winter Conference on Brain-Computer Interaction
Publication year: 2020

Leading Conversations about Microaggressions, Bias, and Other Difficult Topics

Inproceedings
Lewis, Colleen M and DuBow, Wendy M and McMullen, Kyla
In Proceedings of the 50th ACM Technical Symposium on Computer Science Education (SIGCSE), 2019
Publication year: 2019

Abstract:

Many SIGCSE attendees are committed to inclusive teaching practices and creating an inclusive culture within their classrooms; yet, advocating for and sustaining these initiatives may require having difficult conversations with our colleagues and students. Understandably, many faculty are unsure about how to talk about sensitive topics such as race and gender with their colleagues and students. Research suggests that practicing some of these difficult conversations is essential to achieve the goals of inclusive teaching and culture. In our well-attended session at SIGCSE in 2018, attendees learned strategies for responding to bias in academic settings. This was facilitated by playing two rounds of a research-based game developed by the NSF project CSTeachingTips.org (#1339404). This session will extend the work begun last year by helping attendees to replicate this activity with their colleagues. In this special session, attendees will first play the game to practice those strategies in small groups and will then receive facilitation tips and guidance for conducting this activity on their own. All attendees will receive a printed copy of the game and a link to download and print more copies.

What Would You Say if...: Responding to Microaggressions, Bias, and Other Nonsense

Conference paper
Lewis, Colleen M and Ashcraft, Catherine and McMullen, Kyla
In Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 2018
Publication year: 2018

Abstract:

Many SIGCSE attendees are committed to inclusive teaching practices and creating an inclusive culture within their classrooms; yet, advocating for and sustaining these initiatives may require having difficult conversations with our colleagues and students. Understandably, many faculty are unsure about how to talk about sensitive topics such as race and gender with their colleagues and students. Research suggests that practicing some of these difficult conversations is essential to achieve the goals of inclusive teaching and culture. Most SIGCSE attendees probably use active learning throughout their teaching, but we rarely see active learning at SIGCSE – let’s try it! In this interactive session, attendees will learn strategies for responding to bias in academic settings. Attendees will then practice those strategies in small groups. This will be facilitated by playing two rounds of a research-based game-learning approach developed by the NSF project CSTeachingTips.org (#1339404), which has been tested with a group of 200 teaching assistants. This is the fifth iteration of the game-learning approach, and all attendees will receive a printed copy of the game and a link to download and print more copies.

Mindtrack: Using brain-computer interface to translate emotions into music

Conference paper
Desai, Bhaveek and Chen, Benjamin and Sirocchi, Sofia and McMullen, Kyla A
In 2018 International Conference on Digital Arts, Media and Technology (ICDAMT), 2018
Publication year: 2018

Abstract:

The present work describes Mindtrack, a Brain-Computer Musical Interface that uses real-time brainwave data to allow a user to expressively shape progressive music. In Mindtrack, the user wears an EMOTIV Insight electroencephalogram (EEG) headset. The raw EEG data is converted into brainwave components, followed by high-level EEG characteristics (such as emotion) that are used to control the music’s tempo and key signature. Other musical parameters, such as harmony, rhythm, and melody, are specified by the user. Tempo and key are calculated according to the emotion detected from the EEG device. In Mindtrack, the brain is the sole instrument used to translate emotions to music. Mindtrack has the potential to increase the quality of life for persons with physical impairments who still desire to express themselves musically. Furthermore, Mindtrack can be used for music therapy, recreation, and rehabilitation.
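
As an illustrative sketch only (not the authors' implementation), the kind of mapping the abstract describes, from a detected emotion to tempo and key, might look like the following. The valence/arousal representation, the score ranges, and the mapping rules are all assumptions introduced here for illustration.

```python
# Hypothetical emotion-to-music mapping: valence/arousal scores in [0, 1]
# drive tempo (BPM) and key quality. These rules are illustrative assumptions,
# not Mindtrack's actual algorithm.

def emotion_to_music(valence: float, arousal: float) -> dict:
    """Map an emotion estimate to tempo and key-quality choices."""
    # Higher arousal -> faster tempo; scale linearly between 60 and 180 BPM.
    tempo = 60 + arousal * 120
    # Positive valence -> major key, negative -> minor (a common convention).
    key_quality = "major" if valence >= 0.5 else "minor"
    return {"tempo_bpm": round(tempo), "key": key_quality}

print(emotion_to_music(0.8, 0.5))   # content, moderately aroused
print(emotion_to_music(0.2, 0.9))   # negative, highly aroused
```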

Five Slides About: Abstraction, Arrays, Uncomputability, Networks, Digital Portfolios, and the CS Principles Explore Performance Task

Conference paper
Lewis, Colleen M and Aaronson, Leslie and Allatta, Eric and Dodds, Zachary and Forbes, Jeffrey and McMullen, Kyla and Sahami, Mehran
In Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 2018
Publication year: 2018

Abstract:

SIGCSE is packed with teaching insights and inspiration. However, we get these insights and inspiration from hearing our colleagues talk about their teaching. Why not just watch them teach? This session does exactly that. Each of six exceptional educators will be given ten minutes to teach the audience something. After this, the moderator will draw the attention of the audience to particular pedagogical moves that the instruction included. Attendees can see a new approach to introducing a topic or a new pedagogical move. No matter what, we expect attendees will be taking ideas from this session directly back to their teaching! The format is based upon a practice in chemistry of sharing “Five Slides About,” which introduce a topic in a novel or concise way (https://www.ionicviper.org/types/five_slides_about). Resources from each of the presenters will be shared on the website CSTeachingTips.org.

Think First: Fostering Substantive Contributions in Collaborative Problem-Solving Dialogues

Incollection
Celepkolu, Mehmet and Wiggins, Joseph B and Boyer, Kristy Elizabeth and McMullen, Kyla
Philadelphia, PA: International Society of the Learning Sciences, 2017
Publication year: 2017

Abstract:

Working collaboratively holds many benefits for learners. However, varying incoming knowledge and attitudes toward collaboration present challenges and can lead to frustration for students. An important open question is how to support effective collaboration and foster equity for students with different levels of incoming preparation. In this study, we compared two collaborative instructional approaches for computer science problem solving, in which students participated in one of two conditions: The Baseline condition featured collaborative problem solving in which students worked in dyads from the beginning of the collaboration; in the other condition, called Think-First, students first worked on the problem individually for a short time and then began collaborating to produce a common solution. The results from 190 students from an introductory programming class working in 95 pair-programming teams demonstrate that this simple modification to pair programming had a significant positive effect on test scores and on substantive contributions in collaborative dialogue.

The Effects of Training on Real-Time Localization of Headphone-Rendered, Spatially Processed Sounds

Conference paper
McMullen, KA and Wakefield, Gregory H
In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2017
Publication year: 2017

Abstract:

Although static localization performance in auditory displays is known to substantially improve as a listener spends more time in the environment, the impact of real-time interactive movement on these tasks is not yet well understood. Accordingly, a training procedure was developed and evaluated to address this question. In a set of experiments, listeners searched for and marked the locations of five virtually spatialized sound sources. The task was performed with and without training. Finally, the listeners performed a second search and mark task to assess the impacts of training. The results indicate that the training procedure maintained or significantly improved localization accuracy. In addition, localization performance did not improve for listeners who did not complete the training procedure.

The Effects of 3D Audio on Hologram Localization in Augmented Reality Environments

Conference paper
Arce, Terek and Fuchs, Henry and McMullen, Kyla
In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2017
Publication year: 2017

Abstract:

Currently available augmented reality systems have a narrow field of view, giving users only a small window to look through to find holograms in the environment. The challenge for developers is to direct users’ attention to holograms outside this window. To alleviate this field of view constraint, most research has focused on hardware improvements to the head mounted display. However, incorporating 3D audio cues into programs could also aid users in this localization task. This paper investigates the effectiveness of 3D audio on hologram localization. A comparison of 3D audio, visual, and mixed-mode stimuli shows that users are able to localize holograms significantly faster under conditions that include 3D audio. To our knowledge, this is the first study to explore the use of 3D audio in localization tasks using augmented reality systems. The results provide a basis for the incorporation of 3D audio in augmented reality applications.

Hearing biochemical structures: molecular visualization with spatial audio

Article
Arce, Terek R and McMullen, Kyla A
ACM SIGACCESS Accessibility and Computing, (117): 9–13, 2017
Publication year: 2017

Abstract:

Accurately perceiving the structure of biochemical molecules is key to understanding their function in biological systems. Visualization software has given the scientific and medical communities a means to study these structures in great detail; however, these tools lack an intuitive means to convey this information to persons with visual impairment. Advances in spatial audio technology have allowed for sound to be perceived in three-dimensional space when played over headphones. This work presents the development of a novel computational tool that utilizes spatial audio to convey the three-dimensional structure of biochemical molecules.

Quantitatively Validating Subjectively Selected HRTFs for Elevation and Front-Back Distinction

Conference paper
Fan, Ziqi and Wan, Yunhao and McMullen, Kyla
2016
Publication year: 2016

Abstract:

As 3D audio becomes more commonplace as a means of enhancing auditory environments, designers face the challenge of choosing HRTFs that provide listeners with proper audio cues. Subjective selection is a low-cost alternative to expensive HRTF measurement; however, little is known about whether listeners’ preferred HRTFs are similar to one another or whether users behave randomly in this task. In addition, principal component analysis (PCA) can be used to decompose HRTFs into representative features, but little is known about whether those features have a relevant perceptual basis. Twelve listeners completed a subjective selection experiment in which they judged the perceptual quality of 14 HRTFs in terms of elevation and front-back distinction. PCA was used to decompose the HRTFs and create an HRTF similarity metric. The preferred HRTFs were significantly more similar to each other, the preferred and non-preferred HRTFs were significantly less similar to each other, and, in the case of front-back distinction, the non-preferred HRTFs were significantly more similar to each other.
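
A minimal sketch of the kind of analysis described, decomposing a set of HRTF magnitude responses with PCA and comparing HRTFs by distance in the component space, is shown below. The data are random stand-ins, and the component count and Euclidean metric are assumptions, not the paper's exact method.

```python
# Toy PCA-based HRTF similarity: 14 stand-in "HRTFs" of 128 frequency bins
# each, decomposed via SVD, then compared in a 5-component weight space.
import numpy as np

rng = np.random.default_rng(0)
hrtfs = rng.normal(size=(14, 128))          # 14 HRTFs x 128 frequency bins

# PCA via SVD on the mean-centered data.
centered = hrtfs - hrtfs.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
weights = centered @ Vt[:5].T               # project onto first 5 components

def hrtf_distance(i: int, j: int) -> float:
    """Euclidean distance between two HRTFs in PCA weight space
    (smaller distance = more similar)."""
    return float(np.linalg.norm(weights[i] - weights[j]))

print(hrtf_distance(0, 1))
```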

To Start Voting, Say Vote: Establishing a Threshold for Ambient Noise for a Speech Recognition Voting System

Article
Jackson, France and Solomon, Amber and McMullen, Kyla and Gilbert, Juan E
Procedia Manufacturing, 3: 5512–5518, 2015
Publication year: 2015

Abstract:

Prime III is a multimodal voting system that allows users to use touch or voice to make selections on their ballot. This paper discusses an experiment that evaluated the system’s speech recognition at various levels of background noise. An approach to simulating realistic background noise in a controlled environment is described. This approach helped mimic a voter voting in a precinct. The goal of the experiment was to establish a threshold for when distortion occurs and speech recognition accuracy declines. The signal-to-noise ratios (SNRs) between the volumes were recorded and the system’s accuracy was tested. The result was a suggested SNR threshold of 1.44 to attain 90% system accuracy. The next phase of this project is to test the level of system interference from ambient noise in an actual voting precinct.
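
As a hedged illustration of the threshold check, the SNR can be computed as a plain ratio of RMS levels and compared against the reported 1.44 value. The RMS formulation and the toy signals below are assumptions for illustration; the paper does not specify its exact computation.

```python
# Illustrative RMS-based SNR check against the paper's suggested threshold.
import numpy as np

def snr(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio as a unitless RMS amplitude ratio (not dB)."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return rms(signal) / rms(noise)

THRESHOLD = 1.44  # suggested SNR for ~90% recognition accuracy (from the paper)

# Toy stand-ins for recorded speech and ambient babble noise.
speech = 0.5 * np.sin(np.linspace(0, 100, 8000))
babble = np.random.default_rng(1).normal(scale=0.2, size=8000)

print(snr(speech, babble) >= THRESHOLD)
```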

Temporal reliability of subjectively selected head-related transfer functions (hrtfs) in a non-eliminating discrimination task

Conference paper
Wan, Yunhao and Fan, Ziqi and McMullen, Kyla
Audio Engineering Society Convention 139
Publication year: 2015

Abstract:

The emergence of commercial virtual reality devices has reinvigorated the need for research in realistic audio for virtual environments. Realistic virtual audio is often realized through the use of head-related transfer functions (HRTFs) that are costly to measure and individual to each listener, thus making their use unscalable. Subjective selection allows a listener to pick their own HRTF from a database of premeasured HRTFs. While this is a more scalable option, further research is needed to examine listeners’ consistency in choosing their own HRTFs. The present study extends the current subjective selection research by quantifying the reliability of subjectively selected HRTFs by 12 participants over time in a non-eliminating perceptual discrimination task.

Gesture-based Sound Localization and Manipulation

Conference paper
Ranjan, Shashank and McMullen, Kyla
In Proceedings of the 3rd ACM Symposium on Spatial User Interaction, 2015
Publication year: 2015

Abstract:

With current advancements in computer vision depth sensing technologies, gestures provide a new means of computer interaction. 3D audio research has gained significant ground in accurately localizing sound in 3D space, but not much work has been conducted relating to modes of user interaction in such applications. In this paper, gestures are used as a more natural way of interacting with 3D spatial audio applications, specifically for the localization and manipulation of sound sources.

Assessment of Electronic Write-in Voting Interfaces for Persons with Visual Impairments

Inproceedings
Ongsarte, Ashley and Jiang, Youxuan and McMullen, Kyla
In International Conference on Human-Computer Interaction, 2015
Publication year: 2015

Abstract:

In 2002, the Help America Vote Act (HAVA) mandated that all Americans should have an equal opportunity to vote with privacy and security. However, current electronic voting technologies have failed to provide barrier-free access for people with visual impairments to write in a desired candidate’s name without assistance. The present work describes a new e-voting technology where voters independently use a mouse to interact with a virtual audio keyboard that provides the ability to type, check, and modify a write-in candidate choice. The goal of this work is to create an accessible, accurate, and independent keyboard-based interaction mechanism for visually impaired voters. The interface was assessed using 16 participants. Performance was measured in terms of voting speed, accuracy, and preference. The system was compared to a voting technology that uses a linear method to select letters. Results indicated that performance using the linear write-in interface was significantly better than the virtual keyboard. The results also revealed an interesting distinction between human muscle memory and spatial memory.

Using virtual spatial audio to aid visually impaired athletes

Article
Becwar, R and Sieron, D and McMullen, K and Gardner, C
2014
Publication year: 2014

Abstract:

Many people with visual impairments actively play soccer, however the task of making the game accessible is met with significant challenges. These challenges include: the need to constantly talk to signify location and detecting the positions of silent objects on the field. Our work aims to discover methods to help persons with visual impairments play soccer more efficiently and safely.

The potentials for spatial audio to convey information in virtual environments

Inproceedings
McMullen, Kyla A
In 2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE), 2014
Publication year: 2014

Abstract:

Digital sounds can be processed such that auditory cues are created that convey spatial location within a virtual auditory environment (VAE). Only in recent years has technology advanced such that audio can be processed in real-time as a user navigates an environment. We must first consider the perceptual challenges faced by 3D sound rendering, before we can realize its full potential. Now more than ever before, large quantities of data are created and collected at an increasing rate. Research in human perception has demonstrated that humans are capable of differentiating among many sounds. One potential application is to create an auditory virtual world in which data is represented as various sounds. Such a representation could aid data analysts in detecting patterns in data, decreasing cognitive load, and performing their jobs faster. Although this is one application, the full extent of the manner in which 3D sounds can be used to augment virtual environments has yet to be discovered.

Evaluating the consistency of subjectively selected head-related transfer functions (hrtfs) over time

Conference paper
Wan, Yunhao and Zare, Alireza and McMullen, Kyla
In Audio Engineering Society Conference: 55th International Conference: Spatial Audio, 2014
Publication year: 2014

Abstract:

Virtual auditory environments (VAEs) are created by filtering digital sounds through HRTFs (Head-Related Transfer Functions) such that they convey a spatial location to the listener. The most accurate HRTFs are obtained by direct individual acoustic measurement, however this is a costly and time-consuming process. Subjective selection arises as a low cost alternative to obtaining customized HRTFs, however, this manner of selection is perceptual in nature, and a user’s choices may change over time. The validity of using subjective selection for HRTF customization relies on the consistency of the HRTFs selected by listeners. The present work assesses how listener’s subjectively selected HRTFs may change over time. The results suggested that listeners are able to select adequate HRTFs in one session, without the need for additional sessions.
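
The filtering step this abstract refers to, convolving a mono source with a per-ear head-related impulse response (HRIR) to produce a two-channel signal carrying spatial cues, can be sketched as follows. The HRIRs here are toy placeholders, not measured data.

```python
# Minimal binaural-rendering sketch: mono signal -> (N, 2) stereo output
# via per-ear convolution. Toy HRIRs stand in for measured responses.
import numpy as np

def render_binaural(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with per-ear HRIRs; returns an (N, 2) array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))           # pad both ears to a common length
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=1)

mono = np.random.default_rng(2).normal(size=1000)
# Toy HRIRs: the right ear gets a delayed, attenuated copy, crudely
# imitating a source on the listener's left.
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])
out = render_binaural(mono, hrir_l, hrir_r)
print(out.shape)
```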

Effects of Visual Augmentation on the Memory of Spatial Sounds

Conference paper
McMullen, Kyla A and Wakefield, Gregory A
2014
Publication year: 2014

Abstract:

Spatial audio displays are created by processing digital sounds such that they convey a spatial location to the listener. These displays are used as a supplementary channel when the visual channel is overloaded or when visual cues are absent. This technology can be used to aid decision-makers in complex, dynamic tasks such as urban combat simulation, flight simulations, mission rehearsals, air traffic control, military command and control, and emergency services. Accurate spatial sound rendering is a primary focus in this research area, with spatial sound memory receiving less attention. The present study assesses the effects of visual augmentation on spatial sound location and identity memory. The chosen visual augmentations were a Cartesian and polar grid. The work presented in this paper discovered that the addition of visual augmentation improved location and identity memory without degrading search time performance.

Design of an accessible and portable system for soccer players with visual impairments

Conference paper
Zare, Alireza and McMullen, Kyla and Gardner-McCune, Christina
In Proceedings of the extended abstracts of the 32nd annual ACM conference on Human factors in computing systems, 2014
Publication year: 2014

Abstract:

Many people with visual impairments actively play soccer, however the task of making the game accessible is met with significant challenges. These challenges include: the need to constantly talk to signify location and detecting the positions of silent objects on the field. Our work aims to discover methods to help persons with visual impairments play soccer more efficiently and safely. The proposed system uses headphone-rendered spatial audio, an on-person computer, and sensors to create 3D sound that represents the objects on the field in real-time. This depiction of the field will help players to more accurately detect the locations of objects and people on the field. The present work describes the design of such a system and discusses perceptual challenges. Broadly, our work aims to discover ways to enable people with visual impairments to detect the position of moving objects, which will allow them to feel empowered in their personal lives and give them the confidence to navigate more independently.

Audio Voting for the Visually Impaired: Virtual Keyboard Navigation

Article
Ongsarte, Ashley and Jiang, Youxuan Lucy and McMullen, Kyla
2014
Publication year: 2014

Abstract:

Since the United States federal law Help America Vote Act was passed in 2002, it has been widely recognized that all Americans should have equal access to vote with privacy and security, but current electronic voting technologies have failed to provide barrier-free access for visually impaired voters to write in and check their desired candidates’ names without assistance. Attempts have been made to create audio voting systems that read letters aloud for visually impaired voters to pick from, but these lack features for checking and correcting the user’s typing. This paper describes a new technology that allows users to navigate a virtual keyboard, using mouse movement and clicking, to type, check, and modify their desired candidates’ names. This new voting system was recently tested at Clemson University, South Carolina on 16 subjects who were blindfolded to simulate the experience of visually impaired voters. The results show that blindfolded users have difficulty finding the keys they want on a virtual keyboard using a mouse, no matter how the keys are sorted. This research is expected to reveal the difference between human muscle memory and spatial memory, and to provide a new reference for future human-computer interaction research.

3D sound memory in virtual environments

Conference paper
McMullen, Kyla A and Wakefield, Gregory H
In 2014 IEEE Symposium on 3D User Interfaces (3DUI), 2014
Publication year: 2014

Abstract:

Virtual auditory environments (VAEs) are created by processing digital sounds such that they convey a 3D location to the listener. This technology has the potential to augment systems in which an operator tracks the positions of targets. Prior work has established that listeners can locate sounds in VAEs, however less is known concerning listener memory for virtual sounds. In this study, three experimental tasks assessed listener recall of sound positions and identities, using free and cued recall, with one or more delays. Overall, accuracy degrades as listeners recall the environment, however when using free recall, listeners exhibited less degradation.

The effects of attenuation modeling on spatial sound search

Conference paper
McMullen, Kyla A and Wakefield, Gregory H
2013
Publication year: 2013

Abstract:

Virtual spatial audio often utilizes the inverse-square law to model the relationship between intensity and distance for sources in the far-field. The present study explores the potential advantages of an “inverse-Nth” law, where N is greater than two, for dense, noisy environments where sources are distributed over a wide range of distances, or potentially sparse environments where the distance varies little. The findings of the study show significantly improved listener search and recall performance, without affecting sound search time, when using an inverse-8th law.
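
The two attenuation models compared above can be written as simple gain functions: the conventional inverse-square law (N = 2) versus the steeper inverse-8th law (N = 8) that the study found to improve search and recall. Distances are in arbitrary units with gain normalized to 1.0 at d = 1; this is a sketch of the laws themselves, not the paper's rendering code.

```python
# Inverse-Nth attenuation: intensity gain falls off as 1 / d**N.
def attenuation_gain(distance: float, n: int = 2) -> float:
    """Intensity gain for a source at `distance` under an inverse-Nth law."""
    return 1.0 / distance ** n

# Compare the conventional law (N=2) with the steeper inverse-8th law (N=8):
for d in (1.0, 2.0, 4.0):
    print(d, attenuation_gain(d, 2), attenuation_gain(d, 8))
```

Note how quickly the N = 8 curve decays: at twice the reference distance the gain is already 1/256, which makes nearby sources stand out sharply in a dense environment.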

Effects of plane mapping on sound localization in a virtual auditory environment

Conference paper
McMullen, Kyla A and Wakefield, Gregory H
In International Conference on Human-Computer Interaction, 2013
Publication year: 2013

Abstract:

Virtual auditory environments (VAEs) can be used to communicate spatial information, with sound sources representing the location of objects. A critical factor in this type of immersive system is the degree to which the participant can interact with the virtual environment. Our prior work has demonstrated that listeners can successfully locate virtual spatialized sounds, delivered over headphones, in a VAE using a mouse and screen to navigate the virtual world. The screen indicates the avatar’s position on the vertical plane. The present study seeks to determine the effects of plane mapping on listener performance. In the horizontal-plane interface, the listener used a WACOM tablet and pen to navigate the VAE on the horizontal plane. Results suggest that there is no significant performance difference when locating a single sound source. In the multi-source context, it was observed that the time taken to locate the first sound was significantly longer than the time taken to locate the remaining sounds.

Alternate Pathways to Careers in Computing: Recruiting and Retaining Women Students

Conference paper
Daily, S.B. and Gardner-Mccune, C. and Gilbert, J. and Hall, P.W. and McMullen, K. and Remy, S.L. and Woodard, D.
In ASEE Annual Conference, 2013
Publication year: 2013

Subjective selection of head-related transfer functions (hrtf) based on spectral coloration and interaural time differences (itd) cues

Inproceedings
McMullen, Kyla and Roginska, Agnieszka and Wakefield, Gregory H
In Audio Engineering Society Convention 133, 2012
Publication year: 2012

Abstract:

The present study describes an HRTF subjective individualization procedure in which a listener selects from a database those HRTFs that pass several perceptual criteria. Earlier work has demonstrated that listeners are as likely to select a database HRTF as their own when judging externalization, elevation, and front/back discriminability. The procedure employed in this original study requires individually measured ITDs. The present study modifies the original procedure so that individually measured ITDs are unnecessary. Specifically, a standardized ITD is used, in place of the listener’s ITD, to identify those database minimum-phase HRTFs with desirable perceptual properties. The selection procedure is then repeated for one of the preferred minimum-phase HRTFs and searches over a database of ITDs. Consistent with the original study, listeners prefer a small subset of HRTFs; in contrast, while individual listeners show clear preferences for some ITDs over others, no small subset of ITDs appears to satisfy all listeners.
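
The recombination described above, pairing a minimum-phase HRTF with a separately chosen interaural time difference, can be sketched by delaying the far ear's signal by the ITD. The sample rate and ITD value below are illustrative assumptions, not values from the paper.

```python
# Applying a standardized ITD to minimum-phase-filtered ear signals by
# delaying the far (contralateral) ear. Sample rate and ITD are assumed.
import numpy as np

FS = 44_100                            # sample rate in Hz (assumption)
itd_seconds = 0.0004                   # 0.4 ms, a typical lateral ITD
delay = int(round(itd_seconds * FS))   # ITD expressed in whole samples

def apply_itd(near: np.ndarray, far: np.ndarray, delay_samples: int):
    """Delay the far-ear channel by `delay_samples` relative to the near ear,
    padding both channels to the same length."""
    far_delayed = np.concatenate([np.zeros(delay_samples), far])
    near_padded = np.concatenate([near, np.zeros(delay_samples)])
    return near_padded, far_delayed

near, far = apply_itd(np.ones(100), np.ones(100), delay)
print(len(near), len(far), delay)
```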

Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments.

Article
McMullen, Kyla A
2012
Publication year: 2012

Abstract:

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners’ ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.

Searching for Sources from a Fixed Point in a Virtual Auditory Environment

Conference paper
Roginska, Agnieszka and Wakefield, Gregory H and McMullen, Kyla
2011
Publication year: 2011

Abstract:

Interaction between the listener and their environment in a spatial auditory display plays an important role in creating better situational awareness, resolving front/back and up/down confusions, and improving localization. Prior studies with 6DOF interaction suggest that using either a head tracker or a mouse-driven interface yields similar performance during a navigation and search task in a virtual auditory environment. In this paper, we present a study that compares listener performance in a virtual auditory environment under a static mode condition, and two dynamic conditions (head tracker and mouse) using orientation-only interaction. Results reveal tradeoffs among the conditions and interfaces. While the fastest response time was observed in the static mode, both dynamic conditions resulted in significantly reduced front/back confusions and improved localization accuracy. Training effects and search strategies are discussed.

Performance of Using Orientation Search during an Auditory Navigation and Search Task Using Avatar, Natural, and Static Mediations

Conference paper
Roginska, Agnieszka and Wakefield, GH and McMullen, K
In International Conference on Auditory Display, 2011
Publication year: 2011

Effects of interface type on navigation in a virtual spatial auditory environment

Conference paper
Roginska, Agnieszka and Wakefield, Gregory H and Santoro, Thomas S and McMullen, Kyla
2010
Publication year: 2010

Abstract:

In the design of spatial auditory displays, listener interactivity can promote greater immersion, better situational awareness, reduced front/back confusion, improved localization, and greater externalization. Interactivity between the listener and their environment has traditionally been achieved using a head-tracker interface. However, trackers are expensive, sensitive to calibration, and may not be appropriate for use in all physical environments. Interactivity can be achieved using a number of alternative interfaces. This study compares learning rates and performance in an auditory search task for a head tracker and a mouse/keyboard interface within single-source and multi-source contexts.

This is who I am and this is what I do: demystifying the process of designing culturally authentic technology

Conference paper
Eugene, Wanda and Hatley, Leshell and McMullen, Kyla and Brown, Quincy and Rankin, Yolanda and Lewis, Sheena
In International Conference on Internationalization, Design and Global Development, 2009
Publication year: 2009

Abstract:

The goal of this paper is to bridge the gap between existing frameworks for the design of culturally relevant educational technology. Models and guidelines that provide potential frameworks for designing culturally authentic learning environments are explained and transposed into one comprehensive design framework, understanding that integrating culture into the design of educational technology promotes learning and a more authentic user experience. This framework establishes principles that promote a holistic approach to design.

Relationship Learning Software: Design and Assessment

Conference paper
McMullen, Kyla A and Wakefield, Gregory H
In International Conference on Human-Computer Interaction, 2009
Publication year: 2009

Abstract:

Interface designers have been studying how to construct graphical user interfaces (GUIs) for a number of years; however, adults are often the main focus of these studies. Children constitute a unique user group, making it necessary to design software specifically for them. For this study, several interface design frameworks were combined to synthesize a framework for designing educational software for children. Two types of learning, relationships and categories, are the focus of the present study because of their importance in early-child learning as well as standardized testing. For this study, the educational game Melo’s World was created as an experimental platform. The experiments assessed the performance differences found when including or excluding subsets of interface design features, specifically aesthetic and behavioral features. Software that contained aesthetic but lacked behavioral features was found to have the greatest positive impact on a child’s learning of thematic relationships.