Enabling Multimodal Mobile Interfaces for Interactive Musical Performance
Charles Roberts
University of California at Santa Barbara
Media Arts & Technology

Angus Forbes
University of Arizona
School of Information: Science, Technology, & Arts

Tobias Höllerer
University of California at Santa Barbara
Dept. of Computer Science
Keywords
Music, mobile, multimodal, interaction
1. INTRODUCTION

“Multimodal” interfaces are broadly defined as interfaces that combine multiple signals from different modalities in order to generate a single coherent action. Although there are many examples in the electronic music community of interfaces that include such synchronization mechanisms, it is more common for musicians to map individual modalities to individual parameters than it is to employ multimodal fusion. That is, while the detection of discrete features through the analysis of various multimodal signals is one important aspect of multimodal interfaces, a second important aspect for musical applications is enabling the direct mapping of continuous sensor output to expressive control parameters.
The use of multimodal signals for the control of musical parameter spaces is found in various domains, from avant-garde electronic compositions to commercial implementations of multimodal musical interfaces. In David Rosenboom’s “Portable Gold and Philosophers’ Stones (Music From Brains In Fours)”, first performed in 1972, four performers don a combination of EEG electrodes, body temperature monitors, and galvanic skin response monitors.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
NIME’13, May 27–30, 2013, KAIST, Daejeon, Korea.
Copyright remains with the author(s).
Signals from the body temperature and galvanic skin response monitors control the frequency ratios of four droning pulse waves. These waveforms are then fed into a resonant bandpass filterbank, in which the center frequency and output amplitude of each band are controlled by an analysis of the EEG signals. This creates a modulating output where individual frequencies are slowly emphasized and muted over the course of the eighteen-minute piece in response to biological signals obtained from the performers.
In 1994, the Yamaha Corporation released an influential commercial product, the VL-1 physical modeling synthesizer, that made extensive use of multimodal signal processing. The sonic realism possible with physical models is highly dependent on the quality of the control signals fed to the model; accordingly, the VL-1 is a duophonic instrument with up to twelve parameters of the model exposed for simultaneous control using a standard piano keyboard, a breath controller, multiple pedal inputs, and various wheels, knobs, and sliders.
The musical interest in exploring different modalities to expressively control large parameter spaces has recently expanded to include the affordances offered by mobile devices. Atau Tanaka, perhaps best known for his work using electromyography and other bio-sensors in performance, has more recently included mobile devices in his performances. He is attracted to the “autonomous, standalone quality - the embodied nature - of a mobile instrument” that allows the expressive mapping of sensors to musical parameters controlling audio generated on the devices themselves.
2. RELATED WORK

Although there are a number of toolkits for multimodal interface development on desktop computers, such as [20, 4], there are currently no applications available that provide simple APIs for reading the myriad of sensors available on mobile devices. Our research draws upon previous work that enables complex interactivity through the use of multiple sensors.
An early inspiration for many mobile applications focusing on interface creation for musical control is the JazzMutant Lemur, a standalone touchscreen device first introduced in 2006. It featured unlimited multitouch tracking and allowed users to design their own custom interfaces to run on it. In addition to a wide variety of touchscreen widgets, the Lemur also featured a physics engine and scripting capabilities that afforded complex interactivity. The software from the Lemur was ported to run on iOS devices in late 2011; however, the current version of Lemur only collects input from the touchscreen and accelerometer while ignoring the other sensors found on mobile devices. Many related iOS apps for interface creation (TouchOSC, mrmr, etc.) also have this shortcoming.

¹Access to many of the sensors described in this paper is currently only available in the iOS version of the app.
An impressive application related to Control is urMus, which provides access to a variety of sensors available on mobile devices. urMus enables users to define interfaces for expressive musical control of remote applications as well as synthesis algorithms to be executed on mobile devices; as such, it provides a complete system for designing NIMEs, while Control instead focuses on controlling networked computers and MIDI devices. Both audio synthesis and interfaces in urMus are defined programmatically using Lua. Some aspects that differentiate the Control application from urMus in regards to multimodal interfaces are the inclusion of speech recognition, a more robust video analysis module, support for aftertouch, and a simpler environment for creating interfaces using markup.
A number of toolkits for augmented reality are available on the market, each of which provides support for the analysis of a live camera feed, for the tracking of particular features, and for the rendering of auxiliary information or models onto the original scene representation captured by the video camera. Some AR toolkits claim to support multimodal interaction, for instance [23, 14], but in fact support it only implicitly through integration with the mobile operating system or via another toolkit, such as the Unity3D mobile game engine. An exception is work on “gravity-aware AR”, which improves tracking by measuring orientation using the device’s accelerometer. That is, the combination of two signals is used to improve both feature tracking and the subsequent rendering of information using that tracking. However, this functionality is not directly accessible to the user, nor can it be manipulated by a user via an interface. Other toolkits provide multimodal use case scenarios, such as integrating GPS data with object recognition so that a site-specific data set can be pre-loaded in order to improve recognition capabilities [23, 21]. However, these modalities are accessed sequentially, and the toolkits do not explicitly support the concurrent control of multiple signals.
Figure 1: A mixer interface in Control with volume, pan, mute, and solo controls for six channels.
3. AN OVERVIEW OF CONTROL

Control is an application for iOS and Android that allows users to define interfaces that output OSC and MIDI messages.
4. CONTRIBUTIONS

Prior to this research, Control only had access to the inertial sensors on mobile devices, enabling users to make use of raw accelerometer and gyroscope output and orientation information calculated using a variety of sensors. Our work on extending Control newly provides access to the microphone, camera (both front and back), and magnetometer, and integrates feature extraction for audio and video signals to simplify their use in performative contexts. The feature extraction algorithms provided include speech recognition, pitch detection, amplitude detection, detection of changes in the brightness and color distribution of video, and blob tracking. We also added support for a crude yet surprisingly musical approximation of aftertouch that is derived from the surface radius of individual touches.
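The aftertouch approximation can be illustrated as a simple normalization of the touch radius reported by the operating system into the 0–127 range used by MIDI aftertouch. This is an illustrative sketch rather than Control's actual code; the function name and the radius bounds (min_r, max_r) are hypothetical.

```python
def radius_to_aftertouch(radius, min_r=5.0, max_r=30.0):
    """Map a touch's reported surface radius (in screen points) to a
    0-127 MIDI-style aftertouch value. min_r and max_r are hypothetical
    bounds for a lightly resting fingertip versus a heavily pressed one."""
    t = (radius - min_r) / (max_r - min_r)  # normalize to [0, 1]
    t = max(0.0, min(1.0, t))               # clamp out-of-range radii
    return int(round(t * 127))
```

Because the reported radius is noisy and device-dependent, a real mapping would likely also smooth successive values before emitting them.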
4.1 Audio Input and Analysis

4.1.1 Musical Feature Analysis
Feature analysis of audio input enables many important musical mappings. As one example, the amplitude of audio entering the microphone on a mobile device can both trigger musical notes when the volume crosses a certain threshold and also control their amplitude. By measuring the fundamental frequency of pitched sounds entering the microphone, we enable users to sing the pitches of notes they would like to hear electronically synthesized. Control provides feature extraction algorithms to measure both the pitch and amplitude of incoming audio signals.
Amplitude is measured by observing the highest sample value in an audio window or by calculating the root mean square of all samples. The amplitude can be determined concurrently with the pitch of periodic signals. Pitch is detected by measuring the number of zero crossings in each audio window or by finding the harmonic product spectrum (HPS) of the signal; users may specify which method to use depending on their particular application.
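The peak, RMS, and zero-crossing measurements described above can be sketched as follows. This is a minimal illustration of the techniques, not Control's implementation (which operates on the device's real-time audio callback); the function names are ours.

```python
import math

def peak_amplitude(window):
    """Largest absolute sample value in the audio window."""
    return max(abs(s) for s in window)

def rms_amplitude(window):
    """Root mean square of all samples in the window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def zero_crossing_pitch(window, sample_rate):
    """Estimate the fundamental of a periodic signal by counting sign
    changes between consecutive samples: each period of the waveform
    contains two zero crossings."""
    crossings = sum(
        1 for a, b in zip(window, window[1:]) if (a < 0) != (b < 0)
    )
    duration = len(window) / sample_rate
    return (crossings / 2) / duration
```

Zero-crossing counting is cheap but only reliable for clean periodic input, which is why a spectral method such as HPS is offered as an alternative for noisier signals.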
4.1.2 Speech Recognition

Control uses speech recognition provided by the PocketSphinx library from Carnegie Mellon University; this library is wrapped by the OpenEars framework to make it easier to deploy under iOS. Custom dictionaries can be defined on a per-interface basis using JSON markup and are dynamically generated when the interface is loaded. Whenever a word or phrase in the dictionary is recognized, Control will output a UDP message to the address matching the name of the Speech object (in this case /speechRecognizer) and pass the recognized word or phrase as an argument to the message. Users can also define a custom callback to be invoked whenever a word or phrase is recognized; this callback can be used to change aspects of the interface itself in addition to sending messages to remote destinations. The Speech object included with Control also includes listen() and stopListening() methods that can be used to control when speech recognition is engaged. Finally, the dictionary used by the speech object can be changed dynamically at runtime via UDP messages or by scripts triggered from inside the interface.
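The flow above — a named Speech object with a dictionary, producing an address/argument pair when a phrase is recognized — can be sketched as below. The field names in this markup are our assumptions for illustration, not Control's actual schema.

```python
# Hypothetical markup for an interface containing a Speech object; the
# object's name becomes the output address (/speechRecognizer), and the
# dictionary lists the words the recognizer should listen for.
speech_widget = {
    "name": "speechRecognizer",
    "type": "Speech",
    "dictionary": ["start", "stop", "louder", "softer"],
}

def make_message(widget, phrase):
    """Return the (address, argument) pair that would be sent over UDP
    when a recognized phrase appears in the widget's dictionary, or
    None if the phrase is not in the dictionary."""
    if phrase not in widget["dictionary"]:
        return None
    return ("/" + widget["name"], phrase)
```

Restricting the recognizer to a small custom dictionary in this way is what makes PocketSphinx practical on a mobile CPU during performance.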
Figure 2: A screen capture of Control’s blob detection module. The user has selected a light red pixel by touching the screen and attenuated the threshold in the red channel so that blobs are detected only on the marker caps.
4.2 Video Input and Analysis

Many researchers and performers have investigated gesture tracking and other feature detection from video input for interactive performance: live video feeds of conducting gestures have been used to control musical parameters, optical tracking has been incorporated to enhance music and dance performance, and live video of facial movements has been analyzed for expressive musical control. Our video analysis module more generally allows users to analyze a real-time feed of video data from the camera and apply the results to arbitrary functionality. In particular, we have implemented a custom blob-tracking algorithm, similar to earlier color-based tracking work, as well as a detector of changes in average brightness and color. Information about the number of blobs and the overall brightness, as well as the size, centroid position, average color, average brightness, and density of each blob, can be used to control musical parameters. The blob-tracking interface emphasizes ease of use for real-time tracking and thresholding of one or multiple pixel regions. The user can pick a pixel range simply by touching the screen to retrieve the specific color or brightness values at that spot, and then adjust the range of values in each channel to include or exclude adjacent pixels. Further real-time control is available so that users can adjust the minimum and maximum density of a blob as well as the minimum and maximum size of a blob. The live video feed, the threshold map of pixels within a particular pixel range, and the blobs discovered within these pixel ranges are blended together in a single window so that the user has real-time feedback about which blobs are currently active, as well as information about how to adjust the parameters to track additional blobs. This makes it easy for the user to immediately identify interesting features in various environments, even if they change during a performance. Such feedback is particularly important because most mobile devices prohibit programmatic access to camera calibration (e.g., there is no way to disable automatic white balance), so pixel values may shift unexpectedly when lighting conditions change.
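A simplified version of this kind of threshold-based blob detection can be sketched as a flood fill over pixels whose color falls within a per-channel threshold of a chosen target. This is an illustrative sketch, not Control's implementation; a real-time version would also report per-blob color, brightness, and density, and would need to run far faster than pure Python allows.

```python
def detect_blobs(pixels, target, threshold):
    """Find 4-connected regions of pixels whose (r, g, b) values fall
    within `threshold` of `target` in every channel. `pixels` is a 2D
    list of (r, g, b) tuples. Returns each blob's size in pixels and
    its centroid as (x, y)."""
    h, w = len(pixels), len(pixels[0])

    def matches(p):
        return all(abs(c - t) <= th for c, t, th in zip(p, target, threshold))

    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or not matches(pixels[y][x]):
                continue
            # Iterative flood fill to collect this blob's member pixels.
            stack, members = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                members.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and matches(pixels[ny][nx])):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            size = len(members)
            mean_y = sum(m[0] for m in members) / size
            mean_x = sum(m[1] for m in members) / size
            blobs.append({"size": size, "centroid": (mean_x, mean_y)})
    return blobs
```

Widening the per-channel threshold merges nearby regions into one blob, while narrowing it splits them apart, which mirrors the interactive thresholding the user performs on screen.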
6. CONCLUSIONS

Although current mobile devices feature a variety of sensors that make them ideal for multimodal interfaces, there has been a need for an application that allows users to take full advantage of these sensors in an integrated fashion. Our research extending Control provides such an application. As Control has already been downloaded over 40,000 times on iOS alone, the updates we are providing will immediately make the creation of multimodal interfaces more accessible to existing users and encourage users downloading the application for the first time to explore the inherent multimodality of their devices.
Our future work with multimodal interface creation will primarily consist of adding new feature extraction algorithms. Examples to explore include methods to determine the velocity of a finger strike on the touchscreen using the microphone and accelerometer, signal processing to determine the tempo of rhythmic audio entering the microphone, and video processing techniques that use the camera and flash for photoplethysmography. Ongoing research more general to Control involves letting users create interfaces from within the app itself using drag-and-drop methods, which would further simplify the interface creation process and empower users who might be intimidated by creating interfaces using markup.
Control is open source and can be downloaded from a git repository located on GitHub.
7. REFERENCES

[1] PhoneGap - Home. http://phonegap.com/, accessed May 2012.
[2] B. Bell, J. Kleban, D. Overholt, L. Putnam, J. Thompson, and J. Kuchera-Morin. The multimodal music stand. In Proceedings of the 7th International Conference on New Interfaces for Musical Expression, pages 62–65. ACM, 2007.
[3] Carnegie Mellon University. http://cmusphinx.sourceforge.net/2010/03/pocketsphinx-0-6-release/, accessed May 2012.
[4] P. Dragicevic and J. Fekete. ICon: Input device selection and interaction configuration. In Companion Proceedings of the 15th ACM Symposium on User Interface Software & Technology (UIST '02), Paris, France, pages 27–30, 2002.
[5] B. Dumas, D. Lalanne, and S. Oviatt. Multimodal interfaces: A survey of principles, models and frameworks. Human Machine Interaction, pages 3–26, 2009.
[6] G. Essl. Designing mobile musical instruments and environments with urMus. In Proceedings of the 2010 Conference on New Interfaces for Musical Expression, 2010.
[7] JazzMutant Lemur. http://www.jazzmutant.com/lemur_overview.php, accessed May 2012.
[8] D. Kurz and S. Benhimane. Gravity-aware handheld augmented reality. In Mixed and Augmented Reality (ISMAR), 2011 10th IEEE International Symposium on, pages 111–120. IEEE, 2011.
[9] Q. Liu, Y. Han, J. Kuchera-Morin, M. Wright, and G. Legrady. Cloud Bridge: A data-driven immersive audio-visual software interface. In Proceedings of the 2013 Conference on New Interfaces for Musical Expression (NIME 2013), 2013.
[10] M. Lyons. Facial gesture interfaces for expression and communication. In Systems, Man and Cybernetics, 2004 IEEE International Conference on, volume 1, pages 598–603. IEEE, 2004.
[11] mrmr. http://mrmr.noisepages.com/, accessed May 2012.
[12] J. A. Paradiso and F. Sparacino. Optical tracking for music and dance performance. In Optical 3-D Measurement Techniques IV, pages 11–18. Herbert Wichmann Verlag, 1997.
[13] Politepix. http://www.politepix.com/openears, accessed May 2012.
[14] Qualcomm's Vuforia SDK. https://ar.qualcomm.at, accessed May 2012.
[15] C. Rasmussen, K. Toyama, and G. Hager. Tracking objects by color alone. DCS RR-1114, Yale University, 1996.
[16] C. Roberts. Control: Software for end-user interface programming and interactive performance. In Proceedings of the International Computer Music Conference, 2011.
[17] C. Roberts, B. Alper, J. Morin, and T. Höllerer. Augmented textual data viewing in 3D visualizations using tablets. In 3D User Interfaces (3DUI), 2012 IEEE Symposium on, pages 101–104, 2012.
[18] D. Rosenboom. Biofeedback and the Arts: Results of Early Experiments. Aesthetic Research Centre of Canada, 1976.
[19] A. Salgian, M. Pfirrmann, and T. Nakra. Follow the beat? Understanding conducting gestures from video. Advances in Visual Computing, pages 414–423, 2007.
[20] M. Serrano, L. Nigay, J. Lawson, A. Ramsay, R. Murray-Smith, and S. Denef. The OpenInterface framework: A tool for multimodal interaction. In CHI '08 Extended Abstracts on Human Factors in Computing Systems, pages 3501–3506. ACM, 2008.
[21] String Augmented Reality. http://www.poweredbystring.com, accessed May 2012.
[22] A. Tanaka. Mapping out instruments, affordances, and mobiles. In Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), pages 88–93, 2010.
[23] Total Immersion's D'Fusion Suite. http://www.t-immersion.com, accessed May 2012.
[24] TouchOSC by Hexler. http://hexler.net/software/touchosc/, accessed May 2012.
[25] Unity: Game Development Tool. http://unity3d.com, accessed May 2012.