
A Survey of Mobile Phone Sensing

Nicholas D. Lane, Emiliano Miluzzo, Hong Lu, Daniel Peebles, Tanzeem Choudhury, and Andrew T. Campbell, Dartmouth College

IEEE Communications Magazine • September 2010

ABSTRACT

Mobile phones or smartphones are rapidly becoming the central computer and communication device in people's lives. Application delivery channels such as the Apple AppStore are transforming mobile phones into App Phones, capable of downloading a myriad of applications in an instant. Importantly, today's smartphones are programmable and come with a growing set of cheap powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera, which are enabling the emergence of personal, group, and community-scale sensing applications. We believe that sensor-equipped mobile phones will revolutionize many sectors of our economy, including business, healthcare, social networks, environmental monitoring, and transportation. In this article we survey existing mobile phone sensing algorithms, applications, and systems. We discuss the emerging sensing paradigms, and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.

INTRODUCTION

Today's smartphone not only serves as the key computing and communication mobile device of choice, but it also comes with a rich set of embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. Collectively, these sensors are enabling new applications across a wide variety of domains, such as healthcare [1], social networks [2], safety, environmental monitoring [3], and transportation [4, 5], and give rise to a new area of research called mobile phone sensing.

Until recently, mobile sensing research such as activity recognition, where people's activity (e.g., walking, driving, sitting, talking) is classified and monitored, required specialized mobile devices (e.g., the Mobile Sensing Platform [MSP]) [6] to be fabricated [7]. Mobile sensing applications had to be manually downloaded, installed, and hand tuned for each device. User studies conducted to evaluate new mobile sensing applications and algorithms were small-scale because of the expense and complexity of doing experiments at scale. As a result the research, while innovative, gained little momentum outside a small group of dedicated researchers. Although the potential of using mobile phones as a platform for sensing research has been discussed for a number of years now, in both industrial [8] and research communities [9, 10], there had been little advancement in the field until recently.

All that is changing because of a number of important technological advances. First, the availability of cheap embedded sensors, initially included in phones to drive the user experience (e.g., the accelerometer used to change the display orientation), is changing the landscape of possible applications. Now phones can be programmed to support new disruptive sensing applications such as sharing the user's real-time activity with friends on social networks such as Facebook, keeping track of a person's carbon footprint, or monitoring a user's well-being. Second, smartphones are open and programmable. In addition to sensing, phones come with computing and communication resources that offer a low barrier of entry for third-party programmers (e.g., undergraduates with little phone programming experience are developing and shipping applications). Third, and importantly, each phone vendor now offers an app store, allowing developers to deliver new applications to large populations of users across the globe, which is transforming the deployment of new applications and allowing the collection and analysis of data far beyond the scale of what was previously possible. Fourth, the mobile computing cloud enables developers to offload mobile services to back-end servers, providing unprecedented scale and additional resources for computing on collections of large-scale sensor data and supporting advanced features such as persuasive user feedback based on the analysis of big sensor data.

The combination of these advances opens the door for new innovative research and will lead to the development of sensing applications that are likely to revolutionize a large number of existing business sectors and ultimately significantly impact our everyday lives. Many questions remain to make this vision a reality. For example, how much intelligence can we push to the phone without jeopardizing the phone experience? What breakthroughs are needed in order to perform robust and accurate classification of activities and context out in the wild? How do we scale a sensing application from an individual to a target community or even the general population? How do we use these new forms of large-scale application delivery systems (e.g., Apple AppStore, Google Market) to best drive data collection, analysis, and validation? How can we exploit the availability of big data shared by applications but build watertight systems that protect personal privacy? While this new research field can leverage results and insights from wireless sensor networks, pervasive computing, machine learning, and data mining, it presents new challenges not addressed by these communities.

In this article we give an overview of the sensors on the phone and their potential uses. We discuss a number of leading application areas and sensing paradigms that have emerged in the literature recently. We propose a simple architectural framework in order to facilitate the discussion of the important open challenges on the phone and in the cloud. The goal of this article is to bring the novice or practitioner not working in this field quickly up to date with where things stand.

SENSORS

As mobile phones have matured as a computing platform and acquired richer functionality, these advancements often have been paired with the introduction of new sensors. For example, accelerometers have become common after initially being introduced to enhance the user interface and use of the camera. They are used to automatically determine the orientation in which the user is holding the phone, and that information is used to re-orient the display between landscape and portrait views or to correctly orient captured photos during viewing on the phone.

Figure 1 shows the suite of sensors found in the Apple iPhone 4. The phone's sensors include a gyroscope, compass, accelerometer, proximity sensor, and ambient light sensor, as well as other more conventional devices that can be used to sense, such as front and back facing cameras, a microphone, GPS, and WiFi and Bluetooth radios. Many of the newer sensors are added to support the user interface (e.g., the accelerometer) or augment location-based services (e.g., the digital compass).

The proximity and light sensors allow the phone to perform simple forms of context recognition associated with the user interface. The proximity sensor detects, for example, when the user holds the phone to her face to speak. In this case the touchscreen and keys are disabled, preventing accidental presses and saving power because the screen is turned off. Light sensors are used to adjust the brightness of the screen. The GPS, which allows the phone to localize itself, enables new location-based applications such as local search, mobile social networks, and navigation. The compass and gyroscope represent an extension of location, providing the phone with increased awareness of its position in relation to the physical world (e.g., its direction and orientation), enhancing location-based applications.

Not only are these sensors useful in driving the user interface and providing location-based services; they also represent a significant opportunity to gather data about people and their environments. For example, accelerometer data can characterize the physical movements of the user carrying the phone [2]. Distinct patterns within the accelerometer data can be exploited to automatically recognize different activities (e.g., running, walking, standing). The camera and microphone are powerful sensors; they are probably the most ubiquitous sensors on the planet. By continuously collecting audio from the phone's microphone, for example, it is possible to classify a diverse set of distinctive sounds associated with a particular context or activity in a person's life, such as using an automatic teller machine (ATM), being in a particular coffee shop, having a conversation, listening to music, making coffee, and driving [11]. The camera on the phone can be used for many things, from traditional tasks such as photo blogging to more specialized sensing activities such as tracking the user's eye movement across the phone's display, using the camera mounted on the front of the phone, as a means to activate applications [12]. The combination of accelerometer data and a stream of location estimates from the GPS can recognize the mode of transportation of a user, such as using a bike or car or taking a bus or the subway [3].
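To make the accelerometer example concrete, the minimal sketch below (plain Java) computes one simple statistical feature, the standard deviation of the acceleration magnitude over a short window, and maps it to a coarse activity class. The thresholds are illustrative assumptions of our own, not values from the literature; real systems learn decision boundaries from labeled training data.

```java
// Sketch: classify a window of 3-axis accelerometer samples into a coarse
// activity class using the standard deviation of the acceleration magnitude.
// The 0.5 and 3.0 m/s^2 thresholds are illustrative assumptions only.
public class ActivitySketch {
    static String classify(double[][] window) {  // window[i] = {x, y, z}
        double[] mag = new double[window.length];
        double mean = 0;
        for (int i = 0; i < window.length; i++) {
            double[] s = window[i];
            mag[i] = Math.sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
            mean += mag[i] / window.length;
        }
        double var = 0;
        for (double m : mag) var += (m - mean) * (m - mean) / window.length;
        double sd = Math.sqrt(var);
        if (sd < 0.5) return "standing";
        if (sd < 3.0) return "walking";
        return "running";
    }

    public static void main(String[] args) {
        double[][] still = {{0, 0, 9.8}, {0.1, 0, 9.8}, {0, 0.1, 9.8}};
        System.out.println(classify(still));  // prints "standing"
    }
}
```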

More and more sensors are being incorporated into phones. An interesting question is what new sensors we are likely to see over the next few years. Non-phone-based mobile sensing devices such as the Intel/University of Washington Mobile Sensing Platform (MSP) [6] have shown the value of sensors not found in phones today (e.g., barometer, temperature, and humidity sensors) for activity recognition; for example, the accelerometer and barometer together make it easy to identify not only when someone is walking, but also when they are climbing stairs and in which direction. Other researchers have studied air quality and pollution [13] using specialized sensors embedded in prototype mobile phones. Still others have embedded sensors in standard mobile phone earphones to read a person's blood pressure [14], or used neural signals from cheap off-the-shelf wireless electroencephalography (EEG) headsets to control mobile phones for hands-free human-phone interaction [36]. At this stage it is too early to say which new sensors will be added to the next generation of smartphones, but as cost and form factor come down and leading applications emerge, we are likely to see more sensors added.

Figure 1. An off-the-shelf iPhone 4, representative of the growing class of sensor-enabled phones. This phone includes eight different sensors: accelerometer, GPS, ambient light, dual microphones, proximity sensor, dual cameras, compass, and gyroscope.

APPLICATIONS AND APP STORES

New classes of applications, which can take advantage of both the low-level sensor data and high-level events, context, and activities inferred from mobile phone sensor data, are being explored not only in academic and industrial research laboratories [11, 15–22] but also within startup companies and large corporations. One such example is SenseNetworks, a recent U.S.-based startup company, which uses millions of GPS estimates sourced from mobile phones within a city to predict, for instance, which subpopulation or tribe might be interested in a specific type of nightclub or bar (e.g., a jazz club). Remarkably, it has only taken a few years for this type of analysis of large-scale location information and mobility patterns to migrate from the research laboratory into commercial usage.

In what follows we discuss a number of the emerging leading application domains and argue that the new application delivery channels (i.e., app stores) offered by all the major vendors are critical for the success of these applications.

TRANSPORTATION

Traffic remains a serious global problem; for example, congestion alone can severely impact both the environment and human productivity (e.g., wasted hours due to congestion). Mobile phone sensing systems such as the MIT VTrack project [4] or the Mobile Millennium project [5] (a joint initiative between Nokia, NAVTEQ, and the University of California at Berkeley) are being used to provide fine-grained traffic information on a large scale, facilitating services such as accurate travel time estimation for improved commute planning.

SOCIAL NETWORKING

Millions of people participate regularly within online social networks. The Dartmouth CenceMe project [2] is investigating the use of sensors in the phone to automatically classify events in people's lives, called sensing presence, and selectively share this presence using online social networks such as Twitter, Facebook, and MySpace, replacing manual actions people now perform daily.

ENVIRONMENTAL MONITORING

Conventional ways of measuring and reporting environmental pollution rely on aggregate statistics that apply to a community or an entire city. The University of California at Los Angeles (UCLA) PEIR project [3] uses sensors in phones to build a system that enables personalized environmental impact reports, which track how the actions of individuals affect both their exposure and their contribution to problems such as carbon emissions.

HEALTH AND WELL-BEING

The information used for personal health care today largely comes from self-report surveys and infrequent doctor consultations. Sensor-enabled mobile phones have the potential to collect in situ continuous sensor data that can dramatically change the way health and wellness are assessed, as well as how care and treatment are delivered. The UbiFit Garden [1], a joint project between Intel and the University of Washington, captures levels of physical activity and relates this information to personal health goals when presenting feedback to the user. These types of systems have proven effective in empowering people to curb poor behavior patterns and improve health, for example by encouraging more exercise.

APP STORES

Getting a critical mass of users is a common problem faced by people who build systems, developers and researchers alike. Fortunately, modern phones have an effective application distribution channel, first made available by Apple's App Store for the iPhone, that is revolutionizing this new field. Each major smartphone vendor has an app store (e.g., Apple AppStore, Android Market, Microsoft Mobile Marketplace, Nokia Ovi). The success of the app stores with the public has made it possible for not only startups but also small research laboratories and even individual developers to quickly attract a very large number of users. For example, an early use of app store distribution by researchers in academia is the CenceMe application for the iPhone [2], which was made available on the App Store when it opened in 2008. It is now feasible to distribute and run experiments with a large number of participants from all around the world, rather than under laboratory-controlled conditions with a small user study. For example, researchers interested in statistical models that interpret human behavior from sensor data have long dreamed of ways to collect such large-scale real-world data. These app stores represent a game changer for these types of research. However, many challenges remain with this new approach to experimentation via app stores. For example, what is the best way to collect ground-truth data to assess the accuracy of algorithms that interpret sensor data? How do we validate experiments? How do we select a good study group? How do we deal with the potentially massive amount of data made available? How do we protect the privacy of users? What is the impact on getting approval for human subject studies from university institutional review boards (IRBs)? How do researchers scale to run such large-scale studies? For example, researchers used to supporting small numbers of users (e.g., 50 users with mobile phones) now have to construct cloud services to potentially deal with 10,000 needy users. This is fine if you are a startup, but are academic research laboratories geared to deal with this?

Figure 2. Mobile phone sensing is effective across multiple scales: a single individual (e.g., UbiFit Garden [1]), groups such as social networks or special interest groups (e.g., Garbage Watch [23]), and entire communities or the population of a city (e.g., Participatory Urbanism [20]).

SENSING SCALE AND PARADIGMS

Future mobile phone sensing systems will operate at multiple scales, enabling everything from personal sensing to global sensing, as illustrated in Fig. 2, where we see personal, group, and community sensing: three distinct scales at which mobile phone sensing is currently being studied by the research community. At the same time, researchers are discussing how much the user (i.e., the person carrying the phone) should be actively involved during the sensing activity (e.g., taking the phone out of the pocket to collect a sound sample or take a picture); that is, should the user actively participate, known as participatory sensing [15], or, alternatively, passively participate, known as opportunistic sensing [17]? Each of these sensing paradigms presents important trade-offs. In what follows we discuss the different sensing scales and paradigms.

SENSING SCALE

Personal sensing applications are designed for a single individual and are often focused on data collection and analysis. Typical scenarios include tracking the user's exercise routines or automating diary collection. Typically, personal sensing applications generate data for the sole consumption of the user and are not shared with others. An exception is healthcare applications, where limited sharing with medical professionals (e.g., a primary care giver or specialist) is common. Figure 2 shows the UbiFit Garden [1] as an example of a personal wellness application. This personal sensing application adopts persuasive technology ideas to encourage the user to reach her personal fitness goals, using the metaphor of a garden that blooms as she progresses toward her goals.

Individuals who participate in sensing applications that share a common goal, concern, or interest collectively represent a group. Such group sensing applications are likely to be popular, reflecting the growing interest in social networks and connected groups (e.g., at work, in the neighborhood, among friends) who may want to share sensing information freely or with privacy protection. There is an element of trust in group sensing applications that simplifies otherwise difficult problems, such as attesting that the collected sensor data is correct or reducing the degree to which aggregated data must protect the individual. Common use cases include assessing neighborhood safety, sensor-driven mobile social networks, and forms of citizen science. Figure 2 shows GarbageWatch [23] as an example of a group sensing application in which people participate in a collective effort to improve recycling by capturing the information needed to improve the recycling program. For example, students use the phone's camera to log the content of recycling bins across a campus.

Most examples of community sensing only become useful once a large number of people participate; for example, tracking the spread of disease across a city, the migration patterns of birds, congestion patterns across city roads [5], or building a noise map of a city [24]. These applications represent large-scale data collection, analysis, and sharing for the good of the community. Achieving scale implicitly requires the cooperation of strangers who will not trust each other, which increases the need for community sensing systems with strong privacy protection and low commitment levels from users. Figure 2 shows carbon monoxide readings captured in Ghana using mobile sensors attached to taxicabs as part of the Participatory Urbanism project [20], an example of a community sensing application. This project, in conjunction with the N-SMARTs project [13] at the University of California at Berkeley, is developing prototypes that allow similar sensor data to be collected with phone-embedded sensors.

Figure 3. Mobile phone sensing architecture. (The diagram shows sense, learn, and inform/share/persuasion stages spanning individual phones, application distribution channels, and the mobile computing cloud with its big sensor data.)

The impact of scaling sensing applications from personal to population scale is unknown. Many issues related to information sharing, privacy, data mining, and closing the loop by providing useful feedback to an individual, group, community, and population remain open. Today we have only limited experience in building scalable sensing systems.

SENSING PARADIGMS

One issue common to the different sensing scales is to what extent the user is actively involved in the sensing system [12]. We discuss two points in the design space: participatory sensing, where the user actively engages in the data collection activity (i.e., the user manually determines how, when, what, and where to sample), and opportunistic sensing, where the data collection stage is fully automated with no user involvement.

The benefit of opportunistic sensing is that it lowers the burden placed on the user, allowing overall participation by a population of users to remain high even if the application is not personally appealing. This is particularly useful for community sensing, where the per user benefit may be hard to quantify and may only accrue over a long time. However, these systems are often technically difficult to build [25], and a major resource, people, is underutilized. One of the main challenges of using opportunistic sensing is the phone context problem; for example, an application may want to take a sound sample for a city-wide noise map only when the phone is out of the pocket or bag. These types of context issues can be addressed using the phone's sensors; for example, the accelerometer or light sensor can determine whether the phone is out of the pocket, as the sketch below illustrates.
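A minimal version of such a check, assuming Android's sensor APIs and a 10-lux darkness threshold of our own choosing, might look like this; a robust system would fuse light, proximity, and accelerometer evidence and learn thresholds from data.

```java
// Sketch (Android): a crude "is the phone out of the pocket?" check that an
// opportunistic sampler could consult before recording audio.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class PocketDetector implements SensorEventListener {
    private volatile boolean outOfPocket = false;

    public void start(Context ctx) {
        SensorManager sm = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
        Sensor light = sm.getDefaultSensor(Sensor.TYPE_LIGHT);
        sm.registerListener(this, light, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override public void onSensorChanged(SensorEvent event) {
        // Dark readings suggest the phone is enclosed (pocket or bag).
        outOfPocket = event.values[0] > 10f;  // lux; assumed threshold
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    public boolean okToSampleAudio() { return outOfPocket; }
}
```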

Participatory sensing, which is gaining interest in the mobile phone sensing community, places a higher burden or cost on the user: for example, manually selecting data to collect (e.g., lowest petrol prices) and then sampling it (e.g., taking a picture). An advantage is that complex operations can be supported by leveraging the intelligence of the person in the loop, who can solve the context problem efficiently; that is, a person who wants to participate in collecting a noise or air quality map of their neighborhood simply takes the phone out of their bag, solving the context problem. One drawback of participatory sensing is that the quality of data depends on participants' enthusiasm to reliably collect sensing data and on the compatibility of a person's mobility patterns with the goals of the application (e.g., collecting pollution samples around schools). Many of these challenges are actively being studied. For example, the PICK project [23] is studying models for systematically recruiting participants.

Clearly, opportunistic and participatory sensing represent extreme points in the design space, and each approach has pros and cons. To date there is little experience in building large-scale participatory or opportunistic sensing applications, so the trade-offs are not yet fully understood. There is a need to develop models to better understand the usability and performance issues of these schemes. In addition, it is likely that many applications will emerge that represent a hybrid of the two sensing paradigms.

MOBILE PHONE SENSING ARCHITECTURE

Mobile phone sensing is still in its infancy. There is little or no consensus on the sensing architecture for the phone and the cloud. For example, new tools and phone software will be needed to facilitate quick development and deployment of robust context classifiers for the leading phones on the market. Common methods for collecting and sharing data need to be developed. Mobile phones cannot be overloaded with continuous sensing commitments that undermine the performance of the phone (e.g., by depleting battery power). It is not clear what architectural components should run on the phone and what should run in the cloud. For example, some researchers propose that raw sensor data should not be pushed to the cloud because of privacy issues. In the following sections we propose a simple architectural viewpoint for the mobile phone and the computing cloud as a means to discuss the major architectural issues that need to be addressed. We do not argue that this is the best system architecture. Rather, it presents a starting point for discussions that we hope will eventually lead to a converging view and move the field forward.

Figure 3 shows a mobile phone sensing architecture that comprises the following building blocks.

SENSE

Individual mobile phones collect raw sensor data from sensors embedded in the phone.

LEARN

Information is extracted from the sensor data by applying machine learning and data mining techniques. These operations occur either directly on the phone, in the mobile cloud, or with some partitioning between the phone and cloud. Where these components run could be governed by various architectural considerations, such as privacy, real-time user feedback, the communication cost between the phone and cloud, available computing resources, and sensor fusion requirements. We therefore consider where these components run to be an open issue that requires research.

Figure 4. Raw audio data captured from mobile phones is transformed into features, allowing learning algorithms to identify classes of behavior (e.g., driving, in conversation, making coffee) occurring in a stream of sensor data, for example by SoundSense [11]. (Pipeline: raw data, extracted features, classification inferences.)

INFORM, SHARE, AND PERSUASION

We bundle a number of important architectural components together because of their commonality or coupling. For example, a personal sensing application will only inform the user, whereas a group or community sensing application may share an aggregate version of the information with the broader population and obfuscate the identity of the users. Other considerations include how best to visualize sensor data for consumption by individuals, groups, and communities. Privacy is a very important consideration as well.

While phones will naturally leverage the distributed resources of the mobile cloud (e.g., computation and services offered in the cloud), the computing, communication, and sensing resources on the phones are ever increasing. We believe that as the resources of the phone rapidly expand, one of the main benefits of using the mobile computing cloud will be the ability to compute over and mine big data from very large numbers of users. The availability of large-scale data benefits mobile phone sensing in a variety of ways; for example, interpretation algorithms can become more accurate when updated with sensor data sourced from an entire user community. Such data enables the personalization of sensing systems based on the behavior of both the individual user and cliques of people with similar behavior.

In the remainder of the article we present a detailed discussion of the three main architectural components introduced in this section:
• Sense
• Learn
• Inform, share, and persuasion

SENSE: THE MOBILE PHONE AS A SENSOR

As we discussed, the integration of an ever expanding suite of embedded sensors is one of the key drivers of mobile phone applications. However, the limits on the programmability of the phones and the operating systems that run on them, the dynamic environment presented by user mobility, and the need to support continuous sensing on mobile phones present a diverse set of challenges that the research community needs to address.

PROGRAMMABILITY

Until very recently only a handful of mobile phones could be programmed. Popular platforms such as Symbian-based phones presented researchers with sizable obstacles to building mobile sensing applications [2]. These platforms lacked well defined, reliable interfaces to access low-level sensors, and they were not well suited to writing common data processing components, such as signal processing routines, or to performing computationally costly inference, due to the resource constraints of the phone. Early sensor-enabled phones (i.e., prior to the iPhone in 2007), such as the Symbian-based Nokia N80, included an accelerometer, but there were no open application programming interfaces (APIs) to access the sensor signals. This has changed significantly over the last few years. Note that phone vendors initially included accelerometers to help improve the user interface experience.

Most of the smartphones on the market are open and programmable by third-party developers, and offer software development kits (SDKs), APIs, and software tools. It is easy to cross-compile code and leverage existing software such as established machine learning libraries (e.g., Weka).

However, a number of challenges remain in the development of sensor-based applications. Most vendors did not anticipate that third parties would use continuous sensing to develop new applications. As a result, there is mixed API and operating system (OS) support for accessing the low-level sensors, for fine-grained sensor control, and for the watchdog timers required to develop real-time applications. For example, on Nokia Symbian and Maemo phones the accelerometer returns samples to an application unpredictably, at between 25 and 38 Hz depending on the CPU load. While this might not be an issue when using the accelerometer to drive the display, using statistical models to interpret activity or context typically requires high and, at the very least, consistent sampling rates.
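One common workaround, sketched below under the assumption that each sample arrives with a timestamp, is to resample the irregular stream to a fixed nominal rate by linear interpolation before feature extraction (plain Java; details such as gap handling are elided).

```java
// Sketch: linearly resample an irregularly timed 1-D sensor stream to a
// fixed rate so that downstream statistical models see a consistent input.
// Assumes at least two samples with strictly increasing timestamps (ns).
public class Resampler {
    static double[] resample(long[] tNs, double[] v, double rateHz) {
        long stepNs = (long) (1e9 / rateHz);
        int n = (int) ((tNs[tNs.length - 1] - tNs[0]) / stepNs) + 1;
        double[] out = new double[n];
        int j = 0;
        for (int i = 0; i < n; i++) {
            long t = tNs[0] + i * stepNs;                    // target time
            while (j < tNs.length - 2 && tNs[j + 1] < t) j++; // bracketing pair
            double frac = (double) (t - tNs[j]) / (tNs[j + 1] - tNs[j]);
            out[i] = v[j] + frac * (v[j + 1] - v[j]);         // interpolate
        }
        return out;
    }
}
```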

Lack of sensor control also limits the management of energy consumption on the phone. For instance, the GPS uses a varying amount of power depending on factors such as the number of satellites available and atmospheric conditions. Currently, phones offer only a black box interface to the GPS for requesting location estimates. Finer-grained control is likely to help preserve battery power and maintain accuracy; for example, location estimation could be aborted when accuracy is likely to be low, or when the estimate takes too long and is no longer useful.
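Even with today's black box interfaces, an application can approximate this behavior. The sketch below assumes Android's LocationManager API; the 50 m accuracy bound and 30 s timeout are illustrative values of our own choosing.

```java
// Sketch (Android): stop a GPS fix attempt once the estimate is good enough
// or has taken too long, approximating the finer-grained control argued for
// above. Delivery of the accepted fix to the application is elided.
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.os.Handler;

public class BoundedGpsFix implements LocationListener {
    private final LocationManager lm;
    private final Handler handler = new Handler();

    public BoundedGpsFix(LocationManager lm) { this.lm = lm; }

    public void request() {
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, this);
        // Give up after 30 s so a hopeless fix does not drain the battery.
        handler.postDelayed(new Runnable() {
            public void run() { lm.removeUpdates(BoundedGpsFix.this); }
        }, 30_000);
    }

    @Override public void onLocationChanged(Location loc) {
        if (loc.getAccuracy() <= 50f) {   // good enough; stop sampling
            lm.removeUpdates(this);
            handler.removeCallbacksAndMessages(null);
        }
    }

    @Override public void onStatusChanged(String p, int s, Bundle b) { }
    @Override public void onProviderEnabled(String p) { }
    @Override public void onProviderDisabled(String p) { }
}
```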

As third parties demand better support for sensing applications, API and OS support will improve. However, the programmability of the phone remains a challenge moving forward. As more individual, group, and community-scale applications are developed, there will be an increasing demand placed on phones, both individually and collectively. It is likely that abstractions that can cope with persistent spatial queries and secure the use of resources from neighboring phones will be needed. Phones may want to interact with other collocated phones to build new sensing paradigms based on collaborative sensing [12].

Different vendors offer different APIs, making porting the same sensing application to multivendor platforms challenging. It is useful for the research community to think about and propose sensing abstractions and APIs that could be standardized and adopted by different mobile phone vendors.


CONTINUOUS SENSING

Continuous sensing will enable new applications across a number of sectors, particularly in personal healthcare. One important OS requirement for continuous sensing is that the phone support multitasking and background processing. Today, only Android and Nokia Maemo phones support this capability. The iPhone 4 OS, while supporting a notion of multitasking, is inadequate for continuous sensing: applications must conform to predefined profiles with strict constraints on access to resources, and none of these profiles provides continuous access to all the sensors (e.g., continuous accelerometer sampling is not possible).

While smartphones continue to provide more computation, memory, storage, sensing, and communication bandwidth, the phone is still a resource-limited device when complex signal processing and inference are required. Signal processing and machine learning algorithms can stress the resources of the phones in different ways: some require the CPU to process large volumes of sensor data (e.g., interpreting audio data [12]), some need frequent sampling of energy-expensive sensors (e.g., GPS [3]), while others require real-time inference (e.g., Darwin [12]). Different applications place different requirements on the execution of these algorithms. For example, for applications that are user initiated, the latency of the operation is important. Applications that require continuous sensing (e.g., healthcare) will often require real-time processing and classification of the incoming stream of sensor data. We believe continuous sensing can enable a new class of real-time applications in the future, but these applications may be more resource demanding. Phones in the future should offer support for continuous sensing without jeopardizing the phone experience; that is, without disrupting existing applications (e.g., making calls, texting, and surfing the web) or draining batteries. Experience from actual deployments of mobile phone sensing systems shows that phones running these applications can have standby times reduced from 20 hours or more to just six hours [2]. For continuous sensing to be viable, there need to be breakthroughs in low-energy algorithms that duty cycle the device while maintaining the necessary application fidelity.

Early deployments of phone sensing systems tended to trade accuracy for lower resource usage by implementing algorithms that require less computation or a reduced amount of sensor data. Another strategy to reduce resource usage is to leverage cloud infrastructure, where different sensor data processing stages are offloaded to back-end servers [12, 26] when possible. Typically, raw data produced by the phone is not sent over the air due to the energy cost of transmission; rather, compressed summaries (i.e., features extracted from the raw sensor data) are sent. The drawback of these approaches is that they are seldom sufficiently energy-efficient for continuous sensing scenarios. Other techniques adopt a variety of duty cycling schemes that manage the sleep cycle of sensing components on the phone in order to trade the amount of battery consumed against sensing fidelity and latency [27]; the sketch below shows the basic shape of such a scheme.
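A minimal fixed-rate version looks like the following (plain Java). The 10 percent duty cycle is an arbitrary illustration of the energy/latency trade, not a recommended operating point; adaptive schemes [27] adjust the cycle based on context rather than using a fixed ratio.

```java
// Sketch: a fixed duty cycle that trades battery for sensing latency by
// keeping the sensing pipeline asleep most of the time.
public class DutyCycledSensing {
    public static void main(String[] args) throws InterruptedException {
        final long periodMs = 10_000;      // full cycle length
        final double dutyCycle = 0.10;     // 10% awake: assumed, not tuned
        while (true) {
            long awakeMs = (long) (periodMs * dutyCycle);
            sampleSensors(awakeMs);               // sensors powered
            Thread.sleep(periodMs - awakeMs);     // sensors sleeping
        }
    }

    static void sampleSensors(long ms) throws InterruptedException {
        // Placeholder: collect and process sensor data for `ms` milliseconds.
        Thread.sleep(ms);
    }
}
```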

Continuous sensing raises considerable challenges in comparison to sensing applications that require only a short time window of data or a single snapshot (e.g., a single image or short sound clip). There is an energy tax associated with continuously sensing and potentially uploading in real time to the cloud for further processing. Solutions that limit the cost of continuous sensing and reduce the communication overhead are necessary. If the interpretation of the data can withstand delays of an entire day, it might be acceptable for the phone to collect and store the sensor data until the end of the day and upload it when the phone is being charged. However, this delay-tolerant model of sensor sampling and processing severely limits the ability of the phone to react to and be aware of its context. Sensing applications that succeed in the real world will have to be smart enough to adapt to situations. There is a need to study the trade-offs of continuous sensing, with the goal of minimizing the energy cost while offering sufficient accuracy and real-time responsiveness to make the application useful.
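The charge-time upload policy mentioned above can be expressed in a few lines. The sketch below assumes Android's sticky battery broadcast and elides the upload transport itself.

```java
// Sketch (Android): delay-tolerant uploading. Buffer sensor summaries and
// only upload when the phone is charging, so continuous sensing does not
// pay the radio's energy tax on battery.
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class DeferredUploader {
    static boolean isCharging(Context ctx) {
        // The ACTION_BATTERY_CHANGED broadcast is sticky, so a null receiver
        // returns the last known battery state immediately.
        Intent batt = ctx.registerReceiver(null,
                new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
        int status = batt.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
        return status == BatteryManager.BATTERY_STATUS_CHARGING
                || status == BatteryManager.BATTERY_STATUS_FULL;
    }

    static void maybeUpload(Context ctx) {
        if (isCharging(ctx)) {
            // ... drain the local buffer to the cloud back end ...
        }  // otherwise keep buffering locally
    }
}
```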

As continuous sensing becomes more common, it is likely that additional processing support will emerge. For example, the Little Rock project [28] under way at Microsoft Research is developing hardware support for continuous sensing in which the primary CPU frequently sleeps while digital signal processors (DSPs) handle duty cycle management, sensor sampling, and signal processing.

PHONE CONTEXT

Mobile phones are often used on the go and in ways that are difficult to anticipate in advance. This complicates the use of statistical models, which may fail to generalize under unexpected environments. The background environment or the actions of the user (e.g., carrying the phone in a pocket) also affect the quality of the sensor data that is captured. Phones may be exposed to events for too short a period of time if the user is traveling quickly (e.g., in a car), if the event is localized (e.g., a sound), or if the sensor requires more time than is available to gather a sample (e.g., an air quality sensor). Other forms of interfering context include a person using their phone for a call, which interferes with the ability of the accelerometer to infer the physical actions of the person. We collectively describe these issues as the context problem. Many issues remain open in this area.

Some researchers propose to leverage co-located mobile phones to deal with some of these issues, for example, by sharing sensors temporarily if they are better able to capture the data [12]. To counter context challenges, researchers have proposed super-sampling [13], where data from nearby phones is collectively used to lower the aggregate noise in a reading. Alternatively, an effective approach for some systems has been sensor sampling routines with admission control stages that do not process low-quality data, saving resources and reducing errors (e.g., SoundSense [11]).
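As a concrete illustration of an admission control stage, the sketch below gates audio frames on RMS energy before any costly classification runs. The energy threshold is our own illustrative assumption, not SoundSense's actual test.

```java
// Sketch: an admission-control stage in the spirit of SoundSense [11].
// Frames whose RMS energy falls below a silence threshold are dropped
// before any expensive feature extraction or classification.
public class AudioAdmissionControl {
    static final double SILENCE_RMS = 500.0;  // assumed 16-bit PCM threshold

    static boolean admit(short[] frame) {
        double sumSq = 0;
        for (short s : frame) sumSq += (double) s * s;
        double rms = Math.sqrt(sumSq / frame.length);
        return rms >= SILENCE_RMS;  // only non-silent frames proceed
    }

    static void process(short[] frame) {
        if (!admit(frame)) return;  // save CPU/battery; reduce classifier errors
        // ... feature extraction and classification would run here ...
    }
}
```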

While machine learning techniques are being used to interpret mobile phone data, the reliability of these algorithms suffers under the dynamic and unexpected conditions presented by everyday phone use. For example, a speaker identification algorithm may be effective in a quiet office environment but not in a noisy cafe. Such problems can be overcome by collecting sufficient examples of the different usage scenarios (i.e., training data). However, acquiring examples is costly, and anticipating the different scenarios the phone might encounter is almost impossible. Some solutions to this problem straddle the boundary of mobile systems and machine learning, and include borrowing model inputs (i.e., features) from nearby phones, performing collaborative multi-phone inference with models that evolve based on the different scenarios encountered, or discovering new events that were not encountered during application design [12].

LEARN: INTERPRETING SENSOR DATA

The raw sensor data that phones can acquire, irrespective of scale or modality (e.g., accelerometer, camera), is worthless without interpretation (e.g., human behavior recognition). A variety of data mining and statistical tools can be used to distill information from the data collected by mobile phones and to calculate summary statistics to present to users, such as the average emissions level of different locations, or the total distance run by a user and their ranking within a group of friends (e.g., Nike+).

Recently, crowd-sourcing techniques have been applied to sensor data analysis tasks that are typically problematic for automated methods; for example, it is notoriously difficult to maintain high accuracy in image processing used in the wild. The CrowdSearch project [21] adopts crowd sourcing and micro-payments to incentivize people to improve automated image search: human-in-the-loop stages are added to the image search process, with tasks distributed to the user population.

We discuss the key challenges in interpreting sensor data, focusing on a primary area of interest: human behavior and context modeling.

HUMAN BEHAVIOR AND CONTEXT MODELING

Many emerging applications are people-centric, and modeling the behavior and surrounding context of the people carrying the phones is of particular interest. A natural question is how well mobile phones can interpret human behavior (e.g., sitting in conversation) from low-level multimodal sensor data. Similarly, how accurately can they infer the surrounding context (e.g., pollution, weather, noise environment)?

Currently, supervised learning techniques are the algorithms of choice in building mobile inference systems. In supervised learning, as illustrated in Fig. 4, examples of high-level behavioral classes (e.g., cooking, driving) are hand annotated (i.e., labeled). These examples, referred to as training data, are then provided to a learning algorithm, which fits a model to the classes (i.e., behaviors) based on the sensor data. Sensor data is usually presented to the learning algorithm in the form of extracted features: calculations on the raw data that emphasize characteristics that more clearly differentiate classes (e.g., the variance of the accelerometer magnitude over a small time window can be useful for separating the standing and walking classes). Supervised learning is feasible for small-scale sensing applications, but unlikely to scale to the wide range of behaviors and contexts exhibited by a large community of users. Other forms of learning algorithms, such as semi-supervised (where only some of the data is labeled) and unsupervised (where no labels are provided by the user) ones, reduce the need for labeled examples, but they can lead to classes that do not correspond to activities useful to the application, or they require that the unlabeled data come only from the already labeled class categories (e.g., an activity that was never encountered before can throw off a semi-supervised learning algorithm).
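Since the article cites Weka as an established machine learning library, the sketch below shows this supervised pipeline in Weka's Java API. The training file name "activity.arff" and its contents (one row per time window: extracted features plus an activity label) are hypothetical.

```java
// Sketch: training and applying a supervised activity classifier with Weka.
// Assumes "activity.arff" holds labeled window features with the class
// label (e.g., standing/walking) as the last attribute.
import weka.classifiers.Classifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainActivityModel {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("activity.arff");
        data.setClassIndex(data.numAttributes() - 1);   // label = last column

        Classifier model = new J48();                   // decision tree
        model.buildClassifier(data);                    // fit to training data

        // Classify the first window as a smoke test.
        double cls = model.classifyInstance(data.instance(0));
        System.out.println(data.classAttribute().value((int) cls));
    }
}
```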

Researchers have shown that a variety of everyday human activities can be inferred successfully from multimodal sensor streams. For example, [29] describes a system capable of recognizing eight different everyday activities (e.g., brushing teeth, riding in an elevator) using the Mobile Sensing Platform (MSP) [6], an important mobile sensing device that is a predecessor of sensing on the mobile phone. Similar results have been demonstrated using mobile phones that infer everyday activities [2, 3, 30], albeit less accurately and with a smaller set of activities than the MSP.

The microphone, accelerometer, and GPS found on many smartphones on the market have proven effective at inferring more complex human behavior. Early work on mobility pattern modeling succeeded with surprisingly simple approaches to identifying significant places in people's lives (e.g., work, home, the coffee shop). More recently, researchers [31] have used statistical techniques not only to infer significant places but also to connect these to activities (e.g., going to the gym, waiting for the bus) using just GPS traces. The microphone is one of the most ubiquitous sensors and is capable of inferring what a person is doing (e.g., being in conversation) and where they are (e.g., the audio signature of a particular coffee shop); in essence, it can capture a great deal about both a person and their surrounding ambient environment. SoundSense [11] develops a general-purpose sound classification system for mobile phones using a combination of supervised and unsupervised learning. The recognition of a static set of common sounds (e.g., music) uses supervised learning, augmented with an unsupervised approach that learns the novel, frequently recurring classes of sound encountered by different users. Finally, the user is brought into the loop to confirm and provide a textual description (i.e., a label) of the discovered sounds. As a result, SoundSense extends the ability of the phone to recognize new activities.
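The unsupervised half of such a pipeline can be approximated by clustering the feature vectors of unrecognized sound windows, as sketched below with Weka's SimpleKMeans. This is a simplification of our own; SoundSense's actual discovery algorithm is more sophisticated, and the file name "unknown.arff" and cluster count are assumptions.

```java
// Sketch: discovering recurring but unlabeled sound types, loosely in the
// spirit of SoundSense's unsupervised stage [11]. Feature vectors of
// unrecognized audio windows are clustered; frequently recurring clusters
// become candidates for the user to confirm and name.
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DiscoverSoundClasses {
    public static void main(String[] args) throws Exception {
        Instances feats = DataSource.read("unknown.arff");  // no class labels
        SimpleKMeans km = new SimpleKMeans();
        km.setNumClusters(5);            // assumed number of candidate sounds
        km.buildClusterer(feats);
        for (int i = 0; i < feats.numInstances(); i++) {
            System.out.println(i + " -> cluster "
                    + km.clusterInstance(feats.instance(i)));
        }
    }
}
```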

SCALING MODELS

Existing statistical models are unable to cope with everyday occurrences such as a person using a new type of exercise machine, and they struggle when two activities overlap or when different individuals carry out the same activity differently (e.g., the sensor data for walking will look very different for a 10-year-old vs. a 90-year-old person). A key to scalability is to design techniques for generalization that will be effective for entire communities containing millions of people.


To address these concerns, current research directions point toward models that are adaptive and that incorporate people in the process. Automatically increasing the classes recognized by a model using active learning (where the learning algorithm selectively queries the user for labels) has been investigated in the context of health care [23]. Approaches have been developed in which training data sourced directly from users is grouped based on their social network [12]; this work demonstrates that exploiting the social network of users improves the classification of locations such as significant places. Community-guided learning [30] combines data similarity and crowd-sourced labels to improve the classification accuracy of the learning system. In [30] hand-annotated labels are no longer treated as absolute ground truth during the training process, but rather as soft hints about class boundaries, used in combination with the observed data similarity. This approach learns classes (i.e., activities) based on the actual behavior of the community and adjusts transparently to changes in how the community performs these activities, making it more suitable for large-scale sensing applications. However, if the models need to be adapted on the fly, this may force model learning to happen on the phone, potentially causing a significant increase in computational needs [12].

Many questions remain regarding how learning will progress as the field grows. There is a lack of shared technology that could help accelerate the work. For example, each research group develops its own classifiers, which are hand coded and tuned; this is time consuming and mostly based on small-scale experimentation and studies. There is a need for a common machine learning toolkit for mobile phone sensing that allows researchers to build and share models. Similarly, there is a need for large-scale public data sets to study more advanced learning techniques and to rigorously evaluate the performance of different algorithms. Finally, there is also a need for a repository for sharing datasets, code, and tools to support researchers.

INFORM, SHARE, AND PERSUASION: CLOSING THE SENSING LOOP

How inferred sensor data is used to inform the user is application-specific. But a natural question is, once you infer a class or collect a set of large-scale inferences, how do you close the loop with people and provide useful information back to users? Clearly, personal sensing applications would just inform the individual, while social networking sensing applications may share activities or inferences with friends. We discuss these forms of interaction with users as well as the important area of privacy. Another topic we touch on is using large-scale sensor data as a persuasive technology; in essence, using big data to help users attain goals through targeted feedback.

SHARING

Harnessing the potential of mobile phone sensing requires effective methods of allowing people to connect with and benefit from the data. The standard approach to sharing is visualization using a web portal where sensor data and inferences are easily displayed. This offers a familiar and intuitive interface. For the same reasons, a number of phone sensing systems connect with existing web applications to enrich those applications or make the data more widely accessible [12, 23]. Researchers recognize the strength of leveraging social media outlets such as Facebook, Twitter, and Flickr not only to disseminate information but also to build community awareness (e.g., citizen science [20]). A popular application domain is fitness, such as Nike+. Such systems combine individual statistics with visualizations of sensed data and promote competition between users. The result is the formation of communities around a sensing application. Even though, as in the case of Nike+, the sensor information is rather simple (i.e., just the time and distance of a run), people still become very engaged. Other applications have emerged that are considerably more sophisticated in the type of inference made, but they have had limited uptake. It is still too early to predict which sensing applications will become the most compelling for user communities, but social networking provides many attractive ways to share information.

PERSONALIZED SENSING

Mobile phones are not limited to simply collecting sensor data. For example, both the Google and Microsoft search clients that run on the iPhone allow users to search using voice recognition. Eye tracking and gesture recognition are also emerging as natural interfaces to the phone.

Sensors can be used to monitor the daily activities of a person and to profile their preferences and behavior, making personalized recommendations for services, products, or points of interest possible [32]. The behavior of an individual, along with an understanding of how behavior and preferences relate to other segments of the population with similar behavioral profiles, can radically change not only online experiences but real-world ones too. Imagine walking into a pharmacy and your phone suggesting vitamins and supplements with the effectiveness of a doctor. At a clothing store your phone could identify which items are manufactured without sweatshop labor. The behavior of the person, as captured by sensors embedded in their phone, becomes an interface that can be fed to many services (e.g., targeted advertising). Sensor technology personalized to a user's profile empowers her to make more informed decisions across a spectrum of services.

PERSUASION

Sensor data gathered from communities (e.g., fitness, healthcare) can be used not only to inform users but to persuade them to make positive behavioral changes (e.g., nudge users to exercise more or smoke less). Systems that provide tailored feedback with the goal of changing users' behavior are referred to as persuasive technology [33]. Mobile sensing applications open the door to building novel persuasive systems that are still largely unexplored.

For many application domains, such as healthcare or environmental awareness, users commonly have desired objectives (e.g., to lose weight or lower carbon emissions). Simply providing a user with her own information is often not enough to motivate a change of behavior or habit. Mobile phones are an ideal platform capable of using low-level individual-scale sensor data and aggregated community-scale information to drive long-term change (e.g., contrasting the carbon footprint of a user with that of her friends can persuade the user to reduce her own footprint). The UbiFit Garden [1] project is an early example of integrating persuasion and sensing on the phone. UbiFit uses an ambient background display on the phone to offer the user continuous updates on her behavior in response to desired goals. The display uses the metaphor of a garden, with different flowers blooming in response to the physical exercise of the user during the day. It does not use comparison data but simply targets the individual user. A natural extension of UbiFit is to present community data. Ongoing research is exploring methods of identifying and using people in a community of users as influencers for different individuals in the user population. A variety of techniques are used in existing persuasive system research, such as games, competitions among groups of people, sharing information within a social network, or goal setting accompanied by feedback. Understanding which types of metaphors and feedback are most effective for various persuasion goals is still an open research problem. Building mobile phone sensing systems that integrate persuasion requires interdisciplinary research that combines behavioral and social psychology theories with computer science.
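
To make the feedback loop concrete, the minimal sketch below combines individual goal setting with a community comparison, in the spirit of (but not taken from) systems like UbiFit. The thresholds and message wording are arbitrary assumptions.

    # Hypothetical persuasion loop: compare a user's metric against her own
    # goal and a community aggregate, then pick a tailored nudge.
    def feedback(minutes_exercised, weekly_goal, community_avg):
        if minutes_exercised >= weekly_goal:
            return "Goal met: your garden is in full bloom!"
        if minutes_exercised < community_avg:
            return (f"Friends average {community_avg} min/week; "
                    f"you're at {minutes_exercised}. A short walk would help.")
        return f"{weekly_goal - minutes_exercised} minutes to go this week."

    # e.g., feedback(90, 150, 120) -> the community-comparison nudge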

The use of large volumes of sensor data provided by mobile phones presents an exciting opportunity and is likely to enable new applications with the promise of enacting positive social change in health and the environment over the next several years. Large-scale sensor data combined with accurate models of persuasion could revolutionize how we deal with persistent problems in our lives such as chronic disease management, depression, obesity, or even voter participation.

PRIVACY

Respecting the privacy of the user is perhaps the most fundamental responsibility of a phone sensing system. People are understandably sensitive about how sensor data is captured and used, especially if the data reveals a user's location, speech, or potentially sensitive images. Although there are existing approaches that can help with these problems (e.g., cryptography, privacy-preserving data mining), they are often insufficient [34]. For instance, how can the user temporarily pause the collection of sensor data without causing a suspicious gap in the data stream that would be noticeable to anyone (e.g., family or friends) with whom they regularly share data?

In personal sensing applications, processing data locally may provide privacy advantages compared to using remote, more powerful servers. SoundSense [11] adopts this strategy: all the audio data is processed on the phone, and raw audio is never stored. Similarly, the UbiFit Garden [1] application processes all data locally on the device.
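
The underlying pattern can be sketched in a few lines: compute features and an inference on the device, and let only the label leave the function. This is an illustration of the local-processing strategy in general, not SoundSense's actual pipeline; the features and threshold are assumptions.

    # Illustrative on-device processing: the raw waveform never leaves
    # this function; only the inferred label is returned.
    import numpy as np

    def classify_audio_frame(samples):
        """samples: one short frame of microphone data (float numpy array)."""
        rms = np.sqrt(np.mean(samples ** 2))                   # loudness
        zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2   # zero-crossing rate
        # Toy rule standing in for a real classifier (threshold is assumed):
        label = "voice" if zcr > 0.1 and rms > 0.01 else "ambient"
        del samples  # drop our reference to the raw buffer; keep only the label
        return label  # store/upload the inference, never the waveform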

Privacy for group sensing applications is based on user group membership. For instance, although social networking applications like Loopt and CenceMe [2] share sensitive information (e.g., location and activity), they do so within groups in which users have an existing trust relationship based on friendship or a shared common interest, such as reducing their carbon footprint.
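
In code, this trust model reduces to a membership check before a sensitive inference is released. The sketch below is a generic illustration of group-gated sharing, not the implementation of Loopt or CenceMe.

    # Generic group-gated sharing sketch (not any real system's code).
    def share_with_group(inference, owner, group_members, recipient):
        # Release sensitive data (location, activity) only within an
        # existing trust relationship.
        if recipient in group_members.get(owner, set()):
            return inference
        return None  # users outside the trusted group receive nothing

    groups = {"alice": {"bob", "carol"}}
    share_with_group({"activity": "running"}, "alice", groups, "bob")  # shared
    share_with_group({"activity": "running"}, "alice", groups, "eve")  # None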

Community sensing applications that can collect and combine data from millions of people run the risk of unintended leakage of personal information. The risks from location-based attacks are fairly well understood given years of previous research. However, our understanding of the dangers of other modalities (e.g., activity inferences, social network data) is less developed. There are growing examples of reconstruction-type attacks, where data that may look safe and innocuous to an individual user may allow invasive information to be reverse-engineered. For example, the UIUC PoolView project shows that even careful sharing of personal weight data within a community can expose whether a user's weight is trending upward or downward [35]. The PEIR project evaluates different countermeasures to this type of scenario, such as adding noise to the data or replacing chunks of the data with synthetic but realistic samples that have limited impact on the quality of the aggregate analysis [3].
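
The first countermeasure, adding noise before sharing, can be sketched directly; the noise scale below is an arbitrary assumption, whereas a deployed system would tune it against a specific attack model.

    # Sketch of noise-based perturbation: individual data points are masked
    # while the community aggregate stays close to the truth.
    import numpy as np

    def perturb(series, sigma=2.0):
        # sigma is an assumed noise scale, chosen here only for illustration.
        rng = np.random.default_rng()
        return series + rng.normal(0.0, sigma, size=len(series))

    weights = np.array([70.2, 70.4, 70.1, 69.8, 69.5])  # one user's raw data
    shared = perturb(weights)
    # The individual trend is obscured, but np.mean(shared) remains close to
    # np.mean(weights), so aggregate community statistics stay usable.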

Privacy and anonymity will remain a significant problem in mobile-phone-based sensing for the foreseeable future. In particular, the second-hand smoke problem of mobile sensing creates new privacy challenges, such as:
• How can the privacy of third parties be effectively protected when other people wearing sensors are nearby?
• How can mismatched privacy policies be managed when two different people are close enough to each other for their sensors to collect information from the other party?

Furthermore, this type of sensing raises even larger societal questions, such as who is responsible when sensor data collected from these mobile devices causes financial harm? As sensing becomes more commonplace, stronger techniques for protecting people's rights will be necessary.

CONCLUSION

This article discusses the current state of the art and open challenges in the emerging field of mobile phone sensing. The primary obstacle to this new field is not a lack of infrastructure; millions of people already carry phones with rich sensing capabilities. Rather, the technical barriers are related to performing privacy-sensitive and resource-sensitive reasoning with noisy data and noisy labels, and providing useful and effective feedback to users. Once these technical barriers are overcome, this nascent field will advance quickly, acting as a disruptive technology across many domains, including social networking, health, and energy. Mobile phone sensing systems will ultimately provide both micro- and macroscopic views of cities, communities, and individuals, and help improve how society functions as a whole.


REFERENCES

[1] S. Consolvo et al., "Activity Sensing in the Wild: A Field Trial of UbiFit Garden," Proc. 26th Annual ACM SIGCHI Conf. Human Factors Comp. Sys., 2008, pp. 1797–1806.
[2] E. Miluzzo et al., "Sensing Meets Mobile Social Networks: The Design, Implementation, and Evaluation of the CenceMe Application," Proc. 6th ACM SenSys, 2008, pp. 337–50.
[3] M. Mun et al., "PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research," Proc. 7th ACM MobiSys, 2009, pp. 55–68.
[4] A. Thiagarajan et al., "VTrack: Accurate, Energy-Aware Traffic Delay Estimation Using Mobile Phones," Proc. 7th ACM SenSys, Berkeley, CA, Nov. 2009.
[5] UC Berkeley/Nokia/NAVTEQ, "Mobile Millennium"; http://traffic.berkeley.edu/
[6] T. Choudhury et al., "The Mobile Sensing Platform: An Embedded System for Activity Recognition," IEEE Pervasive Comp., vol. 7, no. 2, 2008, pp. 32–41.
[7] T. Starner, Wearable Computing and Contextual Awareness, Ph.D. thesis, MIT Media Lab, Apr. 30, 1999.
[8] Nokia, "Workshop on Large-Scale Sensor Networks and Applications," Kuusamo, Finland, Feb. 3–6, 2005.
[9] A. Schmidt et al., "Advanced Interaction in Context," Proc. 1st Int'l. Symp. Handheld and Ubiquitous Comp., 1999, pp. 89–101.
[10] N. Eagle and A. Pentland, "Reality Mining: Sensing Complex Social Systems," Personal and Ubiquitous Comp., vol. 10, no. 4, 2006, pp. 255–68.
[11] H. Lu et al., "SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones," Proc. 7th ACM MobiSys, 2009, pp. 165–78.
[12] Dartmouth College, "Mobile Sensing Group"; http://sensorlab.cs.dartmouth.edu/
[13] R. Honicky et al., "N-SMARTS: Networked Suite of Mobile Atmospheric Real-Time Sensors," Proc. 2nd ACM SIGCOMM NSDR, 2008, pp. 25–30.
[14] M.-Z. Poh et al., "Heartphones: Sensor Earphones and Mobile Application for Non-Obtrusive Health Monitoring," Proc. IEEE Int'l. Symp. Wearable Comp., 2009, pp. 153–54.
[15] J. Burke et al., "Participatory Sensing," Proc. ACM SenSys Wksp. World-Sensor-Web, 2006.
[16] A. Krause et al., "Toward Community Sensing," Proc. 7th ACM/IEEE IPSN, 2008, pp. 481–92.
[17] A. T. Campbell et al., "People-Centric Urban Sensing," Proc. 2nd ACM WICON, 2006, p. 18.
[18] T. Abdelzaher et al., "Mobiscopes for Human Spaces," IEEE Pervasive Comp., vol. 6, no. 2, 2007, pp. 20–29.
[19] M. Azizyan, I. Constandache, and R. Roy Choudhury, "SurroundSense: Mobile Phone Localization via Ambience Fingerprinting," Proc. 15th ACM MobiCom, 2009, pp. 261–72.
[20] Intel/UC Berkeley, "Urban Atmospheres"; http://www.urban-atmospheres.net/
[21] T. Yan, V. Kumar, and D. Ganesan, "CrowdSearch: Exploiting Crowds for Accurate Real-Time Image Search on Mobile Phones," Proc. 8th ACM MobiSys, 2010.
[22] Nokia, "SensorPlanet"; http://www.sensorplanet.org/
[23] CENS/UCLA, "Participatory Sensing / Urban Sensing Projects"; http://research.cens.ucla.edu/
[24] R. Rana et al., "Ear-Phone: An End-to-End Participatory Urban Noise Mapping," Proc. 9th ACM/IEEE IPSN, 2010.
[25] T. Das et al., "PRISM: Platform for Remote Sensing Using Smartphones," Proc. 8th ACM MobiSys, 2010.
[26] E. Cuervo et al., "MAUI: Making Smartphones Last Longer with Code Offload," Proc. 8th ACM MobiSys, 2010.
[27] Y. Wang et al., "A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition," Proc. 7th ACM MobiSys, 2009, pp. 179–92.
[28] B. Priyantha, D. Lymberopoulos, and J. Liu, "LittleRock: Enabling Energy Efficient Continuous Sensing on Mobile Phones," Microsoft Research tech. rep. MSR-TR-2010-14, 2010.
[29] J. Lester, T. Choudhury, and G. Borriello, "A Practical Approach to Recognizing Physical Activities," Proc. Pervasive Comp., 2006, pp. 1–16.
[30] D. Peebles et al., "Community-Guided Learning: Exploiting Mobile Sensor Users to Model Human Behavior," Proc. 24th National Conf. Artificial Intelligence, 2010.
[31] L. Liao, D. Fox, and H. Kautz, "Extracting Places and Activities from GPS Traces Using Hierarchical Conditional Random Fields," Int'l. J. Robotics Research, vol. 26, no. 1, 2007, pp. 119–34.
[32] J. Liu, "Subjective Sensing: Intentional Awareness for Personalized Services," NSF Wksp. Future Directions Net. Sensing Sys., Nov. 2009.
[33] B. J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do, Morgan Kaufmann, Dec. 2002.
[34] A. Kapadia, D. Kotz, and N. Triandopoulos, "Opportunistic Sensing: Security Challenges for the New Paradigm," Proc. 1st COMSNETS, Bangalore, India, 2009.
[35] R. K. Ganti et al., "PoolView: Stream Privacy for Grassroots Participatory Sensing," Proc. 6th ACM SenSys, 2008, pp. 281–94.
[36] A. T. Campbell et al., "NeuroPhone: Brain-Mobile Phone Interface Using a Wireless EEG Headset," Proc. 2nd ACM SIGCOMM Wksp. Networking, Sys., and Apps. on Mobile Handhelds, New Delhi, India, Aug. 30, 2010.

BIOGRAPHIES

NICHOLAS D. LANE ([email protected]) is a Ph.D. candidate at Dartmouth College, and a member of the Mobile Sensing Group and the MetroSense project. His research interests revolve around mobile sensing systems that incorporate scalable and robust sensor-based computational models of human behavior and context. He has an M.Eng. in computer science from Cornell University.

EMILIANO MILUZZO ([email protected]) is a Ph.D. candidate in the computer science department at Dartmouth College and a member of the Mobile Sensing Group at Dartmouth. His research focuses on spearheading a new area of research on mobile phone sensing, applying machine learning and mobile systems design to new sensing applications and systems on a large scale. These applications and systems span the areas of social networks, green applications, global environment monitoring, personal and community healthcare, sensor-augmented gaming, virtual reality, and smart transportation systems. He has an M.Sc. in electrical engineering from the University of Rome La Sapienza.

HONG LU ([email protected]) is a Ph.D. candidate in the computer science department at Dartmouth College, and a member of the Mobile Sensing Group and the MetroSense Project. His research interests include ubiquitous computing, mobile sensing systems, and human behavior modeling. He has an M.S. in computer science from Tianjin University, China.

DANIEL PEEBLES ([email protected]) is a Ph.D. student at Dartmouth College. His research interests are in developing machine learning methods for analyzing and interpreting people's contexts, activities, and social networks from mobile sensor data. He has a B.S. from Dartmouth College.

TANZEEM CHOUDHURY ([email protected]) is an assistant professor in the computer science department at Dartmouth College. She joined Dartmouth in 2008 after four years at Intel Research Seattle. She received her Ph.D. from the Media Laboratory at MIT. She develops systems that can reason about human activities, interactions, and social networks in everyday environments. Her doctoral thesis demonstrated for the first time the feasibility of using wearable sensors to capture and model social networks automatically, on the basis of face-to-face conversations. MIT Technology Review recognized her as one of the world's top 35 innovators under the age of 35 (2008 TR35) for her work in this area. She has also been selected as a TED Fellow and is a recipient of the NSF CAREER award. More information can be found at http://www.cs.dartmouth.edu/~tanzeem.

ANDREW T. CAMPBELL ([email protected]) is a professor of computer science at Dartmouth College, where he leads the Mobile Sensing Group and the MetroSense Project. His research interests include mobile phone sensing systems. He has a Ph.D. in computer science from Lancaster University, England. He received the U.S. National Science Foundation CAREER Award for his research in programmable mobile networking.
