
Participatory noise mapping works!

An evaluation of participatory sensing as an alternative

to standard techniques for environmental monitoring.

Ellie D’Hondt a,1,∗ , Matthias Stevens a,2 , An Jacobs b

a BrusSense, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
b IBBT-SMIT, Vrije Universiteit Brussel, Pleinlaan 9, 1050 Brussels, Belgium

Abstract

Participatory sensing enables a person-centric collection of environmental measurement data with, in principle, high granularity in space and time. In this paper we provide concrete proof that participatory techniques, when implemented properly, can achieve the same accuracy as standard mapping techniques. We do this through a citizen science experiment for noise mapping a 1 km² area in the city of Antwerp using NoiseTube, a participatory sensing framework for monitoring ambient noise. On the technical side, we set up measuring equipment in accordance with official norms insofar as they apply, also carrying out extensive calibration experiments. On the citizen side, we collaborated with up to 10 volunteers from a citizen-led Antwerp-based action group. From the data gathered we construct purely measurement-based noise maps of the target area with error margins of about 5 dB, comparable to those of official simulation-based noise maps. We also report on a survey evaluating NoiseTube, as a system for participative grassroots noise mapping campaigns, from the user perspective.

Keywords: Environmental policy, noise mapping, participatory sensing, citizen science, mobile phones

2010 MSC: 68U99, 68N01, 62P12

∗ Corresponding author
Email addresses: [email protected] (Ellie D’Hondt), [email protected] (Matthias Stevens), [email protected] (An Jacobs)
1 Supported by the Brussels Institute for Research and Innovation (Innoviris), Belgium.
2 Supported by the Fund for Scientific Research (FWO), Flanders, Belgium.

Preprint submitted to Pervasive and Mobile Computing December 15, 2011


1. Introduction

Pervasive computing naturally lends itself to many of environmental sustainability’s core challenges. For one thing, it provides numerous possibilities for better monitoring the state of the physical world [1], by way of miniaturised sensors which are becoming ever more pervasive, performant and cheap. At the same time pervasive technologies are inherently connected to their users, which means that they can play an important role in awareness building. Indeed, numerous international reports – starting with the Rio Declaration – have expressed the importance of the participation of all involved citizens, at all levels, to move towards sustainable development. The realisation that pervasive technology can provide a direct link between data and people has crystallised into the idea of participatory sensing, which appropriates everyday mobile devices such as cellular phones to form interactive sensor networks that enable public and professional users to gather, analyse and share local knowledge [2, 3]. When dealing with environmental data, participatory sensing is essentially a digital version of citizen science, which predates pervasive computing by more than a century. Citizen science is the idea of having volunteers without scientific training perform research-related tasks such as observation, measurement or computation. A major advantage of citizen science is that it enables data collection on a spatio-temporal scale that scientists could not achieve otherwise. All of the above indicates that in an environmental setting one is able to close the feedback loop between data and people: on the one hand, citizens are crucial for managing data collection on a large scale, while on the other hand, the data gathered reflects back on citizens and nurtures their awareness of the situation at hand. One particularly interesting application of these ideas is to construct participatory, entirely measurement-based, highly localised environmental maps.

Environmental mapping is becoming more and more important as urbanisation, and hence pollution, increases. Currently a huge effort is under way to monitor the extent and assess the effects of noise pollution, which places a heavy burden on citizens all over the world [4]. Concretely, official bodies such as the EU impose norms which require member states to produce noise maps on a regular basis. These maps are created through computer simulations based on general statistics, such as the average number of cars in



the city. They are backed up by only limited amounts of sound measurements, because current measuring methods are expensive and thus not very scalable. The resulting maps give an average but by no means complete view of the situation, entirely missing local variations due to street works, neighbourhood noise, etc. Crowdsourcing through participatory sensing techniques alleviates these issues and allows for a person-centric approach to gathering actually measured data on a much larger scale. We note that noise pollution is a representative case, both because it is tightly correlated with other types of pollution and in terms of the structure of the regulatory framework behind it.

There has been considerable progress in enabling the idea of participatory environmental monitoring at the technological level, with applications such as EarPhone [5], NoiseSPY [6], WideNoise and our in-house NoiseTube application [7] taking the lead.3 However, the main hurdle that needs to be overcome for participatory sensing to be accepted as a suitable method for environmental monitoring is not technological, but rather has to do with data quality. Indeed, the widely held view, especially prevalent in institutional contexts, is that participatory sensing techniques are just not precise enough. While it is certainly true that individual sensors are less accurate than professional equipment, the question remains whether the potential of gathering massive amounts of data can counteract this issue. In this article we answer this question firmly in the affirmative. We do this by implementing a participatory noise mapping experiment in a citizen science context which at the same time takes all rules for proper data acquisition into account. Although participatory noise maps have been reported earlier in the literature [5, 6], they are either much more limited in scale or they do not take the data quality aspect nearly as seriously as we do in the work presented here.

In what follows we report on the various implementation steps of a two-phase, realistic citizen science experiment for noise mapping a 1 km² area in Antwerp. Our experiments make use of the NoiseTube application [7], which provides all tools and techniques for carrying out geo-located noise measurements. On the technical side we set up measuring equipment according to official norms (insofar as they apply), carrying out extensive calibration

3 Only WideNoise and NoiseTube are currently publicly available. WideNoise targets the iPhone platform, while NoiseTube is available for the Android, iPhone and Java ME platforms.


experiments and putting forward concrete measurement protocols. To provide the reader with an idea of what these norms involve we give a short overview in Sec.2. Calibration experiments are reported upon in Sec.3. This section already provides half of the answer to the question of data accuracy in a participatory context, as it thoroughly profiles the performance of mobile phones as sound measuring devices. On the citizen side, we collaborated with up to 13 volunteers from a citizen-led Antwerp-based action group, equipping each of them with a mobile phone for measuring and allowing extensive feedback sessions on results, experiences and usability. In a first phase of our experiment we define fixed measurement trajectories and time intervals in order to build up massive amounts of data for the area and time frame of choice. This coordinated mapping campaign allows us to answer the other half of the question on data quality, namely whether a large dataset can increase the accuracy and representativeness of the experiment’s final outcomes. As a result we construct purely measurement-based, complete, statistically analysed noise maps of the target area with error margins comparable to those of official maps. The results of this experiment are presented in Sec.4.1. In a second phase of our experiment we impose less coordination, corresponding to the more realistic case of a group of users living in the same area and measuring as they go about their daily lives. The results of phase 2 of our experiment are presented in Sec.4.2. While the focus of phase 1 is really on the performance of our architecture and the quality of the obtained data, the focus of phase 2 is on data collection patterns. However, both phases are crucial in that it is only by virtue of the results of phase 1 that we are able to loosen up on spatio-temporal constraints and still be confident that the gathered data is credible. In each case we make a qualitative comparison of our maps with official noise maps of the area, demonstrating if and how the two can and cannot be compared, and indicating where the most obvious commonalities or differences are. Finally, because adoption by the wider public is so important for participatory techniques to take off, we organised a user survey, the results of which are presented in Sec.5. While the group is too small to draw any kind of general conclusion, the feedback gathered is still extremely useful for fine-tuning future experiments, and at the same time our survey technique has been put to the test for future use. We conclude in Sec.6.


2. Noise mapping today.

The European Union is at the forefront [8] when it comes to noise assessment. Indeed, being a densely populated area, it has a tradition of noise mapping going back many years, at least at the level of individual member states. On the basis of this tradition, European-wide regulations were defined in the form of European Union Directive 2002/49/EC [9]. This directive has in turn spurred on further developments in noise assessment methods, and has essentially led to an inventory of noise maps for large cities throughout Europe. The United States, though its Noise Control Act dates back to 1972, does not mandate noise mapping at a federal or state level. As a result it has been slower to catch on to regional noise mapping, and community-wide noise maps are rare. Nevertheless, the same methods that are applied in Europe can be implemented in the context of the United States, and this is becoming a reality at the time of writing [8]. The situation in other developed countries, such as Canada or Japan, is typically similar to that of the United States, while developing countries typically lack national legislation for noise assessment and even noise control. For these reasons, and because our research is based in Europe, we take the assessment methods which lie at the basis of the European Noise Directive (END) to be the norm. Below we describe in short what this directive entails, so as to provide policy context for our research, and, more importantly, place our approach within that context.

The END dictates that as of 2007 cities with a population of over 250,000 have to produce statistics for the number of citizens exposed to average yearly sound levels of 55 to 75 decibels, in bands of 5 decibels, and over 75 decibels, and this at 4 metres above the ground on the most exposed facade of their home. Separate numbers are required for road, rail and air traffic and for industrial sources, where only major roads (more than 6 million vehicle passages per year), major railroads (more than 60,000 train passages per year) and major airports (more than 50,000 movements per year) within the city territories should be considered. Exposure numbers should be updated every 5 years and moreover, as of 2012, smaller cities (over 100,000 inhabitants) and smaller traffic sources are also to be taken into account. The most visible part of the implementation of this directive has been the creation of strategic noise maps, which are evidently a crucial step toward determining citizen exposure statistics. These show the aforementioned sound levels as a colour-coded overlay on city maps. More concretely, the main noise indicators for noise mapping are Lday, Levening, Lnight and Lden (day-evening-night), obtained by


averaging sound levels for corresponding time intervals over the period of one year. To be more precise, the sound level required is the so-called A-weighted equivalent continuous sound pressure level LA,eq, the quantity put forward as the international standard for environmental noise assessment [10]. LA,eq is itself an average, this time of instantaneous air pressure differences (i.e. sound) over arbitrary periods of time:

L_{A,eq} = 10 \cdot \log_{10}\!\left(\frac{1}{T}\int_0^T \frac{p^2(t)}{p_0^2}\,dt\right) \ [\mathrm{dB(A)}], \qquad (1)

where p0 is the reference sound pressure, taken to be 20 µPa. The subscript A denotes that the equivalent sound pressure level is to be weighted with the A-weighting curve, which approximates the sensitivity of the human ear to different frequencies (low and high frequencies are attenuated). By choosing time intervals appropriately, equivalent sound pressure levels can be computed for periods of interest, such as peak hours or the nighttime. In this way, Lday, Levening and Lnight are found by taking the following time intervals in Eq.(1): a 12h day period, a 4h evening period, and an 8h night period respectively.4 The composite day-evening-night sound pressure level Lden is a logarithmic,5 weighted average over these three periods, as follows:

L_{den} = 10 \cdot \log_{10}\!\left(\frac{1}{24}\left[12 \cdot 10^{L_{day}/10} + 4 \cdot 10^{(L_{evening}+5)/10} + 8 \cdot 10^{(L_{night}+10)/10}\right]\right) \ [\mathrm{dB(A)}]. \qquad (2)

By adding 5, respectively 10, to the evening and night values one accounts for the increased annoyance caused during those periods. In summary, the maps produced as a result of the END present Lden and Lnight values, for train-, traffic-, airport- and industry-related sources of noise, in colour-coded bands of 5 dB(A). Though not mandatory, a composite map for all sources considered is sometimes provided. Note that the END states specifically that “the values of Lden and Lnight can be determined either by computation or by measurement” [11, Annex II.2].

4 Typically a 24h period is split up as follows: 7-19h (day), 19-23h (evening), 23-7h (night), though member states may define these periods independently to account for local and cultural differences.

5 Logarithmic averages are typically used when computing the average sound level over a period in terms of sound levels measured during consecutive time intervals for that period.
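As an illustration of how Eq. (2) is evaluated in practice, the following Python sketch computes Lden from given period levels; it is not part of the NoiseTube code base and the input values are purely hypothetical.

import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """Composite day-evening-night level in dB(A) according to Eq. (2).

    The 12 h day, 4 h evening and 8 h night periods are weighted by their
    durations; the evening and night levels receive +5 and +10 dB(A)
    annoyance penalties before the logarithmic (energy) average is taken.
    """
    return 10.0 * math.log10(
        (1.0 / 24.0) * (
            12.0 * 10.0 ** (l_day / 10.0)
            + 4.0 * 10.0 ** ((l_evening + 5.0) / 10.0)
            + 8.0 * 10.0 ** ((l_night + 10.0) / 10.0)
        )
    )

# Hypothetical period levels in dB(A), for illustration only.
print(round(l_den(65.0, 60.0, 55.0), 1))  # -> 65.0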


In practice, however, sound levels are always computed, because of scalability issues – indeed, it has hitherto been unfeasible to deploy measurement devices at all places and times. Instead, sound maps are simulated on the basis of statistics on the sources present, source-specific propagation rules and information on the urban layout. This approach is in fact embodied in the established norms, which by construction are phrased in terms of the four targeted sources of sound and their sizes. Measurements are only present as a means to initialise the model, for which data is gathered with professional and precise equipment, typically mounted on rooftops in representative areas of the city. As an example, the Brussels Region, which covers over 160 km² and has over one million inhabitants, relies on 18 measuring stations (one of which is mobile). The simulation approach has been extremely useful as it has allowed the assessment of background noise in the absence of physical data of adequate granularity. However, by focussing on only 4 sources of sound, one ignores all incidental sounds such as those originating from construction works, traffic jams, passing emergency services, manifestations, and more general city events or neighbourhood noise, all of which are known to be highly annoying to city dwellers. Simulated maps are not only incomplete in acoustic terms but also geographically (large roads only) and temporally (average day and night in the year). They also lack contextual data, to which individual sound experience is highly sensitive, and typically date back a number of years. Simulated maps are useful for authorities to obtain global trends in the urban soundscape, providing a lower limit on actual citizen exposure. They do not, however, capture person-centric exposure levels or model local variations very well. For this reason they have remained of little interest to citizens, who either perceive the obvious (i.e. that their street is loud), or, worse, cannot link the mapped area to their own experience (street not considered, local variations ignored).

3. Mobile phones as sound level meters.

If we are to take participatory sensing seriously as a method to produce noise maps, the first thing to do is to evaluate the equipment that gathers the data behind those maps. Moreover, if we want to compare results to official, simulation-based noise assessment methods, it is essential to play by the same rules insofar as possible. As mentioned above, the END allows the use of measurement equipment to establish sound levels directly [11]. The norms for how these measurements should proceed are again very much written with simulation models in mind, providing separate methods for each source


considered, and dictating the measurement of quantities that are directly compatible with propagation formulae. For example, to assess traffic noise the measurement equipment should provide: (1) A-filtering, (2) direct read-out of sound levels in dB(A), (3) computation of LA,eq over arbitrary time intervals, (4) calibration, (5) spherical sensitivity and wind protection, (6) wind speed, (7) wind direction, (8) speed of passing vehicles and (9) measurement at 4m above street level. Requirements (1)-(5) correspond to the international standard for class 1 sound level meters [12]. Requirements (6)-(9) are tailored to the simulation model, allowing the computation of meteorological corrections and propagation parameters related to vehicle speed, as well as ensuring compatibility with computed sound levels. In a participatory setting large-scale adoption is crucial and the focus is on personal exposure levels during daily life. For this reason requirements (5)-(9) do not make sense: we want all users to be able to carry out measurements with the equipment they already own (ruling out wind and speed measurements), and we want to determine what users are exposed to as they go about their daily lives (i.e. wherever they may find themselves). Requirements (1)-(2) are satisfied through the NoiseTube software, which produces A-weighted equivalent sound levels per second; requirement (3) is achieved by post-processing these individual values (a sketch of such post-processing is given below). This leaves us with requirement (4), to which we pay special attention here.
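As a minimal sketch of that post-processing step (assuming the per-second values are available as a plain list, which is not necessarily how NoiseTube stores them internally), the equivalent level over an interval is the energy average of Eq. (1) applied to discrete samples:

import math
from typing import Sequence

def la_eq(levels_dba: Sequence[float]) -> float:
    """Equivalent continuous sound level over an interval of equally spaced
    (here: per-second) A-weighted levels, i.e. Eq. (1) on discrete samples."""
    if not levels_dba:
        raise ValueError("need at least one measurement")
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_dba) / len(levels_dba)
    return 10.0 * math.log10(mean_energy)

# Hypothetical per-second readings for a one-minute interval.
minute = [62.0] * 30 + [75.0] * 30
print(round(la_eq(minute), 1))  # dominated by the louder half (about 72.2)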

Calibration is a comparative procedure by which one device’s measurement outcomes are compared with those of a more professional device, which is considered to produce correct outcomes. It is a way to determine the measurement characteristics of a device, which allows systematic errors to be tracked down and corrected for later on. While calibration is in itself a well-known procedure, it is not typically carried out for mobile phones. For this reason we have put much effort into calibration experiments, determining frequency as well as white noise characteristics for up to 10 different phones. To our knowledge this is the first time calibration of mobile phones has been carried out so extensively. Earlier work typically targets only one phone with one type of sound, be it a test signal [5] or white noise [6]. An exception to this is the work in [13], which targets four phones with different test signals; while more general, it still does not give a systematic overview of the frequency-sound level domain as we have done. None of the above works motivates the choice of a particular calibration method. Our experiments first and foremost serve as a way to correct systematic errors due to the characteristics of mobile phone microphones, enabling the full


potential of mobile phones as sound measuring devices. This is important in the context of this article because we want to determine the potential accuracy of participatory noise maps. But it also serves as a way to obtain insight into what kind of characteristics one can expect, and gives precise arguments in favour of particular calibration techniques. It is also important in the larger context of participatory environmental sensing in order to counter the oft-heard critique that mobile phones simply are not precise enough. Note that calibration only goes so far in increasing the precision of a mobile phone running NoiseTube as compared to a professional sound level meter. However, remember that an essential component of the participatory approach is to counter this issue by collecting massive amounts of data and aggregating this data in appropriate ways. As we shall see in Sec.4, combining both aspects allows us to obtain noise maps that are of the same quality as those produced by European member states today (albeit at a smaller scale).

For the noise maps produced in this article we have used a set of 10 Nokia 5230 mobile phones (referred to as Nokia 5230 #i or just phone #i for the i-th phone henceforward), a reasonably cheap model which nevertheless satisfies all requirements for running NoiseTube. Calibrating these phones as sound level measuring devices requires three ingredients: a controlled environment in which to carry out sound measurements, a sound generator, and a professional sound level meter to compare our mobile phones against. Experiments were carried out in an anechoic chamber, the preferred environment for calibration. Sound was generated at particular frequencies using an HP Agilent 33120A waveform generator, and as white noise (an equal composition of all frequencies) using a Brüel & Kjær Type 1405 noise generator, each of which was connected to an amplifier (Brüel & Kjær Type 2706) and a speaker to produce the actual sound in the anechoic chamber. Finally, the professional (“correct”) sound level meter setup consisted of a MicroTech K250 high-end condenser microphone, an LMS Scadas III data acquisition station, and a Windows PC running the LMS Test.Lab software. Sound generators, LMS station and PC were positioned outside the anechoic chamber, while amplifier, speaker, microphone and mobile phones were positioned inside it. NoiseTube ran continuously on the mobile phones being tested, with sound recorded at the highest sampling rate possible, in this case 48 kHz. The general procedure is then to produce sound of a particular type for a time interval of one minute; sound levels for the reference microphone are read out directly from the PC, while those of our mobile phones were obtained by downloading the NoiseTube measurement data, identifying the interval


of interest, and averaging over the obtained measurements.

In our experiments we have studied three types of variability: phones,

frequencies, and sound levels. While targeting one particular model, we did not acquire our phones in one batch but obtained them partly new, partly from eBay, which simulates a realistic phone set with different usage histories and dates of manufacture. Before deciding whether a frequency-independent method (such as white noise calibration) is justified, we subjected two of our phones to tests spanning the full frequency range of the human ear. More concretely, we produced pure tones at every third-octave band from 50 Hz to 20 kHz, for a total of 27 tones. This sequence was traversed a total of 7 times, with the sound pressure level at 1 kHz fixed at 60 dB to 90 dB in intervals of 5 dB. The first thing we noticed when plotting the results of these experiments was that there is no significant difference between the two phones we tested, which led us to focus on the characteristics of one phone only. Figure 1 compares the results for phone #0 with the reference microphone. One can clearly see a dependence on sound levels from the difference in spread between different sequences. On the other hand, the frequency characteristic is quite similar to that of the reference microphone in the domains that matter in urban areas (i.e. excluding low sound levels and frequencies). Calibrating phones in terms of frequency entails developing digital filters, which is hard to accomplish in a participatory setting where calibration is in itself already a bottleneck for large-scale adoption. But our data shows that we are entitled to take a pragmatic stance, as sound levels have a more significant impact than frequencies do. This led us to turn towards a calibration technique which is independent of frequencies, in the sense that one calibrates against a test sound that is a fixed mix of all frequencies in the range considered. Again there are different options, the most common being white noise or pink noise, the latter having a decreasing frequency spectrum. Pink noise is the spectrum of choice when there is a focus on low-frequency sounds, which is not particularly the case in an urban setting. Therefore, we decided to focus on white noise, as it is the standard followed by many institutions.
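For reference, the sketch below enumerates the nominal third-octave band centre frequencies in that range, assuming the usual nominal centre values, together with the test levels used at 1 kHz; it only illustrates the size of the resulting test matrix and is not part of the measurement software.

# Nominal third-octave band centre frequencies (Hz) from 50 Hz to 20 kHz:
# 27 bands in total, matching the 27 pure tones used in the experiments.
THIRD_OCTAVE_CENTRES = [
    50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800,
    1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000,
    10000, 12500, 16000, 20000,
]
LEVELS_AT_1KHZ = list(range(60, 95, 5))  # 60..90 dB in 5 dB steps (7 levels)

assert len(THIRD_OCTAVE_CENTRES) == 27 and len(LEVELS_AT_1KHZ) == 7
# One tone per (level, frequency) combination: 7 x 27 = 189 calibration points.
test_matrix = [(lvl, f) for lvl in LEVELS_AT_1KHZ for f in THIRD_OCTAVE_CENTRES]
print(len(test_matrix))  # 189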

The procedure for white noise calibration is similar to the above, except that white noise is generated at sound levels of 30 dB to 105 dB in 5 dB intervals. Figure 2 shows sound levels as measured by the ten Nokia 5230 phones used in our mapping experiments (coloured dots) and a polynomial fit of the latter (red line), which represents the average behaviour of the 10 phones tested. Again we see that the phones behave quite similarly – with the exception of phone #9, which is about 5 dB off for typical urban sound


Figure 1: Outcomes of frequency-dependent calibration experiments, giving dB(A) values in terms of frequency (50 Hz to 20 kHz). Nokia 5230 #0 phone on the left, reference microphone on the right.

levels with respect to the other phones. Note that this is not an issue, since we calibrate each phone independently. Concretely, we compensate for the average measurement offsets of each individual phone through a correction term which is found by linear interpolation between the calibration points for that phone. This happens at the level of the NoiseTube software, which keeps track of calibration data and ensures that the right correction term is added to measured values before storage and on-screen display. After thus calibrating all phones we checked the performance of phone #0 by exposing it to the same levels of white noise, the results of which can be seen as the dark red line in Fig.2. This experiment shows that decent calibration methods can bring average errors down to 1,07 dB(A), decreasing to as little as 0,56 dB(A) for sound levels under 100 dB, at least in a controlled environment.6
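One way to picture this correction step is the following sketch: for each phone a small table of white-noise calibration points (raw phone reading versus reference level) is stored, and every raw reading is mapped through linear interpolation between those points. The calibration table below is made up and the function name is illustrative; the actual NoiseTube implementation may differ in detail.

import numpy as np

# Hypothetical calibration table for one phone: raw readings (dB(A)) versus
# the reference sound level meter readings at the same white-noise levels.
MEASURED = np.array([35.0, 45.0, 55.0, 65.0, 75.0, 85.0, 95.0])
REFERENCE = np.array([40.0, 49.0, 57.5, 66.0, 76.5, 87.0, 98.0])

def calibrate(raw_dba: float) -> float:
    """Correct a raw reading by linear interpolation between calibration
    points; np.interp clamps to the nearest point outside the table range."""
    return float(np.interp(raw_dba, MEASURED, REFERENCE))

print(round(calibrate(60.0), 2))  # raw 60.0 dB(A) -> 61.75 dB(A) corrected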

In a real-world measuring campaign phones do not find themselves in an anechoic chamber with pure white noise coming in from one particular direction. Rather, they are exposed to the elements and are surrounded by sounds of arbitrary complexity. As a result one should expect a decrease of accuracy in such a setting with respect to the situation presented in Fig.2. To get an idea of the real-world performance of calibrated phones we compared the sound levels as measured by phone #0 with those measured by a

6 One should take into account that 3 dB is the smallest difference that can be detected by the human ear, so that these numbers indeed correspond to small errors.


Figure 2: Outcomes of white noise calibration experiments (reference SPL [dB(A)] versus measured SPL [dB(A)]; series: ideal behaviour, Nokia 5230 #0 to #10, a generic Nokia 5230 fit, Nokia 5230 #0 after calibration, and the CEM-DT8852 Class 2 SLM).

CEM-DT8852 class 2 sound level meter [12] during a 1,5h walk in the target area. This sound level meter advertises an accuracy of ±1,4 dB, confirmed in the lab with another white noise experiment (which actually showed an improved average error value of 0,77 dB – see also the green line in Fig.2). Figure 3 shows the sound levels measured during the first part of this walk. It is immediately apparent that accuracy varies substantially, wind being an important factor whose effects can clearly be observed in the first few minutes and between minutes 9 and 13. One also clearly sees that the DT8852 evolves more smoothly than the Nokia measurements. This is due to the fact that conventional sound level meters often use time weighting rather than standard averaging for computing sound levels from instantaneously varying air pressure, which has the effect of smoothing out fluctuations.7 To quantify accuracy levels better we compare logarithmic averages for the whole period rather than reasoning on individual measurements.

7 Time weighting was inevitable in the analogue past for technical reasons, and is not an issue since the resulting sound levels converge on actual values. It is still often in use today, though the standard averaging techniques permitted in digital systems such as NoiseTube are more appropriate. We have used our sound level meter in what is known as the SLOW setting, with a typical time constant of 1 second.
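To make the smoothing effect mentioned in footnote 7 concrete, the sketch below applies exponential ("SLOW", time constant 1 s) weighting on the energy scale; for illustration it operates on a per-second series of levels rather than on the raw pressure signal a real sound level meter would process.

import math
from typing import Iterable, List

def slow_weighted(levels_dba: Iterable[float], dt: float = 1.0,
                  tau: float = 1.0) -> List[float]:
    """Exponential (SLOW, time constant tau) smoothing of a level series.

    Smoothing is done on the squared-pressure (energy) scale, which is why
    the result evolves more gently than the raw per-second values."""
    alpha = 1.0 - math.exp(-dt / tau)
    out: List[float] = []
    energy = None
    for level in levels_dba:
        x = 10.0 ** (level / 10.0)
        energy = x if energy is None else energy + alpha * (x - energy)
        out.append(10.0 * math.log10(energy))
    return out

# Hypothetical series: a short loud event inside quieter surroundings.
print([round(v, 1) for v in slow_weighted([55.0] * 5 + [80.0] * 2 + [55.0] * 5)])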


A direct comparison of individual measurements is difficult because the two devices have different timings. For this particular itinerary we find that our calibrated Nokia 5230 #0 phone measures an average sound level of 71,02 dB(A), while the CEM-DT8852 measures 70,90 dB(A). Focusing on an interval with good correspondence (between 8 and 12 minutes) we find average sound levels of 70,57 and 72,94 dB(A) respectively, which is not as good as for the whole interval but still perfectly respectable. Focusing on a windy interval (between 9 and 13 minutes) we find that our phone overestimates sound levels substantially, measuring an average sound level of 68,31 dB(A) while the sound level meter measures 59,44 dB(A). This is to be expected given the lack of wind protection on mobile phones. The evidence presented, while circumstantial, already supports the main punchline of this article, namely that in a participatory setting statistical reasoning over more data allows one to counteract the inaccuracies of individual measurements, be it in terms of time (averaging over longer time intervals, as we have done here) or space (averaging per geographical unit, as we shall do in the next section). Next to this, the difference in accuracy observed also underlines the importance of attaching a quality factor to obtained data in terms of contextual factors such as weather, something currently lacking in our approach.

Figure 3: Sound levels as measured by a Nokia 5230 #0 (green) and a DT8852 sound level meter (blue) during a 25 minute walk.

It should be noted that our approach crucially relies on localisation to be able to classify the multitude of measurements that are required to produce credible noise maps. For this reason it is also important to get an idea of the accuracy of GPS measurements. Since this is highly dependent on the urban layout (high buildings having an obvious effect) we did this by positioning ourselves at 7 locations in the target area, taking 100 measurements at each location, and then computing the spread of the longitude and latitude values obtained. Note that this spread is computed with respect to the average longitudes and latitudes measured, rather than with respect to absolute geographical coordinates, which are very hard to obtain (amongst others due to projection issues). We found the average spread on latitude to


be 2,76m, and on longitude to be 2,23m. The maximal differences measured were 8,25m for latitude and 7,94m for longitude. These values are important when deciding upon the granularity of noise maps, as we shall see below.
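One plausible reading of this spread computation is sketched below: the deviation of each fix from the mean position is converted to metres (roughly 111 km per degree of latitude, scaled by the cosine of the latitude for longitude) and the absolute deviations are averaged. The coordinates are made up and the conversion constants are approximations.

import math
from statistics import mean

def spread_metres(fixes):
    """Average absolute deviation (in metres) of GPS fixes from their mean
    position, separately for latitude and longitude.

    `fixes` is a list of (lat, lon) pairs in decimal degrees."""
    lat0 = mean(lat for lat, _ in fixes)
    lon0 = mean(lon for _, lon in fixes)
    m_per_deg_lat = 111_320.0                                  # approximate
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))   # approximate
    d_lat = mean(abs(lat - lat0) for lat, _ in fixes) * m_per_deg_lat
    d_lon = mean(abs(lon - lon0) for _, lon in fixes) * m_per_deg_lon
    return d_lat, d_lon

# Hypothetical fixes clustered around a point on Linkeroever (51.22 N, 4.38 E).
fixes = [(51.22001, 4.38002), (51.21998, 4.37999), (51.22003, 4.38001)]
print(spread_metres(fixes))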

4. Participatory noise mapping.

Having thoroughly tested and analysed the performance of phones as sound level meters, the next step is to use them in a realistic participatory measurement campaign. We were fortunate in that we could collaborate with one of the best-known citizen-led action groups in Belgium, Ademloos. This non-profit association, initiated as a result of environmentally dubious government plans for reorganising traffic around Antwerp, has been labouring for the past ten years for a more sustainable solution to the city’s congestion problems. Organisations such as Ademloos are a dream partner for a citizen science project such as this one, consisting of motivated, community-driven citizens who are concerned about their environment but who do not typically have any technical or scientific domain knowledge. In fact, some of our measurers did not even own a mobile phone!

A first meeting with both the secretary and the spokesman of Ademloos resulted in a list of requirements, both for the citizens and for us researchers. The action group had the main say in practical issues such as the focus area (Linkeroever, or “left bank”) and measurement dates and times. From our side, we structured the actual campaign in a way that allowed us to maximise research outcomes. In particular, as this was the first time NoiseTube was used in a coordinated measurement campaign, we decided to run the experiment in two phases. Phase 1 consisted of a smaller, more controlled experiment allowing us to test and fine-tune our procedure while demanding limited effort from Ademloos members. Phase 2 involved more volunteers and fewer constraints, thus corresponding more closely to participatory noise mapping at a city level, where data accumulates as citizens use NoiseTube in the context of their daily lives. While the first phase guarantees enough data per space-time coordinate for statistical averaging to make sense, in the second phase it is more difficult to produce complete noise maps, as there may be gaps in time as well as in space. However, we shall see below that the maps produced are still valuable, both at the level of noise pollution data and for learning how user behaviour changes in a setting with fewer constraints. Specifics and outcomes of both measurement phases are discussed in Sec.4.


As per our suggestion, the secretary served as the main communication channel between us and the Ademloos members, his first task being to line up a list of interested members located in that area. Meetings were set up at each stage of the experiment. First, we organised an introductory meeting with interested members, after which volunteers for the actual measurement campaign were recruited and their main constraints summarised, which allowed us to concretise measurement times and areas. Second, training meetings were set up right before each measurement phase, to distribute phones and to specify how measurements were to be carried out, both in terms of using the software and of good practices for improving data quality. These training meetings were crucial since about half of our volunteers had limited experience with mobile phones. For this reason we simplified our user interface to pure noise measurements, excluding the tagging component and deactivating automatic communication with our website for uploading measurement data, and we supplied a short document summarising the measurement protocol in simple terms and with lots of pictures on phone usage and troubleshooting. These meetings were also used to communicate preliminary outcomes and to allow opportunities for feedback. A final meeting in May 2011 presented the outcomes of both phases of the experiment and the resulting noise maps, including a discussion round, ample time for questions, and a survey questioning volunteers on their experience with NoiseTube and participatory noise mapping. While the survey’s sample is too small to draw any kind of general conclusion, we mention some notable outcomes relevant to this particular experiment in Sec.5.

4.1. Coordinated noise mapping.

In the first phase of our mapping experiment we wanted to control as many free parameters as possible in order to be able to focus on the performance of our architecture and the quality of the obtained data. At the same time we wanted the experiment to be defined in a way that corresponds to a realistic use case of NoiseTube: that of coordinated measurement campaigns. Indeed, one way in which we envision participatory platforms such as NoiseTube being useful is to tackle local issues that are typically hard to capture in simulated noise mapping, for example due to neighbourhood issues, roadworks and the like. These campaigns may be carried out bottom-up, i.e. through grassroots actions carried out by citizen collectives, or top-down, i.e. by authorities wanting to quickly map an area of concern. The middle way between these requirements led us to consider fixed


trajectories. While fixed trajectories are only one, rather dull, way to coordinate measurement campaigns, they are representative of this use case while at the same time allowing us to eliminate measurement variability. Concretely, four volunteers followed a pre-defined measurement track twice daily for a week during a chosen peak hour (7:30–8:30 am). The next week an off-peak hour was covered by four different volunteers (9:00–10:00 pm). Phase 1 ran in early July and covered a total area of about 0,4km×0,4km with a track of about 2km. The track was chosen as a composite of typical urban soundscapes, including an important intersection, a park, and residential streets, all near the area where the volunteers lived. At the same time its length was chosen so that four people measuring one hour a day would produce enough data to adequately represent the statistical sample space. A quick theoretical calculation shows that in ideal conditions we would gather a total of about 36000 measurements = 30 measurements/minute × 60 minutes × 5 days × 4 people, which gives an average of 180 measurements per 10m of the path followed – ample for statistical analysis.

To represent this amount of data on a map we cannot rely on our standard NoiseTube web application, which produces one map per measurement track with coloured dots for each individual measurement. Instead we introduced a statistical component which enables us to produce one single noise map for a collection of measurement tracks. The basic procedure is this: divide the measured area into smaller areas, partition the set of measurements over those areas, make a statistical analysis per unit area, and finally, map the colour-coded averages onto each pertaining area (a sketch of this aggregation is given after this paragraph). Participatory noise maps for phase one of the Ademloos experiment are shown in Fig.4; the chosen trajectory is clearly recognisable in each map. The map on the left shows average sound levels for the peak hour and is based on 30977 measurements, while that on the right shows average sound levels for the off-peak hour and is based on 36394 measurements. For each week the number of measurements is very close to our theoretical calculation above. Measurements were made during different days of the week, but are accumulated here to obtain averages for the chosen hours. We chose to organise the measured area into a grid with a unit size of 20m×20m, excluding grid elements with fewer than 50 measurements for statistical credibility. Note that there is always a trade-off between these values, as a smaller grid size requires more data to achieve statistically credible results. Next to this, the grid size should never be smaller than the localisation accuracy, for obvious reasons. The maximal GPS errors mentioned at the end of Sec.3 suggest that one could decrease map granularity to cells of


10m×10m, at least within the targeted area. However, in experimenting with values for grid and sample size we found the above combination to be best in terms of balancing clarity with accuracy. The peak-hour map contains 172 grid elements, has an average of 164 measurements per grid element, an average sound level of 63,6 dB(A) over all grid elements, and an average standard deviation of 4,2 dB(A) over all grid elements. The off-peak-hour map contains 192 grid elements, has an average of 167 measurements per grid element, an average sound level of 60,8 dB(A) over all grid elements, and an average standard deviation of 4,0 dB(A) over all grid elements. For both maps the spread of average sound levels is between about 55 dB and 70 dB, with very little difference in extremal values: we find spreads of 55,3 to 69,7 dB(A) and 56,7 to 71,9 dB(A) for off-peak and peak hours respectively. However, there is a clear visual difference between the global noise maps for the two time periods, reflected in a difference of 3 dB(A) in average sound levels between the maps. The main busy road, horizontally traversing the middle of the area, is clearly recognisable in each map. An interactive version of these and other maps may be found online.8 On these maps users can click individual squares to obtain further statistics such as minimum, maximum and average sound levels, standard deviation, and the size of the data sample. Other grid sizes are also shown.
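A minimal sketch of this aggregation, assuming the measurements have already been projected to metric x/y coordinates (the cell size and sample threshold mirror the text, but the code is illustrative and not the actual NoiseTube statistics component):

import math
from collections import defaultdict
from statistics import mean, pstdev

CELL = 20.0        # grid cell size in metres
MIN_SAMPLES = 50   # cells with fewer measurements are discarded

def grid_map(measurements):
    """Aggregate (x, y, level_dba) tuples into per-cell statistics.

    Returns {cell: (count, log_mean_dba, stdev_dba)}; the cell mean is the
    logarithmic (energy) average of the dB(A) values, the spread their plain
    standard deviation."""
    cells = defaultdict(list)
    for x, y, level in measurements:
        cells[(int(x // CELL), int(y // CELL))].append(level)

    stats = {}
    for cell, levels in cells.items():
        if len(levels) < MIN_SAMPLES:
            continue
        log_mean = 10.0 * math.log10(mean(10.0 ** (l / 10.0) for l in levels))
        stats[cell] = (len(levels), log_mean, pstdev(levels))
    return stats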

Figure 4: Participatory noise maps: peak hour on the left, off-peak hour in the middle, simulated map for Lnight on the right. Sound levels dB(A) are colour-coded in bands < 55, 55-60, 60-65, 65-70, 70-75 and > 75.

8 See http://www.brussense.be/experiments/linkeroever/.


Comparison with simulated maps is difficult because of the difference in approach and in the quantities that are represented. Nevertheless it is useful to attempt at least a qualitative comparison, if only to highlight immediately obvious commonalities and/or differences. Official maps for Antwerp represent either Lden or Lnight, and this for road traffic, air traffic, train traffic and industry separately; they are obtained via the simulation methods discussed in Sec.2. If we want to make a comparison at all, we will have to make do with one of these maps. The most suitable one is probably that for road traffic and Lnight. This is because, first, the main noise source in the area where we measured is road traffic, there being no airport or train activity in the neighbourhood, with industry also being somewhat farther off. Second, while our measurements were carried out during the day and evening periods, we chose not to represent Lden since this is a weighted value and thus one step further away from actual measured values. In order to be as close as possible in terms of timeframe, we shall focus on the participatory map for the off-peak hour between 9 and 10 pm, in the middle of Fig.4. The right-hand side represents a close-up of Lnight for the target area. To be precise, it represents average values of the sound level between 11pm and 7am at a height of 4m from the ground for an average night in the year. The map is from 2010 but it relies on statistical data going back to 2006. Moreover, only traffic on the larger roads is considered, with propagation into neighbouring streets computed by the simulation. Despite these differences, the first thing to notice is that both maps are rather similar in terms of sound level distribution. This indicates that general trends are captured quite well in both approaches. While this is understood for simulated maps, it is definitely a new result for participatory noise maps, and a result which supports the validity of our technique. Second, we note that simulated noise maps typically have standard deviations of ±5dB [14], which is in the same league as the deviations for the participatory maps. Again, this result is new and undermines the widely held opinion that participatory measurement values just are not precise enough. Our results show that they are, or at least can be, by compensating for the lower accuracy of individual measurements with massive amounts of data. Third, the simulated map is slightly louder, whereas we would have expected the reverse. Indeed, NoiseTube measurements are taken closer to the source (at street level), they involve all sound sources, not only traffic, and factors such as wind and the behaviour of the person carrying out the measurements tend to cause an overestimation of actual sound levels. To clarify this issue, as well as to compare maps at a quantitative level, would


require access to the computational methods used and to the quantitative data produced by these methods. We note that even getting access to this information is not trivial, since noise maps are typically produced by acoustic consultancy bureaus which are not permitted to disclose information to the general public without the consent of the city administrations they work for.

4.2. Wild noise mapping.

For the second phase of our experiment we move away from the fixed trajectory and timeframe setup; instead NoiseTube users are allowed to measure at will (albeit within a particular area, so as to ensure that enough data is gathered). On the one hand this is reasonable because of the results obtained during phase 1, which show that the data gathered with NoiseTube is indeed credible. This allows us to loosen up on space-time restrictions and focus instead on data collection patterns. Concretely, we want to get an idea of how the freer movement of contributors, which generally results in a dataset with more gaps, affects the quality and completeness of collective noise maps. On the other hand this situation once again corresponds to a realistic NoiseTube use case: that of city administrations polling their citizens for noise measurements as they go about their daily lives. Indeed, in the near future one could imagine medium-scale organisations relying on crowdsourcing to obtain information on urban parameters such as noise. Citizens could enrol in campaigns set up in a top-down manner by city administrations, targeting particular areas or times, or just contribute to city-wide data accumulation that can be analysed later on.

Ten volunteers were asked to measure for at least one hour a day for a week in November 2010, without prescribed time slots and in an area of about 1km×1km encompassing that of phase 1. This larger effort should in theory deliver at least 90000 measurements = 30 measurements/minute × 60 minutes × 5 days × 10 people, which gives an average of 36 measurements per grid cell over the whole week if we take a grid of 20m×20m as above. Of course this number is just an indication, as the area is only partially walkable; still, it seems likely we may have to increase the grid size to compensate for sample size. In Fig.5 we present participatory noise maps for the day (left map) and evening periods (middle map), where the day period runs from 7am–7pm and the evening from 7–11pm. We chose these time intervals to adhere to standard noise mapping practices while at the same time grouping enough data so that statistical analysis is feasible. Indeed the temporal data collection pattern is spread out over these periods, with


no measurements taken at all during the night. Peak measurement activity took place at 11am, 5pm and 10pm, with dips around 1pm (lunchtime) and 7-8pm (dinnertime). However, the total dataset is too small to be able to focus on smaller intervals, even for these periods of high measurement activity. Concretely, a total of 84309 measurements was gathered, once again showing excellent agreement with the theoretical prediction. Of these, 62853 contribute to the day period and 17513 to the evening period. With these numbers and a larger area to cover we found the best map representation to be one in terms of a larger grid of 40m×40m with a minimum of 50 measurements per grid element. Even so, it is clear that the evening period does not really merit being called a noise map, as results are scarce. The day map contains 329 grid elements, has an average of 150 measurements per grid element, an average sound level of 61,1 dB(A) and an average standard deviation of 4,5 dB(A) over all grid elements. Sound levels vary between 44,2 dB(A) and 70,5 dB(A). The evening map contains 90 grid elements, has an average of 90 measurements per grid element, and an average sound level of 58,0 dB(A) and standard deviation of 5,4 dB(A) over all grid elements. Here sound levels vary between 38,2 dB(A) and 68,8 dB(A), indicating, together with the difference in average sound level, that overall sound levels for the evening period are about 2 dB(A) lower – but more data is required to really back up this statement. An interactive version of these maps may be found online at the same address as above, complemented by a number of alternative maps focusing on different users and time intervals. The liberty of choosing the latter is one big advantage of participatory maps with respect to simulated ones. However, a larger dataset is required for a temporal analysis in terms of particular hours or days to become fruitful.

For completeness we once again juxtapose our participatory maps with the simulated Lnight map for road traffic in the target area. It is even more difficult to make any kind of comparison in the setting of phase 2 than in that of phase 1, because the data is sparser. However, we permit ourselves to make one binary statement: we have managed to map a number of areas which clearly do not appear in the simulated map. Indeed, the west-east road at the top of the map is not coloured in the simulated map, because propagation from the main road simply does not reach that far. The same holds, to a lesser extent, for the areas at the top right and bottom right of the map. It is precisely in these areas that participatory sensing is able to make a difference. And this is where the combination of the two phases of our experiment really comes into play: by virtue of the outcomes of phase 1 of


Figure 5: Participatory noise maps: day period on the left, evening period in the middle, simulated map for Lnight on the right.

the experiment, which shows that we find the same results for large roads and hence that our approach is trustworthy, we have confidence in the results of phase 2 pertaining to areas that are missing in simulated maps.

5. Citizens as data gatherers.

We consider it important also to evaluate NoiseTube, as a system for participative grassroots noise mapping campaigns, from the user perspective. Therefore we need an evaluation tool. The feedback gathered with this tool can be used to guide further development of NoiseTube and similar participatory sensing applications.

Considering the limited number of people who took part in our experiments in Antwerp, we could have opted for an interpretative qualitative approach to evaluate the experience during the different test phases. But since we plan to reuse the evaluation tool with a larger audience (e.g. students who tested NoiseTube or subscribed users on noisetube.net) we have chosen a methodology that can be scaled up more easily, namely that of a standardised questionnaire. Hence, the Ademloos members are not only a pilot group for testing the NoiseTube system but also for the evaluation tool. To design this questionnaire we built on dimensions found in the literature on participative/mobile sensing, in combination with open issues regarding data representation [15, 16]. We kept some open questions since we are still exploring the dimensions of the sensing experience. Building on the results with the Ademloos group, and possibly other pilot groups, we should be able to revise

21

Page 22: Participatory noise mapping works! An evaluation of participatory … · 2019. 6. 18. · Participatory noise mapping works! An evaluation of participatory sensing as an alternative

the questionnaire to have closed categories for these dimensions as well.The questionnaire was filled out by the group at the start of the final

feedback session in May 2011, during which we presented the results of themeasuring campaigns. Everybody in the total group of 13 participants (somevolunteers contributed to phase 1 as well as phase 2) filled in the questionnaire(7 men and 6 women, with an average age of 62 and a standard deviation of13 years).

From a methodological point of view we learned that some questions should be reconsidered because the answers do not show a lot of variation. For example, a ranking question instead of a Likert item scale would have given more insight into the prioritisation of the kind of information to be shown on noise maps, as well as into the motivation to take part in such a mapping campaign.

Although this was a first test of the questionnaire as an evaluation tool, besides methodological insights it also offered some valuable feedback from this particular user group concerning the NoiseTube system as it was when tested. We summarise some of the most important indications, but we must stress that statistical significance cannot be achieved with this limited and opportunistic sample of testers.

When asked about the concerns they had about privacy, only one participant said he was worried, because “there is a log of where I was at which moment”. The lack of general concern about privacy can hypothetically be attributed to the fact that participants were part of an existing group whose members know and trust each other, who consciously took part in a scientific experiment and who had time to get acquainted with the researchers in person.

Looking at the motivational factors for taking part in the campaign, there was most agreement on a general concern about noise nuisance (11 out of 13 considered it “very important”), followed by personal experience of noise nuisance (8/13 indicated this was “very important”) and supporting the/an activist group (also 8/13 “very important”). Supporting scientific research is seen as an important (5/13) or very important (also 5/13) motivation. Factors which were less important drivers for this particular group were an interest in technology and the campaign being a fun pastime, or a useful one.

With an open question we asked for the three nicest things they experienced while taking part in the campaign, as well as the three most annoying ones. Some unexpected dimensions emerged from these questions. The fact that walking was an integral part of the sensing activity was seen as beneficial for the general health condition. The immediate feedback given by the NoiseTube application gave them insight into sound (levels) and into the problem of noise (relativity, locality). They also mentioned several times that they liked the team spirit this campaign created in their local group. Fewer annoying experiences were mentioned than nice ones. Some of the annoying experiences were: the stability of the program, the constant holding of the phone, the fixed routes (in phase 1) and not being able to have conversations while measuring (one person even noted that she regretted having to pass by acquaintances, which indicates how committed she was to the experiment). Regarding the fixed hours in phase 1 versus the freely chosen times in phase 2, opinions seemed to be mixed. For instance, one user who took part in both phases mentioned that “the fixed timeframe was easier to keep up” (comparing phase 2 to phase 1), while another who also contributed to both phases mentioned “the fixed timeframe” as an annoying aspect (of phase 1).

We also asked for feedback on features of the NoiseTube system which these users had not been given access to (because we wanted to focus on the accuracy of measurement and the mapping strategies). For instance, we left the “noise tagging” feature out of the application tested by the participants. In the questionnaire we then asked whether they would have liked to be able to indicate sources of noise while measuring. The group was very positive about this (10/13 answered Yes). Also popular was the idea of being able to comment on the measurements made by their group through a website (6/13 Yes, 3/13 Maybe). The proposed possibility of commenting on measurements made by other NoiseTube users (outside of their group) was seen as much less interesting (8/13 No, 1/13 Maybe).

We also asked a number of questions regarding the elements of noise maps. We found two interesting things. On the one hand, a majority (9/13) preferred sound levels to be displayed using categories instead of exact values, which is in line with conventional noise maps. On the other hand, a slight majority (7/13) indicated that they preferred maps to show peak values rather than averages, which may indicate a lack of understanding of the dynamics and perception of sound.
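To illustrate the latter point, consider the standard definition of the A-weighted equivalent continuous sound level, which we assume here as the relevant notion of “average” (it may differ in detail from the aggregation used for our maps). Because the averaging is performed on the energy scale rather than on the dB scale, loud episodes already dominate the averaged value:

$$ L_{Aeq} \;=\; 10\,\log_{10}\!\Big(\frac{1}{N}\sum_{i=1}^{N} 10^{L_i/10}\Big). $$

For example, 59 minutes at 50 dB(A) plus a single minute at 80 dB(A) give $10\log_{10}\big((59\cdot 10^{5} + 1\cdot 10^{8})/60\big) \approx 62.5$ dB(A), a value much closer to the peak than to the background level.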

6. Conclusion

The main contribution of this article is that it shows that, when taken seriously, participatory sensing can serve as a valid technique for producing credible and accurate noise maps that are on a par with the official, simulation-based maps produced on a regular basis by cities throughout Europe today. While technological developments for participatory sensing have been proceeding steadily, to our knowledge this is the first time that the limits of the accuracy of the acquired data have been investigated so thoroughly. Our approach has been to design and implement a citizen science experiment which operates within the framework of European norms from the ground up, which involves submitting all equipment used to a prior, thorough performance analysis, and which is structured in such a way that different aspects could be investigated. First, extensive calibration experiments show that the accuracy of mobile phones as sound level meters can be pushed to an entirely acceptable level. Next, by structuring the first phase of our experiment in such a way that massive data acquisition was guaranteed, we were able to show that a representative data set was indeed built up, and that statistical analysis of this data set improves credibility even further. Indeed, our participatory noise maps show the same general tendencies as simulated ones, which shows that the data gathered is credible; moreover, the error margins are of the same order. With this result as a backup, we were able to carry out a second experiment with less coordination, which allowed us to focus on data collection patterns. While the data gathered for this part of the experiment is more spread out, our results indicate that completing these maps would simply be a matter of increasing the effort. Finally, since user participation is so important in a participatory setting, we carried out a survey polling volunteers about their measurement experience. Even if this survey is too small to draw general conclusions from, it does show the motivation and ability of untrained citizens to contribute to this type of grassroots campaign.

The work carried out for this particular noise mapping experiment was extensive, and of course one cannot expect all participatory mapping actions to follow the same procedure in general. However, the goal of this research was to provide all parties interested in using the participatory approach with one concrete reply to the prevalent scepticism with respect to the quality of the gathered data. Yes, it is easy to gather data that is worthless. But it is also the case that, when done right, participatory sensing is a valuable technique for environmental mapping, in particular when the focus is on reasonably sized areas or time frames which isolate local issues. While simulated maps will certainly remain useful, we see much potential for participatory mapping techniques as a complementary approach for tackling pollution issues in the near future.



Acknowledgements

Our special thanks go out to Prof. Patrick Guillaume, without whose guidance, expertise and equipment we would never have been able to carry out the calibration experiments reported on in this paper with such professionalism and thoroughness. And of course to all volunteers from Ademloos, without whom this research would not only not exist, but would also not make any sense. Thank you Guido Verbeke, Ivo Delrue, Els Meelker, Staf Waelbers, Rudy Crombe, Raymond Leunis, Jos Aerts, Luc De Belie, Chantal Flament, Liliane Van Handenhoven, Marjo, Magdalena Maes and Peter Nijssen!

References

[1] Allison Woodruff, Jennifer Mankoff, Environmental Sustainability, IEEE Pervasive Computing 8 (2009) 18–21.

[2] Jeffrey A. Burke, Deborah Estrin, Mark Hansen, Andrew Parker, Nithya Ramanathan, Sasank Reddy, Mani B. Srivastava, Participatory Sensing, in: WSW’06: Workshop on World-Sensor-Web (October 31, 2006), held at ACM SenSys ’06 (October 31-November 3, 2006, Boulder, CO, USA).

[3] Eric Paulos, Citizen Science: Enabling Participatory Urbanism, in: M. Foth (Ed.), Handbook of Research on Urban Informatics: The Practice and Promise of the Real-Time City, Information Science Reference, IGI Global, 2009, pp. 414–436.

[4] WHO Regional Office for Europe / European Commission Joint Research Centre, Burden of disease from environmental noise. Quantification of healthy life years lost in Europe, Report, 2011.

[5] Rajib Kumar Rana, Chun Tung Chou, Salil S. Kanhere, Nirupama Bulusu, Wen Hu, Ear-Phone: An End-to-End Participatory Urban Noise Mapping System, in: IPSN 2010: Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (April 12-16, 2010, Stockholm, Sweden), ACM, New York, NY, USA, 2010, pp. 105–116.

[6] Eiman Kanjo, NoiseSPY: A Real-Time Mobile Phone Platform for Urban Noise Monitoring and Mapping, Mobile Networks and Applications 15 (2010) 562–574.



[7] Nicolas Maisonneuve, Matthias Stevens, Bartek Ochab, Participatory noise pollution monitoring using mobile phones, Information Polity 15 (2010) 51–71. The International Journal of Government & Democracy in the Information Age. Special Issue on “Government 2.0: Making Connections between Citizens, Data and Government”.

[8] K. Kaliski, E. Duncan, J. Cowan, Community and regional noise mapping in the United States, Sound and Vibration (2007).

[9] European Parliament and Council, Directive 2002/49/EC of 25 June 2002 relating to the assessment and management of environmental noise, Official Journal of the European Communities L 189 (2002) 12–26.

[10] International Organization for Standardization, Acoustics – Description, measurement and assessment of environmental noise – Part 2: Determination of environmental noise levels, ISO 1996-2:2007, 2007.

[11] European Commission, Commission Recommendation of 6 August 2003 concerning the guidelines on the revised interim computation methods for industrial noise, aircraft noise, road traffic noise and railway noise, and related emission data, Official Journal of the European Union L 212 (2003) 49–64. Annex II of Directive 2002/49/EC.

[12] International Electrotechnical Commission, Electroacoustics – Sound level meters – Part 2: Pattern evaluation tests, IEC 61672-2:2003, 2003.

[13] Silvia Santini, Benedikt Ostermaier, Robert Adelmann, On the Use of Sensor Nodes and Mobile Phones for the Assessment of Noise Pollution Levels in Urban Environments, in: INSS 2009: Sixth International Conference on Networked Sensing Systems (17-19 June, 2009, Pittsburgh, PA, USA), IEEE, 2009, pp. 1–8.

[14] Brussels Instituut voor Milieubeheer, Geluidshinder door het verkeer - Strategische kaart voor het Brussels Hoofdstedelijk Gewest / Bruit des transports - Cartographie stratégique en Région de Bruxelles-Capitale, Report, 2010. In Dutch and French.

[15] Paul M. Aoki, R. J. Honicky, Alan Mainwaring, Chris Myers, Eric Paulos, Sushmita Subramanian, Allison Woodruff, A Vehicle for Research: Using Street Sweepers to Explore the Landscape of Environmental Community Action, in: CHI ’09: Proceedings of the 27th international conference on Human factors in computing systems (April 4-9, 2009, Boston, MA, USA), ACM, New York, NY, USA, 2009, pp. 375–384.

[16] S. Consolvo, D. W. McDonald, T. Toscos, M. Y. Chen, J. Froehlich, B. Harrison, P. Klasnja, A. LaMarca, L. LeGrand, R. Libby, I. Smith, J. A. Landay, Activity Sensing in the Wild: A Field Trial of UbiFit Garden, in: CHI ’08: Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (Florence, Italy, April 5-10, 2008), ACM, New York, NY, USA, 2008, pp. 1797–1806.
