Prototyping for the Digital City


  • 8/10/2019 Prototyping for the Digital City

    1/37

Tobias Stockinger, Marion Koelle, Patrick Lindemann, and Matthias Kranz (Editors)

Prototyping for the Digital City

Advances in Embedded Interactive Systems, Technical Report Winter 2013/2014

March 2014


Contents

Preface 4
Prototyping: Methods and Processes (Maximilian Bauer) 5
Prototyping of Haptic Interfaces (Severin Bernhart) 9
Ubiquitous Urban Interactions (Thomas Bock) 13
Prototyping Toolkits (Philipp Neumeier) 18
Urban Context Awareness (Sebastian Scheffel) 23
Citizen Apps and Websites (David Scherrer) 28
Sustainability through Persuasive Technology (Lukas Witzani) 32
Copyright Notes 37


Prototyping: Methods and Processes

Maximilian Bauer, Universität Passau
[email protected]

ABSTRACT

This article first addresses the definition of prototypes and their benefits. It then discusses their classification, using the distinction by precision ("fidelity"). The general strengths and weaknesses of low-fidelity and high-fidelity prototypes are treated in the context of studies conducted on them, and the influence of physicality on hardware prototypes is examined specifically. Finally, the successful integration of prototypes into the software development process is touched upon, and a brief outlook on the future challenges of prototyping is given.

Keywords

Prototyping, low vs. high fidelity, user-centered design, physicality

1. DEFINITION AND BENEFITS

The concept of the prototype has its origin in other, far older engineering disciplines, in which it is quite common during the design phase: be it the aircraft industry, where miniature versions or computer simulations are used to test and represent ideas, or architecture, where engineers often build a true-to-scale model of their future work [2]. Software development, as a still relatively young engineering discipline, has likewise increasingly discovered the use of such prototypes over the last decades. Whereas initially the final software product was the only thing the end customer got to see, and not infrequently it missed the originally specified requirements by a wide margin, there is now a growing reliance on involving the end user early in the development process [1].

In software engineering, a prototype denotes a concrete, often incomplete representation of a system; and even though software engineering applies scientific methods for verification, it is primarily a design process. This term alone correctly suggests that a software prototype requires less explanation or specialist knowledge than abstract descriptions or models. From the resulting simplicity, a direct benefit can be derived for everyone involved in the process, from the developer through the project manager to the end customer [2, 1].

Beyond this clarity, it is above all the economic aspect that convinces: when misunderstandings take hold in early development stages and, owing to poor communication, are often discovered only too late, fixing them tends to become all the more expensive. Creating prototypes therefore offers a cost-effective way to reduce errors early [1]. Various studies on their benefits have confirmed this: in one study that compared different kinds of drafts, a large share of the usability errors, for example unclear meanings of labels or icons, were found [6].

Maximilian Bauer is a student at the University of Passau. This research paper was written for the Advances in Embedded Interactive Systems (2014) technical report. Passau, Germany.

2. CLASSIFICATION

The benefit and goal of prototypes is clear; there are, however, different paths, different kinds of prototypes, that lead to that goal depending on the progress of the project or the intention behind it. These kinds can be divided, for example, by their representation, where the form of the prototype plays a decisive role: at the two ends stand offline prototypes (paper prototypes) and online prototypes (software prototypes). The interaction possibilities available to the user with the prototype can also be used for categorization: while a video recording used as a prototype offers the lowest degree of interactivity, a fully interactive draft stands at the opposite end. A further criterion for distinguishing prototypes is their evolution, the expected life cycle of the draft: whether it is a throwaway prototype, used above all in so-called rapid prototyping; an iterative draft that is developed further to evaluate alternatives; or an evolutionary prototype that is designed for use in the later system and continuously developed with that in mind [2]. In the majority of the literature and research on prototyping, however, a de facto standard of classification seems to have emerged: the distinction by precision, referred to below by the established English term "fidelity".
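The classification axes named above (representation, interactivity, evolution, fidelity) can be sketched as a small data model. This is an illustrative sketch only; the field names and example values are this sketch's own, not taken from [2].

```python
from dataclasses import dataclass
from enum import Enum


class Representation(Enum):
    OFFLINE = "paper"      # e.g. paper sketches
    ONLINE = "software"    # e.g. clickable mock-ups


class Evolution(Enum):
    THROWAWAY = 1          # rapid prototyping
    ITERATIVE = 2          # refined to evaluate alternatives
    EVOLUTIONARY = 3       # grows into the later system


@dataclass
class Prototype:
    name: str
    representation: Representation
    interactivity: float   # 0.0 (video recording) .. 1.0 (fully interactive)
    evolution: Evolution
    fidelity: str          # "low" or "high", the de facto standard axis


# Two drafts at opposite ends of the design space:
paper_sketch = Prototype("menu sketch", Representation.OFFLINE, 0.1,
                         Evolution.THROWAWAY, "low")
clickable_demo = Prototype("demo build", Representation.ONLINE, 0.9,
                           Evolution.EVOLUTIONARY, "high")
```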


2.1 Low-Fidelity

Restricting function and interaction to a minimum is characteristic of prototypes with low precision, called low-fidelity. Their purpose of quickly depicting concepts and available alternatives is clearly in the foreground, as is the assumption that these prototypes will not be reused directly in the final product and do not form a basis for programmers, which shortens design time even further. Such low-fidelity prototypes usually consist of a sequence of static elements that sketch, for example, individual menus or windows of the system being designed. These drafts are presented to the user by their creators, or by third parties experienced in handling such prototypes, while the intended functionality is explained [10]. The way such a draft is realized, whether as a traditional paper sketch or as a computer-supported draft (for example with PowerPoint), is largely irrelevant to the quality as well as the quantity of the feedback, even though the users in one study showed a clear preference for the computer-supported versions [11].

The advantages of such low-fidelity prototypes are most obvious in early stages of the development process, in which designers have not yet committed themselves to suboptimal concepts: studies showed that criticism was generally received much more constructively, since far less work, especially work building on them, had gone into those paper prototypes; such criticism can therefore be implemented in early development stages without great cost. It also turned out that, for similar reasons, users were more willing to give more feedback at this early stage and, independent of their knowledge of software engineering, to visualize it. For these reasons, that study also concluded that the benefits clearly outweighed the costs [8].

Yet this low precision, the simplicity aimed for, often leads to important design decisions simply being overlooked. This incompleteness ultimately increases the difficulty of implementation, since, for example, error messages or rarely used functions that are missing from the prototype often come into being solely according to the programmer's ideas; without sufficient experience on the programmer's part, this can be a further source of errors. The lack of engagement with the feasibility of the draft during the design phase also often turns out to be a problem during its later implementation, which can lead to conflicts and increased workload [10].

2.2 High-Fidelity

If, on the other hand, one opts for high precision, a high-fidelity prototype, the core functionality of the user interface is implemented. These prototypes differ, for one, in how they are produced: in contrast to paper prototypes, modern technology such as UI designers or programming languages, for example Smalltalk or Visual Basic, is used. In exchange for speed and simplicity, special attention is paid to a realistic rendering of the user interface. A further difference from low-fidelity prototypes lies in interactivity: here, high-precision drafts differ least from the final product, since they enable an almost fully functional interaction between the user and the system (or prototype). The scope of such a draft, whose creation in its complete form takes considerable time, can be reduced in various ways depending on the intended use; one can distinguish between vertical and horizontal prototypes. Whereas the former relies on the complete implementation of a few functions, a horizontal prototype implements the majority of the functions, but in a far more superficial way [10].

The reasons for using a high-fidelity prototype are found in the definition itself: precisely because of their high precision and high degree of interaction and functionality, operating them is very similar to operating the final product. In contrast to low-precision drafts, they therefore offer far more possibilities to effectively test and experience the navigation and operation flow before actual production [10]. This similarity has another advantage: already during the implementation of the product it is possible to begin creating a help system or the documentation, without having to wait until the actual product is finished. A further, economic aspect is the possibility of presenting that draft, which a layperson can often barely distinguish from the final product, to the customer during production, and thus decisively advancing the marketing of the product before its completion. A very precise template also makes implementation considerably easier for the programmers, since with the prototype they have an exact and easily understandable (additional) specification [13].

Studies comparing low-fidelity and high-fidelity prototypes, however, revealed a significant disadvantage. When two prototypes of a game were compared, one on paper and the other as a program, it turned out that the higher-precision draft, despite the increased production effort, did not uncover significantly more usability errors than the prototype produced in a much shorter time with far less effort [5]. This highlights poor cost-effectiveness as a central argument against high-fidelity prototypes. For the same reason, it is of course also not profitable to choose high precision for comparing many different approaches. A further problem is the customer himself: presented with a prototype that differs only slightly from the final software product, he may demand an earlier release of the program, since he is not aware of the effort required to turn this prototype into a functioning, stable software system [10].
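The vertical/horizontal distinction can be sketched with a hypothetical mail-client prototype; every feature name below is invented for the example and not taken from [10].

```python
# Horizontal prototype: broad but shallow. Every feature exists, but only as a
# mock screen with no real logic behind it.
def horizontal_prototype(command: str) -> str:
    features = {"compose", "search", "archive", "filter"}
    if command in features:
        return f"[mock] showing the '{command}' screen, no real logic behind it"
    return "[mock] unknown command"


# Vertical prototype: narrow but deep. Only one feature exists, but that one
# is fully functional (here: a real case-insensitive mailbox search).
def vertical_search(mailbox: list[str], query: str) -> list[str]:
    return [mail for mail in mailbox if query.lower() in mail.lower()]


horizontal_prototype("archive")                               # a stub for every feature
vertical_search(["Invoice March", "Team lunch"], "invoice")   # real behavior for one
```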

3. PHYSICALITY

Above all with the steady improvement of technology and the resulting rise of mobile entertainment systems such as music players and game consoles, fidelity, the precision of the prototype, is no longer the only relevant factor. Physicality, "[...] loosely understood as being the physical nature of something, for example, a form, process or button" [4], is what plays a decisive role above all in the design of new hardware systems, and with a poor hardware implementation it can be the limiting factor that ruins even the best software implementation. For this reason alone it is essential to pay particular attention to physicality when designing such systems.

A study on the effect of physicality on different prototypes [3] built four different drafts of an existing product that differed in effort, form, and function, but in general in the degree of their active and passive physicality; in each case the same computer-supported low-fidelity prototype was used with the different hardware drafts. While passive physicality concerns the appearance, the positions of the switches, and the weight, in general the device in its switched-off state, active physicality is about the reactions to user input and the operation flow that result from the user's interaction with buttons, sliders, and the like. In the study, for example, a low-cost prototype made of foam had low passive and active physicality: here the user had to describe the desired action, which was then triggered on screen by the operator. The reason for the low passive physicality lay in its composition: the hardware prototype consisted exclusively of foam and had no usable switches. Another approach consisted of a plastic model fitted with an electronics kit which, when simple buttons were pressed, forwarded the signal and triggered the desired action in the program. Through its good mapping of the interactions and a good approximation of the final design, this prototype had medium active as well as passive physicality; its cost was also in the middle range. The most expensive drafts were, on the one hand, an Arduino model in which a plastic model of the product was fitted on the outside with all available buttons as well as a rotary knob specific to the product, and, on the other, a model that had an integrated display but no working buttons and had to be controlled via a tablet. The ratio between active and passive physicality was very unbalanced in both of these drafts, since each focused specifically on one of the two and completely neglected the other. The conclusion was that both active and passive physicality provide important feedback in early development stages, but the best results were achieved through an even balance of the two, as in the first two drafts described.

4. INTEGRATION INTO DEVELOPMENT PROCESSES

That prototypes automatically yield good results and constitute a simple recipe for success for any software project, however, is a fallacy. For prototypes to be used successfully and profitably in the software development process, three factors are particularly important: suitable tools that allow rapid construction and modification; a change of mindset, moving away from a fixation on formal, established development methods and patterns in favor of more experimental development approaches; and finally, a methodical procedure [7]. Yet even with the approaches presented so far, inventiveness and creativity in adapting or extending those methods to the context at hand are the way to the goal. One example is the approach of "mixed-fidelity" prototypes: here prototyping itself becomes much more of an iterative process, in which the important parts gain precision while the other parts can remain in their original, sketch-like state. In practice this could mean equipping a rough draft of a user interface with one or more fully functional program parts [9]. Another approach, a textbook example of user-centered design, is found in the so-called Blank Page Technique: here a prototype of a website was given that was deliberately only partially implemented and, when a link not yet included was clicked, showed the users an error message, together with the request to sketch their expectations on a sheet of paper [12].

This approach also makes one thing clear, however: prototyping itself places certain demands on the users as well as on the prototyper. The latter must above all be familiar with the prototyping tool used, but certain character traits are equally advantageous: patience, diplomatic behavior, and objectivity are some of them; it is especially important, though, that the test participants are not intimidated by him. These participants can easily be employees from other departments. Whether they view this task positively or negatively is usually irrelevant to the result; even though the latter carries some risk, in the best case it can even lead to a positive change in the working atmosphere. The selection of participants, however, raises a further question. If one opts for a constellation of one user and one developer, no real debate will arise; in particular, the result then depends on the abilities of very few people. The more participants are selected, however, the more practical problems will arise in the course of the process, and a slowdown is the consequence. A compromise could be reached by creating a first prototype with a single person, the typical user, and then testing this prototype with a larger group of people. Here, too, a question arises: does one choose the same group of people for several studies, which would mean less onboarding time, or does one opt for different test groups, whose additional experience, ideas, and knowledge could yield a richer result? One thing is certain in any case: the classic development process, in which the user and the developer exchange views on the rough requirements and problems and a software product then emerges from these, is over with the use of prototypes. Prototyping has a significant influence on the development process, in which the user is no longer merely the starting point but a fixed part of the iterative process in which all participants pursue a common goal: better software [7].
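The Blank Page Technique could be sketched as a tiny web server that serves the implemented pages and replies to every other link with the "sketch your expectation" prompt. The page set, port, and wording below are invented for illustration, not taken from [12].

```python
# Minimal Blank Page Technique sketch using only the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

# The deliberately incomplete prototype: only these paths are implemented.
IMPLEMENTED = {
    "/": "<h1>Home</h1><a href='/shop'>Shop</a> <a href='/about'>About</a>",
    "/shop": "<h1>Shop</h1><p>Catalog mock-up.</p>",
}

# Every other link gets the blank page with the sketching request.
BLANK_PAGE = (
    "<h1>This page is not implemented yet.</h1>"
    "<p>Please sketch on paper what you expected to see here.</p>"
)


class BlankPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = IMPLEMENTED.get(self.path, BLANK_PAGE)
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), BlankPageHandler).serve_forever()
```

Clicking "About" in this sketch would hit an unimplemented path and show the blank page, which is the moment the participant is asked to draw what they expected.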

5. OUTLOOK

Regardless of the many different kinds of prototypes and the prerequisites for their use, one thing is certain: as in other established engineering disciplines, prototypes, above all with the growing importance of user-friendliness, intuitive user interfaces, and simplicity in software engineering, have already become almost indispensable today. It remains to be seen how these methods for designing software will evolve along with it. While certain standards for the design and validation of today's user interfaces have by now emerged, designers and developers will very soon be confronted with new challenges: with the new trend of augmented reality, Google Glass, Oculus Rift, and Microsoft Kinect are leading the way into the next, the third dimension.

6. REFERENCES

[1] S. Asur and S. Hufnagel. Taxonomy of rapid-prototyping methods and tools. In Rapid System Prototyping, 1993. Shortening the Path from Specification to Prototype. Proceedings., Fourth International Workshop on, pages 42-56, 1993.

[2] M. Beaudouin-Lafon and W. Mackay. Prototyping tools and techniques. In J. A. Jacko and A. Sears, editors, The Human-Computer Interaction Handbook, pages 1006-1031. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 2003.

[3] J. Hare, S. Gill, G. Loudon, and A. Lewis. The effect of physicality on low fidelity interactive prototyping for design practice. In P. Kotze, G. Marsden, G. Lindgaard, J. Wesson, and M. Winckler, editors, Human-Computer Interaction INTERACT 2013, volume 8117 of Lecture Notes in Computer Science, pages 495-510. Springer Berlin Heidelberg, 2013.

[4] J. Hare, S. Gill, G. Loudon, D. Ramduny-Ellis, and A. Dix. Physical fidelity: Exploring the importance of physicality on physical-digital conceptual prototyping. In T. Gross, J. Gulliksen, P. Kotze, L. Oestreicher, P. Palanque, R. O. Prates, and M. Winckler, editors, Human-Computer Interaction INTERACT 2009, volume 5726 of Lecture Notes in Computer Science, pages 217-230. Springer Berlin Heidelberg, 2009.

[5] B. Kohler, J. Haladjian, B. Simeonova, and D. Ismailovic. Feedback in low vs. high fidelity visuals for game prototypes. In Games and Software Engineering (GAS), 2012 2nd International Workshop on, pages 42-47, 2012.

[6] Y.-K. Lim, A. Pangam, S. Periyasami, and S. Aneja. Comparative analysis of high- and low-fidelity prototypes for more valid usability evaluations of mobile devices. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, NordiCHI 2006, pages 291-300, New York, NY, USA, 2006. ACM.

[7] P. Mayhew. Software prototyping: Implications for the people involved in systems development. In B. Steinholtz, A. Solvberg, and L. Bergman, editors, Advanced Information Systems Engineering, volume 436 of Lecture Notes in Computer Science, pages 290-305. Springer Berlin Heidelberg, 1990.

[8] E. Olmsted-Hawala, J. Romano, and E. Murphy. The use of paper-prototyping in a low-fidelity usability study. In Professional Communication Conference, 2009. IPCC 2009. IEEE International, pages 1-11, 2009.

[9] J. N. Petrie and K. A. Schneider. Mixed-fidelity prototyping of user interfaces. In G. Doherty and A. Blandford, editors, Interactive Systems. Design, Specification, and Verification, volume 4323 of Lecture Notes in Computer Science, pages 199-212. Springer Berlin Heidelberg, 2007.

[10] J. Rudd, K. Stern, and S. Isensee. Low vs. high-fidelity prototyping debate. interactions, 3(1):76-85, Jan. 1996.

[11] R. Sefelin, M. Tscheligi, and V. Giller. Paper prototyping - what is it good for?: A comparison of paper- and computer-based low-fidelity prototyping. In CHI 2003 Extended Abstracts on Human Factors in Computing Systems, CHI EA 2003, pages 778-779, New York, NY, USA, 2003. ACM.

[12] B. Still and J. Morris. The blank-page technique: Reinvigorating paper prototyping in usability testing. IEEE Transactions on Professional Communication, 53(2):144-157, 2010.

[13] R. A. Virzi, J. L. Sokolov, and D. Karis. Usability problem identification using both low- and high-fidelity prototypes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 1996, pages 236-243, New York, NY, USA, 1996. ACM.


    Prototyping of Haptic Interfaces

Severin Bernhart, Universität Passau

    [email protected]

ABSTRACT

Today, people are moving away from using mice and keyboards to interact with machines; instead they increasingly use haptic interfaces, which are more physical and tangible. This paper presents the process by which a haptic interface is developed and shows a number of different ways to produce one. We distinguish several prototyping methods, namely low-fidelity prototyping such as paper prototyping, and rapid prototyping, especially 3D printing. The paper also explains how haptic interfaces are made as user-friendly as possible and presents the different types of feedback. Finally, it discusses which type of prototyping is the most suitable method for each kind of input device.

Keywords

Low Fidelity Prototyping, Paper Prototyping, Rapid Prototyping, 3D Print, Tangible User Interfaces (TUI), Haptic Feedback

1. INTRODUCTION

"Current input technologies for wearable computers are difficult to use and learn and can be unreliable. Physical interfaces offer an alternative to traditional input methods" [11]. Haptic interfaces make it possible for humans to interact with a machine through touch alone. The user gives commands to the machine, and it reacts by giving feedback to the user; haptic interfaces are thus necessary for communicating and exchanging information between humans and machines [3]. It is much easier for humans to perform digital tasks if they receive force feedback or haptic feedback [2]. A machine thereby turns a digital environment into a surrounding that feels almost like reality. Haptic interfaces appear in all situations of life: no matter which game console you use, the controller is a haptic interface, because every controller gives you feedback, through vibration for example; and when playing a flight simulator you have a joystick with force feedback, as if you were sitting in a real plane. But haptic interfaces play an even more important part in the evolution of new technical systems, and there are many experiments with new applications. One example is the box-in-box experiment, see Figure 1: you can simulate moving a magnetic cube in a magnetic field with a joystick; if you move the cube right up to the edges of the bigger cube, you get force feedback that simulates the cubes bumping into each other, so it feels as if the cube were being magnetically manipulated.

Figure 1: Box-in-box experiment

Severin Bernhart is a student at the University of Passau. This research paper was written for the Advances in Embedded Interactive Systems (2014) technical report. Passau, Germany.
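A force-feedback rule of the kind such an experiment needs can be sketched per axis: when the inner cube penetrates a wall of the outer box, push back with a spring force proportional to the penetration depth. The spring model and all constants below are assumptions for illustration, not taken from the experiment described above.

```python
def wall_force(pos: float, half_inner: float, half_outer: float,
               stiffness: float = 50.0) -> float:
    """Feedback force on one axis for an inner cube centered at `pos`.

    The inner cube (half edge length `half_inner`) moves inside an outer box
    (half edge length `half_outer`). A positive return value pushes the
    joystick toward +axis, a negative one toward -axis.
    """
    penetration = abs(pos) + half_inner - half_outer
    if penetration <= 0.0:
        return 0.0                    # no contact: the joystick feels nothing
    direction = -1.0 if pos > 0 else 1.0
    return direction * stiffness * penetration   # spring pushes back to center


wall_force(0.0, 1.0, 5.0)   # free movement in the middle, no force
wall_force(4.5, 1.0, 5.0)   # touching the right wall, pushed back left
```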

2. METHODS FOR PROTOTYPING

Prototyping is a very effective method in software engineering. You get the first solutions very fast and receive feedback at every step of development, because you invite test users who suggest improvements and point out mistakes in the prototype. Problems and modifications can thus be addressed earlier, and the cost is lower than if the mistakes had to be corrected at the end of the development process. "Getting the right design, and then getting the design right" [1], as Buxton put it. In the following, several different prototyping methods are presented, namely paper prototyping, which is a kind of low-fidelity prototyping, and rapid prototyping, particularly 3D printing.

2.1 Low Fidelity Prototyping - Paper Prototyping

Low-fidelity (lo-fi) prototyping is characterized by little effort and low costs [8]. Producing low-fidelity prototypes is cheap, because the materials used are only paper or other low-fidelity materials such as plastic, so you can get "a wide variety of designs, and then [choose] the most promising" [12] model. Additionally, the developers of the prototype can be a team of non-specialists, because they do not have to know much about the subject matter. Drawing layouts or tinkering with little models does not require much knowledge, and so the designs do not confront the user with explicit details too early.

Figure 2: App paper prototyping

One sort of lo-fi prototyping is paper prototyping, see Figure 2, which is often used when building websites or apps for smartphones and tablets. "You first decide on the tasks that you'd like the user to accomplish. Next, you make screen shots and/or hand-sketched drafts of the windows, menus, dialog boxes, pages, popup messages, etc. that are needed to perform those tasks" [9]. In this way you can follow every little step, for example when the user taps each button on the touchscreen. Then people test it and give their feedback, whereby the app becomes easier for the user to operate. This type of paper prototyping is only used for developing software, but lo-fi prototyping can be used for creating hardware, too. The process is similar to the procedure described above. The difference is that you move from 2D prototyping, like drawing sketches of screenshots, to 3D prototyping, where you tinker with three-dimensional sketches made of paper and other low-cost materials to design prototypes for haptic interfaces, which are also called tangible user interfaces (TUIs). TUIs are made for "interacting with the digital information by grasping or manipulating physical things ... and working ... with the new generation of capacitive surfaces" [12]. To start creating a TUI, you make a variety of templates with various 3D shapes, so that you can choose the best form for the interface. Then you set printed marks on the template to visualize where the conductive ink lines should later run on the prototype. Now the prototype is formed into its shape and the conductive ink lines are sketched onto it. The conductive ink contacts must be above the minimum detectable distance so that they are reliably perceptible. Based on "our own knowledge of capacitive devices", the contacts on the bottom must have a size of at least 5x5 mm and a spacing of at least 5 mm; these are "the optimized conditions for detecting fingers on a surface of the latest devices" [12]. Then the prototypes with different shapes are tested, and the best of them is chosen and refined. So paper prototyping is used to design interfaces in 3D, too, and you do not have to engage computer specialists to create a paper prototype for a haptic interface. "The only skills required are cutting, folding, gluing and tracing lines" [12].

Figure 3: 3D printer

Figure 4: 3D-printed haptic interfaces
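The 5x5 mm size and 5 mm spacing rule quoted above could be checked programmatically before tracing the ink lines. The helper below is a sketch with invented names, assuming axis-aligned rectangular contacts given as (x, y, width, height) in millimetres, with (x, y) the lower-left corner; the gap is measured along the dominant separating axis.

```python
MIN_SIZE_MM = 5.0  # minimum contact size, per [12]
MIN_GAP_MM = 5.0   # minimum spacing between contacts, per [12]


def gap(a, b) -> float:
    """Gap between two axis-aligned contacts in mm (negative if overlapping)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(bx - (ax + aw), ax - (bx + bw))  # horizontal separation
    dy = max(by - (ay + ah), ay - (by + bh))  # vertical separation
    return max(dx, dy)


def layout_ok(contacts) -> bool:
    """True if every contact is big enough and every pair is far enough apart."""
    if any(w < MIN_SIZE_MM or h < MIN_SIZE_MM for _, _, w, h in contacts):
        return False
    return all(gap(a, b) >= MIN_GAP_MM
               for i, a in enumerate(contacts) for b in contacts[i + 1:])


layout_ok([(0, 0, 5, 5), (10, 0, 5, 5)])  # ok: 5 mm between the contacts
layout_ok([(0, 0, 5, 5), (8, 0, 5, 5)])   # not ok: only 3 mm apart
```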

    2.2 Rapid Prototyping - 3D-PrintFor Rapid Prototyping you have to differentiate two

    different sorts. At the software engineering sector rapid pro-totyping means to develop software for machines very fastand with low effort, but at the manufactoring technology itmeans to produce substantial prototyps in layers so you canuse it to engine hardware and haptic interfaces[5]. The mostimportant method for rapid prototyping is the 3D-Print, seegure 3. The 3D-Print prepares prototyps made of plas-tic, so it is also a very fast and very cheep process. Thematerial plastic takes low costs and you need less personal,because the 3D-Printer produce the prototyps itsself anddoesnt need a supervision. As well you can be very creative,since it is exactly in manufactoring and you are almost free increating the shape of the interface. Therefore the 3D-Printis very suitable to produce haptic interfaces. The fabricationof a 3D-printed object starts with a digital geometric modelthat is converted into a series of slices to be physically fab-ricated layer-by-layer[13]. It is a photopolymer-based pro-cess, where a liquid photopolymer material gets exposed toan ultra-violet light source, after that the material gets into


Figure 5: Hand with its receptors

a solid state. A precise laser then traces the exact edges of the object and gives it its final form. The importance of 3D printing keeps growing, and ever more materials are becoming applicable, like optical-quality transparent plastic, deformable rubber, and biocompatible polymers [13]. Modern 3D printers are able to combine several materials in one interface; an example is the cube in Figure 4. It is made of epoxy resin with 80 mm edge length and contains the electronics inside, namely central processing and wireless communication based on a particle computer. A communication board integrates a microcontroller, transceiver, real-time clock, flash memory and two LEDs. The cube reacts through two two-axis accelerometers that detect rotating, shaking and compound movements. The cube has six black-and-white displays for visual output. Depending on how you rotate the cube, it shows

a different picture on the display that is currently on top [4]. As this shows, it is quite easy to create a tangible user interface like this cube with 3D printing, and the technique will likely take a leading role in the production of haptic interfaces in the future.
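The cube's behaviour of showing a different picture on whichever display faces up can be approximated in a few lines: pick the face whose outward normal points most strongly against the measured gravity vector. This is a generic sketch under assumed face labels and axis conventions, not the firmware of the cube in [4].

```python
# Sketch: select which of a cube's six displays is "on top" from an
# accelerometer reading. Gravity is measured along the cube's body axes;
# the face whose outward normal is most anti-parallel to gravity faces up.
# Face labels and the axis convention are illustrative assumptions.

FACE_NORMALS = {
    "front":  (0, 0, 1),  "back":   (0, 0, -1),
    "left":   (-1, 0, 0), "right":  (1, 0, 0),
    "top":    (0, 1, 0),  "bottom": (0, -1, 0),
}

def face_up(accel: tuple[float, float, float]) -> str:
    """accel: gravity vector in the cube frame, e.g. (0, -9.81, 0) at rest."""
    up = tuple(-a for a in accel)  # "up" is opposite to measured gravity
    dot = lambda n: sum(u * c for u, c in zip(up, n))
    return max(FACE_NORMALS, key=lambda name: dot(FACE_NORMALS[name]))

print(face_up((0.0, -9.81, 0.0)))  # cube resting normally -> "top"
print(face_up((9.81, 0.0, 0.0)))   # gravity along +x: the "left" face is up
```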

3. FEEDBACK POSSIBILITIES

Interfaces have three ways of giving feedback to the

user: visual, auditory and haptic. Visual feedback can be just a glowing or blinking light, or a whole display on the interface that shows text or pictures in response to your instructions. The cube shown in Figure 4, with its pictures on the little displays, is an example of visual feedback. Auditory feedback is another possibility. A special kind of it is speech output. This is very useful for in-car interfaces, as found in navigation software for example, so that the driver is not distracted by reading text on the display. Auditory feedback also occurs when the user gets responses in the form of beeps or characteristic melodies through loudspeakers integrated into the system, so that the user knows a certain action has happened. A good example of both feedback methods is a Wii controller. Depending on which light

Table 1: Low-Fidelity Prototyping (Paper Prototyping) [9]

Pros                                  Cons
Low costs, low effort, rapid          Many crude designs
No specialists necessary              No functionality of prototypes
No programming effort                 Limited error detection
Testing in conception phase           Prototypes only usable for Lo-Fi
Easy to change and improve            Not all ideas are realizable
Above-average improvement proposals   Interaction process must be concrete in conception phase

Table 2: Rapid Prototyping (3D Printing) [7]

Pros                                  Cons
Low costs, low effort, rapid          Aliasing on surface
No process observation necessary      Limited precision
Provides a real-like application      Limited to certain materials
Low effort for improvements           3D CAD essential
Large flexibility for designing       Knowledge essential
Prototype with complete functionality

is glowing, the user knows whether he or she is player 1, 2, 3 or 4. The controller also has an integrated loudspeaker that makes sounds when a special event happens in the video game. Haptic interfaces can thus give visual and auditory feedback too, but the haptic feedback is much more fascinating. The human skin contains many receptors, and in the fingers these receptors are especially dense, see Figure 5. Among the receptors relevant for tangible feedback are the thermoreceptors, which sense warming and cooling, and the nociceptors, which register pain; the most important receptors for using haptic interfaces, however, are the mechanoreceptors, because they detect pressure [6]. They are necessary to perceive vibration and force feedback, where the haptic interface applies pressure to the user and thereby simulates the digital environment. An example of this is a steering wheel for simulating a race in a video game. The

steering wheel rumbles when the car leaves the road, and the user constantly feels resistance while steering. A haptic interface often combines the three different feedback modalities, because this helps the user grasp the situation and immerses him or her more deeply in the virtual environment.
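The steering-wheel example can be sketched as a minimal control mapping from how far the car strays off the road to a normalized rumble command for the actuator. The road width, gain and clamping values below are invented for illustration and are not taken from any particular device.

```python
# Sketch: map a car's lateral offset from the road centre to a rumble
# strength for a force-feedback steering wheel, as in the racing-game
# example above. Road half-width, gain and clamping are assumed values.

ROAD_HALF_WIDTH_M = 3.0  # the wheel rumbles once the car strays beyond this
MAX_RUMBLE = 1.0         # actuator command is normalised to [0, 1]

def rumble_strength(lateral_offset_m: float) -> float:
    """Return 0 on the road, rising linearly to MAX_RUMBLE off-road."""
    overshoot = abs(lateral_offset_m) - ROAD_HALF_WIDTH_M
    if overshoot <= 0.0:
        return 0.0
    return min(MAX_RUMBLE, overshoot / ROAD_HALF_WIDTH_M)

for offset in (0.0, 2.5, 4.5, 9.0):
    print(f"offset {offset:4.1f} m -> rumble {rumble_strength(offset):.2f}")
```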

4. RESULT - COMPARISON OF THE DIFFERENT METHODS OF PROTOTYPING

The two methods, low-fidelity prototyping (paper prototyping) and rapid prototyping (3D printing), are now compared with regard to their pros and cons in Tables 1 and 2:

Comparing both tables, the result is that both prototyping methods are good for fast prototyping and are very cheap to apply, because production costs and effort are low. Lo-Fi prototyping has a slight advantage in expenses, because the materials are only paper or other minimal materials, and no specialists are necessary, since nothing has to be programmed during the prototyping process. For 3D printing, in contrast, 3D CAD software is needed, along with staff who have some programming experience and knowledge of the 3D printer and the relevant software. Both prototyping techniques are suitable for making changes and improvements throughout the whole process, because people test the prototypes after every development step. Paper prototyping even yields an above-average number of improvement proposals. One disadvantage of 3D printing is its limited precision: 3D-printed objects are not as exact as Lo-Fi prototypes, because 3D printing is susceptible to aliasing on the objects' surfaces, whereas Lo-Fi objects can be very accurate, although, as a consequence of producing so many prototypes, they often become cruder. An advantage of 3D printing is that the fabrication process runs without supervision, so many prototypes and variants can be produced, which gives high flexibility in testing different designs. The most important advantage of 3D printing is that prototypes with complete functionality can be produced. Paper prototypes, in contrast, have no functionality; the interfaces and buttons are only indicated. Error detection is therefore limited, because one cannot test whether all functions and features work. One may only notice at the end of production that the intended ideas are not practicable, which then becomes very expensive. With 3D printing, all prototypes are realistic applications with working functions.

In summary, paper prototyping is a bit cheaper than rapid prototyping, but testing is easier with rapid prototyping, so the risk that functions do not work is smaller, and this is the most important issue. We therefore conclude that rapid prototyping, especially 3D printing, is the most effective method for prototyping tangible user interfaces, and above all for prototyping haptic interfaces.

5. CONCLUSION

This paper defined haptic interfaces and their different types

of prototyping. The feedback possibilities were also shown. Haptic interfaces are becoming more important at present, but they have long been underestimated. Big discussions are taking place in automotive manufacturing. Today there is at most one haptic interface in a car, which might be the touchscreen in the middle of the dashboard. A car is regarded as full of unrefined haptics [10]: how, for example, do you honk while you have to steer and brake at the same time? Mercedes trucks experiment with joystick steering, and in the future a rumbling car seat may protect drivers from falling asleep behind the steering wheel. These examples show that haptic interfaces are right in the middle of their discovery process. Development will remain exciting, and there will be many new ideas and inventions in the coming years.

6. REFERENCES

[1] B. Buxton. Sketching User Experiences: Getting the

Design Right and the Right Design. Morgan Kaufmann, 2010.

[2] R. Ellis, O. Ismaeil, and M. Lipsett. Design and evaluation of a high-performance haptic interface. Robotica, 14(3):321-328, 1996.

    [3] V. Hayward, O. R. Astley, M. Cruz-Hernandez,D. Grant, and G. Robles-De-La-Torre. Hapticinterfaces and devices. Sensor Review , 24(1):1629,2004.

[4] M. Kranz, D. Schmidt, P. Holleis, and A. Schmidt. A display cube as a tangible user interface. In UbiComp 2005, the Seventh International Conference on Ubiquitous Computing, 2005.

[5] M. Macht. Ein Vorgehensmodell für den Einsatz von Rapid Prototyping. Utz, Wiss., 1999.

[6] A. Maelicke and H. M. Emrich. Vom Reiz der Sinne. VCH, Weinheim, 1990.

    [7] M. Romero, P. Perego, G. Andreoni, and F. Costa.

New strategies for technology products development in health care. New Trends in Technologies: Control, Management, Computational Intelligence and Network Systems. Sciyo, 2010.

[8] R. Sefelin, M. Tscheligi, and V. Giller. Paper prototyping - what is it good for?: A comparison of paper- and computer-based low-fidelity prototyping. In CHI '03 Extended Abstracts on Human Factors in Computing Systems, CHI EA '03, pages 778-779, New York, NY, USA, 2003. ACM.

[9] C. Snyder. Paper prototyping: The fast and easy way to design and refine user interfaces. Morgan Kaufmann, 2003.

[10] B. Strassmann. Fühlen Sie mal: Psychologen, Mediziner und Autohersteller entdecken einen neuen Wahrnehmungskanal: den Tastsinn. Und schon gibt es die ersten Labors fürs Haptik-Design. Die Zeit, 2003.

[11] K. Van Laerhoven, N. Villar, A. Schmidt, G. Kortuem, and H. Gellersen. Using an autonomous cube for basic navigation and input. In Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI '03, pages 203-210, New York, NY, USA, 2003. ACM.

[12] A. Wiethoff, H. Schneider, M. Rohs, A. Butz, and S. Greenberg. Sketch-a-TUI: Low cost prototyping of tangible interactions using cardboard and conductive ink. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, TEI '12, pages 309-312, New York, NY, USA, 2012. ACM.

[13] K. Willis, E. Brockmeyer, S. Hudson, and I. Poupyrev. Printed optics: 3D printing of embedded optical elements for interactive devices. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, pages 589-598. ACM, 2012.


    Ubiquitous Interactions in Urban Spaces

Thomas Bock
Universität Passau

    [email protected]

ABSTRACT

This paper gives a brief overview of how people can interact with public devices in urban spaces and summarizes the state of the art of research in urban interactions. First, there is a short comparison of different types of public displays. Afterwards, the content, size, privacy and location of public displays are characterized. Subsequently, a brief introduction to different interaction techniques is given, and the user experience of interacting with public displays is described. Most public displays are large because they are

meant to attract passersby to interact and to make it possible for many individuals or groups to interact together on the same screen simultaneously. The most important use cases are infotainment services, sending and editing photos, and playing games. Social acceptability, learnability and satisfaction are main factors for designing interactive public displays. Finally, a short overview is provided on how prototypical applications on public displays and the ubiquitous digital city will develop in the future.

Keywords

Ubiquitous computing, pervasive computing, digital city, social interaction, public displays.

1. INTRODUCTION

Nowadays, many people use the Internet on their mobile phones, so they can access it wherever they are at any time. In general, the Internet has changed people's lives during the last few decades. In the age of the Internet and the social web, cities attempt to stay attractive for their inhabitants and tourists. When events take place, the organizers try to use new technologies to attract people. Nowadays, people do not walk through a market place only to buy something; perhaps they also want to be entertained by interacting with public displays. One thing event organizers can do is install public displays that people can interact with. Such displays need not be limited to events; they can also be installed in parks or

Thomas Bock is a student at the University of Passau.

This research paper was written for the Advances in Embedded Interactive Systems (2014) technical report. Passau, Germany

shopping malls or somewhere else in the city throughout the year, so that they can enrich people's lives in urban spaces. It is possible to combine interaction on public displays with the use of smartphones, since many people use smart mobile phones for writing emails or playing games. As the screens of mobile phones are really small, people could perform these tasks on large public displays as well.

There are different techniques for using public displays and interacting with them, which are described in this paper. In the following, a brief overview of different kinds of public displays is given.

2. DIFFERENT KINDS OF PUBLIC DISPLAYS

Looking back into the history of public displays, the first public display was an installation for showing news on the facade of a building in New York in 1928 [9]. Since then, different types of public displays have been developed.

Non-interactive displays.

Most public displays are non-interactive animated

displays, like the first public display mentioned above. These displays are mostly used for advertising [9]. It is not possible for passersby to interact with them or manipulate their content. People passing such displays are attracted to watch videos or pictures in order to keep in mind the things promoted there. Besides these advertising displays, there are public displays situated at train stations or airports on which television can be watched, too. For example, the BBC installed big screens at urban locations in the United Kingdom in order to broadcast sport events during the 2012 Olympics in London [9].

Interactive displays.

By contrast, interactive public displays are displays that

passersby can interact with, for instance by using their hands or mobile devices. A selection of different interaction techniques is described in Section 4. The content of such displays can consist of infotainment services, photos, games and other things introduced below [11]. Just like non-interactive public displays, interactive displays can also contain advertising [1, 7].

Media facades.

Another kind of public display is the so-called media facade.

As defined by Häusler, a media facade is a facade into which dynamic communication elements are embedded [6].


Projectors or screens can be used to put light on a facade. Due to the size of facades, people have to keep a large distance from them in order to see the whole facade. Because of this, it is normally impossible to interact with media facades directly by touching them [3]. However, interaction using mobile devices at a certain distance is possible, as Boring et al. explained [3]. They used mobile phones with live video to change the color of particular parts of a media facade in Linz, Austria.

3. CHARACTERISTICS OF PUBLIC DISPLAYS

The main characteristics of public displays, such as their content, their size and where they can be located, are discussed in the following.

Content of public displays.

An important question related to public displays is: When

do people look at public displays? As Huang et al. observed in their study, videos attract people to look at the displays much more than (animated) text does, even if there are only a few sentences. One reason is that if passersby see an interesting video, they stop walking for a few seconds to watch the clip. If they have to read the content on the

screen, they are no longer attracted to look at the display and walk off [8]. Furthermore, Huang et al. found that it seems to be important that displays are at eye level in order to be noticed by passersby.

Large public displays.

The size of a public display plays a major role in being

noticed by passersby as well. Even though Huang et al. observed in their field study that people look more often at small displays than at large displays, because smaller ones are more private and looking at them feels less exposed [8], most interactive public displays are rather large. One reason for developing large public displays is that they are seen earlier by approaching passersby. Another reason is parallel interaction: if a group of persons or several individuals want to interact simultaneously on the same display, the display has to be big enough.

For example, the CityWall installation in Helsinki, Finland consists of 2.5 meter wide screens, as explained by Peltonen et al. [12], in order to make it possible for several users to interact with the display at the same time. Peltonen et al. observed in their research that seven persons tried to use a public display simultaneously as individuals by touching it. In addition, more than ten users gathered in front of the display to watch the interactions of the other users. During eight days, there were 512 interaction sessions with 1199 interactors and 202 persons viewing other people's interactions [12], so many of the interacting persons in this study used the screen simultaneously.

To support simultaneous interactions by many people, a media facade can be used for interaction as well, but media facades often cannot be used during daylight since they are not visible in bright light [5].

Privacy concerns while using public displays.

Entering personal information on public displays raises

    privacy concerns of the users. If someone posts messages onpublic displays, he or she does not know who really reads

Figure 1: UBI-hotspot in downtown Oulu [11].

the message. As Alt et al. have elucidated, there are three kinds of content viewers: 1. Unknown group: these viewers may not understand the message, but they do perceive it. 2. Known group: these persons know the sender of the message and understand the content. 3. Individual: there is only one recipient of the message [10]. Even though a message is

directed to the third group (an individual person), everyone passing the public screen can see the posted information. In the study of Holleis et al. with 6 participants, none of the subjects wanted to present private, intimate content on public displays, because passersby can notice the poster's activities [7]. Alt et al. also noted in their controlled lab study with 20 subjects (average age of 26.8 years) that very few of them did not want to exchange sensitive private data with a public display [2]. In this study the participants had to post digital classifieds on public displays. To do so, they had to type their phone number or email address on a public display, but some of them were afraid of so-called shoulder surfers. As a consequence, individual interactors preferred to post private content using their mobile phone, for example, where the email address or phone number is not relayed to the public display.

Location of public displays.

The content of public displays can usually be seen by everybody standing in front of the display. It is possible to situate public displays indoors, e.g., in tourist information centres or swimming halls, or outdoors in shopping malls, parks or on main roads [11], as shown in Figure 1. Public displays are often situated at places frequented by many people, as can be seen in the study on the CityWall installation by Peltonen et al. [12] or the so-called UBI-hotspot of Ojala et al. [11], in order to attract many passersby to start interacting with the public display.

There are three different categories of displays mentioned by Peltonen et al.: 1. Tabletop displays: people can stand or sit beside them to use the screen. These displays enable collaborative interaction, since many persons can stand around the table containing the display, which is accessible from many sides. 2. Ambient displays: these displays are situated in a space where they attract people who are on the move. 3. Large wall displays: these are displays used in public settings like the CityWall installation, where public displays are situated on buildings' walls [12].


    As a conclusion, public displays are mostly located at placeswhere everybody who passes the screens can interact withthem [12].

4. INTERACTION IN URBAN SPACES

There are many different methods by which one can interact

    with a public display. A selection of them is introduced inthe following.

Gestures and touches.

One interaction method is using gestures and touching

displays, as mentioned in the CityWall research by Peltonen et al. [12]. On the CityWall, which is a public display for organizing images downloaded from Flickr, people can resize or move pictures using one- or two-handed gestures. This kind of interaction allows every passerby to interact with the display, because he or she does not need a special device. Passersby only need to use their hands, so interaction takes place via direct manipulation [12].
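The two-handed resize gesture boils down to comparing finger distances: the picture is scaled by the ratio of the current to the initial distance between the two contact points. A minimal sketch, with helper names assumed rather than taken from the CityWall implementation:

```python
# Sketch: compute a scale factor for pinch-to-resize from two touch points,
# as used conceptually on multi-touch walls like CityWall. The helper names
# are assumptions for illustration, not the installation's actual API.
import math

Point = tuple[float, float]

def pinch_scale(start: tuple[Point, Point], now: tuple[Point, Point]) -> float:
    """Ratio of current to initial finger distance; >1 grows, <1 shrinks."""
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(*now) / dist(*start)

# Fingers start 100 px apart and end up 150 px apart: picture grows by 1.5x.
print(pinch_scale(((0, 0), (100, 0)), ((0, 0), (120, 90))))  # 1.5
```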

Mobile phone interaction.

Another interaction technique is using a mobile phone.

Users can send messages in order to make them public; for example, they can install an Android app in order to send

digital classifieds to several displays, as explained by Alt et al. [2]. Thus, one can manipulate the content of a public display through a mobile phone. As mentioned by Ojala et al., mobile phones can also be used to upload photos to Flickr with a certain tag, like "Oulu", so that they can then be edited on a public display [11].

Flashlight interaction.

So-called flashlight interaction is a light-based approach

to interaction. By using the flashlight of a mobile phone camera, passersby can control a mouse pointer on a publicly situated display. The only thing they have to do is move the mobile phone in front of the display, where an installed camera captures the flashlight. Instead of a mobile phone camera, a laser pointer or a real flashlight can be used to manipulate the display's content as well [13].
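On the camera side, flashlight interaction amounts to finding the bright spot in each frame and mapping its position to pointer coordinates. A minimal sketch that takes the intensity-weighted centroid of over-threshold pixels in a grayscale frame; the threshold value and frame layout are assumptions, not details from [13]:

```python
# Sketch: locate a flashlight spot in a grayscale camera frame by taking the
# intensity-weighted centroid of pixels above a brightness threshold, then
# use that position as the mouse-pointer location on the display.
# The threshold and row-major frame layout are illustrative assumptions.

THRESHOLD = 200  # pixels brighter than this count as "flashlight"

def spot_centroid(frame):
    """frame[row][col] holds 0..255; returns (x, y) of the spot or None."""
    total = wx = wy = 0.0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= THRESHOLD:
                total += value
                wx += x * value
                wy += y * value
    if total == 0.0:
        return None  # no bright spot found: leave the pointer where it is
    return (wx / total, wy / total)

frame = [[0] * 5 for _ in range(5)]
frame[1][3] = 255            # a single bright pixel at column 3, row 1
print(spot_centroid(frame))  # (3.0, 1.0)
```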

Camera-based interaction.

Besides the conscious interactions explained above, it is also

possible for passersby to interact with a public display without even noticing it. For instance, the UBI-hotspot 1.0 uses overhead cameras capable of face recognition [11]. After detecting the face of a passerby, the content of the public display changes automatically without direct interaction; perhaps another application or other images are shown then. By changing the content of the display, passersby should be attracted to interact by touching. Face detection by a camera is an interaction method because the passersby first have to walk into the area supervised by the overhead camera.
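This kind of implicit interaction can be sketched as a simple event handler: a face detector reports how many faces are in view, and the display switches its content accordingly. The class and content labels below are invented for illustration, not the UBI-hotspot's actual software:

```python
# Sketch: implicit camera-based interaction. When the overhead camera's face
# detector reports faces in the supervised area, the hotspot switches from an
# idle loop to an interactive application to invite touch interaction.
# The class name and content labels are illustrative assumptions.

class HotspotDisplay:
    def __init__(self):
        self.content = "idle advertisement loop"

    def on_faces(self, face_count: int) -> None:
        # Passersby trigger a content change without touching anything.
        if face_count > 0:
            self.content = "interactive city map"
        else:
            self.content = "idle advertisement loop"

display = HotspotDisplay()
display.on_faces(2)      # the camera detects two approaching passersby
print(display.content)   # interactive city map
display.on_faces(0)      # everyone has walked away again
print(display.content)   # idle advertisement loop
```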

Unconventional interaction.

The interactions mentioned so far are conventional and could be

used in urban spaces in general. However, there are some unconventional kinds of interaction, too. One example is a tangible user interface (TUI), that is, physical objects are used for interacting. As Ventä-Olkkonen et al. have proposed, people can pull on ropes coming out of a display,

which contains a world map, at a market square. By pulling on the ropes belonging to other cities, people can interact with people all over the world whose cities have such installations, too. In the city to which the pulled rope belongs, the rope belonging to the city where the interactor stands is drawn back into the map [14]. This kind of haptic interaction should attract passersby to physically do something with an installation situated in an urban space.

Another unconventional kind of urban interaction is the so-called SMSlingshot, explained by Fischer and Hornecker: the device a user needs for interaction is a special slingshot containing an input field and keys for writing a message. When the rubber band of this slingshot is pulled, a laser beam is shot at the media facade targeted by the slingshot. When the user lets go of the rubber band, the typed virtual message is sent to the facade and can be seen by everyone watching [4].

After this short overview of how to interact with public displays or facades, the user experience of urban interactions is discussed in the following.

5. USER EXPERIENCE

Exploring the user experience (UX) of urban

interactions involves probing how people cope with interaction techniques and whether they accept them.

5.1 Social Acceptability

The social acceptability of public displays is rather low,

because many passersby are not used to interacting with public screens: they may notice the message on a screen, but usually do not know that they can interact with the display [11]. Ojala et al. found that about two thirds of 125 persons would interact with the screens in the future; however, of the 927 questionnaires submitted, most did not answer this question [11]. Once someone has noticed that he or she can interact with the public display, other passersby observe the interactor and afterwards try to interact with the display as well [12]. Teamwork is an

important element of the social acceptability of urban interaction. About 72% of the CityWall users in Helsinki, Finland used the touch screens together with other people. Seeing other people interact with the displays attracted more users to interact [12]. In this case, publicly situated displays encourage social life in an urban context, since passersby interact with the same display together with strangers. As a consequence, researchers suggest that parallel use is relevant for social acceptance [12].

The displays were mostly frequented in the evening or on weekends, when many people go shopping; that is, they are used by leisure-time users [12].

5.2 Learnability

Children and young people are often familiar with using

new technology like touch screens because they often use smartphones with touch screens in everyday life. Since they are familiar with such interaction techniques, they can easily transfer their knowledge to public displays [1]. For elderly people it may not be as easy: they often do not know how to use touch screens or new interaction technologies. Even if they know how to interact, they might be afraid of doing something wrong [1]. In general it is easy to learn

  • 8/10/2019 Prototyping for the Digital City

    16/37

to use touch screens through observation of others: people can watch other people interacting with the display and then copy the gestures they have seen [12].

5.3 Use Cases

There are many different use cases for public interactive

    displays. Some of the most commonly used applications arepresented below.

Infotainment services.

As the statistics of the UBI-hotspot 1.0 in Oulu, Finland show, most of the display users wanted to display city maps or read information about the weather. Furthermore, many people were interested in reading up-to-date news from the local newspaper of Oulu on the display [11].

Sending and adapting photos.

Another large share of the hotspot users wanted to upload

photos and send them to the public screens [11] in order to show their images to others. Most users simply played with the interface by rotating their pictures [12]. Downloading photos and moving them around is a major element of using large multi-touch displays. Sending a picture to the screen (uploading) seems to be easy, too, e.g., via a dedicated key for uploading to Flickr [12]. In many installations it is also possible to use mobile phones or email to upload photos or send them to someone else, e.g., as a postcard [11].

Fun and games.

Besides useful information and working on photos, public

displays offer the opportunity to have fun. Primarily children enjoy playing public interactive games; a fifth up to a quarter of the usage of the public screens consists of interactive games like Hangman [11]. Not only children play these games; many adults take part as well. Fun and Games was the most popular service category of the UBI-hotspots in Oulu, Finland, with a 31% share of the clicks [11].

Public displays are not the only place for gaming; lots of media

facades are installed in order to allow many people to take part in a game at the same time. Watching others play games without being interactive oneself is also an important aspect [4]. Thus, playing games on public displays or media facades can become a social event.

Finally, there are many more use cases for public displays, like watching videos or obtaining help in various ways.

5.4 Satisfaction

Most of the users who interacted with a public display

are content with the use cases of the screens. This was the outcome of 81 interviews in which people could address their most important issues concerning the UBI-hotspot in Oulu, Finland [11]. Many passersby enjoyed interacting and had new ideas for the displays, e.g., new games. Most people interested in interacting would like to have some more applications on the public displays distributed all over the city. For instance, some users wished for an address-search feature [11]. One problem mentioned by a few interviewees was that the content of the displays may not be visible when sunlight falls on the screens [11]. To solve these problems and develop new features, research in exploring the digital city has to go on.

6. PROTOTYPING PUBLIC DISPLAYS

In order to develop public interaction frameworks that

can be used for installing public displays or media facades in cities all over the world, prototypes are required prior to researching the UX design and selecting the right hardware and software.

There are different methods of implementing prototypes. One of these methods is paper prototyping. It is used when developing public displays in controlled lab settings, as it

is proposed by Holleis et al. [7]. Ventä-Olkkonen et al. used a paper prototype, more precisely a cardboard prototype, to explore their tangible user interface on a market square [14].

Novel applications for public displays can also be implemented, deployed and evaluated in a real-world setting. For example, a few applications were explored on the public displays in Oulu, Finland, where the Open Ubiquitous City Challenge (UBI Challenge) took place multiple times [9, 11, 1]. Services were provided to the citizens, and prototype applications could be assessed by the citizens of Oulu.

Since media facades are often not active during daylight, the time for testing applications on real facades during development is restricted [5]. In order to prototype media facades and their applications, Gehring et al. developed a media facade toolkit in Java. Their toolkit is flexible and generalized, that is, interactive applications run on media facades with different form factors, sizes and capabilities [5].
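The generalization idea behind such a toolkit can be illustrated with a tiny abstraction layer: applications draw onto a logical pixel grid, while each facade supplies its own resolution and a mapping from logical pixels to physical lamps. The sketch below uses invented Python names and is not the Java toolkit's API:

```python
# Sketch of a facade abstraction in the spirit of a generalized media-facade
# toolkit: apps render to a logical grid; each facade supplies its own
# resolution and a mapping from logical pixels to physical lamp indices.
# Class names and the row-major mapping are invented for illustration.

class Facade:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.frame = [0] * (width * height)  # one brightness value per lamp

    def lamp_index(self, x: int, y: int) -> int:
        # Default mapping: row-major numbering; a real facade with an
        # irregular physical layout would override this method.
        return y * self.width + x

    def set_pixel(self, x: int, y: int, brightness: int) -> None:
        if 0 <= x < self.width and 0 <= y < self.height:
            self.frame[self.lamp_index(x, y)] = brightness

def draw_diagonal(facade: Facade) -> None:
    """An application that works on any facade, whatever its size."""
    for i in range(min(facade.width, facade.height)):
        facade.set_pixel(i, i, 255)

small, wide = Facade(4, 4), Facade(8, 3)
draw_diagonal(small)
draw_diagonal(wide)
print(sum(v == 255 for v in small.frame),
      sum(v == 255 for v in wide.frame))  # 4 lit lamps vs. 3 lit lamps
```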

7. CONCLUSION & INTERPRETATION

Making cities digital involves discussing public displays.

On the one hand, there are displays that do not require interaction, for example public screens that show commercials. On the other hand, there are public displays that invite interaction. The latter are mostly rather large in order to attract passersby to interact with them. There are many different interaction techniques, as discussed above, like touching screens, using mobile phones, being recognized by cameras, or unconventional methods. As shown in the literature, it is quite easy to learn interaction techniques by observing other persons interacting with a screen. The use cases of interaction in urban spaces range from infotainment services and sending and manipulating photos to fun and games. Many of the people involved in interacting are children or young adults; once they know how to interact, social acceptability is quite high. One issue associated with public displays is privacy, because everyone standing in front of such a display can see what other people do on it. Furthermore, many public displays are located outdoors, where many people pass by, in order to motivate them to interact with the display.

    Considering the research discussed above, one can conclude that public interaction in urban spaces plays a major role in the development of digital cities. The services provided by interactive public displays can enrich people's lives, either through entertainment (e.g., games) or by showing them useful information. It may be possible, for instance, to provide map applications containing information about doctors and supermarkets in order to support citizens and tourists. There may be further concepts for interacting with public displays. Thus, ubiquitous technologies are likely to remain under constant research in the domain of digital cities.


    8. REFERENCES

    [1] F. Alt, T. Kubitza, D. Bial, F. Zaidan, M. Ortel, B. Zurmaar, T. Lewen, A. S. Shirazi, and A. Schmidt. Digifieds: Insights into deploying digital public notice areas in the wild. In Proceedings of the 10th International Conference on Mobile and Ubiquitous Multimedia, MUM '11, pages 165-174, New York, NY, USA, 2011. ACM.

    [2] F. Alt, A. S. Shirazi, T. Kubitza, and A. Schmidt. Interaction techniques for creating and exchanging content with public displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 1709-1718, New York, NY, USA, 2013. ACM.

    [3] S. Boring, S. Gehring, A. Wiethoff, A. M. Blöckner, J. Schöning, and A. Butz. Multi-user interaction on media facades through live video on mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pages 2721-2724, New York, NY, USA, 2011. ACM.

    [4] P. T. Fischer and E. Hornecker. Urban HCI: Spatial aspects in the design of shared encounters for media facades. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pages 307-316, New York, NY, USA, 2012. ACM.

    [5] S. Gehring, E. Hartz, M. Löchtefeld, and A. Krüger. The media facade toolkit: Prototyping and simulating interaction with media facades. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '13, pages 763-772, New York, NY, USA, 2013. ACM.

    [6] M. Häusler. Media facades: History, technology, content. Ludwigsburg, Av Edition, 2009.

    [7] P. Holleis, E. Rukzio, F. Otto, and A. Schmidt. Privacy and curiosity in mobile interactions with public displays. In Workshop on Mobile Spatial Interaction, CHI '07, 2007.

    [8] E. M. Huang, A. Koster, and J. Borchers. Overcoming assumptions and uncovering practices: When does the public really look at public displays? In Proceedings of the 6th International Conference on Pervasive Computing, Pervasive '08, pages 228-243, Berlin, Heidelberg, 2008. Springer-Verlag.

    [9] V. Kostakos and T. Ojala. Public displays invade urban spaces. IEEE Pervasive Computing, 12(1):8-13, Jan. 2013.

    [10] N. Memarovic, M. Langheinrich, and F. Alt. The interacting places framework: Conceptualizing public display applications that promote community interaction and place awareness. In Proceedings of the 2012 International Symposium on Pervasive Displays, PerDis '12, pages 7:1-7:6, New York, NY, USA, 2012. ACM.

    [11] T. Ojala, H. Kukka, T. Linden, T. Heikkinen, M. Jurmu, S. Hosio, and F. Kruger. UBI-hotspot 1.0: Large-scale long-term deployment of interactive public displays in a city center. In Proceedings of the 2010 Fifth International Conference on Internet and Web Applications and Services, ICIW '10, pages 285-294, Washington, DC, USA, 2010. IEEE Computer Society.

    [12] P. Peltonen, E. Kurvinen, A. Salovaara, G. Jacucci, T. Ilmonen, J. Evans, A. Oulasvirta, and P. Saarikko. It's mine, don't touch!: Interactions at a large multi-touch display in a city centre. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 1285-1294, New York, NY, USA, 2008. ACM.

    [13] A. S. Shirazi, C. Winkler, and A. Schmidt. Flashlight interaction: A study on mobile phone interaction techniques with large displays. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '09, pages 93:1-93:2, New York, NY, USA, 2009. ACM.

    [14] L. Ventä-Olkkonen, M. Kinnula, G. Dean, T. Stockinger, and C. Zúñiga. Who's there?: Experience-driven design of urban interaction using a tangible user interface. In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM '13, pages 49:1-49:2, New York, NY, USA, 2013. ACM.


    Prototyping Toolkits

    Philipp Neumeier
    Universität Passau

    [email protected]

    ABSTRACT
    Rapid prototyping is a very important part of early development, allowing a product to be tested without investing many resources. To hasten this process and to make it available to designers with little or no specialized knowledge, many prototyping toolkits have been developed. There is a multitude of these toolkits, each useful in different situations. This paper's goal is to categorize them and to present a method for comparing toolkits and determining which one is most useful for a given project.

    Keywords
    visual languages, multimodal systems, prototyping, user interface toolkits

    1. INTRODUCTION

    In the early stages of development, major changes to the system can be made without spending many resources. Hence, it is a suitable point in time to try out different design choices. But for any form of sophisticated evaluation, a graspable and tangible demonstrator of the right form factor, and no substitute, is required [21]. Nevertheless, making such prototypes still requires expert knowledge and time. As prototypes are only an early demonstrator of the final product, you want designers with little expert knowledge to make them in little time (1-2 hours in the best case [16]) in order to evaluate many different design possibilities. To achieve this, prototyping toolkits are developed. They should encapsulate the required expert knowledge, as recommended in [16]; using inexpensive materials and devices as well as easily accessible visual programming languages is obviously favorable. There are many different toolkits, each with its specific strengths and weaknesses.

    This paper presents an overview of notable toolkits and a method for comparing them, to answer the question which toolkit suits which kind of project. It is structured as follows:

    Philipp Neumeier is a student at the University of Passau.

    This research paper was written for the Advances in Embedded Interactive Systems (2014) technical report.Passau, Germany

    In section 2, the available prototyping toolkits are categorized and each category is explained with examples.

    In section 3, the prototyping toolkits for multimodal systems are used to show how toolkits can be compared with each other to determine which is the most useful for the current project.

    2. PROTOTYPING TOOLKITS

    The following section gives an overview of a subset of available prototyping toolkits and categorizes them. Examples are given for each category to explain what these toolkits have in common and where they differ. Because of the sheer number of toolkits, and because some of them target very specific situations, this overview does not cover all available toolkits. However, it shall provide a broad overview.

    Prototyping toolkits can be put in one of three major categories:

    1. Prototype the input and output of a physical device

    2. Prototype software

    3. Prototype a form of input and output other than the common keyboard, mouse, and screen

    2.1 Hardware

    Hardware prototyping toolkits are used to build prototypes of physical devices by making models out of cheap materials and testing different inputs through sensors, hardware configurations, and outputs. There are also toolkits for very specific hardware configurations, like the media facade toolkit [15].

    2.1.1 Physical Computing Platform

    Single-board micro-controllers are used to control external devices by taking digital input from sensors or other devices, processing it, and providing digital output. Hence, they can replace a PC in this situation, but can be acquired at a much lower cost and tend to be less complex to program. One benefit of these boards is that a developer does not have to create a specific micro-controller first, but can concentrate on the input, output, and the application itself. Well-known ones are Arduino [18, 1], BeagleBoard [4] and Raspberry Pi [6, 29]. The hardware and software of all of them is open-source. Arduino also features an IDE based on Processing.
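The sense-compute-actuate loop that such boards run can be illustrated with a small simulation. This is a hedged sketch only: the sensor, threshold, and LED names below are invented stand-ins, not a real board's API.

```python
# Minimal sketch of the read -> compute -> write loop of a single-board
# micro-controller. The light sensor and LED are simulated Python stubs
# (hypothetical names), not real hardware calls.

def read_light_sensor(samples):
    """Simulated analog read: returns the next raw value (0-1023)."""
    return samples.pop(0)

def led_state_for(raw_value, threshold=512):
    """Compute step: switch an LED on when it gets dark."""
    return "ON" if raw_value < threshold else "OFF"

def control_loop(sensor_samples):
    """One pass over the input stream: read, compute, 'write' the output."""
    outputs = []
    while sensor_samples:
        raw = read_light_sensor(sensor_samples)
        outputs.append(led_state_for(raw))
    return outputs

print(control_loop([900, 300, 512, 10]))  # ['OFF', 'ON', 'OFF', 'ON']
```

On a real board the same structure appears as a setup phase plus an endless loop that polls pins, which is what lets the developer focus on input, output, and application logic rather than the controller itself.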

    2.1.2 Hardware User Interfaces

    Toolkits for hardware user interfaces always come with a means to gather sensor data from prototype devices and transfer it to a PC in an easily processable way. For example, the


    Figure 1: Navigation controller with the wireless buttons placed in a variety of locations around the form.

    Calder Toolkit [24] and IE5 [10] come with different sensors like buttons, RFID tags, or a tilt sensor, which are attached to a sketch foam model or to already existing devices in order to rapidly prototype different kinds of interactions with hardware user interfaces like game controllers or other handhelds. An example for the Calder Toolkit is shown in Figure 1. Toolkits can also be more focused on evaluating the gathered data. d.Tools [17] and iStuff [2], for example, provide IDEs using a visual programming language to create and test different interaction scenarios. iStuff provides special premade devices for testing, whereas for d.Tools a prototype device has to be made first.

    2.2 Software

    Another possibility is software prototyping. These toolkits often provide IDEs which use visual programming languages to facilitate software development.

    2.2.1 Context-Aware Applications

    Context-aware applications take their context of use into account, for example where, when, and by whom they are used. iCAP [32] is a toolkit that allows designers to sketch input and output devices, design interaction rules with these devices, and prototype these rules in a simulated or real context-aware environment without writing any code.
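The rule idea behind such tools can be sketched as data-driven if-then bindings: because the rules themselves are data, a designer could author them without touching application code. The conditions, context fields, and action names below are invented examples, not iCAP's actual vocabulary.

```python
# Hedged sketch of context-aware interaction rules: each rule binds a
# condition on the current context to an output action. All field and
# action names here are illustrative.

rules = [
    # (condition over the context dict, action string)
    (lambda ctx: ctx["location"] == "home" and ctx["time"] > 22, "dim lights"),
    (lambda ctx: ctx["noise_level"] > 80, "mute notifications"),
]

def evaluate(context):
    """Return every action whose condition holds in the given context."""
    return [action for condition, action in rules if condition(context)]

print(evaluate({"location": "home", "time": 23, "noise_level": 40}))
# ['dim lights']
```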

    Figure 2: Architecture of a multimodal system.

    2.2.2 GUI Prototyping

    The software prototyping toolkits presented here use an IDE that allows designers to build a GUI by drag & drop of already existing GUI elements. It also allows them to simulate GUI interaction, effectively yielding an interactive GUI. Examples are Balsamiq [3], Fluid UI [14], and Proto.io [31].

    2.3 Multimodal Systems

    In the general sense, a multimodal system supports communication with the user through different modalities such as voice, gesture, and typing [28, 7]. Hence, multiple such modalities can be used in parallel or sequentially to issue one command, contrary to a traditional windows, icons, menus, pointer (WIMP) system. For a system to accept such input, it must be able to evaluate multiple events of possibly different modalities to understand the intent of the user and generate the proper output, which can also be in different modalities. This output could for example consist of audio, video, or a synthesized voice.

    As shown in Figure 2, a multimodal system comprises multiple parts: The recognizers continuously sense and decode user input like speech and hand gestures. After decoding a stream of inputs, they inform the fusion engine, which merges the inputs of all recognizers in order to interpret the user's request. The dialog manager is notified of this request and decides how to handle it depending on the context, especially the status of the human-machine dialog. After choosing the output, it sends this task to the fission component, which chooses synthesizers (computer programs that control rendering devices) to create the output. The context of the user's request is also considered. This information is stored in the data storages called knowledge sources.
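The data flow between these components can be sketched as a toy pipeline. The component names follow the text, while the example request and its handling are invented purely for illustration.

```python
# Toy walk through the Figure-2 architecture: recognizers emit decoded
# events, the fusion engine merges them into one request, the dialog
# manager decides on a response using stored context, and the fission
# component picks a synthesizer (here: plain text) to render it.

def recognizers(raw_inputs):
    # Each recognizer decodes one raw signal into a typed event.
    return [{"modality": m, "token": t} for m, t in raw_inputs]

def fusion_engine(events):
    # Merge events from all modalities into a single interpreted request.
    return " ".join(e["token"] for e in events)

def dialog_manager(request, context):
    # Decide how to respond, taking the knowledge sources into account.
    if request == "put that there" and "pointed_at" in context:
        return ("move", context["pointed_at"])
    return ("unknown", None)

def fission_component(decision):
    # Render the chosen output with a (trivial) synthesizer.
    action, target = decision
    return f"{action}:{target}"

events = recognizers([("speech", "put"), ("speech", "that"), ("gesture", "there")])
request = fusion_engine(events)
print(fission_component(dialog_manager(request, {"pointed_at": "slot_3"})))
# move:slot_3
```

The classic "put that there" combination of speech and a pointing gesture is exactly the kind of parallel multimodal command the text describes.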

    2.3.1 Multimodal Interfaces

    A toolkit for a multimodal interface consists of a sensor configuration and a program to evaluate the sensor data. This program identifies certain events and provides an API so that other programs can access this form of input.

    A very important type of multimodal interface are multi-touch and tangible user interfaces (TUIs). One of these toolkits is TUIO AS3 [25]. It firstly provides a multi-touch and TUI interaction API, for example to enhance any UI element with multi-touch interactions for dragging, rotating, and scaling. Secondly, complex multi-touch or TUI interactions can be defined with a simple grammar by using a mouse and keyboard. Another toolkit for TUIs is Papier-Mâché [20].


    Figure 3: Left: proxemic relationships between entities, e.g., orientation, distance, pointing rays; Right: visualizing these relationships in the Proximity Toolkit's visual monitoring tool.

    Figure 4: Architecture of a toolkit for rapid prototyping of multimodal systems.

    It uses a camera to check for pre-defined interactions with paper and provides an API for these events.

    There are also toolkits for other forms of input, like the Proximity Toolkit [26] shown in Figure 3. It evaluates distance and orientation data, for example from objects in a room, provides an API to access these, and shows the objects and the data in 3D graphics to ease understanding and run tests.
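The proxemic relationships the toolkit tracks boil down to pairwise geometry. A minimal sketch of distance and orientation between two tracked entities in 2D (not the toolkit's real API, which works in 3D with richer data) could look as follows:

```python
# Sketch of the proxemic quantities named in the text: the distance
# between two entities and the bearing of one relative to the other.
# Entity positions are plain (x, y) tuples; the real toolkit exposes
# far more (identity, motion, pointing rays).
import math

def distance(a, b):
    """Euclidean distance between entities a and b."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def bearing_deg(a, b):
    """Angle from entity a to entity b, in degrees; 0 = +x axis."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

person = (0.0, 0.0)
display = (3.0, 4.0)
print(round(distance(person, display), 1))     # 5.0
print(round(bearing_deg(person, display), 1))  # 53.1
```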

    2.3.2 Rapid Prototyping of Multimodal Systems

    Toolkits for rapid prototyping of multimodal systems extend already existing programs with the capability to evaluate multimodal input and support multimodal output. Their architecture is shown in Figure 4. They enhance an already existing application, herein called the client application, which must be developed in a textual programming language without support of the toolkit, and which must implement the functionalities of the system. The toolkits consist of a framework and a graphical editor. With the graphical editor, visual models can be created which specify the multimodal commands [11] that should be recognized by the system. In reaction, subroutines of the client program are called to compute the output and handle the output devices. Some of these commands are pre-programmed in the framework; others have to be implemented by the client application. Examples of such toolkits, what they have in common, and where they differ are explained in section 3.

    3. EXEMPLARY COMPARISON OF TOOLKITS

    Even if you know which kind of toolkit you need, you still have to choose from a multitude of toolkits, each of which is better than the others in certain situations. But how do you determine which one facilitates development the most? There is little scientific work on comparing different toolkits and rating them in some way. Although toolkits make prototyping easier, you have to look into them, and ideally have experience with them, to determine their usefulness and the extent of their capabilities. This goes against the fact that prototyping toolkits are supposed to hasten development.

    To provide hints how this could be done, a method for comparing toolkits for rapid prototyping of multimodal systems is used in this section. This method is described in detail in [9, 8]. Toolkits for rapid prototyping of multimodal systems were chosen here because voice commands and other post-WIMP interaction types are becoming more important for computer devices like tablets and laptops [30], and also because of the recent release of the Xbox One with its Kinect device [19].

    3.1 Criteria

    As stated in section 2.3.2, these toolkits use their graphical editors and framework to handle some tasks of a multimodal system. The support of a toolkit is now measured by how many of the components of a multimodal system the toolkit can handle without requiring the client program to implement them. For example, if a toolkit's framework handles the components recognizers and fusion engine, its support would be {recognizers, fusion engine}. All the other components must be implemented by the client application; in this example, these would be the components {dialog manager, fission component, synthesizers, knowledge sources}. The subsequent section takes a look at available toolkits:
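This support-set criterion maps naturally onto set arithmetic: whatever components a toolkit's framework does not cover is left to the client application. The sketch below uses a hypothetical support set, not measured results from [9, 8].

```python
# The support-set criterion from Section 3.1, expressed with Python sets.
# The six components follow the multimodal-system architecture in the text;
# the example support set is a placeholder, not an evaluated toolkit.

COMPONENTS = {"recognizers", "fusion engine", "dialog-manager",
              "fission component", "synthesizers", "knowledge sources"}

def client_must_implement(toolkit_support):
    """Everything the toolkit does not handle falls to the client."""
    return COMPONENTS - toolkit_support

support = {"recognizers", "fusion engine"}
print(sorted(client_must_implement(support)))
# ['dialog-manager', 'fission component', 'knowledge sources', 'synthesizers']
```

Comparing toolkits then reduces to comparing their support sets, e.g. by size or by whether they cover the components a given project cares about.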


    Interaction, TEI '09, pages 363-366, New York, NY, USA, 2009. ACM.

    [11] J. De Boeck, D. Vanacken, C. Raymaekers, and K. Coninx. High-level modeling of multimodal interaction techniques using NiMMiT. 2007.

    [12] P. Dragicevic and J. Fekete. ICon: Input device selection and interaction configuration. In Companion Proceedings of the 15th ACM Symposium on User Interface Software & Technology (UIST '02), Paris, France, pages 27-30, 2002.

    [13] B. Dumas, D. Lalanne, and R. Ingold. Description languages for multimodal interaction: A set of guidelines and its illustration with SMUIML. Journal on Multimodal User Interfaces, 3(3):237-247, 2010.

    [14] Fluid UI. https://fluidui.com.

    [15] S. Gehring, E. Hartz, M. Löchtefeld, and A. Krüger. The media facade toolkit: Prototyping and simulating interaction with media facades. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '13, pages 763-772, New York, NY, USA, 2013. ACM.

    [16] S. Gill, G. Loudon, and D. Walker. Designing a design tool: Working with industry to create an information appliance design methodology. Journal of Design Research, 7(2):97-119, 2008.

    [17] B. Hartmann, S. R. Klemmer, M. Bernstein, L. Abdulla, B. Burr, A. Robinson-Mosher, and J. Gee. Reflective physical prototyping through integrated design, test, and analysis. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, UIST '06, pages 299-308, New York, NY, USA, 2006. ACM.

    [18] B. Kaufmann and L. Buechley. Amarino: A toolkit for the rapid prototyping of mobile ubiquitous computing. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '10, pages 291-298, New York, NY, USA, 2010. ACM.

    [19] Kinect. http://www.xbox.com/en-GB/xbox-one/innovation (visited December 2013).

    [20] S. R. Klemmer, J. Li, J. Lin, and J. Landay. Papier-Mâché: Toolkit support for tangible interaction. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (UIST 2003), Vancouver, British Columbia, Canada, 2003.

    [21] M. Kranz and A. Schmidt. Prototyping smart objects for ubiquitous computing. In Workshop on Smart Object Systems, 7th International Conference on Ubiquitous Computing (Ubicomp), 2005.

    [22] W. König, R. Rädle, and H. Reiterer. Interactive design of multimodal user interfaces. Journal on Multimodal User Interfaces, 3(3):197-213, 2010.

    [23] J.-Y. L. Lawson, A.-A. Al-Akkad, J. Vanderdonckt, and B. Macq. An open source workbench for prototyping multimodal interactions based on off-the-shelf heterogeneous components. In Proceedings of the 1st ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '09, pages 245-254, New York, NY, USA, 2009. ACM.

    [24] J. C. Lee, D. Avrahami, S. E. Hudson, J. Forlizzi, P. H. Dietz, and D. Leigh. The Calder toolkit: Wired and wireless components for rapidly prototyping interactive devices. In Proceedings of the 5th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS '04, pages 167-175, New York, NY, USA, 2004. ACM.

    [25] J. Luderschmidt, I. Bauer, N. Haubner, S. Lehmann, R. Dörner, and U. Schwanecke. TUIO AS3: A multi-touch and tangible user interface rapid prototyping toolkit for tabletop interaction. In Sensyble Workshop, 2010.

    [26] N. Marquardt, R. Diaz-Marino, S. Boring, and S. Greenberg. The Proximity Toolkit: Prototyping proxemic interactions in ubiquitous computing ecologies. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pages 315-326, New York, NY, USA, 2011. ACM.

    [27] D. Navarre, P. Palanque, J. Ladry, and E. Barboni. ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability. In ACM Transactions on Computer-Human Interaction 16, 4, 2009.

    [28] L. Nigay and J. Coutaz. A design space for multimodal systems: Concurrent processing and data fusion. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, CHI '93, pages 172-178, New York, NY, USA, 1993. ACM.

    [29] Raspberry Pi. http://www.raspberrypi.org.

    [30] Intel Free Press. Talk with Your Device: The Future of Voice Control. http://www.intelfreepress.com/news/talk-with-your-device-the-future-of-voice-control/5335 (visited December 2013).

    [31] Proto.io. http://proto.io.

    [32] T. Sohn and A. Dey. iCAP: An informal tool for interactive prototyping of context-aware applications. In CHI '03 Extended Abstracts on Human Factors in Computing Systems, CHI EA '03, pages 974-975, New York, NY, USA, 2003. ACM.


    Urban Context Awareness

    Sebastian Scheffel
    Universität Passau

    [email protected]

    ABSTRACT
    As the number of mobile devices, especially smartphones, has increased in the last years, new types of applications have been developed which use information from their environment. Using different contexts allows applications to change their behaviour accordingly. In this paper, we will look at the different smartphone sensors, show existing approaches to derive context from measured sensor data, and outline existing and possible applications of context awareness in a digital city. We will conclude with a short outlook on the possible development of context-aware applications in the future.

    KeywordsInternet of Things, Smart Building / Smart Home, Intelli-gent Environments, Mobile Devices

    1. INTRODUCTION

    In the last few years, smartphones have become a part of our daily lives. There is an application named Google Now; the "Now" stands for real-time information. The application offers many context-sensitive features. One of them tells the user when to leave for an appointment. At first sight, this seems to be a very common and easy feature. But behind the surface, a lot of things are used: the application is only possible because it knows the user's location, the information on the appointment (time and location), the traffic situation on the route, which vehicle the user usually uses, and the current time. Each of these items is a different context. Applications similar to this one would not be possible without this context information.

    As mobile devices (especially smartphones) are used in environments with frequently changing contexts and by many people every day, we will concentrate on them in the following. Therefore, a definition of what context awareness means and what mobile devices are is given in the next section.

    Sebastian Scheffel is a student at the University of Passau.

    This research paper was written for the Advances in Embedded Interactive Systems (2014) technical report.Passau, Germany

    2. DEFINITIONS

    2.1 Context Awareness

    The term context awareness was first used by Schilit and Theimer in 1994 [14]. They described context-aware computing as software that adapts according to its location of use, the collection of nearby people and objects, as well as changes to those objects over time.

    In 1999, Schmidt et al. described context as interrelated conditions in which something exists or occurs [15]. They divided context into two main categories: human factors and physical environment. Both categories are supplemented with additional context information by history. The human factors are divided into three subcategories: information on the user, on the social environment, and on the user's tasks. The physical environment consists of location, infrastructure, and physical conditions like level of noise, brightness, etc.

    In the same year, Dey and Abowd defined context as any information that can be us