
Enhancing the user experience with new interaction techniques for interactive television

Robert Eriksson
Filip Sjögren

January 29, 2007

Master's Thesis in Computing Science, 2*20 credits
Supervisor at CS-UmU: Håkan Gulliksson
Supervisor at North Kingdom: David Eriksson
Examiner: Per Lindström

UMEÅ UNIVERSITY
DEPARTMENT OF COMPUTING SCIENCE
SE-901 87 UMEÅ
SWEDEN



Abstract

This master's thesis concerns user interactivity within interactive television, more specifically how new interaction techniques can improve the user experience for interactive television. A literature study was conducted to provide knowledge about the users and the available means for generating input and output. Based on the findings from the study, a prototype for a remote control for interactive television was developed. The goal was that the remote control should enhance the user experience and be enjoyable, rather than strictly effective to use. The prototype was designed using an iterative design process with continuous user tests. General design frameworks that are described in the paper were also used in the development of the prototype.

The last version of the remote control used tilt movements as input, which was appreciated by the persons who participated in the user tests. The use of this new interaction technique proved to make the television viewing experience enjoyable, as well as user friendly. By also integrating other interaction techniques described in this paper into the interaction with television, there is potential to further enhance the user experience.

Keywords

Interaction design, usability design, HCI (Human-Computer Interaction), digital TV, interactive TV, GUIs (Graphical User Interfaces), speech input, gesture input, tilt input, iterative design process, evaluation.


Contents

1 Introduction

2 Task Description
  2.1 Background
  2.2 Problem
  2.3 Purpose
  2.4 Methods
    2.4.1 Materials and hardware
    2.4.2 Software

3 Viewers' Behaviors and Preferences Regarding Television
  3.1 TV viewing in general
  3.2 Differences between TV and computer usage
  3.3 Viewers' attitudes towards interactivity in TV usage
  3.4 The remote control
    3.4.1 Design implications for remote controls

4 Interaction Techniques
  4.1 Speech interaction
  4.2 Gesture interaction
    4.2.1 Tilt input
  4.3 Touch sensitive input

5 Prototype Development
  5.1 Conclusions and design implications from the literature study
  5.2 Design methods and frameworks
    5.2.1 The iterative design process
    5.2.2 Hi-fi and Lo-fi prototypes
    5.2.3 Design and usability principles
    5.2.4 Affective aspects
    5.2.5 Mental models
    5.2.6 Design and colour
  5.3 The design process
    5.3.1 Survey
    5.3.2 Early user tests
    5.3.3 Prototypes, version 1
    5.3.4 Prototypes, version 2
    5.3.5 Prototypes, version 3 (final version)

6 Conclusions
  6.1 Limitations
  6.2 Future work

7 Acknowledgements

References

A Additional screenshots

B The instructive picture


List of Figures

2.1 Buttons (top left), InterfaceKit (top right), joystick (bottom left) and accelerometer (bottom right). Used with permission from Phidgets Inc.

2.2 A wire diagram showing how the sensors in the prototypes are connected to the computer.

3.1 Two design concepts developed by Enns et al. [11] for a remote control utilizing a touchpad. Picture used with permission from the authors.

4.1 The Wii remote showing its basic functions, taken from [29], permission currently pending.

4.2 User interacting with Block's prototype, permission currently pending.

4.3 The prototype developed by Bucolo et al., used with permission from [41].

4.4 Scrolling through a phonebook by pressing a button and tilting the phone. Taken from [31] with permission from the authors.

4.5 This picture shows how the user can tilt the phone in different directions to choose different characters. Taken from [44] with permission from the authors.

5.1 The iterative design process. Picture inspired by [33].

5.2 The system image. Picture inspired by Benyon et al. [3].

5.3 From the top: the directional buttons prototype, the joystick prototype and the accelerometer prototype.

5.4 Prototype A (top left), prototype C (top right) and prototype B (bottom).

5.5 The idea for the channel selection.

5.6 Screenshot of the feedback in the top left corner of the screen.

5.7 Screenshot of the menu in the interactive commercial.

5.8 Prototype D (top left), prototype E (bottom left) and the back of prototype D (with the activation button marked with a circle).

5.9 The new channel selection with improved feedback.

5.10 An example of features that can be accessed through the multi-functional button.

5.11 The volume bar at the bottom of the screen.

5.12 The menu interface with the added curly braces (marked out with an oval shaped symbol).

5.13 Screenshot of the interactive commercial movie scenario. Here, the user is navigating in the timeline by moving the navigation box to the desired part of the movie.

5.14 Screenshot of the message scenario.

5.15 Prototype F (top left and top right) and prototype G (bottom left and bottom right).

5.16 Screenshot of the "New message" indicator in the top left corner. The yellow circle around the multi-functional button "pulsates" by continuously increasing and decreasing in size, to catch the user's attention.

5.17 In this screenshot of the interactive commercial movie, the user is currently interacting at the top level, which is indicated by the green background.

6.1 Conceptual sketch of the "atom" menu interface suggestion. On the right picture the "sound" submenu has been selected.

A.1 Screenshot of the interface when the user presses the accelerometer activation button to select a channel.

A.2 Screenshot of the interface when the user presses the multi-functional button to access miscellaneous functions.

A.3 Screenshot of the menu that appears when the user chooses to read an incoming message.

B.1 The instructive picture that was shown in the beginning of the last user tests.


List of Tables

5.1 Average ratings for the respective buttons.

5.2 Average completion times for the respective techniques and tasks.


Chapter 1

Introduction

This master's thesis project has been performed within "Akademiker i företag", a project whose purpose is to contribute to the development of small and medium-sized companies within the counties of Västerbotten and Norrbotten. The project provides the companies with an opportunity to get development projects carried out by students, recent graduates and researchers from universities and colleges throughout the country. Through extensive company visits, the companies' development projects are identified and then mediated via a database on the Internet, examensjobb.nu. The financiers are the EU's structural funds and the county administrative boards of Västerbotten and Norrbotten, together with participating municipalities and companies.

This project concerns user interactivity within interactive television, more specifically how new interaction techniques can improve the user experience for interactive television. The task has been assigned by North Kingdom, a creative digital agency mostly working with campaigns, advertisements and communication in interactive media. They are interested in different kinds and levels of interaction in different media, and in how users interact with content in different media.

This thesis report can roughly be divided into two parts: a more theoretically oriented literature study, and a practical part where the process of designing a prototype for a remote control is described. The literature study consists of two main chapters, "Viewers' Behaviors and Preferences Regarding Television" and "Interaction Techniques", both of which consist of several sections and subsections where previous work in these areas is presented. Chorianopoulos et al. [9] have identified three factors that are worth considering when designing interfaces for interactive systems and products, such as a TV: users, tasks, and means for interaction to generate input and output. The first part of the literature study concerns the first two factors, and the second part deals with the means for interaction. Next, the development of the prototype, which is partly based on the findings from the literature study, is described in detail, along with the methods and frameworks used in the design process. Every step in the iterative design process is described along with results and findings from user tests. Finally, the results and conclusions of the master's thesis project are presented and discussed.


Chapter 2

Task Description

2.1 Background

As the technology develops towards more interactive television, i.e., the TV becomes more similar to a computer, more advanced and interactive features and services for the TV will become common. Content providers have the opportunity to create non-linear stories, and viewers have the possibility to interact with and shape the content at the actual viewing moment. Additionally, features and services that are not generally related to the television medium today, such as e-mail, web browsing and games, can also be utilized through the interactive television. In this thesis, it is assumed that this development will lead to changes regarding how the user will interact with the TV.

The starting point of this thesis is that new and different content in television will require new and different means of interaction with the content.

2.2 Problem

With the emergence of this new content and type of services, we argue that a conventional remote control will not be sufficient to provide the experience that this interactive content can offer to the user. That is, with today's remote controls, the ease of use and enjoyment of the whole user experience with the interactive content might be affected negatively. New means of interacting with the television might therefore enhance the user experience.

2.3 Purpose

The purpose of this thesis is to examine how the user experience with interactive TV content can be enhanced by evaluating alternative interaction techniques that could be used as means for generating input. Moreover, the intention is to develop a prototype for a remote control that indicates how alternative techniques can be utilized in an interactive TV environment. To demonstrate how the prototype can be used, a graphical user interface for digital TV will also be developed.

2.4 Methods

To examine how alternative interaction techniques can be used to enhance the user experience, an in-depth literature study divided into two parts will be conducted. The first part is about people's behaviors and preferences when they watch and use television. This part of the study provides knowledge about what people prefer when they watch television, and an understanding of the whole context in which people are watching TV.

The second part of the literature study is about different interaction techniques that could be used to generate input, and how well they might be suited for use in an interactive TV environment.

Based on the findings about users' preferences and the suitability of the different interaction techniques, the development of the prototype will commence. The prototype will be developed according to the iterative design process, with continuous user tests. A Macromedia Flash production provided by North Kingdom will be used as an example of interactive content that the remote control prototype can be used on. The production will be modified to support navigation with the prototype. In addition to North Kingdom's production, an application to simulate TV viewing scenarios will be created, to demonstrate how the prototype can be used in various interactive TV contexts. This application will be developed in Macromedia's software Flash 8.

2.4.1 Materials and hardware

The prototypes are made of styrofoam, a material often used in prototype development because it is very easy to shape. With this material, the designer can create "three-dimensional sketches" and get a "tangible feeling" of the prototype. Phidgets are also embedded in the prototypes. Phidgets are a series of sensors, controllers and other hardware that are easy to manage and program. The prototypes in this master's thesis project are equipped with accelerometers, joysticks and buttons. The sensors are connected to the computer through the InterfaceKit. The phidgets used in this project are shown in Fig. 2.1, and a general wire diagram of the connections is shown in Fig. 2.2.

2.4.2 Software

The object-oriented programming language ActionScript 2.0 in Macromedia Flash 8 was used to simulate the scenarios. ActionScript 2.0 also supports integrating Phidgets with Flash. The sensors continuously provide the software with their current values or states. The accelerometer provides two values: one for the tilt along the x-axis and one for the tilt along the y-axis. These values range from -1 to 1, where zero represents an untilted state. The joystick also provides values for movement along the x-axis and y-axis. These values range from 0 to 1000, with 500 being the centered state. Finally, the buttons are binary, i.e., they have two states: pressed (true) or not pressed (false).

Figure 2.1: Buttons (top left), InterfaceKit (top right), joystick (bottom left) and accelerometer (bottom right). Used with permission from Phidgets Inc.

Figure 2.2: A wire diagram showing how the sensors in the prototypes are connected to the computer.
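To illustrate the differing value ranges described above, the readings from the two analog sensors can be normalized to a common -1 to 1 scale. The following is a minimal sketch in JavaScript (ActionScript 2.0 shares its ECMAScript syntax); the function names are illustrative and not part of the Phidgets API.

```javascript
// Sketch: normalizing sensor readings to a common -1..1 scale.
// Ranges assumed from the text: accelerometer -1..1, joystick 0..1000
// with 500 as the centered state.

// Map a joystick axis value (0..1000, centered at 500) to -1..1
function normalizeJoystick(value) {
  return (value - 500) / 500;
}

// Accelerometer values are already -1..1; clamp to guard against noise
function normalizeAccelerometer(value) {
  return Math.max(-1, Math.min(1, value));
}

console.log(normalizeJoystick(500));  // 0 (centered)
console.log(normalizeJoystick(1000)); // 1
console.log(normalizeJoystick(0));    // -1
```

With both sensors mapped to the same scale, the same navigation logic can be driven by either input device.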

To navigate in the Flash movies by tilting the remote control, the user first has to press a button that activates the accelerometer. Then, while holding the button, the user tilts in the desired direction. To accomplish this, the software first saves the values from the accelerometer at the time when the user pressed the button. After that, it continuously checks the values from the accelerometer and compares them to the saved values. When the difference between the values reaches a certain threshold (i.e., when the user has tilted the remote control to a certain degree), the software executes the action that corresponds to the tilt movement. A code example for achieving this is presented below.


// When the button is pressed, save the current values from the accelerometer
if (button1.GetInputState() == true) {
    startX = accelerometer.GetAcceleration(0);
    startY = accelerometer.GetAcceleration(1);
}

...

// Later: when the difference between the current and the saved value
// exceeds the threshold (here 0.2), execute the corresponding action
if (accelerometer.GetAcceleration(0) - startX > 0.2) {
    // do something (move right, for instance)
}
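The same press-and-tilt logic can be sketched as a small, self-contained JavaScript function (ActionScript 2.0 shares its ECMAScript syntax). The detector function, direction names and dominant-axis rule below are illustrative assumptions, not the thesis's actual implementation.

```javascript
// Sketch of press-and-tilt detection: remember the accelerometer values at
// the moment the activation button was pressed, then classify later
// readings relative to that starting point.

const TILT_THRESHOLD = 0.2; // tilt difference that triggers an action

function makeTiltDetector(startX, startY) {
  return function classify(x, y) {
    const dx = x - startX;
    const dy = y - startY;
    if (Math.abs(dx) < TILT_THRESHOLD && Math.abs(dy) < TILT_THRESHOLD) {
      return "none"; // not tilted far enough yet
    }
    // Pick the dominant axis as a simple way to resolve diagonal tilts
    if (Math.abs(dx) >= Math.abs(dy)) {
      return dx > 0 ? "right" : "left";
    }
    return dy > 0 ? "down" : "up";
  };
}

// Usage: the button was pressed while the remote was level (0, 0)
const classify = makeTiltDetector(0, 0);
console.log(classify(0.1, 0.0));    // "none" (below threshold)
console.log(classify(0.3, 0.0));    // "right"
console.log(classify(-0.05, -0.4)); // "up"
```

Saving the starting values when the button is pressed, rather than assuming a level remote, means the user can hold the remote at any comfortable angle; only relative tilt matters.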


Chapter 3

Viewers' Behaviors and Preferences Regarding Television

In this chapter, people's behaviors and preferences when they watch and use television will be examined. The intention of this part of the project is to explore what people prefer when they watch television, and to gain an understanding of the whole context of the TV viewing activity. The following sections examine general characteristics of TV viewing, TV usage compared to the more interactive computer usage, and previous studies about viewers' attitudes towards interactivity in TV usage. Finally, the chapter ends with a section regarding how people use the remote control.

3.1 TV viewing in general

Television is a natural part of most people's everyday life. It has been so well integrated into the daily routine that it is often not seen as a specific activity, rather just something that belongs to the home, "a part of the furniture" [39]. An extensive study involving about 500 persons over a five-year period surveyed how people use TV and how the TV affects their everyday life [15]. The study showed that many people use TV programmes as a frame of reference when planning the day's timetable, organizing their spare time around certain shows (mostly news and soap operas). TV is a part of the daily routine, and often marks transitions between the different phases of the day. The study also showed that people often do other things while watching TV, like eating and socializing with other people. The TV is often on even if no one is actually watching, as a sort of background to other activities.

How people watch TV varies. Most viewing, especially after coming home from work or school in the afternoon, is relatively unplanned, with a fairly large amount of switching between channels until one finds something that is appealing enough [39]. This viewing is mainly a way to relax and unwind, and involves a low degree of engagement from the viewer. During weekday evenings, the viewing is more organized and routine, with a higher degree of engagement and planning.

Television is traditionally a medium where the viewer is a passive consumer of the broadcast content. We watch TV to be entertained, to relax after a hard day's work or simply to kill some time, which was also pointed out in the study mentioned above. A study by Logan et al. at Thomson Consumer Electronics [23], which logged all remote control activity during ten weeks in nine households, showed that 98.7% of all viewing can be considered passive. Their definition of passive viewing, based on a function of the time between consecutive button pushes and the percentage of all button pushes, was that the user is passive when there are more than six seconds between consecutive button pushes. Considering this inherent passiveness of TV usage, it is not obvious that adding more or less advanced features that demand computer-like interactivity will be successful.
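The six-second rule reported from Logan et al. [23] can be illustrated with a short sketch. This is a simplified reading of their definition (the original also weighted the percentage of all button pushes); the log format and function names here are hypothetical.

```javascript
// Simplified sketch of the six-second passive-viewing rule: a gap between
// consecutive button pushes counts as passive when it exceeds six seconds.

const PASSIVE_GAP_SECONDS = 6;

// Given timestamps (in seconds) of consecutive button pushes, compute the
// fraction of gaps that count as passive under this definition.
function passiveFraction(pushTimes) {
  let passive = 0;
  let total = 0;
  for (let i = 1; i < pushTimes.length; i++) {
    total++;
    if (pushTimes[i] - pushTimes[i - 1] > PASSIVE_GAP_SECONDS) passive++;
  }
  return total === 0 ? 0 : passive / total;
}

// Example: four pushes; two of the three gaps exceed six seconds
console.log(passiveFraction([0, 2, 30, 120]));
```

Even this crude measure makes the point of the study concrete: rapid bursts of channel switching are rare compared to long stretches without any input at all.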

3.2 Differences between TV and computer usage

Using interactive applications mediated by a television set is different from traditional TV usage, and could rather be compared to using a computer. Interacting with a computer-like interface on a TV, with a traditional TV remote control, leads to some complications, because of the differences between traditional TV viewing and the more computer-like usage of interactive television (iTV) [4]. Apart from the passiveness of the user, there are other differences between consuming TV and using a computer. When interacting with a computer, the user is usually sitting close to the screen, while TV is often watched from a comparatively far distance, in a comfortable and relaxed manner [38, 21]. The relaxed, "leaned-back" posture is not suited for traditional computer interaction, i.e., using a mouse and a keyboard. This is another circumstance that makes TV usage less suited for more demanding activities. Television is also a social, collective medium, which means that we often consume it together with friends, family members and other persons. This is yet another difference compared to computer usage, as a computer is more often used by a single user [38, 21]. The social nature of TV is another possible disadvantage of interactive TV, since it can be difficult for several persons to agree on how to interact, what to do, and so on. Disagreements and arguments may arise and cause the interaction to be less efficient and enjoyable.

Interactive TV combines the traditional appeal of the ordinary TV with new interactive possibilities and services similar to those of today's web. On the web, there is a strong emerging trend towards social networking and a more user-adapted experience; that is, users choose what to do, when to do it, and with whom [19]. In other words, it is not hard today to make browsing more personal; for example, many sites use cookies that gather information about users. This development is, however, harder to accomplish in the TV industry, because some obstacles restrain it.

Chorianopoulos et al. discuss three such obstacles in their article "Intelligent user interfaces in the living room: usability design for personalized television applications" [9]. The first obstacle concerns the broadcast environment. When a user is browsing the Internet, every webpage is transmitted via a server to the computer of a specific user. Reaching a specific user in the TV environment is more difficult to accomplish, mainly because iTV content is broadcast to many different TVs. Chorianopoulos et al. argue that it is a contradiction to try to send personalized TV content to many different households and people [9]. One solution to this contradiction, according to Chorianopoulos et al., is the use of a set-top box (STB), which could automatically store a profile of the current user(s). Furthermore, the STB also enables and controls the interactivity (one can think of the STB almost as a small computer).

The second obstacle regarding more personalized television is about targeting individuals. A computer almost always has one single user at a time; the TV, on the other hand, has several users at a time, both in public areas and in more private ones. The problem is to identify separate households and specific individuals within the households [9]. Although techniques exist today that enable identification of which person in the household is currently watching the TV (e.g., hidden-eye techniques, personalized shells for the remote control, or RFID tags), this is something that is not highly appreciated by the users [9].

The third and last obstacle considered by Chorianopoulos et al. is about the viewing environment, which was also mentioned earlier. People often watch TV in a relaxing environment, with the purpose of just relaxing. Today the main theme for television is "infotainment": the user is entertained and gains knowledge at the same time. Chorianopoulos et al. argue in the article mentioned above that an interface for TV that requires a skill level corresponding to the use of a computer will not work with the present average TV user [9]. The remote controls for television have, in addition, limited functionality, i.e., the remote offers some buttons for channels, menus, an OK button and so forth. The TV itself also has limitations in output, for example resolution (an ordinary TV has only about 0.2 megapixels of resolution). In addition, there are restrictions in the use of fonts and colour. All these factors are linked to the distance between the user and the TV.

These challenges when designing for iTV stem from the differences between PC and TV concerning the interaction possibilities for creating input and output, the environment, the context, the number of users, and the users' familiarity with computers.

Another difficulty when designing interfaces is that a design that seems to work for one user group might not work for other user groups. For example, if a designer creates an interface that works well for expert users, the same interface might not support novice users. Therefore, according to Chorianopoulos et al., the designer must consider user requirements, theories regarding UI design, principles, frameworks and guidelines, and user evaluations regarding usability to be able to make a good interface for the demanding and challenging TV environment. Finally, Chorianopoulos et al. have identified three simple and sensible factors to consider when designing interfaces for TV:


– Tasks.

– Users.

– Means for interactions to generate input and output.

However, even though these shortcomings of the TV compared to a computer should indeed be considered as problems for the development of interactive TV applications, there seem to be trends that make the problems smaller. To start with the passivity of TV usage, the picture of a viewer as a completely passive consumer is somewhat out of date [38]. A very common active use of TV is the teletext function, which could be seen as a very simple web browser. Also, many television programmes today involve viewers to a fairly high degree, for example with polls and viewer feedback via phone, SMS, e-mail etc. So viewers are actually already active and interactive in front of the TV set, although of course not as much as when using a computer. Furthermore, the collectiveness of TV as a medium is not as strong as it used to be. Most households have more than one TV set [38], which makes it more feasible for different users to do different things, even if they live together.

So, it is true that people often want to be passive and relax in front of the TV, but at the same time they are, slowly, becoming more and more active and interactive. However, the fact remains that viewers still mainly want to use the TV simply for watching television programmes.

3.3 Viewers’ attitudes towards interactivity in TV usage

A study conducted by Vivi Theodoropoulou, published in 2002 [40], examined how viewers used certain existing services provided by digital TV broadcasting in the UK. One type of service was related to the actual TV-watching, for example acquiring statistics during a sports event, background information related to a certain news item, and other services that took place within the context of the television programmes. The other type of service did not have anything to do with television (except for the fact that it was mediated through the TV) but resembled the things one can do on the web. Examples of these services were banking, shopping, e-mail and other services not related to viewing television.

In general, the viewers were rather conservative about using the interactive services, which was not unexpected. However, the results of the study also showed that when viewers did interact, they almost exclusively wanted to do things that had to do with the actual content of the television programmes (i.e. the first type of services), while the services that were not related to TV were used very rarely. About 94.4% of the subjects had never used the banking services, and only 2.1% used the e-mail service on a weekly basis [40]. On the other hand, 23.5% of the subjects used interactive services related to sports shows every week. Interviews with subjects also showed that the services related to the actual viewing indeed were more popular, since they enhanced the viewing experience, while the other type of services were seen to interfere with


it. The results support what was stated earlier: viewers mainly just want to watch TV, and any interaction should preferably be related to the viewing.

In 2001, ITC-Use conducted research regarding how easy digital and interactive TV was for people to use [12]. In that research, an ordinary computer was perceived to be difficult to use, and digital TV (dTV) and iTV got the same usability ratings as an ordinary computer. One of the devices that got high ratings on usability was the analogue TV [12].

Dr. Jonathan Freeman et al have conducted a study [13] similar to the one ITC-Use made in 2001. The results from this research also showed that dTV and iTV were perceived as difficult to use: the users gave dTV and iTV the same usability ratings as a computer, and in this study too a computer was considered difficult to use.

Because dTV and iTV do not have as much functionality as an ordinary computer, this result came as something of a surprise to Freeman et al. The perceived difficulty of course varied among individual users; participants who were more familiar with using advanced technology thought it was relatively easy to use iTV.

In the study by Freeman et al [13], factor and cluster analyses (principal axis factoring, PAF) were used to identify different groups of possible digital TV users. Freeman et al identified, for example, these four different types of attitudes towards dTV and iTV:

– ”In the dark”: This kind of user is uncertain about the meaning of iTV and its possibilities.

– ”dTV bandwagon”: This kind of user has a more positive attitude towards dTV and its possibilities (e.g. the increased channel choice with dTV), and is more up to date with the technology.

– ”Functional TV”: This kind of user also has a positive attitude towards dTV and its increased functionality; e.g. that the TV gets more interactive is perceived as a positive thing.

– ”Apathy”: This kind of user has a negative attitude towards, and a lack of interest in, dTV, mainly because they think that dTV has too few advantages. Furthermore, what motivates some people regarding dTV can demotivate others: for some people it is a positive thing that dTV increases choice (more channels and TV programmes), but for other people the increased choice does not matter because they value quality, not quantity.

Freeman et al argue that the increased number of choices that dTV offers gives people the option to watch what they want, when they have time and want to watch [13]. On the other hand, there are people who think that they already have enough choices, e.g. that nine channels are enough. These people value quality, not quantity.

Furthermore, Freeman et al point out some possible factors why people might not want to switch to dTV [13]: high perceived cost; lack of knowledge; low interest because of perceived low need, desire or time; and perceptions that dTV is complicated TV. Another negative factor that Freeman et al point out in their article is that dTV competes with the PC. The PC sets a high benchmark


for dTV to meet, mainly because the PC is more user-friendly than ever and has good overall performance.

3.4 The remote control

With more advanced functionality and interactivity in digital television, the limitations of current remote controls might become a problem, as discussed earlier. However, it is important that the input device used for iTV still is close to the remote controls of today, to make the transition easier for the users, especially those with no or limited computer skills [21]. Since the new remote control has to be based on traditional remote controls at least to some degree, it is important to understand how today’s remote controls are used, how well they are suited for more interactive applications, and their general pros and cons.

The most common way to interact with a TV is through a standard remote control. Remote controls can vary in shape, size and button layout, but the majority of them basically have the same buttons and functionality.

The study by Logan et al mentioned earlier [23] showed some interesting results regarding the frequency of use of the respective buttons on a remote control. Three types of buttons together constituted about 88% of all button presses that were made: the channel up/down buttons (52%); the volume up/down buttons (19%); and the 0-9 digit buttons (17%). All other buttons each made up 5% or less of all button presses. It is also interesting to note how superior the channel up/down buttons are to all other buttons when it comes to frequency of use. The study also showed that the button for accessing the on-screen menu was frequently used during the first week after acquiring a new television set, but that its frequency of use decreased rapidly after that time. Over the whole period of the study, the menu button made up only 2% of all button presses. This indicates that most of the features of the on-screen menu were something the users used only once (or a few times) to set up the TV and to explore the functionality at the beginning of the TV’s ”lifecycle”.

In another study by Logan et al. [24], it was found that 77% of the participants held the remote control in one hand and used the thumb to press the buttons. Only 3% of the users used the remote control with two hands.

When interacting with more advanced and computer-like content on a TV with a standard remote control, one of the biggest problems is that the remote control does not always support the necessary actions, like text entry and direct manipulation [11]. Especially the lack of a direct pointing device (e.g. a mouse) can be very critical, since ”pointing-and-clicking” is a very common and standard way to perform various tasks [10]. The conventional four arrow keys can be used as an alternative to a pointing device by allowing the user to ”jump” between selectable objects (like tabbing in computer applications), but that solution is far less efficient and easy to use than a pointing device that lets the user freely steer a cursor to objects. Using arrow keys to move to an object turns the act of pointing and selecting into a sequence of actions (e.g. two steps down and one step right to get to a certain object) instead of the single action of a direct pointing device. According to Nielsen [28], this leads to a higher cognitive load for the user.
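The difference can be made concrete with a small sketch of the ”jumping” behaviour. The object names, grid layout and nearest-in-direction rule below are invented for this illustration:

```python
# Arrow-key ("jump") navigation between selectable on-screen objects.
# The object names, grid layout and nearest-in-direction rule are
# invented for this illustration.

def next_focus(objects, current, direction):
    """Return the nearest object in the given direction, or current."""
    cx, cy = objects[current]
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[direction]
    best, best_dist = current, float("inf")
    for name, (x, y) in objects.items():
        if name == current:
            continue
        px, py = x - cx, y - cy
        if px * dx + py * dy <= 0:
            continue  # not in the pressed direction
        dist = px * px + py * py
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# A 2x2 grid of selectable objects (screen y grows downwards).
grid = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}

# Reaching D from A takes a sequence of two discrete key presses.
step1 = next_focus(grid, "A", "right")   # "B"
step2 = next_focus(grid, step1, "down")  # "D"
```

With a direct pointing device, the same selection would be one continuous movement; with arrow keys it decomposes into a sequence of discrete jumps, which is the source of the extra cognitive load.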

Enns et al [11] developed a concept for a TV remote control that utilized a


touchpad (see Fig. 3.1). Not much functionality was implemented; it was rather the idea and concept that was introduced, to ”open the door” to new ways of using the remote control with this technique. They also had the idea to move the interface from the actual remote control to the TV screen, so that many features are implemented graphically on the screen rather than by adding extra buttons to the remote control. This way, the user’s attention is also focused more on the screen than on the remote control.

Figure 3.1: Two design concepts developed by Enns et al [11] for a remote control utilizing a touchpad. Picture used with permission from the authors.

3.4.1 Design implications for remote controls

When designing remote controls, there are certain issues to consider. One is the scalability issue: the more functions, the more buttons and hence the higher complexity [11]. There are often simply too many buttons. Also, users are mostly focused on the TV screen and want to keep their focus there instead of having to switch their visual focus between the TV and the remote [10]. Accordingly, the remote control should be designed so that the user has to look at it as little as possible. Explicit feedback after pressing buttons on the remote is also something users find favorable [10, 32].

The fact that most people use the remote control with only one hand, mentioned above, is of course important to consider for the shape of a remote control, as is considering both right- and left-handed users [32]. It is also important to place the most frequently used buttons in locations that are easily accessible, to group similar buttons, and to separate and differentiate between dissimilar buttons [32]. The differentiation can be done by using varied shapes, sizes, colours and textures for the buttons, and by varying the spacing between buttons.


Chapter 4

Interaction Techniques

In this chapter, different interaction techniques that could be used to generate input to a TV will be explored. Each interaction technique will be examined and previous research will be presented. The chapter begins by considering speech as a means to create input. In addition to speech input, gesture interaction, tilt input and touch sensitive input will be discussed.

4.1 Speech interaction

Using speech as input and/or output is something that has become popular as the technology has advanced. There are different types of speech recognizers, and one distinction can be made between those that are used to recognize a specific person’s voice and those that are independent of the speaker. Some utilize grammar and more sophisticated rules while others only allow simpler phrases as input [47].

Two main advantages of using speech as input are, first, the fact that it is possible even when the user’s hands are busy doing something else, and second, that it does not require a keyboard, mouse, remote control or any other device [47]. This gives the user much freedom, since he or she does not need to be at a particular place in the room or hold a certain device. The user can move around and do other things in parallel with the interaction. When it comes to speech output (feedback), speech has the advantage that the user’s visual attention does not have to be directed at a certain spot, unlike visual output [37].

However, the use of speech interfaces has some drawbacks and poses challenges on the design of the system using it. To start with, errors in the recognition of the user’s commands are more common than with other input techniques [47]. There can be many reasons for these errors, for example noise in the environment (other people speaking or other sounds in the surroundings) or simply the user using words that the recognizer does not have in its vocabulary. The problem with noise in the environment would be particularly pronounced for speech interaction with a TV in a typical television viewing environment, since there are often many persons using the TV at the same time. The speech recognizer could get somewhat confused when more


than one person is speaking at the same time. There can also be problems for the recognizer to know whether the user is issuing a command or just talking to another person in the room. Another factor that can affect the recognizer is the sound from the TV itself. A speech user interface must consider the probability of these errors and provide solutions when they occur. The system can for instance ask the user to repeat the command (although it is quite likely that the same problem will occur again) or change the input modality if there are other input devices or techniques available [47].

Exhaustion of the user due to excessive speaking is a more physical effect that should also be considered [37], at least for applications that require continuous or plentiful speech input.

One very important issue that is not as obvious as the ones mentioned above concerns the cognitive aspects. The processes of speaking and listening use the same resources in the brain as the working memory [37]. This means that tasks that demand resources from the working memory are more difficult to do while one actively listens or speaks. Physical activities, on the other hand, are handled in different areas of the brain, so walking and other ordinary physical activities are easier to do in parallel with the use of the working memory. This was illustrated in an experiment where the subjects used voice commands to do various word processing tasks [37]. For simple tasks, like scrolling down and changing font style, the use of voice commands made the completion of the tasks up to 30% faster compared to doing the same tasks with a mouse. In one task the subjects were asked to memorize symbols while doing other tasks, and to recall the symbols later on. Users who used voice commands tended to recall the symbols worse than the mouse users. To summarize, speech input seems to facilitate the completion of simple tasks, but when the tasks are more cognitively demanding, the use of speech interaction appears to lower performance. It is difficult to solve problems while speaking.

A study done by Van Buskirk and LaLomia [7] confirmed that simple tasks performed with sharp, concise commands from a limited vocabulary were suited for speech input, while tasks that required commands from a larger vocabulary were less suited. They also stated that voice input was not useful for tasks involving movements or selections with high precision. Finally, they stressed that a short response time from the system is important, because users have a low acceptance of delays from the system.
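The kind of sharp, concise command set with a limited vocabulary that Van Buskirk and LaLomia found suitable can be sketched as a simple dispatch table. The phrases and the TV state below are invented for illustration; a real system would of course receive the utterance from a speech recognizer:

```python
# A small, fixed command vocabulary of the concise kind the study
# found suitable for speech input. The phrases and the TV state are
# invented; a real system would receive the utterance from a speech
# recognizer.

def handle(utterance, tv):
    """Dispatch a recognized phrase against a limited vocabulary."""
    commands = {
        "channel up": ("channel", +1),
        "channel down": ("channel", -1),
        "volume up": ("volume", +1),
        "volume down": ("volume", -1),
    }
    command = commands.get(utterance.strip().lower())
    if command is None:
        # Out-of-vocabulary phrase: ask the user to repeat.
        return "Please repeat the command."
    key, delta = command
    tv[key] += delta
    return "OK"

tv_state = {"channel": 5, "volume": 10}
handle("Channel up", tv_state)   # tv_state["channel"] becomes 6
```

Keeping the vocabulary this small both lowers the recognition error rate and keeps the system response immediate, the two properties the study identified as critical.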

If the interface also uses speech as output, it can be difficult for the user to understand the spoken feedback, because artificially generated speech is obviously not as good as natural speech [47]. This has more to do with the performance of the system’s hardware and is thus a technical issue rather than a question of interface design, but of course the technical limitations always have to be considered in the design of an interface. Speech as output is also rather slow (it takes more time to say something than to simply put it on display), so the feedback has to be designed with that in mind, i.e. kept as concise as possible [47]. Another reason for keeping the amount of spoken information small is that speech burdens the working memory more than visual information: since it is temporal and not lasting (unlike visual information), the user has to keep the information in the working memory [47].


When it comes to the process of designing speech user interfaces, Yankelovich and Lai [47] provide some suggestions on how to involve the users in the process. In design processes in general, users can be involved at an early stage to help define the desired functionality and characteristics of the system. But when designing speech interfaces, involving users can also be very important to help the designers understand the conventions of conversation in the domain in which the system is intended to operate. This is essential, since the context of use is particularly important when it comes to speech interaction, because of the significance of all the sounds, conversations and noises in the surrounding environment [47].

Further on in the design process, Yankelovich and Lai suggest wizard-of-oz methods to simulate the system before implementing it, in order to discover usability issues. After implementation, typical user tests should be carried out as usual to test the performance of the system. An important thing to keep in mind, however, is that any conversation between the subject and the tester can disturb the recognizer, and using the ”think aloud” method is obviously also a bad idea, since it will interfere with the subject’s conversation with the system [47].

4.2 Gesture interaction

People use their body language almost as much as speech when they want to communicate or express something during a simple conversation. Until recently, the main interaction with computers has almost entirely been through a basic keyboard, with a simple mouse used for navigation and pointing. However, there has been increasing research in the field of ubiquitous computing, and this research opens up the possibility of a change in how we interact with computers, digital devices and various interfaces in the future [43]. Research in the ubiquitous computing field has a lot to do with how computers can become ubiquitous in people’s everyday lives.

One way to make computers and digital artefacts more ubiquitous is to let people interact with common, everyday things through natural gestures. For example, rather small devices that enable gesture interaction can be placed in users’ clothes, watches and even jewellery [20]. The users can then, provided they wear the devices, interact with basic gestures. A possible scenario is that users raise or lower a hand to open or close a door, or to increase or decrease the volume of their stereo. Other possible scenarios involve user interaction with home entertainment devices such as TVs, DVD players, PDAs and mobile phones [20]. Today’s modern web browsers also support gesture interaction.

Another example of interacting with gestures is the Wii video game console (pronounced as the pronoun ”we”) manufactured by Nintendo. One characteristic feature of the Wii console is its wireless controller, the Wii Remote, which can be used as a pointing device. The Wii Remote has embedded accelerometers [46] allowing it to sense motion in three dimensions and detect tilt, and an optical sensor is used to determine where the controller is pointing. Fig. 4.1 illustrates how the user can interact with gestures in a more ”playful” and ”fun” way (top);


the figure also shows the Wii remote and some of its functions (bottom).

Figure 4.1: The Wii Remote showing its basic functions, taken from [29], permission currently pending.
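As a sketch of how tilt can be derived from a three-axis accelerometer (a generic textbook formulation, not Nintendo’s actual implementation), assume the only acceleration acting on the device is gravity:

```python
import math

# A sketch of estimating static tilt from a three-axis accelerometer,
# assuming the only acceleration acting on the device is gravity
# (a generic formulation, not Nintendo's actual implementation).
# Axis readings ax, ay, az are in units of g.

def tilt_degrees(ax, ay, az):
    """Return (pitch, roll) in degrees from an accelerometer sample."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity entirely on the z axis -> no tilt.
flat = tilt_degrees(0.0, 0.0, 1.0)       # approximately (0, 0)
# Device rolled 90 degrees onto its side.
on_side = tilt_degrees(0.0, 1.0, 0.0)    # approximately (0, 90)
```

This only works while the device is held still; during a swing, the motion acceleration is superimposed on gravity, which is why gesture recognition needs the more elaborate models discussed below.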

Florian Block has made a prototype for changing channels when watching TV in a more playful way. The basic idea is that the user changes channels by making physical movements in 3D space with a ”handy” cube. A virtual version of the cube is shown on the user’s TV screen to give additional feedback [5]: when the user moves the physical cube, the virtual cube on the TV mirrors the moves made by the user (see Fig. 4.2).

Moreover, Block has also implemented a feature that shows a TV stream on each side of the cube. The user can thus rotate and move the physical cube to see the different sides of the cube, and hence different TV channels. If the user is not moving the cube, the TV channel currently facing the user on the virtual cube is enlarged to fill the whole TV screen. If the user moves the cube again, the channel in full screen is resized back to the side of the virtual cube currently facing the user. More than one side of the virtual cube can be shown at the same time, however, so the user can also see channels on sides other than the one currently facing him or her.
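The ”enlarge the channel facing the user” behaviour can be sketched as picking the cube face whose rotated outward normal points most directly at the viewer. The channel names, face normals and viewer direction below are assumptions made for illustration; [5] is not being quoted at this level of detail:

```python
# Picking the cube face whose rotated outward normal points most
# directly at the viewer. The channel names, face normals and viewer
# direction are assumptions made for illustration, not Block's
# actual implementation.

FACES = {            # outward unit normals of an axis-aligned cube
    "ch1": (0, 0, -1), "ch2": (0, 0, 1),
    "ch3": (-1, 0, 0), "ch4": (1, 0, 0),
    "ch5": (0, -1, 0), "ch6": (0, 1, 0),
}
VIEWER = (0, 0, -1)  # direction from the cube towards the viewer

def facing_channel(rotate):
    """rotate: a function mapping a face normal to its rotated normal."""
    def score(normal):
        n = rotate(normal)
        return sum(a * b for a, b in zip(n, VIEWER))
    return max(FACES, key=lambda ch: score(FACES[ch]))

# With no rotation, the channel on the front face would be enlarged.
print(facing_channel(lambda v: v))  # ch1
```

When the cube stops moving, the system would enlarge the winning face’s stream; a half turn about the vertical axis would bring the opposite face (and channel) to the front instead.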

Bucolo et al have developed a somewhat futuristic device to facilitate interaction with digital media in a lounge room setting [41]. The device supports gesture interaction and is supposed to be worn on the hand. This setup allows the user to participate in other activities (eating, or perhaps browsing a newspaper) while watching TV. The user interacts with the device by pressing the top


Figure 4.2: User interacting with Block’s prototype, permission currently pending.

control with their thumb and perform a gesture (see Fig. 4.3).

Figure 4.3: The prototype developed by Bucolo et al, used with permission from [41].

The user has to release the top control to inform the device that the gesture is complete. The central panel on the device gives the user feedback once the button is pressed. This feedback is also visible on the TV screen.


Research done by Kela et al [20] shows that users would like to be able to control different sets of devices and applications with the same kind of gestures. The results in Kela et al’s study also showed a need to be able to freely choose which device or application to control; this can be done using laser or infrared pointers [1]. Previous research on gesture interaction has focused on two kinds of gestures [34]: manipulative gestures and semaphoric gestures.

Manipulative gestures can be defined as gestures that control something and create a strong connection between the hand, its movements, and the actual object being controlled. This interaction paradigm was used already in the early 80’s by Bolt [6]. Quek et al have shown in their study [34] that there is a difference between gestures used for conversation and gestures used for manipulation purposes, and also a difference in their dynamic behavior. Finally, persons performing manipulative gestures can be helped in their manipulation by feedback. This feedback can be visual, haptic or given by the manipulated object through force feedback (virtually or physically). Conversational gestures do not need this kind of feedback.

Semaphoric gestures are used in gesture interaction systems that employ a kind of ”gesture dictionary” of static or dynamic hand and arm movements. To identify and recognize a certain gesture, applications and systems using the semaphoric approach need some kind of identification process (graph labelling, principal component analysis, hidden Markov models or neural networks); that is, the gesture X is recognized within the predefined set Y of gestures. According to Quek et al [34], semaphoric gestures can be called communicative because, in this context, they are used as universal symbols communicated to the machine by the user.

Both manipulative and semaphoric gestures have their drawbacks. Manipulative gestures are used in everyday interaction, and these situations often require that the object being manipulated is present; otherwise the planned action might not be performable [20]. So-called free-hand manipulation in interfaces often relies almost exclusively on visual feedback. Semaphoric gestures, in turn, represent only a minority of the gestures used in everyday interaction and human communication.

Kela et al in [20] define gesture recognition as hand movements detected by a certain device or system. This device or system has sensors that enable detection and recognition of the hand movements. The movements are then modelled by various mathematical methods so that most of the user’s hand movements can be recognized in real time by the device or the system. The device or system can be trained, or learn by itself, to better recognize certain hand movements. The goal is that a specific movement of the user’s hand executes a specific discrete command for the device or system.
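One simple way to realize this kind of matching against trained movement models, sketched here as a generic illustration rather than Kela et al’s actual method, is dynamic time warping (DTW) over recorded acceleration traces; the template names and traces are invented:

```python
# Matching a recorded movement against trained gesture templates with
# dynamic time warping (DTW) on 1-D acceleration traces. A generic
# illustration, not Kela et al's actual method; template names and
# traces are invented.

def dtw(a, b):
    """Classic DTW distance between two sequences of numbers."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1],
                                 d[i - 1][j - 1])
    return d[n][m]

def recognize(trace, templates):
    """Return the name of the nearest trained gesture template."""
    return min(templates, key=lambda name: dtw(trace, templates[name]))

templates = {"flick_up": [0, 2, 4, 2, 0],
             "flick_down": [0, -2, -4, -2, 0]}
print(recognize([0, 1, 3, 4, 3, 1, 0], templates))  # flick_up
```

Because DTW warps the time axis, the same gesture performed faster or slower still matches its template, which is one reason template approaches can be ”trained” simply by recording a few examples per user.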

In addition, Hardenberg et al discuss in [42] three types of requirements for a system using gesture interaction. The first requirement concerns detection: the system must be able to recognize one class of a certain object. For example, if the fingertips of the user’s hand are recognized by the system, the user can interact more precisely with an application.

The second requirement discussed in [42] concerns identification: the


system must be able to identify and determine which object from a certain class is present at the moment. This is more important in certain situations, e.g.:

– The system must be able to recognize certain movements and gesturesperformed by a user.

– The system must be able to identify the number of fingers present in the current scene. This gives the user the option to mimic a two-button mouse [42].

If the hand movement or specific gesture is performed rather slowly and over a uniform background (a background that is not cluttered or filled with distortion), the recognition and identification process can be done more easily [42].

The third and last requirement is about tracking. This capability is necessary because an object might not stay in the same place all the time. According to [42], tracking problems can be handled in two ways. The first way is to let the system keep track of and remember the object’s last position. This is possible if the ways a certain object can move are restricted and if the system uses a tracking algorithm to follow the object between two frames.

The second way to handle tracking problems is to perform an identification process for every frame. One disadvantage of doing this is that the system needs a very fast identification algorithm. According to [42], this is sometimes the only option because, during normal gesture interaction, hand movements can reach speeds of up to 5 m/s. At a frame rate of 25 frames per second, a hand can thus move 20 cm per frame during normal interaction.

Another strong reason to perform an identification process for every frame is motion blur. With motion blur it is almost impossible for a system to recognize finger positions; hence, motion blur makes it more difficult for a system to keep track of an object.
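The first strategy, remembering the last position and picking the nearest detection in the next frame, can be sketched as follows (the coordinates are invented pixel positions for illustration):

```python
# The "remember the last position" tracking strategy: in each new
# frame, keep the detected candidate closest to the previously known
# position, assuming limited movement between two frames. The
# coordinates are invented pixel positions for illustration.

def track(last_pos, candidates):
    """Return the candidate nearest to the previously known position."""
    def sq_dist(p):
        return (p[0] - last_pos[0]) ** 2 + (p[1] - last_pos[1]) ** 2
    return min(candidates, key=sq_dist)

# Fingertip was at (10, 10); three blobs are detected in the new frame.
print(track((10, 10), [(50, 80), (12, 11), (90, 5)]))  # (12, 11)
```

The 20 cm/frame figure above shows why this assumption of limited movement breaks down for fast gestures, forcing the per-frame identification approach instead.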

According to Kela et al, previous research regarding gesture interaction can be further divided into two other categories [20]. The first category is based on cameras to enable and detect gesture interaction. This kind of system requires a special setup and calibration of its sensors and cameras. Many tests and experiments have also had restrictions [42]: non-real-time calculations and processing, expensive hardware (3D cameras, infrared cameras, high-tech gloves for gesture recognition), controlled background and light conditions, or limited hand movements.

Kela et al argue in [20] that camera-based gesture interaction is most suitable for a stationary system. One example of a camera-based gesture recognition system is the Gesture Pendant system by Starner et al [14]. This system allows users to use their hands and fingers to interact with devices in a home environment.

Many approaches have been developed to perform gesture recognition and finger tracking. Some common techniques are [42]: colour segmentation, contour detection, correlation and infrared segmentation. All of these techniques have some kind of limitation [42]. For example, the colour segmentation technique is sensitive to changes in light conditions, and objects with the same colour can be hard for the system to discern. Contour detection has a tendency to be unreliable with shifting backgrounds. Tracking algorithms could,


however, be used to solve this problem. Finally, image differencing is most suitable for moving objects, but requires high contrast between background and foreground.
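The colour segmentation idea, and its sensitivity to light conditions, can be illustrated with a fixed hue/saturation/value box; the ”skin” thresholds below are invented and are exactly the kind of constants that break when the lighting changes:

```python
import colorsys

# Colour segmentation as a fixed hue/saturation/value box. The "skin"
# thresholds below are invented for illustration and are exactly the
# kind of constants that break when the lighting changes.

def skin_mask(pixels):
    """pixels: list of (r, g, b) in 0..255 -> list of booleans."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h < 0.1 and 0.2 < s < 0.7 and v > 0.35)
    return mask

# A warm skin-like tone passes; a saturated blue does not.
print(skin_mask([(220, 170, 140), (40, 90, 200)]))  # [True, False]
```

A real pipeline would apply this per pixel over a whole camera frame and then group the resulting mask into hand-sized blobs, but the lighting dependence is already visible at this level: shift the illumination colour and the same skin falls outside the box.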

To improve the recognition and detection ability of certain gesture recognition systems (especially camera- and video-based systems), Hidden Markov Models (HMMs) can be used. An HMM is a statistical model in which the system being modelled is assumed to be a Markov process [45]. HMMs can be used in pattern recognition applications such as gesture, speech and handwriting recognition, and ”can be considered as the simplest dynamic Bayesian network” [45].
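A minimal illustration of how an HMM scores an observation sequence is the forward algorithm; the two-state model below is made up purely to show the mechanics (in a recognizer, each gesture gets its own trained model, and the model giving the highest probability for the observed sequence is chosen):

```python
# Scoring an observation sequence with the forward algorithm. The
# two-state "still"/"moving" model and all probabilities below are
# made up purely to illustrate the mechanics of an HMM.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs | model), summing over all hidden state paths."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s]
                                       for p in states)
                 for s in states}
    return sum(alpha.values())

states = ("still", "moving")
start = {"still": 0.8, "moving": 0.2}
trans = {"still": {"still": 0.7, "moving": 0.3},
         "moving": {"still": 0.4, "moving": 0.6}}
emit = {"still": {"low": 0.9, "high": 0.1},
        "moving": {"low": 0.2, "high": 0.8}}

# Likelihood that this model produced a low-high-high observation
# sequence (approximately 0.116 for these numbers).
p = forward(["low", "high", "high"], states, start, trans, emit)
```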

The second category, not using cameras, is based on other sensors such as accelerometers and tilt, conductivity, capacitance and pressure sensors. Examples of movement-sensor-based systems are the two input devices for wearable computers, GestureWrist and GesturePad, developed by Jun Rekimoto [36]. These devices use accelerometers and capacitance sensors to detect and recognize simple hand and finger gestures, and both enable interaction with wearable or nearby computers using gesture-based commands. Another example of accelerometer-based gesture recognition is a musical performance control system by Sawada et al [16]. With this system users can create music with simple gestures (vertical, horizontal or diagonal hand movements, etc.).

Moreover, an additional way to interact using gestures is to use 2D patterns for character and gesture recognition, where the user draws characters or brush strokes on a touch sensitive surface with a stylus or a mouse [20]. Examples of this kind of interaction can be seen on PDAs and on newer cell phones with touch sensitive displays. The technique can be used for finger-based 2D gesture stroke control as well [20].

Gesture interaction is a relatively new field in Human-Computer Interaction (HCI), and further research is needed to investigate which gestures are suitable and appropriate in various contexts or for specific types of tasks. This is, however, a very broad research question, mainly because there are many gestures that people can use for interaction across many different tasks and applications.

Kela et al suggest in [20] that the recognition accuracy of gesture interaction systems must be high, close to 100%, otherwise users may abandon this kind of interaction. Another important requirement suggested in [20] is that users must be able to create their own gestures, i.e., the gesture recognition system must support customization and personalization, mainly because users tend to use different gestures for the same, identical task.

Furthermore, the users in [20] thought that the number of possible gesture commands should be kept relatively small. The possibility to train and rehearse gestures that are difficult to perform should be available as well.

In addition, the users who participated in Kela et al's study [20] generally thought that interacting with gestures felt natural, and the overall usability was good compared to other means of interaction (e.g., speech, RFID-based tangible physical objects and a PDA stylus).


4.2.1 Tilt input

One way to use hand movements for interaction is to let the user tilt a device with a built-in accelerometer in different ways. The accelerometer detects the angle and direction of the tilt.

Oakley et al [31] distinguish between two types of tilt input: discrete control and continuous control. With discrete control, a specific tilt action serves as an input that triggers a specific action. More technically, this means that the system waits for the accelerometer to reach a specified threshold value, and then performs a specified action. This use of tilt interaction resembles an ordinary button: it is either "pushed" or not. Continuous control means that the whole range of the accelerometer input is used (instead of just a threshold value) to perform different actions, such as adjusting values or positions. For example, it could be used to make the scroll speed increase as the angle of the tilt increases.
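The two control styles might be sketched as follows; the threshold, angle range and speed values are illustrative assumptions, and the function names are hypothetical.

```python
TILT_THRESHOLD = 25.0  # degrees; illustrative value

def discrete_tilt(angle, on_trigger):
    """Discrete control: fire a single action once the tilt passes
    a threshold, much like pressing a button."""
    if abs(angle) >= TILT_THRESHOLD:
        on_trigger("left" if angle < 0 else "right")

def continuous_scroll_speed(angle, max_angle=45.0, max_speed=10.0):
    """Continuous control: map the whole accelerometer range to a
    scroll speed, so a steeper tilt scrolls faster."""
    angle = max(-max_angle, min(max_angle, angle))
    return max_speed * angle / max_angle

events = []
discrete_tilt(30.0, events.append)   # past threshold -> triggers "right"
discrete_tilt(10.0, events.append)   # below threshold -> nothing happens
print(events)
print(continuous_scroll_speed(22.5)) # half of max angle -> half speed
```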

This is a fairly new interaction technique, but some informal tests have been made with small handheld devices such as mobile phones and PDAs. Most of the work in this area has used tilt input to perform different scrolling or browsing operations, and some characteristics of the technique have been pointed out during prototype tests.

One advantage that Rekimoto [35] has noted is that the user uses the same hand to hold and to control the device being tilted, unlike when a touch screen or a pen is used. This is good, since it leaves the user's other hand free for other things. Rekimoto also found, through informal tests, that users could control the tilt fairly accurately if the visual feedback was well designed. One concern regarding tilt input, stated by Hinckley et al [17], is that the viewing angle of the device's screen could be a problem when it is tilted, making the screen harder to view because of changes in brightness etc. However, in Rekimoto's tests the users normally tilted the device about 10-15 degrees in the different directions, which did not cause any visibility problems.

Oakley et al [31] examined how a simple phonebook in a mobile phone could be scrolled using tilt input. Previous work had shown that when tilt input is used to control scrolling speed, users find it difficult to target a specific item when there are very many to choose from, which is often the case with phonebooks. Therefore, they designed a prototype that combined tilt input with traditional letter selection using the 2-9 keys on the mobile phone. The idea was that the user first pressed and held one of the keys, and then used tilting to choose among the names starting with the letters corresponding to that key (for example, if the 3 key was held, all names starting with d, e or f would appear and be selectable through tilt scrolling). The idea is illustrated in Fig. 4.4. The authors claim that informal testing of this design showed that it has good potential.
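The key-plus-tilt filtering step can be sketched as follows. The keypad mapping is the standard phone letter grouping; the function name and the toy phonebook are hypothetical, and tilt scrolling would then operate within the returned subset.

```python
# Standard phone keypad letter groups (key 3 -> "def", etc.).
KEY_LETTERS = {2: "abc", 3: "def", 4: "ghi", 5: "jkl",
               6: "mno", 7: "pqrs", 8: "tuv", 9: "wxyz"}

def names_for_key(phonebook, key):
    """Names selectable while the given digit key is held: only
    entries starting with one of the key's letters remain, and
    tilt input then scrolls within this reduced list."""
    letters = KEY_LETTERS[key]
    return sorted(n for n in phonebook if n[0].lower() in letters)

book = ["David", "Erik", "Filip", "Anna", "Robert"]
print(names_for_key(book, 3))  # key 3 covers d, e and f
```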

Figure 4.4: Scrolling through a phonebook by pressing a button and tilting the phone. Taken from [31] with permission from the authors.

Wigdor et al [44] presented a prototype for text entry on mobile phones, called TiltText, with a solution similar to Oakley et al's. They combined the pressing of the standard digit buttons with a tilt movement to select an alphanumeric character. The tilt at any given time is compared to an absolute reference point that can be set by the user (normally an "untilted" orientation of the phone); when a key is pressed, the current tilt is compared to this reference value. In other words, the user first tilts the mobile phone, and then presses a key to select a character. For example, if the user presses the 2 key while the phone is tilted to the left, "a" is selected; if the phone is tilted forward, "b" is selected; a right tilt gives "c"; and holding the phone untilted selects "2". Fig. 4.5 illustrates the idea.

Figure 4.5: This picture shows how the user can tilt the phone in different directions to choose different characters. Taken from [44] with permission from the authors.
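The TiltText selection scheme described above can be sketched as a lookup from key and tilt direction to character. The 15-degree threshold, the helper names and the restriction to key 2 are illustrative assumptions, not Wigdor et al's implementation.

```python
# (key, tilt direction) -> character; only key 2 is mapped here.
TILTTEXT = {
    (2, "left"): "a",
    (2, "forward"): "b",
    (2, "right"): "c",
    (2, "none"): "2",
}

def tilt_direction(roll, pitch, threshold=15.0):
    """Classify the tilt relative to the 'untilted' reference.
    roll: left/right in degrees, pitch: forward/back."""
    if abs(roll) < threshold and abs(pitch) < threshold:
        return "none"
    if abs(roll) >= abs(pitch):
        return "left" if roll < 0 else "right"
    return "forward" if pitch > 0 else "back"

def tilttext_char(key, roll, pitch):
    """Character produced by pressing a key at the current tilt."""
    return TILTTEXT.get((key, tilt_direction(roll, pitch)))

print(tilttext_char(2, -30.0, 0.0))  # tilted left -> "a"
print(tilttext_char(2, 0.0, 0.0))    # untilted -> the digit "2"
```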

User tests were done to compare this technique to the conventional "MultiTap" technique (i.e., pressing the 2 key three times to obtain a "c", etc.). Results varied slightly depending on different factors, but overall the tests showed that text entry with TiltText was about 16% faster than with MultiTap [44]. TiltText was not compared to dictionary-based text entry techniques, where the user only has to press each button once (e.g. T9), but by cross-comparing with other tests, Wigdor et al estimate that TiltText is approximately equivalent to these types of techniques.

In the phonebook-scrolling prototype, Oakley et al also implemented vibrotactile feedback, i.e. feedback to the user through vibrations in the handheld device. They used this type of feedback to minimize the visual feedback needed, so that the user is less required to look at the screen of the device, and found that it led to qualitative improvements in the user experience. In one application, in which the user could tilt a device to scroll and explore a map on the screen, they added vibrotactile feedback when the user tilted to scroll. The amplitude of the vibrations changed with the scroll speed, growing stronger the more the user tilted the device. This seemed to help the users navigate the map correctly and made them feel more in control of the scrolling, especially when making small adjustments [31].
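The speed-dependent vibration amplitude could be modelled as a simple proportional mapping; the clamping range and function name below are assumptions for illustration.

```python
def vibration_amplitude(scroll_speed, max_speed=10.0):
    """Vibrotactile feedback amplitude that grows with scroll speed,
    as in Oakley et al's map prototype (0.0 = off, 1.0 = strongest)."""
    return min(abs(scroll_speed), max_speed) / max_speed

print(vibration_amplitude(2.5))   # gentle scroll -> weak vibration
print(vibration_amplitude(15.0))  # beyond max speed -> full amplitude
```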

4.3 Touch sensitive input

Using touch sensing as input is a natural interaction technique, since it lets the user use his or her hands directly to perform actions. There are different ways of using this haptic information as input.

By using touch screens, one can let the user interact directly with the graphical user interface, without a mediating device such as a mouse; i.e., the surface that displays information is the same surface the user interacts with. This way, there is no real distance between input and output, which makes the interface very intuitive to interact with [2]. The fact that no extra input device is needed is another big advantage of this type of interface.

Having buttons on the display surface that the user can press, instead of physical buttons, allows the interface to be dynamic and flexible. Buttons can change depending on the situation, and all buttons do not have to be visible at all times, which means that less space is needed for buttons, which in turn can allow a smaller device. However, virtual, touchable buttons on a screen have a major drawback: they lack the natural tactile feedback that physical buttons offer. Pressing an area of a screen does not give the same confirming feedback as pressing down a physical button. Another disadvantage of virtual, dynamic buttons is that the user cannot perceive where the buttons are located without looking at the surface, i.e. the user cannot feel where the buttons are, as with physical buttons. Finally, two drawbacks that Albinsson et al mention are that the user's hand obscures the display surface during interaction, and that human fingers lack precision [2].

The lack of feedback can be overcome, or at least reduced, by adding tactile feedback to the touch sensitive device. This tactile feedback is most often some sort of vibration. There are many examples of systems with this type of feedback, for example the vibrotactile display used by Oakley et al [31] described earlier, and a touchpad developed by MacKenzie et al [25] that generates a "click sensation" when it is pressed. Tactile feedback has been appreciated during user tests of such systems; however, it obviously demands more of the hardware used.

Touch screens are probably what people most often associate with using touch as input, but there are other ways too. The touch sensitive surface that the user interacts with does not have to be the display surface. Touch tablet is a general term for flat surfaces that sense if and where they are being touched [8]. Probably the most well-known example is the touchpad, often used in laptop computers as an alternative to a mouse. A prototype for a remote control that utilizes a touchpad, developed by Enns et al [11], was described earlier in this report.

Apart from sensing the location of where the user is touching a surface, merely the touch itself can be used as input, i.e., just sensing whether the surface is being touched or not. Hinckley et al describe a useful way of doing this in [18]. By utilizing a touch sensitive mouse that senses whether or not the user is holding it, a system can predict what the user is doing, i.e. a form of context awareness. They call this an "on-demand interface", and the idea is to sense the context of use to determine which parts of the interface are important at a given time. For example, when the user is not touching the mouse, a word processor can assume that the user is currently typing, and thus remove toolbars and other items so that the area displaying the text document can be as large as possible. This can be done because when the user is not touching the mouse, he or she has no need for toolbars and menus anyway. When the system senses that the user touches the mouse, it can simply make all toolbars and menus visible, since the user is likely to want to access them while using the mouse. However, this functionality could be perceived as annoying by some users.
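The touch-triggered interface switching can be sketched as a tiny state machine; the class and method names are hypothetical, not Hinckley et al's API.

```python
class OnDemandInterface:
    """Sketch of an 'on-demand interface': a touch sensor on the
    mouse toggles which parts of the UI are shown."""

    def __init__(self):
        self.toolbars_visible = False  # assume typing until touched

    def on_touch_changed(self, touching):
        # Touching the mouse implies the user may want the toolbars;
        # releasing it implies typing, so reclaim screen space.
        self.toolbars_visible = touching

ui = OnDemandInterface()
ui.on_touch_changed(True)
print(ui.toolbars_visible)   # user grabbed the mouse -> toolbars shown
ui.on_touch_changed(False)
print(ui.toolbars_visible)   # hands back on the keyboard -> hidden
```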


Chapter 5

Prototype Development

This chapter begins with conclusions and key aspects from the literature study that are considered important for the prototype development. In addition to this, design methods, principles and guidelines that will be "kept in mind" in the development of the prototypes will be presented. Finally, the actual prototype development process will be described.

5.1 Conclusions and design implications from the literature study

One thing that can be concluded from the previous chapters is that viewers are generally passive in front of the TV. Sometimes the reason for watching TV is just to relax and "clear your mind", or to "hang out" with other people [15]. This is a big difference compared to surfing the web, where the user most often has a goal or purpose with the time spent [21]. Interactive content that is suitable for the web is not necessarily suitable for the TV; the content will likely need adjustments to better suit the prerequisites and requirements of TV usage. When designing interactive TV content, one should not demand too much interactivity from the user. Instead, the designer should present interaction as an option and invite the user to it, without being too assertive and intrusive. However, there seems to be a trend towards more active and playful interactive devices and content, and away from the completely passive viewer [38].

An additional conclusion regarding interactive content is that conventional TV watching should still be considered the central activity in front of the TV [40]. The interactive content will be an addition or extension to the conventional, non-interactive content. This further supports the conclusion above, that interactive content should be optional and not intrusive to the user.

From the study by Logan et al mentioned earlier [23], it can be concluded that the vast majority of all interactions with the remote control consisted of zapping between channels, changing the volume and selecting channels with the digit buttons; all other buttons are rarely used. Another conclusion is that more buttons and features on a remote control make it more complex and difficult to use [11]. One additional important conclusion is that the user's visual attention should be directed at the screen, and s/he should not have to switch attention to the remote control when interacting with it [10]. These factors lead to the conclusion that many of the features accessed by buttons on the remote control can be moved to the graphical user interface on the TV. This reduces the number of buttons and puts more focus on the TV screen. The idea is related to the line of thought formulated by Enns et al [11] that was described earlier.

Regarding the different interaction techniques, all of them could be used as part of the way to interact with interactive television and further enhance the user experience. Speech is a natural way for humans to interact, and speaking does not require any physical effort like using the hands, so it could be well suited for relaxed interaction in front of the TV. However, the fact that people often watch TV together makes it less suitable, since conversation between people could disturb the speech input to the TV. Therefore, the use of speech input should probably be limited to short commands, and there should be a way to activate and deactivate the speech recognizer so that sounds in the environment do not disturb the speech recognition.

Gesture interaction provides a very natural way to interact, by letting the user employ natural body language. The distinguishing feature of gesture interaction, compared to the other techniques, is its great potential to enhance the user experience in a more playful way when interacting with graphical user interfaces. It is well suited when the purpose of the interaction is to be entertaining and fun, and it supports collaboration well since many users can interact simultaneously. However, it is probably less suited for everyday interactions and operations that demand higher efficiency, because of the relatively high error rate in the identification of gestures. This might change in the future, as the technology becomes more refined. Furthermore, it might not be suited for everyday actions if the physical effort needed to interact is too large, because a relaxed way to interact with the TV is very important, as mentioned earlier.

Another way to use gestures as input is to let the user tilt the remote control. This is easier to implement and requires less demanding and expensive technology than systems that support bare-hand gesture interaction. Tilt input could be useful for simpler, everyday actions, as it lets the user interact in the same relaxed manner as with a conventional remote control, and it is relatively effortless. It also has the "fun to use" quality, because this form of interaction is rather innovative, like bare-hand gesture interaction, although much more limited.

Equipping a remote control with a touch screen allows it to be flexible and dynamic, changing the available buttons depending on the situation. However, a dynamic remote control forces the user to look at it to see which options are available on the display. Another major disadvantage of virtual buttons on a display is the lack of tactile information about the location of the buttons. Being able to feel where the buttons are located is very important in a TV viewing situation, where the viewer wants to focus on the TV. The lack of tactile feedback when a button has been pressed is also a disadvantage, if vibrotactile feedback is not used. However, touch input can be realized in other ways than with a touch screen. For example, a touchpad could be useful, and touch can also be used as binary information, such as detecting whether the user is holding the remote control.

5.2 Design methods and frameworks

In this section, some guidelines for interaction design that will inspire the upcoming prototype development are presented. Some of the terms and concepts described here will be referred to in analyses and evaluations of the design suggestions for the remote control later in this chapter.

5.2.1 The iterative design process

Most researchers and practitioners in the human-computer interaction field would probably agree that the process of designing interactive products and systems has to be iterative. That means that the different stages of the design process should be repeated so that the design can be refined until the desired final product is reached. An illustration of a generic version of the iterative design process is shown in Fig. 5.1. A short description of the four stages of the design process, based on [33], is given below. Note that the stages are all interrelated and not performed in a static order, as the arrows in the figure show.

Figure 5.1: The iterative design process. Picture inspired by [33].

– Identify needs/establish requirements: Before designing something, one has to know who the user is, what the user needs and wants to do, how the interactive product or system should support the user, and which other requirements should be considered. The requirements derived from this stage form the basis of the design.


– (Re)Design: This is the central stage: coming up with different suggestions for the actual design. It is good to produce several different design suggestions to try many different ideas, especially at the early stages of the process. This stage can be divided into two sub-stages. First, the conceptual design stage, where the product or system is described at a relatively high abstraction level, to form a conceptual model of the product or system. Next, the physical design is made, which is more detailed and includes, for example, which shapes and colours to use.

– Build an interactive version: After the design suggestions have been developed, they have to be realized in some way so that they can be tested. The "interactive version" of a design suggestion can be made in several ways. It does not necessarily have to be a fully functional software implementation; a paper-based mock-up, for example, can also illustrate the interactivity. The important thing is that it can be tested to identify what is good with the design, and what is not.

– Evaluate: In the evaluation stage, the designer(s) can determine the usability and general appreciation of the different design ideas. This is usually done by testing the interactive versions on users, but can also be achieved with, for instance, a heuristic evaluation.

In addition to the iterative nature of the design process, Preece et al [33] identify two other key characteristics of the interaction design process: the importance of keeping focus on the users, and the importance of having "specific usability and user experience goals". Both these characteristics are more or less built into the design process described above.

5.2.2 Hi-fi and Lo-fi prototypes

A prototype is a representation of a system or device that is going to be implemented, and can demonstrate how things could work in a final version of the system or device. Prototypes can be used to show a design concept at an early stage of a design process, or how a certain detail of that concept works. Prototypes are used in almost every design domain [3].

Furthermore, a prototype can be as simple as a piece of paper, cardboard or some clay, or be built with more advanced software tools. A designer can use almost any material, as long as he or she considers it appropriate for creating a suitable prototype according to the goals at hand.

When creating prototypes for interactive systems, it is useful to start with early paper sketches of screens of the system-to-be. The designer can also create early and simple software "mock-ups", just to show the concept.

According to Benyon et al in [3], one distinguishing characteristic of prototypes is that they are somewhat interactive: something always happens when a user interacts with a prototype. For example, a post-it note might serve as a menu that appears when the user presses a button drawn on a piece of paper.

The appropriate level of detail of a prototype depends on several factors: who is going to use the prototype, and at what stage the design process is. A designer's ambition in creating a prototype might be to highlight some feature or functionality of the system, or to test the interface of a system or product. Benyon et al argue in [3] that a prototype's main purpose is to let people and clients evaluate the designer's ideas. There are two kinds of prototypes: low fidelity (lo-fi) prototypes and high fidelity (hi-fi) prototypes.

Lo-fi prototypes

Lo-fi prototypes are often created with pen and paper, and are therefore also called paper prototypes. The main characteristics and features of lo-fi prototypes are [3]:

– They show the main idea or the conceptual design of the product; basic functionality and the broad underlying design are described and demonstrated with lo-fi prototypes.

– Lo-fi prototypes can be created very fast.

– They focus on early design ideas, and should furthermore be used to generate new ideas and aid the designer in future design work and in the evaluation of potential design solutions.

How the prototype is implemented is limited only by the imagination of the designer and the time and materials at hand. Flexible prototypes can be created very quickly with just paper and post-it notes; the whole point of lo-fi prototyping is not to spend too much time building the prototype. A designer who is trying to create a very detailed paper prototype should instead consider developing a hi-fi software prototype [3].

Benyon et al discuss in [3] some practical issues in designing paper prototypes. These are described below:

– Robustness: If the prototype is going to be used by many people or survive a whole day of user testing, it needs to be sturdy enough to endure.

– Scope: The scope and focus need to be on general key elements. If the designer tries to describe very detailed functionality with the prototype, it can be hard for the user to understand.

– Instructions: It is very difficult to add enough detail for people to use the prototype without the designer lending a helping hand, and if the designer must help people use the prototype, this can affect user response.

– Flexibility: Benyon et al suggest that parts of the prototype need to be adjustable, so that users are able to redesign it.


Hi-fi prototypes

Hi-fi prototypes are close to the anticipated final product in appearance and functionality. They are often developed in the software environment that the final product will be implemented in, but they can also be created with some other software package that is suitable and easy to create "mock-ups" with, just to demonstrate the interactivity. Some hi-fi prototype characteristics and features, taken from Benyon et al's book on interaction design [3], are described below:

– Hi-fi prototypes are good for testing the overall functionality of the system that is going to be implemented, that is, for testing the usability of the system to see whether users can learn it in a reasonable amount of time.

– The prototypes can sometimes work as a final design document which the clients have to agree on before the final product can be implemented.

– Hi-fi prototypes are generally developed when the design process is almost at its final stage: the conceptual and other ideas have become stable, and the hi-fi prototypes are developed according to these ideas, unless some crucial issue or some brilliant new idea shows up.

One problem with hi-fi prototypes is that people treat them almost as a final product [3]. If the designer creates a really good-looking prototype with advanced functionality, it can be mistaken for the final product, which can cause problems if the designer has not implemented the full functionality. It is unpleasant for the designer to have to say "we are going to fix that bug in the final version of the product" or "that has not yet been implemented". In hi-fi prototyping, details are very important [3]. Hi-fi prototypes also show what is going to be implemented in the final system, and this can be a problem for the system developers: something created in Macromedia Flash might not be programmable and implementable in Java. Furthermore, it takes time, effort and sometimes a lot of money to create a hi-fi prototype with advanced functionality, compared to lo-fi prototypes.

5.2.3 Design and usability principles

Many design principles exist in the relatively young HCI field. Donald Norman presents several in his book "The Design of Everyday Things" [30], and another famous usability expert, Jakob Nielsen, has developed another set of design principles for more user-friendly products (see his book "Usability Engineering" [27] for more information). Principles can sometimes be confusing because of their high abstraction level [3]; for example, Norman's principle "make things visible" [30] is highly abstract. On the other hand, design principles can also be more specific; for example, Nielsen's principle "provide clearly marked exits" [27] has a lower abstraction level than the previous example.


Moreover, good design principles emerge from psychology as well [3]; for example, because people tend to forget, the principle "keep the memory load low" has emerged. Design guidelines are widely applied today: computers have the famous undo action (PC: Ctrl-Z, Mac: Cmd-Z), options that cannot be used are greyed out, web browsers have the back button, etc. [3]. Design principles are very useful when developing and evaluating prototypes. In their book "Designing Interactive Systems: People, Activities, Contexts, Technologies" [3], Benyon et al have put together a framework from design principles developed by, for example, Donald Norman and Jakob Nielsen. The framework is built on high-level design principles that overlap, affect each other, conflict with each other and sometimes augment and enhance each other in complex ways [3], but most of the time they help the designer create products with good usability.

Benyon et al have grouped the design principles into three categories: learnability, effectiveness and accommodation. This categorisation is not strict and rigid; the grouping is just a memory aid for the designer. It would, however, be very good if all systems were learnable, effective and accommodating. The principles in [3] are listed and discussed below.

Principles 1-4 are about making it easier for the user to access the system, learn how to use it, and remember it:

1. Visibility: It is important to make things visible. If things are visible, the user can see which functions s/he can use, and also know what the current system state is. This is a psychological factor, according to Benyon et al, because it is easier to recognize things than to recall them. If it is too hard to make things visible, consider making them observable instead, i.e. through the use of sound and haptic feedback.

2. Consistency: A designer must be consistent with similar systems, symbols and standard ways of doing things when designing a new system. Both physical and conceptual consistency is of importance.

3. Familiarity: It is a good idea to use symbols and language that the users are familiar with. This is not possible all of the time because people have different knowledge domains. Metaphors can be used to help the user transfer knowledge from a more familiar domain.

4. Affordance: It is very important to design things so it is evident what the design is useful for. Design buttons so they look like buttons, and people will press them. According to Benyon et al, affordance refers to the properties that things have, or to how users perceive the properties of things, and to how these properties relate to how the things can be used. For example: chairs afford sitting, buttons afford being pressed, etc.

Principles 5-7 are about the sense of being in control, that is, making the user aware of what s/he can do with the product and also of how to use it.


5. Navigation: The system must support the user's navigation so that the user can explore the system. Support can be created by the use of information signs, directional signs and maps.

6. Control: People who use the system must be able to take control of the system and feel that they are in control. Users must know who and what are in control. Control is accomplished by a clear mapping between the actions performed by the user and the outcomes of these actions.

7. Feedback: It is very important that the user can get feedback from the system, so that the user knows what effect their performed actions have had. Clear and consistent feedback enhances the feeling of being in control.

Principles 8-9 are about safety and security.

8. Recovery: If something goes wrong or not as planned, it is good if the design supports recovery from the actions, especially from actions that caused errors or were mistakes. This recovery should be smooth and effective.

9. Constraints: The designer should provide constraints so users are forced to perform actions in an appropriate way. Users should especially be prevented from making serious errors, by constraining the performable actions and providing confirmation on operations that are dangerous.

Principles 10-12 are about making design suitable for users.

10. Flexibility: The system should support multiple ways of doing the same thing. This accommodates novice users and encourages them to take interest in the system. Moreover, the system must provide the opportunity for the user to change its look, feel and behaviour, so that the user can personalize and customize the system.

11. Style: ”Designs should be stylish and attractive”.

12. Conviviality: Systems, and especially interactive systems, should be pleasant to use, and they should be polite and friendly as well. If a message is rude or very abrupt, this message can lower the overall experience of using interactive systems. Conviviality is also about using interactive technologies to support and connect people.

5.2.4 Affective aspects

When designing products and systems, the goal has traditionally been to design them to be as effective as possible. The term usability is often associated with performance and results, measured quantitatively.

However, designers are becoming more and more interested in other aspects of the usage of a product. These aspects concern the kind of emotions that the product evokes in the user. Preece et al refer to these aspects of a product as affective aspects [33]. They claim that an interactive system or product can be designed to affect the user's emotions in different ways. A product or system that is designed with affective aspects in mind can feel enjoyable and appealing to the user, and encourage the user to explore, learn and be creative or social [33]. In other words, the interactive system or product can influence the user to feel, act or perceive the product or system in a desired way.

Another part of these affective aspects is to prevent the user from feeling a certain way. Even if a product or system is well designed overall, so that the user can perform tasks in an efficient way, small details or parts of the overall picture can make the user feel frustration or even anger [33]. User frustration can be caused by many things, for example when the system does not behave in the intended way, bad error messages (or error handling in general), or the overall appearance of an interface. Eliminating user frustration completely is virtually impossible; there are always situations where the system does not behave exactly in the way the user wants it to. According to Preece et al, the best way to deal with this is to provide good and easy-to-understand error messages that tell the user what needs to be done in a simple way, and to provide useful and carefully designed "help" features [33].

Another term that is used when reasoning about these issues is the "user experience". To make the whole experience of using a product or system as pleasing as possible, both traditional usability goals (completion time, error rate etc.) and more emotional aspects of the usage have to be considered. Preece et al define a number of user experience goals that are preferable to make the emotional, subjective part of the user experience as pleasing as possible. It is desirable that an interactive system should be [33]:

– Satisfying

– Enjoyable

– Fun

– Entertaining

– Helpful

– Motivating

– Aesthetically pleasing

– Supportive of creativity

– Rewarding

– Emotionally fulfilling

One difference between these goals and more traditional usability goals is that these goals concern how an interactive product or system is experienced from the user's perspective, whereas the traditional goals, relating more to performance aspects, are seen from the system's perspective [33].

In the article "Behavioral and emotional usability: Thomson Consumer Electronics" [22], Logan differentiates between behavioral usability and emotional usability, the latter being similar to the affective aspects described above. Emotional usability deals with other needs than requirements on performance, such as how desirable and attractive the product is. The concept of emotional usability is based on three qualities of a product:


– How engaging the product is. That is, how well it captures and holds the user's attention, and how well it motivates the user to interact with it.

– How well the product encourages discovery and exploration. By doing this, it takes advantage of humans' natural curiosity, to make it easier for the user to learn and discover the product's features and functionality.

– How well the product eliminates fear of using it. If the user is afraid of negative consequences, s/he will be less likely to use and explore it.

Logan argues that designing a product or system with these qualities in mind will make it more rewarding for the user, and that it will also improve the learnability and general usability.

In [24], the authors claim that emotional usability is especially important for products that are intended for entertainment rather than pragmatic or "business-like" purposes. They also refer to other publications that suggest that pointing devices such as remote controls should be designed with emphasis on the user's appreciation of the device rather than on quantitative, performance-related attributes.

5.2.5 Mental models

How would you describe how your TV works to your best friend? The question is somewhat unclear and open, because it depends on how you define the word "work" and which abstraction level you think is appropriate to use in your description. One person might associate the word "work" with how the electronics in the TV work, while someone else perhaps thinks of which buttons s/he uses when interacting with the TV. How you describe your TV and how it works reflects your mental model of your TV. A mental model is your own understanding of how your TV works, and thus a cognitive representation of your understanding of your TV's functionality [3]. Furthermore, your knowledge of how your TV works should not be considered a sequence of stored facts; rather, a mental model relates to how your TV, or any interactive device, works. Benyon et al describe mental models in [3] as "having a working model of, for example, a device in the real world in our heads". This model can be used to visualize how the device can be used and how it actually works. Mental models are considered to be important when a novice user is learning how a system or some interactive product works. See Fig. 5.2.

According to Norman [30], good design depends on how well the mapping between the user's own mental model and the designer's conceptual model of the interactive device, system or product works. Providing a good mapping between the user's mental model and the designer's conceptual model is not as easy as it sounds, because the user develops a mental model through interacting with the product or system. Many designers still assume that their mental model is the same as, or almost identical to, that of the person who is going to use the product or system. Furthermore, the designer communicates with the user through what Norman calls the "system image" [30]. The "system image" may be thought of as the product's physical design. If the "system image" is inconsistent or unclear, the user can create a mental model that is confusing and misleading. The subject of mental models is debated among researchers in the HCI field, and there is not much agreement among researchers on this subject, except a recognition of the importance of mental models [3].

Figure 5.2: The system image. Picture inspired by Benyon et al [3].

5.2.6 Design and colour

Colour makes life more fun and is very important to people. For example, if you describe someone as being colourless, you are implying that the person is without character. Designing with colour in interactive systems is not as easy as it might seem [3]. If it were, why are almost all existing electronics black or grey? When it comes to software and software applications, Microsoft is fond of using blue and grey for their Windows software. The user of Apple's OS X is offered a choice between blue and graphite.

Benyon et al [3] use Aaron Marcus' rules from his book "Graphic Design for Electronic Documents and User Interfaces" [26] as guidelines for using colour in design. The rules below are taken directly from Benyon et al's book, although Marcus is their original author:

– Rule 1. Use a maximum of 5 ± 2 colours.

– Rule 2. Use foveal (central) and peripheral colours appropriately.

– Rule 3. Use a colour area that exhibits a minimum shift in colour and/or size if the colour area changes in size.

– Rule 4. Do not use simultaneous high-chroma, spectral colours.

– Rule 5. Use familiar, consistent colour codings with appropriate references.


As with all guidelines, the rules above are just guidelines. They may not be suitable for every situation or context, but they should at least provide a sound starting point. Benyon et al remark that colour connotations also differ within a culture. In [26], the colour blue and how it is interpreted in the US is taken as an example. For healthcare professionals the colour blue is an indication of death. For "movie-goers" it indicates pornography, and for accountants it is an indication of corporateness or reliability (think IBM and "Big Blue"). The colour conventions in the Western world, according to Marcus [26], can be summed up as follows: red can stand for hot, fire or danger; yellow for test, slow or caution; green for go, okay, vegetation, clear and safety; blue for calm, sky, water or cold. Warm colours represent action, response required, and proximity. Cool colours stand for background information, distance, and status. Greys, white and blue are more neutral colours.

5.3 The design process

This section describes the different stages of the prototype development in detail. First, the results of a survey will be presented. Then, a description of the early user tests will be given. At the end of the chapter, the three design iterations will be described in separate sections. Each section begins with a description of the actual prototypes that were made in that iteration. Furthermore, the results of the user tests that were conducted are presented. Finally, each section ends with an analysis of the test results. The analyses are based on the guidelines and principles described above, specifically the three main categories for usability: learnability, accommodation and effectiveness.

It was decided to work with phidgets and Macromedia Flash in the prototype design process, because of the compatibility between phidgets and Macromedia Flash and the limited time frame of the project. Consequently, the only interaction technique that was implemented and tested during the prototype development was accelerometer-based tilt input. With more time and technical resources, the other interaction techniques could have been implemented and tested as well.
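As background for the tilt technique, the conversion from raw accelerometer readings to tilt angles can be sketched as follows. This is a generic illustration in Python, not the actual Flash/phidget code used in the project, and the axis conventions are our assumptions:

```python
import math

def acceleration_to_tilt(ax, ay, az):
    """Convert a 3-axis accelerometer reading (in g) to (pitch, roll) in degrees.

    When the remote is held still, the accelerometer measures only gravity,
    so the direction of the measured vector reveals how the device is tilted.
    """
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

# Lying flat, gravity falls entirely on the z axis: no tilt at all.
print(acceleration_to_tilt(0.0, 0.0, 1.0))  # (0.0, 0.0)
```

A Flash application would receive such readings from the phidget hardware and apply the same trigonometry to decide which way, and how far, the remote is tilted.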

5.3.1 Survey

An online survey was made to examine how people use a remote control for television. The goal of the survey was to find out which buttons are most used, and which buttons should be placed at the most easily accessible positions on the remote control.

One question concerned how people most often change the channel while watching TV. This was asked because changing channels is presumably the most common thing to do with a remote control. The options were "Channel up/down", "Digit buttons (0-9)" and "Not sure". In the other part of the survey, the respondents were asked to rate how important it is that certain buttons are placed in an easily accessible way on the remote control. The options for each button were values from 1 to 5 (5 being "very important", 1 "not important at all").

An invitation to participate in the survey was sent via e-mail to students at the Interaction and Design program at Umea University. 113 persons responded. Among the respondents, about 73% were male and 27% were female, and the average age was around 24 years. 81% of the respondents claimed to use the "Channel up/down" buttons most often to change the channel, 12% used the digit buttons, and 7% chose the "Not sure" option.

The importance ratings for the different buttons are shown in the table below:

Table 5.1: Average ratings for the respective buttons.

Button(s)                                       Average rating
Volume up/down                                  4.98
Channel up/down                                 4.80
Digit buttons (0-9)                             4.18
Teletext button                                 3.82
Mute button                                     3.72
Menu button                                     3.42
Setting buttons for contrast, brightness etc.   2.99

5.3.2 Early user tests

The purpose of the first user tests was to test how efficient, and how easy and fun to use, three different interaction techniques were for steering and navigation tasks. The three techniques that were tested were a joystick, four directional buttons (up, down, right, left) and accelerometer-based tilt input. Simple prototypes were built for each technique. The prototypes were very minimalistically designed, to make the test persons focus only on the interaction and not on the aesthetics and physical look and feel of the remote. These prototypes can be regarded as lo-fi prototypes, but with some implemented functionality. The prototypes can be seen in Fig. 5.3.

Two types of tasks were tested for each technique: controlling a cursor (like controlling a mouse in computer interaction) and selecting items in a menu-like environment. The cursor tasks were to steer the cursor to "catch" objects that appeared on the screen in a random fashion. In the menu tasks, the users were asked to select a highlighted object from a list of objects, and in some of the tasks a second list of objects was unveiled as a sub-menu to the first menu. These two types of tasks were selected because they were regarded as typical tasks that may take place in computer-like iTV applications.

The test group consisted of five persons, all male, between 21 and 25 years old. The time it took to complete each task was logged to give quantitative results. The persons who participated in the test were not informed about this, because the participants were supposed to interact in a natural way and not in a stressful and competitive way. The persons were encouraged to think aloud and comment on what they thought of the different techniques, to give more qualitative results. The focus of the test was on the qualitative aspects.


Figure 5.3: From the top: the directional buttons prototype, the joystick prototype and the accelerometer prototype.

Results

For the cursor tasks, the joystick was the most efficient technique for each of the participants. The average time for completing the tasks was 1.35 seconds with the joystick, 2.02 seconds with the accelerometer and 2.26 seconds with the directional buttons. For the menu tasks, the directional buttons recorded the lowest completion times for three of the participants, while the joystick was most efficient for the other two persons. However, the average completion time for all participants was lowest for the joystick: 2.86 seconds, compared to 3.02 seconds for the directional buttons and 4.37 seconds for the accelerometer.

Table 5.2: Average completion times for the respective techniques and tasks.

Technique             Average completion       Average completion
                      time, cursor tasks (s)   time, menu tasks (s)
Joystick              1.35                     2.86
Accelerometer         2.02                     4.37
Directional buttons   2.26                     3.02

When asked about the different techniques, the joystick was perceived as the best technique overall. It was most comfortable to use for the cursor tasks, because it was familiar and easy to control. The accelerometer was perceived as easy to use for the cursor tasks; it felt natural and allowed for steering the cursor freely. It was not good for the menu tasks, however, because it was unstable, difficult to control and sometimes too sensitive. Nevertheless, all of the participants generally described it in terms of "cool" and "fun to use". The prototype with the directional buttons was perceived as very easy to use in the menu tasks, because it was easy to control. It was, however, not particularly good for the cursor tasks, as it was seen as tedious and it was difficult to steer the cursor diagonally.

5.3.3 Prototypes, version 1

In this stage of the design process, three prototypes with rather basic shapes and functionality were created. The purpose of these prototypes was to try three different basic shapes, and to further examine how the different interaction techniques worked in other scenarios than the very basic simulations conducted in the earlier tests. The prototypes become more hi-fi as the design process proceeds, and the prototypes in this version can be regarded as more hi-fi than the earlier ones. The three prototypes are shown in Fig. 5.4 below.

Figure 5.4: Prototype A (top left), prototype C (top right) and prototype B (bottom).

The idea with prototype A was to try a shape that resembled a standard, conventional remote control. The idea was that it should be rather big in size, and have a basic rectangular shape. No fancy stuff, just simple and straightforward testing of the shape.

Prototype A was equipped with two buttons, one joystick and an embedded accelerometer. Thus, navigation using both joystick and tilt was supported with this prototype. The lower button activated the accelerometer whenever it was to be used, and the other button functioned as an "OK/confirm" button.

Prototype B was designed to be somewhat more organic and rounded in its shape, in contrast to prototype A. Still, this prototype was not so radical in its shape; it was quite similar to prototype A except for the rounded edges and corners. The size was about the same, but slightly larger than prototype A.

Prototype B was the only prototype that was equipped with four directional buttons instead of a joystick. It also had an embedded accelerometer and two additional buttons: the lower one for activating the accelerometer, while the button in the middle of the directional buttons functioned as the OK-button.

Finally, prototype C was considerably smaller than the other two prototypes, and was also more rounded than the other two. The idea with prototype C was to make a prototype that contrasted the other two prototypes in shape and size.

Prototype C, like prototype A, was equipped with buttons, a joystick and an embedded accelerometer. But unlike prototype A, this prototype had three buttons. The button below the joystick served both as an OK-button and to activate the accelerometer, depending on the situation. The two oblong, elliptically shaped buttons above the joystick were used as a "channel up/down" button and a "volume up/down" button respectively. These buttons were used in combination with tilting the prototype, so these two buttons also activated the accelerometer.

User tests

To test these prototypes, two different Flash applications were implemented to simulate two interactive TV scenarios. The first one simulated a normal, conventional TV viewing experience, but with still pictures instead of video. The other scenario was based on the production by North Kingdom, which was used to simulate an interactive TV commercial.

A computer running these two Flash applications was connected to a 28" TV to simulate a real TV viewing environment as much as possible. The laptop computer was hidden, and the test persons sat comfortably in an armchair.

In the first test scenario, the conventional TV viewing experience, the test persons could change the channel by holding the button that activated the accelerometer, then tilting the remote control in the desired direction, and finally releasing the activation button. The direction of the tilt that is required to choose a certain channel is related to how the digit buttons on a conventional remote control are arranged (see Fig. 5.5).

Figure 5.5: The idea for the channel selection.

To provide feedback to the user, the number of the channel corresponding to the current tilt direction was displayed in the top left corner of the screen, so that the user could make sure that the intended channel would be chosen when s/he released the button (see Fig. 5.6). This idea, to use just one button and tilting the remote control instead of having several buttons for different channels, emerged from the ambition to use as few buttons as possible and to rely on the graphical user interface as much as possible instead.

Figure 5.6: Screenshot of the feedback in the top left corner of the screen.
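The tilt-to-channel mapping described above can be sketched in code. The following is a hypothetical illustration: while the activation button is held, the current tilt is mapped to one of nine channels laid out like the digit buttons on a conventional remote (1 2 3 / 4 5 6 / 7 8 9). The threshold value and the axis conventions are our assumptions, not taken from the thesis prototype:

```python
TILT_THRESHOLD = 0.3  # amount of tilt needed to leave the neutral zone (assumed)

def tilt_to_channel(tilt_x, tilt_y):
    """Map a tilt reading to a channel number (1-9).

    tilt_x: left (-) / right (+); tilt_y: away (-) / towards the user (+).
    The neutral position selects channel 5, the centre of the grid.
    """
    def discretise(value):
        if value < -TILT_THRESHOLD:
            return -1
        if value > TILT_THRESHOLD:
            return 1
        return 0

    col = discretise(tilt_x) + 1   # 0, 1 or 2
    row = discretise(tilt_y) + 1   # 0, 1 or 2
    return row * 3 + col + 1       # 1..9, digit-button layout

print(tilt_to_channel(0.0, 0.0))    # neutral -> 5
print(tilt_to_channel(-0.5, -0.5))  # left + away -> 1
print(tilt_to_channel(0.5, 0.5))    # right + towards -> 9
```

The on-screen feedback would simply display the value returned by `tilt_to_channel` for the latest reading, and the channel would be committed only when the activation button is released.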

Using prototypes A and C, the user could turn the channel up and down by moving the joystick up and down. On prototype B, the same could be achieved by pressing the "up" and "down" directional buttons. Turning the channel up and down one step could also be done by holding the OK-button and tilting the remote control (the OK-button was used since it does not have any other use during TV viewing).

The second part of the test, the interactive commercial movie, was very limited in functionality and content. In fact, the user could only navigate between a number of items in a menu. This could be done by using the accelerometer and an activation button (holding the button, tilting, and then releasing the button), as well as by using the joystick or directional buttons. See Fig. 5.7 for a screenshot of the menu interface.

Figure 5.7: Screenshot of the menu in the interactive commercial.

The tests were conducted with six test persons, four women and two men. All were students, aged approximately 20-30. Overall, the subjects found it difficult to use the remote controls at the beginning, but after some training and guidance the remotes became easier and easier to use. Choosing a channel by pressing a button and tilting the remote control was perceived differently by the users. One of the subjects thought it worked absolutely brilliantly right from the start, while another subject questioned why the tilting feature should even exist. Most of the subjects had occasional problems with choosing the right channel because of the sensitivity, and one person had problems knowing how to tilt the remote control to choose a certain channel. Zapping between channels worked well overall for all subjects, with tilting, joystick and directional buttons alike.

As for the other part of the test, many of the test persons had problems locating their initial starting position in the menu. That is, one of the menu items was highlighted/marked from the start, and the subjects did not always notice that. Moving between the items in the menu was also quite cumbersome, because the system did not always react when the user wanted it to react to a tilt.

Furthermore, the test persons had different preferences about which way the remote control should be tilted to move up and down, respectively.

Regarding the actual shapes of the remote controls, the users overall thought that the prototypes were "OK". Few comments were made besides that, but the general tendency was that prototypes B and C were preferred over prototype A, presumably because of their more round and smooth shapes. Unfortunately, the shape of prototype C did not reach its full potential, because of difficulties and complications integrating the hardware into it, but the users appreciated the basic idea of the shape.

Analysis

In the user tests, the subjects had some difficulties tilting the remote controls to choose channels and navigate through the menus, and some persons had difficulties figuring out how to use the remote controls at the beginning. This is often the case when people are trying products for the first time, but one can always discuss how long it should take to learn something new. One rather simple rule of thumb about learning a system is the "Ten minute rule" [33]. It says that a novice user should have learned how to use a system after ten minutes of using it (except for very large and complex systems, of course). Even though this rule of thumb is quite simplified and not very "scientific", it can be a good indication of the learnability of a product or a system, and according to this rule the remote control prototypes were actually not so hard to learn. Nevertheless, they took some time to learn, and one always wants to minimize that time as much as possible.

The test persons also had difficulties choosing the intended channels and menu items even after the initial learning period. These difficulties had more to do with the effectiveness than the learnability of the prototypes. This is probably because it takes some time to learn and adapt to the accelerometer's sensitivity. Also, the procedure of holding a button, tilting, and then releasing the button to perform navigation/browsing in the interactive commercial scenario was not intuitive at all for some users.

When it comes to the accommodation of the prototypes, they were often perceived by the users as "fun" and "cool" to use. Tilting the remote controls was described as new and somewhat innovative, which increased the enjoyment of using them. However, sometimes the tilting led to minor frustration when it was difficult to choose the desired channels and items. Again, this has to do with the sensitivity and calibration of the accelerometer, an issue that can be adjusted and improved further in the design process. The remote controls supported flexibility, another aspect of accommodation, by allowing multiple ways of performing certain actions (e.g., tilting the remote control or using the joystick to navigate between items).
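One common way to tame accelerometer sensitivity of the kind discussed above is to low-pass filter the raw readings before mapping them to tilt directions. The exponential moving average below is a generic illustration of this idea, not the calibration actually used in the prototypes:

```python
class TiltSmoother:
    """Exponential moving average over raw accelerometer readings.

    A smaller alpha gives a smoother but laggier signal, so the choice of
    alpha is exactly the stability-versus-responsiveness trade-off the
    test persons experienced.
    """

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # 0 < alpha <= 1
        self.value = None

    def update(self, raw):
        if self.value is None:
            self.value = raw  # first sample initialises the filter
        else:
            self.value = self.alpha * raw + (1 - self.alpha) * self.value
        return self.value

smoother = TiltSmoother(alpha=0.5)
smoothed = [smoother.update(r) for r in [0.0, 1.0, 1.0, 1.0]]
print(smoothed)  # a sudden jump is damped: [0.0, 0.5, 0.75, 0.875]
```

Combined with a dead zone around the neutral position, such filtering suppresses the jitter that made channel and menu selection feel unstable.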

Conclusions

One thing that needs to be changed in future versions is to avoid the "hold button, tilt, release button" sequence of actions for performing simple navigation/browsing between menu items in the second scenario. However, this sequence of actions should still be used in the channel selection, because the user must be sure that the right channel is selected before confirming the selection by releasing the button. The sequence was more intuitive for this task than in the menu navigation task.

Also, the indication of the initial starting position in the menu should be marked/highlighted more clearly and distinctly.

The sensitivity of the tilting also has to be calibrated, so that it is more stable and precise.

Regarding the shape and look of the remote controls, some sort of merge between prototypes A and B, with organic and smooth curves, edges and corners, seems appropriate according to the user comments. However, the shape of prototype C will be explored more, since the basic shape seemed to be appreciated by the users. A modified prototype with this basic shape in mind will be developed, to further test if the shape is as good as the user comments suggested.

Another decision that was made was to proceed with the channel selection method, and not to use any digit buttons.

Finally, after these tests it was concluded that the directional buttons should not be used in future prototypes, simply because a joystick eliminates the need for them. The joystick was "better" at everything the directional buttons could do, as the earliest user tests suggested. Those tests also showed that, quantitatively, the joystick was better than the accelerometer, but the accelerometer adds a dimension that neither the joystick nor the directional buttons has, namely allowing the user to interact with more "freedom". The "fun to use" quality and enjoyment (i.e. the accommodation aspects) also motivate the use of an accelerometer. At this early stage, it also feels as if the accelerometer has the potential to be used for more complex and sophisticated applications and tasks, where it can enhance the user experience.

5.3.4 Prototypes, version 2

In the second iteration, two new prototypes were made. These prototypes were generally more thought through and more thoroughly made concerning ergonomic aspects such as grip and button placement. The prototypes are shown in Fig. 5.8 below.

Figure 5.8: Prototype D (top left), prototype E (bottom left) and the back of prototype D (with the activation button marked with a circle).

Prototype D resembled prototype B from the previous prototype versions, but was slightly smaller and more "slender". A new idea tested on this prototype was to place the accelerometer activation button on the back of the remote control, where it was intended to be easily accessible with the index finger. The idea is that the user should be able to access the button easily and effortlessly without having to move the thumb to different buttons, and can thereby maintain the current grip. This way, the natural shape of the hand comes to better use, and the user can now operate the remote control with two fingers instead of just one. Apart from this button, prototype D has two other buttons. The button directly below the joystick functions as the OK-button.

The idea behind the third button is new for this prototype version. The idea is to use it as a "soft key", i.e., a button that changes functionality depending on the situation. This button is an extension of the idea formulated earlier, to outsource functions to the graphical user interface instead of placing them on the remote control. That is, the idea is to "embed" several buttons into this single button by combining it with a tilt movement. When the user holds the button, several options/items are displayed on the screen, and the user tilts the remote control to choose the desired option. This idea is related to Enns et al.'s idea [11] to move the interface to the screen instead of the remote control. It reduces the number of buttons, and focuses the user's attention on the TV screen rather than the remote control. In the following, we refer to this button as the "multi-functional button".
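As a sketch, the multi-functional button can be modelled as a context-to-options lookup, so that the button itself carries no fixed meaning. The context names and options below are hypothetical examples, not the prototype's actual implementation:

```python
# Hypothetical mapping from interface context to on-screen soft-key options.
SOFT_KEY_OPTIONS = {
    "watching_tv":  ["Teletext", "Subtitles", "EPG"],
    "new_message":  ["Read", "Reply", "Message menu"],
    "movie_player": ["Go to main menu", "Sound on/off"],
}

def on_soft_key_pressed(context):
    """Return the options to display on screen while the button is held."""
    return SOFT_KEY_OPTIONS.get(context, [])

def on_soft_key_released(context, tilt_index):
    """Select the option the user tilted to before releasing the button."""
    options = on_soft_key_pressed(context)
    if 0 <= tilt_index < len(options):
        return options[tilt_index]
    return None  # released without a valid selection
```

Adding a new situation then only means adding an entry to the table, which is what keeps the physical remote control simple.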

Prototype E was intended to be an improved version of prototype C from the previous prototype versions. Prototype C had two oblong buttons and the whole remote control was quite rounded and oblong; this kind of shape was used and developed further for prototype E. The prototype had the same buttons as prototype D. The OK-button was placed below the joystick, the multi-functional button below the OK-button, and the button above the joystick activated the accelerometer. The button further above was intended to be used as a "back" button, but was never implemented.

User tests

The applications used in the previous user tests were further developed and used to test the new prototypes. The conventional TV simulation was expanded with a new interface for selecting a channel using tilt input. Instead of just showing the number of the channel that corresponds to the current tilt direction, a visual representation of the available channels was displayed, with the current selection highlighted (see Fig. 5.9). This principle was also used to visualize the available options that could be accessed through the multi-functional button (see Fig. 5.10).

Another feature added in the simulation for this test was that the user could increase and decrease the sound volume. This could be done either by moving the joystick left or right, or by holding the OK-button and tilting the remote control left or right. Moving the joystick up or down, or holding the OK-button and tilting the remote up or down, changed the channel up or down, just as in the previous version. A volume bar was displayed to indicate the current volume level, as can be seen in Fig. 5.11.
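This flexibility can be sketched as two input methods mapping onto the same logical actions. The function and direction names below are illustrative assumptions, not the actual simulation code:

```python
def interpret_input(joystick=None, ok_held=False, tilt=None):
    """Translate either joystick or hold-and-tilt input into a TV action.

    Both input methods share one direction-to-action mapping; tilt input
    only counts while the OK-button is held.
    """
    direction = joystick if joystick is not None else (tilt if ok_held else None)
    return {
        "up": "channel_up", "down": "channel_down",
        "left": "volume_down", "right": "volume_up",
    }.get(direction)
```

Keeping one mapping for both methods guarantees that the two ways of interacting never drift apart in behaviour.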

In the interactive commercial scenario, the indication of the initial position in the menu (which was a problem in the previous version) was modified to give the user better feedback. A curly brace symbol was shown next to the current position (see Fig. 5.12).

Figure 5.9: The new channel selection with improved feedback.

Figure 5.10: An example of features that can be accessed through the multi-functional button.

Figure 5.11: The volume bar at the bottom of the screen.

Figure 5.12: The menu interface with the added curly braces (marked out with an oval-shaped symbol).

Additionally, one part of the interactive commercial that could not be used with the remote control in the previous version was modified to function with it. The new part was a short informative movie that the user could interact with by navigating through different parts of the movie. This movie was originally designed for the web, and was modified so it could be navigated using the remote control. The basic way to navigate was to hold the button that activates the accelerometer (i.e., the button that was used for selecting channels) and tilt the remote control. The user could move between icons/buttons that activated play/pause, previous chapter and next chapter. In addition to this, the user could also navigate in a timeline to "jump" between the chapters of the movie. In this case, a "navigation box" moves between different locations on the timeline, corresponding to the different parts of the movie. The basic interaction sequence for this was to hold the activation button, tilt the remote control until the desired chapter is indicated, release the button and then press the OK-button to select the desired chapter. To move between the timeline and the buttons, the user simply holds the activation button and tilts up or down. Finally, by moving up from the timeline, the user could access a "Go to main menu" option and a "Sound on/off" option. The user moves between these options in the same way as between the icons at the bottom of the movie. So, there are three different "levels" where the user can interact: the bottom level (play/pause, next chapter and previous chapter), the middle level (the timeline) and the top level ("Go to main menu" and "Sound on/off"). A screenshot of the interface can be seen in Fig. 5.13.
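The three interaction levels can be sketched as a small navigation model, where holding the activation button and tilting moves the focus between and within levels. The class and item names below are our assumptions, not the actual implementation:

```python
# Hypothetical model of the three interaction levels in the movie interface.
LEVELS = [
    ["play_pause", "prev_chapter", "next_chapter"],  # bottom level
    ["chapter_1", "chapter_2", "chapter_3"],         # middle level (timeline)
    ["go_to_main_menu", "sound_on_off"],             # top level
]

class MovieNavigator:
    def __init__(self):
        self.level = 0  # start at the bottom level
        self.index = 0

    def tilt(self, direction):
        """Move focus while the activation button is held."""
        if direction == "up" and self.level < len(LEVELS) - 1:
            self.level += 1
            self.index = 0
        elif direction == "down" and self.level > 0:
            self.level -= 1
            self.index = 0
        elif direction == "right":
            self.index = min(self.index + 1, len(LEVELS[self.level]) - 1)
        elif direction == "left":
            self.index = max(self.index - 1, 0)

    def current(self):
        return LEVELS[self.level][self.index]
```

Making this state explicit also shows why feedback matters: the interface must continuously display `level` and `index`, or the user loses track of exactly the things the tests later found problematic.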

Figure 5.13: Screenshot of the interactive commercial movie scenario. Here, the user is navigating in the timeline by moving the navigation box to the desired part of the movie.

A new scenario was created for the conventional TV mode. It was assumed that the TV had a feature where the user could send and receive short text or multimedia messages, sent to the TV via SMS or MMS. The test scenario was that the user received a message while watching TV. A notification was shown in the top left corner, and the user could choose to either read the message, or continue watching TV and read it later. If the user chose to read the message, s/he could also choose to reply, or go to the "message menu". To make selections in this scenario, the multi-functional button was used. When the user pressed the button, the available options were presented. To indicate that the multi-functional button was to be used, a blue graphical circle, intended to refer to the multi-functional button (which was blue), was shown. A screenshot of when a message is received is shown in Fig. 5.14.

Figure 5.14: Screenshot of the message scenario.

These tests were conducted with six test persons, four women and two men. The persons were approximately 20-35 years old with various backgrounds, all with average to high computer experience.

The modified channel selection function worked better than the previous version. One of the test persons had also tested the previous version, and said that the graphical visualisation made it easier to take in the whole picture of what s/he was doing. Overall, the idea of the multi-functional button was appreciated, although, due to implementation problems, it was somewhat cumbersome to use at times.

In the interactive commercial scenario, the use of the curly brace symbols made it somewhat easier to know the starting position. However, some of the subjects had trouble knowing their current position in the informative movie. In particular, they had problems knowing which "level" they were at, i.e., whether they were controlling the icons at the bottom, the timeline or the icons at the top. Generally, knowing exactly where they were was a problem.

In the message scenario, it was not clear that the multi-functional button should be used. Some users instead pressed the button that activates the accelerometer for channel selection. The users simply did not associate the blue circle on the screen with the multi-functional button on the remote control.

One person commented that the button on the back of prototype D could be pressed by mistake, but most of the subjects liked the idea of placing the button on the back. Using the joystick to change the channel and volume worked well. Generally, the shape of prototype D was appreciated, but it was perceived to be somewhat big. The shape of prototype E was also appreciated, but just like prototype D it was a bit big and clumsy, and difficult to use as a pointing device.

Analysis

Overall, the effectiveness of the remote controls has improved since the previous versions, mostly because of adjustments in the implementation of the software. However, there are still some problems with operating the remote controls effectively. Knowing exactly where one is in the interactive commercial, and understanding what actions are possible, seems to be the biggest problem. This also relates to learnability aspects, as understanding where you are and what is possible to do in different situations is usually something the user learns over time. However, once again it is difficult to know how much time this learning process should take, since there is no real frame of reference for it.

The problem that some users had knowing which of the three levels was active in the interactive commercial movie can be related to Donald Norman's terms "the gulf of execution" and "the gulf of evaluation" [30]. The gulf of execution refers to the gap between the user's goals/intentions and how well the system provides the information necessary to perform the intended actions. The gulf of evaluation refers to how well the system communicates whether the action was successful or not, leading the user closer to his or her goal. In the interface for the interactive movie, there was a gulf of execution because some users did not always know which actions could be performed at a certain time (e.g., the user did not know if s/he was at the level where s/he could navigate to the "next chapter" icon). There was also a gulf of evaluation, for example when the user had moved to the top level without knowing that s/he actually was there.

One problem that occurred in the message scenario was that some users did not immediately associate the graphic blue circle in the interface with the physical blue button on the remote control. This relates to "mapping", a design principle originally formulated by Norman [30]. In this case, mapping means the relation between a physical object (i.e., the button on the remote control) and its visual representation in the interface (i.e., the blue circle). Hence, the user tests clearly indicated that the mapping between these two was not strong enough.

Conclusions

In future versions, more visual feedback must be given to the user, so that s/he will know exactly where in the interface s/he is, and what can be accessed and performed in the current state. This is something that will be improved generally throughout the interfaces in both of the user scenarios. More specifically, the user's current position in the interactive commercial movie will be marked out even more distinctively. In particular, the currently "active" level needs to be more clearly highlighted. When interacting with a mouse on a computer, these navigational issues are not as prominent, because the user "is" always where the mouse cursor is.

The mapping between the blue circle in the graphical user interface and the multi-functional button on the remote control also needs to be stronger. This can be done by visualizing a picture of the whole remote control, with the multi-functional button clearly highlighted. This way, the button is placed in its context rather than shown out of context, which should make the user associate it better with the physical button on the remote control. The mapping could also be strengthened by combining the colour with an icon. This solution is used on the Xbox game controller, for example.

Regarding the shape and look of the remote control prototypes, the basic shape of prototype D will be kept, but slightly altered to make it smaller and more comfortable to hold. The button on the back of the remote control will also be kept, but it needs to be smaller and less easy to press by mistake. Prototype E must be much smaller, and it will also be made more rounded and less clumsy.

5.3.5 Prototypes, version 3 (final version)

Due to time restrictions, this was the final version of the remote control prototypes. Two prototypes were made (see Fig. 5.15). Because these were the final prototypes, more time was spent on making them look like a "final product" than in the earlier versions, where the remote controls looked much like prototypes. However, the same material, which is commonly used for creating prototypes, was used, and therefore the "prototype feeling" remains. Still, a bigger effort was made to create more appealing aesthetics and to apply a more comfortable texture. Both prototypes were also painted black, to better resemble a real remote control. Furthermore, in the previous versions, all the cords from the sensors and hardware that connected the remote controls to the computer were visible to the user. In this version, the cords were bundled into one bigger cord, creating the illusion that there was only a single cord between the remote control and the computer. The Interfacekit and the additional hardware were hidden in a box, so that one cord went from the remote control to the box, and one cord from the box to the computer. All this was done to hide the technology from the user as much as possible, and to enhance the illusion of a "real" remote control.

Figure 5.15: Prototype F (top left and top right) and prototype G (bottom left and bottom right).

Prototype F was a further development of prototype D from the previous versions of the prototypes. It was made thinner and slightly smaller overall than prototype D, and with a more ergonomic grip. Otherwise, it was equipped with the same buttons, joystick and accelerometer as prototype D. More effort was made to make the button on the back of the remote control more discrete and less easy to press by mistake.

Prototype G was an updated version of prototype E, with a considerably smaller size and a more "slender" shape. No joystick was used on this prototype, because we wanted to investigate how well it would work with only tilt input. One idea that emerged from the previous user tests was to place the accelerometer activation button on the tip of the remote control (as can be seen in Fig. 5.15), to create the feeling of pressing a trigger and pointing the remote control. The additional two buttons are placed on the top side of the remote control. The uppermost button is the OK-button, and the other one is the multi-functional button.

In these prototypes, different colours were used for the different buttons, to make them more distinguishable. The OK-button on both prototypes was made green, because green is the conventional colour for "OK" or "Confirm", and a naturally positive colour. On prototype F, the multi-functional button was assigned a blue colour, like on prototype E. Blue is a neutral colour (see Section 5.2.6 regarding design and colour), and therefore suited for a button that is intended to have different functionality in different contexts. On prototype G, a yellow colour was used for the multi-functional button to create a strong contrast to the OK-button and the remote control itself, and to test a different colour for the button. The accelerometer activation button on prototype F was made black, since it is not visible to the user anyway, and to make it blend in with the remote control. The idea with this button is that the user locates it naturally with the index finger rather than perceiving it visually by looking at the remote control. On prototype G, a brown colour, which is warm and somewhat neutral, was chosen for this button.

User tests

Further modifications to the existing scenarios were made. The general graphic "look" of the user interface for the television scenario was adjusted to make it more aesthetically appealing. Apart from that, two bigger modifications were made in the interfaces. In the television scenario, the "alert box" that indicates that a new message has been received was changed. In the previous tests, the users did not find the mapping between the graphic representation of the multi-functional button and the actual button on the remote control prototype intuitive. In the new version of the interface, a picture of the whole remote control was displayed, with the multi-functional button clearly marked out (see Fig. 5.16). More screenshots of the final interface can be seen in Appendix A.

Furthermore, an instructive picture and an introduction movie for prototype G were made to show the user how the remote control works. In previous tests, the subjects were instructed verbally, but in these tests only the picture and the movie were shown before the tests started. The instructive picture can be seen in Appendix B.

The second change was made in the interactive commercial scenario. Previous user tests showed that the users had difficulties knowing which level they were currently at. To make it clearer, a green background was added at the current level, except for the middle level, where the navigation box was considered sufficient to indicate that the user was at the timeline level (see Fig. 5.17).

The new interfaces for the scenarios were tested with four persons, all students aged 20-25. One test person was female, the other three male. Only prototype G was used in the tests.

Figure 5.16: Screenshot of the "New message" indicator in the top left corner. The yellow circle around the multi-functional button "pulsates" by continuously increasing and decreasing in size, to catch the user's attention.

Figure 5.17: In this screenshot of the interactive commercial movie, the user is currently interacting at the top level, which is indicated by the green background.

Regarding the shape of the prototype, all subjects thought that prototype G was ergonomic and comfortable to use. They also thought that the buttons were overall placed in a good way, except for one person who thought that the accelerometer activation button (the brown button) could have been placed lower and more smoothly integrated with the shape of the remote control.

In the message scenario, the mapping between the graphical representation of the multi-functional button and the physical button worked well. The subjects had no problems navigating, and intuitively understood which button to use.

In the interactive commercial movie, it was somewhat easier for the users to know which level was active. However, it was a bit unclear which icon within the level was active. But overall, the users could navigate and interact more intuitively than in the earlier versions.

Analysis

The mapping between the graphical representation of the multi-functional button and the physical button in the message scenario seemed to work much better than in the previous tests. Placing the button in its context by showing the whole remote control with the button clearly marked, instead of just showing the button as in the previous version, clearly made it easier for the users to associate the graphical representation with the actual button.

The addition of the backgrounds to highlight the different levels seemed to improve the visibility of the interactive commercial movie's interface. The users could generally navigate and interact with the movie more easily and efficiently.

Conclusions

Showing the whole remote control with the multi-functional button marked out is an idea that could be used in other situations as well. It seems to be a very good way to help the user intuitively know what to do. It is intended to be used as a "help" function that the user can turn off after becoming more familiar with the interface and more secure in his or her actions.

Regarding the interactive commercial, the added highlighting of the current level made navigation a bit easier for the users. However, the visibility should be further improved by making the current position in the interface "stand out" more. The appearance of the backgrounds can also be refined to integrate it better with the visual character of the interface.


Chapter 6

Conclusions

To begin with, we think that the interaction techniques presented in this master's thesis project can be used to enhance the user experience in a TV viewing context. Our tests with the prototypes that used tilt input showed that the users found the technique exciting and fun to use. We think that the techniques are well suited as a more enjoyable alternative to a conventional remote control, especially for entertainment purposes.

Moreover, the concept of using "soft keys" to access different functions and features depending on the situation also proved to be a good idea. The users appreciated the idea and found it, after a short learning period, to be an easy way to interact. This concept makes it possible to reduce the number of buttons on the remote control, while still allowing the user to access many different functions in an easy way. The concept also makes it possible to maintain a simple and uncomplicated "look" of the remote control.

This project has made us understand the enormous importance of involving the user at all stages of the design process and of carefully evaluating each design iteration. The user-centered approach to interaction design must not be underestimated. The user tests conducted in each design iteration really drove the prototype development forward. The users that participated in the tests even came up with whole new ideas, could see things from another angle, and detected things that otherwise might have passed us by. We have also learned that it is important to create a system image that is easy for the users to understand. We think that we, after our design process, are close to achieving an easily understandable system image. We believe that the users and their needs are, at least as much as the technology, the driving force behind product development.

Finally, another thing that was emphasized during this project is how time-consuming the product development process really is. In our case, designing a remote control that requires some thinking "outside the box" has been a big challenge. We think that the "trial and error" approach has been suitable for us to generate design solutions.


6.1 Limitations

The time aspect has undoubtedly been the most significant limitation. More time to evaluate each design iteration, and of course more time to perform more iterations, would have been very valuable. With more time, we could also have used other evaluation methods, for instance the DECIDE or PACT frameworks, which provide very useful guidelines to aid the designer in the evaluation process.

The available resources were another limitation in this project. With access to and knowledge about other technologies, such as a speech recognizer or equipment for gesture recognition, we could have explored other possibilities for generating input, and evaluated and compared the different technologies. This also has to do with the time aspect, since we could have improved our knowledge about these technologies with more time on our hands.

Because we used styrofoam as prototype material, it was hard to create the feeling of a "real" remote control and to avoid the "prototype feeling". A plastic material would probably have enhanced the overall impression of the prototypes. Hence, the prototype material was a limitation, but so were the production methods used to create the prototypes. The prototypes were created by hand, which required much time and effort. If the prototypes had been created in the same way as commercial remote controls are produced, the "prototype feeling" would of course have been reduced significantly. The fact that our prototypes were connected to a computer via USB, and therefore not cordless like ordinary remote controls, further enhanced the "prototype feeling" and weakened the overall impression.

6.2 Future work

In the future, if more time and resources were available, the other interaction techniques described in this paper would be implemented and tested. Designing a remote control that uses a combination of different interaction techniques could further enhance the user experience. With more time, more iterations could be done to approach a final product design. Also, the prototype needs to be cordless. To achieve this, we would need sensors that are compatible with communication protocols such as Bluetooth or infrared technology. Alternatively, we could implement our own communication protocol stack for wireless USB, but that is a master's thesis project in itself.

Furthermore, the final remote control would probably be equipped with more buttons than we have used. We have only implemented functionality for the smallest number of buttons needed to test what we wanted. One idea is to "hide" additional buttons that can be used by users who prefer to interact in a more conventional way. By doing this, the transition to the new interaction techniques will be less bothersome for the more conservative users.

In addition to refining the remote control, more functionality in both of the scenarios needs to be implemented. For example, an idea that was not implemented in the channel selection interface, due to the time constraints of this project, is to stream live video content instead of just showing the channel logos, i.e., showing what is currently being broadcast on the channels. Another future addition is to further develop the idea of accessing different functions on the screen with a soft key and a tilt movement in the desired direction. The idea is called the "atom menu". When the user presses a soft key, an interface similar to a pie menu (i.e., the options are placed in a circle around the currently selected item) is shown. The options resemble the electrons around an atom nucleus, hence the name. When an option is selected, a submenu can appear around that option, creating a nested "atom structure". The idea is illustrated in Fig. 6.1. This idea is another way to visualize an ordinary hierarchical menu structure, but the advantage is that every option in the next level is always one "step" away, unlike in linear menus, for example. Applying Fitts's law to this menu structure suggests that it has good potential.

Figure 6.1: Conceptual sketch of the "atom menu" interface suggestion. In the right picture, the "Sound" submenu has been selected.
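This intuition can be made concrete with Fitts's law in its Shannon formulation, ID = log2(D/W + 1), where D is the distance to the target and W its width: in the atom menu every next-level option lies roughly one radius away, while in a linear menu the distance grows with an item's position. The distances and widths below are purely illustrative assumptions, not measurements from the prototype:

```python
import math

def fitts_id(distance, width):
    """Fitts's index of difficulty (Shannon formulation), in bits."""
    return math.log2(distance / width + 1)

# Illustrative numbers: targets 40 px wide, atom-menu options all 80 px
# from the centre, linear-menu items stacked 40 px apart.
atom_ids = [fitts_id(80, 40) for _ in range(6)]
linear_ids = [fitts_id(40 * (i + 1), 40) for i in range(6)]

# Every atom-menu target has the same, low difficulty, while linear-menu
# difficulty grows with the item's position in the list.
```

Under these assumptions the worst-case linear item is noticeably harder to acquire than any atom-menu item, which is the "one step away" advantage stated above.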


Chapter 7

Acknowledgements

We thank our friends and families who supported and believed in us throughout our work.

We would also like to thank our supervisors Håkan Gulliksson and David Eriksson for creative feedback during our work. Many thanks to the whole staff at North Kingdom for contributing to a creative and inspiring working environment.

Special thanks go to Leena Eronen, A.C. Roibas and Anika Schweda for valuable help in finding information sources, and to Sam Bucolo, Neil Enns, Ian Oakley and Daniel Wigdor for kindly giving their permission to use pictures from their projects.

Last but not least, many thanks to all the people who took time to participate in our user tests. Your help is much appreciated.



Appendix A

Additional screenshots

Figure A.1: Screenshot of the interface when the user presses the accelerometer activation button to select a channel.


Figure A.2: Screenshot of the interface when the user presses the multi-functional button to access miscellaneous functions.

Figure A.3: Screenshot of the menu that appears when the user chooses to read an incoming message.


Appendix B

The instructive picture

Figure B.1: The instructive picture that was shown at the beginning of the last user tests.
