
In D. de Waard, K.A. Brookhuis, A. Toffetti, A. Stuiver, C. Weikert, D. Coelho, D. Manzey, A.B. Ünal, S. Röttger, and N. Merat (Eds.) (2016). Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2015 Annual Conference. ISSN 2333-4959 (online). Available from http://hfes-europe.org

Gestures while driving: A guessability approach for a surface gestures taxonomy for in-vehicle indirect interaction

Joana Vieira¹, Rosane Sampaio¹, Rafael Nascimento², João Pedro Ferreira², Sara Machado³, Nuno Ribeiro³, Estevão Silva³, & Sandra Mouta¹

¹Centro de Computação Gráfica, ²Universidade do Minho, ³BOSCH Car Multimedia, SA
Portugal

Abstract

Surface gesture interaction in the automotive context is still exploratory and lacks guidelines. To address this issue, a guessability study was developed to associate end-user gestures with functionalities of an in-vehicle HMI system. Interaction with the system was performed indirectly, with the use of surface gestures. Participants were presented with instructions, followed by a static interface image of a menu (e.g., a music list or contact list), and prompted to create a gesture that would allow them to respond to the instruction (e.g., "Select previous" or "Make a call"). Results demonstrated that the gestures proposed in the concept phase were simple and familiar, and allowed the creation of a gesture taxonomy for adjustment, acceptance, refusal, and navigation actions. The guessability methodology proved useful and demonstrated how user-centered design can improve the usability of an interaction even at an advanced stage of the design and development process.

Introduction

Surface gesture interaction bloomed with the growing applications of smartphone and tablet technology. With the commercialization of motion-sensitive controllers, gesture interaction has slowly become a must-have in current devices. Under the umbrella of "natural" gestures, some interactions are defined without prior testing, or even in disregard of interaction design standards. In some devices, users trigger functions unwittingly, because interaction with such systems lacks general consistency (Norman, 2010).

Given the need to develop gestures grounded in solid usability guidelines, guessability or user-elicitation studies are becoming more frequent along the development process of interaction devices.

Guessability or user-elicitation studies

Guessability is the cost associated with using a new interface for the first time, and it can be translated into the time, errors, or effort involved in performing a given task (Moyes & Jordan, 1993). As an example, a fire extinguisher, which will only be used in an emergency context, should demand little time and effort in its usage, which means it should have high guessability (Jordan, Draper, MacFarlane, & McNulty, 1991). Systems should aim for high guessability so that a user's first interaction is immediately successful without consulting support documentation.

User-elicitation or guessability methodologies have mainly been used to gather useful information for the design phase of gesture-based interfaces. In a guessability study, participants are prompted to create a gesture for a particular action, with no training or feedback given. It is assumed that all input is accepted by the interface. Lee and Wong (2015) conducted a guessability study for gesture interaction with public information displays using augmented reality technologies. The goal was to identify intuitive gestures for interacting with the displays, asking users to define and act out the gestures they considered most adequate for the interaction tasks. A sequence of animations was shown on the screen, and participants had to act out a gesture that would perform the task presented in the animation.

In another study, Vatavu (2012) conducted an experiment to elicit user input for free-hand gestures for frequent interactions with a TV, resulting in a set of free-hand TV controls. Instructions, usually named referents, were presented to the participants with a short description and a video demonstration of the referent's effect (e.g., the volume going up). Participants were then asked to propose a gesture that would trigger that effect. After an agreement analysis, the results allowed twelve gestures to be proposed for the referents, along with several guidelines for further application in smart environments.

Similar procedures were used for surface computing (Wobbrock, Morris, & Wilson, 2009) and mobile interaction (Ruiz, Li, & Lank, 2011). In all cases, very simple interfaces were used, either static or animated, explaining or demonstrating the expected effect of a given interaction. With this methodology, the end user informs the designers about the most immediate and intuitive gesture for a given action. The results are useful as they allow building general taxonomies, or specific gestures when a high agreement rate is observed among participants.

Context and goals of current study

The current guessability study was developed in the context of the HMIEXCEL project, a project of national interest co-funded by the Portuguese government (COMPETE; FEDER) and the European Commission, based on the co-promotion between Bosch Car Multimedia Portugal and the University of Minho. HMIEXCEL intended to develop advanced in-vehicle multimedia solutions.

One of the concepts developed for the in-vehicle infotainment system interface involved indirect interaction with a touchscreen in the central stack area. The basis of the concept is that, by using a limited set of gestures, a driver can interact with this system without taking the eyes off the road. The system consists of two displays presented vertically, one atop the other in the central stack area (Figure 1).


Figure 1. In-vehicle interaction concept: a top display is used to present information and a bottom display receives user inputs.

The top display presented images and animations like a regular screen, while the bottom display worked as a touchpad and received all user input. The interaction was made on the bottom display using touch input, which will henceforth be referred to as surface gesture.

The testing goals included finding proximity between the surface gestures of different users, creating a taxonomy for the selected gestures, and identifying potential design concerns in order to improve the efficiency, effectiveness, and end-user satisfaction of the interface. In terms of work process, this meant analysing the recorded videos, searching for recurrent patterns, building a general taxonomy that would translate this recurrence and, finally, suggesting surface gestures for particular actions.

Gestures were elicited from participants by presenting a static instruction to perform an action – a referent – followed by a picture of the HMI system's interface. The participants proposed a surface gesture that would perform that action, according to the interface.

Method

Participants and apparatus

Thirteen participants holding a Portuguese driver's licence were recruited (8 male, 5 female). All were technology savvy, with ages ranging from 23 to 45 years (M = 29, SD = 6). All participants had normal or corrected-to-normal vision and no upper-body impairments limiting the interaction with the device.

Participants were seated in a Play Seat Evolution gaming seat equipped with a Logitech G27 racing kit (steering wheel and three pedals). There was no driving task involved, but the set-up gave participants a range of movements similar to that of the driving position.


Two displays were placed to the right of the participant, at a comfortable distance for interaction. Both had a visible and functional area of 9.7". The bottom display was an iPad Air (2048×1536, 264 ppi) and was the only one with a touch-sensitive surface. The displays were placed vertically on top of each other, with the bottom display inclined 60° relative to the top display (Figure 2).

Figure 2. Set-up used for the study: (a) view from the camera and (b) animated capture from the bottom display's screen using TUIOpad.

All interactions were recorded with a video camera placed behind the participant. The bottom display's screen capture was registered both with video and with an open-source mobile application that records touch data. The iPad screen was recorded using QuickTime (Apple, 2015) running on an external computer with OS X Yosemite (Apple, 2014). Data related to surface gesture interaction were collected with TUIOpad (Akten & Kaltenbrunner, 2010), an open-source app for iOS used to capture multi-touch data and send it to a client via a TUIO-based communication protocol. An external computer, running a custom-made application, received data from the TUIO server and saved it to a file.

The bottom display registered the number of fingers used and the direction and location of each surface gesture. The referents on the top display were presented using E-Prime (Psychology Software Tools, Inc., 2012).
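The custom receiving application is not detailed here, but the data path can be illustrated. The following is a minimal sketch, not the study's actual software, of a client that logs the TUIO cursor events TUIOpad emits; it assumes the python-osc package, the standard TUIO 1.1 cursor profile (OSC address /tuio/2Dcur, default port 3333), and a hypothetical output file name.

```python
# Hedged sketch of a TUIO touch logger (not the study's actual application).
# TUIO 1.1 "set" messages carry: session id, x, y, x-velocity, y-velocity,
# and motion acceleration for each active touch point.
import csv
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

LOG_PATH = "touch_log.csv"  # hypothetical output file

def handle_2dcur(address, *args):
    # Only "set" messages describe touch positions; "alive" and "fseq"
    # messages manage session ids and frame sequencing.
    if args and args[0] == "set":
        session_id, x, y, vx, vy, accel = args[1:7]
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow([time.time(), session_id, x, y, vx, vy, accel])

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", handle_2dcur)

# Listen on the default TUIO port until interrupted.
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()
```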

Referents

The referents are the commands or actions that the participant is instructed to perform. Twenty-one referents were selected. They were presented as written instructions followed by static images of the graphic interface of the interaction concept, such as a music or contact list. Figure 3 depicts three examples of the twenty-one referents used, with their respective graphical user interfaces.


Figure 3. Three examples of referents. From left to right: a) Go to "Profile"; b) "Previous"; c) Accept a call.

The selection of the referents followed the stakeholders' priorities for the interaction concept, focusing mostly on general navigation actions and on navigation through the phone and music menus. The study also served as a test of the visual interface created up to that point, as the context allowed seeing whether it was suggestive enough or needed further indications. As used in this study, the interface had a permanent black background, with a six-item main menu arranged in a circle. Once inside a menu, the navigation was mainly list oriented, presenting items vertically. No explicit cues suggesting directions of movement were presented.

Some referents concerned generic navigation in the HMI system, while others were specific functionalities of the "Phone" and "Music" sub-menus. All referents used in the experimental session are summarized in Table 1.

Table 1. Twenty-one referents used in the guessability study, grouped by type of functionality.

General HMI Navigation             Phone Menu          Music Menu
Go back to Main Menu               Show contact info   Next
Select an option                   End a call          Previous
After selected, choose an option   Send an SMS         Increase volume
Back to previous menu              Make a call         Decrease volume
Go inside a sub-menu               Silence a call
Go to "Profile"                    Accept a call
To accept or agree                 Open an SMS
To reject or to cancel             Cancel a call
                                   Reject a call


Procedure

All participants read and signed an informed consent form and filled in a questionnaire about their previous use of technology. Participants were seated to the left of both displays. The instructions stated that written commands would be presented on the top display, and that participants should perform a gesture to execute each action. They were also informed that the input to the system on their right was indirect, meaning that there were no buttons or touch areas to be looked at.

After a brief period of training with different referents, the first of two similar blocks was presented. The referents inside the blocks were organized in sets of two images: the first image was visible on the top display for 3 seconds and showed a black canvas with the written referent (e.g., "Previous"); the following image was a static image of the HMI graphical interface and was visible for 10 seconds, during which the participant should perform one surface gesture on the bottom display. This gesture should allow the action described by the referent to be executed. No feedback was given after each gesture, and the experiment advanced to the next referent. The bottom display remained blank.

The experiment was divided into two blocks separated by one interval, managed by the participant. In each block, the twenty-one referents were presented twice. Thus, each participant performed all prompted gestures four times, in a randomized order. This organization gave information about the evolution and internal consistency of the gestures the participants made. The entire procedure lasted approximately 30 minutes.
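The block and repetition structure can be summarized in script form. The sketch below is illustrative only (the actual presentation ran in E-Prime); the present function is a stand-in for stimulus display, and the referent list is abbreviated.

```python
# Illustrative sketch of the trial structure: two blocks, each presenting
# the 21 referents twice in random order, i.e. 84 gestures per participant.
import random
import time

REFERENTS = ["Select an option", "Previous", "Make a call"]  # abbreviated; the full set has 21

def present(label, duration):
    """Stand-in for stimulus presentation (handled by E-Prime in the study)."""
    print(label)
    time.sleep(duration)

def build_block(referents, repetitions=2):
    trials = list(referents) * repetitions  # each referent appears twice per block
    random.shuffle(trials)                  # randomized within the block
    return trials

for block in (build_block(REFERENTS) for _ in range(2)):
    for referent in block:
        present(f"Referent: {referent}", duration=3.0)             # written command, 3 s
        present(f"Interface image for: {referent}", duration=10.0) # static HMI image; gesture window
        # no feedback is given; the experiment advances to the next referent
    # a participant-managed break between the two blocks would go here
```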

Guessability and agreement measures

Two measures regularly used in guessability studies were also applied in the current study: guessability and agreement.

To analyze the guessability of a referent, Wobbrock, Aung, Rothrock, and Myers (2005) proposed a guessability measure (G), which was here adapted for surface gestures:

$$G = \frac{\sum_{sg \in SG} |P_{sg}|}{|P|} \times 100\%$$

where P is the set of proposed gestures for all referents, SG is the set of all surface gestures made for a given referent, and $P_{sg}$ is the set of proposed surface gestures in set SG using gesture sg. For instance, if the "one finger tap" gesture was used 250 times, of which 38 were for the "Select an option" referent, the guessability measure for that referent would be G = 38/250 × 100% = 15.2%. This means the referent was able to accommodate 15.2% of the "one finger tap" gestures proposed by the participants.
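As a cross-check of this worked example, the measure is straightforward to compute once the coded data are available. The sketch below assumes the data take the form of (referent, gesture) pairs and uses illustrative counts mirroring the example.

```python
# Sketch of the adapted guessability measure G for one gesture/referent pair.
def guessability(proposals, gesture, referent):
    """proposals: iterable of (referent, gesture) pairs coded from the videos."""
    uses = [r for r, g in proposals if g == gesture]  # all uses of this gesture (P)
    accommodated = uses.count(referent)               # uses going to this referent (P_sg)
    return 100.0 * accommodated / len(uses)

# Illustrative data mirroring the worked example above.
proposals = ([("Select an option", "one finger tap")] * 38
             + [("some other referent", "one finger tap")] * 212)

print(guessability(proposals, "one finger tap", "Select an option"))  # -> 15.2
```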


The agreement score (A), also proposed by Wobbrock and colleagues (2005), reflects the degree of consensus among participants and is defined by:

$$A = \frac{\sum_{r \in R} \sum_{P_i \subseteq P_r} \left( \frac{|P_i|}{|P_r|} \right)^2}{|R|}$$

where r is a referent in the set of all referents R, $P_r$ is the set of proposed gestures for referent r, and $P_i$ is a subset of identical gestures from $P_r$. For example, the referent "Back to Main Menu" had 8 different gestures proposed (39 proposals in total), with 1, 17, 1, 4, 4, 1, 4 and 7 occurrences. These were considered distinct groups, and the calculation was as follows, resulting in a score of 0.26:

$$A_{\text{Back to Main Menu}} = \left(\tfrac{1}{39}\right)^2 + \left(\tfrac{17}{39}\right)^2 + \left(\tfrac{1}{39}\right)^2 + \left(\tfrac{4}{39}\right)^2 + \left(\tfrac{4}{39}\right)^2 + \left(\tfrac{1}{39}\right)^2 + \left(\tfrac{4}{39}\right)^2 + \left(\tfrac{7}{39}\right)^2 = 0.26$$

The agreement score ranges between $|P_r|^{-1}$ and 1 (absolute agreement), and it is higher when a small number of different gestures is proposed.
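The same arithmetic can be scripted in a few lines. This is a minimal sketch reproducing the "Back to Main Menu" example, with the per-referent score and the overall average over referents kept as separate functions.

```python
# Per-referent agreement: sum of squared proportions of identical-gesture groups.
def agreement(group_sizes):
    total = sum(group_sizes)  # |P_r|, all proposals for the referent
    return sum((n / total) ** 2 for n in group_sizes)

# Overall score A: the per-referent agreement averaged over all referents R.
def overall_agreement(groups_by_referent):
    return sum(agreement(g) for g in groups_by_referent.values()) / len(groups_by_referent)

# "Back to Main Menu": eight gesture groups with 1, 17, 1, 4, 4, 1, 4 and 7
# occurrences out of 39 proposals.
print(round(agreement([1, 17, 1, 4, 4, 1, 4, 7]), 2))  # -> 0.26
```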

Results

The technology use questionnaire showed that most participants used Android systems (54%), while 38% interacted daily with both Android and iOS systems and only 8% used iOS exclusively. All were regular smartphone users and interacted daily with a laptop or desktop. Most also used tablets (77%), but inside a car most of their interaction was indirect, using buttons (62%).

A total of 1092 gestures were captured (13 participants × 21 referents × 4 repetitions). For the analysis, all videos were replayed, with the camera recording and the bottom display's screen capture played side by side. All gestures were coded according to the following criteria: number of fingers (one, two, three); type of touch (tap, long press, slide, scroll, pinch); and direction (left to right, right to left, diagonal). Of the various surface gestures collected, Table 2 depicts and describes the gestures performed most often.

The frequency, guessability and agreement of the gestures were analyzed.
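The coding criteria map naturally onto a small record type. As a rough illustration, with hypothetical records, the frequency counts behind Table 2 and Figure 4 could be tallied as follows.

```python
# Hypothetical sketch of the gesture coding scheme described above.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedGesture:
    fingers: int     # 1, 2 or 3
    touch: str       # "tap", "long press", "slide", "scroll" or "pinch"
    direction: str   # e.g. "left to right", "right to left", "diagonal"; "" for taps

# Example records (illustrative, not the study's data).
coded = [
    CodedGesture(1, "tap", ""),
    CodedGesture(1, "slide", "left to right"),
    CodedGesture(1, "tap", ""),
]

# Frequency of each gesture category across all 1092 recordings would be
# obtained the same way.
print(Counter(coded).most_common())
```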


Table 2. Description of the surface gestures performed most frequently.

One Finger Compass Rose: index finger slides in one of six possible directions, starting at the center of the display.
One Finger Tap: index finger taps once, regardless of spatial coordinates.
One Finger Double Tap: index finger taps twice, regardless of spatial coordinates.
One Finger Long Press: index finger presses the display for a longer period of time, regardless of spatial coordinates.
One Finger Slide Down-Up: index finger slides upwards.
One Finger Slide Up-Down: index finger slides downwards.
One Finger Slide Right-Left: index finger slides from right to left.
One Finger Slide Left-Right: index finger slides from left to right.
Two Fingers Slide Down-Up: index and middle fingers slide upwards.
Two Fingers Slide Up-Down: index and middle fingers slide downwards.
Two Fingers Slide Right-Left: index and middle fingers slide from right to left.
Two Fingers Slide Left-Right: index and middle fingers slide from left to right.
Pinch In: thumb and index or middle fingers slide inwards.
Pinch Out: thumb and index or middle fingers slide outwards.


Figure 4. Surface gestures organized by frequency

In some cases, the first and second most frequent gestures were antagonistic. For instance, to answer a call, almost half of the participants slid one finger to the right, but about 20% slid in the opposite direction.

Of all 1092 gestures collected, most fell into the ten categories depicted in Figure 4. Of these, 915 answers were selected for further analysis, corresponding to the gestures made more than 20 times by participants. The most frequent surface gestures used one finger and consisted of simple taps and slides.

Table 3 depicts the guessability measure for the referents whose gestures were made more than 20 times (84% of all proposed gestures).


Table 3. Guessability results: percentage of proposed gestures accommodated by each referent, over the ten most frequent gesture categories (one finger: Compass Rose, Tap, Double Tap, Long Press, Slide Left-Right, Slide Right-Left, Slide Up-Down, Slide Down-Up; two fingers: Slide Up-Down, Pinch In). Each referent lists its non-empty percentages; the highest value for each referent guided the gesture selection.

Back to Main Menu: 0.4, 5.7, 7.1, 9.3, 1.4, 2.0, 31.8, 13.3
Select an option: 15.2, 2.9, 7.1, 5.9
Selected-choose option: 12.0, 8.6, 14.3, 1.1
Previous menu: 0.4, 2.9, 15.4, 4.5, 4.2, 4.5
Go inside a sub-menu: 39.3, 11.6, 5.7, 5.4
"Driver Profile": 60.7, 8.8, 5.7
To accept or agree: 7.6, 1.4, 7.1, 3.2
To reject or to cancel: 6.0, 11.7, 5.6, 13.3
Show contact info: 6.8, 8.6, 37.5, 1.1, 0.6
End a call: 2.8, 2.9, 4.4, 11.7, 8.3, 10.0
Send an SMS: 5.6, 10.0, 16.1, 2.7, 9.1, 2.0
Make a call: 7.6, 8.6, 3.6, 8.2, 6.5
Silence a call: 2.0, 12.9, 3.6, 2.2, 1.3, 15.3, 9.1, 23.3
Accept a call: 2.4, 1.4, 13.2, 5.8, 3.9
Open an SMS: 12.8, 14.3, 3.6, 3.2
Cancel a call: 1.4, 1.8, 4.9, 9.7, 4.2, 13.3
Reject a call: 5.5, 15.6, 13.3
Next: 0.8, 7.1, 4.4, 11.0, 1.4, 27.5
Previous: 12.1, 3.2, 20.8, 3.9
Increase volume: 1.6, 2.2, 54.9
Decrease volume: 1.6, 2.6, 38.9, 54.5, 13.3

According to Table 3, there are some conflicts, as the same surface gestures were proposed for different referents. Nevertheless, the gesture with the highest value was selected for each referent. Some gestures are evident for some referents, such as navigation in menus, with 61% plus 39% for the compass rose, or increasing and decreasing volume, with 55% guessability for the slide up and down gestures. These values mean that the referents are either suggestive enough for a gesture or draw on more semantic knowledge, as in increasing and decreasing volume. Others are not so attributable to a given referent, like double tapping to "Make a call", with only 9% guessability.

Further agreement analyses were made to better support the selection. These values reflect the conflicts observed previously: lower agreement scores corresponded to a higher number of gestures proposed per referent. The general agreement score ranged between 0.17 and 0.67 (Figure 5).

Figure 5. Agreement scores for the surface gestures used for the referents. The score ranges between $|P_r|^{-1}$ and 1.

Figure 5 shows that the referents with the highest agreement scores between participants were "Select an option", "Reject a call" and "Increase volume". All other referents had scores under 0.5, which indicates some variability among participants, pointing towards the need for a more suggestive interface for the HMI system.

Taxonomy of Gestures

The creation of a taxonomy of gestures was intended to have practical effects on the development process of the HMI system, and it looked for macro patterns. This allowed verifying patterns that were organized into the proposed gesture taxonomy for indirect input (Table 4). In this exercise, three main actions were identified among all referents: adjusting, performing acceptance or refusal actions, and navigating through menus.


Table 4. Proposed taxonomy of gestures.

ADJUSTMENTS/CHANGE MODE: Adjust intensity (volume); Change position (music); Change status (on/off); Change size (zoom)
ACCEPTANCE ACTIONS: OK, Accept, Agree, Open, Make, Send…
REFUSAL ACTIONS: Reject, Cancel, Finish…
NAVIGATION: Move through a list; Move inside or through menus

Some referents were not included in the taxonomy because they need specific gestures recognizable in any context, such as the "Home" and "Return" functions. After analyzing the frequency, guessability and agreement measures, it was possible to associate a surface gesture with 12 of the 21 referents (Table 5), for a total of eight different surface gestures. These were based on the referents that gathered the highest guessability and agreement scores.

Table 5. Library of surface gestures proposed for 12 of the 21 referents.

ADJUSTMENTS/CHANGE MODE (Change volume): Increase/Decrease volume – Slide Down-Up / Slide Up-Down
ACTION: ACCEPTANCE: Select an option – One Finger Tap
ACTION: ACCEPTANCE: Open SMS – One Finger Double Tap
ACTION: ACCEPTANCE: Make/Accept a call; OK/Agree – One Finger Slide Left-Right
ACTION: REFUSAL: End/Cancel/Reject a call; Reject/Cancel – One Finger Slide Right-Left
NAVIGATION (Navigation in menus): Go to "Driver Profile"; Go to a sub-menu – One Finger Compass Rose
GENERAL/OTHERS (Preview): Show contact info – One Finger Long Press

The proposed taxonomy made it possible to group surface gestures according to their function or semantic meaning (acceptance/refusal).


Discussion

This study provided important results regarding participants' preferences and expectations when interacting with an HMI system, which should be integrated into the surface gesture design process.

The fact that the interaction was indirect influenced the mapped gestures. For instance, about 58% of "Tap" gestures were made at the position corresponding to where the object was presented on the top display. This indicates that participants transferred the top display's spatial representation to the bottom display and touched it accordingly. Although the location of the gesture has no influence on the resulting feedback in the concept, it is an interesting pattern. This result demonstrates how economical the proposed gestures were, in the sense that the simplest surface gesture could be used in several contexts if there were a correspondence between the presented interface and the place where the input was made.

It was also evident that participants used very simple and familiar gestures, and most gestures used only one finger, mainly the index finger. The fact that all users interacted daily with smartphones and/or tablets might have influenced the surface gestures made throughout the experiment, sometimes by direct transposition. Another explanatory factor could be the 10-second time limit imposed, favoring simple and quick surface gestures. This result indicates that the number of fingers should never be a differentiating factor when activating frequent functions.

Having opposite gestures for opposite actions was also common, which demonstrated the general coherence and semantics of the proposed gestures. For instance, participants slid down-up or up-down to increase and decrease the volume, respectively. It was also interesting to verify the tendency to associate positive actions with the left-to-right movement, and refusal or negative actions with the right-to-left movement.

The results suggest that the system would benefit from more guidance cues to facilitate interaction. Although the main menu seems explanatory enough, since it gathered one of the highest guessability values with the compass rose gesture, most gestures were grouped in the "Tap" and "Slide" categories, and some referents shared a high number of inputs from the same surface gestures. Nevertheless, the twelve referents represented in the final proposal of gestures managed to gather higher levels of agreement and guessability.

The guessability of the interface under study can be improved, and this first exploratory analysis identified where the focus of the next development phase should be: placing cues that indicate the direction of the gestures to users, and making all navigation contextual, so that the same gesture can be used in different contexts.


Conclusion

A guessability study was developed to propose a library of gestures for an indirect interaction concept. The outputs of the study indicate that all participants, independently of their experience with mobile devices, preferred simple and familiar gestures using the index finger. It was possible to propose eight gestures for a total of twelve referents, out of twenty-one. Some referents needed different gestures due to their specificity (e.g., "Return Home"), and it became clear that participants converged towards the same simple and familiar surface gestures, avoiding distinctive input. The study also suggested that the HMI interface could be improved with more cues for the user, indicating, for instance, the direction of the surface gesture.

Further stages of this research will include the validation of the gestures, incorporating the selected gestures into a real interactive context. This step is foreseen in the guessability methodology and would provide further information on the learnability of the gestures in a real context. The guessability methodology demonstrated how user-centered design can be applied even at an advanced stage of the design and development process, using simple set-ups and analyses. Nevertheless, it should be emphasized how significant the changes resulting from these studies can be, and how valuable these methods are in the initial stages of design and development, reducing the amount of time spent testing and correcting concepts.

Acknowledgements

This research was supported by: Operational Competitiveness Program - COMPETE, QREN (Quadro de Referência Estratégico Nacional), European Regional Development Funds (European Union); BOSCH, for the project HMIEXCEL - I&D crítica em torno do ciclo de desenvolvimento e produção de soluções multimédia avançadas para automóvel; and the strategic program FCT-UID/EEA/00066/2013.

All illustrations provided and adapted from GestureWorks® (www.gestureworks.com).

References

Akten, M., & Kaltenbrunner, M. (2010). TUIOpad [Mobile application]. Martin Kaltenbrunner.
Apple. (2014). OS X Yosemite [Operating system].
Apple. (2015). QuickTime [Computer software].
Jordan, P.W., Draper, S.W., MacFarlane, K.K., & McNulty, S.A. (1991). Guessability, learnability, and experienced user performance. In D. Diaper & N. Hammond (Eds.), HCI'91 People and Computers VI: Usability Now (pp. 237–245). Cambridge University Press.
Lee, G.A., & Wong, J. (2015). User defined gestures for augmented virtual mirrors: A guessability study. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 959–964). New York, NY, USA: ACM.
Moyes, J., & Jordan, P.W. (1993). Icon design and its effect on guessability, learnability, and experienced user performance. People and Computers, 8, 49–60.
Norman, D. (2010). Gestural interfaces: A step backwards in usability. Interactions, 17, 46–49. Retrieved April 24, 2015, from interactions.acm.org.
Psychology Software Tools, Inc. (2012). E-Prime [Computer software].
Ruiz, J., Li, Y., & Lank, E. (2011). User-defined motion gestures for mobile interaction. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems - CHI '11 (p. 197). New York, NY, USA: ACM Press.
Vatavu, R.-D. (2012). User-defined gestures for free-hand TV control. In Proceedings of the 10th European Conference on Interactive TV and Video - EuroITV '12 (pp. 45–48). New York, NY, USA: ACM Press.
Wobbrock, J.O., Aung, H.H., Rothrock, B., & Myers, B.A. (2005). Maximizing the guessability of symbolic input. In CHI '05 Extended Abstracts on Human Factors in Computing Systems (pp. 1869–1872). New York, NY, USA: ACM Press.
Wobbrock, J.O., Morris, M.R., & Wilson, A.D. (2009). User-defined gestures for surface computing. In Proceedings of the 27th International Conference on Human Factors in Computing Systems - CHI '09 (p. 1083). New York, NY, USA: ACM Press.