
Embodied interface for levitation and navigation in a 3D large space

M. Perusquía-Hernández¹, T. Enomoto¹, T. Martins², M. Otsuki¹, H. Iwata¹, K. Suzuki¹ (¹University of Tsukuba, ²Universität für künstlerische und industrielle Gestaltung Linz)

[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

ABSTRACT

We propose an embodied interface that allows both physical and virtual displacement within an Immersive Virtual Environment (IVE). It consists of a modular wearable used to control a mechanical motion base from which a user is suspended. The motion base is able to navigate in 3D through seven wires whose length is adjusted via a parallel link manipulator. Furthermore, IMU-based body posture detection enables users to “fly” within the IVE at their own will, providing hands-free navigation and facilitating interactions for other tasks. To assess the usability of this embodied interface, we compared it with a Joystick-based navigation control. The results showed that this interface allows effective navigation towards several targets located in 3D space. Even though the efficiency of target reach with the Joystick-based interaction is higher, a subjective assessment shows that the interface is comparable to the Joystick in hedonic qualities and attractiveness. Further analysis showed that with more practice, participants might navigate with a performance comparable to the Joystick. Finally, we analyzed the embodied behavior during 3D space navigation. This sheds light on requirements for further 3D navigation design.

CCS Concepts

• Human-centered computing → Interaction design theory, concepts and paradigms;

Keywords

Embodiment, head anticipation, 3D space navigation

1. INTRODUCTION

Technology has allowed us to augment our innate human capabilities and fulfill ancient dreams, including that of flying. As land-based creatures, humans are not evolutionarily predisposed to controlling flight or position in 3D space.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

AH ’17, March 16-18, 2017, Mountain View, CA, USA
© 2017 ACM. ISBN 978-1-4503-4835-5/17/03 ... $15.00

DOI: http://dx.doi.org/10.1145/3041164.3041173

Figure 1: Motion Base of the LargeSpace. (a) The LargeSpace and the Motion Base. (b) Motion Base wire diagram and LargeSpace coordinates (X, Y, Z; 25 m x 15 m x 7.7 m). The green cable is the weight support cable. Dark blue circles correspond to each pulley. The vectors used for cable length calculation (L_i, B_i, T) are also indicated.

Human gait happens on a 2D plane, with moderate vertical displacement as allowed by the landscape. Levitating and flying introduce the possibility of vertical movement, which is not completely natural to us. In this sense, technology has opened the possibility of expanding our body-constrained limitations and of challenging physical laws. 3D navigation is already possible, either using planes or Virtual Reality (VR) simulators. Both research on human navigation and training for movement in 3D spaces are conducted using these technologies.

Flying simulators are a popular VR application, both for training and entertainment, as they provide a risk-free environment where untrained users can experience flight. Beyond audiovisual factors, simulators can convey the feeling of vehicle movement. Given that VR flight simulators commonly mediate the control of a vehicle's navigation, interfaces tend to mimic those of existing vehicles. These include joysticks, buttons, or mouse pointing in a 2D plane translated to 3D navigation [2]. However, the possibilities expand beyond those. In 2001, [23] developed a taxonomy of the design space for navigation techniques. This taxonomy included differences according to the task selection, travel control, and user interface.


The variables in such user interfaces were the input mechanism, the control frequency, the control mapping, the type of display, simultaneous views, and the simultaneous existence of objects. By proposing different interaction techniques, the authors capitalized on the VR visualization to compress, scale, and copy the VR world in order to facilitate navigation. However, these navigation control paradigms are heavily dependent on the visualization. It is important to note that the input mechanisms used were only traditional button inputs. Nevertheless, as head-mounted displays (HMDs) become more affordable, opportunities increase to create new contexts where humans are able to experience individual flight. Furthermore, tracking and wearable technologies have also made it possible to explore new input modalities beyond buttons and thumbsticks. For instance, gestures tracked via motion capture systems are a popular interaction alternative that allows the user's body to be the controller of the virtual space.

One approach to gesture-based flight control has been to mimic birds, using the arms as wings [18, 11]. Another approach to embodied interaction is to simulate horizontal gliding [9, 29]. Albeit intuitive, both wing-type and limb-extension interactions require adopting a posture that is likely to cause fatigue very quickly. Additionally, the required movements do not allow users to freely use their hands for interaction with the Virtual Environment (VE). This problem is partially solved by providing a full-body support wearable and a button interface in the handles of such support. However, this does not allow other hand and limb gestures to be used for other tasks during navigation.

In a previous study, an exploration of multiple control modalities for embodied navigation pointed to the fact that using subtle head movements has the potential to allow users to navigate in 3D space with minimum effort [15]. Furthermore, previous research has explored anticipatory neural mechanisms that build predictions of future sensory and motor events during human gait, including gaze, head, and body movements. It has been shown that gaze anticipates head orientation, and head anticipates body orientation towards the direction of gait [6, 1]. Moreover, gravity restricts human displacements to 2D space. In this case, the head is stabilized to keep a continuously upright position with respect to gravity. Disorientation has been reported by astronauts when there is no such frame of reference, or when the head position is destabilized [27]. Despite the difficulties, professionals whose work implies 3D navigation (e.g., astronauts or pilots) can master the task after a considerable amount of training. Even though there is a bodily optimization for terrestrial navigation using egocentric yaw rotations only, humans can cope with allocentric reference frames in 3D weightless environments and improve their spatial memory of them [27].

Therefore, we propose an embodied system that allows individuals to levitate and navigate in 3D space using head and body rotation as input mechanisms for 3D navigation in a real space. This is achieved with the support of one of the world's largest immersive virtual environments with full-surround and floor projections, named LargeSpace. While stereo projection displays are able to create the illusion of a virtual world larger than the actual size of the projection space, a cable-driven Motion Base supports the movement of a user in 3D space, providing a somatic sensation of flight.

The contributions of this paper are:

• The system design and implementation of an embodied interface that augments human 2D navigation to 3D real-space movement.

• The assessment of the performance, usability, and hedonic qualities of the proposed embodied passive interface, relative to an active, button-based interface such as a Joystick.

• The behavioral assessment of human body posture during 3D navigation to inform future designs.

This system distinguishes itself from other flight simulation technologies in that: (1) it can augment human capability by allowing real flight in a large space; (2) the user is able to experience it in an individual, vertical position; (3) the proposed embodied control method for levitation allows for hands-free control with subtle movements.

2. RELATED WORK

There exist a number of flight simulators and 3D navigation techniques. First, we provide examples of cable-driven flight simulators similar to the Motion Base used in this study. Second, we describe in detail the most prominent embodied interaction techniques developed to navigate in 3D. Most of the mentioned works focus on virtual displacement only. The LargeSpace allows for both virtual and real displacements. In this work, we mainly focus on 3D real-displacement control. Its coupling with virtual displacement is left for future work. Therefore, previous work focusing on 2D virtual displacements, or on redirected walking to achieve the sensation of unlimited virtual displacement with minimal real-world displacement (e.g., [21]), is not extensively mentioned here.

Since their first conceptualization in the 80s, cable-driven robots have become popular in applications where loads are to be transported over wide areas of space, for sports training, planar haptic interfaces, locomotion interfaces, and air vehicle simulators [14].

The “Seilroboter mit Passagier” (cable robot with passenger), or Cable-robot simulator [3], developed at the Tübingen-based Max Planck Institute for Biological Cybernetics (MPI), is among the first cable-driven parallel robots able to transport humans. It consists of a carbon fiber frame suspended by eight steel cables that can simulate vehicle trajectories in six directions.

More modest cable-based flight simulators include the Iron Man Flight Simulator [9], which is a mounting device made with a hang glider harness and a small crane. This system integrated an HMD with Google Earth images, a Wii remote to control the flight, and a wind machine to increase immersion. Furthermore, they used Iron Man-like movements to steer the flight. Also, earlier work by Ars Electronica had the user suspended by a harness, combined with VR and force feedback to convey the sensation of flying [29]. Inspired by the practice of paragliding, flight was controlled by body rotation and limb movement. More recently, Krupke et al. [11] constructed a flight simulator with a climbing harness, a set of climbing ropes, a motor winch, and a Thera-Band sling for load reduction. With this arrangement, up to 400 kg can be suspended from the ceiling from three points offloaded via pulleys. Users can be suspended while wearing an HMD, and their movements can be tracked from the floor to control a VR-simulated flight. Moreover, they designed two different types of control for the navigation.


Figure 2: Overview of the interactions: initial levitation (30 cm), (1) posture-based embodied passive egocentric control, and (2) Joystick-based active allocentric control, each mapping to up/down, front/back, and left/right movements.

The first one simulated a bird, moving the arms to imitate wings. The second one was the so-called “Superman” method, which solely used the orientation of one of the participant's arms to steer the navigation. Finally, a user evaluation showed that both control methods were similar in usability, level of presence, and task load. However, participants reported that the bird method felt more natural.

Other alternatives to enhance the physical sensation of flying are moving body supports. This is illustrated by Birdly, which implements a moving base where the user can lie down, combined with rich audiovisual and haptic feedback elements to provide an embodied experience of flying [18]. Another example of a moving base attached to the ground to provide an individual flight experience is the ICAROS project [19]. The proposed moving base provides support for the legs and arms in a platform that allows pitch and roll movement. It includes a button controller to start and stop a flight application on a phone, which can be visualized with the Samsung Gear VR. Flight control signals are obtained from the smartphone's Inertial Measurement Unit.

3. THE LARGESPACE

The LargeSpace is one of the largest immersive displays in the world. It covers a space of 7.7 m height, 25 m width, and 15 m length.

3.1 Projection system

Stereo images are seamlessly projected on the walls and floor of the LargeSpace by twelve Mirage series projectors made by Christie Digital Systems [22]. Furthermore, objects within can be precisely tracked via twenty OptiTrack Prime infrared cameras.

3.2 The Motion Base

The Motion Base is a triangular-shaped structure from which a user can be suspended in the air (Figure 1). This is achieved by attaching a harness to the Motion Base, allowing users to be lifted and displaced while remaining able to move their body and make use of their limbs in the air. The harness is fabricated by Moritoh Corporation and can support a maximum of 560 kg (100 kg guaranteed). It lifts the user by the waist, and partly by the thigh.

3.2.1 Hardware

The wire-driven Motion Base is composed of a carbon-iron base and seven wires with seven pulleys that control wire length. Each wire runs from a pulley installed at the top of the LargeSpace to the vertices and to the center of the base, in a Stewart platform mechanism arrangement. Figure 1(b) shows the placement of the wires in the LargeSpace. The wires are SUS wires of 3 mm thickness, except for the weight support wire hanging from the ceiling, which is 4 mm thick. The maximum load supported by the wires is 3.5 kN, and the maximum length is 30 m, although only 20 m are necessary for 6-DoF movement around the LargeSpace. The motors are Panasonic MEME502SCH servomotors with 5.0 kW output, 15.9 Nm torque, and a guaranteed 3000 r/min. The diameter of the wire rod is 85 cm, with a 2.5 cm groove where the cable fits. The motors use an HPtec HCRTEX controller and communicate via RTEX, which has 5 ms latency [5].

3.2.2 Wire length and position calculation

The Motion Base is controlled by the rotation of the pulleys and the adjustment of wire length. Rotation is achieved with a parallel link manipulator [10]. Using this method, the wire length is calculated with Equation 1, by considering three vectors. First, the mounting position of the rope from the center position of the base to each of the Motion Base's vertices (P). Second, the vector indicating the position of the center of the base from the center of the ceiling of the LargeSpace (T). Third, the vector indicating the mounting position of each rope from the center of the space (B). For more details see Figure 1(b). This system design allows the user to move around a 3D space of 10 m width, 6 m height, and 5 m depth. The maximum speed is 2.5 m/s.

$$ L_i = \begin{bmatrix} X_{L_i} \\ Y_{L_i} \\ Z_{L_i} \end{bmatrix} = R(\phi, \theta, \psi)\,P_i + T(X_T, Y_T, Z_T) - B_i \qquad (1) $$
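To make Equation 1 concrete, the following is a minimal C++ sketch (not the authors' implementation; the vector type, rotation convention, and function names are illustrative assumptions) of how the length of one cable could be computed from the base attachment point P_i, the target base position T, the pulley anchor B_i, and the base orientation (phi, theta, psi):

```cpp
// Sketch of Equation 1: L_i = R(phi, theta, psi) * P_i + T - B_i.
// The pulley controller would then reel each cable to length |L_i|.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double norm(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Rotate p by roll (phi, about X), then pitch (theta, about Y), then yaw (psi, about Z).
static Vec3 rotate(Vec3 p, double phi, double theta, double psi) {
    Vec3 r{p.x,
           std::cos(phi) * p.y - std::sin(phi) * p.z,
           std::sin(phi) * p.y + std::cos(phi) * p.z};          // roll
    Vec3 q{std::cos(theta) * r.x + std::sin(theta) * r.z,
           r.y,
           -std::sin(theta) * r.x + std::cos(theta) * r.z};     // pitch
    return {std::cos(psi) * q.x - std::sin(psi) * q.y,
            std::sin(psi) * q.x + std::cos(psi) * q.y,
            q.z};                                               // yaw
}

// Length of cable i for a base pose (T, phi, theta, psi), given the attachment
// point P_i in the base frame and the pulley anchor B_i in the room frame.
double cableLength(Vec3 Pi, Vec3 T, Vec3 Bi, double phi, double theta, double psi) {
    Vec3 Li = sub(add(rotate(Pi, phi, theta, psi), T), Bi);
    return norm(Li);
}

int main() {
    // Hypothetical numbers: vertex 0.5 m from the base centre, base hovering at
    // the centre of the space 2 m above the floor, pulley at a ceiling corner.
    Vec3 Pi{0.5, 0.0, 0.0}, T{0.0, 2.0, 0.0}, Bi{5.0, 7.7, 3.0};
    std::printf("L_i = %.2f m\n", cableLength(Pi, T, Bi, 0.0, 0.0, 0.0));
}
```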

3.2.3 Safety assurance

Safety during movement is ensured by considering five factors. First, by constraining the maximum load of the system according to the vendor's safety indications. Second, by the implementation of a hardware emergency stop switch. Third, by an infrared sensor for excessive reel detection and prevention. Fourth, by software blocking of out-of-range movements. Finally, by constraining the interaction design to avoid sudden human pulling of the harness, and by securely attaching to the body all control wearables or devices that might fall.

4. SYSTEM DESIGN

4.1 Interaction design

The so-called Levitas interface explores embodied movement control for 3D space navigation. Embodied interaction embraces the fact that human minds are not completely abstract or independent of a bodily existence; rather, cognitive and motor capabilities have developed together through biological evolution [4, 28, 16, 12]. The body is emphasized as the locus of interaction, and the results of the user's actions affect the state of the whole body at every instant.

As humans do not possess any natural means to float in the air, the ability to fly using only our own body constitutes a completely new experience. Thus, an embodied control is a challenge.


Figure 3: System overview. The headband and torso IMUs feed the Levitas App client (intended direction detection), which sends commands to the Motion Base control server (pulley control) over TCP.

Our focus has been on interaction modalities which are closer to natural action, even if levitation or flight are not natural capabilities of humans. Therefore, we consider body postures which take advantage of innate human motor characteristics, and we explore how they expand to 3D movement. We capitalize on the head anticipation phenomenon during locomotion in 2D displacements [7, 1], and expect to see it generalized in 3D navigation.

Contrary to other flight simulators, the Motion Base allows individual flight in a vertical position, similar to floating or levitating. Previous work has shown the possibility of using a wing-type interaction to navigate in the LargeSpace with the use of the motion tracking system [5]. However, that work did not propose a method to go backwards. Furthermore, the use of the limbs for additional interaction purposes is limited while moving the arms as wings, and the broad nature of the gestures may cause untrained users to quickly become tired. Therefore, subtler body movements like waist rotations and head orientation were preferred to indicate navigation direction.

Our proposed interface uses two Inertial Measurement Units (IMUs) integrated in a headband and a pin for the clothes. They are used to detect head and torso rotations, which are intended to act as passive control inputs. Passive modalities are those that require no explicit input command. This design choice was made in an effort to create a perceptual interface based on body movements and states which people inherently perform [24].

Users are allowed to navigate in three axes. The X axis corresponds to front and back; the Z axis to left and right; and the Y axis to up and down (Figure 1(b)). These movement directions were mapped to multiple control inputs. Figure 2 summarizes the proposed movements for navigation control. For navigation in a horizontal plane, head and torso orientation are used. By leaning forwards and backwards, the Motion Base also moves in those directions. Looking left or right steers the Motion Base sideways. Finally, looking up and down grants the user the power to ascend or descend. The selected movements are subtle, and unlikely to cause tiredness.

4.2 System architecture and wearable design

The Motion Base is controlled by two PCs, one of which functions as a client and hosts the control application; the second PC is a server, which effectively controls the Motion Base's pulleys and wire length. The system overview is provided in Figure 3.

The sensing system consists of a modular wearable device that includes two custom Inertial Measurement Units (IMUs) implemented with a Bosch BNO055 chip in UART configuration and an RN41 Bluetooth SMD module, which relays the data to the host of the software control application (Levitas App). The Levitas App receives and aggregates sensor data, computes the resulting motion, and sends commands to the Motion Base via TCP.

Figure 4: Wearable for 3D navigation and body posture tracking. An example of the real 3D target and the Joystick used for the user evaluation are also shown.

To allow prototyping iterations, we opted for an unobtrusive, modular wearable design. It consists of a headband with an IMU, and a clip with a second IMU for the torso. The headband is fairly easy to put on and does not cover the face. One IMU is attached to it in order to measure head movement. A second IMU is implemented on a clip that can easily be attached to the user's clothes on the chest area. Each IMU is equipped with a Li-ion battery, encapsulated in a box of 5 x 3.5 x 2 cm. The maximum size and weight are mainly due to the battery. Smaller pins could be achieved considering the trade-off with battery life.

For evaluation purposes, a set of hand bands and a back marker board were also designed. They consist of rigid acrylic structures with several markers attached. The intention is to track the user's body posture with the Motion Capture system of the LargeSpace (Figure 4). The motion tracking system maximizes tracking accuracy, and facilitates calculations by using the same calibration and reference frame.

4.3 Software

The Levitas App is written in C++ and acts as a mediating layer between sensor data and the Motion Base control. Based on the sensor data, it computes the intended direction of movement, which is then sent to the Motion Base software via TCP. Smoothing is performed using a moving average window of 10 samples, to avoid jerky movements. The movement speed is set to a fixed rate of 20 cm/s.
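As an illustration of the smoothing step, the sketch below applies a 10-sample moving average to the intended direction vector before it is scaled by the fixed 20 cm/s rate; the class and variable names are our own assumptions, not the actual Levitas App code:

```cpp
// 10-sample moving average over the intended direction, to avoid jerky commands.
#include <array>
#include <cstdio>

constexpr int kWindow = 10;      // moving-average window (samples)
constexpr double kSpeed = 0.20;  // fixed displacement rate [m/s]

struct Direction { double x, y, z; };

class MovingAverage {
public:
    Direction update(const Direction& sample) {
        buffer_[index_] = sample;
        index_ = (index_ + 1) % kWindow;
        if (count_ < kWindow) ++count_;
        Direction sum{0.0, 0.0, 0.0};
        for (int i = 0; i < count_; ++i) {
            sum.x += buffer_[i].x;
            sum.y += buffer_[i].y;
            sum.z += buffer_[i].z;
        }
        return {sum.x / count_, sum.y / count_, sum.z / count_};
    }
private:
    std::array<Direction, kWindow> buffer_{};
    int index_ = 0;
    int count_ = 0;
};

int main() {
    MovingAverage filter;
    // A hypothetical noisy "move up" stream is smoothed before being scaled to m/s.
    for (int i = 0; i < 12; ++i) {
        Direction d = filter.update({0.0, (i % 2 ? 1.0 : 0.8), 0.0});
        std::printf("y = %.2f -> %.3f m/s\n", d.y, d.y * kSpeed);
    }
}
```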

The control of the Motion Base is done according to three states: (1) standing on the ground, (2) levitating, and (3) flying. Users always start on the ground, and they first need to achieve a state of levitation before they can fly around. Levitation is achieved by moving upwards 30 cm. After the user has reached the levitating state, the flying state is activated. While flying, the user can control their position in 3D space.
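This three-state control could be organized as a small state machine like the sketch below (our own simplified reading of the description above, with assumed names and a fixed climb increment standing in for the real pulley commands):

```cpp
// Ground -> Levitating (rise 30 cm) -> Flying, as described above.
#include <cstdio>

enum class State { Ground, Levitating, Flying };

struct FlightController {
    State state = State::Ground;
    double height = 0.0;  // metres above the starting position

    void update(double commandedDy) {
        switch (state) {
        case State::Ground:
            state = State::Levitating;               // levitation requested
            break;
        case State::Levitating:
            height += 0.01;                          // climb until 30 cm is reached
            if (height >= 0.30) state = State::Flying;
            break;
        case State::Flying:
            height += commandedDy;                   // free control along Y (and X, Z)
            break;
        }
    }
};

int main() {
    FlightController c;
    for (int i = 0; i < 35; ++i) c.update(0.0);
    std::printf("flying = %d, height = %.2f m\n", c.state == State::Flying, c.height);
}
```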

The IMUs require calibration at the beginning of each session, by recording the user's baseline position. The baseline position is a relaxed, straight posture, with the head facing straight along the X axis of the LargeSpace in the west direction. Sensor readings after calibration were subtracted from the baseline, to obtain only the rotation relative to it. If the readings in any of the input channels rise above the respective direction threshold, the user starts to move in that direction.


Figure 5: Target locations during the user evaluation. The targets were tangible marks in the LargeSpace. Initial position: (5, 0, 0); Target 1: (3, 2, -1.5); Target 2: (0, 0, 0); Target 3: (-3, 2.3, 1.5); Target 4: (-5, 0, -2). The baseline gaze direction, the trajectory directions (up-right, down-right, down-left, up-left), and the two camera positions are also indicated.

Thresholds were defined after testing comfortable postures for several users, and adjusted slightly for each individual. Furthermore, they were mutually exclusive with the opposite direction of movement.
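A minimal sketch of this threshold logic follows; the structure names and the sign conventions are assumptions for illustration, not the actual implementation:

```cpp
// Baseline-subtracted rotations are compared against per-user thresholds;
// opposite directions on the same axis are mutually exclusive.
#include <cstdio>

struct Posture {        // degrees, relative to the calibrated baseline
    double headPitch;   // + up,      - down
    double headYaw;     // + left,    - right
    double torsoPitch;  // + forward, - backward
};

struct Thresholds {     // per-user values recorded during calibration
    double up, down, left, right, forward, backward;
};

struct Command { int x, y, z; };  // -1, 0, +1 along each LargeSpace axis

Command detectDirection(const Posture& p, const Thresholds& t) {
    Command c{0, 0, 0};
    if (p.headPitch > t.up)              c.y = +1;  // ascend
    else if (p.headPitch < -t.down)      c.y = -1;  // descend
    if (p.headYaw > t.left)              c.z = +1;  // steer left
    else if (p.headYaw < -t.right)       c.z = -1;  // steer right
    if (p.torsoPitch > t.forward)        c.x = +1;  // lean forward
    else if (p.torsoPitch < -t.backward) c.x = -1;  // lean backward
    return c;
}

int main() {
    Thresholds t{15, 20, 40, 40, 4, 3};  // example values based on Section 5.3.2
    Posture p{18, 0, -5};                // looking up, leaning slightly back
    Command c = detectDirection(p, t);
    std::printf("command: x=%d y=%d z=%d\n", c.x, c.y, c.z);
}
```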

5. USER EVALUATION

A user evaluation was conducted (1) to investigate users' understanding, performance, and enjoyment of the proposed control system; and (2) to assess whether human head anticipation occurs in 3D navigation even when using other control methods that do not depend on this embodied characteristic.

We evaluated the effectiveness and efficiency of Levitas. We also conducted a subjective assessment of the pragmatic and hedonic qualities of the interface. Being a hands-free embodied interface designed to match natural navigation movements, and even though reaching the activation threshold is slower than pressing a button, we hypothesized our system to have good enough performance and usability, and to be enjoyable to use. As a reference, we compared the proposed system to a basic Joystick navigation interface. Joysticks are ergonomic and highly responsive, and button pressing has inherent haptic feedback; therefore, they are the VR control interface par excellence. Despite these advantages, having a Joystick limits hand usage for other types of interaction, and is often reported as less fun than gesture interaction [26]. Moreover, we explored the user's body posture during usage to investigate whether the expected embodied control movements happen even if the user controls the navigation with a Joystick. For these purposes, an XBOX controller was implemented through the Levitas App, using the interactions described in Figure 2.

5.1 Methods

1. Experiment design. Participants explored two interaction type conditions (Levitas and Joystick) in a within-subjects design. The conditions were presented in a counterbalanced order. Cable-length change speed was fixed to 40 cm/s in both conditions, and no VR visualizations were projected on the surrounding screens. Participants controlled their translation around the real 3D space as described in Figure 2.

2. Participants. Fourteen participants voluntarily joined the study (average age = 26.4 years, SD = 3.82; 3 female). Although some of them had previous experience riding the Motion Base, none of them had used the proposed interface before. Of these participants, the first one gave valuable qualitative feedback to quickly improve calibration and wearable comfort. The next three participants were considered a pilot, and the remaining 10 participated in the full experiment as described below.

3. Task. Participants were provided with graphical instructions, in a figure similar to Figure 2, on what type of commands were available to navigate. Afterwards, the system was calibrated to their preferred head and torso rotations. Next, they were invited to explore different postures to control their flight during three minutes of practice. After practicing, they were asked to fly to four position markers placed around the LargeSpace. Two of those markers were placed at 2 and 2.3 m above the ground, and two were placed directly on the ground (Figure 5). The start and goal markers were located at the extremes of the X axis, and two equally distanced intermediate goals at the extremes of the Z axis. Participants were given as much time as they needed to complete the task. Afterwards, each user was asked to fill in a questionnaire to assess the condition experienced right before.

4. Measurements. Data from all sensors was logged, as well as the commands received by the server. Additionally, six reflective markers were attached to the headband, six to a back acrylic board, and seven to wrist boards, as shown in Figure 4. This was intended for head, torso, and hand tracking during the trials. The subjective assessment questionnaire included: (1) the AttrakDiff questionnaire to assess pragmatic and hedonic qualities, in its online version [8, 25]; (2) a question about the degree of sense of agency on a 5-point Likert scale; (3) the Net Promoter Score (NPS) question on a 10-point Likert scale to assess how likely users are to recommend the system to friends or colleagues [17]; and (4) demographic and Joystick experience control questions.

5.2 Analysis

The video recordings were analyzed to visually inspect whether subjects were able to complete the task or not. The motion tracking data of the head, torso, and arms was used to identify in detail the users' translation and rotation. Based on this information, the efficiency, or the time to complete the task, was estimated. The moment when the user reached the target was defined as the point at which the Euclidean distance between the user and the target is minimized. Additionally, we calculated the percentage of the total task time that each participant took to reach each one of the targets. This allowed us to test whether the efficiency was dependent on the direction of movement, and therefore, on the posture required to control the flight.
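For the target-reach criterion, a minimal sketch of the distance-minimization step is shown below (the data layout and names are our assumptions; the actual analysis pipeline is not described in that detail):

```cpp
// The reach moment is the tracked sample closest (in Euclidean distance) to the target.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Point { double x, y, z; };

static double distanceTo(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Index of the trajectory sample with minimal distance to the target.
std::size_t reachIndex(const std::vector<Point>& trajectory, const Point& target) {
    std::size_t best = 0;
    double bestDist = distanceTo(trajectory[0], target);
    for (std::size_t i = 1; i < trajectory.size(); ++i) {
        double d = distanceTo(trajectory[i], target);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

int main() {
    // Hypothetical three-sample trajectory approaching Target 1 at (3, 2, -1.5).
    std::vector<Point> traj{{5, 0, 0}, {4, 1, -1}, {3.2, 1.9, -1.4}};
    std::size_t idx = reachIndex(traj, {3, 2, -1.5});
    std::printf("target reached at sample %zu\n", idx);
}
```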

Based on calibration data, we also determined the most comfortable head and torso rotations consciously chosen by the users. Next, we used the motion tracking data to analyze the actual head and torso rotations during the flight in both conditions. Questionnaire data was analyzed using standard statistical methods for repeated measures. Finally, observations from the video recordings and verbal comments from the participants are also reported.


Figure 6: Effectiveness completing the proposed task per input interface. (a) Task completion time per participant, in seconds (n=13). (b) Target reach normalized time per trajectory direction of movement (Up-Right, Down-Left, Up-Left, Down-Right; n=12). Levitas and Joystick conditions are compared in both panels.

Figure 7: Subjective assessment. (a) AttrakDiff portfolio, Pragmatic Quality (PQ) versus Hedonic Quality (HQ), n=10 (blue: Levitas, black: Joystick). (b) AttrakDiff average values (n=10): Levitas PQ=0.20, HQ-S=1.33, HQ-I=1.03, ATT=1.64; Joystick PQ=1.30, HQ-S=0.59, HQ-I=0.69, ATT=1.33.

5.3 Results

5.3.1 Task performance

All participants finished the task, and reached all targets with both interaction techniques. Whereas the main variables in Levitas were participant ability and personal calibration, for the Joystick the main variable was previous gaming experience. Thirteen participants had used a Joystick before. Twelve of the participants played 0 to 5 hours per week, and only two played 5 to 10 hours per week. Furthermore, 42% of the participants considered themselves beginner gamers, and the rest rated their ability as intermediate.

Figure 6 shows task completion times per participant, interface, and direction of movement. The overall completion time with the Joystick was significantly shorter than with Levitas (t(13) = -5.8719, p < 0.05). Completion times varied from participant to participant, with wider variation in the Levitas control (mean = 5.49, SD = 2.91) than in the Joystick control (mean = 1.485, SD = 0.62). A repeated-measures two-way ANOVA was performed with task completion time as the dependent variable, and gaming experience and control type as independent variables. There was no significant main effect of gaming experience (F(2,14) = 1.193, p > 0.05), nor a significant interaction between gaming experience and control type (F(2,14) = 1.112, p > 0.05). Additionally, the time spent reaching each individual target yielded no significant difference by controller type (F(1,95) = 0.003, p > 0.05). On the other hand, the movement direction (F(3,95) = 3.71, p < 0.05), and the interaction between movement direction and controller type (F(3,95) = 3.23, p < 0.05), were significant. Pairwise comparisons with Bonferroni correction showed no further significant differences. Figure 6(b) shows the time per direction of movement and per interface.

5.3.2 Body posture

Comfortable head and torso rotations reported by the participants during calibration were, at minimum, 15° for head pitch upwards, 20° for head pitch downwards, 40° for head yaw both to the left and to the right, 4° for torso pitch forwards, and 3° for torso pitch backwards. These were well below the maxima described as normal human movement. The common values are 50° for head hyperextension (up), 40° for head flexion (down), 55° for head rotation both to the left and to the right, and 70° for torso flexion (front) and 30° for torso hyperextension (back) [13]. Torso rotation in particular was very limited during calibration, due to the movement constraints imposed by the harness. However, during the actual flight, participants tended to make wider movements than those commonly observed in other contexts, and visibly applied more force while doing them, even when they were reminded that this was not necessary. Finally, the sensor placement was different for some participants due to their hairstyle, which significantly changed the calibration values.

The motion capture data revealed that participants tended to keep their torso in the baseline position during the Joystick interaction, and that they rotated it just enough to move forwards and backwards in the Levitas interaction. Furthermore, they repeatedly reported that torso movement was challenging due to the harness structure, which supported all the weight from the waist. Some participants opted for strategies such as extending the arms and legs to adopt a more horizontal position. However, this movement required visible effort.


Figure 8: Body posture and moving trajectories for participant 4. (a) Levitas translation, (b) Joystick translation, (c) Joystick torso rotation, (d) Levitas torso rotation, (e) Joystick head rotation, (f) Levitas head rotation. Translations are represented in meters and rotations as quaternions, both in the coordinate system of the LargeSpace. From the plots, it can be seen that users took smoother trajectories with the Joystick than with Levitas. With Levitas, there were small torso rotations, whereas with the Joystick the torso was kept straight. Finally, head rotations happened in both the Joystick and Levitas conditions.

On the other hand, head rotation occurred in both the Levitas and Joystick conditions. During the Joystick condition, participants locked their gaze on the target, which helped them achieve the task more effectively. With Levitas, the interactions were intended to move the head towards the desired movement direction, but participants often overshot or undershot the target, probably because they were making wider head movements than they would usually do when only looking at the target (Figure 8). Furthermore, some rotation combinations, such as going forward and upwards, or backwards and downwards, were reported as unnatural. As a strategy to cope with this, participants moved first in one direction, and then in the other, describing staircase-like trajectories (Figure 8(a)). In contrast, while using the Joystick, the moving trajectories were smoother and allowed for diagonal movement (Figure 8(b)).

5.3.3 Subjective assessment

Subjective assessment ratings showed that whereas the Joystick interaction (PQ=1.3, HQ=0.64) has more perceived pragmatic qualities, the Levitas control (PQ=0.2, HQ=1.18) appears attractive, with slightly better hedonic qualities and attractiveness (Figure 7(b)). This suggests that the Joystick interaction is more task-oriented, whereas the Levitas interaction is rather self-oriented. Even though the performance of the Joystick is perceived as superior to that of Levitas, there is a trend indicating that the Levitas interface is more fun. The mean agency reported for Levitas was 3.2 (SD=1.1) on a 5-point scale. Similarly, the mean agency felt during the Joystick manipulation was 3.8 (SD=0.9). A paired t-test showed no significant difference (t(12)=1.74, p>0.05). Finally, the Net Promoter Score for Levitas was 0.29, as opposed to the 0.14 score given to the Joystick interface.

6. DISCUSSION

The chosen interactions allowed for a fully controllable, hands-free embodied interface for 3D navigation. A 100% completion rate was achieved in the proposed task, with fairly good usability scores. Despite participants taking longer to complete the task than with a traditional Joystick, there is a trend indicating that Levitas has higher hedonic qualities; i.e., it is more fun to use. Moreover, we observed that head rotations towards the desired direction of movement were present in both conditions, which suggests that these movements are performed unconsciously and have the potential to reduce mental load when using Levitas to navigate. Additionally, Levitas could be used for hands-free 3D navigation while a Joystick, or other types of interaction, are used to interact with the virtual environment.

Even though interaction with Levitas is slower, the time seems proportionally distributed across all movement directions. This is contrary to what we expected, given some uncomfortable posture combinations such as moving up-forward or down-backward. These postures are difficult because leaning forward naturally implies looking down, and leaning backwards implies looking up. Furthermore, wide head movements to the sides imply a lack of gaze contact with the targets. A possible result of this was the stair-like navigation paths adopted during the Levitas control. These navigation paths, in combination with target-gaze decoupling, led to frequent overshoots and undershoots in the Levitas condition.

Although performance did not depend on self-reported gaming experience, the participants' prior experience using a Joystick partially explains why they were so much better with it than with Levitas. Some of them even asked to stop the practice before its allotted time had passed.

Interestingly, a couple of participants tried superhero postures to increase forward and backward torso rotations. More generally, participants tended to overcompensate with movements for which the interface was not calibrated. Some of them guessed that wider, slower movements would help, but others opted to apply more force, which just made them more tired. It seemed that they expected force-speed coupling, which might be a hint for future design.


As mentioned before, the movement of the Motion Base was set to a constant speed. This was somewhat expected when using the Joystick, but with the body, participants seemed to expect a change. Furthermore, we would like to point out that in between trials, the users' position was reset from the final target to the initial one. This was done at the Motion Base's maximum speed, while the user was still riding it. We observed that participants did not feel urged to hold the harness when they were in control and at a low displacement rate, but they tended to hold the harness more when they were not in control or when the speed became “scary”. Thus, future designs should explore in more detail the role of acceleration in users' behavior.

At a constant speed, head rotations happened in both the Joystick and the Levitas interactions. As observed, and as reported by some participants, looking at the target destination is very important for navigation. However, the wide head movements during the Levitas interaction made it difficult to always look at the targets. This is probably why participants took more time reaching the targets with the new interface.

Finally, we would like to point out that whilst the Joystick-based interaction was advantageous in this particular context, participants experienced some confusion when trying to reach the targets with their hands, only to find them busy holding the Joystick. In these situations, the risk of dropping the controller was high. Therefore, we decided to physically tie the controller to the user's wrist, which in turn made it difficult to change the hand holding it. This issue might be exacerbated in more complex tasks with variable speed. In those situations, users would tend either to hold the harness, or to use their hands for more complex manipulations in the VR environment. All in all, rather than a competitor strictly speaking, the Joystick might be a good complement for embodied navigation.

This is reflected in the subjective assessment. Although performance was superior with the Joystick, the perceived hedonic qualities and attractiveness of Levitas seem moderately higher. The NPS for Levitas was 15% higher. General usability is comparable for both interfaces, and the higher challenge with Levitas might be explained by the lack of experience with this type of interface, as opposed to using joysticks. An alternative explanation is the use of wide movements, which made it difficult to look at the target at all times. We expect that as people get used to the threshold interface, they will make more efficient movements.

6.1 Limitations

As this work outlined, embodied technologies pose new challenges. Different system requirements and behavioral reactions are expected when using our body as the controller. Albeit small, the most salient limitation of our system is the actual, and the subjective, delay between the command and the Motion Base's reaction. The limited feedback available in Levitas, and the embodied nature of the interaction, made it difficult for the participants to create an adequate mental model of the expected outcome of each movement. Furthermore, although head and torso anticipation were described as passive body posture inputs, the participants' tendency was to pay unnatural attention to them. While using Levitas, they became active movements. In fact, most users did use their head as a Joystick. Further research is needed to determine whether this effect would be alleviated with practice.

Another constraint was that the participants had many rigid bodies attached to track their body posture. These were used only for evaluation purposes, and might have constrained the participants' movement. Without them, interactions might improve. Finally, the number of participants in the study was limited. More measurements would be necessary to confirm the preferred hedonic qualities of Levitas as an embodied interface, when compared to a Joystick.

7. CONCLUSION AND FUTURE WORK

The proposed embodied interface for levitation and navigation in 3D space, called Levitas, was successfully implemented to drive the LargeSpace's Motion Base. The Motion Base is a cable-driven parallel robot that allows 3D translations in real space, and moderate rotations. Compared with previous work, the provided movement experience is unique, as it allows the user to ride in a vertical position, respecting the natural reference frame for movement orientation. Levitas aimed to explore this advantage. Thus, the available control commands were designed to extend natural egocentric locomotion in 2D space to 3D space navigation. A user evaluation showed that Levitas is capable of achieving 100% task completion in a simple navigation and target-reach task. Furthermore, it was rated as having good hedonic qualities and fairly good usability: participants felt in control of their movements and enjoyed the flight. However, the efficiency and efficacy of the interface were not as high as those expected of navigation using a Joystick. Despite the aforementioned limitations, we uncovered evidence that even if a controller is used, people tend to redirect their gaze, and partially their heads, towards the desired target direction, showing the potential of an embodied navigation paradigm in combination with other control modalities to interact with the surrounding virtual environment. Finally, we reported the average head and torso rotations when navigating in 3D. This information could be used to inform future interaction designs. In future work, we would like to assess the learning curve of such interfaces. As discussed by [27], the human body is not optimized for 3D navigation; however, we have the potential to improve with practice. Furthermore, a more accurate mental model of how the embodied interface reacts to body movements could be supported by providing bodily feedback upon command activation. Other potentially interesting research directions include the usage of this interface in combination with the surrounding VR environment to increase the perceived navigated distance. Previous research already investigated how the psychophysical limitations of the human body can be used to give the user the perception of unlimited navigation, even though they do not move beyond a small room in the real world [21, 20, 30]. Finally, by integrating the virtual view into the system, more complex tasks where hands-free navigation is important could be explored. In those contexts, Levitas might have a more prominent advantage over the Joystick's performance.

8. ACKNOWLEDGMENTS

We thank Dushyantha Jayatilake for his advice regarding the wearable hardware implementation.


9. REFERENCES

[1] D. Bernardin, H. Kadone, D. Bennequin, T. Sugar, M. Zaoui, and A. Berthoz. Gaze anticipation during human locomotion. Experimental Brain Research, 223(1):65–78, 2012.
[2] D. A. Bowman, S. Coquillart, B. Froehlich, M. Hirose, Y. Kitamura, K. Kiyokawa, and W. Stuerzlinger. 3D user interfaces: New directions and perspectives. IEEE Computer Graphics and Applications, 28(6):20–36, 2008.
[3] H. H. Bulthoff and J.-D. Walz. Cable-driven parallel robots. Press release, Fraunhofer Institute for Manufacturing Engineering and Automation IPA, pages 1–2, 2008.
[4] P. Dourish. Where The Action Is: The Foundations of Embodied Interaction. The MIT Press, Cambridge, 1st paperback edition, 2004.
[5] T. Enomoto, H. Iwata, and H. Yano. Development of Fly-Through System with Wire-Driven Motion Base and Body-Motion Interface. In Nihon Virtual Reality Gakai Gaikai Ronbunshu, volume 21, pages 14–17, 2016.
[6] R. Grasso, L. Bianchi, F. Lacquaniti, E. Owen, G. Cappellini, Y. P. Ivanenko, N. Dominici, and R. E. Poppele. Motor Patterns for Human Gait: Backward Versus Forward Locomotion. Journal of Neurophysiology, pages 1868–1885, 1998.
[7] R. Grasso, S. Glasauer, Y. Takei, and A. Berthoz. The Predictive Brain: anticipatory control of head direction during steering of locomotion. Neuroreport, 7(6):1170–1174, 1996.
[8] M. Hassenzahl, M. Burmester, and F. Koller. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. Mensch & Computer 2003: Interaktion in Bewegung, pages 187–196, 2003.
[9] J. Horsey. Iron man flight simulator, 2010.
[10] K. Kosuge, K. Takeo, F. Toshio, K. Kai, T. Mizuno, and H. Tomimatsu. Computation of Parallel Link Manipulator Dynamics. Japan Society of Mechanical Engineers, 66(C-569):218–224, 1994.
[11] D. Krupke, P. Lubos, L. Demski, J. Brinkhoff, G. Weber, F. Willke, and F. Steinicke. Control methods in a supernatural flight simulator. Proceedings - IEEE Virtual Reality, 2016-July:329, 2016.
[12] S. Oviatt and P. Cohen. Perceptual user interfaces: multimodal interfaces that process what comes naturally. Communications of the ACM, 43(3):45–53, March 2000.
[13] J. Panero and M. Zelnik. Human Dimension and Interior Space. Whitney Library of Design, Watson-Guptill Publications, New York, first edition, 1979.
[14] S. Perreault and C. M. Gosselin. Cable-Driven Parallel Mechanisms: Application to a Locomotion Interface. Journal of Mechanical Design, 130(10):102301, 2008.
[15] M. Perusquia-Hernandez, T. Martins, T. Enomoto, M. Otsuki, H. Iwata, and K. Suzuki. Multimodal Embodied Interface for Levitation and Navigation in 3D Space. In Proceedings of the 2016 Symposium on Spatial User Interaction, 2016.
[16] S. Pinker. The Blank Slate: The Modern Denial of Human Nature. Penguin Books, London, 2002.
[17] F. F. Reichheld. The One Number You Need to Grow. Harvard Business Review, 81(12):46–54, 2003.
[18] M. Rheiner. Birdly an attempt to fly. In ACM SIGGRAPH 2014 Emerging Technologies (SIGGRAPH '14), pages 1–1, New York, NY, USA, 2014. ACM Press.
[19] J. Scholl. Design und Konzeption für Fitnessgerät und Simulator Icaros. Diplomarbeit, Hochschule der Bildenden Künste Saar, 2012.
[20] F. Steinicke, G. Bruder, J. Jerald, H. Frenz, and M. Lappe. Estimation of Detection Thresholds for Redirected Walking Techniques. IEEE Transactions on Visualization and Computer Graphics, 16(1):17–27, 2010.
[21] F. Steinicke, G. Bruder, T. Ropinski, and K. Hinrichs. Moving Towards Generally Applicable Redirected Walking. In Proceedings of the 10th Virtual Reality International Conference (VRIC 2008), pages 15–24, 2008.
[22] H. Takatori, Y. Enzaki, H. Yano, and H. Iwata. Development of the large scale immersive display LargeSpace. Nihon Virtual Reality Gakai Ronbunshi, 21(3):493–502, 2016.
[23] D. S. Tan, G. G. Robertson, and M. Czerwinski. Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 418–425, 2001.
[24] M. Turk and G. Robertson. Perceptual user interfaces (introduction). Communications of the ACM, 43(3):32–34, March 2000.
[25] User Interface Design GmbH. AttrakDiff.
[26] M. H. P. H. van Beurden, W. A. Ijsselsteijn, and Y. A. W. de Kort. User Experience of Gesture Based Interfaces: A Comparison with Traditional Interaction Methods on Pragmatic and Hedonic Qualities. pages 36–47. Springer Berlin Heidelberg, 2012.
[27] M. Vidal, M. A. Amorim, and A. Berthoz. Navigating in a virtual three-dimensional maze: How do egocentric and allocentric reference frames interact? Cognitive Brain Research, 19(3):244–258, 2004.
[28] M. Wilson. Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4):625–636, 2002.
[29] M. Zepetzauer, D. Stefan, and S. Ritter. Humphrey II, 2003.
[30] R. Zhang and S. A. Kuhl. Flexible and general redirected walking for head-mounted displays. Proceedings - IEEE Virtual Reality, pages 127–128, 2013.