
Incorporating Subliminal Perception in Synthetic Environments

David Pizzi1, Ilkka Kosunen2, Cristina Viganó3, Anna Maria Polli2, Imtiaj Ahmed2, Daniele Zanella4, Marc Cavazza1, Sid Kouider5, Jonathan Freeman6, Luciano Gamberini3, Giulio Jacucci2

1 Teesside University, Middlesbrough, England: {d.pizzi; m.o.cavazza}@tees.ac.uk
2 University of Helsinki, Finland: {ilkka.kosunen; giulio.jaccucci}@helsinki.fi
3 University of Padova, Italy: [email protected]
4 Electrolux Italia S.P.A., Pordenone, Italy: [email protected]
5 Ecole Normale Supérieure, Paris, France: [email protected]
6 Goldsmiths, University of London, England: [email protected]

ABSTRACT Advanced interactive visualization, such as in virtual environments, and ubiquitous interaction paradigms pose new challenges and opportunities in considering real-time responses to subliminal cues. In this paper, we propose a synthetic reality platform that, combined with psychophysiological recordings, enables us to study in real-time the effects of various subliminal cues. We endeavor to integrate various aspects known to be relevant to implicit perception. The context is consumer experience and choice of an artifact, where the generation of subliminal perception through an intelligent 3D interface controls the spatio-temporal aspects of the information displayed and of the emergent narrative. One novel contribution of this work is the programmable nature of the interface, which exploits known perceptive phenomena (e.g. masking, crowding and change blindness) to generate subliminal perception.

Author Keywords 3D Environment, Subliminal Cues, Interactive Narrative.

ACM Classification Keywords H.5.1 [Multimedia Information Systems]: Artificial, Augmented, Virtual Realities; H.5.2 [User interfaces]: Evaluation/methodology.

General Terms Design, Experimentation, Human Factors, Measurement.

INTRODUCTION Previous work [5] suggests that unconscious processes are actually involved in the (consumer) choice of products that manifest a certain level of complexity, which we assume to be correlated to a mapping between decision criteria and artifact perceptive features. Products such as appliances can be displayed in 3D environments relying on both system and user agency to customize an emergent product experience and narrative. Traditionally, investigating subliminal cues, implicit responses and other unconscious processing through psychophysiology has been done in laboratory environments on specific tasks [1]. The use of subliminal stimuli has been shown to prevent overloading of the user when a large amount of data needs to be explored [9]. We propose to investigate a more complex scenario to explore the typology and delivery of subliminal cues in the context of realistic information and 3D content of appliances. Furthermore, having found the appropriate mechanisms to communicate subliminally with users, the interest extends beyond the “local” impact on the immediate action and perception of the user to the potential “global” impact of the intelligent use of such mechanisms on the overall product experience and unfolding narrative.

To address these problems we have developed a synthetic reality platform that gives us exact control of the type, timing and placement of subliminal cues during an embodied human-computer interaction with a 3D scene rendered on a large display.

In the following, we present the interactive 3D environment we created to study implicit responses and subliminal cues in the context of interaction with appliance models. We discuss the technical setup, content, application and interactivity that are necessary to develop a research platform. The approach is to allow immersiveness and intuitive interaction for an optimal engagement with the content. To this end, the size of the display is an important feature; for example, we have experimented with a 65-inch display. When interacting with large displays, intuitive interaction includes multi-touch and gesture interfaces [6]. We chose to implement subjects' explicit interaction and commands through a depth camera. Issues in developing gestures in this context include keeping interaction effortless (e.g. raising arms can be tiring) and providing transparent feedback in the 3D environment [3].
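As a rough illustration only, the following Python sketch shows how a selection "push" gesture might be detected from depth-camera skeleton data; the joint names, thresholds and frame rate are assumptions for the sketch, not the system's actual gesture recognizer.

from collections import deque

PUSH_DEPTH_DELTA = 0.25  # metres the hand must travel towards the display (assumed)
PUSH_WINDOW = 15         # frames (~0.5 s at 30 fps) in which the travel must happen

class PushGestureDetector:
    """Detects a forward 'push' of the hand relative to the shoulder."""

    def __init__(self):
        self.relative_depth = deque(maxlen=PUSH_WINDOW)

    def update(self, joints):
        """joints: dict of joint name -> (x, y, z) in metres, z = distance to sensor."""
        hand_z = joints["right_hand"][2]          # hypothetical joint names
        shoulder_z = joints["right_shoulder"][2]
        # Track the hand depth relative to the shoulder so that walking towards
        # the display does not register as a push.
        self.relative_depth.append(shoulder_z - hand_z)
        if len(self.relative_depth) == PUSH_WINDOW:
            travelled = self.relative_depth[-1] - self.relative_depth[0]
            if travelled > PUSH_DEPTH_DELTA:
                self.relative_depth.clear()
                return True  # push detected: select whatever the pointer highlights
        return False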

Copyright is held by the author/owner(s). UbiComp’12, September 5-8, 2012, Pittsburgh, USA. ACM 978-1-4503-1224-0/12/09.


Figure 1. Overview of Presentation Strategy in the CEEDs Engine. According to the current user and world states, the real-time planner computes and sends sequences of actions to be executed by the immersive visualization engine. User state formalizes both implicit (e.g. EDA, EMG and EEG signals) and explicit (e.g. visual focus, pointing gestures, etc.) information.
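As an illustration of what such a fused user state could look like in code, here is a minimal Python sketch; the field names, units and derived indices are assumptions, not the CEEDs engine's actual data schema.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ImplicitState:
    eda: float = 0.0             # electrodermal activity (microsiemens)
    emg: float = 0.0             # facial EMG amplitude (arbitrary units)
    eeg_engagement: float = 0.0  # derived engagement index in [0, 1] (assumed)

@dataclass
class ExplicitState:
    visual_focus: Optional[str] = None     # id of the region currently looked at
    pointing_target: Optional[str] = None  # id of the object currently pointed at
    recent_events: List[str] = field(default_factory=list)  # e.g. "open_door"

@dataclass
class UserState:
    implicit: ImplicitState = field(default_factory=ImplicitState)
    explicit: ExplicitState = field(default_factory=ExplicitState)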

Interactivity of Virtual Appliances Virtual Environments support many new applications in data visualization, as well as the possibility of enhancing the user experience to facilitate access to, or integration of, such data. In addition, recent progress in Intelligent User Interfaces facilitates the staging of virtual experiments on perception and interaction by explicitly encoding such hypotheses in the knowledge layer that underpins the intelligent interface. In this paper, we present on-going work aimed at extending virtual environments to support implicit perception. Previous work [2, 10] suggested that some perceptive phenomena traditionally considered part of implicit perception, such as change blindness, could be exploited in the design of virtual reality systems. Rather than directly encoding specific perceptive phenomena in the design of the Virtual Reality system, our objective is to support the dynamic use of implicit subliminal visualization as part of the data presentation strategy, and to incorporate this in the real-time, interactive user experience.

The application that supports our experiments explores consumer visualization of domestic appliances. The rationale is that consumer choice has been demonstrated to involve implicit phenomena [5], that these artifacts are well suited to 3D interactive visualization (as is their enhancement with additional textual or graphic information), and that the mechanisms for perceiving the features of these products are largely unknown. We also hope that the realism of the task can facilitate the definition of ground truth mechanisms for the evaluation of perception or decision-making.


Virtual Content of Appliances Electrolux Group's media library is the database that currently gathers all the product images, videos, logos and other content from across the Electrolux Group's brands, including Electrolux, AEG and Zanussi. Its contents include trailers, guidelines, banners, stickers, leaflets, pictures, short movies, animations, radio podcasts, TV commercials and CAD models.

CAD models are prepared during the product development process and traditionally used as the basis for manufacturing and testing the product. With the level of realism recently reached by 3D graphics, Electrolux started to use these models to create valuable content supporting activities such as training, marketing, sales and research. For example, CAD models can be integrated as-is by software houses into their kitchen configuration software, or used by the agencies handling communication campaigns to create short video clips, catalogue pictures, applications for mobile devices, etc.

SYSTEM OVERVIEW Our system supports this industrial scenario by showing how this content can be used not only in passive advertisement videos but also in interactive applications. The system (Figure 1) presents itself as a virtual environment supporting user tracking and interaction via traditional (i.e. physical) means. The user navigates freely and explores an environment featuring various appliances that can react to user interaction (both explicit, interface-like, and implicit, acquired through physiological sensors). At the heart of the system, the narrative engine aims at inducing specific user experiences by orchestrating the display and behavior of the various appliances so as to induce a given set of perceptive events. We are extending the traditional concept of interactive narrative to encompass the temporal presentation of a set of scenes featuring the appliances. Each scene, and each transition between scenes, aims at the presentation of a given configuration of perceptive features. Within a given scene, the distribution between features presented explicitly and those presented implicitly forms part of the experimental hypotheses to be assessed. The underlying mechanisms for the narrative engine are adapted from state-of-the-art work in interactive narrative. The display of specific features is under the control of visualization operators. Specific perceptive phenomena can be organized through the temporal/causal arrangement of sequences of visualization operators, such sequences being generated in real-time using heuristic search planning. The narrative engine constantly plans for perceptive impressions in the background, and these are staged in the virtual environment. User actions and user input through physiological sensors are constantly fed back to the planner so as to adjust the subsequent display strategy.
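The following Python sketch illustrates, in a highly simplified form, the plan-execute loop just described: a fused user state comes in, a short sequence of visualization operators goes out to the visualization engine. The operator names, the scoring heuristic and the engine interface are illustrative assumptions; the actual system relies on heuristic search planning over its own operator library.

import heapq

# Hypothetical operator library: name -> properties the heuristic can score.
VISUALIZATION_OPERATORS = {
    "highlight_part":    {"dimension": "usability",    "implicit": False},
    "open_door":         {"dimension": "usability",    "implicit": False},
    "masked_prime":      {"dimension": "practicality", "implicit": True},
    "peripheral_symbol": {"dimension": "aesthetics",   "implicit": True},
}

def score(props, user_state):
    """Toy heuristic: prefer operators on the dimension the user attends to,
    and prefer implicit ones when explicit activity is already high."""
    s = 1.0 if props["dimension"] == user_state["current_dimension"] else 0.0
    if props["implicit"] and user_state["explicit_activity"] > 0.7:
        s += 0.5
    return s

def plan_presentation(user_state, length=3):
    """Return a short sequence of operators, best-scoring first."""
    ranked = heapq.nlargest(length, VISUALIZATION_OPERATORS.items(),
                            key=lambda item: score(item[1], user_state))
    return [name for name, _ in ranked]

def presentation_loop(get_user_state, send_to_engine, cycles=100):
    """Plan-execute loop: the fused implicit/explicit state is sampled, a new
    presentation sequence is planned and handed to the visualization engine."""
    for _ in range(cycles):
        sequence = plan_presentation(get_user_state())
        send_to_engine(sequence)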

One important step in the design of the system consists in identifying strategies for implicit information display. Candidate strategies include crowding, change blindness and masking. A subsequent step consists in devising display strategies for these phenomena, which are defined taking into account the real-time detection of the user's position and field of view (region of interest), and the various spatial, visual and temporal capabilities of the VR display.
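As a minimal sketch of how such a display strategy could take the user's field of view into account, the Python fragment below selects masking inside the FoV and crowding at the periphery, as described later in the paper; the 2D geometry and the default FoV angle are assumptions.

import math

def in_field_of_view(user_pos, user_heading_deg, target_pos, fov_deg=60.0):
    """True if the target falls inside the user's horizontal field of view."""
    dx, dy = target_pos[0] - user_pos[0], target_pos[1] - user_pos[1]
    angle_to_target = math.degrees(math.atan2(dy, dx))
    delta = abs((angle_to_target - user_heading_deg + 180.0) % 360.0 - 180.0)
    return delta <= fov_deg / 2.0

def choose_display_strategy(user_pos, user_heading_deg, target_pos):
    """Masking is applied inside the FoV; crowding at the visual periphery."""
    if in_field_of_view(user_pos, user_heading_deg, target_pos):
        return "masking"
    return "crowding"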

STAGING APPLIANCES: VISUALIZATION AND INTERACTION In promotional material, appliances are often displayed in contexts such as an integrated kitchen; we therefore used the same strategy as an introduction, to recreate ecological conditions for exploration and choice. We implemented an environment where subjects first encounter appliance categories in a kitchen environment. In this phase, users may select one appliance type integrated in the kitchen by operating a pointer. This initiates user exploration, navigation and manipulation of the artifacts, which creates an appropriate context for the introduction of subliminal cues. User interaction with the appliances is accomplished using a depth camera (i.e. Microsoft® Kinect™), which allows gesture recognition. As an example of scene exploration, after highlighting an appliance with the pointer and selecting it with a pushing gesture, the kitchen disappears and the user is shown all the available models of the chosen appliance type (e.g. a refrigerator, an oven or a dishwasher). After selection of one of them, the new appliance replaces the previous one and the kitchen is displayed again. This virtual environment allows focusing on categories of appliances starting from a realistic exploration. During subsequent stages, the system also supports mechanisms for multiple visualizations, duplication of appliance parts, magnification, enhancement with textual and symbolic elements and, of course, subliminal information display.
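The exploration flow just described can be summarized as a small state machine; the Python sketch below is illustrative only, with hypothetical state and event names rather than the system's actual implementation.

class ExplorationFlow:
    """States: 'kitchen' (overview) -> 'model_selection' -> back to 'kitchen'."""

    def __init__(self, catalogue):
        self.catalogue = catalogue  # appliance type -> list of available model ids
        self.state = "kitchen"
        self.selected_type = None

    def on_push(self, pointed_object):
        """Handle a push gesture on the currently highlighted object."""
        if self.state == "kitchen" and pointed_object in self.catalogue:
            self.selected_type = pointed_object
            self.state = "model_selection"
            return ("show_models", self.catalogue[pointed_object])
        if self.state == "model_selection":
            self.state = "kitchen"
            # The chosen model replaces the previous one in the kitchen scene.
            return ("place_in_kitchen", pointed_object)
        return ("noop", None)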

We have adapted the original CAD models (simplified, with a reduced polygonal resolution), so that interaction with a refrigerator, which constitutes our main example in the following sections, is possible through the same mechanism. Appliance behavior is defined within the model itself, in terms of interaction events and how mobile parts should respond to them, using standard features of the Unity game engine that we use for visualization.

Individual appliances can also be rendered interactively, with zooming, rotation, and the opening of doors and drawers. Depth camera interaction offers different ways of defining interaction, following either affordance models of intuitive gestures or the pointing, highlighting and selecting paradigm explained above. Furthermore, it is possible to activate the physics engine on some mobile parts, where user interaction initiates physical motion, providing a more realistic exploration.

ELICITING IMPLICIT PERCEPTION The overall narrative principle consists in displaying information according to certain semantic dimensions of user experience, which can be mapped onto properties of the appliance (Figure 2). Some of these dimensions are usability (which characterizes the physical interaction and the appliance's operation), practicality (relations to usage) and aesthetics (design preferences and social norms). It is important to note that these dimensions are not entirely subordinated to the appliance description; some of them, including practicality and usage, relate to user profiles and how they would interpret generic properties of the appliance from their personal perspective.

Figure 2. Example of Exploratory User Experiences. Several presentation strategies are deployed according to the different possible scenario progressions. At the start, the user faces several objects within a virtual 3D environment. The nearest appliance attracts the user's attention through subtle (e.g. oscillations), more salient (e.g. automatic opening) or even implicit (e.g. suggestions) sequences of actions inside or outside her field of view. The overall theme of the user experience follows three main dimensions (usability, practicality and aesthetics), and different strategies are presented according to the dominant direction chosen (e.g. lighting effects for design, or ease of access for usability). Finally, subliminal information can also be presented at the user's visual periphery to suggest new exploratory directions (e.g. when the user state remains stationary for a prolonged period of time).

The overall context is one of mixed-initiative: the user is free to explore the appliance visually and through interaction, while the appliance itself is endowed with reactive behavior that adapts itself to the interaction history to assist the user exploration. The system assists user exploration based on its analysis of user experience rather than an intrinsic “advertising” strategy dictating which type of information or appliance feature to promote.

In this context, subliminal information should be allocated a specific role: rather than simply behaving as an additional information channel, it is to be used as a mechanism to drive the user exploration at critical stages without interrupting the task at hand. It is to be used parsimoniously to leverage its potential.

One typical example would be the display of subliminal information assisting the user in the operation of the appliance upon detection of difficulties in interacting with movable internal components. Let us assume that the user is in an exploratory phase of explicit interaction with the various internal compartments of a refrigerator: upon detection of difficulties (which can take place through a combination of physiological signals and task performance analysis), the system may decide to provide subliminal assistance, inspired by an approach described by De Vaul et al. [4], contextualized to i) the type of parts on which the user's attention has been concentrated and ii) the current position and orientation of the user.
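A minimal sketch of such a trigger condition is given below, combining an electrodermal arousal rise with repeated failed manipulations; the thresholds, the EDA feature and the request format are assumptions rather than the system's actual detection logic.

def difficulty_detected(eda_samples, failed_attempts,
                        eda_rise_threshold=0.4, max_failures=3):
    """eda_samples: recent EDA values (microsiemens) over a short window;
    failed_attempts: unsuccessful manipulations of the same movable part."""
    if len(eda_samples) < 2:
        return False
    arousal_rise = eda_samples[-1] - min(eda_samples)
    return arousal_rise > eda_rise_threshold and failed_attempts >= max_failures

def subliminal_assistance_request(attended_part, user_position, user_orientation):
    """Contextualize the cue to (i) the part the user's attention has focused on
    and (ii) the user's current position and orientation."""
    return {"part": attended_part,
            "position": user_position,
            "orientation": user_orientation}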

Figure 3. Example of Subliminal Presentation Strategy using Masking Techniques (adapted from [8]). We assume a rendering frame rate of 60 fps for the graphics engine, with no irregularity during the masking sequence. The first sequence (frames 1-31) consists in displaying a mask (here a snowflake symbol). Priming between frames 31-34 makes use of a low-intensity, small-size picture of a tomato. This is followed by a shorter display of the same mask (three frames, 34-37), then a more persistent (42 frames) display of the target image (a clearer and larger image of a tomato). The use of such a target has been inspired by some of the original Electrolux presentations of the refrigerator's features. The impact of such a target will be assessed by considering subsequent variations in the user's topics or areas of interest.

In a similar fashion, subliminal information can be used to manage transitions between dimensions to be explored, or to preserve the consistency of user experience by maintaining the user's interest in a given dimension. One particular case of transition consists in inviting the user to explore certain aspects after detecting a phase of user inactivity. The nature of the subliminal informational content should also be part of this research. We shall explore the use of text, symbols or icons, as well as images directly congruent to the experience themes at hand. One of the issues to be considered is the type of response that can be elicited as a function of the subliminal display strategy and the contents of the subliminal information presented (e.g. semantic, affective, etc.). Techniques for subliminal display include masking (images) [8], crowding (symbols, text) [7], and flashing (symbols). Display can take place preferentially in peripheral areas of the user's Field of View (FoV) for crowding techniques, or inside the FoV itself (in the case of masking techniques).
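To make the masking timing of Figure 3 concrete, the following Python sketch expresses it as a frame schedule for a 60 fps renderer; the frame counts follow the figure, while the asset names and the draw_frame callback are placeholders.

FPS = 60  # the rendering frame rate assumed in Figure 3

def masking_schedule(mask, prime, target):
    """Return (image, n_frames) pairs reproducing the timing in Figure 3."""
    return [
        (mask,   31),  # forward mask (snowflake symbol), frames 1-31
        (prime,   3),  # low-intensity, small prime (tomato), frames 31-34 (~50 ms)
        (mask,    3),  # backward mask, frames 34-37
        (target, 42),  # clearly visible target image, 42 frames (~700 ms)
    ]

def play(schedule, draw_frame):
    """draw_frame(image) is assumed to render one frame at a steady 60 fps;
    any dropped frame would lengthen the prime and risk making it visible."""
    for image, n_frames in schedule:
        for _ in range(n_frames):
            draw_frame(image)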

EXPLORING CONSUMER APPLIANCES Unlike traditional Virtual Reality systems, this interactive scenario implements a “mixed-initiative” exploration of kitchen appliances by articulating user actions with system visual presentations. The concept of mixed-initiative was originally developed in the field of human-computer dialogue to authorize anytime user interaction rather than only responding to prompts. In a VR setting whose purpose is information visualization, it consists in endowing virtual objects with some form of autonomous behavior through which they will present themselves in order to attract or focus the user's attention. In practice, it is not sufficient just to include smart objects; the overall system behavior itself needs to be under the control of a specific module which analyzes user experience as a whole (continuously and in real-time) to determine the appropriate system response. The form of support for an interactive experience is similar to the one implemented in interactive narrative systems, in which user interaction is interpreted to produce the next stages of a visual narrative. We have thus used similar technologies for system behavior, based on AI Planning techniques; one specific variant consists in devising an appropriate sampling rate to analyze user interaction and plan system behavior.

While the user can autonomously engage in an exploration of the appliance's features by freely manipulating doors, compartments and drawers, the role of the system is to control and offer a satisfying experience by i) measuring and analyzing user activity and interest, and ii) planning and integrating which visual strategy to follow over the next interaction period. User experience analysis consists of a knowledge-based interpretation of the user's interactions with the system, which are available through the system's I/O devices. Integration is performed at different levels of analysis:

- The lowest level consists of assessing user activity in relation to the appliance. This consists in monitoring the user's position and orientation in relation to the appliance, in particular when it is significant for determining areas of interest: this is based on recognition of body gestures and visual focus through head/eye tracking. The other element of activity analysis is to record the actual operations on the device, i.e. the interaction events that physically manipulate the mobile or reactive parts of the appliance (doors, compartments and any other real-world object commands, if available).

- A higher level of analysis consists in interpreting physical interaction in terms of user experience and activity. The underlying principle is to map the regions of interest of the appliance to semantic dimensions relevant to user exploration (a minimal mapping sketch follows this list). For instance, external, non-functional regions of the appliance devoid of affordances will be associated with the aesthetic dimension, while internal, mobile and functional parts may be associated with usability.
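The region-to-dimension mapping referred to above is sketched below in Python; the region names are hypothetical, while the mapping itself mirrors the examples in the text (non-functional exterior to aesthetics, mobile internal parts to usability).

# Hypothetical region names; the mapping mirrors the examples given above.
REGION_TO_DIMENSION = {
    "front_panel":     "aesthetics",    # external, non-functional, no affordances
    "door_handle":     "usability",     # mobile, functional part
    "internal_drawer": "usability",
    "shelf_layout":    "practicality",  # relates to everyday usage
}

def dominant_dimension(region_events):
    """Aggregate low-level interaction events (region ids) into the experience
    dimension the user currently seems to be exploring."""
    counts = {}
    for region in region_events:
        dim = REGION_TO_DIMENSION.get(region, "practicality")
        counts[dim] = counts.get(dim, 0) + 1
    return max(counts, key=counts.get) if counts else None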

In the continuous process of integrating user activity with an appropriate overall experience, the system response has to be deliberative and interleaves phases of user interaction with phases of information presentation (the sampling rate for these phases is not fixed during any given session, and can be adjusted depending on spontaneous user activity). Information presentation takes the form of visual narrative sequences generated using AI Planning techniques. The next presentation operator is selected from the perspective of narrative evolution. For instance, the planner determines whether it is preferable to introduce a new perspective on the appliance's properties rather than remain in the current state of exploration, deepening the current presentation strategy (acting here as a form of level of narrative detail).
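As an illustration of this decision, the following one-function Python sketch chooses between deepening the current theme and switching to a new perspective; the interest measure and its threshold are assumptions, not the planner's actual criteria.

def next_presentation_move(current_dimension, interest_level, min_interest=0.3):
    """interest_level in [0, 1], derived from recent implicit and explicit activity."""
    if interest_level >= min_interest:
        # Remain on the same theme and deepen the current presentation strategy.
        return ("deepen", current_dimension)
    # Otherwise introduce a new perspective on the appliance's properties.
    return ("switch_dimension", None)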

These presentation strategies consist of sequences of variable length, whose nature depends on the instantaneous characterization of user experience (e.g. exploration, discovery, assessment, etc.). In that sense, both explicit and implicit dramatization elements are present in our scenario. While the former are used for information reinforcement, the latter are more appropriate when we need to change the phase of interaction (Figure 3). For instance, when the analysis reveals that the user is facing an important choice (i.e. different from what is actually happening), or when the level of interaction is particularly weak, our hypothesis is that subliminal information display can be used without disrupting the user experience.

CONCLUSION We have described work in progress on an intelligent interface designed to elicit and study various forms of implicit perception. To the best of our knowledge, this is a new kind of application, which integrates the various contexts in which implicit processes are relevant: perception, information acquisition and decision making. The intelligent part of the interface, which is based on AI planning techniques, makes it possible to experiment with various phenomena associated with subliminal perception, provided these can be expressed as a sequence of visualization operators. This should make it possible to explore more complex phenomena than transient display, including change blindness, flashing, crowding and masking. Meanwhile, in our preliminary experiments, we have shown that our system is capable of generating contextual responses on both EDA and fEMG, potentially closing the loop between explicit interaction, implicit presentation of information and implicit user response.

ACKNOWLEDGEMENTS This work has been funded (in part) by the European Commission under grant agreement CEEDs (FP7-ICT-258749).

REFERENCES
1. Bernat, E., Bunce, S., and Shevrin, H. Event-related brain potentials differentiate positive and negative mood adjectives during both supraliminal and subliminal visual processing. In International Journal of Psychophysiology, 42, 1, (2001), 11-34.

2. Beeharee, A. K., West, A. J., and Hubbold. R. Visual attention based information culling for Distributed Virtual Environments. In Proceedings of the ACM symposium on Virtual Reality Software and Technology (VRST '03). ACM, New York, NY, USA, (2003), 213-222.

3. Bowman, D. and Hodges, L. User Interface Constraints for Immersive Virtual Environment applications. In Graphics, Visualization, and Usability Center Technical Report, (1995).

4. De Vaul, R., Pentland, A., and Corey, A. The Memory Glasses: Subliminal vs Overt Memory Support with Imperfect Information. In ISWC 2003, (2003), 146-153.

5. Dijksterhuis, A., Bos, M.W., Nordgren, L.F. and van Baaren, R.B. On making the right choice: The deliberation-without-attention effect. In Science, 311, (2006), 1005-1007.

6. Jacucci, G., Morrison, A., Richard, G.T., Kleimola, J., Peltonen, P., Parisi, L., and Laitinen, T. Worlds of information: designing for engagement at a public multi-touch display. In ACM CHI '10: Proceedings of the 28th international conference on Human factors in computing systems, (2010), 2267-2276.

7. Kouider, S., Berthet, V., and Faivre, N. Preference is biased by crowded facial expressions. In Psychological Science, 22, (2011), 184-189.

8. Kouider, S. and Dehaene, S. Levels of processing during non-conscious perception: a critical review of visual masking. In Phil. Trans. R. Soc. B. 362, (2007), 857-875.

9. Riener, A. Information injection below conscious awareness: Potential of sensory channels. In 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications Automotive UI'11, (2011), 6.

10. Steinicke, F., Bruder, G., Hinrichs, K., and Willemsen, P. Change blindness phenomena for stereoscopic projection systems. In IEEE Virtual Reality 2010, (2010), 187-194.
