Chapter 7

Guidelines for Spatial Metaphors

Eye of the Observer (1996)

"God has no intention of setting a limit to the efforts of man to conquer space." [Pius XII]

MS-GUIDELINES Chapter 7 – Guidelines for Spatial Metaphors 207

7.1 Introduction

Chapter 6 introduced the MS-Guidelines and discussed some general high-level guidelines. This chapter contains the guidelines for the three groups of spatial metaphors:

• Guidelines for spatial visual metaphors (section 7.2)
• Guidelines for spatial auditory metaphors (section 7.3)
• Guidelines for spatial haptic metaphors (section 7.4).

7.2 Guidelines for Spatial Visual Metaphors

7.2.1 Guidelines for the Visual Display Space

SV-1 Visual space dominates our perception of space. The spatial visual metaphor is the critical design component, as it provides the base for integrating the auditory and haptic displays. The haptic and auditory spatial displays should be integrated into the framework of the visual space (MST-2). As Card, Mackinlay et al. note, "space is perceptually dominant" [Card, Mackinlay et al. 1999].

SV-1.1 Vision has a wide field of view. The visual field of view is 200º horizontally and 160º vertically [Card, Mackinlay et al. 1999]. This entire field can be used to display information.

SV-1.2 Foveal vision is detailed. Foveal vision has much higher acuity than the rest of the retina. This is because the fovea contains a higher percentage of cone receptors, and these receptors perceive higher detail than rod receptors. Furthermore, there is less convergence of neurones from the fovea compared to the periphery. The so-called magnification factor of the fovea is illustrated by the fact that the fovea takes up 0.01% of retinal area but accounts for up to 8% of the area in the visual cortex [Goldstein 1989, p89]. It may be appropriate to design the display so that detailed information is presented to the user's foveal vision while less detailed context information is presented to the user's peripheral vision.

SV-1.3 Peripheral vision responds to changes. In the peripheral retina, rod receptors outnumber cones by about 20 to 1 [Goldstein 1989, p36]. While details are better seen with cones, rod vision is more sensitive due to a property called spatial summation, which allows rod vision to operate under lower levels of light than cone vision [Goldstein 1989, p55]. The peripheral region of the retina is good at detecting movement and changes in the visual environment [Card, Mackinlay et al. 1999].

SV-2 Position in visual space is the most accurate perception. Position in visual space is the most accurate measure for data. Visual space is continuous and can display quantitative data. Space can also be divided into ordered or nominal categories. When high resolution and accuracy are required, information should be displayed in this way.

SV-2.1 For spatial location, vision dominates other senses. The visual sense is spatially organised. For example, many areas of the brain responsible for neural processing of visual information are spatially organised, maintaining a direct mapping between the retinal space and the neural organisation [Goldstein 1989, p75].

SV-2.1.1 Visual position dominates auditory position. Psychophysical experiments have demonstrated that vision dominates auditory perception of spatial position [Welch and Warren 1980]. The arrangement of the auditory cortical map is based on tones rather than space.

SV-2.1.2 Visual position dominates proprioceptive position. The sense of touch also has an arrangement of neurones in the cortex based on spatial organisation. However, psychophysical experiments have demonstrated that vision dominates haptic perception of spatial position [Welch and Warren 1980].

SV-3 Design visual space around the most important data variables. The most important information for the task should determine which attributes of the data are displayed in the visual space and how that space is organised. Prioritise the data attributes in terms of importance and type (quantitative, ordinal, nominal). Only visual space provides a very accurate representation of quantitative values (SV-2).

SV-4 Consider Orthogonal Spaces as a first choice in design.

SV-4.1 Use a 1D space where appropriate. If only a single quantitative variable is important, it should be represented along a single axis.


SV-4.2 Use a 2D space where appropriate. Where the relationship of two quantitative variables is important, they should be represented along two orthogonal axes. This is a traditional way of creating an abstract space, and there are a number of standard forms, such as the scatterplot and lineplot.

SV-4.3 Use a 3D space where appropriate. When the relationship between three quantitative variables is important, the use of a 3D space should be considered, with the most important variables on the three axes. The choice to use a 3D display should not be taken lightly. Using 3D space complicates interaction with the display and introduces the complex issues of depth perception. The advantages of using a 3D display need to be carefully considered and evaluated.

SV-4.3.1 Virtual Environments enhance the interpretation of 3D. Although the interpretation of 3D space is difficult, Virtual Environments provide assistance for the user to interpret these spaces. For example, stereoscopic displays provide an additional depth cue, as does the ability to directly map the view on the display to the user's head position. Furthermore, interaction devices have been developed to assist with spatial navigation and object manipulation in 3D space.

SV-4.3.2 Be aware of perceptual depth cues. If spatial location or object depth is being used to display information in a 3D space, there are a number of different depth cues that are effective at different distances (table 7-1).

Depth Cue                       0-2 meters   2-30 meters   > 30 meters
Accommodation and Convergence       X
Occlusion                           X            X             X
Relative Size                       X            X             X
Relative Height                                  X             X
Atmospheric Perspective                                        X
Motion Parallax                     X            X
Binocular Disparity                 X            X

Table 7-1 Depth cues are effective at different distances [Goldstein 1989, p231]

SV-4.3.2.1 Virtual environments provide stereoscopic cues. Virtual Environments can provide a mechanism for providing the user with stereoscopic views of the scene. This enhances the perception of depth at distances of 0-30 meters.

SV-4.3.2.2 Virtual environments provide movement cues. Virtual Environments can provide a mechanism for tracking the user's head movement and updating the view based on head position. This provides the useful perspective cue of motion parallax, as near objects change more than distant objects.


SV-5 Consider other uses of space in the design. Although orthogonal spaces are typically used to display information, a distortion of this space may be effective.

SV-5.1 Distorted spaces allow for context and detail. It is possible to develop displays that distort space so that detail is provided at the centre of the display (foveal vision) and context at the periphery (peripheral vision). This takes advantage of the physiological arrangement of the visual system (SV-1.2, SV-1.3).

SV-5.2 Subdividing the space may be a useful strategy. As Tufte notes, "information slices within eyespan allow uninterrupted visual reasoning" and allow for rapid comparison on both a local and global level. Tufte describes the repeated tiling of a constant design as multiples. Using small multiples enforces comparisons by the user [Tufte 1990].
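The distortion described in SV-5.1 can be prototyped with a graphical fisheye function. The sketch below uses the Sarkar-Brown transform as one illustrative choice; the distortion factor `d` is an assumed tuning parameter, not something prescribed by the guideline.

```python
def fisheye(x: float, d: float = 3.0) -> float:
    """Sarkar-Brown graphical fisheye transform.

    Maps a normalised distance x in [0, 1] from the focus point to a
    distorted position: detail near the focus is magnified, while
    context at the periphery is compressed. d >= 0 controls the amount
    of distortion (d = 0 is the identity mapping).
    """
    return ((d + 1) * x) / (d * x + 1)

# Points near the focus (x = 0) spread apart; peripheral points bunch up.
positions = [0.0, 0.1, 0.2, 0.5, 1.0]
distorted = [round(fisheye(x), 3) for x in positions]
# → [0.0, 0.308, 0.5, 0.8, 1.0]
```

Applying the transform radially around the user's point of interest yields the familiar focus-plus-context magnifier while keeping the whole space visible.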

SV-6 If time is an important variable then integrate it into the display space. For comparing data that changes over time, it is useful to choose time as one axis. As described by Tufte, time is often an important causal variable for explaining patterns in data [Tufte 1983]. However, caution is needed, as time may be inappropriately seen as the causal influence for trends in the data [Tufte 1983]. Another option is to consider temporal auditory metaphors.

SV-7 Consider previous organisations of visual space as design options. A number of effective display designs have been previously developed, particularly for 2D space. Consider reusing these designs. For example, Bertin's theory identified the basic elements of diagrams and described a framework for their design [Bertin 1981]. Mackinlay developed a system for automatically creating graphical displays [Mackinlay 1981].

SV-8 Objects in visual space can be compared. Visual images are persistent and can quickly be compared by scanning between two objects. As objects are viewed, their locations become visually indexed so that the search time to relocate them is short. In contrast, both auditory and haptic objects have a temporal component that requires holding information in short-term memory for comparison.

7.2.2 Guidelines for Visual Spatial Properties

SV-9 Scale of objects in space provides information. Once a visual display space is defined, the visual scale of objects in this space can be compared or appraised. Visual scale cues include length, depth, area and volume.

SV-9.1 Visual scale should be proportional to the abstract quantity. This guards against injecting bias into the display. As Tufte notes "the representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented" [Tufte 1983].


SV-9.1.2 Area is not an accurate representation of scale. As noted by Tufte, "different people see the same area somewhat differently" [Tufte 1983]. There is also the issue of how well areas scale. For example, doubling the area may not be perceived as doubling the object's size. Experiments reveal approximate power laws relating the numerical measure to the reported perceived measure [Tufte 1983]. (Ref GP-2, GP-5.2)
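The power-law relationship mentioned above can be made concrete. The sketch below assumes Stevens' power law with an illustrative exponent of 0.7 for perceived area (reported exponents vary across studies and observers, and the scaling constant is taken as 1 for simplicity).

```python
def perceived_magnitude(physical: float, exponent: float = 0.7) -> float:
    """Stevens' power law with unit scaling constant:
    perceived = physical ** exponent.

    With an exponent below 1 the percept grows more slowly than the
    physical quantity, so doubling an area is seen as less than a
    doubling of size.
    """
    return physical ** exponent

# Doubling the physical area...
ratio = perceived_magnitude(2.0) / perceived_magnitude(1.0)
# ...is perceived as roughly a 1.62x increase, not 2x.
```

This is why encoding a quantity as area (or worse, volume) systematically understates differences compared with encoding it as length.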

SV-9.1.3 Minimise the number of display dimensions. Do not use two dimensions to display one dimensional data. Do not use three dimensions to display two dimensional or one dimensional data. As suggested by Tufte, “the number of information carrying dimensions should not exceed the number of dimensions of data" [Tufte 1983].

SV-9.2 Labelling removes ambiguity. Text labels can remove uncertainty by providing accurate measures. As Tufte notes, "clear, detailed, and thorough labelling should be used to defeat graphical distortion and ambiguity. Write out explanations of the data on the graphic itself. Label important events in the data" [Tufte 1983].

SV-9.3 Perceived size is constant. Once we perceive the size of an object, the principle of size constancy ensures that we perceive it to be that size regardless of distance. Our perception of size is based on a constancy scaling mechanism that supplements the information available on the retinas by taking the object's distance into account [Goldstein 1989, p250]. (GP-3)

SV-9.4 Perceived size is affected by context. The context in which an object is seen can lead the observer to misperceive size. Visual illusions are created by the context in which the objects are perceived and often occur when there is ambiguous, inaccurate or conflicting information [Goldstein 1989, p246]. An example is the Müller-Lyer illusion [Goldstein 1989, p258] shown in figure 7-1.

Figure 7-1 Examples of the Müller-Lyer illusion.


SV-9.4.1 Perceived size is affected by colour. It is known that the colour of an object can affect the perceived size. For example, redness can increase the perceived size [Goldstein 1989, p 258].

SV-9.4.2 Perceived size is affected by perceived distance. Our ability to perceive an object's size can sometimes be drastically affected by our ability to perceive the object's distance [Goldstein 1989, p245]. Looking towards the horizon can also increase the perceived size [Goldstein 1989, p258].

SV-10 Orientation of objects in space provides information. Once a space is defined the orientation of objects in this space can be compared or appraised. Orientation cues are slope, angle and curvature. There is direct support for perceiving orientation in the way neurones are organised in the cortex. For example, most cortical neurones respond best to barlike stimuli with specific orientations [Goldstein 1989, p77] and the cortex is organised into orientation columns with each column containing cells that respond best to a particular orientation [Goldstein 1989, p90].

SV-10.1 Adaptation to Orientation can occur. If a subject views a stimulus with a particular orientation then fatigue causes a decrease in sensitivity to that orientation [Goldstein 1989, p79]. This may affect long term performance with a display that uses orientation to display information.

7.2.3 Guidelines for Visual Spatial Structures

SV-11 To represent global structure use a visual display. Where global structures in the data are relevant, it is most appropriate to use a spatial visual display to represent these relationships. As noted by Tufte, "graphical displays should reveal the data at several levels of detail, from a broad overview to the fine structure" [Tufte 1983].

SV-11-1 The whole is different to the sum of its parts. This statement summarises the Gestalt principle. It provides a caution that it is not possible to predict how the whole display will be perceived by simply considering each part separately.

SV-11-1-1 We perceive figures in front of a background. While it is not possible to predict what will be seen as figure and what will be seen as background, there are some general rules [Goldstein 1989, p188]:

• Symmetrical figures tend to be seen as figure.
• Comparatively smaller areas are more likely to be seen as figure.
• Vertical or horizontal orientations are more likely to be seen as figure.
• Meaningful objects are more likely to be seen as figure.


SV-11-1-2 Contrast between figure and ground causes noise. Unintentional noise can be created in the display and this may affect the user's ability to interpret the data. For example, many dark lines near white lines can create optical effects (figure 7-2). The noise is directly proportional to the contrast between figure and ground [Tufte 1990].

Figure 7-2 Examples of noise caused by high contrast elements.

SV-11-2 Grouping principles apply to objects. There is a perceptual organisation of separate objects within a scene. Some visual elements become grouped, so that they seem to belong together and others appear separated from one another [Goldstein 1989, p 175]. These principles are shown in table 7-2.

Law of Simplicity "Every stimulus pattern is seen in such a way that the resulting structure is as simple as possible."

Law of Familiarity "Things are more likely to form groups if the groups appear familiar or meaningful."

Law of Similarity "Similar things appear to be grouped together."

Law of Good Continuation "Points that, when connected, result in straight or smoothly curving lines, are seen as belonging together, and the lines tend to be seen in such a way as to follow the smoothest path."

Law of Proximity "Things that are near to each other appear to be grouped together".

Law of Common Fate "Things that are moving in the same direction appear to be grouped together."

Law of Connectedness "Things that are physically connected are perceived as a unit."

Table 7-2 Gestalt principles of visual grouping [Goldstein 1989].

SV-12 Consider providing substructure to objects in space. Lower level structures may be provided in the display in the form of glyphs. This can provide detailed structure. There is direct support for the perception of specific forms in a display. While some neurones in the cortex respond to simple forms such as bars or corners, there are also neurones that respond to more complex stimuli, for example, a specific shape of a particular size at a precise location on the retina, or a specific shape combined with a particular colour or texture [Goldstein 1989, p111].

SV-12-1 Object recognition may use an alphabet of basic features. A theory has been proposed that object recognition is based on an alphabet of 36 basic volumetric primitives called geons [Goldstein 1989, p203]. These 3D shapes are view invariant, discriminable and resistant to visual noise and may provide a means of representing 3D categorical shapes.

7.3 Guidelines for Spatial Auditory Metaphors

7.3.1 Guidelines for the Auditory Display Space

SA-1 Use spatial auditory metaphors for displaying categorical data. Position in space can be used to display categories of data. Data that fall into categories will form spatial clusters recognisable by the position of the sound source.

SA-1.1 Base spatial categories around the visual space. In the real world the visual space provides an index to the position of sounds in space. Therefore use the visual space as a framework for the spatial sounds.

SA-1.2 Sound localisation occurs at the same time as sound identification. Identifying where a sound is coming from and identifying what the sound is occur in separate auditory pathways [Sekuler and Blake 1994]. The implication is that these two properties can be used to represent different information. Furthermore, that information will be interpreted in parallel.

SA-1.3 The relative spatial position of sounds is only inferred. When judging where a sound originates, the position must be inferred from available acoustic information. It is not like vision, where spatial relations are retained on the retina; with sound there is no relative information [Sekuler and Blake 1994]. This makes comparing and contrasting the position of sounds in space less accurate than for vision.

SA-2 Use spatial auditory metaphors for drawing attention to visual features. Sound acts as a navigation aid, especially where the eyes may be oriented in another direction. Sound tells the eyes where to look; this is known as the orienting response [Kramer 1994a].

SA-2.1 Auditory space surrounds the user. Auditory space can surround the user through a full 360 degrees. This may be useful in 3D environments where vision has a restricted field of view.

SA-2.2 Auditory space is not occluded. "Auditory space does not suffer from occlusion like visual space. Sound has the ability to detect, localize and identify signals arriving from any direction" [Kramer 1994a]. This can be important in 3D space where objects may occlude one another.

SA-2.3 Sounds can be heard from multiple directions. Unlike vision, where head and eye movements are required to serially process objects in multiple directions, hearing allows multiple sounds from different directions to be monitored in parallel.

SA-2.3.1 Use only a few spatial sounds. The ability of the user to detect sound positions when multiple sources are active at different locations is affected by inadequate short-term memory [Stuart 1996].

SA-2.3.2 Spatial location is inaccurate to the side. The location of sounds to the periphery is less accurate than in front of the head. For accurate localisation the head must be turned [Stuart 1996].

SA-3 Sound is localised using multiple cues. There are a number of sound cues that are used to detect sound position. These include:

• Differences in sound intensity between the two ears (intensity cues).
• Differences in the time at which the sound arrives at each ear (temporal cues).
• Reflections caused by the individual's ear shape (spectral cues).
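The temporal cue in the list above can be approximated analytically. The sketch below uses Woodworth's spherical-head formula for the interaural time difference, assuming an illustrative head radius of 8.75 cm and a speed of sound of 343 m/s (both constants are assumptions, not values from the text).

```python
import math

def interaural_time_difference(azimuth_deg: float,
                               head_radius_m: float = 0.0875,
                               speed_of_sound: float = 343.0) -> float:
    """Woodworth's approximation of the ITD for a spherical head.

    ITD = (r / c) * (sin(theta) + theta), with the azimuth theta in
    radians measured from straight ahead. Returns seconds.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source straight ahead produces no time difference; a source at the
# side (90 degrees) produces the maximum ITD, roughly 0.66 ms.
itd_front = interaural_time_difference(0.0)
itd_side = interaural_time_difference(90.0)
```

The model also illustrates the front/back ambiguity noted later in SA-3.3.1: sources mirrored about the interaural axis yield the same ITD, so this cue alone cannot disambiguate them.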

SA-3.1 The JND of spatial position depends on sound location. The JND varies depending on the location of the sound. It is about 1 degree in front of the head but much higher at the side [Stuart 1996].

SA-3.2 Intensity is the most reliable spatial cue. Everyone with normal hearing can detect changes in intensity, and it is easier to control these changes [Deatherage 1992].

SA-3.2.1 Stereo panning provides intensity cues. Providing left and right panning of sounds using intensity changes is a simple way to display auditory position.
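The panning described in SA-3.2.1 is commonly implemented with a constant-power law; the sketch below is a minimal illustration (the sine/cosine law and its -3 dB centre attenuation are properties of this particular law, not something prescribed by the guideline).

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for a pan position in [-1, 1].

    Uses the constant-power (sine/cosine) law, so that
    left**2 + right**2 == 1 and the perceived loudness stays roughly
    constant as the source moves across the stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)  # centre: both gains ~0.707 (-3 dB)
```

Multiplying each output channel's samples by these gains places an abstract data sound at a left-right position without any head-related processing.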

SA-3.3 Accurate 3D sounds require complex transfer functions. Because position in space is affected by the individual's ear shape this must be factored into the sound rendering to produce accurate 3D locations. Sophisticated sound displays make use of Head Related Transfer Functions to do this.

SA-3.3.1 Sounds from above and below are ambiguous. The most difficult positions to discriminate are sounds above or below the observer when they are equidistant from the ears. This is because the intensity and frequency cues are equal for both these positions and reflections within the pinnae must be utilised to disambiguate the two locations.


SA-3.3.2 We use head movements to improve 3D sound location. In a 3D environment head movements need to be factored into the sound production. We typically use head movements to discriminate between the ambiguous location of sounds.

SA-3.4 Complex sounds are easier to localise. Complex timbres are easier to locate in space and less tiring to the listener than simple tones [Scaletti and Craig 1991]. Complex timbres are composed of a wider spectrum of frequencies and as noted by Wenzel, "narrowband sounds are usually very difficult to localise, while broadband impulsive sounds are the easiest to locate" [Wenzel 1994].

SA-3.5 Echoes do not affect localisation. Echoes do not affect direction finding to any great extent. This is due to the precedence effect, which enhances perception of the direct acoustic wave and suppresses echoes that are late in arriving [Stuart 1996].

SA-4 Auditory space relieves the eyes. Use auditory displays to relieve the eyes. In general, auditory spatial displays are recommended when the eyes are fully engaged and additional spatial information is needed [Deatherage 1992].

SA-5 Auditory space can be a different resolution to visual space. It is possible to overlay a different resolution of auditory space on the visual space. For example, one measure of visual space may equate to 10 measures of auditory space. The user, on selecting a visual measure, may hear the corresponding 10 auditory measures.
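The 1-to-10 overlay in SA-5 amounts to a simple index mapping. The sketch below is a hypothetical illustration: selecting one visual measure yields the corresponding auditory measures to be played (the factor of 10 follows the example in the text; the function name is invented for illustration).

```python
def auditory_measures(visual_index: int, ratio: int = 10) -> list[int]:
    """Map one visual measure onto its `ratio` auditory sub-measures.

    Visual measure k covers auditory measures
    k * ratio .. k * ratio + ratio - 1, so the auditory display can
    present finer-grained data than the visual space it overlays.
    """
    start = visual_index * ratio
    return list(range(start, start + ratio))

# Selecting visual measure 2 plays auditory measures 20..29.
selected = auditory_measures(2)
```

The same mapping applies to the haptic overlay described later in SH-3.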

7.3.2 Guidelines for Auditory Spatial Properties

SA-6 Spatial position helps to separate sound streams. "The acoustic system can segregate, monitor and switch attention among simultaneous streams of sound when acoustic objects are distinguished by different locations in space" [Wenzel 1994].

SA-7 There is visual capture of sound sources. Sounds usually appear to come from where you see the source of the sound. Sometimes vision and hearing provide conflicting information and the observer sees the apparent sound source somewhere different from where the sound is being produced. In this situation the sound is perceived to come from the position of the visual cue. This is called visual capture or the ventriloquism effect [Goldstein 1989, p242].

SA-8 Perception of sound distance is not reliable. The perception of sound distance is poor because distance cues are not reliable and can be influenced by the direction of the sound source. The acoustic environment is also critical: for example, while echoes do not affect the perception of sound position, they do affect the perception of sound distance [Stuart 1996].


7.4 Guidelines for Spatial Haptic Metaphors

7.4.1 Guidelines for the Haptic Display Space

SH-1 Haptic space is useful for displaying constraints in the data. Haptics can be used to display local structures such as boundaries, limits, ranges, or constraints that occur in data. While this does not provide precise quantitative measures, it provides a general range of values and is a natural metaphor.

SH-2 Haptic feedback can be used to display temporal-spatial data. Because haptics is adept at sensing both spatial and temporal properties, it may be used for displaying information that evolves over both space and time. For example, force fields evolve over space and time and have traditionally been difficult to display visually [Stuart 1996].

SH-3 Haptic space can be at a different resolution to visual space. It is possible to overlay a different resolution of haptic space on the visual space. For example, one measure of visual space may equate to 10 measures of haptic space.

SH-3.1 Haptic feedback augments display of global visual models. Haptic feedback can augment global visual models that are too difficult to display in detail locally.

SH-3.2 Haptic space provides a finer level of resolution than vision. The sense of touch has a higher spatial resolution than vision [Goldstein 1989]. Therefore, for very fine detail, touch may be effective where vision is not.

7.4.2 Guidelines for Haptic Spatial Properties

SH-4 Haptic spatial properties should be consistent with visual properties. The visual perception of objects can perceptually bias the haptic perception of objects. Visual information can alter the haptic perception of object size, orientation and shape [Srinivasan and Basdogan 1997].

SH-4.1 Visual shape overrides haptic shape. For shape perception the visual perception of shape biases the haptic perception of shape [Welch and Warren 1980].

SH-4.2 Visual size overrides haptic size. The visual estimate of the size and length of objects overrides the haptic perception of size and length [Welch and Warren 1980].

SH-4.3 Visual orientation competes with haptic orientation. Whether haptic or visual perception of an object's orientation is dominant varies between users [Welch and Warren 1980].

SH-5 Haptic feedback provides information about position in space. In the real world haptics (and sound) signal contact with an object and thus verify the position of an object in space. It is sometimes difficult to resolve the exact depth of objects in 3D space. Haptic feedback can assist by providing an accurate depth cue.


SH-5.1 Human spatial resolution is about 0.15 mm. The spatial resolution on the finger pad is about 0.15 mm. Two points can be distinguished when they are about 1 mm apart [Durlach and Mavor 1995].

SH-5.2 We lose track of spatial location. The human haptic system tends to lose track of absolute spatial location [Sekuler and Blake 1994]. This makes accurate tracking of position in space difficult.

SH-5.3 There is a spatial map in the cortex. The sense of touch is organised around a spatial map. In the somatosensory cortex there is a map of the human body in which neighbouring neurones represent neighbouring parts of the body. However, this map is distorted so that more space is allocated to parts of the body that are more sensitive to stimulation [Goldstein 1989].

SH-5.4 Visual location overrides haptic location. There is an overwhelming bias of vision over haptic information about spatial location [Welch and Warren 1980]. This is an example of one modality overriding another so that a single uniform event is perceived. For example, when subjects viewed a stationary hand through a 14-degree displacing prism, it immediately felt as if the hand were located very near its seen (optically displaced) position [Welch and Warren 1980].

SH-6 The JND of length varies between 1-4 mm. For discriminating the length of objects, the JND is about 1 mm for objects around 10 mm in length. This increases to 2-4 mm for objects that are around 80 mm in length [Durlach and Mavor 1995].

SH-7 Sensitivity to rotation varies between joints. Humans can detect joint rotations with different degrees of sensitivity. Proximal joints have greater sensitivity to rotation than more distal joints. The JND is about 2.5 degrees for the wrist and elbow and about 0.8 degrees for the shoulder [Durlach and Mavor 1995].

7.4.3 Guidelines for Haptic Spatial Structures

SH-8 Use spatial haptic metaphors to represent local spatial structures. In the real world, vision and haptics combine to give overview and low-level structure. Spatial perception may not be inherently visual or haptic; contours may be interpreted the same way whether they come from vision or touch [Goldstein 1989]. Haptic feedback provides a good reinforcement of spatial structure but is only effective over smaller areas, because large structures must be temporally integrated into a whole. For example, subjects who had to navigate a maze performed best with a large visual-haptic ratio, that is, a large visual display and a small haptic workspace [Srinivasan and Basdogan 1997].

SH-8.1 Point force feedback is very localised. Current haptic devices only allow for point force feedback. With such feedback the stimulus is generated at a single point, so the display of shapes and other structures requires greater temporal integration. It is like using a finger tip to scan tactile information about a very restricted part of a broader picture. This requires piecing together momentary samples, which puts a huge load on a person's short-term memory [Sekuler and Blake 1994].

7.5 Conclusion

This chapter has described the guidelines for developing spatial metaphors. These include guidelines for spatial visual metaphors, which are summarised in table 7-3, and guidelines for spatial auditory metaphors, which are summarised in table 7-4. Finally, guidelines for spatial haptic metaphors were also presented; these are summarised in table 7-5.

Guidelines for the Visual Display Space

SV-1 Visual space dominates our perception of space. SV-1.1 Vision has a wide field of view. SV-1.2 Foveal vision is detailed. SV-1.3 Peripheral vision responds to changes.

SV-2 Position in visual space is the most accurate perception. SV-2.1 For spatial location, vision dominates other senses.

SV-2.1.1 Visual position dominates auditory position. SV-2.1.1 Visual position dominates proprioceptive position.

SV-3 Design visual space around the most important data variables. SV-4 Consider Orthogonal Spaces as a first choice in design.

SV-4.1 Use a 1D space where appropriate. SV-4.2 Use a 2D space where appropriate. SV-4.3 Use a 3D space where appropriate.

SV-4.3.1 Virtual Environments enhance the interpretation of 3D. SV-4.3.2 Be aware of perceptual depth cues.

SV-4.3.2.1 Virtual environments provide stereoscopic cues. SV-4.3.2.2 Virtual environments provide movement cues.

SV-5 Consider other uses of space in the design. SV-5.1 Distorted spaces allow for context and detail. SV-5.2 Subdividing the space may be a useful strategy.

SV-6 If time is an important variable then integrate it into the display space. SV-7 Consider previous organisations of visual space as design options. SV-8 Objects in visual space can be compared.

Guidelines for Visual Spatial Properties

SV-9 Scale of objects in space provides information.
  SV-9.1 Visual scale should be proportional to the abstract quantity.
    SV-9.1.2 Area is not an accurate representation of scale.
    SV-9.1.3 Minimise the number of display dimensions.
  SV-9.2 Labelling removes ambiguity.
  SV-9.3 Perceived size is constant.
  SV-9.4 Perceived size is affected by context.
    SV-9.4.1 Perceived size is affected by colour.
    SV-9.4.2 Perceived size is affected by perceived distance.
SV-10 Orientation of objects in space provides information.
  SV-10.1 Adaptation to orientation can occur.

Guidelines for Visual Spatial Structures

SV-11 To represent global structure use a visual display.
  SV-11.1 The whole is different to the sum of its parts.
    SV-11.1.1 We perceive figures in front of a background.
    SV-11.1.2 Contrast between figure and ground causes noise.
  SV-11.2 Grouping principles apply to objects.
SV-12 Consider providing substructure to objects in space.
  SV-12.1 Object recognition may use an alphabet of basic features.

Table 7-3 A summary of the guidelines for spatial visual metaphors.


Guidelines for the Auditory Display Space

SA-1 Use spatial auditory metaphors for displaying categorical data.
  SA-1.1 Base spatial categories around the visual space.
  SA-1.2 Sound localisation occurs at the same time as sound identification.
  SA-1.3 The relative spatial position of sounds is only inferred.
SA-2 Use spatial auditory metaphors for drawing attention to visual features.
  SA-2.1 Auditory space surrounds the user.
  SA-2.2 Auditory space is not occluded.
  SA-2.3 Sounds can be heard from multiple directions.
    SA-2.3.1 Use only a few spatial sounds.
    SA-2.3.2 Spatial location is inaccurate to the side.
SA-3 Sound is localised using multiple cues.
  SA-3.1 The JND of spatial position depends on sound location.
  SA-3.2 Intensity is the most reliable spatial cue.
    SA-3.2.1 Stereo panning provides intensity cues.
  SA-3.3 Accurate 3D sounds require complex transfer functions.
    SA-3.3.1 Sounds from above and below are ambiguous.
    SA-3.3.2 We use head movements to improve 3D sound location.
  SA-3.4 Complex sounds are easier to localise.
  SA-3.5 Echoes do not affect localisation.
SA-4 Auditory space relieves the eyes.
SA-5 Auditory space can be a different resolution to visual space.

Guidelines for Auditory Spatial Properties

SA-6 Spatial position helps to separate sound streams.
SA-7 There is visual capture of sound sources.
SA-8 Perception of sound distance is not reliable.

Table 7-4 A summary of the guidelines for spatial auditory metaphors.

Guidelines for the Haptic Display Space

SH-1 Haptic space is useful for displaying constraints in the data.
SH-2 Haptic feedback can be used to display temporal-spatial data.
SH-3 Haptic space can be at a different resolution to visual space.
  SH-3.1 Haptic feedback augments display of global visual models.
  SH-3.2 Haptic space provides a finer level of resolution than vision.

Guidelines for Haptic Spatial Properties

SH-4 Haptic spatial properties should be consistent with visual properties.
  SH-4.1 Visual shape overrides haptic shape.
  SH-4.2 Visual size overrides haptic size.
  SH-4.3 Visual orientation competes with haptic orientation.
SH-5 Haptic feedback provides information about position in space.
  SH-5.1 Human spatial resolution is about 0.15mm.
  SH-5.2 We lose track of spatial location.
  SH-5.3 There is a spatial map in the cortex.
  SH-5.4 Visual location overrides haptic location.
SH-6 The JND of length varies between 1-4mm.
SH-7 Sensitivity to rotation varies between joints.

Guidelines for Haptic Spatial Structure

SH-8 Use spatial haptic metaphors to represent local spatial structures.
  SH-8.1 Point force feedback is very localised.

Table 7-5 A summary of the guidelines for spatial haptic metaphors.


Chapter 8

Guidelines for Direct Metaphors

Fire, Earth, Air and Water (1997)

“A common mistake that people make when trying to design

something completely foolproof is to underestimate the ingenuity

of complete fools.” [Douglas Adams, Mostly Harmless]


8.1 Introduction

Chapter 6 introduced the MS-Guidelines and discussed some general high-level guidelines. This chapter contains the guidelines for the three groups of direct metaphors:

• Guidelines for direct visual metaphors (section 8.2)
• Guidelines for direct auditory metaphors (section 8.3)
• Guidelines for direct haptic metaphors (section 8.4).

8.2 Guidelines for Direct Visual Metaphors

DV-1 Direct visual metaphors are the first choice for displaying categorical data. While direct visual properties are not suitable for accurate display of quantitative information, they are very useful for displaying ordinal and nominal categories. Objects with direct visual properties can be easily compared when arranged in space [Tufte 1990].

8.2.1 Guidelines for Colour

DV-2 Colour aids in object searching (speed, accuracy, memorability). In visual search and recognition tasks colours add speed, accuracy and memorability [Hardin 1990].

DV-2.1 Colour reduces search time. When a user is required to search for a target object amongst other objects, the search time can be reduced by 33% if the objects are coloured rather than black [Hardin 1990]. The search time is reduced by more if a distinctive hue is used than if the object is given a distinctive shape or if the achromatic property of intensity (grey scale) is used [Hardin 1990]. If the task is to find a particular coloured object among a number of coloured objects, and the target object is categorically different in colour from the others, the time required to find the target is independent of the number of objects [Nagy and Sanchez 1990].

DV-2.2 Colour improves search accuracy. The gain in accuracy with the use of colours can be 176% better than size, 32% better than brightness, and 202% better than shape [Christ 1975].

DV-3 Colour reduces complexity by differentiating structures. Colours can be useful for reducing visual complexity as they can be used to create layers of information and separate or distinguish structure or clusters of data [Tufte 1990]. For example, in figure 8-1 three categories of information are distinguished by using three different colours and a single item is highlighted using an intense, saturated colour.

[Figure: a hierarchy of direct metaphors — direct visual metaphors (colour: hue, saturation, intensity (grey scale); direct visual shape; visual texture), direct auditory metaphors (pitch, loudness, timbre; musical, everyday and synthesis properties), and direct haptic metaphors (force, vibration, compliance, weight, friction, haptic surface texture, direct haptic shape).]

Figure 8-1 Three different colours are used to distinguish the three types of direct metaphors. A single item is highlighted using red.

DV-4 Colour does not convey exact quantitative values. Colour may be used to quantify a range of values but is not so effective for fine distinctions or exact measurements [Keller and Keller 1993].

DV-4.1 Sudden colour changes are obvious. When looking for a change in data around a particular quantitative value, using a sudden colour change to mark the value will make it stand out [Keller and Keller 1993].

DV-5 For ordinal data use ordered categories of colour. “If levels of a statistical variable are ordered then the colours chosen to represent them should be perceived as preserving order (dark to light - pale to saturated)” [Trumbo 1981].


DV-6 For nominal data use distinctly different colour categories. Important differences in the data should be mapped to colours that are clearly perceived as different. For example, distinct lightness levels or changes of hue [Trumbo 1981]. If asked to identify colours in the absence of a reference standard, even experts cannot reliably do so for more than 30 or 40 colours. It is best to use a few basic memorable categories [Hardin 1990].

DV-6.1 Focal colours provide effective categories. Compared to other colours the focal colours are the most readily picked out of piles of colour chips by young children, and they are more easily remembered and more quickly named by both adults and children [Hardin 1990] (figure 8-2). There are 11 focal colour categories: black, white, grey (middle) and red, yellow, blue, green, orange, purple, pink and brown [Rosch 1975].

DV-6.2 Non-focal colours also provide effective categories. A useful colour coding for visual search probably depends upon the number of discriminable steps separating the colours, rather than their basic (focal) or non-basic character. Non-basic colours such as red-pink, purple-pink, purple-blue, blue-green, yellow-green, yellow-orange and orange-red have been shown to be equally effective in searching for a known colour target [Boynton and Smallman 1990] (figure 8-2).

[Figure: eight focal colours (red, blue, green, yellow, pink, purple, orange, brown) and seven non-focal colours (red-pink, purple-pink, purple-blue, blue-green, yellow-green, yellow-orange, orange-red).]

Figure 8-2 Focal and non-focal colours used to define categories.

DV-7 Every person perceives colour differently. Due to the differences in the physiology of each person’s visual system, it is best to assume that even under the same viewing conditions, no two viewers will perceive colours the same [Keller and Keller 1993]. As Tufte notes, "viewers perception (of colour) is uncertain and complicated" [Tufte 1990].

DV-7.1 Some people suffer from colour deficiency. There are a number of types of colour deficiencies that occur with different frequencies in the population (table 8-1). People with a colour deficiency can have trouble discriminating between some colours (table 8-1).


Colour Deficiency          Difficulty Discriminating   Frequency
Monochromat                All colours                 Male 0.001%   Female 0.001%
Dichromat - protanopia     Red-Green                   Male 1.0%     Female 0.02%
Dichromat - deuteranopia   Red-Green                   Male 1%       Female 0.01%
Dichromat - tritanopia     Yellow-Green                Male 0.002%   Female 0.001%
Anomalous trichromat       Between colours             Male 0.001%   Female 0.001%

Table 8-1 Different frequency rates of colour deficiencies [Goldstein 1989, p149].

DV-8 The perceived colour is affected by its surroundings. Perception of colour is relative to viewing context (figure 8-3). Nearby colours interact to affect hue, saturation and intensity. For example, the same colour appears less saturated against a dark background than against a light background [Keller and Keller 1993]. Two identically coloured squares appear different when they are surrounded by different coloured backgrounds [Goldstein 1989]. This is known as simultaneous contrast and is caused by lateral inhibition in the neural processing of visual stimuli.

DV-8.1 Mach bands can occur with intensity changes. When two colours of different intensities meet there can be an illusion of intensity changes where none exists [Keller and Keller 1993]. These so-called Mach bands are due to lateral inhibition in visual neural pathways [Goldstein 1989].

DV-8.2 Colour mixing can occur. The eye may mix small regions of adjacent colours to create the perception of a different colour [Keller and Keller 1993].

Figure 8-3 The perceived colour is affected by neighbouring colours. The small central grey rectangle is the same colour in both cases, yet it appears different depending on the surrounding colour.

Figure 8-4 Mach bands. When light and dark bands of colour are placed next to each other, each uniform band can appear to vary in intensity; that is, it appears lighter on one side than on the other.

DV-9 Looking at a single colour can cause an afterimage. This is due to what is called the opponent process [Goldstein 1989, p 140]. Looking at a red square results in a green afterimage and vice versa; likewise, looking at a blue square results in a yellow afterimage and looking at a yellow square results in a blue afterimage [Goldstein 1989, p 140].


DV-9.1 Chromatic adaptation affects perceived colour. The perceived colour of an object is affected by the observer's state of chromatic adaptation. Prolonged viewing of a particular colour bleaches the colour pigment and changes the observer's sensitivity to that colour [Goldstein 1989].

DV-10 Colour affects the perceived size of objects. Two objects of the same size look larger or smaller depending on their colour relative to the background on which the colour is placed. For example, a light-coloured object on a dark background appears larger than the same object dark-coloured on a light background [Keller and Keller 1993].

DV-11 Memory affects colour. An object's characteristic colour influences our perception of its colour [Goldstein 1989, p161]. For example, we may perceive an apple shape to be redder than it actually is.

DV-12 Colour can create noise. Too many large areas of strong colour can be unpleasant to look at, and the "placing of light, bright colours mixed with white next to each other produces unpleasant results" [Tufte 1990].

DV-12.1 Minimise the use of colour. One way to reduce noise from colour is to minimise the number of colours selected, in particular highly saturated colours [Keller and Keller 1993].

DV-12.2 Use a neutral background. "For backgrounds choose neutral colours that provide adequate contrast to the main components of the image" [Keller and Keller 1993]. The background colour affects the perception of foreground colours.

8.2.2 Guidelines for Hue

DV-13 Users can distinguish about 24 steps of hue. The number of JNDs of hue that an observer can detect is about 24 [Trumbo 1981].

DV-13.1 Hues are continuous without a zero. The colour wheel is circularly continuous without a zero point [Card, Mackinlay et al. 1999]. This makes hue suitable for representing data on a continuous circular scale where no zero point is required. However, hues have no natural ordering.

DV-13.2 Hues provide a natural metaphor. Colours are easily associated with things and remembered; for example, blue is cool and red is hot. Using hue in this way then allows the intensity or saturation to provide quantification of some variable [Tufte 1990].

DV-13.3 Perceived hue is constant. Our perception of chromatic colour remains constant no matter how the illumination is changed [Goldstein 1989, p158]. Colour constancy works best when objects of many different colours surround an object [Goldstein 1989, p161].

8.2.3 Guidelines for Saturation

DV-14 Users can distinguish about 5 steps of saturation. Saturation is associated with chromatic vision. The number of JNDs of saturation that an observer can detect is about 5 [Trumbo 1981].

DV-14.1 Saturation is ordered and continuous. Saturation is ordered and continuous so it provides a good way to quantify data. As suggested by Keller and Keller, "gradually increase or decrease colour saturation to effectively show a continuum of change" [Keller and Keller 1993]. So hue can be used for recognising categories and saturation can be varied to provide a quantitative measure. Note however, that lightness may be a more effective way to represent quantity.

DV-14.2 Saturation changes can suggest subtle differences. "Use adjacent colours with the same saturation, on the colour wheel to suggest subtle differences" [Keller and Keller 1993].

8.2.4 Guidelines for Colour Intensity

DV-15 Users can distinguish about 100 steps of colour intensity. Intensity is also called value, lightness or grey scale and is associated with the perception of black, white and grey colours. It is also called achromatic colour and is sensed with the rod receptors in the retina. The number of different intensities that a user can detect is about 100 [Trumbo 1981].

DV-15.1 Colour intensity is ordered and continuous. Lightness is ordered and continuous so it provides a good way to quantify data. Hue can be used for recognising categories and lightness can be varied to provide a quantitative measure.
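As an illustrative sketch (my own; the category hues, lightness range and function name are arbitrary assumptions, not from the source), this hue-for-category, lightness-for-quantity mapping can be expressed with Python's standard colorsys module:

```python
import colorsys

# Hypothetical category hues as fractions of the colour wheel.
CATEGORY_HUES = {"rivers": 0.60, "roads": 0.08, "rail": 0.33}

def encode(category, value, l_min=0.25, l_max=0.85, saturation=0.9):
    """Map a category to a fixed hue and a normalised value (0..1) to
    lightness, which is ordered and continuous (DV-15.1). Returns an
    (r, g, b) tuple of 0-255 integers."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("value must be normalised to 0..1")
    lightness = l_min + (l_max - l_min) * value
    r, g, b = colorsys.hls_to_rgb(CATEGORY_HUES[category], lightness, saturation)
    return tuple(round(c * 255) for c in (r, g, b))
```

Low values render dark and high values light, so the ramp preserves perceived order, as DV-5 recommends for ordinal data.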

DV-15.2 Colour intensity is more effective than saturation for order. Mackinlay notes that intensity is more readily discernible than saturation in the display of ordinal information [Mackinlay 1981].

DV-15.3 Perceived colour intensity is influenced by surface shape. The way we perceive the curvature of a surface can influence the way we perceive its shading [Goldstein 1989, p63].

DV-15.4 Perceived colour intensity is affected by perceived illumination. Perceived lightness can be affected by the observer's interpretation of how light is falling on the object [Goldstein 1989, p165].

DV-15.5 Perceived colour intensity remains constant. Our perception of lightness remains constant no matter how the illumination is changed [Goldstein 1989, p162]. Because this constancy is maintained by the comparison of intensity between two areas, lightness constancy can break down if there are few objects in the display or one of them is illuminated.

8.2.5 Guidelines for Visual Texture

DV-16 Visual texture can be used to represent ordinal categories. There are neurones in the striate cortex that respond directly to different spatial frequencies [Goldstein 1989, p86]. Visual texture is ordered from smooth to rough and so is best used to represent ordinal categories.

8.2.6 Guidelines for Direct Visual Shape

DV-17 Direct visual shape can be used to represent nominal categories. Neurones in the cortex respond to both simple forms and more complex stimuli such as complex forms of different sizes [Goldstein 1989, p111]. Shapes are readily recognised and can be used to represent categories. Because shapes have no ordering they are best used for nominal categories.

DV-17.1 Perceived shape is constant. The visual system takes orientation into account to maintain a constant perception of shape [Goldstein 1989, p 167]. Knowledge of an object's real shape can influence our perception of the shape.

DV-17.2 Perceived shape depends on knowledge. Depending on the orientation, the shape of an object can be ambiguous. However, when we gain more information about the object's identity, this ambiguity vanishes and we see its familiar shape [Goldstein 1989, p170].

8.3 Guidelines for Direct Auditory Metaphors

DA-1 Direct auditory metaphors are the second choice for displaying categories. Direct visual properties, such as colour and shape, are generally better for displaying data because objects can easily be compared in space. However, when the visual field is crowded, sound attributes can be used as a second choice for displaying categorical data.

DA-1.1 Pitch and loudness can represent ordinal categories. Loudness and pitch can be used to display continuous quantitative data. However, these properties cannot be interpreted accurately and so are better used to represent distinct ordinal categories.

DA-1.2 Timbre can represent nominal categories. Timbre is unordered and so is useful for nominal categories. There is a wide range of available timbres that are discriminable. These include musical instruments, and sampled natural sounds, such as voices. Use timbres with multiple harmonics as this helps perception and avoids one sound masking another [Brewster, Wright et al. 1994].


DA-1.3 Sound does not add to visual clutter. One advantage of auditory attributes on objects is that they do not create any additional visual noise if the visual field is already crowded. So sound can provide extra information without producing interference.

DA-1.4 Do not use sound attributes to represent precise values. It is not appropriate to use pitch or modulations in volume, timbre or tempo change to indicate the precise values of variables except in the case where the variable can only take on a very small number of values [Jameson 1994].

DA-1.5 Limit the number of categories. If absolute identification is required, the number of categories should be limited to four, as listeners cannot identify correctly more than a few different intensities or pitches [Deatherage 1992].

DA-2 The properties of sound are not orthogonal. Direct auditory properties are not independent [Bly 1994]. For example, frequency affects perceived loudness and loudness affects perceived frequency. "A balanced auditory display with independently observable auditory dimensions may not be possible in practice due to psychoacoustic interaction between acoustic parameters in multi-dimensional sonifications" [Kramer 1994b]. Where accuracy is important it is advisable to use only a single auditory property.

DA-2.1 Make use of property overlaps in display. It may be possible to exploit known relationships of auditory properties to highlight similar relationships in the data [Kramer 1994b]. For example, if pitch and loudness both change in the same direction (up or down) the change will be noticeable. However, if pitch and loudness change in opposite directions the change may not be noticeable.

DA-2.2 Use multiple properties for a single data attribute. Mapping the same data attribute to more than one sound property protects against the fact that sound properties are not independent. This can emphasise changes if, for example, both loudness and pitch rise as the result of a single data variable increasing. The disadvantage is that the multiple sound attributes then display only a single data attribute.
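A minimal sketch of this redundant mapping (the function name and parameter ranges are my own assumptions, not from the source): one normalised data value drives both frequency and level in the same direction, so the two non-orthogonal properties reinforce rather than cancel each other:

```python
def redundant_sound_mapping(value, f_low=220.0, f_high=880.0,
                            db_low=40.0, db_high=70.0):
    """Map one normalised data value (0..1) to both a frequency (Hz)
    and a level (dB). Both properties rise together, so their
    psychoacoustic interaction emphasises the change (DA-2.2)."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("value must be normalised to 0..1")
    frequency = f_low + (f_high - f_low) * value
    level = db_low + (db_high - db_low) * value
    return frequency, level
```

Mapping the value to the opposite ends of the ranges (one rising, one falling) would risk the cancellation described in DA-2.1.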

DA-2.3 The perception of sound properties is under cognitive influences. "The speed and accuracy of identifications of the source of a sound depends on the listener's expectations, context and experience" [Ballas 1994].

DA-3 Use natural relationships in sound. The selection of mappings to sound properties should exploit learned or natural relationships of the users. For example, high frequencies are associated with up [McCormick and Sanders 1983].

DA-3.1 Auditory Icons use natural relationships. The concept of the auditory icon makes use of associations between events and sound properties from everyday experience. These associations are naturally learnt [Gaver 1986].


DA-3.2 Sounds have natural associations. Some properties of sounds have natural associations for listeners [Kramer 1994b]. Those associations are:

• Louder is more, softer is less.
• Brighter is more, duller is less.
• Faster is more, slower is less.
• Higher pitch is more, lower pitch is less.
• Higher pitch is up, lower pitch is down.
• Higher pitch is faster, lower pitch is slower.

DA-3.3 There are visual and aural relationships. There are also some known relationships between visual properties and sound properties [Ballas 1994]. Those relationships are:

• Sound frequency relates to vertical placement. High frequency relates to a high position.
• Amplitude relates to size. Loud implies big.
• Duration relates to horizontal length. Longer sounds imply longer lengths.

8.3.1 Guidelines for Loudness

DA-4 Loudness is an ordered property. Loudness is the perceived property of the magnitude of auditory sensation [Goldstein 1989] and is ordered from soft to loud. The perceived loudness is closely related to the amplitude of the original sound event, and depends upon both the pressure of the sound wave and its frequency. The pressure of the sound wave (intensity) is measured in decibels (dB).

DA-4.1 Sound intensity is related to perceived loudness. As sound intensity grows so does the perceived loudness, but the relationship is not linear. The relationship is expressed in the power law of loudness: Loudness = k × (Intensity)^0.67, where k is a constant [Sekuler and Blake 1994].

DA-4.2 Loudness depends on frequency. A 20,000Hz tone must reach 20dB to be heard, while a 20Hz tone must reach 75dB before it can be heard. We are most sensitive to sounds at 2000-4000Hz [Goldstein 1989]. The relationship of perceived loudness, sound pressure and frequency is complex, and can be described by audibility curves [Goldstein 1989, page 351] (figure 4-16).
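To make the power law concrete, here is a small illustrative sketch (my own, not from the source). Because the constant k cancels when comparing two intensities, the ratio of perceived loudness depends only on the intensity ratio; doubling the physical intensity raises loudness by a factor of about 1.6, not 2:

```python
def loudness_ratio(intensity_ratio, exponent=0.67):
    """Perceived-loudness ratio for a given physical-intensity ratio,
    from the power law Loudness = k * Intensity**0.67 (k cancels)."""
    return intensity_ratio ** exponent
```

Conversely, to double perceived loudness the intensity must rise by a factor of about 2**(1/0.67), roughly 2.8.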

DA-4.3 Loudness depends on bandwidth. The bandwidth of a sound can affect its perceived loudness: sounds composed of energies in the 1000-5000Hz band can sound louder than equal-amplitude sounds of other frequencies [Stuart 1996].

DA-4.4 Loud noises attract the user's attention. If loudness is mapped to a data attribute, the sudden occurrence of a loud sound can act as an alarm and may distract the user. Loud sounds draw the user's attention while soft sounds do not attract attention.


DA-4.4.1 Loud sounds can be annoying. Loud sounds can be startling or annoying to the user.

DA-4.5 Loud sounds mask soft sounds. A general rule is that louder sounds mask softer sounds [Stuart 1996].

DA-4.6 Loudness may be under the user's control. Many systems provide the users with a volume control. In this case users may vary the volume so that intensity may not be a reliable way to display data [Brewster, Wright et al. 1994].

DA-4.7 Use a range of 10-20dB above background noise. It is recommended that intensity should be a minimum of 10dB above the background noise and a maximum of 20dB above the background [Brewster, Wright et al. 1994].

DA-4.8 The JND of sound intensity is about 1-2dB. Users can judge changes in intensity of about 1-2dB [Sekuler and Blake 1994].

DA-4.9 Perceived loudness is subjective. Loudness is a subjective impression; while it is basically a function of intensity, it is also affected by the frequency and bandwidth of the sound wave and may be perceived differently by individuals [Stuart 1996].

8.3.2 Guidelines for Pitch

DA-5 Pitch is an ordered property. The perceived property of pitch is closely related to the frequency of the sound stimulus. Different frequencies are signalled by activity at different places in the auditory system: in what is known as place coding, the inner hair cells located at different places along the cochlea fire to specific frequencies. The organisation of the cortex maintains this tonotopic map [Goldstein 1989]. Pitch is ordered from low to high, and pitch differences naturally represent up and down (high and low pitch).

DA-5.1 Pitch depends on loudness. Changing the intensity of a sound can change its pitch. For example, at less than 2000Hz an increase in intensity increases the perceived pitch, while at greater than 3000Hz an increase in intensity decreases the perceived pitch [Brewster, Wright et al. 1994]. Resonance in the auditory canal has a slight amplifying effect on frequencies between 2,000Hz and 5,000Hz [Goldstein 1989].

DA-5.2 Pitch depends on timbre. Sounds that are bright in timbre (have more high harmonics) are perceived as higher in pitch than less bright sounds of the same frequency [Stuart 1996].

DA-5.3 The JND for pitch is about 3%. To discriminate pitch changes in pure tones the increment of change for the average listener needs to be greater than 3% for frequencies around 100Hz and 0.5% around 2000Hz [Mayer-Kress, Bargar et al. 1994].
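As a quick arithmetic sketch of these figures (my own illustration), the JND can be converted into an absolute frequency step: about 3 Hz near 100 Hz but about 10 Hz near 2000 Hz, so relative sensitivity improves with frequency even though the absolute step grows:

```python
def min_discriminable_step(frequency_hz, jnd_fraction):
    """Smallest detectable change in frequency, given the JND expressed
    as a fraction of the base frequency (DA-5.3)."""
    return frequency_hz * jnd_fraction

# From the source's figures: ~3% near 100 Hz, ~0.5% near 2000 Hz.
```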


DA-5.3.1 Use large changes in pitch for categories. Pitch is not recommended for categories unless there are large differences between the pitches [Brewster, Wright et al. 1994].

DA-5.3.2 Musicians are better able to judge pitch differences. It has been shown in experiments that musicians are better at differentiating pitch [Brewster, Wright et al. 1994].

DA-5.4 Perceived pitch is subjective. Pitch perception is subjective and influenced by the harmonics associated with a sound, not simply its dominant frequency [Sekuler and Blake 1994].

DA-5.4.1 Different tones may have the same pitch. Tones with different frequency components can have the same pitch [Goldstein 1989].

DA-5.5 We recognize pitch relations. The ability to recognise the exact pitch of a sound is quite rare, even amongst musicians [Sekuler and Blake 1990]. However, people with musical training often develop the ability to identify the difference between two notes. This is described as relative pitch. People in general are far better at recognising differences between pitches than absolute pitches [Sekuler and Blake 1990].

DA-5.5.1 Pitch intervals are key for displaying data. For displaying data pitch intervals are of greater significance than specific pitch values [Williams].

DA-5.6 It is difficult to perceive the pitch of short sounds. It is difficult to judge the pitch of a short note [Madhyastha and Reed 1994].

DA-5.7 Sounds with similar frequencies mask each other. Masking is greater for sounds around the same frequency [Goldstein 1989]. Masking effects occur between frequency components presented simultaneously or close to each other in time. The perceived pitch depends on the context of other sounds in which it is heard [Williams 1994].

DA-5.7.1 Low frequency sounds mask high frequency sounds. A general rule is that low frequency sounds mask high frequency sounds [Stuart 1996].

DA-5.8 High pitch sounds are irritating. Users may find high frequency sounds more irritating than low frequency sounds [Scaletti and Craig 1991].

8.3.3 Guidelines for Timbre

DA-6 Timbre is a nominal property. Timbre is unordered but is useful for representing nominal categories. The timbre is determined primarily by the spectrum of frequencies that make up the sound. Other factors such as the onset and offset of the sound are also important [Kramer 1994a].


DA-6.1 Use timbre with multiple harmonics. It is easier to perceive the identity of timbres with multiple harmonics. This also helps avoid problems with masking. Most musical instruments produce sounds composed of multiple harmonics [Brewster, Wright et al. 1994].

DA-6.2 The perceived timbre is affected by pitch. Some sound spectra will be perceived as changing when played at different pitches [Kramer 1994b].

DA-6.3 It is difficult to perceive the timbre of a short sound. It is difficult to distinguish the timbre of a very short sound [Madhyastha and Reed 1994].

DA-6.4 It is difficult to perceive the timbre of a high-pitched sound. It is difficult to distinguish the timbre of a high-pitched note [Madhyastha and Reed 1994].

8.4 Guidelines for Direct Haptic Metaphors

DH-1 Direct haptic metaphors are the third choice for displaying categories. Direct visual properties such as colour and shape are generally better for displaying data because they can easily be compared. Direct auditory properties such as pitch and timbre are also effective for displaying data categories. However, because auditory properties are not orthogonal, only a few can be used. Direct haptic properties such as hardness and surface texture provide a third choice for displaying categorical data.

DH-1.1 The visual model affects the perception of haptic properties. Visual information has been shown to alter the perception of haptic properties such as stiffness [Srinivasan, Beauregard et al. 1996] and shape [Srinivasan and Basdogan 1997].

DH-1.1.1 Visual attention can affect the tactile response. For some tasks visual attention can affect the tactile response [Goldstein 1989]. The implication is that in multi-sensory displays visual attention may be focused on the visual properties of the display, and this may reduce the effectiveness of the haptic properties.

DH-1.2 The auditory model affects the perception of haptic attributes. Auditory information has been shown to alter the perception of haptic properties such as surface stiffness [DiFranco, Beauregard et al. 1997].

DH-2.0 Individuals have very different haptic perceptions. The individual differences in many measures of haptic perception are large [Stuart 1996].

DH-2.1 Use large differences to display categories. Because of the large differences between individuals, it is safer to use large categorical differences between haptic properties.


8.4.1 Guidelines for Force

DH-3 Force is an ordinal property. Force is ordered but it is not judged precisely; this makes it useful for displaying ordinal categories.

DH-3.1 The JND for force is about 7%. The JND for contact force is about 7% [Srinivasan and Basdogan 1997], although a range of 5-15% has been reported [Durlach and Mavor 1995]. A variation of 0.5 Newtons can be detected [Stuart 1996].
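A JND quoted as a percentage is a Weber fraction: each discriminable step must exceed the previous level by that ratio. As a sketch (the function name and example force range are illustrative, not from the source), the number of force categories a device can usefully display over a given range follows directly:

```python
import math

def discriminable_levels(lo, hi, jnd=0.07):
    """Number of force magnitudes a user can tell apart between lo and hi
    (in Newtons), assuming each step must exceed the Weber fraction `jnd`.
    The 7% default is the contact-force JND quoted in DH-3.1."""
    if lo <= 0 or hi < lo:
        raise ValueError("need 0 < lo <= hi")
    return 1 + math.floor(math.log(hi / lo) / math.log(1 + jnd))

# e.g. an illustrative device range of 0.5 N to 4 N with a 7% JND
print(discriminable_levels(0.5, 4.0))  # prints 31
```

With the more conservative 15% JND the same range yields far fewer levels, which is one reason the guidelines treat force as ordinal rather than quantitative.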

DH-3.2 Force fields can display vector fields. In some domains, such as scientific visualisation, vector fields are often modelled. The temporal and spatial nature of these fields suggests that force is a natural metaphor for displaying them.

DH-3.3 Strong forces distract attention. If force is mapped to a data attribute, the sudden occurrence of a strong force can surprise and distract the user.

8.4.2 Guidelines for Haptic Surface Texture

DH-4 Haptic surface texture is an ordinal property. Surface texture ranges from the slip of a smooth surface such as glass to the roughness of more abrasive surfaces such as sandpaper. This property is ordered from smooth to rough but it is not judged precisely; this makes it useful for displaying ordinal categories.

DH-4.1 Touch is equal to vision for comparing surface smoothness. Touch and vision have been shown to provide comparable levels of performance when observers select between smooth surfaces [Morton 1982].

DH-4.2 Visual surface texture affects haptic surface texture. Using vision and touch together improves the discrimination of surface texture [Morton 1982]. Thus a combined display may increase the number of categories that can be displayed.

8.4.3 Guidelines for Direct Haptic Shape

DH-5 Direct haptic shape is a nominal property. Shape is an unordered haptic property, and this makes it useful for displaying nominal categories.

DH-5.1 Direct haptic shape is biased by vision. Touch is usually dominated by vision when the two senses are placed in conflict with each other in shape perception tasks. This is known as intersensory dominance. For example, in an experiment to test for this effect, subjects viewed a square object through a distorting prism that made it look like a rectangle. While viewing the object the subjects could also feel its square shape. Most subjects reported both seeing and feeling a rectangular shape [Goldstein 1989, p210].

DH-5.2 Visual shape recognition is faster than touch. Vision registers shape more accurately and rapidly than touch [Goldstein 1989, p209].

8.4.4 Guidelines for Compliance

DH-6 Compliance is an ordinal property. The surface compliance of objects is an ordered property that cannot be judged precisely. This suggests compliance is useful for displaying ordinal categories.

DH-6.1 The JND of compliance depends on the type of surface. Discrimination of compliance depends on whether the object has a deformable or rigid surface. It is more difficult to judge the compliance of rigid surfaces. The JND of deformable surfaces in a pinch grasp is about 5-15%. The JND of a rigid surface is about 23-34% [Durlach and Mavor 1995].

DH-6.2 The visual model affects perceived stiffness. Changing the visual representation of an object can alter the perceived haptic stiffness of a spring [Srinivasan, Beauregard et al. 1996].

DH-6.3 The auditory model affects the perceived stiffness. Using sound in conjunction with haptics can alter the perceived stiffness of a surface [DiFranco, Beauregard et al. 1997].

DH-6.4 Fast haptic rendering is required for rigid surfaces. The haptic rendering rate on force feedback devices must be maintained at 1000 Hz to create the illusion of a rigid surface [Srinivasan and Basdogan 1997]. Rendering at slower rates can create the impression of a soft, yielding surface.

8.4.5 Guidelines for Friction and Viscosity

DH-7 Viscosity is an ordinal property. Viscosity is ordered but it is not judged precisely; this makes it useful for displaying ordinal categories.

DH-7.1 The JND for viscosity is about 12%. Users can discriminate viscosity categories with a JND of about 12% [Srinivasan and Basdogan 1997].

8.4.6 Guidelines for Weight and Inertia

DH-8 Weight and inertia are ordinal properties. Weight and inertia are ordered but cannot be judged precisely; this makes them useful properties for displaying ordinal categories.


DH-8.1 The JND for weight is 10-20%. The JND required to distinguish between weights is reported as 10% of the reference value [Durlach and Mavor 1995]. An alternative source estimates that the JND is 20% [Srinivasan and Basdogan 1997].

DH-8.2 Temperature of objects affects perception of weight. The temperature of an object affects its perceived weight: cold objects feel heavier than warm objects of the same weight [Durlach and Mavor 1995].

DH-8.3 The visual model affects the perception of weight. Larger objects are judged to be heavier than smaller objects even if they weigh the same. For example, subjects make systematic errors when discriminating objects of similar weights whose size is not related to weight, judging the bigger objects to be heavier [Von der Heyde and Häger-Ross 1998].

8.4.7 Guidelines for Vibration

DH-9 Vibration is an ordinal property. Vibration is ordered but it is not judged precisely; this makes it useful for displaying ordinal categories.

DH-9.1 Detection threshold for vibration depends on frequency. The intensity of a vibration required for detection depends on the frequency (table 8-2).

Frequency (Hz)    Threshold (dB)
0.4-3             28
3-30              decreases by 5 dB per octave
30-250            decreases by 12 dB per octave
> 250             increases

Table 8-2 Detection thresholds for vibration [Durlach and Mavor 1995].
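Table 8-2 can be turned into a piecewise curve; the decreases are per octave, i.e. per doubling of frequency. The sketch below is an illustrative interpolation of the table (the function name is mine, and it assumes the 28 dB baseline holds across the 0.4-3 Hz band; above 250 Hz the table only says the threshold increases, so no value is returned there):

```python
import math

def vibration_threshold_db(freq_hz):
    """Approximate vibration detection threshold from table 8-2
    [Durlach and Mavor 1995]; returns None outside the tabulated range."""
    if freq_hz < 0.4:
        return None
    if freq_hz <= 3:
        return 28.0
    if freq_hz <= 30:
        # falls 5 dB per octave above 3 Hz
        return 28.0 - 5.0 * math.log2(freq_hz / 3.0)
    if freq_hz <= 250:
        # falls 12 dB per octave above 30 Hz, from the 30 Hz value
        t30 = 28.0 - 5.0 * math.log2(30.0 / 3.0)
        return t30 - 12.0 * math.log2(freq_hz / 30.0)
    return None  # table only states that the threshold increases

print(vibration_threshold_db(6.0))  # one octave above 3 Hz: prints 23.0
```

Sensitivity is greatest towards the top of the 30-250 Hz band, which suggests placing the most important vibration categories there.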

8.5 Conclusion

This chapter has described the guidelines for developing direct metaphors. These include guidelines for direct visual metaphors, summarised in table 8-3; guidelines for direct auditory metaphors, summarised in table 8-4; and guidelines for direct haptic metaphors, summarised in table 8-5.


Guidelines for Direct Visual Metaphors

DV-1 Direct visual metaphors are the first choice for displaying categorical data.

Guidelines for Colour
DV-2.0 Colour aids in object searching (speed, accuracy, memorability).
DV-2.1 Colour reduces search time.
DV-2.2 Colour improves search accuracy.
DV-3 Colour reduces complexity by differentiating structures.
DV-4 Colour does not convey exact quantitative values.
DV-4.1 Sudden colour changes are obvious.
DV-5 For ordinal data use ordered categories of colour.
DV-6 For nominal data use distinctly different colour categories.
DV-6.1 Focal colours provide effective categories.
DV-6.2 Non-focal colours also provide effective categories.
DV-7 Every person perceives colour differently.
DV-7.1 Some people suffer from colour deficiency.
DV-8 The perceived colour is affected by its surroundings.
DV-8.1 Mach bands can occur with intensity changes.
DV-8.2 Colour mixing can occur.
DV-9 Looking at a single colour can cause an afterimage.
DV-9.1 Chromatic adaptation affects perceived colour.
DV-10 Colour affects the perceived size of objects.
DV-11 Memory affects colour.
DV-12 Colour can create noise.
DV-12.1 Minimise the use of colour.
DV-12.2 Use a neutral background.

Guidelines for Hue
DV-13 Users can distinguish about 24 steps of hue.
DV-13.1 Hues are continuous without a zero.
DV-13.2 Hues provide a natural metaphor.
DV-13.3 Perceived hue is constant.

Guidelines for Saturation
DV-14 Users can distinguish about 5 steps of saturation.
DV-14.1 Saturation is ordered and continuous.
DV-14.2 Saturation changes can suggest subtle differences.

Guidelines for Colour Intensity
DV-15 Users can distinguish about 100 steps of colour intensity.
DV-15.1 Colour intensity is ordered and continuous.
DV-15.2 Colour intensity is more effective than saturation for order.
DV-15.3 Perceived colour intensity is influenced by surface shape.
DV-15.4 Perceived colour intensity is affected by perceived illumination.
DV-15.5 Perceived colour intensity remains constant.

Guidelines for Visual Texture
DV-16 Visual texture can be used to represent ordinal categories.

Guidelines for Direct Visual Shape
DV-17 Direct visual shape can be used to represent nominal categories.
DV-17.1 Perceived shape is constant.
DV-17.2 Perceived shape depends on knowledge.

Table 8-3 A summary of the guidelines for direct visual metaphors.


Guidelines for Direct Auditory Metaphors

DA-1 Direct auditory metaphors are the second choice for displaying categories.
DA-1.1 Pitch and loudness can represent ordinal categories.
DA-1.2 Timbre can represent nominal categories.
DA-1.3 Sound does not add to visual clutter.
DA-1.4 Do not use sound attributes to represent precise values.
DA-1.5 Limit the number of categories.
DA-2 The properties of sound are not orthogonal.
DA-2.1 Make use of property overlaps in display.
DA-2.2 Use multiple properties for a single data attribute.
DA-2.3 The perception of sound properties is under cognitive influences.
DA-3 Use natural relationships in sound.
DA-3.1 Auditory icons use natural relationships.
DA-3.2 Sounds have natural associations.
DA-3.3 There are visual and aural relationships.

Guidelines for Loudness
DA-4 Loudness is an ordered property.
DA-4.1 Sound intensity is related to perceived loudness.
DA-4.2 Loudness depends on frequency.
DA-4.3 Loudness depends on bandwidth.
DA-4.4 Loud noises attract the user's attention.
DA-4.4.1 Loud sounds can be annoying.
DA-4.5 Loud sounds mask soft sounds.
DA-4.6 Loudness may be under the user's control.
DA-4.7 Use a range of 10-20 dB above background noise.
DA-4.8 The JND of sound intensity is about 1-2 dB.
DA-4.9 Perceived loudness is subjective.

Guidelines for Pitch
DA-5 Pitch is an ordered property.
DA-5.1 Pitch depends on loudness.
DA-5.2 Pitch depends on timbre.
DA-5.3 The JND for pitch is about 3%.
DA-5.3.1 Use large changes in pitch for categories.
DA-5.3.2 Musicians are better able to judge pitch differences.
DA-5.4 Perceived pitch is subjective.
DA-5.4.1 Different tones may have the same pitch.
DA-5.5 We recognise pitch relations.
DA-5.5.1 Pitch intervals are key for displaying data.
DA-5.6 It is difficult to perceive the pitch of short sounds.
DA-5.7 Sounds with similar frequencies mask each other.
DA-5.7.1 Low frequency sounds mask high frequency sounds.
DA-5.8 High pitch sounds are irritating.

Guidelines for Timbre
DA-6 Timbre is a nominal property.
DA-6.1 Use timbre with multiple harmonics.
DA-6.2 The perceived timbre is affected by pitch.
DA-6.3 It is difficult to perceive the timbre of a short sound.
DA-6.4 It is difficult to perceive the timbre of a high-pitched sound.

Table 8-4 A summary of the guidelines for direct auditory metaphors.


Guidelines for Direct Haptic Metaphors

DH-1 Direct haptic metaphors are the third choice for displaying categories.
DH-1.1 The visual model affects the perception of haptic properties.
DH-1.1.1 Visual attention can affect the tactile response.
DH-1.2 The auditory model affects the perception of haptic attributes.
DH-2.0 Individuals have very different haptic perceptions.
DH-2.1 Use large differences to display categories.

Guidelines for Force
DH-3 Force is an ordinal property.
DH-3.1 The JND for force is about 7%.
DH-3.2 Force fields can display vector fields.
DH-3.3 Strong forces distract attention.

Guidelines for Haptic Surface Texture
DH-4 Haptic surface texture is an ordinal property.
DH-4.1 Touch is equal to vision for comparing surface smoothness.
DH-4.2 Visual surface texture affects haptic surface texture.

Guidelines for Direct Haptic Shape
DH-5 Direct haptic shape is a nominal property.
DH-5.1 Direct haptic shape is biased by vision.
DH-5.2 Visual shape recognition is faster than touch.

Guidelines for Compliance
DH-6 Compliance is an ordinal property.
DH-6.1 The JND of compliance depends on the type of surface.
DH-6.2 The visual model affects perceived stiffness.
DH-6.3 The auditory model affects the perceived stiffness.
DH-6.4 Fast haptic rendering is required for rigid surfaces.

Guidelines for Friction and Viscosity
DH-7 Viscosity is an ordinal property.
DH-7.1 The JND for viscosity is about 12%.

Guidelines for Weight and Inertia
DH-8 Weight and inertia are ordinal properties.
DH-8.1 The JND for weight is 10-20%.
DH-8.2 Temperature of objects affects perception of weight.
DH-8.3 The visual model affects the perception of weight.

Guidelines for Vibration
DH-9 Vibration is an ordinal property.
DH-9.1 Detection threshold for vibration depends on frequency.

Table 8-5 A summary of the guidelines for direct haptic metaphors.


Chapter 9

Guidelines for Temporal Metaphors

Modern Day Tragedy (1984)

"Time has been transformed, and we have changed; it has advanced and set us in motion; it has unveiled its face, inspiring us with bewilderment and exhilaration." [Kahlil Gibran, 'Children of Gods, Scions of Apes']


MS-GUIDELINES Chapter 9 – Guidelines for Temporal Metaphors 243

Chapter 9

Guidelines for Temporal Metaphors

9.1 Introduction

Chapter 6 introduced the MS-Guidelines and discussed some general high-level guidelines. This chapter contains the guidelines for the three groups of temporal metaphors:

• Guidelines for temporal visual metaphors (section 9.2)
• Guidelines for temporal auditory metaphors (section 9.3)
• Guidelines for temporal haptic metaphors (section 9.4).

9.2 Guidelines for Temporal Visual Metaphors

TV-1 Use temporal visual metaphors to understand change. Consider temporal visual metaphors for temporal data, such as data that changes over time or occurs as events in time. Transitions or steps of change in the data can be illustrated by changing the position or properties of objects in space. Animation also helps the user maintain their understanding of spatial relationships.

9.2.1 Movement Events

TV-2 Movement helps organise objects in the scene. Objects that move together are grouped together [Goldstein 1989, p293]. This principle holds not only for objects moving in the same direction, but for any structured movement, such as when objects are scaled or rotated [Friedrich 2002].

TV-3 Simple motion is perceived by default. We typically perceive objects as moving in straight lines. This is known as the shortest path constraint. When two visual stimuli are separated in time and space, movement is usually perceived to occur along the shortest path between the two stimuli [Goldstein 1989, p290]. An exception is when the motion obeys some simple biological constraints [Goldstein 1989, p293].

TV-4 Use movement to draw attention to changes. Moving objects attract our attention. This is particularly true for objects moving at the edge of the visual field: movement in the peripheral visual field usually triggers an eye movement that brings the moving object's image into the fovea [Goldstein 1989, p274]. There is also direct neural support for detecting types of movement. For example, the cortex contains a number of cells that respond best when a correctly oriented bar of light moves across the entire receptive field, and many cells respond best to a particular direction of movement [Goldstein 1989].

TV-4.1 The threshold for movement is 1/6 - 1/3 of a degree of visual angle. Perceiving movement in a homogenous field requires a velocity of 1/6 - 1/3 of a degree of visual angle [Goldstein 1989, p280].

TV-4.2 Perception of movement depends on velocity and context. Perception of movement depends on the velocity of the moving stimulus and its surroundings [Goldstein 1989, p280]. Often the context within which the stimulus moves affects our perception of its movement [Goldstein 1989, p289].

TV-4.3 Vision responds to optical flow patterns. The rate of angular expansion provides valuable information about moving objects. The cortex contains neurones that respond best to patterns of dots expanding outwards; it also responds to other optical flow patterns, such as dots that follow circular or spiral motions [Goldstein 1989, p297]. The implication is that these types of flow patterns may be useful for presenting information.

TV-5 Movement provides spatial information. Movement provides additional information about structure, shape and depth.

TV-5.1 Movement provides information about shape. Movement provides information about an object's 3-D shape [Goldstein 1989, p275]. Movement provides perceptual cues about the object's structure; this is called the kinetic depth effect [Goldstein 1989, p292].

TV-5.2 Movement provides information about depth. Movement-produced depth cues include motion parallax, deletion and accretion [Goldstein 1989, p231] (table 2-3).

TV-5.3 Movement provides information about form. Movement provides information about figure and ground, enhancing the perception of form [Goldstein 1989, p274].

9.2.2 Alarm Events

TV-6 The visual system needs time to perceive changes. While the visual system is rapid, it requires time to perceive the movement of complex, meaningful stimuli; this processing requires about 200 ms [Goldstein 1989, p291]. The implication is that more rapid movements in a display cannot be interpreted.

9.2.3 Transition Events

TV-7 Perceptual constancy resists change. Properties of objects that change slowly over time may not be noticed, as we tend to perceive object properties as constant (GP-3).

TV-7.1 Motion is constant. Once movement starts in one direction, one perceives it to continue in that direction [Goldstein 1989, p290]. Users of an information display may not identify slight changes in movement over time.

TV-8 Moving images must change constantly. Maintaining the illusion of animation requires a minimum of 10 frames per second [Robertson, Card et al. 1993]. However, for virtual environments the aim is usually to provide 25 frames per second. With stereoscopic views the required rendering rate becomes 50 frames per second [Durlach and Mavor 1995].
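Each target frame rate translates directly into a time budget that a frame's rendering work must fit within. A minimal sketch of that arithmetic (the function name and labels are illustrative):

```python
def frame_budget_ms(frames_per_second):
    """Rendering time available per frame, in milliseconds."""
    return 1000.0 / frames_per_second

# Budgets for the rates quoted in TV-8:
for label, fps in [("animation minimum", 10),
                   ("virtual environment", 25),
                   ("stereoscopic view", 50)]:
    print(f"{label}: {frame_budget_ms(fps):.0f} ms per frame")
```

So a stereoscopic view leaves only 20 ms per frame, a fifth of the budget available at the bare animation minimum.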

9.3 Guidelines for Temporal Auditory Metaphors

TA-1 Use temporal auditory metaphors to display time series data. Hearing is mainly a temporal sense, so it is good for detecting changes over time [Kramer 1994a]. Ears are good at certain types of data analysis, for example picking up correlations and repetitions of various sorts. Hearing detects sounds repeated at regular rhythms, and is especially useful where a sudden change to constant information needs to be detected.

TA-1.1 Sound reveals short-term patterns. Because picking out patterns in sound requires holding past events in short-term memory, it is most suitable for patterns that occur over a short time period.

TA-1.2 Periodic data sounds better. When the data has periodic or quasi-periodic events it is likely to sound more interesting than a non-periodic data set [Scaletti].

TA-1.3 Auralisation of arbitrary data is not useful. The direct playing (auralisation) of arbitrary time-series data is often unsuccessful because the data relations are physically unconstrained and generate acoustics that are not matched to the perceptual capabilities and expectations of the listener [Hayward 1994].

TA-2 Auditory events can be compressed in time. Time compression of audio signals reduces the time to replay a series of data. Because of the high temporal resolution, auditory events can still be distinguished. The temporal resolution ranges from milliseconds to several thousand milliseconds [Kramer 1994a, Stuart 1996].


TA-3 Avoid auditory extremes. When mapping data to auditory events, avoid situations where extreme auditory properties are displayed. For example, very loud signals can cause a startle response and actually disrupt the user's performance [McCormick and Sanders 1983].

TA-3.1 Make intensity relative to ambient noise. It is important to establish a signal intensity that is relative to the ambient noise level so that it is not masked and yet not so loud as to startle the listener [McCormick and Sanders 1983].

TA-3.2 Sounds can be unpleasant. Sounds should strive to be pleasant and unobtrusive [Cohen 1994]. Sounds can easily become irritating; even interesting sounds can become annoying when presented repeatedly [Smith, Pickett et al. 1994].

9.3.1 Transition Events

TA-4 Sound is good for monitoring data. We hear things we do not see, so sound is good for monitoring what is happening in the background. Sound is useful for monitoring a situation when the eyes are busy. People can process audio information while simultaneously engaged in an unrelated task [Cohen 1994].

TA-4.1 Constant sound fades into the background. Sounds drop out of conscious awareness if they are too constant [Stuart 1996]. While changing sound events attract attention, sustained or uninformative sounds are relegated to the background [Wenzel 1994, Cohen 1994].

TA-4.2 Use departures from normal to convey information. It is best to provide a standard stimulus to represent the normal situation and then make abrupt changes to indicate departures from the normal. Listeners are sensitive to frequency or intensity changes but poor at identifying a unique signal [Deatherage 1992]. Many cortical neurones respond to abstract changes in features of sound, for example, a sound whose frequency goes up [Sekuler and Blake 1994].

TA-5 Perception of an auditory property remains stable. Having achieved one interpretation of an acoustic signal, that interpretation will remain fixed through slowly changing parameters until the original interpretation is no longer appropriate. This may cause problems for auditory monitoring: gradual changes in frequency or intensity may exceed normal thresholds by a considerable amount before the listener detects a change of state [Williams 1994].

TA-5.1 Perception adapts to loudness. There are a number of adaptive mechanisms, such as loudness adaptation, which reduces the perceived loudness of a sound after prolonged exposure to that sound.

TA-5.2 Gradual sound changes may not be identified. Slow changes to a sound may not be noticed; a sudden change to a sound signal is more likely to attract attention than a slowly changing one [Smith, Pickett et al. 1994]. Use interrupted or variable signals and avoid steady state signals to minimise perceptual adaptation [McCormick and Sanders 1983].

TA-5.3 Use intermittent changes in signal. It is best to use intermittent or repeated changes in a signal rather than a single change followed by a continuous signal. The ear is much more likely to detect changes in a signal occurring every second or two than at longer intervals [Deatherage 1992].

TA-6 Sounds are organised into streams. A stream of sound is a group of auditory events that are perceived to come from the same sound source. There are two types of auditory grouping. Primitive grouping is a fast, innate mechanism that groups sounds based on acoustic properties of the sound stimulus. Schema segregation is a slower, more conscious selection of elements from groups and an active restructuring by effort of will [Bregman 1990], [Williams 1994].

TA-6.1 We can monitor multiple streams of sound. We are capable of monitoring multiple auditory streams. This is sometimes known as the cocktail party effect. A person can selectively attend to one conversation in the midst of others, or switch attention from one conversation to another. Similarly it is possible for users to monitor multiple background sounds as long as the sounds are distinguishable [Cohen 1994].

TA-6.2 We can detect temporal coherence between streams. Audition is specialised to process multiple temporally defined objects or streams simultaneously. Our ears are very sensitive to temporal coherence between two streams (the onset and offset of sound), and such coherence is often used to define a sound source [Fitch and Kramer 1994].

TA-6.3 We cannot compare temporal relations in separate streams. Judgements of temporal relations, such as the order of events, are difficult to make across streams [Bregman 1990].

TA-6.4 We cannot compare the properties of separate streams. While we can monitor separate streams for changes and compare temporal coherence, it is difficult to make relative judgements about the properties of different streams. To compare properties of sound they should be in the same stream [Barass 1997]. The equivalence of two ongoing events is difficult to perceive quantitatively for an extended time [Bargar 1994].

TA-6.5 Simple tasks within a stream are more accurate. Simple tasks such as counting the number of tones are more accurate if the tones are in the same stream [Bregman 1990].

TA-6.6 Stream segregation affects the perceived rhythm. A rhythm tends to be defined by sounds that fall in the same stream. The allocation of sounds to different streams affects what rhythms may be heard [Bregman 1990].


TA-6.7 Gestalt grouping principles apply to auditory scene analysis. Auditory scene analysis relates to the perception of overlapping auditory elements. Acoustic elements are grouped into streams according to heuristics described by the Gestalt principles [Williams 1994] (table 9-1).

Law of Belongingness

"A sound can only form part of one stream at a time and its percept is relative to the rest of the stream to which it belongs."

Law of Familiarity

"The recognition of prior, well-known groupings of sounds leads to these sounds being grouped together."

Law of Similarity

"Sounds which share attributes are perceived as related." Note that an element in a stream may be captured by another stream with elements that are similar [Bregman 1990]. This principle can be used to avoid unwanted groupings by making sure that sounds are not similar.

Law of Good Continuation

"Sounds that display smooth transitions from one state to another are perceived as related."

Law of Proximity

"Sounds that are close to each other are more likely to be grouped together".

Law of Common Fate

"Sounds which experience the same kinds of changes at the same time are perceived as related".

Law of Stability

"Having achieved an interpretation that interpretation will remain fixed throughout slowly changing parameters until no longer appropriate."

Table 9-1 Gestalt principles of auditory grouping [Williams 1994].

TA-6.8 Sound streams may be grouped or segregated. Sounds created by a particular source usually come from one position in space, or from a slowly changing position, so sounds are grouped by location. Similarity of timbre and pitch also affects grouping, because sounds that have the same timbre or frequency are often produced by the same source. Temporal proximity affects perception, as sounds that occur in rapid progression tend to be produced by the same source. The onset and offset of sounds also affect the perception of groups, as sounds that stay constant or change smoothly are often produced by the same source [Goldstein 1989]. When comparing multiple sound streams, stream segregation is affected by the similarity of [Stuart 1996]:

• spatial location
• synchrony of temporal onset
• synchrony of temporal offset
• timbre
• pitch
• intensity
• frequency
• amplitude modulation
• rhythm.


TA-6.9 Masking may hide signals. In sounds with overlapping frequency areas, masking can become a significant problem [Kramer 1994b]. However, only sounds grouped into the same stream can obscure or camouflage a target sound [Bregman 1990].

TA-6.9.1 Masking can be useful. Masking may be exploited. For example, if several variables are changing in pitch and all are going up, then only the general upward trend will be noticed. However, if one goes against the trend it is easy to identify the odd one out [Williams].

TA-6.10 Minimise the number of auditory streams. It is not advisable to overload the auditory channel; only a few displays should be used in any given situation. Too many displays can be confusing and will overload the operator [McCormick and Sanders 1983]. If the sounds used have great similarities in timbre or structure, the overall sound can become cluttered and distracting [Kramer 1994b].

TA-6.11 Attention affects the perception of a sound stream. Attention plays an important part in perception [Williams 1994]. For example, by making an effort we can listen to a particular voice among many in a crowded room.

TA-6.12 Knowledge affects segregation of streams. Like vision, sound segregation depends on a priori knowledge about the world and is influenced by our experience with familiar sounds [Stuart 1996].

9.3.2 Alarm Events

TA-7 Sound is good for alarms. Sound is insistent and is very suitable for alarms and signalling events. Sound is good at getting attention, and different temporal and spectral patterns can be used to encode different levels of urgency. "Acoustic signals tend to produce an alerting and orienting response and can be detected more quickly than visual signals" [Wenzel 1994].

TA-7.1 Alarms should be heard above background noise. The intensity level of an alarm should be between 15 dB and 25 dB above the background noise [Patterson 1982].

TA-7.2 Alarms should be composed of pulses. An alarm should be composed of pulses of sound 100-150 ms in duration. To avoid startling the user, the warning sound should have onsets and offsets of 20-30 ms in duration. The duration of the gap between pulses can encode the degree of urgency, with a 150 ms inter-pulse gap for urgent sounds and a 300 ms gap for non-urgent sounds [Patterson 1982].

TA-7.3 Alarms should have a frequency between 500-3000Hz. The fundamental frequency of a warning sound should be within the range 200-5000Hz. A sound between 500-3000Hz is best because the ear is most sensitive to this range [Deatherage 1992].


MS-GUIDELINES Chapter 9 – Guidelines for Temporal Metaphors 250

TA-7.4 Avoid masking of alarm sounds. More complex waveforms are less likely to be masked. For example, four or more component harmonics helps avoid masking [Patterson 1982]. In the presence of noise, using frequencies different from the loudest frequencies of the noise will also help to reduce masking of the alarm [Deatherage 1992].
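Guidelines TA-7.1 to TA-7.4 pin down concrete parameters, which can be collected into a small signal-generator sketch. This is an illustration only, not code from the thesis: the function names are invented, the defaults simply pick values inside the recommended ranges, and intensity calibration against background noise (TA-7.1) is left out.

```python
import numpy as np

SAMPLE_RATE = 44_100  # audio samples per second

def alarm_pulse(freq_hz=1000.0, pulse_ms=120, ramp_ms=25, n_harmonics=4):
    """One alarm pulse: a harmonic tone with linear onset/offset ramps.

    Defaults follow the guidelines: a 100-150 ms pulse (TA-7.2),
    20-30 ms onsets and offsets to avoid startling the user (TA-7.2),
    a fundamental in the sensitive 500-3000 Hz band (TA-7.3), and
    four harmonic components to resist masking (TA-7.4).
    """
    n = int(SAMPLE_RATE * pulse_ms / 1000)
    t = np.arange(n) / SAMPLE_RATE
    # Sum the fundamental and its harmonics, weighted 1/k.
    tone = sum(np.sin(2 * np.pi * freq_hz * k * t) / k
               for k in range(1, n_harmonics + 1))
    # Linear onset/offset envelope.
    ramp = int(SAMPLE_RATE * ramp_ms / 1000)
    env = np.ones(n)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return tone * env

def alarm_train(n_pulses=3, urgent=True, **kwargs):
    """Pulse train; the inter-pulse gap encodes urgency (TA-7.2)."""
    gap_ms = 150 if urgent else 300
    gap = np.zeros(int(SAMPLE_RATE * gap_ms / 1000))
    pulse = alarm_pulse(**kwargs)
    parts = [pulse]
    for _ in range(n_pulses - 1):
        parts += [gap, pulse]
    return np.concatenate(parts)

urgent = alarm_train(urgent=True)
relaxed = alarm_train(urgent=False)
# The non-urgent train is longer because its gaps are twice as wide.
print(len(urgent) < len(relaxed))  # True
```

Writing either array to an audio device or WAV file (after normalisation) would produce a pulse train whose tempo alone signals its urgency level.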

9.3.3 Temporal Structure

TA-8 Sound can convey complex messages with temporal structures. A number of ways have been devised to provide structure in complex sound messages. For example, in music, the concepts of rhythm, meter and inflection can impose structure. Other hierarchically structured sound messages have been designed and shown to be effective in some instances. These include earcons [Blattner, Sumikawa et al. 1989] and parameter nesting [Kramer 1994b].

TA-8.1 Complex messages must be learnt. There is a learning time for identifying complex messages such as earcons, and the user can only learn a limited number in a restricted time [Kramer 1994a].

TA-8.2 Use two stage messages for complex information. The first stage can act as an attention-getting alarm and identify the message category. The second stage provides the message detail and requires the user's attention for decoding [McCormick and Sanders 1983].

TA-8.3 Sound is useful for short messages. Sound is effective for conveying short messages. Sound messages should not provide more information than is necessary [McCormick and Sanders 1983].

TA-8.4 Keep messages constant. Always use the same sound signal to convey the same information [McCormick and Sanders 1983].

TA-8.5 Musicians are better able to hear musical structures. Trained musicians are better able to decipher complex musical structures. Performance on some sound displays may depend on the user's musical skills.

TA-8.6 Rhythms can represent data categories. If rhythms are used to represent categories they should be made as different as possible. Putting a different number of notes in each rhythm is very effective for distinguishing different rhythms, but very short notes should be avoided [Brewster, Wright et al. 1994].
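As an illustration of TA-8.6, category rhythms might be kept distinguishable by note count alone. The three patterns and category names below are invented for the sketch, not taken from the thesis.

```python
# Hypothetical rhythm patterns (note durations in milliseconds) for three
# data categories. Following TA-8.6, each rhythm has a different number of
# notes, and no note is made very short (nothing under 125 ms here).
RHYTHMS = {
    "normal":   (500, 500),            # two even notes
    "warning":  (250, 250, 500),       # three notes
    "critical": (125, 125, 250, 500),  # four notes, quickening
}

def note_counts(rhythms):
    """Each category should be distinguishable by note count alone."""
    return {name: len(pattern) for name, pattern in rhythms.items()}

counts = note_counts(RHYTHMS)
# All counts differ, so the categories remain separable by rhythm.
assert len(set(counts.values())) == len(counts)
print(counts)
```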

TA-8.7 Interruption rate indicates speed. Interrupting a sound signal at different rates is a natural indication of speed. An increase or decrease in interruption rate is quickly perceived as a change in speed [Deatherage 1992]. TA-8.8 Awareness affects the perception of melody. The awareness that a particular series of tones is a familiar melody facilitates the perceptual grouping of these tones [Goldstein 1989].


9.4 Guidelines for Temporal Haptic Metaphors

TH-1 Use temporal haptic metaphors to display time series data. Touch is both a temporal and spatial sense. Because of its temporal nature it is good for detecting changes over time.

9.4.1 Display Space Events

TH-2 Use temporal haptic metaphors for task-assisted navigation. Any action involving movement can be constrained or assisted with force feedback. This may help a user to follow a difficult path, either for training or to improve task efficiency.

9.4.2 Transition Events

TH-3 Haptic feedback can detect a wide range of frequencies. A wide range of force frequencies can be perceived, from fine vibrations at 5,000-10,000Hz down to coarse vibrations of 300-400Hz [Stuart 1996]. This allows haptics to be used for detecting a wide range of temporal patterns.

TH-3.1 Force feedback models should be simple. The recommended update rate for force feedback devices is 1000Hz [Srinivasan and Basdogan 1997]. This is the rate required to give the illusion of hard surfaces. The implication is that perceived properties are very dependent on the rate at which forces are displayed. Most devices will operate at this rate provided the control loop at each step is short. This places some emphasis on the designer to maintain a simple force model.

TH-3.2 Very fast changes to force can be detected. The rate of 1000Hz is very fast compared to the visual rate, which is 60Hz [Durlach and Mavor 1995]. This may provide opportunities for speeding up time for haptic displays. For example, 10 minutes of data may be displayed over 1 minute and still allow the user to resolve temporal differences.
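The time-compression arithmetic behind TH-3.2 can be sketched directly. The 100 Hz sensor rate below is an illustrative assumption, not a figure from the text; the point is that a 1000 Hz haptic display can drain a sample buffer far faster than a 60 Hz visual display.

```python
def playback_seconds(n_samples: int, display_rate_hz: float) -> float:
    """Seconds needed to present n_samples at one sample per update."""
    return n_samples / display_rate_hz

# Ten minutes of (hypothetical) 100 Hz sensor data: 60,000 samples.
n = 10 * 60 * 100

visual = playback_seconds(n, 60)    # 1000 s at a 60 Hz visual rate
haptic = playback_seconds(n, 1000)  # 60 s at a 1000 Hz haptic rate

# Ten minutes of data compressed into one minute of haptic playback.
print(haptic)  # 60.0
```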

9.4.3 Movement Events

TH-4 Haptics is concerned with movement. Touch is both a temporal and spatial sense and is designed to both instigate and detect movement [Welch and Warren 1980]. The haptic sense can respond specifically to objects that change position in space with a specific temporal pattern [Goldstein 1989]. This suggests that haptic movement events may be an appropriate way to display information.

TH-4.1 Vibration can create the illusion of movement. When vibration is imposed on muscles and tendons, the corresponding limbs are perceived to be moving [Durlach and Mavor 1995]. Therefore, using both movement and vibration may not be a reliable way to display information.


9.4.5 Temporal Structure

TH-5 Consider transferring temporal auditory metaphors to haptics. Both hearing and touch can detect signals repeated at regular rhythms. They are also useful where a sudden change to constant information needs to be detected. A number of temporal structures have been explored for sound and these could also be applied to haptic monitoring. For example, the musical concepts of rhythm, meter and inflection could equally impose structure on a haptic display.

9.5 Conclusion

This chapter has described the guidelines for developing temporal metaphors. These include guidelines for temporal visual metaphors, summarised in table 9-2, and guidelines for temporal auditory metaphors, summarised in table 9-3. Finally, some guidelines for temporal haptic metaphors were also presented; these are summarised in table 9-4.

Guidelines for temporal visual metaphors
TV-1 Use temporal visual metaphors to understand change.

Guidelines for movement events
TV-2 Movement helps organise objects in the scene.
TV-3 Simple motion is perceived by default.
TV-4 Use movement to draw attention to changes.
TV-4.1 The threshold for movement is 1/6 - 1/3 of a degree of visual angle.
TV-4.2 Perception of movement depends on velocity and context.
TV-4.3 Vision responds to optical flow patterns.
TV-5 Movement provides spatial information.
TV-5.1 Movement provides information about shape.
TV-5.2 Movement provides information about depth.
TV-5.3 Movement provides information about form.

Guidelines for alarm events
TV-6 The visual system needs time to perceive changes.

Guidelines for transition events
TV-7 Perceptual constancy resists change.
TV-7.1 Motion is constant.
TV-8 Moving images must change constantly.

Table 9-2 A summary of the guidelines for temporal visual metaphors.


Guidelines for temporal auditory metaphors
TA-1 Use temporal auditory metaphors to display time series data.
TA-1.1 Sound reveals short-term patterns.
TA-1.2 Periodic data sounds better.
TA-1.3 Auralisation of arbitrary data is not useful.
TA-2 Auditory events can be compressed in time.
TA-3 Avoid auditory extremes.
TA-3.1 Make intensity relative to ambient noise.
TA-3.2 Sounds can be unpleasant.

Guidelines for transition events
TA-4 Sound is good for monitoring data.
TA-4.1 Constant sound fades into the background.
TA-4.2 Use departures from normal to convey information.
TA-5 Perception of an auditory property remains stable.
TA-5.1 Perception adapts to loudness.
TA-5.2 Gradual sound changes may not be identified.
TA-5.3 Use intermittent changes in signal.
TA-6 Sounds are organised into streams.
TA-6.1 We can monitor multiple streams of sound.
TA-6.2 We can detect temporal coherence between streams.
TA-6.3 We cannot compare temporal relations in separate streams.
TA-6.4 We cannot compare the properties of separate streams.
TA-6.5 Simple tasks within a stream are more accurate.
TA-6.6 Stream segregation affects the perceived rhythm.
TA-6.7 Gestalt grouping principles apply to auditory scene analysis.
TA-6.8 Sound streams may be grouped or segregated.
TA-6.9 Masking may hide signals.
TA-6.9.1 Masking can be useful.
TA-6.10 Minimise the number of auditory streams.
TA-6.11 Attention affects the perception of a sound stream.
TA-6.12 Knowledge affects segregation of streams.

Guidelines for alarm events
TA-7 Sound is good for alarms.
TA-7.1 Alarms should be heard above background noise.
TA-7.2 Alarms should be composed of pulses.
TA-7.3 Alarms should have a frequency between 500-3000Hz.
TA-7.4 Avoid masking of alarm sounds.

Guidelines for temporal structure
TA-8 Sound can convey complex messages with temporal structures.
TA-8.1 Complex messages must be learnt.
TA-8.2 Use two stage messages for complex information.
TA-8.3 Sound is useful for short messages.
TA-8.4 Keep messages constant.
TA-8.5 Musicians are better able to hear musical structures.
TA-8.6 Rhythms can represent data categories.
TA-8.7 Interruption rate indicates speed.
TA-8.8 Awareness affects the perception of melody.

Table 9-3 A summary of the guidelines for temporal auditory metaphors.

Guidelines for temporal haptic metaphors
TH-1 Use temporal haptic metaphors to display time series data.

Guidelines for display space events
TH-2 Use temporal haptic metaphors for task-assisted navigation.

Guidelines for transition events
TH-3 Haptic feedback can detect a wide range of frequencies.
TH-3.1 Force feedback models should be simple.
TH-3.2 Very fast changes to force can be detected.

Guidelines for movement events
TH-4 Haptics is concerned with movement.
TH-4.1 Vibration can create the illusion of movement.

Guidelines for temporal structure
TH-5 Consider transferring temporal auditory metaphors to haptics.

Table 9-4 A summary of the guidelines for temporal haptic metaphors.


Chapter 10

Designing Multi-sensory Displays

"The process of preparing programs for a digital computer is especially attractive, not only because it can be economically and scientifically rewarding, but also because it can be an aesthetic experience much like composing poetry or music." [Donald E. Knuth]


MS-PROCESS Chapter 10 – Designing Multi-sensory Displays 257

Chapter 10

Designing Multi-sensory Displays

10.1 Introduction

The early chapters of this thesis focus on the categorisation of the multi-sensory design space. The nine categories of the MS-Taxonomy are described in detail. Included in this description is a review of previous work in the field of multi-sensory, abstract information display. Based on the MS-Taxonomy a number of new guidelines, called the MS-Guidelines, were developed to assist in the design of multi-sensory displays. However, it was noted that many relevant guidelines already exist, for example, guidelines to assist in visual display [Tufte 1983]. Using the structure of the MS-Taxonomy, existing guidelines and significant results from perceptual and computer science are integrated into the MS-Guidelines.

Having developed the MS-Taxonomy to help describe the multi-sensory design space and the MS-Guidelines to guide design decisions, a process is now developed to direct the design. The MS-Process, described in this chapter, once again uses the structure of the MS-Taxonomy. The MS-Process defines a series of steps that can be followed for designing multi-sensory displays. It is an iterative process and includes steps for implementing and evaluating multi-sensory displays. However, the MS-Process does not aim to be prescriptive; rather, it aims to provide a flexible development path. This development path encapsulates valid scientific findings by using the MS-Guidelines. However, the process also makes allowance for the more creative or intuitive leaps which often take place during design [Humphrey 2000].

10.1.1 Why Develop a Process?

In the domain of software engineering the design process is often considered the creative phase of development [Pfleeger 1998, Constantine and Lockwood 1999, Humphrey 2000]. This is because this phase often involves the designer making intuitive leaps towards a solution. As Humphrey notes, "design is a creative process that cannot be reduced to routine procedure" [Humphrey 2000]. The design phase has also been referred to as an exercise in complex problem solving [Rumbaugh, Blaha et al. 1991, Pfleeger 1998]. Pfleeger describes the design phase as "trying a little of this or that, developing and evaluating prototypes, assessing the feasibility of requirements, contrasting several designs, learning from failure, and eventually settling on a satisfactory solution to the problem at hand" [Pfleeger 1998]. The creative, unpredictable nature of any problem-solving task may suggest that it is not appropriate to develop a formal step-by-step process for design. However, there are some compelling reasons for developing a process. A process imposes consistency and structure on a task and provides the following benefits [Kellner 1988]:

• it enables effective communication
• it supports process evolution and improvement
• it provides a precise basis for automation
• it facilitates process reuse
• it aids process management.

The complexity of the multi-sensory design space makes it difficult to communicate problems and approaches in design. The MS-Taxonomy imposes some structure on this complexity and enables better communication by way of a common language of concepts. Defining the MS-Process also enables better communication of the steps in the design process. The initial aim is not to propose some definitive set of design steps to follow, but rather to provide a common framework for capturing experience and passing it on to other designers.

A desirable outcome from all problem solving is to arrive at a quality solution. Using a process as the basis for developing a quality product is the foundation of Quality Principles [Deming 1986, Juran and Gryna 1988]. Quality principles have been formulated in a number of places. The principles are often described as TQM (total quality management) and since 1985 many manufacturing companies have adopted this approach to improving their products and services [Kan 1995]. These quality concepts are also formulated in the ISO-9000 standard [Hoyle 2001]. Defining and following a process is fundamental to these concepts as it allows "us to examine, understand, control, and improve the activities that comprise the process" [Pfleeger 1998]. Hence another reason for defining the MS-Process is that it allows the design process to evolve and improve. Given the immaturity of this field this is a very desirable goal.

The multi-sensory design space is very large and at this stage the problem of designing multi-sensory displays is too complex to entertain the possibility of an automated solution. However, automatic design solutions have been developed for visualising abstract data [Mackinlay 1981] and it may be possible to extend such solutions to multi-sensory display. In the future a refined MS-Process could form the precise basis for an automated solution to designing multi-sensory displays. Even without automation, the process should aid precision as it ensures that the full design space is considered and that issues are not overlooked.

Because there is such a wide array of possible application areas, it is envisioned that the MS-Process can be used in many application domains. The reusable nature of a defined process also motivates the development of the MS-Process.


Finally, the process can aid management. It might be argued that a creative and unpredictable task such as design should not be constrained by management concerns, and this is certainly not a primary concern of this work. However, it is realistic to assume that even the development of research display systems will be limited by time or budget constraints. Because the MS-Process brings some structure to creative design it can also support better tracking and planning of the design process.

This section has discussed the motivations for developing the MS-Process. It is possible to draw many parallels between the MS-Process and the development processes used in traditional software engineering. Indeed the MS-Process is derived from these traditional models, which are now described in section 10.1.2.

10.1.2 Processes for Developing Software

During the 1960s the task of software development was largely an ad hoc process characterised by considerable cost and time overruns [Kan 1995]. In the 1970s an attempt to improve the management of software development led to the adoption of more formal process models [Kan 1995]. By the 1980s quality improvement principles were also being applied to industrial software development processes [Dion 1993, Humphrey 1991]. These improvements were often based around the Capability Maturity Model formulated by the Software Engineering Institute [Cohen 1991]. Central to the Capability Maturity Model is the definition and management of a well-defined software process. By the 1990s, this Capability Maturity Model for software was in widespread use for rating software development companies [Paulk, Weber et al. 1995]. The steps required for designing a multi-sensory display align with many of the steps used for software development. For example, most software process models include a requirements analysis and design phase [Pfleeger 1998]. Because of these close associations, and the fact that software process models are now relatively mature, they form the basis of the MS-Process.

The commonly described software process models include [Kan 1995]:
• the waterfall model
• the prototyping model
• the iterative model
• the spiral model.

The waterfall model (figure 10-1) uses a defined set of steps. These steps are considered to follow each other in a staged logical fashion with predetermined deliverables and sub-goals to be achieved at each stage. The focus is very much on the intermediate deliverables and improving the management of the software development process [Boehm 1981, Kan 1995]. The major criticism of the waterfall model is that it does not reflect the reality of software development where frequent changes often occur [Pfleeger 1998]. For example, during the implementation phase a design fault may be discovered requiring a further iteration through the design phase. The prototyping model (figure 10-2) is based on the waterfall model but it addresses the issue of developing software where the requirements are not known or are poorly understood [Sametinger 1997]. Developing a partial implementation of the product allows customers to provide early feedback about the requirements. In the prototyping model the software prototypes are not intended to form the basis of the final system. The advantage of using prototypes is that the system requirements can be more accurately identified. This can alleviate the need for costly changes during the actual system development. However, the success of this approach relies on rapid production of prototypes [Kan 1995].

Figure 10-1 The waterfall model of software development.
Figure 10-2 The prototyping model of software development.

Figure 10-3 The iterative model of software development.
Figure 10-4 The spiral model of software development.

The iterative model captures the learning that occurs during many software projects [Basili and Turner 1975]. The project begins by developing a functional subset of the system. The system is then modified over a series of iterations to meet the evolving needs of the customer. The early iterations through the process serve as a learning experience for both the customer and the developer. This learning approach is the major advantage of this process and is particularly useful when it is difficult to produce a well-defined statement of the software requirements [Pfleeger 1998].

The spiral model (figure 10-4) is a refinement of the iterative model and was developed by Boehm [Boehm 1988]. This model also uses a number of iterations, although the focus of these iterations is not to learn about the system but to alleviate risks associated with the project. The spiral model also incorporates prototyping into the process, with early iterations using these prototypes to help assess the high-risk items in the project [Sametinger 1997]. During successive iterations the development costs increase while the project risks are reduced [Kan 1995].


10.2 Overview of the MS-Process

10.2.1 Developing the MS-Process

The MS-Process is based on the iterative process model (figure 10-5). This is in accordance with the MS-Guideline, GD-4 Iterate the design process. The aim of the process is to design multi-sensory applications that allow users to find patterns in abstract data1. However, to build useful applications this high-level requirement needs to be refined. Because the field of multi-sensory display is both complex and new, it is not expected that either the designer or the user will be able to provide accurate requirements at the start of the design work. Using a series of iterations through the process allows both the designer and user to experiment and learn as the design evolves. To avoid confusion between the design process and the design product, the evolving design product is called a display map.

In the prototyping model, software prototypes are used to better define requirements. In the spiral model prototypes are used to assess project risks. The MS-Process also makes use of prototypes and these serve both to refine requirements and to manage risk. On each iteration through the MS-Process a range of different evaluations may be performed. These evaluations provide feedback that is used to derive the next display map. Identified risks in this new display map are then targeted for evaluation in the next stage of prototyping. The MS-Process also adopts the quality feedback cycle of Plan-Do-Check-Act [Deming 1986]. This cycle is the basis of most quality approaches, such as the ISO9000 Quality Standard, and provides a mechanism for optimising processes and enhancing product quality [Juran 1980, Hoyle 2001, Ishikawa 1989].

The other two distinguishing features of the MS-Process are that it uses the MS-Taxonomy to structure parts of the design process and it incorporates the MS-Guidelines into the design process (figure 10-5). During display mapping it is desirable to consider the full range of possibilities from the multi-sensory design space.
By using the structure of the MS-Taxonomy the designer is directed to consider the full multi-sensory design space. However, this is not the only benefit of adopting the MS-Taxonomy. In a large study of software designers it was found that the best designers follow a dynamic process in creating complex designs [Curtis 1988]. This dynamic process involves working at a conceptual level for a period and then assessing complex low-level details. This assessment is often required to compare alternative designs. The levels of abstraction in the MS-Taxonomy allow the designer to operate at different levels as required. For example, at one moment the designer can be working at a high level, contrasting the use of a spatial visual metaphor with a spatial auditory metaphor. At the next moment the designer could be delving into details and trying to decide the most appropriate categories of pitch and loudness to use for an auditory alarm.

The MS-Guidelines help to direct design decisions and, as the display map evolves, they also serve as a checklist for evaluating the design. A checklist is again a common device used in quality procedures and is one of the well-known seven basic quality tools of Ishikawa [Ishikawa 1989]. Using the MS-Guidelines as a checklist forms the basis of the formative evaluation described in section 10.7.2.

1 These applications were also described as Human Perceptual Tools in chapter 1.

Figure 10-5 The iterative steps of the MS-Process.

10.2.2 Steps of the MS-Process

The main steps of the MS-Process are:

STEP 1. Task analysis (section 10.3)
STEP 2. Data characterisation (section 10.4)
STEP 3. Display mapping (section 10.5)
STEP 4. Prototyping (section 10.6)
STEP 5. Evaluation (section 10.7)

The required inputs for each step of a process are called entry criteria and the expected outcomes from each step are called exit criteria [Humphrey 2000]. The entry criteria and exit criteria for each step of the MS-Process are shown in table 10-1 and an overall data flow of the process is shown in figure 10-6.
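The iterative flow of the five steps can be caricatured in code. This is a sketch only: the display map is modelled as a plain dictionary and the feedback structure is entirely hypothetical, illustrating the Plan-Do-Check-Act loop rather than anything prescribed by the MS-Process.

```python
# The five steps named in the text; each full pass is one iteration.
STEPS = ["task analysis", "data characterisation",
         "display mapping", "prototyping", "evaluation"]

def run_iteration(display_map: dict, feedback: dict) -> dict:
    """One pass: fold evaluation feedback into the next display map."""
    next_map = dict(display_map)  # the design product evolves per iteration
    next_map.update(feedback.get("recommended_changes", {}))
    next_map["iteration"] = display_map.get("iteration", 0) + 1
    return next_map

# Hypothetical first design and its evaluation feedback.
display_map = {"visual": "scatter plot", "auditory": None}
feedback = {"recommended_changes": {"auditory": "pitch-mapped alarm"}}

display_map = run_iteration(display_map, feedback)
print(display_map["iteration"], display_map["auditory"])
```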

STEP 1. Task analysis
Entry criteria: user goals; previous work.
Exit criteria: task list; sample data; current methods; user requirements.

STEP 2. Data characterisation
Entry criteria: task list; sample data; current methods; user requirements.
Exit criteria: data types; data priorities; data sources.

STEP 3. Display mapping
Entry criteria: task list; current methods; user requirements; data types, sources, priorities; MS-Guidelines.
Exit criteria: design.

STEP 4. Prototyping
Entry criteria: design; sample data; platform limitations.
Exit criteria: prototype.

STEP 5. Evaluation
Entry criteria: prototype; sample data; MS-Guidelines.
Exit criteria: evaluation results; recommended changes; new guidelines.

Table 10-1 Exit and entry criteria for the main steps of the MS-Process.


Figure 10-6 High level data flow diagram for the steps of the MS-Process.

10.3 Task Analysis (STEP 1)

Task analysis is the first main step of the MS-Process (figure 10-7). It has two steps:

STEP 1.1 Describe the user's tasks
STEP 1.2 Consider previous work

The MS-Process is driven by real world tasks and data according to the MS-Guidelines GD-1 Emphasise the data and GD-3 Design for a task. Task analysis is a procedure commonly used to capture requirements for software systems and it has a long history [Bailey 1989]. It often forms part of what is called a human-centred approach to design as it focuses on understanding the user's needs and preferences for accomplishing goals [Bailey 1989].

10.3.1 Describe the user's tasks (STEP 1.1)

Many formal methods for performing task analysis have been developed [Hackos and Redish 1998, Constantine and Lockwood 1999] and these methods are all applicable to this step of the MS-Process. However, the basic approach simply involves reading relevant documentation, observing the user and conducting interviews.

Background knowledge for a particular domain, such as terminology, is easily obtained from the literature. However, detailed applied knowledge requires extensive collaboration with an expert in the domain. As the aim is to build useful real-world applications, being able to interview and discuss requirements with a domain expert is essential. The starting premise of the task analysis is that the domain has much abstract data and that the user needs to identify patterns in the data. To identify useful patterns in the data is the user's primary goal. However, this does not provide much direction for design and should be refined for the particular domain. The task analysis seeks to gather more detailed information about what the system will do. This can be achieved by studying the set of tasks the user might carry out to achieve their goal. These goals and tasks are then decomposed into a hierarchy [Shneiderman 1992b]. At this stage the analysis can also identify sample data inputs and outputs. The expected outcomes from this step of the process are:

• A hierarchical list of tasks that enable the user to identify patterns in the data.
• A description of sample data. This includes the type of data, the tasks it is used for and potential output.
• General user requirements for the system.
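The hierarchical task list produced here can be pictured as a small tree. The domain, goal and task names below are invented for illustration; the structure (goal, tasks, subtasks) is what STEP 1.1 delivers.

```python
# A hypothetical task hierarchy for a market-monitoring domain:
# goal -> tasks -> concrete subtasks.
task_hierarchy = {
    "identify patterns in trading data": {
        "monitor current prices": ["watch a single stock",
                                   "compare two stocks"],
        "review historical trends": ["plot daily closes",
                                     "flag unusual volumes"],
    },
}

def leaf_tasks(node):
    """Flatten the hierarchy into the concrete tasks a display must support."""
    if isinstance(node, list):
        return list(node)
    leaves = []
    for child in node.values():
        leaves.extend(leaf_tasks(child))
    return leaves

tasks = leaf_tasks(task_hierarchy)
print(len(tasks))  # 4 concrete tasks
```

The flattened list is what later steps (data characterisation, display mapping) iterate over, task by task.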

Figure 10-7 The Task Analysis step of the MS-Process.

10.3.2 Consider previous work (STEP 1.2)

To complete the task analysis, a review of previous work in the domain related to data-mining or abstract data display should be performed. For example, perhaps other systems have been developed with the goal of finding particular patterns in the domain data. Reviewing this previous work can provide useful information about potential problems or suggest useful directions for exploration. This step of the process can be accomplished with a literature review and by interviewing domain experts. The expected outcome is:

• A description of current methods used to look for patterns in the data.

10.4 Data Characterisation (STEP 2)

Data characterisation is the second main step of the MS-Process (figure 10-8). It has three steps:

STEP 2.1 Describe the data types.
STEP 2.2 Describe the data sources.
STEP 2.3 Describe the data priorities.


The task analysis has identified a number of tasks, and associated with each of these tasks is a subset of data. In this step each data attribute used in a task is characterised according to type, source and priority (table 10-2).

Data: the attribute name.
Data Type: N (Nominal), O (Ordinal), Q (Quantitative).
Data Source: T (Temporal), S (Spatial).
Priority: H (High), M (Medium), L (Low).

Table 10-2 Data variables are characterised according to type, source and priority.
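One row of table 10-2 maps naturally onto a small record type. This sketch is illustrative only; the class and field names are invented, and the single-letter codes are the ones listed in the table.

```python
from dataclasses import dataclass

# Legal codes, as given in table 10-2.
DATA_TYPES = {"N": "nominal", "O": "ordinal", "Q": "quantitative"}
DATA_SOURCES = {"T": "temporal", "S": "spatial"}
PRIORITIES = {"H": "high", "M": "medium", "L": "low"}

@dataclass
class Attribute:
    """One characterised data attribute (one row of table 10-2)."""
    name: str
    data_type: str   # one of N, O, Q
    source: str      # one of T, S
    priority: str    # one of H, M, L

    def __post_init__(self):
        # Reject codes outside the table's vocabulary.
        assert self.data_type in DATA_TYPES
        assert self.source in DATA_SOURCES
        assert self.priority in PRIORITIES

# A hypothetical attribute from a market-monitoring task.
price = Attribute("trade price", "Q", "T", "H")
print(DATA_TYPES[price.data_type])  # quantitative
```

A list of such records per task is then the input the display mapping step consumes.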

Figure 10-8 The Data characterisation step of the MS-Process.

10.4.1 Describe the data types (STEP 2.1)

The description of the MS-Taxonomy and the statement of the MS-Guidelines used the three simple data types previously described by Card, Mackinlay et al. [Card, Mackinlay et al. 1999]2. These types provide a simple classification that is useful for the display mapping step. The data characterisation uses three data types (figure 10-9):

• Nominal
• Ordinal
• Quantitative

Nominal variables are also known as categorical variables [Spence 2001] as they distinguish different categories. The categories are not related except that they are either equal or not. An ordinal variable is from an ordered set of categories; the days of the week are an example of an ordinal variable. Quantitative variables are also known as numerical variables [Spence 2001] and are numbers of any type.

2 These data types also correspond to the data categories described by Spence [Spence 2001].

Figure 10-9 Types of data variable. Figure 10-10 An example of a data table [Card, Mackinlay et al. 1999]. (Figure 10-10 shows a data table of cities, with the data attributes Latitude, Longitude and Country as rows and the cases Berlin (GER, 52.32N, 13.25E), Basel (SWTZ, 47.33N, 7.36E) and Bern (SWTZ, 46.57N, 7.26E) as columns.)

Other more complex descriptions of the variable types have been proposed for designing abstract displays [Bertin 1981, Barass 1997] and could be employed depending on the level of the design. However, the basic premise of this design approach is to keep the high-level design relatively simple and then handle the complexity by a series of evaluations and iterations on the design. In that sense, a more complicated classification of data types is not necessary.

Another useful tool for understanding the data is to transform it into data tables [Bertin 1981] (figure 10-10). Data tables are a more structured set of relations that list the data attributes by row and then give examples or cases in each column of the table [Card, Mackinlay et al. 1999]. Data may also be transformed between types; for example, quantitative data can be transformed into ordinal data by breaking the data attribute into a number of ranges.
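The transformation from quantitative to ordinal data mentioned above can be sketched as a simple binning function. This is an illustration only; the band boundaries and labels are hypothetical.

```python
import bisect

def quantitative_to_ordinal(value, boundaries, labels):
    """Map a number to an ordered category by range.

    `boundaries` holds the upper bound of each band except the last;
    `labels` has one more entry than `boundaries`.
    """
    assert len(labels) == len(boundaries) + 1
    return labels[bisect.bisect_right(boundaries, value)]

# e.g. turning a quantitative temperature reading into an ordinal band
band = quantitative_to_ordinal(18.5, [10, 20, 30], ["cold", "mild", "warm", "hot"])
print(band)  # mild
```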

10.4.2 Describe the data sources (STEP 2.2)

Quantitative variables may also be further defined as temporal or spatial (figure 10-9) based on the data source. Distinguishing data that is recorded over time is important as this data can naturally be mapped to temporal metaphors. In a similar way, data that is identified with a spatial model may also have a natural mapping to spatial metaphors.

10.4.3 Describe the data priorities (STEP 2.3)

The aim of this step is a simple analysis of the data to arrive at an understanding of the priority of each data attribute based on the task to be performed. This analysis needs to be performed on a task-by-task basis, as different tasks may assign different priorities to the same data attribute. The priority information is derived from the task analysis, and each data attribute is rated as high, medium or low priority depending on its importance for the task.
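The per-task priority ratings can be sketched as a small lookup, one rating table per task. The task and attribute names here are hypothetical examples, not data from the thesis.

```python
# Hypothetical per-task priority ratings derived from the task analysis.
# The same attribute may carry different priorities for different tasks.
PRIORITY_ORDER = {"H": 0, "M": 1, "L": 2}

task_priorities = {
    "find-clusters": {"Latitude": "H", "Longitude": "H", "Country": "M"},
    "compare-countries": {"Country": "H", "Latitude": "L", "Longitude": "L"},
}

def attributes_by_priority(task: str) -> list:
    """List a task's data attributes from high to low priority."""
    ratings = task_priorities[task]
    return sorted(ratings, key=lambda a: PRIORITY_ORDER[ratings[a]])

print(attributes_by_priority("compare-countries"))  # ['Country', 'Latitude', 'Longitude']
```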


10.5 Display Mapping (STEP 3)

Display mapping is the third main step of the MS-Process (figure 10-11). It has three steps:

STEP 3.1 Consider spatial metaphors.
STEP 3.2 Consider direct metaphors.
STEP 3.3 Consider temporal metaphors.

The important features to note about the display mapping step are:

1. It is structured using the MS-Taxonomy. Therefore each step in the display mapping is directly related to a level of the MS-Taxonomy. For example, the display mapping for spatial metaphors considers the display space, the spatial properties and the spatial structures (figure 10-12).

2. It incorporates the MS-Guidelines. Because both the MS-Guidelines and the MS-Process are based on the structure of the MS-Taxonomy, the appropriate guidelines can be considered at each step of the process (figure 10-12).

3. It is the first of the iterative steps used during the MS-Process. There is no attempt to create a final design solution on the first iteration through the display mapping step. The sophistication and detail of the design is expected to evolve over a number of iterations. For example, the first iteration may give rise to several conceptual alternatives, the second iteration may refine the design to a few high-level options, and as the iterations continue these high-level designs may be further refined into a single detailed design.

The outcome from the display mapping step is a design that maps the data attributes to display metaphors for each sense. This design can simply be documented as a table. However, there are a number of methodologies and models that have been used to document software design [Booch, Rumbaugh et al. 1999, Pfleeger 1998, Constantine and Lockwood 1999, Rumbaugh, Blaha et al. 1991]. These approaches are also appropriate for capturing the design of a multi-sensory display.
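As the text notes, the outcome of display mapping can simply be documented as a table. A minimal sketch of such a table follows; the attribute names and metaphor assignments are hypothetical examples, and the metaphor labels follow the MS-Taxonomy's sense/metaphor vocabulary.

```python
# Illustrative display-mapping table: data attribute -> sense -> metaphor.
mapping = [
    # (data attribute, sense, metaphor)
    ("Latitude",  "visual",   "spatial: position (y)"),
    ("Longitude", "visual",   "spatial: position (x)"),
    ("Country",   "visual",   "direct: hue"),
    ("Alert",     "auditory", "temporal: alarm event"),
]

def as_table(rows):
    """Render the mapping as an aligned plain-text table."""
    header = ("Attribute", "Sense", "Metaphor")
    widths = [max(len(r[i]) for r in rows + [header]) for i in range(3)]
    lines = [" | ".join(h.ljust(w) for h, w in zip(header, widths))]
    for r in rows:
        lines.append(" | ".join(c.ljust(w) for c, w in zip(r, widths)))
    return "\n".join(lines)

print(as_table(mapping))
```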

Figure 10-11 The Display Mapping step of the MS-Process.


Figure 10-12 The MS-Taxonomy is used to structure STEP 3 of the MS-Process.

10.5.1 Consider Spatial Metaphors (STEP 3.1)

The first step of the display mapping is to consider the design of spatial metaphors (figure 10-13). The structure of this part of the process is directly related to the structure of spatial metaphor concepts described in the MS-Taxonomy (figure 10-14). Because this same structure is also used for the MS-Guidelines, it is straightforward to associate the appropriate guidelines with any step (figure 10-14).

The aim of this step is to ensure the designer considers the use of spatial metaphors for each sense. It has three steps:

STEP 3.1.1 Consider spatial visual metaphors.
STEP 3.1.2 Consider spatial auditory metaphors.
STEP 3.1.3 Consider spatial haptic metaphors.

Figure 10-13 The first step of the display mapping is to consider spatial metaphors.


Figure 10-14 The MS-Taxonomy is used to structure STEP 3.1 of the MS-Process.

Although the steps are described sequentially, they may be done in parallel or considered during different iterations through the display mapping step. The one principle that imposes some ordering on the steps is that the design of the spatial visual metaphor needs to be finalised first. This is because it provides the basis for incorporating the other spatial, temporal and direct metaphors. This is in accordance with the guideline, MST-2 Use the spatial visual metaphor as a framework for the display.

10.5.1.1 Consider Spatial Visual Metaphors (STEP 3.1.1)

The aim of this step is to ensure the designer considers the design space of spatial visual metaphors. The structure is therefore based on the MS-Taxonomy for spatial visual metaphors (figure 10-15). It has five steps:

STEP 3.1.1.1 Review spatial visual metaphor guidelines.
STEP 3.1.1.2 Consider previous designs.
STEP 3.1.1.3 Design the visual display space.
• Consider orthogonal, distorted and subdivided space.
STEP 3.1.1.4 Design visual spatial properties.
• Consider visual position, visual scale, visual orientation.
STEP 3.1.1.5 Design visual spatial structures.
• Consider visual global and local spatial structures.

This is one of the key steps in the design of multi-sensory displays as it forms the basis for integrating other parts of the display. This is captured in the guideline, MST-2 Use the spatial visual metaphor as a framework for the display. Spatial visual metaphors can be used to encode quantitative, ordinal and nominal data. High-priority sets of data that need to be compared are prime candidates for representation using a spatial visual metaphor. This is indicated by the guidelines, SV-3 Design visual space around the most important data variables, SV-2 Position in visual space is the most accurate perception and SV-8 Objects in visual space can be compared.
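Guideline SV-3 can be sketched as a simple assignment of the highest-priority quantitative attributes to the axes of the visual display space. This is an illustration only; the attribute names and priority codes are hypothetical.

```python
def assign_axes(attributes, axes=("x", "y", "z")):
    """Assign the highest-priority quantitative attributes to spatial axes.

    attributes: list of (name, type_code, priority_code) tuples, where
    type_code is N/O/Q and priority_code is H/M/L (table 10-2 codes).
    """
    order = {"H": 0, "M": 1, "L": 2}
    quantitative = [a for a in attributes if a[1] == "Q"]
    ranked = sorted(quantitative, key=lambda a: order[a[2]])
    return {axis: name for axis, (name, _, _) in zip(axes, ranked)}

attrs = [("Price", "Q", "H"), ("Volume", "Q", "M"),
         ("Sector", "N", "H"), ("Yield", "Q", "L")]
print(assign_axes(attrs))  # {'x': 'Price', 'y': 'Volume', 'z': 'Yield'}
```

Nominal attributes such as "Sector" are left for direct metaphors (section 10.5.2) rather than spatial position.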

Data may also come from a source that is inherently spatial. For example, the data could be collected from geographical locations. In this case spatial visual metaphors can provide an intuitive representation of the data. Spatial features such as structure, scale, position and orientation are all accentuated by using spatial visual metaphors, as noted in the guideline, MST-1.1 Vision emphasises spatial qualities. Alternatively, if important data has a temporal source then the designer should consider the guideline, SV-6 If time is an important variable then integrate it into the display space.

Figure 10-15 The MS-Taxonomy is used to structure STEP 3.1.1 of the MS-Process.

Figure 10-16 The MS-Taxonomy is used to structure STEP 3.1.2 of the MS-Process.

10.5.1.2 Design of Spatial Auditory Metaphors (STEP 3.1.2)

The aim of this step is to ensure the designer considers the design space of spatial auditory metaphors. The structure is therefore based on the MS-Taxonomy for spatial auditory metaphors (figure 10-16). It has four steps:

STEP 3.1.2.1 Review spatial auditory metaphor guidelines.
STEP 3.1.2.2 Design the auditory display space.
• Relate the auditory display space to the visual display space.
• Consider orthogonal, distorted and subdivided space.
STEP 3.1.2.3 Design auditory spatial properties.
• Consider auditory position.
STEP 3.1.2.4 Design auditory spatial structures.
• Consider auditory global spatial structures.


Spatial auditory metaphors should be considered for displaying nominal and ordinal data, as noted in the guideline, SA-1 Use spatial auditory metaphors for displaying categorical data. This is because hearing is not as accurate as vision at determining spatial properties such as auditory position. Spatial auditory metaphors should be designed around the spatial visual metaphor, as directed in the guideline, SA-1.1 Base spatial categories around the visual space. However, the spaces need not be the same resolution, as noted by the guideline, SA-5 Auditory space can be a different resolution to visual space. Because the use of spatial auditory metaphors has not been greatly explored, there may be opportunities for designing new metaphors based on spatial visual metaphors. This is suggested by the guideline, MST-4.1 Adapt spatial visual metaphors to spatial auditory metaphors.
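Guidelines SA-1.1 and SA-5 together suggest deriving a coarse auditory space from the visual space by quantising continuous visual positions into a small number of auditory sectors. The sketch below illustrates this under hypothetical assumptions (a normalised display axis and a four-sector auditory space).

```python
def auditory_sector(x: float, sectors: int = 4, extent: float = 1.0) -> int:
    """Quantise a visual x position in [0, extent) to one of `sectors`
    auditory positions, i.e. a lower-resolution auditory space that is
    still aligned with the visual space."""
    if not 0 <= x < extent:
        raise ValueError("position outside the display space")
    return int(x / extent * sectors)

print(auditory_sector(0.10))  # 0
print(auditory_sector(0.60))  # 2
```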

10.5.1.3 Design of Spatial Haptic Metaphors (STEP 3.1.3)

The aim of this step is to ensure the designer considers the design space of spatial haptic metaphors. The structure is therefore based on the MS-Taxonomy for spatial haptic metaphors (figure 10-17). It has four steps:

STEP 3.1.3.1 Review spatial haptic metaphor guidelines.
STEP 3.1.3.2 Design the haptic display space.
• Relate the haptic display space to the visual display space.
• Consider orthogonal, distorted and subdivided space.
STEP 3.1.3.3 Design haptic spatial properties.
• Consider haptic position, haptic scale, haptic orientation.
STEP 3.1.3.4 Design haptic spatial structures.
• Consider haptic global and local spatial structures.

The main indications for using spatial haptic metaphors are to display constraints in the data or to reinforce local spatial structure, as directed by the guidelines, SH-1 Haptic space is useful for displaying constraints in the data and SH-8 Use spatial haptic metaphors to represent local spatial structures. Data that is both temporal and spatial in source may naturally map to a haptic display, as noted in the guideline, SH-2 Haptic feedback can be used to display temporal-spatial data.

Figure 10-17 The MS-Taxonomy is used to structure STEP 3.1.3 of the MS-Process.


10.5.2 Consider Direct Metaphors (STEP 3.2)

After considering spatial metaphors, the next step of the display mapping is to consider the design of direct metaphors (figure 10-18). The structure of this part of the process is directly related to the structure of direct metaphor concepts described in the MS-Taxonomy (figure 10-19). Because this same structure is also used for the MS-Guidelines, it is straightforward to associate the appropriate guidelines with any step.

The aim of this step is to ensure the designer considers the use of direct metaphors for each sense. It has three main steps:

STEP 3.2.1 Consider direct visual metaphors.
STEP 3.2.2 Consider direct auditory metaphors.
STEP 3.2.3 Consider direct haptic metaphors.

Figure 10-18 The second step of the display mapping is to consider direct metaphors.

Figure 10-19 The MS-Taxonomy is used to structure STEP 3.2 of the MS-Process.

In general, direct metaphors are not appropriate for accurate interpretation of quantitative data. However, they provide a useful mechanism for displaying categories of data. The indications for use are provided in the three guidelines, DV-1 Direct visual metaphors are the first choice for displaying categorical data, DA-1 Direct auditory metaphors are the second choice for displaying categories and DH-1 Direct haptic metaphors are the third choice for displaying categories.
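A common way to apply DV-1 is to assign each nominal category a distinct hue. The sketch below is illustrative only; spacing hues evenly around the colour wheel is an assumption, and it uses the standard-library `colorsys` module for the hue-to-RGB conversion.

```python
import colorsys

def hues_for_categories(categories):
    """Assign each nominal category an evenly spaced hue in [0, 1)."""
    n = len(categories)
    return {c: i / n for i, c in enumerate(categories)}

def hue_to_rgb(hue):
    """Convert a hue (full saturation and value) to an RGB triple."""
    return tuple(round(v, 3) for v in colorsys.hsv_to_rgb(hue, 1.0, 1.0))

hues = hues_for_categories(["Apple", "Orange", "Banana"])
print(hue_to_rgb(hues["Apple"]))   # (1.0, 0.0, 0.0) -- red
print(hue_to_rgb(hues["Orange"]))  # one third round the wheel -- green
```

With more than roughly seven categories the hues become hard to distinguish, which is one reason the guidelines rank auditory and haptic direct metaphors as fall-backs.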

10.5.2.1 Consider Direct Visual Metaphors (STEP 3.2.1)

The aim of this step is to ensure the designer considers the design space of direct visual metaphors. The structure is therefore based on the MS-Taxonomy for direct visual metaphors (figure 10-20). It has seven steps:

STEP 3.2.1.1 Review direct visual metaphor guidelines.
STEP 3.2.1.2 Consider using colour.
STEP 3.2.1.3 Consider using hue.
STEP 3.2.1.4 Consider using saturation.
STEP 3.2.1.5 Consider using colour intensity.
STEP 3.2.1.6 Consider using visual texture.
STEP 3.2.1.7 Consider using direct visual shape.

Figure 10-20 The MS-Taxonomy is used to structure STEP 3.2.1 of the MS-Process.

Figure 10-21 The MS-Taxonomy is used to structure STEP 3.2.2 of the MS-Process.


10.5.2.2 Consider Direct Auditory Metaphors (STEP 3.2.2)

The aim of this step is to ensure the designer considers the design space of direct auditory metaphors. The structure is therefore based on the MS-Taxonomy for direct auditory metaphors (figure 10-21). It has four steps:

STEP 3.2.2.1 Review direct auditory metaphor guidelines.
STEP 3.2.2.2 Consider using pitch.
STEP 3.2.2.3 Consider using loudness.
STEP 3.2.2.4 Consider using timbre.

10.5.2.3 Consider Direct Haptic Metaphors (STEP 3.2.3)

The aim of this step is to ensure the designer considers the design space of direct haptic metaphors. The structure is therefore based on the MS-Taxonomy for direct haptic metaphors (figure 10-22). It has eight steps:

STEP 3.2.3.1 Review direct haptic metaphor guidelines.
STEP 3.2.3.2 Consider using force.
STEP 3.2.3.3 Consider using haptic surface texture.
STEP 3.2.3.4 Consider using direct haptic shape.
STEP 3.2.3.5 Consider using compliance.
STEP 3.2.3.6 Consider using friction.
STEP 3.2.3.7 Consider using weight.
STEP 3.2.3.8 Consider using vibration.

Figure 10-22 The MS-Taxonomy is used to structure STEP 3.2.3 of the MS-Process.

10.5.3 Consider Temporal Metaphors (STEP 3.3)

After considering spatial metaphors and direct metaphors, the next step of the display mapping is to consider the design of temporal metaphors (figure 10-23). The structure of this part of the process is directly related to the structure of temporal metaphor concepts described in the MS-Taxonomy (figure 10-24). Because this same structure is also used for the MS-Guidelines, it is straightforward to associate the appropriate guidelines with any step.

The aim of this step is to ensure the designer considers the use of temporal metaphors for each sense. It has three main steps:

STEP 3.3.1 Consider temporal visual metaphors.
STEP 3.3.2 Consider temporal auditory metaphors.
STEP 3.3.3 Consider temporal haptic metaphors.

In general, temporal metaphors should be considered for any data that comes from a temporal source or where changes that occur over time need to be understood. There is a particular focus on the design of temporal auditory metaphors, as explained in the guideline, MST-1.2 Hearing emphasises temporal qualities.

Figure 10-23 The third step of the display mapping is to consider temporal metaphors.

Figure 10-24 The MS-Taxonomy is used to structure STEP 3.3 of the MS-Process.


10.5.3.1 Consider Temporal Visual Metaphors (STEP 3.3.1)

The aim of this step is to ensure the designer considers the design space of temporal visual metaphors. The structure is therefore based on the MS-Taxonomy for temporal visual metaphors (figure 10-25). It has three steps:

STEP 3.3.1.1 Review temporal visual metaphor guidelines.
STEP 3.3.1.2 Consider using visual events.
• Consider movement, display space, transition and alarm events.
STEP 3.3.1.3 Consider using visual temporal structures.
• Consider temporal artefacts, rate, rhythm and variation.

Temporal visual metaphors help to provide additional spatial information about structure, depth and organisation. This is captured in the guidelines, TV-5 Movement provides spatial information and TV-2 Movement helps organise objects in the scene. Temporal visual metaphors are particularly useful where change in spatial structures or spatial properties may occur. This is indicated in the guidelines, TV-1 Use temporal visual metaphors to understand change and TV-4 Use movement to draw attention to changes.

Figure 10-25 The MS-Taxonomy is used to structure STEP 3.3.1 of the MS-Process.

10.5.3.2 Consider Temporal Auditory Metaphors (STEP 3.3.2)

The aim of this step is to ensure the designer considers the design space of temporal auditory metaphors. The structure is therefore based on the MS-Taxonomy for temporal auditory metaphors (figure 10-26). It has three steps:

STEP 3.3.2.1 Review temporal auditory metaphor guidelines.
STEP 3.3.2.2 Consider using auditory events.
• Consider movement, display space, transition and alarm events.
STEP 3.3.2.3 Consider using auditory temporal structures.
• Consider temporal artefacts, rate, rhythm and variation.

Temporal auditory metaphors are recommended for displaying data recorded over time, as directed by the guideline, TA-1 Use temporal auditory metaphors to display time series data. It is also possible to speed up the display, as noted by the guideline, TA-2 Auditory events can be compressed in time. Traditionally, sound has been useful for monitoring background events and also for signifying important events. This is captured in the guidelines, TA-4 Sound is good for monitoring data and TA-7 Sound is good for alarms. Very complex auditory temporal structures have been developed in the field of music and these may also be applicable for displaying data relationships. This is captured in the guideline, TA-8 Sound can convey complex messages with temporal structures.
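Guideline TA-2 amounts to rescaling event timestamps before playback. The sketch below illustrates this; the compression factor and event times are hypothetical.

```python
def compress_events(timestamps, factor):
    """Scale event times (seconds) so the sequence plays back `factor`
    times faster, preserving the relative rhythm of the events."""
    start = min(timestamps)
    return [(t - start) / factor for t in timestamps]

# e.g. readings spread over an hour played back in 3.6 seconds of audio
events = [0.0, 600.0, 1800.0, 3600.0]
print(compress_events(events, 1000.0))  # [0.0, 0.6, 1.8, 3.6]
```

Because only the time axis is scaled, rhythm and relative spacing (the auditory temporal structures mentioned above) survive the compression.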

Figure 10-26 The MS-Taxonomy is used to structure STEP 3.3.2 of the MS-Process.

Figure 10-27 The MS-Taxonomy is used to structure STEP 3.3.3 of the MS-Process.

10.5.3.3 Consider Temporal Haptic Metaphors (STEP 3.3.3)

The aim of this step is to ensure the designer considers the design space of temporal haptic metaphors. The structure is therefore based on the MS-Taxonomy for temporal haptic metaphors (figure 10-27). It has three steps:

STEP 3.3.3.1 Review temporal haptic metaphor guidelines.
STEP 3.3.3.2 Consider using haptic events.
• Consider movement, display space, transition and alarm events.
STEP 3.3.3.3 Consider using haptic temporal structures.
• Consider temporal artefacts, rate, rhythm and variation.


Haptics is characterised as a sense that integrates both space and time. This suggests that haptics is suited to understanding movements that occur with data, and this is captured in the guideline, TH-4 Haptics is concerned with movement. The domain of temporal haptic metaphors has not been explored in any detail and there is the possibility of applying concepts from the auditory domain. This is captured in the guideline, TH-5 Consider transferring temporal auditory metaphors to haptics.

10.6 Prototyping (STEP 4)

Prototyping is the fourth step of the MS-Process (figure 10-28). It has three main steps:

STEP 4.1 Consider software platform.
STEP 4.2 Consider hardware platform.
STEP 4.3 Implement current design.

After the current iteration of the display mapping step has been completed, the next step of the MS-Process is to prototype the current design. Depending on the number of iterations through the display mapping step, the prototype may target different aspects of the design. For example, during the first iteration of the MS-Process the prototype may target spatial visual metaphors, while in later iterations the prototype may target direct haptic metaphors.

Figure 10-28 The Prototyping step of the MS-Process.

These three prototyping steps consider the practical aspects of implementing the current design. For example, the software platform may impose some restrictions on the type of designs that can be implemented. The software used to implement direct haptic metaphors may support properties such as compliance or force but provide no support for displaying haptic surface texture. Implementing haptic properties such as compliance also requires a rapid rendering loop: the software platform must allow this loop to execute at 1000 Hz, and the design must consider these limitations. The hardware platform also imposes restrictions on the type of designs that can be implemented. For example, it would not be practical to rely on stereoscopic depth cues in models if the platform does not support stereoscopic display. A further example comes from auditory display, where very few platforms currently support 3D spatial display of sound. The capabilities and limitations of different display systems need to be assessed individually, and trade-offs between requirements, performance, costs and implementation effort may need to be made. In chapters 11, 12 and 13 a case study using the MS-Process describes how the development platform can drive some of the design decisions during prototyping.
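The 1000 Hz requirement above is a scheduling constraint that can be checked early in prototyping. The sketch below is a generic fixed-rate loop with a stubbed render callback, not a real haptic renderer; it simply counts how often the deadline is missed on the target platform.

```python
import time

def run_haptic_loop(render, rate_hz=1000, duration_s=0.05):
    """Run `render` at a fixed rate and count missed deadlines.

    A non-zero count suggests the platform cannot sustain the rate
    needed for haptic properties such as compliance."""
    period = 1.0 / rate_hz
    missed = 0
    next_tick = time.perf_counter()
    end = next_tick + duration_s
    while next_tick < end:
        render()
        next_tick += period
        remaining = next_tick - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        else:
            missed += 1  # the callback (or scheduler) overran the period
    return missed

missed = run_haptic_loop(lambda: None)
print("missed deadlines:", missed)
```

A real haptic loop would also need to avoid `time.sleep`'s millisecond-level jitter, typically by running in a dedicated real-time thread provided by the haptic API.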

Understanding the software and hardware limitations is a practical requirement for implementing prototypes. However, it is beyond the scope of this work to describe platform taxonomies and guidelines for different Virtual Environments. This type of information can be found in other texts [Stuart 1996, Durlach and Mavor 1995] and if desired incorporated into this prototyping step.

10.7 Evaluation (STEP 5)

After the prototyping step has been completed, the next step of the MS-Process is evaluation. It has three main steps (figure 10-29):

STEP 5.1 Expert heuristic evaluation.
STEP 5.2 Formative evaluation.
STEP 5.3 Summative evaluation.

The evaluation step, like the display mapping and prototyping steps, is iterative. Therefore it is expected that a number of different evaluations will be performed during the various iterations of the design. The type of evaluation required depends on the stage of the design. For example, the more sophisticated and detailed the design becomes, the more formal and detailed the evaluation that is required. There is also a trade-off between the time required to design and perform formal experiments and the fast turnaround of less formal evaluation procedures. These same steps have previously been used for evaluating Virtual Environment applications designed for collaboration [Goebbels, Lalioti et al. 2001].

The different types of evaluation provide useful tests of the system and generate feedback for the next iteration of the display mapping step. The different levels of evaluation support a pragmatic approach to development. For example, the domain expert or designer can quickly identify major issues with the display. This allows the designer to make a number of rapid iterations through the display mapping process before more time-consuming experiments are performed.

Three types of evaluation are described, one for each of the three steps. The expert heuristic evaluation allows a domain expert to provide feedback specifically related to the domain tasks and data. The formative evaluation uses the appropriate MS-Guidelines as a checklist to perform a quick informal assessment of the prototype. As the prototype matures, a summative evaluation can be used to subjectively test the performance of the display.3

Figure 10-29 The Evaluation step of the MS-Process.

10.7.1 Expert Heuristic Evaluation (STEP 5.1) The expert heuristic evaluation is designed to evaluate both the usefulness and usability of the system for a specific task. The key to this step is to use the domain expert's knowledge to refine the display for a particular task. The steps of expert heuristic evaluation are:

STEP 5.1.1 System demonstration. STEP 5.1.2 Feedback interview. STEP 5.1.3 Capture problems and suggestions.

The evaluation consists of a demonstration of the system to a domain expert and an interview to gather feedback. The expert should be prompted to consider the usefulness of the display to meet the task's requirements. The expert may also highlight usability issues with the current system. The problems and limitations are captured and used as input to the next iteration of the display mapping step. There are two important points to note. Firstly, the expert heuristic evaluation is an optional step and it may not be performed on every iteration through the MS-Process. Secondly, in the case where significant changes are recommended no further evaluations may be performed. Instead the designer may return to the display mapping step to address these recommendations.

3 Summative evaluations may in themselves give rise to new MS-Guidelines.


10.7.2 Formative Evaluation (STEP 5.2)

The formative evaluation is designed to improve the usability of the display by an informal review of the system. The key to this evaluation is the use of the MS-Guidelines as a checklist. The steps of formative evaluation are:

STEP 5.2.1 System review using MS-Guidelines.
STEP 5.2.2 Capture problems and suggestions.

The evaluation is informal and is carried out by the designer. The intent is to identify obvious usability issues with the system. The MS-Guidelines prompt the reviewer with possible issues that may occur. Any problems and limitations are captured and used as input to the next iteration of the display mapping step.
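A formative review of this kind is essentially a checklist walk-through. The sketch below illustrates one way to record it; the guideline subset and the findings are hypothetical, and the guideline texts are abbreviated from the MS-Guidelines quoted in this chapter.

```python
# Hypothetical checklist for STEP 5.2: a subset of the MS-Guidelines
# is reviewed and any problems are captured for the next iteration
# of the display mapping step.
checklist = [
    "MST-2 Use the spatial visual metaphor as a framework for the display",
    "SA-1 Use spatial auditory metaphors for displaying categorical data",
    "TA-4 Sound is good for monitoring data",
]

def formative_review(findings):
    """findings: {guideline: problem description or None}.
    Returns the (guideline, problem) pairs that need follow-up."""
    return [(g, findings[g]) for g in checklist if findings.get(g)]

problems = formative_review({
    "MST-2 Use the spatial visual metaphor as a framework for the display": None,
    "SA-1 Use spatial auditory metaphors for displaying categorical data":
        "auditory categories not aligned with the visual space",
})
print(len(problems))  # 1
```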

Once again, the formative evaluation is an optional step and it may not be performed on every iteration through the MS-Process. This type of evaluation is probably most useful during the early iterations of the MS-Process, while the design is still at a high level. During this time the formative evaluations may quickly identify issues that can be addressed by returning to the display mapping step.

10.7.3 Summative Evaluation (STEP 5.3)

The summative evaluation is designed to formally test the performance of the system by conducting experiments. The aim of each experiment is to gain some quantitative measure of the system's performance for a given task. Summative evaluations employ a group of subjects to gather data on the display performance. This data can direct the display mapping step when it reaches important decision points. The steps of summative evaluation are:

STEP 5.3.1 Design experiment. STEP 5.3.2 Conduct experiment. STEP 5.3.3 Analyse experimental results. STEP 5.3.4 Capture problems and suggestions.

The summative evaluation consists of a human factors experiment designed to test the usability of the system. The design and analysis of such experiments are not described here but can be found in texts on usability testing [Rubin 1994, Dumas and Redish 1999]. It is noted, however, that the experiment should be designed carefully so that sufficient data is gathered for statistical analysis. If the designer has insufficient experience, then a statistician can be involved in the experimental design. Informal feedback as well as data may be gathered from the experimental subjects. The results of the experiment are analysed for statistical significance. At this stage the results may verify the system's effectiveness or indicate that further display mapping and prototyping needs to be performed. Typically the end of the summative evaluation step marks a major decision point in the overall direction of the design.
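The analysis in STEP 5.3.3 typically compares a performance measure across display conditions. The sketch below computes an effect size for fabricated example data (task-completion times invented purely for illustration); a real study would pair this with a proper significance test from a statistics package.

```python
import math
import statistics

def cohens_d(a, b):
    """Effect size between two independent samples, using a pooled
    standard deviation (equal group sizes assumed)."""
    pooled = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled

# Fabricated task-completion times (seconds) for two display conditions.
visual_only = [41.2, 38.5, 44.0, 39.8, 42.1]
multi_sensory = [33.0, 35.4, 31.8, 34.6, 32.2]
print(round(cohens_d(visual_only, multi_sensory), 2))
```

A positive effect size here would indicate faster completion with the multi-sensory display; whether the difference is statistically significant still requires a hypothesis test on an adequately sized sample.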

MS-PROCESS Chapter 10 – Designing Multi-sensory Displays 282

10.8 Conclusion

The MS-Process has been described and it serves the purpose of guiding the design, prototyping and evaluation of the multi-sensory display. The steps of the MS-Process are summarised in tables 10-3, 10-4, 10-5, 10-6 and 10-7.

The importance of defining the MS-Process is to ensure that the full range of the multi-sensory design space, defined by the MS-Taxonomy, is considered during design. Following the MS-Process also ensures that the MS-Guidelines are integrated into all design decisions. Furthermore, the MS-Process provides for a range of informal and formal evaluations to assist in refining the design of the display.

All that remains now is to trial the MS-Process and MS-Guidelines in a real application domain. To thoroughly test and refine the MS-Process would require following the process through many iterations in many application domains. Given the extent of the MS-Process and the multi-sensory design space this is not feasible within the scope of this thesis. Rather, the MS-Process is evaluated by a detailed case study within a single application domain. This case study, which includes a number of iterations through the process, is described in chapter 11, chapter 12 and chapter 13 of this thesis.

During this case study a number of new multi-sensory displays are developed. These displays provide examples from the full design space covered by the MS-Taxonomy. The case study also highlights the use of many of the MS-Guidelines.

STEP 1. Task analysis.
STEP 1.1 Describe the user's tasks.
STEP 1.2 Consider previous work.

Table 10-3 Step 1 of the MS-Process.

STEP 2. Data characterisation.
STEP 2.1 Describe the data types.
STEP 2.2 Describe the data sources.
STEP 2.3 Describe the data priorities.

Table 10-4 Step 2 of the MS-Process.

STEP 3. Display Mapping.
STEP 3.1 Consider spatial metaphors.
    STEP 3.1.1 Consider spatial visual metaphors.
        STEP 3.1.1.1 Review spatial visual metaphor guidelines.
        STEP 3.1.1.2 Consider previous designs.
        STEP 3.1.1.3 Design the visual display space.
        STEP 3.1.1.4 Design visual spatial properties.
        STEP 3.1.1.5 Design visual spatial structures.
    STEP 3.1.2 Consider spatial auditory metaphors.
        STEP 3.1.2.1 Review spatial auditory metaphor guidelines.
        STEP 3.1.2.2 Design the auditory display space.
        STEP 3.1.2.3 Design auditory spatial properties.
        STEP 3.1.2.4 Design auditory spatial structures.
    STEP 3.1.3 Consider spatial haptic metaphors.
        STEP 3.1.3.1 Review spatial haptic metaphor guidelines.
        STEP 3.1.3.2 Design the haptic display space.
        STEP 3.1.3.3 Design haptic spatial properties.
        STEP 3.1.3.4 Design haptic spatial structures.
STEP 3.2 Consider direct metaphors.
    STEP 3.2.1 Consider direct visual metaphors.
        STEP 3.2.1.1 Review direct visual metaphor guidelines.
        STEP 3.2.1.2 Consider using colour.
        STEP 3.2.1.3 Consider using hue.
        STEP 3.2.1.4 Consider using saturation.
        STEP 3.2.1.5 Consider using colour intensity.
        STEP 3.2.1.6 Consider using visual texture.
        STEP 3.2.1.7 Consider using direct visual shape.
    STEP 3.2.2 Consider direct auditory metaphors.
        STEP 3.2.2.1 Review direct auditory metaphor guidelines.
        STEP 3.2.2.2 Consider using pitch.
        STEP 3.2.2.3 Consider using loudness.
        STEP 3.2.2.4 Consider using timbre.
    STEP 3.2.3 Consider direct haptic metaphors.
        STEP 3.2.3.1 Review direct haptic metaphor guidelines.
        STEP 3.2.3.2 Consider using force.
        STEP 3.2.3.3 Consider using haptic surface texture.
        STEP 3.2.3.4 Consider using direct haptic shape.
        STEP 3.2.3.5 Consider using compliance.
        STEP 3.2.3.6 Consider using friction.
        STEP 3.2.3.7 Consider using weight.
        STEP 3.2.3.8 Consider using vibration.
STEP 3.3 Consider temporal metaphors.
    STEP 3.3.1 Consider temporal visual metaphors.
        STEP 3.3.1.1 Review temporal visual metaphor guidelines.
        STEP 3.3.1.2 Consider using visual events.
        STEP 3.3.1.3 Consider using visual temporal structures.
    STEP 3.3.2 Consider temporal auditory metaphors.
        STEP 3.3.2.1 Review temporal auditory metaphor guidelines.
        STEP 3.3.2.2 Consider using auditory events.
        STEP 3.3.2.3 Consider using auditory temporal structures.
    STEP 3.3.3 Consider temporal haptic metaphors.
        STEP 3.3.3.1 Review temporal haptic metaphor guidelines.
        STEP 3.3.3.2 Consider using haptic events.
        STEP 3.3.3.3 Consider using haptic temporal structures.

Table 10-5 Step 3 of the MS-Process.

STEP 4. Prototyping.
STEP 4.1 Consider software platform.
STEP 4.2 Consider hardware platform.
STEP 4.3 Implement current design.

Table 10-6 Step 4 of the MS-Process.

STEP 5. Evaluation.
STEP 5.1 Expert heuristic evaluation.
    STEP 5.1.1 System demonstration.
    STEP 5.1.2 Feedback interview.
    STEP 5.1.3 Capture problems and suggestions.
STEP 5.2 Formative evaluation.
    STEP 5.2.1 Review system using MS-Guidelines.
    STEP 5.2.2 Capture problems and suggestions.
STEP 5.3 Summative evaluation.
    STEP 5.3.1 Design experiment.
    STEP 5.3.2 Conduct experiment.
    STEP 5.3.3 Analyse experimental results.
    STEP 5.3.4 Capture problems and suggestions.

Table 10-7 Step 5 of the MS-Process.

Chapter 11

The Stock Market

“The fluctuations of the daily closing prices of the Dow-Jones rail and industrial averages afford a composite index of all the hopes, disappointments and knowledge of everyone who knows anything of financial matters and for that reason the effects of coming events (excluding acts of God) are always properly anticipated in their movement. The averages quickly appraise such calamities as fires and earthquakes.” [Rhea, 1932]

CASE STUDY Chapter 11 – The Stock Market 289

Chapter 11

The Stock Market

11.1 Introduction

The previous chapters of this thesis described the MS-Taxonomy, the MS-Guidelines and the MS-Process. The MS-Taxonomy is a structured group of concepts that categorise the multi-sensory design space. The MS-Process is an iterative process that can be followed when designing multi-sensory displays. This process integrates the MS-Guidelines, a collection of structured guidelines that help direct and evaluate the design of multi-sensory displays.

In the following chapters the MS-Process is evaluated in a case study from the domain of stock market trading. During the course of this case study, several iterations through the MS-Process are described. The outcomes include a number of multi-sensory prototypes designed for finding patterns in stock market data.

The work described in this chapter received substantial input from Bernard Orenstein who is a domain expert in the field of stock market trading. Domain descriptions and definitions of financial terminology used in this chapter are derived from course notes by Nicholson [Nicholson 1999a, Nicholson 1999b].

11.2 A Case Study in Technical Analysis

Stock market data contains many attributes, far more than traders can readily comprehend. Yet traders attempt to determine relationships between the data attributes that can lead to profitable trading of financial instruments. Developing human perceptual tools that can assist traders in finding useful patterns in stock market data is the goal of this case study.

There are two complementary types of analysis used to trade on the stock market, known as technical analysis and fundamental analysis [Nicholson 1999a]. Fundamental analysis studies the underlying factors that determine the price of a financial instrument. For example, factors such as a company's profit, market sector, or potential growth can influence the share price. These factors can be considered against more global considerations such as the general economic trend. Fundamental analysis is the more traditional form of analysis used to trade the market. Technical analysis is defined as "the study of behaviour of market participants, as reflected in price, volume and open interest for a financial market, in order to identify stages in the development of price trends" [Nicholson 1999a]. Unlike fundamental analysis, technical analysis ignores the underlying factors that determine price and makes the assumption that the price of a financial instrument already quantifies all these underlying factors. Technical analysis is based on patterns that can be found directly in the data. This case study is based on technical analysis.

The case study (table 11-1) begins with the task analysis step of the MS-Process, described in section 11.3. Included in this task analysis step is a detailed review of existing displays used for technical analysis. The second step of the MS-Process is data characterisation and this is described in section 11.4.

The last three steps (3, 4, 5) of the MS-Process are iterative. Chapter 11 only describes the first iteration through these steps. Further iterations are described in chapter 12 and chapter 13. The first iteration through the display mapping step considers high-level design alternatives. This might also be described as conceptual design. This first iteration through the display mapping step is described in section 11.5. The first iteration through the prototyping step of the MS-Process targets the design of the visual spatial metaphor. Three alternative visual spatial metaphors are implemented and these prototypes are described in section 11.6. Finally, the three prototypes are evaluated in the final step of the MS-Process and this work is described in section 11.7.

MS-Process | Case Study | Section
STEP 1 Task Analysis | Task analysis for technical analysis. | 11.3
STEP 2 Data Characterisation | Characterise stock market data. | 11.4
Iteration:
STEP 3 Display Mapping | Conceptual designs for technical analysis. | 11.5
STEP 4 Prototyping | Implement prototypes of three spatial visual metaphors. | 11.6
STEP 5 Evaluation | Evaluate three prototypes. | 11.7

Table 11-1 Outline of the case study showing the first iteration through the MS-Process.

11.3 Task Analysis (STEP 1)

The first step of the MS-Process is to undertake a task analysis of the application domain and derive a description of the user's tasks. Included in the task analysis is a review of previous work in the application domain. The steps of the task analysis and their application to this case study are summarised in table 11-2.

MS-Process | Case Study | Section
STEP 1 Task Analysis | Task analysis for technical analysis. | 11.3
STEP 1.1 Describe the user's tasks. | Construct a hierarchical list of tasks used for trading stock market data. | 11.3.1
STEP 1.2 Consider previous work. | Review previous displays used for finding patterns in stock market data. | 11.3.2

Table 11-2 Outline of the task analysis step used in the case study.

11.3.1 Describe the user's tasks (STEP 1.1)

The domain expert's high-level goal is to make more profitable trades on financial instruments. The field of technical analysis has developed around the recognition of patterns in charts of financial data. A specialist analyses charts to interpret market action, predict trends, or forecast price movements. The domain expert's lower-level tasks depend on the type of data being used and the type of trading performed, but the key is to develop a trading strategy based on the patterns that may exist in the data. Interviewing the domain expert and reviewing the literature [Nicholson 1999a, Nicholson 1999b] resulted in a hierarchical list of tasks (table 11-3).

11.3.2 Consider previous work (STEP 1.2)

This step of the task analysis requires a review of previous displays used in the domain of technical analysis. This field has a long history of using 2D visualisations for detecting patterns in stock market data. The case study describes a number of these traditional techniques. These include:

• price graphs (section 11.3.2.1)
• bar charts (section 11.3.2.2)
• volume charts (section 11.3.2.3)
• equivolume charts (section 11.3.2.4)
• trend indicators (section 11.3.2.5)
• momentum oscillators (section 11.3.2.6)
• candlestick charts (section 11.3.2.7)
• depth of market displays (section 11.3.2.8)

This discussion of previous work describes in some detail the types of patterns that have been traded using these different display techniques. At the end of this section the case study describes more recent attempts to develop 2D and 3D computer-based visualisations of stock market data and also describes applications that use sound to display stock market data (section 11.3.2.9). This review step of the task analysis is documented here in detail as it serves to introduce financial terminology and to describe the data used in this domain. This discussion also helps to clarify many of the user's tasks that were captured in the first step of task analysis (table 11-3).

GOAL: To make profitable trades on financial instruments.
TASK 1: To identify viable trading strategies based on patterns in the data.
    TASK 1.1: To identify profit targets.
    TASK 1.2: To identify market tops and bottoms.
    TASK 1.3: To identify buy and sell stops for long trades.
    TASK 1.4: To identify buy and sell stops for short trades.
    TASK 1.5: To identify buy and sell signals.
    TASK 1.6: To time entry into the market.
TASK 2: To develop trading strategies over different time frames.
    TASK 2.1: To develop long-term (months, years) trading strategies.
    TASK 2.2: To develop short-term (days, weeks) trading strategies.
    TASK 2.3: To develop discretionary (intra-day) trading strategies.
TASK 3: To identify trends in the movement of prices.
    TASK 3.1: To identify stages in the development of price trends.
        TASK 3.1.1: To identify starting signals for a new trend.
        TASK 3.1.2: To identify continuation signals of a trend.
        TASK 3.1.3: To identify reversal signals of a trend.
        TASK 3.1.4: To follow trends.
        TASK 3.1.5: To identify the phases in primary trends.
    TASK 3.2: To identify trends over different time frames.
        TASK 3.2.1: To identify primary trends.
        TASK 3.2.2: To identify secondary reactions.
        TASK 3.2.3: To identify daily fluctuations.
        TASK 3.2.4: To identify intra-day movements.
    TASK 3.3: To identify different types of trends.
        TASK 3.3.1: To identify up trends.
        TASK 3.3.2: To identify down trends.
        TASK 3.3.3: To identify ranging (stationary) markets.
TASK 4: To determine the balance of supply and demand of a financial instrument.
    TASK 4.1: To identify instruments that are oversold or overbought.
    TASK 4.2: To identify support and resistance levels.
    TASK 4.3: To identify accumulation or distribution of stock.
    TASK 4.4: To identify how easy or difficult it is to move price.
TASK 5: To monitor changes in the price of a financial instrument.
    TASK 5.1: To monitor multiple instruments.
    TASK 5.2: To identify patterns between traded instruments.

Table 11-3 A hierarchical list of tasks from the domain of technical analysis.

11.3.2.1 Price Graphs

The price graph is a 2D chart that shows the variation of price over time (figure 5-2). Usually the closing price for a period is used to represent price. This period may be a month, a week, a day or even shorter. Generally the length of the period depends on the strategy of the trader. For example, long-term traders may use periods of a week or a day, whereas intra-day traders may be interested in fluctuations that occur in prices over very short periods, say one or two minutes.

Charles Dow developed many of the ideas of technical analysis. He identified three main movements in stock prices, known as primary movements, secondary movements and daily fluctuations, which occur over different time scales of months, weeks and days (figure 11-3). These movements are also known as trends. It is important to identify the direction of a trend, whether the price is moving up or down. Recognising when a trend is stationary is also critical, as without some movement in the price it is difficult to trade successfully. As well as identifying the direction of trends, it is important to recognise the phases of a trend. For example, recognising the beginning and ending of a trend is important for timing entry into or exit from the market. Patterns in the peaks and troughs of a price graph are used to determine the direction of a trend and also to identify beginnings and endings (figure 11-2).

Figure 11-1 A price graph.
Figure 11-2 Patterns in a price graph that identify an up trend [Nicholson 1999a]: the up trend starts when price passes above the previous high after making a higher low; it is identified by a series of higher highs and higher lows; it ends when price falls below the previous low after making a lower high.
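The up-trend rules annotated in figure 11-2 can be expressed directly in code. This is an illustrative sketch only; the function name and the idea of pre-extracted swing points are assumptions, not part of the thesis:

```python
def is_up_trend(swing_highs, swing_lows):
    """Figure 11-2 rule: an up trend is a series of higher highs
    and higher lows. Swing points are assumed to have been read
    off the price graph already."""
    higher_highs = all(b > a for a, b in zip(swing_highs, swing_highs[1:]))
    higher_lows = all(b > a for a, b in zip(swing_lows, swing_lows[1:]))
    return higher_highs and higher_lows

# Hypothetical swing points from a price graph.
trending_up = is_up_trend([10.0, 11.5, 12.2], [9.0, 10.2, 11.0])
```

Detecting the start and end signals would extend this with the previous-high breakout and previous-low breakdown comparisons described in the caption.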

Figure 11-3 The different time movements in a stock price identified by Dow Theory: primary movements, secondary movements and daily fluctuations.

Other features that are determined from price graphs are trend lines, trend channels, and levels of support and resistance. Trend lines are lines drawn through the low points of an up-trend and through the high points of a down-trend (figure 11-4). Trend lines can assist in gauging the acceleration or deceleration of price movements. Trend channels are constructed by drawing a line parallel to a trend line that passes through two peaks or troughs in a trend (figure 11-5). Trend channels can be useful for indicating where to take profits or to warn that a trend is ending. Support and resistance are significant price levels because the stock price has trouble breaking through these levels. Support and resistance can be identified from charts and used to help a trader determine the best prices at which to trade (figure 11-6).

A number of other specific chart patterns can be identified on price graphs. For example, patterns such as a Head and Shoulders or Double Top can indicate trend reversal (figure 11-7). The continuation of a trend is signalled by patterns called Flags and Pennants (figure 11-8). In each case these patterns not only indicate the state of a trend but also suggest price levels to trade at. These trading points are also described as targets and stop points.

Figure 11-4 Trend lines drawn on a price graph: one drawn on an up trend and one on a down trend.

Figure 11-5 Trend channels drawn on a price graph: one drawn on an up trend and one on a down trend.

Figure 11-6 Support and resistance levels in a price graph: support levels (S) and resistance levels (R) shown in both an up trend and a down trend.

Figure 11-7 Two types of reversal patterns in an up trend: the Head and Shoulders and the Double Top.

Figure 11-8 Two patterns suggesting trend continuation: the Flag and the Pennant.

11.3.2.2 Bar Charts

Another traditional type of chart used in technical analysis is the bar chart. Each period of trading is shown as a single bar representing the range of prices from low to high for the period (figure 11-9). Also marked on the bar are the opening and closing price for the period. Bar charts typically use a period of one trading day and are used for shorter-term trading. However, bar charts can be used to detect the same longer-term trading patterns described for price graphs. Indeed, simply drawing a line through the closing price of each price bar in a bar chart creates a price graph.

Figure 11-9 The components of a price bar (the price range from low to high, with the opening and closing prices marked) and an example of a bar chart.
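The price bar of figure 11-9 maps naturally onto a small record type. A sketch, with illustrative class and field names that are not from the thesis:

```python
from dataclasses import dataclass

@dataclass
class PriceBar:
    """One period of trading as drawn in a bar chart (figure 11-9)."""
    open: float   # opening price for the period
    high: float   # top of the price range
    low: float    # bottom of the price range
    close: float  # closing price for the period

bars = [
    PriceBar(open=10.0, high=10.8, low=9.7, close=10.5),
    PriceBar(open=10.5, high=11.2, low=10.3, close=11.0),
    PriceBar(open=11.0, high=11.4, low=10.6, close=10.8),
]

# Joining the closing prices of consecutive bars recovers the
# price graph of section 11.3.2.1.
price_graph = [bar.close for bar in bars]
```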

Bar charts can be used to study short-term trends in the market (figure 11-10). By analysing consecutive closing prices, or by comparing opening and closing prices, it is possible to infer whether buyers or sellers have control of the market. As with long-term trend analysis, a number of different patterns that signal the reversal of the current trend have been identified (figure 11-11).

Figure 11-10 Identifying short-term trends in a bar chart: a short-term up trend is identified by a succession of higher highs and higher lows; a short-term down trend by a succession of lower highs and lower lows.

Figure 11-11 Some reversal signals for a short-term up trend in a bar chart: the open/close reversal, the closing price reversal and the key reversal.

11.3.2.3 Volume Charting

While the price of an instrument is the main consideration of the trader, it is also common to augment bar charts with a histogram of the number of shares traded during each period (figure 11-12). This is called the volume of trading. For futures trading the number of open contracts at the end of each period may also be shown and this is known as open interest (figure 11-12). Both open interest and volume act as secondary indicators of market activity. Patterns that occur in the volume histogram are frequently used to confirm patterns that are observed in price. The health of a trend can be gauged by the way volume confirms or contradicts the price peaks and troughs (figure 11-13). For example, in an up trend the peaks in volume should coincide with the price peaks. If volume increases as the price increases, then there is a healthy upward trend. If volume is falling as the price increases, then the up trend is not healthy and may end. A sudden increase in volume may confirm movement through a significant support or resistance level (figure 11-14). A high volume of trading during a very stable price can indicate accumulation or distribution of shares and also a possible reversal, whereas a low volume of trading at a very stable price can indicate that the trend will continue (figure 11-14).

Figure 11-12 A bar chart augmented with a volume histogram and open interest.

Figure 11-13 The use of volume to gauge the health of a trend: volume can confirm or contradict either an up trend or a down trend.

Figure 11-14 The use of volume to confirm price breakouts and for identifying distribution: volume can confirm or contradict a price breakout through a resistance level; high volume indicates distribution at the end of an up trend, while low volume indicates the up trend will continue.
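The volume rule of thumb above can be sketched as a simple check. The function name and the first-to-last comparison are illustrative simplifications, not the thesis's own method:

```python
def volume_confirms_up_trend(prices, volumes):
    """Figure 11-13 rule of thumb: a healthy up trend shows
    rising prices accompanied by rising volume."""
    price_rising = prices[-1] > prices[0]
    volume_rising = volumes[-1] > volumes[0]
    return price_rising and volume_rising

# Hypothetical rising leg of the market.
healthy = volume_confirms_up_trend([10.0, 11.0, 12.0], [1000, 1200, 1400])
unhealthy = volume_confirms_up_trend([10.0, 11.0, 12.0], [1400, 1200, 1000])
```

A practical version would compare the volume peaks against the price peaks rather than only the endpoints.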

As part of this case study a 3D visual display of volume and price data is developed. This display is called the 3D Bar Chart. The initial design for this display is described in section 11.5.1. Later the case study is used to develop and evaluate a prototype of the 3D Bar Chart and this is described in section 11.6 and section 11.7. The design of the 3D Bar Chart evolves as the case study follows a number of iterations through the MS-Process. Chapter 12 reports on the later iterations of this display and includes a detailed evaluation of the final haptic-visual design.

11.3.2.4 Equivolume Charting

Volume is an important secondary indicator in price movements and so a number of volume indicators have also been developed; these include On Balance Volume, Volume Accumulation and the Demand Index [Nicholson 1999a]. These indicators may also be incorporated into the traditional bar charts and price graphs. A special form of chart called the equivolume chart was invented by Arms as a graphical way to emphasise the relationship between price and volume [Arms 1983]. The price bars in a bar chart are extended so that the bar's width represents the volume of trading for the period (figure 11-15). This creates a series of different-sized boxes for each period of trading. The shape and size of these boxes indicate how easy it is to move prices and the commitment of the market to the price action. For example, smaller boxes indicate periods where the market was less committed than those periods represented by big boxes. Tall narrow boxes indicate that price was easy to move, while long flat boxes indicate that price was very difficult to move.

Figure 11-15 The equivolume box and an example of an equivolume chart. The height of a box is the price range and its width the volume of trading: a large box indicates the market is committed, a small box that it is not; a tall box shows price was easy to move, a long flat box that price was difficult to move.
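The reading of an equivolume box can be sketched as a classification function. The thesis describes the interpretation only qualitatively, so the threshold values and names here are arbitrary placeholders:

```python
def classify_equivolume_box(high, low, volume,
                            tall_range=1.0, wide_volume=10000):
    """Interpret one equivolume box (figure 11-15): height is the
    price range for the period, width is the volume traded.
    Threshold values are illustrative, not from the thesis."""
    tall = (high - low) > tall_range   # price was easy to move
    wide = volume > wide_volume        # the market was committed
    if tall and wide:
        return "committed market, price easy to move"
    if tall:
        return "price easy to move"
    if wide:
        return "price difficult to move"
    return "market not committed"

reading = classify_equivolume_box(high=11.5, low=10.0, volume=500)
```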

11.3.2.5 Trend Indicators

While the raw price of a stock is considered to be a more accurate means of analysing movements in the market, it is possible to supplement price with a number of indicators. Indicators are mathematical functions derived from price, volume and open interest. These indicators can be used to automatically identify signals in the market. However, since all indicators are derived, traders usually consider them to be secondary in importance, preferring to rely on the raw data, such as price, for final decision making. There are two basic kinds of indicators: momentum indicators and trend indicators. Momentum indicators, such as momentum oscillators, are used to monitor changes in the speed of price movements and these are discussed in section 11.3.2.6. As the name suggests, trend indicators help to track phases in a price trend. The moving average is an example of a common indicator used to track trends. The moving average is calculated simply by averaging the closing price of a stock over a defined number of periods. For example, a 5-day moving average is the average of the closing price over the last five days. Moving averages remove noise caused by the many fluctuations in price. However, as more days are included in the average the indicator becomes slower to react to changes in trend direction. For finding trading signals the moving average may be incorporated into a bar chart (figure 11-16). Buy and sell signals are indicated at locations where the bar chart crosses the moving average. Some analysts combine multiple moving averages in the same chart. For example, a 15-day and a 30-day moving average can be drawn together (figure 11-16). In this case, buy and sell signals are generated when the two moving averages cross.
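The moving average calculation described above is a short function. A sketch with illustrative names:

```python
def moving_average(closes, n):
    """n-period simple moving average of closing prices.
    The first value is available once n closes have been seen."""
    return [sum(closes[i - n + 1:i + 1]) / n
            for i in range(n - 1, len(closes))]

# A 3-day moving average over six hypothetical closing prices.
closes = [10.0, 11.0, 12.0, 13.0, 12.0, 11.0]
ma3 = moving_average(closes, 3)
```

A crossover of the closing price, or of a faster average, above this line would then be read as a buy signal, as in figure 11-16.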

Figure 11-16 Moving average indicators used to identify buy and sell signals: a buy signal is generated when the bars cross from below to above the moving average line, and a sell signal when they cross from above to below it; with two moving averages, a buy signal is generated when the faster moving average crosses above the slower moving average.

Other indicators can be calculated that help trade divergences from a moving average line. For example, Bollinger bands use volume information and standard deviations from the average price to calculate some limits on expected price movements (figure 11-17). Bollinger bands expand and contract as market volatility increases and decreases, and help to confirm signals given by indicators such as a moving average. Bollinger bands also help the analyst in placing trading stops. Trading stops are price levels at which the trader may take profits from successful trades or exit losing trades. While signals may be generated automatically by indicators, there is some degree of analyst intervention in the choice of parameters for each mathematical function that defines an indicator. For example, to calculate a moving average, the number of days over which price is averaged is determined by the analyst.

Figure 11-17 A moving average plotted on a bar chart with Bollinger Bands.
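Bollinger bands can be sketched as follows. This uses the common formulation, a moving average with bands placed k standard deviations of price either side of it over the same window; the parameter names are illustrative and not from the thesis:

```python
import statistics

def bollinger_bands(closes, n=20, k=2.0):
    """Upper and lower bands placed k standard deviations either
    side of the n-period moving average of closing prices."""
    upper, lower = [], []
    for i in range(n - 1, len(closes)):
        window = closes[i - n + 1:i + 1]
        mid = statistics.mean(window)
        spread = k * statistics.pstdev(window)
        upper.append(mid + spread)
        lower.append(mid - spread)
    return upper, lower

upper, lower = bollinger_bands([10.0, 12.0, 14.0, 16.0], n=3)
```

The bands widen when the window's prices are volatile and narrow when they are stable, which matches the expand-and-contract behaviour described above.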

As part of this case study a 3D visual display of moving averages is developed. This display is called the Moving Average Surface. The initial design for this display is described in section 11.5.1. Later the case study is used to develop and evaluate a prototype of the Moving Average Surface and this is described in section 11.6 and section 11.7. The design of the Moving Average Surface evolves as the case study follows a number of iterations through the MS-Process. Chapter 12 reports on the later iterations of this display and includes a detailed evaluation of the final haptic-visual design.

11.3.2.6 Momentum Oscillators

Whereas trend indicators are useful for trending markets, momentum oscillators are primarily used for trading in ranging markets. Ranging markets are characterised by a price that fluctuates between support and resistance levels. Price stabilises within a small range, unable to move above the resistance level or fall below the support level. Momentum indicators are useful because they indicate whether a stock is overbought or oversold, and the peaks and troughs in the oscillator tend to mimic the peaks and troughs in the price curve (figure 11-18). Momentum oscillators are also known as leading indicators as they tend to change direction slightly before the price curve. A simple momentum indicator can be calculated by subtracting the closing price of a previous period from the current closing price. For example, a 7-day momentum indicator is calculated by subtracting the closing price seven days ago from the current closing price. The analyst determines the number of periods used in the momentum calculation according to the requirements. A number of other momentum indicators have been described, including Rate of Change, the Relative Strength Index and the Stochastic Oscillator [Nicholson 1999a]. All of these indicators can be charted and then used in conjunction with a price graph or bar chart to assist in trading decisions. A number of patterns have been identified using these indicators and these patterns can be used for trading both ranging and trending markets.

[Figure: a bar chart of price against time with a corresponding momentum indicator plotted below it; overbought and oversold lines are marked on the momentum curve, e.g. a buy signal is generated when the momentum indicator falls below the oversold line and moves back above it.]

Figure 11-18 A bar chart with a corresponding momentum indicator.

CASE STUDY Chapter 11 – The Stock Market 301

11.3.2.7 Candlestick Charts

The Japanese technique of candlestick charts originated sometime in the 17th century and was originally used for futures trading. The basic elements of the candlestick chart are the same as the bar chart: it incorporates the opening price, closing price and the maximum and minimum price for a period of trading (figure 11-19). Unlike bar charts, the relationship between opening and closing price is emphasised by the colour of the candles. Dark candles indicate that the closing price is lower than the opening price, while light candles indicate that the closing price was higher than the opening price (figure 11-19). A number of distinctive types of candles can be identified in candlestick charts (figure 11-20). The occurrence of these candles in a chart is taken to imply something about the state of the market. For example, the Dragonfly Doji signifies a very bullish market at the end of a down-trend (figure 11-20). The Paper Umbrella is a strong sign of trend reversal and a Spinning Top indicates indecision between buyers and sellers in the market (figure 11-20). Interpreting consecutive candles in the chart identifies more complex patterns. For example, the patterns called Dark Cloud Cover and Three Black Crows signify the reversal of an up trend (figure 11-20).

[Figure: the components of a candle (open, close, high, low); a light candle indicates the price closed higher than the open price, while a dark candle indicates the price closed lower than the open price; an example Candlestick chart plots price against time.]

Figure 11-19 The components of a candle and an example of a Candlestick chart.

[Figure: five typical candle patterns.
Dragonfly Doji: the market opens on its high, trades lower, then closes on its high.
Paper Umbrella: the market opens and closes near its high but trades lower during the period.
Spinning Top: the market opens and closes near the same price but trades over a wide range during the period.
Dark Cloud Cover: a black candle covers a white candle at the end of an up trend.
Three Black Crows: three black candles at the top of an up trend.]

Figure 11-20 Some typical patterns used in Candlestick charting.
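As a rough illustration of the single-candle shapes above, a candle can be classified from its open, high, low and close values. The shadow and body thresholds below are assumptions made for the example, not part of the candlestick technique itself.

```python
def classify_candle(o, h, l, c, doji_tol=0.02, tol=0.1):
    """Classify one candle; thresholds are fractions of the high-low range."""
    rng = h - l
    if rng <= 0:
        return "flat"
    body = abs(c - o)                 # distance between open and close
    upper = h - max(o, c)             # upper shadow
    if upper <= tol * rng and body <= doji_tol * rng:
        return "dragonfly doji"       # opens and closes on its high
    if upper <= tol * rng and body <= 3 * tol * rng:
        return "paper umbrella"       # opens/closes near the high, trades lower
    if body <= 2 * tol * rng:
        return "spinning top"         # small body over a wide trading range
    return "light" if c > o else "dark"

print(classify_candle(10.0, 10.0, 9.0, 10.0))   # dragonfly doji
print(classify_candle(9.95, 10.0, 9.0, 10.0))   # paper umbrella
print(classify_candle(9.0, 10.1, 8.9, 10.0))    # light
```

Multi-candle patterns such as Three Black Crows would then be found by scanning for runs of classified candles.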

11.3.2.8 Depth of Market

Many of the techniques discussed look for patterns across periods of trading. While these periods of trading can be generalised to very short time frames, they are traditionally used for periods of one day. Shorter-term players of the market, such as day traders, may wish to make minute-by-minute decisions from live feeds of stock data. In this case each transaction may be charted so that the trader can take advantage of variations that occur from minute to minute.

As well as price information, day traders may take advantage of data provided by the depth of market. The depth of market refers to the number of buyers and sellers currently trying to trade a financial instrument. A buyer may make a bid to purchase a specified volume of shares while at the same time a seller may ask a price for some specified volume. The balance of bids and asks determines the state of the current market. The difference between the highest bid and the lowest ask is known as the spread.

The task of buying and selling shares to make a profit on short-term variations in market prices is called discretionary trading. The emphasis is on making a small profit many times during the period of trading. A trader may decide to buy when the volume of bids around the last trade price outweighs the volume of asks, or conversely sell if asks outweigh bids.

Traditionally the depth of market data is displayed in a table (table 11-4), which updates every 30 seconds or so. The bids and asks are sorted so that the highest bid and the lowest ask are shown at the top. The difference between the highest bid and lowest ask indicates the spread. A wider spread usually indicates a lower likelihood that a trade will occur. The price of the last trade is shown for comparison with the current spread. Other more peripheral information includes the volume of stock in bid and ask quotes, and the context provided by lower bids and higher asks.

Buyers                           Sellers
Volume      Price    Level    Price      Volume
14,533      12.03      1      12.04      42,450
28,850      12.02      2      12.06      20,540
23,000      12.02      3      12.07       8,261
 2,121      11.99      4      12.09      35,000
41,000      11.98      5      12.10     120,515
17,000      11.97      6      12.11         574

Table 11-4 Table of depth of market data (last trade at 12.03).
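The spread described above is simply the gap between the best bid and the best ask. A minimal sketch using the quotes from table 11-4 (variable names are illustrative):

```python
# (volume, price) pairs taken from table 11-4.
bids = [(14_533, 12.03), (28_850, 12.02), (23_000, 12.02),
        (2_121, 11.99), (41_000, 11.98), (17_000, 11.97)]
asks = [(42_450, 12.04), (20_540, 12.06), (8_261, 12.07),
        (35_000, 12.09), (120_515, 12.10), (574, 12.11)]

best_bid = max(price for _, price in bids)    # highest bid
best_ask = min(price for _, price in asks)    # lowest ask
spread = round(best_ask - best_bid, 2)
print(best_bid, best_ask, spread)             # 12.03 12.04 0.01
```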

As part of this case study a multi-sensory display of depth of market data is developed. This display is called the BidAsk Landscape. The initial design for this display is described in section 11.5.1. Later the case study is used to develop and evaluate a prototype of the BidAsk Landscape and this is described in section 11.6 and section 11.7. The design of the BidAsk Landscape evolves as the case study follows a number of iterations through the MS-Process. Chapter 13 reports on the later iterations of this depth of market display and includes a detailed evaluation of the final auditory-visual design.

11.3.2.9 Recent Displays of Stock Data

With the wide availability of new technology, computers have been used to chart larger amounts of data, and to produce charts from live data feeds in real time from the market. New visualisations have been developed to enable the perception and analysis of more information more quickly. For example, treemaps have been used for visualising stock portfolios [Jungmeister and Turo 1992] and Wright implemented 3D visualisations of liquidity information using live data feeds [Wright 1995]. Wright's work is one of the few attempts to visualise depth of market data; however, it is intended to provide a broader view of multiple stocks rather than a tool for identifying patterns in a single stock.

This case study is interested in developing multi-sensory displays of financial data; however, no work of this nature has been discovered. Nor have haptic displays been investigated for analysing stock market data. The case study later develops a haptic-visual prototype and this is described in chapter 12. The case study also develops an auditory-visual prototype and this is described in chapter 13. Unlike haptics, there have been some investigations using sound to display stock market data and this work is reviewed next.

Frysinger directly played back price data as a sound waveform [Frysinger 1990]. However, the sounds were difficult to interpret. This was attributed to the fact that the time series of stock market data did not follow the physical-acoustic laws that characterise typical sounds from everyday experience. Stock market price data for a single share was mapped to pitch in an experiment by Brewster et al. [Brewster and Murray 2000]. The experiment found no difference in performance between visual and auditory modes and subjects reported a decrease in workload with the sound display. In another experiment, both price and volume data for a single stock were used [Neuhoff, Kramer et al. 2000]. Price was mapped to pitch and volume to loudness. The subjects were required to judge the price and volume from the sound. Results confirmed that perceptual interactions occurred between pitch and loudness, affecting the ability of the subjects to judge price and volume. Price and volume for two stocks were displayed simultaneously [Ben-Tal, Daniels et al. 2001].
The two stocks were mapped to perceptually distinct vowel-like sounds. The closing price of the day was mapped to the number of bursts of sound and the volume of trade to the duration of the bursts. The data for each day was mapped to a second of sound for periods of up to a year. They informally observed that they could categorise high volume, high price trading days as loud, dense sounds, while low volume, low price days were heard as pulsed rhythmic sounds.
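A minimal sketch of the price-to-pitch idea behind these experiments: prices are scaled linearly onto a frequency range. The 220-880 Hz range and the sample prices are assumptions for the example, not details taken from the studies cited.

```python
def price_to_pitch(prices, f_lo=220.0, f_hi=880.0):
    """Linearly map a price series onto a frequency range in Hz."""
    p_lo, p_hi = min(prices), max(prices)
    span = p_hi - p_lo or 1.0          # avoid dividing by zero for flat series
    return [f_lo + (p - p_lo) / span * (f_hi - f_lo) for p in prices]

prices = [10.0, 10.5, 11.0, 10.25, 12.0]
print([round(f, 1) for f in price_to_pitch(prices)])
```

Each frequency would then be rendered as a short tone, one per trading period.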

11.4 Data Characterisation (STEP 2)

The task analysis step of the MS-Process has provided a task list, a summary of current methods and sample data. The second step of the MS-Process is to perform a characterisation of the data. This includes an analysis to determine the type, source and priority of the data. The steps of the data characterisation and their application to this case study are summarised in table 11-5. The results from this step are summarised in table 11-6, table 11-7, table 11-8 and table 11-9.

Stock market data is a typical example of multi-attributed data. The main types of stock market data used in the various tasks of technical analysis are:

• tick data (figure 11-21)
• period data (figure 11-22)
• market indicators (figure 11-23)
• depth of market data (figure 11-24).

MS-Process                                 Case Study                                                Section
STEP 2. Data Characterisation              Characterisation of stock market data.                    11.4
STEP 2.1 Describe the data types           Describe the types of stock market data.
STEP 2.2 Describe the data sources         Analyse the sources of stock market data.
STEP 2.3 Describe the data priorities      Describe the priority of data attributes for
                                           different tasks in technical analysis.

Table 11-5 Outline of the data characterisation step used in the case study.

[Figure 11-21 A UML model of tick data: a Tick (ID, Date, Time, Price, Volume) is associated with an Exchange and an Instrument.]

[Figure 11-22 A UML model of period data: a Period (Duration, Start Date, Start Time, End Date, End Time, Open Price, Close Price, Low Price, High Price, Volume, Open Interest) summarises Ticks.]

[Figure 11-23 A UML model of market indicators: Market Indicators such as the Volume Indicator, Moving Average and Momentum Oscillator are calculated from Periods.]

[Figure 11-24 A UML model of depth of market data: the Depth Of Market comprises Market Offers (ID, Price, Volume, Disclosure, Bid/Ask, Type, Category, Date, Time) and Ticks.]

Tick data consists of a series of trades collected over time. A single trade on the market is called a tick. The primary attribute of tick data is the quantitative attribute of price (table 11-6). The volume is also recorded, though this is usually of secondary importance for traders. Stock market data is collected over time and hence the source of the data is temporal. Each tick has an identifying code but this is only used to give each trade a unique code. Date and time are also recorded for all transactions and these can be quite significant. For example, the majority of trading often occurs in the last hour of the day's trading and the majority of trading in a futures contract usually occurs as the contract nears its completion date.


Attribute   Type           Source     Priority
Price       quantitative   temporal   high
Time        quantitative   temporal   high
Date        quantitative   temporal   medium
Volume      quantitative   temporal   medium
ID          nominal        temporal   low

Table 11-6 Data characterisation of tick data.

Period data summarises the trades that occur over a defined period of time. Four prices are recorded to represent the period: the opening price, the closing price and the maximum and minimum price (table 11-7). The closing price is considered the most important indicator of the period. Of secondary importance to the price data is the volume of instruments traded. Where the instrument is a futures contract another data attribute called open interest is recorded. Open interest measures the number of open contracts at the end of the period. When data is collected for a period, both the start and end time of the period are recorded. Periods can differ in duration depending on the trading time frame. Some typical periods are 1 minute, 2 minutes, 5 minutes, 10 minutes, 1 hour, 1 day and 1 week.
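The relationship between ticks and periods can be sketched as a simple aggregation; the `summarise_period` helper and the tick tuples below are invented sample data.

```python
def summarise_period(ticks):
    """Summarise (time, price, volume) ticks, sorted by time, into a period."""
    prices = [price for _, price, _ in ticks]
    return {
        "open": prices[0],                       # first trade of the period
        "close": prices[-1],                     # last trade of the period
        "high": max(prices),
        "low": min(prices),
        "volume": sum(vol for _, _, vol in ticks),
    }

ticks = [("10:00", 12.03, 500), ("10:05", 12.05, 200),
         ("10:09", 12.01, 800), ("10:15", 12.04, 300)]
print(summarise_period(ticks))
```

For futures contracts, open interest would be recorded alongside these fields.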

Attribute                  Type           Source     Priority
Closing Price              quantitative   temporal   high
Opening Price              quantitative   temporal   high
Low Price                  quantitative   temporal   high
High Price                 quantitative   temporal   high
Start Time                 quantitative   temporal   medium
Start Date                 quantitative   temporal   medium
End Time                   quantitative   temporal   medium
End Date                   quantitative   temporal   medium
Duration                   nominal        temporal   low
Volume                     quantitative   temporal   medium
Open Interest (futures)    quantitative   temporal   medium
Contract Date (futures)    quantitative   temporal   high

Table 11-7 Data characterisation for trading periods.

Market indicators are derived values that are calculated for use in specific tasks of technical analysis (table 11-8). These indicators include the Advance Decline Line, On Balance Volume, Volume Accumulation and Demand Index for studying variations in stock volume [Nicholson 1999a, Nicholson 1999b]. The Moving Average indicators are used for tracking and trading trends in the market, while momentum oscillators, such as Simple Momentum and Rate of Change, are used to track both ranges and trends. These indicators are quantitative in type and temporal in source. While this data is designed for very specific tasks it is considered less significant than the raw price and time data.

Attribute              Type           Source     Priority
Advance Decline Line   quantitative   temporal   medium
On Balance Volume      quantitative   temporal   medium
Volume Accumulation    quantitative   temporal   medium
Demand Index           quantitative   temporal   medium
Moving Average         quantitative   temporal   medium
Bollinger Bands        quantitative   temporal   medium
Simple Momentum        quantitative   temporal   medium
Rate of Change         quantitative   temporal   medium

Table 11-8 Data characterisation for market indicators.


Depth of market data (table 11-9) captures both the trades that occur and all market offers. The market offers represent either a bid from a potential buyer or an ask from a potential seller. This data is very similar to trade data, recording both a price and a volume. The quantitative attributes of price and time are most important for traders.

Attribute                    Type           Source     Priority
Price                        quantitative   temporal   high
Offer Category (Bid, Ask)    nominal        temporal   high
Time                         quantitative   temporal   high
Date                         quantitative   temporal   medium
Volume                       quantitative   temporal   medium
Type (New, Amend, Delete)    nominal        temporal   medium
Disclosure                   nominal        temporal   medium
ID                           nominal        temporal   low

Table 11-9 Data characterisation for depth of market offers.

11.5 Display Mapping (STEP 3)

The task analysis and data characterisation steps of the MS-Process provide a task list, a summary of current methods, some preliminary user requirements and a description of the data. The third step of the MS-Process is to create a display mapping from the data in the application domain to the metaphors described by the MS-Taxonomy. The display mapping step is the first of the iterative steps of the MS-Process.

MS-Process                                          Case Study                                          Section
STEP 3. Display mapping                             Explore the design space in terms of technical      11.5
                                                    analysis. Develop conceptual design
                                                    alternatives. Focus on developing concepts for
                                                    spatial visual metaphors.
STEP 3.1 Consider spatial metaphors                 Suggest spatial metaphors for technical analysis.
  STEP 3.1.1 Consider spatial visual metaphors      Develop new 3D spatial visual metaphors:            11.5.1
                                                    the 3D Bar Chart, the Moving Average Surface
                                                    and the BidAsk Landscape.
  STEP 3.1.2 Consider spatial auditory metaphors                                                        11.5.2
  STEP 3.1.3 Consider spatial haptic metaphors                                                          11.5.3
STEP 3.2 Consider direct metaphors                  Suggest direct metaphors for technical analysis.
  STEP 3.2.1 Consider direct visual metaphors                                                           11.5.4
  STEP 3.2.2 Consider direct auditory metaphors                                                         11.5.5
  STEP 3.2.3 Consider direct haptic metaphors                                                           11.5.6
STEP 3.3 Design temporal metaphors                  Suggest temporal metaphors for technical analysis.
  STEP 3.3.1 Consider temporal visual metaphors                                                         11.5.7
  STEP 3.3.2 Consider temporal auditory metaphors                                                       11.5.8
  STEP 3.3.3 Consider temporal haptic metaphors                                                         11.5.9

Table 11-10 The first iteration through the display mapping step explores the design space.

Since the process is iterative there is no requirement to arrive at a perfect solution on the first iteration of the display mapping step. The initial strategy is to consider some possible solutions from across the full multi-sensory design space (table 11-10). This could be described as conceptual or high-level design and the outcome is a number of design suggestions.


At this early stage of the case study, the designer has no well-formed concept of the final solution. Unfortunately, the process does not provide a recipe for inventing new solutions; this requires the creative input of the designer. However, following the process ensures that the full design space is considered. The process integrates the appropriate MS-Guidelines at each step and this can help prompt ideas. For example, at the most general level, direction is provided by the MS-Guidelines GD-1 Emphasise the data, GD-2 Simplify the display and GD-3 Design for a task.

11.5.1 Consider Spatial Visual Metaphors (STEP 3.1.1)

The focus in this step is on developing appropriate spatial visual metaphors for specific tasks (table 11-11). This is a key component of the multi-sensory display as suggested by the MS-Guideline, MST-2 Use the spatial visual metaphor as a framework for the display. Hence this step of the MS-Process is critical to the overall design of a multi-sensory display as designing the appropriate spatial visual metaphor ensures a correct framework for integrating the haptic and auditory displays.

MS-Process                                          Case Study                                          Section
STEP 3.1.1 Consider spatial visual metaphors        Develop new 3D spatial visual metaphors:            11.5.1
                                                    the 3D Bar Chart, the Moving Average Surface
                                                    and the BidAsk Landscape.
STEP 3.1.1.1 Review spatial visual metaphor         Apply guidelines to stock market data.              11.5.1.1
guidelines
STEP 3.1.1.2 Consider previous designs              Consider how existing displays can be used          11.5.1.2
                                                    for stock data.
STEP 3.1.1.3 Design the visual display space        Develop conceptual designs using orthogonal         11.5.1.3
(consider orthogonal, distorted and                 3D visual display spaces for tasks.
subdivided space)
STEP 3.1.1.4 Design visual spatial properties       Consider the use of visual position and visual      11.5.1.4
(consider visual position, visual scale and         scale in the conceptual designs.
visual orientation)
STEP 3.1.1.5 Design visual spatial structures       Consider the use of spatial structures in the       11.5.1.5
(consider global and local visual spatial           conceptual designs.
structures)

Table 11-11 The first iteration through STEP 3.1.1 Consider spatial visual metaphors.

During the course of the case study substantial effort is directed to this step and so it is discussed in some detail. The outcome from the first iteration is to propose three new spatial visual metaphors that can be used for specific tasks in technical analysis. They are called the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.

11.5.1.1 Review spatial visual metaphor guidelines (STEP 3.1.1.1)

From a review of the MS-Guidelines a number of guidelines need to be integrated into the design. These guidelines include: SV-2 Position in visual space is the most accurate perception, SV-3 Design visual space around the most important data variables and SV-6 If time is an important variable then integrate it into the display space. Using these guidelines the most important data for each task is integrated into the spatial visual metaphor. For the stock market this includes both price and time as these attributes are of high importance for all the tasks identified for technical analysis (table 11-6, table 11-7, table 11-8, table 11-9).

Because the aim is to use Virtual Environments the design focuses on 3D space. This focus is supported by the guideline, SV-4.3.1 Virtual Environments enhance the interpretation of 3D. The case study initially adopts simple orthogonal coordinates as suggested by the guideline SV-4 Consider Orthogonal Spaces as a first choice in design.

11.5.1.2 Consider previous designs (STEP 3.1.1.2)

The case study initially develops the 3D Bar Chart and the Moving Average Surface. These designs use existing organisations of 3D space. This is suggested by the guideline, SV-7 Consider previous organisations of visual space as design options. The design for the 3D Bar Chart (figure 11-25) for financial data is taken directly from a general 3D-Barplot (figure 3-58). It is a simple extension of the traditional 2D Bar Chart and Volume Chart used in technical analysis. The high priority quantitative attributes of price and time are displayed along two dimensions and the third dimension is used to display volume. The display is expected to support the identified task, TASK 1: To identify viable trading strategies based on patterns in the data. The patterns are expected to occur in price over time, with those patterns confirmed by volume.

[Figure: left, a traditional bar chart (price against time) augmented with a volume histogram; right, the 3D Bar Chart (price, time, volume) with volume mapped to the third dimension.]

Figure 11-25 The design of the 3D Bar Chart.

Volume is often used to confirm price signals such as a trading range breakout. Specialised techniques, such as Equivolume charts, have been developed to include volume explicitly in a 2D display (section 11.3.2.4). However, the design of this display makes it difficult to interpret the time attribute, as each bar's width is no longer uniform but rather dependent on the period's trading volume (figure 11-15). The more typical way to chart volume is as a histogram below the price bars (figure 11-2). While this allows price to be observed in relation to volume it requires moving the eyes back and forth between volume and price when trying to distinguish a correlation between the two attributes. In the 3D Bar Chart the price bars are extended in the third dimension by mapping volume to depth. The user can then simply rotate the chart and so explicitly compare trends in volume and price.

The other common element used in 3D visual models is a surface displayed within an orthogonal 3D space. This generic design, called a 3D-Surfaceplot, is used for the design of the Moving Average Surface. The calculation of a moving average is often used to assist in the task, TASK 3: To identify trends in the movement of prices (section 11.3.2.5). Some trading systems rely on the intersection of different moving averages to signal the beginning and end of trends. For example, a 1-day, 15-day and 30-day moving average may be calculated and plotted. Signals are then generated where these moving averages intersect. However, the choice of how many time periods to include in the moving averages can vary. Fast moving averages fluctuate and can generate false signals while slow moving averages respond slowly and hence may generate late signals. The choice of how many time periods to use in each moving average line is determined by the analyst but it is somewhat arbitrary. Hence, it is desirable to analyse a wide range of moving averages to choose appropriate signals for trading a particular instrument.

Figure 11-26 The design of the Moving Average Surface

Using a two-dimensional display of multiple moving averages soon becomes crowded if more than a few curves are plotted (figure 11-26). Occlusion makes it hard to distinguish and compare curves. To overcome this problem a surface of moving averages can be constructed, with traditional price bars positioned at the centre of the surface. This surface is constructed by joining together a number of strips. Each strip represents a different moving average curve. These moving average strips are joined to create a continuous surface. The surface is reflected about the central axis so that each boundary represents a moving average of 30 days. As the surface moves towards the central bar chart the number of days in the moving average is reduced, from 30 to 29, to 28 and so on, until a 1-day moving average is placed adjacent to the central bar chart (figure 11-26). By definition the closing price on these central price bars corresponds to a one-day moving average.
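The grid of values behind such a surface can be sketched by computing every moving average from 1 to 30 days; each row below corresponds to one strip of the surface. This is an illustrative sketch and the toy price series is invented.

```python
def moving_average(closes, n):
    """n-day simple moving average; entries without enough history are None."""
    return [
        sum(closes[i - n + 1 : i + 1]) / n if i >= n - 1 else None
        for i in range(len(closes))
    ]

def ma_surface(closes, max_days=30):
    # Row k holds the (k+1)-day moving average; placing the rows side by
    # side gives the strips that are joined into the continuous surface.
    return [moving_average(closes, n) for n in range(1, max_days + 1)]

closes = [float(p) for p in range(100, 160)]   # toy 60-day price series
surface = ma_surface(closes)
print(surface[0][:3])    # the 1-day strip equals the closing prices
print(surface[29][29])   # first defined value of the 30-day strip
```

Reflecting this grid about the central price bars gives the symmetric surface described above.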

11.5.1.3 Design the visual display space (STEP 3.1.1.3)

The 3D Bar Chart and the Moving Average Surface use a 3D orthogonal display space (table 11-12). While these two designs are based on generic designs, the third alternative is a new organisation of the display space. The BidAsk Landscape subdivides the orthogonal space, with the left of the display being allocated to buyers and the right to sellers (table 11-12).

                          Orthogonal space (3D)    Subdivided space     Generic design    MS-Guidelines
3D Bar Chart (TASK 1)     x - time                                      3D-Barplot        SV-7, SV-6, SV-3,
                          y - price                                                       SV-4, SV-4.3.1
                          z - volume
Moving Average Surface    x - time                                      3D-Surfaceplot    SV-7, SV-6, SV-4,
(TASK 3)                  y - price                                                       SV-3, SV-4.3.1
                          z - days in average
BidAsk Landscape          x - price                buyers - left                          SV-6, SV-3, SV-4,
(TASK 2.3)                y - volume               trade - middle                         SV-4.3.1
                          z - time                 sellers - right

Table 11-12 The definition of visual display space for the three designs.

The BidAsk Landscape concept evolved from discussions between the author and the domain expert, Bernard Orenstein. The aim was to produce a display that better showed temporal patterns that may occur in depth of market data. This is related to the task, TASK 2.3: To develop discretionary (intra-day) trading strategies. The idea evolved from the concept of a landscape metaphor, that is, a display based on hills and valleys of data. Less natural landscapes, such as the Cityscape metaphor [Russo Dos Santos, Gros et al. 2000], have previously been used to display data and it is common to use surfaces to represent abstract data in 3D.

The display concept was to turn depth of market data into a display of hills and valleys that changed over time. The changes to the landscape could hopefully provide temporal signals that help a domain expert predict the market direction. Rather than using a single surface to display the data, the space is subdivided and two surfaces are used to represent the data. In this case buyers are on the left of the display and sellers on the right. The bids of the buyers naturally fall at a lower price while the asks of the sellers are positioned at a higher price. However, the surfaces are not independent but change in response to each other. The display is designed to capture the changing tension between the two market forces. It is expected that a valley will form at the centre of the display as buyers and sellers exchange bids and asks to make a trade. A third surface is used to represent these trades (figure 11-27).

Price is the key component that connects the buyers and sellers and this is placed on the horizontal axis. The height of the hills and valleys is determined by the volume of all bids and asks at that price point. The landscape is slightly complicated because bids accumulate as they move away from the maximum price bid and asks accumulate as they move away from the minimum price ask. This can be understood by thinking of a buyer who is prepared to buy at $10: that same buyer is prepared to buy at anything less than $10. Similarly for a seller who asks $12: that seller will also be happy to sell at anything greater than $12. The final axis of the display is used for time. Hence the 3D orthogonal space is defined using the quantitative attributes of price, volume and time (figure 11-28).
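The accumulation rule can be sketched as a cumulative sum over sorted offers; the `cumulative_depth` helper and the quote lists are invented examples.

```python
def cumulative_depth(offers, side):
    """offers: (price, volume) pairs. Returns price -> cumulative volume.
    Bids accumulate from the highest bid downwards in price;
    asks accumulate from the lowest ask upwards."""
    ordered = sorted(offers, key=lambda pv: pv[0], reverse=(side == "bid"))
    total, depth = 0, {}
    for price, volume in ordered:
        total += volume
        depth[price] = total
    return depth

bids = [(12.03, 14_533), (12.02, 28_850), (11.99, 2_121)]
asks = [(12.04, 42_450), (12.06, 20_540), (12.07, 8_261)]
print(cumulative_depth(bids, "bid"))   # volume grows towards lower prices
print(cumulative_depth(asks, "ask"))   # volume grows towards higher prices
```

Plotting these cumulative volumes against price, and extruding over time, gives the two hillsides of the landscape.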


[Figure: opposing Buyers and Sellers surfaces meet at the traded price ($); the forces of greed and fear are labelled, and the balance of buyers and sellers determines market direction.]

Figure 11-27 The BidAsk Landscape shows the balance between buyers and sellers.

[Figure: left, the current view of depth of market data, a table of bid and ask prices and volumes around the last trade, updated in real time; changes between time steps are lost. Right, the BidAsk Landscape, plotting bid, trade and ask surfaces over price, volume and time axes, provides a temporal view showing how changes to the depth of market evolve over time.]

Figure 11-28 The design of the BidAsk Landscape.

11.5.1.4 Design visual spatial properties (STEP 3.1.1.4)

The three designs use position and scale in space to convey information (table 11-13). Slope is also used in the surfaces of the Moving Average Surface and the BidAsk Landscape. A number of the MS-Guidelines contain further guidance on these aspects of design. These include SV-8 Objects in visual space can be compared, SV-9 Scale of objects in space provides information and SV-10 Orientation of objects in space provides information.

11.5.1.5 Design visual spatial structures (STEP 3.1.1.5)

The use of global structure in the spatial visual metaphor is reinforced by the guideline, SV-11 To represent global structure use a visual display. The three suggested designs already include global visual spatial structure, as summarised in table 11-14. In the 3D Bar Chart, bars are grouped over time. In the Moving Average Surface, the global structure is provided by the surface. In the BidAsk Landscape, three surfaces, one each for bids, asks and trades, provide the global structure. To provide further structure each of the three designs displays orthogonal axes.


                          Position                     Scale                       Orientation            MS-Guidelines
3D Bar Chart (TASK 1)     Position of bars indicates   bar height - price range                           SV-8, SV-9
                          price and time.              bar width - duration
                                                       bar depth - volume
Moving Average Surface    Position of bars indicates   bar height - price range    slope and curvature    SV-8, SV-9,
(TASK 3)                  price and time. Position     bar width - period          of surface             SV-10
                          along surface indicates      duration
                          number of days in the
                          average.
BidAsk Landscape          Position of surface          area under curve -          slope and curvature    SV-8, SV-9,
(TASK 2.3)                indicates price, time and    volume                      of surfaces            SV-10
                          volume. buyers - left,
                          sellers - right.

Table 11-13 The use of visual spatial properties in the three designs.

                          Local Visual Spatial Structures    Global Visual Spatial Structures    MS-Guidelines
3D Bar Chart (TASK 1)     price bars                         grouping bars                       SV-11
                                                             axes
Moving Average Surface    price bars                         grouping bars                       SV-11
(TASK 3)                  moving average surface             connection surface
                                                             axes
BidAsk Landscape          bid surface                        grouping surfaces                   SV-11
(TASK 2.3)                trade surface                      connection surface
                          ask surface                        axes

Table 11-14 The use of visual spatial structure in the three designs.

11.5.2 Consider Spatial Auditory Metaphors (STEP 3.1.2) The focus in this step is on developing appropriate spatial auditory metaphors for specific tasks (table 11-11). However, during this early iteration of the display mapping step this part of the MS-Process

is not critical. Rather some simple ideas that explore this part of the design space are considered. During later iterations through the process this step of the display mapping is revisited.

The task for which this display is designed is TASK 5: To monitor changes in the price of a financial instrument. The choice of sound for a monitoring task is suggested by the MS-Guideline, TA-4 Sound is good for monitoring data.

The monitoring task also contains the subtasks, TASK 5.1: To monitor multiple instruments and TASK 5.2: To identify patterns between traded instruments. The use of sound for these tasks is further suggested by the MS-Guidelines, TA-6.1 We can monitor multiple streams of sound and TA-6.2 We can detect temporal coherence between streams.

To distinguish between multiple stocks, each financial instrument can be categorised by its position in space. This is suggested by the MS-Guideline, SA-1 Use spatial auditory metaphors for displaying categorical data. In this example the financial instruments form nominal categories. Other relevant MS-Guidelines are noted, as they may be applicable to the proposed design. These guidelines include, SA-2.3 Sounds can be heard from multiple directions and SA-6 Spatial position helps to separate sound streams.

A simple design is to use a 2D auditory display space with a number of sound sources placed around the listener (figure 11-29). The auditory display space is subdivided into nominal categories, with each division used to display a separate stock. Listeners can presumably direct their attention to any particular sound stream to track a specific instrument. Auditory distance could also be used to differentiate between the importance of each stock. For example, less important stocks could be positioned further away and thus would be less likely to attract the user's attention (figure 11-29). Considering global spatial auditory structures introduces the concept of grouping similar stocks. For example, instruments from the same market sector, such as banking or manufacturing, could be located in the same regions of space. This would allow the user to attend to an individual stock or to activity in a group of stocks (figure 11-29). This aspect of the design is confirmed by the MS-Guideline, TA-6.8 Sound streams may be grouped or segregated.

Figure 11-29 A design for monitoring multiple instruments using sound.
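The positioning scheme described above can be sketched as follows. The function name, the even angular spacing and the distance range are illustrative assumptions, not part of the case study design.

```cpp
#include <cmath>

// A sound source position in the 2D auditory display space,
// expressed as an azimuth (radians, 0 = straight ahead) and a distance.
struct SourcePos { double azimuth; double distance; };

// Place `count` instruments evenly around the listener (SA-1, SA-2.3).
// `importance` in [0,1] maps to distance: less important stocks sit
// further away and attract less attention (SA-6).
SourcePos placeInstrument(int index, int count, double importance) {
    const double kPi = 3.14159265358979323846;
    double azimuth = (2.0 * kPi * index) / count;     // one angular slot per stock
    double distance = 1.0 + 4.0 * (1.0 - importance); // 1 m (vital) .. 5 m (minor)
    return {azimuth, distance};
}
```

Grouping by market sector would follow the same pattern, with each sector allocated a contiguous range of azimuths rather than one slot per stock.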

Figure 11-30 A design for monitoring bids, asks and trades using the BidAsk Landscape. (The sounds of individual bids, asks and trades are located in space according to price.)


Applying this conceptual design to the three existing designs suggests using the 3D auditory display space to display specific bids, asks and trades in the BidAsk Landscape. Currently the BidAsk Landscape uses the 3D visual display space to display the accumulated number of bids, asks and trades at each price point. The individual bids, asks and trades could be displayed from positions relative to price (figure 11-30).

11.5.3 Consider Spatial Haptic Metaphors (STEP 3.1.3)

The last two steps have considered the use of spatial visual and spatial auditory metaphors. The focus in this step is on developing appropriate spatial haptic metaphors for specific tasks (table 11-11). However, during this early iteration of the display mapping step this part of the MS-Process is not critical. Rather, some simple ideas that explore this part of the design space are considered. During later iterations through the process this step of the display mapping is revisited.

The two major inputs to this design come from the guidelines, SH-1 Use Spatial Haptic Metaphors to represent local spatial structures and SH-3 Haptics is useful for displaying constraints. This design is based around the task, TASK 4.2: To identify support and resistance levels. Support and resistance are constraints that occur in a price graph, forming a barrier to price movements. Support is a level at which buyers overcome selling pressure and resistance is a level at which sellers overcome buying pressure. Market participants remember these levels, which causes them to repeat behaviour at these prices. The strength of support and resistance depends on the number of times the level is tested, the volume transacted there, how long ago it was formed, and whether the level is a round number. Support and resistance are also useful concepts for fixing stop-loss and take-profit levels. Support and resistance levels are of secondary importance to the price data and could be implemented as local haptic structures or constraints (figure 11-31). This is supported by the MS-Guideline, SH-3 Haptics is useful for displaying constraints in the data. The user would feel these constraints while studying the visual cues of the bar chart. Haptic support and resistance levels could be implemented in either a traditional 2D Bar Chart or on the newly proposed 3D Bar Chart.
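One way such a haptic constraint could be realised is as a one-dimensional spring force around each support or resistance level. The spring model and the range and stiffness parameters below are illustrative assumptions, not part of the case study implementation.

```cpp
#include <cmath>
#include <vector>

// Haptic constraint along the price axis: each support/resistance level
// acts like a stiff band that pushes the probe back as it nears the
// level, so the user feels a barrier at these prices (SH-3).
double constraintForce(double probePrice,
                       const std::vector<double>& levels,
                       double range, double stiffness) {
    double force = 0.0;
    for (double level : levels) {
        double d = probePrice - level;
        if (std::fabs(d) < range) {
            // Push the probe away from the level, harder the closer it gets.
            double penetration = range - std::fabs(d);
            force += (d >= 0 ? 1.0 : -1.0) * stiffness * penetration;
        }
    }
    return force;
}
```

A stronger level (more tests, higher volume) could simply be given a larger stiffness.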

The use of haptic constraints could also be applicable for displaying Bollinger Bands around a moving average indicator. These bands are calculated as standard deviations of the average price. It is unusual for price to exceed the limits or constraints of these bands. The analyst may then utilise these bands when trying to set trading stops. It is possible to implement a force constraint field, so that the user is constrained to the moving average line. The user must employ more force to move further from the moving average line and is constrained between the barriers formed by the Bollinger Bands (figure 11-32). This design is applicable in a 2D chart but could also be extended to 3D by developing Bollinger Surfaces. This would allow these haptic constraints to be integrated into the Moving Average Surface (figure 11-32).
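The bands themselves follow the usual Bollinger definition: a simple moving average plus or minus k standard deviations. A minimal sketch, assuming at least n prices are available; the struct and function names are illustrative.

```cpp
#include <cmath>
#include <vector>

// Bollinger Bands over the most recent n prices: the moving average
// plus/minus k standard deviations. Price rarely exceeds these bands,
// so they are natural candidates for haptic force constraints.
struct Bands { double mid, upper, lower; };

Bands bollinger(const std::vector<double>& prices, std::size_t n, double k) {
    std::size_t start = prices.size() - n;   // assumes prices.size() >= n
    double sum = 0.0, sumSq = 0.0;
    for (std::size_t i = start; i < prices.size(); ++i) {
        sum += prices[i];
        sumSq += prices[i] * prices[i];
    }
    double mean = sum / n;
    double sd = std::sqrt(sumSq / n - mean * mean);   // population std deviation
    return {mean, mean + k * sd, mean - k * sd};
}
```

The haptic force field would then attract the probe towards `mid` and resist movement past `upper` and `lower`, as described above.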


Figure 11-31 A design incorporating haptic support and resistance. (Two price-time charts: S marks support levels and R marks resistance levels; support and resistance are structures or constraints that occur in a price graph.)

Figure 11-32 A design incorporating Bollinger Bands as force field constraints. (One panel shows a moving average with Bollinger Bands implemented as force constraints: the user must employ greater force to move away from the moving average line and is constrained within the bands. The other panel shows the Moving Average Surface with 3D Bollinger Surfaces as haptic constraints.)

11.5.4 Consider Direct Visual Metaphors (STEP 3.2.1)

Having considered the design space for spatial metaphors, the next step of the display mapping is to consider the direct metaphors for each sense. The focus in this particular step is to explore how direct visual metaphors can be used in specific tasks (table 11-11). During this first iteration of the display mapping step this part of the MS-Process is not critical. The final use of direct visual metaphors is influenced by the selected spatial visual metaphor. To explore the concept, a few possible direct visual metaphors are suggested for the previously designed 3D Bar Chart, Moving Average Surface and BidAsk Landscape. During later iterations through the process this step of the display mapping is revisited.

The first guideline to be considered is, DV-1 Use direct visual metaphors as the first choice for displaying categorical data. Most of the data used by the analyst is quantitative and can be categorised. Thus it is appropriate for display with direct visual metaphors. Some of the lower priority data, such as volume or momentum, could be converted to ordered categories and then displayed using a direct visual metaphor such as colour intensity (figure 11-33). While it is important for traders to know accurate values of price, only approximate values of volume are required for most tasks. Volume information could be mapped to colour intensity and then overlaid on the Moving Average Surface. The 3D Bar Chart already displays volume spatially; however, an alternative indicator like momentum could be displayed by using colour intensity (figure 11-33). The important MS-Guidelines relevant to these mappings are: DV-4 Colour does not convey exact quantitative values, DV-5 For ordinal data use ordered categories of data and DV-15 Users can distinguish about 100 steps of colour intensity.
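A minimal sketch of such a mapping, quantising volume into ordered categories and spacing them evenly over the intensity range. The function name, step count and even spacing are assumptions for illustration.

```cpp
#include <algorithm>

// Map a share volume onto one of `steps` ordered intensity levels
// (DV-5). Exact values are not conveyed (DV-4); the reader only needs
// the relative order. Assumes steps >= 2 and maxVolume > 0.
double volumeToIntensity(double volume, double maxVolume, int steps) {
    double t = std::min(1.0, std::max(0.0, volume / maxVolume));
    int category = std::min(steps - 1, static_cast<int>(t * steps));
    // Evenly spaced intensities in [0,1], darkest for the lowest category.
    return static_cast<double>(category) / (steps - 1);
}
```

The same quantisation would apply to momentum on the 3D Bar Chart, with the resulting intensity modulating each bar's colour.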

Apart from colour intensity, it is also possible to consider mapping ordinal attributes to saturation. The choice of either intensity or saturation in the display needs to consider the MS-Guidelines, DV-15.2 Colour intensity is more effective than saturation for order and DV-14 Users can distinguish about 5 steps of saturation. The MS-Guideline, DV-16 Texture can be used to represent ordinal categories suggests the possibility of using texture to represent ordinal attributes.

Figure 11-33 Incorporating colour intensity into the display of the 3D Bar Chart and the Moving Average Surface. (The 3D Bar Chart, with time, price and volume axes, shows momentum mapped to colour intensity on each bar.)

Figure 11-34 Using colour to distinguish nominal data categories in the 3D Bar Chart and the BidAsk Landscape. (The 3D Bar Chart adopts categories from Candlestick charts: light bars where the closing price is above the opening price, dark bars where the closing price is below the opening price.)

The MS-Guideline, DV-3 Colour reduces complexity by differentiating structures, can be considered in the BidAsk Landscape. Colour can be used to help distinguish between the three surfaces generated from bids, asks and trades (figure 11-34). These three surfaces represent three nominal categories and so the MS-Guideline, DV-6 For nominal data use distinctly different colour categories applies. In this case yellow, green and red colours are used to distinguish the bids, asks and trades.

Candlestick charts also display nominal categories. Dark and light candles are used to distinguish between days where closing price is either higher or lower than the opening price. This type of direct visual metaphor could also be used in the 3D Bar Chart (figure 11-34).
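This nominal mapping is trivial to express. The colour names below are placeholders for whatever palette the display uses, and treating an unchanged price as dark is an assumption (the figure only specifies the strictly higher and strictly lower cases).

```cpp
#include <string>

// Nominal colour categories borrowed from Candlestick charts (DV-6):
// a light bar when the close exceeds the open, a dark bar otherwise.
std::string barColour(double open, double close) {
    return close > open ? "light" : "dark";
}
```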


11.5.5 Consider Direct Auditory Metaphors (STEP 3.2.2)

The next step of the MS-Process considers possible designs for direct auditory metaphors. During this first iteration of the display mapping step this part of the MS-Process is not critical. At this stage, this step continues the exploration of the design space.

Direct auditory metaphors can be considered as alternatives to direct visual metaphors. In the previous step of the process (section 11.5.4) the design suggested mapping share volume to colour. As an alternative, this mapping could use a direct auditory property such as pitch. The MS-Guideline, DA-1 Direct auditory metaphors are the second choice for displaying categories recommends that sound be used as a second choice to direct visual properties. However, the decision for the designer is not clear cut. For example, in this case the choice of a direct auditory property could be offset by the guideline DA-1.3 Sound does not add to visual clutter. This implies that if the visual display is cluttered, then using sound may be preferable to colour.

In deciding what properties of sound to use in the display, the guideline DA-1.1 Pitch and loudness can represent ordinal categories is useful. So for example, ordered categories of share volume could be displayed with sounds of different loudness. This is captured in the guideline DA-4 Loudness is an ordered property. However, using loudness may not be advisable as suggested by the guidelines, DA-4.6 Loudness may be under the user's control and DA-4.4.1 Loud sounds can be annoying. The guideline, DA-5 Pitch is an ordered property suggests that even ordered quantitative data such as price could be displayed using pitch (figure 11-35). Considering this option, the guideline DA-3.2 Sounds have natural associations (Higher pitch is more, lower pitch is less) would suggest that higher pitch be associated with higher prices to create an intuitive display. For example, in the BidAsk Landscape the price of individual bids, asks and trades could be displayed using pitch.
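A sketch of such a price-to-pitch mapping. The linear scale and the 220 to 880 Hz band are illustrative assumptions; a logarithmic (musical) scale would satisfy DA-3.2 equally well.

```cpp
#include <algorithm>

// Map price onto pitch so that higher prices sound higher (DA-3.2,
// DA-5). Prices outside [minPrice, maxPrice] are clamped to the band.
double priceToFrequency(double price, double minPrice, double maxPrice,
                        double minHz = 220.0, double maxHz = 880.0) {
    double t = (price - minPrice) / (maxPrice - minPrice);
    t = std::min(1.0, std::max(0.0, t));
    return minHz + t * (maxHz - minHz);
}
```

In the BidAsk Landscape each new bid, ask or trade event would sound at the frequency returned for its price.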

Figure 11-35 Using timbre and pitch in the BidAsk Landscape. (The timbre of different musical instruments is used to distinguish between bids, asks and trades; the pitch of each sound displays its price value, with low pitch for low prices and high pitch for high prices.)

The bids, asks and trades being displayed in the BidAsk Landscape represent three different categories. Since these are unordered categories, timbre could be used to distinguish between the sound sources (figure 11-35). This is suggested by the guideline DA-1.2 Timbre can represent nominal categories. The guideline, DA-6.3 It is difficult to perceive the timbre of a short sound provides a caution about using sounds of short duration if timbre is to be interpreted. The complex nature of the design process is well illustrated by even this simple use of pitch and timbre proposed for the BidAsk Landscape (figure 11-35). The guidelines, DA-2 The properties of sound are not orthogonal and DA-5.2 Pitch depends on timbre both caution against using more than one direct auditory metaphor in the design. This does not mean that the design is invalid; it is just that these aspects will need to be closely considered during evaluation.

11.5.6 Consider Direct Haptic Metaphors (STEP 3.2.3)

The last two steps have considered the use of direct visual metaphors and direct auditory metaphors. The focus in this step is on considering possible direct haptic metaphors in the application domain. Once again, during this early iteration of the display mapping step, this part of the MS-Process is not critical. Rather, some simple ideas that explore this part of the design space are considered. During later iterations through the process this step of the display mapping is revisited.

Figure 11-36 Using texture to display volume categories in the Moving Average Surface.

Direct haptic metaphors can be considered in place of both direct visual metaphors and direct auditory metaphors. The use of direct haptic metaphors is tempered by the guideline DH-1 Direct haptic metaphors are the third choice for displaying categories. As with direct auditory properties, it is best to map lower priority data, such as share volume or momentum, to any direct haptic property (figure 11-36). The other caution is provided by the guideline DH-2.1 Use large differences to display categories.

A number of choices for an appropriate direct haptic property are possible. This is indicated by the guidelines, DH-3 Force is an ordinal property, DH-4 Haptic surface texture is an ordinal property, DH-5 Compliance is an ordinal property, DH-8 Weight and inertia are ordinal properties and DH-9 Vibration is an ordinal property.


11.5.7 Consider Temporal Visual Metaphors (STEP 3.3.1)

At this stage of the case study the design space for spatial metaphors and direct metaphors has been considered. The next step of the display mapping is to consider the temporal metaphors for each sense. The focus in this particular step is to explore how temporal visual metaphors can be used in specific tasks.

In the data characterisation step of the MS-Process, many of the data attributes are identified as temporal in nature. This suggests that temporal metaphors may be an appropriate choice in the design. The guideline, TV-1 Use temporal visual metaphors to understand change suggests that changes to stock market data could be displayed by changing either the direct visual properties or visual spatial structures of the display.

The identification of temporal patterns is key for the task, TASK 2: To develop trading strategies over different time frames. This is particularly important in the case of depth of market data where the domain expert is searching for very short term trading patterns. Depth of market data is traditionally associated with real time trading. The surfaces in the BidAsk Landscape could be designed to evolve over time in response to data events.

11.5.8 Consider Temporal Auditory Metaphors (STEP 3.3.2)

The next step of the MS-Process considers possible designs for temporal auditory metaphors. During later iterations of the display mapping step this part of the MS-Process will be considered in more detail. However, at this stage, this step continues the exploration of the multi-sensory design space.

The data characterisation identified the source of stock market data as temporal. The guideline, MST-1.2 Hearing emphasises temporal qualities suggests that temporal auditory metaphors may be a component of the final multi-sensory display. The guideline, TA-1 Use temporal auditory metaphors to display time series data also supports the use of temporal auditory metaphors for stock data.

Finding short-term temporal patterns is one of the key drivers for examining depth of market data. This is in relation to the task, TASK 2.3: To develop discretionary (intra-day) trading strategies. The guideline, TA-1.1 Sound reveals short-term patterns suggests the use of auditory events to display this data. This could be as simple as augmenting the BidAsk Landscape with auditory events designed to signal new bids, asks and trades.

Another task that the trader performs is TASK 5: To monitor changes in the price of a financial instrument. One benefit of using sound to monitor data is that large time periods can be compressed as suggested by the guideline TA-2 Auditory events can be compressed in time. This could allow the analyst to review long periods of data in shorter time frames. For example, overnight markets could be reviewed quickly before the commencement of local day trading.
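Such time compression amounts to rescaling event timestamps. A minimal sketch, where the compression factor and function name are illustrative assumptions:

```cpp
// Compress a long span of market time into a short playback (TA-2).
// `compression` is the number of market seconds replayed per playback
// second, so each event's playback time is its market offset rescaled.
double playbackTime(double eventTime, double sessionStart,
                    double compression) {
    return (eventTime - sessionStart) / compression;
}
```

At a compression of 160 market seconds per playback second, an eight-hour overnight session (28 800 s) would replay in 180 s, letting the analyst review it before local day trading commences.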


11.5.9 Consider Temporal Haptic Metaphors (STEP 3.3.3)

The last two steps have considered the use of temporal visual metaphors and temporal auditory metaphors. The focus in this step is on considering possible temporal haptic metaphors in the application domain. Once again, during this early iteration of the display mapping step, this part of the MS-Process is not critical. Rather, some simple ideas that explore this part of the design space are considered.

The guideline, MST-1.3 Haptics emphasises movement suggests that temporal haptic metaphors may also be useful for displaying the temporal data that is characteristic of the stock market. The guideline, TH-1 Use temporal haptic metaphors to display time series data further emphasises that haptic events may be appropriate for displaying stock data. For example, price momentum is a lower priority data attribute and could be displayed by mapping the magnitude and direction of price momentum to a force vector. As momentum changed over time the user would feel changes in the displayed forces (figure 11-37).
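The momentum-to-force mapping could be sketched as below. The n-period momentum definition and the gain parameter are illustrative assumptions; the gain would need tuning so that the output stays within the haptic device's safe force range.

```cpp
#include <vector>

// Price momentum over `n` periods mapped to a signed force (TH-1).
// `gain` converts price units to force units. Assumes prices.size() > n.
double momentumForce(const std::vector<double>& prices,
                     std::size_t n, double gain) {
    double momentum = prices.back() - prices[prices.size() - 1 - n];
    return gain * momentum;   // positive = push in the "rising" direction
}
```

Recomputing this force as each new price arrives produces the evolving force the user feels in figure 11-37.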

Figure 11-37 Using force that changes over time to display price momentum. (One panel shows a price-time bar chart with a visual momentum indicator; the other shows momentum displayed using a force that evolves over time.)

11.6 Prototyping (STEP 4)

The next step of the MS-Process is prototyping the design solution (table 11-15). This step of the process is also iterative, so it will be revisited a number of times during this case study.

At this stage of the case study, the task analysis step, the data characterisation step and the first iteration of the display mapping step of the MS-Process have been completed. The task analysis and data characterisation steps provide the designer with a detailed knowledge of the application domain. The first iteration of the display mapping allows the designer to explore design alternatives from the full multi-sensory design space. However, the primary focus during this first iteration of the display mapping step is to identify appropriate spatial visual metaphors for tasks in the application domain.

The display mapping step has described three spatial visual metaphors: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape. These design alternatives are now implemented in appropriate Virtual Environments.


The aim of this step is to provide more information about the most appropriate direction for future iterations of the design.

STEP 4. Prototyping.
  Case study: Implement and evaluate the alternative designs: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.
  Section: 11.6

STEP 4.1 Consider software platform.
  Case study: Determine any limitations of the software platform used: Avango [Tramberend 1999a].
  Section: 11.6.1

STEP 4.2 Consider hardware platform.
  Case study: Determine any limitations of the hardware platforms used: the i-CONE™ [Barco 2001] and the L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995].
  Section: 11.6.2

STEP 4.3 Implement current design.
  Case study: Implement the 3D Bar Chart and Moving Average Surface on the L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995]. Implement the BidAsk Landscape on the i-CONE™ [Barco 2001].
  Section: 11.6.3

Table 11-15 The first iteration of the prototyping step implements the design alternatives.

11.6.1 Consider Software Platform (STEP 4.1)

The software is written in C++ [Stroustrup 1986]. The software is designed to read in logged data from the stock market (figure 11-38). The display of actual stock market data is considered by the domain expert to be critical for the evaluation phase. The visual models are generated in either Inventor [Wernecke 1994] or VRML [Ames, Nadeau et al. 1997] format. This allows for easy integration into most Virtual Environments.

(Figure 11-38 shows the data flow: online and offline stock trading data are processed by C++ code into derived indicators and 3D visual models in Inventor format; C++ application code then runs as a real-time application within the Avango framework.)


Figure 11-38 Overview of the software platform.

The Virtual Environments used for this phase of implementation and evaluation were located at the German National Research Center for Information Technology (GMD). These environments all operate under a high-level software framework called Avango [Tramberend 1999a], which allows for rapid prototyping of applications in a range of Virtual Environments. Many of the complex issues of real-time hardware control are already catered for within this framework, including stereoscopic display, head-tracking of the user viewpoint and the management of input through a range of interaction tools such as wands and 3D mice. The most complex software is required for converting the stock data into 3D geometry models. This requires a good understanding of the Inventor and VRML file description languages.

11.6.2 Consider Hardware Platform (STEP 4.2)

The case study used two different Virtual Environments during the prototyping step. The L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995] was used for the 3D Bar Chart and the Moving Average Surface. The i-CONE™ [Barco 2001] was used to prototype the BidAsk Landscape.

The L-Shaped Responsive Workbench is a dual-sided desk display which allows two users to interact with a 3D model displayed above the desktop (figure 11-39). The Workbench supports stereo views with single-user head-tracking. Because of its two-sided configuration the Workbench allows models with considerable depth to be displayed (figure 11-39). Because two projectors are used to generate the display, resolution is good. The paradigm provided by the Workbench is one where the user manipulates and examines objects above a table surface. The hardware supports a direct interaction interface through a pointer device with a single click button. This interface provides for a series of simple direct interactions such as selecting, translating, rotating and zooming geometric models.

The i-CONE™ [Barco 2001] provides a large, curved, wall-sized display designed from a conic section. The BidAsk Landscape was chosen for this environment because the large spatial display provides better immersion within the model (figure 11-40). Hence the users would feel surrounded by the landscape of bids and asks.

The i-CONE™ supports a high-resolution display through four projectors, and head-tracked, active stereo glasses provide the user with a 3D display (figure 11-40). The interaction device, a CUBE, not only allows for intuitive selection, scaling and movement of models, but also supports the use of cutting planes through the x, y and z-axes. This feature allows the user to control time along the z-axis of the BidAsk Landscape.


Figure 11-39 The L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995].

Figure 11-40 The configuration of the i-CONE™ [Barco 2001]. (Seen from the top, the cone-shaped display uses four projectors to generate the display; its large size allows the user to feel surrounded and immersed within the model.)

11.6.3 Implement Current Design (STEP 4.3)

Simple versions of the 3D Bar Chart (figure 11-41) and Moving Average Surface (figure 11-42) are implemented in the L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995]. The BidAsk Landscape is implemented in the i-CONE™ [Barco 2001] (figure 11-43). No particular problems are noted during the implementation step.

Figure 11-41 The 3D Bar Chart in the L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995]. Courtesy of: Fraunhofer Institute for Media Communication, Germany.


Figure 11-42 The Moving Average Surface in the L-Shaped Responsive Workbench [Krüger, Bohn et al. 1995]. Courtesy of: Fraunhofer Institute for Media Communication, Germany.

Figure 11-43 The BidAsk Landscape in the i-CONE™ [Barco 2001]. Courtesy of: Fraunhofer Institute for Media Communication, Germany.

STEP 5. Evaluation.
  Case study: Perform expert heuristic and formative evaluation of the alternative designs.
  Section: 11.7

STEP 5.1 Expert heuristic evaluation (STEP 5.1.1 System demonstration; STEP 5.1.2 Feedback interview; STEP 5.1.3 Capture problems and suggestions).
  Case study: Perform an expert heuristic evaluation of the alternative designs: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.
  Sections: 11.7.1, 11.7.1.1, 11.7.1.2, 11.7.1.3

STEP 5.2 Formative evaluation (STEP 5.2.1 Review system using MS-Guidelines; STEP 5.2.2 Capture problems and suggestions).
  Case study: Perform a formative evaluation of the alternative designs: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.
  Sections: 11.7.2, 11.7.2.1, 11.7.2.2, 11.7.2.3

STEP 5.3 Summative evaluation (STEP 5.3.1 Design experiment; STEP 5.3.2 Conduct experiment; STEP 5.3.3 Analyse experimental results; STEP 5.3.4 Capture problems and suggestions).
  Case study: Not performed during this iteration of the evaluation step.
  Section: 11.7.3

Table 11-16 The first iteration through the evaluation step evaluates the design alternatives.


11.7 Evaluation (STEP 5)

At this stage of the case study three prototypes are implemented. The next part of the MS-Process is the evaluation step. The evaluation step is also one of the iterative steps of the MS-Process and so this step is revisited a number of times during the case study.

The evaluation step caters for a range of different types of evaluation (table 11-16). Depending on the stage of design, the designer can choose to use one or more of the different evaluations. For example, during the first iteration of the case study only an expert heuristic evaluation and a formative evaluation were performed. The more formal summative evaluations are time consuming and target specific performance questions about the display. The design is not yet refined enough to benefit from such a specific evaluation.

11.7.1 Expert Heuristic Evaluation (STEP 5.1)

Although the task analysis step has helped to refine the user's requirements for the design, the requirements are still vague. The basic requirement is to produce a display that helps a trader to find patterns in stock market data. At this early stage of the case study the domain expert can help to clarify these requirements by checking the prototyped designs. Therefore the three prototypes, the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape, are all evaluated using an expert heuristic evaluation. For each prototype the domain expert uses the system for about 20 minutes. During this demonstration time the domain expert is observed and directed if required.

3D Bar Chart
  Use as a framework for the multi-sensory design. (High)
  Add multi-sensory features, e.g. haptic attributes. (High)
  Map some data to the colour of the price bars. (Medium)
  Include open and close price on price bars. (Low)

Moving Average Surface
  Extend the price bars to cover the full surface width. (High)
  Remove the redundant half of the surface. (High)
  Include open price on price bars. (Low)

BidAsk Landscape
  Use shorter time periods to construct the surfaces. (High)
  Emphasize the data close to the trade. (High)
  Check data or algorithm; the landscape should undulate. (High)
  Place the trade surface below the bids and asks. (Medium)

Table 11-17 Recommendations from the expert heuristic evaluation of the three prototypes.

After using each prototype the domain expert was informally interviewed to capture feedback. The interview was targeted to capture information about the usefulness of the three applications for the task of finding useful patterns in stock market data. The feedback is summarised in table 11-17. The purpose of the heuristic review is to refine the application towards a specific task.


11.7.1.1 Heuristic Evaluation of the 3D Bar Chart

The domain expert noted that the obvious correlation between volume and price data was simple to observe by rotating the 3D Bar Chart. The domain expert immediately remarked that the large increase in volume that can be seen towards the end of the data is indicative of the high volumes of trading that occur when futures contracts near the end of their contract date. The domain expert also noted that the bar chart is slightly primitive. In particular, the price bars did not show open and close price. Also, the price bars were simply coloured red. The domain expert suggested that perhaps additional information could be mapped to the colour of the price bars. However, the domain expert considered these issues to be minor and agreed that these features are not critical at this early stage of development. The domain expert commented that while this is an interesting application, it was unlikely to lead to new patterns being discovered in the data. The reason for this was simple: the 3D Bar Chart only offers a small change from traditional techniques where volume and price are graphed on a 2D chart. Finally, the domain expert stated that he would like to see some multi-sensory features added to the model before dismissing its potential.

11.7.1.2 Heuristic Evaluation of the Moving Average Surface

The domain expert found the novel view of multiple moving averages interesting, although he seldom uses moving averages himself. These indicators tend to lag behind the market and the domain expert believes that this is not particularly useful for short-term trading, which is his focus. For longer-term trading the display may be useful in helping to select the number of periods used in the moving average calculation. Typically a trader would choose the number of periods based on experience. However, this is not straightforward and depends on the particular stock and the state of the market. It may be advisable to choose the number of periods in an interactive fashion by using the Moving Average Surface. The domain expert commented that the reflection of the surface about the price bars is not particularly useful. During the review process the domain expert noted that the position of the price bars above or below the rises and falls of the Moving Average Surface may provide some indication of the state of trends. For example, if the price bars are below the surface during an up trend then it may signal the start of a down trend. This pattern is noted for further exploration. To allow the trader to confirm this pattern requires a change to the design of the visual models so that the area above or below the price bars is emphasised.

11.7.1.3 Heuristic Evaluation of the BidAsk Landscape The domain expert was immediately encouraged by the potential of the BidAsk Landscape. During the demonstration he tested the display by positioning the cutting plane on the model while trying to predict the market direction. For example, when the


CASE STUDY Chapter 11 – The Stock Market 327

cutting plane is positioned at one place in time, he attempted to guess the direction that the trade price would move. Moving the cutting plane a little revealed the next part of the model and allowed the domain expert to confirm his prediction. Despite the positive review, a number of issues are noted. The model was built from real stock data and represented one day of trading. It was constructed from a series of time strips, where each time strip represented 10 minutes of trading data. However, the domain expert would prefer the model to emphasise very short periods of trading. For example, the patterns that he wants to trade are variations that occur from minute to minute. Accentuating these short-term patterns requires that the time strips be calculated more frequently. The domain expert suggested that all bids, asks and trades be displayed. He commented that the middle of the display is the most useful part of the data. This is the part of the model in the vicinity of the trades; it is a valley between the bid surface and the ask surface and the most dynamic part of the display. Peripheral information, such as very high asks and very low bids, is not likely to have a significant impact on short-term price movements. It would be good to emphasise the data close to the trades. In the evaluated model the trade surface was positioned above the bids and asks, so the trade surface sometimes occluded them. The domain expert noted that the trade data was of lower importance than the bid and ask data and recommended that the trade surface be displayed below the bid and ask surfaces. Finally, the demonstration revealed a potential problem with either the algorithm used to construct the model or the data itself. The domain expert noticed that the model always grew as time evolved. This was counter to the expert's intuition, as it would correspond to the situation where the number of bids and asks is continually increasing during the trading period.
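The time-strip aggregation behind the landscape can be sketched as follows. The field names and the 60-second window are invented for illustration (the evaluated model used 10-minute strips): each strip keeps the highest bid and lowest ask seen in its window, and the valley the expert focused on lies between these two surfaces.

```python
# Hypothetical sketch of the "time strip" aggregation; not thesis code.

def time_strips(orders, window_secs):
    # orders: list of (timestamp_secs, side, price); side is 'bid' or 'ask'.
    strips = {}
    for t, side, price in orders:
        key = int(t // window_secs)
        strip = strips.setdefault(key, {"best_bid": None, "best_ask": None})
        if side == "bid":
            if strip["best_bid"] is None or price > strip["best_bid"]:
                strip["best_bid"] = price
        else:
            if strip["best_ask"] is None or price < strip["best_ask"]:
                strip["best_ask"] = price
    return [strips[k] for k in sorted(strips)]

orders = [
    (5, "bid", 9.95), (20, "ask", 10.05), (30, "bid", 9.97),
    (70, "bid", 9.99), (80, "ask", 10.02),
]
strips = time_strips(orders, 60)  # 60 s strips; the expert asked for
print(strips)                     # much shorter windows than 10 minutes
```

Shortening `window_secs` is all that is needed to accentuate the minute-to-minute patterns the expert wants to trade.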

11.7.2 Formative Evaluation (STEP 5.2) Following the expert heuristic evaluation, the next possible step is to perform a formative evaluation. The formative evaluation step calls for a review of the display in terms of the MS-Guidelines. During the early iterations of design these subjective assessments about usability can be made quickly, allowing recommendations to be captured for the next iteration of the process. The 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape are all evaluated using a formative evaluation. The recommendations are summarised in table 11-18.

A number of the MS-Guidelines are applicable to all three of the models. The primary aim of the first iteration through the process is to develop appropriate spatial visual metaphors for the multi-sensory display. The guideline, MST-2 Use the Visual Spatial Metaphor as a framework for the display, captures this aim. The 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape all provide useful frameworks, although the BidAsk Landscape seems the most promising.

Some of the MS-Guidelines focus on the technology of Virtual Environments. The guideline, SV-4.3.2.1 Virtual environments provide stereoscopic cues, is checked. So far users have reported no problems with the stereoscopic display, although exposure times have usually been less than 20 minutes. The guidelines, SV-4.3.2.2 Virtual environments provide movement cues and TV-5.1 Movement provides information about shape, are observed to be critical for interpreting the models. It is only by rotating and moving the models that an accurate understanding of the shape can be gained. It is considered important that this capability is maintained in future developments. Two guidelines, SV-3 Design visual space around the most important data variables and SV-6 If time is an important variable then it should be integrated into the display space, have been applied in all three models.

Spatial Visual Metaphor | Recommendations | Priority

3D Bar Chart
• Use the model as a framework for multi-sensory design. (High)
• Investigate direct haptic metaphors. (High)
• Display categorical data using colour. (Medium)
• Review eye problems with long exposure times. (Low)
• Incorporate substructure in price bars (open, close price). (Low)

Moving Average Surface
• Use the model as a framework for multi-sensory design. (High)
• Investigate direct haptic metaphors. (High)
• Display categorical data using colour. (Medium)
• Review eye problems with long exposure times. (Low)
• Incorporate substructure in price bars (open, close price). (Low)

BidAsk Landscape
• Use the model as a framework for multi-sensory design. (High)
• Emphasise foveal vision by using spatial distortion at the centre of the display. (High)
• Investigate the extension of this visual metaphor with auditory metaphors. (High)
• Review eye problems with long exposure times. (Low)

Table 11-18 Recommendations from the formative evaluation of the three prototypes.

11.7.2.1 Formative Evaluation of the 3D Bar Chart The guideline, SV-12 Consider providing substructure to objects in space, draws attention to the structure of the price bars, which at this stage do not show opening and closing prices. While not critical at this point, any final design should incorporate this substructure. While evaluating the 3D Bar Chart, it was noted that a further data attribute could be displayed using colour. The general guideline, MST-3 Increase the human-computer bandwidth, recalls the goal of mapping multiple data attributes to the display. The guidelines, DV-1 Use direct visual metaphors as the first choice for displaying categorical data and DV-2 Colour aids in object searching (speed, accuracy, memorability), suggest that colour may be a useful direct visual metaphor.

Of course the situation is not clear cut, as the guidelines GD-2 Simplify the Display and DV-12.1 Minimise the use of colour provide a conflicting view. A solution may be to use either direct auditory metaphors or direct haptic metaphors as an alternative to a direct visual metaphor.


For this case study it is decided to investigate the addition of direct haptic metaphors. This work occurs in the next iteration through the design process and is reported in chapter 12.

11.7.2.2 Formative Evaluation of the Moving Average Surface The guideline, SV-12 Consider providing substructure to objects in space, also highlighted that the price bars require additional structure to display the opening and closing prices. One problematic aspect of the price bars was pinpointed by the guideline, SV-9.1.3 Minimise the number of display dimensions. It was noted that the price bars were designed as 3D objects and yet only conveyed information in one dimension, namely the price range along the y-axis. This part of the design may need to be checked in later iterations. After the expert heuristic evaluation of the Moving Average Surface some design changes were proposed. These changes included extending the price bars across the surface. Once again colour could be added to the surface, or an alternative direct metaphor used to display additional information.

For this case study it is decided to redesign the model and to investigate the addition of direct haptic metaphors. This work occurs in the next iteration through the design process and is reported in chapter 12.

11.7.2.3 Formative Evaluation of the BidAsk Landscape Despite the positive results of the expert heuristic evaluation, a number of recommendations are suggested by reviewing the guidelines. In particular, the guidelines, SV-1.1 Vision has a wide field of view, SV-1.2 Foveal vision is detailed and SV-1.3 Peripheral vision responds to changes, provide a suggestion about how to use the BidAsk Landscape. If a user's viewpoint is positioned in the centre of the landscape, the user's foveal vision can focus on the most interesting detail around the trades. At the same time, the user's peripheral vision can observe the extremes of the landscape, where dramatic changes will still be noticed. The i-CONE™ [Barco 2001] display is well suited to providing the user with a wide field of view.

In the heuristic evaluation the domain expert noted that the data around the centre of the display should be emphasised. A solution to this problem is provided by the guideline, SV-5.1 Distorted spaces allow for context and detail. It is possible to distort the space around the trades and thus allow this more interesting part of the landscape to occupy more of the display space. This would act like a fish-eye view [Furnas 1981] on the space. The domain expert also noted that short-term temporal information is a key component of trading depth of market data. A number of guidelines suggest that sound will be appropriate to assist in this task. These guidelines include, TA-1 Use temporal auditory metaphors to display time series data, TA-1.1 Sound reveals short-term patterns and TA-4 Sound is good for monitoring data.
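The fish-eye distortion idea can be made concrete with a small sketch. The fragment below uses the classic Sarkar-Brown transform g(x) = (d+1)x/(dx+1) in one dimension; it is an illustration of the SV-5.1 guideline, not the thesis implementation.

```python
# Hedged sketch: a 1-D fish-eye distortion that magnifies positions near
# a focus point (the trades at the centre of the landscape) while
# compressing the periphery. Sarkar-Brown transform, illustration only.

def fisheye(x, focus, d=4.0):
    # x and focus lie in [0, 1]; d > 0 controls magnification at the focus.
    span = max(focus, 1.0 - focus)
    offset = (x - focus) / span                    # normalised distance
    g = (d + 1) * abs(offset) / (d * abs(offset) + 1)
    return focus + (1 if offset >= 0 else -1) * g * span

positions = [i / 10 for i in range(11)]
warped = [round(fisheye(p, focus=0.5), 3) for p in positions]
print(warped)  # points near 0.5 are pushed apart; the ends stay fixed
```

With the focus at the trades, detail near the centre occupies more of the display while the extreme bids and asks remain visible in compressed form at the edges.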


For this case study it is decided to investigate the extension of this visual spatial metaphor with temporal auditory metaphors. This work occurs in the next iteration through the design process and is reported in chapter 13.
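As a taste of what a temporal auditory metaphor involves, the fragment below maps a price series to pitches so that short-term movements become a melodic contour. The mapping and its parameters are assumptions for illustration, not the design developed in chapter 13, and no audio library is used.

```python
# Hedged sketch of parameter-mapping sonification: price -> MIDI pitch.

def prices_to_midi(prices, base_note=60, spread=24):
    # Spread the observed price range over `spread` semitones centred on
    # base_note (MIDI 60 = middle C).
    lo, hi = min(prices), max(prices)
    if hi == lo:
        return [base_note] * len(prices)
    return [round(base_note + spread * (p - lo) / (hi - lo) - spread / 2)
            for p in prices]

notes = prices_to_midi([10.0, 10.1, 10.3, 10.2, 10.0])
print(notes)  # rising prices give rising pitches, falls give falls
```

Because hearing resolves rapid sequences well, such a mapping suits the minute-to-minute patterns that guideline TA-1.1 highlights.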

11.7.3 Summative Evaluation (STEP 5.3) In this first iteration through the MS-Process no summative evaluations were performed. The current prototypes are deemed too immature to allocate time for thorough experimental tests. This step of the process will be illustrated in chapter 12 and chapter 13 after a few more iterations through the design mapping and prototyping steps. At that stage more detailed haptic-visual and auditory-visual prototypes will have been developed.

11.8 Conclusion At this stage of the case study a single iteration through the MS-Process has been completed. This first iteration included the steps of task analysis and data characterisation, display mapping, prototyping and evaluation. After the display mapping step, three new concepts for spatial visual metaphors were proposed: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.

The 3D Bar Chart is designed to assist in finding patterns in the trading volumes of a stock. These trading volumes are used to confirm patterns in the price movements. The Moving Average Surface uses a series of moving averages created from price data to create a surface in 3D. This model is designed to assist the analyst in finding patterns for trading trends. The BidAsk Landscape allows for real-time monitoring of bids and asks during market trading and mimics a natural landscape of hills surrounding a river valley. This model is designed to assist the analyst in finding temporal patterns that may occur in depth of market data. The designs for the three spatial visual metaphors were prototyped and evaluated. During the evaluation step of the process a number of recommendations were captured using an expert heuristic evaluation (table 11-17) and a formative evaluation (table 11-18). These recommendations provide input for the next iteration through the display mapping step.

The case study now reports on the next iteration of the design, where the focus is on integrating haptic and auditory components into the display. The 3D Bar Chart and the Moving Average Surface are redesigned as haptic-visual applications (chapter 12). The BidAsk Landscape is redesigned as an auditory-visual application (chapter 13). The appropriate parts of the MS-Process and the corresponding MS-Guidelines are indicated as these displays are developed.

More general discussion in terms of the MS-Process, the MS-Guidelines and the MS-Taxonomy is delayed until chapter 14.


Chapter 12

Haptic-Visual Applications

“Children are born true scientists. They spontaneously experiment and experience and reexperience again. They select, combine, and test, seeking to find order in their experiences - "which is the mostest? which is the leastest?" They smell, taste, bite, and touch-test for hardness, softness, springiness, roughness, smoothness, coldness, warmness: the heft, shake, punch, squeeze, push, crush, rub, and try to pull things apart.” [R. Buckminster Fuller]


CASE STUDY Chapter 12 –Haptic-Visual Applications 333

Chapter 12

Haptic-Visual Applications

12.1 Introduction This chapter continues to describe a case study of the MS-Process applied within the domain of technical analysis. This process is being used to design multi-sensory displays with the intent of finding new patterns in stock market data. The first iteration through the steps of the MS-Process is described in chapter 11. This first iteration can be described as conceptual design. The intent is to explore design options from the full multi-sensory design space. The additional focus is to develop appropriate spatial visual metaphors for domain-related tasks. In this case study the result of the first iteration was three alternative spatial visual metaphors (figure 12-1).

These possible designs were prototyped and evaluated.

Figure 12-1 Three alternative concepts developed during the first iteration of the MS-Process: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.

The last three steps (3, 4, 5) of the MS-Process are iterative (figure 12-2). This chapter describes a number of further iterations through these steps. The focus of these further iterations is to extend the existing visual displays and create haptic-visual displays.


Figure 12-2 The iterative steps of the MS-Process (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation), informed by the MS-Taxonomy and the MS-Guidelines.

The second iteration of the process is described in section 12.2 and concentrates on the design of direct haptic metaphors (table 12-1). A summative evaluation is used to assess three direct properties: haptic surface texture, compliance and inertia. The results from this work are then fed back into the next iteration of the MS-Process.

The third iteration of the process is described in section 12.3 and involves the redesign of the Moving Average Surface (table 12-1). This redesign uses the feedback gathered from previous evaluations. Two haptic-visual versions of the Moving Average Surface are prototyped and evaluated.

The fourth iteration of the process is described in section 12.4 and targets the redesign of the 3D Bar Chart (table 12-1). Once again this redesign uses feedback gathered from previous evaluations. A haptic-visual version of the 3D Bar Chart is prototyped and evaluated.

MS-Process | Case Study | Section

Second iteration
• STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation: Investigate the design of direct haptic metaphors (direct haptic properties: force, haptic surface texture, compliance, weight (inertia), friction, direct haptic shape, vibration). Prototype and evaluate. (Section 12.2)

Third iteration
• STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation: Redesign Moving Average Surface. Incorporate direct haptic metaphors. Prototype and evaluate. (Section 12.3)

Fourth iteration
• STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation: Redesign 3D Bar Chart. Incorporate direct haptic metaphors. Prototype and evaluate. (Section 12.4)

Table 12-1 The second, third and fourth iterations through the MS-Process.

12.2 Designing Direct Haptic Metaphors The next stage in the case study is to revisit the three iterative steps of the MS-Process (table 12-2). For this second iteration the decision was made to focus on the use of direct haptic metaphors. This was an arbitrary decision and the design work could have proceeded, for example, by considering auditory metaphors.

MS-Process | Case Study | Section

• STEP 3 Display Mapping: Consider direct haptic metaphors in the multi-sensory design. (12.2.1)
• STEP 4 Prototyping: Implement prototypes of three direct haptic metaphors: haptic surface texture, compliance and inertia. (12.2.2)
• STEP 5 Evaluation: Summative evaluation of the three direct haptic metaphors. (12.2.3)

Table 12-2 Outline of the case study showing the second iteration through the MS-Process.

12.2.1 Display Mapping for Direct Haptic Metaphors The display mapping step of the MS-Process is revisited with the aim of incorporating direct haptic metaphors into the display (table 12-3). The focus during this iteration is on considering direct haptic metaphors (STEP 3.2.3). Note that this step includes a review of the MS-Guidelines that apply to direct haptic metaphors.

MS-Process | Case Study | Section

• STEP 3.2.3 Consider direct haptic metaphors: Augment the spatial visual metaphor with direct haptic metaphors (Haptic 3D Bar Chart, Haptic Moving Average Surface). (12.2.1)
• STEP 3.2.3.1 Review direct haptic metaphor guidelines: Review guidelines in the context of the existing visual displays. (12.2.1)
• STEP 3.2.3.2 Consider using force.
• STEP 3.2.3.3 Consider using haptic surface texture: Develop display mapping using haptic surface texture. (12.2.2.2)
• STEP 3.2.3.4 Consider using direct haptic shape.
• STEP 3.2.3.5 Consider using compliance: Develop display mapping using compliance. (12.2.2.3)
• STEP 3.2.3.6 Consider using friction.
• STEP 3.2.3.7 Consider using weight: Develop display mapping using inertia. (12.2.2.4)
• STEP 3.2.3.8 Consider using vibration.


Table 12-3 The second iteration through the display mapping step focuses on STEP 3.2.3 Consider direct haptic metaphors.

The first MS-Guideline to be considered is DH-1 Direct haptic metaphors are the third choice for displaying categories. This guideline suggests that only low priority data should be displayed this way. In this task domain that includes data attributes such as share volume and momentum.

The strategy of the extended design is to augment the existing visual spatial structures of the 3D Bar Chart and the Moving Average Surface with a haptic display. In effect the price bars in these models become both visual and haptic structures. This basic model follows the guideline, TH-3.3 Force feedback models should be simple.

Augmenting the price bars with a haptic surface also introduces the guideline, SH-8 Use spatial haptic metaphors to represent local spatial structures. This design uses both a visual and a haptic display of the price bars. There may be an issue with this design, as suggested in the guideline, DH-1.1 The visual model affects the perception of haptic attributes.

Auditory display may also impact on the haptic display, as noted in the guideline, DH-1.2 The auditory model affects the perception of haptic attributes. To avoid this complication no auditory display is considered at this time. The low priority data of volume and momentum are selected as data attributes for the display. This data is first converted to ordinal categories before being mapped to the direct haptic properties of surface texture, compliance and inertia. All three of these properties allow for the display of ordinal data, as captured in the guidelines, DH-4 Haptic surface texture is an ordinal property, DH-6 Compliance is an ordinal property and DH-8 Weight and inertia are ordinal properties.

The choice of these haptic parameters is not totally arbitrary. The three parameters of haptic surface texture (roughness), compliance (firmness) and inertia are well supported by the software platform. These parameters also suggest appropriate metaphors for tasks within the domain of technical analysis. For example, a low volume of trading indicates that the price is only a rough indication of market sentiment. Similarly, a high volume of trading at a particular price indicates that a price level is a firm indication of actual price. If a share has high momentum then its price is easy to move and thus has low inertia. The design maps volume and momentum to ordinal categories. These data categories are then mapped to haptic categories. However, the number of categories needs to be decided. The starting point is provided by the guideline, GP-8 Seven is a magic number. This guideline captures Miller's result that 7 (± 2) categories is a limit imposed by short term memory [Miller 1957]. The design initially uses five categories, as this conservative number allows for larger differences between each haptic category. The importance of this is described in the guideline, DH-2.1 Use large differences to display categories. The next design decision requires the designer to choose the values of the haptic parameters that define the haptic categories. The guideline GP-6 Perceptual responses have thresholds describes the concept of Just Noticeable Difference (JND). The five haptic categories must each differ by at least a JND so that the user can distinguish between them. Appropriate JNDs for haptic surface texture, compliance and inertia are suggested in the MS-Guidelines. This provides a starting point for the mapping, but the final mapping is dependent on the implementation platform. Therefore this design decision requires prototyping.
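The data-to-category mapping just described can be sketched in a few lines of Python (an illustration under assumed value ranges; the actual prototypes used the Reachin API):

```python
# Sketch only: equal-width binning of a continuous attribute (e.g. share
# volume) into five ordinal categories, which are then mapped onto five
# haptic levels. The value range 0..10000 and the friction levels are
# illustrative assumptions, echoing the 0.1..0.9 RoughSurface settings
# used in the prototype.

def to_ordinal(value, lo, hi, n_categories=5):
    # Return a category index 0..n_categories-1 for value in [lo, hi].
    if hi <= lo:
        return 0
    frac = (value - lo) / (hi - lo)
    return min(n_categories - 1, int(frac * n_categories))

volumes = [120, 950, 3200, 7800, 9900]
cats = [to_ordinal(v, lo=0, hi=10000) for v in volumes]
frictions = [round(0.1 + 0.2 * c, 1) for c in cats]  # one level per category
print(cats)       # [0, 0, 1, 3, 4]
print(frictions)  # [0.1, 0.1, 0.3, 0.7, 0.9]
```

Five categories keeps each step of the haptic parameter large, in line with DH-2.1.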

After prototyping a more formal evaluation of the user's ability to distinguish between haptic categories is required. The need for this evaluation is reinforced by the guideline, DH-2.0 Individuals have very different haptic perceptions. Therefore to finalise appropriate display mappings a summative evaluation is carried out with 10 subjects.

12.2.2 Prototyping Direct Haptic Metaphors Prototyping is the next step of the MS-Process and it is now revisited during the second iteration of the process. During this iteration the prototyping step aims to support a design decision about the appropriate JND for the haptic categories to be used in the design (table 12-4). In particular, the three direct haptic properties of haptic surface texture, compliance and inertia are to be assessed.

MS-Process | Case Study | Section

• STEP 4 Prototyping: Develop a simple prototype using five haptic categories of: haptic surface texture, compliance, inertia. (12.2.2)
• STEP 4.1 Consider software platform: Determine any limitations of the software platform used: Reachin API [Reachin 2001]. (12.2.2.1)
• STEP 4.2 Consider hardware platform: Determine any limitations of the hardware platform used: Haptic Workbench [Stevenson, Smith et al. 1999]. (12.2.2.2)
• STEP 4.3 Implement current design: Implement a prototype to allow evaluation of the haptic categories. (12.2.2.3)

Table 12-4 The second iteration of the prototyping step implements the design alternatives.

12.2.2.1 Software Platform The software platform used for these prototypes was the commercial Reachin API developed by ReachIn Technologies, Sweden [Reachin 2001]. This API provides an integrated haptic and visual development environment. It is an object oriented C++ API and is based on the familiar hierarchical scene graph concept, where the scene graph contains nodes that describe a 3D scene. The API is in fact an extension of the VRML scene graph model [Ames, Nadeau et al. 1997] with additional haptic properties.

Implementing haptic properties is relatively simple using the Reachin API. For example, as part of the VRML Appearance node an object can be defined with a SimpleSurface. The surface of the object is then automatically rendered in the haptic display by implementing a surface repulsion algorithm [Reachin 2001]. The three properties of haptic surface texture, compliance and inertia were all implemented using different nodes of the Reachin API.


Haptic surface texture was implemented using the RoughSurface node (figure 12-3). For example, a cube defined to have this property is rendered with a rough surface. This property is implemented by modelling the mean friction on the surface of the object. The range of mean friction values varies from 0.1 for a very smooth surface through to 0.9 for a very rough surface.

The guideline, DH-3.1 Touch is equal to vision for comparing surface smoothness, suggests that this property should be well differentiated by the user. However, the MS-Guidelines provide no guideline for the JND of this property, and it is not clear how the mean friction parameter would relate to a JND. The pragmatic choice is to evenly divide the parameter range into five steps. So to create the five categories, mean friction values of 0.1, 0.3, 0.5, 0.7 and 0.9 were used.

    Shape {
      appearance Appearance {
        surface RoughSurface {
          mean 0.9
          deviation 0.1
          startingFriction 0.3
          stoppingFriction 0.1
          dynamicFriction 0.2
          stiffness 900
          stiffnessT 700
        }
      }
      geometry Box { size .04 .04 .04 }
    }

Figure 12-3 The RoughSurface node defines texture by setting the mean friction. In the evaluation, five randomly arranged cubes are created with different haptic surface textures.

Compliance was implemented using the SimpleSurface node (figure 12-4). For example, a cube defined with this node has a surface of a specified stiffness. To implement surfaces of different compliance the default parameters were held constant and the stiffness value changed. This stiffness value ranges from 30 for a very soft (compliant) surface through to 480 for a very hard (non-compliant) surface.

    Shape {
      appearance Appearance {
        surface SimpleSurface {
          stiffness 60
        }
      }
      geometry Box { size .04 .04 .04 }
    }

Figure 12-4 The SimpleSurface node defines compliance by setting the stiffness parameter (values of 30, 60, 120, 240 and 480). For evaluation, five randomly arranged cubes are created with different compliances.

A number of guidelines provide information about the perception of compliance. In particular the guideline, DH-5.1 The JND of compliance depends on the type of surface (the JND of a rigid surface is about 23-34%), suggests the required threshold for determining the different categories. Unfortunately it is not clear how this model relates to the stiffness parameter being set in the ReachIn API. After some trial and error it
is found that doubling the stiffness parameter seems to produce the required difference in compliance. This is not a totally arbitrary choice, the constant doubling being suggested by the guideline GP-6.2 Stevens' Power Law. So to create the five haptic categories, stiffness values of 30, 60, 120, 240 and 480 are used. The guidelines, DH-5.2 The visual model affects perceived stiffness and DH-6.4 Fast haptic rendering is required for rigid surfaces, are supported by keeping the design models very simple. In the case study the model consists of five white cubes with a haptic compliance. Inertia was implemented using a SpringSlotDynamic node (figure 12-5). This class is derived from the Reachin API SlotDynamic node. Objects constrained by this node's behaviour can only be moved by applying a force. The force required depends on the mass of the object and the speed of movement. Objects are constrained to move in a straight line and return to their starting position when the force is removed. This property is modelled by setting the mass parameter in the SpringSlotDynamic node.
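The doubling rule used for both the stiffness and the mass parameters can be checked numerically. The sketch below (illustration, not thesis code) confirms that geometric spacing gives a constant 100% relative step between neighbouring categories, comfortably above the quoted JNDs (23-34% for rigid-surface compliance, 10-20% for weight).

```python
# Numerical check of the doubling heuristic: geometric parameter spacing
# yields a constant relative step far above the perceptual thresholds.

stiffness = [30 * 2 ** k for k in range(5)]   # 30, 60, 120, 240, 480
masses = [2 * 2 ** k for k in range(5)]       # 2, 4, 8, 16, 32

def relative_steps(values):
    return [(b - a) / a for a, b in zip(values, values[1:])]

print(relative_steps(stiffness))  # [1.0, 1.0, 1.0, 1.0]
print(relative_steps(masses))     # [1.0, 1.0, 1.0, 1.0]
```

By contrast, equal-width spacing (e.g. 30, 142, 255, 367, 480) would give relative steps that shrink towards the top of the range, risking steps below the JND.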

There are few guidelines related to inertia. However, the guideline, DH-8.1 The JND for weight is 10-20%, suggests a starting point for modelling the five haptic categories. After some trial and error it is found that doubling the mass parameter seems to produce the required difference in inertia. A mass value of 2 describes an object that is easy to move, while an object with a mass of 32 is difficult to move. The five inertia categories are defined by using mass values of 2, 4, 8, 16 and 32. The guideline DH-7.4 The visual model affects the perception of weight reinforces the need to use a relatively simple visual model. In the case study the model consists of five white cubes with different haptic inertia.

    SpringSlotDynamic {
      translation -.04 -.03 -.01
      start -0.045 -0.03 -0.015
      end -0.045 -0.06 -0.015
      mass 2
      children [
        Shape {
          appearance Appearance {
            surface SimpleSurface { stiffness 100 }
          }
          geometry Box { size .04 .04 .04 }
        }
      ]
    }

Figure 12-5 The SpringSlotDynamic node defines inertia by setting the mass parameter (values of 2, 4, 8, 16 and 32). For evaluation, five randomly arranged cubes are created with different inertia.

12.2.2.2 Hardware Platform The prototypes were implemented on the Haptic Workbench (figures 1-7, 1-12) [Stevenson, Smith et al. 1999]. The haptic display uses the Phantom force feedback device [Salisbury and Srinivasan 1997]. The platform provides a stereo display by using active stereo glasses. Interaction devices allow for both left and right hand manipulation of the displayed models. The user's dominant hand can touch and select
objects with the Phantom probe. The user's other hand can rotate the displayed object. This configuration allows for rotation, translation and selection of the models.

12.2.2.3 Implementation Three haptic prototypes were implemented. Each prototype uses a different haptic property. In each case these haptic properties augment the simple visual geometry of a white cube. This requirement is emphasised by the guideline, DH-1.1 The visual model affects the perception of haptic attributes. In each prototype five cubes are used to display five haptic categories. The cubes are displayed in random positions.

12.2.3 Evaluation of Direct Haptic Metaphors Evaluation is the next step of the MS-Process and it is now revisited during the second iteration (table 12-5). This second iteration of the process is considering the addition of five haptic categories to the multi-sensory display. These categories can be created using the direct haptic metaphors of haptic surface texture, compliance and inertia. However, it is not clear how to define these categories so that they are separated by a suitable JND. This iteration of the evaluation step aims to gather objective evidence about the appropriate definition of haptic categories. Therefore the most appropriate type of evaluation at this stage is the summative evaluation (STEP 5.3).

MS-Process | Case Study | Section

• STEP 5.3 Summative evaluation: Test the ability of subjects to differentiate five categories of: haptic surface texture, compliance, inertia. (12.2.3)
• STEP 5.3.1 Design experiment: Design an experiment to test the JND of the properties. (12.2.3.1)
• STEP 5.3.2 Conduct experiment: Conduct the experiment with 10 subjects. (12.2.3.2)
• STEP 5.3.3 Analyse experimental results: Subjects can differentiate the haptic categories. (12.2.3.3)
• STEP 5.3.4 Capture problems and suggestions: Haptic categories are suitable for low priority categorical data. (12.2.3.4)

Table 12-5 The second iteration of the MS-Process uses the summative evaluation step.

12.2.3.1 Design Experiment

The experiment is designed so that each subject must rank five cubes in terms of a haptic property. For surface texture the subject must rank each cube from smooth to rough. For compliance the subject must rank each cube according to the compliance of the surface, from soft (compliant) through to hard (non-compliant). For inertia the subject must rank each cube in terms of the effort required to move it, ranging from easy to move through to very hard to move. Each subject is required to perform the ranking task for each of the three haptic properties. The order in which the subjects rank each property is not considered a likely cause of variation in the experiment. Therefore the subjects always experience the haptic properties in the same order (haptic surface texture, compliance, inertia). After the ranking task the subjects are interviewed to gather any relevant feedback.

To guard against the visual model influencing haptic perception, the experiment uses cubes that are visually uniform. The cubes are randomly arranged in space (figure 12-3, figure 12-4, figure 12-5). All five haptic cubes are displayed at the same time and the subject has unlimited time to rank them. Because all cubes are displayed at the same time, the subject may directly compare haptic categories. Note that a more formal experiment to determine JND would display a single haptic category at random and require subjects to identify the category; it would also require a considerable number of trials. This more formal design is considered too time-consuming at this stage, as the intent is simply to guide the design work.

12.2.3.2 Conduct Experiment

Ten subjects participated in the evaluation. The subject group ranged in age from 22 to 40 and consisted of 2 females and 8 males. All subjects had prior experience with the Haptic Workbench [Stevenson, Smith et al. 1999] and were also familiar with the Phantom force feedback device [Salisbury and Srinivasan 1997]. This fact, combined with the simplicity of the task, meant that no subjects required training time.

The ten subjects evaluated each of the three properties of haptic surface texture, compliance and inertia, in that order. There were no time constraints for the ranking tasks; however, subjects typically spent about 2-3 minutes ranking each property. After completing the three ranking tasks, informal feedback was obtained from each subject about the difficulty of the task.

12.2.3.3 Analyse Experimental Results

The results from the summative evaluation were recorded in confusion matrices (table 12-6, 12-7, 12-8). Only one user made an error when ranking the objects using the property of haptic surface texture (table 12-6). This was due to some uncertainty between the fourth and fifth roughest object. No users made errors ranking the objects by compliance (table 12-7). Again, one user in ten made a single error ranking the objects by inertia (table 12-8). This error was between the ranking of the third and fourth heaviest objects.

                                  Surface Texture of the Cube
User Ranking of Surface Texture | Smoothest 1 |  2 |  3 |  4 | Roughest 5
Smoothest 1                     |     10      |    |    |    |
2                               |             | 10 |    |    |
3                               |             |    | 10 |    |
4                               |             |    |    |  9 |     1
Roughest 5                      |             |    |    |  1 |     9

Table 12-6 Confusion matrix showing the result of the surface texture ranking.

                                  Compliance of the Cube
User Ranking of Compliance      | Softest 1 |  2 |  3 |  4 | Hardest 5
Softest 1                       |    10     |    |    |    |
2                               |           | 10 |    |    |
3                               |           |    | 10 |    |
4                               |           |    |    | 10 |
Hardest 5                       |           |    |    |    |    10

Table 12-7 Confusion matrix showing the result of the compliance ranking.

                                  Inertia of the Cube
User Ranking of Inertia         | Easy to move 1 |  2 |  3 |  4 | Hard to move 5
Easy to move 1                  |       10       |    |    |    |
2                               |                | 10 |    |    |
3                               |                |    |  9 |  1 |
4                               |                |    |  1 |  9 |
Hard to move 5                  |                |    |    |    |      10

Table 12-8 Confusion matrix showing the result of the inertia ranking.

The current design proposes mapping lower priority data to five haptic categories. Within the context of the design process these results provide sufficient evidence to suggest that users can distinguish five haptic categories in the display. Furthermore, all three haptic properties provide suitable categories.
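The tallying behind tables 12-6 to 12-8 is a simple confusion-matrix count. A minimal sketch, using the surface texture outcome as example data; the function and trial encoding are illustrative, not the original analysis code:

```python
def confusion_matrix(trials, n=5):
    """Count user rankings against true ranks.

    Each trial is a list where element i is the rank (1..n) the
    subject assigned to the object whose true rank is i+1.  Rows
    index the true rank, columns the rank assigned by the user.
    """
    matrix = [[0] * n for _ in range(n)]
    for ranking in trials:
        for true_rank, user_rank in enumerate(ranking, start=1):
            matrix[true_rank - 1][user_rank - 1] += 1
    return matrix

# Nine subjects ranked the texture cubes correctly; one swapped the
# fourth and fifth roughest cube (cf. table 12-6).
trials = [[1, 2, 3, 4, 5]] * 9 + [[1, 2, 3, 5, 4]]
m = confusion_matrix(trials)
```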

12.2.3.4 Capture Problems and Suggestions

Despite the high success rate, a number of users reported that the tests were not easy and that they felt only reasonably confident about their results. Most users reported that determining the inertia categories was the hardest task. The majority of users felt that determining the texture categories was the easiest task.

12.3 Designing the Haptic Moving Average Surface

This section continues the description of the case study by describing the third iteration through the MS-Process (table 12-9). This iteration focuses on developing the Haptic Moving Average Surface. The initial design of the Moving Average Surface was developed during the first iteration through the process. This spatial visual model was then prototyped and evaluated, and a number of design recommendations were captured (table 12-10). On this third iteration of the process the Moving Average Surface is redesigned using these recommendations. The redesign also includes the integration of the direct haptic properties of surface texture and compliance into the display. The viability of using these properties for displaying five haptic categories was validated in the second iteration of the process.

MS-Process             | Case Study                                                        | Section
Iteration              |                                                                   |
STEP 3 Display Mapping | Redesign Moving Average Surface. Incorporate direct haptic        | 12.3.1
                       | properties: haptic surface texture, compliance.                   |
STEP 4 Prototyping     | Implement two prototypes of the Haptic Moving Average Surface.    | 12.3.2
STEP 5 Evaluation      | Evaluate the two prototypes of the Haptic Moving Average Surface. | 12.3.3

Table 12-9 The third iteration of the MS-Process.

12.3.1 Display Mapping for the Haptic Moving Average Surface

The first task on revisiting the display mapping step of the MS-Process (STEP 3) is to modify the 3D visual model. A number of high priority recommendations were captured after the initial Moving Average Surface prototype was evaluated (table 12-10). These recommendations are now addressed. The redundant half of the surface is removed and the price bars are extended across the surface (figure 12-6). The second stage of the modification is to augment the price bars to display haptic categories using surface texture and compliance.

Spatial Visual Metaphor | Recommendations                                                            | Source    | Priority | MS-Process
Moving Average Surface  | Remove redundant half of the surface.                                      | Heuristic | High     | STEP 3.1.1
                        | Extend price bars to cover the surface.                                    | Heuristic | High     | STEP 3.1.1
                        | Use the Moving Average Surface as a framework for multi-sensory design.    | Formative | High     | STEP 3
                        | Investigate direct haptic metaphors.                                       | Formative | High     | STEP 3.2.3
                        | Display categorical data using colour.                                     | Formative | Medium   | STEP 3.2.1
                        | Include open price on price bars.                                          | Heuristic | Low      | STEP 3.1.1.5.4
                        | Review eye problems with long exposure times.                              | Formative | Low      | STEP 3.1.1.1
                        | Incorporate more substructure in price bars (open, close price).           | Formative | Low      | STEP 3.1.1.5.4

Table 12-10 Recommendations for the Moving Average Surface taken from initial evaluations.


Figure 12-6 Redesign of the visual model for the Haptic Moving Average Surface.

The lower priority data of share volume is mapped to the haptic categories. Two alternative display mappings are considered. One design maps share volume to haptic surface texture (table 12-11). The second design maps share volume to compliance (table 12-12).

The volume of trades can give the trader an indication of how committed or relevant a period of trading is. For example, during a period of low volume trading, price tends to fall but this may not be indicative of real price action. Volume is also important for confirming signals in the price curve. The design using the compliance property (softness/hardness) displays low volumes of trade so they feel soft and high volumes so they feel hard. This is intended to suggest that high volumes of trade are firmer indications of real price action. The surface texture property (smoothness/roughness) is designed so that price bars with low volumes of trade feel rough and price bars with high volumes feel smooth. This is meant to indicate that rough prices are not accurate indicators, or only “rough” indicators, of real price. In both designs the volume of trades is categorised using five categories ordered from low to high volume. These five ordinal categories are then mapped to five haptic categories. The haptic categories are displayed using surface texture (table 12-11) or compliance (table 12-12) depending on the design.

Haptic Moving Average Surface [Surface Texture]

Volume Categories | Surface Texture Categories | Mean Friction Parameter | Haptic Perception
[1, 10)           | 5                          | 0.9                     | Rough
[10, 100)         | 4                          | 0.7                     |
[100, 1000)       | 3                          | 0.5                     |
[1000, 10,000)    | 2                          | 0.3                     |
≥ 10,000          | 1                          | 0.1                     | Smooth

Table 12-11 Volume categories displayed with haptic surface texture.

Haptic Moving Average Surface [Compliance]

Volume Categories | Compliance Categories | Stiffness Parameter | Haptic Perception
[1, 10)           | 1                     | 30                  | Soft
[10, 100)         | 2                     | 60                  |
[100, 1000)       | 3                     | 120                 |
[1000, 10,000)    | 4                     | 240                 |
≥ 10,000          | 5                     | 480                 | Hard

Table 12-12 Volume categories displayed with compliance.
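The decade bins shared by tables 12-11 and 12-12 can be written out directly. The friction and stiffness values are taken from the tables; the function names and the use of a log10 bin index are our own shorthand, not the thesis code:

```python
import math

# Parameter values from tables 12-11 and 12-12.
FRICTION = {5: 0.9, 4: 0.7, 3: 0.5, 2: 0.3, 1: 0.1}   # category 5 rough ... 1 smooth
STIFFNESS = {1: 30, 2: 60, 3: 120, 4: 240, 5: 480}    # category 1 soft ... 5 hard

def volume_decade(volume):
    """Ordinal decade bin for volume >= 1: [1,10) -> 1, ..., >= 10,000 -> 5."""
    return min(int(math.log10(volume)) + 1, 5)

def texture_category(volume):
    """Low volume feels rough (category 5), high volume smooth (category 1)."""
    return 6 - volume_decade(volume)

def compliance_category(volume):
    """Low volume feels soft (category 1), high volume hard (category 5)."""
    return volume_decade(volume)
```

So a bar with 500 trades maps to texture category 3 (friction 0.5) or compliance category 3 (stiffness 120), matching the middle rows of both tables.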

12.3.2 Prototyping the Haptic Moving Average Surface

The software platform used for these prototypes is the Reachin API [Reachin 2001]. The prototype of the Moving Average Surface developed in the first iteration used Inventor models [Wernecke 1994]. The use of the Reachin API requires a modification of the existing software to create a VRML model for the Haptic Moving Average Surface (figure 12-7, figure 12-8).

The price bars in the VRML model are displayed with haptic properties. Two prototypes are implemented. The first displays categories of surface texture implemented using the Reachin RoughSurface node. The second prototype displays compliance categories using the Reachin SimpleSurface node. The haptic categories are specified using the parameters verified in the previous summative evaluation (section 12.2.3).

The hardware platform used for this prototyping step was the Haptic Workbench described in chapter 1 (figure 1-7, 1-12) [Stevenson, Smith et al. 1999]. The implementation used previously logged stock market data from the Australian Stock Exchange.

Figure 12-7 Overview of the software platform: online and offline stock trading data are processed by C++ code into derived indicators; these become 3D visual and haptic models (VRML models), which C++ application code and the Reachin API combine into the real-time application.

Figure 12-8 The Haptic Moving Average Surface.

12.3.3 Evaluation of the Haptic Moving Average Surface

To complete this iteration of the process an evaluation step was carried out. In this instance an expert heuristic evaluation (STEP 5.1) of the two Haptic Moving Average Surface prototypes was performed.

The two prototypes were demonstrated to an expert in the field of technical analysis. The demonstration included some initial familiarisation time with the Haptic Workbench, followed by as much time as the domain expert required to trial each prototype. Feedback was captured both during and immediately following the demonstrations. The feedback of the domain expert is described below.

The mappings are intuitive.

The domain expert felt that indicating rough prices using rough surfaces was a simple and intuitive mapping. He also agreed that mapping firm prices to firm surfaces was easy to understand.

Haptic categories can be interpreted.

The domain expert felt confident that the different haptic categories could be distinguished using both surface texture and compliance. The expert confirmed that for lower priority volume data an accurate judgement is not required. The expert also felt that the number of categories was sufficient to represent the volume data.

Interpreting categories is time consuming.

The domain expert reported that a considerable amount of time was required to interact with a price bar and determine the haptic category. This was particularly true for the surface texture categories. For example, determining these categories required a rubbing action across the surface, sometimes alternating between a gentle grip on the stylus for light rubbing and then a firm grip for applying more forceful rubbing.

It is difficult to compare price bars that are far apart.

The major criticism was that it was difficult to compare the haptic properties of two price bars if they were separated in space. This problem is quickly amplified if the expert tries to compare categories over multiple bars. In general this problem could prevent a trader from finding useful global patterns.


Haptic properties prevent visual clutter.

The domain expert commented that a positive aspect of the haptic display was that extra information was available if required; however, this information did not create additional visual noise.

Visual patterns need to be further investigated.

The domain expert confirmed that the new visual model for the Moving Average Surface suggests that patterns to assist trend tracking may be present in the display. This aspect of the display needs to be further investigated.

12.4 Designing the Haptic 3D Bar Chart

This section continues the description of the case study by describing the fourth iteration through the MS-Process (table 12-13). This iteration focuses on developing the Haptic 3D Bar Chart. The initial design of the 3D Bar Chart was developed during the first iteration through the process (chapter 11). This spatial visual model was then prototyped and evaluated, and a number of design recommendations were captured (table 12-14). On this fourth iteration of the process the 3D Bar Chart is redesigned using these recommendations. The redesign also includes the integration of the direct haptic property of inertia into the display. The viability of using this property for displaying five haptic categories was validated in the second iteration of the process (section 12.2).

MS-Process             | Case Study                                                             | Section
Iteration              |                                                                        |
STEP 3 Display Mapping | Redesign 3D Bar Chart. Incorporate direct haptic property of inertia.  | 12.4.1
STEP 4 Prototyping     | Implement a prototype of the Haptic 3D Bar Chart.                      | 12.4.2
STEP 5 Evaluation      | Evaluate the Haptic 3D Bar Chart.                                      | 12.4.3

Table 12-13 The fourth iteration of the MS-Process.

12.4.1 Display Mapping for the Haptic 3D Bar Chart

The first task on revisiting the display mapping step of the MS-Process (STEP 3) is to modify the 3D visual model. A number of high and medium priority recommendations were captured after the initial 3D Bar Chart prototype was evaluated (table 12-14). These recommendations are now addressed.

Spatial Visual Metaphor | Recommendations                                                    | Evaluation | Priority | MS-Process
3D Bar Chart            | Add multi-sensory features, e.g. haptic attributes.                | Heuristic  | High     | STEP 3.2.3
                        | Use as a framework for the multi-sensory design.                   | Formative  | High     | STEP 3
                        | Investigate direct haptic metaphors.                               | Formative  | High     | STEP 3.2.3
                        | Map some data to the colour of the price bars.                     | Heuristic  | Medium   | STEP 3.2.1
                        | Display categorical data using colour.                             | Formative  | Medium   | STEP 3.2.1
                        | Include open and close price on price bars.                        | Heuristic  | Low      | STEP 3.1.1.5.4
                        | Review eye problems with long exposure times.                      | Formative  | Low      | STEP 3.1.1.1
                        | Incorporate more substructure in price bars (open, close price).   | Formative  | Low      | STEP 3.1.1.5.4

Table 12-14 Recommendations for the 3D Bar Chart taken from initial evaluations.

The first display mapping change was to add a simple categorical display using the direct visual property of colour. Price bars are coloured according to the difference between opening and closing price for each period. A dark red colour is used if the closing price is lower than the opening price and a light blue colour if the closing price is higher than the opening price (figure 12-9). This creates the two nominal categories used in Candlestick charts [Nicholson 1999b].

The second display mapping change was to augment the price bars with haptic categories displayed with inertia. The data mapped to the inertia categories was characterised as low priority. It was obtained from a 7-day momentum calculation, computed by subtracting the closing price of the period seven days previously from the current period's closing price. Momentum values are positive if price increases over the period and negative if the price falls. The momentum values for the data set are divided into 11 ordinal momentum categories: 5 positive momentum categories, 5 negative momentum categories and a single zero momentum category (table 12-15).

Figure 12-9 The redesigned 3D Bar Chart. A traditional Candlestick chart uses black to indicate that the closing price is lower than the opening price and white to indicate that the closing price is higher. The 3D Bar Chart uses the candlestick categories: dark red indicates price has closed lower; light blue indicates price has closed higher.
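The two nominal candlestick categories reduce to a single comparison. A minimal sketch; the RGB triples are assumptions, as the text specifies only "dark red" and "light blue":

```python
# Illustrative colours; the thesis names only "dark red" and "light blue".
DARK_RED = (0.55, 0.0, 0.0)
LIGHT_BLUE = (0.6, 0.8, 1.0)

def bar_colour(open_price, close_price):
    """Dark red when the period closes below its open, light blue otherwise.

    The text does not say how an unchanged close is coloured; here it
    falls into the light blue category.
    """
    return DARK_RED if close_price < open_price else LIGHT_BLUE
```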

Price Momentum    | Momentum Categories* | Inertia Categories | Mass | Haptic Movement | Haptic Perception
positive momentum | > 40                 | 1                  | 2    | up              | Easy to move
                  | (30, 40]             | 2                  | 4    | up              |
                  | (20, 30]             | 3                  | 8    | up              |
                  | (10, 20]             | 4                  | 16   | up              |
                  | (0, 10]              | 5                  | 32   | up              |
zero              | 0                    | 0                  | -    | none            | Hard to move
negative momentum | [-10, 0)             | 5                  | 32   | down            |
                  | [-20, -10)           | 4                  | 16   | down            |
                  | [-30, -20)           | 3                  | 8    | down            |
                  | [-40, -30)           | 2                  | 4    | down            |
                  | < -40                | 1                  | 2    | down            | Easy to move

* specific to each data set

Table 12-15 An example of mapping momentum categories to inertia categories.
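The 7-day momentum and its 11 ordinal categories can be sketched as follows. The band edges follow table 12-15, but as the table notes they are specific to each data set; the function names are ours, not the thesis code:

```python
def momentum(closes, period=7):
    """Momentum: each close minus the close `period` periods earlier."""
    return [closes[i] - closes[i - period] for i in range(period, len(closes))]

def inertia_category(m, edges=(10, 20, 30, 40)):
    """Map a momentum value to an (inertia category, movement direction) pair.

    Category 1 is easiest to move (largest |momentum|), category 5
    hardest; zero momentum gives the immovable category 0.
    """
    if m == 0:
        return 0, "none"
    direction = "up" if m > 0 else "down"
    exceeded = sum(abs(m) > e for e in edges)   # how many band edges |m| clears
    return 5 - exceeded, direction
```

A momentum of +45 therefore gives (1, "up"), a light bar that is easy to push upwards, while -5 gives (5, "down"), a heavy bar that can only be nudged downwards.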

This display mapping allows the user to push up or down on the price bars in the Haptic 3D Bar Chart. If momentum is zero then the price bar cannot be moved. If the price momentum is positive then the user can only move the price bar upwards. If the price momentum is negative then the user can only move the price bar downwards. The ease with which the price bar can be moved is determined by the momentum categories. This mapping was designed to naturally reflect the trader's interpretation of price momentum. For example, if a period has very high positive momentum then it is easy to move the price bar in the upward direction. If the price bar is associated with a small momentum then it is hard to move. The mapping is reversed for periods having negative momentum. For example, a large negative momentum indicates that price has moved a long way in the negative direction. In this case it is appropriate to make the price bar easy to move in the downward direction. Some example mappings are shown in table 12-15 (exact values for the momentum categories must be calculated for each data set).

12.4.2 Prototyping the Haptic 3D Bar Chart

The software platform used for this prototype is the Reachin API [Reachin 2001]. The prototype of the 3D Bar Chart developed in the first iteration used Inventor models [Wernecke 1994]. The use of the Reachin API requires a modification of the existing software to create a VRML model for the Haptic 3D Bar Chart. The price bars in the VRML model are displayed with the haptic property of inertia. Inertia is implemented using a SpringSlotDynamic class derived from the Reachin API SlotDynamic node. The price bars are constrained by this node's behaviour to move only in a single direction, either up or down. Price bars with positive momentum can only be moved up, while price bars with negative momentum can only be moved down. Price bars with zero momentum cannot be moved. Negative and positive momentum categories are specified by the 5 inertia categories verified in the previous summative evaluation (section 12.2.3). A simple prototype was implemented on the Haptic Workbench (figure 1-7, 1-12) [Stevenson, Smith et al. 1999]. This allowed the user to move and rotate the model while investigating the haptic property of inertia (figure 12-10). The implementation used stock market data collected over a number of days of trading on the Australian Stock Exchange.
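The slot constraint can be approximated in a few lines. This is a simplified stand-in for the SpringSlotDynamic behaviour, not the Reachin implementation; the time step and the naive force model are assumptions:

```python
def constrained_displacement(momentum, applied_force, mass, dt=0.01):
    """One step of vertical displacement for a slot-constrained price bar.

    Bars with positive momentum may only move up, negative only down,
    and zero momentum not at all; larger masses (heavier inertia
    categories) yield smaller displacements for the same force.
    """
    if momentum == 0:
        return 0.0                          # immovable bar
    dy = (applied_force / mass) * dt * dt   # naive F = m*a integration step
    if momentum > 0:
        return max(dy, 0.0)                 # upward motion only
    return min(dy, 0.0)                     # downward motion only
```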


Figure 12-10 Using the Haptic Workbench to Interact with the Haptic 3D Bar Chart.

Courtesy of: CSIRO, Mathematical and Information Science, Canberra.

12.4.3 Evaluation of the Haptic 3D Bar Chart

To complete this iteration of the process an evaluation step was carried out. In this instance an expert heuristic evaluation (STEP 5.1) of the Haptic 3D Bar Chart was performed.

The prototype was demonstrated to an expert in the field of technical analysis. The domain expert spent about 30 minutes trialling the prototype. Feedback was captured both during and immediately following the demonstration. The feedback of the domain expert is described below.

The mapping is intuitive.

The domain expert felt that using inertia to display momentum was a simple and intuitive mapping.

Reduce the number of haptic categories.

The domain expert felt confident that the different haptic categories could be distinguished. However, he also felt that for lower priority data, such as momentum, fewer categories could be used. The suggestion was to use three haptic categories in each direction rather than five.

Colour assists the interpretation of haptic categories.

The domain expert commented that determining the haptic category can be time consuming, especially as this sometimes requires pushing the bar in both the upward and downward directions. However, the colour of the price bar over previous days is a useful guide to suggest the momentum direction.

Use haptics to support an interaction task.

The domain expert felt that determining patterns in the haptic display was difficult. The expert felt that the haptic display would better support some kind of force-assisted navigation task, such as setting stops.

It is difficult to recall haptic attributes.

The domain expert found it difficult to compare price bars because he could not recall the haptic category of a previously touched price bar. Rather, comparing the haptic category of two price bars requires moving back and forth between the two bars and feeling each one a number of times. The expert suggested that this makes it hard to determine global patterns in the data.


12.5 Conclusion

This chapter has continued to describe the case study within the domain of technical analysis. In particular it described the second, third and fourth iterations through the iterative steps of the MS-Process (table 12-16). These three iterations focused on designing a haptic component for the previously designed visual displays. This work is primarily directed in the MS-Process by STEP 3.2.3 Consider direct haptic metaphors.

Three haptic-visual display mappings were proposed:

• Haptic Moving Average Surface (mapping volume to texture)
• Haptic Moving Average Surface (mapping volume to compliance)
• Haptic 3D Bar Chart (mapping momentum to inertia)

These three alternative designs were prototyped and evaluated by a domain expert.

Within this case study the goal is to produce a multi-sensory display that allows a trader to uncover new patterns in stock market data. The domain expert has identified that the Moving Average Surface contains some interesting visual patterns. These visual patterns need to be further investigated. However, the evaluations suggest that it is difficult to identify global patterns with the haptic displays.

This problem with the haptic display may be due to the fact that current hardware only provides force feedback at a single point. This requires haptic information to be integrated temporally and thus makes it difficult for the user to compare haptic properties across space. No further haptic design work is carried out in this case study. Chapter 13 describes further iterations of the MS-Process; however, the focus in these iterations is on developing an auditory-visual display. More general discussion in terms of the MS-Process, the MS-Guidelines and the MS-Taxonomy is delayed until chapter 14.

MS-Process                   | Case Study                                        | Section
Iteration                    |                                                   |
STEP 1 Task Analysis         | Developed some concepts for spatial visual        | Chapter 11
STEP 2 Data Characterisation | metaphors:                                        |
STEP 3 Display Mapping       | • 3D Bar Chart                                    |
STEP 4 Prototyping           | • Moving Average Surface                          |
STEP 5 Evaluation            | • BidAsk Landscape                                |
Iteration                    |                                                   |
STEP 3 Display Mapping       | Evaluate some direct haptic metaphors:            | 12.2
STEP 4 Prototyping           | • haptic surface texture                          |
STEP 5 Evaluation            | • compliance                                      |
                             | • inertia                                         |
Iteration                    |                                                   |
STEP 3 Display Mapping       | Develop Haptic Moving Average Surface displaying  | 12.3
STEP 4 Prototyping           | categories with surface texture and compliance.   |
STEP 5 Evaluation            |                                                   |
Iteration                    |                                                   |
STEP 3 Display Mapping       | Develop Haptic 3D Bar Chart displaying categories | 12.4
STEP 4 Prototyping           | with inertia.                                     |
STEP 5 Evaluation            |                                                   |

Table 12-16 Review of current outcomes from the case study.

Chapter 13

Auditory-Visual Application

Nobby's Beach, Newcastle, Australia

“This reminds me of the story about the lighthousekeeper, which takes place sometime in the nineteenth century. At the top of the lighthouse there is a cannon that fires once an hour, 24 hours a day. One night, it is about three A.M. and the lighthousekeeper is asleep. The cannon does not fire. The lighthousekeeper jumps up, exclaiming ‘What was that!’ ” [Cohen 1994]


CASE STUDY Chapter 13 – Auditory-Visual Application 353

Chapter 13

Auditory-Visual Application

13.1 Introduction

This chapter continues to describe a case study of the MS-Process applied within the domain of technical analysis. The first iteration through the MS-Process is described in chapter 11. This led to three alternative visual displays being developed (figure 13-1). The second, third and fourth iterations through the process augmented two of these visual displays with a haptic display. These haptic-visual applications are described in chapter 12.

This chapter describes the fifth and sixth iterations through the iterative steps of the MS-Process (figure 13-2). The focus in these iterations is to further develop the visual display called the BidAsk Landscape by adding an auditory display. Work on the design and implementation of the auditory display was carried out in collaboration with Dr Stephen Barrass at the Virtual Environment Laboratory, CSIRO, Canberra.

Figure 13-1 Three alternative visual models from the first iteration of the MS-Process: the 3D Bar Chart, the Moving Average Surface and the BidAsk Landscape.

The fifth iteration of the process is described in section 13.2, section 13.3 and section 13.4. This iteration includes some redesign of the visual model for the BidAsk Landscape. However, the main focus during the display mapping is on the design of auditory metaphors (table 13-1). The new design for the auditory-visual display is prototyped and an expert heuristic evaluation is used to assess the display. The results from this work then feed back into the sixth iteration of the MS-Process.

The sixth iteration of the process is described in section 13.5, section 13.6 and section 13.7. This work focuses on refining the display mapping for the auditory-visual display to create the Auditory BidAsk Landscape (table 13-1). This design is then prototyped and evaluated using a summative evaluation.

Figure 13-2 The iterative steps of the MS-Process: display mapping (STEP 3), prototyping (STEP 4) and evaluation (STEP 5), informed by the MS-Taxonomy and the MS-Guidelines.

MS-Process             | Case Study                                                     | Section
Iteration              |                                                                |
STEP 3 Display Mapping | Redesign BidAsk Landscape. Investigate the design of spatial,  | 13.2, 13.3, 13.4
STEP 4 Prototyping     | direct and temporal auditory metaphors. Prototype using the    |
STEP 5 Evaluation      | BidAsk Landscape and evaluate.                                 |
Iteration              |                                                                |
STEP 3 Display Mapping | Incorporate auditory metaphors into the Auditory BidAsk        | 13.5, 13.6, 13.7
STEP 4 Prototyping     | Landscape. Prototype and evaluate.                             |
STEP 5 Evaluation      |                                                                |

Table 13-1 The fifth and sixth iterations through the MS-Process.

13.2 Display Mapping for Auditory Metaphors

The next stage in the case study is to revisit the three iterative phases of the MS-Process (table 13-2). For this fifth iteration the focus is on incorporating auditory metaphors into the BidAsk Landscape. This decision was driven by the results of a previous evaluation (table 13-3).

Because the focus is on designing auditory metaphors, this iteration will primarily use STEP 3.1.2 Consider Spatial Auditory Metaphors, STEP 3.2.2 Consider Direct Auditory Metaphors and STEP 3.3.2 Consider Temporal Auditory Metaphors.

However, the previous evaluation of the BidAsk Landscape also identified some recommendations for changing the visual model (table 13-3). Therefore this iteration through the display mapping first considers some changes to the visual display.

MS-Process                                       | Case Study                                                     | Section
Iteration                                        |                                                                |
STEP 3.1 Consider spatial metaphors.             | Redesign spatial visual metaphor. Add spatial auditory         | 13.2
                                                 | metaphors.                                                     |
STEP 3.1.1 Consider spatial visual metaphors.    | Review recommendations for visual display of BidAsk Landscape. | 13.2.1
STEP 3.1.2 Consider spatial auditory metaphors.  | Consider spatial auditory display for depth of market data.    | 13.2.2.2
STEP 3.1.3 Consider spatial haptic metaphors.    |                                                                |
STEP 3.2 Consider direct metaphors.              | Augment with direct auditory metaphors.                        | 13.2.2.2
STEP 3.2.1 Consider direct visual metaphors.     |                                                                |
STEP 3.2.2 Consider direct auditory metaphors.   | Consider direct auditory display for depth of market data.     | 13.2.2.2
STEP 3.2.3 Consider direct haptic metaphors.     |                                                                |
STEP 3.3 Consider temporal metaphors.            | Augment with temporal auditory metaphors.                      | 13.2
STEP 3.3.1 Consider temporal visual metaphors.   |                                                                |
STEP 3.3.2 Consider temporal auditory metaphors. | Consider temporal auditory display for depth of market data.   | 13.2.2.3
STEP 3.3.3 Consider temporal haptic metaphors.   |                                                                |

Table 13-2 The fifth iteration through the display mapping step.

13.2.1 Redesign of the Spatial Visual Metaphor After the first iteration of the MS-Process, the BidAsk Landscape was reviewed using an expert heuristic evaluation (STEP 5.1) and a formative evaluation (STEP 5.2). These evaluations led to a number of recommendations being captured (table 13-3). The high and medium priority recommendations now feed back into this iteration of the display mapping step. During the redesign of this spatial visual metaphor a number of steps of the MS-Process are revisited (table 13-3).

The first items to be addressed concerned the design of the visual model. The domain expert considered the trade data to be secondary to the bid and ask data, and the current location of the trade surface, above the bid and ask surfaces, sometimes occluded the view of those surfaces. Hence it was repositioned below the bid and ask surfaces. The domain expert also identified a problem with the overall shape of the landscape: it did not undulate as expected. However, the cause of this problem was the original data set; when new data was obtained, the landscape took on the expected undulating shape (figure 13-4). The landscape currently uses real stock data and represents one day of trading. The landscape is constructed from a series of time strips, each representing 10 minutes of trading. The domain expert requested shorter periods of trading to be displayed and so the time strips were reduced to 5 minute intervals.

Spatial Visual Metaphor: BidAsk Landscape

Recommendation | Source | Priority | MS-Process
Use shorter time periods to construct the surfaces. | Heuristic | High | STEP 3.1.1
Emphasize the data close to the trade. | Heuristic | High | STEP 3.1.1
Check data or algorithm. The landscape should undulate. | Heuristic | High | STEP 3.1.1
Use the BK-Landscape as a framework for multi-sensory design. | Formative | High | STEP 3
Emphasize foveal vision by using spatial distortion at centre of display. | Formative | High | STEP 3.1.1.3.4
Investigate the extension of this visual metaphor with auditory metaphors. | Formative | High | STEP 3.1.2, STEP 3.2.2, STEP 3.3.2
Place the trade surface below the bids and asks. | Heuristic | Medium | STEP 3.1.1
Review eye problems with long exposure times. | Formative | Low | STEP 3.1.1.1

Table 13-3 Recommendations for the BidAsk Landscape from the first iteration of the process.

The final modification involves the design of a fish-eye display around the centre of the bids and asks (figures 13-3, 13-4). Space within this region is distorted by stretching it along the x-axis, so that more space is dedicated to bids and asks close to the current area of trading. This new design emphasises the use of foveal vision close to the centre of the landscape, where the most important data is contained, while peripheral vision is used to monitor large changes on the data periphery. The validity of this design is suggested by the MS-Guidelines, SV-1.2 Foveal vision is detailed and SV-1.3 Peripheral vision responds to changes.
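The x-axis stretching described above can be sketched as a standard one-dimensional fish-eye transform. This is an illustrative sketch only, not the thesis implementation; the function name and the distortion factor d are assumptions.

```python
def fisheye_x(x: float, focus: float, d: float = 3.0) -> float:
    """Stretch normalised x-coordinates in [-1, 1] around `focus`.

    Uses the classic graphical fish-eye function g(dx) = (d+1)*dx / (d*|dx| + 1),
    which magnifies space near the focus and compresses the periphery.
    The distortion factor d is a free parameter chosen here for illustration.
    """
    dx = x - focus
    return focus + (d + 1) * dx / (d * abs(dx) + 1)


# Points near the focus are pushed apart; the extremes of the space stay fixed.
print(fisheye_x(0.1, 0.0))   # greater than 0.1: the central region is magnified
print(fisheye_x(1.0, 0.0))   # 1.0: the edge of the space is unchanged
```

A larger d dedicates more of the display to the region around the current trading price, at the cost of stronger compression at the periphery.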

Figure 13-3 The redesign of the BidAsk Landscape to use a distorted visual space.


Figure 13-4 An overview and close up of the BidAsk Landscape.

13.2.2 Display Mapping for Auditory Metaphors The focus during this iteration of the display mapping step is to use the visual display as a framework for the auditory display. This is supported by the guideline, MST-2 Use the spatial visual metaphor as a framework for the display. Another goal during this step is to display different information using the auditory display. This type of design is described as complementary display and is directed by the guideline, MST-3 Increase the human-computer bandwidth.

The emphasis for the auditory display is provided by the MS-Guidelines, MST-1 Use each sensory modality to do what it does best and MST-1.2 Hearing emphasises temporal qualities. The aim of the auditory display is to help the trader uncover short-term temporal patterns that may occur in the data.

Currently the visual display shows the accumulated volume of bids, asks and trades over a small time period. The goal of the auditory display is to show each individual bid and ask as they occur. The combined display should help the user to determine the direction of the next trade price, either up or down, from the last trade. The auditory display should provide information about relations between bids and asks such as the current spread. More detail in the auditory display should provide information about current activity in the market, the number of bids and asks occurring, as well as the volume of each bid and ask.

13.2.2.1 Consider Spatial Auditory Metaphors (STEP 3.1.2) The first part of the MS-Process to be used is STEP 3.1.2 Consider spatial auditory metaphors. An obvious design is to display each bid, ask and trade as it occurs within the 3D space defined by the visual display. For example, a bid's price, volume and time can be used to determine the auditory position. A much simpler strategy is to use a one-dimensional space and position the bids, asks and trades according to price. Therefore bids will occur to the left of the user, asks to the right and trades in the middle (figure 13-5).


This design has the advantage that it can be implemented by simply panning the sounds. However, this spatial mapping only incorporates price information. Time is implicit in the order in which the offers and trades are displayed. The volume information is of low priority and can be displayed using a direct auditory metaphor, such as the loudness of the sound.

One issue with this design is that price is a quantitative attribute rather than a categorical one. This is counter to the guideline, SA-1 Use spatial auditory metaphors for displaying categorical data. It is decided to ignore this guideline, as only relative price information needs to be interpreted by the user. Therefore highly accurate identification of auditory position is not required. This decision may need to be reviewed after evaluation.

The guideline, SA-3.1 The JND of spatial position depends on sound location notes that the accuracy of localising sounds decreases towards the periphery. This should not be a problem, as the bids and asks that occur at the sides of the display are assumed to have less impact on current trading. The bids, asks and trades represent three nominal categories, which are distinguished by using sounds with different timbre. The use of only three sounds aims to keep noise levels to a minimum, as suggested by the guideline, SA-2.3.1 Use only a few spatial sounds. The guideline, SA-1.2 Sound localisation is in parallel to sound identification suggests that it is possible to identify and locate sounds at the same time. Localisation can be assisted by using complex timbre to represent bids, asks and trades, as suggested by the guideline, SA-3.4 Complex sounds are easier to localise.

Figure 13-5 The design of the spatial sound display for the BidAsk Landscape.
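Under this one-dimensional design, the pan position follows from price by a simple linear mapping, so bids (priced below the last trade) fall to the left and asks to the right. A minimal sketch, assuming a normalised azimuth range of ±90° and a hypothetical function name:

```python
def price_to_pan(price: float, low: float, high: float) -> float:
    """Linearly map a price in [low, high] to an azimuth in [-90, +90] degrees.

    Low prices pan hard left (-90), high prices hard right (+90).
    """
    t = (price - low) / (high - low)
    return -90.0 + 180.0 * t


# With a $20-$27 price range, the midpoint pans to the centre of the display.
print(price_to_pan(23.5, 20.0, 27.0))  # 0.0 (directly ahead)
```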

13.2.2.2 Consider Direct Auditory Metaphors (STEP 3.2.2) The next part of the MS-Process to be used is STEP 3.2.2 Consider direct auditory metaphors. The three nominal categories of bids, asks and trades are mapped to different timbre. This mapping is supported by the guidelines, DA-1 Direct auditory metaphors are the second choice for displaying categories and DA-1.2 Timbre can represent nominal categories.

Page 147: Chapter 7 Guidelines for Spatial Metaphors

CASE STUDY Chapter 13 – Auditory-Visual Application 359

To choose the appropriate timbre, three alternative sets of sound samples were selected, each set with different spectral characteristics (table 13-4). The first set uses natural frog-like sounds. These sounds are consistent with the natural landscape metaphor of the visual display, and the frog-like croaks suggest call and response patterns such as those that occur in nature. The second set uses musical instruments, in particular a drum, guitar and trumpet to represent bids, asks and trades (figure 13-6). These sounds seem easier to listen to than the frog sounds and perhaps highlight the musical qualities of rhythm and melody. The third set uses human voices speaking the words "buy", "sell" and "trade". This group of sound samples is most suggestive of the human market place and easy to interpret.

Nominal Category | Frog-like sounds | Musical instruments | Human voices
Bid | Croak 1 (deep) | Drum | Male voice – "buy"
Trade | Croak 2 (mid) | Trumpet | Male voice – "trade"
Ask | Croak 3 (high) | Guitar | Female voice – "sell"

Table 13-4 Mapping nominal categories to different sound samples.

Figure 13-6 Musical timbre used to display the bid, ask and trade categories.

The three sets of sound samples followed the guideline, DA-6.1 Use timbre with multiple harmonics. The duration of each sound was 0.25 seconds; this is long enough to avoid problems identified by the guideline, DA-6.3 It is difficult to perceive the timbre of a short sound. The final outcome was three different sets of timbre that could be used to represent the bids, asks and trades. At this stage it may have been appropriate to run a summative evaluation to choose the best group of sound samples for the auditory display. However, each group of sounds seemed appropriate, depending on the criteria chosen to decide which was best. It was decided to maintain all three for the first heuristic evaluation by the expert.

The other direct attributes of sound that could be used for displaying information are pitch and intensity. The two guiding principles for using these attributes are, DA-1.4 Do not use sound attributes to represent precise values and DA-1.1 Pitch and loudness can represent ordinal categories. It was decided to map price to pitch categories and share volume to categories of loudness. In these mappings the guideline, DA-3.2 Sounds have natural associations was followed. In particular, "Louder is more, softer is less" and "Higher pitch is more, lower pitch is less" are used. Some examples of the display mappings are given in table 13-5 and table 13-6.

Share Price (range = r) | Example Price (range $20–$27) | Pitch
[low price, low price + r/7] | [$20, $21] | C5
(low price + r/7, low price + 2*r/7] | ($21, $22] | D5
(low price + 2*r/7, low price + 3*r/7] | ($22, $23] | E5
(low price + 3*r/7, low price + 4*r/7] | ($23, $24] | F5
(low price + 4*r/7, low price + 5*r/7] | ($24, $25] | G5
(low price + 5*r/7, low price + 6*r/7] | ($25, $26] | A5
(low price + 6*r/7, high price] | ($26, $27] | B5

Table 13-5 Categories of price mapped to pitch.

Share Volume | Loudness
≤ 10,000 | low
10,000 < volume ≤ 100,000 | medium
> 100,000 | high

Table 13-6 Categories of share volume mapped to loudness.
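The mappings in tables 13-5 and 13-6 can be sketched directly. This is an illustrative reading of the tables, not the thesis code; the function names are assumptions.

```python
import math

# The seven pitch categories of table 13-5, in ascending order.
PITCHES = ["C5", "D5", "E5", "F5", "G5", "A5", "B5"]

def price_to_pitch(price: float, low: float, high: float) -> str:
    """Table 13-5: divide the price range into seven equal bins mapped to C5-B5.

    Bins are half-open on the left, e.g. ($21, $22] -> D5, with the lowest
    bin [low, low + r/7] closed on both ends.
    """
    r = high - low
    i = math.ceil((price - low) / r * 7) - 1
    return PITCHES[min(6, max(0, i))]

def volume_to_loudness(volume: int) -> str:
    """Table 13-6: three ordered loudness categories for share volume."""
    if volume <= 10_000:
        return "low"
    if volume <= 100_000:
        return "medium"
    return "high"


print(price_to_pitch(21.5, 20.0, 27.0))  # D5, since $21.50 lies in ($21, $22]
print(volume_to_loudness(50_000))        # medium
```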

Note that a number of problems may be encountered. For example, the guidelines note the dependency of perceived loudness on pitch and timbre in, DA-4.2 Loudness depends on frequency and DA-4.3 Loudness depends on bandwidth. Likewise for perceived pitch the guidelines, DA-5.1 Pitch depends on loudness and DA-5.2 Pitch depends on timbre note the possible interaction of sound attributes. Finally, pitch can also affect timbre, as recorded in the guidelines, DA-6.2 The perceived timbre is affected by pitch and DA-6.4 It is difficult to perceive the timbre of a high-pitched sound. Despite the potential for such problems, it should be noted that absolute identification of pitch and loudness information is not required. The high priority information is in the timing and price of bids and asks. The price information has been represented in both pitch categories and spatial location. The pitch categories are designed to aid the user in separating the bids, asks and trades into different streams. This principle is captured in the guideline, TA-6.8 Sound streams may be grouped or segregated.

13.2.2.3 Consider Temporal Auditory Metaphors (STEP 3.3.2) The third part of the MS-Process concerned with auditory metaphors is STEP 3.3.2 Consider temporal auditory metaphors. This aspect of the display was not considered in detail, as the temporal patterns are features of the data. By playing back the data over time it is hoped that patterns may emerge in both the spatial movement and direct properties of the sounds. It is these types of patterns that the analyst is trying to identify. For example, particular rhythms or dynamics may be indicative of certain market conditions.


13.3 Prototyping Auditory Metaphors Prototyping is the next step of the MS-Process and it is now revisited during the fifth iteration of the process. During this iteration the aim is to implement the initial prototype of the auditory-visual BidAsk Landscape (table 13-7), in order to gather feedback from the domain expert about the validity of the current display mapping.

MS-Process | Case Study Iteration | Section
STEP 4 Prototyping. | Develop an auditory-visual prototype based on the BidAsk Landscape. | 13.3
STEP 4.1 Consider software platform. | Determine any limitations of the software platform used: Avango [Tramberend 1999]; Max [Puckette 1991]. | 13.3.1
STEP 4.2 Consider hardware platform. | Determine any limitations of the hardware platform used: Wedge [Gardner and Boswell 2001]. | 13.3.2
STEP 4.3 Implement current design. | Implement the auditory-visual prototype. | 13.3.3

Table 13-7 The fifth iteration of the prototyping step implements an auditory-visual prototype.

13.3.1 Consider Software Platform (STEP 4.1) The prototype software is written in C++ [Stroustrup 1996] and implemented using the Avango framework [Tramberend 1999] (figure 13-7). The 3D visual models are constructed from online trading data and implemented using Inventor [Wernecke 1994]. The auditory display is built using the Avango sound server [Eckel 1998] and the Max synthesis system [Puckette 1991]. Max is a real-time graphical programming language that allows for object-oriented design of sound synthesis algorithms.

13.3.2 Consider Hardware Platform (STEP 4.2) The first prototype of the BidAsk Landscape was implemented on the i-CONE™ [Barco 2001] located at GMD in Germany. For this iteration of the prototyping step the Wedge [Gardner and Boswell 2001] was used. This platform is located at CSIRO's Virtual Environment Laboratory in Canberra, Australia.

The Wedge is a two-sided wall display that allows the user to interact with a 3D model displayed in a small room-sized environment (figure 13-8). The Wedge has a much lower visual resolution than the i-CONE™ but there are no other hardware limitations in the environment. The Wedge supports wide field of view display and provides stereoscopic display with single-user head tracking. A 3D mouse provides direct interaction with models. Sound can be displayed using a spatialised display generated on an array of eight speakers (figure 13-9).


[Figure: software architecture. Online and offline stock trading data pass through C++ processing code to produce derived indicators; 3D visual models (Inventor) and sound data models (Max) are driven by C++ application code within the Avango real-time framework.]

Figure 13-7 Overview of the software platform for running the auditory-visual application.

Figure 13-8 The BidAsk Landscape being manipulated in the Wedge.

Courtesy of: Australian National University, Canberra.

Figure 13-9 The 8 speaker array in the Wedge allows for the display of spatialised sound.


13.3.3 Implement Current Design (STEP 4.3) The BidAsk Landscape is implemented so that the landscape automatically animates, with each visual time strip of the landscape appearing every five seconds. Each visual time step represents five minutes of data. For the eight hours of daily trading data there are 96 steps in total. Thus the animation sequence takes eight minutes to play a single day of stock market data. The data used in the prototype has previously been captured via a real-time feed to the Australian Stock Exchange.

During each five minute visual time step a number of auditory events are displayed. All bids, asks and trades corresponding to that five minute period are heard. The sounds are played sequentially as they occur in the real-time data. The sounds are scheduled with slight separation so they play at 0.01 second intervals. In effect the auditory display is speeded up so that the sounds play within the five second step time of the visual animation. The design for the auditory display suggested three alternative sound samples, frog-sounds, musical instruments and human voices. The implementation allows for one of these three sound samples to be specified at run time.
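The playback arithmetic above can be checked with a short sketch. This is illustrative only; the constants are taken from the text and the scheduling function is a hypothetical name.

```python
# Visual animation: 8 hours of trading in 5-minute strips, one strip drawn
# every 5 seconds of playback.
TRADING_HOURS = 8
STRIP_MINUTES = 5
PLAYBACK_STEP_S = 5

steps = TRADING_HOURS * 60 // STRIP_MINUTES        # 96 time strips in a day
playback_minutes = steps * PLAYBACK_STEP_S / 60    # 8.0 minutes for one day

# Auditory events within a strip: played in data order, 0.01 s apart, so that
# all sounds fit inside the 5-second visual step.
def sound_start_times(n_events: int, step_start_s: float, gap_s: float = 0.01):
    """Return the playback start time of each sound in a visual time step."""
    return [step_start_s + i * gap_s for i in range(n_events)]

print(steps, playback_minutes)    # 96 8.0
print(sound_start_times(3, 0.0))  # [0.0, 0.01, 0.02]
```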

13.4 Evaluation of Auditory Metaphors The next step of the MS-Process is evaluation. The auditory-visual prototype of the BidAsk Landscape is evaluated in the Wedge using STEP 5.1 Expert Heuristic Evaluation.

The prototype system is demonstrated to the domain expert over a 1 hour period. During this time the domain expert reviews three days of stock market data. During and after the demonstration the domain expert is informally interviewed to capture feedback. The interview is targeted to capture information about the usefulness of the visual and auditory displays for finding patterns in stock market data. The feedback is summarised in table 13-8.

Spatial Visual Metaphor: BidAsk Landscape

Recommendation | Source | Priority | MS-Process
Use shorter time periods to construct the surfaces. Try 1 minute or less. | Heuristic | High | STEP 3.1.1
Use a more dynamic data set. Try open and closing hours. | Heuristic | High | STEP 2
Remove trade sounds as they just generate noise. | Heuristic | High | STEP 3.2.2
Loudness of some peripheral bids and asks distracts your attention. | Heuristic | High | STEP 3.2.2.5
Focus on the bid and ask sounds near the current trading price. | Heuristic | High | STEP 3.1.1.3, STEP 3.1.2.2
It is difficult to interpret relative price from the spatial or pitch displays. | Heuristic | Medium | STEP 3.1.2.2, STEP 3.2.2.4
The voice samples are the simplest display for testing with non-experts. | Heuristic | Low | STEP 3.2.2.3

Table 13-8 Feedback from the expert heuristic evaluation of the auditory-visual prototype.


The domain expert stated that he would like the display to emphasise the short-term changes in the data. He suggested that the visual time steps should occur more frequently; for example, the 5 minute steps could be reduced to 1 minute steps. The expert felt that the data sets used in the demonstration were not dynamic enough to show rapid changes. He recommended that more dynamic data should be obtained from highly traded stocks and that the most dynamic periods of trading, namely the opening and closing hours, should be used.

Regarding the auditory display, the domain expert felt that there was too much noise and suggested that the trade sounds be removed from the display. He wanted the accent to be on the bids and asks, and in particular the bids and asks that occur near the current trading values. With the auditory display the expert reported that the temporal order of events could be distinguished. However, he also noted that the actual price values were difficult to interpret using either the pitch or spatial location information. Outliers in the price range could be determined because they occurred at the extreme left or right of the spatial display. Once again the expert suggested that the display should focus more on the central region of the price range. This central region is where the bids and asks are most likely to impact on the next trade.

A problem was also noticed with the current mapping of loudness to share volume. A large volume of peripheral bids or asks does not significantly impact on the current trading situation. However, in the current display this can generate a loud sound that draws the user's attention away from the more important data at the centre of the display.

The design for the auditory display suggested three alternative sound samples: frog sounds, musical instruments and human voices. During the interview, the domain expert was questioned about the most appropriate of these three sets of sound samples. Unfortunately, the domain expert was not definite about the preferred option. However, he suggested that the human voices saying the words "buy" and "sell" were the simplest for non-experts to understand.

13.5 Display Mapping for the Auditory BidAsk Landscape The next and final stage in the case study is to revisit the three iterative phases of the MS-Process (table 13-9). This is the sixth iteration through the iterative steps of the process. During this iteration the emphasis is on refining the display mapping of the Auditory BidAsk Landscape. A final auditory-visual prototype is implemented and a detailed summative evaluation is used to evaluate the prototype.

During the fifth iteration of the process, an expert heuristic evaluation of the current design resulted in a number of recommendations being captured (table 13-8). These recommendations feed back into the display mapping step and lead to some redesign of both the auditory and visual displays. A number of the display mapping steps of the MS-Process are revisited (table 13-8).

The visual display was changed to update more rapidly. The 5 minute time slices were reduced to 30 second time slices, so that every visual time step was generated from 30 seconds of data. The animation rate is increased from one time step every 5 seconds to one time step every 3 seconds. Only one hour of trading data is used, and since there are 120 time steps the total animation sequence takes 6 minutes to play.

MS-Process | Case Study Iteration | Section
STEP 3 Display Mapping | Redesign visual and auditory display for Auditory BidAsk Landscape. | 13.5
STEP 4 Prototyping | Prototype the Auditory BidAsk Landscape on Barco Baron [Barco 2001]. | 13.6
STEP 5 Evaluation | Summative evaluation of the Auditory BidAsk Landscape. | 13.7

Table 13-9 Outline of the sixth iteration through the MS-Process.

A number of problems had been identified with the auditory display: price information is difficult to interpret, there is not enough emphasis on bids and asks that occur near the trading price, the loudness of some sounds is distracting, and the sound of the trades distracts from the bids and asks. To redesign the auditory display, the first step of the MS-Process to be considered is STEP 3.1.2.2 Design the display space. In particular, the design needs to address the way that price is displayed. It is decided to subdivide the space and thus create spatial categories, rather than using a continuous quantitative space to display price. This design better follows the MS-Guideline, SA-1 Use spatial auditory metaphors for displaying categorical data. Seven categories are created to represent price. This conforms to the MS-Guideline, GP-8 Seven is a magic number.

The obvious choice for the seven spatial categories is to simply divide the price range into seven equal segments and then allocate each bid and ask to a category based on price. Trade data is no longer displayed, so only bids and asks need to be categorised. However, this design does not address the other recommendation captured from the expert, namely to accentuate data that is close to the last trade price.

To accentuate data close to the current trading action, price is mapped into seven ordered information categories representing the importance of the bid or ask relative to the last trade (table 13-10). So, for example, bids or asks that are equal to the last trade price are seen as more important than bids or asks that are some cents away. This 'fish-ear' mapping emphasises bids and asks close to the price of the last trade.

To help separate the sound streams for each of the seven price categories, ordered pitch categories (C5–B5) are used. This mapping of price categories to pitch categories is duplicated by mapping the price categories to spatial categories. The result is that lower pitch notes are mapped to the left-hand or lower priced categories, while higher pitch notes are used for the right-hand or higher priced categories (figure 13-10). To distinguish between bids and asks, voices with different timbre are used (table 13-11). A bid is identified by a male voice saying the word "buy". An ask is identified by a female voice saying the word "sell".

The auditory display also needs to address the share volume to sound loudness mapping. Loud sounds to the far left or right of the display can be caused by a high volume of bids and asks. These loud sounds would capture the user's attention, and yet the domain expert has suggested they are not important for the task.

Figure 13-10 The display mapping used in the Auditory BidAsk Landscape.

To integrate the volume data, it is mapped to three ordered levels of importance using a logarithmic scale as shown in table 13-12. Then a combined mapping from price importance and volume importance to overall importance was devised (table 13-13). This mapping reflects the fact that volume close to the last trade price can strongly influence the direction of the next trade, but that volume is not important in the price periphery.

Offer (Bid/Ask) − last Trade (cents) | Importance of price category
−4 or more | − low
−2, −3 | − medium
−1 | − high
0 | 0 high
+1 | + high
+2, +3 | + medium
+4 or more | + low

Table 13-10 Mapping from price categories to importance.

Type of Offer | Timbre
Bid | Male voice saying "buy"
Ask | Female voice saying "sell"

Table 13-11 Mapping from type of offer to timbre.


Share Volume | Importance of volume category
≤ 10,000 | low
10,000 < volume ≤ 100,000 | medium
> 100,000 | high

Table 13-12 Mapping from volume categories to importance.

Importance of price category | Volume importance: high | medium | low
− low | − low | − low | − low
− medium | − medium | − low | − low
− high | − high | − medium | − low
0 high | 0 high | 0 medium | 0 low
+ high | + high | + medium | + low
+ medium | + medium | + low | + low
+ low | + low | + low | + low

Table 13-13 Mapping from price and volume to importance.
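The combined mapping of tables 13-10, 13-12 and 13-13 can be read as: the overall importance is the price importance demoted by however far the volume importance falls below high, but never below low. A sketch under that reading (the function names and the demotion formula are assumptions; the outputs reproduce the table rows):

```python
LEVELS = ["low", "medium", "high"]
IDX = {name: i for i, name in enumerate(LEVELS)}

def price_importance(offset_cents: int) -> tuple[str, str]:
    """Table 13-10: (direction, level) from offer price minus last trade, in cents."""
    if offset_cents == 0:
        return ("0", "high")
    sign = "-" if offset_cents < 0 else "+"
    distance = abs(offset_cents)
    if distance == 1:
        return (sign, "high")
    if distance <= 3:
        return (sign, "medium")
    return (sign, "low")

def volume_importance(volume: int) -> str:
    """Table 13-12: three ordered volume categories (logarithmic boundaries)."""
    if volume <= 10_000:
        return "low"
    if volume <= 100_000:
        return "medium"
    return "high"

def overall_importance(offset_cents: int, volume: int) -> tuple[str, str]:
    """Table 13-13: demote price importance one level per level of volume
    importance below high, clamped at low."""
    direction, price_level = price_importance(offset_cents)
    demotion = 2 - IDX[volume_importance(volume)]
    level = LEVELS[max(0, IDX[price_level] - demotion)]
    return (direction, level)


# A bid 1 cent below the last trade with high volume is maximally important;
# the same bid with only medium volume drops to medium importance.
print(overall_importance(-1, 500_000))  # ('-', 'high')
print(overall_importance(-1, 50_000))   # ('-', 'medium')
```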

The overall design of the display is shown in table 13-14. The display allows the listener to identify bids and asks near the last trade, as these elements are at the spatial centre of the display. As price diverges from the last trade, the sound moves further from the central position. The mapping of information categories to both pitch and space increases the perceptual segregation and perceived order of the categories. The three ordered categories of volume information are mapped to three equal steps in loudness. All peripheral bids and asks are now low in intensity, while only bids and asks near the last trade can create loud sounds.

Importance Category | Pitch Category | Spatial Category (Pan) | Loudness (range)
− low | C5 | −90° | low
− medium | D5 | −60° | low, medium
− high | E5 | −30° | low, medium, high
0 high | F5 | 0° | low, medium, high
+ high | G5 | 30° | low, medium, high
+ medium | A5 | 60° | low, medium
+ low | B5 | 90° | low

Table 13-14 Mapping from importance to pitch, spatial position and loudness.

The overall effect of listening to the display is as if bids and asks are heard as they are made, coming from the left and right as though from a crowd. If the bid or ask is lower than the last trade then it is heard to the left. If the offer is the same as the last trade price then it comes from the centre, and if it is higher then it comes from the right. A flurry of bids to the right could indicate demand to buy at a higher price than the last trade, suggesting upward movement in the trade price. A pattern of bids to the left mixed with asks to the right might indicate equilibrium in the market.

The general level of market activity can also be discerned from the display. A highly active market is a continuous hubbub. A mid-range of activity of around 20 events per time step sounds intermittent and overlapping. When market activity is low, the individual events can be heard, while a lack of activity is silent.

13.6 Prototyping the Auditory BidAsk Landscape Prototyping is the next step of the MS-Process and it is now revisited during the sixth iteration of the process (table 13-15). The aim in this iteration is to prototype the Auditory BidAsk Landscape so that it can be used in the final summative evaluation.

MS-Process | Case Study Iteration | Section
STEP 4 Prototyping. | Develop a prototype of the Auditory BidAsk Landscape. | 13.6
STEP 4.1 Consider software platform. | Determine any limitations of the software platform used: Avango [Tramberend 1999]; Max [Puckette 1991]. | 13.6.1
STEP 4.2 Consider hardware platform. | Determine any limitations of the hardware platform used: Barco Baron [Barco 2001]. | 13.6.2
STEP 4.3 Implement current design. | Implement the Auditory BidAsk Landscape for evaluation. | 13.6.3

Table 13-15 The sixth iteration of the prototyping step.

13.6.1 Consider Software Platform (STEP 4.1) The software platform for this prototype is the same as that used for the previous iteration. It is based on the Avango framework [Tramberend 1999] and the Max synthesis system [Puckette 1991] (figure 13-7). All software is written in C++ [Stroustrup 1996].

13.6.2 Consider Hardware Platform (STEP 4.2) The previous prototype of the Auditory BidAsk Landscape was implemented on the Wedge [Gardner and Boswell 2001]. For this iteration of the prototyping step the Barco Baron [Barco 2001] was used. This platform is located at CSIRO's Virtual Environment Laboratory in Canberra, Australia.

The Barco Baron is a single-sided large screen display which can be tilted to any angle between horizontal and vertical. In this application the display is tilted to the vertical to allow the user to interact with 3D models displayed in the space in front of the screen (figure 13-11). There were no particular hardware limitations in this environment. This platform supports stereo views with single-user head-tracking and direct interaction with models using a pointing wand. Sound can be displayed in stereo using two standard speakers, one on each side of the screen. Alternatively the sound can be displayed using Sennheiser HD540 headphones (figure 13-11).


Figure 13-11 The Auditory BidAsk Landscape displayed on the Barco Baron [Barco 2001].

Courtesy of: CSIRO, Mathematical and Information Science, Canberra.

13.6.3 Implement Current Design (STEP 4.3) The Auditory BidAsk Landscape is implemented so that the visual display is updated every 3 seconds. Each visual time step represents 30 seconds of real-time data recorded from the stock market. An auditory event is played for each bid and ask, and the sequence of events is maintained, with a delay of 0.01 seconds between sounds. Time is compressed by a factor of 10, so that all bids and asks are heard within the 3 seconds of the visual time step.
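The time compression described above can be verified with a few lines; the constants are taken from the text, and the variable names are illustrative.

```python
STEP_REAL_S = 30   # each visual time step covers 30 s of recorded market data
STEP_PLAY_S = 3    # ...and is played back in 3 s

compression = STEP_REAL_S // STEP_PLAY_S              # factor of 10
steps_per_hour = 3600 // STEP_REAL_S                  # 120 time steps
playback_minutes = steps_per_hour * STEP_PLAY_S / 60  # 6.0 minutes per hour of data

print(compression, steps_per_hour, playback_minutes)  # 10 120 6.0
```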

13.7 Evaluation of the Auditory BidAsk Landscape The next step of the MS-Process is evaluation. The prototype of the Auditory BidAsk Landscape is evaluated on the Barco Baron using STEP 5.3 Summative evaluation (table 13-16).

The motivation for this thesis is to assist in the design of multi-sensory displays that allow the user to find useful patterns in large abstract data sets. The question that this summative evaluation addresses is whether the Auditory BidAsk Landscape, designed with the MS-Process and using the MS-Guidelines, helps the user find patterns in depth of market data.

It is difficult to design a targeted experiment that measures the subject's ability to discover useful patterns. What a useful pattern means depends on the application domain. For the technical analysis domain, the patterns that the users might find should help them to predict the direction of the share price. For example, if a user could predict the direction of the share price using the Auditory BidAsk Landscape, then it would give a strong indication that there are useful patterns in the display. Therefore this experiment was designed to test the subject's performance at predicting the direction of the trade price.

MS-Process                                      Case Study                                                        Section
STEP 5.3 Summative evaluation.                  Summative evaluation of the Auditory BidAsk Landscape.            13.7
STEP 5.3.1 Design experiment.                   Design an experiment to see if patterns can be detected
                                                in the display.                                                   13.7.1
STEP 5.3.2 Conduct experiment.                  Run experiment with 15 subjects using real stock market data.     13.7.2
STEP 5.3.3 Analyse experimental results.        Analyse results of the experiment for statistical significance.   13.7.3
STEP 5.3.3 Capture problems and suggestions.    Gather informal feedback from subjects about the
                                                Auditory BidAsk Landscape.                                        13.7.4

Table 13-16 The sixth iteration of the evaluation step uses a summative evaluation.

Another aim of this work is to increase the human-computer bandwidth by providing users with a multi-sensory display. This is captured in the guideline MST-3 Increase the human-computer bandwidth. This guideline also characterises three types of display: conflicting, redundant and complementary (section 6.5). In the final analysis of the summative evaluation the performance of the multi-sensory display is also analysed in these terms. The design of the summative evaluation and the statistical analysis of the experimental results were carried out in collaboration with Warren Muller from CSIRO Mathematical and Information Sciences, Canberra.

13.7.1 Design Experiment (STEP 5.3.1)

The experiment was designed to test each subject's performance using three modes: the visual, the auditory, and the combined auditory-visual display. The order of testing on each mode was randomised as shown in table 13-17.

           Order 1    Order 2    Order 3    Order 4    Order 5    Order 6
First      Visual     Visual     Auditory   Auditory   Combined   Combined
Second     Combined   Auditory   Visual     Combined   Auditory   Visual
Third      Auditory   Combined   Combined   Visual     Visual     Auditory
Subjects   1, 7, 13   2, 8, 15   3, 9, 14   4, 10      5, 11      6, 12

Table 13-17 Order of modes for experimental subjects.
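The six orders in table 13-17 are the six possible permutations of the three modes. A counterbalancing scheme of this kind can be sketched as below; the round-robin assignment is an illustration and does not reproduce the exact allocation used in the experiment (in table 13-17, for example, subjects 14 and 15 depart from a strict round robin):

```python
from itertools import permutations

MODES = ("Visual", "Auditory", "Combined")

# All 6 possible mode orders; orders 1-6 in table 13-17 are one such set.
ORDERS = list(permutations(MODES))

def assign_orders(n_subjects):
    """Round-robin assignment of subjects (numbered from 1) to the six
    orders; with 15 subjects, three orders receive 3 subjects each and
    three receive 2."""
    return {s: ORDERS[(s - 1) % len(ORDERS)] for s in range(1, n_subjects + 1)}

assignment = assign_orders(15)
```

With 15 subjects this guarantees every order is used and every subject sees each mode exactly once.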

The experimental task for subjects was to predict the direction of the next trade based on the data already encountered. For all modes the display evolves over time. At 10 random locations within the data the display was paused and the user asked to predict whether the next trading price was above (UP) or below (DOWN) the last trade. There are four possible conditions, as shown in table 13-18.

                 Prediction UP   Prediction DOWN
Response UP      Correct         Incorrect
Response DOWN    Incorrect       Correct

Table 13-18 Possible conditions for user responses.

During the experiment 10 predictions were recorded for each subject in each of the three modes, visual (V), auditory (A) and combined (C). The up and down movements in trade price were between 1 and 7 cents, with ~80% of the decision points involving only 1 or 2 cent changes (table 13-19).

Variation in trade price (cents)   Number of decision points used in evaluation   % of decision points used in evaluation
1                                  29                                             48.3
2                                  18                                             30.0
3                                  9                                              15.0
4                                  2                                              3.3
5                                  0                                              0.0
6                                  1                                              1.6
7                                  1                                              1.6
TOTAL                              60

Table 13-19 Variations in price movements at decision points.

The null hypothesis of the experiment is that subjects cannot predict the direction of the next trade from these displays. The alternative hypothesis, that subjects can predict trade direction, depends on the technical trading hypothesis that the information needed is contained in the data. Since this experiment relies on that premise, the data used was real trading data collected for three shares over a day of trading on the Australian Stock Exchange. The data was divided into 6 subsets, three for training and three for evaluation. These subsets were randomly allocated to visual training, visual evaluation, auditory training, auditory evaluation, auditory-visual training and auditory-visual evaluation for each subject. In all, the experiment was designed to test for variations that might occur between subject, mode and order of using the displays. The general and specific questions targeted by the summative evaluation are summarised in table 13-20. Note that the patterns in the display that indicate up and down movements of the trade price may be different. Therefore the experiment considers combined responses (UP and DOWN) and also responses in a single direction (UP or DOWN).
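The random allocation of the six data subsets to training and evaluation uses, described above, can be sketched as follows; the subset labels are hypothetical:

```python
import random

SUBSETS = ["S1", "S2", "S3", "S4", "S5", "S6"]  # hypothetical subset labels

USES = ["visual training", "visual evaluation",
        "auditory training", "auditory evaluation",
        "auditory-visual training", "auditory-visual evaluation"]

def allocate(rng):
    """Randomly pair the six data subsets with the six uses for one subject."""
    shuffled = SUBSETS[:]
    rng.shuffle(shuffled)
    return dict(zip(USES, shuffled))

# A fresh random pairing per subject keeps subset content and display
# mode uncorrelated across the experiment.
allocation = allocate(random.Random(0))
```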

13.7.2 Conduct Experiment (STEP 5.3.2)

The subjects were 13 males and 2 females between the ages of 20 and 42. Only one subject had any familiarity with depth of market data and none had traded on the stock market. Of these subjects, 10 were familiar with Virtual Environment technology.


The experiment was conducted on the Barco Baron [Barco 2001] at CSIRO's Virtual Environment Laboratory in Canberra (figure 13-12). The application runs on an SGI Onyx2 computer. The display uses shutter glasses and Polhemus head-tracking. The sound is provided over Sennheiser HD540 headphones; the use of headphones prevents interference from background noise.

Experimental Questions

General questions
• Can the users find consistent patterns in the data?
• Can people use the visual (V), auditory (A) and combined (C) displays to predict the direction of the next trade from depth of market data?
• Does the multi-sensory display function in a complementary, redundant or conflicting manner?
• What differences are there in performance with the visual (V), auditory (A) and combined (C) displays?
• How do people interpret and make decisions from these displays?

Specific questions – considering all responses (UP and DOWN)
• Can the BidAsk Landscape (any modality) help predict market direction?
• Which modality is best? How do the modalities compare: auditory vs visual vs combined auditory-visual? In particular, is the combined auditory-visual display better than the visual only or auditory only?
• Is there a training effect? Does the order with which the subjects used the different modalities have an effect on results?
• Do some subjects perform better or worse than others?

Specific questions – only considering responses in a single direction (UP or DOWN)
• Can the Auditory BidAsk Landscape (any modality) help predict the market moving in a specific direction – either UP or DOWN?
• Are some modalities more helpful in predicting the market when it is moving in a particular direction (UP vs DOWN)? How do the modalities compare: auditory vs visual vs combined auditory-visual? In particular, is the combined auditory-visual display better than the visual only or auditory only for predicting UP or DOWN movements?
• Is there a training effect? Does the order with which the subjects used the different modalities have an effect on results?
• Do some subjects perform better or worse than others?

Table 13-20 General and specific questions targeted by the summative evaluation.

Figure 13-12 The Barco Baron being used during the summative evaluation [Barco 2001].

Courtesy of: CSIRO, Mathematical and Information Science, Canberra.


The user has limited interaction with the models being displayed. As the visual model is animated, the user's viewpoint of the model is automatically updated so that the user views the landscape from the centre of and slightly above the landscape. This ensures that all users view the model from a similar vantage point. However, there are minor variations in viewpoint which depend on the user's tracked head position.

Subject   Mode       Mode Order   Number Correct   Correct UP   Total UP   Correct DOWN   Total DOWN
1         Visual     1            5                4            7          1              3
          Auditory   3            5                3            6          2              4
          Combined   2            7                4            4          3              6
2         Visual     1            6                2            4          4              6
          Auditory   2            6                2            6          4              4
          Combined   3            8                4            5          4              5
3         Visual     2            8                3            5          5              5
          Auditory   1            5                1            6          4              4
          Combined   3            9                3            4          6              6
4         Visual     3            7                2            4          5              6
          Auditory   1            8                4            5          4              5
          Combined   2            7                3            4          4              6
5         Visual     3            7                5            7          2              3
          Auditory   2            5                1            4          4              6
          Combined   1            4                2            4          2              6
6         Visual     2            6                4            7          2              3
          Auditory   3            9                4            4          5              6
          Combined   1            9                3            4          6              6
7         Visual     1            5                1            4          4              6
          Auditory   3            9                4            5          5              5
          Combined   2            6                1            4          5              6
8         Visual     1            5                4            7          2              3
          Auditory   2            7                2            4          5              6
          Combined   3            8                4            4          4              6
9         Visual     2            7                6            6          1              3
          Auditory   1            7                4            4          3              6
          Combined   3            3                1            4          2              6
10        Visual     3            8                4            5          4              5
          Auditory   1            6                2            6          4              4
          Combined   2            8                4            4          4              6
11        Visual     3            5                3            4          2              6
          Auditory   2            9                5            5          4              5
          Combined   1            6                2            5          4              5
12        Visual     2            7                3            5          4              5
          Auditory   3            7                3            6          4              4
          Combined   1            9                3            4          6              6
13        Visual     1            6                4            7          2              3
          Auditory   3            7                4            6          3              4
          Combined   2            10               4            4          6              6
14        Visual     2            3                2            7          1              3
          Auditory   1            10               5            5          5              5
          Combined   3            5                2            6          3              4
15        Visual     1            7                3            6          4              4
          Auditory   2            6                3            6          3              4
          Combined   3            6                2            4          4              6

Table 13-21 Results of the summative evaluation.

The subjects carried out the experiment individually with the researcher (figure 13-12). At the start each subject was given a written introduction to trading with depth of market stock data and allowed to ask questions. The subject then carried out a brief training session followed by an evaluation session for each mode. The combined evaluation and training typically took 45 minutes. In the training session the subjects were shown a display of historical data which was paused at 10 random points; at these points the subjects were told the direction of the next trade, either up or down. In the evaluation the subjects were asked to predict the direction of the next trade at 10 random pause points. The experimental results for the 15 subjects across the 3 modes are shown in table 13-21. After each evaluation the subjects were also asked for feedback. The subjects' comments about the visual display (table 13-22), the auditory display (table 13-23) and the combined display (table 13-24) were captured.

VISUAL DISPLAY (V)

Subject   Comments
1    Used the size of each cliff and steepness of cliff. Some problems with visual occlusion.
2    Compared the mass of buyers and sellers, especially close to trade.
3    Looked at how close the peaks were in the centre – the further away from the centre the less significance.
4    Preferred the combined display; missed the auditory information.
5    Wasn't sure what to look for; could have used more training.
6    Looked at the last 5 or 6 steps – better than audio. New to the model; would have liked more feedback on performance during the test. Sounds caused more stress but I did not use them to make decisions.
7    I did not understand what model to use to make decisions. (5/10)
8    Looked right at the frontier for size of lump – its steepness and height – especially close to the valley.
9    Looked at large and small hills, chunkiness and steepness of drop. Compared the number of buyers and sellers. Thought perhaps the model was upside down: that the two forces, instead of pushing against each other, were pulling each other apart.
10   Looked at the bending in the valley and the pressure caused by the hills, especially their height.
11   Used the model as described. It was easier to do one or the other, either visual or auditory.
12   There was some occlusion and it was difficult to determine what was happening in the short term without sound feedback. Looked at greater slopes tilting one way or the other.
13   Looked at the height either side and used a hunch about which way the hills would push. Looked at the trend of the river.
14   Looked at trends of information.
15   Preferred the visual to the auditory since it showed the trade price and you could follow the river.

Table 13-22 Subject comments about the visual display.

AUDITORY DISPLAY (A)

Subject   Comments
1    Felt there was lots of information and that it was very dynamic. Visualised dots on a scatter plot and then decided where most of the dots were.
2    Used the frequency of calls and how loud they were.
3    Listened to ones in the middle and kept track of the most recent bids and asks.
4    Used the combined information. Weighted offers close to the centre the highest. Ignored low intensity offers. Acted on offers close to the centre which had the greatest volume.
5    Used frequency. The data set was a bit sparse and felt as though he could have processed a lot more data in the same time frame.
6    Used frequency information to decide on the recent trend. Felt that the combined display would be much improved as you have both recent and historical information combined.
7    Used the most recent information that occurred before the decision point, especially if the information was close to the centre. Generally preferred the auditory information.
8    Decided by the most recent offer to occur in the middle.
9    Used the number of buyers and sellers that occurred most recently in the proximity of the centre. Noted that it would be easier with more training.
10   Noted very few "buys" occurred in the centre but quite a lot of "sells". This could be due to the nature of the data. Felt that the use of words was like a command rather than information; that is, "buy" was telling you to buy. To decide was a bit like judging a boxing match.
11   It was very reactive, providing very recent trends in the data.
12   Noted that it contained different information. Provided the very latest trends.
13   Sound display was good. It was simple and easy to follow.
14   Used the last sounds in the middle to make a judgement. There are some places in the data where you feel very certain about direction.
15   Felt that there was some missing information – not possible to know the last trade price. Used frequency of bids and asks.

Table 13-23 Subject comments about the auditory display.

COMBINED DISPLAY (C)

Subject   Comments
1    Found the combined display very useful as you could use the sound when the visual data became occluded. Sound could change your decision at the last moment if there was a strong number of buys or sells. It provided more recent trends and helped improve confidence in the decision.
2    This was an improvement over the visual alone as it provided extra information that helped when there was occlusion or the visual information was not sufficient.
3    Watched the visual model to determine the general trend but relied on sound to provide the most recent information and to help make the final decision. If there was a conflict I went with the sound.
4    Preferred the combined display – more information. Sound provided extra detail, giving the eagerness of buyers and sellers – a sense of presence in the market place.
5    Volume of shapes and the snaking of the river provided some information but the sound provided an idea of the frequency of offers.
6    The combined display helped overcome occlusion problems.
7    Visual information set the context but the final decision was based on the last sound.
8    Mainly used the audio – found that it sometimes conflicted with the direction suggested by the visual display.
9    Influenced most by the last sound heard. There was sometimes conflict, in which case a strong visual signal could override the auditory.
10   The sound helps to keep you focused. Used the visual chunkiness but it could make you hung up on past trends. The sound provided more recent trends. In the end wasn't really sure what strategy to use, the visual or the auditory information.
11   Suggested the use of "up/down" instead of "buy/sell". When the visual became hard to see then the auditory became dominant. Tuned mostly to the auditory; sometimes the auditory and visual were in conflict.
12   Used the voice information about 60-70% of the time unless the direction seemed obvious from the picture. You can see where the buyers and sellers are with the sound. Didn't fully understand the 3D visual graph. Preferred sound to visual.
13   Took a while to learn the auditory display. Wasn't sure what sounds to pay attention to. Relied on sound when any visual occlusion occurred or for more recent information. Used the visual for past history. Even though they did not always agree there was no perceived conflict between sound and vision; they just provided different types of information.
14   Found the combined approach more difficult to understand. Relied mainly on the visual model. Predominantly used the picture but the sound and picture worked together to help me decide.
15   Preferred the auditory as it seemed less ambiguous. Took into account the most recent data and didn't think about past history too much. Still used visual information just to confirm that auditory trends matched the visual model. When conflict occurred I went with the auditory.

Table 13-24 Subject comments about the combined auditory-visual display.


13.7.3 Analyse Experimental Results (STEP 5.3.3)

The analysis is divided into three parts:

• analyse all responses (UP and DOWN) (section 13.7.3.1)
• analyse responses in a single direction (UP or DOWN) (section 13.7.3.2)
• analyse subjective responses of subjects (section 13.7.3.3).

Within each of these parts the results are analysed to answer the same specific questions:

• Can subjects predict the market direction?
• Is there a significant difference between subjects?
• Does the order of using the different modes (V, A, C) affect performance?
• Is there a difference in performance with different modes (V, A, C)?

13.7.3.1 Analysing All Responses

The first analysis considered all responses from subjects. This included predictions about both UP and DOWN movements of the trade price. However, before checking for the statistical significance of the results, the sources of possible variation between cases are examined. There were three possible causes of variation in the results:

• Mode (auditory, visual, combined). Different types of information are displayed in each mode. One of the aims of the experiment is to determine if subject performance varies between modes. One outcome of particular interest is whether the combined display performs better than the auditory or visual display.

• Order (e.g. auditory, followed by combined, followed by visual). Subjects were tested on the three displays in random order. The ordering of display types in the experiment might influence the results; subjects may, for example, undergo learning as they use each display mode. Therefore the experimental design randomised the order between subjects to allow testing against this possible variation.

• Subject. This variation could be due to a number of factors, for example the ability of the different subjects to form a useful mental model and interpret the displays. Subjects may also have a particular preference for one mode.

Normally a linear model, called analysis of variance (ANOVA), would be used to determine significant variation [Freund and Walpole 1980]. However, for both mode and order the number of subjects was not balanced (table 13-25). This meant an analysis including both mode and order could not be done by straightforward analysis of variance. Instead, regression techniques [Dobson 1990] were used to determine the effects of:

• mode adjusted for order
• order adjusted for mode.
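Whichever fitting method is used, each row of the accumulated analysis tables that follow reduces to the same arithmetic: a mean square from a sum of squares and its degrees of freedom, divided by the residual mean square to give the variance ratio. A minimal check against table 13-27, using its printed sums of squares (the F probability itself requires an F distribution and is not computed here):

```python
def anova_row(sum_sq, df, residual_mean_sq):
    """Return (mean square, variance ratio) for one source of variation."""
    mean_sq = sum_sq / df
    return mean_sq, mean_sq / residual_mean_sq

# Residual from table 13-27: SS = 85.707 on 22 degrees of freedom.
residual_mean_sq = 85.707 / 22                               # ~3.896

mode_ms, mode_f = anova_row(7.511, 2, residual_mean_sq)      # ~3.756, F ~0.96
order_ms, order_f = anova_row(0.373, 2, residual_mean_sq)    # ~0.187, F ~0.05
```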


           Order
Mode       First   Second   Third
Auditory   5       5        5
Visual     6       5        4
Combined   4       5        6

Table 13-25 The number of occasions each mode and order combination occurs.

The regression analysis for order adjusted for mode is shown in table 13-26 and table 13-27. For statistical significance, the F probability should be less than 0.05. None of the sources of variation (subject, order, mode) approaches this figure, so none is significant. Similarly, the results from a regression analysis of mode adjusted for order (table 13-28, table 13-29) show none of the factors (subject, order, mode) to be significant.

             Degrees of Freedom   Sum of Squares   Mean Squares   Variance Ratio   F Probability
Regression   22                   43.54            1.979          0.51             0.940
Residual     22                   85.71            3.896
TOTAL        44                   129.24           2.937

Table 13-26 Summary of regression analysis with order adjusted for mode.

Change       Degrees of Freedom   Sum of Squares   Mean Squares   Variance Ratio   F Probability
Subject      14                   29.244           2.089          0.54             0.884
Mode         2                    7.511            3.756          0.96             0.397
Order        2                    0.373            0.187          0.05             0.953
Mode.Order   4                    6.408            1.602          0.41             0.799
Residual     22                   85.707           3.896
TOTAL        44                   129.244          2.937

Table 13-27 Accumulated analysis of variance with order adjusted for mode.

             Degrees of Freedom   Sum of Squares   Mean Squares   Variance Ratio   F Probability
Regression   22                   43.54            1.979          0.51             0.940
Residual     22                   85.71            3.896
TOTAL        44                   129.24           2.937

Table 13-28 Summary of regression analysis with mode adjusted for order.

Change       Degrees of Freedom   Sum of Squares   Mean Squares   Variance Ratio   F Probability
Subject      14                   29.244           2.089          0.54             0.884
Order        2                    0.844            0.422          0.11             0.898
Mode         2                    7.040            3.520          0.90             0.420
Mode.Order   4                    6.408            1.602          0.41             0.799
Residual     22                   85.707           3.896
TOTAL        44                   129.244          2.937

Table 13-29 Accumulated analysis of variance with mode adjusted for order.


Subjects were required to predict the direction of the next stock market trade, either UP or DOWN. If subjects guessed their predictions randomly then the expected result is a binomial distribution with a probability of 0.5. To determine how well the subjects were actually predicting the direction of the next trade, the results were compared to the exact binomial probabilities that these results (or more extreme ones) would be obtained by chance. For r successes out of n trials, where each trial is assumed to have probability p of success, the quantity calculated is the probability of getting the experimental result or a more extreme result: that is, r or more successes, or np – (r – np) or fewer successes. This corresponds to calculating the size of the two tails of the binomial distribution, to perform what is known statistically as a two-tailed test [Freund and Walpole 1980]. Both tails of the distribution are included because a priori it was not known whether the observed success rate would be above or below p. The parameters for the two-tailed test are shown in table 13-30 and the results of the significance test in table 13-31. This analysis shows that subjects predicted the direction of the next trade at levels significantly above chance in all three modes: combined (70%), auditory (70%) and visual (61.3%), as shown in table 13-31.
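The exact two-tailed test described above can be computed directly; with p = 0.5 the two tails are symmetric, so np – (r – np) reduces to n – r. A minimal sketch (the function name is illustrative):

```python
from math import comb

def two_tail_binomial(r, n):
    """Exact two-tailed probability for r successes in n trials with
    p = 0.5: P(X >= r) plus the symmetric lower tail P(X <= n - r).
    Assumes r >= n/2 (the observed counts here all are)."""
    upper = sum(comb(n, k) for k in range(r, n + 1)) / 2**n
    lower = sum(comb(n, k) for k in range(0, n - r + 1)) / 2**n
    return upper + lower

# Visual mode, all responses (r = 92, n = 150); table 13-31 reports 0.0069.
p_visual = two_tail_binomial(92, 150)
```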

All Responses

Mode   Correct responses r   p     np – (r-np)   Total responses n   % correct
V      92                    0.5   58            150                 61.3
A      105                   0.5   45            150                 70.0
C      105                   0.5   45            150                 70.0

Table 13-30 All Responses. The parameters used in the two-tailed test for significance.

All Responses

Mode       Two-tail test using binomial distribution                 5% level of significance
Visual     P(≥92) = 0.0034; P(≤58) = 0.0034; overall = 0.0069        yes
Auditory   P(≥105) = 0.0000; P(≤45) = 0.0000; overall = 0.0000       yes
Combined   P(≥105) = 0.0000; P(≤45) = 0.0000; overall = 0.0000       yes

Table 13-31 All Responses. Calculated values for the two-tail test and check for significance.

13.7.3.2 Analysing UP or DOWN Movements

The first analysis considered all responses from subjects. A more detailed analysis was then carried out to examine the ability of subjects to predict changes in a specific market direction (UP or DOWN). The aim of the display is to help users identify patterns in the data for trading the market. However, different patterns may be used when trading UP or DOWN trends in the market. It was possible that the different displays were more suited to trading the market in one direction. The same three possible causes of variation need to be considered (mode, order, subject). There were uneven numbers of UP and DOWN decision points between subjects and modes, and so an accumulated analysis of deviance [Dobson 1990] is used. This is similar to the analysis of variance used for all responses and is interpreted in a similar way. For UP decision points the results of the regression analysis for order adjusted for mode are shown in table 13-32 and table 13-33. For statistical significance, the F probability should be less than 0.05. None of the sources of variation approaches this figure, so none is significant. This allowed order to be ignored. The regression analysis was repeated ignoring order, producing the results shown in table 13-34 and table 13-35. The results show that there was no significant difference between modes when subjects made predictions for the UP decision points.
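The deviance underlying these tables is the standard grouped-binomial deviance. The sketch below applies the formula to the pooled DOWN counts from table 13-40 as an illustration only; it does not reproduce the table values, which also adjust for subject effects:

```python
from math import log

def binomial_deviance(y, n, fitted):
    """Deviance for grouped binomial data:
    2 * sum[ y*ln(y/mu) + (n-y)*ln((n-y)/(n-mu)) ], with 0*ln(0) = 0."""
    def term(obs, exp):
        return obs * log(obs / exp) if obs > 0 else 0.0
    return 2.0 * sum(term(yi, mi) + term(ni - yi, ni - mi)
                     for yi, ni, mi in zip(y, n, fitted))

# Correct DOWN responses and totals per mode (V, A, C) from table 13-40.
y, n = [43, 59, 63], [64, 72, 86]

p_null = sum(y) / sum(n)                  # single pooled proportion
null_fit = [ni * p_null for ni in n]      # null-model fitted counts
mode_fit = [float(yi) for yi in y]        # mode-specific (saturated) fit

# Deviance explained by allowing a separate proportion per mode.
mode_deviance = binomial_deviance(y, n, null_fit) - binomial_deviance(y, n, mode_fit)
```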

             Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Regression   22                   19.32      0.878           0.37             0.989
Residual     22                   52.74      2.397
TOTAL        44                   72.06      1.638

Table 13-32 UP decisions. Summary of regression analysis with order adjusted for mode.

Change       Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Subject      14                   11.218     0.801           0.33             0.981
Mode         2                    1.009      0.504           0.21             0.812
Order        2                    1.124      0.562           0.23             0.793
Mode.Order   4                    5.974      1.493           0.62             0.651
Residual     22                   52.738     2.397
TOTAL        44                   72.062     1.638

Table 13-33 UP decisions. Accumulated analysis of deviance with order adjusted for mode.

             Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Regression   16                   12.23      0.764           0.36             0.983
Residual     28                   59.84      2.137
TOTAL        44                   72.06      1.638

Table 13-34 UP decisions. Summary of regression analysis ignoring order.

Change       Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Subject      14                   11.218     0.801           0.37             0.972
Mode         2                    1.009      0.504           0.24             0.791
Residual     28                   59.835     2.137
TOTAL        44                   72.062     1.638

Table 13-35 UP decisions. Accumulated analysis of deviance ignoring order.


For the DOWN decision points the results of the regression analysis for order adjusted for mode are shown in table 13-36 and table 13-37. Again, for statistical significance, the F probability should be less than 0.05. Both subject (0.014) and mode (0.029) show apparent significance while order (0.653) is not significant. Order was therefore ignored and the model was recalculated for only subject and mode, producing the results shown in table 13-38 and table 13-39. The results show that there is a significant difference between subjects (0.008) and modes (0.024) for the predictions made for the DOWN decision points. Hence the conclusion is that some subjects were significantly better able to predict DOWN points in the market. Subjects performed best in the auditory display (0.8313), followed by the combined display (0.7344) and the visual display (0.6489).

             Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Regression   22                   44.22      2.0101          2.42             0.022
Residual     22                   18.25      0.8297
TOTAL        44                   62.48      1.4199

Table 13-36 DOWN decisions. Summary of regression analysis with order adjusted for mode.

Change       Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Subject      14                   32.7347    2.3382          2.82             0.014
Mode         2                    6.9418     3.4709          4.18             0.029
Order        2                    0.7224     0.3612          0.44             0.653
Mode.Order   4                    3.8229     0.9557          1.15             0.359
Residual     22                   18.2540    0.8297
TOTAL        44                   62.4758    1.4199

Table 13-37 DOWN decisions. Accumulated analysis of deviance with order adjusted for mode.

             Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Regression   16                   39.68      2.4798          3.05             0.005
Residual     28                   22.80      0.8143
TOTAL        44                   62.48      1.4199

Table 13-38 DOWN decisions. Summary of regression analysis ignoring order.

Change       Degrees of Freedom   Deviance   Mean Deviance   Deviance Ratio   F Probability
Subject      14                   32.7347    2.3382          2.87             0.008
Mode         2                    6.9418     3.4709          4.26             0.024
Residual     28                   22.7992    0.8143
TOTAL        44                   62.4758    1.4199

Table 13-39 DOWN decisions. Accumulated analysis of deviance ignoring order.

To determine how well the subjects were actually predicting the direction of the next trade the same approach as that used for the combined responses was used. The results were compared to exact binomial probabilities that these results (or more extreme ones) would be obtained by chance. The probability of a correct answer was assumed to be 0.5 and a two-tailed test was performed using the appropriate binomial distribution.


The parameters for the two-tailed test are shown in table 13-40 and the results of the significance test in table 13-41 and table 13-42. The analysis shows that for UP movements in trade price, subjects do not predict the direction of the next trade at levels significantly above chance using the visual (58.8%) or auditory (59.0%) display. However, subjects do perform significantly better than chance when using the combined auditory-visual (65.6%) display. For DOWN movements in trade price, subjects perform significantly better than chance in the auditory (81.9%), combined (73.3%) and visual (67.2%) modes.

Mode   Correct responses r   p     np – (r-np)   Total responses n   % correct

UP Responses
V      50                    0.5   35            85                  58.8
A      46                    0.5   32            78                  59.0
C      42                    0.5   22            64                  65.6

DOWN Responses
V      43                    0.5   21            64                  67.2
A      59                    0.5   13            72                  81.9
C      63                    0.5   23            86                  73.3

Table 13-40 UP and DOWN Responses. Parameters used in two-tailed test for significance.

UP Responses

Mode       Two-tail test using binomial distribution                 5% level of significance
Visual     P(≥50) = 0.0642; P(≤35) = 0.0642; overall = 0.1284        no
Auditory   P(≥46) = 0.0703; P(≤32) = 0.0703; overall = 0.1405        no
Combined   P(≥42) = 0.0084; P(≤22) = 0.0084; overall = 0.0169        yes

Table 13-41 UP Responses. Calculated values for the two-tail test and check for significance.

DOWN Responses

Mode       Two-tail test using binomial distribution                 5% level of significance
Visual     P(≥50) = 0.0642; P(≤35) = 0.0642; overall = 0.1284        yes
Auditory   P(≥46) = 0.0703; P(≤32) = 0.0703; overall = 0.1405        yes
Combined   P(≥42) = 0.0084; P(≤22) = 0.0084; overall = 0.0169        yes

Table 13-42 DOWN Responses. Values for the two-tail test and significance check.


13.7.3.3 Analysing Subject Responses

The subjects were asked for feedback at the end of the experiment and this feedback is shown in table 13-22, table 13-23 and table 13-24. One subject commented that there were certain points in the display where they felt very certain about their decision, and others where they were not so sure. This could indicate that there are patterns at some places in the data that are particularly clear. To examine this, the proportion of correct responses at each decision point was analysed (figure 13-13). While there were too few predictions at each decision point to statistically validate the presence of patterns, the indications are promising. In all, ten decision points were consistently predicted correctly, and at one decision point subjects consistently made the wrong decision.
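The per-decision-point analysis above amounts to pooling responses across subjects and modes and computing a proportion correct at each pause point; a minimal sketch with hypothetical data:

```python
from collections import defaultdict

def proportion_correct(responses):
    """responses: iterable of (decision_point, was_correct) pairs, pooled
    across subjects and modes; returns decision_point -> proportion correct."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for point, ok in responses:
        total[point] += 1
        if ok:
            correct[point] += 1
    return {p: correct[p] / total[p] for p in total}

# Hypothetical pooled responses for two decision points:
# props[1] == 2/3, props[2] == 1/3
props = proportion_correct([(1, True), (1, True), (1, False),
                            (2, False), (2, False), (2, True)])
```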

[Figure: frequency of correct responses at each decision point, all modes; x-axis decision point number (10-60), y-axis frequency of correct responses (0.0-1.0)]

Figure 13-13 Frequency of correct responses for the 60 decision points in the data.
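The analysis behind figure 13-13 amounts to pooling the responses across subjects and modes and computing the proportion correct at each decision point. A minimal sketch (the record format and function name are ours, and the sample records are invented for illustration):

```python
from collections import defaultdict

# Each record: (decision_point, correct?), pooled across subjects and modes.
# These sample records are invented, not taken from the experiment.
responses = [
    (1, True), (1, True), (1, False),
    (2, False), (2, False), (2, False),
]

def frequency_correct(responses):
    """Proportion of correct responses at each decision point."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for point, ok in responses:
        totals[point] += 1
        if ok:
            correct[point] += 1
    return {point: correct[point] / totals[point] for point in totals}

freqs = frequency_correct(responses)
# Points with frequency near 1.0 suggest a consistently readable pattern;
# near 0.0, a consistently misleading one (the thesis found one such point).
```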

The comments from each subject were compared for each of the three modes.

In the visual display nine subjects commented that they made decisions based on ‘size, height, slope and steepness of cliffs’, ‘how close the peaks were to the centre’, and ‘bending and trends in the river or valley’. Three said they could not understand how to make decisions. One commented that it was not clear whether the ‘forces shown by hills’ were pushing or pulling against each other. Overall, three subjects said they preferred the visual display.

In the auditory display the subjects made decisions from ‘frequency of calls’, ‘closeness to the centre’, and ‘loudness’. Six said they found the auditory display easy to understand. One commented that the words ‘buy’ and ‘sell’ could be interpreted as commands rather than labels, which could lead to a prediction in the opposite direction. One commented that information about the last trade price was missing from the auditory display.

In the auditory-visual display subjects commented that the combined display contained more information than the visual or auditory alone. Subjects thought the visual display provided ‘context, history, past and general trends’, while the auditory display provided ‘most recent trends’, ‘focus’, ‘eagerness’ and ‘presence’. Four subjects mentioned that when the visual display became occluded they used the auditory display to make the decision. Four commented that the visual and auditory displays were sometimes in conflict; three of these relied on the auditory display in this situation, while the other relied on the visual. One found that the conflicts in the combined display made it more ambiguous and preferred the auditory alone. Another found the auditory display distracting and preferred the visual over the combined display.

13.7.4 Capture Problems and Suggestions (STEP 5.3.4)

The results of the experiment reject the null hypothesis: subjects can predict the direction of the next trade significantly better than chance. The results are summarised in table 13-43. Subjects were best at predicting down trades from the auditory display (81.9%), followed by down trades from the combined display (73.3%) and up trades from the combined display (65.6%). The predictions of up trades from the visual display (58.8%) and the auditory display (59.0%) separately were not significantly above chance.

           ALL Responses             UP Responses              DOWN Responses
Mode       % correct  significant    % correct  significant    % correct  significant
Visual       61.3     yes (0.0069)     58.8     no  (0.1284)     67.2     yes (0.0081)
Auditory     70.0     yes (0.0000)     59.0     no  (0.1405)     81.9     yes (0.0000)
Combined     70.0     yes (0.0000)     65.6     yes (0.0169)     73.3     yes (0.0000)

Table 13-43 Summary of experiment results, including significance of result.
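The percentage-correct figures in table 13-43 follow directly from the response counts in table 13-40. As a quick consistency check on the DOWN column (a sketch, not the thesis's own analysis code):

```python
# DOWN-response counts (correct, total) taken from table 13-40.
down_counts = {"Visual": (43, 64), "Auditory": (59, 72), "Combined": (63, 86)}

percent_correct = {mode: round(100 * k / n, 1)
                   for mode, (k, n) in down_counts.items()}

print(percent_correct)
# Matches the DOWN column of table 13-43:
# {'Visual': 67.2, 'Auditory': 81.9, 'Combined': 73.3}
```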

The results from analysing experimental variations due to order, subject and mode are shown in table 13-44. There was no difference due to the order of experience with the displays. Some subjects were much better than others at predicting down trades, but there was no difference on up trades. A significant difference between modes was identified for downward predictions of the market.

Variation   Significance (P < 0.05)
            ALL Responses   UP Responses   DOWN Responses
Subject     no (0.884)      no (0.981)     yes (0.014)
Order       no (0.953)      no (0.793)     no (0.653)
Mode        no (0.397)      no (0.812)     yes (0.029)

Table 13-44 Summary of analysis for sources of variation in experiment.
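One simple way to probe a mode effect on the correct/incorrect counts is a Pearson chi-square test of homogeneity. Note this is a swapped-in technique shown for illustration only, not the analysis of variation the thesis actually used, so it will not reproduce the p-values in table 13-44; the counts are the DOWN-response figures from table 13-40.

```python
def chi_square_modes(counts):
    """Pearson chi-square statistic for a modes x (correct, incorrect)
    contingency table.  counts maps mode -> (correct, incorrect).
    Returns (statistic, degrees_of_freedom)."""
    col_correct = sum(c for c, _ in counts.values())
    col_incorrect = sum(i for _, i in counts.values())
    grand = col_correct + col_incorrect
    stat = 0.0
    for c, i in counts.values():
        row = c + i
        for observed, col_total in ((c, col_correct), (i, col_incorrect)):
            expected = row * col_total / grand
            stat += (observed - expected) ** 2 / expected
    # dof = (rows - 1) * (cols - 1), with two columns here
    return stat, len(counts) - 1

# DOWN responses from table 13-40: (correct, incorrect) per mode.
stat, dof = chi_square_modes({"Visual": (43, 21),
                              "Auditory": (59, 13),
                              "Combined": (63, 23)})
```

The statistic would then be compared against a chi-square distribution with the returned degrees of freedom.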

The results for subject performance with different modes are not straightforward. When all responses are considered, the performance with the auditory (70%) and combined (70%) displays is similar. This indicates that the combined display is redundant for the prediction task. However, subjects were able to predict down trades from the auditory display 81.9% of the time and only 73.3% of the time using the combined display. This indicates that the combined display is conflicting when used for the prediction of down movements. When subjects predicted upward movements, the combined display was the only mode where performance was significant. This indicates that the combined display was complementary.

A number of questions were addressed by the summative evaluation (table 13-20). The answers to these questions are summarised in table 13-45, table 13-46 and table 13-47. However, a number of new questions have arisen as a result of the experimental results. These include:

1. Why do subjects predict down trades so well from the auditory display, but not up trades? Perhaps the auditory display is biasing toward down decisions or some particular pattern is highlighted by the auditory display.

2. Why do some subjects perform better with the auditory display than others, yet there is little difference with the visual or combined auditory-visual displays? Perhaps some subjects are more attentive to auditory displays.

3. Why is the auditory-visual display complementary for up trades but conflicting for down trades?

4. What are the significant patterns in the visual and sound displays? Indications are that some decision points are much more easily and correctly predicted than others. These possible patterns need to be explored by an expert across a wider range of market conditions.

5. How well do predictions from these displays compare with predictions from a traditional table display? How do predictions from experts compare with those of non-experts? These questions could be explored with a further experiment to measure the trading performance of experts on real data.

Experimental Questions and Experimental Results: General Questions

Q: Can the users find consistent patterns in the data?
A: In total 11 patterns were highlighted in the data: users consistently predicted market direction at 10 locations, and at one location they were consistently wrong. There is not enough experimental data at each decision point to determine statistical significance. This requires more testing across a wider range of market data and deeper analysis of the patterns by an expert.

Q: Can people use the visual (V), auditory (A) and combined (C) displays to predict the direction of the next trade from depth of market data?
A: Yes, subjects predict the direction of the next trade at levels significantly above chance in all three modes: A (70%), C (70%) and V (61.3%).

Q: Does the multi-sensory display function in a complementary, redundant or conflicting manner? What differences are there in performance with the visual (V), auditory (A) and combined (C) displays?
A: Considering all decision points the combined display is redundant. For upward decision points the combined display is complementary. For downward decision points the combined display is conflicting.

Q: How do people interpret and make decisions from these displays?
A: Subjects thought the visual display provided ‘context, history, past and general trends’, while the auditory display provided ‘most recent trends’, ‘focus’, ‘eagerness’ and ‘presence’.

Table 13-45 Answers to general questions for the summative evaluation.


Experimental Questions and Experimental Results: Specific Questions, considering all responses (UP and DOWN)

Q: Can the BidAsk Landscape (any modality) help predict market direction?
A: Yes, subjects predict the direction of the next trade at levels significantly above chance in all three modes: A (70%), C (70%) and V (61.3%).

Q: Which modality is best? How do the modalities compare: auditory vs visual vs combined auditory-visual? In particular, is the combined auditory-visual display better than the visual only or auditory only?
A: Considering all decision points there was no significant difference between modes. The performance with the A (70%) and C (70%) displays was the same. This indicates the combined display is redundant for this task.

Q: Is there a training effect? Does the order in which the subjects used the different modalities have an effect on results?
A: There is no effect of order on results.

Q: Do some subjects perform better or worse than others?
A: There is no difference between subjects.

Table 13-46 Answers to specific questions for all predictions of market direction.

Experimental Questions and Experimental Results: Specific Questions, only considering responses in a single direction (UP or DOWN)

Q: Can the BidAsk Landscape (any modality) help predict the market moving in a specific direction, either UP or DOWN?
A: Yes, downward price movements are predicted significantly better than chance using the auditory (81.9%), visual (67.2%) and combined (73.3%) displays. For upward movements, only the result for the combined (65.6%) display is above chance.

Q: Are some modalities more helpful in predicting the market when it is moving in a particular direction (UP vs DOWN)? How do the modalities compare: auditory vs visual vs combined auditory-visual? In particular, is the combined auditory-visual display better than the visual only or auditory only for predicting UP or DOWN movements?
A: For downward trades subjects were able to predict down movements from the auditory display 81.9% of the time and only 73.3% of the time using the combined display. This indicates that the visual display may have introduced noise or conflict when combined with the auditory display. However, in the upward direction the combined (65.6%) display was the only mode where performance was significant. This indicates that the combination of the auditory and visual displays was complementary for predicting upward movements.

Q: Is there a training effect? Does the order in which the subjects used the different modalities have an effect on results?
A: There is no effect of order on results.

Q: Do some subjects perform better or worse than others?
A: There is a statistical difference between subjects, indicating that some subjects are better able to make predictions than others.

Table 13-47 Answers to specific questions for UP and DOWN predictions of market direction.

13.8 Conclusion

This completes the work carried out in the case study. In total, six iterations of the MS-Process have been completed; the iterations and their outcomes are summarised in table 13-48.

The end goal of the case study was to produce a multi-sensory display so that a trader could discover useful patterns in stock market data. In total, four multi-sensory prototypes for different trading tasks were implemented and evaluated. The outcomes from the three haptic-visual applications were inconclusive. However, a detailed summative evaluation of the auditory-visual application showed that non-experts could predict the direction of the stock market significantly better than chance. This suggested that subjects identified useful patterns in the display. Further discussion of the results of the case study and further design work in this application domain are given in chapter 14. Chapter 14 also discusses the results of the case study in terms of the MS-Process, the MS-Guidelines and the MS-Taxonomy.

Iteration 1 (STEP 1 Task Analysis, STEP 2 Data Characterisation, STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Consider the full design space for technical analysis — the nine metaphor classes (spatial, direct and temporal metaphors for the visual, auditory and haptic senses). Design, prototype and evaluate three spatial visual metaphors. Case study sections 11.3, 11.4, 11.5, 11.6, 11.7.

Iteration 2 (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Design, prototype and evaluate direct haptic metaphors for technical analysis, using the direct haptic properties: force, haptic surface texture, compliance, weight (inertia), friction, direct haptic shape and vibration. Case study section 12.2.

Iteration 3 (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Redesign, prototype and evaluate the Haptic Moving Average Surface. Case study section 12.3.

Iteration 4 (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Redesign, prototype and evaluate the Haptic 3D Bar Chart. Case study section 12.4.

Iteration 5 (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Design, prototype and evaluate auditory metaphors: direct, temporal and spatial auditory metaphors. Case study sections 13.2, 13.3, 13.4.

Iteration 6 (STEP 3 Display Mapping, STEP 4 Prototyping, STEP 5 Evaluation): Redesign, prototype and evaluate the Auditory BidAsk Landscape. Case study sections 13.5, 13.6, 13.7.

Table 13-48 The outcomes from the case study showing six iterations of the MS-Process.


Chapter 14

Evaluation and Future Work

Monte Verità, Ascona, Switzerland.

“Five senses; an incurably abstract intellect; a haphazardly selective memory; a set of preconceptions and assumptions so numerous that I can never examine more than a minority of them - never become conscious of them all. How much of total reality can such an apparatus let through?” [C. S. Lewis]


CONCLUSION Chapter 14 - Final Remarks 391

Chapter 14

Final Remarks

14.1 Introduction

The goal of this thesis is to assist in the design of better Human Perceptual Tools for finding patterns in abstract data. The major outcomes from this work are:

• the MS-Taxonomy (chapters 2-5)
• the MS-Guidelines (chapters 6-9)
• the MS-Process (chapter 10).

These outcomes are reviewed in section 14.2 (MS-Taxonomy), section 14.3 (MS-Guidelines) and section 14.4 (MS-Process). To validate these outcomes a case study was performed (chapters 11-13). The outcome from this case study is reviewed in section 14.5. Section 14.6 concludes the thesis with an overview of results and suggestions for future work.

14.2 Validating the MS-Taxonomy

The MS-Taxonomy is a structured group of concepts that describes the multi-sensory design space for abstract data display. When validating the MS-Taxonomy, the question is: does the MS-Taxonomy provide a good categorisation of the multi-sensory design space? A good categorisation would be expected to:

• model the full multi-sensory design space (section 14.2.1)
• provide multiple levels of abstraction (section 14.2.2)
• define a useful structure (section 14.2.3).


14.2.1 Modelling the full multi-sensory design space

One concern with developing the MS-Taxonomy is whether or not it adequately covers the full multi-sensory design space. The thesis investigates this question by using the concepts of the MS-Taxonomy to review existing literature (chapters 3-5). This review serves to illustrate the concepts that make up the MS-Taxonomy and to demonstrate how existing applications1 of information display can be described under the taxonomy. The review did not surface any areas of the multi-sensory design space that are not covered by the MS-Taxonomy. In fact, the MS-Taxonomy covers a number of areas of the multi-sensory design space that have yet to be considered for displaying abstract information. To further verify that the MS-Taxonomy models the full design space it is compared against an alternative taxonomy of the multi-sensory design space, called the XCM-Taxonomy. The XCM-Taxonomy is created by extending the existing taxonomy of the visual design space described by Card, Mackinlay et al. [Card, Mackinlay et al. 1999]. The MS-Taxonomy and the XCM-Taxonomy have been compared and shown to be equivalent [Nesbitt 2001b].

14.2.2 Providing multiple levels of abstraction

A major issue with the multi-sensory design space is that it is very complex. The MS-Taxonomy attempts to provide abstractions of the concepts that make up the design space. Higher-level abstractions are useful for hiding some of the complexity of the design space. It is also expected that good abstractions should support the comparison and reuse of concepts across the different senses.

At the highest level the MS-Taxonomy provides a very simple description of the design space in terms of nine metaphor classes. Even with this quite abstract view of the multi-sensory design space it is possible to develop a useful design process and to incorporate some sensible high-level design guidelines. An important feature of the MS-Taxonomy is that the concepts that describe spatial and temporal metaphors are general and apply across the different senses. This allows direct comparison of these types of displays. For example, a visual spatial metaphor can be compared to an auditory spatial metaphor; it is only the ability of each sense to perceive spatial relations that needs to be compared. By contrast, with direct metaphors it is less appropriate to compare displays between senses, as it is the individual properties of each sense that define the display.

The abstractions described by the MS-Taxonomy also encourage the transfer of display concepts between senses. For example, the domain of temporal auditory metaphors is well developed and these concepts can be transferred to temporal visual and temporal haptic metaphors. During the review of the MS-Taxonomy (chapters 3-5) a number of such opportunities are highlighted.

1 A summary of applications and more generic information metaphors is provided in Appendix A.


The MS-Taxonomy provides multiple levels of abstraction that mask complexity and can be used to compare general design concepts at different levels. This thesis aims to develop the higher-level concepts of the MS-Taxonomy, although it also provides a framework that encapsulates the lower-level complexity of the multi-sensory design space. It is the lower-level concepts of the MS-Taxonomy that are most likely to generate debate. In particular, some lower-level concepts may need to be refined. For example, direct haptic metaphors are fairly new concepts and have yet to be used for abstract data display. An advantage of providing multiple levels of abstraction is that the taxonomy is extensible. One obvious extension of the MS-Taxonomy would be to include an additional sense, such as our olfactory capability. It is expected that the MS-Taxonomy can simply be extended to allow for this additional sense; once again the concepts of spatial, temporal and direct metaphors are applicable.

14.2.3 Defining a useful structure

One issue with the multi-sensory design space is that it is large and detailed. The MS-Taxonomy attempts to provide an appropriate structure to the concepts that a designer must consider. Using the MS-Taxonomy to structure a literature review of abstract information displays2 (chapters 3-5) validates the general structure of the MS-Taxonomy. This review highlights the different layers of the structure. At the high level it provides a very simple description of the design space in terms of spatial, direct and temporal metaphors. At the lower levels are very detailed concepts such as pitch or hue. One feature of the structure that is highlighted in this review is the way the concepts for spatial and temporal metaphors can be applied to all three senses.

The structure of the MS-Taxonomy is then further evaluated by using it as a framework to develop the MS-Guidelines (chapters 6-9). The structure of the MS-Taxonomy provides a convenient and intuitive organisation for these guidelines. Some guidelines suggest general principles of information display and so are applicable at a high level in the structure. Other guidelines concern detailed issues and fit lower in the taxonomy. Finally, the structure of the MS-Taxonomy was successfully used to define the display mapping step of the MS-Process (chapters 6-10). During a case study of this design process (chapters 11-13) the structure of this step was found to support the way designs are created and then formalised. The designer can work at different levels as appropriate, for example alternating between high-level design issues and very detailed design questions. These different levels are supported by the structure of the MS-Taxonomy.

The MS-Taxonomy aims to provide an intuitive model of display concepts. While it generally succeeds, there is no doubt that some concepts (such as auditory scale) are unusual. Refining the MS-Taxonomy, especially at the lower levels, is the subject of further research.

2 A summary of these abstract information displays is provided in Appendix A.

14.3 Validating the MS-Guidelines

This thesis develops a structured series of guidelines for multi-sensory display called the MS-Guidelines (chapters 6-9). The guidelines are assessed using a case study from the domain of Technical Analysis (chapters 11-13). When validating the MS-Guidelines, the question is: do the MS-Guidelines provide a set of good guidelines for assisting the design of multi-sensory displays?

Although the case study provides a starting point, it is expected that long-term empirical investigations are required to properly assess these guidelines. This evidence could be expected to encompass 30 years, or so, of design work. Because this evidence is not available, this validation takes a more qualitative approach. It suggests that good guidelines would:

• direct the design process (section 14.3.1)
• hide complexity from the designer (section 14.3.2)
• improve the quality of designs (section 14.3.3)
• communicate good design principles (section 14.3.4)
• assist in design evaluation (section 14.3.5).

14.3.1 Directing the design process

The MS-Guidelines are explicitly related to the MS-Process by using the structure of the MS-Taxonomy. This ensures that appropriate guidelines are used at each step of the design process. For example, during the early iterations of the case study the guidelines for visual spatial metaphors are predominant. When haptic display is added, the guidelines for direct haptic metaphors are used. The design of the auditory display uses the guidelines for direct auditory and temporal auditory metaphors.

Some guidelines are high-level and thus provide principles rather than very specific direction for the designer. Examples include GD-1 Emphasise the data and GD-3 Design for a task. These guidelines provide a starting point for the creative process of design. A task analysis and data characterisation of the stock market domain seeded the ideas for early conceptual designs. During the first pass of the display mapping step the full multi-sensory design space is considered. This early design work involved many intuitive leaps and it was appropriate to simply direct these leaps with general guidelines. However, as the design process proceeded, the design decisions became more concrete. For example, the case study used specific haptic properties (chapter 12) to display data attributes. This later work was directed by lower-level guidelines such as DH-4 Haptic surface texture is an ordinal property, DH-6 Compliance is an ordinal property and DH-8 Weight and inertia are ordinal properties.


14.3.2 Hiding complexity

During the case study a number of guidelines are used without reference to the original domain. For example, in the development of the haptic-visual display the guideline GP-8 Seven is a magic number is used to decide on the number of ordinal display categories, and this requires no reference back to the domain of perceptual science.

14.3.3 Improving the quality of design

The suggestion is that the MS-Guidelines can improve the overall quality of designs by aiming to:

• ensure the designer considers the full multi-sensory design space
• provide immediate suggestions for improving designs
• highlight different design options.

This was consistent with the outcomes of the case study, where the MS-Guidelines assisted with all these aims. Indeed, the final result from the case study includes a very successful design for an auditory-visual display (chapter 13). However, once again, much longer-term and broader empirical evidence needs to be gathered.

14.3.4 Communicating design principles

The MS-Guidelines summarise accumulated knowledge in domains that impact on the multi-sensory display of abstract data. The structure of the MS-Taxonomy provides the framework for this knowledge. The contention is that this framework allows the guidelines to be easily communicated. In particular, the multiple levels of abstraction used in the MS-Taxonomy support information hiding, which should also assist the communication of the guidelines. For example, in the case study high-level guidelines such as MST-3 Increase the human-computer bandwidth are used as designs are conceptualised. These high-level guidelines prove effective at hiding much of the complexity of the design space from the designer. As the design is refined, guidelines for the lower-level concepts become more relevant, for example DH-1.1 The visual model affects the perception of haptic attributes.

14.3.5 Evaluating design outcomes

The MS-Guidelines are used as design checkpoints during the formative evaluation step of the MS-Process, in which the designer evaluates the current design in terms of the appropriate guidelines. This approach is used a number of times during the case study to review the current design options (section 11.7.2). In each instance, the guidelines provided immediate and targeted suggestions for improving the current design.


14.4 Validating the MS-Process

This thesis develops a process for designing multi-sensory displays (chapter 10). The MS-Process is assessed by designing multi-sensory displays for finding patterns in stock market data. When reviewing the MS-Process, the question is: does the MS-Process provide a good process for assisting the design of multi-sensory displays? A good process would be expected to:

• provide structure for the design process (section 14.4.1)
• facilitate process reuse (section 14.4.2)
• support process evolution and improvement (section 14.4.3)
• support management and communication of the process (section 14.4.4)
• provide a basis for automating the design process (section 14.4.5).

14.4.1 Providing structure for the design process

The MS-Process breaks the design process into five main steps. The display mapping step also uses the MS-Taxonomy to provide further structure. During the case study, this structure ensured that the following factors were considered during design:

• the user's requirements (section 11.3)
• the nature of the abstract data (section 11.4)
• a range of solutions from the full multi-sensory design space (section 11.5)
• the limitations of development tools (section 11.6).

The last three steps of the MS-Process are iterative and include prototyping and evaluation steps. The case study described six iterations of the process. This iterative structure was found to be a good match for the design process. In particular, the prototyping step allowed the designer to gradually learn about the domain as design proceeded. The heuristic evaluation step was particularly effective for rapid assessment of designs and also afforded the domain expert a better understanding of design options. Using the MS-Taxonomy to structure the display mapping step ensured the full multi-sensory design space was considered during design and also allowed the appropriate MS-Guidelines to be integrated into the design.

During the first iteration of the case study a number of different possible conceptual designs are suggested. Three different visual models (3D Bar Chart, Moving Average Surface, BidAsk Landscape) were prototyped during the first iteration through the process. During the case study this branching nature of design was managed by considering specific design options during different iterations of the process. However, it may be more appropriate to explicitly structure the design process to support this branching.

Finally, the outcomes from the case study also provide a validation of the structure of the MS-Process. Following the MS-Process, a number of new multi-sensory displays for finding patterns in stock market data are prototyped and evaluated. Results from a summative evaluation of the auditory-visual display showed that non-experts can predict the short-term direction of the market. This provides encouraging evidence of the effectiveness of the process.


14.4.2 Facilitating process reuse

Only a single case study was carried out and the MS-Process needs to be evaluated in other domains to ensure that it can be reused. However, no problems are expected, as the flexibility of the process has been demonstrated in the case study. The case study developed a number of displays and encompassed a broad range of activities. For example, a number of different evaluations were carried out and a number of prototypes were developed for different Virtual Environments.

14.4.3 Supporting process evolution and improvement

Further work needs to be carried out to assess the ability of the MS-Process to evolve and be improved. This suggests further case studies need to be performed. However, the MS-Process is based on already established processes from software engineering and is expected to be robust. Straightforward changes, such as incorporating new guidelines, are expected to be relatively simple, because new guidelines can be easily integrated within the framework provided by the MS-Taxonomy.

14.4.4 Supporting management and communication

Supporting the management of the design process is not a major goal of this work and was not directly evaluated. During the case study the steps of the MS-Process were used to formulate tasks. These tasks were found to be very useful for scheduling work and also afforded the domain expert an understanding of the status of the project.

14.4.5 Providing a basis for automation

The MS-Process is considered too immature to formulate an automated multi-sensory design process. This does, however, form a future vision of the work. For example, the MS-Guidelines could eventually provide the appropriate rules to drive an automated design process. In the shorter term a CASE tool could be developed to assist the user in following the design process.

14.5 Reviewing the Case Study

While the main purpose of the case study was to validate and illustrate the MS-Taxonomy, the MS-Guidelines and the MS-Process, it is useful to review the case study in the context of a Human Perceptual Tool for finding patterns in abstract data. The MS-Process was followed in the domain of technical analysis. This resulted in the following multi-sensory applications being developed:

• Haptic 3D Bar Chart (section 12.4)
• Haptic Moving Average Surface (section 12.3)
• Auditory BidAsk Landscape (section 13.5)


A domain expert evaluated the two haptic-visual applications. The visual patterns in the Haptic Moving Average Surface need to be investigated more closely. However the haptic feedback does not assist the expert in finding patterns in the data. The formative evaluations of these applications highlight the difficulty of using a point force feedback display device. For example, in the Haptic 3D Bar Chart, it would be more useful if the user could feel the surface of several bars simultaneously with multiple fingers. The auditory-visual application was evaluated in a more formal experiment that allowed non-experts to predict the direction of the stock market from depth of market data. This evaluation suggests that useful trading patterns occur in both the auditory and visual displays. More work needs to be performed to isolate these patterns and determine their usefulness for real trading across a wider range of data. For example, future work could integrate real time data feed of market data in the display. This would allow the domain expert to evaluate the display over a longer period with different data. Further evaluation could compare trader performance on traditional table displays with the new multi-sensory display. One question motivating this work was, can we find useful 3D visual structures for abstract data? The BidAsk Landscape has been shown to be a useful structure for predicting temporal patterns in stock market data. This suggests the possibility that this design can be generalised and used in other application domains. To test this requires further work applying this spatial visual metaphor to a number of different domains. Within in the stock market domain further iterations of the design process could be undertaken. Future iterations could consider displaying additional data attributes. For example, currently the use of colour in the display is fairly simple. It would be possible to map another data attribute to this visual parameter. 
The next stage of the multi-sensory design might also consider integration of some haptic display. While the results from the haptic-visual applications are not encouraging, one design suggested in the preliminary design phase was the use of haptics to display support and resistance levels. This design is yet to be implemented, and the domain expert suggests that it could provide useful ancillary information.

14.6 Conclusion and Future Work

14.6.1 Conclusion

The conclusion of chapter 1 of this thesis summarised a number of issues with developing multi-sensory displays (table 1-2). This chapter summarises the manner in which the final outcomes of the thesis address these issues (table 14-1). In summary, the work presented in this thesis is motivated by the need to find useful patterns in large sets of multi-attributed abstract data (table 14-1, questions 1-3). Virtual Environments provide a technology for developing multi-sensory Human Perceptual Tools (table 14-1, questions 1-2,5). These tools allow human perceptual skills to be used for exploring large abstract data sets for unexpected patterns (table 14-1, questions 4,5).

Page 184: Chapter 14 Final Remarks

The success of these Human Perceptual Tools for data mining relies on effective models of the information being developed (table 14-1, questions 3,6,7,10). These models are described as Virtual Abstract Worlds. A difficult task when designing Virtual Abstract Worlds is to decide on an appropriate model for the abstract data; that is, how to choose mappings from the abstract data attributes to the different senses (table 14-1, questions 6,8,9).

This thesis focuses on providing a more systematic methodology for designing multi-sensory Virtual Abstract Worlds. This structured, engineering-like approach to designing abstract information displays is based on the MS-Taxonomy, a multi-level categorisation of the multi-sensory design space. This taxonomy affords the designer a useful model of the possible design options.

This thesis also develops the MS-Guidelines. These guidelines capture appropriate principles of design for the physiological, perceptual and cognitive capabilities of each sense. They are integrated into the design process and used for both designing and evaluating displays.

An engineering approach to multi-sensory design also requires a design process. This thesis develops the MS-Process for designing multi-sensory displays of abstract data. The MS-Process includes an evaluation phase for refining and testing the application outcomes (table 14-1, questions 7,9).

14.6.2 Future Work

The MS-Process, the MS-Guidelines and the MS-Taxonomy have all proved to be useful tools in the design of multi-sensory applications for finding patterns in stock market data. Suggested future work includes:

• broader evaluation of these conceptual tools in further case studies
• refinement of some lower level concepts in the MS-Taxonomy
• validation of the MS-Guidelines with longer term empirical evidence
• improving the coverage of the MS-Guidelines by targeting some concepts
• improving the MS-Process by developing a more formal display mapping description based on the concepts of the MS-Taxonomy
• generalisation of the BidAsk Landscape to other application domains.
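As a sketch of what a "more formal display mapping description" could look like, the fragment below records each attribute-to-display mapping as a small typed record. All names here (Mapping, senses_used, the example mappings) are hypothetical illustrations invented for this sketch; the MS-Taxonomy does not prescribe this representation.

```python
# Hypothetical sketch of a formal display-mapping description.
# Each record ties one abstract data attribute to a sense, an
# MS-Taxonomy metaphor group, and a concrete display parameter.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    attribute: str   # abstract data attribute, e.g. "bid depth"
    sense: str       # "visual", "auditory" or "haptic"
    metaphor: str    # metaphor group, e.g. "spatial" or "direct"
    parameter: str   # display parameter, e.g. "surface height", "pitch"

# Invented example mappings in the spirit of the BidAsk Landscape.
bidask_landscape = [
    Mapping("bid depth", "visual", "spatial", "surface height"),
    Mapping("trade direction", "auditory", "direct", "pitch"),
]

def senses_used(mappings):
    """Which senses a display design engages; useful for coverage checks."""
    return sorted({m.sense for m in mappings})

print(senses_used(bidask_landscape))  # ['auditory', 'visual']
```

Making the mapping explicit like this would let the design process check, for example, that no display parameter is mapped twice and which senses remain unused.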

Page 185: Chapter 14 Final Remarks

1. Can new technologies assist in the data mining process?
   Yes. A Virtual Environment is used to develop an auditory-visual application for finding patterns in stock market data. Non-experts uncover patterns that predict the direction of the market.

2. How do we build better Human Perceptual Tools for data mining?
   The thesis develops an engineering approach to designing Human Perceptual Tools. It incorporates a process and a set of guidelines.

3. How do we design appropriate multi-sensory models for data mining?
   The thesis develops the MS-Process for designing multi-sensory models. To ensure the appropriateness of the display, the process is driven by domain tasks and uses real data. Design decisions are checked by a domain expert.

4. How do we evaluate the effectiveness of a display for finding patterns in data?
   The MS-Process supports three different levels of evaluation: expert heuristic evaluation, formative evaluation and summative evaluation.

5. Can multi-sensory feedback widen the bandwidth between human and computer?
   Yes. An auditory-visual application is developed and tested. Results show that in some modes the bandwidth is wider. However, the outcomes are complicated and in some instances bandwidth is reduced.

6. How do we design intuitive mappings from the data to the senses?
   The thesis develops an engineering approach structured on a metaphor hierarchy called the MS-Taxonomy.

7. How do we design good, multi-sensory displays for abstract data?
   The thesis develops the MS-Process based on the MS-Taxonomy. This ensures that the full multi-sensory design space is considered during design. The MS-Guidelines incorporate previous knowledge from the domain and not only direct design but also help evaluate outcomes. The iterative nature of the process allows the solution to evolve and improve.

8. What information do we display to each sense?
   The MS-Taxonomy divides the display space into spatial, direct and temporal metaphors. In general, the visual sense is adept at interpreting spatial relations and the auditory sense is adept at interpreting temporal relations. The haptic sense is adept at interpreting a combination of spatial and temporal information. Each sense also perceives specific qualities that can be used to design direct metaphors.

9. How do we prevent problems with sensory interaction?
   The MS-Guidelines capture rules and principles that both prevent and alert the designer to issues of sensory interaction.

10. Can we find useful 3D visual structures for abstract data?
    Yes. The case study develops a new 3D visual structure for displaying stock market data. Indications are that the BidAsk Landscape is generic and can be applied to other domains. Work on this generic information metaphor, called the BK-Landscape, is continuing.

Table 14-1 Outcomes of the thesis related to issues that motivated the work.

Page 186: Bibliography

Bibliography 401

A

Adams, J. A. (1993). “Virtual Reality is for Real.” IEEE Spectrum, October: 22-29.

Agrawala, M., A. C. Beers, et al. (1997). “The Two-User Responsive Workbench: Support for Collaboration Through Individual Views of a Shared Space”. SIGGRAPH 97, Los Angeles, USA.

Ahlberg, C. and B. Shneiderman (1994). “Visual Information Seeking: Tight Coupling of Dynamic Query Filters with Starfield Displays”. ACM Conference on Human Factors in Computing Systems, New York, USA.

Ahlberg, C. and E. Wistrand (1995). “IVEE: An Information Visualization and Exploration Environment”. International Symposium on Information Visualization, Atlanta, GA.

Airey, J. M., J. J. Rohlf, et al. (1990). “Towards image realism with interactive update rates in complex virtual building environments.” Computer Graphics 24(2).

Ali, J. and Y. Nishinaka (1999). “Using 3D Visualization for Understanding Java Class Libraries”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Alty, J. L. (1995). “Operator interfaces in the nuclear environment: will multimedia help?” Nuclear Energy 34(No 1. Feb.): 47-52.

Ames, A. L., D. R. Nadeau, et al. (1997). VRML 2.0 Sourcebook. New York, John Wiley & Sons, Inc.

Anderson, T. (1996). “A Virtual Universe Utilizing Haptic Display”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Anderson, T. (1997). “FLIGHT: A 3D Human-Computer Interface and Application Development Environment”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Anderson, T., A. Breckenridge, et al. (1999). “FGB: A Graphical and Haptic User Interface For Creating Graphical, Haptic User Interfaces”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Andrews, K. (1995). “Visualizing Cyberspace: Information Visualization in the Harmony Internet Browser”. IEEE Symposium on Information Visualization, New York, USA.

Apple (1987). Apple Human Computer Interface Guidelines: The Apple Desktop Interface. Reading, MA, Addison-Wesley.

Archea, J., B. L. Collins, et al. (1979). Guidelines for stair safety. Washington, National Bureau of Standards, U.S. Govt.

Arms, R. W. J. (1983). Volume Cycles in the Stock Market. Homewood, Illinois, Dow Jones-Irwin.

Athènes, S., S. Chatty, et al. (2000). “Human Factors in ATC alarms and notifications design: an experimental evaluation”. Third USA/Europe Air Traffic Management R&D Seminar, Napoli, Italy.

Avila, R. S. and L. M. Sobierajski (1996a). “Haptic Interaction Utilizing a Volumetric Representation”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Avila, R. S. and L. M. Sobierajski (1996b). “A Haptic Interaction Method for Volumetric Representation”. Visualization '96, IEEE.

Aviles, W. A. and J. F. Ranta (1999). “Haptic Interaction with Geoscientific Data”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Axen, U. and I. Choi (1996). “Investigating Geometric Data with Sound”. International Conference on Auditory Display, Palo Alto, California.

Aynsley, R. M. (1999). Guidelines for energy efficient housing in the tropics. Queensland, Australia, Australian Institute of Tropical Architecture, James Cook University.

B

Baecker, R.M. (1981). "Sorting Out Sorting", from Dynamic Graphics Project, University of Toronto. 1981, ACM SIGGRAPH: Los Angeles.

Page 187: Bibliography

Bailey, R. W., A. L. Imbembo, et al. (1991). “Establishment of a laparoscopic cholecystectomy training program.” The American Surgeon 57(4): 231-236.

Baker, M. J. and S. G. Eick (1995). “Space-Filling Software Visualization.” Journal of Visual Languages and Computing 6: 119-133.

Balaniuk, R. (1999). “Using Fast Local Modelling to Buffer Haptic Data”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Balbo, S. (1995). “Software Tools for Evaluating the Usability of User Interfaces”. 6th International Conference on HCI, Yokohama, Japan.

Ballas, J. A. (1994). “Delivery of Information Through Sound”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 79-94.

Banks, W. W., G. D. L., et al. (1982). Human Engineering Design Considerations for Cathode Ray Tube-Generated Displays. Washington, DC, U.S. Nuclear Regulatory Commission: April 1982.

Bara-Cikoja, D. and M. T. Turvey (1991). “Perceiving aperture size by striking.” Journal of Experimental Psychology:Human Perception and Performance 17: 330-346.

Barrass, S. (1996a). “EarBenders: Using Stories About Listening to Design Auditory Displays”. First Asia-Pacific Conference on Human Computer Interaction APCHI'96, Singapore.

Barrass, S. (1996b). “TaDa! Demonstrations of Auditory Information Design”. ICAD '96 - Third International Conference on Auditory Display, California, USA.

Barrass, S. (1997). Auditory Information Design. Computer Science. Canberra, Australian National University.

Barrass, S. and B. Zehner (2000). “Responsive Sonification of Well-logs”. International Conference on Auditory Display, Atlanta, Georgia, USA.

Barco (2001). Virtual & Augmented Reality: Immersive and stereoscopic displays and solutions. 2002.

Bardasz, T. (1997). “Extending the Ghost SDK Shape and Dynamic Classes”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Bargar, R. (1994). “Pattern and Reference in Auditory Display”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 151-166.

Barra, M., T. Cillo, et al. (2001). “Personal Webmelody: Customized Sonification of web servers”. International Conference on Auditory Display, Espoo, Finland.

Basdogan, C., C.-H. Ho, et al. (1998). “The Role of Haptic Communication in Shared Virtual Environments”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Basili, V. R. and A. J. Turner (1975). “Iterative Enhancement: A Practical Technique for Software Development.” IEEE Transactions on Software Engineering 1(4): 390-396.

Bastien, J.-M. C. and D. L. Scapin (1993). “Ergonomic Criteria for the Evaluation of Human-Computer Interfaces.” Rapport Technique 156.

Batter, J. J. and F. P. J. Brooks (1972). GROPE-1. IFIP '71.

Bayazit, O. B., G. Song, et al. (1999). “Providing Haptic 'Hints' to Automatic Motion Planners”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Becker, R. A. and S. G. Eick (1995). “Visualizing Network Data.” IEEE Transactions on Visualization and Computer Graphics 1(1): 16-28.

Becker, R. A., S. G. Eick, et al. (1990a). “Dynamic Graphics for Network Visualization”. Visualization '90, Los Alamitos,CA, IEEE Computer Society Press.

Becker, R. A., S. G. Eick, et al. (1990b). “Network Visualization”. Fourth International Symposium on Spatial Data Handling, Zurich,Switzerland, IEEE Computer Society Press.

Beddow, J. (1990). “Shape Coding of Multidimensional Data on a Microcomputer Display”. Visualization '90, San Francisco, CA.

Bederson, B. and A. Boltman (1999). “Does Animation Help Users Build Mental Maps of Spatial Information?” IEEE Information Visualization.

Bederson, B. B. and J. D. Hollan (1994). “Pad++: A Zooming Graphical Interface for Exploring Alternate Interface Physics”. ACM Symposium on User Interface Software and Technology, Marina del Rey, USA.

Begault, D. R. and E. M. Wenzel (1992). “Techniques and Applications for Binaural Sound Manipulation in Man-Machine Interfaces.” International Journal of Aviation Psychology.

Page 188: Bibliography

Bennet, D. J. and A. D. N. Edwards (1998). “Exploration of Non-seen Diagrams”. International Conference on Auditory Display, Glasgow, UK.

Ben-Tal, O., M. Daniels, et al. (2001). “De Natura Sonoris: Sonification of Complex Data”. World Scientific and Engineering Society.

Bernstein, L. (1992). “Notes on Software Quality Management”. International Software Quality Exchange, San Francisco.

Bertin, J. (1981). “Graphics and Graphic Information Processing”. Readings in Information Visualization:Using Vision to Think. S. K. Card, J. D. Mackinlay and B. Shneiderman. San Francisco, USA, Morgan Kaufmann: 62-65.

Berton, J. A. J. (1990). “Strategies for Scientific Visualization:Analysis and Comparison of Current Techniques”. Extracting Meaning from Complex Data:Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

Beshers, C. and S. Feiner (1993). “AutoVisual: Rule-Based Design of Interactive Multivariate Visualizations.” IEEE Computer Graphics July: 41-49.

Bilings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines. Houston, USA, NASA.

Blattner, M. M., D. A. Sumikawa, et al. (1989). “Earcons and Icons: Their Structure and Common Design Principles.” Human Computer Interaction 4(1): 11-14.

Blattner, M. M., A. L. Papp III, et al. (1994). “Sonic Enhancement of Two-Dimensional Graphic Displays”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 447-470.


Blicq, R. (1987). Writing reports to get results : guidelines for the computer age. New York, USA, IEEE Press.

Bly, S. (1994). “Multivariate Data Mappings”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 405-416.

Boehm, B. W., J. R. Brown, et al. (1978). Characteristics of Software Quality. Amsterdam, North-Holland Publishing Company.

Boehm, B. W. (1988). “A Spiral Model of Software Development and Enhancement.” IEEE Computer May: 61-72.

Boehm, B. W. (1981). Software Engineering Economics. Inglewood Cliffs, New Jersey, Prentice-Hall Inc.

Boehm, B. W. and P. N. Papaccio (1988). “Understanding and controlling software costs.” IEEE Transactions on Software Engineering 14(10): 1462-1477.

Booch, G., J. Rumbaugh, et al. (1999). The Unified Modeling Language User Guide. Boston, Addison Wesley.

Börner, K. (2000). “Visible Threads: A Smart VR Interface to Digital Libraries”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Boynton, R. M. and H. S. Smallman (1990). “Visual Search for Basic vs. Nonbasic Chromatic Targets”. Perceiving, Measuring and Using Color, Santa Clara, California, SPIE - The International Society for Optical Engineering.

Bray, T., Ed. (1996). “Measuring the Web”. Readings in Information Visualization. San Francisco, California, Morgan Kaufmann Publishers, Inc.

Bregman, A. S. (1990). Auditory Scene Analysis, MIT Press.

Brewster, S. and R. Murray (2000). “Presenting dynamic information on mobile computers.” Personal Technologies 4(2): 209-212.

Brewster, S. A., P. C. Wright, et al. (1994). “A Detailed Investigation into the Effectiveness of Earcons”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 471-498.

Brickman, B. J., L. J. Hettinger, et al. (1996). “Haptic Specification of Environmental Events: Implications for the Design of Adaptive, Virtual Interfaces”. VRAIS '96, IEEE.

Brooks, F. P. J., M. Ouh-Young, et al. (1990). “Project GROPE– Haptic Displays for Scientific Visualization.” Computer Graphics 24(4): 177-185.

Brown, C. M. (1988). Human-Computer Interface Design Guidelines. Norwood, NJ, USA, Ablex.

Brown-VanHoozer, A., R. Hudson, et al. (1995). “Designing user models in a virtual cave environment”. 39th Annual Meeting of the Human Factors and Ergonomics Society, San Diego, USA.

Page 189: Bibliography

Bryson, S. and C. Levit (1992). “The virtual wind tunnel:An environment for the exploration of three dimensional unsteady flows.” Computer Graphics and Applications(July).

Burdea, G. and P. Coiffet (1994). Virtual Reality Technology. New York, John Wiley & Sons.

Burgin, J., B. Stephens, et al. (1998). “Haptic Rendering of Volumetric Soft-Bodies Objects”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

C

Caffrey, G. A., P. K. Iwamasa, et al. (1997). Modelling of Iron Flow, Heat Transfer, and Refractory Wear in a Blast furnace Hearth using TASC flow. Newcastle, Australia, BHP Research.

Card, S. K. and J. D. Mackinlay (1997). “The Structure of the Information Visualisation Design Space”. Proceedings of IEEE Symposium on Information Visualization, Phoenix, Arizona, USA, IEEE Computer Society.

Card, S. K., J. D. Mackinlay, et al., Eds. (1999). Information Visualization. Readings in Information Visualization. San Francisco, California, Morgan Kaufmann Publishers, Inc.

Card, S. K., P. Pirolli, et al. (1994). “The Cost-of-Knowledge Characteristic Function: Display Evaluation for Direct Walk Information Visualisations”. ACM Conference on Human Factors in Computing Systems, Boston, USA.

Card, S. K., G. G. Robertson, et al. (1996). The WebBook and the Web Forager: An Information Workspace for the World-Wide Web. ACM Conference on Human Factors in Computing Systems, New York, USA.

Cardosi, K. M. and E. D. Murphy (1995). Human Factors in the Design and Evaluation of Air Traffic Control Systems, Federal Aviation Administration.

Carpendale, M. S. T., D. J. Cowperthwaite, et al. (1997). “Extending Distortion Viewing from 2D to 3D.” IEEE Computer Graphics and Applications July/August: 42-51.

Chen, C. (2000). “Domain Visualization for Digital Libraries”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 261-267.

Chen, T., P. Young, et al. (1998). “Development of a Stereoscopic Haptic Acoustic Real-Time Computer (SHARC)”. SPIE Stereoscopic Displays and Applications IX, San Jose, California, SPIE.

Chernoff, H. (1973). “The Use of Faces to Represent Points in k-Dimensional Space Graphically.” Journal American Statistical Association 68: 361-368.

Childs, E. (2001). “The Sonification of Numerical Fluid Flow Simulations”. International Conference on Auditory Display, Espoo, Finland.

Christ, R. E. (1975). “Review and Analysis of Color Coding Research for Visual Displays.” Human Factors 17: 542-570.

Chuah, M. C. and S. G. Eick (1997). “Managing Software with New Visual Representations”. Proceedings of IEEE Symposium on Information Visualization, Phoenix, Arizona, USA, IEEE Computer Society.

Chuah, M. C., S. F. Roth, et al. (1995). “SDM:Selective Dynamic Manipulation of Visualizations”. ACM Symposium on User Interface Software and Technology, Pittsburgh, USA.

Churcher, N., L. Keown, et al. (1999). “Virtual Worlds for Software Visualisation”. Softvis '99, University of Technology, Sydney, Information Visualisation Research Group,The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Cleary, K., C. Lathan, et al. (1997). “Developing a PC-Based Spine Biopsy Simulator”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Cleveland, W. S. (1985). The Elements of Graphing Data. Monterey, California, Wadsworth Advanced Books and Software.

Cleveland, W. S. (1993). Visualizing Data. Summit, NJ, Hobart Press.

Cleveland, W. S. and R. McGill (1984). “Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods.” Journal of American Statistical Association 79(387).

Cohen, J. (1994). “Monitoring Background Activities”. Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 499-534.

Constantine, L. L. and L. A. D. Lockwood (1999). Software for reuse: A Practical Guide to the Models and Methods of Usage-Centered Design. Reading, Massachusetts, Addison-Wesley.

Cook, P. R. (1998). “N>>2: Multi-speaker Display Systems for Virtual Reality and Spatial Audio Projection”. International Conference on Auditory Display, Glasgow, UK.

Page 190: Bibliography

Coughran, W. M. J. and E. Grosse (1990). “Techniques for Scientific Animation”. Extracting Meaning from Complex Data:Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

Crossan, A., S. A. Brewster, et al. (2000). “Multimodal Feedback Cues to Aid Veterinary Simulations”. First Workshop on Haptic Human-Computer Interaction.

Cruz-Neira, C., J. Leigh, et al. (1993). “Scientists in wonderland: A report on visualization applications in the CAVE virtual reality environment”. IEEE Symposium on Research Frontiers in Virtual Reality.

Cruz-Neira, C., D. J. Sandin, et al. (1993). “Surround screen projection-based virtual reality: The design and implementation of the CAVE”. SIGGRAPH, 1993.

Cruz-Neira, C., D. J. Sandin, et al. (1992). “The CAVE: Audio visual experience automatic virtual environment.” Communications of the ACM 35(6): 65-72.

Cugini, J., S. Laskowski, et al. (2000). “Design of 3-D Visualisation of Search Results: Evolution and Evaluation”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Curtis, B., H. Krasner, et al. (1988). “A Field study of the Software Design Process for Large Systems.” Communications of the ACM 31(11).

D

Darzins, A., Ed. (1995). Guidelines for medical care of older persons in nursing homes. South Melbourne, Australia, Royal Australian College of General Practitioners Services Division.

D'Aulignac, D. and R. Balaniuk (1999). "Providing reliable force-feedback for a virtual, echographic exam of the human thigh". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Davidson, S. W. (1996). "A Haptic Process Architecture using the PHANTOM as an I/O Device in a Virtual Electronics Trainer". The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

De Angelis, F., M. Bordegoni, et al. (1997). "Realistic Haptic Rendering of Flexible Objects". The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

De, S. and M. A. Srinivasan (1998). "Rapid Rendering of 'Tool - Tissue' Interactions in Surgical Simulations: Thin Walled Membrane Models". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Deatherage, B. H. (1992). "Auditory and Other Sensory Forms of Information Presentation" Human Engineering Guide to Equipment Design (Revised Edition). H. P. Van Cott and R. G. Kinkade, McGraw-Hill.

DeFanti, T. A., M. D. Brown, et al. (1989). “Visualization-Expanding Scientific and Engineering Research Opportunities.” IEEE Computer 22(8): 12-25.

DeFanti, T. A., D. J. Sandin, et al. (1993). “A 'room' with a 'view'.” IEEE Spectrum October: 30-33.

Demarey, C. and P. Plénacoste (2001). "User, Sound Context and Use Context: What are their roles in 3D sound Metaphors Design?" International Conference on Auditory Display, Espoo, Finland.

Deming, W. E. (1986). Out of Crisis. Cambridge, MA, MIT Centre for Advanced Engineering Study.

Dent, B. D. (1975). “Communication Aspects of Value-by-Area Cartograms.” American Cartographer 2: 154-168.

DiBattista, G., P. Eades, et al. (1999). Graph Drawing Algorithms for the Visualization of Graphs, Prentice-Hall.

Dickinson, R. R. (1990). "Using our Ears: An Introduction to the Use of Nonspeech Audio Cues". Extracting Meaning from Complex Data:Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

DiFilippo, D. and D. K. Pai (2000). "Contact Interaction with Integrated Audio and Haptics". International Conference on Auditory Display, Atlanta, Georgia, USA.

DiFranco, D. E., G. L. Beauregard, et al. (1997). "The Effects of Auditory Cues on the Haptic Perception of Stiffness in Virtual Environments". ASME Dynamic Systems and Control Division.

Dobson, A. J. (1990). An Introduction to Generalized Linear Models. London, Chapman and Hall.

Dodson, D. W. and N. L. J. Shields (1978). Development of user guidelines for ECAS display design. Huntsville, AL, USA, Essex Corp.

Donald, B. R. and F. Henle (1999). "Haptic Vector Fields for Animation Motion Control". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Dufresne, A., O. Martial, et al. (1996). "Sound, Space, and Metaphor: Multimodal Access to Windows for Blind Users". International Conference on Auditory Display.

Page 191: Bibliography

Durbeck, L. J. K., N. J. Macias, et al. (1998). "SCIRun Haptic Display for Scientific Visualization". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Durlach, N. I. and A. S. Mavor, Eds. (1995). Virtual Reality: Scientific and Technological Challenges. Washington, D.C., National Academy Press.

Dwyer, T. and P. Eckersley (2001). "Wilmascope 3D Graph Visualisation System". Graph Drawing 2001, Vienna, Springer Verlag, Berlin.

E

Eben, A., III (1995). "Neurosurgical Aspects of Intraoperative MRI". Virtual Reality in Medicine and Developers' Expo, Cambridge Center Marriott, MA.

Eberhardt, S., M. Neverov, et al. (1997). "Force Reflection for WIMPs: A Button Acquisition Experiment". ASME Dynamic Systems and Control Division.

Eckel, G. (1998). "A Spatial Auditory Display for the CyberStage". International Conference on Auditory Display, Glasgow, UK.

Eckel, G., M. Göbel, et al. (1997). "Benches and Caves". 1st International Immersive Projection Technology Workshop, London, Springer-Verlag.

Eick, S. G. (1994). "Data Visualization Sliders". ACM Symposium on User Interface Software and Technology, Marina del Rey, USA.

Eick, S. G., J. L. Steffen, et al. (1992). “SeeSoft - A Tool for Visualizing Line Oriented Software Statistics.” IEEE Transactions on Software Engineering 18(11): 957-968.

Eick, S. G. and G. J. Wills (1993). "Navigating Large Networks with Hierarchies". IEEE Information Visualization 93, San Jose, USA.

Ellson, R. and D. Cox (1988). “Visualization of Injection Molding.” Simulation 51(5): 184-188.

Erbacher, R. F. (2000). "Visual Steering for Program Debugging". Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Evans, B. (1990). "Correlating Sonic and Graphic Materials in Scientific Visualisation". Extracting Meaning from Complex Data:Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

Evans, M. J., A. I. Tew, et al. (1997). "Spatial Audio Teleconferencing - Which Way is Better?" International Conference on Auditory Display, Palo Alto, California.

F

Fairchild, K. M., S. E. Poltrock, et al. (1988). "SemNet: Three-Dimensional Representations of Large Knowledge Bases". Cognitive Science and Its Applications for Human-Computer Interaction. R. Guidon, Lawrence Erlbaum Associates: 201-233.

Feiner, S. and C. Beshers (1990). "Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds". ACM Symposium on User Interface Software and Technology.

Fernström, M. and C. McNamara (1998). "After Direct Manipulation - Direct Sonification". International Conference on Auditory Display, Glasgow, UK.

Fisher, S. (1982). Viewpoint Dependent Imaging: An Interactive Stereoscopic Display. SPIE 367.

Fisher, S. S., E. J. Wenzel, et al. (1988). "Virtual Interface Environment Workstations". 32nd Annual Meeting of the Human Factors Society, Anaheim, California, USA.

Fishkin, K. and M. C. Stone (1995). "Enhanced Dynamic Queries via Movable Filters". ACM Conference on Human Factors in Computing Systems.

Fitch, W. T. and G. Kramer (1994). "Sonifying the Body Electric:Superiority of an Auditory over a Visual Display in a Complex, Multivariate System". Auditory Display:Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 307-326.

Flowers, J. H., D. Buhman, C., et al. (1996). "Data Sonification from the Desktop: Should Sound be part of Standard Data Analysis". International Conference on Auditory Display, Palo Alto, California.

Floyd, J. (1999). "Haptic Interaction with Three-Dimensional Bitmapped Virtual Environments". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Page 192: Bibliography

Francis, B. and J. Pritchard (2000). "Visualization of criminal careers". Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Freeman, E. and S. Fertig (1995). "Lifestreams:Organising Your Electronic Life". AAAI Fall Symposium on AI Applications in Knowledge Navigation.

Freund, J. E. and R. E. Walpole (1980). Mathematical Statistics. New Jersey, Prentice-Hall.

Friedes, D. (1974). “Human Information Processing and Sensory Modality: Cross-Modal functions, Information Complexity, Memory and Deficit.” Psychological Bulletin 81(5): 284-310.

Friedrich, C. (2002). Animation in Relational Information Visualization. School of Information Technologies. Sydney, University of Sydney: 334.

Fritz, J. P. and K. E. Barner (1996). "Haptic Scientific Visualization". The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Fröhlich, B., S. Barrass, et al. (1999). "Exploring Geo-Scientific Data in Virtual Environments". 1999 International Conference on Information Visualization (IV '99), San Francisco, USA, IEEE Computer Society.

Frysinger, S. P. (1990). "Applied research in auditory data representation". Extracting Meaning from Complex Data: Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

Furnas, G. W. (1981). "The FISHEYE View: A New Look at Structured Files". Readings in Information Visualization: Using Vision to Think. S. K. Card, J. D. Mackinlay and B. Shneiderman. San Francisco, USA, Morgan Kaufmann: 312-330.

Furnas, G. W. (1995). "Generalized Fisheye Views". Human Factors in Computing Systems CHI '95, Denver, CO.

Furnas, G. W. (1997). "Effective View Navigation". ACM Conference on Human Factors in Computing Systems, Atlanta, USA.

G

Galitz, W. O. (1989). Handbook of Screen Format Design. Wellesley, MA, USA, Q. E. D. Information Sciences.

Gardner, H. and R. Boswell (2001). "The Wedge Virtual Reality Theatre". Apple University Consortium, Townsville, Australia.

Gaver, W. W. (1986). “Auditory Icons: Using Sound in Computer Interfaces.” Human Computer Interaction 2: 167-177.

Gaver, W. W. (1992). "The affordances of media spaces for collaboration". CSCW'92 - Computer-Supported Collaborative Work, Toronto.

Gaver, W. W. (1993). “What in the world do we hear? An ecological approach to auditory source perception.” Ecological Psychology 5(1): 1-29.

Gaver, W. W. (1994). "Using and Creating Auditory Icons". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 417-446.

Gibson, J. J. (1983). The Senses Considered As Perceptual Systems. Westport, Connecticut, Greenwood Press.

Gibson, W. (1984). Neuromancer. New York, Berkley Publications Group.

Giess, C., H. Evers, et al. (1998). "Haptic Volume Rendering in Different Scenarios of Surgical Planning". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Giess, C., H. Evers, et al. (1999). "The use of CORBA in haptic systems - a requirement for future applications". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Giles, M. and R. Haimes (1990). “Advanced Interactive Visualization for CFD.” Computing Systems in Engineering 1(1): 51-62.

Goebbels, G., V. Lalioti, et al. (2001). "Guided Design and Evaluation of Distributed, Collaborative 3D Interaction in Projection Based Virtual Environments". HCI International 2001.

Goldstein, E. B. (1989). Sensation and Perception, Brooks/Cole Publishing Company.

Gorman, P. J., J. D. Lieser, et al. (1998). "Assessment and Validation of a Force Feedback Virtual Reality Based Surgical Simulator". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Grabowski, N. A. and K. E. Barner (1999). "Structurally-Derived Sounds in a Haptic Rendering System". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Granlund, A., D. Lafreniere, et al. (2001). "A pattern-supported approach to the user interface design process". 9th International Conference on Human Computer Interaction, New Orleans, USA.

Green, D. F. (1997). "Texture Sensing and Simulation Using the PHANToM: Towards Remote Sensing of Soil Properties". The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Green, D. F. and J. K. Salisbury (1998). "Soil Simulation with a PHANToM". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Grégory, L. and S. A. Brewster (1998). "An Investigation of Using Music to provide Navigation Cues". International Conference on Auditory Display, Glasgow, UK.

Groth, R. (1998). Data mining: a hands-on approach for business professionals. Upper Saddle River, N.J., Prentice Hall.

Gunn, C. and C. S. Jackson (1991). Guidelines on patient care in radiography. Edinburgh, Churchill Livingstone.

Gupta, A. K. and P. J. Moss, Eds. (1993). Guidelines for design of low-rise buildings subjected to lateral forces. Raleigh, North Carolina, Boca Raton: CRC Press.

Gutiérrez, T., J. J. Barbero, et al. (1998). "Assembly Simulation through Haptic Virtual Prototypes". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

H

Haber, R. B. (1990). “Visualization Techniques for Engineering Mechanics.” Computing Systems in Engineering 1: 37-55.

Hamit, F. (1993). Virtual Reality and the Exploration of Cyberspace. Carmel, Indiana, Sams Publishing.

Hancock, D. (1993). “Prototyping the Hubble Fix.” IEEE Spectrum October: 34-39.

Hanrahan, P. and B. Fröhlich (1997). “The Two-User Responsive Workbench: Support for Collaboration Through Individual Views of a Shared Space.” Computer Graphics Proceedings, Annual Conference Series, 1997.

Hardin, C. L. (1990). "Why Color?", Perceiving, Measuring and Using Color, Santa Clara, California, SPIE - The International Society for Optical Engineering.

Harding, C., B. Loftin, et al. (2000). "Geoscientific data visualization on the interactive workbench". Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Hayward, C. (1994). "Listening to the Earth Sing". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 369-404.

Healy, C. G., K. S. Booth, et al. (1995). “High-Speed Visual Estimation Using Preattentive Processing.” ACM Transactions on Computer-Human Interaction 3(2): 107-135.

Helman, J. and L. Hesselink (1991). "Analysis and Representation of Complex Structures in Separated Flows". SPIE Conference on Extracting Meaning from Complex Data, San Jose, CA.

Helsel, S. (2002). The Benefits of Immersive Visualization in the University. 2002.

Hendley, R. J., N. S. Drew, et al. (1995). "Narcissus: Visualizing Information". International Symposium on Information Visualization, Atlanta, GA.

Hibbard, W., L. Uccellini, et al. (1989). “Application of the 4-D McIDAS to a Model Diagnostic Study of the Presidents’ Day Cyclone.” Bulletin American Meteorological Society 70(11): 1394-1403.

Hill, T., J. Potter, et al. (1999). "Visualising Implicit Structure in Java Object Graphs". Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Hirose, M. (1991). "Visualization Tool Applications of Artificial Reality". International Symposium on Artificial Reality and Teleexistence, Tokyo.

Hirose, S. and K. Mori (1999). "A VR Three-dimensional Pottery Design System Using PHANToM Haptic Devices". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Ho, C.-H., C. Basdogan, et al. (1997). "Haptic Rendering: Point- and Ray-Based Interactions". The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Hollander, A. J. and T. A. Furness III (1994). "Perception of Virtual Auditory Shapes". International Conference on Auditory Display, Santa Fe, New Mexico.

Howard, J. H. and J. A. Ballas (1982). “Acquisition of acoustic pattern categories by exemplar observation.” Organisational Behaviour and Human Decision Process 30(2): 157-173.

Howe, R. D. (1996). "Tactile Display of Shape and Vibration". The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Howe, R. D., T. Debus, et al. (1999). "Twice the Fun: Two Phantoms as a High-Performance Telemanipulation System". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Hoyle, D. (2001). ISO 9000 Quality Systems Handbook. New York, Butterworth-Heinemann.

Huang, C., H. Qu, et al. (1998). "Volume Rendering with Haptic Interaction". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Humphrey, W. S. (2000). A Discipline for Software Engineering. Boston, Addison Wesley.

Hussey, B. M. J., R. J. Hobbs, et al. (1989). Guidelines for bush corridors. Sydney, NSW, Chipping Norton, NSW.

Hutchins, E. (1989). Metaphors for Interface Design. The Structure of Multimodal Dialogue. M. M. Taylor, F. Neel and D. G. Bouwhuis. Amsterdam, Elsevier Science Publishers: 11-28.

Hutchins, M. and C. Gunn (1999). "A Haptic Constraint Class Library". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

I

Inselberg, A. (1997). "Multidimensional Detective". IEEE Information Visualization 97, Los Alamitos, USA.

Ishikawa, K. (1989). Guide to Quality Control. White Plains, New York, Quality Resource.

ISO (1986). Danger signals for work places - auditory danger signals. Geneva, Switzerland, International Organization for Standardization.

Itten, J. (1970). The Elements of Color. New York, USA, Van Nostrand Reinhold Company.

J

Jackson, J. A. and J. M. Francioni (1994). "Synchronization of Visual and Aural Parallel Program Performance Data". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 291-306.

Jacobson, N. and W. Bender (1990). "Deterministic formation of visual color sensation". Perceiving, Measuring and Using Color, Santa Clara, California, SPIE - The International Society for Optical Engineering.

Jain, P. K. (1993). Metric spaces. New Delhi, Narosa Publishing House.

Jameson, D. H. (1994). "Sonnet: Audio-Enhanced Monitoring and Debugging". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 253-266.

Jansson, G., J. Fänger, et al. (1998). "Visually Impaired Persons' Use of the PHANToM for Information about Texture and 3D Form of Virtual Objects". The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Jin, L. and D. C. Banks (1997). “TennisViewer: A Browser for Competition Trees.” IEEE Computer Graphics and Applications July/August: 63-65.

Johannes, S. (1997). Software Engineering with Reusable Components. Berlin, Springer-Verlag.

Johnson, B. and B. Shneiderman (1991). "Tree-maps: A Space-Filling Approach to the Visualization of Hierarchical Information Structures". IEEE Visualization 91, San Diego.

Johnson, D. A. and G. R. Rising (1972). Guidelines for teaching mathematics. Belmont, California, Wadsworth Pub. Co.

Juarez-Espinosa, O., C. Hendrickson, et al. (2000). "Using Visualisation for Teaching". Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Jungmeister, W. and D. Turo (1992). Adapting Treemaps to Stock Portfolio Visualisation. Maryland, Computer Science Department, University of Maryland.

Juran, J. M. (1980). “International Significance of the QC Circle.” Quality Progress 13(11): 18-22.

Juran, J. M. and F. M. Gryna (1970). Quality Planning and Analysis: From Product Development through Use. New York, McGraw-Hill.

Juran, J. M. and F. M. Gryna (1988). Juran's Quality Control Handbook. New York, McGraw-Hill Book Company.

K

Kan, S. H. (1995). Metrics and Models in Software Quality Engineering. Reading, Massachusetts, Addison-Wesley Publishing Company.

Kaper, H. G., S. Tipei, et al. (1999). “Data Sonification and sound visualization.” Computing in Science and Engineering 1(4): 48-58.

Karat, J., C.-M. Karat, et al. (2000). “Affordances, Motivation, and the Design of User Interfaces.” Communications of the ACM 43(8): 49-51.

Kaufman, D. (1993). “Virtual Reality's Future.” IRIS Universe 25: 48-50.

Kehoe, C., J. Stasko, et al. (1999). "Rethinking the evaluation of algorithm animations as learning aids: An observational study". Atlanta, Graphics, Visualization and Usability Center, Georgia Institute of Technology.

Keim, D. A. and H. Kriegel (1994). “VisDB: Database Exploration Using Multidimensional Visualization.” IEEE Computer Graphics and Applications, September: 40-49.

Keller, P. R. and M. M. Keller (1993). Visual Cues. Los Alamitos, USA, IEEE Computer Society Press.

Kellner, M. I. (1988). "Representation Formalisms for Software Process Modeling". 4th International Software Process Workshop, ACM Press.

Kendall, G. S. (1991). “Visualization by Ear: Auditory Imagery for Scientific Visualization and Virtual Reality.” Computer Music Journal 15(4).

Kirkpatrick, A. E. and S. A. Douglas (1999). "Evaluating Haptic Interfaces in Terms of Interaction Techniques". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Klatzky, R. L., J. M. Loomis, et al. (1993). “Haptic Identification of objects and their depictions.” Perception and Psychophysics 54: 170-178.

Kramer, G. (1994a). "An Introduction to Auditory Display". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 1-78.

Kramer, G. (1994b). "Some Organizing Principles for Representing Data with Sound". Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 185-222.

Kramer, G., B. Walker, et al. (1997). Sonification Report: Status of the Field and Research Agenda, Prepared for the National Science Foundation by members of the International Community for Auditory Display.

Krueger, M. W. and D. Gilden (1997). "Knowhere: An Audio/Spatial Interface for Blind People". International Conference on Auditory Display, Palo Alto, California.

Kruger, W., C. Bohn, et al. (1995). “The Responsive Workbench: A Virtual Environment.” IEEE Computer (July): 42-48.

Kumar, H. P., C. Plaisant, et al. (1997). “Browsing Hierarchical Data with Multi-Level Dynamic Queries and Pruning.” International Journal of Human-Computer Studies 46(1): 103-124.

Kwitowski, A. J., W. D. Mayercheck, et al. (1992). “Teleoperation for continuous miners and haulage equipment.” IEEE Transactions on Industry Applications 28: 1118-1126.

L

Lakshminarayana, A., T. S. Newman, et al. (2000). "Automatic Extraction and Visualisation of Object-Oriented Software Design Metrics". Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Lamping, J. and R. Rao (1994). "Laying out and Visualizing Large Trees Using a Hyperbolic Space". UIST '94.

Lamping, J. and R. Rao (1996). “The Hyperbolic Browser: A Focus + Context Technique for Visualizing Large Hierarchies.” Journal of Visual Languages and Computing 7(1): 33-55.

Larsson, P. (1997). "Adding Force Feedback to Interactive Visual Simulations using Oxygen". The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Latimer, C. and J. K. Salisbury (1997). "A Unified Approach to Haptic Rendering and Collision Detection". The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

LeBlanc, J., M. O. Ward, et al. (1990). "Exploring N-Dimensional Databases". Visualization '90, San Francisco, CA.

Lemmens, P. M. C., M. P. Bussemakers, et al. (2000). "The Effect of Earcons on Reaction Times and Error-Rates in a Dual-Task vs. a Single-Task Experiment". International Conference on Auditory Display, Atlanta, Georgia, USA.

Lemmens, P. M. C., M. P. Bussemakers, et al. (2001). "Effects of Auditory Icons and Earcons on Visual Categorization: The Bigger Picture". International Conference on Auditory Display, Espoo, Finland.

Leplâtre, G. and S. A. Brewster (2000). "Designing Non-Speech Sounds to Support Navigation in Mobile Phone Menus". International Conference on Auditory Display, Atlanta, Georgia, USA.

Leung, Y. K. and M. D. Apperley (1994). “A Review and Taxonomy of Distortion-Oriented Presentation Techniques.” ACM Transactions on Computer-Human Interaction 1(2): 126-160.

Leung, Y. K., S. Smith, et al. (1997). "Learning and Retention of Auditory Warnings". International Conference on Auditory Display, Palo Alto, California.

Levkowitz, H. (1991). "Colour Icons: Merging color and texture perception for integrated visualization of multiple parameters". Visualization '91, San Diego, CA.

Lin, M. C., A. Gregory, et al. (1999). "Contact Determination for Real-time Haptic Interaction in 3D Modeling, Editing and Painting". The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Lin, X. (1992). "Visualization for the Document Space". IEEE Visualization 92, Boston.

Lipstreu, O. and J. I. Doi (1963). Guidelines for the aspiring professor. Chicago, South-Western Pub. Co.

Littlefield, R. J. (1983). "Using the Glyph Concept to Create User-Definable Display Formats". Fourth Annual Conference and Exposition of the National Computer Graphics Association.

Lodha, S. K., D. Whitmore, et al. (2000). "Analysis and User Evaluation of a Musical-Visual System: Does music make any difference?" International Conference on Auditory Display, Atlanta, Georgia, USA.

Lokki, T. and H. Järveläinen (2001). "Subjective Evaluation of Auralization of Physics-based Room Acoustics Modeling". International Conference on Auditory Display, Espoo, Finland.

Lorho, G., J. Marila, et al. (2001). "Feasibility of Multiple Non-Speech Sounds Presentation using Headphones". International Conference on Auditory Display, Espoo, Finland.

Luckey, P. H., R. M. Pittman, et al. (1992). Iterative Development Process with Proposed Applications. Owego, NY, IBM Federal Sector Division.

Lunney, D. and R. C. Morrison (1990). "Auditory Presentation of Experimental Data". Extracting Meaning from Complex Data: Processing, Display, Interaction, Santa Clara, California, SPIE-The International Society for Optical Engineering.

Lux, M. (2000). “The Iceberg Metaphor to Support the Knowledge Crystallisation Cycle”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 283-287.

Lyons, P. and Y. Zhang (1999). “Visualising Music for Performance Flexibility”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Lyons, P. J. (1999). “Programming in several dimensions”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

M

MacDonald, L. and J. Vince, Eds. (1994). Interacting with Virtual Environments. Chichester, England, John Wiley & Sons Ltd.

MacEachren, A. M. (1995). How Maps Work. New York, The Guilford Press.

Mackinlay, J. D. (1981). “Automating the Design of Graphical Presentations of Relational Information.” ACM Transactions on Graphics 5(2): 111-141.

Mackinlay, J. D., G. G. Robertson, et al. (1991). “The Perspective Wall: Detail and Context Smoothly Integrated”. Human Factors in Computing Systems CHI'91, New York.

Madhyastha, T. M. and D. A. Reed (1994). “A Framework for Sonification Design”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 267-290.

Mahemoff, M. and L. Johnston (1998). “Principles for a usability-oriented pattern language”. OZCHI '98, Adelaide, Australia.

Mansur, D. L., M. M. Blattner, et al. (1985). “Sound-graphs: A numerical data analysis method for the blind.” Journal of Medical Systems 9: 163-174.

Marcy, G., B. Temkin, et al. (1998). “Tactile Max: A Haptic Interface for 3D Studio Max”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Martens, W. L. (2001). “Psychophysical Calibration for Controlling the Range of a Virtual Sound Source: Multidimensional complexity in Spatial Auditory Display”. International Conference on Auditory Display, Espoo, Finland.

Massie, T. H. and J. K. Salisbury (1994). “The PHANTOM Haptic Interface: A Device for Probing Virtual Objects”. Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Chicago, IL, USA.

Mayer-Kress, G., R. Bargar, et al. (1994). “Musical Structures in Data from Chaotic Attractors”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 341-368.

McCabe, K. and A. Rangwalla (1994). “Auditory Display of Computational Fluid Dynamics Data”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 327-340.

McCormick, E. J. and M. S. Sanders (1983). Human Factors in Engineering and Design. Singapore, McGraw-Hill Book Co.

McGee, M. R., P. D. Gray, et al. (2000). “Communicating with feeling”. First Workshop on Haptic Human-Computer Interaction.

McLaughlin, J. P. and B. J. Orenstein (1997). “Haptic Rendering of 3D Seismic Data”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Meddes, J. and E. McKenzie (2000). “Improving Visualisation by Capturing Domain Knowledge”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Meyer, G. W. and D. P. Greenberg (1980). “Perceptual Color Spaces for Computer Graphics.” Computer Graphics 14: 254-262.

Mihalisin, T., J. Timlin, et al. (1991a). “Visualization and Analysis of Multi-Variate Data: A Technique for All Fields”. Visualization '91, Los Alamitos,CA, IEEE Computer Society Press.

Mihalisin, T., J. Timlin, et al. (1991b). “Visualizing Multivariate Functions, Data, and Distributions.” IEEE Computer Graphics and Applications 11(13): 28-35.

Miller, G. A. (1957). “The Magic Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Psychological Review 63: 81-97.

Miller, T. (1998). “Implementation Issues in Adding Force Feedback to the X Desktop”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Minsky, M., M. Ouh-young, et al. (1990). “Feeling and Seeing: Issues in Force Display.” Computer Graphics 24(2): 235-243.

Mor, A. B. (1998). “5 DOF Force Feedback Using the 3 DOF Phantom and a 2 DOF Device”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Mor, A. B., S. Gibson, et al. (1996). “Interacting with 3-Dimensional Medical Data Haptic Feedback for Surgical Simulation”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Morton, A. H. (1982). “Visual and Tactile Texture Perception: Intersensory co-operation.” Perception & Psychophysics 31: 339-344.

Munsell, A. (1979). A Color Notation. Baltimore, Munsell Color Co., Inc.

Munzner, T. and P. Burchard (1995). “Visualizing the Structure of the World Wide Web in 3D Hyperbolic Space”. VRML '95, San Diego, CA.

Mynatt, E. D. (1994). “Auditory Presentation of Graphical User Interfaces”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 499-534.

Mynatt, E. D. and G. Weber (1994). “Nonvisual presentation of graphical user interfaces: Contrasting two approaches”. ACM CHI'94 Conference on Human Factors in Computing Systems, Boston.

N

Nagy, A. and R. Sanchez (1988). “Color Difference Required for Parallel Visual Search.” Optical Society of America Technical Digest Series 11(42).

Nakamura, M. and K. Inoue (1998). “Adding Haptic Devices to Virtual Reality Based User Interface Systems”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Nesbitt, K. (2002). “Experimenting with Haptic Attributes for Display of Abstract Data”. Eurohaptics 2002 International Conference, Edinburgh, Scotland.

Nesbitt, K. and S. Barrass (2002). “Evaluation of a Multimodal Sonification and Visualisation of Depth of Market Stock Data”. International Conference on Auditory Display - ICAD 2002, Kyoto, Japan.

Nesbitt, K. and C. Friedrich (2002). “Applying Gestalt Principles to Animated Visualizations of Network Data”. Sixth International Conference on Information Visualisation, London, IEEE Computer Society.

Nesbitt, K. V. (2000a). “A Classification of Multi-Sensory Metaphors for Understanding Abstract Data in a Virtual Environment”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 493-498.

Nesbitt, K. V. (2000b). “Designing Multi-sensory Models for Finding Patterns in Stock Market Data”. Third International Conference in Multimodal Interfaces. T. Tan, Y. Shi and W. Gao. Beijing, China, Springer. 1948: 24-31.

Nesbitt, K. V. (2001a). “Interacting with Stock Market Data in a Virtual Environment”. Joint Eurographics IEEE TCVG Symposium on Visualization, Ascona, Switzerland, SpringerWein New York.

Nesbitt, K. V. (2001b). “Modeling the Multi-Sensory Design Space”. Australasian Symposium on Information Visualisation, 2001, Sydney, Australia, Australian Computer Society.

Nesbitt, K. V., R. Gallimore, et al. (2000). “Investigating the Application of Virtual Environment Technology for use in the Petroleum Exploration Industry”. 23rd Australasian Computer Science Conference ACSC 2000, Canberra, Australia, IEEE Computer Society.

Nesbitt, K. V., R. Gallimore, et al. (2001). “Using Force Feedback for Multi-sensory Display”. 2nd Australasian User Interface Conference AUIC 2001, Gold Coast, Queensland, Australia, IEEE Computer Society.

Nesbitt, K. V. and B. Orenstein (1999). “Multisensory Metaphors and Virtual Environments applied to Technical Analysis of Financial Markets”. Advanced Investment Technology, 1999, Gold Coast, Australia, School of Information Technology, Bond University, Gold Coast, Queensland, Australia.

Nesbitt, K. V., B. J. Orenstein, et al. (1998). An Evaluation of Haptic Technology for Information Display. Newcastle, BHP Research: 11.

Nesbitt, K. V., B. J. Orenstein, et al. (1997). “The Haptic Workbench applied to Petroleum 3D Seismic Interpretation”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Nesbitt, K. V., B. O. Orenstein, et al. (1997). An Evaluation of the Haptic Workbench for Seismic Interpretation. Newcastle, Australia, BHP Research.

Neuhoff, J. G., G. Kramer, et al. (2000). “Sonification and the Interaction of Perceptual Dimensions: Can the Data Get Lost in the Map?” International Conference on Auditory Display, Atlanta, Georgia, USA.

Nicholson, C. (1999a). Technical Analysis. Course Notes E114. Sydney, Australia, Securities Institute and Australian Technical Analysts Association.

Nicholson, C. (1999b). Specialised Techniques in Technical Analysis. Course Notes E171. S. Hickman. Sydney, Australia, Securities Institute and Australian Technical Analysts Association.

Nierstrasz, O. and T. D. Meijler (1995). “Research Directions in Software Composition.” ACM Computing Surveys 27(2): 262-264.

Nishikawa, N. (1999). “ObjectOrrey - 3D Visualization of Class Library Structure”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Nison, S. (1991). Japanese Candlestick Charting Techniques, New York Institute of Finance.

Noma, H., Y. Kitamura, et al. (1996). “Haptic Feedback and Manipulation Aid in a Virtual Environment”. ASME Dynamic Systems and Control Division.

Noma, H. and T. Miyasato (1997). “Cooperative Object Manipulation in Virtual Space using Physics”. ASME Dynamic Systems and Control Division.

North, C., B. Shneiderman, et al. (1996). “User Controlled Overviews of an Image Library: A Case study of the Visible Human”. Conference on Digital Libraries.

Department of Planning, NSW. (1992). Guidelines for hazard analysis. Sydney, Australia, NSW Department of Planning.

Traffic Authority of NSW. (1979). Guidelines for road closures. Rosebery, N.S.W, Traffic Authority of New South Wales.

NUREG (1981). Guidelines for Control Room Reviews. Washington, DC, U.S. Nuclear Regulatory Commission.

O

Oakley, I., S. A. Brewster, et al. (2000). “Communicating with Feeling”. First Workshop on Haptic Human-Computer Interaction.

Oakley, I., M. R. McGee, et al. (2000). “Putting the feel in look and feel”. ACM Computer Human Interaction 2000, Hague, Netherlands, ACM Press Addison-Wesley.

OECD, N. E. A. (1979). Guidelines for sea dumping packages of radioactive waste. Paris, France, Nuclear Energy Agency, Organisation for Economic Co-operation and Development.

Olson, M. R. (1981). Guidelines and games for teaching efficient Braille reading. New York, N.Y., American Foundation for the Blind.

O'Toole, R., R. Playter, et al. (1997). “A Novel Virtual Reality Surgical Trainer with Force Feedback: Surgeon vs. Medical Student Performance”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Ottensmeyer, M. P. and J. K. Salisbury (1997). “Hot and Cold Running VR: adding thermal stimuli to the haptic experience”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

P

Pai, D. K. and K. van den Doel (1997). “Simulation of contact sounds for haptic interaction”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Pao, L. Y. and D. A. Lawrence (1998). “Synergistic Visual / Haptic Computer Interfaces”. Japan/USA/Vietnam Workshop on Research and Education in Systems, Computation and Control Engineering.

Patterson, R. R. (1982). Guidelines for Auditory Warning Systems on Civil Aircraft. London, Civil Aviation Authority.

Paulk, M. C., V. Weber, et al. (1995). The Capability Maturity Model: Guidelines for Improving the Software Process. Reading, MA., Addison-Wesley.

Pettifer, S., J. Cook, et al. (2000). “Exploring electronic landscapes: technology and methodology”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Pfleeger, S. L. (1998). Software Engineering: Theory and Practice. New Jersey, Prentice Hall.

Pickett, R. M. and G. G. Grinstein (1988). “Iconographic Displays for Visualizing Multidimensional Data”. IEEE Conference on Systems, Man and Cybernetics, Piscataway, NJ, IEEE Press.

Pirolli, P. and R. Rao (1996). “Table Lens as a Tool for Making Sense of Data”. Workshop on Advanced Visual Interfaces, Gubbio, Italy.

Piron, L., M. Dam, et al. (1998). “A Project to Study Human Motor System”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Plaisant, C., B. Milash, et al. (1996). “LifeLines: Visualizing Personal Histories”. ACM Conference on Human Factors in Computing Systems.

Pollack, I. and L. Ficks (1954). “Information of Elementary Multidimensional Auditory Displays.” Journal of the Acoustical Society of America 26: 1550-1558.

Poston, T. and L. Serra (1996). “Dextrous Virtual Work.” Communications of the ACM 39(5): 37-45.

Puckette, M. (1991). “Combining Event and Signal Processing in the Max Graphical Programming Environment.” Computer Music Journal 15(3).

Q

Quinn, M. (2001). “Research set to music: The climate symphony and other sonifications of ice core, radar, DNA, seismic and solar wind data”. International Conference on Auditory Display, Espoo, Finland.

R

Rabenhorst, D. A., E. J. Farrell, et al. (1990). “Complementary Visualization and Sonification of Multi-Dimensional Data”. Extracting Meaning from Complex Data: Processing, Display, Interaction., Santa Clara, California, SPIE-The International Society for Optical Engineering.

Ramloll, R., W. Yu, et al. (2000). “Constructing Sonified Haptic Line Graphs for the Blind Student: First Steps”. ACM Assets 2000.

Ranta, J. F. and W. A. Aviles (1999). “The Virtual Reality Dental Training System - Simulating dental procedures for the purpose of training dental students using haptics”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Page 200: Chapter 7 Guidelines for Spatial Metaphors

Bibliography 415

Rao, R. and S. K. Card (1994). “The Table Lens: Merging Graphical and Symbolic Representations in an Interactive Focus + Context Visualisation for Tabular Information”. ACM Conference on Human Factors in Computing Systems, New York, USA.

Rasmussen, J. and K. J. Vicente (1989). “Coping with human errors through system design: implications for ecological interface design.” International Journal of Man-Machine Studies 31: 517-534.

Reachin (2001). Reachin API Programmer's Guide, Revision No. 300-501-002-03002. Sweden, Reachin Core Technology.

Reeves, B. and Y. Hayashi (1999). “Looking Good: Exploring Visual Representation of Object Metrics”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Reeves, B. and C. Nass (2000). “Perceptual Bandwidth.” Communications of the ACM 43(3): 65-70.

Reinig, K. D. (1996). “Haptic Interaction with the Visible Human”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Reiss, S. P. (1997). “Cacti: A Front End for Program Visualisation”. Proceedings of IEEE Symposium on Information Visualization, Phoenix, Arizona, USA, IEEE Computer Society.

Rekimoto, J. and M. Green (1993). “The Information Cube: Using Transparency in 3D Information Visualization”. 3rd Annual Workshop on Information Technologies & Systems.

Rennison, E. (1994). “Galaxy of News: An Approach to Visualizing and Understanding Expansive News Landscapes”. ACM Symposium on User Interface Software and Technology, New York, USA.

Reynolds, C. W. (1987). “Flocks, herds and schools: A distributed behavioural model.” Computer Graphics 21(4): 25-34.

Rhea, R. (1932). The Dow Theory. Vermont, Fraser Publishing.

Rheingold, H. (1991). Virtual Reality. Great Britain, Mandarin.

Richards, D. (1999). “Visualising Knowledge Based Systems”. Softvis '99, University of Technology, Sydney, Information Visualisation Research Group, The Department of Computer Science and Software Engineering, The University of Newcastle, Australia.

Riedel, B. and A. M. Burton (2002). “Perception of Gradient in Haptic Graphs by Sighted and Visually Impaired Subjects”. Eurohaptics 2002 International Conference, Edinburgh, Scotland.

Risch, J. S., D. B. Rex, et al. (1997). “The STARLIGHT Information Visualization System”. IEEE International Conference on Information Visualization, London, UK.

Roberts, J. C. (2000). “Multiple-View and Multiform Visualisation”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Robertson, G. G., S. K. Card, et al. (1993). “Information Visualisation Using 3D Interactive Animation.” Communications of the ACM 36(4): 57-71.

Robertson, G. G. and J. D. Mackinlay (1993). “The Document Lens”. ACM Symposium on User Interface Software and Technology.

Robertson, G. G., J. D. Mackinlay, et al. (1991). “Cone Trees: Animated 3D Visualisations of Hierarchical Information”. Human Factors in Computing Systems CHI '91, New York.

Rock, I. and J. Victor (1964). “Vision and Touch: An experimentally created conflict between the senses.” Science 143: 594-596.

Rojdestvenski, I., D. Modjeska, et al. (2000). “Sequence World: A Genetics Database in Virtual Reality”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 261-267.

Rosch, E. (1975). “The Nature of Mental Codes for Color Categories.” Experimental Psychology: Human Perception and Performance 1: 303-322.

Rose, S. and P. C. Wong (2000). “Driftweed - A visual metaphor for interactive analysis of multivariate data”. Visual Data Exploration and Analysis Conference VII, 2000, USA, SPIE - The International Society for Optical Engineering.

Roth, P., L. Petrucci, et al. (1998). “AB-Web: Active audio browser for visually impaired and blind users”. International Conference on Auditory Display, Glasgow, UK.

Royce, W. W. (1970). “Managing the development of large software systems: Concepts and techniques”. WESCON.


Rumbaugh, J., M. Blaha, et al. (1991). Object-oriented Modeling and Design. Englewood Cliffs, New Jersey, Prentice Hall International Inc.

Ruspini, D. (1997). “Adding Motion to Constraint Based Haptic Rendering Systems: Issues & Solutions”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Ruspini, D. and O. Khatib (1998). “Acoustic Cues for Haptic Rendering Systems”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Ruspini, D. C., K. Kolarov, et al. (1997). “Haptic Interaction in Virtual Environments”. International Conference on Intelligent Robots and Systems, Grenoble, France, IEEE.

Russo Dos Santos, C., P. Gros, et al. (2000). “Mapping Information onto 3D Virtual Worlds”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 379-386.

S

Sadarjoen, A., T. Van Walsum, et al. (1994). “Particle Tracing Algorithms For 3D Curvilinear Grids”. 5th Eurographics Workshop on Visualization in Scientific Computing, Rostock, Germany.

Salisbury, K. (1996). “An Overview of Haptics Research at MIT's AI Lab”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Salisbury, J. K. and M. A. Srinivasan (1997). “Phantom-Based Haptic Interaction with Virtual Objects.” Computer Graphics and Applications 17(5): 6-10.

Salton, G., J. Allan, et al. (1995). “Automatic Analysis, Theme Generation, and Summarization of Machine-Readable Text.” Science 264(3): 1421-1426.

Satava, R. M. (1995). “Medicine 2001: The King Is Dead”. In Interactive Technology and the New Paradigm for Healthcare. Washington, DC, IOS Press: 334-339.

Sawhney, N. and C. Schmandt (1997). “Design of Spatialized Audio in Nomadic Environments”. International Conference on Auditory Display, Palo Alto, California.

Scaletti, C. (1994). “Sound Synthesis Algorithms for Auditory Data Representations”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 223-252.

Scaletti, C. and A. B. Craig (1991). “Using Sound to Extract Meaning from Complex Data”. SPIE Conference.

Schmandt, C. and A. Mullins (1995). “AudioStreamer: Exploiting Simultaneity for Listening”. CHI '95.

Schneider, T. D. and R. M. Stephens (1990). “Sequence Logos: A New Way to Display Consensus Sequences.” Nucleic Acids Research 18: 6097-6100.

Seeger, A., J. Chen, et al. (1997). “Controlling Force Feedback Over a Network”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Sekuler, R. and R. Blake (1990). Perception. New York, McGraw-Hill Publishing Company.

Sensable (1997). Ghost Software Developer's Toolkit: API Reference Version 1.0. Cambridge, MA, SensAble Technologies, Inc.

Shaw, R. (1998). “"Nearest Neighbor" Approach to Haptic Collision Detection”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Shimoga, K. B. (1992). “Finger and touch feedback issues in dexterous telemanipulation”. NASA-CIRSSE International Conference on Intelligent Robotic Systems for Space Exploration, Troy, N.Y.

Shinn-Cunningham, B. and N. I. Durlach (1994). “Defining and Redefining Limits on Human Performance in Auditory Spatial Displays”. International Conference on Auditory Display, Santa Fe, New Mexico.

Shneiderman, B. (1982). “System message design: Guidelines and experimental results”. Directions in Human-Computer Interaction. A. Badre and B. Shneiderman. Ablex, Norwood, NJ: 55-78.

Shneiderman, B. (1992a). “Tree Visualization with Treemaps: A 2D Space Filling Approach.” ACM Transactions on Graphics 11(1): 92-99.

Shneiderman, B. (1992b). Designing the User interface. Reading, Massachusetts, Addison-Wesley Publishing Company.

Shneiderman, B. (1994). “Dynamic Queries for Visual Information Seeking.” IEEE Software 11(6): 70-77.

Simon, A. and M. Göbel (2002). “The i-Cone: A Panoramic Display System for Virtual Environments”. Pacific Graphics 2002, Beijing, China.


Sjöström, C. (1997). “The PHANToM for Blind People”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Smith, S. (1990). “Representing Data with Sound”. Proceedings of IEEE Visualization 1990, IEEE Computer Society Press.

Smith, S., R. M. Pickett, et al. (1994). “Environments for Exploring Auditory Representations of Multidimensional Data”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 167-184.

Smith, S. L. and J. N. Mosier (1986). Guidelines for Designing User Interface Software. Bedford, MA, USA, The MITRE Corporation.

Snibbe, S., S. Anderson, et al. (1998). “Springs and Constraints for 3D Drawing”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Snyder, J. P. (1993). Flattening the Earth: Two Thousand Years of Map Projections. Chicago, University of Chicago Press.

Soukup, T. (2002). Visual data mining: techniques and tools for data visualization and mining. New York, John Wiley & Sons.

Speeth, S. D. (1961). “Seismometer Sounds.” Journal of the Acoustical Society of America 33: 909-916.

Spence, R. (2001). Information Visualization. London, Addison-Wesley.

Spence, R. and M. D. Apperley (1977). “The interactive man-computer dialogue in computer-aided electrical circuit design.” IEEE Transactions on Circuits and Systems CAS-24(2): 49-61.

Spence, R. and M. D. Apperley (1982). “Data Base Navigation: An Office Environment for the Professional.” Behaviour and Information Technology 1(1): 43-54.

Spoerri, A. (1993). “InfoCrystal: A Visual Tool for Information Retrieval”. IEEE Visualization 93, Los Alamitos, USA.

Srinivasan, M. A. (1996). “Haptic Research at the MIT Touch Lab”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Srinivasan, M. A. and C. Basdogan (1997). “Haptics in Virtual Environments: Taxonomy, Research Status, and Challenges.” Computer & Graphics 21(4): 393-404.

Srinivasan, M. A., G. L. Beauregard, et al. (1996). “The impact of visual information on haptic perception of stiffness in virtual environments”. ASME Dynamic Systems and Control Division.

Stasko, J. T. (1992). Three-Dimensional Computation Visualization. Atlanta, GA, USA, Georgia Institute of Technology.

Stevenson, D. R., K. A. Smith, et al. (1999). “Haptic Workbench: A multi-sensory virtual environment”. Stereoscopic Displays and Virtual Reality Systems VI, USA, SPIE.

Storey, A. D., M. Storey, et al. (1997). “On Integrating Visualization Techniques for Effective Software Exploration”. Proceedings of IEEE Symposium on Information Visualization, Phoenix, Arizona, USA, IEEE Computer Society.

Stroustrup, B. (1996). The C++ Programming Language. Reading, Massachusetts, Addison-Wesley Publishing Company.

Stuart, R. (1996). The Design of Virtual Environments. New York, McGraw-Hill.

Sutherland, I. E. (1965). “The Ultimate Display”. IFIP'65 International Federation for Information Processing.

Sutherland, I. E. (1968). “A head-mounted three-dimensional display”. AFIPS.

T

Tan, T., Y. Shi, et al., Eds. (2000). Third International Conference in Multimodal Interfaces, Springer.

Tarr, C. and J. K. Salisbury (1997). “Haptic Rendering of Visco-Elastic and Plastic Surfaces”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Taylor, R. M. I. (1998). “Network Access to a Phantom through VRPN”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Taylor, R. M. I., W. Robinett, et al. (1993). “The Nanomanipulator: A Virtual-Reality Interface for a Scanning Tunneling Microscope”. SIGGRAPH '93, Los Angeles, USA.

Tian, G. Y. and D. Taylor (2000). “Colour Image Retrieval Using Virtual Reality”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 261-267.


Tramberend, H. (1999a). “Avango: A Distributed Virtual Reality Framework”. IEEE Conference on Virtual Reality, Houston, USA.

Tramberend, H., F. Hasenbrink, et al. (1999b). “AVOCADO - The Virtual Environment Framework”. ERCIM News.

Tran, Q. T. and E. D. Mynatt (2000). “Music Monitor: Dynamic Data Display”. International Conference on Auditory Display, Atlanta, Georgia, USA.

Trumbo, B. E. (1981). “A Theory for Coloring Bivariate Statistical Maps.” The American Statistician 35(4).

Tufte, E. (1983). The Visual Display of Quantitative Information. Cheshire, Connecticut, Graphics Press.

Tufte, E. (1990). Envisioning Information. Cheshire, Connecticut, Graphics Press.

Tufte, E. (1997). Visual Explanations. Cheshire, Connecticut, Graphics Press.

Turk, M. and G. Robertson (2000). “Perceptual User Interfaces.” Communications of the ACM 43(3): 33-34.

Tweedie, L., R. Spence, et al. (1996). “Externalizing Abstract Mathematical Models”. ACM Conference on Human Factors in Computing Systems.

Tweedie, L. A. (1997). “Characterizing Interactive Externalizations”. ACM Conference on Human Factors in Computing Systems, Atlanta, USA.

Tzanetakis, G. and R. P. Cook (2000). “Experiments in computer-assisted annotation of audio”. International Conference on Auditory Display, Atlanta, Georgia, USA.

Tzelgov, J., R. Srebro, et al. (1987). “Radiation detection by ear and by eye.” Human Factors 29(1): 87-98.

U

V

Van Scoy, F. L. (2000). “Sonification of Remote Sensing Data: Initial Experiment”. Information Visualisation 2000. E. Banissi, M. Bannatyne, C. Chen et al. London, England, IEEE Computer Society: 453-460.

Vedula, S. (1996). “Force Feedback in Interactive Dynamic Simulation”. The First PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Veldkamp, P., G. Turner, et al. (1998). “Incorporating Haptics into Mining Industry Applications”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Vickers, P. and J. L. Alty (1998). “Towards some Organising Principles for Musical Program Auralisations”. International Conference on Auditory Display, Glasgow, UK.

Vickers, P. and J. L. Alty (1996). “CAITLIN: A Musical Program Auralisation Tool to Assist Novice Programmers with Debugging”. International Conference on Auditory Display, Palo Alto, California.

Vickers, P. and J. L. Alty (2000). “Musical Program Auralisation: Empirical Studies”. International Conference on Auditory Display, Atlanta, Georgia, USA.

Von der Heyde, M. and C. Häger-Ross (1998). “Psychophysical experiments in a complex virtual environment”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

W

Walker, B. N., G. Kramer, et al. (2000). “Psychophysical Scaling of Sonification Mappings”. International Conference on Auditory Display, Atlanta, Georgia, USA.

Wall, S. A., K. Paynter, et al. (2002). “The Effect of Haptic Feedback and Stereo Graphics in a 3D Target Acquisition Task”. Eurohaptics 2002 International Conference, Edinburgh, Scotland.

Wanger, L. (1998). “Haptically Enhanced Molecular Modeling: A Case Study”. The Third PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Wellner, P., W. Mackay, et al. (1993). “Computer-Augmented Environments: Back to the Real World.” Communications of the ACM 36(7): 24-25.

Welch, R. B. and D. H. Warren (1980). “Immediate Perceptual Response to Intersensory Discrepancy.” Psychological Bulletin 88(3): 638-667.

Wenzel, E. M. (1994). “Spatial Sound and Sonification”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 127-150.

Wenzel, E. M., M. Arruda, et al. (1993). “Localization using nonindividualized head-related transfer functions.” Journal of the Acoustical Society of America 94: 111-123.


Wernecke, J. (1994). The Inventor Mentor. Reading, Massachusetts, Addison-Wesley Publishing Company.

Wesche, G., O. Kraemer-Fuhrmann, et al. (1997). “Highly interactive visualization of fluid dynamics in a virtual environment”. ISATA 1997, International Symposium on Automotive Technology and Automation, Florence.

Wickens, C. D. and P. Baker (1995). “Cognitive Issues in Virtual Reality”. Virtual Environments and Advanced Interface Design. W. Barfield and T. A. Furness. New York, Oxford University Press: 514-541.

Wilkes, G. A. and W. A. Krebs, Eds. (1990). The Collins Concise Dictionary of the English Language. Australia, Collins/Angus & Robertson Publishers.

Williams, S. M. (1994). “Perceptual Principles in Sound Grouping”. Auditory Display: Sonification, Audification and Auditory Interfaces. G. Kramer, Addison-Wesley Publishing Company. XVIII: 95-126.

Wilson, J. P., R. J. Kline-Schoder, et al. (1999). “Algorithms for Network-Based Force Feedback”. The Fourth PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Wise, J. A., J. J. Thomas, et al. (1995). “Visualizing the Non-Visual: Spatial Analysis and Interaction with Information from Text Documents”. IEEE Information Visualization 95.

Wright, P., B. Fields, et al. (1994). “Deriving human error tolerance requirements from tasks”. First International Conference on Requirements Engineering, IEEE.

Wright, W. (1995). “Information Animation Applications in the Capital Markets”. IEEE Information Visualization 95, New York.

Wynblatt, M., D. Benson, et al. (1997). “Browsing the World Wide Web in a Non-Visual Environment”. International Conference on Auditory Display, Palo Alto, California.

X

Y

Yeung, E. (1980). “Pattern recognition by audio representation of multivariate analytical data.” Analytical Chemistry 52: 1120-1123.

Young, P., T. Chen, et al. (1997). “LEGOLAND: A Multi-Sensory Environment for Virtual Prototyping”. The Second PHANToM User's Group Workshop, Cambridge, Massachusetts, USA, MIT.

Yu, W., K. Cheung, et al. (2002). “Automatic Online Haptic Graph Construction”. Eurohaptics 2002 International Conference, Edinburgh, Scotland.

Yu, W., R. Ramloll, et al. (2000). “Haptic Graphs for Blind Computer Users”. First Workshop on Haptic Human-Computer Interaction.

Z

Zacharov, N. and K. Koivuniemi (2001). “Audio Descriptive Analysis and Mapping of Spatial Sound Displays”. International Conference on Auditory Display, Espoo, Finland.

Zemanovic, B. (1992). Guidelines for good manufacturing practice in the smallgoods industry. Canberra, Australia, Australian Govt. Pub. Service.

Zhang, K.-B. and K. Zhang (1999). “An Incremental Approach to Graph layout based on Grid Drawing”. Softvis '99, University of Technology, Sydney, Australia, Information Visualisation Research Group, Department of Computer Science and Software Engineering, University of Newcastle, Australia.


Appendix A

Information Metaphors

Information Metaphor (2001)

The following appendix contains a table with an alphabetical listing of the various applications and information metaphors referred to in the thesis. For each application all sections of the thesis that refer to that application are noted. At the end of the list of applications is another series of tables. These tables link the various section numbers to the relevant concepts within the MS-Taxonomy.
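The cross-reference table in this appendix is, in effect, an index from each application to the thesis sections that discuss it. As an illustrative sketch only (not part of the thesis), the same lookup can be expressed as a small Python mapping; the three sample entries are copied from the table, everything else is hypothetical:

```python
# Illustrative sketch: the appendix's application-to-section index as a
# Python mapping. The entries below are taken from the table; the
# structure itself is an assumption for illustration only.
metaphor_index = {
    "3D Wheel [Chuah and Eick 1997]": ["3.2.2.2"],
    "Audio Rooms [Mynatt 1994]": ["3.3.1.5", "5.3.1"],
    "GROPE [Batter and Brooks 1972]": ["3.4.1.3", "3.4.2.1"],
}

def sections_for(application: str) -> list:
    """Return the thesis sections that refer to the given application."""
    return metaphor_index.get(application, [])
```

Looking up an application not listed in the table simply returns an empty list, mirroring a blank row in the printed table.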


APPENDIX A Information Metaphors 423

A.1 Alphabetically ordered table of applications and information metaphors

Application (Information Metaphor)

Columns: SPATIAL (Visual, Auditory, Haptic) | DIRECT (Visual, Auditory, Haptic) | TEMPORAL (Visual, Auditory, Haptic)

3D Wheel [Chuah and Eick 1997]

3.2.2.2

3D-Rooms [Robertson, Card et al. 1993]

5.2.1.1

Air traffic control alarms [Athènes, Chatty et al. 2000]

5.2.1.4, 5.2.2

Analyse parallel programs [Madhyastha and Reed 1994]

3.3.1 4.3.2

Analysing supply chains [Chuah, Roth et al. 1995]

4.2.4

Animated graph drawings [Friedrich 2002]

5.2.1.3

Animated sort algorithm [Baecker, 1981]

5.2.1.3

Audio Rooms [Mynatt 1994]

3.3.1.5 5.3.1

Audio web browsing [Wynblatt, Benson et al. 1997]

3.3.2.1

AudioStreamer [Schmandt and Mullins 1995]

3.3.1.2

Auditory data display [Bly 1994]

4.3.1 4.3.2 4.3.3

Auditory display of chaos [Mayer-Kress, Bargar et al. 1994]

5.3.2

Auditory fluid flow [McCabe and Rangwalla 1994]

4.3.2 5.3.2

Auditory graph drawing [Bennett and Edwards 1998]

3.3.2.1

Auditory icons [Gaver 1986]

4.3.4 5.3.1.45.3.2

Auditory internet browser [Roth, Petrucci et al. 1998]

3.3.1.3, 3.3.2.1

Auditory map navigation [Blattner, Papp et al. 1994]

5.3.1.2

Auditory menu navigation [Leplâtre and Brewster 1998]

5.3.2

Auditory petroleum wells [Barass and Zehner 2000]

4.3.5

Auditory scatterplot 3.3.1

Auditory tune browser [Fernström and McNamara 1998]

3.3.1.2

Barplot - 3D 3.2.1.3, 3.2.2.2

Beacons [Kramer 1994b]

5.3.2

Chernoff Faces [Chernoff 1973]

3.2.2.2

Cityscape [Russo Dos Santos, Gros et al. 2000].

3.2.1.3, 3.2.2.1

3.3.2.1

Collaborative exploration [Harding, Loftin et al. 2000]

5.2.1.1

Cone Trees [Robertson, Mackinlay et al. 1991]

3.2.2.1 5.2.1.1, 5.2.1.3, 5.2.1.3

Continuous sound space [Scaletti 1994]

5.3.1.3

Data Map 3.2.1.2, 3.2.2.1

Designing circuit diagrams [Spence and Apperley 1977]

5.2.1.1


Dimensional Stacking [LeBlanc, Ward et al. 1990]

3.2.1.5

Driftweed [Rose and Wong 2000]

3.2.2.2

Dynadraw, Griddraw [Snibbe, Anderson et al. 1998]

3.4.2.1

Earcons [Blattner, Papp III et al. 1994] [Brewster, Wright et al. 1994]

5.3.2

Earthquake audification [Hayward 1994]

5.3.2

Filmfinder [Ahlberg and Shneiderman 1994]

3.2.1.2 4.2.1

Financial Landscapes [Wright 1995]

3.2.1.3, 3.2.2.1, 3.2.2.2

4.2.1 5.2.1.1, 5.2.1.2, 5.2.1.3, 5.2.2

Fisheye View [Furnas 1981]

3.2.1.4

Fluid flow in blast furnace [Nesbitt, Gallimore 2001]

3.4.1.2 4.4.1 5.4.1.2

Friction and viscosity [Ruspini, Kolarov et al. 1997]

4.4.5

Galaxy of News [Rennison 1994]

5.2.1.1

Geographical information [Madhyastha and Reed 1994]

3.3.1 4.3.1 4.3.3

Geological porosity [Aviles and Ranta 1999]

4.4.5

Graph Drawing [diBattista, Eades et al. 1999] [Becker, Eick et al. 1990a] [Becker, Eick et al. 1990b]

3.2.2.1

Graph transitions [Chen 2000]

5.2.1.3

GROPE [Batter and Brooks 1972]

3.4.1.3, 3.4.2.1

Haptic anatomical organs [Mor 1996]

3.4.1.3 4.4.5

Haptic CAD models [Larsson 1997]

3.4.1.3

Haptic clay sculptures [Massie 1996]

3.4.1.3

Haptic constraints [Hutchins and Gunn 1999]

3.4.1.2, 3.4.2.1

Haptic dendrites [Avila and Sobierajski 1996b]

3.4.2.1 5.4.1.1

Haptic drag forces [Ottensmeyer and Salsibury 1997]

4.4.1

Haptic electronics console [Davidson 1996]

3.4.1.3

Haptic grid [Fritz and Barner et al. 1996] [Yu, Ramloll et al. 2000]

3.4.2.1

Haptic histogram [Yu, Cheung et al. 2002]

3.4.3

Haptic interface widgets [Eberhardt, Neverov et al. 1997] [Miller 1998][Ruspini 1997] [Oakley, McGee et al. 2000]

3.4.1.2, 3.4.2.1

4.4.2 4.4.3

Haptic lineplot [Ramloll, Yu et al. 2000] [Sjöström 1997]

3.4.1.1, 3.4.1.2, 3.4.2.1, 3.4.3


Haptic medical simulators [Reinig 1996]

3.4.1.3

Haptic mesh [Fritz and Barner et al. 1996] [Lin, Gregory et al. 1999]

3.4.2.1

Haptic mineral exploration [Veldkamp, Turner et al. 1998]

3.4.2.1 4.4.1

Haptic motion planning [Bayazit, Song et al. 1999]

3.4.2.1

Haptic paintings [Sjöström 1997]

3.4.2.1

Haptic Pie Chart [Yu, Cheung et al. 2002]

3.4.3

Haptic scatterplot [Fritz and Barner et al. 1996]

3.4.1 3.4.1.2

Haptic slider widgets [Massie 1996]

3.4.1.1

Haptic surfaceplot [Sjöström 1997]

3.4.3

Haptic vector fields [Fritz and Barner et al. 1996] [Durbeck, Macias et al. 1998]

3.4.1.3, 3.4.2.1

5.4.1.1

Histogram 3.2, 3.2.1, 3.2.1.2, 3.2.2.2, 3.2.3

4.2.3

Histograms within histograms [Mihalisin, Timlin et al. 1991a]

3.2.1.5

Hyperbolic Trees [Lamping and Rao 1994] [Munzer and Burchard 1995] [Carpendale, Cowperthwaite et al. 1997]

3.2.1.4 5.2.1.2

InfoCrystal [Spoerri 1993]

4.2.2 4.2.3 4.2.4

InfoCube [Rekimoto and Green 1993]

3.2.1.5

Legoworld [Young, Chen et al. 1997]

3.4.1.3, 3.4.2.1

4.4.6 5.4.1.1

Lifelines [Freeman and Fertig 1995]

3.2.1.1

Lineplot - 1D 3.2.1.1

Lineplot - 2D 3.2.1.2, 3.2.2.1, 3.2.3

Lineplot - 3D 3.2.1.3, 3.2.3

Loudness Nesting [Kramer 1994b]

5.3.1 5.3.2

Magnetic metaphor [Noma, Kitamura et al. 1996] [Wall, Paynter et al.2002]

3.4.2.1

Matrix of scatterplots [Bertin 1981] [Cleveland 1994]

3.2.1.5

Medical image segmenter [Giess, Evers et al. 1998]

3.4.1.3, 3.4.2.2

Medical trainer [Huang, Qu et al. 1998]

4.4.5

Mercator [Mynatt 1994]

5.3.1.4

Music Monitor [Tran and Mynatt 2000]

5.3.2


Musical computer programs [Vickers and Alty 1998]

5.3.2

Nanomanipulator [Taylor, Robinett et al. 1993]

3.4.1.2 4.4.3

Narcissus [Hendley, Drew et al. 1995]

3.2.2.1

Nomadic Radio [Sawhney and Schmandt 1997]

3.3.1.2, 3.3.1.5, 3.3.2.1

Parallel Coordinates [Inselberg 1997]

3.2.1.2

Particle tracing [Sadarjoen, van Walsum et al. 1994]

5.2.1.1

Personal WebMelody [Barra, Cillio et al. 2001]

5.3.2

Perspective Wall [Mackinlay, Robertson et al. 1991]

5.2.1.3

Perspective Wall [Mackinlay, Robertson et al. 1991]

3.2.1.4

Physically-based simulators [Latimer and Salsibury 1997]

3.4.1.3 5.4.1.1

Physiological monitoring [Fitch and Kramer 1994]

4.3.2 4.3.3

5.3.2

Pie chart 3.2.3

Pitch and brightness nesting [Kramer 1994b]

5.3.2

Remote collaboration [Oakley, Brewster et al. 2000]

3.4.1.2 4.4.7 5.4.1.3

Remote texture sensing [Green 1997]

4.4.2 4.4.5

Rose diagram 3.2.3

Sandpaper [Minsky, Ouh-yong et al. 1990]

4.4.2

Scatterplot - 2D 3.1.1, 3.2.1.2, 3.2.2.1, 3.2.2.2, 3.2.3

Scatterplot - 3D 3.2.1.3

SCULPT [Wanger 1998]

3.4.2.1

SeeNet [Eick and Wills 1993]

3.2.2.1 5.2.1.2, 5.2.1.3

5.3.1.4, 5.3.2

SeeSys [Baker and Eick 1995]

4.2.1

Seismic interpreter [Nesbitt, Orenstein et al. 1997] [McLaughlin and Orenstein 1997]

3.4.1.2, 3.4.1.3, 3.4.2.1

4.2.1 4.4.1 4.4.6

SemNet [Fairchild, Poltrock et al. 1988]

4.2.1

SequenceWorld [Rojdestvenski, Modjeska et al. 2000]

4.2.4

ShareMon [Cohen 1994]

4.3.4

Small multiples [Tufte 1990]

3.2.1.5

Sonic graph [Mansur, Blattner et al. 1985]

4.3.2 5.3.1.3, 5.3.2

Sonic grid [Scaletti 1994]

5.3.2

Sonic histogram [Scaletti 1994]

5.3.1.3


Sonic scatterplot [Madhyastha and Reed 1994]

5.3.1.3

SonicFinder [Gaver 1994]

5.3.1.3, 5.3.1.4

Sonnet software debugging [Jameson 1994]

4.3.1 4.3.3

5.3.1.3

Sound fields [Eckel 1998]

3.3.2.1

Spiral circular plot [Cugini, Laskowski et al. 2000]

3.2.1.1

Spotfire [Ahlberg and Wistrand 1995]

3.2.1.2

Stick figures [Pickett and Grinstein 1988]

3.2.2.2

Streamlines [Giles and Haimes 1990]

5.2.1.1

Submarine [Sjöström 1997]

3.4.2.1

Surfaceplot - 3D 3.2.1.3, 3.2.2.2, 3.2.3

Surgical trainer [O'Toole, Playter et al. 1997]

4.4.4

TennisViewer [Jin and Banks 1997]

5.2.1.3

Texture Navigation [Tian and Taylor 2000]

4.2.3

Themescape [Wise, Thomas et al. 1995]

3.2.1.33.2.2.1

Traffic Collision Avoidance [Begault and Wenzel 1992]

3.3.1.3

Treemaps [Shneiderman 1992] [Johnson and Shneiderman 1991]

3.2.1.5, 3.2.2.1

4.2.2 5.2.1.3

Tuning parallel algorithms [Jackson and Francioni 1994]

4.3.1 4.3.2 4.3.3

5.3.1

UML diagrams [Booch, Rumbaugh et al. 1999]

3.2.2.1

Veterinary simulator [Crossan, Brewster et al. 2000]

5.4.1.2

VisDB (Spiral rectangular plot) [Keim and Kriegel 1994]

3.2.1.1 4.2.1

Worlds-within-Worlds [Feiner 1990]

3.2.1.5


A.2 Tables linking section numbers to concepts in the MS-Taxonomy

SPATIAL METAPHORS (chapter 3)

Section Concept

3.2 Spatial Visual Metaphors

[Taxonomy diagram: Spatial Visual Metaphors, comprising the Visual Display Space (orthogonal 1D/2D/3D, distorted and subdivided space), Visual Spatial Structures (global and local) and Visual Spatial Properties]

3.2.1 Visual Display Space
3.2.1.1 Orthogonal 1D Visual Space
3.2.1.2 Orthogonal 2D Visual Space
3.2.1.3 Orthogonal 3D Visual Space
3.2.1.4 Distorted Visual Space
3.2.1.5 Subdivided Visual Space
3.2.2 Visual Spatial Structures
3.2.2.1 Global Visual Spatial Structures
3.2.2.2 Local Visual Spatial Structures
3.2.3 Visual Spatial Properties

3.3 Spatial Auditory Metaphors

[Taxonomy diagram: Spatial Auditory Metaphors, comprising the Auditory Display Space (orthogonal 1D/2D/3D, distorted and subdivided space), Auditory Spatial Structures (global and local) and Auditory Spatial Properties]

3.3.1 Auditory Display Space
3.3.1.1 Orthogonal 1D Auditory Space
3.3.1.2 Orthogonal 2D Auditory Space
3.3.1.3 Orthogonal 3D Auditory Space
3.3.1.4 Distorted Auditory Space
3.3.1.5 Subdivided Auditory Space
3.3.2 Auditory Spatial Structures
3.3.2.1 Global Auditory Spatial Structures
3.3.2.2 Local Auditory Spatial Structures
3.3.3 Auditory Spatial Properties

3.4 Spatial Haptic Metaphors

[Taxonomy diagram: Spatial Haptic Metaphors, comprising the Haptic Display Space (orthogonal 1D/2D/3D, distorted and subdivided space), Haptic Spatial Structures (global and local) and Haptic Spatial Properties]

3.4.1 Haptic Display Space
3.4.1.1 Orthogonal 1D Haptic Space
3.4.1.2 Orthogonal 2D Haptic Space
3.4.1.3 Orthogonal 3D Haptic Space
3.4.1.4 Distorted Haptic Space
3.4.1.5 Subdivided Haptic Space
3.4.2 Haptic Spatial Structures
3.4.2.1 Global Haptic Spatial Structures
3.4.2.2 Local Haptic Spatial Structures
3.4.3 Haptic Spatial Properties


DIRECT METAPHORS (chapter 4)

Section Concept

4.2 Direct Visual Metaphors

[Taxonomy diagram: Direct Visual Metaphors, comprising Direct Visual Properties (colour: hue, saturation and intensity/grey scale; visual texture; direct visual shape) attached to Visual Spatial Structures]

4.2.1 Colour - Hue and Saturation
4.2.2 Colour - Intensity
4.2.3 Visual Texture
4.2.4 Direct Visual Shape

4.3 Direct Auditory Metaphors

[Taxonomy diagram: Direct Auditory Metaphors, comprising Direct Auditory Properties (musical properties: loudness, pitch, timbre; everyday properties; synthesis properties) attached to Auditory Spatial Structures]

4.3.1 Loudness
4.3.2 Pitch
4.3.3 Timbre
4.3.4 Everyday Properties
4.3.5 Sound Synthesis Properties

4.4 Direct Haptic Metaphors

[Taxonomy diagram: Direct Haptic Metaphors, comprising Direct Haptic Properties (force, haptic surface texture, direct haptic shape, compliance, friction, weight, vibration) attached to Haptic Spatial Structures]

4.4.1 Force
4.4.2 Surface Texture
4.4.3 Direct Haptic Shape
4.4.4 Compliance
4.4.5 Viscosity and Friction
4.4.6 Inertia and Weight
4.4.7 Vibration and Flutter


TEMPORAL METAPHORS (chapter 5)

Section Concept

5.2 Temporal Visual Metaphors

[Diagram: the taxonomy of Temporal Visual Metaphors - Visual Events (movement, display space, transition and alarm events) and Visual Temporal Structure.]

5.2.1 Visual Event
5.2.1.1 Visual Movement Event
5.2.1.2 Visual Display Space Event
5.2.1.3 Visual Transition Event
5.2.1.4 Visual Alarm Event
5.2.2 Visual Temporal Structure

5.3 Temporal Auditory Metaphors

[Diagram: the taxonomy of Temporal Auditory Metaphors - Auditory Events (movement, display space, transition and alarm events) and Auditory Temporal Structure.]

5.3.1 Auditory Event
5.3.1.1 Auditory Movement Event
5.3.1.2 Auditory Display Space Event
5.3.1.3 Auditory Transition Event
5.3.1.4 Auditory Alarm Event
5.3.2 Auditory Temporal Structure

5.4 Temporal Haptic Metaphors

[Diagram: the taxonomy of Temporal Haptic Metaphors - Haptic Events (movement, display space, transition and alarm events) and Haptic Temporal Structure.]

5.4.1 Haptic Event
5.4.1.1 Haptic Movement Event
5.4.1.2 Haptic Display Space Event
5.4.1.3 Haptic Transition Event
5.4.1.4 Haptic Alarm Event
5.4.2 Haptic Temporal Structure


Appendix B

Train of Thought Network

Abstract Train of Thought (2003)

This thesis is based on a complex network of various trains of thought. For example, established fields such as data mining, Virtual Environments and human perception interweave throughout the thesis. The thesis also introduces several new tracks of thought, such as the MS-Taxonomy, the MS-Guidelines and the MS-Process. The following appendix contains a series of network diagrams that describe the various tracks of thought that make up this thesis. The diagrams are based on the familiar railway network map. Each track on a diagram represents a specific train of thought and the stations represent the ideas associated with that train of thought. Following the various tracks may assist the reader to understand the various pathways through the thesis.


[Network diagrams: an overview map with interconnecting tracks for Abstract Data, Data Mining, Human Perception, Virtual Environments, Information Display, Software Engineering, the MS-Taxonomy, the MS-Guidelines, the MS-Process and the Case Study. Stations along the tracks include ideas such as finding patterns, perceptual data mining, information visualisation, sonification and haptisation, spatial, direct and temporal metaphors, guidelines for perception and for information display, the design-process steps (task analysis, data characterisation, display mapping, prototyping, evaluation), VE platforms (WEDGE, Barco Baron, Responsive Workbench, Haptic Workbench, i-CONE) and the stock market case study displays (3D bar chart, moving average surface, bid-ask landscape, and their haptic and auditory counterparts).]


[Diagram: the Software Engineering track. Stations include abstraction, architecture, taxonomy, quality principles, design guidelines, design process and iterative prototyping. It links to the MS-Taxonomy model of a Spatial Metaphor (Display Space - Orthogonal 1D/2D/3D, Distorted and Subdivided Space; Global and Local Spatial Structures; Spatial Properties - position, scale and orientation over length (x), width (y) and depth (z)) and to an Iterative Process Model (System Specification, System Design, System Implementation, System Test, System Evaluation and System Use).]

Many of the approaches used to derive the MS-Taxonomy come from Software Engineering. For example, abstraction, architectures and taxonomies.

Process concepts are also used in software engineering. For example, the iterative and prototyping process models.

Guidelines are also used to improve the quality of designs.


[Diagram: the MS-Guidelines track. Stations include guidelines structure, design guidelines, guidelines for perception, guidelines for information display, guidelines for the MS-Taxonomy, guidelines for spatial, direct and temporal metaphors, and formative evaluation.]

The MS-Guidelines assist in the design of multi-sensory displays by directing the design process and checking design outcomes. They are collected from a range of sources, including perceptual science, information display, HCI and user interface design, and are structured using the MS-Taxonomy into spatial, direct and temporal guidelines.


[Diagram: the MS-Process track. Stations include design process, task analysis, data characterisation, display mapping (mapping spatial, direct and temporal metaphors; considering the hardware and software platform), prototyping, and evaluation (expert heuristic, formative and summative evaluation).]

The MS-Process is an iterative five-step process for designing multi-sensory displays: Step 1, Task Analysis; Step 2, Data Characterisation; Step 3, Display Mapping; Step 4, Prototyping; Step 5, Evaluation. Prototypes are used to help refine and evaluate designs. The display mapping step is structured using the MS-Taxonomy, while the MS-Guidelines direct design and help with formative evaluations. The process accommodates a range of different evaluations: expert heuristic evaluation, formative evaluation and summative evaluation.
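As a sketch, the iterative five-step process could be outlined in code as follows. The step names come from the thesis; the function names and the fixed iteration count are illustrative assumptions, not part of the MS-Process itself.

```python
# Hypothetical outline of the five-step iterative MS-Process.
STEPS = [
    "Task Analysis",
    "Data Characterisation",
    "Display Mapping",      # structured using the MS-Taxonomy
    "Prototyping",
    "Evaluation",           # heuristic, formative or summative
]

def run_ms_process(iterations=2):
    """Walk the five steps repeatedly, recording each (iteration, step) visit."""
    log = []
    for i in range(1, iterations + 1):
        for step in STEPS:
            log.append((i, step))
    return log

log = run_ms_process(2)
assert log[0] == (1, "Task Analysis")
assert log[-1] == (2, "Evaluation")
```

In practice the loop would terminate when evaluation shows the design meets its goals, rather than after a fixed count.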


Appendix C

The XCM-Taxonomy

Piece of Mind (2000)

One concern with developing the MS-Taxonomy is whether or not it adequately covers the full multi-sensory design space. To investigate this question an alternative taxonomy of the multi-sensory design space, called the XCM-Taxonomy, was constructed. This extended taxonomy is based on a previous visual taxonomy developed by Card, Mackinlay and Shneiderman [Card, Mackinlay et al. 1999].

Appendix C contains a copy of a previously published paper [Nesbitt 2001b]. This paper describes the extended XCM-Taxonomy. The paper also compares the MS-Taxonomy and the XCM-Taxonomy to show how they are equivalent. This provides confidence that both taxonomies cover the full multi-sensory design space.


Modeling the Multi-Sensory Design Space

Keith V. Nesbitt Basser Department of Computer Science

University of Sydney, Australia [email protected]

Abstract

Research into the visualization of abstract data has resulted in a number of domain specific solutions as well as some more generic techniques for visualising information. Similarly, the field of sonification has explored the display of information to the human auditory sense. As haptic displays such as force-feedback devices become more readily available, the sense of touch is also being used to help understand information. While applications that use multi-sensory information 'displays' are becoming more common, frameworks to assist in the design of these displays need to be developed. This paper extends a previously proposed structure of the 'visual' design space to include hearing and touch and hence define a multi-sensory design space. It then correlates this space with another classification of the design space based on metaphors. Metaphors are often used as a starting point in designing information displays. Metaphors allow the user to take advantage of existing cognitive models as well as ecologically-developed perceptual skills. Metaphors provide another useful structuring of this multi-sensory design space. Throughout the paper all discussions are illustrated using the UML modeling notation. UML is a standard notation frequently used to document the design of software systems.

1 Introduction

Sometimes the 'Information Age' we live in would be better thought of as the 'Data Age'. The widespread use of computer systems has seen a massive growth in the size of data sets available to analysts, designers, managers and engineers from many domains. These data sets can be characterised as abstract, multivariate and large. Data used for stock market trading, software analysis and marketing are three example domains but many more exist.

Whether the task is to understand or, alternatively to explore these data sets, many cognitive processes can benefit from the use of external models to support thinking. ‘Information Visualisations’ is the term commonly used to describe interactive computer systems

Copyright © 2001, Commonwealth of Australia. This paper appeared at the Australian Symposium on Information Visualisation, Sydney, December 2001. Conferences in Research and Practice in Information Technology, Vol. 9. Peter Eades and Tim Pattison, Eds. Reproduction for academic, not-for profit purposes permitted provided this text is included.

that provide the user with external visual models of abstract data (Card, Mackinlay et al. 1999). Many approaches have been developed in this field to allow exploration of large multivariate data spaces. 'Sonification' is a newly evolving field that uses sound rather than vision to represent abstract data (Kramer, Walker et al. 1997). Likewise, the term 'Tactilization' has been used when the sense of touch is used for displaying abstract data (Card, Mackinlay et al. 1999). This might better be called 'Haptization', as the word haptics refers to both the tactile and kinesthetic components of the sense of touch. Most interactions involving touch require a combination of both tactile and kinesthetic feedback. Because some of the data sets being explored are so large and have so many attributes it has been proposed that multiple senses be used to analyze information in parallel. This requires the design of models that make use of multiple senses. 'Perceptualization' is the term that has been suggested to describe the multi-sensory display of abstract information (Figure 1) (Card, Mackinlay et al. 1999). Modeling the multi-sensory design space is the focus of this paper.

Figure 1. Information Perceptualization makes use of Visual, Auditory and Tactile Structures.

The field of 'Information Visualization' has progressed largely by invention as applications are developed for specific domains. The field is disjointed as it brings together not only disparate application domains but also many research areas. These research fields include computer graphics, information modeling, perceptual and cognitive science, human computer interaction and user interface design. Sometimes it seems that the design of an information display may be better described as an 'art' and not a 'science'. Yet it would be desirable for non-specialists to adopt an engineering approach to the design of information visualization. For example, to follow some simple guidelines and not have to understand complex perceptual theories beforehand or, alternatively, run time-consuming human factors experiments after the display is built. While this is true for designing visual displays it is even more critical when attempting to build a multi-sensory display, where sensory interactions and conflicts complicate perceptual issues. Categorizing the design space is an important step to assist in the development of general principles of design. Card and Mackinlay have proposed a taxonomy of the visual design space (Card, Mackinlay et al. 1999). This taxonomy is described and then extended in Section 2 to include auditory and haptic displays and so provide a categorization of the multi-sensory design space. Another approach to understanding the design space is to consider how 'metaphors' have been used as a basis of information displays. Section 3 describes the motivation behind using metaphors in design and presents a taxonomy of the design space based on metaphors. The correspondence between the 'Metaphor Design Space' and the 'Extended Card and Mackinlay Design Space' is discussed.

2 Extended Card-Mackinlay Design Space

2.1 The Extended Process

Creating visualisations for a specific task is typically an iterative process of analysis, design and use. Card and Mackinlay describe this 'Visualization Process' (Card, Mackinlay et al. 1999) that maps from 'Raw Data' into 'Views'. This process has been simplified in Figure 2 to show a more linear process from 'Raw Data' to 'Data Tables' to 'Visual Structures' and finally 'Views'. Figure 3 shows the extension of this process to include the design of multi-sensory displays.
Visual structures are augmented by auditory and haptic structures (Figure 3). The word 'haptic' has been used because it refers to all the components that make up the sense of touch. Rather than the user seeing and interacting with a 'View' of the data they interact in a display which is multi-sensory. The type of display is dependent on the system but covers the range of emerging user interfaces that can be found in Virtual Environments. Many of these environments provide 3D visual, auditory and haptic displays [3,4,5].

Figure 2. A process model for Information Visualisation, adapted from Card-Mackinlay (Card, Mackinlay et al. 1999)

Figure 3. The process model extended to include multi-sensory displays. 'Visual Structures' have been augmented by 'Auditory' and 'Haptic Structures'.

2.2 Raw Data

The multi-sensory extension (Figure 3) shows no changes to the initial part of the process. Raw data is still broadly characterized as quantitative, nominal or ordered as shown in Figure 4. Quantitative data may also be described in more detail if it has the intrinsic properties of being spatial, temporal or geographical. Raw data is transformed to a data table (Card, Mackinlay et al. 1999). Data tables place data attributes in rows and particular cases along columns of the data table (Card, Mackinlay et al. 1999). Data tables are more structured than the raw data and describe relations that can more easily be mapped into visual, auditory and haptic structures (Figure 5).

Figure 4. A UML Model of the categories of raw data. Normal abbreviations for each class are shown, for example, 'N' for Nominal or 'Qlon' for Quantitative data that is inherently longitudinal.
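A data table of this kind might be sketched as follows. The class and method names are my own for illustration, not from the paper; only the row-per-attribute, column-per-case layout and the N/O/Q categories come from the text.

```python
# Illustrative sketch of a Card-Mackinlay style data table: each row
# holds one attribute of the data, each column one case.
class DataTable:
    def __init__(self):
        self.rows = {}  # attribute name -> {"kind": ..., "values": [...]}

    def add_attribute(self, name, values, kind):
        # kind follows the raw-data categories: 'N' (nominal),
        # 'O' (ordered) or 'Q' (quantitative)
        assert kind in ("N", "O", "Q")
        self.rows[name] = {"kind": kind, "values": list(values)}

    def case(self, i):
        """Return one case (column) as an attribute -> value mapping."""
        return {name: row["values"][i] for name, row in self.rows.items()}

# Hypothetical stock market example in the spirit of the thesis case study.
table = DataTable()
table.add_attribute("stock", ["BHP", "NAB", "CSL"], kind="N")
table.add_attribute("price", [42.10, 29.55, 250.0], kind="Q")
print(table.case(0))  # {'stock': 'BHP', 'price': 42.1}
```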


Figure 5. A UML Model of a data table. Each row of the table is associated with an attribute of the data and each column is an instance or case of data.

2.3 Visual, Auditory, Haptic Structures

Visual structures encode information by augmenting a "spatial substrate with marks and graphical properties" (Card, Mackinlay et al. 1999). Temporal encoding can be thought of as an animation during which the marks, their position in the spatial substrate, or their visual properties change. A high level model of 'Visual Structure' is shown in Figure 6. Figure 7 and Figure 8 show corresponding models of auditory structures and haptic structures. Note that I have adopted the naming conventions of the visual structure model so that all three models use the basic concepts of:

• Spatial Substrate
• Marks
• Properties
• Temporal Encoding

These concepts have been proposed for visualization (Card, Mackinlay et al. 1999) but they are also applicable for both sound and touch displays. This allows us to abstract these concepts in the model of the design space. So we augment the visual, auditory and haptic spatial substrates with visual, auditory and haptic marks. These marks in turn have visual, auditory and haptic properties.

2.4 The Spatial Substrate

A spatial substrate is the most important starting point for any information display. Space is perceptually dominant and the decision about how to use space is the first design choice (Card, Mackinlay et al. 1999). The Card-Mackinlay model describes four different ways to organise space. An unstructured space has no axis. A nominal axis divides the space into regions that represent different categories. Similarly, an ordinal axis uses sub-regions but the categories have some ordering within the space. A quantitative axis uses a metric. One, two or three axes of any type may be used to describe a 1-dimensional, 2-dimensional or 3-dimensional space. A model of the spatial substrate is shown in Figure 9.
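The four axis types and the one-to-three-axes rule could be sketched as below; the class and attribute names are illustrative assumptions.

```python
# Sketch of the spatial substrate model: one, two or three axes, each
# unstructured, nominal, ordinal or quantitative.
AXIS_TYPES = {"unstructured", "nominal", "ordinal", "quantitative"}

class SpatialSubstrate:
    def __init__(self, *axis_types):
        if not 1 <= len(axis_types) <= 3:
            raise ValueError("a substrate has one, two or three axes")
        for t in axis_types:
            if t not in AXIS_TYPES:
                raise ValueError(f"unknown axis type: {t}")
        self.axis_types = axis_types

    @property
    def dimensions(self):
        return len(self.axis_types)

# A 3D space: two quantitative axes and one nominal axis.
space = SpatialSubstrate("quantitative", "quantitative", "nominal")
assert space.dimensions == 3
```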

Figure 6. A UML Model of the components of 'Visual Structures'.

Figure 7. A UML Model of the components of auditory structures.

Figure 8. A UML Model of the components of haptic structures.

Figure 9. A UML Model of the spatial substrate.

In a multi-sensory display it would be most natural to use the same spatial substrate for each sense. If we think of the normal world, objects within space are felt and heard at the same location they are seen. It is, however, possible to consider different spaces for each sense. One useful way to use this may be to provide different resolution displays for each sense. Visual resolution is often insufficient to represent a detailed view of data without some kind of zooming. Zooming in can reveal more detailed visual structure but the visual display must sacrifice global context. It is possible to use a much more compact auditory space within the same visual space. This allows detailed data attributes to be heard at much higher resolution while maintaining the global context with the visual display. This approach has been used, for example, with 'sonification' of high resolution well-log data overlaid on a much sparser visual model showing a petroleum exploration field with well locations (Fröhlich, Barrass et al. 1999). The spatial substrate can have one, two or three dimensions. There are also a number of additional artifacts that can be contained in this substrate to assist interpretation (Figure 10, Figure 11, Figure 12). We can think of such visual landmarks as a labeled axis, grid marks or navigation aids in the space. In 3D virtual worlds it is also common to have a lighting model associated with the global space and to define the user's viewpoint in the space. However, these concepts are also applicable in the auditory space. A user may hear a 'tick' each time they pass an auditory grid mark, for example. Sound can provide an excellent landmark in 3D space. The nearer the user gets to the landmark, the louder the sound becomes, and the cues remain irrespective of the direction the user is facing. Globally we might also associate different acoustic models with the space. With haptics these same concepts are also possible. For example, it is easy to imagine a 'haptic grid' where marks in the grid are felt as lines or bumps on a surface. Haptic navigation aids have been previously implemented where the user is 'drawn' into buttons or 'held' by force on a dial control (Hutchins and Gunn 1999). Again, global haptic models can be associated with the spatial substrate. For example, objects are given some weight and a gravitational field is associated with the space.
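The distance-dependent loudness of an auditory landmark described above could be sketched like this. The inverse-distance gain law, reference distance and function name are assumptions for illustration; they are not prescribed by the paper.

```python
import math

# Sketch of an auditory landmark: a sound source whose loudness grows
# as the user approaches, irrespective of the direction they are facing.
def landmark_gain(user_pos, source_pos, ref_distance=1.0):
    """Gain in [0, 1]: full volume within ref_distance, falling off
    with 1/distance beyond it (a common, simple attenuation model)."""
    d = math.dist(user_pos, source_pos)
    return 1.0 if d <= ref_distance else ref_distance / d

near = landmark_gain((0, 0, 0), (0, 0, 2))   # user 2 units away -> 0.5
far = landmark_gain((0, 0, 0), (0, 0, 10))   # user 10 units away -> 0.1
assert near > far
```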

Figure 10. A UML model showing possible contents of the visual substrate.

Figure 11. A UML model showing possible contents of the auditory substrate.

Figure 12. A UML model showing possible contents of the haptic substrate.

2.5 Marks

For visual structures, marks are simply elementary things that are visible in the spatial substrate (Card, Mackinlay et al. 1999). The most elementary types of marks are points (0D), lines (1D), areas (2D) and volumes (3D). The UML model of marks in Figure 13 shows these components. These elementary marks can also be used to describe elementary modeling units for both the sounds in an auditory display and the forces and tactile surfaces in a haptic display.

Figure 13. A UML Model of 'marks'. Note that an 'area' in 3D space is often called a 'surface'. Note also that 'Connection' and 'Enclosure' use the elementary marks to describe relationships within data.
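As a sketch, the shared mark vocabulary might be modelled as follows; the class and field names are my own invention, not the paper's.

```python
from dataclasses import dataclass, field

# Sketch of the shared 'mark' vocabulary: elementary modelling units of
# dimension 0-3 that can carry per-sense properties.
MARK_DIMENSIONS = {"point": 0, "line": 1, "area": 2, "volume": 3}

@dataclass
class Mark:
    kind: str                # 'point', 'line', 'area' or 'volume'
    position: tuple          # location in the spatial substrate
    properties: dict = field(default_factory=dict)  # visual/auditory/haptic

    @property
    def dimension(self):
        return MARK_DIMENSIONS[self.kind]

# One mark can carry properties for several senses at once.
m = Mark("area", (0.0, 0.0, 0.0), {"colour": "red", "pitch_hz": 440})
assert m.dimension == 2
```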


Sound, for example, can be modeled as arising from a point source in space. This is our common experience in the real world, where sound originates from a source somewhere in the space. In a computer model using sound it is also possible for the user to interact with the space to generate sounds; for example, moving a cursor within a volume or area, or along lines or surfaces in the space, may generate auditory feedback. A simple example is where the user touches an object and sound associated with that object is heard. This can be either a surface or a volume depending on how the object is modeled. A more complex example is the modeling of fluid flow, where parameters such as flow magnitude are determined by position in the field and these parameters in turn determine the sound generated. The fluid flow field may be an area in a 2D model or a volume in a 3D model. Haptic displays can provide both tactile and force feedback. Tactile displays provide displacements directly to the skin surface. Tactile surfaces are very consistent with our everyday use of touch. For example, touching a surface with our fingertips and registering surface texture, slip and shape. Tactile feedback can be considered as the spatial and temporal distribution of displacements on the skin (Durlach and Mavor 1996). It is possible to model this distribution as points, lines, areas (surfaces) and volumes. Another component of the haptic sense is the kinesthetic sense, which tracks position and motion of limbs and joints. Forces are sensed by the kinesthetic sensory systems and are understood as a spatial and temporal distribution of forces. Again, a model of haptic marks consisting of points, lines, areas (surfaces) and volumes is appropriate. Force displays frequently use the idea of surfaces to model haptic objects; however, models that generate forces based on points, lines or volumes are also used. A gravitational field, for example, may influence an area (2D) or volume (3D) of the haptic display.
Again the idea of marks is useful for describing haptic structures if we think of a mark as a modeling unit that is used to generate the appropriate force or tactile feedback to the user.

2.6 Properties

The other component of visual, auditory and haptic structures that we need to consider is the set of properties associated with each sense. Visual, auditory and haptic properties are very different, yet there are some similarities in the organisation of our sensory perceptions that allow us to keep the models consistent. Aligned with the Card-Mackinlay model we can distinguish 'Automatic' processing from 'Controlled' processing (Card, Mackinlay et al. 1999). 'Automatic' processing involves a direct encoding of data attributes to perceptual capabilities.

'Controlled' processing requires cognitive effort to understand or interpret a more abstract encoding. A good example of controlled visual processing is text. Speech, for the auditory sense, and Braille, for the haptic sense, are good examples of encodings that require controlled processing. Text, speech and Braille all require some level of cognitive processing to decode the information.
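The distinction between direct (automatic) and abstract (controlled) encodings might be illustrated as below; both mappings are invented for the sketch and are not taken from the paper.

```python
# Contrast a direct (automatically processed) encoding with an abstract
# (controlled) one for the same data value.
def encode_direct(value, lo, hi):
    """Direct encoding: map a value to a grey-scale intensity in [0, 1],
    perceivable without conscious decoding."""
    return (value - lo) / (hi - lo)

def encode_abstract(value):
    """Abstract encoding: render the value as text, which the viewer
    must read and interpret (controlled processing)."""
    return f"value = {value:.2f}"

assert encode_direct(75.0, 50.0, 100.0) == 0.5
assert encode_abstract(75.0) == "value = 75.00"
```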

Figure 14. A UML Model of direct and abstract visual properties.

Figure 15. A UML Model of direct and abstract auditory properties. 'Earcons' are analogous to Icons used in visualizations but use a syntax that describes musical motifs (Blattner, Sumikawa et al. 1989).

Figure 16. A UML Model of direct and abstract haptic properties. 'Touchcons' have not been described but it would be possible to use a syntax based on haptic attributes such as surface texture.


Note that I have used a slightly different terminology to that used by Card-Mackinlay. They describe abstract encodings as 'features' to distinguish them from direct encodings, which are called 'properties'. I use the naming convention of 'direct properties' and 'abstract properties'. I have also used the word "Visual" where Card-Mackinlay use the term "Graphical". The reason for these changes is to allow a more consistent naming in the UML models. High level UML models of Visual, Auditory and Haptic properties are shown in Figure 14, Figure 15 and Figure 16. We are primarily interested in direct properties because these perceptually-direct encodings require less learning than abstract encodings, and attributes of the data encoded in this way can be processed in parallel. The advantage of abstract properties is that they can express very complex concepts, and it is common for information displays to utilise them in some way, especially where detailed accurate information is required.

2.7 Visual Properties

We will now consider Visual, Auditory and Haptic properties individually. Of the three, 'Visual properties' have been the most frequently studied and applied. Bertin (Bertin 1999), for example, described six basic properties of size, orientation, gray scale, color, texture and shape, and Mackinlay (Mackinlay 1986) used a rule-based system for directly mapping from data attributes to visual properties. A number of other properties have also been suggested that allow for automatic processing. These include length, line orientation, width, curvature, intensity and 3D depth cues (Figure 17) (Card, Mackinlay et al. 1999).

Figure 17. A UML Model of direct visual properties.

2.8 Auditory Properties

Two different types of listening have been distinguished, 'everyday listening' and 'musical listening' (Gaver 1986). 'Everyday listening' focuses on attributes of the sound's

source. In the real world these attributes can reveal a lot of information. For example, when an object is tapped a number of questions are asked. What material is the object made of? Is the object hollow? Does it contain anything? How hard was the object struck? Thus properties or attributes of the object are revealed by interpreting the sound. When displaying abstract data, everyday sounds are useful for displaying categorical data, using a cross-section of sounds as signatures of the categories. 'Musical listening' involves the skills used to appreciate music. Musical listening helps to discriminate pitch and timbre. It also helps discern a number of temporal patterns such as rhythm and melody. The ability to hear information in music varies greatly between individuals. Figure 18 shows the 'Direct Auditory Properties'. Note that our sense of hearing is very proficient at monitoring, and so understanding, information that evolves over time. Hence temporal encodings are most important for interpreting both everyday and musical sounds.
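The idea of everyday sounds as signatures for categorical data could be sketched as below. The category names and sound descriptions are hypothetical; a real display would trigger recorded or synthesised audio samples rather than return strings.

```python
# Sketch: everyday sounds used as signatures for categorical data.
SOUND_SIGNATURES = {
    "buy": "wooden tap",
    "sell": "metallic clink",
    "hold": "dull thud",
}

def sonify_category(category):
    """Return the everyday-sound signature for a category, if any."""
    return SOUND_SIGNATURES.get(category, "silence")

assert sonify_category("buy") == "wooden tap"
assert sonify_category("unknown") == "silence"
```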

Figure 18. A UML Model of types of sound.

2.9 Haptic Properties

Tasks performed by the sense of touch can be divided into either "exploration" or "manipulation" tasks (Durlach and Mavor 1996). Since we are most interested in the display of properties, we are more interested in the exploration tasks, which are sensory dominant. However, despite the division into exploration and manipulation, most haptic tasks involve some combination of exploring and manipulating (Figure 19). There are many types of information gathered by the different sensory receptors. While this information includes temperature, pain, tactile, chemogenic and kinesthetic information (Figure 20), it is only tactile and kinesthetic information that can currently be displayed by existing user interface devices. Tactile refers to the sense of contact with an object, while kinesthetic refers to the sense of motion and position in space of joints and limbs (Figure 21).


The direct tactile properties are shown in Figure 22 and the direct kinesthetic properties are shown in Figure 23. The sense of touch is like hearing in that information is often conveyed by how the properties evolve over time. Hence temporal encodings for haptics must be considered carefully. Haptic displays were first developed for use in telemanipulation, and only in relatively recent times have they been integrated more generally as a component of the user interface. The Phantom from SensAble Technologies (Salisbury and Srinivasan 1997) is a force feedback device. It is the most generally available and practical device for displaying haptic properties. However, it allows only forces to be displayed at a point, which limits the display of tactile properties and prevents the simultaneous display of forces over a space greater than a single point. Despite this it is still possible to display many direct properties.

Figure 19. A UML Model of the two main types of haptic tasks, exploration and manipulation. For most tasks our haptic sense both senses and acts on the environment.

Figure 20. A UML Model of the types of haptic sensory receptors.

Figure 21. A UML Model of the tactile and kinesthetic senses. Currently it is not practical to provide output to the other haptic components of pain, heat and itch.

Figure 22. A UML Model of direct haptic properties, showing tactile properties.

Figure 23. A UML Model of direct haptic properties, showing kinesthetic properties.

2.10 Temporal Encoding

The final components considered are the temporal encodings for each sense. Temporal encodings modify the position of marks or their properties over time (Figure 6, 7, 8). Visual temporal encodings are relatively straightforward and are useful for studying data which changes over time. This can be reflected by a movement of the marks or a change to some property like colour (Figure 24). Auditory temporal encodings are very important, as the variation in sound properties over time conveys detailed information, and complex properties in the data can be understood as musical qualities such as rhythm and melody (Figure 25). It is also possible to design haptic temporal encodings, for example where the hardness of an object changes (Figure 26).

Figure 24. Visual Temporal Encoding.

Figure 25. Auditory Temporal Encoding.

Figure 26. Haptic Temporal Encoding.

3 The Metaphor Model of the Design Space

The previous discussion has focused on the extension of a design space that was originally proposed by Card-Mackinlay (Card, Mackinlay et al. 1999) for information visualisation. A previous classification of the same multi-sensory design space based on metaphors has also been proposed (Nesbitt 2000).

Metaphors provide a starting model that the user can exploit to help understand structure or relations in the visualisation, and they are often used as a starting point in designing information displays. Metaphors not only allow the user to take advantage of existing cognitive models; they also exploit ecologically developed perceptual skills. Designing a multi-sensory display is difficult because perceptual conflicts occur. Using metaphors from the real world as models for display design helps ensure that we use our perceptions in the information world as we do in the real world. The metaphor classification describes five broad clusters of metaphors (Table 1), which are discussed below. The first two clusters contain spatial and temporal metaphors, which apply to all senses. The other three clusters contain metaphors of sight, sound and touch, specific to the visual, auditory and haptic senses respectively.

                   Metaphor Cluster
                   Spatial  Temporal  Sight  Sound  Touch
Visual Display        x        x        x
Auditory Display      x        x               x
Haptic Display        x        x                      x

Table 1. Classification of metaphors.

1. 'Spatial Metaphors' relate to scale, location and structure. Spatial metaphors can carry quantitative information. Relationships can be described by position on a map or on a two- or three-dimensional grid. Data placed in proximity can be visually assessed in compare-and-contrast activities. Structures such as tree graphs or data maps can carry the broader overview information that is important to the user. Both touch and sound also provide useful cues for object position in space. Spatial metaphors are appropriate for all senses and concern the way pictures, sounds and forces are organised in space.
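The quantitative use of spatial metaphors can be sketched as a simple position mapping: two attributes of each record are mapped to x/y coordinates on a grid, so that nearby marks can be compared directly. The record fields, axis ranges and grid size below are illustrative assumptions.

```python
# Sketch of a spatial metaphor: quantitative attributes are encoded as
# positions on a discrete 2-D grid, supporting compare-and-contrast.
# The records, ranges and grid resolution are assumed for illustration.

def to_grid(value, vmin, vmax, cells):
    """Map a value in [vmin, vmax] onto a grid coordinate in [0, cells - 1]."""
    t = (value - vmin) / (vmax - vmin)
    return min(int(t * cells), cells - 1)

records = [{"price": 10, "size": 2}, {"price": 90, "size": 8}]
positions = [(to_grid(r["price"], 0, 100, 10),
              to_grid(r["size"], 0, 10, 10)) for r in records]
print(positions)  # [(1, 2), (9, 8)]
```

The same positional mapping could in principle drive an auditory display (spatialised sound sources) or a haptic one (force-feedback locations), since spatial metaphors apply to all three senses.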

2. ‘Temporal Metaphors’ are concerned with how data changes with time. This cluster includes concepts of movement, animation, rhythms and cycles. Interaction in a virtual environment, as in the real world, presents information that can change with time, and these changes can carry information. Temporal metaphors also apply to all senses and concern how we perceive changes to pictures, sounds and forces over time.

3. ‘Sight Metaphors’ use direct mappings from information to the attributes of sight. These include colour, light, shape and surface texture (Hutchins and Gunn 1999). The use of a colour scale is an example of a common mapping used to represent data values (Nesbitt 2000). Icons are an example of how abstract shapes can convey information through intuitive symbols.

4. ‘Sound Metaphors’ deal with direct mappings to typical sound properties such as pitch, amplitude and timbre, and also to more musical qualities such as rhythm and melody. Auditory metaphors are less common; however, good examples exist in the real world. The Geiger counter, which uses sound to display radiation levels, is a well-understood auditory metaphor (Salisbury and Srinivasan 1997).

5. ‘Touch Metaphors’ relate to tactile properties such as force, inertia and vibration. Object properties such as weight and density can be assessed by the inertia that must be overcome when moving an object. Other object properties, such as hardness and surface texture, can also be used to encode information.

Altogether there are nine broad categories of metaphor which make up this classification. They are:

1. Visual Spatial Metaphors
2. Visual Temporal Metaphors
3. Sight Metaphors
4. Auditory Spatial Metaphors
5. Auditory Temporal Metaphors
6. Sound Metaphors
7. Haptic Spatial Metaphors
8. Haptic Temporal Metaphors
9. Touch Metaphors

Sense     Card-Mackinlay component                         Metaphor
Visual    Spatial substrate augmented with visual marks    Visual Spatial Metaphors
Visual    Temporal encoding                                Visual Temporal Metaphors
Visual    Direct visual properties                         Sight Metaphors
Auditory  Spatial substrate augmented with auditory marks  Auditory Spatial Metaphors
Auditory  Temporal encoding                                Auditory Temporal Metaphors
Auditory  Direct auditory properties                       Sound Metaphors
Haptic    Spatial substrate augmented with haptic marks    Haptic Spatial Metaphors
Haptic    Temporal encoding                                Haptic Temporal Metaphors
Haptic    Direct haptic properties                         Touch Metaphors

Table 2. Correlation between the metaphor design space and the extended Card-Mackinlay design space.
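The correlation in Table 2 can be represented as a simple lookup structure. The component names used as keys below are shortened paraphrases of the table's row labels, chosen for illustration.

```python
# Sketch of Table 2 as a lookup table: each (sense, design-space
# component) pair maps to its metaphor category. Key strings are
# assumed short forms of the table's row labels.
CORRELATION = {
    ("visual",   "spatial substrate + marks"): "Visual Spatial Metaphors",
    ("visual",   "temporal encoding"):         "Visual Temporal Metaphors",
    ("visual",   "direct properties"):         "Sight Metaphors",
    ("auditory", "spatial substrate + marks"): "Auditory Spatial Metaphors",
    ("auditory", "temporal encoding"):         "Auditory Temporal Metaphors",
    ("auditory", "direct properties"):         "Sound Metaphors",
    ("haptic",   "spatial substrate + marks"): "Haptic Spatial Metaphors",
    ("haptic",   "temporal encoding"):         "Haptic Temporal Metaphors",
    ("haptic",   "direct properties"):         "Touch Metaphors",
}

print(CORRELATION[("haptic", "direct properties")])  # Touch Metaphors
```

The nine entries correspond exactly to the nine broad metaphor categories listed earlier.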

The correlation between the metaphor design space and the Card-Mackinlay multi-sensory design space is shown in Table 2. There is a natural correlation between temporal metaphors and the concept of temporal encodings. Similarly, the idea of a spatial substrate with marks matches the idea of spatial metaphors. Finally, the sight, sound and touch metaphors correspond to the direct properties for each sense.

4 Conclusion

I have outlined two classifications of the multi-sensory design space. One is based on an extension of the design space previously proposed by Card and Mackinlay; the other is a previously proposed metaphor classification. The two classifications are shown to cover the same design space. In the next stage of this work, guidelines are being developed to assist in understanding how to apply different parts of the design space to different tasks. These will be tested in different application domains. More work is also needed to verify the correctness of the classification, especially in the less well understood domains of sound and haptics.

5 References

BERTIN, J. (1999): Graphics and Graphic Information Processing. Berlin: De Gruyter (1977/1981). In Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers.

BLATTNER, M.M., SUMIKAWA, D. and GREENBERG, R. (1989): Earcons and Icons: Their Structure and Common Design Principles. Human-Computer Interaction, 4(1).

CARD, S.K., MACKINLAY, J.D. and SHNEIDERMAN, B. (1999): Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers.

DURLACH, N.I. and MAVOR, A.S. (1996): Virtual Reality: Scientific and Technological Challenges. National Academy Press, Washington, DC.

FRÖHLICH, B., BARRASS, S., ZEHNER, B., PLATE, J. and GÖBEL, M. (1999): Exploring GeoScience Data in Virtual Environments. Proc. IEEE Visualization, 1999.

GAVER, W.W. (1986): Auditory Icons: Using Sound in Computer Interfaces. Human-Computer Interaction, 2: 167-177.

HUTCHINS, M. and GUNN, C. (1999): A Haptic Constraints Class Library. Proceedings of the Fourth PHANTOM Users Group Workshop. A.I. Laboratory Technical Report No. 1675. November, 1999. MIT.

KRAMER, G., WALKER, B. et al. (1997): Sonification Report: Status of the Field and Research Agenda. Prepared for the National Science Foundation by members of the International Community for Auditory Display.

MACKINLAY, J.D. (1999): Automating the Design of Graphical Presentations (1986). In Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers.

NESBITT, K.V. (2000): A Classification of Multi-sensory Metaphors for Understanding Abstract Data in a Virtual Environment. Proc. IEEE International Conference on Information Visualisation, London.

SALISBURY, J.K. and SRINIVASAN, M.A. (1997): Phantom-Based Haptic Interaction with Virtual Objects. IEEE Computer Graphics and Applications, 17(5): 6-10.