
1536-1268/02/$17.00 © 2002 IEEE PERVASIVE computing 67

The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms

The interactive workspaces project started at Stanford University in 1999 to investigate human interaction with large high-resolution displays. The project initially operated in a busy lab in which the display proved to be no more than a curiosity since it could not be used for long periods of time and offered little integration with other devices. It became clear that the potential of a large display device would emerge only by embedding it in a ubiquitous computing environment that could sustain realistic interactive use. The interactive workspaces project therefore began to design and use rooms containing one or more large displays that had the ability to integrate portable devices.

The idea of ubiquitous computing1 encompasses many different kinds of settings and devices. We chose to narrow our focus by

• Investigating how to map a single defined physical location to an underlying systems infrastructure and a corresponding model of interaction2

• Emphasizing the use of large interactive walk-up displays, some using touch interaction

• Collaborating with other research groups, both within and outside the field of computer science, to construct nontoy applications

We constructed several versions of our prototype interactive workspace, which we call the iRoom, created a software infrastructure for this environment, called iROS, and conducted experiments in human-computer interaction in the workspace. We also assisted outside groups in using our technology to construct application suites that address problems in their own domains, and we deployed our software in production environments.

Overview, goals, and contributions

As we began to construct the iRoom, we developed some guiding principles:

• Practice what we preach. From the beginning we used the iRoom as our main project meeting room and employed the software tools that we constructed. Much of our continuing research has been motivated by our frustration at encountering something we could not accomplish in the iRoom.

• Emphasize colocation. There is a long history of research on computer-supported cooperative work for distributed access (teleconferencing support). To complement this work, we chose to explore new kinds of support for team meetings in single spaces, taking advantage of the shared physical space for orientation and interaction.

The interactive workspaces project explores new possibilities for people working together in technology-rich spaces. The project focuses on augmenting a dedicated meeting space with large displays, wireless or multimodal devices, and seamless mobile appliance integration.

INTEGRATED ENVIRONMENTS

Brad Johanson, Armando Fox, and Terry Winograd, Stanford University

• Rely on social conventions. Many projects have attempted to make an interactive workspace smart.3,4 Rather than have the room react to users, we chose to focus on letting users adjust the environment as they proceed with their task. In other words, we set our semantic Rubicon2 so that users and social conventions take responsibility for actions and the system infrastructure is responsible for providing a fluid means of executing those actions.

• Aim for wide applicability. Rather than investigating systems and applications just in our specific space, we decided to investigate software techniques that would also apply to differently configured workspaces. We wanted to create standard abstractions and application design methodologies that apply to any interactive workspace.

• Keep it simple. At both the interface and software development levels, we tried to keep things simple. On the human-interface side, we faced a fundamental trade-off in interaction design between the necessity of supporting diverse hardware and software and the need to provide an interface simple enough so that people would use it. On the software development side, we tried to keep APIs as simple as possible, both to make the client-side libraries easier to port and to minimize the barrier to entry for application developers.

The iRoom is our second-generation prototype interactive workspace (see Figure 1). Several other iRooms have been created at Stanford and elsewhere (see the sidebar “The iRoom and Beyond: Evolution and Use of Deployed Environments”). The iRoom contains three touch-sensitive whiteboard displays along the side wall and a custom-built 9-megapixel, 6-foot-diagonal display called the interactive mural. In addition, there is a table with a 3 × 4 foot display that was designed to look like a standard conference-room table. The room also has cameras, microphones, wireless LAN support, and several wireless buttons and other interaction devices.

We started our research by determining the types of activities users would carry out in an interactive workspace. Through our own use, and through consultation with collaborating research groups, we arrived at the three following task characteristics:

1. Moving data. Users in the room need to be able to move data among the various visualization applications that run on screens in the room and on laptops or PDAs that are brought into the workspace.

2. Moving control. To minimize disruption during collaboration sessions, any user should be able to control any device or application from his or her current location.

3. Dynamic application coordination. The specific applications needed to display data and analyze scenarios during team problem-solving sessions are potentially diverse. One company reported using over 240 software tools during a standard design cycle. Any number of these programs might be needed during a single meeting. The activities of each tool should coordinate with others as appropriate. For example, the financial impacts of a design change in a CAD program should automatically show up in a spreadsheet program that shows related information running elsewhere in the room.

Based on our experiences with the iRoom, we identified some key characteristics to be supported by the infrastructure and interfaces in an interactive workspace. Multiple devices (PDAs, workstations, laptops, and so forth) will be in simultaneous use in a workspace, with each chosen for its efficacy in accomplishing some specific task. There will also be heterogeneous software running on these devices, including both legacy and custom-built applications. All of these must be accessible to one another in a standard way so that the user can treat them as a uniform collection. This means that any software framework must provide cross-platform support. From the HCI perspective, interfaces must be customized to different displays and possibly to different input-output modalities, such as speech and voice.

68 PERVASIVE computing http://computer.org/pervasive


Figure 1. A view of the interactive room (iRoom).

Furthermore, unlike a standard PC, an interactive workspace by its nature has multiple users, devices, and applications all simultaneously active. On short time scales, the individual devices might be turned off, wireless devices might enter and exit the room, and pieces of equipment might break down for periods of minutes, hours, or days. On longer time scales, workspaces will incrementally evolve rather than be coherently designed and instantiated once and for all. While providing for these dynamic changes, interactive workspaces must also work with a minimum of administration if they are to be widely deployed. It is not realistic to expect a full-time system administrator to keep a workspace running, so we need to anticipate failure as a common case rather than as an exception.2 The system must provide for quick recovery either automatically or through a simple set of user steps.

iROS meta-operating system

For any real-world system to support the modalities and characteristics just described, the system infrastructure must mirror the

APRIL–JUNE 2002 PERVASIVE computing 69

The iRoom and Beyond: Evolution and Use of Deployed Environments

The iRoom project started with the first version of the Interactive Mural, a four-projector tiled display built in several stages from 1998 to 1999. It included a pressure-sensitive floor that tracked users in front of the display with one-foot accuracy. The pressure-sensitive floor was used in some artistic applications but has not been duplicated in the iRoom. The first iteration of the iRoom was constructed in Summer 1999. Like the current version, it had three smart boards and the iTable, but it had a standard front-projected screen instead of the interactive mural at the front of the room.

Perhaps the biggest mistake we made in constructing the first iRoom was in planning the cabling. It might seem like an obvious thing in retrospect, but the number of cables needed to connect mouse, keyboard, video, networking, and USB devices quickly escalated, leaving a tangle of cables that were not quite long enough. For the second version of the iRoom, we learned from our mistakes and made a careful plan of cable routes and lengths in advance. We made sure to label both ends of every cable with what they were connecting.

In Summer 2000, we integrated the Interactive Mural at the front of the iRoom, requiring a reconfiguration of the entire workspace. During the remodel, we introduced more compact light-folding optics for the projectors on the smart boards and did a better job of running the cables for the room. We added a developer lab adjacent to the room along with a sign-in area that holds mobile devices and a dedicated machine that could be used for room control. We configured the developer station with a KVM (keyboard-video-mouse) switch so that all of the iRoom PCs can be accessed from any of four developer stations. Figure A shows the floor plan of the second version of the iRoom.

One of the big headaches in building the second version of the iRoom was dealing with projector alignment and color calibration.1

Since building the second version, Interactive Workspaces technology has been deployed at six more locations around our campus. Through various collaborations, Interactive Workspaces group software is now being used in iRooms in Sweden and Switzerland. The i-Land2 group has also done some work that uses the Event Heap in conjunction with their own software framework.

REFERENCES

1. M.C. Stone, “Color and Brightness Appearance Issues in Tiled Displays,” IEEE Computer Graphics & Applications, vol. 21, no. 5, Sept./Oct. 2001, pp. 58–66.

2. N. Streitz et al., “i-LAND: An Interactive Landscape for Creativity and Innovation,” Proc. ACM Conf. Human Factors in Computing Systems (CHI 99), ACM Press, New York, 1999, pp. 120–127.

Figure A. Floor plan and behind-the-scenes look at the second version of iRoom.


applications and human-computer interfaces written on top of it. In addition, human-computer interfaces must consider the underlying system’s properties to ensure that they are not too brittle for use in real-world situations. We call our system infrastructure the Interactive Room Operating System (iROS). It is a meta-OS that ties together devices that each have their own low-level OS. In designing the system, we kept the boundary principle2 in mind. The boundary principle suggests that ubiquitous computing infrastructure must allow interaction between devices only within the bounds of the local physical space—in our case, an interactive workspace.

The three iROS subsystems are the Data Heap, iCrafter, and the Event Heap. They are designed to address the three user modalities of moving data, moving control, and dynamic application coordination, respectively. Figure 2 shows how the iROS components fit together. The only system that an iROS program must use is the Event Heap, which provides for dynamic application coordination and forms the underlying communication infrastructure for applications in the interactive workspace.

iROS subsystems

Given the heterogeneity in interactive workspaces and the likelihood of failure in individual devices and applications, it is important that the underlying coordination mechanism decouple applications from one another as much as possible. Doing so encourages applications to be less dependent on one another, which tends to make the overall system less brittle and more stable. We derive the Event Heap5 coordination infrastructure for iROS from a tuplespace model,6 which offers inherent decoupling.

The Event Heap stores and forwards messages known as events, each of which is a collection of name-type-value fields. It provides a central repository to which all applications in an interactive workspace can post events. An application can selectively access events on the basis of pattern matching fields and values. One key extension we made to tuplespaces was to add expiration to events, which allows unconsumed events to be automatically removed and provides support for soft state through beaconing. Applications can interface with the Event Heap through several APIs, including Web, Java, and C++. The Event Heap differs from tuplespaces in several other respects that make it better suited for interactive workspaces.5
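The post-and-match model with expiration can be sketched in a few lines of Python. This is purely illustrative; the class and method names are ours, not the actual Event Heap API:

```python
import time

class EventHeap:
    """Toy tuplespace-style event store: events are dicts of
    name->value fields, posted with a time-to-live and retrieved
    by matching a template of required field values."""

    def __init__(self):
        self._events = []  # list of (expiry_time, event_dict)

    def post(self, event, ttl=60.0):
        # Expiration lets unconsumed events be garbage-collected,
        # and short-lived repeated posts act as soft-state beacons.
        self._events.append((time.time() + ttl, event))

    def _expire(self):
        now = time.time()
        self._events = [(t, e) for (t, e) in self._events if t > now]

    def match(self, template):
        """Return all unexpired events whose fields include every
        name-value pair in the template."""
        self._expire()
        return [e for (_, e) in self._events
                if all(e.get(k) == v for k, v in template.items())]

heap = EventHeap()
heap.post({"type": "ProjectorControl", "action": "on", "id": 3})
print(heap.match({"type": "ProjectorControl"}))
# prints [{'type': 'ProjectorControl', 'action': 'on', 'id': 3}]
```

Because receivers match on field patterns rather than sender identities, a poster never needs to know which, or how many, applications will consume an event.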

The Data Heap facilitates data movement by allowing any application to place data into a store associated with the local environment. The data is stored with an arbitrary number of attributes that characterize it. By using attributes instead of locations, applications don’t need to worry about which specific physical file system stores the data. The Data Heap stores format information, and, assuming it loads the appropriate transformation plug-ins, it will automatically transform the data to the best format supported by retrieving applications. If a device only supports JPEG, for example, the Data Heap will automatically extract and convert a retrieved PowerPoint slide into that image format.
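A toy version of attribute-based retrieval with converter plug-ins might look like this (a sketch of the idea only; the interface and converter registry are our invention, not the real Data Heap API):

```python
class DataHeap:
    """Toy attribute-addressed store: items carry attributes plus a
    format tag, and converter plug-ins bridge format mismatches."""

    def __init__(self):
        self._items = []       # (attributes_dict, format, payload)
        self._converters = {}  # (src_format, dst_format) -> function

    def register_converter(self, src, dst, fn):
        self._converters[(src, dst)] = fn

    def put(self, attrs, fmt, payload):
        self._items.append((attrs, fmt, payload))

    def get(self, query, accepted_formats):
        """Find the first item matching the attribute query and return
        it in a format the caller accepts, converting if needed."""
        for attrs, fmt, payload in self._items:
            if all(attrs.get(k) == v for k, v in query.items()):
                if fmt in accepted_formats:
                    return fmt, payload
                for dst in accepted_formats:
                    if (fmt, dst) in self._converters:
                        return dst, self._converters[(fmt, dst)](payload)
        return None

heap = DataHeap()
heap.register_converter("ppt", "jpeg",
                        lambda slides: ["jpeg:" + s for s in slides])
heap.put({"meeting": "review", "slide": 1}, "ppt", ["slide-1"])
print(heap.get({"meeting": "review"}, ["jpeg"]))  # ('jpeg', ['jpeg:slide-1'])
```

The caller names what it wants and what it can display; where the bytes live and how they get translated is the store's problem.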

The iCrafter system7 provides service advertisement and invocation, along with a user interface generator for services. iCrafter services are similar to those provided by systems such as Jini,8 except that invocation happens through the Event Heap. The interface manager service lets users select a service to control and then automatically returns the best interface for the user’s device. iCrafter communicates directly with the services through the Event Heap. When a custom-designed interface is available for a device-service pair, the iCrafter system sends it. Otherwise, a more generic generator renders the interface into the highest quality type supported on the device. Generation happens with interface templates that are automatically customized according to the local environment’s characteristics. If room geometry is available, for example, a light controller can show the actual positions of the lights on a graphical representation of the workspace. Templates also let users combine multiple services in a single interface.

General principles

iROS applications do not communicate directly with one another; instead, they use indirection through the Event Heap. This helps avoid highly interdependent application components that could cause each other to crash. All of the iROS systems decouple applications referentially, with information routed by attribute rather than recipient name. Attribute-based naming is also used, among other places, in the Intentional Naming system.9 The Event Heap and Data Heap also decouple applications temporally, allowing applications to pick up messages generated before they were running or while they were crashed and restarting.

In our design, we treat failure as a common case. When something breaks, the system can simply restart it. Clients automatically reconnect, so the Event Heap server, interface manager, and Data Heap server can all be restarted without interfering with applications other than during the period when connectivity is lost. Thus, any subset of machines malfunctioning in the workspace can be restarted. Any important state that might be lost during this process is either stored in persistent form in the Data Heap or is beaconed as soft state as the clients come back online.

Because the Web is popular, a great deal




Figure 2. The iROS component structure.

of technology has been developed and deployed that uses browsers and HTTP. We tried to leverage this technology wherever we could in the iROS system. The system supports moving Web pages from screen to screen, event submission through URLs and forms, and automatic HTML UI generation through iCrafter.

Human-computer interaction

In designing the interactive aspects of the iRoom, our goal has been to let the user remain focused on the work being done rather than on the mechanics of interaction. The HCI research has included two main components: developing interaction techniques for large wall-based displays and designing “overface” capabilities to provide access and control to information and interfaces in the room as a whole.

Our primary target user setting is one that we call an open participatory meeting. In this setting, a group of 2 to 15 people works to accomplish a task, usually as part of an ongoing project. People come to the meeting with relevant materials saved on their laptops or on file servers. During the meeting there is a shared focus of attention on a primary display surface, with some amount of side work that draws material from the shared displays and can bring new material to it. In many cases, a facilitator stands at the board and is responsible for overall activity flow. Other participants might also present something during the meeting. These meetings can at times include conventional presentations, but our goal is to facilitate interaction among participants.

Examples of such meetings conducted in the iRoom include our own project group meetings, student project groups in courses, construction management meetings, brainstorming meetings by design firms, and training-simulation meetings for school principals.

Large high-resolution displays

We were initially motivated to take advantage of interaction with large high-resolution displays. In a meeting, the presenter or facilitator focuses on the board’s contents and on the other participants. Using a keyboard is distracting, so we designed methods for direct interaction with a pen or with direct touch on the board. The interactive mural is too large for today’s touch-screen technologies, so we tested both laser and ultrasound technologies10 as input devices. The current system uses an eBeam ultrasonic pen augmented with a button to distinguish two modes of operation, one for drawing and one for commands. The eBeam system does not currently support multiple simultaneous users.

We wanted to combine the benefits of two research threads in our interface: whiteboard functionality for quick sketching and GUI functionality for applications. We developed the PostBrainstorm interface11,12 to provide a high-resolution display that has the ability to intermix direct marking, image control, 3D rendering, and arbitrary desktop applications. Our key design goal was to provide fluid interaction that would not divert user focus from person-to-person interaction. This goal led to developing several new mechanisms:

• FlowMenu is a contextual pop-up menu system that combines the choice of an action with parameter specification in a single pen stroke, which makes it possible to avoid interface modes that can distract users not devoting their full attention to the interface.13 Because the menu is radial rather than linear, multilevel operations can be learned as a single motion path or gesture; in many cases the user does not even need to look at the menu to select an action.

• ZoomScape is a configurable warping of the screen space that implicitly controls an object’s visible scale. The object retains its geometry while being scaled as a whole. In our standard configuration, the top quarter of the screen reduces the object size. An object can be moved out of the main area of the screen and reduced, providing a fluid mechanism to manage screen real estate without requiring explicit commands.

• Typed Drag-and-Drop is a handwriting recognition system that runs as a background process, leaving the digital ink and annotating it with the interpreted characters. Through FlowMenu commands, you can specify a sheet of writing to have a desired semantic and then drag it onto the target object to have the intended effect. This provides a crossover between simple board interaction and application-specific GUI interactions.
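The ZoomScape idea of position-dependent scale can be illustrated with a simple piecewise scale map. This is our own toy formulation: the boundary and scale values are illustrative, not the iRoom's actual configuration:

```python
def zoomscape_scale(y, boundary=0.75, top_scale=0.5):
    """Return the display scale for an object centered at normalized
    height y (0 = bottom of screen, 1 = top). Objects in the main
    area keep full size; past the boundary the scale ramps down
    linearly to top_scale, so dragging an object upward into the
    reduction zone smoothly shrinks it without an explicit command."""
    if y <= boundary:
        return 1.0
    # Linear ramp from 1.0 at the boundary to top_scale at the top edge.
    t = (y - boundary) / (1.0 - boundary)
    return 1.0 + t * (top_scale - 1.0)

print(zoomscape_scale(0.5))  # 1.0  (main area: full size)
print(zoomscape_scale(1.0))  # 0.5  (top edge: half size)
```

Because the mapping is a continuous function of position, moving an object to the periphery and shrinking it are one gesture rather than two.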

Several groups of industrial designers from two local design firms (IDEO and SpeckDesign) tested the system. Their overall evaluation was positive11 and alerted us to specific areas needing improvement. In addition to experimenting with these facilities on the high-resolution interactive mural, we ported them to standard Windows systems and used them on the normal touch screens in the iRoom.

Room-based cross-platform interfaces

One obvious advantage of working in a room-based environment is that people share a common model of where devices are positioned, which they can use as a convenient way of identifying them. Our room controller (see Figure 3) uses a small map of the room to indicate the lights, projectors, and display surfaces. We use simple toggles and menus associated with objects in the map to switch video inputs to projectors and to turn lights and projectors on or off.

Initial versions of this controller were built as standard GUI applications, which could only run on some systems. We broadened their availability to a wider range of devices by providing them as Web pages (using forms) and as Web applets (using Java). Our later research generalized the process with iCrafter.7 The room-control system stores the geometric arrangement of screens and lights in the room in a configuration file and will automatically provide controllers on any device supporting a UI renderer available through iCrafter. Figure 3 shows examples for several devices.

In addition to providing environment control, the same room control interface serves as the primary way to move information onto displays. The user indicates an information object (URL or file), the appropriate application to display it, and the display on which it should appear using the interface on their device. The user actions generate an event that is picked up by a daemon running on the target machine, which then displays the requested data.
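The daemon side of this flow reduces to a small event filter. The field names below are illustrative, not the actual iROS event schema:

```python
def handle_show_event(event, my_display, open_viewer):
    """Display-daemon logic: react only to 'ShowData' events addressed
    to this machine's display, handing the referenced object to the
    requested viewer application. Returns True if the event was ours."""
    if event.get("type") == "ShowData" and event.get("display") == my_display:
        open_viewer(event["application"], event["url"])
        return True
    return False

shown = []
handle_show_event(
    {"type": "ShowData", "display": "mural",
     "application": "browser", "url": "http://iwork.stanford.edu"},
    "mural",
    lambda app, url: shown.append((app, url)))
print(shown)  # [('browser', 'http://iwork.stanford.edu')]
```

In a running daemon this function would sit in a loop that repeatedly retrieves matching events from the Event Heap.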

Room-based input devices

In an interactive workspace, physical input devices belong to the space rather than a specific machine. We implemented an overhead scanner based on a digital camera. This scanner lets users digitize sketches and other material placed in a certain area of the table. This provided an alternative to sketching on tablet computers, which had the wrong feel when used by a team of brainstormers. The overhead scanner provides a method to bring traditional media into the space in a manner that has low cognitive overhead.

In addition to the overhead scanner, we introduced other devices, such as a bar-code scanner and simple wireless input devices, such as buttons and sliders. We used the bar-code scanner, for example, to implement a system similar to the BlueBoard system.14 When the bar-code scanner posts an event, the application checks a table of codes registered to individual iRoom users; if there is a match, it posts the user’s personal information space to one of the large boards. We can associate handheld wireless iRoom buttons with any actions through a Web-form interface. For example, a push on a particular button can bring up a set of predesignated applications on multiple devices in the room.

Distributed application control

One aspect of the moving-control modality for interactive workspaces is a need for both direct touch interaction with the GUIs on the large screens and the ability for users standing away from the screens to control the mouse and enter text. While it is possible to walk up to the screen to interact or to request that the person at the screen perform an action on your behalf, both of these actions disrupt the flow of a meeting. Several previous systems have dealt with multiuser control of a shared device, often providing sophisticated floor-control mechanisms to manage conflicts. In keeping with our keep-it-simple philosophy, we created a mechanism called PointRight,15 which provides key functionality without being intrusive.

With PointRight, any machine’s pointing device can become a superpointer whose field of operation includes all of the display surfaces in the room. When a device runs PointRight, the edges of its screen are associated with other corresponding displays. The user simply continues moving the cursor off the edge of the local screen, and it moves onto one of the other screens, as if the displays in the room were part of a large virtual desktop. In addition to allowing this control through laptops, the room has a dedicated wireless keyboard and mouse that is always available as a general keyboard and pointer interaction device for all of the surfaces. For each active user, their keystrokes go to the machine on which their pointer is currently active.
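The edge-crossing behavior can be captured by a small routing function over an edge map. This is our own simplification of the PointRight idea, with normalized coordinates (y increasing downward) and a made-up edge-map structure:

```python
def route_pointer(screen, x, y, edge_map, width=1.0, height=1.0):
    """Given a cursor position that may have crossed an edge of the
    current screen, return (screen, x, y) after redirection. edge_map
    maps (screen, edge) -> neighboring screen; the cursor enters the
    neighbor from the opposite edge, as if the room's displays formed
    one large virtual desktop. Unmapped edges just clamp the cursor."""
    if x < 0 and (screen, "left") in edge_map:
        return edge_map[(screen, "left")], width, y
    if x > width and (screen, "right") in edge_map:
        return edge_map[(screen, "right")], 0.0, y
    if y < 0 and (screen, "top") in edge_map:
        return edge_map[(screen, "top")], x, height
    if y > height and (screen, "bottom") in edge_map:
        return edge_map[(screen, "bottom")], x, 0.0
    return screen, min(max(x, 0.0), width), min(max(y, 0.0), height)

edges = {("laptop", "top"): "mural", ("mural", "bottom"): "laptop"}
print(route_pointer("laptop", 0.4, -0.01, edges))  # ('mural', 0.4, 1.0)
```

Keystroke routing follows the same state: input goes to whichever machine currently owns the cursor.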

Through calls to the Event Heap, interface actions in one application can trigger actions in another running on any of the machines in the workspace. The Center for Integrated Facility Engineering16 employed this technique in a suite of applications developed for use in construction-management meetings. Figure 4 shows some of the application viewers that they constructed.

All applications communicate through the Event Heap; they emit events in a common format and watch for events to which they can respond. Users can coordinate the applications by bringing them up on any of the displays in an interactive workspace. As users select and manipulate information in one of the viewers, corresponding information in the other viewers updates to reflect changes. Because the components are loosely coupled, the absence or disappearance of an event source or event receiver does not affect any of the application components currently in use.

The Smart Presenter system lets users construct coordinated presentations across the displays in an interactive workspace. Users create a script that determines the content to display. PowerPoint and Web pages are two of the supported formats, but we can also use any other format that the Data Heap supports. In addition to content, we can send any arbitrary event, so it is easy to trigger lighting changes or switch video inputs to a projector during a presentation. Smart Presenter leverages the Data Heap to ensure it can show content on any display in the workspace. In the iRoom, for example, the high-resolution front display, which only supports JPEG images, can still display PowerPoint slides because they are extracted and transformed for that display.
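A presentation script of this kind might be modeled as an ordered list of steps, each posting one or more events. The script format and event fields below are hypothetical, chosen only to illustrate how content and environment control can mix in one script:

```python
# Each step is a list of events posted together; advancing the
# presentation posts the next step's events to the Event Heap.
script = [
    [{"type": "ShowData", "display": "mural", "url": "slides.ppt#1"},
     {"type": "LightControl", "action": "dim"}],          # content + lights
    [{"type": "ShowData", "display": "board1",
      "url": "http://iwork.stanford.edu"}],               # second display
]

def advance(heap, script, step):
    """Post every event in the given step; return the next step index."""
    for event in script[step]:
        heap.post(event)
    return step + 1

class RecordingHeap:
    """Stand-in for an Event Heap client, recording posted events."""
    def __init__(self):
        self.posted = []
    def post(self, event):
        self.posted.append(event)

heap = RecordingHeap()
next_step = advance(heap, script, 0)
print(len(heap.posted), next_step)  # 2 1
```

Because steps are just events, a lighting change and a slide change are scripted the same way.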

Future directions

Several topics require additional investigation, including security, adaptation, and bridging interactive workspaces. While providing many important attributes, the loose coupling model introduces some security concerns. The indirect communication makes all messages public, which makes it easy to adapt programs to work with one another through intermediation at the expense of privacy. For now, we have



Figure 3. Examples of iCrafter-generated control interfaces: (a) Java Swing, (b) Palm handheld, and (c) HTML.


a firewall between the iRoom and the rest of the world and assume that users working together in a room have implicitly agreed to public communication. We are investigating the types of security that users need in interactive workspaces, with the hope of developing a social model for security that will in turn help us to define the appropriate software security protocols.

While our system tries to minimize the amount of time required to integrate a device into an interactive workspace, there is still overhead in configuring room geometries and specifying which servers to use. We plan to make it simpler to create and extend a workspace and to move portable devices between them. Users should only have to plug in a device or bring it into a physical space for it to become part of the corresponding software infrastructure. User configuration should be simple and prompted by the space. For example, the user might have to specify where in the room the device is located. The logical extension of this is to allow ad hoc interactive workspaces to form wherever a group of devices are gathered.

Allowing project teams in remotely located workspaces to work with one another is both interesting and useful. The main issues here include how to facilitate coordination between desired applications while ensuring that workspace-specific events remain only in the appropriate location. For example, sending an event to turn on all lights should probably remain only in the environment where it was generated. As we extend our research to multiple linked rooms and remote participants, we will use observations of users working in these linked spaces to determine how much structure we need to add. We are driven not by what is technically possible, but by what is humanly appropriate.
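One way to keep workspace-specific events local while forwarding shareable ones is a bridge that filters by event type. The event-type names and `WorkspaceBridge` class below are illustrative assumptions, not part of iROS.

```python
# Sketch of a bridge between two linked workspaces that forwards
# application events but keeps environment-control events (lights,
# projector inputs) in the room where they originated. Event-type names
# and the WorkspaceBridge class are illustrative, not part of iROS.

LOCAL_ONLY_TYPES = {"LightsOn", "LightsOff", "ProjectorInput"}

class WorkspaceBridge:
    def __init__(self, local_room, remote_room):
        self.local_room = local_room
        self.remote_room = remote_room
        self.forwarded = []   # events sent on to the remote workspace
        self.kept_local = []  # environment events suppressed at the bridge

    def on_event(self, event):
        """Called for every event posted in the local workspace."""
        if event["type"] in LOCAL_ONLY_TYPES:
            self.kept_local.append(event)   # e.g., "turn on all lights"
        else:
            # Tag forwarded events with their origin so the remote side
            # can distinguish local activity from bridged activity.
            self.forwarded.append({**event, "origin": self.local_room})

bridge = WorkspaceBridge("iRoom", "remote-lab")
bridge.on_event({"type": "LightsOn"})
bridge.on_event({"type": "SlideChange", "slide": 7})
```

A type-based filter like this is only a starting point; deciding which events are "workspace-specific" is exactly the kind of structure that observation of users in linked spaces would have to inform.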

A key design philosophy for our project has been that the user should have a minimum of specific controls and modes to learn and remember, which means that the interface should take advantage of natural mappings to the physical structure. Although it would be an overstatement to say that the interface has become completely intuitive and invisible, we continue to make steps in that direction. As with all systems built in relatively new domains, and particularly with systems that involve user interaction, it is difficult to come up with a quantitative measure of success.

APRIL–JUNE 2002 PERVASIVE computing 73

Figure 4. Some of the viewers in the suite that the Center for Integrated Facility Engineering developed for construction-management settings. Each would typically be run on its own display.

We ran several experimental meetings, including brainstorming sessions by professional designers, construction of class projects built on the iROS system, training sessions for secondary school principals, construction-management experiments for a civil engineering research project, group writing in an English course, project groups from an interaction design course, and, of course, our own weekly group meetings. The overall results have been positive, with many suggestions for further development and improvement.

Comments from programmers who have appreciated how easy it is to develop applications with our framework are also encouraging. The adoption and spread of our technology to other research groups suggests that our system is meeting the needs of the growing community of developers for interactive workspaces. For more information on the interactive workspaces project and to download the iROS software, visit http://iwork.stanford.edu.

ACKNOWLEDGMENTS
We thank Pat Hanrahan, one of the founders of the project, for his support and efforts. The accomplishments of the Interactive Workspaces group are due to the efforts of too many to enumerate here, but you can find a complete list on the Interactive Workspaces Web site. DoE grant B504665, NSF Graduate Fellowships, and donations of equipment and software from Intel, InFocus, IBM, and Microsoft have supported the work described here.

REFERENCES

1. M. Weiser, "The Computer for the 21st Century," Scientific American, vol. 265, no. 3, 1991, pp. 66–75.

2. T. Kindberg and A. Fox, "System Software for Ubiquitous Computing," IEEE Pervasive Computing, vol. 1, no. 1, Jan./Feb. 2002, pp. 70–81.

3. B. Brumitt et al., "EasyLiving: Technologies for Intelligent Environments," Proc. Handheld and Ubiquitous Computing 2nd Int'l Symp. (HUC 2000), Springer-Verlag, New York, 2000, pp. 12–29.

4. M.H. Coen et al., "Meeting the Computational Needs of Intelligent Environments: The Metaglue System," Proc. 1st Int'l Workshop Managing Interactions in Smart Environments, 1999.

5. B. Johanson and A. Fox, "The Event Heap: A Coordination Infrastructure for Interactive Workspaces," to be published in Proc. 4th IEEE Workshop Mobile Computing Systems and Applications (WMCSA 2002), IEEE Press, Piscataway, N.J., 2002.

6. N. Carriero and D. Gelernter, "Linda in Context (Parallel Programming)," Comm. ACM, vol. 32, no. 4, 1989, pp. 444–458.

7. S. Ponnekanti et al., "ICrafter: A Service Framework for Ubiquitous Computing Environments," Proc. Ubicomp 2001, Springer-Verlag, New York, 2001, pp. 256–272.

8. J. Waldo, "The Jini Architecture for Network-Centric Computing," Comm. ACM, vol. 42, no. 7, 1999, pp. 76–82.

9. W. Adjie-Winoto et al., "The Design and Implementation of an Intentional Naming System," Operating Systems Rev., vol. 33, no. 5, Dec. 1999, pp. 186–201.

10. X.C. Chen and J. Davis, "LumiPoint: Multi-User Laser-Based Interaction on Large Tiled Displays," Displays, vol. 22, no. 1, Mar. 2002.

11. F. Guimbretière, Fluid Interaction for High Resolution Wall-Size Displays, PhD dissertation, Computer Science Department, Stanford Univ., Calif., 2002.

12. F. Guimbretière, M. Stone, and T. Winograd, "Fluid Interaction with High-Resolution Wall-Size Displays," Proc. ACM Symp., ACM Press, New York, 2001, pp. 21–30.

13. J. Raskin, The Humane Interface: New Directions for Designing Interactive Systems, Addison-Wesley, Reading, Mass., 2000.

14. D. Russell and R. Gossweiler, "On the Design of Personal & Communal Large Information Scale Appliances," Proc. Ubicomp 2001, Springer-Verlag, New York, 2001, pp. 354–361.

15. B. Johanson, G. Hutchins, and T. Winograd, "PointRight: A System for Pointer/Keyboard Redirection Among Multiple Displays and Machines," tech. report CS-2000-03, Stanford Univ., 2000; http://graphics.stanford.edu/papers/pointright.

16. K. Liston, M. Fischer, and T. Winograd, "Focused Sharing of Information for Multidisciplinary Decision Making by Project Teams," Proc. ITcon, vol. 6, 2001, pp. 69–81.

For more information on this or any other computing topic, please visit our digital library at http://computer.org/publications/dlib.



the AUTHORS

Brad Johanson is a PhD candidate in the electrical engineering department at Stanford University and is one of the student leads in the interactive workspaces project. His research interests include genetic programming, computer networking, and computer graphics. He received a BA in computer science and a BS in electrical engineering and computer science from Cornell University, an MS in computer science from the University of Birmingham in England, and an MS in electrical engineering from Stanford University. Contact him at [email protected].

Armando Fox is an assistant professor at Stanford University. His research interests include systems approaches to improving dependability and system software support for ubiquitous computing. He received a BS in electrical engineering from MIT, an MS in electrical engineering from the University of Illinois, and a PhD in electrical engineering from the University of California at Berkeley. He is a member of the ACM and a founder of ProxiNet (now a division of PumaTech), which commercialized the thin-client mobile computing technology he helped develop at UC Berkeley. Contact him at [email protected].

Terry Winograd is a professor of computer science at Stanford University, where he directs the interactivity laboratory and the program in human-computer interaction design. He is one of the principal investigators in the Stanford digital libraries project and the interactive workspaces project. His research interests include human-computer interaction design, with a focus on the theoretical background and conceptual models. Contact him at [email protected].
