Modeling On and Above a Stereoscopic Multitouch Display

Bruno R. De Araújo
INESC-ID, DEI IST, Technical University of Lisbon, Portugal
[email protected]

Joaquim A. Jorge
INESC-ID, DEI IST, Technical University of Lisbon, Portugal
[email protected]

Géry Casiez
LIFL, INRIA Lille, University of Lille, Villeneuve d'Ascq, France
gery.casiez@lifl.fr

Martin Hachet
INRIA Bordeaux - LaBRI, Talence, France
[email protected]

Figure 1: 3D object modeling by interacting on and above a multi-touch stereoscopic surface.

Copyright is held by the author/owner(s). CHI'12, May 5-10, 2012, Austin, Texas, USA. ACM 978-1-4503-1016-1/12/05.

Abstract
We present a semi-immersive environment for conceptual design where virtual mockups are obtained from gestures, aiming to get closer to the way people conceive, create and manipulate three-dimensional shapes. We developed on-and-above-the-surface interaction techniques based on asymmetric bimanual interaction for creating and editing 3D models in a stereoscopic environment. Our approach combines hand and finger tracking in the space on and above a multitouch surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it to leverage the benefits of both interaction spaces.

Author Keywords
3D Modeling; 3D User Interface

ACM Classification Keywords
H.5.2 [User Interfaces]: Graphical user interfaces (GUI), Input devices and strategies (e.g., mouse, touchscreen), Interaction styles (e.g., commands, menus, forms, direct manipulation)

Introduction
Despite the growing popularity of Virtual Environments, they have not yet replaced desktop CAD systems when it comes to modeling 3D scenes. Traditional VR idioms are still umbilically connected to the desktop metaphor they aim to replace, by leveraging the familiar "Windows, Icons, Menus, Pointing" (WIMP) metaphors. Worse, the command languages underlying many of these systems also do not map well to the way people learn to conceive, reason about and manipulate three-dimensional shapes.

In this paper, we explore 3D interaction metaphors to yield direct modeling techniques in a stereoscopic multitouch virtual environment. Combined with user posture tracking based on a depth camera and three-dimensional finger tracking, this rich environment allows us to seamlessly pick and choose the sensing technique(s) most appropriate to each modeling task. Based on this groundwork, we have developed an expressive set of modeling operations which build on users' abilities to create and manipulate spatial objects. Indeed, from a small set of simple yet powerful functions, users are able to create moderately complex scenes through simple dialogues and direct manipulation of shapes, in a less cumbersome way.

Related Work
With the widespread adoption of multitouch devices and less expensive and intrusive tracking solutions such as the Microsoft Kinect, academic research on tabletops has refocused on "on" and "above" surface interaction techniques. Müller-Tomfelde et al. proposed different methods to use the space above the surface to provide ways of interacting with 2D tabletop content closer to reality [13]. While tangible devices complement the surface physically with a direct mapping to the GUI, such as in the Photohelix system and StereoBlocks [9], gestures above the surface mimic physical interaction with real objects. Wilson et al. proposed several metaphors to interact with different displays while capturing full body posture [16]. In this way, users can interact on or above the surface with 2D content, or even between surfaces, using the body to transfer virtual content to the hand or to another surface while moving in space. Users can also interact physically in space with a projected GUI. In our system, we prefer to use the surface for the GUI, since it is better suited to discrete selection, and to explore spatial gestures for modeling actions.

Our approach explores the continuous space as presented by Marquardt et al. [12]; however, we enrich their approach by combining it with the bimanual asymmetric model proposed by Guiard [6]. This model proposes guidelines for designing bimanual operations based on observations of users sketching on paper. For these tasks, Guiard identified different rules and actions for the preferred (dominant hand, or DH) and non-preferred (non-dominant hand, or NDH) hands. While the DH performs fine movements and manipulates tools, the NDH is used to set the spatial frame of reference and issue coarse movements. This approach has been explored by several systems [1, 8, 10, 11] that combine finger or hand gestures with pen devices. Brandl et al. proposed a sketching system where the user selects options through touches of the NDH on a WIMP-based graphical interface, while the DH is used to sketch with a pen device [1]. Such a configuration allows hand gestures to be explored more fully, enabling richer interaction concepts for 2D editing operations, as demonstrated by Hinckley et al. [8]. Indeed, this makes switching between modalities easier and allows users to perform a wide range of 2D editing tasks without relying on gestures or GUI invocations. Lee combined hand gestures while sketching with a collapsible pen to define curve depth on a tabletop [10]. The NDH is tracked, allowing users to seamlessly specify 3D modeling commands or modes, such as the normal direction of an extrusion, while specifying the displacement by interacting with the pen on the virtual scene. Contrary to their approach, we preferred to keep the surface for fast and accurate 2D drawing, while benefiting from the 3D input space for controlling depth directly. Lopes et al. adapted the ShapeShop sketch-based free-form modeler to use both pen and multitouch simultaneously [11]. They found that the asymmetric bimanual model allows users to perform more manipulations in less time than conventional single-interaction-point interfaces, which increased the percentage of time spent on sketching and modeling tasks. By tracking the hands of the user, we adopt the asymmetric bimanual model to easily switch between sketching, model editing, navigation and spatial manipulation of objects. In addition, we do not need to rely on special input devices or extra modalities to assign different roles to each hand.

Figure 2: Our stereoscopic multitouch modeling setup, showing the multitouch stereoscopic display, the Gametraks used to track the fingers above the surface and a Kinect to track the user's head.

We rely on a stereoscopic visualization setup similar to the one used for architectural model visualization in [3]. While that system allows navigating or annotating the 3D scene mainly as if it were inside the table, using fingers as proxies over the scene, our interaction techniques focus on modeling and direct manipulation, since 3D models are rendered as if they were lying atop the table. To avoid hand occlusion of the visualization, Toucheo [7] proposed a fish-tank-like setup using a multitouch surface and a stereoscopic display. However, like other setups relying on semi-transparent mirrors to create a holographic illusion, it both reduces the working space and constrains the use of the above-surface space to hand gestures. Our stereoscopic visualization setup provides more freedom of movement, allowing a continuous interaction space. In addition, adopting a bimanual asymmetric model makes possible new interaction techniques which could benefit interaction with holographic display technologies when they become available.

Hardware Modeling Setup
Our setup consists of a semi-immersive environment based on a 96×72 cm (42") stereoscopic multitouch display combined with a Kinect depth camera and two Gametraks¹. Head tracking is achieved in a non-intrusive way thanks to the Kinect, using its skeleton detection algorithm. The skeleton is also used to track the user's hands, allowing the dominant hand to be located according to the handedness of the user. Finger tracking is done using the Gametraks, with good precision over the working space above the table, while reducing occlusion problems and providing a higher framerate (125 Hz) than techniques based on the Kinect device alone (Figure 2). The visualization relies on a back-projection system located under the table, running at 120 Hz with XGA resolution. It is coupled with NVIDIA 3D Vision active shutter glasses for the stereoscopic visualization. The 3D scene is rendered on top of the surface and the point of view is updated according to the position and orientation of the user's head to take motion parallax into account. The IR transmitter for the glasses uses an IR wavelength different from that of the multitouch table, which is based on the Diffuse Illumination technique, and is placed so as to cover the working volume around the table where the user interacts. We use the iLight² framework version 1.6 for finger detection and tracking. Finger data are then sent as TUIO messages to our custom-built application. The two Gametraks are used to track the 3D position of the index and thumb of each hand when they are no longer in contact with the multitouch surface.

¹ http://en.wikipedia.org/wiki/Gametrak
² iliGHT Tactile Table product page: http://www.immersion.fr
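The head-coupled stereo rendering described above implies an off-axis (asymmetric) perspective frustum defined by the tracked eye position and the plane of the table. The sketch below shows one standard way to build such a projection (the generalized perspective projection formulation); the variable names, corner conventions and the use of numpy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Off-axis frustum through a planar screen.
    pa, pb, pc: lower-left, lower-right and upper-left screen corners
    in tracker coordinates; pe: tracked eye position."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)          # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal (towards the eye)

    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents at the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])   # rotate world into screen space
    T = np.eye(4); T[:3, 3] = -pe                        # translate the eye to the origin
    return P @ M @ T
```

Here pa, pb and pc would be the table corners expressed in the shared tracking reference frame, and pe the per-frame head position obtained from the Kinect.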

These low-cost gaming devices are placed in a reverse position, centered above the table at a distance of 120 cm. The 3D position of each finger is computed from the two angles of rotation and the length of each cable, digitized on 16 bits and reported at 125 Hz to the host computer. The retractable strings are attached to the fingers through a ring. Although the strings introduce some visual clutter, they were not found to distract users from their task. The strings create a minor spring effect which reduces user hand tremor without adding fatigue. We added a 6 mm diameter low-profile momentary switch button on each index finger to detect pinch gestures without ambiguity (Figure 3). This simple solution provides a good trade-off regarding precision, cost and cumbersomeness compared to using a high-end marker-based optical tracking system or a low-sampling-frequency device such as the Kinect.

Figure 3: Finger tracking above the multitouch surface using two Gametrak devices.

Figure 4: Sketching on the surface using the DH.

Figure 5: Extruding a shape along its normal in space with a pinch gesture.
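Since each Gametrak string reports two swivel angles and an extension length, turning a raw sample into a 3D finger position is essentially a spherical-to-Cartesian conversion followed by the device-to-table calibration transform. The following minimal sketch assumes particular axis conventions and a hypothetical calibration matrix; it is an illustration, not the system's code.

```python
import numpy as np

def gametrak_to_point(yaw, pitch, length, device_to_table=np.eye(4)):
    """Convert one Gametrak string sample (two swivel angles in radians and
    the extended string length) into a 3D point in table coordinates.
    Axis conventions and the calibration matrix are assumptions."""
    # Direction of the string in the device frame; the device hangs above the
    # table, so the string points "down" towards the finger.
    direction = np.array([np.sin(yaw) * np.cos(pitch),
                          np.sin(pitch),
                          -np.cos(yaw) * np.cos(pitch)])
    p_device = length * direction                      # finger in device coordinates
    p_h = device_to_table @ np.append(p_device, 1.0)   # apply the rigid calibration
    return p_h[:3]
```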

To obtain a continuous interaction space, the coordinates from the different input devices are converted to the Kinect coordinate system. By identifying the four multitouch surface corners in the image captured by the Kinect, we are able to compute the transformation matrix from 2D touch positions to 3D space. Regarding the Gametrak devices, a transformation matrix is computed for each tracked finger by sampling the multitouch surface screen in one thousand positions. The rigid transformation is then computed using a RANSAC algorithm [4]. Kinect skeleton tracking data is used to enrich our user model, and input data is fused by proximity in a unique reference space. Doing so, we can define the frustum of the off-axis stereo perspective projection to render 3D content on top of the surface from the user's point of view. The redundancy of information from the different input devices allows us to identify which finger of which hand is interacting on the surface or in the air, or to choose the input source with the best tracking resolution.
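The per-finger Gametrak calibration described above can be reproduced with a standard RANSAC loop wrapped around a least-squares rigid fit (the Kabsch/SVD method). The sketch below is a generic implementation of that idea under assumed inputs: src holds Gametrak samples and dst the corresponding known surface positions, both as N×3 numpy arrays.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=500, thresh=0.01):
    """RANSAC around rigid_fit: sample 3 correspondences, keep the model
    with the most inliers, then refit on those inliers."""
    best_inliers = None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_fit(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_fit(src[best_inliers], dst[best_inliers])
```

The resulting (R, t) pair plays the role of the per-finger transformation matrix; the inlier threshold and iteration count are illustrative.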

Our Modeling Approach
We propose a direct modeling approach to create, edit and manipulate 3D models using a small set of operations. After drawing sketches, users can create 3D models by pushing and pulling existing content of the scene, as in [14] or Google SketchUp. Our models are represented using a boundary representation which decomposes the topology of objects into faces, edges and vertices. We adapt our modeling approach to take advantage of the bimanual interaction model while the user is interacting on and above the surface.

Sketching On the Surface
The multi-touch surface is primarily used as a sketching canvas where the user interacts using fingers, as depicted in Figure 4. The user can sketch on the surface with the DH, creating planar shapes from closed contours. Contours may use lines, curves or both, and can be sketched using multiple strokes. Open strokes whose extremities are close to each other are merged into a single stroke. While sketching, input data is incrementally fitted to lines and cubic Bézier curves. Our incremental fitting algorithm tries to guarantee continuity between curves and segments by adding tangency constraints during the fitting process. When a closed contour is created on the surface, the user can create simple planar polygons. We perform a simple stroke beautification based on constraints detected from the sketches. These constraints rely on line segments to detect parallel and perpendicular line pairs and segment pairs of equal length. We use a threshold on the angle between segments for parallelism and perpendicularity, and a threshold on the length ratio between segments of similar length. An energy function is specified for each type of constraint and an error minimization method is performed to beautify user sketches. Thanks to this process, regular shapes can be created from line drawings. For closed conic sections, we use a 2D shape recognizer [5] to detect circles and ellipses, which are approximated by a closed piecewise curve made of four cubic Bézier segments. We also use the 2D shape recognizer to detect simple gestures, such as an erasing command issued by drawing a scribble. When an erasing gesture is recognized, any open strokes it overlaps are erased. However, if it overlaps only shapes and no open strokes, the overlapped shapes are erased. This solution allows open strokes to be used as construction lines while modeling.

Figure 6: Extruding a profile along a curvilinear trajectory with the DH.

Figure 7: Moving a shape on the surface using the NDH.

Figure 8: Scaling an object in space using both hands.
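The constraint detection used for beautification can be illustrated by a simple pass over the fitted line segments: an angle threshold flags near-parallel and near-perpendicular pairs, and a length-ratio threshold flags segments of nearly equal length. The thresholds and data layout below are illustrative, not the values used in the system; the detected constraints would then feed the energy-minimization step.

```python
import numpy as np

def detect_constraints(segments, angle_tol=np.radians(8), len_tol=0.1):
    """segments: list of (p0, p1) endpoint pairs (2D numpy arrays).
    Returns (kind, i, j) tuples describing detected relations."""
    dirs = [(p1 - p0) / np.linalg.norm(p1 - p0) for p0, p1 in segments]
    lens = [np.linalg.norm(p1 - p0) for p0, p1 in segments]
    constraints = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            # Angle between the two segment directions, ignoring orientation.
            ang = np.arccos(np.clip(abs(np.dot(dirs[i], dirs[j])), 0.0, 1.0))
            if ang < angle_tol:
                constraints.append(("parallel", i, j))
            elif abs(ang - np.pi / 2) < angle_tol:
                constraints.append(("perpendicular", i, j))
            if abs(lens[i] - lens[j]) / max(lens[i], lens[j]) < len_tol:
                constraints.append(("equal_length", i, j))
    return constraints
```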

Creating 3D Shapes by Pushing and Pulling Operations
Gestures with the DH above the surface are interpreted as 3D object creation or editing. Creation of 3D shapes consists in extruding a planar shape previously sketched on the surface, following the push-and-pull modeling metaphor. The user first brings the DH index finger near a planar shape on the surface to highlight it. He then performs a pinch gesture, pressing the button located on the index finger, to extrude the shape along the normal of the surface (Figure 5). The height of the extruded object is continuously updated and co-located with the position of the finger until the button is released. Planar shapes can also be extruded along a trajectory defined in the air, after the user has selected this operation in the menu displayed under the NDH (Figure 6). While the user is defining the trajectory, the path is continuously re-evaluated and fitted into line segments and curve pieces, similarly to what is done for strokes on the surface. The segments and curve pieces are then used to create a smooth free-form extrusion of the profile, offsetting the gesture from the centroid of the face to its vertices as presented in [2]. This method enables the extrusion of both polyline and curvilinear profiles along linear or curvilinear paths.

Additionally, topological features of the shape (vertices, edges and faces) can be selected and displaced along a normal direction, updating the geometry of the object without changing its topology, unlike the extrusion operation. This offers editing by pushing and pulling any topological feature of our boundary representation. Selection of features is done implicitly by touching a geometric feature on the surface, and explicitly using a pinch gesture in space. Since edges and vertices can be shared by more than one face or edge respectively, a continuous selection mechanism is provided for disambiguation, based on the previously highlighted entity. For example, it is possible to highlight a particular edge shared by two faces by selecting it from the face the user is interested in. If no geometric feature is selected while performing the pinch gesture with the DH, the user can sketch 3D lines or curves in space.

Manipulating 3D Shapes
A gesture started on the surface with the NDH is interpreted as an object transformation if it is performed on an object, and as a world manipulation otherwise. Single-touch gestures are interpreted as object or world translation. Gestures with more than one finger are interpreted as translation, rotation and scale operations on objects or the world, following the well-known RST paradigm. 3D objects are constrained to movements in the plane parallel to the multitouch surface. A gesture started with the NDH can be complemented by the DH, allowing translation, rotation and scale with both hands (Figure 7).
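The multi-finger surface gestures follow the usual two-touch RST decomposition: translation from the motion of the first contact, rotation from the change of direction of the vector between the two contacts, and scale from the change of its length, applied in the plane parallel to the table. A generic 2D sketch of this decomposition (not the authors' implementation):

```python
import numpy as np

def rst_from_two_touches(p0_old, p1_old, p0_new, p1_new):
    """Return (translation, angle, scale) of a 2D rotate-scale-translate
    gesture from the previous and current positions of two touch points."""
    v_old, v_new = p1_old - p0_old, p1_new - p0_new
    scale = np.linalg.norm(v_new) / np.linalg.norm(v_old)
    angle = np.arctan2(v_new[1], v_new[0]) - np.arctan2(v_old[1], v_old[0])
    # Choose the translation so that the first contact stays pinned under the finger.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    translation = p0_new - scale * (R @ p0_old)
    return translation, angle, scale
```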

The bimanual interaction used on the surface is also valid above the surface, allowing objects to be rotated, translated and scaled using two fingers. As on the surface, the NDH begins the interaction using a pinch gesture. The NDH defines translations only, while the DH adds rotation and scale operations using the method proposed by Wang et al. [15], as depicted in Figure 8. These direct 3D object manipulations appear much more efficient compared to indirect interactions on the multitouch surface alone.

Figure 9: Contextual menu presented while selecting a face in space.

Figure 10: Selecting a vertical face for snapping.

Figure 11: Cutting a face on the canvas after it has been snapped.
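One simple way to decompose such a two-hand gesture, used here only as an illustrative stand-in for the hand-tracking method of Wang et al. [15], is to take the translation from the NDH pinch point and derive scale and rotation from the vector between the two pinching hands:

```python
import numpy as np

def bimanual_transform(ndh_old, ndh_new, dh_old, dh_new):
    """Decompose a two-hand pinch gesture into translation, uniform scale and
    an axis-angle rotation. Translation follows the non-dominant hand; scale
    and rotation come from the vector between the two hands. This is a
    simplified approximation, not a reimplementation of Wang et al."""
    t = ndh_new - ndh_old                               # NDH drives translation
    v_old, v_new = dh_old - ndh_old, dh_new - ndh_new   # inter-hand vectors
    n_old, n_new = np.linalg.norm(v_old), np.linalg.norm(v_new)
    scale = n_new / n_old
    axis = np.cross(v_old, v_new)
    sin_a = np.linalg.norm(axis) / (n_old * n_new)
    cos_a = np.dot(v_old, v_new) / (n_old * n_new)
    angle = np.arctan2(sin_a, cos_a)
    if np.linalg.norm(axis) > 1e-9:
        axis = axis / np.linalg.norm(axis)
    else:
        axis = np.array([0.0, 0.0, 1.0])                # degenerate: no rotation
    return t, scale, axis, angle
```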

Menu-based Interaction
We rely on a menu-based graphical user interface to distinguish between modeling modes, such as linear and curvilinear extrusion, or other operations such as copy. Modes are presented as items in a contextual menu shown under the NDH while a shape, or part of one, is selected with the DH. The modes presented in the contextual menu correspond to those available for the operation being performed by the DH (Figure 9). If the operation carried out by the DH supports only a single mode, no contextual menu is shown under the NDH. To avoid visual clutter, the transparency of the contextual menu is adjusted based on the distance between the NDH and the surface. Above 15 cm the menu is fully transparent; it becomes progressively opaque as the NDH approaches the surface. To improve accessibility, the contextual menu follows the NDH, but its location is progressively fixed as the NDH comes closer to the surface, to avoid spatial instabilities and reduce errors while selecting an item.
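The distance-dependent transparency and the progressive anchoring of the menu can be expressed as two small interpolation rules. The 15 cm threshold comes from the description above; the function names and the linear easing are illustrative assumptions.

```python
import numpy as np

FADE_HEIGHT = 0.15   # metres: menu fully transparent above 15 cm (from the text)

def menu_opacity(ndh_height):
    """0 (invisible) above FADE_HEIGHT, 1 (fully opaque) at the surface."""
    return float(np.clip(1.0 - ndh_height / FADE_HEIGHT, 0.0, 1.0))

def menu_position(prev_pos, ndh_pos, ndh_height):
    """Follow the NDH when it is high above the table, freeze progressively
    as it approaches the surface so item selection stays stable."""
    follow = float(np.clip(ndh_height / FADE_HEIGHT, 0.0, 1.0))  # 1 = follow, 0 = frozen
    return (1.0 - follow) * prev_pos + follow * ndh_pos
```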

The discrete mode selection includes the extrusion type (normal to a face or along a trajectory), updating the object topology or simply moving it, the cloning operation, and the snapping operation described in the following subsection. When a shape is created, we associate with each face the straight extrusion along its normal as the default mode, since it is the most likely operation in the push-and-pull modeling approach. When the straight extrusion starts, we automatically change the mode to the face-move operation, updating the shape without adding new topological changes. Successive extrusions can be performed through the menu to create stacked shape parts. Since the menu follows the position of the NDH, it can be used to define the location where clones appear when the cloning operation is selected by the user. Cloning is available whenever a shape is selected and duplicates the entire shape, as illustrated in Figure 12.

Navigating between Surface and Space
Creating 3D planar shapes in space remains a difficult operation due to the lack of physical constraints to guide the hand. We propose a snapping operator to easily switch between the surface and space, allowing sketches on the surface or gestures in 3D space to be used as convenient. Snapping is available through the contextual menu accessible on the NDH, to snap onto, or back from, any selected face (Figure 10). It works by computing a transformation matrix that aligns the 3D scene to the visible grid representing the table surface. A simple linear animation between the two orientations is rendered to help the user understand the new orientation of the model. Furthermore, snapping allows sketching details on existing shapes (Figure 11) or guaranteeing that new shapes are created on top of an existing shape.
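Snapping essentially computes the rigid transform that rotates the selected face's normal onto the table normal and drops a point of the face onto the table plane, before animating between the two orientations. A sketch of the alignment matrix, with illustrative names and an axis-angle (Rodrigues) rotation:

```python
import numpy as np

def snap_to_table(face_normal, face_point,
                  table_normal=np.array([0.0, 0.0, 1.0]),
                  table_point=np.array([0.0, 0.0, 0.0])):
    """4x4 matrix rotating the scene so face_normal maps onto table_normal
    and face_point lands on the table reference point."""
    n1 = face_normal / np.linalg.norm(face_normal)
    n2 = table_normal / np.linalg.norm(table_normal)
    v = np.cross(n1, n2)
    c = np.dot(n1, n2)
    if np.linalg.norm(v) < 1e-9:
        if c > 0:
            R = np.eye(3)                       # already aligned
        else:
            # Anti-parallel: rotate 180 degrees about any axis perpendicular to n1.
            axis = np.cross(n1, np.array([1.0, 0.0, 0.0]))
            if np.linalg.norm(axis) < 1e-9:
                axis = np.cross(n1, np.array([0.0, 1.0, 0.0]))
            axis /= np.linalg.norm(axis)
            R = 2 * np.outer(axis, axis) - np.eye(3)
    else:
        K = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        R = np.eye(3) + K + K @ K * ((1 - c) / (np.linalg.norm(v) ** 2))  # Rodrigues
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = table_point - R @ face_point     # bring the face onto the table
    return M
```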

Constraining 3D Operations
Since most 3D editing operations are performed using only the DH, we decided to use the free NDH to enrich our 3D operators and to constrain both sketching and 3D modeling, in order to create more rigorous and controlled shapes. The simplest constrained operation allows sketching symmetrical shapes on the surface. First, the user sketches a straight line defining a mirroring plane, which can be selected by touching it with the NDH. While the mirroring plane is selected, sketches made with the DH are automatically mirrored and are considered as additional strokes if the selection remains active at the end of the sketch. By creating a closed shape formed by a stroke and its mirrored version, users can create symmetrical shapes. The mirroring plane can also be used to add symmetrical details to an existing stroke or shape.

Figure 12: Cloning an object selected by the DH through the contextual menu.

Figure 13: Scaling a profile with the NDH while extruding a shape.

Figure 14: TableWare scene example.
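Mirroring a DH stroke across the selected straight line amounts to reflecting each sampled point about that 2D line on the table plane; a minimal sketch with assumed numpy point arrays:

```python
import numpy as np

def mirror_stroke(points, line_p0, line_p1):
    """Reflect 2D stroke points about the mirroring line through line_p0 and line_p1."""
    d = (line_p1 - line_p0) / np.linalg.norm(line_p1 - line_p0)   # unit line direction
    mirrored = []
    for p in points:
        v = p - line_p0
        proj = np.dot(v, d) * d                  # component along the line
        mirrored.append(line_p0 + 2 * proj - v)  # flip the perpendicular component
    return mirrored
```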

3D operations above the surface can also be constrained. For example, while an object is being extruded with the DH, the NDH can select a face of an object to define a maximum or minimum height constraint. Once the constraint is defined, the user continues to move the DH until the maximum or minimum height is reached. Further movements in the same direction no longer update the height of the object. This allows the user to specify that the height of an object should not be higher or lower than the height of another object.
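The height constraint behaves like a clamp on the extrusion parameter: once the NDH has selected a reference face, the DH-driven height can no longer pass the reference height. A tiny sketch (a maximum constraint; the minimum case swaps min for max):

```python
def constrained_height(finger_height, reference_height=None, maximum=True):
    """Clamp the DH-driven extrusion height against the height of the face
    selected by the NDH (if any)."""
    if reference_height is None:
        return finger_height                  # unconstrained extrusion
    if maximum:
        return min(finger_height, reference_height)
    return max(finger_height, reference_height)
```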

While the two previous operations illustrate discrete constraints defined by the NDH, which can be activated before or during an editing operation, we also explore dynamic constraints which can be updated continuously during an extrusion operation. This is illustrated by the scale constraint, which scales the profile while a shape is being extruded (Figure 13). It allows the creation of a cone or a frustum from a circle or a quadrilateral planar face, respectively. The scaling factor can be controlled dynamically using a 2D overlay menu accessible to the NDH while extruding the shape.
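The dynamic scale constraint can be sketched as scaling the swept profile about its centroid as a function of the extrusion parameter, so that a circle tapers towards a cone and a quadrilateral towards a frustum; the top scale factor would come from the NDH overlay menu. Names and the ring-based output are illustrative assumptions.

```python
import numpy as np

def tapered_extrusion(contour, normal, height, top_scale, steps=8):
    """Sweep a planar contour along its normal while scaling it about its
    centroid, producing rings of vertices from the base (scale 1) to the top
    (scale top_scale). A top_scale near 0 degenerates towards a cone apex."""
    contour = [np.asarray(p, dtype=float) for p in contour]
    normal = np.asarray(normal, dtype=float)
    centroid = sum(contour) / len(contour)
    rings = []
    for k in range(steps + 1):
        u = k / steps                          # 0 at the base, 1 at the top
        s = (1.0 - u) + u * top_scale          # interpolated scale factor
        ring = [centroid + s * (p - centroid) + u * height * normal for p in contour]
        rings.append(ring)
    return rings
```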

While traditional modeling interfaces usually require constraints to be defined before performing 3D operations, in order to define a problem to be solved by the application, our approach proposes an interactive constraint modeling solution. In doing so, we take advantage of the increased expressiveness provided by bimanual interaction techniques. Furthermore, we hypothesize that defining constraints on the fly improves the flow of interaction and better fits constraint-based modeling into conceptual design stages.

Preliminary Evaluation
Our system was informally tested throughout its development to assess the different design choices and iteratively improve the design of the interface. In total, between 15 and 20 participants tested the interface. Most of the participants were undergraduate and graduate students in Computer Science with varying experience with CAD applications. Our observations suggest that participants quickly learned how to use the interface. However, we noticed that participants could initially confuse the roles of the two hands: sometimes participants wanted to move an object using their dominant hand. As they were only given an overview of the interface with the basic operations available, further evaluation would help determine the learning curve. Figure 14 presents a set of objects created with our system by an expert user in 5'20".

Conclusions and Future Work
We have described an approach to model 3D scenes in semi-immersive virtual environments through a synergistic combination of natural modalities afforded by novel input devices. While early experiments and informal assessments of our system show promise and seemingly validate some of these assumptions, we plan to run formal evaluations with both novice and expert users to highlight and explore both the strengths and the weaknesses of our modeling interface.

Acknowledgements
This work was partially funded by the ANR InSTInCT project (ANR-09-CORD-013) and the Interreg IV-A 2 Seas SHIVA project. Bruno Araújo would like to thank FCT for doctoral grant SFRH/BD/31020/2006.

References
[1] Brandl, P., Forlines, C., Wigdor, D., Haller, M., and Shen, C. Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In Proceedings of AVI '08, ACM (NY, USA, 2008), 154-161.
[2] Coquillart, S. Computing offsets of B-spline curves. Comput. Aided Des. 19 (July 1987), 305-309.
[3] De la Rivière, J.-B., Dittlo, N., Orvain, E., Kervegant, C., Courtois, M., and Da Luz, T. iliGHT 3D touch: a multiview multitouch surface for 3D content visualization and viewpoint sharing. In Proceedings of ITS '10, ACM (NY, USA, 2010), 312-312.
[4] Fischler, M. A., and Bolles, R. C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24 (June 1981).
[5] Fonseca, M., and Jorge, J. Using fuzzy logic to recognize geometric shapes interactively. In Proceedings of IEEE Fuzzy Systems 2000, vol. 1 (May 2000), 291-296.
[6] Guiard, Y. Asymmetric division of labor in human skilled bimanual action: The kinematic chain as a model. Journal of Motor Behavior 19 (1987), 486-517.
[7] Hachet, M., Bossavit, B., Cohé, A., and De la Rivière, J.-B. Toucheo: multitouch and stereo combined in a seamless workspace. In Proceedings of UIST '11, ACM (NY, USA, 2011), 587-592.
[8] Hinckley, K., Yatani, K., Pahud, M., Coddington, N., Rodenhouse, J., Wilson, A., Benko, H., and Buxton, B. Pen + touch = new tools. In Proceedings of UIST '10, ACM (NY, USA, 2010), 27-36.
[9] Jota, R., and Benko, H. Constructing virtual 3D models with physical building blocks. In Proceedings of CHI EA '11, ACM (NY, USA, 2011), 2173-2178.
[10] Lee, J., and Ishii, H. Beyond: collapsible tools and gestures for computational design. In Proceedings of CHI EA '10, ACM (NY, USA, 2010), 3931-3936.
[11] Lopes, P., Mendes, D., Araújo, B., and Jorge, J. A. Combining bimanual manipulation and pen-based input for 3D modelling. In Proceedings of SBIM '11, ACM (NY, USA, 2011), 15-22.
[12] Marquardt, N., Jota, R., Greenberg, S., and Jorge, J. A. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface. In Proceedings of INTERACT '11, Springer-Verlag (Berlin, Heidelberg, 2011), 461-476.
[13] Müller-Tomfelde, C., Hilliges, O., Butz, A., Izadi, S., and Wilson, A. Interaction on the tabletop: Bringing the physical to the digital. In Tabletops - Horizontal Interactive Displays, C. Müller-Tomfelde, Ed., Human-Computer Interaction Series. Springer London, 2010, 189-221.
[14] Oh, J.-Y., Stuerzlinger, W., and Danahy, J. SESAME: towards better 3D conceptual design systems. In Proceedings of DIS '06, ACM (NY, USA, 2006), 80-89.
[15] Wang, R., Paris, S., and Popović, J. 6D hands: markerless hand-tracking for computer aided design. In Proceedings of UIST '11, ACM (NY, USA, 2011), 549-558.
[16] Wilson, A., and Benko, H. Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In Proceedings of UIST '10, ACM (NY, USA, 2010), 273-282.