Visualization Trends: Applications in Laparoscopy

W. Brent Seales, PhD, and Jesus Caban, BS

Seminars in Laparoscopic Surgery, Vol 10, No 3 (September), 2003: pp xx–xx

Recent advances in visualization technology are being driven by two important trends: (1) continued increases in the speed and function of hardware devices and (2) increasingly parallel, distributed, cooperative systems. The incorporation of fast, powerful devices into cooperative systems enables a complex interplay of sensors, displays, and computational components that can create a seamless, perceptually rich, and flexible environment. Although these trends have fueled a number of advances in visualization research, the unique requirements of laparoscopy make direct, effective use of visualization technology as it is applied in other contexts extremely challenging. This article discusses promising new capabilities in visualization technology. The costs and tradeoffs create new challenges, which are addressed in some visualization applications but must be carefully assessed in the context of the laparoscopic environment. Incorporating new visualization technology in a way that captures its benefits and meets stringent laparoscopic requirements will very likely precipitate an enormous surge forward in the capabilities of the surgical team and in the quality of patient care.

From the University of Kentucky, Computer Science Department, Lexington, Kentucky. Address reprint requests to W. Brent Seales, PhD, Associate Professor, Computer Science Department, University of Kentucky, 773 Anderson Hall, Lexington, KY 40506; E-mail: [email protected]

©2003 Westminster Publications, Inc., 708 Glen Cove Avenue, Glen Head, NY 11545, USA.




The laparoscopic environment is intentionally constrained in order to benefit the patient. Minimizing the damage in establishing access to the operative field greatly improves recovery times and lessens the risk of postoperative complications. Although these constraints benefit the patient, they create a physical and informational bottleneck that spawns a number of serious technical challenges. For the patient, "less is more": less invasive access is more beneficial. For the surgical team, however, "less is more" means that less invasive access leads to procedures that are more challenging and that require more skill, more training, and more technologically sophisticated tools. This article focuses on how current trends in visualization technology may be applied in laparoscopy to shift more of the burden onto the technology, enabling the surgeon to give the patient more benefit with less risk.

Substantial technical development has established a laparoscopic environment that is both minimally invasive and procedurally viable. It is very similar in construction to the desktop computer environment, which relies on the windows, icons, menus, and pointing devices (WIMP) interface.1 The camera establishes the "desktop," and the monitor that shows the image from the camera acts as a window into that desktop. The laparoscopic instruments, visible in the window that displays the operative field, act as pointing devices: during surgery they "drag, click, point," and implement the surgical equivalent of "cut and paste."

Newly developed visualization technology moves beyond the constraints of the WIMP interface and its associated computer engine (the tightly coupled, single-CPU computer).2 Applied to laparoscopy, it can create advances by breaking barriers that may now seem inherent and unavoidable. For example, it may seem that the best laparoscopic view of the operative field must inherently be a two-dimensional (2D) image sequence, captured by a camera and delivered to the surgeon on a 2D computer monitor. As other work has shown, however, this 2D assumption can be, and is being, challenged.3

Similarly, it may seem that preoperative data, such as computed tomographic (CT) scans and magnetic resonance imaging (MRI), cannot play a role in the constrained, minimally invasive surgical (MIS) environment because of the disparities and barriers involved: preoperative data may not be available in the operating room, 3D fixed-scan data cannot be adequately aligned with 2D live-camera views, and the surgical team cannot effectively manage the many possible ways to optimize, navigate, and fuse different data. The question is: can new visualization technologies continue to expand the limits?

Of interest is the similarity between the operating room as a whole, with its sterile boundary that divides and separates it from the outside context, and the minimally invasive environment, which has its own set of boundaries and constraints. Set within the larger context of the operating room, minimally invasive procedures establish a second boundary that divides and separates the surgeon in the operating room from the operative field. The same challenges in surmounting the required barriers of the operating room apply to the embedded environment of the MIS operative field, and it is very likely that the same technologies will be the basis for solutions.

What elements of visualization environments are emerging with current trends? The crucial components are acquisition, modeling and processing, and display. In the current laparoscopic environment, the camera is a sensor for image acquisition, the modeling and processing step is embedded within the camera hardware, and a cathode ray tube (CRT) is the display device. As shown in Figure 1, the scope is tightly coupled with the display, and the surgical team must integrate information across the separate devices as well as directly from the patient.

Within the operating room, there is very little notion of computation for the purpose of cue detection, enhancement, modeling, and integration: all devices function separately. It is the surgical team's responsibility to interpret the imagery and continuously evaluate the procedure by integrating and understanding information from a parallel set of completely separate devices.4

Current visualization environments address this problem by going beyond the WIMP metaphor and grappling with the issues required to support immersion, interaction, scalability, and data integration from diverse sources. This article discusses how visualization environments for various applications have solved the problems that arise when certain constraints are lifted. The requirements of the application are crucial, since they dictate what can be traded away to develop a workable solution for the problem at hand. The purpose of the discussion is to identify trends in visualization, including tradeoffs and task-based requirements, and to consider how to appropriately exploit the technology for improvements in laparoscopy.

It is important to note the clear distinction between the operating room environment and practice-oriented training and simulation environments. Training environments continue to advance at a rapid rate and provide a safe test bed for developing new ideas.5,6 The requirements are much higher in the operating room, however. The goal is to deploy new, revolutionary technology in the operating room and to develop a simulation environment that resembles surgical procedures so closely as to be indistinguishable from them. Toward that goal, simulation and training tests, especially for visualization technology, are an extremely useful strategy for assessing the potential of a new technology. Success in simulation is a precursor to success in bringing new technology to the operating room.

It is also important to observe that visualization is not the same thing as virtual reality. In fact, equating the two may unnecessarily bias solutions and approaches toward aspects of visualization that are highlighted in virtual reality systems, such as "spatial immersion." What is necessary is a careful look at the requirements and the structure of the current laparoscopic environment, with the continued goal of improving it. The cognitive burden is already great within the walls of the operating room; it is intensified by the added constraint of minimal invasion. Successful visualization technology for the laparoscopist must lessen that burden and yet increase the ability to form and execute a complex, successful procedure under heavy constraints.

Figure 1: The laparoscopic environment: the scope, instruments, and trocars are shown in a simulation setup and are very tightly coupled, which minimizes latency. The surgical team must integrate information from many separate devices as well as directly from the patient.

Visualization Technology

Visualization is the human activity of exploring, interpreting, and understanding complex data. The computational aspect of visualization is a transformation of symbols: turning numbers and raw data into geometry, which enables the human observer to interpret complex simulations and phenomena.7 The data that form the basis of this transformation pipeline must be acquired, processed, and displayed. Technology is critical at each of these points.

Accepted laparoscopic surgical environments rely on imagery acquired with a camera system that illuminates and images the region of interest.8,9 After acquisition, the imagery is processed and manipulated in order to improve its presentation to the viewer. The imagery is then presented on a display system, most often a CRT or flat-panel display, which is positioned within easy view of the surgical team.

Advances in visualization environments derive from improvements in data acquisition, modeling and processing, and display, and in the overall architecture that supports the interplay between these elements. The development of new types of acquisition technology must be supported by expanded algorithms and architectures for manipulating and structuring the acquired data, as well as by new display technology that efficiently communicates this structure to the observer.10 As a system, the information bandwidth is increasing at the sensor and at the display, supported by more processing power in between.

Sensors and Acquisition

Both sensors and algorithms (simulations) can produce data for visualization. Sensors function at the point of contact with a physical phenomenon in order to sample its essential characteristics. Simulation data are obtained from an algorithm. Often a visualization environment, such as a weather simulation, teleimmersion, telecollaboration, or distance learning, combines real and simulated data.

The continued trend of exponential speedup in computing11,12 and device miniaturization directly affects both real data acquisition and the production of simulation data. Smaller devices equipped with more processing power can detect and deliver better data, such as higher-resolution cameras-on-a-chip and 3D sensors.13,14 Miniaturization is certainly a welcome trend for laparoscopy, where smaller sensors support the goal of minimal invasion. Every indication is that devices will continue to get smaller and more powerful. For example, the Programmable Digital Camera project15 investigates algorithms, circuits, and architectures for portable, small, flexible, high-resolution digital camera sensors.

As individual sensors increase in resolution and decrease in size, an equally profound trend is that many more sensors are being used cooperatively within a single system. Rather than relying on one super-high-resolution camera, for example, systems use a number of distributed, lower-resolution cameras. The Office of the Future environment uses networks of cameras and microphones to capture and then remotely render images and audio.16 That visualization environment is constructed to promote immersive collaboration, and it does so through cooperative, distributed sensors (cameras and microphones) that create a sense of presence between physically separate sites.

The use of many sensors increases the computational load and creates the problem of data fusion. Systems have found solutions by coupling the distributed sensing network with a distributed computing architecture that can process data in parallel and fuse it through distributed algorithms communicating over high-speed network connectivity. As shown in the architectural diagram of Figure 2, clusters of machines connected via a high-speed network manage tasks such as device integration and control, complex simulation processing, and display. This departure from the "single device" architecture (Figure 2, left) supports scalability, parallelism, and information fusion.
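The parallel-processing-then-fusion pattern can be sketched in miniature. The following Python sketch stands in a thread pool for a true cluster; the per-camera samples, the clamping preprocessing step, and the averaging fusion rule are all invented for illustration, not taken from any system discussed in this article.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-camera "frames": each sensor reports brightness
# samples for the same scene points (values invented for illustration).
sensor_frames = {
    "cam0": [0.20, 0.40, 0.60],
    "cam1": [0.22, 0.38, 0.58],
    "cam2": [0.18, 0.42, 0.62],
}

def preprocess(frame):
    # Stand-in for per-sensor processing done in parallel on the cluster
    # (e.g., denoising); here it merely clamps values into [0, 1].
    return [min(max(v, 0.0), 1.0) for v in frame]

def fuse(frames):
    # Fuse corresponding samples across sensors by averaging.
    return [sum(vals) / len(vals) for vals in zip(*frames)]

# Each sensor's data is handled by its own worker, mimicking the
# distributed, parallel processing stage; fusion then combines them.
with ThreadPoolExecutor(max_workers=3) as pool:
    processed = list(pool.map(preprocess, sensor_frames.values()))

fused = fuse(processed)
```

In a real deployment the workers would be cluster nodes exchanging data over the high-speed network rather than threads in one process, but the structure (independent per-sensor processing followed by a fusion step) is the same.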

Large-scale visualization environments almost all rely on clusters of machines to perform rendering and to drive large-scale simulations.17-19 Within surgical simulation, the Virtual Reality Based Laparoscopic Surgery Simulator project20 uses an architecture based on a parallel processing engine to support the communications, computation, and processing needed for laparoscopic applications. Similarly, the daVinci system21 implements a tightly coupled set of processes to control servos and manage a user interface.

Distributed miniature sensors that are driven by a distributed computing environment are bound by the need to communicate in order to organize and structure the disparate data being collected. Tethered connectivity for communication and power may be unimportant in visualization environments but could be a large barrier in surgical environments. The Metaverse project, for example, uses a communication network to support a grid of cameras that cooperatively senses the state of the display area and makes geometric and photometric corrections when required.22 In that application, it is of little consequence that the cameras are tethered for power and network, since the display area can readily support that infrastructure. In surgical environments, however, where the area of interest is within the body, wireless transmission and the associated protocols can be very important. The Massachusetts Institute of Technology's Wireless Microsensor System project23 emphasizes the study of protocols, networks, and designs for flexible, wireless, and rapidly configurable arrays of sensors.

Increasingly powerful, small, distributed, untethered sensors have brought new capabilities and unprecedented flexibility to visualization environments. Their use also brings a new set of problems that must be solved, including the need for a computational framework capable of fusing data from the sensor network together with other data relevant to the task. The solution adopted in most visualization environments is the distributed "cluster" architecture with high-speed communication connections for data transfer and access.

Modeling and Processing

The processing environment is the glue that must reconcile and fuse data from disparate sensors and prepare that data for rendering and display. Fusion involves incorporating both sensor data and model and simulation data, such as virtual models being manipulated or data collected before the real-time operation in the environment. The modeling and processing environment must transform data from raw form into the form that will be presented to the viewer via display interfaces.

When the data acquired from sensors exactly match the end display device, there is little need for a modeling and processing layer. The current laparoscopic environment has been engineered so that the camera acquires an image sequence that is mapped directly onto the display device (Figure 2, left). Flexibility and power can be gained by decoupling this connection and inserting an intermediate distributed processing environment, which enables a substantial departure from this direct sensor-to-display mapping (Figure 2, right).
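The contrast between direct mapping and a decoupled pipeline can be illustrated with a small sketch. The `enhance` stage (a simple contrast stretch) and the sample frame values are hypothetical placeholders; the point is only that, once acquisition and display are decoupled, any number of intermediate stages can be inserted between them.

```python
# Direct mapping: the camera frame goes straight to the display.
def display_direct(frame):
    return frame  # the monitor shows exactly what was captured

# One possible intermediate stage: a contrast stretch that rescales
# samples to span [0, 1]. This is an invented stand-in for the
# enhancement and fusion steps a processing layer could perform.
def enhance(frame):
    lo, hi = min(frame), max(frame)
    if hi == lo:
        return frame
    return [(v - lo) / (hi - lo) for v in frame]

# Decoupled mapping: a configurable chain of stages sits between
# acquisition and display.
def display_decoupled(frame, stages):
    for stage in stages:  # any number of stages can be inserted
        frame = stage(frame)
    return frame

frame = [0.2, 0.5, 0.8]                       # made-up "image" samples
direct = display_direct(frame)                # unchanged
processed = display_decoupled(frame, [enhance])
```

The direct path minimizes latency; the decoupled path trades some latency for the flexibility to enhance, fuse, or augment the imagery before display, which is exactly the tradeoff discussed later in this article.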

Since the early days of simulation technology, a number of strategies, issues, and techniques have been proposed to facilitate the development of large-scale systems to support simulations.24

Surgical simulation environments have mirrored this trend. The Spring surgical environment,5,6 for example, implements a multi-threaded system for managing a number of sensors (haptic, camera) while running a tissue simulator based on a mass/spring physical model.

Toward scalable, massively parallel clusters, systems such as the Princeton Display Wall25 use a cluster of machines to run complex simulations together with a cluster to drive the distributed rendering environment. The simulation cluster produces simulation data in real time, and the rendering cluster renders those data to a scalable, large-scale display device. These parallel, distributed systems provide the processing power required to run the simulation and to drive the large-scale display.

Figure 2: (left) The architecture minimizes latency by directly connecting a device (shaded circle) driven by a CPU to the display device. (right) A scalable, distributed architecture uses CPU clusters on a high-speed network to drive sets of devices, complex simulations, and a scalable, flexible display environment.

In the case of medical simulation, and more specifically laparoscopy, the stated goal26,27 is to build a system that can reconcile imagery from the scope with preoperative imagery captured by CT scans and MRI, and with any other data source that might be of service during the laparoscopic procedure. For example, comparison or recognition of anatomy with respect to a large database of procedures might be helpful. It is clear that these goals cannot be accomplished with a computing environment that so tightly couples the data acquisition device (camera) with the display device (monitor). Parallel distributed computing environments may be the answer, and it may be that surgical simulation environments, and eventually operating rooms, become driven by massively parallel clusters of computers designed specifically to manage distributed sensors and preoperative, potentially collaborative patient databases.

The ultimate end of the visualization environment is the presentation of the data to the viewer, who then interprets the data and reacts, explores, and plans accordingly. Display is normally linked to the CRT, and the tight coupling that exists between the scope and the CRT reduces the flexibility that might otherwise be achieved. Visualization environments have moved toward decoupling individual sensors and simulation elements from the end display environment, providing flexibility in how to integrate and present information to the viewer.

Rendering and Display

Although visualization environments emphasize visible, image-driven display, they can also incorporate haptic and vestibular manipulations and audio rendering to induce spatially localized effects. High-fidelity multisensory display is desirable in order to increase bandwidth at the human-machine interface. It is implausible to expect to increase the amount of data in the system through large numbers of sensors and simulation elements while expecting the observer to continue to understand and interpret that new data without difficulty.

Along with the desire to support multisensory displays, the need for flexibility has also become apparent. The desire for flexibility in the display environment runs counter to the rigidity of most visualization tasks. Commonly available display devices, such as CRTs, stereographic systems, the Cave Automatic Virtual Environment (CAVE),28 the Reconfigurable Advanced Visualization Environment (RAVE),29 head-mounted displays, and other similar systems, are used to display processed data, 3D representations, and simulated environments. However, these systems are all notoriously inflexible, and many are monolithic and expensive.

How can display systems support the need for scalability, multisensory input, and flexibility? First, they leverage the distributed computing architecture (Figure 2). Second, they incorporate a special set of sensors that monitor the environment itself in order to provide flexibility. Thus, the distributed computing environment is responsible for controlling and sampling sensor input, running simulation code, controlling the sensors that monitor the visualization environment itself, and combining everything into a form that can be rendered on the local display environment, which may itself be flexibly changing.

A new degree of flexibility can now be achieved through sensor monitoring. In the display realm, this has been demonstrated through camera-based approaches for geometric and photometric blending.22,30,31 Figures 3 and 4 show how a four-projector system that is casually aligned (the projected images do not line up correctly on the display surface) can be corrected automatically. The cameras detect the geometry and compare it to what should be visible, since they can be told what to expect to see (Figure 3). Once the geometric correction is calculated, it can be applied in order to form a unified display area across the projection surface. Figure 4 (left) shows a grid after the correction is applied. The grid aligns perfectly even though the underlying projectors do not physically align.
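For a flat display surface, the geometric correction for each projector is commonly expressed as a 3x3 homography that maps the projector's pixel coordinates into a unified display frame. The sketch below applies such a mapping to a single point; the matrix values (a slight scale plus a translation) are invented for illustration, not output from a real camera-based calibration.

```python
# Hypothetical homography for one casually aligned projector: a 10%
# scale plus a translation. A real system would estimate this matrix
# from camera observations of projected calibration patterns.
H = [
    [1.1, 0.0, 12.0],
    [0.0, 1.1, -7.0],
    [0.0, 0.0,  1.0],
]

def warp_point(H, x, y):
    # Apply the homography in homogeneous coordinates, then normalize
    # by the third component to return to pixel coordinates.
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# Where projector pixel (100, 50) lands in the unified display frame:
corrected = warp_point(H, 100.0, 50.0)
```

In practice the inverse of this mapping is used to pre-warp the imagery sent to each projector, so that the projected result lands correctly on the shared display surface.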

Illumination compensation applied to areas where more than one projector overlaps makes the whole illuminated area appear to be a single image. The geometric and photometric correction, applied on the fly from cameras that observe how the projectors are configured, creates the appearance of a unified, seamless display that is very flexible and scalable in resolution and size. Figure 4 (right) shows an observer viewing a volumetric data set projected onto a four-projector display. The contributing projectors are so well calibrated that it is very difficult to tell which projector is illuminating particular pixels on the wall.
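A common form of this illumination compensation is linear feathering: each projector's contribution ramps down across the overlap region while its neighbor's ramps up, so the summed intensity stays constant. The overlap bounds and pixel position below are illustrative values, not measurements from any calibrated system.

```python
def blend_weight(x, overlap_start, overlap_end):
    # Linear feathering along one axis: full contribution before the
    # overlap, zero after it, and a linear ramp in between.
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    return (overlap_end - x) / (overlap_end - overlap_start)

# Two projectors overlapping on pixels 80..120 of the shared display:
left = blend_weight(100, 80, 120)   # left projector's weight at x=100
right = 1.0 - left                  # complementary weight, so the sum is 1
```

Real systems also correct for per-projector brightness and color response, but the complementary-weight idea is the core of making the overlap invisible.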

This kind of flexibility allows projectors to be rapidly configured to cover a small area at a very high number of dots per inch for applications that require close-in, very bright work. On the other hand, projectors can also be adjusted to cover large areas and fill entire walls with imagery. A blend is also possible: a high-resolution inset can be overlaid in an area where more detail may be required. In any case, the cameras detect the geometry and correct the imagery as required.

In the case of the surgical simulation environment, dedicated sensors can be designed to calculate the transformation between the real-time scope position (perhaps given by a tracking sensor) and a set of preoperative data such as an MRI or a CT scan. Likewise, flexible, scalable display technology may help to achieve a much higher degree of flexibility and configurability for particularly unusual procedures and patient positions.

The key visualization trends of advanced hardware capability and distributed computing environments have led to advances in data acquisition, processing and modeling, and rendering and display. The distributed computing framework can incorporate a large number of independent, powerful sensors and simulation elements and helps to decouple the sensors from the end display technology, leading to flexibility and to new algorithms for fusing and rendering disparate data. However, a number of issues must be addressed when these techniques are applied in the laparoscopic arena.
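The transformation between a tracked scope position and a preoperative scan is typically a rigid transform (rotation plus translation). The sketch below applies such a transform to a tracked point; the rotation angle, translation, and scope-tip coordinates are all made-up values, and a real system would estimate the transform from fiducials or anatomical landmarks rather than assume it.

```python
import math

# Hypothetical rigid transform aligning the tracker's coordinate frame
# with the preoperative CT frame: a 90-degree rotation about z plus a
# translation (values invented for illustration).
theta = math.pi / 2
R = [
    [math.cos(theta), -math.sin(theta), 0.0],
    [math.sin(theta),  math.cos(theta), 0.0],
    [0.0,              0.0,             1.0],
]
t = [5.0, -2.0, 10.0]

def to_ct_frame(p):
    # Map a tracked position into CT coordinates: p' = R p + t.
    return [
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    ]

scope_tip = [1.0, 0.0, 0.0]      # tracked scope-tip position (made up)
ct_point = to_ct_frame(scope_tip)
```

Once a scope position can be expressed in CT coordinates, the corresponding slice or rendering of the preoperative data can be selected and fused with the live view.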

Visualization and Laparoscopy: Issues

Figure 3: Projection-based display hardware can be casually aligned (left) and calibrated via automatic, camera-based algorithms (right), which calculate and apply the correct alignment transformation.

Figure 4: After calibration, the projectors work cooperatively to display a seamless image.

Non-mission-critical visualization tasks, such as scholarly study, the exploration of digitized sculpture, or the analysis of weather patterns or fluid flow, can all be accomplished as casual, repeatable, non-time-critical studies in an environment that need not be fault tolerant. The relative lack of constraints has freed visualization researchers to explore new architectures and approaches, and much of what has been developed is indeed intriguing. The requirements for medical analysis are much higher than those of the casual scientific visualization task, however. A few such requirements are:

- Equipment must be nondistracting and comfortable for the surgeon.
- Latency must be minimal.
- Color fidelity is crucial for identifying and assessing anatomy.
- Spatial fidelity is crucial.
- Calibration and registration tolerances, which can never be zero, must be very small.

Equipment: Certain visualization devices are very difficult to apply in the medical realm because they can be uncomfortable. Head-mounted displays and data gloves, for example, can be cumbersome and difficult to manage in a surgical setting. The requirement that the equipment be sterile, nondistracting, and usable for long periods without adverse effects, such as sickness and disorientation, makes it difficult to incorporate these kinds of visualization devices directly into surgical applications.

Latency: Latency is the time difference between the acquisition of data at the sensor and the display of those data to the observer. The end-to-end latency present in a system is obvious when sensor data (the image) and direct observation (cues from probes) are juxtaposed. Unfortunately, any processing between acquisition and display adds latency. The current system for laparoscopy reduces latency to a very low level by mapping the camera signal directly to the CRT. Users will likely criticize any system that creates a latency higher than that of the existing system, despite any advantages it may provide in such things as flexibility and enhancement. The latency constraint is a perfect example of how difficult it is to apply advances in visualization technology to laparoscopy. Laparoscopic systems minimize latency, and it will be difficult to move to a parallel, distributed system without sacrificing something from this constraint. One can argue that it would be easier to tolerate more latency than to work in a 3D space using only a monocular 2D image sequence. But for surgeons who are already accomplished at using the 2D, low-latency imagery to get the job done, a higher-latency system would involve loss of skill and the need for retraining.

Color fidelity: Digital cameras for laparoscopy are built to very high standards. The benefit of this high fidelity is lost, however, if the signal is displayed on a device that inadequately reproduces color. Unfortunately, many display devices do not match the high standards of the camera. Commodity display devices (CRTs, LCD panels, and projectors) often trade color fidelity for a variety of other characteristics, such as brightness and portability. In particular, the commodity projectors used for cooperative, scalable display in visualization environments often show large variations in color and brightness characteristics.
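One common way to compensate for such per-device variation is to map each color channel through an inverse-response curve measured for that display. The sketch below is hypothetical: the gain and gamma values are illustrative placeholders, not measurements of any real projector, and a production system would use measured response curves rather than a simple power law.

```python
def correct_channel(value, gain, gamma):
    """Map an input intensity in [0, 1] through an inverse power-law response curve."""
    value = min(max(value, 0.0), 1.0)
    return min(gain * (value ** (1.0 / gamma)), 1.0)

def correct_pixel(rgb, gains=(0.95, 1.0, 0.9), gammas=(2.2, 2.2, 2.4)):
    """Apply per-channel correction to one RGB pixel (placeholder device parameters)."""
    return tuple(correct_channel(v, g, y) for v, g, y in zip(rgb, gains, gammas))

corrected = correct_pixel((0.5, 0.5, 0.5))
```

In a multi-projector wall, a distinct (gains, gammas) set per projector lets neutral gray input produce visually matched output across tiles, which is exactly the variation the paragraph above describes.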

Spatial fidelity: A mismatch between acquisition and display means that algorithms must integrate and sometimes decimate data. The algorithms for blending and fusing data can cause artifacts that mar spatial fidelity. The display has often been a limiting factor in many applications, which is why scalable display technology may be a valuable answer for expanding display devices to handle better camera resolution and to present integrated data from other devices in close proximity to camera data.

Calibration and registration: Single-device calibration can often be done at the factory to specified tolerances. The distributed architecture shown in Figure 2, however, elevates the need for real-time, consistent, accurate calibration and registration between devices, which is difficult or impossible to do off-line. Cooperative device communication and calibration is essential in the distributed environment.
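To make the registration requirement concrete, a minimal sketch (not from the article) of one building block: estimating the rigid transform between two devices' coordinate frames from matched 3D point observations, using the standard Kabsch least-squares fit. In a live distributed system this estimate would be refreshed continuously as devices move, which is what makes off-line calibration insufficient.

```python
import numpy as np

def register_rigid(src, dst):
    """Return rotation R and translation t such that dst ~= src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: the same landmarks as seen by device A and by device B,
# where B's frame is a known rotation/translation of A's frame.
rng = np.random.default_rng(0)
pts_a = rng.normal(size=(8, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
pts_b = pts_a @ R_true.T + t_true

R_est, t_est = register_rigid(pts_a, pts_b)
residual = np.abs(pts_a @ R_est.T + t_est - pts_b).max()  # near zero for clean data
```

With noisy sensor data the residual bounds the registration tolerance directly, which is why the tolerances "can never be zero" but must be driven very small.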

Given these requirements and the high degree of reliability required of the solutions, it remains challenging to apply visualization technology in the domain of laparoscopy. Clearly, simulation environments are the first step for development, deployment, and testing. The double barrier of the operating room and the minimally invasive procedure makes this one of the most challenging environments in which to apply new visualization technology, but the potential payoff is very high. To achieve a substantial breakthrough, it is crucial to embrace and exploit the trends that are driving visualization technology and to find ways to address the associated challenges that are unique to minimally invasive surgery.

References

1. Benbasat I, Todd PA: An experimental investigation of interface design alternatives: Icon vs. text and direct manipulation vs. menus. International Journal of Man-Machine Studies 38(3):369-402, 1993.

2. van Dam A: Post-WIMP user interfaces. Communications of the ACM 40(2), 1997.

3. Durrani AF, Preminger GM: Three-dimensional video imaging for endoscopic surgery. Comput Biol Med 25(2):237-247, 1995.

4. Sandberg W, Ganous T, Steiner C: Setting a research agenda for perioperative systems design. Sem Laparosc Surg 10(2):57-70.

5. Montgomery K, Bruyns C, Brown J, Sorkin S: Spring: A general framework for collaborative, real-time surgical simulation. In: Westwood J, et al (eds): Medicine Meets Virtual Reality. Amsterdam, IOS Press, 2002.

6. Montgomery K, Heinrichs L, Bruyns C, Wildermuth S, et al: Surgical simulator for hysteroscopy: A case study of visualization in surgical training. Presented at IEEE Visualization 2001, San Diego Paradise Point Resort, San Diego, October 21-26, 2001.

7. McCormick BH, DeFanti TA: Special report: Visualization in scientific computing - a synopsis. IEEE Computer Graphics and Applications, July 1987, pp 61-70.

8. http://www.strykercorp.com/. Last accessed August 6, 2003.

9. http://www.intuitivesurgical.com/. Last accessed August 6, 2003.

10. Rattner D, Park A: Advanced devices for the operating room of the future. Sem Laparosc Surg 10(2):85-89.

11. Moore G: Nanometers and gigabucks--Moore on Moore's Law. Distinguished Lecture, 1996.

12. Lundstrom M: Is nanoelectronics the future of microelectronics? In: International Symposium on Low Power Electronics and Design, Monterey, California, 2002, pp 172-177.

13. Fuchs H, Livingston MA, Raskar R, Colucci D, et al: Augmented reality visualization for laparoscopic surgery. Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention, pp 934-943.

14. http://www.inneroptic.com. Last accessed August 6, 2003.

15. http://smartcamera.stanford.edu. Last accessed August 6, 2003.

16. Yang R, Brown MS, Seales WB, Fuchs H: Geometrically correct imagery for teleconferencing. In: Proceedings of the Seventh ACM International Conference on Multimedia, October 30-November 5, 1999.

17. Crockett TW, Orloff T: A MIMD rendering algorithm for distributed memory architectures. 1993 Parallel Rendering Symposium Proceedings, IEEE, October 1993, pp 35-42.

18. Humphreys G, Hanrahan P: A distributed graphics system for large tiled displays. Proceedings of Visualization '99, IEEE, October 1999, pp 215-223.

19. Muraki S, Ogata M, Ma K-L, Koshizuka K, et al: Next-generation visual supercomputing using PC clusters with volume graphics hardware devices. ACM/IEEE Conference on Supercomputing, November 2001.

20. Rhomberg A, Enzler R, Thaler M, Tröster G: Design of a FEM computation engine for real-time laparoscopic surgery simulation. Proceedings of the First Merged International Parallel Processing Symposium and Symposium on Parallel and Distributed Processing, Los Alamitos, California, 1998, pp 711-715.

21. Guthart G, Salisbury J Jr: The Intuitive telesurgery system: Overview and application. Proc IEEE ICRA, 2000.

22. Jaynes C, Seales B, Calvert K, Fei Z, et al: The Metaverse - a collection of inexpensive, self-configuring, immersive environments. 7th International Workshop on Immersive Projection Technology/Eurographics Workshop on Virtual Environments, Zurich, Switzerland, May 2003.

23. Chandrakasan A, Min R, Bhardwaj M, Cho S-H, et al: Power aware wireless microsensor systems. Keynote paper, ESSCIRC, Florence, Italy, September 2002.

24. Clema JK: General purpose tools for system simulation. 11th Annual Simulation Symposium, Tampa, Florida, 1978, pp 37-60.

25. Chen H, Wallace G, Gupta A, Li K, et al: Experiences with scalability of display walls. In: Immersive Projection Technology Symposium (IPT), March 2002.

26. http://www.orofthefuture.org/overview/. Last accessed August 6, 2003.

27. Grimson E, Leventon M, Ettinger G, Chabrerie A, et al: Clinical experience with a high precision image-guided neurosurgery system. MICCAI 1998, pp 63-73.

28. Cruz-Neira C, Sandin D, DeFanti T: Virtual reality: The design and implementation of the CAVE. Proceedings of SIGGRAPH 93 Computer Graphics Conference, ACM SIGGRAPH, August 1993, pp 135-142.

29. http://www.fakespacesystems.com/. Last accessed August 6, 2003.

30. Raskar R, Brown M, Yang R, Chen W, et al: Multi-projector displays using camera-based registration. IEEE Visualization '99, 1999.

31. Brown M, Seales W, Webb S, Jaynes C: Building large format displays for digital libraries. Communications of the ACM 44(5), 2001.

Seales and Caban