Three-Dimensional Medical Imaging: Algorithms and Computer Systems

M. R. STYTZ

Department of Electrical and Computer Engineering, Air Force Institute of Technology,

Wright-Patterson AFB, Ohio 45433

G. FRIEDER

School of Computer and Information Science, Syracuse University, Syracuse, New York 13224

O. FRIEDER

Department of Computer Science, George Mason University, Fairfax, Virginia 22030

This paper presents an introduction to the field of three-dimensional medical imaging.

It presents medical imaging terms and concepts, summarizes the basic operations

performed in three-dimensional medical imaging, and describes sample algorithms for

accomplishing these operations. The paper contains a synopsis of the architectures and

algorithms used in eight machines to render three-dimensional medical images, with

particular emphasis paid to their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism

used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

Categories and Subject Descriptors: A.1 [General Literature]: Introductory and Survey; C.0 [Computer Systems Organization]: General; C.5 [Computer Systems Organization]: Computer System Implementation; I.3.2 [Computer Graphics]: Graphics Systems; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; J.3 [Computer Applications]: Life and Medical Sciences

General Terms: Algorithms, Design, Experimentation, Performance

Additional Key Words and Phrases: Computer graphics, medical imaging, surface rendering, three-dimensional imaging, volume rendering

CASE STUDY

A patient with intractable seizure activity is admitted to a major hospital for treatment. As the first step in treatment, the treating physician collects a set of 63 magnetic resonance imaging (MRI) image slices of the patient's head. These two-dimensional (2D) slices of the patient's head do not disclose an abnormality. A three-dimensional (3D) model of the MRI study reveals flattening in the gyri of the lower motor and sensory strips, a condition that was not apparent in the cross-sectional MRI views. The physician orders a second study, using positron emission tomography (PET), to portray the metabolic activity of the brain. Using the results of the PET study, the physician assembles a 3D model of

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1991 ACM 0360-0300/91/1200-421 $01.50

ACM Computing Surveys, Vol. 23, No. 4, December 1991


CONTENTS

CASE STUDY
INTRODUCTION
1. UNIQUE ASPECTS OF THREE-DIMENSIONAL MEDICAL IMAGING
2. THREE-DIMENSIONAL IMAGING COORDINATE SYSTEMS, OBJECT SPACE DEPICTION MODELS, AND TYPES OF IMAGE RENDERING
3. THREE-DIMENSIONAL MEDICAL IMAGING RENDERING OPERATIONS
4. THREE-DIMENSIONAL MEDICAL IMAGING MACHINES
   4.1 Farrell's Colored-Range Method
   4.2 Fuchs/Poulton Pixel-Planes Machine
   4.3 Kaufman's Cube Architecture
   4.4 The Mayo Clinic True Three-Dimensional Machine and ANALYZE
   4.5 Medical Image Processing Group Machines
   4.6 Pixar/Vicom Image Computer and Pixar/Vicom II
   4.7 Reynolds and Goldwasser's Voxel Processor Architecture
SUMMARY
APPENDIX A: HIDDEN-SURFACE REMOVAL ALGORITHMS
APPENDIX B: RAY TRACING
APPENDIX C: IMAGE SEGMENTATION BY THRESHOLDING
APPENDIX D: SURFACE TRACKING ALGORITHMS
APPENDIX E: SHADING CONCEPTS
APPENDIX F: SHADING ALGORITHMS
ACKNOWLEDGMENTS
REFERENCES

the average cortical metabolic activity. The model reveals a volume of hypermetabolic activity. Because PET has relatively poor modality resolution, the location of the abnormality cannot be correlated with surface anatomical landmarks. To correlate the results of the two studies, the physician combines the 3D MRI model of the patient's brain with the 3D PET model using post-hoc image registration¹ techniques to align the two sets of data. This combined model provides the anatomical and metabolic information required for precise location of

1 Image, or tissue, registration permits images of volumes acquired at different times and with different modalities to be combined into a single image for display.

the abnormality within the lower part of the motor and sensory strips.

The physician uses a surgery rehearsal program to simulate surgery on the combined brain model. To assist the physician, the program displays the brain model and the overlying skin surfaces side by side. Using a mouse-controlled cursor, the physician outlines the abnormal area of the brain on the combined model. The rehearsal program uses this tracing to perform a simulated craniotomy. Pictures of the simulated operation are taken to surgery to guide the actual procedure. An intraoperative electroencephalogram demonstrates seizure activity at the site predicted by the medical images. After resectioning of the abnormal area, the patient's seizure activity ceased [Hu et al. 1989].

INTRODUCTION

The Case Study illustrates the three basic operations performed in 3D medical imaging: data collection, 3D data display, and data analysis. Three-dimensional medical imaging, also called three-dimensional medical image rendering² or medical image volume visualization, is the process of accurately and rapidly transforming a surface or volume description of medical imaging modality data into individual pixel colors for 3D data display. Three-dimensional medical imaging creates a depiction of a single structure, a select portion of an imaged volume, or the entire volume on a computer screen. Stereo display, motion parallax using rotation, perspective, hidden-surface removal, coordinate transformation, compositing, transparency, surface shading, and/or shadowing³ can be used to provide depth

2 Rendering is a general term for the creation of a depiction of a structure or volume on a computer screen using a sequence of operations to transform the structure/volume information from object space (a data structure in memory) to screen space (the CRT).
3 Coordinate transformation, which consists of translation, rotation, and scaling, orients the volume to be displayed. Hidden-surface removal ensures that only the visible portions of the objects in the volume are displayed. Shading provides depth cues and enhances image realism.


cues and make the rendered image closely resemble a perceived or photographic image of the structure/volume. (We describe these operations and their use later in this paper.) Three-dimensional medical imaging presents the remotely sensed morphological and physiological patient data in such a way that the physician is relieved of the chore of mentally reconstructing and orienting the volume and instead can concentrate on the practice of medicine. To achieve this capability, 3D medical imaging techniques have been developed to give the user the ability to view selected portions of the body from any angle with an appearance of depth to the image.

The Case Study also points out the difference between 3D medical imaging and 3D computer graphics. Three-dimensional medical imaging generates accurate graphical representations of a physical volume. The broader field of 3D computer graphics forms realistic images from scene descriptions that may or may not have a physical counterpart. The two fields share a common challenge in their attempt to portray a 3D volume within a 2D space and so use many of the same graphics operations to provide depth cues. These operations include coordinate transformation, stereo display, motion parallax using rotation, perspective, hidden-surface removal, surface shading, and/or shadows. Three-dimensional medical imaging uses these techniques to generate credible portrayals of the interior of patients for disease diagnosis and surgical planning.

The development of 3D medical imaging techniques has not occurred in isolation from other fields of medical imaging. A few examples illustrate this point. On the image acquisition side, the recent development and ongoing improvement of medical imaging modalities, such as x-ray computed tomography (CT), ultrasound, PET, single photon emission computed tomography (SPECT), and MRI, provide a wide range of image acquisition capabilities. Barillot et al. [1988] and Stytz and Frieder [1990] present a survey of the operation of these modalities.

These modalities can be used to acquire data for accurate diagnosis of bony and soft tissue diseases and to assess physiological activity without exploratory surgery. Data archiving, using specialized picture archiving and communication systems (PACS) [Flynn et al. 1983], is an acknowledged requirement for the modern radiology department and has led to the development of techniques for tissue/image registration. The effect of these related advances is an ever-increasing need for higher display resolution, increased image rendering speed, and additional main memory to render the image data rapidly and precisely.

The requirement for accurate images arises from the need to formulate a diagnosis in the data analysis step. Accuracy concerns us because the displayed data must provide trustworthy visual information for the physician. There are two components to display accuracy: data collection accuracy and 3D medical image rendering accuracy. Data collection is the radiologist's arena and deals with issues concerning the statistical significance of collected data, patient dosage, medical imaging modality operation, development of new techniques and modalities for gathering data, and 2D image reconstruction. Data rendering issues belong to the computer scientist. They encompass computer graphics, system throughput, computer architecture, image processing, database organization, numerical computation accuracy, and the user interface. As attested by the number of radiologists and physicians involved in researching 3D data display issues, a knowledge of the physician's requirements, as well as the capabilities and limitations of the modalities, is essential to addressing the questions posed by the image display process.

The difficulty in meeting the physician's need for image accuracy and high image processing speed arises from the characteristics of the data produced by the modalities. Because the modalities sample space and reconstruct it mathematically, image accuracy is limited to the imaging modality resolution. The large amount of data processed, up to


35 MB (megabytes) produced per patient by a single modality study, hinders rapid image formation. The challenge posed to the computer scientist lies in the development of techniques for rapid, accurate manipulation of large quantities of data to produce images that are useful to a physician.

There are many computational aspects to 3D medical imaging. The most prominent ones are the significant (2-35 MB) quantity of data to be processed (preferably in real time⁴), the need for long-range retention of data, and the need for manipulation and display of the resulting 3D images of complex internal and external anatomical structures (again preferably in real time).

Traditionally, when rendering 3D medical images for disease diagnosis,⁵ rendering speed has been sacrificed for reconstructed image accuracy. The first machines, from the Medical Image Processing Group (presently at the University of Pennsylvania), used algorithms for data management and medical imaging modality data reduction by organ boundary detection to provide their displays. Even with a significant reduction in the quantity of data manipulated, generating a single series of rotated views of a single x-ray CT study was an overnight process [Artzy et al. 1979, 1981]. Initial attempts at achieving interactive display rates using the raw modality data relied upon special-purpose multiprocessor architectures and hardware-based algorithm implementations. Although these special-purpose architectures theoretically produce images rapidly, the drawback to their approach is in the placement of image rendering algorithms in hardware. Because 3D medical imaging is a rapidly evolving field, flexible algorithm implementations are superior

4 A real-time image display rate is defined to be a display rate at or above the flicker-fusion rate of the human eye, approximately 30 frames per second.
5 Other applications of 3D medical imaging are discussed in Jense and Huijsmans [1989].

since they facilitate the incorporation of improvements in 3D medical imaging algorithms into the rendering system. The comparatively recent advent of high-power, inexpensive processors with large address spaces opens the possibility for using a multiprocessor architecture and a software-based algorithm implementation to provide interactive displays directly from the raw modality data. This recent approach to 3D medical image rendering foregoes medical imaging modality data reduction operations, achieves rapid display rates by distributing the workload, and attains implementation flexibility by using software to realize the rendering algorithms.

Even as researchers approach the goal of real-time 3D medical image rendering, they continue to address the perennial 3D medical imaging issues of workload distribution, data set representation, and appropriate image rendering algorithms. Addressing these issues and the peculiar needs of 3D medical imaging brought forth the development of the computer systems that are the subject of the present survey.⁶ Other surveys of the field of 3D medical imaging that the reader may find useful are Barillot et al. [1988], Farrell and Zappulla [1989], Gordon and DeLoid [1990a], Herman and Webster [1983], Ney et al. [1990b], Pizer and Fuchs [1987b], Robb [1985], and Udupa [1989]. Fuchs [1989b] presents a brief, but comprehensive, introduction to research issues and currently used techniques in 3D medical imaging.

We organized this article as follows. Section 1 discusses the unique aspects of 3D medical imaging. Section 2 describes 3D medical imaging data models, types of rendering, and coordinate systems. Section 3 describes 3D medical image rendering operations. Section 4 discusses eight significant 3D medical imaging machines.

6 The inclusion of a particular machine does not constitute an endorsement on the part of the authors, and the omission of a machine does not imply that it is unsuitable for medical image processing.


The focus of the description of each machine is on the innovations and qualities that set the machine apart from others. The paper concludes with a brief prognostication on the future of 3D medical imaging. The appendixes provide brief descriptions of selected 3D image rendering algorithms.

1. UNIQUE ASPECTS OF THREE-DIMENSIONAL MEDICAL IMAGING

There is no one aspect of 3D medical imaging that distinguishes it from other graphics processing applications. Rather, the convergence of several factors makes this field unique. Two well-known, closely related factors are data volume and computational cost. The typical medical image contains a large amount of data. In relation to data volume, consider that the average CT procedure generates more than 2 million voxels per patient examination. MRI, PET, and ultrasound procedures produce similar amounts of data. The second factor is that the algorithms used for 3D medical imaging have great computational cost even at moderate 3D resolution.

Our survey of the medical imaging literature disclosed additional factors. First, there are no underlying mathematical models that can be applied to medical image data to simplify it. In medical imaging, a critical requirement is accurate portrayal of patient data. Physicians permit only a limited amount of abstraction from the raw data because they base their diagnosis upon departures from the norm, and high-level representations may reduce, or eliminate, these differences. Therefore, from both the clinical and medical imaging viewpoints, models of organs or organ subsystems are irrelevant for image rendering purposes.

Second, the display is not static. Typically, physicians, technicians, and other users want to interact with the display to perform digital dissection⁷ and slicing

7 Dissection provides the illusion of peeling or cutting away overlying layers of tissue.

operations. Third, for a 3D representation to provide an increase in clinical usefulness over a 2D representation it must be capable of portraying the scene from all points of view. The convergence of these facts sets 3D medical imaging apart from all other imaging tasks.

A final aspect of 3D medical imaging is the wide range of operations required to form a high-quality 3D medical image accurately and interactively. Typical required capabilities include viewing underlying tissue, isolating specific organs within the volume, viewing multiple organs simultaneously, and analyzing data. At the same time, the physician demands high quality and rapidly rendered images with depth cues. Although the algorithms used for these operations are not unique to 3D medical imaging, their use further complicates an already computationally costly effort and so deserves mention.

2. THREE-DIMENSIONAL MEDICAL IMAGING COORDINATE SYSTEMS, OBJECT SPACE DEPICTION MODELS, AND TYPES OF IMAGE RENDERING

Before examining 3D medical image processing machines, it is appropriate to investigate the techniques used to manipulate 3D medical image data. This section presents an overview and classification of the terminology used in the 3D medical imaging environment. The 3D medical imaging environment includes the low-level data models used in 3D medical imaging applications, the coordinate systems, and the graphics operations used to render a 3D image. Detailed descriptions of the graphics processing operations and conventions discussed here and in the following section can be found in a standard graphics text such as Burger and Gillies [1989], Foley et al. [1990], Rogers [1985], and Watt [1989].

The section opens with a discussion of the voxel data model and is followed by a discussion of the coordinate systems used in 3D medical imaging. The importance of the voxel model comes from its use in


the CT, MRI, SPECT, and PET medical imaging modalities as well as for 3D medical image rendering. The voxel model is the assumed input data format for the three major approaches to medical image object space modeling described in this section: the contour approach, the surface approach, and the volume approach. These three object space modeling techniques provide the input data to the two major classes of 3D medical image rendering techniques: surface and volume rendering. We summarize these rendering techniques at the end of this section.

A voxel is the 3D analog of a pixel. Voxels are identically sized, tightly packed parallelepipeds formed by dividing object space with sets of planes parallel to each object space axis. Voxels must be nonoverlapping and small compared to the features represented within an imaged volume. In 3D medical imaging, each voxel is represented in object space, image space, and screen space by its three coordinate values. Because medical imaging modalities characterize an imaged volume based on some physical parameter, such as x-ray attenuation for CT, proton density and mobility for MRI, number of positrons emitted for PET, and number of γ-rays emitted for SPECT, the value assigned to each voxel is the physical parameter value for the corresponding volume in the patient. For example, in the data collected by a CT scan the value assigned to a voxel corresponds to the x-ray attenuation within the corresponding volume of the patient.⁸ The higher the resolution of the modality, the better the modality can characterize the variation of the physical parameter within the patient. The lower limit on the cross section of a voxel is the resolution of the modality that gathered the image data. Representative values for the

8 The value is calculated using a process called computed tomography; a complete discussion of the issues involved is presented in Herman and Coin [1980].

dimensions of a voxel in a CT slice are 0.8 mm × 0.8 mm × 1-15 mm.

Figure 1 presents a voxel representation of a cube-shaped space. Figure 2 presents a notional voxel representation of a slice of CT data. The figure shows an idealized patient cross section with a grid superimposed to indicate the division of space into voxels. A 2D slice of voxel data naturally maps into a 2D array.

To represent a volume, a set of 2D voxel-based arrays is combined, possibly with interpolation, to form a 3D volume of voxels. The 3D voxel volume naturally maps into a 3D array. Each array element corresponds to a voxel in the 3D digital scene. The array indexes correspond to the voxel coordinates. The value of an array element is the value of the corresponding voxel.
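To make the slice-to-volume mapping concrete, the sketch below (in Python with NumPy; this and the later code examples are illustrations added to this transcript, not code from the paper) stacks a set of 2D slice arrays into a 3D voxel array and, optionally, synthesizes linearly interpolated slices between the measured ones. The slice data and the interpolation factor are hypothetical.

```python
# A minimal sketch (not from the paper) of stacking 2D modality slices into a
# 3D voxel array, with optional linear interpolation of extra slices between
# adjacent measured slices.
import numpy as np

def build_volume(slices, interp_factor=1):
    """slices: list of equally sized 2D arrays, one per modality slice.
    interp_factor: number of sub-steps between measured slices (1 = none)."""
    slices = [np.asarray(s, dtype=float) for s in slices]
    stack = [slices[0]]
    for lower, upper in zip(slices, slices[1:]):
        for k in range(1, interp_factor):            # synthesized slices
            t = k / interp_factor
            stack.append((1.0 - t) * lower + t * upper)
        stack.append(upper)
    return np.stack(stack, axis=0)                    # indexed as volume[z, y, x]

# Example: four hypothetical 256 x 256 slices, one interpolated slice per gap.
study = [np.random.rand(256, 256) for _ in range(4)]
volume = build_volume(study, interp_factor=2)
print(volume.shape)                                   # (7, 256, 256)
```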

Three-dimensional medical imaging systems commonly use four different coordinate systems: the patient space coordinate system, the object space coordinate system, the image space coordinate system, and the screen space coordinate system. Patient space is the 3D space defined by an orthogonal coordinate system attached to the patient's native bone. The z-axis is parallel to the patient's head-to-toe axis, the y-axis is parallel to the patient's front-to-back axis, and the x-axis is parallel to the patient's side-to-side axis. Mapping patient space into object space requires the sampling and digitization of patient space using a medical imaging modality.

The 3D coordinate system that defines and contains the digitized sample of patient space is object space. The 3D coordinate system that contains the view of the object space volume desired by the observer is image space. The 2D coordinate system that presents the visible portion of image space to the observer is screen space. Figure 3 shows the relationship between object and image space, with the direction of positive rotation about each axis indicated in the figure.

The object and image space coordinate systems are related to each other using geometric transformation operations [Foley et al. 1990].


Figure 1. Voxel representation of a cube in object space.

Figure 2. Voxel representation of a CT slice.


Figure 3. Object and image space coordinate systems.

The object space coordinate system, represented by x, y, and z, is fixed in space and contains a complete or partial description of the volume as computed by a medical imaging modality. A partial description can be the surface of one or more organs of interest, whereas a complete description is the entire 3D voxel data set, possibly interpolated, output from a medical imaging modality. The image space coordinate system, signified by x', y', and z', contains the volume description obtained after application of coordinate transformation matrixes, image segmentation⁹ operators, or other volume modification operations. Image space contains the view of the volume desired by the observer. The object space representation remains unchanged as a result of the operations; changes are evident only in image space. The screen space coordinate system, signified by u and v, is the 2D space that portrays the visible contents of image space.

9 Segmentation is the process of dividing a volume into discrete parts. One way to perform segmentation is by thresholding. See Appendix C.

Shading, shadows, and other visual depth cue information is used in this space to portray the 3D relationships upon a 2D CRT.
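As an illustration of the geometric transformations that relate object space (x, y, z) to image space (x', y', z'), the sketch below composes scaling, rotation about the z-axis, and translation into a single homogeneous 4 x 4 matrix, in the manner of the standard graphics texts cited above. The particular angles, offsets, and scale factors are arbitrary examples.

```python
# Illustrative object-space to image-space mapping: a homogeneous 4x4 matrix
# combining scaling, rotation about the z-axis, and translation.
import numpy as np

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# Compose: scale the z-axis (anisotropic voxels), rotate 30 degrees, translate.
M = translation(10.0, 0.0, 0.0) @ rotation_z(np.radians(30.0)) @ scaling(1.0, 1.0, 2.0)

voxel_object_space = np.array([12.0, 40.0, 7.0, 1.0])   # homogeneous (x, y, z, 1)
voxel_image_space = M @ voxel_object_space               # (x', y', z', 1)
print(voxel_image_space[:3])
```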

To accommodate the unique requirements imposed by the 3D medical imaging environment, three major approaches to portraying an object in a 3D medical image volume have been developed. These are the contour, surface, and volume object space depiction methods, also called the contour, surface, and volume data models. Each technique uses a different type of scene element¹⁰ to depict information within an imaged volume. We summarize these methods here and refer the reader to Farrell and Zappulla [1989] and Udupa [1989] for detailed surveys of these models.

Contour and surface object space portrayal methods provide rapid image rendering by deriving lower dimensional object space representations for an isolated surface of an organ/object of interest from the complete 3D object space

10 A scene element is a primitive data element that represents some discrete portion of the volume that was imaged. A scene element is the primitive data element in object space.


array. Accordingly, these methods reduce the amount of data manipulated when forming an image. The data reduction is obtained at the cost of being able to provide only one type of rendering, called surface rendering. These techniques suffer from the requirement for reprocessing the volume to extract the surface of the organ/object from the imaged volume whenever selecting a different organ/object or altering the organ/object. For example, cutting away part of the surface of the object using a clipping plane alters the visible surface of the object. In this instance, the 3D object space array must be reprocessed to extract that portion of the object that is visible behind the clipping plane. A survey of machines that perform 3D medical imaging based on contour and surface methods can be found in Goldwasser et al. [1988b]. Volumetric object space portrayal methods avoid the volume reprocessing penalty by using the 3D object space array data directly. These methods pay a render-time computation penalty due to the large volume of data manipulated when generating an image. Because volumetric methods use the entire data set, they support two different types of rendering: the display of a single surface and the display of multiple surfaces or composites of surfaces. Note that all three portrayal methods begin with a 3D array of voxel data derived from a 3D medical imaging modality. The difference between the three lies in the existence of an intermediate one-dimensional (1D) or 2D representation of a structure of interest for contour and surface methods and the lack of this representation for volume (3D) methods. Table 1 summarizes the salient characteristics of the three object space portrayal methods.

Figure 2 contains an idealized representation of a slice of medical imaging modality data. The figure shows a single object within object space, with a well-defined border and varying voxel values within the slice. We use this diagram as the basis for the following descriptions of the 3D medical imaging data models.

The contour, or 1D-primitive, approach uses sets of 1D contours to form a description of the boundary of an object of interest within the individual slices that form a 3D volume. The boundary is represented in each slice as a sequence of points connected by directed line segments. The sequence of points corresponds to the voxels in the object that are connected and that lie on the border between the object and the surrounding material. Before forming the boundary representation, the desired object contour must be isolated using a segmentation operator. Thresholding¹¹ can be used if the boundary occurs between high-contrast materials. Otherwise, boundary detection using automatic or manual tracking¹² is used for segmentation. Due to the complexity found in the typical medical image, automatic methods sometimes require operator assistance to extract the boundary of an object accurately. Figure 4 contains the object in Figure 2 represented as a set of contours, with the heads of the arrows indicating the direction of traversal around the object. The use of this methodology for display of 3D medical images is described in Gibson [1989], Seitz and Ruegseger

11 Thresholding is a process whereby those voxels that lie within or upon the edge of an organ are determined based solely upon the value of each voxel. Typically, the user selects a range of voxel values that are assumed to occur within the object and not outside of it. The voxels within the volume are then examined one by one, with those voxels that fall within the range selected for further processing to depict the surface.
12 Tracking is a procedure whereby those voxels that lie within or upon the edge of an organ are determined based upon the value at each voxel and the values of the voxels lying nearby. For manual tracking, the user defines the surface by moving a cursor along the border of the object, with the border defined by high visual contrast between the object and surrounding material. In automatic tracking, a single seed voxel is specified by the viewer; then the remainder of the voxels on the border are located by gradually expanding upon the set of voxels known to lie on the edge. One way of expanding the set is to select the next voxel in the edge based upon its adjacency to voxels known to be on the edge and its similarity in value to voxels known to be on the edge.


Table 1. Three-Dimensional Medical Imaging Object Space Portrayal Methods

Data primitive dimension. Contour: 1D. Surface: 2D. Volume: 3D.
Object representation. Contour: directed contours of the object boundary. Surface: tiles between in-slice contours or faces of object boundary voxels. Volume: none; voxels used directly.
Type of rendering supported. Contour: surface. Surface: surface. Volume: surface or volume.
Object selection. Contour: performed in a preprocessing* step using segmentation. Surface: performed in a preprocessing step using segmentation. Volume: performed at image-rendering time using segmentation.
Number of objects displayed. Contour: one per preprocessing step. Surface: one per preprocessing step. Volume: determined at image-rendering time.
Representation flexibility. Contour: low; must reprocess the data for each new object or change in object. Surface: low; must reprocess the data for each new object or change in object. Volume: high; all decisions deferred until image-rendering display time.
Object space memory requirement. Contour: lowest. Surface: low. Volume: high.

* Preprocessing is a term used to indicate the operations that must be performed before the volume, or a given object within the volume, can be rendered. For example, the use of a contour-based model of an object requires extraction of the contours of the object in a preprocessing step before the object can be rendered.

Figure 4. Contour-based representation of a slice of medical imaging modality data.

[1983], Steiger [1988], Tuomenoksa [1983], Udupa [1985], and Udupa and Tuy [1983].
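The sketch below illustrates the two steps of the contour approach described above on a single slice: a threshold window isolates the object, and a Moore-neighbour walk around the segmented region produces the directed sequence of boundary points. It is a simplified illustration, not one of the algorithms in the papers just cited, and the threshold window is hypothetical.

```python
# Simplified contour extraction for one slice: threshold segmentation followed
# by a Moore-neighbour boundary walk that yields a directed border point list.
import numpy as np

def threshold(slice_2d, low, high):
    """Binary segmentation: 1 inside the threshold window, 0 elsewhere."""
    return ((slice_2d >= low) & (slice_2d <= high)).astype(np.uint8)

# 8-neighbour offsets in clockwise order starting from "west" (row, column).
NEIGHBOURS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
              (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(mask):
    """Return the border of the first object met in raster order as an
    ordered (directed) list of (row, column) points."""
    rows, cols = mask.shape
    def is_object(r, c):
        return 0 <= r < rows and 0 <= c < cols and mask[r, c]
    start = next((r, c) for r in range(rows) for c in range(cols) if mask[r, c])
    contour, current = [start], start
    prev = (start[0], start[1] - 1)     # west neighbour: background by construction
    while True:
        r, c = current
        k = NEIGHBOURS.index((prev[0] - r, prev[1] - c))   # where we came from
        found = None
        for i in range(1, 9):                              # clockwise sweep
            d = (k + i) % 8
            cand = (r + NEIGHBOURS[d][0], c + NEIGHBOURS[d][1])
            if is_object(*cand):
                found = cand
                break
            prev = cand                                    # last background cell
        if found is None or found == start:
            break                           # isolated voxel, or loop closed
        contour.append(found)
        current = found
    return contour

slice_data = np.zeros((8, 8)); slice_data[2:6, 2:7] = 100.0
print(trace_contour(threshold(slice_data, 50.0, 200.0)))
```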

Surface, or 2D-primitive, approaches to 3D medical imaging use 2D primitives, usually triangles or polygons, to show the surface of a selected portion of the volume. Because the data from a CT, MRI, PET, SPECT, or ultrasound scanner have gaps between the slices, a method that approximates the surface between the slices must be used to represent the surface. An additional difficulty faced by this technique and 1D-primitive techniques is the separation of the desired surface from the surrounding modality data. The two major classes of surface depiction algorithms are the tiling methods and the surface-tracking methods.

The goal of tiling techniques [Ayache 1989; Chen et al. 1989b; Fuchs et al. 1977; Lin and Chen 1989; Linn et al. 1988; Shantz 1981; Toennies 1989; Toennies et al. 1990], also called patching, is to determine the surface that encloses a given set of slice contours, then represent the surface with a set of 2D primitives. As a preliminary step, tiling requires the extraction of the contours of the object of interest in each slice using any of the methods for the contour approach. The extracted contours are then smoothed, and


geometric primitives, typically triangles, are fitted to the contours to approximate the surface of the object in the interstice space. This approach is popular because tiled surfaces furnish realistic representations of the objects selected for display, especially when objects are differentiated with color and when multiple light sources are used to illuminate the scene. Because of the data reduction inherent in going from the input data to a tiled surface representation, the computational cost of displaying the object is low. The disadvantages inherent in this approach are the time required to extract the surface, the difficulty in performing the tiling (structures can split and merge between slices), and the requirement for reaccomplishing object contouring whenever the user selects a different organ or alters the organ. Figure 5 contains a tiled depiction of a portion of the surface defined by three adjacent slices; the raw data for each slice is similar to that in Figure 2. A contour description represents each slice; the interslice portion of the volume is delineated by triangles.
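A much-simplified sketch of the tiling step is shown below: it spans the gap between two stacked, index-aligned contours with pairs of triangles. Published tiling methods (e.g., Fuchs et al. [1977]) also solve the harder correspondence problem between contours with different numbers of points; here equal-length, roughly aligned contours are assumed.

```python
# Simplified tiling between two stacked contours: two triangles per
# quadrilateral, assuming the contours are closed, equal-length, and aligned.
def tile_between_contours(lower, upper, z_lower, z_upper):
    """lower, upper: lists of (x, y) boundary points of equal length.
    Returns a list of triangles, each a tuple of three (x, y, z) vertices."""
    n = len(lower)
    triangles = []
    for i in range(n):
        j = (i + 1) % n                       # wrap around the closed contour
        a = (*lower[i], z_lower)
        b = (*lower[j], z_lower)
        c = (*upper[i], z_upper)
        d = (*upper[j], z_upper)
        triangles.append((a, b, c))           # two triangles per quadrilateral
        triangles.append((b, d, c))
    return triangles

# Example: two square contours one slice (2 mm) apart.
lower = [(0, 0), (4, 0), (4, 4), (0, 4)]
upper = [(0.5, 0.5), (3.5, 0.5), (3.5, 3.5), (0.5, 3.5)]
print(len(tile_between_contours(lower, upper, 0.0, 2.0)))   # 8 triangles
```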

Surface-tracking methods [Artzy 1979, 1981; Cline et al. 1988; Ekoale 1989; Gordon and Udupa 1989; Herman and Liu 1978, 1979; Liu 1977; Lorensen and Cline 1987; Rhodes 1979; Udupa 1982; Udupa and Ajjanagadde 1990; Udupa and Odhner 1990; Udupa et al. 1982; Xu and Lu 1988] also represent the surface of an object of interest using 2D primitives. In this case, the 2D primitive is the face of a voxel. The object's surface is represented by the set of faces of connected voxels that lie on the surface of the object. Surface-tracking operations generally begin by interpolating the original image data to form a new representation of the volume composed of voxels of equal length in all three dimensions. The structure to be examined is isolated by applying a segmentation operator to the volume to form a binary¹³ representation

13 In a binary representation, voxels located within the object are assigned a value of 1; all other voxels are set to 0.

Figure 5. Tiled representation of a portion of the surface of three slices of medical imaging modality data.

of the 3D volume. Surface tracking begins after the user specifies a seed point in the object. The surface is then "tracked" by finding the connected voxels in the object that lie on the boundary of the object. Voxel connectivity can be established using a variety of criteria, the most common being face connectivity; face or edge connectivity; and face, edge, or vertex connectivity. If the face-connectivity criterion is used, for example, two voxels are adjacent in the object if any face on one voxel touches any face on the other.¹⁴ When displaying the surface, the surface display procedure selects only the voxel faces oriented toward the observer. The highest quality display procedures shade each visible voxel face based on an estimation of the surface normal¹⁵ at the center of the face. Figure 6 shows a portion of the surface of three adjacent slices formed by a surface-tracking operation. The visible faces of the cubes are dark, with the faces that lie on the interior of the object colored white. Appendix D presents a short description of three surface-tracking algorithms.

14 Edge connectivity requires that the edge of one voxel touch the edge of the other; vertex connectivity requires that the vertex of one voxel touch a vertex of the other.
15 The surface normal is the line perpendicular to a plane tangent to the surface at a given point on the face.


Figure 6. A portion of the surface of three slices of medical imaging modality data formed using a surface-tracking operation.
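The following sketch conveys the flavour of surface tracking on a binary voxel volume. Starting from a user-supplied seed, a breadth-first search visits face-connected object voxels and records every face that borders background as a visible surface face. It is a simplification: the published algorithms cited above track adjacency between boundary faces themselves and so avoid visiting interior voxels.

```python
# Simplified surface tracking on a binary volume: flood through face-connected
# object voxels from a seed and collect faces exposed to the background.
from collections import deque
import numpy as np

FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def track_surface(binary_volume, seed):
    vol = np.asarray(binary_volume)
    def inside(z, y, x):
        return (0 <= z < vol.shape[0] and 0 <= y < vol.shape[1]
                and 0 <= x < vol.shape[2] and vol[z, y, x])
    exposed_faces = []               # (voxel, outward face direction) pairs
    visited = {seed}
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in FACES:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not inside(nz, ny, nx):
                # Neighbour is background (or outside): this face is visible.
                exposed_faces.append(((z, y, x), (dz, dy, dx)))
            elif (nz, ny, nx) not in visited:
                # Face-connected object voxel: continue tracking through it.
                visited.add((nz, ny, nx))
                queue.append((nz, ny, nx))
    return exposed_faces

# Example: a 3x3x3 solid block; the seed is any voxel inside the object.
block = np.zeros((5, 5, 5), dtype=np.uint8); block[1:4, 1:4, 1:4] = 1
print(len(track_surface(block, (2, 2, 2))))   # 54 exposed faces (6 * 3 * 3)
```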

The volume, or 3D-primitive-based, approach has grown in popularity due to its ability to compute volume renderings as well as surface renderings without performing surface extraction preprocessing. Volume methods operate directly, possibly with interpolation, upon the data output from the medical imaging modality. Surface extraction processing, if performed, is deferred until the transformation of the imaged volume from object space into image space.¹⁶ As a result, 3D-primitive-based renderings can portray the surface of selected organs, as in the 1D- and 2D-primitive-based approaches, or they can provide a view of the entire contents of the volume. The drawback to this approach is the large amount of data processed when constructing a view of object space. The commonly used 3D primitives are the voxel (possibly interpolated to form cubic voxels) and the octree. (We discuss the octree data structure later.)

16 An exception to this general rule is the use of the binary volume object space representation. In this case, object space is preprocessed to segment the volume into an organ of interest and background. This technique offers the capability for interactive viewing of the inside and outside of the organ without recomputing visible surfaces while reducing the memory requirement for storing the object space 3D array.

The voxel-based object space volume is often represented within the machine as a 3D array, with array access based upon the object space x, y, and z coordinates. One method for extracting information from the 3D array is back-to-front access,¹⁷ described in Frieder et al. [1985a]. To compute a visible surface rendering without segmentation, the algorithm starts at the deepest voxel in the object space array and works its way to the closest voxel, transferring voxel values from object space to image space as it proceeds. When it concludes, the algorithm has extracted the visible portions of all the objects within object space for display. Because of the computational cost of this procedure, techniques based on this algorithm that reduce image rendering time have been developed. These techniques accelerate processing by reducing the number of data elements examined and/or by reducing the time required to access all the data elements. For examples of these acceleration techniques see Goldwasser and Reynolds [1987], Reynolds [1985], and Reynolds et al. [1987]. By specifying a segmentation threshold or threshold window,¹⁸ the technique in Frieder et al. [1985a] can extract the visible surface of a single object for display. Later in this paper, we describe how 3D-primitive-based volume methods can be used for volume rendering. Figure 2 is a voxel-based depiction of a volume.
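The sketch below gives the idea of back-to-front access in a deliberately simplified, axis-aligned form: slices are visited from the deepest to the nearest, and qualifying voxels overwrite the screen buffer so that nearer voxels win without an explicit hidden-surface test. The threshold window and the assumption that the view direction coincides with the z-axis are simplifications; Frieder et al. [1985a] describe the general rotated case.

```python
# Schematic back-to-front (BTF) projection, simplified to an axis-aligned view.
import numpy as np

def btf_project(volume, low, high):
    """volume[z, y, x]; z == 0 is nearest to the observer.
    Returns a depth buffer whose values can be fed to a shading step."""
    depth_image = np.full(volume.shape[1:], np.inf)      # screen space (y, x)
    for z in range(volume.shape[0] - 1, -1, -1):         # back to front
        slice_z = volume[z]
        hit = (slice_z >= low) & (slice_z <= high)       # threshold window
        depth_image[hit] = z                             # nearer voxels overwrite
    return depth_image

# Example: a small synthetic volume with a dense object in the middle.
vol = np.zeros((16, 32, 32)); vol[4:12, 8:24, 8:24] = 1000.0
depth = btf_project(vol, 500.0, 1500.0)
print(depth.min())     # 4.0: the nearest visible surface of the object
```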

Because of the computational expense incurred when accessing the 3D space array, several researchers have used the octree data structure, or a variation of it, to reduce data access time. The octree is an eight-ary tree data structure formed by recursive octant-wise subdivision of the object space array. Octant volumes continue to be subdivided until a termination criterion is satisfied. Two common termination criteria are the total

17 Back-to-front access can be used with 1D, 2D, and 3D primitives.
18 A threshold window is a user-defined range of voxel values specified to separate items of interest, such as an organ, from the remainder of a volume.


volume represented by a node and the complexity (homogeneity) of the volume represented by the node. In the octree, each node of the tree represents a volume of space, and each child of a given node represents an octant within the volume of the parent node. Each node of the octree contains a value that corresponds to the average value of the object space array across the octant volume represented by the node. The root node of the tree represents the entire object space volume, and leaf nodes correspond to volumes that are homogeneous, or nearly so. Leaf nodes do not represent identically sized volumes; instead they represent object space volumes that satisfy the termination criteria. For example, if homogeneity is a criterion, each leaf represents an object space volume that encompasses a set of voxels having the same value. Intermediate levels of nodes represent nonhomogeneous octant volumes; the value for each of these nodes is determined by volume-weighted averaging of the values of the child nodes.
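The sketch below builds such an octree by recursive octant subdivision, stopping when an octant is homogeneous or a single voxel, and assigning each internal node the average of its children (which, for equal-sized octants, is the volume-weighted average described above). The cubic, power-of-two volume is an assumption made to keep the example short.

```python
# Octree construction by recursive octant-wise subdivision of a cubic,
# power-of-two voxel volume; termination on homogeneity or single voxels.
import numpy as np

class OctreeNode:
    def __init__(self, value, children=None):
        self.value = value          # average value of the octant
        self.children = children    # None for a leaf, else a list of 8 nodes

def build_octree(volume):
    vol = np.asarray(volume, dtype=float)
    n = vol.shape[0]
    if n == 1 or vol.min() == vol.max():          # termination criteria
        return OctreeNode(float(vol.mean()))
    h = n // 2
    children = [build_octree(vol[z:z + h, y:y + h, x:x + h])
                for z in (0, h) for y in (0, h) for x in (0, h)]
    # Children cover equal volumes, so the plain mean of their values is the
    # volume-weighted average mentioned in the text.
    return OctreeNode(sum(c.value for c in children) / 8.0, children)

def count_nodes(node):
    if node.children is None:
        return 1
    return 1 + sum(count_nodes(c) for c in node.children)

# Example: an 8x8x8 volume that is empty except for one solid octant.
vol = np.zeros((8, 8, 8)); vol[0:4, 0:4, 0:4] = 100.0
tree = build_octree(vol)
print(tree.value, count_nodes(tree))   # 12.5, 9 (root plus eight leaves)
```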

We classify the octree as a 3D-primitive object space representation because the nodes of the octree represent volumes in object space. The octree is not, however, a pure 3D-primitive-based representation because of the voxel classification preprocessing performed when creating the octree data structure. Chen and Huang [1988] provide an introductory survey and tutorial on the use of octrees in medical imaging. Samet [1990] describes techniques for the creation and use of octrees for representing objects by their surfaces. Further work on the formation and use of octrees in medical imaging is reported in Amans and Darrier [1985], Ayala et al. [1985], Carlbom et al. [1985], Chen and Aggarwal [1986], Gargantini [1982], Gargantini et al. [1986a, 1986b], Glassner [1984, 1988], Jackins and Tanimoto [1980], Kunii et al. [1985], Levoy [1988a, 1988b, 1989a, 1989b, 1990a, 1990b], Levoy et al. [1990], Mao et al. [1987], Meagher [1982a, 1982b, 1985], Samet [1989], Srihari [1981], Stytz [1989], Tamminen and Samet [1984], Tuy and Tuy [1984], and Weng and Ahuja [1987]. Note that the preprocessing step faces the same voxel classification problems encountered when using 1D and 2D object space representations even though surfaces are not explicitly extracted.

Using one of the three object space portrayal methods, a desired object of interest or the entire volume can be rendered. The two major approaches to 3D medical image rendering built upon these three depiction methods are surface rendering and volume rendering. A surface rendering presents the user with a display of the surface of a single object. A volume rendering displays multiple surfaces or the entire volume and presents the user with a visualization of the entire space. Volume rendering uses 3D primitives as the input data type, whereas a surface rendering can be computed using 1D, 2D, or 3D primitives.

To compute a surface rendering, the input data must be processed to extract a desired surface or organ for display. To do so, voxels must be deterministically classified into discrete object-of-interest and nonobject-of-interest classes (possibly in a preprocessing step). The resulting rendered image provides a visualization of the visible surface of a single object within the 3D space. Note that during the classification phase, object space must be examined for qualifying voxels. If a record of qualifying voxels is preserved, subsequent renderings of the object's surface can be made without reaccomplishing voxel classification. Maintaining a record substantially reduces the rendering computational burden because only the object of interest, rather than all of object space, must be processed. The use of a record of qualifying voxels differentiates the surface renderings derived from the contour (1D) and surface (2D) methods from the surface renderings derived from the volume (3D) method.

For illustrative purposes, Figure 7 depicts surface renderings of a human skull computed from 2D (left) and 3D (right) primitives. These two particular figures look very similar, and to the lay person


Figure 7. Surface rendering of a human skull from 2D primitives (left) and 3D primitives (right) using the technique described in Udupa and Herman [1990]. Reproduced with permission from Udupa and Herman [1990].

are for all intents and purposes identical. Two-dimensional-primitive-based renderings are, however, commonly crisper in detail than 3D-primitive-based surface renderings, which can appear fuzzy. The other differences between the two surface renderings are the amount of time required to compute the surface rendering and the greater volume manipulation flexibility inherent in the 3D-primitive approach. The 3D-primitive approach is more flexible because all of the data in the volume is available at rendering time.

Volume rendering treats voxels as opaque or semiopaque volumes with the opacity of each voxel determined by its value. Typically, the value is assumed to correlate to the type(s) and amount of material in the voxel. For each rendition all the voxels in object space are examined, making this type of rendering significantly more computationally expensive than surface renderings generated from 1D or 2D primitives. The aim of this type of imaging is to present the object(s) of interest within the anatomical context of surrounding tissue.

Currently, there are four major types of volume rendering: volume rendering using multiple threshold windows, volumetric compositing, max-min display, and radiographic display. Volume rendering using multiple threshold windows uses the threshold windows to extract different tissue types from the volume. This type of volume rendering generates an image containing multiple opaque object surfaces. Volumetric compositing, sometimes called compositing, generates an image containing multiple semiopaque and/or opaque tissue surfaces in a two-step process. The first step is determination of the blend of materials present in each voxel, possibly using a probabilistic classification scheme, yielding an opacity value and a color for the voxel. The second step is the combination of the voxel opacity and color values into a single representation of light attenuation by the


Figure 8. Volume rendering of a human head. Three radiation treatment beams encompassing a treatment region and MRI intensities are superimposed on cut planes in a volume rendering of a patient's head. The head with exposed cortex was rendered using 3D primitives generated from 109 MRI slices with other objects superimposed by hybrid volume-rendering techniques. Photograph courtesy of the University of North Carolina Departments of Computer Science and Radiation Oncology.

voxels along the viewer's line of sight through image space. The resulting rendered image can provide a visualization of the entire contents of the 3D space or of several selected materials (such as bone and skin surfaces) in the space. We describe volumetric compositing in greater detail in the next section. Max-min display, also called angiographic display, determines the shade at each screen space pixel using the maximum (or minimum) voxel value encountered along the path of each ray cast from each screen space pixel through image space. Radiographic display, also called transmission display, determines the shade at each screen space pixel from the weighted sum of the voxel values encountered along the path of each ray cast from each screen space pixel through image space. Except for changes in lighting and voxel value to displayed-color mapping, object space must be completely examined to compute each type of volume rendering.
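For a single ray through image space (here taken to be one column of voxels in an axis-aligned view), the three shade computations just described reduce to the small routines sketched below. The opacity/colour transfer function is a hypothetical stand-in for the material classification step.

```python
# Per-ray sketches of volumetric compositing, max-min display, and
# radiographic display, for an axis-aligned view (one ray = one voxel column).
import numpy as np

def classify(value):
    """Toy transfer function: voxel value -> (opacity, grey-level colour)."""
    opacity = min(1.0, value / 1000.0)
    return opacity, value / 1000.0

def composite_ray(ray):
    """Volumetric compositing, accumulated front to back along the ray."""
    shade, transparency = 0.0, 1.0
    for value in ray:                        # ray[0] is nearest the observer
        opacity, colour = classify(value)
        shade += transparency * opacity * colour
        transparency *= 1.0 - opacity
        if transparency < 0.01:              # ray is effectively opaque
            break
    return shade

def maxmin_ray(ray):
    """Max-min (angiographic) display: brightest voxel along the ray."""
    return float(np.max(ray))

def radiographic_ray(ray, weights=None):
    """Radiographic (transmission) display: weighted sum along the ray."""
    weights = np.ones(len(ray)) if weights is None else weights
    return float(np.dot(weights, ray))

ray = np.array([0.0, 50.0, 800.0, 900.0, 100.0])
print(composite_ray(ray), maxmin_ray(ray), radiographic_ray(ray))
```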

The next two figures show different types of volume renderings. The patient data in Figure 8 were acquired using an MRI unit; those in Figure 9 were acquired using a CT. Figure 8 is a volumetric compositing that displays the first visible patient surface along a ray composited with three radiation treatment beams. Figure 9 is an example of a volume rendering that uses multiple threshold windows to display the surface of two different objects in a volume. At each pixel in Figure 9, the surface that appears in the image is the one surface of the two that is closest to the observer in image space.


Figure 9. Volume rendering showing the lungs and spine of a patient. The 3D primitives used in the rendering were acquired in a 24-slice CT study. Image courtesy of Reality Imaging Corporation, Solon, Ohio. Images rendered using the Voxel Flinger.

We have adopted a definition for the surface- and volume-rendering operations that emphasizes what the user sees in the rendered image, as in Levoy [1990b]. A different set of definitions for these same operations emphasizes the amount of data processed to compute the rendering [Udupa 1989; Udupa and Odhner 1990]. In this other set of definitions, a surface rendering is any rendering formed from a contour- or surface-based object space representation. A volume rendering is any rendering formed from a volume method object space representation. The distinction between the two sets of definitions lies in the classification of those techniques that process all of object space but display a single surface. Both sets of definitions classify renderings from contour and surface object space portrayal methods as surface renderings and the renderings of multiple surfaces using volume methods as volume renderings. The set of definitions used in this survey, however, classifies a rendering from a volume method that depicts a single surface as a surface rendering. The other set of definitions would classify this same rendering as a volume rendering.

Whether a surface or a volume rendering is computed, an important aspect to providing a 3D illusion within 2D screen space is the shading (i.e., coloring) of each pixel. Shading algorithms assign a color to a pixel based upon the surface normal, orientation with respect to light sources and the observer, lighting, image


space depth, and type of material portrayed by the pixel. We present a more detailed description of the concepts used to perform shading in the next section and in Appendix E. In this section, we limit the discussion to a classification scheme for 3D medical imaging shading techniques.

We classify 3D medical image shading techniques based upon the methodology used to estimate the surface normal. There are two categories of surface normal estimation techniques: object space methods and image space methods. Object space methods use information available in object space; image space methods use information available in image space. Object space surface normal estimation shading algorithms, as in Chen et al. [1985], Chuang [1990], Gouraud [1971], Hohne and Bernstein [1986], and Phong [1975], calculate the normal for a visible voxel face using the geometry of the surrounding voxels or voxel faces or using the gray-scale gradient (local pixel value gradient) between the target pixel and its neighboring pixels. The object space surface normal estimation does not change due to rotation of the object. This category of techniques can reduce image-rendering time but at the cost of increased time for preprocessing the data. Image space surface normal estimation shading algorithms, as in Chen and Sontag [1989], Gordon and Reynolds [1983, 1985], and Reynolds [1985], estimate the normal using the distance gradient (z' gradient) between the target pixel and its neighbors. This technique requires recomputing the estimated surface normal after each rotation. Appendix F contains a description of the object space surface normal estimation algorithms described in Chen et al. [1985], Gouraud [1971], Hohne and Bernstein [1986], and Phong [1975], as well as the image space surface normal estimation algorithm described in Gordon and Reynolds [1985].
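To make the image space category concrete, the following sketch (our illustration, not taken from any of the cited algorithms) estimates a per-pixel normal from the z' distances stored in a depth image using central differences; the function name and the NumPy array representation are assumptions.

    import numpy as np

    def image_space_normals(zbuffer):
        # Estimate per-pixel surface normals from a depth (z') image.
        # Because the normals depend on the projected depths, they must be
        # recomputed whenever the scene is rotated.
        dz_dy, dz_dx = np.gradient(zbuffer.astype(float))
        # The surface z = f(x, y) has a normal proportional to (-dz/dx, -dz/dy, 1).
        normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(zbuffer, dtype=float)))
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        return normals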

The contour- and surface-based methods are suitable for computing surface renderings of an object of interest, whereas the volume object space portrayal method can be used for surface or volume rendering. Tiede et al. [1987] compares the performance of four different surface-rendering techniques. Comparisons of surface- and volume-rendering methods and shading techniques are presented in Gordon and DeLoid [1990b], Lang et al. [1990], Pommert et al. [1989, 1990], Tiede et al. [1990], and Udupa and Hung [1990a, 1990b]. Talton et al. [1987] compares the efficacy of several different volume-rendering algorithms.

Table 2 illustrates how the concepts discussed in this section have been implemented in a wide variety of 3D medical imaging machines. We classify the 3D medical image rendering machines by object space portrayal method, type of rendering, type of shading algorithm, architecture, and development arena. The architecture indicates the type of hardware used to execute the algorithms in the machine. We mention the development arena to indicate whether the machine was designed as a research experiment or for commercial purposes. References for the machines summarized in the table and discussed later in this survey are presented in the body of the survey.

3. THREE-DIMENSIONAL MEDICAL IMAGING RENDERING OPERATIONS

This section presents a survey of the operations commonly performed when rendering a 3D medical image. Because additional capabilities are under development, these operations are not an all-inclusive list. Rather, they indicate the broad range of capabilities desired within a 3D medical imaging machine.

Adaptive histogram equalization [Pizer et al. 1984, 1986, 1987, 1990b] is an image enhancement operation used to increase the contrast between a pixel and its immediate neighbors. For each pixel, the technique examines the histogram of intensities in a region centered on the pixel and sets the output pixel intensity according to its rank within its histogram. The technique has been used to


enhance visual contrast in 2D image slices and to amplify feature boundary contrast before performing segmentation (described below).
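As a concrete, if unoptimized, illustration of the rank-within-local-histogram idea, the sketch below applies adaptive histogram equalization to a 2D slice; the window radius, level count, and function name are our assumptions rather than parameters from the cited work.

    import numpy as np

    def adaptive_histogram_equalize(image, radius=8, levels=256):
        # For each pixel, rank its intensity within the histogram of a
        # (2*radius+1)^2 region centered on it and map that rank onto the
        # available output levels.  Brute force O(N * window) for clarity.
        img = image.astype(int)
        out = np.zeros_like(img)
        rows, cols = img.shape
        for r in range(rows):
            for c in range(cols):
                r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
                c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
                region = img[r0:r1, c0:c1]
                rank = np.count_nonzero(region <= img[r, c])
                out[r, c] = (levels - 1) * rank // region.size
        return out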

Antialiasing [Abram and Westover 1985; Booth et al. 1987; Burger and Gillies 1989; Cook et al. 1984; Crow 1977, 1981; Dippe and Weld 1985; Foley et al. 1990; Fujimoto and Iwata 1983; Lee and Redner 1990; Max 1990; Mitchell 1987; Watt 1989] is an image-rendering operation that diminishes the appearance of jagged edges within the rendered image by eliminating spurious high-frequency artifacts. Antialiasing accomplishes this objective by smoothing the edges of objects in the scene. A common approach to antialiasing in medical imaging is to increase the resolution of the image during processing and then resample the data back to the original resolution for final display. This technique is referred to as supersampling.

Supersampling achieves an acceptable level of antialiasing at low computational cost. It determines the shade of a pixel by using multiple probes of image space per display pixel. The weighted sum of the values returned by the probes determines the shade of the pixel. The rendering system must perform graphics operations at high resolution (at least two or more times the resolution of the final display) and determine the low-resolution display-pixel values by averaging the high-resolution pixel values. For example, by calculating the image at a resolution of 1024 x 1024 pixels and then displaying it at a resolution of 512 x 512 pixels, four pixel values in the high-resolution image are averaged into one pixel in the final display. The averaging blurs the final image, thereby reducing the occurrence of jagged edges and high-frequency artifacts.
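A minimal sketch of the averaging step, assuming the image has already been rendered at twice the display resolution (the 1024 x 1024 to 512 x 512 case from the text); the function name is ours.

    import numpy as np

    def downsample_2x(high_res):
        # Average each 2 x 2 block of the high-resolution image into one
        # display pixel, e.g., 1024 x 1024 -> 512 x 512.
        h, w = high_res.shape
        return high_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))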

Boundary detection is a method for image segmentation that produces object boundary information for an object of interest. A boundary detection algorithm identifies the surface of the organ of interest in the scene and extracts the surface from the remaining portion of the 3D digital scene. This technique could, for example, be used to extract the skull in Figure 7 and the spine and lungs in Figure 9. A popular approach to boundary detection is surface tracking. Surface tracking locates all the connected surface voxels in the scene that are in the same object/organ. An advantage of boundary detection over image segmentation by thresholding (see below) is improved isolation of the connected components of the object within the scene. Boundary detection accomplishes connected component extraction implicitly when determining the object boundary. Because of the difficulties and inaccuracies inherent in making a binary decision concerning the presence or absence of a material within a volume, probabilistic classification schemes combined with image compositing operations have been developed. These techniques depict the boundary of an object fuzzily rather than discretely. Appendix D presents three algorithms devised to perform boundary detection by surface tracking. These are the algorithm proposed by Liu [1977], the algorithm specified in Artzy et al. [1981], and the "marching cubes" algorithm described by Lorensen and Cline [1987] and Cline et al. [1988].
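The following toy sketch conveys the flavor of surface tracking; it is not the Liu, Artzy et al., or marching cubes algorithm described in Appendix D. Starting from a seed voxel assumed to lie inside the object, it visits 6-connected object voxels and collects those that touch the background, so disconnected structures with the same density are excluded.

    from collections import deque
    import numpy as np

    def track_surface(volume, seed, threshold):
        # A voxel belongs to the object if its value meets the threshold; it is
        # a surface voxel if at least one 6-neighbor lies outside the object.
        inside = volume >= threshold
        visited = np.zeros_like(inside, dtype=bool)
        visited[seed] = True
        surface, queue = [], deque([seed])
        steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            on_surface = False
            for dz, dy, dx in steps:
                nz, ny, nx = z + dz, y + dy, x + dx
                out_of_bounds = not (0 <= nz < volume.shape[0] and
                                     0 <= ny < volume.shape[1] and
                                     0 <= nx < volume.shape[2])
                if out_of_bounds or not inside[nz, ny, nx]:
                    on_surface = True                # this neighbor is background
                elif not visited[nz, ny, nx]:
                    visited[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
            if on_surface:
                surface.append((z, y, x))
        return surface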

Clipping, also called data cutout, isolates a portion of a volume for examination by using one or more clipping planes. The clipping planes delineate the boundary of the subvolume to be extracted. In Figure 8, clipping planes were used to remove the wedge-shaped volume from the back of the patient's head to expose the underlying material. Using the other operations described in this section, the extracted subvolume can be manipulated in the same manner as the whole volume.

Digital dissection is a procedure used to cut away portions of overlying material to view the material lying underneath it within the context of the overlying material. For example, if a user wishes to view a portion of a fractured skull, this operation would allow the skull to be viewed along with the skin and surface of the head surrounding the fracture. Figure 8 presents an example of


this technique with its simultaneous depiction of the patient's skin, skull, and brain. Cutting planes are not commonly used for this operation. Instead, a graphical icon displayed upon the cathode ray tube (CRT) indicates the location of the digital scalpel. The scalpel indicates where to remove material. The cutting operation can be accomplished in the renderer by treating the "cut away" voxels' values as transparent, thereby allowing the underlying materials to be viewed. This operation is a fine-grain counterpart of clipping. Instead of removing entire planes of voxels, however, digital dissection removes areas only one or two voxels wide.

False color (also called the colored-range method) [Farrell et al. 1984] is an image enhancement operation used to differentiate multiple objects displayed within the scene. False coloring is a two-step procedure. In the first step, the user assigns a color value to each range of voxel values to be visualized. In the second step, the renderer classifies and colors individual voxels in image space. Coloring is accomplished by assigning a color to a voxel, then darkening the intensity of the color according to the image space distance between the voxel and the observer. After coloring the voxel in image space, the voxel is projected into screen space. By making voxels at the back of the scene darker than those at the front, the display provides the viewer with depth cues while simultaneously differentiating the structures in the scene.
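A sketch of the two-step false coloring procedure; the window-to-color table, the depth normalization, and the function name are illustrative choices of ours, not values from Farrell et al. [1984].

    def false_color(voxel_value, depth, color_ranges, max_depth):
        # Step 1 (done by the user): color_ranges maps (low, high) voxel-value
        # windows to base RGB colors.  Step 2 (done here): classify the voxel
        # and darken its color in proportion to its image space depth.
        for (low, high), (r, g, b) in color_ranges.items():
            if low <= voxel_value <= high:
                scale = 1.0 - depth / float(max_depth)   # farther voxels are darker
                return (r * scale, g * scale, b * scale)
        return (0.0, 0.0, 0.0)                           # background

    # Hypothetical CT windows: bone rendered white, soft tissue red.
    ranges = {(300, 3000): (1.0, 1.0, 1.0), (0, 100): (1.0, 0.2, 0.2)}
    print(false_color(500, depth=40, color_ranges=ranges, max_depth=100))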

Geometric transformations, consisting of scaling, translation, and rotation [Burger and Gillies 1989; Foley et al. 1990; Rogers 1985], allow the user to examine the scene from different orientations and positions. Scene rotation, scaling, and translation are expressed relative to the object space x-, y-, and z-axis. Geometric transformations take a scene in object space, transform it, and place the result in image space. Scaling enlarges or shrinks the rendered image. Translation moves the rendered volume within image space. Rotation allows the user to view portions of the rendered image that would otherwise be invisible and also enhances the 3D effect in the rendered image. Successive rotations are cumulative in their angular displacement of objects in the scene relative to their starting positions. The 3D effect provided by rotation can be further enhanced by making a movie of the rotation of the scene through a sequence of small angles around one axis [Artzy et al. 1979; Chen 1984b; Robb et al. 1986].
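A minimal sketch of one geometric transformation, rotation about the object space z-axis applied to an array of points; successive calls compose, mirroring the cumulative behavior described above, and scaling or translation would be applied the same way with their own matrices. The function name and NumPy representation are assumptions.

    import numpy as np

    def rotate_z(points, degrees):
        # points: an (N, 3) array of object space coordinates.
        t = np.radians(degrees)
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        return points @ rz.T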

Hidden-surface removal [Burger and Gillies 1989; Foley et al. 1990; Frieder et al. 1985a; Fuchs et al. 1979; Goldwasser and Reynolds 1987; Meagher 1982a, 1982b; Reynolds 1985; Reynolds et al. 1987; Rogers 1985; Watt 1989; Appendix A] is one of the most computationally intensive operations in 3D medical imaging. Hidden-surface removal determines the surfaces that are visible/invisible from a given location in image space. Hidden-surface removal algorithms fall into three main types: scanline-based,19 depth-sort, and z-buffer. Each type uses the depth, also called the z' distance, of each scene element that projects to a pixel to determine if it lies in front of or behind the scene element currently displayed at the same pixel. Appendix A provides further information and example algorithms.

19 A scanline is a horizontal line of pixels on a CRT. A scanline is also called a scan.
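As an illustration of the z-buffer family (not the specific algorithms of Appendix A), the sketch below keeps, at each pixel, the scene element with the smallest z' distance; the buffer sizes and names are ours.

    import numpy as np

    # z-buffer initialized to "infinitely far"; image initialized to background.
    zbuf = np.full((512, 512), np.inf)
    image = np.zeros((512, 512, 3))

    def zbuffer_insert(u, v, depth, color):
        # A new scene element replaces the one currently displayed at (u, v)
        # only if it lies nearer the viewer.
        if depth < zbuf[v, u]:
            zbuf[v, u] = depth
            image[v, u] = color

    zbuffer_insert(100, 200, depth=35.0, color=(1.0, 0.8, 0.6))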

Histogram equalization [Gonzalez and Wintz 1987; Hummel 1975; Richards 1986] is an image enhancement operation that attempts to use the available brightness levels in a CRT display equally. Histogram equalization modifies the contrast in an image by mapping pixel values in original screen space to pixel values in modified screen space based on the distribution of occurrences of pixel values. The procedure allocates a greater proportion of the gray scale to the ranges of values with a large number of screen space pixels than to ranges with fewer pixels. Histogram equalization, unlike adaptive histogram


equalization, makes gray-scale value assignments across the entire image and so is not sensitive to regional variations in the displayed image. Let h_o(x) be the histogram function for the value x in the original image. If there are P pixels in the image and B brightness values, then an ideal modified image would have P/B pixels per brightness value associated with it. Histogram equalization cannot achieve an ideal modified image in practice because pixels with the same brightness value in the original image histogram are not split between brightness values in the modified image. The desired modified image brightness value, y, for a given original image brightness value, x, is computed by

y = ((B - 1)/P) ∫₀ˣ h_o(x) dx.

The integral is found by computing the cumulative histogram for the image. Since the y value in the modified image is constant for a given x in the original image, a look-up table is used to assign pixel values in the original image to pixel values in the modified image. Both Richards [1986] and Gonzalez and Wintz [1987] provide further descriptions of histogram equalization, including its mathematical basis, as well as general information concerning the use of histograms for image enhancement processing.
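The look-up-table construction described above follows directly from the formula; the sketch below assumes an integer-valued image with values in [0, B) and is our illustration rather than code from the cited references.

    import numpy as np

    def equalize(image, levels=256):
        # y = ((B - 1) / P) * cumulative_histogram(x), realized as a
        # per-intensity look-up table (B = levels, P = number of pixels).
        hist = np.bincount(image.ravel(), minlength=levels)
        lut = ((levels - 1) * np.cumsum(hist)) // image.size
        return lut[image]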

Interpolation [Artzy et al. 1979] converts the sparsely spaced 2D slices formed by a CT, MRI, SPECT, PET, or ultrasound study into a continuous 3D scene. Because medical imaging modalities leave unimaged space between adjacent slices of patient data, interpolation is used to fill in the space between the slices. Interstice density values are estimated by computing a linear average of the density values found in pairs of opposing voxels in the original slices. Two approaches have found widespread use: nearest-neighbor (zeroeth-order) interpolation and trilinear (first-order) interpolation. Both techniques superimpose a grid of sample points upon the volume to be rendered. Nearest-neighbor interpolation estimates the density value of the interstice voxels using the value of the closest in-slice voxel. Trilinear interpolation estimates the density value for an interstice voxel by taking a weighted average of the eight closest in-slice voxels. Trilinear interpolation yields images that are superior to nearest-neighbor interpolation but at a cost of increased computation. Raya and Udupa [1990] describes a new approach to interpolation called shape-based interpolation. This technique is particularly useful when using 2D (surface) primitives. Shape-based interpolation uses the in-slice object boundary to estimate the interstice object boundary. This technique does not yield interpolated values for interstice voxel values but instead yields the interpolated boundary location for the object.
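A sketch of trilinear interpolation at a non-grid sample point, written against a volume indexed as volume[z, y, x]; nearest-neighbor interpolation would instead return the single closest voxel. The indexing convention and function name are assumptions.

    import numpy as np

    def trilinear(volume, x, y, z):
        # Weighted average of the eight voxels surrounding (x, y, z), with
        # weights given by the fractional offsets along each axis.
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        dx, dy, dz = x - x0, y - y0, z - z0
        value = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    weight = ((dx if i else 1 - dx) *
                              (dy if j else 1 - dy) *
                              (dz if k else 1 - dz))
                    value += weight * volume[z0 + k, y0 + j, x0 + i]
        return value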

Multiplanar reprojection (MPR) [Herman and Liu 1977; Kramer et al. 1990; Mosher and Hook 1990; Rhodes et al. 1980] is a procedure for displaying a single slice of a volume. Multiplanar reprojection can extract single-slice 2D views from a volume of medical imaging modality data at arbitrary, oblique angles to the three main (x, y, z) object space axes. In general, the procedure steps along the oblique plane, computing the possibly interpolated voxel value at each point on the display plane from the nearby object space voxel value(s).

Multiplanar display (MPD) is the simultaneous display of multiple slices of medical imaging modality data. Three orthographic views are commonly provided in an MPD: a sagittal view, an axial view, and a coronal view. Taking the z-axis to be the patient's longest dimension and the y-axis to be oriented along the patient's front-to-back axis, the sagittal, axial, and coronal views are defined as follows: An axial view is a view along the z-axis of 3D medical imaging modality data that portrays image slices parallel to the x-y plane. An axial view is also called a transverse view. A coronal view is a view along the y-axis of 3D medical imaging modality data that


reveals image slices parallel to the x-z plane. A sagittal view is a view along the x-axis of 3D medical imaging modality data that reveals image slices parallel to the y-z plane.
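With a volume stored as a NumPy array indexed volume[z, y, x] (an assumption about storage order), the three standard MPD views are single-index slices, as in the sketch below.

    def orthographic_views(volume, i, j, k):
        axial    = volume[k, :, :]   # transverse: fixed z, parallel to the x-y plane
        coronal  = volume[:, j, :]   # fixed y, parallel to the x-z plane
        sagittal = volume[:, :, i]   # fixed x, parallel to the y-z plane
        return axial, coronal, sagittal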

Projection is an operation that maps points in an n-dimension coordinate system into another coordinate system of less than n dimensions. In 3D medical imaging, this operation performs the mapping from points in image space to screen coordinates in screen space. There are two broad classes of projections: perspective and parallel. A perspective projection is computed when the distance from the viewer to the projection plane (commonly the CRT screen) is finite. This type of projection varies the size of an object inversely with its distance in image space from the projection plane. An object at a greater distance from the viewer than an identical closer object appears smaller. A parallel projection, on the other hand, places the viewer at an infinite distance from the projection plane. The size of an object does not vary with its depth in image space.

There are two broad classes of parallel projections: orthographic and oblique. In an orthographic parallel projection, the direction of projection (the direction from the projection plane to the viewer) is parallel to the normal to the projection plane. In an oblique parallel projection, the direction of projection is not parallel to the normal to the projection plane. For further information see Burger and Gillies [1989], Foley et al. [1990], Rogers [1985], and Watt [1989].
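The difference between the two classes can be captured in a few lines; the sketch below (our illustration) projects an image space point onto the projection plane either perspectively, using a finite viewer distance d and the scale factor d/(d + z'), or orthographically by simply discarding z'.

    def project(point, viewer_distance=None):
        # point is (x, y, z'), with z' the depth in image space.
        x, y, z = point
        if viewer_distance is None:                # parallel (orthographic) projection
            return x, y
        s = viewer_distance / (viewer_distance + z)
        return x * s, y * s                        # perspective: farther objects shrink

    print(project((10.0, 5.0, 20.0)))                      # parallel
    print(project((10.0, 5.0, 20.0), viewer_distance=40))  # perspective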

Ray tracing [Burger and Gillies 1989; Foley et al. 1990; Glassner 1989; Rogers 1985; Watt 1989] is an image-rendering technique that casts rays of infinitesimal width from a given viewpoint through a pixel and on into image space. The path of the ray and the location of the objects in the volume determine the object(s) in the volume encountered by the ray. The color of the object(s), their depth, and other shading factors determine pixel intensity. In addition to performing hidden-surface removal and shading, ray tracing can be extended to produce visual effects such as shadows, reflection, and refraction.20 The additional effects come at the cost of additional computations per ray and at the cost of spawning additional rays at each ray/surface intersection. Even though ray tracing is a point-sampling21 technique, it can perform antialiasing by spawning several rays at each pixel and computing a weighted sum of the ray intensities. The weighted sum of the ray intensities determines the intensity for the pixel. Appendix B surveys several techniques for increasing the speed and for improving the image quality obtained with ray tracing.

20 Reflection is caused by light bouncing off a surface of an object. Refraction is caused by light bending as it passes through a transparent object.
21 Point sampling is a rendering technique that determines the shade of a pixel based upon one probe of image space. The value returned by the probe determines the shade of the pixel. This method of sampling typically results in aliasing.
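The sketch below is a toy ray caster in the spirit of this discussion, not one of the algorithms surveyed in Appendix B: it marches one ray through a voxel volume at fixed increments, treats the first sample meeting a threshold as the visible surface, and shades it by depth. The step size, bounds handling, and names are assumptions.

    import numpy as np

    def cast_ray(volume, origin, direction, threshold, step=0.5, max_t=1000.0):
        # Assumes the ray origin lies inside (or at the edge of) the volume.
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        t = 0.0
        while t < max_t:
            z, y, x = np.round(origin + t * direction).astype(int)
            in_bounds = (0 <= z < volume.shape[0] and
                         0 <= y < volume.shape[1] and
                         0 <= x < volume.shape[2])
            if not in_bounds:
                return 0.0                              # ray left the volume: background
            if volume[z, y, x] >= threshold:
                return max(0.0, 1.0 - t / max_t)        # first hit, depth-shaded intensity
            t += step
        return 0.0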

Segmentation [Ayache et al. 1989; Back et al. 1989; Chuang and Udupa 1989; Fan et al. 1987; Herman et al. 1987; Shile et al. 1989; Trivedi et al. 1986; Udupa 1982; Udupa et al. 1982] is any technique that extracts an object or surface of interest from a scene. For example, segmentation can be used to isolate the bony portions of a volume from the surrounding tissue. Both the contour and surface approaches to scene representation use segmentation to isolate specific features within a volume from the remainder of the volume. Some type of image segmentation was used to extract the skull in Figure 7 and the spine and lungs in Figure 9.

Boundary detection is one approach to segmentation. Another technique for image segmentation is thresholding. Image segmentation by thresholding locates the voxels in a scene that have the same property, that is, meet the threshold requirement. Extraction of an object in a medical volume using this technique is difficult because multiple materials can be present within the volume of each voxel. The presence of multiple materials


causes misclassification of voxels. The important step in the thresholding process is the classification of the elements of the scene according to criteria that separate the object(s) of interest from other elements of the scene. A criterion commonly used in 3D medical imaging is the voxel value threshold or window. A voxel whose value lies within the window or above the threshold is assumed to be part of the object; otherwise it is classified as background. Because objects and surfaces in a medical image are not distinct, even the best segmentation operators provide only an approximation, albeit an accurate one, of the desired object or surface. Appendix C describes two techniques for segmentation by thresholding along with short synopses/classifications of other techniques for segmentation. Udupa [1988] discusses several boundary detection and segmentation by thresholding schemes and their use in surgical planning.
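A minimal sketch of segmentation by a voxel-value window; the window bounds in the usage line are illustrative, not clinically calibrated thresholds.

    import numpy as np

    def segment_by_window(volume, low, high):
        # Voxels whose values fall inside [low, high] are classified as the
        # object of interest; everything else is background.  Partial-volume
        # effects make this only an approximation, as noted in the text.
        return (volume >= low) & (volume <= high)

    volume = np.random.randint(0, 1000, size=(16, 16, 16))
    bone_mask = segment_by_window(volume, 300, 3000)   # hypothetical CT bone window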

Shading [Burger and Gillies 1989; Foley et al. 1990; Rogers 1985; Watt 1989] enhances the 3D appearance of an image by providing an illusion of depth. Chen et al. [1984a] provides an overview of several shading techniques used in the medical imaging environment. In combination with gray scale or color and rotation, shading enhances the 3D visual effect of images rendered in 2D. Figures 7, 8, and 9 illustrate how shading can enhance the perception of depth in an image. Shading achieves the depth illusion by varying the color of surfaces according to their image space depth, material content, and orientation with regard to light sources and the viewer. To shade a scene correctly, a series of computations based on one or more properties of each visible object must be performed. These properties are the refractive properties, the reflective properties, the distance from the observer, the angle of incidence between rays from the illuminating light source(s) and the surface of the object, the surface normal of the object, the type and color of the illuminating light source(s), and the color of the object. Appendix E describes the interaction of these properties. Appendix F describes the following shading algorithms: distance shading, normal-based contextual shading, gradient shading, gray-scale gradient shading, Gouraud shading, and Phong shading.

Tissue/image registration [Byrne et al. 1990; Gamboa-Aldeco 1986; Herman and Abbott 1989; Hu et al. 1989; Pelizzari et al. 1989; Schiers et al. 1989; Toennies et al. 1989; Toennies et al. 1990] is a procedure that allows volume and/or surface renderings taken of a patient at different times to be superimposed so that the patient axes in each image coincide. For example, if a patient had both a CT and a PET scan, it may be useful to overlay the CT anatomical information with the PET physiological information, thereby combining different information into a single rendered image. Registration can be accomplished either by marking points on the patient that are used later to align the images or by manual, post-hoc alignment of the images using anatomical landmarks. Preparation of the patient for registration allows the acquired data to be aligned with great accuracy, but it is impractical to perform the preparation for every patient. Manual alignment is not as precise as prepared alignment because patient axes alignment is merely approximated. Manual alignment, however, gives the physician a capability for ad hoc registration. Developing procedures for accurately performing manual, ad hoc alignment is an active research area.

Transparency effects [Foley et al. 1990; Watt 1989] allow a user to view an obscured object within the context of an overlying transparent (or semitransparent) object, thereby providing a context for the obscured object in terms of the overlying transparent structure(s). For example, one may want to view the skull through overlying transparent skin, as shown in Figure 8. In that figure, the combination of semitransparent treatment beams and the image of the patient's head provide an anatomical context for the location of the beams. This process is useful because it permits visualization of portions of the anatomy that are inside the patient within the setting of anatomical landmarks on the outside of the patient. To achieve a transparency effect, the rendering system uses the formula I = tI1 + (1 - t)I2, where 0 ≤ t ≤ 1, to overlay volume data. I1 is the intensity attributed to a point on the transparent surface; I2 is the intensity calculated for the point lying on an opaque surface behind I1; and t is the transparency factor for I1. Volumetric compositing uses the transparency concept when forming a visualization of a medical imaging volume.
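The overlay formula translates directly into code; the sketch below is ours, with intensities taken as scalars in [0, 1].

    def blend(i1, i2, t):
        # I = t*I1 + (1 - t)*I2:  i1 is the transparent front surface, i2 the
        # opaque surface behind it, and t the transparency factor (0 <= t <= 1).
        assert 0.0 <= t <= 1.0
        return t * i1 + (1.0 - t) * i2

    print(blend(0.9, 0.3, t=0.4))   # mostly the underlying surface shows through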

True 3D display is a total volume display method that uses human vision system physiological depth cues like movement parallax and binocular parallax (also called stereo vision) to cause the perception of depth in a display. True 3D display techniques can be used in combination with or instead of the depth cues provided by traditional computer graphics techniques (such as shading, shadows, hidden-surface removal, and perspective). Binocular parallax is a physiological depth cue based upon the disparity in the image seen by each eye. Binocular parallax is the strongest depth cue for the human visual processing system. Movement parallax is a strong physiological depth cue elicited by head movement and the corresponding perceived change in the image. Some forms of true 3D display (such as the varifocal mirror and holographic image techniques) further enhance the depth illusion by allowing the user to see the superposition of structures in the volume and to achieve segmentation of the structures by moving his or her head. Varifocal mirrors, holograms, several types of CRT shutters, and several types of stereo glasses have been used to generate true 3D displays. Techniques for displaying true 3D images are described in [Brokenshire 1988; Fuchs 1982a, 1982b; Harris 1988; Harris et al. 1986; Hodges and McAllister 1985; Jaman et al. 1985; Johnson and Anderson 1982; Lane 1982; Mills et al. 1984; Pizer et al. 1983; Robb 1987; Reese 1984; Stover 1982; Stover and Fletcher 1983; Suetens et al. 1987; Williams et al. 1989; Wixson 1989]. Lipscomb [1989] presents a comparative study of human interaction with particular 3D display devices.

Volume measurement [Bentley and Karwoski 1988; Udupa 1985; Walser and Ackerman 1977] is an operation that estimates the volume enclosed by an object within image space. The volume measurement operation, first described in Walser and Ackerman [1977], can be accomplished by taking the product of the number of voxels that form the object with the volume of a single voxel. Typically, this operation calls for some form of segmentation to extract the object from the surrounding material in the scene. Because segmentation is not a precise operation, there is a small error in the computed volume figure. The error is currently not clinically significant. As pointed out in Walser and Ackerman [1977], it is the change in the volume of an object that provides clinically relevant information. Even with its associated error, volume measurement is valuable because it provides the physician with the means to accurately detect the change noninvasively.
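As a sketch (ours), the measurement reduces to a voxel count scaled by the modality's voxel dimensions; the mask is assumed to come from a prior segmentation step.

    import numpy as np

    def measure_volume(object_mask, voxel_dims_mm):
        # Volume = (number of object voxels) x (volume of one voxel).
        dx, dy, dz = voxel_dims_mm
        return int(np.count_nonzero(object_mask)) * dx * dy * dz   # cubic millimeters

    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[20:30, 20:30, 20:30] = True
    print(measure_volume(mask, (1.0, 1.0, 1.5)))   # 1000 voxels, 1.5 mm^3 each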

The volume of interest (VOI) operation selects a cube-shaped region out of object space for the purpose of isolating a structure of interest. This operation is used to isolate a portion of object space for further display and analysis. The primary benefit of the VOI operation lies in its reduction of the amount of storage space and computation required to render an image.

Volumetric compositing (also called compositing) [Drebin et al. 1988; Duff 1985; Foley et al. 1990; Fuchs et al. 1988c, 1989a, 1989b; Harris et al. 1978; Heyers et al. 1989; Levoy 1989a, 1990a, 1990b; Levoy et al. 1990; Mosher and van Hook 1990; Pizer 1989a, 1989b; Porter and Duff 1984; Watt 1989] is a methodology for combining several images to create a new image. Compositing uses overlay techniques (to effect hidden-surface removal) and/or blending


(for voxel value mixing) along with opacity values (to specify surfaces) to combine the images. Figure 8 illustrates this technique with its composition of the treatment beams and the image of the patient's head.

A composite is assembled using a binary operator to combine two subimages into an output image. The input subimages can be either two voxels or a voxel and an output image from a prior composition. When performing an overlay, a comparison of z' distances along the line of sight determines the voxel nearest the viewer; that voxel's value is output from the composition. Image blending requires tissue type classification of each sample point22 in the 3D voxel grid by its value. The tissue type classification determines the color and opacity associated with the voxel. The voxel opacity determines the contribution of the sample point to the final image along the viewer's line of sight. The a-channel23 stores the opacity value for the voxel. An a-channel value of 1 signifies a completely opaque voxel, and a value of 0 signifies a completely transparent voxel. If voxel composition proceeds along the line of sight in front-to-back order, the value at the nth stage of the composition is determined as follows. Let c_in be the composition color value from the n - 1 stage, c_out be the output color value, c_n be the color value for the current sample point, and a_n be the a-channel value for the current sample point. Then

c_out * a_out = c_in * a_in + c_n * a_n * (1 - a_in),
a_out = a_in + a_n * (1 - a_in).

The displayed color along the line of sight, C, is C = c_out / a_out.

22 Sample point values are commonly computed using a trilinear interpolation of the eight neighboring voxel values.
23 The a-channel (alpha channel) is a data structure used to hold the opacity information needed to compose the voxel data lying along the viewer's line of sight.
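The front-to-back recurrence above can be implemented by accumulating a premultiplied color and an opacity along the line of sight, as in the sketch below (our illustration, with scalar gray-level colors and illustrative sample values).

    def composite_front_to_back(samples):
        # samples: (color, alpha) pairs ordered front to back along one ray.
        c_acc, a_acc = 0.0, 0.0        # premultiplied color c*a and accumulated opacity
        for c_n, a_n in samples:
            c_acc += c_n * a_n * (1.0 - a_acc)   # c_out*a_out = c_in*a_in + c_n*a_n*(1 - a_in)
            a_acc += a_n * (1.0 - a_acc)         # a_out = a_in + a_n*(1 - a_in)
            if a_acc >= 0.999:                   # ray is effectively opaque; stop early
                break
        return c_acc / a_acc if a_acc > 0.0 else 0.0   # displayed color C = c_out / a_out

    print(composite_front_to_back([(0.9, 0.2), (0.5, 0.5), (0.1, 1.0)]))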

Three-dimensional medical imaging systems use one or more of the operations mentioned above to depict a data volume accurately. Meeting the 3D medical imaging quality requirements while performing the medical imaging operations rapidly are difficult challenges. In fact, many of the requirements and operations work at cross purposes, and achieving one usually requires sacrificing performance in one or more other areas. For example, one method that can be used to achieve rapid medical image volume visualization is the use of tiling techniques to depict the surface of the object to be rendered. The use of tiling, however, requires a lengthy surface extraction processing step that the volume approach avoids. To date, no single machine has met all objectives and implemented all the operations, and it is an open question whether one machine can meet all the objectives.

4. THREE-DIMENSIONAL MEDICAL IMAGING MACHINES

This section presents an examination of the techniques used in previous 3D medical imaging research to solve the problem of presenting high-quality 2D reconstructions of imaged volumes at high speed. It describes several image processing architectures: Farrell's Colored-Range Method, Fuchs/Poulton Pixel-Planes 4 and 5 machines, Kaufman's Cube architecture, the Medical Imaging Processing Group (MIPG) machine, the Pixar/Vicom Image Computer and Pixar/Vicom II,24 Reynolds and Goldwasser's Voxel Processor machine, and the Mayo Clinic True 3D machine and ANALYZE.

These eight 3D medical imaging machines encompass a wide range of parallelism in their architectures, a wide range of image-rendering rates, and a wide number of approaches to object space display.

24 The Pixar Image Computer and Pixar II were sold to Vicom in the spring of 1990. Vicom is a trademark of Vicom Inc. Pixar is a trademark of Pixar Inc.

ACM Computing Surveys, Vol. 23, No 4, December 1991

Page 30: Three-dimensional medical imaging - ir@Georgetownir.cs.georgetown.edu/publications/downloads/91-3d_analysis_medica… · 1. unique aspects of three-dimensional medical imaging 2 three-dimensional

450 “ Stytz et al.

Three distinct approaches to 3D medical imaging are, however, evident: software-encoded algorithms on general-purpose computer systems (MIPG, Farrell, ANALYZE), graphics accelerators adapted for 3D medical imaging (Pixar/Vicom), and special-purpose computer systems with hardware encoding of algorithms (Pixel-Planes, Voxel Processor, Cube). The MIPG machines demonstrate the relevance and feasibility of constructing 3D medical images using contour and surface descriptions of objects in CT data. The Cube and Voxel Processor machines establish the usefulness of innovative memory access methods for rapid display of 3D medical images. Farrell's approach shows the usefulness of false color images in a 3D medical imaging environment. The Fuchs/Poulton machines confirm that the pixel-coloring bottleneck can be overcome by expending hardware resources at the pixel level, and that this investment yields significant throughput returns. The Fuchs/Poulton machines and the Voxel Processor machine demonstrate the validity of a rendering pipeline approach to 3D medical imaging, albeit using different pipelines. Finally, the Mayo Clinic work establishes the usefulness of performing 3D medical image rendering with cooperating processes in their ANALYZE system. These research machines laid the foundation for the current generation of commercial machines reviewed here and in Stytz and Frieder [1991]. The commercial Pixar/Vicom machines established that the images produced using compositing techniques are applicable to 3D medical imaging.

In the following sections, we characterize each of the machines by the overall machine architecture, the processing strategy, the data model, the shading algorithm(s), the antialiasing technique(s), the hidden-surface removal algorithm, the performance of the machine, the supported image resolution, the level at which the machine exploits computational parallelism, and the operational status of the machine (whether it exists as a proposal, a prototype, an assembled machine undergoing testing, a fully functional machine, or a marketed machine). Stytz and Frieder [1991] characterize additional commercial and research-oriented 3D medical imaging machines using these same parameters. Because the objective of this review is to survey these machines with particular emphasis placed on their ground-breaking achievements, we discuss many but not all of the capabilities provided by each machine. Because of space limitations, we do not discuss the shading, antialiasing, and hidden-surface removal technique parameters in depth and refer the reader to the appendixes and Stytz and Frieder [1991]. For a discussion of the complete set of capabilities for each machine, we refer the reader to the bibliographic references for each machine.

We selected the first nine parameters for examination because they provide insight into the options available when forming 3D medical images as well as the effect of various design choices. The machine architecture describes the interrelationships of the hardware and software components of the 3D medical imaging machine. Our description of the architecture concentrates on the major components of the machine and the components' operation. The choice of machine architecture sets the performance limits in terms of rendering throughput and image quality for the 3D medical imaging machine. The machine architecture also affects the type of data models that can be used to meet the image rendering rate design goal.

For example, suppose the machine architecture consists of a number of communicating processes running on a single, low-power CPU with CRT display support for an 8-bit image. The design goal stresses the need for relatively rapid image rendering and display rates. These design constraints indicate selection of a contour or surface data model, since these models would help achieve a rapid display rate because they reduce rendering time computational cost. In addition, the shading and antialiasing 3D medical imaging operations can be simple because 8 bits per pixel does not support the display of high-quality medical image volume visualizations.


On the other hand, if image-rendering rate is not a prime consideration, then a more computationally expensive model, such as the voxel model, can be used.

The processing strategy parameter sketches the approach adopted to use the machine architecture to perform 3D medical image-rendering tasks. The strategy addresses the constraints imposed by the machine architecture, data model, and screen resolution in addition to the desired rendered image quality and rendering speed. To continue with the example presented above, recall that minimal elapsed display and rendering times are important goals. One possible rendering time minimization strategy would be the use of a surface model of the object of interest with depth shading and no antialiasing to render the image of the object. A rapid display rate can be achieved by rendering a sequence of images in batch mode and displaying them after all rendering computations end. Note that this processing strategy precludes a capability for rapid interaction with the rendered image.

The data model parameter indicates both the amount of preprocessing required to render the image and the computational cost of rendering the image. For example, a contour-based description of an object requires a preprocessing step to extract the contours but permits rapid display of the object from various orientations. The drawback of this model is that any change to the object requires reaccomplishing the preprocessing step. On the other hand, a voxel-based description of a volume requires no preprocessing before rendering, but the rendering procedure is computationally expensive.

The shading, antialiasing, and hidden-surface removal algorithm parameters reflect both the image resolution attainable by the machine and the computational cost incurred in rendering the image. For example, assume that one machine uses Phong shading and another machine uses depth shading. We can then infer that the machine using Phong shading usually produces higher quality images at greater computational expense than the one using depth shading.

The machine performance parameter specifies the rendered image production rate for the 3D medical imaging machine. The resolution parameter, expressed in pixels, reflects the image resolution attainable by the machine. Image resolution is a measure of the fineness of detail displayed on the CRT. This measure has two components: CRT resolution and modality resolution. Modality resolution is the resolution of the medical imaging modality used to acquire the image, typically expressed as the dimension of a voxel along the x-, y-, and z-axis. The resolution of the modality used to acquire the image imposes the absolute limit on the level of detail that can be displayed. CRT resolution is the number of pixels on the CRT, usually expressed as the number of pixels along the u-axis and v-axis. For example, a CRT resolution of 256 x 256 means that the screen is 256 pixels wide and 256 pixels high. If the pixels on the CRT have approximately the same dimension as the face of a voxel, then increases in CRT resolution, up to the limit imposed by modality resolution, result in the display of finer detail in the rendered image. Because the user cannot alter modality resolution at rendering time, in this survey we use the term resolution when discussing CRT resolution. To achieve acceptable resolution, the 3D medical imaging machine's resolution should be the same as the resolution provided by medical imaging modalities. At the present time, the typical 3D medical imaging machine resolution is 256 x 256 pixels. Higher resolution allows for magnification and complete display of the rendered image.

The computational parallelism parameter indicates the capability of the selected machine architecture to perform rapid image rendering. Typically, there are two computational bottlenecks encountered when rendering an image: hidden-surface removal and pixel coloring. Parallelism can be used to attack both of these bottlenecks by permitting rapid extraction of visible surfaces using


parallel operation on discrete portions of the volume and by parallel computation of pixel values. A typical tradeoff encountered in the use of parallelism is the choice of giving speed or image quality primacy in the performance goals for the machine. In the extreme, for a given level of parallel operation, a parallel processing capability can be used to increase image-rendering speed while maintaining image quality, or image quality can be improved while maintaining the image-rendering rate. Table 3 summarizes the key parameters for each of the machines.

Figure 10. Colored-range imaging system architecture (image database, mainframe, workstation, final display).

4.1 Farrell’s Colored-Range Method

The 3D medical imaging machine proposed by Farrell and Zappulla [1989] and Farrell et al. [1984, 1985, 1986a, 1986b, 1987] takes an approach to 3D medical image rendering that is different from the other machines in this survey in two regards. First, the colored-range method does not require preprocessing of the data. Second, it relies upon the use of color and image rotation to highlight the 3D relationships between organs in the volume. The machine has a high image-processing speed but is expensive. The volume-rendering image-processing system consists of a host and a workstation (Figure 10). The host is an IBM 370/4341. The workstation is an IBM 7350.

A significant problem faced by the designers of the machine was deciding how to divide the workload between the workstation and the host. The solution adopted was to assign the computationally intensive operations, such as 3D rotation and data smoothing,25 to the host and assign the remaining operations, such as oblique projection, transparency, and color, to the workstation.

The processing strategy used in the machine seeks to reduce the computational load by eliminating or simplifying the operations required to form a 3D image. The strategy is implemented using the colored-range method for back-to-front 3D medical image rendering. The colored-range methodology does not require preprocessing of the data and can rapidly display object space from an arbitrary point of view. It emphasizes the 3D relationships of the constituent parts of the image by image rotation and the use of color. The machine performs angular rotation about an axis by diagonally painting the image across the screen in the back-to-front order defined by the desired scene orientation. The system computes the required diagonal offset for successive 2D frames using x, y tables of screen coordinates for small rotational values of up to ±40° from a head-on viewpoint. The machine performs larger angular rotations by reformatting the data so that either the +x- or +y-axis becomes the +z-axis. Segmentation of the components of the image is accomplished by specifying a discrete voxel value range to identify each organ in the image and then by assigning a different color to each range. The rendering indicates the relative depth of the parts of a given object by varying the color intensity according to the depth of the part, with the more distant parts being darker.

25 Data smoothing is the filtering of the rendered volume to reduce aliasing effects.

The colored-range method has several advantages in addition to supporting rapid image rendering. It allows different structures to be displayed simultaneously through the use of different colors, and it allows the relative size and position of structures to be visualized by overlaying successive slices. The voxel data need not be continuous in 3D space, thereby eliminating the need for data interpolation to form a continuous 3D volume. Additionally, the colored-range method displays accurate 3D images with just the use of simple logical and arithmetic functions in the processor.

There is little parallel operation in Farrell's machine. On the other hand, the machine generates images at very high speeds. Farrell's image overlay technique permits 3D images to be processed and presented in a matter of 10-15 sec. Farrell's machine attains this rendering speed by simplifying the computations involved in representing and rendering the volume. Volume representation computations are minimized by eliminating the preprocessing operations that derive 1D and 2D object representations and that interpolate voxel values to form a continuous 3D volume. The rendering computation workload is reduced by simplification of shading and hidden-surface removal operations.

4.2 Fuchs/Poulton Pixel-Planes Machines

At the opposite end of the parallelism scale from Farrell's Colored-Range Method are the Fuchs/Poulton Pixel-Planes machines and associated algorithms, described in Ellsworth et al. [1990], Fuchs et al. [1977, 1979, 1980, 1983, 1985, 1986, 1988a, 1988b, 1988c, 1989a, 1989c], Goldfeather [1986], Levoy [1989b], Pizer and Fuchs [1987], and Poulton et al. [1987]. This series of machines achieves rapid image-rendering performance by using massive parallelism to attack the pixel-coloring bottleneck. The latest machines in the series are Pixel-Planes 4 (Pxpl4) and Pixel-Planes 5 (Pxpl5). We focus the discussion on Pixel-Planes 5.

The goal of the Pixel-Planes 5 architecture is to increase the pixel display rate by performing select pixel-level graphics operations in parallel within a general-purpose graphics system. To achieve this goal, the system architecture relies upon two components: a general-purpose multiprocessor "front end" and a special-purpose "smart" frame buffer. The front end specifies on-screen objects in pixel-independent terms, and the "smart" frame buffer converts the pixel-independent description into a rendered image.

The Pixel-Planes 4 machine [Fuchs et al. 1985, 1986; Goldfeather 1986; Poulton et al. 1987] laid the foundation for the current Pixel-Planes 5 machine in its pioneering use of a smart frame buffer for image generation. The design goal for this machine was to achieve a rapid image-rendering ability by providing an inexpensive computational capability for each pixel in the frame buffer, hence the name smart frame buffer. Within the smart frame buffer are the logic-enhanced memory chips that provide the pixel-level image-rendering capability. The importance of the smart frame buffer lies in its capability for widening the pixel-writing/coloring bottleneck by processing each object in the scene simultaneously at all pixels on the screen.

Pixel-Planes 4 accomplishes object display in three steps. The first step is image primitive scan conversion. Scan conversion processing determines the pixels that lie on or inside a specific convex polygon described using a set of linear expressions.26 The second step accomplishes


visibility determination for the current primitive relative to primitives computed previously. Pixel-Planes 4 determines the visibility of each polygon at the pixel level by a comparison of z' values using data stored in the local pixel memory. The third step consists of the operations, such as antialiasing, contrast enhancement, transparency, texturing, and shading, required to render the image.

26 Each edge of a polygon is defined by two vertices, V1 = (x1, y1) and V2 = (x2, y2), which are ordered so the polygon lies to the left of the directed edge V1V2. The equation of the edge is given by Ax + By + C = 0, where A = y1 - y2, B = x2 - x1, and C = x1y2 - x2y1.
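Using the edge coefficients from footnote 26, the scan conversion test reduces to checking the sign of each edge expression; the sketch below evaluates the expressions serially for a single pixel, whereas Pixel-Planes evaluates them for all pixels in parallel. The function names and the counterclockwise vertex ordering are assumptions.

    def edge_coefficients(v1, v2):
        # (A, B, C) of the directed edge v1 -> v2: zero on the edge and
        # positive on the interior side (polygon to the left of the edge).
        (x1, y1), (x2, y2) = v1, v2
        return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

    def inside_convex_polygon(x, y, vertices):
        # A pixel lies on or inside the convex polygon if every edge
        # expression A*x + B*y + C is nonnegative.
        edges = [edge_coefficients(vertices[i], vertices[(i + 1) % len(vertices)])
                 for i in range(len(vertices))]
        return all(a * x + b * y + c >= 0 for a, b, c in edges)

    triangle = [(0, 0), (4, 0), (0, 4)]                     # counterclockwise
    print(inside_convex_polygon(1, 1, triangle))            # True
    print(inside_convex_polygon(5, 5, triangle))            # False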

The logic-enhanced memory chips perform the pixel-oriented tasks, such as pixel coloring and antialiasing, simultaneously at the individual pixel level. Since pixel-level operations must be performed on linear expressions,27 the graphics processor must construct a linear expression model for each object before sending the image description to the smart frame buffer. To develop linear equations for each object in the display list, the graphics processor performs display list traversal,28 viewing transformation, lighting, and clipping operations. These four operations yield a set of colored vertex descriptions29 for each object. The graphics processor uses the colored vertex descriptions to derive the linear equation coefficients sent to the frame buffer.

Pixel-Planes 5 builds upon the pixel-level processing foundation laid by Pxpl4 but extends it by using Multiple-Instruction, Multiple-Data (MIMD) processors for display list traversal and a token ring network for interprocessor communication. These changes, along with decoupling the pixel-processing elements of the smart frame buffer from the frame buffer memory and using a virtual pixel approach, provide a capability for rendering several primitives simultaneously. Pxpl5 evaluates quadratic, instead of linear, expressions. Figure 11 shows the architecture for the Pixel-Planes 5 machine.

27 The linear expressions are of the form Ax + By + C, where x and y are the image space coordinates of the pixel. Linear expressions can be used to define polygon, sphere, line, and point primitive objects.
28 A display list is a list of the objects within the object space, which is also the list of the objects to be displayed. Display list traversal is the process of examining each of the entries in the display list in the order specified by the list.
29 Vertex descriptions are sent when processing polygons; individual pixel values are sent when processing medical images.

Pixel-Planes 5 processes primitives one screen patch (128 x 128 pixels) at a time in each Renderer unit. It uses multiple Renderers to provide a multiple primitive processing capability and multiple Graphics Processors (GPs) to sort primitives into different screen patches (bins). The system dynamically assigns screen patches to Renderers during processing, with each GP sending the primitives in its patches to the appropriate Renderer in turn. The host workstation is responsible for user interaction support and image database editing. The 32 GP MIMD units are used for two purposes.

Each GP manages its assigned Pixel-PHIGS (a variant of PHIGS+30 designed for Pixel-Planes) data structures and sorts its set of primitive objects into bins that correspond to parts of the screen. Each of the 8-10 Renderers in the system, diagrammed in Figure 12, is an independent SIMD processing unit with local memory descended from the Pxpl4 chips. Each Renderer chip has 256 pixel-processing elements and 208 bits of memory per pixel. Each Renderer operates on an assigned 128 x 128 pixel patch of the screen. In concert, the Renderers can operate on several discrete patches of the screen simultaneously. The memory on the board serves as a backing store for holding pixel color values for the current patch of screen being processed by the Renderer. The Renderer memory holds color, z depth, and other associated pixel information.

30 PHIGS+ (Programmers' Hierarchical Interactive Graphics System) is an extension to the ANSI graphics committee's current PHIGS 3D graphics standard that supports lighting, shading, complex primitives, and primitive attributes. PHIGS tutorials are in Cohn [1986] and Morrissey [1990].

Figure 11. Pixel-Planes 5 architecture (based upon Fuchs 1989b).

The quadratic expression evaluator on the chip evaluates the quadratic expression Ax + By + C + Dx² + Exy + Fy² at each pixel using the A, B, C, D, E, and F coefficients broadcast from the GP. The 1280 x 1024 pixel frame buffer is double buffered to support the 24 frames/second screen update rate design goal for Pxpl5. A multichannel token ring network interconnects the individual units of the system. The network can transmit up to eight simultaneous messages at 20M words/second.

Image rendering commences after an application on the host workstation makes changes to the image database and sends the results of these changes to the GPs via the network. One GP, designated the Master GP, has the responsibility for assigning Renderers to portions of the screen and informing the GPs of the Renderer assignments. Using the changes sent from the host, each GP transforms the primitives assigned to it and then sorts them into the correct screen bins according to the virtual screen patches the primitives intersect.

Pixel-Planes 5 renders a screen patch by authorizing each GP to broadcast, in turn, the screen patch bin containing the primitives and instructions for processing them to the Renderer with the corresponding screen patch assignment. After a GP completes its broadcast to a Renderer, it notifies the next GP that it may begin its processing for the patch. The GP then waits for its next turn to broadcast. The final GP to broadcast its bin to the Renderer informs it that the current screen patch for the Renderer is complete. The Renderer then moves the pixel color values for that patch to the backing store and receives a new patch assignment from the Master GP. After all the Renderers finish rendering all their assigned screen patches, each Renderer transfers the stored results of the computations for its assigned patches to the frame buffer.


Figure 12. Pixel-Planes 5 Renderer (based upon Fuchs 1989a): Image Generation Controller, custom pixel-processing chips, and backing store (4K bits/pixel) on the Renderer board.

Pixel-Planes 5 can perform the same operations as Pixel-Planes 4, but its use of quadratic expression evaluators and its higher rendering speed provides additional capabilities. Pixel-Planes 5 performs 3D medical imaging by storing object space voxels within the backing store at each Renderer. Medical image rendering begins with classification and shading of the 3D array of voxels at the Renderers using the Gouraud shading model. Each Renderer retains the resulting color and opacity values in its backing store. To compute a rendition, the GPs trace parallel viewing rays into the 3D array from the viewer's position. As each ray progresses, the Renderers transmit requested voxel values to the GPs for trilinear interpolation and composition with the current pixel value in the requesting GP. After completing its portion of the rendition, each GP transmits its computed pixel values directly to the frame buffer for display. Pixel-Planes 5 can render between 1 and 10 frames/second, depending upon desired medical image quality.
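The composition step each GP performs along a ray can be pictured with a short sketch (Python, illustrative only; the names and the early-termination threshold are our assumptions, since the published descriptions do not specify the GP code at this level of detail):

    def composite_ray(samples):
        # Front-to-back compositing of (color, opacity) samples along one ray.
        # 'samples' stand for the trilinearly interpolated values returned by
        # the Renderers as the ray advances through the voxel array.
        acc_color, acc_alpha = 0.0, 0.0
        for color, alpha in samples:
            acc_color += (1.0 - acc_alpha) * alpha * color
            acc_alpha += (1.0 - acc_alpha) * alpha
            if acc_alpha > 0.99:      # ray is effectively opaque; stop early
                break
        return acc_color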

Pixel-Planes 5 is a hybrid architecture, having both Single-Instruction, Multiple-Data (SIMD) and MIMD components, with a predicted capability for rendering 256 x 256 x 256 voxel 3D medical images at the rate of 1-10 frames/second. As of this writing, Pixel-Planes 5 is undergoing initial testing and is not yet fully functional.

4.3 Kaufman’s Cube Architecture

Kaufman designed the Cube machine to permit real-time computation of volume and surface renderings of medical images and geometric objects from 3D primitives. He described the architecture in Bakalash and Kaufman [1989], Cohen et al. [1990], Kaufman [1987, 1988a, 1988b, 1988c, 1989], Kaufman and Bakalash [1985, 1988, 1989a, 1989b, 1990], and Kaufman and Shimony [1986]. The system development plan calls for the production of a system with a display resolution of 512 x 512 pixels and a 512 x 512 x 512 voxel capacity.


Figure 13. Kaufman's Cube machine architecture. Legend: FBP3, 3D frame buffer processor; GP3, 3D geometry processor; VP3, 3D viewing processor; FB, frame buffer; VP, video processor (adapted from Kaufman 1988b).

The currently operational hardware prototype supports rendering of a 16 x 16 x 16 voxel volume. The software for emulating the full-scale machine's functionality is fully operational. Figure 13 presents a diagram of the system.

The Cube machine provides a suite of 3D medical image rendering capabilities that support the generation of shaded orthographic projections of 3D voxel data. The architecture makes provision for colorizing the output and for selection of object(s) to be displayed translucently. The machine achieves real-time image display by using parallel processing at two levels. The Cube uses coarse-grain parallel processing to provide user interaction with the system while the machine computes renditions. To accelerate image rendition, the machine uses fine-grain parallelism to process an entire beam31 of voxels simultaneously instead

of examining the voxels along the beam one at a time. The user can alter color selection, material translucency, and

31A beam of voxels is a row, column, or diagonal axis of voxels within the scene.

volume slicing in real time, thereby allowing for rapid inspection of interior characteristics of the volume. The Cube performs shifting, scaling, and rotation operations by calculating the coordinates for a voxel based upon the calculated coordinate position of the closest neighbor voxel previously transformed.

The image processing system consists of a host and four major components: the 3D Frame Buffer Processor (FBP3), the Cubic Frame Buffer (CFB), the 3D Geometry Processor (GP3), and the 3D Viewing Processor (VP3), which includes the voxel multiple write bus.32 The host manages the user interface. The 3D Frame Buffer Processor loads the CFB with 3D voxel-based data representing icons,33 objects, and medical images and manipulates the data within the CFB to perform arbitrary projections of the data therein. The 3D Geometry Processor is responsible for scan conversion of 3D geometric models into voxel representa-

32Based upon the design reported in Gemballa and Lindner [1982] for a multiple-write bus. 33Example icons are cursors, synthetic needles, and scalpels.


tions for storage within the CFB.34 The CFB is a large, 3D, cubic memory with sufficient capacity to hold a 512 x 512 x 512 voxel representation of a volume. Each voxel is represented by 8 bits. To facilitate rapid retrieval and storage of an entire beam of voxels, the CFB uses a skewed-memory scheme. The CFB skewed-memory organization conceptually regards storage as a set of n memory modules instead of as a linear array. All the voxel values in memory module k satisfy the relation k = (x + y + z) mod n, where x, y, and z are the object space coordinates of a voxel. The x and y coordinates of a voxel determine the storage location within its assigned module. This memory organization guarantees that for any desired scene orientation no more than one voxel is fetched from each module.
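A minimal sketch of this skewed addressing follows (Python for illustration; the function name and the small n used in the example are our own, the full-scale CFB uses n = 512):

    def cfb_module(x, y, z, n=512):
        # Memory module that holds voxel (x, y, z) in the skewed CFB layout.
        return (x + y + z) % n

    # A beam along any axis touches every module exactly once, so all n voxels
    # of the beam can be fetched in a single memory cycle.  For example, the
    # x-beam at y = 3, z = 7 in a small n = 8 system covers all eight modules:
    modules = {cfb_module(x, 3, 7, n=8) for x in range(8)}
    assert modules == set(range(8))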

When forming an image, the 3D Viewing Processor retrieves beams of voxels from the CFB and places them in the voxel multiple write bus. To meet the requirement for real-time image generation, the 3D Viewing Processor simultaneously processes all the voxels in a beam using the voxel multiple write bus. The voxel selection algorithm used in the voxel multiple write bus enables it to select, from a beam of n voxels, the visible voxel closest to the observer in log(n) time. The bus implements transparency and clipping planes by disabling the processors in the clipped region or containing transparent voxel values before performing visible voxel determination. The voxel multiple write bus has n processors for a scene n voxels deep, or 512 processors for the CFB example cited previously. To process a 512 x 512 x 512 voxel volume, the 3D Viewing Processor makes 512² sequential accesses to the CFB. After determining the visible voxel, the 3D Viewing Processor shades the voxel and sends the value to the frame buffer.

34For a description of the geometric scan conversion process see Kaufman [1987, 1988a] and Kaufman and Shimony [1986].

The voxel selection algorithm uses the location of each voxel along the beam and its density value to compute the visible voxel in the beam. To avoid calculating the location of each voxel on the beam, each processor in the voxel multiple write bus has an index value. The selection algorithm regards the index value for each processor as the depth coordinate of the voxel it contains. The lowest index value belongs to the processor at the back of the beam. When processing a beam of voxels, the location of the clipping plane and any voxel value(s) considered to be transparent are broadcast to all processors on the voxel multiple write bus. If a processor contains a transparent voxel value or if it lies in the clipped region, the processor disables itself during the current round of beam processing. Each of the remaining processors then places its index value on the Voxel Depth Bus bit by bit, starting with the most significant bit. In each round of bit processing, all active processors simultaneously place their next bit value on the bus. The bus does an "or" on the values and retains the largest value as the current Voxel Depth Bus value. Then each active processor examines the bit value on the bus, and if it is not equal to the processor's own bit value, the processor disables itself for the remainder of the processing of the current beam. One processor eventually remains, and the 3D Viewing Processor uses that voxel value and depth for colorization and shading.
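A software model of this bit-serial selection is sketched below (Python, illustrative only; the function name, argument conventions, and clipping representation are our assumptions, not Kaufman's hardware interface):

    def select_visible_voxel(beam, transparent, clip_from=None):
        # 'beam' is a back-to-front list of voxel values; a processor's index
        # serves as the depth coordinate, so the largest surviving index is
        # the visible voxel closest to the observer.
        active = [i for i, v in enumerate(beam)
                  if v not in transparent and (clip_from is None or i < clip_from)]
        if not active:
            return None
        nbits = max(active).bit_length()
        for bit in range(nbits - 1, -1, -1):
            bus = max((i >> bit) & 1 for i in active)       # wired-OR of one bit
            active = [i for i in active if ((i >> bit) & 1) == bus]
        winner = active[0]
        return winner, beam[winner]

    # Example: voxel value 0 is transparent; the nearest opaque voxel wins.
    print(select_visible_voxel([5, 0, 7, 0, 3, 0], transparent={0}))  # (4, 3)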

The key elements to the ability of the Cube architecture to form an orthographic projection rapidly are the skewed-memory configuration in the CFB and the voxel multiple write bus design. The CFB memory organization supports simultaneous access to a beam of voxels. The design of the voxel multiple write bus enables rapid determination of the voxel in the beam closest to the observer. A recent modification to CFB addressing allows for conflict-free retrieval of voxels from the CFB when performing nonorthographic projections. To perform arbitrary parallel and perspective projections, three additional 2D buffers were


added to the machine. This new architecture performs parallel and perspective projection by retrieving a plane of projection rays from the CFB and then by using the additional buffers to align the plane for conflict-free projection-ray retrieval. Every projection ray within each plane is then analyzed by the voxel multiple write bus to determine the projection of that ray. Additional information on this enhancement appears in Kaufman and Bakalash [1990].

The Cube machine exploits parallelism at the beam level, where it processes full beams simultaneously using the CFB skewed-memory organization and the simple logic in the voxel multiple write bus. The estimated35 Cube machine performance varies depending on the type of operation it must perform. A 512 x 512 x 512 voxel volume can be rendered to a 512 x 512 pixel image using orthographic projection, shading, translucency, and hidden-surface removal in 0.062 sec. Use of the modified CFB access scheme results in a 0.16 sec estimated rendering time for an arbitrary projection of a 512³ voxel volume.

4.4 The Mayo Clinic True Three-Dimensional Machine and ANALYZE

The Mayo Clinic machine, Harris et al. [1979, 1986], Hefferman and Robb [1985a, 1985b], Robb [1985, 1987], Robb and Barillot [1988, 1989], Robb and Hanson [1990], and Robb et al. [1986], is the only machine described in this survey that provides a true 3D display. The system performs image-rendering operations in a workstation without using parallel processing. The design objectives of the system are to provide an environment suitable for flexible visualization and analysis of 3D data volumes and to provide a true 3D display capability. The system achieves the first objective by us-

35Estimated performance figures are based upon printed circuit board technology; higher speeds are anticipated with the VLSI implementation currently under construction.

ing the cooperating processes in the ANALYZE36 system. The True 3D machine architecture achieves the second objective by displaying up to 500,000 voxels/sec as a continuous true 3D image. The central concept of the ANALYZE software system is that 3D image analysis is an editing task in that proper formatting (editing) of the scene data will highlight the important 3D relationships. In addition to 3D operations, the ANALYZE system supports a wide variety of interactive 2D scene editing, 2D display, and image analysis options. The system displays true 3D images using a varifocal mirror assembly consisting of a mirror, a loudspeaker, and a CRT.37 Figure 14, based upon Robb [1985], presents a diagram of the True 3D display system.

The True 3D machine's hardware and data format were designed to permit rapid image display. The machine accomplishes true 3D display by storing the rendered data slices in high-speed memory from where they are output at a real-time rate to a CRT that projects onto a varifocal mirror. The key to the high-speed memory to CRT data transfer rate is the use of a custom video pipeline processor to perform intensity transformations on the voxel data before display. The four intensity transformation operations are displaying a voxel value at maximum CRT intensity, displaying a voxel value at minimum CRT intensity, passing the value through unchanged, and modifying the value using table look-up. A 2-bit field appended to each voxel value controls the operation of the pipeline. As each voxel value enters the pipeline, the pipeline processor uses the 2-bit field value to determine the intensity transformation operation to apply.
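A minimal sketch of that per-voxel dispatch appears below (Python, illustrative only; the assignment of the four operations to particular 2-bit codes is our assumption, since the published description does not give the encoding):

    MAX_INTENSITY, MIN_INTENSITY = 255, 0

    def transform(voxel_value, control, lut):
        # Apply one of the four intensity transformations selected by the
        # 2-bit control field appended to the voxel value.
        if control == 0:
            return MAX_INTENSITY          # force maximum CRT intensity
        if control == 1:
            return MIN_INTENSITY          # force minimum CRT intensity
        if control == 2:
            return voxel_value            # pass the value through unchanged
        return lut[voxel_value]           # control == 3: table look-up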

36ANALYZE for UNIX workstations is available from CEMAX, Inc., 46750 Fremont Blvd., Suite 207, Fremont, California 94538. UNIX is a trademark of AT&T Bell Laboratories. 37The loudspeaker is used to vibrate the mirror synchronously with the CRT display at 30 frames/sec.


Figure 14. True 3D machine architecture: mass storage, high-speed interface and memory, 2D displays, and the varifocal mirror assembly (CRT display, mirror, and speaker).

Once the pipeline finishes processing a voxel value, it forwards the value to the CRT for immediate display upon the varifocal mirror.

The ANALYZE38,39 software system produces the input image frames for the True 3D system. ANALYZE is a set of communicating processes running within the workstation, with each process designed to perform a different class of operations. In addition to forming true 3D images, ANALYZE processes perform user interface management, interprocess communication, 3D object-editing tasks, and surface and volume rendering. To facilitate rapid processing of the image, ANALYZE maintains the entire image within a shared block of memory. Robb [Robb and Barillot 1988, 1989; Robb and Hanson 1990; Robb et al. 1986] discusses the operation of all the ANALYZE modules.

38This software package runs on standard UNIX workstations. 39Another software system that uses multiple, independent, cooperating modules to visualize multidimensional medical image data is described in Raya [1990].

We summarize their operation here. Figure 15 depicts the hierarchical relationships between the ANALYZE processes.

Image processing begins when the host workstation uses the TAPE or DISK processes40 to bring an image volume into the machine and convert the data into the format required by the machine. The DISPLAY processes perform manipulation and multiformat display of selected 2D sections within the 3D volume as well as rendering of 3D transmission and reflection displays of the entire data set. DISPLAY can render projections of the volume only along one of the three major axes. The 2D section display process uses the 3D data set to produce a series of 2D slices that lie parallel to one of the three coordinate axes. The 2D section display process supports windowing, thresholding, smoothing, boundary detection, and rotation to produce the desired 2D views.

40Combined into the MOVE processes in Robb and Barillot [1989] and Robb and Hanson [1990].


Figure 15. ANALYZE processes in the Mayo Clinic True 3D machine (from Robb et al. [1986]).

DISPLAY forms 3D transmission and reflection displays by using ray-tracing techniques to render parallel projections of the 3D data set. A reflection display, which provides an image closely resembling a photograph, can be either a 3D shaded-surface display or a transparency display. DISPLAY constructs shaded-surface displays by terminating the ray as soon as the ray either intersects a voxel lying within a specified threshold or exits image space. This is essentially a technique for surface rendering by thresholding. DISPLAY computes a transmission display from either the brightest voxel value along each ray (max-min display) or the weighted average of the voxel values along each ray that lie within a specified threshold (radiographic display). DISPLAY generates radiographic displays by applying an α-channel-based image composition technique to a single transparent surface and an underlying opaque surface within the volume. DISPLAY uses thresholding to exclude other surfaces from the final image. DISPLAY determines the value returned by a ray by weighting the total lighting intensity value along the ray by the composite α-channel value along the ray.41 DISPLAY determines the lighting intensity value along a ray by calculating the reflection at the transparent surface, the light transmitted through the surface, and the reflection at the opaque surface. DISPLAY bases its α-channel calculations upon three factors. These are the orientation of the transparent surface relative to the light source, a weighting value for α based upon the orientation, and an assigned maximum and minimum α-channel range for the transparent surface.
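The two transmission-display ray summaries can be pictured with a short sketch (Python, illustrative only; a simple unweighted mean stands in for the weighted average, whose exact weights are not given here):

    def maxmin_display(ray_voxels):
        # Transmission display: brightest voxel value encountered along the ray.
        return max(ray_voxels)

    def radiographic_display(ray_voxels, lo, hi):
        # Transmission display: average of the voxel values along the ray that
        # fall within the threshold window [lo, hi].
        selected = [v for v in ray_voxels if lo <= v <= hi]
        return sum(selected) / len(selected) if selected else 0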

The OBLIQUE42 process generates and displays arbitrary oblique views of the

41The formula, from Robb and Barillot [1988], is I = Itsp*α + (1 - α)*Iopq, where α = αmin + (αmax - αmin)*cos^p θ. αmin and αmax define the light transmission coefficient range, θ is the angle between the surface normal and the light source, and p is the distribution coefficient for the α-channel range assigned according to cos θ; p normally lies in the range 1 ≤ p ≤ 4. 42Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].


3D volume on the 2D CRT. OBLIQUE performs many of the same 2D-slice display functions as DISPLAY but can render arbitrary oblique projections of the volume. The process uses nearest-neighbor interpolation to facilitate rapid rendering of the 2D display. To help orient the operator, OBLIQUE presents a cube-shaped outline of the imaged volume on the CRT, with the current oblique cutting plane position displayed within the cube. As the operator changes the position of the cutting plane, the system responds with a 2D display of the desired section, an update to the cutting plane position within the cube, and a display of the intersection of the oblique plane with the sagittal, transverse, and coronal 2D planes.

The MIRAGE process is the heart of the True 3D display system. This process consists of several modules that format the data, perform arbitrary rotations of the formatted data, window out selected voxel values, specify and display oblique planes through the data, and accomplish dissolution/dissection.43 MIRAGE displays the results of these computations upon the varifocal mirror as a true 3D image. Data formatting is a preprocessing operation that reduces the 128 input image planes down to 27 display frames and appends a 2-bit field to the voxel values. MIRAGE uses these 27 rendered frames to generate the 3D image. The rendered frames can also be interactively edited by other modules within MIRAGE. MIRAGE uses the 2-bit field to perform dissolution, dissection, and windowing operations. MIRAGE performs digital dissection by setting the voxels in the region to be removed to black. MIRAGE accomplishes dissolution similarly, except that it gradually fades voxels to black over the course of several passes instead of setting them to black in one pass. MIRAGE displays arbitrary oblique planes using one of two operator-selected formats.

43Dissolution appears as tissue dissolving, or melting, away. Dissection appears more as a peeling away of overlying layers.

An oblique plane can be displayed within the volume either by selectively "dimming" the voxels around the desired plane or by superimposing a sparse, bright cutting plane over the data. MIRAGE forms the displays by appropriately reducing or maximizing the intensity of the affected voxels.

After MIRAGE completes its rendering operations, it places the resulting data values into a 27-frame stack within the high-speed buffer. The high-speed buffer dumps the stack to the system's CRT in back-to-front order synchronously with the vibration of the varifocal mirror. The observer sees the rendering in the varifocal mirror. The images appear to be continuous in all three dimensions, and the operator can look around foreground structures by moving his or her head.

The EDIT44 process provides interactive modification of the data in memory by thresholding and object tracing. The MANIP (MANIPULATE in Robb and Barillot [1989] and Robb and Hanson [1990]) processes support addition, subtraction, multiplication, and division of the voxel values in memory by scalar values or other data sets. These processes can enhance objects in addition to interpolating, scaling, partitioning, and masking out all or portions of the volume. MANIP also provides the capability for image data set normalization, selective enhancement of objects within the image data, edge contrast enhancement, and "image algebra" functions. After the MANIP, EDIT, MIRAGE, and/or DISPLAY processes isolate the region(s) of interest, the BIOPSY45 processes can be used to gather values from specific points and areas within the region. BIOPSY performs two functions. The first function is the display of a set of serial 2D slices extracted from the volume at the orientation set by its server processes. The display allows the operator to

44Included in the MANIPULATE processes in Robb and Barillot [1989] and Robb and Hanson [1990]. 45Included in the MEASURE processes in Robb and Barillot [1989] and Robb and Hanson [1990].


specify subregions to be examined interactively. Within each subregion, BIOPSY's second function is to perform point sampling, line sampling, and area sampling with histogram, mean, and standard deviation information provided for the indicated subregion.

The SURFACE46 process is responsible for identifying and extracting specified surfaces from the volume. The process supports interactive isolation of the surface to be extracted, formation of a binary volume by thresholding using an operator-specified threshold window, and extraction of the binary surface. The process stores the extracted surface as a list of voxel faces and can display the surface on a CRT as a set of contours or as a shaded 3D volume. The PROJECT47 process creates user-specified projection images of the volume by altering the view angle, dissolution range, and dissection parameters.48 The MOVIE49 process displays the projection sequences. MOVIE displays provide the operator with additional information concerning the composition of the volume by exploiting motion parallax, tissue dissolves, dissection effects, and combinations of these effects.

There is no parallelism in the Mayo Clinic True 3D machine. For true 3D images on the varifocal mirror, the system can display up to 27 frames every second (this rate is required to form the true 3D image) at a resolution of 128 x 128 pixels. The True 3D machine is fully operational at the Mayo Clinic, as is the ANALYZE software system. ANALYZE performance depends upon the workstation that hosts the software.

46Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990]. 47Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990]. 48Dissolution range sets the number of projections performed and the amount of dissolution (fading) to be applied over the sequence. The dissection parameters specify the starting and ending coordinate values of the planes to be sliced away and the number of projections to be performed. 49Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].

4.5 Medical Image Processing Group Machines

The Medical Image Processing Group (MIPG) has developed a series of machines [Artzy 1979; Artzy et al. 1979, 1981; Chen et al. 1984a, 1984b, 1985; Edholm 1986; Frieder et al. 1985a, 1985b; Herman 1985, 1986; Herman and Coin 1980; Herman and Liu 1978, 1979; Herman and Webster 1983; Herman et al. 1982; Reynolds 1983b] that produce surface renderings of medical images. The machines rely upon shading to provide depth cues. We describe the operation of 3D98, the last version of the machines, in this section. The MIPG series of machines is unique among those discussed in this survey in that the primary design objectives are low system cost with acceptable image quality rather than rapid image formation. The current MIPG machine [Raya et al. 1990; Udupa et al. 1990] adopted these same design objectives, but this machine provides a 3D medical imaging capability using a PC-based system. The MIPG machines achieve their primary design objective by reducing the amount of data to be manipulated.50 To reduce the data volume, the MIPG researchers developed many of the surface-tracking and contour extraction techniques described elsewhere in this survey. The MIPG machines use the cuberille51 data model, a derivative of the voxel model.

3D98 uses the minicomputer that controls the x-ray CT scanner to perform all surface-rendering calculations. The 3D98 processing strategy is to reduce the computational burden by reducing the rendered data volume. 3D98 implements this strategy in two stages. First, 3D98 isolates the object of interest early in the image formation process by applying

50Another system for performing 3D medical image rendering using a CT scanner processor is outlined in Dekel [1987]. 51Cuberille: the division of a 3D space into cubes, much as quadrille is the division of a 2D space into squares, using three mutually perpendicular sets of uniformly spaced parallel planes.


a segmentation operator to the data volume. The resulting isolated object is represented in a 3D binary array. The second portion of the strategy further reduces the computational burden by using surface-tracking operations. The output from the surface tracker is a list of cube faces that lie upon the surface of the object. When compared to the volume of data output by the CT scanner, the two-step preprocessing strategy greatly reduces the amount of data to be rendered and shaded. The segmentation and surface-tracking functions, however, inevitably discard scene information as they operate. Selection of a new object or a change to the object of interest requires reprocessing the entire volume to detect and render the new desired surface.

The 3D98 image-rendering procedure has three steps. The first step, interpolation and segmentation, produces a binary array of cubic voxels from a set of parallelepiped-voxel-based slices. In this step, 3D98 interpolates the voxel values produced by the CT scanner and extracts the desired object from the volume. A binary array holds the output of this step. The entries in the array that contain ones correspond to cubes within the object of interest. The entries in the array that contain zeros correspond to cubes in the scene background. The second step is surface tracking. The input to this step is the binary array produced in step one; the output is a list of faces of cubes. Each face on the list lies on the border of the object and so separates a voxel labeled 0 from a voxel labeled 1. The faces on the list form a closed connected surface. The surface depicts the object that contains the cube face identified by the user as the seed face for the surface-tracking process. Further details can be found in Herman and Webster [1983] and Frieder et al. [1985b]. The third and final image processing step is shaded-surface display. The input to this step is the surface defined in the list output from step two. The output is a digital image showing the appearance of the medical object when viewed from a user-specified direction.
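The first two steps can be sketched as follows (Python, illustrative only; the real 3D98 surface tracker starts from a user-selected seed face and keeps a single connected surface, which this simplified version does not do):

    import numpy as np

    OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def segment(volume, lo, hi):
        # Step 1 (simplified): threshold the interpolated CT values into a
        # binary array of cubic voxels (1 = object of interest, 0 = background).
        return ((volume >= lo) & (volume <= hi)).astype(np.uint8)

    def boundary_faces(binary):
        # Step 2 (simplified): list every cube face that separates an object
        # voxel from a background voxel (or from outside the array).
        faces = []
        nx, ny, nz = binary.shape
        for x, y, z in np.argwhere(binary == 1):
            for dx, dy, dz in OFFSETS:
                u, v, w = x + dx, y + dy, z + dz
                outside = not (0 <= u < nx and 0 <= v < ny and 0 <= w < nz)
                if outside or binary[u, v, w] == 0:
                    faces.append(((int(x), int(y), int(z)), (dx, dy, dz)))
        return faces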

The 3D98 machine produces images slowly. A single 256 x 256 pixel image can take 1-3 minutes to render and display. There is no parallelism in the machine. There is one CPU, that found in the CT scanner, and it performs all the rendering calculations. The MIPG 3D98 machine is fully operational.

4.6 Pixar/Vicom Image Computer and Pixar/Vicom II

The Pixar/Vicom Image computer52 and the Pixar/Vicom II53 are commercially available, general-purpose graphics computers. The Pixar/Vicom machines require a host computer for nonimage computing functions such as network access, the program development environment for the Chap (Channel Processor) C compiler and Chap assembler, and the user interface. We drew the following material from Carpenter [1984], Catmull [1984], Cook [1984], Cook et al. [1984, 1987], Drebin et al. [1988], Levinthal and Porter [1984], Ney [1990b], PIXAR [1988a, 1988b, 1988c, 1988d, 1988e], Porter and Duff [1984], Robertson [1986], and Springer [1986]. We limit the following description and analysis of these machines to the image processing algorithms supplied with the Pixar/Vicom machines. Since the algorithms are software encoded, however, the user can modify them.

The unique processing requirements of the motion picture special effects and animation image composition environment drove the Pixar/Vicom Image computer and Pixar/Vicom II software and hardware designs. The techniques used in both Pixar/Vicom machines for rendering, antialiasing, image compositing, rotation, scaling, and shading are described in Carpenter [1984], Catmull [1984], Cook [1984], and Cook et al. [1984, 1987]. The concepts developed in these papers form the basis of the Pixar/Vicom approach to 3D medical image rendering.

52Pixar is a trademark of Pixar, Inc. 53Vicom is a trademark of Vicom, Inc.


The needs of the image-compositing process strongly influenced the data structure used in both machines. The data structure, called a pixel, consists of three 12-bit color channels (red, green, and blue) and the 12-bit transparency (alpha) channel required for compositing an image. This four-channel format also supports the display of voxel data. The red, green, and blue channels hold object space coordinate values, and the alpha channel54 holds the voxel value. The designers chose a four-channel data structure because it permits simultaneous operation upon the four components of each pixel by the four-component parallel processing unit, the Chap. The Chap is the basic processing unit of the Pixar/Vicom computer systems.

The two Pixar/Vicom machines are general-purpose graphics computers that use a limited amount of special-purpose hardware. The hardware provides support for, but not implementation of, computer graphics algorithms as well as 3D medical imaging volume and surface-rendering algorithms. The machines' design exploits SIMD parallel processing at the pixel level to obtain high-speed image generation and allows for system expandability by using a modular machine design. Figures 16 and 17 present the overall design of the two machines.

The basic Pixar/Vicom Image computer system has one Chap, one video board, one memory controller, and three 8MB (24MB total) memory boards, with the option of equipping the system with three 32MB memory boards for a total of 96MB of memory. The system can be expanded to three Chaps. With three Chaps, up to six 32MB memory boards can be installed, giving a maximum of 192MB of memory for the system.

The basic Pixar/Vicom II system consists of one Chap and one Frame Store

sists of one Chap and one Frame Store

54The α-channel is a data structure used in 3D medical image compositing to control the blending of voxel values, as described in Porter and Duff [1984].

Processor (FSP)55 with 12MB of image memory. The Pixar/Vicom II can be expanded to two Chaps and three memory boards. The memory boards can be any combination of FSP memory, Frame Store Memory (FSM), and Off-screen Memory (OSM)56 boards. Both the Pixar/Vicom Image computer and Pixar/Vicom II systems can support the display of image frames with 48-bit color and 1280 x 1024 pixel resolution at up to 60 frames/sec. The Pixar/Vicom II also supports monochrome image display at 2560 x 2048 pixel resolution.

The Pixar/Vicom machines use the Sysbus (System bus) to connect to the host computer's57 I/O bus. The Sysbus transmits address and data packets between the host and the Pixar/Vicom machine. The Sysbus gives the host access to the control unit, the address generator, and memory. The four Chap ALU processors are tightly coupled to the Scratchpad image memory. Each Chap communicates with peripherals or other Chaps over the Yapbus (Yet Another Pixar Bus). Chaps communicate with picture memory using the Pbus (Processor Access Bus). The Pbus moves bursts of 16 or 32 pixels between the Chap and picture memory. The Chap is

55The FSP board contains a video display processor that generates the analog raster image and memory control circuitry that allows Chap and host memory access in parallel with video display operations. The memory appears as a linear array of 32 x 32 four-component pixels called tiles. 56The FSM on the FSP contains 12MB of memory, and the OSM board contains 48MB of memory. Neither the FSM nor the OSM contains display hardware. The memory model used in the OSM is identical to that in the FSM, the difference in the boards being that an image stored in an OSM must be moved to a FSP board for display, whereas an image stored in an FSM is moved within the same board to an FSP for display. 57The host can be a Sun 3, Sun 4, Silicon Graphics IRIS 3100, Silicon Graphics 4D, or Digital Equipment Corp. MicroVAX II running Ultrix or VMS. Sun is a trademark of Sun, Inc. Silicon Graphics is a trademark of Silicon Graphics, Inc. IRIS is a trademark of Silicon Graphics, Inc. MicroVAX is a trademark of DEC, Inc. Ultrix is a trademark of DEC, Inc.


Figure 16. Pixar/Vicom Image computer block diagram (from PIXAR 1988c).

Figure 17. Pixar/Vicom II block diagram (from PIXAR 1988c).

responsible for communicating with peripherals and other computers, receiving instructions from the host, and performing image processing operations. The memory controller handles scheduling of data transfers between video memory and the Chaps. The Scratchpad memory can store up to 16K pixels (16 1K scan lines) of data for use by the Chap processor. There are four segments within the Scratchpad memory, with each segment used to store one channel of the pixel word.

The heart of the rendering system is the Chap, a four-ALU, one-multiplier machine controlled by an Instruction Control Unit (ICU). The Chap operates as a four-operand vector pipeline with a peak rate of 40 MIPS (10 MIPS/ALU). The Chap can process two different


Figure 18. Chap programmer's model (from PIXAR 1988c): data paths among the Pbus, Mbus, and Sbus.

data types: 4-channel, 12-bit/channel integer (the pixel format), and 12-bit single-channel integer. Figure 18 contains a model of the data paths available between Chap components and the remainder of the machine.

A crossbar switch connects the four ALU elements and the multiplier in the Chap with the Scratchpad memory. The switch allows the multiplier and ALUs to operate in parallel, with the Mbuses used to load data into the multiplier and the Abuses used to load data into the ALUs. The ALUs and multiplier use the Sbus (Scalar bus) for reading and writing single pixel channels. The ICU uses the Sbus for distributing instructions to the ALUs, multiplier, and buses. The ICU also calculates Scratchpad memory addresses. The address space for ICU program memory consists of 64K words of 96 bits each and is completely separate from Scratchpad memory.

When performing 3D medical imaging, the machine operates upon volume data sets comprised of voxels organized into slices. At the user's discretion, each voxel can be classified by its coordinates, color, opacity, and/or refractive index during the course of processing the data volume. Processing begins by classifying the voxel-based medical image data into material percentage volumes composed of voxels represented by the pixel data structure. The number of material percentage volumes formed depends upon the number of materials the user wants to visualize. One material percentage volume must be formed for each material. The Pixar/Vicom system performs medical image voxel classification using either user-supplied criteria or a probabilistic classification scheme. The Pixar/Vicom machines represent each voxel in a material percentage volume using the pixel data structure.


In a material percentage volume, the amount of a given material (such as bone, fat, or air) present in the original data voxel determines the corresponding pixel's α-channel value. The rendering system generates the composite volume for a given set of appearance conditions by forming other pixel-based data structure representations of the original space, then composing these representations into a final image.58

The Pixar/Vicom system performs volume compositing by combining the α-channel values for the material percentage volumes with the α-channel values stored in other voxel-based volume representations derived from the original voxel data. These representations are the matte volume, the color volume, the opacity volume, the density volume, and the shaded color volume. The matte, color, opacity, density, and shading effects desired in the final image determine the α-channel values in these volumes. The rendering software combines the α-channel values from each volume type to render a single composite view of the original image space. Matte volumes provide the capability for passing an arbitrary cutting plane or shape through the volume or for highlighting specific material properties in a region. The color and opacity volumes are themselves composite volumes formed using material percentage volumes and the color and opacity values assigned to the type of material in each volume. A two-step procedure computes the α-channel value assigned to each pixel data structure in the color and opacity composite volumes. First, the rendering system forms product volumes by calculating the product of the α-channel value for each voxel in a

58Levoy describes a different, ray tracing-based approach to 3D medical image volume rendering in Levoy [1988a, 1988b, 1989a, 1989b, 1990a, 1990b], Levoy et al. [1990], and Pizer et al. [1989a, 1989b]. Levoy's technique performs material classification on a voxel-by-voxel basis as each ray progresses through the volume and relies upon the use of an octree and adaptive ray termination to rapidly render 3D medical images.

material percentage volume and the color/opacity value assigned to that material. One product volume must be formed for each material percentage volume. Then, for each composite pixel data structure in the desired color or opacity volume, the rendering system sums the corresponding α-channel values of all the product volumes to compute the composite pixel data structure's α-channel value.
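A minimal sketch of this two-step product-and-sum compositing follows (Python, illustrative only; the material names, assigned values, and array shapes in the example are hypothetical):

    import numpy as np

    def composite_channel(percentage_volumes, material_values):
        # Form one composite volume (e.g. the color or opacity volume) by
        # summing, voxel by voxel, the product of each material percentage
        # volume and the value assigned to that material.
        first = next(iter(percentage_volumes.values()))
        composite = np.zeros_like(first, dtype=float)
        for material, pct in percentage_volumes.items():
            composite += pct * material_values[material]
        return composite

    # Example with two hypothetical materials and assigned opacities.
    vols = {"bone": np.random.rand(8, 8, 8), "fat": np.random.rand(8, 8, 8)}
    opacity_volume = composite_channel(vols, {"bone": 1.0, "fat": 0.2})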

The rendering system extracts boundaries between materials by forming a density volume. The density volume is a composite volume formed from the sum of the products of the pixel data structure's α-channel voxel values in each material percentage volume and the corresponding assigned material density. The largest density gradient occurs where rapid transitions between materials with different densities occur. The system calculates the gradient vector between each voxel and its neighbors and stores the result in two separate volumes: the surface strength volume and the surface normal volume. The surface strength volume contains the magnitude of the density gradient. The surface normal volume contains the direction of the gradient. The rendering system uses the surface strength volume to estimate the amount of surface present in each voxel. The renderer uses the surface normal volume in shading calculations.
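A sketch of how the two gradient volumes might be computed is shown below (Python with a central-difference gradient; the exact neighborhood the Pixar/Vicom software uses is not specified here, so this is only illustrative):

    import numpy as np

    def surface_volumes(density):
        # Gradient of the density volume: the magnitude becomes the surface
        # strength volume, the normalized direction the surface normal volume.
        gx, gy, gz = np.gradient(density.astype(float))
        strength = np.sqrt(gx**2 + gy**2 + gz**2)
        normals = np.stack([gx, gy, gz], axis=-1)
        normals /= np.maximum(strength[..., None], 1e-6)   # avoid divide by zero
        return strength, normals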

The rendering system computes the shaded color volume by compositing the surface normal volume, the surface strength volume, the color volume, the given light position(s) and color(s), and the viewer position using a surface reflectance function. The system also takes surface scattering and emission into account when computing shaded-color volume values. The renderer assumes that the amount of light emitted from a voxel is proportional to the amount of luminous material in the voxel. The system determines the amount of luminous material in a voxel using the material percentage volumes.

Three-dimensional medical image rendering begins by forming the shaded-color volume.


Then, the rendering system transforms the shaded-color volume according to the user inputs; finally, it resamples the shaded-color volume for display. All of the above-mentioned volume types need not be formed for every image. The rendering system determines the volumes to compute based upon the type of data to be viewed and the representation desired. The number of volumes computed determines the total time required to render the final image.

The Pixar/Vicom machines perform parallel operations at the pixel data structure level. The number of Chaps in the machine and the type of processing performed determine the amount of parallelism realized. Since there can be no more than 3 Chaps in a machine, no more than 12 pixel data structure level calculations can be performed simultaneously. The high pixel data structure throughput of the Pixar/Vicom machines comes from their use of algorithms designed to use computationally inexpensive calculations at the pixel data structure level and from specially designed processors that can quickly process pixel data structures. The time required to form a 3D medical image from a 256 x 256 x 256 voxel volume can vary from a few minutes to an hour. The amount of time required depends upon the number of slices in the volume and the number of different volumes required to be formed to render the final image. Vicom currently markets the Pixar/Vicom machines.

4.7 Reynolds and Goldwasser's Voxel Processor Architecture

Reynolds and Goldwasser describe the Voxel Processor in Goldwasser [1984a, 1984b, 1985, 1986], Goldwasser and Reynolds [1983, 1987], Goldwasser et al. [1985, 1986, 1988a, 1988b, 1989], and Reynolds [1983a, 1985]. The Voxel Processor machine provides near real-time 2D shaded-surface display while supporting volume matting, threshold window specification, geometric transformation, and surgical-procedure simulation

operations on a 3D volume. The Voxel Processor is a special-purpose, distributed-memory machine whose only application is the pipelined processing of voxel-based 3D medical images.59 The machine processes discrete, voxel-based subcubes of object space in parallel. Each subcube is a 64 x 64 x 64 voxel volume formed by an octantwise recursive subdivision of object space. The machine's design allocates one processor to each subcube, thereby allowing all subcubes to be rendered simultaneously. Successive stages of their image-rendering pipeline merge the subcube-derived 2D surface renderings in back-to-front order to form a single image.

The design objective for the Voxel Processor is to perform real-time 3D medical image rendering. The processing strategy adopted to achieve this goal has two components. One component is installation of the image-merging and small image-generation algorithms in hardware instead of software. This decision sacrifices implementation flexibility for processing speed. The other component is the use of medium-grain parallelism within the image-rendering pipeline.

There are seven components to the image processing pipeline. These components are the host computer, the object access unit, the object memory system (64 modules, each storing a 64 x 64 x 64 voxel subcube), the processing elements (PEs), the intermediate processors (IPs), the output processor (OP), and the postprocessor. Figure 19 presents a diagram of the machine.

The host computer handles object data acquisition, database management, and complex object manipulation. The host is also responsible for generating two Sequence Control Tables (SCTs). The processors use the SCTs to control the back-to-front voxel readout sequence as described in Reynolds [1985]. The object access unit supports object database

59A commercial machine incorporating the parallel primitives introduced in the Voxel Processor architecture is described in Goldwasser [1988a].


Figure 19. Voxel Processor architecture (based upon Goldwasser and Reynolds [1987]): host bus, object access unit, object memory bus, processing elements, intermediate processors, output processor, postprocessor, and CRT display.

management by the host and manages communication between the Voxel Processor and host. The object memory system (OMS) provides the 16MB of RAM required to hold the 256 x 256 x 256 voxel image. The OMS stores the voxel-based object data within 64 memory modules distributed among the 64 processing elements.

The PEs render images from their 64 x 64 x 64 voxel subcubes of the volume. Each PE has two 128 x 128 pixel output buffers, a copy of the SCTs, an input density look-up table, and an arithmetic processor. During operation, each PE accesses the data in its own subcube in back-to-front order. The back-to-front series of computations on the subcube stored at the PE yields the 2D subimage of the volume that is visible given the current set of user-editing inputs. This 2D image contains the visible voxel values and their image space z'-distance values. The PEs place the 2D image into one of their two output buffers for use by the IP.

The next two stages of the pipeline perform the task of merging the 64 separate pictures produced by the PEs into a single picture that portrays the desired


portion of the volume. To perform the merge operation, the eight IPs and the OP use the SCTs to determine the input picture position offsets and memory addresses. Each of the eight IPs merges the minipictures generated by its set of eight input PEs into one of its two 256 x 256 pixel output buffers. The OP forms the final image by merging the contents of the eight IP output buffers into a 512 x 512 pixel frame buffer. Once the image is in the frame buffer, the postprocessor prepares the image for display. The postprocessor is responsible for shading, brightness, and pseudocoloring of the final image using texture maps and either distance or gradient shading.

Two critical steps in the operation of the Voxel Processor are subimage merging at the IPs and OP and mapping the voxels from object space to image space at the PEs. The machine accomplishes these operations under the control of two SCTs. Each SCT contains eight entries, one for each octant of a cube. The host computer determines the two SCTs based on the desired orientation of the output image, sorts the entries in both SCTs into back-to-front order, then broadcasts them to the processors in the machine. SCT1 contains the back-to-front octant ordering for the desired scene orientation. The IPs and OP use SCT1 to perform the back-to-front merging of the subimages output from the PEs and IPs. The PEs use SCT2 for back-to-front generation of subimages from their individual subcubes of the volume.
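The kind of octant ordering an SCT entry encodes can be sketched as follows (Python, illustrative only; the labeling of octants by sign bits and the dot-product sort are our own simplification of the published table construction):

    def btf_octant_order(view):
        # Order the eight octants of a cube back to front for a viewer located
        # along direction 'view' = (vx, vy, vz).  Octants are labeled by the
        # 0/1 position bits (bx, by, bz); the octant farthest from the viewer
        # (smallest projection on the view vector) is emitted first.
        vx, vy, vz = view
        octants = [(bx, by, bz) for bx in (0, 1) for by in (0, 1) for bz in (0, 1)]
        return sorted(octants, key=lambda b: b[0] * vx + b[1] * vy + b[2] * vz)

    # Example: a viewer out along +x, +y, +z sees octant (0, 0, 0) first (farthest).
    print(btf_octant_order((1.0, 1.0, 1.0))[0])   # (0, 0, 0)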

The Voxel Processor uses parallelism at two different levels to attack the surface-rendering throughput bottleneck. First, within each of the first two stages of the processor pipeline there is parallel, independent operation by the PEs and IPs on disjoint sets of voxels. Second, each stage of the pipeline operates independently due to the dual buffering of the output from each stage. Since each stage operates independently on a different output frame, at any one time there are four frames in the pipeline. This degree of parallelism is responsible for the claimed real-time performance, 25 frames

of 512 x 512 pixels each per second, attained by the Voxel Processor. The Voxel Processor was only proposed, never built. The uniprocessor-based Voxelscope II60 uses slightly modified Voxel Processor algorithms. Dynamic Digital Displays sells the Voxelscope II machine.

SUMMARY

The use of 3D techniques for medical imaging is still controversial. Although many physicians consider it a major breakthrough, some prominent ones oppose it, mainly because of the processing assumptions made to create 3D displays. For example, interpolation is commonly used, which, in clinical terms, is not precise but is nevertheless required to achieve scene continuity. Two broad issues remain to be addressed: clinical applicability and clinical utility. Convincing physicians that 3D images have clinical utility requires continued research into 3D medical imaging architectures that provide improved image quality at faster rendering rates (approaching real time) with an easy-to-use user interface. To establish 3D medical imaging's clinical applicability requires psychophysical studies on large numbers of patients. These psychophysical studies have yet to be performed. In addition, psychophysical studies can help to answer a broader question: What are the genuine physician and clinical requirements for 3D medical imaging? Psychophysical studies may disclose that computer cycles spent on one aspect of the 3D medical imaging process could be better spent on another.

In this article we discussed various solutions to the problems of displaying medical image data. These solutions ranged from the general purpose (e.g., the MIPG machine, which uses insightful algorithms on the computer available in the imaging modality equipment, and the Mayo Clinic ANALYZE machine) to the specialized (the Pixel-Planes

60Voxelscope II is a trademark of Dynamic Digital Displays, Inc.


machines, Kaufman's Cube, and the Voxel Processor). In all cases, certain tradeoffs were evident. In particular, we noted that the general-purpose solutions are either slow or use expensive equipment, whereas the special-purpose solutions are quite powerful but limited to solving only one problem. Underuse of these special-purpose machines diminishes their cost effectiveness.

This situation is not an unexpected one; as we implied at the beginning of the article, the field of 3D medical imaging has special needs. It behooves us to learn from the attempts discussed here and to combine these lessons with the state of current technology. Then, we can move closer to providing broadly available general-purpose 3D medical imaging systems that have the performance and effectiveness of contemporary special-purpose systems. Although the presentation of such solutions should not be a part of the present survey, we shall delineate our conclusions and some research avenues we are currently pursuing.

First, the special-purpose-hardware solutions indicated the right approach but do not go far enough: parallel computation, on as massive a level as possible and in a grain amenable to use of the hardware by other tasks. These other tasks should be of interest to medical care providers, since these 3D medical imaging machines have to be situated in hospitals and clinics at least until ultrahigh-speed communication media are available. This implies that parallel machine solutions should be provided by fairly general-purpose multiprocessor computers.

Second, the basic data for the display should be the actual slice data, possibly with interpolated data added. Additional processing should only be done when requested by the user. This is a controversial conclusion. It can be supported only after the 3D medical imaging community performs the psychophysical studies mentioned earlier.

Third, the need for new approaches to the 3D medical imaging task is still with us. For the foreseeable future, hardware cannot provide the processing speed needed to generate photorealistic images in real time.61 Insightful algorithms and concepts that reduce the computational burden are required. This need is especially critical because the medical imaging modality community continues to increase the resolution of its equipment.

Last, any hardware support necessary to create a real-time environment should take the form of coprocessor or special-purpose display hardware. The coprocessor or special-purpose hardware should be separated relatively cleanly from the actual general-purpose processor(s). This separation is present today in some general-purpose, high-powered commercial graphics machines like the Titan, Stellar, and AT&T machines referenced in Table 2.

APPENDIX A: HIDDEN-SURFACE REMOVAL ALGORITHMS

Hidden-surface removal algorithms fall into three classes: scanline based, depth sort, and z-buffer. Scanline-based algorithms have found little application in 3D medical image processing. This section describes four hidden-surface removal algorithms: depth-sort, z-buffer, back to front, and front to back.

A depth-sort algorithm requires sorting the scene elements according to their distance from the observer's viewpoint, then painting them to the frame buffer in order of decreasing distance. (In Appendices A-F, the term scene element signifies the basic data element in the contour (1D), surface (2D), and volume (3D) data models.) See Frieder [1985a] for an example. The depth-sort algorithm developed by Newell, Newell, and Sancha illustrates the major operations. Three steps are performed after rotating the scene. First, sort scene elements by the largest z'-coordinate of each scene element. Second, resolve conflicts that arise when the z'-extents of scene elements overlap, using the tests described in their paper. Third, paint the sorted list to the display buffer in order of decreasing distance. Since the scene elements nearest the observer are written last, their values overwrite the values of the scene elements they obscure. The resulting image displays only those scene elements visible from the observer's position.
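As a rough illustration of the painting strategy just described, the following Python sketch sorts hypothetical scene elements by their largest z'-coordinate and paints them back to front. The element dictionaries, their field names, and the omission of the overlap-resolution tests are assumptions of this sketch, not part of the Newell, Newell, and Sancha algorithm itself.

```python
# Minimal sketch of the depth-sort (painter's) strategy, assuming each scene
# element is already rotated into image space and carries a z' extent and a
# pixel footprint; larger z' is taken to mean farther from the observer.

def depth_sort_paint(scene_elements, frame_buffer):
    """Paint scene elements back to front into a dict-based frame buffer."""
    # Sort by the largest z' coordinate of each element, most distant first.
    ordered = sorted(scene_elements, key=lambda e: e["z_max"], reverse=True)
    for element in ordered:
        # Overlap-resolution tests (Newell, Newell, and Sancha) are omitted here.
        for (x, y) in element["pixels"]:
            frame_buffer[(x, y)] = element["intensity"]  # nearer elements overwrite
    return frame_buffer

fb = depth_sort_paint(
    [{"z_max": 9.0, "pixels": [(1, 1), (1, 2)], "intensity": 80},
     {"z_max": 2.5, "pixels": [(1, 1)], "intensity": 200}],
    {})
print(fb)  # {(1, 1): 200, (1, 2): 80} -- the nearer element wins at (1, 1)
```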

The z-buffer algorithms use the same pixel-overwriting strategy used in the depth-sort algorithm, but they implement the strategy using the frame buffer and a z-buffer. The frame buffer stores pixel intensity values. The z-buffer is a data structure with the same dimensions as the frame buffer. The z-buffer stores the z'-value of the portion of the scene element mapped to each pixel. Before rendering the scene, the z-buffer is initialized with the largest representable z-value, and the display buffer is initialized to a background value. During rendering, the projection of each scene element into image space yields a depth Z(x, y) at each screen position (x, y). If the newly computed value for Z(x, y) is less than the current value of Z(x, y) in the z-buffer, then the new Z(x, y) replaces the old Z(x, y) in the z-buffer, and the intensity value for the scene element at Z(x, y) replaces the intensity value at screen position (x, y).
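A minimal sketch of this bookkeeping follows, assuming projected scene-element samples arrive as (x, y, z', intensity) tuples and that smaller z' means nearer to the observer; the array-based buffers and sample format are illustrative only.

```python
# Small z-buffer sketch: both buffers are screen-sized arrays; the z-buffer
# starts at the largest representable depth and the frame buffer at background.
import numpy as np

def zbuffer_render(samples, width, height, background=0):
    frame = np.full((height, width), background, dtype=float)   # intensities
    zbuf = np.full((height, width), np.inf)                     # "largest" z value
    for x, y, z, intensity in samples:       # projected scene-element samples
        if z < zbuf[y, x]:                   # new sample is nearer: overwrite
            zbuf[y, x] = z
            frame[y, x] = intensity
    return frame

img = zbuffer_render([(0, 0, 5.0, 100), (0, 0, 2.0, 250), (1, 0, 7.0, 60)], 2, 1)
print(img)  # [[250.  60.]]
```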

A third technique for hidden-surface removal is back-to-front (BTF) readout of the scene as discussed in Frieder et al. [1985a]. This algorithm accesses the entire data set in BTF order relative to the observer. In most circumstances, this method of access requires sorting the data set by z'-value before the algorithm can be used. In the 3D medical imaging environment, however, sorting is not required because the array of scene elements emerges from the scanner in sorted order. Therefore, the only required work is reading out the scene elements in correct BTF order. This algorithm is simpler to implement than the z-buffer algorithm and requires less memory since there is no z-buffer to maintain. Note that the scene elements can be read out in order of decreasing x' or y' or z' coordinates. The fastest-changing index is chosen arbitrarily. The algorithm has two steps. First, rotate all scene elements to their proper location in image space. Then, extract the scene elements from image space in BTF order and paint them into the frame buffer.

The fourth technique for hidden-surface removal is front-to-back (FTB) readout of the scene as discussed in Reynolds et al. [1987]. Front-to-back algorithms operate in basically the same way as BTF algorithms. There are two differences. First, the FTB sequence for reading out scene elements is determined by increasing z' distance from the observer. Second, once the algorithm has written a scene element value to a point in screen space, no other scene element value may be written to that same point in screen space.
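The two readout orders can be contrasted with a small sketch, assuming the voxel array is already in scanner (sorted) order along z' with slice 0 nearest the observer; the thresholding used to decide which voxels are painted is an assumption of this sketch, not part of the readout idea itself.

```python
# Sketch of back-to-front (BTF) and front-to-back (FTB) readout of a voxel
# array that is already in sorted order along z'; slice 0 is assumed nearest.
import numpy as np

def btf_paint(volume, threshold):
    """Back-to-front: nearer slices simply overwrite farther ones."""
    depth, height, width = volume.shape
    frame = np.zeros((height, width))
    for z in range(depth - 1, -1, -1):          # farthest slice first
        mask = volume[z] >= threshold           # voxels belonging to the object
        frame[mask] = volume[z][mask]
    return frame

def ftb_paint(volume, threshold):
    """Front-to-back: a pixel, once written, is never written again."""
    depth, height, width = volume.shape
    frame = np.zeros((height, width))
    written = np.zeros((height, width), dtype=bool)
    for z in range(depth):                      # nearest slice first
        mask = (volume[z] >= threshold) & ~written
        frame[mask] = volume[z][mask]
        written |= mask
    return frame

vol = np.array([[[10, 0]], [[7, 7]]])           # 2 slices, each 1 x 2 pixels
print(btf_paint(vol, 5), ftb_paint(vol, 5))     # both show the front-most voxel value at each pixel
```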

APPENDIX B: RAY TRACING

Ray tracing is a technique for locating and shading the visible surfaces within a scene. It operates by following the path taken by rays of light originating at the observer through screen space and on into image space. The interaction of the light color with the material(s) encountered by the ray determines the value assigned to the intersected screen space pixel. The appeal of ray tracing comes from its ability to produce "photorealistic" or "lifelike" images by modeling complicated reflection/refraction patterns and transparency.

Whitted's [1980] paper is the seminal work in ray tracing. It describes a ray-tracing scheme that incorporates a shading model into the recursive ray-casting image-rendering calculations, thereby producing high-quality, photorealistic images. Ray tracing operates by casting primary rays from screen space into image space. Whenever a primary ray intersects a surface, three types of secondary rays are spawned: shadow ray(s), a reflection ray, and a refraction ray. A ray-casting tree maintains the parent-child relationship between the primary ray and all shadow, reflection, and refraction rays subsequently spawned from it. Shadow rays determine if the object surface is in shadow with respect to each of the light sources. An object is in the shadow of another object with respect to a given light source if the shadow ray intersects an object between its surface and the light source. Reflection and refraction at the object surface are determined by spawning reflection and refraction rays at the object surface intersection point. These two newly spawned rays gather additional information about the light arriving at the ray/object intersection point. Moreover, whenever reflection or refraction rays intersect other surfaces they, in turn, spawn new refraction, reflection, and shadow rays. All primary and secondary rays continue to traverse image space until they either intersect an object or leave the image space volume. When ray casting terminates, the ray-casting tree is traversed in reverse depth order. At each tree node a shading model is applied to determine the intensity of the ray at the node. (Whitted uses Phong's; any other illumination model may be used, such as the Cook-Torrance model [1982], as long as the model is based upon geometric optics concepts and not radiosity.) The computed intensity is attenuated by the distance between the parent and current child node and is used as the input to the intensity calculations at the parent node as either reflected or refracted light.
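The recursive structure described above can be sketched as follows. This is a bare skeleton under several assumptions (a unit-length ray direction, a sphere-only scene, a fixed reflection constant, and no refraction branch); it is not a faithful implementation of Whitted's renderer.

```python
# Skeletal sketch of the primary-ray / shadow-ray / reflection-ray recursion.
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t, or None (direction must be unit length)."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    for t in ((-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0):
        if t > 1e-6:
            return t
    return None

def trace(origin, direction, spheres, light_pos, depth=2):
    hits = [(hit_sphere(origin, direction, c, r), c, r, col) for c, r, col in spheres]
    hits = [h for h in hits if h[0] is not None]
    if not hits:
        return 0.0                                   # ray left the scene: background
    t, center, radius, colour = min(hits, key=lambda h: h[0])
    point = origin + t * direction
    normal = (point - center) / radius
    to_light = light_pos - point
    to_light = to_light / np.linalg.norm(to_light)
    # Shadow ray: is there an occluder between the intersection point and the light?
    shadowed = any(hit_sphere(point, to_light, c, r) is not None for c, r, _ in spheres)
    local = 0.0 if shadowed else colour * max(np.dot(normal, to_light), 0.0)
    if depth == 0:
        return local
    # Reflection ray (refraction branch omitted in this sketch), attenuated by a constant.
    refl_dir = direction - 2.0 * np.dot(direction, normal) * normal
    return local + 0.3 * trace(point, refl_dir, spheres, light_pos, depth - 1)

spheres = [(np.array([0.0, 0.0, 3.0]), 1.0, 1.0)]    # (center, radius, diffuse "colour")
print(trace(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
            spheres, np.array([5.0, 5.0, 0.0])))
```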

To render antialiased images, Whitted projects four rays per pixel, one from each corner. If the values of the four rays are similar, the pixel intensity is the average ray value. If the ray values have a wide variance, the pixel is subdivided into four quadrants and additional rays cast from the corners of the new quadrants. Quadrant subdivision continues until the new quadrant corner rays satisfy the minimum variance criteria, whereupon the intensity of the pixel is calculated from the area-weighted sums of the values for the quadrants and subquadrants within the pixel. Whitted recognized the overwhelming computational expense of the intersection testing required by the algorithm. He accelerated intersection test processing by enclosing each object in a regularly shaped bounding volume (a sphere). (A bounding volume is a regularly shaped container, such as a cube, sphere, or cone, which is used to enclose an object within a volume to be rendered.) A ray-bounding volume intersection test determines if the ray passes close enough to the enclosed object to warrant a ray-object intersection test. An object-ray intersection test is performed only if the ray intersects the bounding volume of the object; otherwise the ray continues on its journey through image space. Because it is much simpler to determine if a ray intersects the surface of a regularly shaped volume instead of the possibly convoluted surface of an object, bounding volume intersection testing pays off in reduced image rendering time.
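A sketch of this rejection test appears below, assuming a unit-length ray direction and a sphere as the bounding volume; the object dictionary and the placeholder for the exact intersection test are illustrative only.

```python
# Sketch of the bounding-volume idea: a cheap ray/sphere test gates the
# (potentially expensive) exact ray/object intersection test.
import numpy as np

def ray_hits_bounding_sphere(origin, direction, center, radius):
    """True if the ray passes within 'radius' of 'center' (direction is unit length)."""
    oc = center - origin
    t_closest = np.dot(oc, direction)              # parameter of the closest approach
    closest = origin + max(t_closest, 0.0) * direction
    return np.dot(center - closest, center - closest) <= radius * radius

def intersect_object(origin, direction, obj):
    if not ray_hits_bounding_sphere(origin, direction, obj["center"], obj["bound_radius"]):
        return None                                # cheap rejection: ray passes by
    # ... the exact (and costly) ray/surface intersection would go here ...
    return "exact test performed"

obj = {"center": np.array([0.0, 0.0, 5.0]), "bound_radius": 1.5}
print(intersect_object(np.zeros(3), np.array([0.0, 0.0, 1.0]), obj))   # exact test performed
print(intersect_object(np.zeros(3), np.array([0.0, 1.0, 0.0]), obj))   # None
```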

Since Whitted's paper, research on improved methods for ray tracing has concentrated on reducing the computational cost of ray tracing, with additional efforts directed toward improving the photorealism of the rendered image. Techniques for reducing ray value computation time propose using novel data structures [Fujimoto 1986; Glassner 1984, 1988; Kaplan 1987], parallel processing [Dippe and Swensen 1984; Gudmundsson and Randen 1990], adaptive ray termination [@son and Keeler 1988], and combinations of these techniques [Levoy 1988a, 1988b, 1989a, 1989b, 1990a, 1990b; Levoy et al. 1990]. Techniques for improving image quality are presented in Cook [1984], Cook et al. [1984], and Carpenter [1984]. The references cited concerning aliasing are also relevant to improving the quality of ray-traced images. We refer the reader to Glassner [1989] for a more thorough discussion of principles and techniques for ray tracing, as well as for an extensive annotated bibliography of this field.

APPENDIX C: IMAGE SEGMENTATION BY THRESHOLDING

Image segmentation by thresholding is a technique for locating regions in a scene that have the same property. There are two steps in the process. First, the scene elements must be classified according to some criteria, usually the voxel value. For example, the user can define a maximum voxel value (a threshold) or a user-defined range of voxel values (a threshold window) to separate the object(s) of interest from the remainder (background) of the scene. Thresholding assumes that any voxel meeting the criteria for selection, no matter where it lies, is part of the object. Since there is no general theory of image segmentation, we illustrate the process with two algorithms: a generic algorithm and Farrell's algorithm [Farrell et al. 1984, 1985].

Assume a 2D scene composed of pixels in which there is one object. The values assigned to the pixels lying within the bounds of the object fall within a continuous range of numbers, and no pixel outside the object has a value within this range. Scene size and the screen resolution are identical. The goal of the procedure is to form a binary scene. The procedure forms a binary scene by assigning all the pixels within the object a value of 1 and all other pixels a value of 0. One way to accomplish this task is as follows. First, the user indicates a range of pixel values that encompasses the range of pixel values for the object. Second, starting at the upper-left-hand corner of the scene, determine the binary scene values for screen pixels as follows: If a scene pixel has a value within the range, assign the corresponding screen pixel a value of 1; otherwise assign the screen pixel a value of 0. Perform this operation for all the scene pixels. When the procedure terminates, display the object on the screen.
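The generic procedure amounts to a single comparison per pixel; the following sketch applies a threshold window to a small assumed scene array.

```python
# Direct transcription of the generic thresholding procedure: pixels whose
# values fall inside the user-supplied range become 1, all others become 0.
import numpy as np

def threshold_window(scene, low, high):
    return ((scene >= low) & (scene <= high)).astype(np.uint8)

scene = np.array([[12, 80, 75],
                  [70, 90, 10]])
print(threshold_window(scene, 60, 95))
# [[0 1 1]
#  [1 1 0]]
```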

Farrell's technique [Farrell et al. 1984, 1985] is also straightforward. It differentiates the object of interest from the remainder of the volume by assigning a unique color to the range of voxel values that specifies an object. This technique assumes that the object corresponds to a continuous range of values (i.e., there are no discontinuities in the window) and that no other objects are incorrectly assigned the same color.

Because of the noise and low contrast encountered in medical image data, segmentation by thresholding often does not produce an accurate portrayal of soft tissue or bony objects. As a result, other image segmentation techniques have been developed to supplement or replace the thresholding process. In general, these techniques make the segmentation decision on a voxel-by-voxel basis based upon information lying in the neighborhood of each voxel rather than upon a single global threshold setting. These image segmentation techniques can be classified into a few different approaches based on the technique(s) used to improve the segmentation procedure. There are algorithms that apply statistical analysis [Choi et al. 1989; Fan et al. 1987; Rounds and Sutty 1979], algorithms that use neighborhood-based (instead of global) thresholds [Doherty et al. 1986; Tuomenoksa et al. 1983], algorithms that use a combination of smoothing and edge detection operators [Ayache et al. 1989; Bomans et al. 1990; Cullip et al. 1990; Nelson et al. 1988; Pizer et al. 1988, 1990a; Raman et al. 1990], algorithms that use gradients [Trivedi et al. 1986], algorithms that use statistical pattern recognition techniques [Coggins 1990; Low and Coggins 1990], algorithms that use image pyramids and stacks [Kalvin and Peleg 1989; Vincken et al. 1990], algorithms based upon expert systems and artificial intelligence techniques [Acharya 1990; Chen et al. 1989a; Cornelius and Fellingham 1990; Nazif and Levine 1984; Raya 1989a, 1989b, 1990a; Stansfield 1986], and algorithms that are combinations of these techniques [Gambotto 1986].


APPENDIX D: SURFACE-TRACKING ALGORITHMS

This appendix presents three algorithms that have been used for boundary detection and surface tracking. These are the algorithm proposed in Liu [1977], the algorithm proposed in Artzy et al. [1981], and the "marching cubes" algorithm described in both Lorensen and Cline [1987] and Cline et al. [1988].

Liu's algorithm extracts a surface from a volume by exploiting three properties of an object: contrast between the object and surrounding material(s), connectivity, and agreement with a priori knowledge. The algorithm exploits these three properties, along with the requirement that at least one and at most two neighbors of every boundary voxel also lie on the boundary, to determine the boundary voxels that outline the object. The possible boundary voxels are considered to lie along the curves in 3D space that have high voxel-value gradient values. Boundary detection begins with the user's selection of an initial boundary voxel. The seed voxel is then localized based on a maximum voxel-value gradient criterion. The next and all subsequent boundary voxels are located by examining the neighbors of the current boundary voxel. The neighbor that lies across the largest voxel-value gradient and was not previously identified as a boundary voxel becomes the current boundary voxel. If none of the gradient values at a given boundary voxel exceeds the minimum gradient value, the algorithm backtracks to the immediately preceding boundary voxel. The preceding boundary voxel once again becomes the current voxel. The algorithm then searches for another boundary voxel from among the set of voxels around the current voxel. Liu's algorithm restricts its search to those voxels not previously considered as a boundary voxel. If none of the restricted set of voxels around the current voxel meets the minimum voxel-value gradient criteria, the algorithm backtracks again. The algorithm continues backtracking until it finds a boundary voxel. The search for the object boundary then resumes from the newly identified boundary voxel.
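A greatly simplified 2D sketch of the follow-the-gradient-and-backtrack idea is given below. The real algorithm operates on 3D voxel data and enforces the connectivity and a priori constraints described above, so the image, seed, thresholds, and gradient estimate here are all assumptions of the sketch.

```python
# Simplified 2D illustration: follow high-gradient pixels from a seed,
# backtracking (popping the stack) when no unvisited neighbor exceeds the
# minimum gradient.
import numpy as np

def track_boundary(image, seed, min_gradient, max_steps=200):
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)                       # per-pixel gradient magnitude
    boundary, stack, visited = [seed], [seed], {seed}
    while stack and len(boundary) < max_steps:
        y, x = stack[-1]
        candidates = [(grad[ny, nx], (ny, nx))
                      for ny in range(max(0, y - 1), min(image.shape[0], y + 2))
                      for nx in range(max(0, x - 1), min(image.shape[1], x + 2))
                      if (ny, nx) not in visited and grad[ny, nx] >= min_gradient]
        if not candidates:
            stack.pop()                           # backtrack to the previous boundary pixel
            continue
        best = max(candidates)[1]                 # neighbor across the largest gradient
        visited.add(best)
        boundary.append(best)
        stack.append(best)
    return boundary

img = np.zeros((6, 6)); img[2:5, 2:5] = 100.0     # a bright square "object"
print(track_boundary(img, (2, 2), min_gradient=10.0))
```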

Artzy's algorithm models the surface of an object as a directed graph and translates the problem of surface detection into a problem of tree traversal. Artzy et al. [1981] in conjunction with Herman and Webster [1983] prove that, given a directed graph G whose nodes correspond to faces of voxels separating the object in the scene from background, the connected subgraphs of G correspond to surfaces of connected components in the scene. Therefore, finding a boundary surface corresponds to traversing a subgraph of a digraph. A key property of the digraph is that every node in the digraph has indegree two and outdegree two. This characteristic, in conjunction with the fact that every connected component of the digraph is strongly connected, supports two important conclusions. First, for every node in the graph there is a binary spanning tree rooted at that node. Second, the binary spanning tree is a subgraph of the digraph and it spans the connected component containing the given node. This property of the digraph guarantees that given a single face of a voxel on the boundary of an object, every voxel face in the boundary can be found. Gordon and Udupa [1989] describe an improvement to this algorithm.

Artzy's surface detection algorithm assumes a 3D array representation for the 3D scene. The object in the scene is defined to be a connected subset of elements of the 3D array. First, a binary volume representation of the object is formed by segmentation. The one-voxels in the binary volume representation are all the voxels that may comprise the object of interest, but the object does not contain all the one-voxels. The algorithm separates the object and non-object exposed one-voxel faces by finding the connected subset of exposed one-voxel faces. The algorithm logically constructs the binary spanning tree as it proceeds by locating exposed, adjacent one-voxel faces and adding them to the tree. The selected faces are the nodes in the tree. The algorithm never actually builds the tree. Instead, it maintains the important current aspects of the tree's status using two lists. These lists are the list of once-visited nodes and the list of nodes to be considered.

To start the algorithm, the user specifies a seed voxel. An exposed face of the seed voxel is the root node of the binary spanning tree. When the algorithm visits a node in the binary spanning tree, the node is made current and is checked against a list of once-visited nodes. If the algorithm previously examined the node, the node is removed from further consideration since it can never be visited again. If the node has not been previously examined, it is added to the list of once-visited nodes. The algorithm then adds all the exposed one-voxel faces adjacent to the current node to the list of nodes to be considered. Using a first-in first-out (FIFO) discipline, the next node to visit is then selected from the list of nodes to be considered. Then, the algorithm examines the selected node. The algorithm terminates when the queue of nodes to be considered is empty.
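The queue-driven traversal can be sketched as follows. For brevity, this version walks 6-connected one-voxels that have at least one exposed face rather than Artzy's face-adjacency digraph, so the data structures are a simplification; only the FIFO discipline and the visited list mirror the description above.

```python
# Simplified queue-driven surface tracking over a binary volume.
from collections import deque
import numpy as np

FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exposed_faces(volume, voxel):
    z, y, x = voxel
    for dz, dy, dx in FACES:
        nz, ny, nx = z + dz, y + dy, x + dx
        outside = not (0 <= nz < volume.shape[0] and
                       0 <= ny < volume.shape[1] and
                       0 <= nx < volume.shape[2])
        if outside or volume[nz, ny, nx] == 0:
            yield (voxel, (dz, dy, dx))            # face between object and background

def track_surface(volume, seed):
    queue, visited, surface = deque([seed]), {seed}, []
    while queue:
        voxel = queue.popleft()                    # FIFO: breadth-first over the surface
        surface.extend(exposed_faces(volume, voxel))
        for dz, dy, dx in FACES:
            n = (voxel[0] + dz, voxel[1] + dy, voxel[2] + dx)
            if (n not in visited and
                    0 <= n[0] < volume.shape[0] and
                    0 <= n[1] < volume.shape[1] and
                    0 <= n[2] < volume.shape[2] and
                    volume[n] == 1 and list(exposed_faces(volume, n))):
                visited.add(n)
                queue.append(n)
    return surface

vol = np.zeros((4, 4, 4), dtype=int); vol[1:3, 1:3, 1:3] = 1   # a 2x2x2 object
print(len(track_surface(vol, (1, 1, 1))))                      # 24 exposed faces
```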

The third surface-tracking algorithm is the "marching cubes" algorithm outlined in Cline et al. [1988] and Lorensen and Cline [1987]. This technique processes the data in a medical image volume in scanline order, creating a triangle-tiled model of a constant density surface as it moves along. The algorithm has two basic steps. The first step locates the desired surface within the cube and defines it with triangles. The second step calculates the normals to the surface at each vertex of each triangle in the cube.

The marching cube performs surface location. It is a logical cube created from eight adjacent points, four from each of two adjacent slices of data gathered by a medical imaging modality. As the first step in locating the surface, the data value at each vertex of the cube is examined. The algorithm assigns a 1 to a vertex if its value meets or exceeds the threshold value that defines the object; otherwise it assigns a value of 0. One-vertices are within or on the surface of the object; zero-vertices are outside the surface. This preliminary processing roughly locates the surface/cube intersection, since the surface intersects the marching cube only along the edges containing a one-vertex and a zero-vertex. By inspection, the authors determined that there are 14 unique cases of surface/cube intersection to be considered. After the algorithm determines the type of cube/surface intersection, the location of the intersection along each marching cube edge is calculated. The algorithm estimates the location of the intersection using the result of a linear interpolation of the data values at the cube vertices on the edge. The point of intersection on each edge is a vertex of a triangle. For shading purposes, the algorithm computes the surface normal at each triangle vertex based on the marching cube vertex gradient values. By taking the central difference of the marching cube vertex gradient values along the edge containing the triangle vertex, the algorithm estimates the gradient, and so the surface normal, at the enclosed triangle vertex. This completes processing for the current location of the cube, so the triangle vertices and vertex normals are output and the cube marches deeper into object space.
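The per-cube vertex classification and edge interpolation can be sketched as follows; the 14-case triangle table and the gradient-based normals are omitted, and the volume, corner, and threshold are illustrative only.

```python
# Sketch of the vertex classification and edge interpolation steps of marching
# cubes for a single cube; it only reports where the isosurface crosses the
# cube edges, not how those crossings are tiled into triangles.
import numpy as np

# Cube vertices as (dz, dy, dx) offsets and the 12 edges as vertex-index pairs.
VERTS = [(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1)]
EDGES = [(0,1),(0,2),(1,3),(2,3),(4,5),(4,6),(5,7),(6,7),(0,4),(1,5),(2,6),(3,7)]

def cube_edge_intersections(volume, corner, threshold):
    z, y, x = corner
    values = [volume[z+dz, y+dy, x+dx] for dz, dy, dx in VERTS]
    inside = [v >= threshold for v in values]          # one-vertices vs. zero-vertices
    points = []
    for a, b in EDGES:
        if inside[a] != inside[b]:                     # surface crosses this edge
            t = (threshold - values[a]) / (values[b] - values[a])   # linear interpolation
            pa, pb = np.array(VERTS[a], float), np.array(VERTS[b], float)
            points.append(np.array(corner, float) + pa + t * (pb - pa))
    return points

vol = np.zeros((2, 2, 2)); vol[1, 1, 1] = 10.0         # one "hot" corner
for p in cube_edge_intersections(vol, (0, 0, 0), threshold=5.0):
    print(p)        # three points, each halfway along an edge touching vertex (1,1,1)
```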

APPENDIX E: SHADING CONCEPTS

Shading is a computationally intensive process that attempts to depict the appearance of a rendered object using an illumination model. The illumination model depicts the interaction of different types of light, from possibly different sources, at different locations, with the materials in the scene on a pixel-by-pixel basis. The characterization must also consider the position of each material relative to an observer, the position of each material relative to each individual light source, and the composition and light reflection/transmission characteristics of each material. Illumination models are based on the concept that the amount of light reflected from and transmitted through an object makes the object visible. The wavelength of the incident light in combination with the surface properties of the object determine the amount of light absorbed, reflected, and transmitted by the object, as well as the color of the object as seen by the human eye. The interaction of incident light with the surface of an object can be characterized using a combination of the wavelengths in the incident light, its direction, the type of light source (area, point, or diffuse), the orientation of the object surface, and the composition of the surface.

The two broad classes of illumination models used in computer graphics are the global and local illumination models. Global illumination models provide a higher quality rendering than local models, albeit at the cost of additional rendering computations. They characterize the appearance of each object by considering many parameters for each object. A partial list of these parameters includes the surface properties and orientation of materials, the location, intensity, area, and color of all the light sources that shine upon the object, the amount of light reflected from surrounding objects, the distance from the light source(s) to the object, and the observer's position. These considerations allow the renderer to determine the amount of refracted light transmitted through the object and the surface specular and diffuse reflection. Local illumination models need only compute the diffuse reflection at the surface of an object. Local illumination models base their computations on the surface orientation of the object, illumination from a single point light source (possibly in conjunction with an area light source), and the observer's position.

The following global illumination model [Rogers 1985] considers diffuse and specular reflection as well as refraction effects to render the surface of the object. Figure 20 contains a 2D depiction of the relationships involved.

Diffuse reflection is caused by the absorption and uniformly distributed reradiation of light from the illuminated surface of an object and appears as a dull, matte surface. Diffuse reflection can be modeled using Lambert's cosine law for reflection from a point light source for a perfect diffuser:

I = I_l k_d cos θ,  0 ≤ θ ≤ π/2.  (1)

I is the reflected intensity, I_l is the incident light from the point light source, k_d (defined over the range 0-1) is the diffuse reflection constant for the reflecting material, and θ is the angle between the surface normal and the light direction. Because of the large computational cost involved in computing the illumination for an area light source, area light sources are usually treated as a constant term and linearly combined with the diffuse reflection computed with (1), yielding

I = I_a k_a + I_l k_d cos θ.  (2)

I_a is the intensity of the area light source and k_a is the corresponding reflection constant for the reflecting material. Because the perceived intensity of light reflected from an object falls off linearly with the distance between observer and object, the intensity for the point light source computed in (2) is linearly attenuated:

I = I_a k_a + (I_l k_d cos θ) / (d + K),  0 ≤ θ ≤ π/2.  (3)

In (3), d is the distance from the light source to the object surface, and K is an arbitrary constant.

Figure 20. Surface-shading relationships. (Legend: solid line, cast rays; dashed line, physical light path; D, diffuse reflection ray; R, specular reflection ray; T, transmitted light ray; N, surface normal vector; S, line-of-sight vector; L, light-source vector.)

Specular reflection is the reflection caused by light bouncing off the outer surface of an object. It has a directional component and typically appears as a highlight on the surface of an object. The empirical formulas described by Phong [1975] as well as Cook and Torrance [1982] model this phenomenon. The following discussion is based on Phong's model, which considers the angle of the incident light, i, its wavelength, λ, the angle between the reflected ray and the line of sight, α, and the spatial distribution of the incident light to compute the specular reflectance of an object, I_s:

I_s = w(i, λ) cos^n α.  (4)

The reflectance function, w(i, λ), gives the ratio of specularly reflected light to incident light. The cos^n term approximates the spatial distribution of the specularly reflected light. Because w(i, λ) is a complex function, it is usually replaced by an experimentally determined constant, k_s; k_s is selected to yield a pleasing appearance. Combining (3) and (4) yields the desired global illumination model:

I = I_a k_a + I_l (k_d cos θ + k_s cos^n α) / (d + K),  0 ≤ θ ≤ π/2.  (5)
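Equation (5) can be transcribed directly into a small function; the constants below are arbitrary and serve only to show the ambient, distance-attenuated diffuse, and specular terms.

```python
# Direct transcription of equation (5); all constants are illustrative.
import numpy as np

def illuminate(normal, to_light, to_viewer, distance,
               I_a=0.2, I_l=1.0, k_a=0.3, k_d=0.6, k_s=0.4, n=8, K=1.0):
    normal, to_light, to_viewer = (v / np.linalg.norm(v)
                                   for v in (normal, to_light, to_viewer))
    cos_theta = max(np.dot(normal, to_light), 0.0)            # diffuse term
    reflect = 2.0 * np.dot(normal, to_light) * normal - to_light
    cos_alpha = max(np.dot(reflect, to_viewer), 0.0)          # specular term
    return I_a * k_a + I_l * (k_d * cos_theta + k_s * cos_alpha ** n) / (distance + K)

print(illuminate(np.array([0.0, 0.0, 1.0]),      # surface normal
                 np.array([0.0, 1.0, 1.0]),      # direction to the light
                 np.array([0.0, 0.0, 1.0]),      # direction to the viewer
                 distance=2.0))
```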

There are more accurate, and computationally intensive, models than the one described above. For example, Cook and Torrance [1982] describe a more comprehensive model that considers the material properties of each object and accurately portrays the resulting reduction in the intensity of reflected light. Their model also accounts for the blocking of ambient light by surrounding objects and the existence of multiple light sources of different intensities and different areas.

Reflection is not the only lighting effect to be considered. If transparent objects appear in the scene, refraction effects must be allowed for. Snell's law describes the relationship between the incident and refracted angles:

η_1 sin θ = η_2 sin θ'.  (6)

In (6), η_1 and η_2 are the indices of refraction of the two mediums; θ is the incident angle, and θ' is the angle of refraction. To eliminate the effects that can arise from an image space approach to refraction, the refraction computations are typically performed in object space using ray tracing. Simple implementations of transparency effects ignore refraction and the decrease in light intensity with distance. These implementations linearly combine, or composite, the light intensity at an opaque surface, I_2, with the light intensity at the transparent surface, I_1, lying between it and the observer. Taking t as the transparency factor for I_1, the relationship is

I = t I_1 + (1 - t) I_2.  (7)

If I_2 is transparent, the algorithm is applied recursively until it encounters an opaque surface or until it reaches the edge of the volume. More realistic transparency effects can be achieved by considering the surface normal when computing t.
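The recursive compositing of equation (7) can be sketched as a short function; the surface list and its front-to-back ordering are assumptions of this sketch.

```python
# Sketch of the recursion around equation (7): surfaces are listed front to
# back as (intensity, t) pairs, t being the blending factor of eq. (7); the
# recursion replaces I_2 by the composited intensity of everything behind the
# front surface, stopping at the last (opaque) surface or the volume edge.
def composite(surfaces):
    if not surfaces:
        return 0.0                          # ray left the volume: background intensity
    intensity, t = surfaces[0]
    if len(surfaces) == 1:
        return intensity                    # opaque (or last) surface: recursion stops
    return t * intensity + (1.0 - t) * composite(surfaces[1:])

print(composite([(40.0, 0.3), (200.0, 0.0)]))   # 0.3*40 + 0.7*200 = 152.0
```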

The computational cost incurred when using a global illumination model mitigates against its use for 3D medical image rendering, especially when rapid image generation is an important consideration. When combined with ray tracing, however, a global illumination model yields high-quality 3D medical images.

APPENDIX F: SHADING ALGORITHMS

The following discussion provides an overview of shading algorithms that have proven useful in the 3D medical imaging environment. This overview is not all inclusive but instead provides a representative subset of approaches to the problem. These algorithms present several different approaches to rendering shaded surfaces. The first four (distance-only, gradient, gray-scale gradient, and normal-based contextual shading) use local illumination models. These algorithms do not consider lighting effects such as refraction in their calculations but nevertheless provide usable 3D medical image shading at a reasonable cost in time. The gradient and normal-based contextual shading algorithms can be used to implement an approximation to reflection and are notable for the simplifying assumptions they make to compute the local surface normal. The Gouraud and Phong algorithms, on the other hand, are computationally intensive and commonly used when rendered images of the highest quality are desired. The Gouraud model relies on a global illumination model to provide the light intensity at each polygon vertex, then uses this value to compute the light intensity at points between vertices. Phong takes the normal computed at each polygon vertex and interpolates these values along the edges in the scene, with the final intensity for each point determined using a global illumination model.

Distance-only, or depth, shading is the simplest of the six techniques. Depth shading does not estimate the surface normal at the shaded points. Depth shading assigns a shade to a point on a visible surface based upon a distance criterion. The criterion can be the distance of the point from the assumed light source, from the observer, or from the image space z'-axis origin. This computational simplicity comes at some cost, however, in that it tends to obscure surface details. This technique is not an object space or image space surface normal estimation shading algorithm.

Gradient shading [Chen and Sontag 1989; Gordon and Reynolds 1983, 1985; Reynolds 1985] is an image space surface normal estimation algorithm. It computes a realistically shaded image at relatively low computational cost. Because this technique produces high-quality renderings, it is useful in the 3D medical imaging environment. The key to gradient shading is its use of the z'-gradient. The z'-gradient approximates the amount of change in the z'-dimension between neighboring entries in the z'-buffer. After determining the z'-gradient, the local surface normal can be approximated, and using this, the light intensity at the surface can be calculated. The algorithm estimates the surface normal by first finding the gradient vector ∇z' = (∂z'/∂u, ∂z'/∂v, 1). The derivatives ∂z'/∂u and ∂z'/∂v can be estimated using the central difference, the forward difference, the backward difference, or a weighted-average differencing technique. Taking i, j to be the current location in the z'-buffer, the forward difference is ∂z'/∂u|_f = z'_i - z'_{i+1}; the backward difference is ∂z'/∂u|_b = z'_i - z'_{i-1}; and the central difference is ∂z'/∂u|_c = z'_{i+1} - z'_{i-1}. The derivative ∂z'/∂v is estimated similarly. The surface normal, N, is then computed from ∇z'.
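The z'-gradient computation can be sketched as follows, using central differences on an assumed depth buffer and a bare Lambert term in place of a full illumination model; the sign convention used to turn the gradient into a normal is a choice of this sketch.

```python
# Sketch of gradient (z'-buffer) shading: central differences estimate the
# depth gradient, a normal is built from it, and a Lambert term shades it.
# Boundary pixels are ignored for brevity.
import numpy as np

def gradient_shade(zbuffer, light_dir):
    light = np.asarray(light_dir, float)
    light /= np.linalg.norm(light)
    shaded = np.zeros_like(zbuffer, dtype=float)
    for v in range(1, zbuffer.shape[0] - 1):
        for u in range(1, zbuffer.shape[1] - 1):
            dz_du = (zbuffer[v, u + 1] - zbuffer[v, u - 1]) / 2.0   # central difference
            dz_dv = (zbuffer[v + 1, u] - zbuffer[v - 1, u]) / 2.0
            normal = np.array([-dz_du, -dz_dv, 1.0])                # normal of the depth surface
            normal /= np.linalg.norm(normal)
            shaded[v, u] = max(np.dot(normal, light), 0.0)          # Lambert shading
    return shaded

zbuf = np.tile(np.arange(5, dtype=float), (5, 1))   # a plane tilted along u
print(np.round(gradient_shade(zbuf, (0.0, 0.0, 1.0)), 3))
```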

Gray-scale gradient shading, described in Hohne [1986], is an object space surface normal estimation method. This technique, like gradient shading, produces high-quality 3D medical images. Gray-scale gradient shading uses the gray-scale gradient between neighboring voxel values in the z-buffer to approximate the surface normal. This approach uses the ratio of materials comprising a voxel (i.e., the gray-scale voxel value) in conjunction with the ratios in neighboring voxels to compute the depth and direction of the surface passing through the voxel. The gray-scale gradient approximates the rate of change in the ratio of materials between the voxel and its neighbors and hence the rate of change in surface direction. After determining the gray-scale gradient, the local normal can be approximated using a differencing operation; and using this, the light intensity at the surface can be calculated.

Normal-based contextual shading [Chen et al. 1984a, 1985] is an object space surface normal estimation method. This technique calculates the shade for the visible face of a scene element using the estimated surface normal for the visible face. The algorithm estimates the surface normal from two factors. First, it uses the orientation of the visible face of the scene element with respect to the direction to the light. Second, the algorithm considers the orientation of the faces of adjacent scene elements with respect to the direction to the light. These two factors provide the context for the face. For each edge of a face to be shaded, the algorithm classifies the face adjacent to that edge into one of three possible orientations, yielding a total of 81 possible arrangements of adjacent faces. At each face, the algorithm stores the arrangement around the scene element face in a neighbor code. Chen et al. [1984a, 1985] describe how the neighbor codes can be used to approximate the surface normals for the face of the central scene element. If a surface-tracking procedure is performed before the shading operation, the context for each face can be determined at little additional computational cost. When rendering a medical image, the algorithm can calculate the shade at each face using the techniques described in Chen et al. [1984a] or any other local illumination model.

Gouraud [1971] shading attempts to provide a high-quality rendered image by calculating the shading value at points between scene element vertices. We classify this algorithm as an object space surface normal estimation technique. Gouraud shading relies upon the observation that shade and surface normal vary in the same manner across a surface. Gouraud's technique first calculates the surface normal at a scene element vertex, then determines the shade at the vertex. The algorithm computes scene element vertex normals directly from the image data. The algorithm then bilinearly interpolates these vertex shade values to estimate the shade at points lying along the scanline connecting the vertices. Since the interpolation only provides continuity of intensity across scene element boundaries and not continuity in the change in intensity, Mach band effects are evident in the images shaded with this algorithm.

Phong shading [Burger and Gillies 1989; Foley et al. 1990; Phong 1975; Rogers 1985; Watt 1989] lessens the Mach band effect present in Gouraud shading at the cost of additional computation. Phong's approach to shading interpolates normal vectors instead of shading values. First, the algorithm computes the scene element vertex normals directly from the image data. Then, the algorithm computes the surface normal at each scanline/scene element edge intersection point by bilinearly interpolating the two scene element vertex normal values for that edge. Finally, the algorithm estimates the surface normal at points along each scanline within the scene element. The algorithm computes these estimates by bilinearly interpolating the normals computed at the two scanline/scene element edge intersection points for that scanline. Of the six shading techniques discussed in this appendix, Phong shading renders the highest quality images. We classify this algorithm as an object space surface normal estimation method.
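The interpolation step that distinguishes Phong from Gouraud shading can be sketched along a single scanline; the endpoint normals, light direction, and the use of a bare Lambert term instead of the full illumination model of Appendix E are assumptions of this sketch.

```python
# Sketch of Phong-style normal interpolation along one scanline: normals (not
# intensities) are interpolated between the two edge intersections, then each
# interpolated normal is shaded.
import numpy as np

def phong_scanline(n_left, n_right, light, samples=5):
    light = np.asarray(light, float) / np.linalg.norm(light)
    intensities = []
    for i in range(samples):
        t = i / (samples - 1)
        n = (1.0 - t) * np.asarray(n_left, float) + t * np.asarray(n_right, float)
        n /= np.linalg.norm(n)                      # re-normalize the interpolated normal
        intensities.append(max(np.dot(n, light), 0.0))
    return intensities

print([round(i, 3) for i in phong_scanline((0, 0, 1), (0, 1, 0), (0, 0, 1))])
# smoothly falls from 1.0 toward 0.0 across the scanline
```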

ACKNOWLEDGMENTS

The authors are indebted to the employees of Pixar and the researchers whose machines are mentioned for reviewing their particular portions of this survey. We also thank Gabor Herman and the referees for their suggestions and criticisms.

REFERENCES

ABRAM, G., AND WESTOVER, L. 1985. Efficient alias-free rendering using bit-masks and look-up tables. Comput. Graph. 19, 3 (July), 53-59.

ACHARYA, R. 1990. Model-based segmentation for multidimensional biomedical image analysis. Biomedical Image Processing, SPIE 1245, (Santa Clara, Calif.), pp. 144-153.

AKELEY, K. 1989. The Silicon Graphics 4D/240GTX superworkstation. IEEE Comput. Graph. Appl. 9, 4 (July), 71-83.

AMANS, J.-L., AND DARRIER, P. 1985. Processing and display of medical three-dimensional arrays of numerical data using octree encoding. Medical Image Processing, SPIE 593, (Cannes, France, Dec.), 78-88.

APGAR, B., BERSACK, B., AND MAMMEN, A. 1988. A display system for the Stellar graphics supercomputer model GS1000. Comput. Graph. 22, 4 (Aug.), 255-262.

ARDENT COMPUTER. 1988a. DORE' Technical Overview. (April).

ARDENT COMPUTER. 1988b. The Titan Graphics Supercomputer: A System Overview.

ARDENT COMPUTER. 1988c. Titan Specifications.

ARTZY, E. 1979. Display of three-dimensional information in computed tomography. Comput. Graph. Image Process. 9, 196-198.

ARTZY, E., FRIEDER, G., HERMAN, G. T., AND LIU, H. K. 1979. A system for three-dimensional dynamic display of organs from computed tomograms. In Proceedings of the 6th Conference on Computer Applications in Radiology & Computer-Aided Analysis of Radiological Images (Newport Beach, Calif., June), pp. 285-290.

ARTZY, E., FRIEDER, G., AND HERMAN, G. T. 1981. The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm. Comput. Graph. Image Process. 15 (Jan.), 1-24.

AT&T PIXEL MACHINES, INC. 1988a. AT&T Pixel Machines PXM 900 Series Standard Software.

AT&T PIXEL MACHINES, INC. 1988b. PIClib: The Pixel Machine's Graphics Library.

AT&T PIXEL MACHINES, INC. 1988c. RAYlib: The Pixel Machine 900 Series Ray Tracing Library.

AT&T PIXEL MACHINES, INC. 1988d. The Pixel Machine System Architecture: A Technical Report.

AUSTIN, J. D., AND VAN HOOK, T. 1988. Medical image processing on an enhanced workstation. Medical Imaging II: Image Data Management and Display, SPIE 914, 1317-1324.

AYALA, D., BRUNET, P., JUAN, R., AND NAVAZO, I. 1985. Object representation by means of nonminimal division quadtrees and octrees. ACM Trans. Graph. 4, 1 (Jan.), 41-59.

AYACHE, N., BOISSONNAT, J. D., BRUNET, E., COHEN, L., CHIEZE, J. P., GEIGER, B., MONGA, O., ROCCHISANI, J. M., AND SANDER, P. 1989. Building highly structured volume representations in 3D medical images. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 766-772.

BACK, S., NEUMANN, H., AND STIEHL, H. S. 1989. On segmenting computed tomograms. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 691-696.

BAKALASH, R., AND KAUFMAN, A. 1989. Medicube: A 3D medical imaging architecture. Comput. Graph. 13, 2, 151-154.

BARILLOT, C., GIBAUD, B., Lm, O., MIN, L. L., BOULIOU, A., LE CERTEN, G., COLLOREC, R., AND COATRIEUX, J. L. 1988. Computer graphics in medicine: A survey. CRC Critical Rev. Biomed. Eng. 15, 4 (Oct.), 269-307.

BELL, C. G., MIRANKER, G. S., AND RUBINSTEIN, J. 1988. Supercomputing for one. Spectrum 25, 4 (Apr.), 46-50.

BENTLEY, M. D., AND KARWOSKI, R. A. 1988. Estimation of tissue volume from serial tomographic sections: A statistical random marking method. Investigative Radiol. 23, 10 (Oct.), 742-747.

BERGER, S. B., LEGGIERO, R. D., MARCH, G. F., OREFICE, J. J., TUCKER, L. W., AND REIS, D. J. 1990. Advances in microcomputer biomedical imaging. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), pp. 136-145.


BOMANS, M., HOHNE, K.-H., TIEDE, U., AND RIEMER, M. 1990. 3-D segmentation of MR images of the head for 3-D display. IEEE Trans. Med. Imaging 9, 2 (June), 177-183.

BOOTH, K. S., BRYDEN, M. P., COWAN, W. B., MORGAN, M. F., AND PLANTE, B. L. 1987. On the parameters of human visual performance: An investigation of the benefits of antialiasing. IEEE Comput. Graph. Appl. 7, 9 (Sept.), 34-41.

BORDEN, B. S. 1989. Graphics processing on a graphics supercomputer. IEEE Comput. Graph. Appl. 9, 4 (July), 56-62.

BROIIENSHIRE, D. 1988. REAL 3-D stereoscopic imaging. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 503-514.

BURGER, P., AND GILLIES, D. 1989. Interactive Computer Graphics. Addison-Wesley, Reading, Mass.

BYRNE, J. P., UNDRILL, P. E., AND PHILLIPS, R. P. 1990. Feature based image registration using parallel computing methods. In First Conference on Visualization in Biomedical Computing (Atlanta, Georgia, May 22-25), IEEE Computer Society Press, pp. 304-310.

CAHN, D. U. 1986. PHIGS: The programmer's hierarchical interactive graphics system. In Proceedings of the 7th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '86 (Anaheim, Calif., May 11-15), pp. 333-355.

CARLBOM, I., CHAKRAVARTY, I., AND VANDERSCHEL, D. 1985. A hierarchical data structure for representing the spatial decomposition of 3-D objects. IEEE Comput. Graph. Appl. 5, 4 (Apr.), 24-31.

CARPENTER, L. 1984. The A-buffer: An antialiased hidden surface method. Comput. Graph. 18, 3 (July), 103-108.

CATMULL, E. 1984. An analytic visible surface algorithm for independent pixel processing. Comput. Graph. 18, 3 (July), 109-115.

CHEN, C. H., AND AGGARWAL, J. K. 1986. Volume/surface octrees for the representation of three-dimensional objects. Comput. Vision, Graph., Image Process. 36, 1 (Oct.), 100-113.

CHEN, H. H., AND HUANG, T. S. 1988. A survey of construction and manipulation of octrees. Comput. Vision, Graph., Image Process. 43, 409-431.

CHEN, L. S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1984a. Surface rendering in the cuberille environment. Tech. Rep. MIPG 87, Dept. of Radiology, Univ. of Pennsylvania.

CHEN, L. S., HERMAN, G. T., MEYER, C. R., REYNOLDS, R. A., AND UDUPA, J. K. 1984b. 3D83: An easy-to-use software package for three-dimensional display from computed tomograms. IEEE Computer Society Joint International Symposium on Medical Images and Icons (Arlington, Virginia, July), IEEE Computer Society Press, pp. 309-316.

CHEN, L.-S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1985. Surface shading in the cuberille environment. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 33-43.

CHEN, S.-Y., LIN, W.-C., AND CHEN, C.-T. 1989a. An expert system for medical image segmentation. Medical Imaging III: Image Processing, SPIE 1092, (Newport Beach, Calif.), pp. 162-172.

CHEN, S.-Y., LIN, W.-C., AND CHEN, C.-T. 1989b. Improvement on dynamic elastic interpolation technique for reconstructing 3-D objects from serial cross-sections. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 702-706.

CHEN, L.-S., AND SONTAG, M. R. 1989. Representation, display, and manipulation of 3D digital scenes and their medical applications. Comput. Vision, Graph., Image Process. 48, 2 (Nov.), 190-216.

CHOI, H. S., HAYNOR, D. R., AND KIM, Y. 1989. Multivariate tissue classification of MRI images for 3-D volume reconstruction: A statistical approach. Medical Imaging III: Image Processing, SPIE 1092, (Newport Beach, Calif.), pp. 183-193.

CHUANG, K.-S. 1990. Advances in microcomputer biomedical imaging. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), 136-145.

CHUANG, K.-S., AND UDUPA, J. K. 1989. Boundary detection in gray-level scenes. In Proceedings of the 10th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '89 (Philadelphia, Penn., Apr. 17-20), pp. 112-117.

CLINE, H. E., LORENSEN, W. E., LUDKE, S., CRAWFORD, C. R., AND TEETER, B. C. 1988. Two algorithms for the three-dimensional reconstruction of tomograms. Med. Phys. 15, 3 (May/June), 320-327.

COGGINS, J. M. 1990. A multiscale description of image structure for segmentation of biomedical images. First Conference on Visualization in Biomedical Computing (Atlanta, Georgia, May 22-25), IEEE Computer Society Press, pp. 123-130.

COHEN, D., KAUFMAN, A., AND BAKALASH, R. 1990. Real-time discrete shading. Visual Comput. 6, 3 (Feb.), 16-27.

COOK, R. L. 1984. Shade trees. Comput. Graph. 18, 3 (July), 223-231.

COOK, R. L., AND TORRANCE, K. E. 1982. A reflectance model for computer graphics. ACM Trans. Graph. 1, 1 (Jan.), 7-24.


COOK, R. L., PORTER, T., AND CARPENTER, L. 1984. Distributed ray tracing. Comput. Graph. 18, 3 (July), 137-145.

COOK, R. L., CARPENTER, L., AND CATMULL, E. 1987. The Reyes image rendering architecture. Comput. Graph. 21, 4 (July), 95-102.

CORNELIUS, C., AND FELLINGHAM, L. 1990. Visualizing the spine using anatomical knowledge. Extracting Meaning from Complex Data: Processing, Display, Interaction, SPIE 1259, (Santa Clara, Calif., Feb. 14-16), pp. 221-231.

CROW, F. C. 1977. The aliasing problem in computer-generated shaded images. Commun. ACM 20, 11 (Nov.), 799-805.

CROW, F. C. 1981. A comparison of antialiasing techniques. IEEE Comput. Graph. Appl. 1, 1 (Jan.), 40-48.

CULLIP, T. J., FREDERICKSON, R. E., GAUCH, J. M., AND PIZER, S. M. 1990. Algorithms for 2D and 3D image description based on IAS. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Georgia, May 22-25), IEEE Computer Society Press, pp. 102-107.

DEKEL, D. 1987. A fast, high quality, medical 3D software system. Medical Imaging I, SPIE 767, (Newport Beach, Calif., Feb. 1-6), pp. 505-508.

DEKEL, D. 1990. CAMRA Overview Presentation. ISG Technologies, Inc., Mississauga, Ontario, Canada.

DEV, P., WOOD, S., WHITE, D. N., YOUNG, S. W., AND DUNCAN, J. P. 1983. An interactive graphics system for planning reconstructive surgery. In Proceedings of the 4th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '83 (Chicago, Ill., June 26-30), pp. 130-132.

DEV, P., FELLINGHAM, L. L., VASSILIADIS, A., WOOLSON, S. T., WHITE, D. N., AND YOUNG, S. L. 1984. 3D graphics for interactive surgical simulation and implant design. Processing and Display of Three-Dimensional Data II, SPIE 507, (San Diego, Calif.), pp. 52-57.

DEV, P., FELLINGHAM, L. L., WOOD, S. L., AND VASSILIADIS, A. 1985. A medical graphics system for diagnosis and surgical planning. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 602-607.

DIEDE, T., HAGENMAIER, C. F., MIRANKER, G. S., RUBINSTEIN, J. J., AND WORLEY, W. S. JR. 1988. The Titan graphics supercomputer architecture. Computer 21, 9 (Sept.), 13-29.

DIMENSIONAL MEDICINE, INC. 1989. Maxiview Radiology Workstations. 10901 Bren Road East, Minnetonka, Minn.

DIPPE, M., AND SWENSEN, J. 1984. An adaptive subdivision algorithm and parallel architecture for realistic image synthesis. Comput. Graph. 18, 3 (July), 310-319.

DIPPE, M. A. Z., AND WOLD, E. H. 1985. Antialiasing through stochastic sampling. Comput. Graph. 19, 3 (July), 69-78.

DOHERTY, M. F., BJORKLUND, C. M., AND NOGA, M. T. 1986. Split-merge-merge: An enhanced segmentation capability. In Proceedings of the 1986 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Miami Beach, Fla., June 22-26), IEEE Computer Society Press, pp. 325-328.

DREBIN, R. A., CARPENTER, L., AND HANRAHAN, P. 1988. Volume rendering. Comput. Graph. 22, 4 (Aug.), 65-74.

DUFF, T. 1985. Compositing 3-D rendered images. Comput. Graph. 19, 3 (July), 41-44.

EDHOLM, P. R., LEWITT, R. M., LINDHOLM, B., HERMAN, G. T., UDUPA, J. K., CHEN, L. S., MARGASAHAYAM, P. S., AND MEYER, C. R. 1986. Contributions to reconstruction and display techniques in computerized tomography. Tech. Rep. MIPG 110, Dept. of Radiology, Univ. of Pennsylvania.

EDMONDS, I. R. G. 1988. The graphics supercomputer: Bringing visualization to technical computing. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 521-528.

EKOULE, A., PEYRIN, F., AND GOUTTE, R. 1989. Reconstruction and display of biomedical structures by triangulation of 3D surfaces. Sci. Eng. Med. Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 106-113.

ELLSWORTH, D., GOOD, H., AND TEBBS, B. 1990. Distributing display lists on a multicomputer. In Proceedings of the 1990 Symposium on Interactive 3D Graphics (Snowbird, Utah, Mar. 25-28), pp. 147-154.

ENGLAND, N. 1989. Evolution of high performance graphics systems. In Proceedings of Graphics Interface '89 (June), pp. 144-151.

FAN, R. T., TRIVEDI, S. S., FELLINGHAM, L. L., GAMBOA-ALDECO, A., AND HEDGCOCK, M. W. 1987. Soft tissue segmentation and 3D display from computerized tomography and magnetic resonance imaging. Medical Imaging: Part Two, SPIE 767, (Newport Beach, Calif., Feb. 1-6), pp. 494-504.

FARRELL, E. J., AND ZAPPULLA, R. A. 1989. Three-dimensional data visualization and biomedical applications. CRC Critical Rev. Biomed. Eng. 16, 4, 323-363.

FARRELL, E. J., YANG, W. C., AND ZAPPULLA, R. 1985. Animated 3D CT imaging. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 26-30.

FARRELL, E. J., ZAPPULLA, R., AND YANG, W. C. 1984. Color 3-D imaging of normal and pathologic intracranial structures. IEEE Comput. Graph. Appl. 4, 10 (Sept.), 5-17.

FARRELL, E. J., ZAPPULLA, R. A., AND KANTROWITZ, A. 1986a. Planning neurosurgery with interactive 3D computer imaging. In Proceedings of the 5th Conference on Medical Informatics, R. Salamon and B. Blum, Eds., North-Holland, Amsterdam, pp. 726-730.

FARRELL, E. J., ZAPPULLA, R. A., KANTROWITZ, A.,SPIGELMAN, M., YANG, W. C. 1986. 3D dis-play of multiple intracranial structures fortreatment planning. In Proceedings of the 8thAnnual Conference of the lEEE/Engineering inMediczne and Biology Society (Fort Worth,Texas, Nov. 7-10), IEEE Press, pp. 1088-1091.

FARRELL, E. J., ZAPPULLA, R. A., AND SPIGELMAN,M. 1987. Imaging tools for interpretingtwo- and three-dimensional medical data. InProceedings of the 8th Annual Conference and

Exposition of the National Computer Graphics

Assoczatzon, NCGA ’87 (Philadelphia, PA, Mar.22-26), pp. 60-69.

FLYNN, M., MATTESON, R., DICKIE, D., KEYES, J. W.JR,, AND BOOKSTEIN, F. 1983 Requirementsfor the display and analysis of three-dimen-.wonal medical image data. In Proceedings of2nd International Conference and Workshop onPicture Archwmg and Communication Systems( PA CSII) for Mcd~cal Apphcatzons, SPIE 418.(Kansas City, Missouri, May 22-25), pp.213-224.

FOLEY, J. D., VAN DAM, A., FEINESZ, S. K., .4NDHUGHES, J. F. 1990. Fundamentals oflnterac-tive Computer Graphzcs, 2nd Edition Addison-Wesley Publishing Company, Reading, Mass

FRWDER, G., GORDON, D., AND REYNOLDS, R A1985a Back-to-front display of voxel-basedobjects. IEEE Comput. Graph Appl 5, 1(Jan ), 52-60.

FRIE~ER, G., HERMAN, G T., MEYER, C., AND UDUPA,J. 1985b. Large software problems for smallcomputers: An example from medical imaging.IEEE Softw 2, 5 (Sept ), 37-47.

FUCHS, H , KEDEM, Z, M., AND USELTON, S P 1977.Optimal surface reconstruction from planarcontours. Commun. ACM, 20. 10 (Nov.),

693-702.

FUCHS. H . KEDEIVr, Z. M , AND NAYLOR, B. 1979.Predetermining visibility priority in 3-D scenes(prehminary report) Computer Graph. 13, 2

(Aug.), 175-181.

FUCHS, H , KEDE~, Z. M., AND NA~LOR, B F 1980On visible surface generation by a priori treestructures Comput. Graph 14. 2 (Aug ),124-133

FUCHS, H . PJZER, S. M., HEINZ, E R., BLOOMBERG,S. H., TSAI, LI-C., ANn STmcrm%rm, D C 1982a.Design of and image editing with a space-filling three-dimensional display based on astandard raster graphics system. Process/.ngand Display of Three-Dimensional Data, SPIE,367 SPIE, (San Diego, Calif., Aug. 26-27), pp117-127.

FUCHS, H , PIZER, S. M., TSAI, L.-C., BLOOMBERG, SH , AND HEINZ, E. R 1982b. Adding a true3-D display to a raster graphics system IEEEComput. Graph. Appl. 2, 7 (Sept.), 73-78

FUCHS, H., ABRAM, G. D., AND GRANT, E. D 1983Near real-time shaded display of rigid objects.Comput. Graph. 17, 3 (Jul.), 65-69

FUCHS, H , GOLDFEATHER, J., HULTQUIST, J. P.,SPACH, S., AUSTIN, J. D., BROOKS, F. P. JR ,E~LES, J G., AND POULTON, J. 1985. Fastspheres, shadows, textures, transparencies,and image enhancements in pixel-planes.Computer Graph. 29, 3 (Jul.), 111-120.

FUCHS, H., POULTON, J., EYLES, J., AUSTIN, J , AINDGREER, T. 1986. Pixel-planes A parallel ar-chitecture for raster graphics Pixel-Planes

Project Summary, Univ. of North Carolina,Dept of Computer Science

FUCHS, H., POULTON, J,, EYLES J., AND GRK.ER,T1988a, Coarse-gram and fine-grain paral-

lelism m the next generatmn Pixel-Planesgraphics system. TR88-014, The Univ. of NorthCarolina at Chapel Hall, Dept of ComputerScience and Proceedings of the InternationalConference and Exh lb~tion on Parallel Process-ing for Computer Viszon and Dlspla.v (Univer-sit y of Leeds, United Kingdom, Jan. 12-15)

FUCHS, H., POULTON, J., EYL~s, J., GR~ER, T., HILL,E , ISRAEL, L , ELLSWORTH, D., GoorL H.,INTERRANTE, V., MOLNAR, S., N~UMANN, U.,RHOADES, J., TEBBS, B , AND TURIL G. 1988bPixel-Planes: A parallel architecture for rasterg-aphics Pwel-Planes Project Summary, Umv.of North Carolina, Dept of Computer Science.

FUCHS, H., PIZER, S. M., CREASY, J. L., RENNER, J. B., AND ROSENMAN, J. G. 1988c. Interactive, richly cued shaded display of multiple 3D objects in medical images. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 842-849.

FUCHS, H., LEVOY, M., AND PIZER, S. M. 1989a. Interactive visualization of 3D medical data. IEEE Comput. 22, 8 (Aug.), 46-52.

FUCHS, H., LEVOY, M., PIZER, S. M., AND ROSENMAN, J. G. 1989b. Interactive visualization and manipulation of 3-D medical image data. In Proceedings of the 10th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '89 (Philadelphia, Penn., Apr. 17-20), pp. 118-131.

FUCHS, H., POULTON, J., EYLES, J., GREER, T., GOLDFEATHER, J., ELLSWORTH, D., MOLNAR, S., AND TURK, G. 1989c. Pixel-Planes 5: A heterogeneous multiprocessor graphics system using processor-enhanced memories. Comput. Graph. 23, 4 (July), 79-88.

FUJIMOTO, A., AND IWATA, K 1983. Jag-free im-ages on raster displays. IEEE Comput. Graph.Appl. 3, 9 (Sept.), 26-34.

FUJIMOTO, A., TANAKA, T., AND IWATA, K. 1986. ARTS: Accelerated ray-tracing system. IEEE Comput. Graph. Appl. 6, 4 (Apr.), 16-26.

GAMBOA-ALDECO, A., FELLINGHAM, L. L., AND CHEN, G. T. Y. 1986. Correlation of 3D surfaces from multiple modalities in medical imaging. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 467-473.

GAMBOTTO, J. P. 1986. A hierarchical segmentation algorithm. IEEE Eighth International Conference on Pattern Recognition, (Oct.), pp. 951-953.

GARGANTINI, I. 1982. Linear octrees for fast processing of three-dimensional objects. Comput. Graph. Image Process. 20, 4 (Dec.), 365-374.

GARGANTINI, I., WALSH, T. R. S., AND WU, O. L. 1986a. Displaying a voxel-based object via linear octrees. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif.), pp. 460-466.

GARGANTINI, I., WALSH, T. R., AND Wu, O. L. 1986b.Viewing transformations of voxel-based objectsvia linear octrees. IEEE Comput. Graph. Appl.6, 10 (Oct.), 12-21.

GEIST, D., AND VANNIER, M. W. 1989. PC-based 3-D reconstruction of medical images. Comput. Graph. 13, 2 (Sept.), 135-143.

GELBERG, L. M., KAMINS, D., AND VROOM, J. 1989a. VEX: A volume exploratorium: An integrated toolkit for interactive volume visualization. In Proceedings of the Chapel Hill Conference on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 21-26.

GELBERG, L. M., MACMANN, J. F., AND MATHIAS, C. J. 1989b. Graphics, image processing, and the Stellar graphics supercomputer. Imaging Workstations and Document Input Systems, SPIE 1074, (Los Angeles, Calif., Jan. 12-20), pp. 89-96.

GEMBALLA, R., AND LINDNER, R. 1982. The multi-ple-write bus technique. IEEE Comput. Graph.Appl. 2, 7 (Sept.), 33-41.

GIBSON, C. J. 1989. Rapid three-dimensional display of medical data using ordered surface list representations. Science and Engineering of Medical Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 114-119.

GLASSNER, A. S. 1984. Space subdivision for fastray tracing. IEEE Comput. Graph. Appl. 4, 10

(Oct.), 15-22.

GLASSNER, A. S. 1988. Spacetime ray tracing foranimation. IEEE Comput. Graph. Appl. 8, 2(Mar.), 60-70.

GLASSNER, A. S., Ed. 1989. An Introduction toRay Tracing. Academic Press, London, Eng-land.

GOLDFEATHER,J., AND FUCHS, H. 1986. Quadraticsurface rendering on a logic-enhanced frame-buffer memory. IEEE Comput. Graph. Appl.6, 1 (Jan.), 48-59.

GOLDWASSER, S. M. 1984a. A generalized object display processor architecture. In Proceedings of the 11th Annual International Symposium on Computer Architecture (Ann Arbor, Mich., May), IEEE Computer Society Press, pp. 38-47.

GOLDWASSER, S. M. 1984b. A generalized object display processor architecture. IEEE Comput. Graph. Appl. 4, 10 (Oct.), 43-55.

GOLDWASSER, S. M. 1985. The voxel processor architecture for real-time display and manipulation of 3D medical objects. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., Apr. 14-18), pp. 71-80.

GOLDWASSER, S. M. 1986. Rapid techniques for the display and manipulation of 3-D biomedical data. In Proceedings of the 7th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '86 (Anaheim, Calif., May 11-15), pp. 115-149.

GOLDWASSER, S. M., AND REYNOLDS, R. A. 1983. An architecture for the real-time display and manipulation of three-dimensional objects. In Proceedings of the International Conference on Parallel Processing (Bellaire, Mich., Aug.), IEEE Computer Society Press, pp. 269-274.

GOLDWASSER, S. M., AND REYNOLDS, R. A. 1987. Real-time display and manipulation of 3-D medical objects: The voxel processor architecture. Comput. Vision, Graph., Image Process. 39, 1-27.

GOLDWASSER, S. M,, REYNOLDS, R. A., BAPTY, T.,BARAFF, D., SUMMERS, J., TALTON, D. A., ANDWALSH, E. 1985. Physician’s workstation withreal-time performance. IEEE Comput. GraphAppl. 5, 12 (Dec.), 44-57.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D., AND WALSH, E. 1986. Real-time interactive facilities associated with a 3-D medical workstation. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 491-503.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988a. High-performance graphics processors for medical imaging applications. In Proceedings of the International Conference on Parallel Processing for Computer Vision and Display (University of Leeds, United Kingdom, Jan. 12-15), pp. 1-13.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988b. Techniques for the rapid display and manipulation of 3-D biomedical data. Comput. Med. Imaging Graph. 12, 1 (Jan.-Feb.), 1-24.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1989. High performance graphics processors for medical imaging applications. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.). Addison-Wesley, Workingham, England.


GONZALEZ, R. C., AND WINTZ, P. 1987. Digital Image Processing, 2nd edition. Addison-Wesley, Reading, Mass.

GORDON, D., AND REYNOLDS, R. A. 1983. Image space shading of three-dimensional objects. Tech. Rep. MIPG 85, Dept. of Radiology, Univ. of Pennsylvania.

GORDON, D., AND REYNOLDS, R. A. 1985. Image space shading of 3-dimensional objects. Comput. Vision, Graph., Image Process. 29, 361-376.

GORDON,D, AND UDUPA, J. K. 1989. Fast surfacetracking in three-dimensional binary imagesComput. Vision, Graph , Image Process 45, 2(Feb.), 196-214

GORDON, M., AND DELOID, G. 1990a. Advanced 3D image reconstruction from serial slices. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), pp. 176-183.

GORDON, M., AND DELOID, G. 1990b. Surface and volumetric 3-D reconstructions on an engineering workstation. In SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 533-537.

GOURAUD, H. 1971. Continuous shading of curved surfaces. IEEE Trans. Comput. C-20, 6 (June), 623-629.

GUDMUNDSSON, B., AND RANDEN, M. 1990. Incremental generation of projections of CT-volumes. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 27-34.

HARRIS, L. D. 1988. A varifocal mirror display integrated into a high-speed image processor. Three-Dimensional Imaging and Remote Sensing Imaging, SPIE 902, (Los Angeles, Calif., Jan. 14-15), pp. 2-9.

HARRIS, L. D., ROBB, R. A., YUEN, T. S., AND RITMAN, E. L. 1978. Noninvasive numerical dissection and display of anatomic structure using computerized X-ray tomography. Recent and Future Developments in Medical Imaging, SPIE 152, pp. 10-18.

HARRIS, L. D., ROBB, R. A., YUEN, T. S., AND RITMAN, E. L. 1979. Display and visualization of three-dimensional reconstructed anatomic morphology: Experience with the thorax, heart, and coronary vasculature of dogs. J. Comput. Assisted Tomograph. 3, 4 (Aug.), 439-446.

HARRIS, L. D., CAMP, J. J., RITMAN, E. L., ANDROBB, R. A. 1986, Three-dimensional displayand analysls of tomographic volume imagesutilizing a varifocal mirror, IEEE Trans. Med.Imag. MI-5, 2 (June), 67-72.

HEFFERMAN, P. B., AND ROBB, R. A. 1985a. A new method for shaded surface display of biological and medical images. IEEE Trans. Med. Imag. MI-4, 1 (Mar.), 26-38.

HEFFERMAN, P. B., AND ROBB, R. A. 1985b. Display and analysis of 4-D medical images. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 583-592.

HEMMY, D. C., AND LINDQUIST, T. R. 1987. Optimizing 3-D imaging techniques to meet clinical requirements. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 69-80.

HERMAN, G. T. 1985. Computer graphics in radiology. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 539-550.

HERMAN, G. T. 1986. Computerized reconstruction and 3-D imaging in medicine. Technical Report MIPG 108, Dept. of Radiology, Univ. of Pennsylvania.

HERMAN, G. T., AND ABBOTT, A. H. 1989. Reproducibility of landmark locations on CT-based three-dimensional images. In Proceedings of the 10th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '89 (Philadelphia, Penn., Apr. 17-20), pp. 144-148.

HERMAN, G. T., AND COIN, C. G. 1980. The use of three-dimensional computer display in the study of disk disease. J. Comput. Assisted Tomograph. 4, 4 (Aug.), 564-567.

HERMAN, G. T., AND LIU, H. K. 1977. Display of three-dimensional information in computed tomography. J. Comput. Assisted Tomograph. 1, 1, 155-160.

HERMAN, G. T., AND LIU, H. K. 1978. Dynamic boundary surface detection. Comput. Graph. Image Process. 7, 130-138.

HERMAN, G. T., AND LIU, H. K. 1979. Three-dimensional display of human organs from computed tomograms. Comput. Graph. Image Process. 9, 1-21.

HERMAN, G. T., AND WEBSTER, D. 1983. A topological proof of a surface tracking algorithm. Comput. Vision, Graph., Image Process. 23, 162-177.

HERMAN, G. T., UDUPA, J. K., KRAMER, D., LAUTERBUR, P. C., RUDIN, A. M., AND SCHNEIDER, J. S. 1982. Three-dimensional display of nuclear magnetic resonance images. Optical Eng. 21, 5 (Sept.-Oct.), 923-926.

HERMAN, G. T., ROBERTS, D., AND RABE, B. 1987. The reduction of pseudoforamina (false holes) in computer graphic presentations for craniofacial surgery. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 81-85.


HEYERS, V., MEINZER, H. P., SAURBIER, F., SCHEPPELMANN, D., AND SCHAFER, R. 1989. Minimizing the data preparation for 3D visualization of medical image sequences. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 743-746.

HILTEBRAND, E. G. 1989. Hardware architecture with transputers for fast manipulation of volume data. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.). Addison-Wesley, Reading, Mass., pp. 497-503.

HODGES, L, F., AND MCALLISTER, D. F. 1985.Stereo and alternating-pair techniques for dis-play of computer-generated images. IEEEComput, Graph. Appl. 5, 9 (Sept.), 38-45.

HOFFMEISTER, J. W., RINEHART, G. C., AND VAN-NIER, M. W. 1989. Three-dimensional surfacereconstructions using a general purpose imageprocessing system. In Proceedings of the Inter-national Sympos~um: CAR ’89, ComputerAsszsted Radzology (Berlin, Germany), Com-puter Assisted Radiology Press, pp. 752-757.

HOHNE, K. H., AND BERNSTEIN, R. 1986 Shading3D images from CT using gray-level gradients.IEEE Trans. Med. Imag. MI-5, 1 (Mar.),45-47.

HOHNE, K.-H., BOMANS, M., TIEDE, U., AND RIEMER, M. 1988. Display of multiple 3D objects using the generalized voxel-model. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 850-854.

HU, X., TAN, K. K., LEVIN, D. N., GALHOTRA, S. G., PELIZZARI, C. A., CHEN, G. T. Y., BECK, R. N., CHEN, C.-T., AND COOPER, M. D. 1989. Volumetric rendering of multimodality, multivariable medical imaging data. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 45-49.

HUMMEL, R. A. 1975. Histogram modification techniques. Comput. Graph. Image Process. 4, 10, 209-224.

ISG TECHNOLOGIES, INC. 1990a. CAMRA Allegro: A Technical Overview. ISG Technologies, Inc., Mississauga, Ontario, Canada.

ISG TECHNOLOGIES, INC. 1990b. CAMRA Allegro: A Full-Solution 2D/3D Workstation. ISG Technologies, Inc., Mississauga, Ontario, Canada.

JACKEL, D. 1985. The graphics Parcum system: A 3D memory based computer architecture for processing and display of solid models. Comput. Graph. Forum 4, 21-32.

JACKINS, C. L., AND TANIMOTO, S. L. 1980. Oct-trees and their use in representing three-dimensional objects. Comput. Graph. ImageProcess. 14, 249-270.

JAMAN, K. A., GORDON, R., AND RANGAYYAN, R. M. 1985. Display of 3D anisotropic images from limited-view computed tomograms. Comput. Vision, Graph., Image Process. 30, 3 (June), 345-361.

JENSE, G. J., AND HUIJSMANS, D. P. 1989. Interactive voxel-based graphics for 3D reconstruction of biological structures. Comput. Graph. 13, 2, 145-150.

JOHNSON, E. R., AND MOSHER, C. E. JR. 1989. Integration of volume rendering and geometric graphics. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 1-7.

JOHNSON, S. A., AND ANDERSON, R. A. 1982. Clinical varifocal mirror display system at the University of Utah. Processing and Display of Three-Dimensional Data, SPIE 367, (San Diego, Calif., Aug. 26-27), pp. 145-148.

KALVIN, A., AND PELEG, S. 1989. A 3D multiresolution segmentation algorithm for surface reconstruction from CT data. Medical Imaging III: Image Processing, SPIE 1092, (Newport Beach, Calif., Jan. 31-Feb. 3), pp. 173-182.

KAPLAN, M. R. 1987. The use of spatial coherence in ray tracing. In Techniques for Computer Graphics, D. F. Rogers and R. A. Earnshaw (Eds.). Springer-Verlag, New York, pp. 173-190.

KAUFMAN, A. 1987. Efficient algorithms for 3D scan-conversion of parametric curves, surfaces, and volumes. Comput. Graph. 21, 4 (July), 171-179.

KAUFMAN, A. 1988a. Efficient algorithms forscan-converting 3D polygons. Comput. Graph.12, 2, 213-219.

KAUFMAN, A. 1988b. The CUBE workstation: A 3-D voxel-based graphics environment. Visual Comput. 4, 4 (Oct.), 210-221.

KAUFMAN, A. 1988c. The CUBE three-dimensional workstation. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 344-354.

KAUFMAN, A. 1989. The voxblt engine: A voxel frame buffer processor. In Advances in Graphics Hardware III, Kuijk, F., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.

KAUFMAN, A., AND BAKALASH, R. 1985. A 3-D cellular frame buffer. In Proceedings of EUROGRAPHICS '85, Vandoni, C. E. (ed.), Elsevier Science Publishers B.V. (North-Holland), Amsterdam, The Netherlands, pp. 215-220.

KAUFMAN, A., AND BAKALASH, R 1988. Memoryand processing architecture for 3D voxel-basedimagery. IEEE Comput. Graph. Appl 8, 6(Nov.), 10-23

KAUFMAN, A., AND BAKALASH, R. 1989a. Parallel processing for 3D voxel-based graphics. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.). Addison-Wesley, Workingham, England.

KAUFMAN, A., AND BAKALASH, R. 1989b. The CUBE system as a 3D medical workstation. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 189-194.

KAUFMAN, A., AND BAKALASH, R. 1990. Viewing and rendering processor for a volume visualization system (extended abstract). In Advances in Graphics Hardware IV, Grimsdale, R. L., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.

KAUFMAN, A., AND SHIMONY, E. 1986. 3D scan-conversion algorithms for voxel-based graphics. In Proceedings of the ACM Workshop on Interactive 3D Graphics (Chapel Hill, N.C., Oct. 23-24), pp. 45-75.

KLEIN, F., AND KUEBLER, O. 1985. A prebuffer algorithm for instant display of volume data. Architectures and Algorithms for Digital Image Processing, SPIE 596, (Cannes, France, Dec. 5-6), pp. 54-58.

KLIEGIS, U,, NEUMANN, R., KORTMANN, TII,, SCHWE-SI~, W., MITTELSTADT, R., WMGEL, H., ANDZEN~ER, W. 1989. Fast three dimensional vi-sualization using a parallel computmg system,In Proceedings of the International Symposium:

CAR ’89, Computer Assisted Radiology (Berlin,Germany), Computer Assisted Radiology Press,pp 747-751.

KRAMER, D. M., KAUFMAN, L., GUZMAN, R. J., AND HAWRYSZKO, C. 1990. A general algorithm for oblique image reconstruction. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 62-65.

KUNII, T. L., SATOH, T., AND YAMAGUCHI, K. 1985. Generation of topological boundary representations from octree encoding. IEEE Comput. Graph. Appl. 5, 3 (Mar.), 29-38.

LANE, B. 1982. Stereoscopic displays. Processing and Display of Three-Dimensional Data, SPIE 367, (San Diego, Calif., Aug. 26-27), pp. 20-32.

LANG, P., STEIGER, P., AND LINDQUIST, T. 1990. Three-dimensional reconstruction of magnetic resonance images: Comparison of surface and volume rendering techniques. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 520-526.

LEE, M, E., AND REDNER, R. A 1990. A note onthe use of nonlinear filtering m computergraph~cs. IEEE Comput. Graph. Appl. 10, 3(May), 23-29.

LEMKE, H U , BOSING, K., ENGELHORN, M,, JACKEL,D., KNOBLOCH, B., SCHARNWEBER, H., STIEHL,H. S., AND TONNIES, K. D. 1987. 3-D com-puter graphic work station for biomedical in-formation modeling and display. MedicalZmaging, SPIE 767, (Newport Beach, Calif.,Feb. 1-6), pp. 586-592.

LENZ, R., GUDMUNDSSON, B., AND LINDSKOG, B. 1985. Manipulation and display of 3-D images for surgical planning. Medical Image Processing, SPIE 593, (Cannes, France, Dec. 2-3), pp. 96-102.

LENZ, R., GUDMUNDSSON, B., LINDSKOG, B., AND DANIELSSON, P. E. 1986. Display of density volumes. IEEE Comput. Graph. Appl. 6, 7 (July), 20-29.

LEVINTHAL, A., AND PORTER, T. 1984. Chap: A SIMD graphics processor. Comput. Graph. 18, 3 (July), 77-82.

LEVOY, M. 1988a. Display of surfaces from volume data. IEEE Comput. Graph. Appl. 8, 5 (May), 29-37.

LEVOY, M. 1988b. Direct visualization of surfaces from computed tomography data. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 828-841.

LEVOY, M. 1989a. Display of Surfaces from Volume Data. Ph.D. Dissertation, Dept. of Computer Science, University of North Carolina at Chapel Hill.

LEVOY, M. 1989b. Design for a real-time high-quality volume rendering workstation. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C.), Univ. of North Carolina Press, pp. 85-92.

LEVOY, M. 1990a. A hybrid ray tracer for rendering polygon and volume data. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 33-40.

LEVOY, M. 1990b. Efficient ray tracing of volume data. ACM Trans. Graph. 9, 3 (July), 245-261.

LEVOY, M., FUCHS, H., PIZER, S. M., ROSENMAN, J., CHANEY, E. L., SHEROUSE, G. W., INTERRANTE, V., AND KIEL, J. 1990. Volume rendering in radiation treatment planning. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 4-8.

LIN, W.-C., AND CHEN, S.-Y. 1989. A new surface interpolation technique for reconstructing 3D objects from serial cross-sections. Comput. Vision, Graph., Image Process. 48, 1 (Oct.), 124-143.

LIN, W. C., LIANG, C. C., AND CHEN, C. T. 1988. Dynamic elastic interpolation for 3-D object reconstruction from serial cross-sectional images. IEEE Trans. Med. Imag. 7, 3 (Sept.), 225-232.

LINDQUIST, T. R. 1990. Personal communication with authors. Dimensional Medicine, Inc., 10901 Bren Road East, Minnetonka, MN. December, 1990.

LIPSCOMB, J. S. 1989. Experience with stereoscopic display devices and output algorithms. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 28-34.

LIU, H. K. 1977. Two- and three-dimensional boundary detection. Comput. Graph. Image Process. 6, 123-134.


LORENSEN, W. E., AND CLINE, H. E. 1987. Marching cubes: A high resolution 3D surface construction algorithm. Comput. Graph. 21, 4 (July), 163-169.

LOW, K.-C., AND COGGINS, J. M. 1990. Biomedical image segmentation using multiscale orientation fields. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 378-384.

MAMMEN, A. 1989. Transparency and antialiasing algorithms implemented with the virtual pixel maps technique. IEEE Comput. Graph. Appl. 9, 4 (July), 43-55.

MAO, X., KUNII, T. L., FUJISHIRO, I., AND NOMA, T. 1987. Hierarchical representations of 2D/3D gray-scale images and their 2D/3D two-way conversion. IEEE Comput. Graph. Appl. 7, 12 (Dec.), 37-44.

MAX, N, L 1990. Antialiasing scan-line data.IEEE Comput Graph. Appl. 10, 1 (Jan.),18-30.

MEAGHER, D. 1982a. Geometric modeling usingoctree encoding. Comput Graph. Image Pro-cess. 19, 129-147.

MEAGHER, D. 1982b. Efficient synthetic image generation of arbitrary 3D objects. In Proceedings of the 1982 IEEE Computer Society Conference on Pattern Recognition and Image Processing (Las Vegas, Nev., June 14-17), IEEE Computer Society Press, pp. 473-478.

MEAGHER, D. J. 1985. Applying solids processing methods to medical planning. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., May 14-18), pp. 101-109.

MILLS, P. H., FUCHS, H., AND PIZER, S. M. 1984.High-speed interaction on a vibrating-mirror3D display. Processing and Display of Three-Dimensional Data II, SPIE 507, (San Diego,Calif., Aug. 23-24), pp. 93-101.

MITCHELL, D. P. 1987. Generating antialiased images at low sampling densities. Comput. Graph. 21, 4 (July), 65-72.

MORRISSEY, T. F. 1990. Programmers hierarchical interactive graphics system PHIGS et al. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), pp. 666-690.

MOSHER, C. E., AND VAN HOOK, T. 1990. A geo-metric approach for rendering volume data. InProceedings of the 11th Annual Conference andExposition of the National Computer GraphicsAssociation, NCGA ’90 (Anaheim, Calif.,March 19-22), pp. 184-193.

NAZIF, A. M., AND LEVINE, M. D. 1984. Low level image segmentation: An expert system. IEEE Trans. Pattern Analysis Machine Intelligence PAMI-6, 5 (Sept.), 555-577.

NELSON, A. C., KIM, Y., HARALICK, R. M., ANDERSON, P. A., JOHNSON, R. H., AND DESOTO, L. A. 1988. 3-D display of magnetic resonance imaging of the spine. Three-Dimensional Imaging and Remote Sensing Imaging, SPIE 902, (Los Angeles, Calif., Jan. 14-15), pp. 103-112.

NEY, D., FISHMAN, E. K., AND MAGID, D. 1990a. Three dimensional imaging of computed tomography: Principles and applications. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 498-506.

NEY, D. R., FISHMAN, E. K., MAGID, D., AND DREBIN, R. A. 1990b. Volumetric rendering of computed tomography data: Principles and techniques. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 24-32.

OHASHI, T., UCHIKI, T., AND TOKORO, M. 1985. A three-dimensional shaded display method for voxel-based representation. In Proceedings of EUROGRAPHICS '85, Vandoni, C. E. (ed.), Elsevier Science Publishers B.V. (North-Holland), Amsterdam, The Netherlands, pp. 221-232.

PELIZZARI, C. A., CHEN, G. T. Y., SPELBRING, D. R., WEICHSELBAUM, R. R., AND CHEN, C.-T. 1989. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J. Comput. Assist. Tomograph. 13, 1, 20-26.

PHONG, B. T. 1975. Illumination for computer generated graphics. Commun. ACM 18, 6 (June), 311-317.

PIXAR CORPORATION. 1988a. Chap Libraries Technical Summary for the UNIX Operating System.

PIXAR CORPORATION. 1988b. Chap Tools Technical Summary for the UNIX Operating System.

PIXAR CORPORATION. 1988c. Pixar II Hardware Overview.

PIXAR CORPORATION. 1988d. Chap Image Processing System Technical Summary.

PIXAR CORPORATION. 1989. Chap Volumes Volume Rendering Package Technical Summary.

PIZER, S. M., AND FUCHS, H. 1987. Three dimensional image presentation techniques in medical imaging. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 589-598.

PIZER, S. M., FUCHS, H., HEINZ, E. R., STAAB, E. V., CHANEY, E. L., ROSENMAN, J. G., AUSTIN, J. D., BLOOMBERG, S. H., MACHARDY, E. T., MILLS, P. H., AND STRICKLAND, D. C. 1983. Interactive 3D display of medical images. In Proceedings of the 8th Conference on Information Processing in Medical Imaging (Brussels, Belgium, Aug. 29-Sept. 2), pp. 513-526.

PIZER, S. M., ZIMMERMAN, J. B., AND STAAB, E. V. 1984. Adaptive grey level assignment in CT scan display. J. Comput. Assist. Tomograph. 8, 2 (Apr.), 300-305.

PIZER, S. M., AUSTIN, J. D., PERRY, J. R., SAFRIT, H. D., AND ZIMMERMAN, J. B. 1986. Adaptive histogram equalization for automatic contrast enhancement of medical images. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 242-250.

PIZER, S. M., AMBURN, E. P., AUSTIN, J. D., CROMARTIE, R., GESELOWITZ, A., GREER, T., ROMENY, B. TER H., ZIMMERMAN, J. B., AND ZUIDERVELD, K. 1987. Adaptive histogram equalization and its variations. Comput. Vision, Graph., Image Process. 39, 3 (Sept.), 355-368.

PIZER, S. M., GAUCH, J. M., AND LIFSHITZ, L. M. 1988. Interactive 2D and 3D object definition in medical images based on multiresolution image descriptions. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 438-444.

PIZER, S. M., LEVOY, M., FUCHS, H., AND ROSENMAN, J. G. 1989a. Volume rendering for display of multiple organs, treatment objects, and image intensities. Science and Engineering of Medical Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 92-97.

PIZER, S. M., FUCHS, H., LEVOY, M., ROSENMAN, J. G., DAVIS, R. E., AND RENNER, J. B. 1989b. 3D display with minimal predefinition. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 724-736.

PIZER, S. M., GAUCH, J. M., CULLIP, T. J., AND FREDERICKSON, R. E. 1990a. Descriptions of image intensity structure via scale and symmetry. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 94-101.

PIZER, S. M., JOHNSTON, R. E., ERICKSEN, J. P., YANKASKAS, B. C., AND MULLER, K. E. 1990b. Contrast-limited adaptive histogram equalization: Speed and effectiveness. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 337-345.

POMMERT, A., TIEDE, U., WIEBECKE, G., AND HOHNE, K. H. 1989. Image quality in voxel-based surface shading. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 737-741.

POMMERT, A., TIEDE, U., WIEBECKE, G., AND HOHNE, K. H. 1990. Surface shading in tomographic volume visualization: A comparative study. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 19-26.

PORTER, T., AND DUFF, T. 1984. Compositing digital images. Comput. Graph. 18, 3 (July), 253-259.

POTMESIL, M., AND HOFFERT, E. M. 1989. The pixel machine: A parallel image computer. Comput. Graph. 23, 3 (July), 69-78.

POULTON, J., FUCHS, H., AUSTIN, J., EYLES, J., AND GREER, T. 1987. Building a 512 x 512 Pixel-Planes system. 1987 Conference on Advanced Research in VLSI, Stanford University, Palo Alto, Calif., preprint.

RAMAN, S. V., SARKAR, S., AND BOYER, K. L. 1990. Tissue boundary refinement in magnetic resonance images using contour-based scale space matching. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 618-624.

RAYA, S. P. 1989a. Low-level segmentation of 3-D magnetic resonance brain images: An expert system. Visual Communications and Image Processing IV, SPIE 1199, (Philadelphia, Penn., Nov. 8-10), pp. 913-919.

RAYA, S. P. 1989b. Rule-based segmentation of multi-dimensional medical images. In Proceedings of the 10th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '89 (Philadelphia, Penn., Apr. 17-20), pp. 193-198.

RAYA, S. P. 1990a. Low-level segmentation of 3-D magnetic resonance brain images: A rule based system. IEEE Trans. Med. Imag. 9, 3 (Sept.), 327-337.

RAYA, S. P. 1990b. SOFTVU: A software package for multidimensional medical image analysis. Medical Imaging IV: Image Capture and Display, SPIE 1232, (Newport Beach, Calif., Feb. 4-5), pp. 162-166.

RAYA, S. P., AND UDUPA, J. K. 1990. Shape-based interpolation of multidimensional objects. IEEE Trans. Med. Imag. 9, 1 (Mar.), 32-42.

RAYA, S. P., UDUPA, J. K., AND BARRETT, W. A. 1990. A PC-based 3D imaging system: Algorithms, software, and hardware considerations. Comput. Med. Imag. Graph. 14, 5, 353-370.

REALITY IMAGING CORPORATION. 1990a. Voxel Flinger 3-D Imaging System OEM Integration Guide. Reality Imaging Corporation.

REALITY IMAGING CORPORATION. 1990b. Voxel Flinger System Overview. Reality Imaging Corporation.

REYNOLDS, R. A. 1983a. Simulation of 3D display systems. Dept. of Radiology, University of Pennsylvania, Unpublished report.

REYNOLDS, R. A. 1983b. Some architectures for real-time display of three-dimensional objects: A comparative survey. Tech. Rep. MIPG 84, Dept. of Radiology, Univ. of Pennsylvania.

REYNOLDS, R. A. 1985. Fast Methods for 3D Display of Medical Objects. Ph.D. Dissertation, Dept. of Computer and Information Science, Univ. of Pennsylvania.

REYNOLDS, R. A., GORDON, D., AND CHEN, L.-S. 1987. A dynamic screen technique for shaded graphics display of slice-represented objects. Comput. Vision, Graph., Image Process. 38, 3 (June), 275-298.


REYNOLDS, R. A., GOLDWASSER, S. M., DONHAM, M. E., WYATT, E. D., PETERSON, M. A., WIDERMAN, L. A., TALTON, D. A., AND WALSH, E. S. 1990. VOXELSCOPE: A system for the exploration of multidimensional medical data. Nordic Workshop on Visualization and Analysis of Scientific and Technical Computations on High-Performance Computers (Umea, Sweden, Sept. 25-28), preprint.

RHODES, M. L, 1979. An algorithmic approach tocontrolling search in three-dimensional imagedata. Comput. Graph. 13, 2 (Aug.), 134-141.

RHODES, M. L., GLENN, W. V., AND AZZAWI, Y. M. 1980. Extracting oblique planes from serial CT sections. J. Comput. Assist. Tomograph. 4, 5 (Oct.), 649-657.

RICHARDS, J. A. 1986. Remote Sensing and Digital Image Analysis: An Introduction. Springer-Verlag, Berlin, Germany.

ROBB, R. A. 1985. Three-Dimensional Biomedical Imaging, vol. 1. CRC Press, Boca Raton, Fla.

ROBB, R. A. 1987. A workstation for interactive display and analysis of multidimensional biomedical images. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 642-656.

ROBB, R. A., AND BARILLOT, C. 1988. Interactive 3-D image display and analysis. In Proceedings of the Society of Photo-Optical Instrumentation Engineers: Hybrid Image and Signal Processing, 939, D. P. Casasent and A. G. Tescher, Eds., (Orlando, Fla.), pp. 173-202.

ROBB, R. A., AND BARILLOT, C, 1989, Interactivedisplay and analysis of 3-D medical Images.IEEE Trans. Med. Imag, 8, 3 (Sept.), 217-226.

ROBB, R. A., AND HANSON, D. P. 1990. ANA-LYZE: A software system for biomedical imageanalysis. In Proceedings 1st Conference on Vi-sualization in Biomedical Computing (Atlanta,Ga., May 22-25), IEEE Computer SocietyPress, pp. 507-518.

ROBB, R. A., HEFFERMAN, P. B., CAMP, J. J., AND HANSON, D. P. 1986. A workstation for interactive display and quantitative analysis of 3D and 4D biomedical images. In Proceedings of the 10th Annual Symposium on Computer Applications in Medical Care (Washington, D.C., Oct. 25-26), pp. 240-256.

ROBERTSON, B. 1986. PIXAR goes commercial in a new market. Comput. Graph. World 9, 6 (June), 61-70.

ROESE, J. A. 1984. Stereoscopic electro-optic shutter CRT displays: A basic approach. Processing and Display of Three-Dimensional Data II, SPIE 507, (San Diego, Calif., Aug. 23-24), pp. 102-107.

ROGERS, D. F. 1985. Procedural Elements for Computer Graphics. McGraw-Hill, New York.

ROUNDS, E. M., AND SUTTY, G. 1979. Segmentation based on second-order statistics. Image Understanding Systems II, SPIE 205, pp. 126-132.

SAMET, H. 1989. Neighbor finding in images represented by octrees. Comput. Vision, Graph., Image Process. 46, 3 (June), 367-386.

SAMET, H. 1990. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, Mass.

SCHIERS, C., TIEDE, U., AND HOHNE, K. H. 1989. Interactive 3D registration of image volumes from different sources. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 666-670.

SEITZ, P., AND RUEGSEGGER, P. 1983. Fast contour detection algorithm for high precision quantitative CT. IEEE Trans. Med. Imag. MI-2, 3 (Sept.), 136-141.

SHANTZ, M. 1981. Surface definition for branching, contour defined objects. Comput. Graph. 15, 2 (July), 242-270.

SHILE, P. E., CHWIALKOWSKI, M. P., PFEIFER, D., PARKEY, R. W., AND PESHOCK, R. M. 1989. Automated identification of the spine in magnetic resonance images: A reference point for automated processing. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 678-690.

SPORER, M., MOSS, F. H., AND MATHIAS, C. J. 1988. An introduction to the architecture of the Stellar graphics supercomputer. In Proceedings of the 33rd IEEE Computer Society International Conference (San Francisco, Calif., Feb. 29-Mar. 4), pp. 464-467.

SPRINGER, D. 1986. PIXAR image computer revolutionizes industrial design. Intell. Instrum. Comput. (Nov./Dec.), 276-277.

SRIHARI, S. N. 1981. Representation of three-dimensional digital images. Comput. Surv. 13, 4 (Dec.), 399-424.

STANSFIELD, S. A. 1986. ANGY: A rule-based expert system for automatic segmentation of coronary vessels from digital subtracted angiograms. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 2 (Mar.), 188-199.

ST~DIN~, T. L. 1989. A new-generation imagingplatform with integrated pixel-polygon process-ing: The Titan graphics supercomputer. Zm ag

ing Workstations and Document Input Systems,SPIE 1074, (Los Angeles, Calif, Jan. 17-20),pp 42-45.

STEIGER, P. 1988. Computer graphics for highly reproducible quantitative medical image processing. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 178-187.

STELLAR COMPUTER, INC. 1989. GS2000 Series Backgrounder.

STOVER, H. 1982. True three-dimensional display of computer data. Processing and Display of Three-Dimensional Data, SPIE 367, (San Diego, Calif., Aug. 26-27), pp. 141-144.

STOVER, H., AND FLETCHER, J. 1983. True three-dimensional display of computer generated images. Three-Dimensional Imaging, SPIE 402, (Geneva, Switzerland, Apr. 21-22), pp. 166-169.

STYTZ, M. R. 1989. Three-dimensional medical image analysis using local dynamic algorithm selection on a multiple-instruction, multiple-data architecture. Ph.D. Dissertation, Dept. of Electrical Engineering and Computer Science, Univ. of Michigan.

STYTZ, M. R., AND FRIEDER, O. 1991. Computer systems for three-dimensional diagnostic imaging: An examination of the state of the art. CRC Crit. Rev. Biomed. Eng. 19, 1 (Aug.), 1-46.

STYTZ, M. R., AND FRIEDER, O. 1990. Three-dimensional medical imaging modalities: An overview. CRC Crit. Rev. Biomed. Eng. 18, 1 (July), 1-26.

SUETENS, P., OOSTERLINCK, A., GYBELS, J., VANDERMEULEN, D., AND MARCHAL, G. 1987. A pseudo-holographic display system for integrated 3-D medical images. Medical Imaging I, SPIE 767, (Newport Beach, Calif., Feb. 1-6), pp. 606-613.

SUN MICROSYSTEMS, INC. 1988a. TAAC-1 Application Accelerator Technical Notes.

SUN MICROSYSTEMS, INC. 1988b. Sun's Software Overview.

SUN MICROSYSTEMS, INC. 1988c. Sun's TAAC-1 Application Accelerator.

TAMMINEN, M., AND SAMET, H. 1984. Efficient octree conversion by connectivity labeling. Comput. Graph. 18, 3 (July), 43-51.

TALTON, D. A., GOLDWASSER, S. M., REYNOLDS, R. A., AND WALSH, E. S. 1987. Volume rendering algorithms for the presentation of 3D medical data. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 119-128.

TIEDE, U., HOHNE, K. H., AND RIEMER, M. 1987. Comparison of surface rendering techniques for 3-D tomographic objects. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 599-610.

TIEDE, U., RIEMER, M., BOMANS, M., AND HOHNE, K. H. 1988. Display techniques for 3-D tomographic volume data. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 188-197.

TIEDE, U., HOEHNE, K. H., BOMANS, M., POMMERT, A., RIEMER, M., AND WIEBECKE, G. 1990. Investigation of medical 3D-rendering algorithms. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 41-53.

TOENNIES, K. D. 1989. Surface triangulation by linear interpolation in intersecting planes. Science and Engineering of Medical Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 98-105.

TOENNIES, K. D. 1990. Automatic contour generation in binary scenes. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 633-639.

TOENNIES, K. D., HERMAN, G. T., AND UDUPA, J. K. 1989. Surface registration for the segmentation of implanted bone grafts. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 381-385.

TOENNIES, K. D., UDUPA, J. K., HERMAN, G. T., WORNOM, I. L. III, AND BUCHMAN, S. R. 1990. Registration of 3D objects and surfaces. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 52-62.

TRIVEDI, S. S., HERMAN, G. T., AND UDUPA, J. K. 1986. Segmentation into three classes using gradients. IEEE Trans. Med. Imaging MI-5, 2 (June), 116-119.

TRONNIER, U., HASENBRINK, F., AND WITTICH, A. 1990. An experimental environment for true 3D surgical planning by simulation: Methods and first experimental results. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 594-601.

TUOMENOKSA, D. L., ADAMS, G. B. III, SIEGEL, H. J., AND MITCHELL, O. R. 1983. A parallel algorithm for contour extraction: Advantages and architectural implications. In Proceedings of the 1983 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Washington, D.C., June 18-23), IEEE Computer Society Press, pp. 336-344.

TUY, H. K., AND TUY, L. T. 1984. Direct 2-D display of 3-D objects. IEEE Comput. Graph. Appl. 4, 10 (Oct.), 29-33.

UDUPA, J. K. 1982. Interactive segmentation and boundary surface formation for 3-D digital images. Comput. Graph. Image Process. 18, 213-235.

UDUPA, J. K. 1985. Display and analysis of 3D medical images using directed contours. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., May 14-18), pp. 145-155.

UDUPA, J. K. 1988. Computer graphics in surgical planning. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 67-77.


UDUPA, J. K. 1989. Computer aspects of 3D imaging in medicine: A tutorial. Tech. Rep. MIPG 153, Dept. of Radiology, Univ. of Pennsylvania.

UDUPA, J. K., AND AJJANAGADDE, V. G. 1990. Boundary and object labeling in three-dimensional images. Comput. Vision, Graph., Image Process. 51, 355-369.

UDUPA, J. K., AND HERMAN, G. T., eds. 1990. 3D Imaging in Medicine. CRC Press, Boca Raton, Fla.

UDUPA, J. K., AND HUNG, H.-M. 1990a. Surface versus volume rendering: A comparative assessment. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 83-91.

UDUPA, J. K., AND HUNG, H.-M. 1990b. A comparison of surface and volume rendering methods. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 464-470.

UDUPA, J. K., AND ODHNER, D. 1990. Interactive surgical planning: High-speed object rendition and manipulation without specialized hardware. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 330-336.

UDUPA, J. K., AND TUY, H. K. 1983. Display of 3D information in medical images using directed-contour representation. In Proceedings of the 4th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '83 (Chicago, Ill., June 26-30), pp. 98-105.

UDUPA, J. K., SRIHARI, S. N., AND HERMAN, G. T. 1982. Boundary detection in multidimensions. IEEE Trans. Patt. Anal. Mach. Intell. PAMI-4, 1 (Jan.), 41-50.

UDUPA, J. K., RAYA, S. P., AND BARRETT, W. A. 1990. A PC-based 3D imaging system for biomedical data. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 295-303.

UPSON, C., AND KEELER, M, 1988. V-BUFFER:Visible volume rendering, Comput. Graph. 22,4 (Aug.), 59-64.

VINCKEN, K. L., DEGRAAF, C. N., KOSTER, A. S. E., VIERGEVER, M. A., APPELMAN, F. J. R., AND TIMMENS, G. R. 1990. Multiresolution segmentation of 3D images by the hyperstack. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 115-122.

VITAL IMAGES, INC. 1989. The VOXEL VIEW In-teractive Volume Rendering System.

WALSER, R. L., AND ACKERMAN, L. V. 1977. Determination of volume from computerized tomograms: Finding the volume of fluid-filled brain cavities. J. Comput. Assist. Tomograph. 1, 1, 117-130.

WATT, A. 1989. Fundamentals of Three-Dimensional Computer Graphics. Addison-Wesley, Reading, Mass.

WEISBURN, B. A., PATNAIK, S., AND FELLINGHAM, L. L. 1986. An interactive graphics editor for 3D surgical simulation. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 483-490.

WENG, J., AND AHUJA, N. 1987. Octrees of objects in arbitrary motion: Representation and efficiency. Comput. Vision, Graph., Image Process. 39, 2 (Aug.), 167-185.

WHITTED, T. 1980. An improved illumination model for shaded display. Commun. ACM 23, 6 (June), 343-349.

WILLIAMS, R. D., AND GARCIA, F., JR. 1989. Volume visualization displays. Inf. Display 5, 4 (Apr.), 8-10.

WIXSON, S. E. 1989. True volume visualization of medical data. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 77-83.

WOOD, S. L., AND FELLINGHAM, L. L. 1985. Solid modeling and display techniques for CT and MR images. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 612-617.

WOOD, S. L., DEV, P., DUNCAN, J. P., WHITE, D. N., YOUNG, S. W., AND KAPLAN, E. N. 1983. A computerized system for planning reconstructive surgery. In Proceedings of the 5th Annual IEEE Engineering in Medicine and Biology Society Conference on Frontiers of Engineering and Computing in Health Care - 1983 (Columbus, Ohio, Sept. 10-12), pp. 156-160.

XU, S. B., AND LU, W. X. 1988. Surface reconstruction of 3D objects in computerized tomography. Comput. Vision, Graph., Image Process. 44, 3 (Dec.), 270-278.

YASUDA, T., HASHIMOTO, Y., YOKOI, S., AND TORIWAKI, J.-I. 1990. Computer system for craniofacial surgical planning based on CT images. IEEE Trans. Med. Imag. 9, 3 (Sept.), 270-280.

YOKOI, S., YASUDA, T., HASHIMOTO, Y., TORIWAKI, J.-I., FUJIOKA, M., AND NAKAJIMA, H. 1987. A craniofacial surgical planning system. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 152-161.

BIBLIOGRAPHY

Farrell Colored-Range Method Bibliography

FARRELL, E. J., AND ZAPPULLA, R. A. 1989. Three-dimensional data visualization and biomedical applications. CRC Crit. Rev. Biomed. Eng. 16, 4, 323-363.

FARRELL, E. J., YANG, W. C., AND ZAPPULLA, R. 1985. Animated 3D CT imaging. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 26-30.

FARRELL, E. J., ZAPPULLA, R., AND YANG, W. C. 1984. Color 3-D imaging of normal and pathologic intracranial structures. IEEE Comput. Graph. Appl. 4, 10 (Sept.), 5-17.

FARRELL, E. J., ZAPPULLA, R. A., AND KANTROWITZ, A. 1986a. Planning neurosurgery with interactive 3D computer imaging. In Proceedings of the 5th Conference on Medical Informatics, Salamon, R., and Blum, B. (eds.), North-Holland, Amsterdam, pp. 726-730.

FARRELL, E. J., ZAPPULLA, R. A., KANTROWITZ, A., SPIGELMAN, M., AND YANG, W. C. 1986b. 3D display of multiple intracranial structures for treatment planning. In Proceedings of the 8th Annual Conference of the IEEE/Engineering in Medicine and Biology Society (Fort Worth, Tex., Nov. 7-10), IEEE Press, pp. 1088-1091.

FARRELL, E. J., ZAPPULLA, R. A., AND SPIGELMAN, M. 1987. Imaging tools for interpreting two- and three-dimensional medical data. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 60-69.

Fuchs/Poulton Pixel-Planes Machines Bibliography

ELLSWORTH, D., GOOD, H., AND TEBBS, B. 1990. Distributing display lists on a multicomputer. In Proceedings of the 1990 Symposium on Interactive 3D Graphics (Snowbird, Utah, Mar. 25-28), pp. 147-154.

FUCHS, H., KEDEM, Z. M., AND USELTON, S. P. 1977. Optimal surface reconstruction from planar contours. Commun. ACM 20, 10 (Nov.), 693-702.

FUCHS, H., KEDEM, Z. M., AND NAYLOR, B. 1979. Predetermining visibility priority in 3-D scenes (preliminary report). Comput. Graph. 13, 2 (Aug.), 175-181.

FUCHS, H., KEDEM, Z. M., AND NAYLOR, B. F. 1980. On visible surface generation by a priori tree structures. Comput. Graph. 14, 2 (Aug.), 124-133.

FUCHS, H., ABRAM, G. D., AND GRANT, E. D. 1983. Near real-time shaded display of rigid objects. Comput. Graph. 17, 3 (July), 65-69.

FUCHS, H., GOLDFEATHER, J., HULTQUIST, J. P., SPACH, S., AUSTIN, J. D., BROOKS, F. P. JR., EYLES, J. G., AND POULTON, J. 1985. Fast spheres, shadows, textures, transparencies, and image enhancements in Pixel-Planes. Comput. Graph. 19, 3 (July), 111-120.

FUCHS, H., POULTON, J., EYLES, J., AUSTIN, J., AND GREER, T. 1986. Pixel-Planes: A parallel architecture for raster graphics. Pixel-Planes Project Summary, Dept. of Computer Science, Univ. of North Carolina.

FUCHS, H., POULTON, J., EYLES, J., AND GREER, T. 1988a. Coarse-grain and fine-grain parallelism in the next generation Pixel-Planes graphics system. TR88-014, Univ. of North Carolina at Chapel Hill, Dept. of Computer Science, and Proceedings of the International Conference and Exhibition on Parallel Processing for Computer Vision and Display (University of Leeds, United Kingdom, Jan. 12-15).

FUCHS, H., POULTON, J., EYLES, J., GREER, T., HILL, E., ISRAEL, L., ELLSWORTH, D., GOOD, H., INTERRANTE, V., MOLNAR, S., NEUMANN, U., RHOADES, J., TEBBS, B., AND TURK, G. 1988b. Pixel-Planes: A parallel architecture for raster graphics. Pixel-Planes Project Summary, Dept. of Computer Science, Univ. of North Carolina.

FUCHS, H., PIZER, S. M., CREASY, J. L., RENNER, J. B., AND ROSENMAN, J. G. 1988c. Interactive, richly cued shaded display of multiple 3D objects in medical images. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 842-849.

FUCHS, H., POULTON, J., EYLES, J., GREER, T., GOLDFEATHER, J., ELLSWORTH, D., MOLNAR, S., AND TURK, G. 1989c. Pixel-Planes 5: A heterogeneous multiprocessor graphics system using processor-enhanced memories. Comput. Graph. 23, 4 (July), 79-88.

FUCHS, H., LEVOY, M., AND PIZER, S. M. 1989a. Interactive visualization of 3D medical data. IEEE Comput. 22, 8 (Aug.), 46-52.

GOLDFEATHER, J., AND FUCHS, H. 1986. Quadratic surface rendering on a logic-enhanced frame-buffer memory. IEEE Comput. Graph. Appl. 6, 1 (Jan.), 48-59.

LEVOY, M. 1989b. Design for a real-time high-quality volume rendering workstation. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 85-92.

PIZER, S. M., AND FUCHS, H. 1987. Three dimensional image presentation techniques in medical imaging. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 589-598.

POULTON, J., FUCHS, H., AUSTIN, J., EYLES, J., AND GREER, T. 1987. Building a 512 x 512 Pixel-Planes system. 1987 Conference on Advanced Research in VLSI, Stanford University, Palo Alto, Calif., preprint.

Kaufman’s Cube Machine Bibliography

BAKALASH, R., AND KAUFMAN, A. 1989. Medicube: A 3D medical imaging architecture. Comput. Graph. 13, 2, 151-159.


COHEN, D., KAUFMAN, A., AND BAKALASH, R. 1990. Real-time discrete shading. Visual Comput. 6, 3, 14-27.

GEMBALLA, R., AND LINDNER, R. 1982. The multi-ple-write bus technique. IEEE Comput. Graph.Appl. 2, 7 (Sept.), 33-41.

KAUFMAN, A., AND BAKALASH, R. 1985. A 3-D cellular frame buffer. In Proceedings of EUROGRAPHICS '85, Vandoni, C. E., Ed., Elsevier Science Publishers B.V. (North-Holland), Amsterdam, The Netherlands, pp. 215-220.

KAUFMAN, A., AND SHIMONY, E. 1986. 3D scan-conversion algorithms for voxel-based graphics. In Proceedings of the ACM Workshop on Interactive 3D Graphics (Chapel Hill, N.C., Oct. 23-24), pp. 45-75.

KAUFMAN, A. 1987. Efficient algorithms for 3D scan-conversion of parametric curves, surfaces, and volumes. Comput. Graph. 21, 4 (July), 171-179.

KAUFMAN, A. 1988a. Efficient algorithms for scan-converting 3D polygons. Comput. Graph. 12, 2, 213-219.

KAUFMAN, A. 1988b. The CUBE workstation: A 3-D voxel-based graphics environment. Visual Comput. 4, 4 (Oct.), 210-221.

KAUFMAN, A., AND BAKALASH, R. 1988. Memory and processing architecture for 3D voxel-based imagery. IEEE Comput. Graph. Appl. 8, 6 (Nov.), 10-23.

KAUFMAN, A. 1988c. The CUBE three-dimensional workstation. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 344-354.

KAUFMAN, A. 1989. The voxblt engine: A voxel frame buffer processor. In Advances in Graphics Hardware III, Kuijk, F., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.

KAUFMAN, A., AND BAKALASH, R. 1989a. Parallel processing for 3D voxel-based graphics. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.), Addison-Wesley, Workingham, England.

KAUFMAN, A., AND BAKALASH, R. 1989b. The CUBE system as a 3D medical workstation. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 189-194.

KAUFMAN, A., AND BAKALASH, R. 1990. Viewing and rendering processor for a volume visualization system (extended abstract). In Advances in Graphics Hardware IV, Grimsdale, R. L., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.

Mayo Clinic True 3D Machine Bibliography

HARRIS, L. D., ROBB, R. A., YUEN, T. S., AND RITMAN, E. L. 1979. Display and visualization of three-dimensional reconstructed anatomic morphology: Experience with the thorax, heart, and coronary vasculature of dogs. J. Comput. Assist. Tomog. 3, 4 (Aug.), 439-446.

HARRIS, L. D., CAMP, J. J., RITMAN, E. L., AND ROBB, R. A. 1986. Three-dimensional display and analysis of tomographic volume images utilizing a varifocal mirror. IEEE Trans. Med. Imag. MI-5, 2 (June), 67-72.

HEFFERMAN, P. B., AND ROBB, R. A. 1985a. A new method for shaded surface display of biological and medical images. IEEE Trans. Med. Imag. MI-4, 1 (Mar.), 26-38.

HEFFERMAN, P. B., AND ROBB, R. A. 1985b. Display and analysis of 4-D medical images. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 583-592.

ROBB, R. A. 1985. Three-Dimensional Biomedical Imaging, vol. 1. CRC Press, Boca Raton, Fla.

ROBB, R. A., HEFFERMAN, P. B., CAMP, J. J., AND HANSON, D. P. 1986. A workstation for interactive display and quantitative analysis of 3D and 4D biomedical images. In Proceedings of the 10th Annual Symposium on Computer Applications in Medical Care (Washington, D.C., Oct. 25-26), pp. 240-256.

ROBB, R. A. 1987. A workstation for interactive display and analysis of multidimensional biomedical images. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 642-656.

ROBB, R. A., AND BARILLOT, C. 1988. Interactive 3-D image display and analysis. In Proceedings of the Society of Photo-Optical Instrumentation Engineers: Hybrid Image and Signal Processing, 939, Casasent, D. P., and Tescher, A. G. (eds.), (Orlando, Fla.), pp. 173-202.

ROBB, R. A., AND BARILLOT, C. 1989. Interactive display and analysis of 3-D medical images. IEEE Trans. Med. Imag. 8, 3 (Sept.), 217-226.

ROBB, R. A., AND HANSON, D. P. 1990. ANALYZE: A software system for biomedical image analysis. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 507-518.

MIPG Machines Bibliography

ARTZY, E. 1979. Display of three-dimensional information in computed tomography. Comput. Graph. Image Process. 9, 196-198.

ARTZY, E., FRIEDER, G., HERMAN, G. T., AND LIU, H. K. 1979. A system for three-dimensional dynamic display of organs from computed tomograms. In Proceedings of the 6th Conference on Computer Applications in Radiology and Computer-Aided Analysis of Radiological Images (Newport Beach, Calif., June), pp. 285-290.

ARTZY, E., FRIEDER, G., AND HERMAN, G. T. 1981. The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm. Comput. Graph. Image Process. 15 (Jan.), 1-24.

CHEN, L. S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1984a. Surface rendering in the cuberille environment. Tech. Rep. MIPG 87, Dept. of Radiology, Univ. of Pennsylvania.

CHEN, L. S., HERMAN, G. T., MEYER, C. R., REYNOLDS, R. A., AND UDUPA, J. K. 1984b. 3D83: An easy-to-use software package for three-dimensional display from computed tomograms. In Proceedings of the IEEE Computer Society Joint International Symposium on Medical Images and Icons (Arlington, Va., July), IEEE Computer Society Press, pp. 309-316.

CHEN, L.-S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1985. Surface shading in the cuberille environment. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 33-43.

EDHOLM, P. R., LEWITT, R. M., LINDHOLM, B., HERMAN, G. T., UDUPA, J. K., CHEN, L. S., MARGASAHAYAM, P. S., AND MEYER, C. R. 1986. Contributions to reconstruction and display techniques in computerized tomography. Tech. Rep. MIPG 110, Dept. of Radiology, Univ. of Pennsylvania.

FRIEDER, G., GORDON, D., AND REYNOLDS, R. A. 1985a. Back-to-front display of voxel-based objects. IEEE Comput. Graph. Appl. 5, 1 (Jan.), 52-60.

FRIEDER, G., HERMAN, G. T., MEYER, C., AND UDUPA, J. 1985b. Large software problems for small computers: An example from medical imaging. IEEE Softw. 2, 5 (Sept.), 37-47.

HERMAN, G. T., AND LIU, H. K. 1978. Dynamic boundary surface detection. Comput. Graph. Image Process. 7, 130-138.

HERMAN, G. T., AND LIU, H. K. 1979. Three-dimensional display of human organs from computed tomograms. Comput. Graph. Image Process. 9, 1-21.

HERMAN, G. T., AND COIN, C. G. 1980. The use of three-dimensional computer display in the study of disk disease. J. Comput. Assist. Tomograph. 4, 4 (Aug.), 564-567.

HERMAN, G. T., UDUPA, J. K., KRAMER, D., LAUTERBUR, P. C., RUDIN, A. M., AND SCHNEIDER, J. S. 1982. Three-dimensional display of nuclear magnetic resonance images. Opt. Eng. 21, 5 (Sept./Oct.), 923-926.

HERMAN, G. T., AND WEBSTER, D. 1983. A topological proof of a surface tracking algorithm. Comput. Vision, Graph., Image Process. 23, 162-177.

HERMAN, G. T. 1985. Computer graphics in radiology. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 539-550.

HERMAN, G. T. 1986. Computerized reconstruction and 3-D imaging in medicine. Tech. Rep. MIPG 108, Dept. of Radiology, Univ. of Pennsylvania.

REYNOLDS, R. A. 1983b. Some architectures for real-time display of three-dimensional objects: A comparative survey. Tech. Rep. MIPG 84, Dept. of Radiology, Univ. of Pennsylvania.

UDUPA, J. K., RAYA, S. P., AND BARRETT, W. A. 1990. A PC-based 3D imaging system for biomedical data. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 295-303.

UDUPA, J. K., AND HUNG, H.-M. 1990. A comparison of surface and volume rendering methods. In SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 464-470.

Pixar/Vicom Image Computer and Pixar/Vicom II Bibliography

CARPENTER, L. 1984. The A-buffer, an antialiased hidden surface method. Comput. Graph. 18, 3 (July), 103-108.

CATMULL, E. 1984. An analytic visible surface algorithm for independent pixel processing. Comput. Graph. 18, 3 (July), 109-115.

COOK, R. L. 1984. Shade trees. Comput. Graph. 18, 3 (July), 223-231.

COOK, R. L., PORTER, T., AND CARPENTER, L. 1984. Distributed ray tracing. Comput. Graph. 18, 3 (July), 137-145.

COOK, R. L., CARPENTER, L., AND CATMULL, E. 1987. The Reyes image rendering architecture. Comput. Graph. 21, 4 (July), 95-102.

DREBIN, R. A., CARPENTER, L., AND HANRAHAN, P. 1988. Volume rendering. Comput. Graph. 22, 4 (Aug.), 65-74.

LEVINTHAL, A., AND PORTER, T. 1984. Chap: A SIMD graphics processor. Comput. Graph. 18, 3 (July), 77-82.

NEY, D. R., FISHMAN, E. K., MAGID, D., AND DREBIN, R. A. 1990b. Volumetric rendering of computed tomography data: Principles and techniques. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 24-32.

PIXAR CORPORATION. 1988a. ChapLibraries Technical Summary for the UNIX Operating System.

PIXAR CORPORATION. 1988b. ChapTools Technical Summary for the UNIX Operating System.

PIXAR CORPORATION. 1988c. Pixar II Hardware Overview.


PIXAR CORPORATION. 1988d. Chap Image Processing System Technical Summary.

PIXAR CORPORATION. 1989. ChapVolumes Volume Rendering Package Technical Summary.

PORTER, T., AND DUFF, T. 1984. Compositing digital images. Comput. Graph. 18, 3 (July), 253-259.

ROBERTSON, B. 1986. PIXAR goes commercial in a new market. Comput. Graph. World 9, 6 (June), 61-70.

SPRINGER, D. 1986. Image computer revolutionizes industrial design. Intelligent Instrum. Comput. (Nov./Dec.), 276-277.

Reynolds & Goldwasser's Voxel Processor Bibliography

GOLDWASSER, S. M., AND REYNOLDS, R. A. 1983. An architecture for the real-time display and manipulation of three-dimensional objects. In Proceedings of the International Conference on Parallel Processing (Bellaire, Mich., Aug.), IEEE Computer Society Press, pp. 269-274.

GOLDWASSER, S. M. 1984a. A generalized object display processor architecture. In Proceedings of the 11th Annual International Symposium on Computer Architecture (Ann Arbor, Mich., May), IEEE Computer Society Press, pp. 38-47.

GOLDWASSER, S. M. 1984b. A generalized object display processor architecture. IEEE Comput. Graph. Appl. 4, 10 (Oct.), 43-55.

GOLDWASSER, S. M. 1985. The voxel processor architecture for real-time display and manipulation of 3D medical objects. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., Apr. 14-18), pp. 71-80.

GOLDWASSER, S. M., REYNOLDS, R. A., BAPTY, T., BARAFF, D., SUMMERS, J., TALTON, D. A., AND WALSH, E. 1985. Physician's workstation with real-time performance. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 44-57.

GOLDWASSER, S. M. 1986. Rapid techniques for the display and manipulation of 3-D biomedical data. In Proceedings of the 7th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '86 (Anaheim, Calif., May 11-15), pp. 115-149.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D., AND WALSH, E. 1986. Real-time interactive facilities associated with a 3-D medical workstation. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626 (Newport Beach, Calif., Feb. 2-7), pp. 491-503.

GOLDWASSER, S. M., AND REYNOLDS, R. A. 1987. Real-time display and manipulation of 3-D medical objects: The voxel processor architecture. Comput. Vision, Graph., Image Process. 39, 1-27.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988a. High-performance graphics processors for medical imaging applications. In Proceedings of the International Conference on Parallel Processing for Computer Vision and Display (University of Leeds, United Kingdom, Jan. 12-15), pp. 1-13.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988b. Techniques for the rapid display and manipulation of 3-D biomedical data. Comput. Med. Imag. Graph. 12, 1 (Jan.-Feb.), 1-24.

GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1989. High-performance graphics processors for medical imaging applications. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.), Addison-Wesley, Wokingham, England.

REYNOLDS, R. A. 1983a. Simulation of 3D display systems. Dept. of Radiology, Univ. of Pennsylvania, unpublished report.

REYNOLDS, R. A. 1985. Fast methods for 3D display of medical objects. Ph.D. dissertation, Dept. of Computer and Information Science, Univ. of Pennsylvania.

Received August 1988; final revision accepted March 1991.
