EUROGRAPHICS '93 / R. J. Hubbold and R. Juan (Guest Editors), Blackwell Publishers © Eurographics Association, 1993
Volume 12, (1993), number 3
Photo-Realistic Imaging of Digital Terrains
Daniel Cohen1,2 and Amit Shaked2
1Department of Mathematics and Computer Science Ben Gurion University, Beer Sheva 84015, Israel
2Department of Computer Science, School of Mathematical Sciences Tel Aviv University, Ramat Aviv 69978, Israel
Abstract We present a method for the generation of photorealistic images of views over terrain datasets by mapping a digital aerial photograph on a perspective projection of a digital elevation map. We use high resolution for both digital maps to increase the quality and the realism of the image, at the cost of the overhead of processing very large databases. At the core of this paper we present an accelerated ray casting technique based on a new algorithm for traversing a pyramidal data structure. Unlike other known traversal techniques, the cost of a single step of the algorithm consists of only a few additions, shifts and comparisons.
Keywords: Ray Casting, Voxel-Based, Height Field, Hierarchical Methods, Pyramid, Terrain Visualization.
1 Introduction
Recent advances in computer architectures, computational power, and memory bandwidth have created new opportunities for the display of large terrain datasets at high frame rates. Moreover, photorealism has become a common requirement in applications such as flight simulators, mission rehearsal, and construction planning [9, 12, 14]. In some training applications, such as targeting, reconnaissance and identification of the target are necessary. Such strong demands require a very detailed terrain model as well as high resolution digital photographs mapped onto the model in order to provide a realistic impression. Furthermore, for these applications, this huge amount of data must be rendered in real time to train the user at true speeds and under special visual conditions.
A common approach to the display of terrain data maps the digital photo on a large polygonal surface that approximates the terrain, taking advantage of special hardware for rasterizing texture-mapped polygons [7]. However, a detailed terrain model contains too many polygons to be rendered in real time on common workstations, and only graphics supercomputers can perform in real time [5]. If the application permits some degradation of the model geometry, larger areas can be approximated by fewer polygons. However, a lower resolution of the digital photo is much more noticeable and, in many cases, not acceptable.
High resolution data makes it possible to represent curved surfaces and detailed geometries. A detailed polygonal representation will cause the polygons to be so tiny that most polygons contribute to the image an area smaller than that of a single pixel, losing the cost-effectiveness of the polygon rasterization hardware. An alternative approach uses a digital elevation map (DTM) to represent the terrain model; a DTM is an evenly spaced array of terrain elevations offering easy access to the terrain data.
Eventually, we would like to render a digital elevation map in real time on general purpose parallel computers. Thus, we have first devised an algorithm that is fast enough on a sequential computer and can be easily parallelized. Rendering algorithms for elevation maps are close to the task of rendering 3D voxel data [17], but elevation maps are a function of two variables, and better performance is naturally expected. However, synthetic objects such as trees, buildings, and vehicles are placed over the terrain, so that every data element is specified by three coordinates and is represented by a voxel-based data structure [2, 10]. Thus, we will use the term voxel to describe a data element represented either by the digital elevation map or by a voxel-based data structure.

C364 D. Cohen et al. / Photo-Realistic Imaging of Digital Terrains

Figure 1: (a) A forward mapping maps all voxels along the ray of sight, while (b) a backward mapping maps only the closest voxel.
In Section 2 we introduce our hierarchical rendering method, which is based on a quadtree traversal (Section 3). The integer-based traversal technique, which is multiplication-free, is developed in Section 4. In Section 5 we present the hierarchy of filtered data which improves the sampling of the terrain data, and in Section 6 we present our results. The final section contains concluding remarks and some directions for future work.
2 Ray Casting Digital Terrains
The synthesis of a perspective image is done by mapping the terrain data onto the image. There are two mapping approaches, classified by the direction of the mapping. A forward mapping approach transforms the voxel dataset onto the image pixels, where the voxels are sorted to avoid the display of hidden voxels, commonly by a z-buffer technique. Ray casting is a backward mapping approach, where sight rays are cast from the pixels of the image back to the voxel dataset. The first voxel encountered hides the rest of the voxels that may exist along the ray.
As illustrated in Figure 1, forward mapping maps all the voxels that are pierced by the ray of sight, while in backward mapping only the visible voxels are mapped. Ray casting has the potential to be faster, because the image pixel colors can be generated with fewer accesses to the dataset, while forward mapping requires the mapping of the entire dataset in addition to the overhead of the z-buffering process. On the other hand, forward mapping can produce a better quality image than backward mapping, since many voxels can contribute to the final color of one pixel, approximating an area sampling of the dataset, while a naive backward mapping does point sampling and thus produces more aliasing.
To generate a perspective view of the terrain, we use the ray casting technique, in which we cast a ray from the viewpoint through every pixel of the image towards the terrain in order to find the first hit of the ray with the terrain. The color of the hit voxel is mapped back to the source pixel. A straightforward implementation of ray casting tracks the ray's component vector on the "sea level" plane. At equal intervals along this vector, the terrain elevation values are sampled and compared to the height of the ray above that point. The first time the ray gets below the terrain indicates a ray-terrain hit; if no hit occurs, the pixel is given the sky color. We refer to such an algorithm as an incremental algorithm [11, 3]. This algorithm can be accelerated by clipping the ray with a bounding box of the terrain, defined by the plane dimensions and the maximum value of the elevation map. The clipped ray is shorter and fewer points need to be sampled.
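The incremental algorithm described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the function name, the `height` callback, and the fixed step size are assumptions for illustration.

```python
# A minimal sketch of the incremental (non-hierarchical) algorithm:
# march along the ray's ground-plane projection at equal intervals and
# stop at the first sample where the ray drops below the terrain.
def incremental_cast(height, eye, direction, step=1.0, max_t=1000.0):
    """height(x, y) -> terrain elevation; eye and direction are 3D tuples."""
    t = 0.0
    while t < max_t:
        x = eye[0] + t * direction[0]
        y = eye[1] + t * direction[1]
        z = eye[2] + t * direction[2]
        if z <= height(x, y):          # ray dropped below the terrain: a hit
            return (x, y, z)
        t += step
    return None                         # no hit: the pixel gets the sky color
```

Clipping the ray against the terrain bounding box, as the text suggests, would simply shrink the `[0, max_t]` interval before the loop starts.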
In order to further reduce the number of steps, a hierarchical method can be employed. A pyramidal data structure is a stack of successively reduced resolution maps. The pyramid of elevation maps is employed as a hierarchy of bounding boxes, where the value of each cell is defined by the maximum elevation of the terrain region it represents. Unlike the incremental algorithm, the ray traverses the hierarchy of finer and finer bounding boxes with a varying step size. The ray skips above bounding boxes and recursively traverses below bounding boxes until it either hits the terrain or leaves the top level bounding box to the sky with no hit. Figure 2 illustrates a traversal of a hierarchy of 2D bounding boxes by a 2D ray.

Figure 2: A 2D ray traverses a hierarchy of bounding boxes. (a) Intersecting with the largest bounding box chooses the right sub-box. (b) Then, intersection with that box chooses its left sub-box, and so on.
The literature is rich with traversal algorithms for hierarchical data structures [13]. We briefly review the principles of these algorithms in Section 3. The pyramid data structure provides a mechanism with which the ray hits the terrain map in large steps, and it is clear that it becomes relatively more efficient when maps of higher resolution are traced. However, in practice, the cost of a single step is crucial for the total traversal cost. A single step of our new algorithm includes only a few additions, shifts, and comparisons, and no multiplications or divisions at all. In other acceleration techniques, like the parametric method [12], the ray may hit the terrain in fewer steps, but every step involves several floating point multiplications and the total traversal cost can be higher.
3 Quadtree Traversal Methods
Assume an image is a $2^n$ by $2^n$ raster of pixels, where each pixel is either black or white. The image is encoded into a quadtree in the following manner: the root of the quadtree corresponds to the whole image. The image is partitioned into four equal-sized $2^{n-1}$ by $2^{n-1}$ quadrants. The four children nodes of the root correspond to these four quadrants. If one of the quadrants contains only black pixels or only white pixels, it is encoded as a black node or a white node, respectively. The other quadrants, containing both black and white pixels, are the gray nodes. Each gray quadrant is recursively partitioned into four subquadrants, until either no gray nodes exist anymore, or the quadrants have reached the size of a pixel.
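The encoding above can be sketched in a few lines. This is a hypothetical helper, not from the paper; it assumes a boolean raster with 1 for black and 0 for white, indexed as `img[row][col]`.

```python
# Classify a quadrant of a 2^n x 2^n boolean raster as in the quadtree
# encoding described above, then build the tree recursively.
def classify(img, x, y, size):
    """Return 'black', 'white', or 'gray' for the size x size quadrant at (x, y)."""
    vals = {img[j][i] for j in range(y, y + size) for i in range(x, x + size)}
    if vals == {1}:
        return 'black'
    if vals == {0}:
        return 'white'
    return 'gray'

def build_quadtree(img, x=0, y=0, size=None):
    """Return 'black'/'white' for uniform quadrants, else a list of four subtrees."""
    size = size or len(img)
    kind = classify(img, x, y, size)
    if kind != 'gray' or size == 1:
        return kind
    h = size // 2
    return [build_quadtree(img, x + dx, y + dy, h)
            for dy in (0, h) for dx in (0, h)]
```

A production encoder would classify each quadrant once, bottom-up, rather than rescanning pixels at every level as this sketch does.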
Quadtrees, as well as their 3D equivalents, octrees, were originally devised to save space for large raster images. But since the upper level nodes of the quadtree/octree encode large areas of the scene, it is possible to save processing time, in particular when tracing a ray through a scene represented by a quadtree/octree. The ray can skip a large empty space (a white node) in a single step, and the ray tracing process is accelerated [13]. Casting a ray through a quadtree means finding the first black node among all the leaf nodes that a 2D ray passes through.
The pyramidal data structure we employ is a "full" quadtree, whose leaf nodes hold the surface-elevation values, and each inner node holds the maximum value of its four subquadrants. We are interested in a pyramidal ray traversal algorithm, and in the following we survey ray-quadtree traversal algorithms. It is important to note that the pyramidal model poses a true 2D problem, while the quadtree ray tracing algorithms are not of much interest by themselves, but are commonly used as a basis for developing and demonstrating solutions for the equivalent 3D problem of traversing an octree.
The algorithms for traversing hierarchical data structures can be classified into two basic approaches: bottom-up and top-down traversals of a quadtree. In the bottom-up approach, also referred to as leaf traversal, the first black leaf pierced by the ray is found by tracking the leaves through which the ray passes. Many leaf traversal algorithms have been developed, and detailed descriptions can be found in [13]. The algorithm begins by computing the point of the ray's entry into the grid. The leaf node which contains this point is found by recursively comparing the point's coordinates to the coordinates of the centers of the quadrants, starting from the root of the quadtree,
and each comparison determines which subquadrant contains the point. If that leaf node is white, the point through which the ray exits the leaf is computed. Then a procedure termed neighbor finding is used to find the adjacent node in the direction of the ray with a size greater than or equal to that of the previous node, which contains the exit point. If that neighbor node is not a leaf, a recursive search is performed from the neighbor node down to the leaf which contains the quadrant’s exit point. The ray is traversed through consecutive leaf nodes, until either a black leaf is found, or the ray exits the grid. The cost of each step consists of the exit point calculation and the neighbor finding routine.
A recursive subdivision approach [8] is a straightforward depth-first top-down search of a tree structure. The ray is represented parametrically, with a parameter t whose value is 0 at the entry point of the ray into the grid, and 1 at the exit point. First, the ray is intersected with the two lines that split the top quadrant into subquadrants. The intersection points yield one or two values of the parameter t along the ray (or no values of t at all, in the case of no intersection), giving one, two or three subquadrants that the ray passes through. The increasing order of the t values determines the order in which the ray visits the subquadrants. The subquadrants are tested in this order: white quadrants are skipped, gray quadrants are recursively processed, and encountering a black quadrant means a hit. The cost of each step consists mainly of two intersection calculations and the overhead of the top-down navigation management.
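One step of this scheme, intersecting the parametric ray with the two splitting lines and ordering the crossings, might look like the following sketch. `split_ts` is a hypothetical helper name; entry and exit points are taken at t = 0 and t = 1 as in the text.

```python
# For a ray segment from p_in (t = 0) to p_out (t = 1), return the sorted
# parameter values in (0, 1) where it crosses the splitting lines x = xm
# and y = ym; these values order the sub-quadrants along the ray.
def split_ts(p_in, p_out, xm, ym):
    ts = []
    for (o, e, m) in ((p_in[0], p_out[0], xm), (p_in[1], p_out[1], ym)):
        if e != o:                       # ray not parallel to this split line
            t = (m - o) / (e - o)
            if 0.0 < t < 1.0:            # crossing inside the segment
                ts.append(t)
    return sorted(ts)
```

The zero, one, or two returned t values partition the segment into the one, two, or three sub-quadrant intervals the text describes.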
4 A New Efficient Top-Down Traversal
In this section we describe a pyramidal traversal algorithm. First we describe the pyramid data structure and present an overview of the associated ray casting algorithm. Then we briefly introduce the midpoint technique, and show how it is adapted to efficiently traverse the pyramidal data structure in order to locate the first voxel hit by the ray.
4.1 The Pyramid Data Structure and Algorithm
The pyramid data structure employed by our algorithm consists of $N + 1$ arrays of height values, termed levels. Level 0 is the $2^N \times 2^N$ grid of voxels, each representing a unit terrain element, built from the database of height values (DTM). In each of the higher levels, a cell at $(i, j)$ contains the maximum height value of the four cells $(2i, 2j)$, $(2i+1, 2j)$, $(2i, 2j+1)$ and $(2i+1, 2j+1)$ in the level below it. The highest level, $N$, is a single cell containing the maximum height value in the grid. As mentioned above, this data structure can be treated as a quadtree, with level $N$ as the root of the tree, and each cell in the lower levels decomposed into four cells (quadrants), down to level 0, in which the leaves of the tree are the voxels.
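Building such a max pyramid can be sketched in a few lines; the list-of-lists representation and the function name are assumptions for illustration.

```python
# Sketch of building the max pyramid: each successive level halves the
# resolution, and each cell holds the maximum of its four children.
def build_pyramid(dtm):
    """dtm: 2^N x 2^N list of lists of heights. Returns [level0, ..., levelN]."""
    levels = [dtm]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[max(prev[2*j][2*i],     prev[2*j][2*i+1],
                            prev[2*j+1][2*i],   prev[2*j+1][2*i+1])
                        for i in range(n)] for j in range(n)])
    return levels
```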
A ray cast through this data structure is a 3D line emanating from the point of view, which is located above the surface of the terrain. We have to find the first point along the ray which lies either under or on the surface. We do that by tracking the projection of the ray on the XY plane at a scale of resolutions corresponding to the pyramid levels, and comparing the Z component of the ray to the height of the pyramid cells that the ray passes through. When the ray passes through a pyramid cell at level $l$, the height stored in this cell is compared to the minimum Z value of the ray over this cell. If the minimum Z is smaller than or equal to the cell's height, there is a possible hit of the ray in one of the voxels under that pyramid cell. If the current cell's level is 0, the cell is actually a voxel, and this is a hit; otherwise, the ray switches to a finer resolution, that is, the ray is tested against the subquadrants of the current cell, at level $l - 1$. If the minimum Z of the ray over the current cell is greater than the cell's height, there is no possible hit in that area, and the ray steps to the next quadrant at the same resolution. The traversal process can be accelerated by using vertical coherency, assuming that the dataset is a function of two variables [6]. This assumption is violated when objects are placed above the terrain, but it holds for the bounding boxes. A ray can pass to its vertically adjacent ray the distance it traversed above bounding boxes, and the next ray can start its search from that distance. This means that the top-down search does not necessarily start from the top of the pyramid, and the best level for starting the search is scene dependent. The traversal process can also be accelerated by stopping the search at a higher level of the pyramid for voxels seen further away from the eyepoint [16].
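The skip-or-descend decision can be illustrated with a simplified, parametric sketch. Note that this is not the paper's multiplication-free traversal (Section 4 avoids exactly these intersection computations); it only demonstrates the logic of comparing the ray's minimum Z over a cell footprint with the cell's stored maximum height. The max pyramid is assumed to be a list of 2D arrays indexed as `levels[l][j][i]`.

```python
# Simplified top-down pyramid test: skip cells the ray passes above,
# descend into cells it may hit, and report the earliest voxel hit.
def cast(levels, eye, d, level=None, i=0, j=0):
    """Return (entry_t, i, j) of the first voxel hit, or None."""
    if level is None:
        level = len(levels) - 1
    size = 1 << level                       # cell edge length at this level
    t0, t1 = 0.0, float('inf')
    for (o, dd, lo) in ((eye[0], d[0], i * size), (eye[1], d[1], j * size)):
        hi = lo + size
        if dd == 0.0:
            if not (lo <= o <= hi):
                return None                 # ray projection misses the cell
        else:
            a, b = (lo - o) / dd, (hi - o) / dd
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
    if t0 > t1:
        return None                         # ray misses this cell's footprint
    z_min = min(eye[2] + t0 * d[2], eye[2] + t1 * d[2])
    if z_min > levels[level][j][i]:
        return None                         # ray passes above: skip the cell
    if level == 0:
        return (t0, i, j)                   # a voxel: this is a hit
    hits = [h for dj in (0, 1) for di in (0, 1)
            if (h := cast(levels, eye, d, level - 1, 2 * i + di, 2 * j + dj))]
    return min(hits) if hits else None      # earliest hit along the ray
```

For simplicity this sketch visits all four sub-quadrants and keeps the earliest hit; the actual algorithm visits them in ray order and stops at the first hit.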
Figure 3: Stepping through pixels using the midpoint technique.
4.2 The Midpoint Technique
Here we briefly describe the midpoint technique developed for rasterizing straight lines and other 2D curves [1, 6]. Unlike the rasterization application, which selects the pixels whose centers are closest to the line, here we are required to track the sequence of all pixels pierced by the line. A typical rasterization algorithm generates an 8-connected line which does not necessarily cover the line completely. However, an edge-connected sequence of pixels, also referred to as a 4-connected line, is guaranteed to cover all the pixels pierced by the line. Figure 3 shows an example of stepping through the pixels pierced by a straight line expressed implicitly by $F(x, y) = ax + by + c = 0$.
We discuss the case in which the line has positive X and Y directions, as in Figure 3; all the other cases are treated similarly. Consider Figure 3, and assume that in the course of the algorithm we have chosen pixel A, which is represented by its bottom-left point $P_A$, and now we have to choose one of the next edge-adjacent pixels, C or B. The implicit representation of the line partitions the XY plane into two infinite regions: one where $F(x, y) < 0$ (the region to the left of the line, relative to the direction of the line) and one where $F(x, y) > 0$ (the region to the right of the line).
Let the point $M$, termed the midpoint, be midway between the two candidate pixels C and B. The sign of $F(M)$ indicates on which side of the line $M$ lies. If $F(M) < 0$, the line passes horizontally from pixel A through the edge to pixel B; otherwise it goes vertically to pixel C. In Figure 3(a), $F(M) < 0$ and pixel B is selected as the next pixel. In Figure 3(b), the line passes above $M$, thus $F(M) > 0$, and pixel C is selected.

Using the value of the implicit line representation at the midpoint as a decision mechanism for traversing the pixel space turns out to be very efficient, since the decision variable can be evaluated incrementally. Looking again at Figure 3, assume the coordinates of $P_A$ are $(x_A, y_A)$ and the length of a pixel's edge is $L$. Then the coordinates of $M$ are $(x_A + L,\ y_A + L)$, giving

$$d = F(M) = a(x_A + L) + b(y_A + L) + c.$$

When stepping to pixel B, which is represented by point $P_B = (x_A + L,\ y_A)$, the new midpoint is $M'$ with coordinates $(x_A + 2L,\ y_A + L)$. The new decision value will now be

$$d' = F(M') = d + aL.$$
Let $\Delta_h$ be the difference between the new decision value and the old one when a horizontal step is made, and $\Delta_v$ when a vertical step is made:

$$\Delta_h = aL, \qquad \Delta_v = bL,$$
which means that the incremental change in the decision value when we step to the next pixel is a constant, independent of the coordinates of the pixel itself, and determined only by the direction in which we step (horizontal or vertical).
The constant values $\Delta_h$ and $\Delta_v$ can be precomputed, and the cost of each step is only two integer additions (the incremental step along the X or Y coordinate and the addition of the increment to the decision value) and one sign test of the decision variable.
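The incremental decision scheme above can be sketched as follows, for unit pixels (L = 1) and a line with positive X and Y directions between integer endpoints. The function name and the exact sign convention for F are illustrative assumptions consistent with the description above.

```python
# 4-connected midpoint traversal: track every pixel pierced by the line
# from (x0, y0) to (x1, y1), requiring x1 > x0 and y1 > y0.
def pixels_pierced(x0, y0, x1, y1):
    a, b = y1 - y0, -(x1 - x0)              # implicit line F(x, y) = a*x + b*y + c
    c = -(a * x0 + b * y0)                  # chosen so F is zero on the line
    x, y = x0, y0
    d = a * (x + 1) + b * (y + 1) + c       # decision value at the first midpoint
    out = [(x, y)]
    for _ in range((x1 - x0) + (y1 - y0)):  # one step per unit of x or y
        if d < 0:                            # line passes below the midpoint
            x += 1                           # horizontal step
            d += a                           # Delta_h = a * L, with L = 1
        else:                                # line passes above (or through) it
            y += 1                           # vertical step
            d += b                           # Delta_v = b * L
        out.append((x, y))
    return out
```

Each loop iteration costs exactly the two integer additions and one sign test described in the text.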
4.3 Pyramid Traversal Using The Midpoint Technique
The 3D ray is the straight line defined by the point of view, or center of projection, and a screen pixel through which the ray is cast. The projection of that line on the XY plane can be represented by the straight line equation $F(x, y) = ax + by + c = 0$, where the coefficients $a$, $b$ and $c$ are derived from the point of view and the pixel.
Unlike previous algorithms, the new pyramid traversing method uses a non-parametric representation of the ray vector, based on the midpoint technique described above. We regard the pyramid quadrants at each level as pixels, and follow the ray through these multiscale pixels, using a multi-resolution midpoint method to decide which quadrant at which level will be entered at each step. It is important to note that we never actually compute the intersection point between the ray and the edges of the cells. We use x and y as pointers to the current cell at the current level l. Therefore, x and y always take values of the grid coordinates, using only integer arithmetic.
Each cell is represented by the coordinates of its southwest corner, and its edges have a length of $2^l$, where $l$ is the level of the cell. Assuming the ray has positive X and Y directions, its northeast corner serves as the midpoint between its north and east adjacent neighbors.
The multiscale traversal is driven by the sign of a decision variable d, which functions as a midpoint value at the current level of the traversal. According to the sign of d, a step is taken either horizontally or vertically to the next pixel. However, here a step may include a change of level, down or up.
A step consists of an update of x or y by $L_l$, which is a cell's edge length at level $l$. Thus, when we step up or down the levels, the current edge length needs to be updated:

$$L_l = 2^l. \qquad (6)$$
Similarly, the values that update the midpoint are no longer constants, but a function of the current level:

$$\Delta_h(l) = a L_l, \qquad (7)$$
$$\Delta_v(l) = b L_l. \qquad (8)$$
Since the traversal is top-down, it is guaranteed that every change of levels increments or decrements $l$ by one. Thus, from Equation 6, the edge length can be updated easily:

$$L_{l \pm 1} = 2^{\pm 1} L_l,$$
and from Equation 7 and Equation 8:

$$\Delta_h(l \pm 1) = 2^{\pm 1} \Delta_h(l), \qquad \Delta_v(l \pm 1) = 2^{\pm 1} \Delta_v(l).$$
The decision variable and the above values are represented as fixed-point integers, which allows us to apply a bit shift operation instead of a multiplication or a division by two. The ray equation coefficients a and b are normalized in order to guarantee that the range of values taken by the decision variable is small enough to allow a fixed-point representation with no bit overflows.
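The shift-based level updates can be sketched as follows; the number of fractional bits and the helper names are illustrative assumptions, not values from the paper.

```python
# Fixed-point level updates: moving down or up a level halves or doubles
# the edge length and the midpoint increments by a single bit shift.
FRAC = 16                                   # fractional bits of the fixed-point format

def to_fixed(v):
    """Convert a real value to fixed-point with FRAC fractional bits."""
    return int(round(v * (1 << FRAC)))

def step_down(L, dh, dv):
    """One level down: halve edge length and increments (right shift)."""
    return L >> 1, dh >> 1, dv >> 1

def step_up(L, dh, dv):
    """One level up: double edge length and increments (left shift)."""
    return L << 1, dh << 1, dv << 1
```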
The multiscale traversal consists of three types of steps:
1. a step within the same quadrant
Figure 4: Stepping to an adjacent cell in the pyramid structure.
2. a step to a higher level
3. a step down to a lower level
The simplest type is the first, since such steps move between adjacent cells of the same level. The other two require moving the midpoint to a new location according to the change of level. From the current cell coordinates it is possible to detect when the ray leaves to a higher level; see for example in Figure 4(a) the step from A to B, where the midpoint must be shifted forward to the corner of the coarser cell. A step down to a lower level is illustrated in Figure 4(b) (the step from B down to C): the midpoint has to be shifted back half a step. In both cases the midpoint value is updated by the appropriate $\Delta_h(l)$ and $\Delta_v(l)$ increments, with respect to the direction of the ray and the new location of the midpoint.
5 Sampling and Aliasing
One prominent concern in the rendition of photorealistic images is to reduce aliasing artifacts caused by the inherent point sampling of the ray casting process. According to the Shannon sampling theorem, an image may be correctly reconstructed if and only if the sampling rate is at least twice the highest frequency present in the signal. Casting one ray per pixel is too low a sampling rate for dealing with a detailed and noisy photo of a terrain; in other words, the photo contains frequencies much higher than the sampling rate. A common remedy is to supersample, that is, to increase the sampling rate by casting several rays per pixel and averaging the results. However, supersampling directly increases the cost of rendering. Alternatively, area sampling can be applied: a pixel may be considered to be a small window looking onto the terrain. The projection of the pixel on the terrain defines the pixel footprint (see Figure 5). The average color covered by the footprint quadrilateral area is assigned to the pixel. The area sampling technique eliminates the aliasing, but it is computationally very expensive.
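The supersampling remedy can be sketched as follows; `supersample` and the `cast_ray` callback are hypothetical names standing in for the renderer's per-ray color lookup.

```python
# Supersampling: cast an s x s regular grid of rays inside the pixel and
# average the returned colors, at s*s times the cost of one ray.
def supersample(cast_ray, px, py, s=2):
    total = 0.0
    for i in range(s):
        for j in range(s):
            u = px + (i + 0.5) / s          # sub-pixel sample positions
            v = py + (j + 0.5) / s
            total += cast_ray(u, v)
    return total / (s * s)
```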
Since our primary goal is speeding up the rendering process, we have favored a solution that uses more space and precomputation time but incurs no time penalty during the generation of the images. In a precomputation, the high frequencies of the photo are filtered away to leave a smoother, blurred image from which it is possible to reconstruct an aliasing-free image with one sample per pixel. However, due to the perspective mapping, the pixel sampling rate corresponds to different sample rates of the photo maps, or, in other words, different sizes of footprints [7, 9]. Instead of explicitly computing the sample frequency or the footprint size, it is possible to approximate the sample frequency in terms of the distance between consecutive samples (see Figure 5). Thus, each ray $i$, representing the sample at pixel $i$, needs to filter an area around the point where it hits the terrain. The filtered area provides a reasonable approximation of the pixel's footprint.

Figure 5: The footprint is determined by the distance, in object space, between two consecutive hits.
A discrete scale of filters generates a scale of filtered photo maps, where the filters are specified by the size of their support, defined in the frequency domain. The scale of filtered photos is precomputed and stored for fast access, and can be seen as a special case of the precomputed summed-area tables [4]. For large values of the filter support, the filtered map is very blurred and adjacent pixels have very similar values. For those blurred maps it is possible to use photo maps of reduced resolutions and save space. This leads to a hierarchy of filtered maps, where each map corresponds to a discrete value of the filter support. To avoid noticeable transitions between hierarchy levels it is possible to interpolate the values of adjacent levels, similarly to the mip-map technique for texture mapping [15].
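Precomputing such a hierarchy can be sketched with a simple 2x box filter per level, a stand-in (not the paper's frequency-domain filters) that mirrors the mip-map construction the text compares it to.

```python
# Sketch of a hierarchy of filtered photo maps: each level is a 2x-reduced,
# box-filtered copy of the previous one.
def filtered_hierarchy(photo):
    """photo: 2^N x 2^N list of lists of gray values."""
    maps = [photo]
    while len(maps[-1]) > 1:
        p = maps[-1]
        n = len(p) // 2
        maps.append([[(p[2*j][2*i]   + p[2*j][2*i+1] +
                       p[2*j+1][2*i] + p[2*j+1][2*i+1]) / 4.0
                      for i in range(n)] for j in range(n)])
    return maps
```

At render time a ray would pick the level whose filter support best matches its estimated footprint, optionally interpolating between two adjacent levels.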
Another artifact occurs when the viewpoint approaches the ground and the size of a voxel becomes larger than that of a pixel. This causes voxel oversampling by adjacent rays, which produces an unpleasant "blocky" pattern of pixels. To overcome this artifact, an exact hit location is needed, from which a weighted average of the colors of the four nearest voxels is obtained. However, such a solution is very expensive due to the extra accesses to the photo maps and the cost of the averaging calculation [11]. Instead, the original photo map can be scaled up by interpolation, and the new larger and smoother map can be used. Those high resolution photo maps are needed only for particular areas of interest, where the user approaches close to the ground.
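The weighted average of the four nearest voxels that the text deems too expensive per ray is ordinary bilinear interpolation, sketched here for reference (the function name is an assumption; `photo` is indexed as `photo[row][col]`).

```python
# Bilinear interpolation of the four voxels surrounding the exact hit
# location (x, y), weighted by the fractional offsets.
def bilinear(photo, x, y):
    i, j = int(x), int(y)
    fx, fy = x - i, y - j                   # fractional position inside the cell
    return ((1 - fx) * (1 - fy) * photo[j][i] +
            fx       * (1 - fy) * photo[j][i + 1] +
            (1 - fx) * fy       * photo[j + 1][i] +
            fx       * fy       * photo[j + 1][i + 1])
```

Pre-scaling the photo map by this interpolation, as the text proposes, pays this cost once offline instead of at every ray hit.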
6 Results
Figure 6 was generated on an Iris Indigo in 3.1 seconds, and in 5 seconds on a Personal Iris. The resolution of the image is 512x512 and the dataset is a 1024x1024 DTM. Synthetic voxel-based objects were placed above a checkerboard-textured synthetic terrain (see Figure 7) to emphasize the hidden part removal and to test the backtracking effect on the top-down search.
The quality of a single frame does not indicate the quality of an animation sequence running in real time. We have tested our approach by displaying a movie of precomputed images: sequences of 256x256 images displayed in real time exhibited no noticeable temporal aliasing. However, there is a trade-off between temporal aliasing and the softness of the dataset colors. Clearly, using blurred photo maps will cause no aliasing artifacts; thus a good tuning of the prefiltering process is essential for satisfactory results. We have also recorded animation sequences on an NTSC videotape; however, the results suffer from minor artifacts due to the low quality of the NTSC standard.
Figure 6: A 1024x1024 terrain data rendered into a 512x512 image.
7 Conclusions and Future work
We have presented a high speed algorithm to display photorealistic images based on a top-down pyramid traversal technique. The traversal is based on the midpoint technique, and it uses no multiplications or floating point arithmetic. Spatial and temporal aliasing are avoided by using a hierarchy of prefiltered datasets: a ray hitting the terrain samples the area seen from the pixel by a look-up into the hierarchy. This enables the use of simple and fast point sampling rather than the time consuming solutions of supersampling in image space or in object space.
The photo maps can be enhanced by adding shadows to the terrain. This is done by casting light rays from the light source towards the terrain using the same rendering algorithm, but instead of coloring the source pixels, the target voxels are highlighted, leaving voxels that are not hit in the dark. We are currently working on special effects such as fog, clouds, and haze, which will improve the realism of the images.
Since our rendering algorithm is hardware-independent and can run on any machine, it can easily be ported to a multiprocessor workstation. Indeed, based on this work, a comprehensive voxel-based terrain rendering system has been developed at Tiltan Engineering Systems Ltd. The system runs on a 32-processor machine (IBM PVS), where the generation of frames has reached the expected real-time rates.
8 Acknowledgments
We thank Silicon Graphics Israel for supporting this work with their equipment. We thank Tiltan Engineering Systems and IBM Israel, in particular Eran Rich and Micha Aharon, for helping us to produce the video animations. We also wish to acknowledge the careful review and helpful comments of Roni Yagel.
Figure 7: Synthetic objects placed on a checkerboard-mapped terrain.
References
1. J. R. Van Aken and M. Novak. Curve-drawing algorithms for raster displays. ACM Transactions on Graphics, 4(2):147-169, April 1985.
2. D. Cohen and A. Kaufman. 3D scan-conversion algorithms for linear and quadratic objects. Volume Visualization, pages 280-301, 1990.
3. S. Coquillart and M. Gangnet. Shaded display of digital maps. IEEE Computer Graphics and Applications, pages 35-42, July 1984.
4. F. Crow. Summed-area tables for texture mapping. In Computer Graphics, volume 18(3), July 1984.
5. Evans and Sutherland Computer Corporation. ESIG-4000 technical overview. Technical report, 600 Komas Drive, Salt Lake City, UT 84108.
6. J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, Reading, MA, 1990.
7. B. Geymayer, M. Prantl, H. Muller-Seelich, and B. Tabatabai. Animation of landscapes using satellite imagery. EUROGRAPHICS '91, pages 437-446, September 1991.
8. Frederik W. Jansen. Data structures for ray tracing. In L. R. A. Kessener, F. J. Peters, and M. L. P. van Lierop, editors, Data Structures for Raster Graphics, pages 57-73, Netherlands, 1986. Springer-Verlag.
9. K. Kaneda, F. Kato, E. Nakamae, T. Nishita, H. Tanaka, and T. Noguchi. Three dimensional terrain modeling and display for environmental assessment. In Computer Graphics, volume 23(3), pages 207-214, July 1989.
10. A. Kaufman, D. Cohen, and R. Yagel. Volume graphics. To appear in IEEE Computer, July 1993.
11. F. Kenton Musgrave. Grid tracing: Fast ray tracing for height fields. Technical report, Department of Mathematics, Yale University, December 1991.
12. D. W. Paglieroni and S. M. Petersen. Parametric height field ray tracing. In Graphics Interface '92, pages 192-200, 1992.
13. H. Samet. Applications of Spatial Data Structures. Addison-Wesley, 1989.
14. J. P. Thirion. Realistic 3D simulation of shapes and shadows for image processing. CVGIP: Graphics Models and Image Processing, 54(1):82-90, January 1992.
15. L. Williams. Pyramidal parametrics. In Computer Graphics, volume 17(3), pages 1-11, 1983.
16. J. Wright and J. Hsieh. A voxel-based, forward projection algorithm for rendering surface and volumetric data. Proceedings of Visualization '92, 1992.
17. Roni Yagel, Daniel Cohen, and Arie Kaufman. Discrete ray tracing. IEEE Computer Graphics and Applications, 12(5):19-28, September 1992.