
Incremental Mesh-Based Integration of Registered Range Images: Robust to

Registration Error and Scanning Noise

Hong Zhou1, Yonghuai Liu1, and Longzhuang Li2

1 Department of Computer Science, University of Wales, Aberystwyth, Ceredigion SY23 3DB, UK

2 Department of Computer Science, Texas A&M University, Corpus Christi, TX 78412, USA

Abstract. Existing integration algorithms often assume that the registration error of neighbouring views is an order of magnitude less than the measurement error [3]. This assumption is so restrictive that automatic registration algorithms can hardly meet it. In this paper, we develop a novel integration algorithm that is robust to both large registration errors and heavy scanning noise. Firstly, a pre-processing procedure is developed to automatically triangulate a single range image and remove noisy triangles. Secondly, we shift points along their orientations by the projection of their correspondence vectors, so that corresponding points approach each other and large registration errors are compensated. Thirdly, overlapping areas between neighbouring views are detected and integrated, considering the confidence of triangles, which is a function of the including angle between the centroid vector of a triangle and its normal vector. The outcome of integration is a set of disconnected triangles, where gaps are caused by the removal of overlapping triangles with low confidence. Fourthly, the disconnected triangles are connected based on the principle of maximizing interior angles. Since the created triangular mesh is not necessarily smooth, we finally minimize the weighted orientation variation. Experimental results based on real images show that the proposed algorithm significantly outperforms an existing algorithm and is robust to both registration error and scanning noise.

1 Introduction

Automatic 3D object model reconstruction from multiple registered range images is popular today in applications ranging from object modelling to computer graphics [1, 2, 9]. 3D object modelling usually involves the following four stages: (1) Scan the object surface from various viewpoints; (2) Register the views; (3) Integrate the views; and finally (4) Render the integrated data.

Data acquisition involves scanning the surface of 3D objects from multiple viewpoints using laser range scanners like the Minolta Vivid 700. The data used in this paper were downloaded from the range image database currently hosted by the Signal Analysis and Machine Perception Laboratory at Ohio State University. Each range image has a resolution of 200 by 200 and is described in the local, laser-range-scanner-centred coordinate system. These range images therefore have to be first aligned into a global coordinate system. For this purpose, the registration algorithm [5] was employed. Through alignment, the transformations between all pairs of views are obtained. Integration then merges the registered data from multiple views so that a single surface representation is created in the global coordinate system. Finally, the rendering stage builds a watertight and smooth surface based on the integrated data.

P.J. Narayanan et al. (Eds.): ACCV 2006, LNCS 3851, pp. 958–968, 2006. © Springer-Verlag Berlin Heidelberg 2006

Existing integration algorithms can be classified into the following three main categories:

1. Mesh integration [8, 9]: The original data from each view are first built into a mesh (normally a triangulation). Doing so is justified by the fact that such methods can make full use of the topological and geometrical information associated with each mesh (e.g., point neighbourhood, curvature, and surface orientation). Overlapping meshes are detected and discarded, and the remaining meshes are connected to build the whole surface. Mesh-based integration is powerful at discarding noisy mesh, is stable at detecting overlapping areas by considering the topological and geometrical information in the mesh, and can retain surface details. However, the existing methods in this class often cannot handle data with large registration errors very well;

2. Volume-based integration [2, 3]: It combines overlapping-area detection and surface reconstruction by using implicit volumetric reconstruction methods. It is applicable to objects with arbitrary topology, but it introduces a lot of noisy mesh when the sampling noise is heavy. On the other hand, it cannot provide an exact surface topology, due to the interpolation that approximates the intersection between the implicit surface and voxel edges; and finally,

3. Point-based integration [7]: The Cartesian 3D space is first decomposed into multiple equally sized voxels, and all points that fall into the same voxel are then integrated into a consensus point, without much consideration of the topology between points. The main difference between volume- and point-based integration lies in that, while the former applies the traditional marching cubes algorithm to extract a triangular mesh, the latter considers the intersection between voxel edges and a plane perpendicular to the orientation at the consensus point. This method may fail when the registration error is large or when the density of points in 3D space changes significantly. In addition, the voxel size is difficult to decide.

All these methods succeed to varying degrees in different situations. Due to sampling noise and unpredictable registration errors, the final reconstructed surface is often deformed and includes artefacts such as holes and wrongly connected edges. Hence, an algorithm that is tolerant of both large registration and scanning errors remains to be developed.

So far, there is no universally stable registration algorithm that can always register any range data accurately. Moreover, the registration error is likely to accumulate continuously as new images are added [1]. In this case, integration algorithms should possess a mechanism to compensate these registration errors. For this purpose, we shift the points along their orientations. The magnitude of the shift is determined by the projection of their correspondence vectors onto their orientations. The consequence of the shift operation is to let point correspondences approach each other, so that large registration errors are compensated. To deal with noise, three subroutines are developed: discontinuity-preserving triangulation, removal of triangles with single neighbours, and smoothing of the generated triangular mesh using a newly developed Gaussian filter. The first two subroutines are used for pre-processing and the last for post-processing. A comparative study based on real images has shown that the proposed algorithm is promising for automatic 3D model reconstruction.

The rest of this paper is structured as follows: Section 2 describes how to triangulate a single image, Section 3 describes how to integrate the registered range images, and Section 4 describes how to smooth the generated mesh. Finally, the experimental results are presented and conclusions are drawn in Section 5.

2 Single Range Image Triangulation

Most laser range scanners employ a polar coordinate system, and the viewing volume is restricted by the horizontal and vertical maximal angles. The range measurements are stored as a 2D grid, from which the 3D coordinates of sampled object surface points can be recovered when the calibration parameters are known. For a more accurate estimation of point orientations, the scanned point data are first triangulated. For four neighbouring points, there are six possible configurations for triangulation (Figure 1).

Fig. 1. Six possible configurations for the creation of triangles from four neighbouring points

When two neighbouring range measurements differ by more than a threshold, there is a step discontinuity. In this case, it is meaningless to join these two points directly with regard to the representation of surface geometry. The threshold is determined by the surface geometry and the sampling resolution; however, it is often difficult to determine. Here we develop a method to automatically determine the threshold from a given raster range image:

1. Find all the non-boundary points p(x, y, z). (Definition: if the eight neighbours of a point p are all non-background, p is considered a non-boundary point.) For each non-boundary point and its three neighbouring non-boundary points, two triangles are created using the shorter diagonal. As a consequence of this operation, a set of triangles is generated without considering step discontinuities;

2. Calculate the dot product of the triangle normals and the normalized lines of sight toward the triangle centroids. Find the triangles for which the including angle between the normal and the line of sight toward the centroid lies in the range [160°, 180°]. Calculate the mean M of the lengths of the longest edges of those triangles. This way of determining the threshold follows the range scanner's working mechanism: the measurement accuracy depends on the incident angle;

3. Multiply the mean M by a constant C: D = C * M (C = 1.4 in this paper). The constant increases the distance threshold and thus guarantees that some accurate points on the boundary can be included in the resulting mesh.
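The three threshold-estimation steps above can be sketched as follows. This is an illustrative Python fragment, not the authors' code: the function name, the origin-centred viewpoint, and the input format (a list of 3x3 vertex arrays) are our assumptions.

```python
import numpy as np

def distance_threshold(triangles, C=1.4):
    """Estimate the step-discontinuity threshold D = C * M, where M is
    the mean longest-edge length over triangles whose normal makes an
    including angle in [160, 180] degrees with the line of sight.
    The scanner viewpoint is assumed at the origin."""
    longest = []
    for tri in triangles:
        tri = np.asarray(tri, dtype=float)
        # Triangle normal from two edge vectors.
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        n /= np.linalg.norm(n)
        # Normalized line of sight from the viewpoint to the centroid.
        c = tri.mean(axis=0)
        v = c / np.linalg.norm(c)
        # Including angle between the normal and the line of sight.
        ang = np.degrees(np.arccos(np.clip(np.dot(n, v), -1.0, 1.0)))
        if 160.0 <= ang <= 180.0:
            edges = [np.linalg.norm(tri[i] - tri[(i + 1) % 3]) for i in range(3)]
            longest.append(max(edges))
    M = float(np.mean(longest)) if longest else 0.0
    return C * M
```

The angle filter keeps only triangles that are viewed nearly head-on, so the estimated edge length M is not inflated by obliquely sampled, stretched triangles.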

After the distance threshold D has been calculated, we re-triangulate the points from the raster image file. For each non-boundary point and its three neighbouring points: if two of the three neighbouring points are invalid, no triangle is created. If one of the three neighbouring points is invalid, we compute the interpoint distances; if all three interpoint distances are smaller than D, a triangle is created. If none of the three neighbouring points is invalid, we compute just the distances between diagonal points, since the distance d_n between two neighbouring points is in general smaller than the distance d_d between two diagonal points: if d_d is smaller than a threshold, then d_n must be smaller than that threshold too. Thus, doing so does not lose any triangles for the representation of surface details but gains computational efficiency. If only one of the two diagonal distances is smaller than D, a single triangle is created in one of the last four configurations in Figure 1. If both distances are smaller than D, two triangles are created with the common edge being the shorter diagonal, as shown in the first two configurations in Figure 1. Otherwise, no triangle is created. Consequently, a more accurate triangular mesh that reflects the surface geometry is constructed.
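The diagonal-based decision for one 2x2 grid cell can be sketched as follows (an illustrative Python fragment under the all-points-valid case; which of the four single-triangle configurations is actually chosen also depends on which points are valid, so the single-triangle branch here picks one representative):

```python
import numpy as np

def triangulate_quad(p00, p01, p10, p11, D):
    """Decide the triangles for one 2x2 grid cell by testing only the
    two diagonal distances against D (neighbouring points are assumed
    closer than diagonal ones). Returns a list of vertex triples."""
    d_main = np.linalg.norm(np.subtract(p00, p11))   # diagonal p00-p11
    d_anti = np.linalg.norm(np.subtract(p01, p10))   # diagonal p01-p10
    if d_main < D and d_anti < D:
        # Both diagonals valid: split the cell along the shorter one.
        if d_main <= d_anti:
            return [(p00, p10, p11), (p00, p11, p01)]
        return [(p01, p00, p10), (p01, p10, p11)]
    if d_main < D:
        return [(p00, p10, p11)]   # one triangle on the valid diagonal
    if d_anti < D:
        return [(p01, p00, p10)]
    return []                      # step discontinuity: no triangle
```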

In the triangular mesh built from a single range image, some points lie in isolated or boundary triangles that usually have only one neighbouring triangle. These points cause two troubles for the integration process: first, they tend to be noisy and thus distort the shape of the object; second, the orientation at such points is difficult to estimate. We therefore assume that a triangle with more than one neighbour is more accurate and stable than a triangle with only one neighbour, and propose an iterative procedure to remove the triangles with only one neighbour. Then the orientation N at all points of the mesh can be calculated based on their areas and neighbouring relations [7].
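The iterative removal of weakly connected triangles can be sketched as follows (an illustrative fragment over vertex-index triples, not the authors' implementation; here a "neighbour" is a triangle sharing an edge):

```python
from collections import Counter

def prune_single_neighbour(triangles):
    """Iteratively remove triangles that have at most one neighbouring
    triangle, until no such triangle remains."""
    tris = list(triangles)
    changed = True
    while changed:
        changed = False
        # Count how many triangles use each undirected edge.
        edge_count = Counter()
        for t in tris:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                edge_count[frozenset(e)] += 1
        kept = []
        for t in tris:
            # Neighbours = edges shared with at least one other triangle.
            neighbours = sum(
                1 for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))
                if edge_count[frozenset(e)] > 1)
            if neighbours > 1:
                kept.append(t)
            else:
                changed = True
        tris = kept
    return tris
```

Note that the procedure is iterative because removing a boundary triangle can turn its former neighbour into a single-neighbour triangle.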

3 Integration of Multiple Registered Range Images

Integration of multiple registered range images consists of three main steps: overlapping area detection, shift along the normal, and overlapping triangle detection and removal together with surface reconstruction, which are detailed as follows.


3.1 Front Face Checking and Overlapping Area Detection

When one range image R is transformed into the coordinate system in which the other range image R_old is described and becomes R_new, the two can be merged to obtain a single surface. Firstly, we check whether or not the triangular meshes in R_new are facing the viewpoint from which the range image R_old was captured. If the dot product of a triangle's normal with the ray from the viewpoint to the triangle's centroid is negative, we say the triangle in R_new is "front facing". The triangles in R_new that overlap with those in R_old must be among these front-facing ones. Secondly, because every new range image can supply some new surface-geometry information for the existing range images, non-overlapping and overlapping areas of the front-facing triangles need to be further detected. If the distance between the centroid of a triangle in one range image and the closest centroid of a triangle in the other is smaller than a threshold, the triangle is added to the overlapping triangle set S_old-overlapping or S_new-overlapping; otherwise, it is put into the non-overlapping triangle set S_old-non-overlapping or S_new-non-overlapping. The triangles in the non-overlapping sets S_old-non-overlapping and S_new-non-overlapping are left untouched and directly added to form the new surface, as new geometrical information supplied by the two range images. In this paper, the threshold was set to D/2, where D was estimated in Section 2. An example of the detection of the overlapping area between two registered range images is illustrated in Figure 2.
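The two checks above can be sketched as follows (illustrative Python; the function names and the brute-force closest-centroid search are ours, and the sign of the normal depends on the vertex winding):

```python
import numpy as np

def front_facing(tri, viewpoint):
    """A triangle is 'front facing' with respect to a viewpoint if the
    dot product of its normal with the ray from the viewpoint to its
    centroid is negative."""
    tri = np.asarray(tri, dtype=float)
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    c = tri.mean(axis=0)
    return np.dot(n, c - np.asarray(viewpoint, dtype=float)) < 0

def split_overlap(cent_new, cent_old, threshold):
    """Classify new-view triangle centroids as overlapping or not by
    the distance to the closest old-view centroid (threshold = D/2 in
    the paper). Returns (overlapping_idx, non_overlapping_idx)."""
    cent_new = np.asarray(cent_new, dtype=float)
    cent_old = np.asarray(cent_old, dtype=float)
    over, non = [], []
    for i, c in enumerate(cent_new):
        dmin = np.min(np.linalg.norm(cent_old - c, axis=1))
        (over if dmin < threshold else non).append(i)
    return over, non
```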

Fig. 2. Left: The registration result of teletubbydeg0 (no colour) and teletubbydeg20 (green). Right: Their overlapping area.

3.2 Shift Along Normal

Because the fusion algorithm described here utilizes only the original points from all the range images, points from two range images may be connected and triangulated together. In this case, the accuracy of registration has a remarkable effect on the final fusion result. Inaccurate registration causes the real overlapping areas between two registered range images to stay apart while, on the contrary, some non-overlapping areas come close to each other. As a result, false connections and gaps are often created, as demonstrated in Figure 8 (left).

Fig. 3. Point integration along the normal vector

To deal with large registration errors, we propose a novel algorithm, detailed as follows. Since the triangles in S_old-overlapping are of higher quality and the number of triangles in S_new-overlapping is smaller, the triangles in S_old-overlapping are used as the reference. For each point p_i,new in S_new-overlapping, the closest point p_i,old in S_old-overlapping is identified. Then the dot product d between the vector Δp_i = p_i,old − p_i,new and the normal vector N_i,new at point p_i,new is computed. Finally, we shift p_i,new along N_i,new toward p_i,old using the following formula (Figure 3):

p'_i,new = p_i,new + d N_i,new    (1)

so that the shifted point p'_i,new is closer to p_i,old. If the vector Δp_i points in the same direction as the normal N_i,new at p_i,new, then d is positive; otherwise, it is negative.

Note that we change only the point positions; the triangulation relationships among the points in S_new-overlapping are kept intact. Owing to the position shifts, self-intersecting triangular meshes may emerge. In this case, we shift the point position by a minimum distance so that the original topology in S_new-overlapping is unchanged. The resulting triangular mesh is called S_shifted-new-overlapping. An example of integrating registered range images with a large registration error is shown in Figure 8.
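Eq. (1) can be sketched directly (an illustrative Python fragment; the function name is ours, and the normal is assumed to be a unit vector):

```python
import numpy as np

def shift_along_normal(p_new, n_new, p_old):
    """Shift a new-view point along its unit normal by the projection
    of its correspondence vector, Eq. (1):
        d = (p_old - p_new) . n_new,  p'_new = p_new + d * n_new,
    so the shifted point moves toward its closest old-view point."""
    p_new = np.asarray(p_new, dtype=float)
    n_new = np.asarray(n_new, dtype=float)
    d = np.dot(np.asarray(p_old, dtype=float) - p_new, n_new)
    return p_new + d * n_new
```

Note that the shift is constrained to the normal direction: only the component of the correspondence vector along N_i,new is applied, which keeps the point on its estimated line of sight through the surface.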

3.3 Overlapping Triangles’ Detection and Removal and SurfaceReconstruction

To detect the overlapping triangles between S_old-overlapping and S_shifted-new-overlapping, we consider only the x and y coordinates of points. Owing to the "front facing" detection in Section 3.1, no two triangles from S_old-overlapping will occupy the same space on the xy plane. For each triangle T_old in S_old-overlapping, we first project it onto the xy plane and then compute its circum-circle CC_old. For any triangle in S_shifted-new-overlapping, if one of its three vertices or its centroid lies in CC_old, then that triangle is considered to overlap T_old and belongs to the set Tset_new. This approach can find most of the intersecting triangles. In some cases, intersecting triangles are missed, but they do not affect the final result, since either the number of such triangles or their intersection area is small. The purpose of overlapping-triangle detection is to find the relative relationship between the triangles in S_shifted-new-overlapping and S_old-overlapping; the actual intersection information between these triangles is not needed by our integration method and thus is not computed. Consequently, the computational load can be cut down.

Fig. 4. Non-overlapping mesh between teletubbydeg0 and transformed teletubbydeg20 (left) and final mesh (right)
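The circum-circle test can be sketched as follows (illustrative Python over 3D vertex lists; only x and y are used, as in the text, and the function names are ours):

```python
import numpy as np

def circumcircle_xy(tri):
    """Circum-circle (centre, radius) of a triangle projected onto the
    xy plane, via the standard closed-form circumcentre formula."""
    (ax, ay), (bx, by), (cx, cy) = [(p[0], p[1]) for p in tri]
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = np.hypot(ax - ux, ay - uy)
    return (ux, uy), r

def overlaps(tri_old, tri_new):
    """tri_new is considered to overlap tri_old if any of its vertices
    or its centroid falls inside tri_old's circum-circle (xy only)."""
    (ux, uy), r = circumcircle_xy(tri_old)
    pts = [(p[0], p[1]) for p in tri_new]
    pts.append((sum(p[0] for p in pts) / 3.0,
                sum(p[1] for p in pts) / 3.0))
    return any(np.hypot(px - ux, py - uy) < r for px, py in pts)
```

As the text notes, this is a conservative proxy for true triangle intersection: it can miss some intersecting pairs, but it avoids computing exact intersections.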

When all the overlapping triangles Tset_new and T_old have been found, to remove the redundancy we have to delete either the triangle T_old or all the triangles in Tset_new. To keep the best measurement, we define a confidence for the accuracy of each triangle as follows. The including angle θ between the normal of the triangle and the line of sight toward its centroid is first computed. The length l of the vector from the origin to the centroid of the triangle is then computed. Finally, the confidence of a triangle is computed as w = 1/(θl). The larger the angle θ and the smaller the length l, the more confidence in the triangle.
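The confidence measure can be sketched as follows (illustrative Python; the viewpoint is assumed at the origin, and the angle convention follows the formula w = 1/(θl) as printed above):

```python
import numpy as np

def triangle_confidence(tri):
    """Confidence w = 1/(theta * l) of a triangle, where theta is the
    including angle (radians) between the triangle normal and the line
    of sight toward its centroid, and l is the distance from the origin
    to the centroid."""
    tri = np.asarray(tri, dtype=float)
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n /= np.linalg.norm(n)
    c = tri.mean(axis=0)
    l = np.linalg.norm(c)
    v = c / l
    theta = np.arccos(np.clip(np.dot(n, v), -1.0, 1.0))
    return 1.0 / (theta * l)
```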

The following rule is developed to decide whether the triangles in Tset_new or the triangle T_old are kept. To this end, the average confidence of all the overlapping triangles in Tset_new is first computed. If the average is larger than the confidence of the triangle T_old, then the partial surface described by the triangles in Tset_new is more accurate and stable than that described by T_old. In this case, T_old is deleted from S_old-overlapping and the triangles in Tset_new are retained in S_shifted-new-overlapping, and vice versa. As a consequence of this operation, a set of non-overlapping triangles is left in S_old-overlapping and S_shifted-new-overlapping, as illustrated in Figure 4 (left).

The connection method in [8] is employed here to fill the gaps among the triangles in S_old-overlapping, S_shifted-new-overlapping, S_old-non-overlapping and S_new-non-overlapping. After filling all the gaps, the final reconstructed triangular mesh represents the surface. One example of the 2D triangular mesh of the teletubby is presented in Figure 4 (right).

4 Surface Smoothing Algorithm

The surface reconstructed in the last section is usually not smooth in areas where the real surface is smooth, mainly because of rapid changes in the orientation of the reconstructed surface, and because the estimation of surface orientation is often sensitive to the noise introduced by scanning, registration and integration. How to effectively combat noise on the surface mesh while preserving the desired features is thus an active area of research. To this end, two main approaches have been proposed: one is to adjust vertex positions so that the overall surface becomes smoother [10]; the other is to smooth the surface normals [11, 12]. Surface normals play a critical role in most of the proposed surface smoothing algorithms, since normals have a greater impact on the model's perceived quality. Therefore, features of a surface can be determined more easily using surface normals than using vertex positions.

For this purpose, we develop here a simple method to accurately estimate the surface orientation as follows. The normal of each vertex in the mesh is first calculated by averaging the normals of all the triangles that share the vertex, weighted by their areas [7]. This step of normal computation is different from that in Section 2: while the former may use points from two images, the latter uses points only from a single image. The neighbouring vertices of a vertex are all the other vertices of the triangles sharing that vertex.

If a surface is smooth, then the orientation of each vertex should be consistent with those of its neighbours. So the weighted orientation variation Σ_{i=1}^{M} Σ_{j=1}^{N} W_ij Δθ_ij should be minimal, where M is the number of vertices in the mesh and N is the number of neighbouring vertices of a vertex. For each vertex V_i and its neighbouring vertices V_i1, V_i2, ..., V_iN, Δθ_ij is the including angle between the mean normal N_i,mean = (1/N) Σ_{j=1}^{N} N_ij at V_i and the normal N_ij at V_ij, and W_ij is the weight of Δθ_ij.

To optimize W_ij, we apply the entropy maximization (EntMax) principle from statistical mechanics [4]. Thus, the following objective function is built to smooth the noisy mesh:

J = Σ_{i=1}^{M} Σ_{j=1}^{N} W_ij Δθ_ij − (−(1/β) Σ_{i=1}^{M} Σ_{j=1}^{N} W_ij ln W_ij).

Differentiating this objective function with respect to W_ij leads to:

∂J/∂W_ij = Δθ_ij + (1/β) ln W_ij + (1/β) W_ij (1/W_ij) = 0.

Thus, W_ij = exp(−β Δθ_ij − 1). Since exp(−1) is a constant factor in W_ij, after normalization W_ij can be expressed as W_ij = exp(−β Δθ_ij). Finally, the new orientation at vertex V_i is updated as a weighted sum of the N_ij: N_i,new = N_i + Σ_{j=1}^{N} W_ij N_ij, where the parameter β controls how smooth the final surface is. The smaller the parameter β, the smoother the final surface (β = 0.005 and iteration number = 5).
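One possible reading of this smoothing step can be sketched as follows (illustrative Python; the adjacency format, the renormalization of the updated normal, and the simultaneous update of all vertices per iteration are our assumptions):

```python
import numpy as np

def smooth_normals(normals, neighbours, beta=0.005, iters=5):
    """For each vertex i: dtheta_ij is the angle between the mean
    neighbour normal and each neighbour normal N_ij; weights are
    W_ij = exp(-beta * dtheta_ij); the normal is then updated as
    N_i <- normalize(N_i + sum_j W_ij N_ij). beta and the iteration
    count follow the paper (0.005 and 5)."""
    N = np.asarray(normals, dtype=float).copy()
    for _ in range(iters):
        new = N.copy()
        for i, nb in enumerate(neighbours):
            Nij = N[nb]
            mean = Nij.mean(axis=0)
            mean /= np.linalg.norm(mean)
            # Including angles between the mean normal and each N_ij.
            dtheta = np.arccos(np.clip(Nij @ mean, -1.0, 1.0))
            W = np.exp(-beta * dtheta)
            v = N[i] + (W[:, None] * Nij).sum(axis=0)
            new[i] = v / np.linalg.norm(v)
        N = new
    return N
```

With small β, the weights stay close to 1 even for divergent neighbour normals, so the update behaves like a broad (strongly smoothing) Gaussian average; larger β down-weights divergent neighbours and preserves sharper features.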

Fig. 5. Integration results of 3 views of lobster and 7 views of doughboy before (odd columns) and after (even columns) using our smoothing algorithm

Due to the more accurate estimation of surface orientation, the finally reconstructed surface becomes smoother, as demonstrated in Figure 5: after smoothing, fewer artefacts appear in the abdomen of the lobster and in the chest, mouth and hat of the doughboy. Meanwhile, geometric features such as corners and crease edges are desirably preserved.


5 Experimental Results and Conclusions

To measure the accuracy of the original and improved integration algorithms, we defined the integration error as the average distance between the vertices of the remaining triangles in S_new-overlapping and their closest vertices in S_old-overlapping. If the registration of two range images is accurate, then the remaining triangles in S_new-overlapping should be close to those in S_old-overlapping, leading to a small integration error. The experimental results for 6 objects with 44 images in total are presented in Figures 6, 7, and 8 and Table 1.
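The integration-error metric defined above can be sketched as follows (an illustrative brute-force Python fragment; the function name is ours):

```python
import numpy as np

def integration_error(verts_new, verts_old):
    """Average distance from each remaining vertex in S_new-overlapping
    to its closest vertex in S_old-overlapping."""
    verts_new = np.asarray(verts_new, dtype=float)
    verts_old = np.asarray(verts_old, dtype=float)
    dists = [np.min(np.linalg.norm(verts_old - p, axis=1)) for p in verts_new]
    return float(np.mean(dists))
```

In practice a spatial index (e.g. a k-d tree) would replace the O(nm) closest-vertex search, but the metric itself is just this average of nearest-neighbour distances.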

Fig. 6. Integration results using our method. Left: bird (13 views). Second: bunny (6 views). Third: doughboy (7 views). Right: frog (7 views).

Fig. 7. Integration results with small (left two) and large (right two) registration errors for different algorithms. Odd columns: the integration method [8]. Even columns: our method.

From Figures 6, 7, and 8 and Table 1, it can be seen that our algorithm consistently outperforms the algorithm proposed in [8], in the sense that in all cases the integration error is reduced and more accurate, smoother and watertight surfaces are reconstructed. When the registration error is small, our method produces results similar to the method [8], as demonstrated by Figure 7.

Fig. 8. Integration results with a large registration error for different algorithms. Left: the integration algorithm [8]. Right: our method.

Incremental Mesh-Based Integration of Registered Range Images 967

Table 1. Integration results using different algorithms and images. RE: registration error [5]. IE1: integration error of the method [8]. IE2: integration error of our method.

Image     Views  Points  Face   Vertex  RE(mm)  IE1(mm)  IE2(mm)
Bird      13     102163  53531  27132   0.40    1.08     0.74
Bunny     6      30816   24890  12530   0.30    0.87     0.64
Doughboy  7      53460   25704  12956   0.45    1.10     0.75
Frog      7      46075   30767  15532   0.45    1.16     0.68
Tubby     6      30608   31464  15982   0.32    0.88     0.65
Duck      5      57194   39740  20031   0.65    1.58     0.97

But when the registration error is relatively large, our method considerably outperforms the method [8], as demonstrated by Figures 7 and 8. In this case, the registration algorithm [5] calibrated the rotation angle of the camera motion from the tubby and duck images to be 15.96° and 18.76°, respectively, against an expectation of 20°, yielding a relative calibration error in rotation angle of as much as 20.2%. For the duck images, the average registration error is 0.65 mm. While the method [8] produced an integration error of 1.58 mm, our integration algorithm produced a corresponding error of 0.97 mm, a remarkable reduction of 38%. While the method [8] created many false connections and gaps, our method almost perfectly recovered all details of the duck's wing, eye and neck, as shown in Figure 8, and all details of the doughboy's hand, mouth and hat, as shown in Figure 7. The integration of the various views of any object took less than 30 minutes on a Pentium 4 computer; the larger the number and sizes of the images, the more time the integration requires.

The main reason our algorithm outperforms the method [8] is that we explicitly took into account both registration errors and scanning noise. Our integration method has the following characteristics: (1) it is able to deal with noisy mesh and compensate the registration error; (2) it smooths the surface efficiently through the use of a Gaussian filter; and finally, (3) it is an automatic process for range images registered under typical conditions, without any restrictive assumption [3]. The output is a watertight surface. In the future, we plan to register and integrate all the views simultaneously using a star network [6] to avoid the accumulation of registration error.

References

1. M. Andreetto, N. Brusco, G.M. Cortelazzo. Automatic 3D modelling of textured cultural heritage objects. IEEE Trans. Image Processing 13 (2004) 354-369.

2. B. Curless and M. Levoy. A volumetric method for building complex models from range images. Proc. SIGGRAPH, 1996, pp. 303-312.

3. A. Hilton, J. Illingworth. Geometric fusion for a hand-held 3D sensor. Machine Vision and Applications 12 (2000) 44-51.

4. E.T. Jaynes. Information theory and statistical mechanics. The Physical Review 106 (1957) 620-630.

5. Y. Liu, L. Li, and B. Wei. 3D shape matching using collinearity constraint. Proc. ICRA, 2004, pp. 2285-2290.

6. C. Oblonsek and N. Guid. A fast surface-based procedure for object reconstruction from 3-D scattered points. CVIU 69 (1998) 185-195.

7. S. Rusinkiewicz, O. Hall-Holt and M. Levoy. Real-time 3D model acquisition. Proc. SIGGRAPH, 2002, pp. 438-446.

8. Y. Sun, C. Dumont. Mesh-based integration of range and color images. Proc. of SPIE, Vol. 4051, 2000, pp. 110-117.

9. G. Turk and M. Levoy. Zippered polygon meshes from range images. Proc. SIGGRAPH, 1994, pp. 311-318.

10. G. Taubin. A signal processing approach for fair surface design. Proc. SIGGRAPH, 1995, pp. 351-358.

11. J. Vollmer, R. Mencl, and H. Muller. Improved Laplacian smoothing of noisy surface meshes. Proc. Eurographics, 1999, pp. 131-138.

12. S.S. Wong, K.L. Chan. Multi-view 3D model reconstruction: exploitation of color homogeneity in voxel mask. Proc. ICIG, 2004, pp. 146-149.