
3D Visualization Method for Maxillofacial Surgical Planning by Using X-ray and Photo Images

Young-In Kim*, Jung-Hyun Park**, Chang-Hun Kim*

Dept. of Computer Science & Engineering, Korea University*,

Dept. of Oral&Maxillofacial Surgery, Dental College, Yonsei University**

{yikim,chkim}@cgvr.korea.ac.kr*, [email protected]**

Abstract

This paper describes a prototype system that predicts human facial shape and visualizes realistic 3D images after maxillofacial surgery for patients with facial deformities, using three input images: a lateral X-ray, a lateral photo, and a frontal photo. The basic idea is to combine an accurate prediction of the facial-shape variation with a 3D visualization of the postsurgical facial shape. First, we extract the important parts of the face, contours and control points along the patient's profile, from the X-ray image. Second, when the surgeon specifies in the planning step the direction and magnitude of the movement of the maxillary bones (the upper and lower jawbones), the system generates the postsurgical image using a warping method and a predicting function that calculates the resulting movement of the soft tissue. The predicting function is based on a clinical study of 100 patients, and the warping method is implemented considering the anatomic relations of the facial soft tissue. Finally, we generate three-dimensional images of the postsurgical facial shape using a deformation-estimating method that predicts the 3D movement of the vertices on the face.

Keywords: Digital Surgery, Medical Visualization, Surgical Planning System


1 Introduction

A digital dental surgery planning system has many advantages: it can predict the postsurgical morphology and appearance of the human face, and it can address the problem of achieving a fair facial surface before the actual surgery is carried out. Early works [4,6,7] need expensive equipment and high computation cost because they require the complex and laborious steps of mapping the soft-tissue structure onto a 3D skull model constructed from 2D CT or MRI data. Moreover, these systems are not adequate for maxillofacial surgery, because subtle malformations strongly affect the appearance of a face and the systems are hard for surgeons to handle. Thus, typically, the facial surgeon draws the patient's predicted profile on an X-ray to give at least a 2D appearance of the future face. In this paper, we propose a facial surgery planning system that computes an accurate 2D lateral facial contour and a realistic 3D picture of the postsurgical shape.

Our system consists of two modules: a virtual surgery step and a 3D visualization step. In the virtual surgery step, we use a lateral X-ray of the patient and a predicting function to compute the predicted soft-tissue profile after maxillofacial surgery. In the 3D visualization step, we use two photos, frontal and lateral images of the patient, and a deformation-estimating method to generate the 3D appearance of the postsurgical facial shape. Figure 1 shows the overall procedure of our surgery prediction system.


Figure 1: Illustration of the system flow for maxillofacial surgery planning. a) Lateral photo and X-ray image of the presurgical face. b) 2D postsurgical image after maxillofacial surgical simulation. c) 3D postsurgical appearance.


The remainder of this paper is structured as follows. In Section 2 we describe the motivation of our research. Section 3 gives an overview of the system. Section 4 explains the principles of the predicting function used for virtual surgery. Section 5 describes the deformation-estimating method of the soft tissue for generating the 3D appearance. Finally, in Section 6 we demonstrate and discuss the proposed system with experimental results obtained from the X-ray image and the frontal and lateral facial photos.

2 Motivation

There has been much research in the field of facial surgery simulation. Most facial surgery simulation systems used today rely on 3D medical image data such as CT or MRI. These 3D data require high cost and complex preprocessing for surgical simulation. We therefore choose a planar medical image, the X-ray, which has traditionally been used in the clinical area. Early works such as [9] restricted themselves to generating 2D lateral images of malformed patients. We propose a system that requires only simple and inexpensive X-ray image data and generates the 3D appearance of the postsurgical facial shape.

3 System Overview

This section explains the different procedures, the data acquisition and preprocessing step, the virtual surgery step, and the 3D visualization step, shown in the chart of Figure 2. Our data sources consist of an X-ray image and frontal and lateral facial photos.

First of all, an initial facial contour is extracted from the X-ray image and control points are created on the lateral contour.

Secondly, we simulate the maxillofacial surgery, which computes the variations between the presurgical and postsurgical profiles. As a result, the postsurgical profile contour is created, and the lateral facial photo is warped to match the virtual surgery result.

Finally, we execute the third procedure, 3D postsurgical image generation. In our system, we individualize a generic facial model and predict the postsurgical facial images in the frontal view. Figure 2 shows the overall flow structure of our dental surgery planning system.

Figure 2: System overview.

3.1 Data Acquisition and Preprocessing

The first procedure of our virtual surgery system is the data acquisition and preprocessing step. In this step, we prepare two 2D wire-frame templates, composed of feature points with predefined relations, for the front and side views; we then extract the facial profile from the X-ray image and align the lateral facial photo with it.

Because the X-ray image maintains a 1:1 ratio of the surgery area, the lateral photo is transformed to fit the profile fixed on the X-ray. Figure 3 shows the result after extracting the malformed profile contours and adapting them. The images are aligned by rotation, translation, and scaling.
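The rotation, translation, and scaling alignment can be estimated from matched feature points on the photo and the X-ray profile. A minimal sketch of a least-squares similarity fit (Umeyama's method; the paper does not name its alignment algorithm, so this particular method is our assumption):

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t so that
    dst ~= s * R @ src_i + t in the least-squares sense (Umeyama)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                  # centered point sets
    cov = B.T @ A / len(src)                       # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given corresponding profile landmarks on the photo and the X-ray, the recovered transform maps the photo into the X-ray's 1:1 coordinate frame.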


Figure 3. Facial contour extraction. a) Extracted facial profile on the X-ray image. b) Profile adapted to the X-ray image and photo.

3.2 Virtual Surgery Operation

This section explains the virtual operation process necessary to accomplish facial surgery simulations. First, we cut the malformed mandible contour and move it to its corrected position. Next, the variations of the control points on the facial soft-tissue contours between presurgery and postsurgery are computed by the predicting function. Finally, the non-control points that make up the soft-tissue contours are interpolated by linear interpolation.
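The linear interpolation of the non-control points can be sketched as follows, parameterizing the contour by arc length (the parameterization is our assumption; the paper does not specify one):

```python
import numpy as np

def interpolate_displacements(t_control, d_control, t_query):
    """Linearly interpolate 2D displacement vectors of non-control
    contour points from the displacements of the bracketing control
    points.  t_control / t_query are scalar contour parameters
    (e.g. arc length along the soft-tissue contour)."""
    d_control = np.asarray(d_control, float)
    dx = np.interp(t_query, t_control, d_control[:, 0])
    dy = np.interp(t_query, t_control, d_control[:, 1])
    return np.column_stack([dx, dy])
```

A point halfway between two control points thus receives the average of their predicted displacements.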

Figure 4 shows the process of the virtual operation. Figure 4a shows the cut and moved malformed mandible contour. Figure 4b shows the result of calculating the movement of the control points on the soft tissue. Figure 4c shows the linear interpolation of the movements of the non-control points on the soft-tissue contours.


Figure 4. Maxillofacial surgery on the hard-tissue contour. a) Cutting and moving the mandible. b) Calculating the movements of the control points on the soft tissue. c) Linearly interpolating the movements of the non-control points on the soft-tissue contours.

The system creates a synthetic postoperative image by an image-warping technique. The image-warping process consists of triangulation and color interpolation. The Delaunay triangulation used in this system defines local areas on the lateral photo; each area is reorganized by the virtual surgery, and the color information in each area is interpolated.
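The triangulate-then-interpolate warp can be sketched with scipy's Delaunay triangulation: each output pixel is mapped back through the affine transform of the triangle containing it, and the source photo is sampled there. Nearest-neighbour sampling stands in for the paper's unspecified color interpolation:

```python
import numpy as np
from scipy.spatial import Delaunay

def warp_image(img, src_pts, dst_pts):
    """Warp img so that control points src_pts move to dst_pts.
    Backward-maps every output pixel through the barycentric coordinates
    of its containing Delaunay triangle (built on dst_pts)."""
    h, w = img.shape[:2]
    tri = Delaunay(dst_pts)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    simplex = tri.find_simplex(pix)
    valid = simplex >= 0                               # inside the hull
    # barycentric coordinates of each pixel in its destination triangle
    T = tri.transform[simplex[valid]]                  # (n, 3, 2)
    b = np.einsum('nij,nj->ni', T[:, :2], pix[valid] - T[:, 2])
    bary = np.column_stack([b, 1.0 - b.sum(axis=1)])
    # map through the corresponding source-triangle vertices
    verts = np.asarray(src_pts, float)[tri.simplices[simplex[valid]]]
    src_xy = np.einsum('ni,nij->nj', bary, verts)
    sx = np.clip(np.round(src_xy[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src_xy[:, 1]).astype(int), 0, h - 1)
    out = np.zeros_like(img)
    out_flat = out.reshape(h * w, *img.shape[2:])
    out_flat[valid] = img[sy, sx]                      # nearest-neighbour sample
    return out_flat.reshape(img.shape)
```

When the virtual surgery moves the control points, the triangles deform and the photo's color information is carried along with them.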


Figure 5 shows the process of generating the warped postsurgical image. Figure 5b shows the regions divided by the triangulation, and Figure 5d shows the warped result of the model modified by the virtual surgery.


Figure 5. Virtual postsurgical facial image generation. a) Input data: lateral facial photo. b) Triangulation of the input image. c) Virtual operation. d) Warped image of the model modified by the virtual surgery.

3.3 3D Postsurgical Image Generation

In this section, we explain the process of visualizing the 3D postsurgical image. First of all, we build an individualized face for the 3D visualization. After preprocessing, we have two wire templates for the front and side views and use them to modify a generic facial model. In this paper, a generic facial model with 915 vertices is modified to produce an individualized smooth facial surface. Figure 6 shows the process of individualizing the generic facial model.

Figure 6. Individualization of generic facial model
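The individualization step, moving the generic model's vertices so that its feature points land on the template positions derived from the photos, can be sketched with an inverse-distance-weighted deformation. The paper does not specify its modification scheme, so this scheme and all names below are illustrative:

```python
import numpy as np

def individualize(vertices, generic_feats, target_feats, eps=1e-9):
    """Deform generic-model vertices so that its feature points move onto
    the individual's feature points, spreading each feature displacement
    over the mesh by inverse-distance weighting."""
    V = np.asarray(vertices, float)
    F = np.asarray(generic_feats, float)
    D = np.asarray(target_feats, float) - F        # feature displacements
    # distance from every vertex to every feature point: (n_verts, n_feats)
    dist = np.linalg.norm(V[:, None, :] - F[None, :, :], axis=2)
    wgt = 1.0 / (dist + eps)
    wgt /= wgt.sum(axis=1, keepdims=True)          # normalized influence
    return V + wgt @ D
```

A vertex coinciding with a feature point follows that point almost exactly, while distant vertices blend the displacements of all features, giving the smooth individualized surface the section describes.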


4 Computation of the Soft-Tissue Variation

In this paper, we use the predicting function to calculate the variation of the postsurgical soft-tissue profile. Since the variation of human soft tissue cannot in general be computed by linear equations, we suppose that it can be computed by non-linear equations. To formulate our predicting function, we define 8 control points on the hard tissue and 10 control points on the soft tissue. Figure 7 shows these control points. The control points on the hard tissue are A(ANS), B(A), C(MxI), D(MxM), E(MnI), F(B), G(Pg), H(Me), and the control points on the soft tissue are A'(Pn), B'(Sn), C'(A), D'(Ls), E'(Stms), F'(Stmi), G'(Li), H'(B'), I'(Pg'), J'(Me').

Figure 7. Facial control points on hard tissue and soft tissue.

This predicting function is based on a clinical study of 100 patients. To measure the variation between the presurgical and postsurgical profiles, we define, for each control point, its horizontal (h) and vertical (v) variation (Eq. 1):

Δh = h_post − h_pre,  Δv = v_post − v_pre   (Eq. 1)

The horizontal variation of the soft-tissue point A' is formulated as follows.

Δh_A' = β1·Δ1 + β2·Δ2 + … + βk·Δk + β0   (Eq. 2)

where Δ1, …, Δk are the variations of the related hard-tissue control points. As with Eq. 2, the horizontal variations of the other soft-tissue control points, from B' to H', are formulated in the same way. The parameter values β are found by linear regression (Eq. 3), minimizing the sum of squared residuals over the clinical data:

min over β of Σ_{i=1..n} (y_i − ŷ_i)²,  with ŷ_i = β0 + Σ_{j=1..k} β_j·x_ij   (Eq. 3)

where ŷ is the estimated value, x the input value, β the parameters, n the number of data, and k the number of parameters.

The hard-tissue locations changed by the virtual surgery are fed to the soft-tissue movement prediction functions to produce the soft-tissue change. Each partial function for a feature point on the soft tissue computes its movement from multiple feature points on the hard tissue. Table 1 shows two example partial functions.

Table 1. Two partial functions relating hard-tissue and soft-tissue movement.

Soft tissue    Function of hard-tissue variations
vPn            0.44409 * ANS - 0.27588 * vA + 0.05905
vB'            0.65773 * vMe - 0.13109 * hMe + 0.21406 * hMnM - 1.31458
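The coefficients of a partial function like those in Table 1 can be recovered by ordinary least squares (Eq. 3). A minimal sketch on synthetic data standing in for the 100-patient measurements:

```python
import numpy as np

# Synthetic stand-in for the clinical data: two hard-tissue variations
# per patient, and the soft-tissue variation generated from known
# coefficients (the values mirror the vPn row of Table 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # hard-tissue variations
true_beta = np.array([0.44409, -0.27588])
y = X @ true_beta + 0.05905                   # soft-tissue variation

A = np.column_stack([X, np.ones(len(X))])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit (Eq. 3)
print(beta)                                   # recovers coefficients and intercept
```

On noise-free data the fit returns the generating coefficients exactly; with real clinical measurements it returns the best least-squares estimate.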

The soft-tissue control points predicted from the hard-tissue control points are too few for the system to represent a smooth facial outline. Therefore, additional points on the soft tissue are defined as follows:

p'_new = p' + (1 − |p' − p| / w)·Δp,  for |p' − p| ≤ w   (Eq. 4)

where p is the control-point coordinate, Δp its displacement, w the maximum length affected by p, and p' a point on the soft tissue. Figure 4 shows how the system determines the coordinates of the feature points on the soft tissue by the prediction function and interpolates the additional points by Eq. 4.
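Spreading a control point's displacement to the nearby soft-tissue points can be sketched as follows, assuming a linear falloff that vanishes at distance w (the precise falloff profile of Eq. 4 is our assumption):

```python
import numpy as np

def spread_displacement(points, p, dp, w):
    """Move soft-tissue points near control point p by a linearly
    decaying fraction of p's displacement dp; the influence vanishes
    at distance w (the 'maximum length affected by p')."""
    points = np.asarray(points, float)
    d = np.linalg.norm(points - np.asarray(p, float), axis=1)
    weight = np.clip(1.0 - d / w, 0.0, None)[:, None]   # 1 at p, 0 beyond w
    return points + weight * np.asarray(dp, float)
```

A point coincident with the control point moves by the full displacement; a point at distance w or beyond does not move at all.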

Maxillofacial surgeons generally state that the error tolerance in maxillofacial surgery is less than 2 mm. In Figure 8, we compare the result of the actual surgery with the estimated surgery. The differences between the two profiles are nowhere greater than 2 mm.

Figure 8. Comparison of results between the actual and the estimated surgery.
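The 2 mm clinical acceptance criterion can be expressed directly as a check over corresponding profile points:

```python
import numpy as np

def within_tolerance(predicted, actual, tol_mm=2.0):
    """Return (ok, worst_error): ok is True when every predicted profile
    point lies within tol_mm of the corresponding actual point."""
    diff = np.linalg.norm(np.asarray(predicted, float)
                          - np.asarray(actual, float), axis=1)
    return diff.max() <= tol_mm, diff.max()
```

Run against sampled points of the estimated and actual postsurgical profiles (in millimetres, thanks to the X-ray's 1:1 scale), this reproduces the comparison of Figure 8 numerically.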

5 Deformation Estimating Method

Our system produces the frontal postsurgical facial image by the deformation-estimating method. The 3D visualization process consists of the individualization of a generic facial model and the calculation of the three-dimensional movement of the soft tissue on the face. To calculate the movement of the control points, we define the deformation-estimating method. First, we measure the three-dimensional variations of the control points between soft tissue and hard tissue from CT data. Figure 9 shows the three-dimensional control points of the soft tissue. There are seven three-dimensional control points, Pn, Sn, A, Ls, Li, B', Pg', excluding Stms, Stmi, and Me.


Figure 9. Three-dimensional control points from CT data. a) Soft-tissue control points in the lateral view. b) Control points in the frontal view. c) Control points from –90° to +90° in the sectional view.

Secondly, we calculate the three-dimensional variations of the facial soft tissue. For this computation, a coordinate system is established on the individual facial model. We use Eq. 5 to compute the postsurgical soft-tissue control points. In Eq. 5, P(x, y, z) is a control point on the soft-tissue contour before the virtual maxillofacial surgery and P'(x', y', z') is its computed position afterwards. P''(x'', y'', z'') in Figure 10 is a temporary point obtained by projecting P(x, y, z) onto the xy-plane.

x' = x + Δx,  y' = y + Δy,  z' = z + Δz   (Eq. 5)

where the displacement (Δx, Δy, Δz) of P is obtained from the lateral computation model and the sectional computation model shown in Figure 10.
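One way the sectional model can distribute the midline displacement over the –90° to +90° range of Figure 9c is a cosine attenuation: full magnitude on the midline, falling to zero at the sides. This attenuation profile is our assumption; the exact form of Eq. 5 may differ:

```python
import numpy as np

def sectional_displacement(d_mid, theta_deg):
    """Attenuate the midline (lateral-view) soft-tissue displacement
    d_mid across the facial cross-section: full magnitude at theta = 0,
    zero at +-90 degrees.  The cosine profile is an assumption."""
    theta = np.radians(theta_deg)
    return np.cos(theta) * np.asarray(d_mid, float)
```

Applying this to each sectional control point yields a smooth lateral-to-frontal blend of the 2D prediction into the 3D model.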


Figure 10. Computing the 3D variation before and after maxillofacial surgery. a) Lateral computation model. b) Sectional computation model.

6 Results

Our goal was to predict the facial shape after a maxillofacial surgical procedure. Figures 11 and 12 show the shape of the soft tissue after maxillofacial surgery. The calculations were carried out on an Intel Pentium III 700 MHz processor with 256 MB RAM under Microsoft Windows NT Workstation 4.0, and our system was implemented using Microsoft Visual C++ 6.0 and SGI OpenGL. Comparing an estimated postsurgical image with the actual postsurgical image (see Figure 11), we find a high similarity between the two images.


Figure 11. Comparison of simulated results with the actual surgery. a) Input data: presurgical images. b) Estimated postsurgical images. c) Actual postsurgical image.



Figure 12. 3D images after maxillofacial surgery. a, b) Presurgical frontal and lateral pictures of the patient. c) Estimated postsurgical facial image. d, e) Actual postsurgical frontal and lateral photos. f) 3D visualization of the estimated postsurgical result.

7 Conclusion

We have presented a system that enables us to predict the deformation of the facial shape after surgical procedures. Our system can generate highly realistic and accurate postsurgical images by means of a predicting function derived from a clinical study of 100 patients. With our system, a maxillofacial surgeon can easily extract the profile outline and predict the postsurgical facial shape.

8 References

[1] Haider, A. Md., Takahashi, E. and Kaneko, T., Automatic Reconstruction of 3D Human Face from CT and Color Photographs, IEICE Trans. Inf. & Syst., Vol.E81-D, No.9, pp.1287-1293, Sep. 1999.

[2] Haider, A. Md., Takahashi, E. and Kaneko, T., A 3D face reconstruction method from CT image and color photographs, IEICE Trans. Inf. & Syst., Vol.E81-E, No.10, pp.1095-1102, Oct. 1998.

[3] Hounsfield, G.N. and Ambrose, J.A., Computerized transverse axial scanning tomography, British Journal of Radiology, pp.1016-1022, 1973.

[4] Koch, R.M., Gross, M.H., Carls, F.R., von Büren, D.F., Fankhauser, G. and Parish, Y.I.H., Simulating facial surgery using finite element models, Computer Graphics (SIGGRAPH '96 Proceedings), pp.421-429, Aug. 1996.

[5] Lee, W-S. and Thalmann, N.M., Fast head modeling for animation, Image and Vision Computing, pp.355-364, 2000.

[6] Xia, J., Wang, D., Samman, N., Yeung, R.W.K. and Tideman, H., Computer-assisted three-dimensional surgical planning and simulation: 3D color facial model generation, International Journal of Oral & Maxillofacial Surgery, Vol.29, No.1, pp.20-10, Feb. 2000.

[7] Xia, J., Ip, H.H.S., Samman, N., Wang, D., Kot, C.S.B., Yeung, R.W.K. and Tideman, H., Computer-assisted three-dimensional surgical planning and simulation: 3D virtual osteotomy, International Journal of Oral & Maxillofacial Surgery, Vol.29, No.1, pp.11-17, Feb. 2000.

[8] Dr.Pss (http://www.bit.co.kr/medical-info/html/drpss.htm)

[9] QuickCeph Systems (http://www.quickceph.com/)