
Journal of Visual Communication and Image Representation 13, 156–175 (2002)

doi:10.1006/jvci.2001.0497, available online at http://www.idealibrary.com on

Image Recovery via Diffusion Tensor and Time-Delay Regularization

Y. Chen1 and S. E. Levine

Department of Mathematics, University of Florida, Gainesville, Florida 32611
E-mail: [email protected]; [email protected]

Received February 1, 2000; accepted October 22, 2001

We present a system of PDEs for image restoration, which consists of an anisotropic diffusion equation driven by a diffusion tensor, whose structure depends on the gradient of the image obtained from a coupled time-delay regularization equation, and governs the direction and the speed of the diffusion. The diffusion resulting from this model is isotropic inside a homogeneous region, anisotropic along its boundary, and is able to connect broken edges and enhance coherent structures. Experimental results are given to show its effectiveness in tracking edges and recovering images with high levels of noise. Moreover, the proposed model can be interpreted as a time-continuous Hopfield neural network. This connection further illustrates how the proposed model enhances coherent structures. The existence, uniqueness, and stability for the solutions of the PDEs are proved. © 2002 Elsevier Science (USA)

Key Words: image restoration; diffusion tensor; time-delay regularization; Hopfield neural network.

1. INTRODUCTION

Image restoration is a fundamental problem in both image processing and computer vision with numerous applications. The challenging aspect of this problem is to design methods which can selectively smooth a noisy image without losing significant features. In recent years, PDE (partial differential equation) based methodology has been developed to attack this problem. A large number of PDE models used for image recovery can be viewed as curvature-driven anisotropic diffusion models or as the evolutions corresponding to the minimization of total variation (e.g., see [1–3, 10–19] and references therein). The basic idea of this type of model is to penalize the diffusion where the magnitude of the gradient of the image intensity is large.

In contrast to the curvature-driven diffusion models where the diffusion is governed by the geometry of the image, another type of diffusion model has been developed, where the diffusion is governed by a matrix (diffusion tensor) built into the equation. The structure

1 Supported in part by NSF Grants DMS 9703497 and DMS 9972662.




of the diffusion tensor depends on the gradient of the image and determines the directions of the diffusion as well as the speed in these directions. A framework for this type of diffusion can be described as

∂_t u − div(L(ū)∇u) = f(u),  x ∈ Ω, t > 0,

where ū is a smoothed version of u, and f(u) is a reaction term. To prevent diffusion across significant edges of the image, the diffusion tensors L in [5] and [6] were constructed such that the diffusion is only along the direction perpendicular to ∇u. More precisely, in [6] the diffusion tensor was given by

L(ū) = P_{∇ū^⊥} = |∇ū|^{−2} ( ū_{x_2}²  −ū_{x_1}ū_{x_2} ; −ū_{x_1}ū_{x_2}  ū_{x_1}² ),

where ū = G_ε ∗ u and G_ε is the Gaussian filter with the local scale ε. In [5], L was obtained by a time-delay regularization (see Definition 2.1) of the matrix

P_{∇u^⊥} = |∇u|^{−2} ( u_{x_2}²  −u_{x_1}u_{x_2} ; −u_{x_1}u_{x_2}  u_{x_1}² ).

In addition to diffusion along ∇u⊥, inside a homogeneous region it is desirable to have diffusion along ∇u as well. To this end, the diffusion tensor L in [20] was constructed such that its eigenvectors point in the directions ∇u and ∇u⊥, and the corresponding eigenvalues, λ₁ and λ₂, both equal 1 inside a homogeneous region, while λ₁ becomes very small near the edges. Moreover, for the purpose of texture enhancement, in [21] L is designed so that it depends on the local structure of the image in the sense that one of the orthonormal eigenvectors of L is in the direction with the highest coherence (the lowest average contrast within a small neighborhood), and the corresponding eigenvalue increases with the local coherence of the signal. In addition, special regularization was used in [20] and [21] for the construction of L.

The purpose of this paper is to improve the structure of the diffusion tensor based on the works mentioned above. The goal of the proposed model is to provide the capability of combining isotropic diffusion inside a region with anisotropic diffusion along its edge and to allow accurate tracking of the edges. To achieve this goal, we suggest that the diffusion tensor L be a combination of two tensors with different weights, i.e., L = λ₁L₁ + λ₂L₂, such that the diffusions driven by L₁ and L₂ are in the directions ∇u and ∇u⊥, respectively. The values of λ₁ and λ₂ determine the diffusivity in these two perpendicular directions; therefore, it is desirable to have λ₁ = λ₂ = 1 in the homogeneous regions, and λ₁ = 0, λ₂ = 1 + γ (γ > 0) on their edges. With this choice, the diffusion is isotropic inside a region, and only in the direction perpendicular to the gradient along its edge. Moreover, the edges can be enhanced by choosing a suitably large γ.

The proposed model differs from the models in [6], [12], [20], and [21] in the use of time-delay regularization instead of spatial regularization to obtain ū. Time-delay regularization enables us to take the past information of the gradient of u into account, so that over-smoothing can be adjusted. This is of particular importance for recovering fine structures. For images with a higher level of noise, we suggest combining time-delay regularization with spatial regularization on a smaller scale in the presmoothing of u. The proposed


model is also distinct from the models in [5] or [6] in the structure of the diffusion tensor. The proposed diffusion tensor has two components, L₁ and L₂, so that the diffusion can be performed in both directions ∇u and ∇u⊥, while the general model in [5] and the model in [6] only involve L₂; as a result, the diffusion is restricted to the direction of ∇u⊥ and is not good for recovering highly degraded images. Furthermore, we have a parameter function λ₂ as a multiplier for L₂ which is not present in the models in [5] and [6]. The selection of λ₂ plays an important role in enhancing edges, since λ₂ allows the smoothing in the direction tangential to the iso-intensity contours to be stronger on the edges than in the homogeneous regions. Although the tool of time-delay regularization is used both in [5] and here, it is applied in different ways. In the proposed model the time-delay regularization is applied to the image gradient, while in [5] it is applied to the projection matrix. This is the difference between applying time regularization before or after constructing the projection matrix. We shall explain mathematically in the next section that the diffusion tensor obtained by projection after time-averaging ensures the diffusion is only in the direction of the projection, while the one obtained by projection before time-averaging may not be able to do so. Our numerical results will illustrate the benefits of computing the projection after time-averaging.

As pointed out in [4] and [5], by applying the results of [7] we are able to translate the diffusion tensor in terms of synaptic weights linking neurons lying within a short synaptic range and interpret our model as a time-continuous Hopfield neural network [8]. The synaptic connections of this ANN model depend on the coherence between the neurons. This gives another perspective as to why the proposed model can be expected to enhance coherent structures.

This paper is organized as follows. We shall describe our model in Section 2, discuss its well-posedness in Section 3, and present our numerical scheme and experimental results in Section 4. In Section 5 we shall derive the neural network approximation of our model. We summarize our results in Section 6.

2. PDE MODEL

In this section we shall describe our model. First, we give a lemma which provides the insight for the construction of the proposed diffusion tensor.

LEMMA 2.1. Let A be the matrix

A = ( m₁²  a ; a  m₂² ).  (1)

Then there exist a unique constant β and a vector w (with two possible choices, if w and −w are considered as the same vector), such that

A = ww^⊤ + βw^⊥(w^⊥)^⊤,  (2)

where w = (w₁, w₂)^⊤, w^⊤ is the transpose of w, and w^⊥ = (−w₂, w₁)^⊤. Moreover, β = 0 if and only if either a = m₁m₂ and w = (m₁, m₂)^⊤, or a = −m₁m₂ and w = (m₁, −m₂)^⊤.


Proof. To prove this lemma, one only needs to show the system

w₁² + βw₂² = m₁²
w₂² + βw₁² = m₂²
(1 − β)w₁w₂ = a  (3)

has a solution (β, w₁, w₂). In fact, by algebraic computation, from (3) we get

β = [m₁⁴ + m₂⁴ + 2a² − (m₁² + m₂²)√((m₁² − m₂²)² + 4a²)] / [2(m₁²m₂² − a²)].  (4)

One can verify that |β| ≤ 1 by computing

β − 1 = [(m₁² − m₂²)² + 4a² − (m₁² + m₂²)√((m₁² − m₂²)² + 4a²)] / [2(m₁²m₂² − a²)].

Note that m₁²m₂² − a² takes the opposite sign from √((m₁² − m₂²)² + 4a²) − (m₁² + m₂²), so β − 1 ≤ 0. Similarly, we have β + 1 ≥ 0.

From (3) one also has that

w₁² = (βm₂² − m₁²)/(β² − 1),
w₂² = (βm₁² − m₂²)/(β² − 1).

To show the solvability of w₁ and w₂, we shall show βm₂² − m₁² ≤ 0 and βm₁² − m₂² ≤ 0. First, we consider the case where m₁² ≤ m₂². In this case βm₁² − m₂² ≤ 0 is obvious since |β| ≤ 1. Hence we only need to prove βm₂² − m₁² ≤ 0. Denote

g = m₁⁴ + m₂⁴ + 2a² − (m₁² + m₂²)√((m₁² − m₂²)² + 4a²).

Then

βm₂² − m₁² = [m₂²g − 2m₁²(m₁²m₂² − a²)] / [2(m₁²m₂² − a²)].  (5)

It is not difficult to see that if m₁²m₂² ≤ a², then g ≥ 0 and m₂²g − 2m₁²(m₁²m₂² − a²) ≥ m₂²g ≥ 0, and if m₁²m₂² ≥ a², then g ≤ 0 and m₂²g − 2m₁²(m₁²m₂² − a²) ≤ m₂²g ≤ 0. Therefore, from (5), βm₂² − m₁² ≤ 0. The case m₁² ≥ m₂² can be proved similarly. The conclusion in the case β = 0 follows directly from (3). ∎

Remark 2.1. Let A be given in (1). From (2) one has

div(A∇f) = div(ww^⊤∇f) + β div(w^⊥(w^⊥)^⊤∇f)
= div(w D_w f) + β div(w^⊥ D_{w^⊥} f)
= D_w² f + β D_{w^⊥}² f + (div w)D_w f + β(div w^⊥)D_{w^⊥} f,  (6)


where D_w f denotes the directional derivative of f along the direction w. Therefore, if β ≠ 0 the diffusion given by f_t − div(A∇f) = 0 is in two directions, both w and w^⊥. Moreover, the ratio of the diffusion in these two directions is 1/β.
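To make the decomposition concrete, here is a small numerical check of Lemma 2.1 in Python (our own illustration, assuming a² ≠ m₁²m₂² so that β² ≠ 1): it computes β from (4), recovers w from the expressions for w₁² and w₂² in the proof, and verifies (2).

import numpy as np

def decompose(m1, m2, a):
    """Decompose A = (m1^2 a; a m2^2) as w w^T + beta * w_perp w_perp^T (Lemma 2.1)."""
    D = np.sqrt((m1**2 - m2**2)**2 + 4 * a**2)
    beta = (m1**4 + m2**4 + 2 * a**2 - (m1**2 + m2**2) * D) / (2 * (m1**2 * m2**2 - a**2))
    w1 = np.sqrt((beta * m2**2 - m1**2) / (beta**2 - 1))
    w2 = np.sqrt((beta * m1**2 - m2**2) / (beta**2 - 1))
    if (1 - beta) * w1 * w2 * a < 0:   # pick the sign of w2 so that (1 - beta) w1 w2 = a
        w2 = -w2
    w = np.array([w1, w2])
    w_perp = np.array([-w2, w1])
    return beta, w, w_perp

m1, m2, a = 1.3, 0.7, 0.4
beta, w, w_perp = decompose(m1, m2, a)
A = np.array([[m1**2, a], [a, m2**2]])
print(np.allclose(A, np.outer(w, w) + beta * np.outer(w_perp, w_perp)))   # True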

DEFINITION 2.1. We say f̄ is the time-delay regularization of f if f̄ satisfies the equation τ df̄/dt + f̄ = f. The time-delay regularization for a vector function or for a matrix function is defined componentwise, i.e., as that for each component of the vector or each entry of the matrix.
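In discrete time this is simply an exponentially weighted running average. A minimal Python sketch (our own, using an implicit Euler step; the function name and arguments are illustrative):

import numpy as np

def time_delay_regularize(frames, tau, dt):
    """Definition 2.1 discretized: solve tau * d f_bar/dt + f_bar = f with implicit Euler steps."""
    f_bar = np.zeros_like(frames[0])
    for f in frames:
        # (f_bar_new - f_bar)/dt = (f - f_bar_new)/tau  =>  solve for f_bar_new
        f_bar = (f_bar + (dt / tau) * f) / (1.0 + dt / tau)
    return f_bar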

Denote v = ∇ū, where ū is the time-delay regularization of u. By Lemma 2.1 and Remark 2.1, the diffusion governed by the diffusion tensor

L₁ = vv^⊤/|v|² = ( v₁²  v₁v₂ ; v₁v₂  v₂² ) / |v|²  (7)

is only in the direction of ∇ū (using β = 0, a = v₁v₂, w = (v₁, v₂)^⊤ in Lemma 2.1), and the diffusion governed by

L₂ = v^⊥(v^⊥)^⊤/|v|² = ( v₂²  −v₁v₂ ; −v₁v₂  v₁² ) / |v|²  (8)

is only in the direction of ∇ū⊥ (using β = 0, a = −v₁v₂, w = (−v₂, v₁)^⊤ in Lemma 2.1). However, if L̄ is determined by the time-delay regularization of the projection matrix P_{∇u^⊥}, i.e.,

τ dL̄/dt + L̄ = P_{∇u^⊥},

then in general L̄₁₂² ≠ L̄₁₁L̄₂₂. By Lemma 2.1, the diffusion governed by this diffusion tensor is in two perpendicular directions. So, to ensure the diffusion is only along the direction of the projection, time averaging should be applied before the projection, not after.
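The following Python sketch illustrates this point numerically (our own example; a plain two-sample average stands in for the exponential time average): projecting the averaged gradient gives a rank-one tensor, while averaging the projection matrices generally gives a full-rank one.

import numpy as np

def proj_perp(g):
    """Projection onto the direction perpendicular to g, i.e., the matrix P_{grad u perp}."""
    g = g / np.linalg.norm(g)
    return np.eye(2) - np.outer(g, g)

g1 = np.array([1.0, 0.0])   # gradient observed at an earlier time
g2 = np.array([0.0, 1.0])   # gradient observed at a later time

L_project_after = proj_perp(0.5 * (g1 + g2))               # average first, then project
L_project_before = 0.5 * (proj_perp(g1) + proj_perp(g2))   # project first, then average

print(np.linalg.det(L_project_after))    # ~0: diffusion only along one direction
print(np.linalg.det(L_project_before))   # > 0: diffusion in two perpendicular directions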

Therefore, to achieve our goal we construct

L = λ1L1 + λ2L2, (9)

where L₁ and L₂ are given in (7) and (8), respectively. Since λ₁ and λ₂ are the diffusivities in the directions of ∇ū and ∇ū⊥ respectively, it is desirable for them to be smooth functions satisfying λ₁ = λ₂ = 1 inside the homogeneous regions and λ₁ = 0, λ₂ = 1 + γ (for γ > 0) on the edges. For instance, they can be chosen as follows.

Choice 1. λ1 and λ2 are smooth approximations of

λ₁ = { 1 if |v| ≤ δ, 0 if |v| > δ }   and   λ₂ = { 1 if |v| ≤ δ, 1 + γ if |v| > δ }

with parameters δ > 0 and γ > 0.

Choice 2.

λ₁ = 1/(1 + k|v|²)   and   λ₂ = (1 + α|v|²)/(1 + k|v|²)  (10)

with parameters k > 0 and α > 0.


The diffusion governed by L in (9) is isotropic in the regions where |v| is very small, and is only in the direction of v⊥ where |v| is very large. Moreover, the speed of the diffusion in the direction of v⊥ is higher on the edges than in the homogeneous regions. Therefore, small gaps in the edges can be closed.
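For illustration, here is a Python sketch of the tensor (9) with the Choice 2 diffusivities (10) at a single pixel; the parameter values and the small safeguard eps are ours, not prescribed by the model.

import numpy as np

def diffusion_tensor(v, k=1.0, alpha=10.0, eps=1e-8):
    """Return L = lambda1 * L1 + lambda2 * L2 for the regularized gradient v = (v1, v2)."""
    v1, v2 = v
    s = v1 * v1 + v2 * v2 + eps                                    # |v|^2, guarded against 0
    L1 = np.array([[v1 * v1, v1 * v2], [v1 * v2, v2 * v2]]) / s    # diffusion along v, eq. (7)
    L2 = np.array([[v2 * v2, -v1 * v2], [-v1 * v2, v1 * v1]]) / s  # diffusion along v_perp, eq. (8)
    lam1 = 1.0 / (1.0 + k * s)                                     # Choice 2, eq. (10)
    lam2 = (1.0 + alpha * s) / (1.0 + k * s)
    return lam1 * L1 + lam2 * L2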

In summary, we propose the following PDE model:

u_t = div(L(v)∇u) − λ(u − I),  (x, t) ∈ Ω_T  (11)
τ dv/dt + v = ∇u_σ,  (x, t) ∈ Ω_T  (12)
u(x, 0) = I,  v(x, 0) = 0,  x ∈ Ω  (13)
∂u/∂n = 0,  (x, t) ∈ (∂Ω)_T,  (14)

where τ > 0 and σ ≥ 0 are parameters, G_σ(x) = σ^{−2} e^{−|x|²/(4σ)}, u_σ = G_σ ∗ u (extending u by zero outside Ω, if necessary), and L is given in (9). The parameter σ can be equal to zero, which means v is the gradient of the time-delay regularization of u. If the image has a high level of noise, we may take σ > 0 but small. The last term in (11) forces the reconstructed image to stay close to the observed image.

3. WELLPOSEDNESS

In this section, we shall discuss the existence, uniqueness, and stability of the solutions of (11)–(14). For this discussion we need to assume σ > 0.

THEOREM 3.1. Let I ∈ L^∞(Ω) ∩ H¹(Ω) and suppose λ₁ and λ₂ are selected by Choice 1 or Choice 2 in Section 2. Then for any σ > 0, the system (11)–(14) admits a pair of solutions u and v on Ω_T for any T > 0, satisfying u ∈ L^∞(0, T; L^∞(Ω) ∩ H¹(Ω)), u_t ∈ L²(0, T; L²(Ω)), and v ∈ W^{1,∞}(0, T; C^∞(Ω)).

Proof. The principal part of (11) is Σ²_{i,j=1} a_{ij} u_{x_i x_j}, where

a₁₁ = (λ₁v₁² + λ₂v₂²)/|v|²,  a₂₂ = (λ₁v₂² + λ₂v₁²)/|v|²,  a₁₂ = a₂₁ = (λ₁ − λ₂)v₁v₂/|v|².  (15)

For any x ∈ Ω and ξ ∈ R²\{0}, Σ²_{i,j=1} a_{ij}ξ_iξ_j = {λ₁(v · ξ)² + λ₂(v^⊥ · ξ)²}/|v|² ≥ 0, and the equality holds for those x and ξ such that λ₁ = 0 and ξ ∥ v. Therefore, (11) is a degenerate parabolic equation. To study existence, we first consider the following approximation problem

u^ε_t = εΔu^ε + div(L(v^ε)∇u^ε) − λ(u^ε − I^ε)  in Ω_T  (16)
τ ∂v^ε/∂t + v^ε = ∇u^ε_σ  in Ω_T  (17)
u^ε(x, 0) = I^ε,  v^ε(x, 0) = 0  on Ω  (18)
∂u^ε/∂n = 0  on (∂Ω)_T,  (19)


where I^ε → I strongly in L^∞(Ω) ∩ H¹(Ω) and

‖I^ε‖_{L^∞(Ω)} ≤ ‖I‖_{L^∞(Ω)}.  (20)

From (17) and (18),

v^ε(x, t) = (1/τ) ∫₀ᵗ e^{(s−t)/τ} (∇G_σ ∗ u^ε)(x, s) ds.  (21)
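As a quick check that (21) indeed solves (17)–(18): differentiating (21) in t gives

∂_t v^ε(x, t) = (1/τ)(∇G_σ ∗ u^ε)(x, t) − (1/τ²) ∫₀ᵗ e^{(s−t)/τ}(∇G_σ ∗ u^ε)(x, s) ds = (1/τ)(∇u^ε_σ − v^ε),

so τ ∂_t v^ε + v^ε = ∇u^ε_σ, and v^ε(x, 0) = 0 by construction.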

Equation (16) can be written as

u^ε_t = Σ²_{i,j=1} b_{ij}(x, t, u^ε) u^ε_{x_i x_j} + b(x, t, u^ε, ∇u^ε)

with

b_{ij}(x, t, u^ε) = a_{ij}(x, t, u^ε) + εδ_{ij},

where a_{ij}(x, t, u^ε) is given in (15) with v replaced by v^ε, and

b(x, t, u^ε, ∇u^ε) = {∂_{x_1}[(λ₁(v^ε_1)² + λ₂(v^ε_2)²)/|v^ε|²]} u^ε_{x_1} + {∂_{x_1}[((λ₁ − λ₂)v^ε_1 v^ε_2)/|v^ε|²]} u^ε_{x_2}
+ {∂_{x_2}[((λ₁ − λ₂)v^ε_1 v^ε_2)/|v^ε|²]} u^ε_{x_1} + {∂_{x_2}[(λ₁(v^ε_2)² + λ₂(v^ε_1)²)/|v^ε|²]} u^ε_{x_2} − λ(u^ε − I^ε).

By applying the same argument developed in the proof of Theorem 7.4 (p. 491) of [9], and noticing that |∇v^ε|/|v^ε| ≤ C(σ, Ω), one can find a unique solution u^ε ∈ C^{2+α, 1+α/2}(Ω_T) of (16) and (18)–(19) for some 0 < α < 1 and any T > 0. The solution of (17) then follows from (21). Furthermore, using (16) and setting w = u^ε − ‖I^ε‖_{L^∞}, we have

∂_t w − div(L(v^ε)∇w) + λw ≤ 0,  (x, t) ∈ Ω_T  (22)
w(x, 0) = I^ε − ‖I^ε‖_{L^∞},  x ∈ Ω  (23)
∂w/∂n = 0,  (x, t) ∈ (∂Ω)_T,  (24)

where v^ε is determined by (21) with u^ε replaced by w. Applying the maximum principle to (22), we get

max_{Ω_T} w ≤ max_Ω (I^ε − ‖I^ε‖_{L^∞}) = 0.

This leads to, for all (x, t) ∈ Ω_T,

u^ε ≤ ‖I^ε‖_{L^∞} ≤ ‖I‖_{L^∞}.

Similarly we can show u^ε ≥ −‖I^ε‖_{L^∞}. Therefore,

‖u^ε‖_{L^∞(Ω_T)} ≤ ‖I^ε‖_{L^∞(Ω)}.  (25)


Then by using (21), it is easy to verify that for any multi-index α with 0 ≤ |α| ≤ 2,

‖D^α_x v^ε‖_{L^∞}, ‖∂_t v^ε‖_{L^∞} ≤ c‖u^ε‖_{L^∞} ≤ c‖I‖_{L^∞},  (26)

where c > 0 is a constant depending only on α, τ, and σ. Next we shall estimate the L^∞(0, T; H¹(Ω)) norm of u^ε. Differentiating (16) with respect to x_i yields

u^ε_{t x_i} = εΔu^ε_{x_i} + div((L(v^ε))_{x_i}∇u^ε) + div(L(v^ε)∇u^ε_{x_i}) − λ(u^ε_{x_i} − I^ε_{x_i}).

Multiplying this equation by u^ε_{x_i}, integrating over Ω, and summing with respect to i, we get

d/dt ‖∇u^ε‖²_{L²(Ω)} + Σ²_{i=1} ∫_Ω L(v^ε)∇u^ε_{x_i} · ∇u^ε_{x_i} + ε‖D²u^ε‖_{L²(Ω)}
= −Σ²_{i=1} ∫_Ω (L(v^ε))_{x_i}∇u^ε · ∇u^ε_{x_i} − λ Σ²_{i=1} ∫_Ω u^ε_{x_i}(u^ε_{x_i} − I^ε_{x_i}).  (27)

Note that

∫_Ω L(v^ε)∇u^ε_{x_i} · ∇u^ε_{x_i} = ∫_Ω (λ₁|v^ε · ∇u^ε_{x_i}|² + λ₂|(v^ε)^⊥ · ∇u^ε_{x_i}|²)/|v^ε|² ≥ 0  (28)

and

−∫_Ω (L(v^ε))_{x_i}∇u^ε · ∇u^ε_{x_i} = (1/2) ∫_Ω (L(v^ε))_{x_i x_i}∇u^ε · ∇u^ε ≤ c‖∇u^ε‖_{L²(Ω)},  (29)

where in the last inequality we used (26). From (27)–(29) we get

d/dt ‖∇u^ε‖²_{L²(Ω)} ≤ c(‖∇u^ε‖²_{L²(Ω)} + ‖∇I^ε‖²_{L²(Ω)}).

By Gronwall's inequality, for any t ∈ [0, T],

‖∇u^ε‖²_{L²(Ω)} + ‖∇I^ε‖²_{L²(Ω)} ≤ 2‖∇I^ε‖²_{L²(Ω)} e^{cT},

and since I^ε converges to I strongly in H¹(Ω),

‖∇u^ε‖²_{L^∞(0,T;L²(Ω))} ≤ c(T)‖∇I^ε‖²_{L²(Ω)} ≤ c(T)‖∇I‖²_{L²(Ω)}.  (30)

Similarly, multiplying both sides of (16) by u^ε_t and integrating over Ω_T, by (18) one gets

∫₀ᵀ ‖u^ε_t‖²_{L²(Ω)} dt + (ε/2) ∫_Ω |∇u^ε|²(x, T) dx + (1/2) ∫_Ω L(v^ε)∇u^ε · ∇u^ε(x, T) dx
= −λ ∫₀ᵀ ∫_Ω u^ε_t (u^ε − I^ε) dx dt + (ε/2) ∫_Ω |∇I^ε|² dx + (1/2) ∫₀ᵀ ∫_Ω (L(v^ε))_t ∇u^ε · ∇u^ε dx dt.

Using Cauchy’s inequality, (25), (26), and (30) we get that

∫₀ᵀ ‖u^ε_t‖²_{L²(Ω)} ≤ c{‖u^ε − I^ε‖_{L²(Ω_T)} + ‖∇I^ε‖_{L²(Ω)} + ‖∇u^ε‖_{L²(0,T;L²(Ω))}} ≤ c(T)‖I^ε‖_{H¹(Ω)} ≤ c(T)‖I‖_{H¹(Ω)}.  (31)


From (25), (26), (30), and (31), there exist subsequences (u^{ε_i}, v^{ε_i}) of (u^ε, v^ε) and functions (u, v), such that u ∈ L^∞(0, T; H¹(Ω) ∩ L^∞(Ω)), u_t ∈ L²(0, T; L²(Ω)), v ∈ W^{1,∞}(0, T; C^∞(Ω)), and

u^{ε_i} ⇀ u weakly in L^p(0, T; H¹(Ω)) for any p > 1  (32)
u^{ε_i} ⇀ u weak-∗ in L^∞(0, T; L^∞(Ω))  (33)
u^{ε_i}_t ⇀ u_t weakly in L²(0, T; L²(Ω))  (34)
v^{ε_i} → v uniformly on Ω_T.  (35)

From (35), as ε_i → 0, L(v^{ε_i}) → L(v) uniformly on Ω_T, and then from (32), for any φ ∈ L²(0, T; H¹(Ω)),

|∫₀ᵀ ∫_Ω (L(v^{ε_i})∇u^{ε_i} · ∇φ − L(v)∇u · ∇φ) dx dt| ≤ |∫₀ᵀ ∫_Ω L(v)∇φ · (∇u^{ε_i} − ∇u) dx dt|
+ |∫₀ᵀ ∫_Ω (L(v^{ε_i})∇φ − L(v)∇φ) · ∇u^{ε_i} dx dt| → 0.  (36)

Combining (36) with (32)–(35) and taking the limit as ε_i → 0 in (16), one can conclude that u and v are solutions of (11)–(14). The proof of Theorem 3.1 is complete. ∎

If λ_i is given as in Choice 2, then from (26),

λ₁ = 1/(1 + k|v^ε|²) ≥ 1/(1 + c‖I‖_{L^∞(Ω)})   and   λ₂ ≥ 1.

Then (cf. (28)),

∫_Ω L(v^ε)∇u^ε_{x_i} · ∇u^ε_{x_i} dx ≥ ∫_Ω [1/(1 + c‖I‖_{L^∞(Ω)})] (|v^ε · ∇u^ε_{x_i}|² + |(v^ε)^⊥ · ∇u^ε_{x_i}|²)/|v^ε|² dx ≥ δ ∫_Ω |∇u^ε_{x_i}|² dx,  (37)

where δ > 0 is a constant depending only on τ, k, σ, and ‖I‖_{L^∞(Ω)}. In this case, we have the following uniqueness and stability results for (11)–(14).

THEOREM 3.2. Let I ∈ L^∞(Ω) ∩ H¹(Ω) and let λ₁ and λ₂ be given by (10). Then the weak solution u obtained in Theorem 3.1 is unique and satisfies u ∈ L²(0, T; H²(Ω)). Moreover, if (u_i, v_i), i = 1, 2, are the solutions of (11)–(12) with initial data u_i(x, 0) = I_i(x) ∈ L^∞(Ω) ∩ H¹(Ω) and v_i(x, 0) = 0, i = 1, 2, respectively, then

‖u₁ − u₂‖_{L^∞(0,T;L²(Ω))} ≤ c‖I₁ − I₂‖_{L^∞(Ω)∩H¹(Ω)},  (38)

where c > 0 is a constant depending on τ, k, σ, T, and ‖I_i‖_{L^∞(Ω)∩H¹(Ω)}, i = 1, 2.


Proof. Inserting (37) into (27), we get

d/dt ‖∇u^ε‖²_{L²(Ω)} + δ‖D²u^ε‖_{L²(Ω)} ≤ c(‖∇u^ε‖²_{L²(Ω)} + ‖∇I^ε‖²_{L²(Ω)}).

Then by Gronwall's inequality and (30),

∫₀ᵀ ‖D²u^ε‖_{L²(Ω)} ≤ c,  (39)

where c > 0 is a constant depending only on τ, k, σ, and ‖I‖_{L^∞(Ω)}. Therefore we can choose a subsequence u^{ε_i} of u^ε such that, in addition to (32)–(35),

u^{ε_i} ⇀ u weakly in L²(0, T; H²(Ω)).  (40)

To prove (38), let (u₁, v₁) and (u₂, v₂) be the solutions of (11)–(14) with initial data I₁ and I₂, respectively. Denote u = u₁ − u₂, I = I₁ − I₂, and v = v₁ − v₂. Then

u_t = div((L(v₁) − L(v₂))∇u₁) + div(L(v₂)∇u) − λ(u − I)  in Ω_T  (41)
v = (1/τ) ∫₀ᵗ e^{(s−t)/τ} ∇G_σ ∗ u(x, s) ds  in Ω_T  (42)
u(x, 0) = 0  on Ω  (43)
∂u/∂n = 0  on (∂Ω)_T.  (44)

Multiplying both sides of (41) by u and integrating over Ω yields

(1/2) d/dt ‖u‖²_{L²(Ω)} + ∫_Ω (L(v₁) − L(v₂))∇u₁ · ∇u dx + ∫_Ω L(v₂)∇u · ∇u dx + λ ∫_Ω u² dx = λ ∫_Ω u I dx.  (45)

From (42) we have

|L(v₁) − L(v₂)| ≤ c|v₁ − v₂| = c|v| ≤ (c/τ) ∫₀ᵗ |∇G_σ ∗ u|(x, s) ds ≤ c ∫₀ᵗ ‖u(·, s)‖_{L²(Ω)} ds,  (46)

where c > 0 is a constant depending only on τ, k, and σ. We also have (cf. (28) and (37))

∫_Ω L(v₂)∇u · ∇u dx ≥ [1/(1 + c‖I₂‖_{L^∞(Ω)})] ‖∇u‖²_{L²(Ω)}  (47)

and ‖∇u₁‖_{L^∞(0,T;L²(Ω))} ≤ c‖∇I₁‖_{L²(Ω)}. Hence, with the notation

δ₂ = 1/(1 + c‖I₂‖_{L^∞(Ω)}),


we have that for any t ∈ [0, T ]

(1/2) d/dt ‖u‖²_{L²(Ω)} + δ₂‖∇u‖²_{L²(Ω)} + λ‖u‖²_{L²(Ω)}
≤ ‖L(v₁) − L(v₂)‖_{L^∞(Ω)} ‖∇u₁‖_{L²(Ω)} ‖∇u‖_{L²(Ω)} + λ‖I‖²_{L²(Ω)}
≤ δ₂‖∇u‖²_{L²(Ω)} + c_T ∫₀ᵗ ‖u(·, s)‖²_{L²(Ω)} ds + λ‖I‖²_{L²(Ω)}.

This implies

‖u(·, t)‖²_{L²(Ω)} ≤ cT² ∫₀ᵗ ‖u(·, s)‖²_{L²(Ω)} ds + c‖I‖²_{L²(Ω)}.

By Gronwall's inequality, for any t ∈ [0, T],

‖u(·, t)‖²_{L²(Ω)} ≤ c‖I‖²_{L²(Ω)},

where c depends only on τ, k, σ, T, and ‖I_i‖_{L^∞(Ω)∩H¹(Ω)}. This proves (38). Uniqueness follows from (38) with I₁ = I₂. ∎

4. NUMERICAL EXPERIMENTS

In this section we present our numerical scheme and several experiments. Expanding (11), we have

u_t = div(L(v)∇u) − λ(u − I)
= λ₁ D_w² u + λ₂ D_{w^⊥}² u + λ₁(div w)D_w u + λ₂(div w^⊥)D_{w^⊥} u + (∇λ₁ · w)D_w u + (∇λ₂ · w^⊥)D_{w^⊥} u − λ(u − I),  (48)

where w = v/|v|. The first four terms on the right-hand side of (48) are calculated using central differences. The last two terms require a bit more care. For convenience, we denote u_{ij} = u(i, j) and use the standard difference notations

Δ⁺_x u_{ij} = (u_{i+1,j} − u_{ij})/h,  Δ⁻_x u_{ij} = (u_{ij} − u_{i−1,j})/h,  Δ_x u_{ij} = (u_{i+1,j} − u_{i−1,j})/(2h).

An important aspect of our numerical scheme is that it should exploit the capability of the terms (∇λ₁ · w)D_w u and (∇λ₂ · w^⊥)D_{w^⊥} u to preserve and enhance edges in the image. To do this, we used a variation of the upwind difference scheme presented in [13]. Since λ₁ is very small near the edges and close to 1 inside the homogeneous regions, if Δ_x λ₁ < 0, we may be approaching a vertical edge. In this case, D_w u = u_x w₁ + u_y w₂ is computed using a backward difference for u_x (to stay away from the edge) and a central difference for u_y (to enhance the edge). Likewise, if Δ_x λ₁ > 0, u_y is computed in the same way, but u_x requires a forward difference. The differences Δ_y λ₁ and Δ_y λ₂ are handled similarly. λ₂ works in the opposite manner; that is, it is close to 1 in the homogeneous regions and takes on higher values at the edges. Therefore, we do a similar calculation, preventing averaging across edges while encouraging smoothing along the edges.
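A Python sketch of this upwind choice for D_w u at an interior pixel (i, j) (our own illustration of the rule just described; array names and layout are assumptions):

def upwind_Dwu(u, lam1, w1, w2, i, j, h=1.0):
    """Upwind D_w u: the sign of the central x-difference of lambda_1 decides how u_x is taken."""
    dlam1_dx = (lam1[i + 1, j] - lam1[i - 1, j]) / (2 * h)
    if dlam1_dx < 0:
        ux = (u[i, j] - u[i - 1, j]) / h        # backward difference: stay away from the edge
    else:
        ux = (u[i + 1, j] - u[i, j]) / h        # forward difference
    uy = (u[i, j + 1] - u[i, j - 1]) / (2 * h)  # central difference along the edge
    return ux * w1[i, j] + uy * w2[i, j]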


In summary, (∇λ₁ · w)D_w u and (∇λ₂ · w^⊥)D_{w^⊥} u are approximated by

U¹_{ij}(λ₁, u, w) = Σ_{k≠l} [(min(Δ_{x_k}λ_{1,ij}, 0) Δ⁻_{x_k}u_{ij} + max(Δ_{x_k}λ_{1,ij}, 0) Δ⁺_{x_k}u_{ij}) · w²_{k,ij} + Δ_{x_k}λ_{1,ij} · Δ_{x_l}u_{ij} · w_k w_l]

U²_{ij}(λ₂, u, w^⊥) = Σ_{k≠l} [(min(Δ_{x_k}λ_{2,ij}, 0) Δ⁻_{x_k}u_{ij} + max(Δ_{x_k}λ_{2,ij}, 0) Δ⁺_{x_k}u_{ij}) · (w^⊥_{k,ij})² + Δ_{x_k}λ_{2,ij} · Δ_{x_l}u_{ij} · w^⊥_k w^⊥_l],

respectively. Denoting ∇_{ij}u = (Δ_x u_{ij}, Δ_y u_{ij}), tracking the time level by the upper index n, and setting u⁰_{ij} = I_{ij}, our scheme is as follows:

(u^{n+1}_{ij} − u^n_{ij})/Δt = λ^n_{1,ij} (Δ_{xx}u^n_{ij} (w^n_{1,ij})² + 2Δ_{xy}u^n_{ij} w^n_{1,ij}w^n_{2,ij} + Δ_{yy}u^n_{ij} (w^n_{2,ij})²)
+ λ^n_{2,ij} (Δ_{xx}u^n_{ij} (w^{⊥,n}_{1,ij})² + 2Δ_{xy}u^n_{ij} w^{⊥,n}_{1,ij}w^{⊥,n}_{2,ij} + Δ_{yy}u^n_{ij} (w^{⊥,n}_{2,ij})²)
+ λ^n_{1,ij} (Δ_x w^n_{1,ij} + Δ_y w^n_{2,ij}) ∇_{ij}u^n · w^n_{ij}
+ λ^n_{2,ij} (Δ_x w^{⊥,n}_{1,ij} + Δ_y w^{⊥,n}_{2,ij}) ∇_{ij}u^n · w^{⊥,n}_{ij}
+ U¹_{ij}(λ^n_1, u^n, w^n) + U²_{ij}(λ^n_2, u^n, (w^n)^⊥).

Since (12) provides a smoothed version of u, central differences were sufficient. Therefore, we computed v by the implicit scheme

(v^{n+1}_{1,ij} − v^n_{1,ij})/Δt = (1/τ)(Δ_x(G_σ ∗ u^n)_{ij} − v^{n+1}_{1,ij})
(v^{n+1}_{2,ij} − v^n_{2,ij})/Δt = (1/τ)(Δ_y(G_σ ∗ u^n)_{ij} − v^{n+1}_{2,ij}).
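In Python, one step of this implicit update can be sketched as follows (scipy's gaussian_filter and numpy's gradient stand in for G_σ ∗ u and the central differences; all names are ours):

import numpy as np
from scipy.ndimage import gaussian_filter

def update_v(v, u, tau, dt, sigma):
    """Implicit step of (12): (v_new - v)/dt = (grad(G_sigma * u) - v_new)/tau."""
    u_s = gaussian_filter(u, sigma) if sigma > 0 else u
    gy, gx = np.gradient(u_s)        # central differences in y and x
    g = np.stack([gx, gy])           # smoothed gradient, shape (2, H, W)
    return (v + (dt / tau) * g) / (1.0 + dt / tau)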

Using Choice 2 to compute λ₁ and λ₂ (see Section 2), we tested our model on the following images to verify its effectiveness. In both Figs. 1 and 2, Gaussian noise with zero mean was added to the original images with a signal-to-noise ratio of 1:6. Because of the high level of noise, both time and spatial regularization were required to recover the original images. The reconstructed images in Figs. 1d and 2d were obtained by using the proposed model, (11)–(14). The images in Figs. 1c and 2c were obtained by using the same equation (11), with the diffusion tensor L = λ₁P̄_{∇u_σ} + λ₂P̄_{∇u_σ^⊥}, where P̄ denotes the time-delay regularization of the corresponding projection matrix P. The difference between the L used in Figs. 1c and 2c and the proposed L used in Figs. 1d and 2d is whether the projection matrix is constructed before or after applying the time-delay regularization. The numerical results showed that time averaging performed before the projection (cf. (12)) provides a much sharper reconstruction of the edges.

In Figs. 3 and 4, time-delay regularization alone was sufficient to remove the noise from the images while still retaining all of their significant details. Figure 3a is a PET (positron emission tomography) image of a lung. Our goal was to remove the noise around the lung while sharpening the boundaries of the lung itself. Figure 4 is a radar image of a land mine field. The desired result was to remove as much detail from the rest of the picture while enhancing the mines. Our model successfully accomplished both of these goals.


FIG. 1. (a) True image, (b) true image + Gaussian noise with zero mean (SNR 1:6), (c) reconstructed image using projection before time-averaging (100 iterations, α = 10, k = 1, σ = 0.5, dt = 0.05, τ = 0.5), (d) reconstructed image using the proposed model (100 iterations, α = 10, k = 1, σ = 0.5, dt = 0.05, τ = 0.5).

In the last set of images, Figs. 5 and 6, we used the proposed model to enhance a fingerprint. Both sets of images are prints of a right index finger. Figure 5 is the entire fingerprint and Fig. 6 gives close-up views of the print in Fig. 5. Time-delay regularization was sufficient for the reconstruction. The experiments showed that the proposed model was able to connect the broken edges. This is because the diffusion along the edges can be made stronger than that in the homogeneous regions by selecting the appropriate parameters for λ₂.

5. RELATED NEURAL NETWORK

Both PDE methods and neural networks have been effectively applied to image restoration. It is interesting to know whether they are related and, if so, how. In this section we shall explore the link between the two using the following lemma, due to Degond and Mas-Gallic [7], which allows us to approximate the diffusion term by an integral term whose Riemann sum is in the form of a Hopfield neural network.


FIG. 2. (a) Aerial landscape picture (true image), (b) true image + Gaussian noise with zero mean (SNR 1:6), (c) reconstructed image using projection before time-averaging (75 iterations, α = 10, k = 1, σ = 0.5, dt = 0.05, τ = 0.5), (d) reconstructed image using the proposed model (75 iterations, α = 10, k = 1, σ = 0.5, dt = 0.05, τ = 0.5).

FIG. 3. (a) PET image of a lung, (b) reconstructed lung using the proposed model (30 iterations, α = 1, k = 1, dt = 0.05, τ = 0.5).


FIG. 4. (a) Radar picture of land mines, (b) reconstructed land mines using the proposed model (40 iterations, α = 1, k = 1, dt = 0.05, τ = 0.5).

Let

D(t) : f → div(L∇ f )

be a differential operator. Let

Q^ε(t) : f → ∫ σ^ε(x, y, t)(f(y) − f(x)) dy

be an integral operator where σ^ε takes the form

σ^ε(x, y, t) = ε^{−(2+n)} Σⁿ_{i,j=1} M_{ij}(x, y, t) ψ_{ij}((y − x)/ε),  x, y ∈ Rⁿ,  (49)

with a cut-off matrix ψ and a matrix M depending on L. Define

m_{ij}(x, t) = M_{ij}(x, x, t).  (50)

FIG. 5. (a) Fingerprint, (b) reconstructed fingerprint (20 iterations, α = 10, k = 1, dt = 0.05, τ = 0.5).


FIG. 6. (a) Top of fingerprint in Fig. 5, (b) reconstructed fingerprint (10 iterations, α = 10, k = 1, dt = 0.05, τ = 0.5), (c) bottom of fingerprint in Fig. 5, (d) reconstructed fingerprint (15 iterations, α = 10, k = 1, dt = 0.05, τ = 0.5).

Assume that

(H1) there exists an integer r ≥ 2 such that Z^α_{ij} = ∫ ψ_{ij}(x) x^α dx = 0 for 1 ≤ |α| ≤ r + 1 and |α| ≠ 2;

(H2) for any integers k, l ∈ [1, n], Σⁿ_{i,j=1} m_{ij}(x) Z^{e_k+e_l}_{ij} = 2L_{kl}, where e_i is the i-th standard basis vector of Rⁿ.

LEMMA 5.1 (see [7]). If M and ψ in (49) satisfy M ∈ W^{1,∞}(Rⁿ × Rⁿ), (1 + |x|^{r+2})ψ(x) ∈ L¹(Rⁿ), and the hypotheses (H1) and (H2), then

‖Df − Q^ε f‖_{L^∞(Rⁿ)} ≤ c(M, ψ) ε^r ‖f‖_{W^{r+2,∞}(Rⁿ)}.

The following examples give two pairs of ψ and M that satisfy (H1) and (H2). Let θ(x) be a function in R² depending on r = |x| only and satisfying ∫₀^∞ r⁵θ(r) dr = 4/π. Denote a_{kl} = ∫_{R²} x_k² x_l² θ(x) dx; that is,

a₁₁ = ∫₀^{2π} ∫₀^∞ r⁵ cos⁴ω θ(r) dr dω = 3,  a₂₂ = ∫₀^{2π} ∫₀^∞ r⁵ sin⁴ω θ(r) dr dω = 3,

and a₁₂ = a₂₁ = ∫₀^{2π} ∫₀^∞ r⁵ cos²ω sin²ω θ(r) dr dω = 1.

Page 17: Image Recovery via Diffusion Tensor and Time-Delay - Mathematics

172 CHEN AND LEVINE

EXAMPLE 5.1 ([7]). Let

ψ_{ij}(x) = x_i x_j θ(x).  (51)

Then the hypothesis (H2) reduces to

a₁₁m₁₁ + a₁₂m₂₂ = 2L₁₁,  a₂₁m₁₁ + a₂₂m₂₂ = 2L₂₂,  2a₁₂m₁₂ = 2L₁₂.

Therefore,

m = L − (1/4)(tr L)I.  (52)
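Spelling out the solve with a₁₁ = a₂₂ = 3 and a₁₂ = 1: the first two equations give 3m₁₁ + m₂₂ = 2L₁₁ and m₁₁ + 3m₂₂ = 2L₂₂, so

m₁₁ = (3L₁₁ − L₂₂)/4 = L₁₁ − (1/4) tr L,  m₂₂ = (3L₂₂ − L₁₁)/4 = L₂₂ − (1/4) tr L,  m₁₂ = L₁₂,

which is exactly (52).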

EXAMPLE 5.2. Let

ψ_{ij}(x) = x^⊥_i x^⊥_j θ(x).  (53)

Then the hypothesis (H2) reduces to

a₁₂m₁₁ + a₁₁m₂₂ = 2L₁₁,  a₂₂m₁₁ + a₁₂m₂₂ = 2L₂₂,  −2a₁₂m₁₂ = 2L₂₁.

Therefore,

m = −L + (3/4)(tr L)I.  (54)

As suggested in [7], we take

M(x, y) = m((x + y)/2).  (55)

Now we apply the two pairs of (ψ, M) in (51)–(52) and (53)–(54) to our diffusion tensor L. Let

σ¹_ε(x, y, t) = ε^{−6} θ(|y − x|/ε) Σ²_{i,j=1} m¹_{ij}((x + y)/2)(y − x)_i (y − x)_j,

where m¹(x) = λ₁L₁ − (1/4) tr(λ₁L₁)I, and L₁ is given by (7), and

σ²_ε(x, y, t) = ε^{−6} θ(|y − x|/ε) Σ²_{i,j=1} m²_{ij}((x + y)/2)(y − x)^⊥_i (y − x)^⊥_j,

where m²(x) = −λ₂L₂ + (3/4) tr(λ₂L₂)I, and L₂ is given by (8). Define

σ_ε(x, y, t) = σ¹_ε(x, y, t) + σ²_ε(x, y, t).


Then using (7), (8), (52), and (54)

σ_ε(x, y, t) = ε^{−6} θ(|y − x|/ε) {(λ₁ − λ₂)((v/‖v‖)|_{((x+y)/2, t)} · (y − x))² + ((3/4)λ₂ − (1/4)λ₁)‖y − x‖²}.  (56)

Let

Q^ε u = ∫ σ_ε(x, y, t)(u(y) − u(x)) dy.  (57)

By Lemma 5.1, div(L∇u) is approximated by Q^ε u, where L is defined in (9). On the other hand, the integral (57) can be approximated by its discrete form in space,

Q^ε u(x_p) ≈ Σ_q σ_ε(x_p, x_q, t)(u(x_q) − u(x_p)) h².  (58)

Therefore, equation (11) can be approximated by

du(x_p, t)/dt = h² Σ_q Λ_{pq}(u(x_q) − u(x_p)) − λ(u(x_p) − I(x_p)),  (59)

where

Λ_{pq} = σ_ε(x_p, x_q, t)  (60)

and σ_ε is given in (56).

Recall a popular Hopfield (1984) ANN [8] that has an activation updating dynamics specified by the following continuous-time dynamical system:

du_p/dt = (1/n_p) Σ_q W_{pq} V_q − α_p u_p + I_p,  (61)

where I_p is the external input or "bias" at the neuron x_p, W_{pq} is the synaptic connection between the neurons x_p and x_q, α_p > 0 is the resistive parameter at x_p, n_p > 0 is the number of nonzero terms in the sum (i.e., the number of neurons within the synaptic range of the neuron x_p), V_q ∈ [−1, 1] denotes the activation level of the neuron x_q, and u_p ∈ R is the state of the neuron x_p. The functions u and V are linked through the relation

V = g(βu), (62)

where β is a parameter and g is a smooth, increasing, odd function satisfying g(±∞) = ±1. Next we shall translate (59) into the ANN model (61)–(62). Define g : R → R such that g is smooth, increasing, g(x) = x if |x| ≤ 1/2, and g(±∞) = ±1. Let

β = 1/(2‖I‖_{L^∞(Ω)}).

Due to (25), ‖u‖_{L^∞(Ω_T)} ≤ ‖I‖_{L^∞(Ω)}; hence g(βu) = βu. Using (62) and the notation u_p = u(x_p, t) and I_p = I(x_p), we can rewrite (59) as

du_p/dt = h² Σ_q (1/β)Λ_{pq} V_q − (h² Σ_q Λ_{pq} + λ) u_p + λI_p.  (63)


Notice that (63) is exactly of the same form as (61) with α_p = h² Σ_q Λ_{pq} + λ, n_p = 1/h², and

W_{pq} = (1/β)Λ_{pq} = (1/β)σ_ε(x_p, x_q, t)
= (1/β) ε^{−6} θ(|x_q − x_p|/ε) {(λ₁ − λ₂)((v/‖v‖)((x_p + x_q)/2, t) · (x_q − x_p))² + ((3/4)λ₂ − (1/4)λ₁)‖x_q − x_p‖²}.  (64)

Therefore, the system (11)–(12) can be interpreted as the time-continuous Hopfield neural network (62)–(63).

Moreover, since v is the time average of ∇u and W_{pq} ≠ 0 only if |x_q − x_p| < ε, roughly speaking we have

(v/‖v‖)((x_p + x_q)/2, t) · (x_q − x_p) ≈ u(x_q, t) − u(x_p, t).

Noticing that λ₁ − λ₂ < 0 and (3/4)λ₂ − (1/4)λ₁ > 0, we can see from (64) that the smaller |u(x_q, t) − u(x_p, t)| is, the larger W_{pq} is. One can also see from (64) that as λ₂ increases, so does W_{pq}. This shows that the synaptic connections are enhanced or inhibited depending on the coherence between the neurons x_p and x_q and the size of λ₂. Therefore, this neural network can enhance coherent structures. Since the solution of the PDE model can be approximated by the solution of this neural network, we can expect the PDE model to have this feature as well.
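A tiny Python illustration of this coherence dependence (ours; it uses the approximation above in place of the gradient term, and a generic cut-off profile θ):

import numpy as np

def synaptic_weight(du, dist, lam1, lam2, eps, beta, theta=lambda r: np.exp(-r**2)):
    """Evaluate W_pq as in (64), with (v/|v|) . (x_q - x_p) replaced by du = u(x_q) - u(x_p)."""
    if dist >= eps:
        return 0.0
    return (1.0 / beta) * eps**-6 * theta(dist / eps) * (
        (lam1 - lam2) * du**2 + (0.75 * lam2 - 0.25 * lam1) * dist**2
    )

# neighbors with similar intensities (coherent) get a larger weight than dissimilar ones
print(synaptic_weight(du=0.01, dist=1.0, lam1=0.1, lam2=5.0, eps=2.0, beta=0.5))
print(synaptic_weight(du=0.80, dist=1.0, lam1=0.1, lam2=5.0, eps=2.0, beta=0.5))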

6. CONCLUSION

We have presented a nonlinear diffusion model for image restoration that is driven by a diffusion tensor depending on the gradient of the image smoothed by time-delay regularization. We proposed applying time-delay regularization to the image (or image gradient) prior to constructing the projection matrix to ensure that the diffusion is only in the direction of the projection. The structure of the diffusion tensor in the proposed model is a combination of two diffusion tensors with proper weights. One of them governs the diffusion in the direction of ∇u and the other governs the diffusion in the direction of ∇u⊥. By choosing suitable diffusivity parameters, the proposed model is able to diffuse images isotropically inside a homogeneous region and anisotropically along its edge, and can enhance coherent structures. Experimental results show the effectiveness of the model in tracking edges and recovering highly degraded images. The link between the model and the time-continuous Hopfield neural network has been derived. The well-posedness of the model has been proved.

ACKNOWLEDGMENTS

The authors thank Chris Brislawn and Susan Kelley for providing the fingerprint data and Bernard Mair for providing the lung data.


REFERENCES

1. L. Alvarez, P. L. Lions, and J. M. Morel, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Numer. Anal. 29, 1992, 845–866.

2. F. Catte, T. Coll, P. L. Lions, and J. M. Morel, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Numer. Anal. 29, 1992, 182–193.

3. Y. Chen, B. C. Vemuri, and L. Wang, Image denoising and segmentation via nonlinear diffusion, Internat. J. Comput. Math. Appl., to appear.

4. G. H. Cottet, Neural networks: Continuous approach and applications to image processing, J. Biol. Syst. 3, 1995, 1131–1139.

5. G. H. Cottet and M. E. Ayyadi, A Volterra type model for image processing, IEEE Trans. Image Process. 7(3), 1998, 292–303.

6. G. H. Cottet and L. Germain, Image processing through reaction combined with nonlinear diffusion, Math. Comput. 61, 1993, 659–673.

7. P. Degond and S. Mas-Gallic, The weighted particles method for convection-diffusion equations: II. The anisotropic case, Math. Comput. 53, 1989, 485–508.

8. J. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Nat. Acad. Sci. 81, 1984, 3088–3092.

9. O. Ladyzhenskaja, V. A. Solonnikov, and N. N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type, AMS Transl. Math. Monogr. 23, 1968.

10. R. Malladi and J. A. Sethian, Image processing: Flows under min/max curvature and mean curvature, Graphical Models Image Process. 58(2), 1996, 127–141.

11. D. Mumford and J. Shah, Optimal approximations by piecewise smooth functions and associated variational problems, Comm. Pure Appl. Math. 17, 1989, 577–685.

12. M. Nitzberg and T. Shiota, Nonlinear image filtering with edge and corner enhancement, IEEE Trans. Pattern Anal. Machine Intell. 14, 1992, 826–833.

13. S. Osher and J. A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton–Jacobi formulations, J. Comput. Phys. 79, 1988, 12–49.

14. P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. Mach. Intell. 12, 1990, 629–639.

15. L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60, 1992, 259–268.

16. J. Shah, A common framework for curve evolution, segmentation and anisotropic diffusion, in IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, June 1996.

17. N. Sochen, R. Kimmel, and R. Malladi, A general framework for low level vision, IEEE Trans. Image Process. 7(3), 1998, 310–318.

18. D. Strong and T. Chan, Spatially and Scale Adaptive Total Variation Based Regularization and Anisotropic Diffusion in Image Processing, UCLA Computational and Applied Mathematics Report 96-46, 1996.

19. C. R. Vogel and M. E. Oman, Iterative methods for total variation denoising, SIAM J. Sci. Statist. Comput. 17, 1996, 227–238.

20. J. Weickert, Scale-Space Properties of Nonlinear Diffusion Filtering with a Diffusion Tensor, Report No. 110, Laboratory of Technomathematics, University of Kaiserslautern, Germany, 1994.

21. J. Weickert, Multiscale texture enhancement, in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, Vol. 970, pp. 230–237, Springer-Verlag, Berlin, 1995.

YUNMEI CHEN received the Ph.D. in mathematics from Fudan University, Shanghai, China, in 1985. Currently, she is a professor in the Department of Mathematics at the University of Florida. Her research interests include nonlinear PDE, geometric analysis, and applications of PDE and geometric analysis to image processing.

STACEY LEVINE received her Ph.D. in mathematics at the University of Florida in May 2000. She is currently an assistant professor in the Department of Mathematics and Computer Science at Duquesne University. Her research interests include geometric evolution equations and applications of PDEs to image processing.