Texture Synthesis Using Gray-Level Co-occurrence Models: Algorithms, Experimental Analysis, and Psychophysical Support

Anthony C. Copeland
Gopalan Ravichandran
Mohan M. Trivedi, FELLOW SPIE
University of California, San Diego
Computer Vision and Robotics Research (CVRR) Laboratory
Electrical and Computer Engineering Department
La Jolla, California 92093-0407
E-mail: [email protected]

Abstract. The development and evaluation of texture synthesis algorithms is discussed. We present texture synthesis algorithms based on the gray-level co-occurrence (GLC) model of a texture field. These algorithms use a texture similarity metric, which is shown to have high correlation with human perception of textures. Synthesis algorithms are evaluated using extensive experimental analysis. These experiments are designed to compare various iterative algorithms for synthesizing a random texture possessing a given set of second-order probabilities as characterized by a GLC model. Three texture test cases are selected to serve as the targets for the synthesis process in the experiments, chosen to represent three different types of primitive texture: disordered, weakly ordered, and strongly ordered. For each experiment, we judge the relative quality of the algorithms by two criteria. First, we consider the quality of the final synthesized result in terms of the visual similarity to the target texture, as well as a numerical measure of the error between the GLC models of the synthesized texture and the target texture. Second, we consider the relative computational efficiency of an algorithm, in terms of how quickly the algorithm converges to the final result. We conclude that a multiresolution version of the "spin flip" algorithm, where an individual pixel's gray level is changed to the gray level that most reduces the weighted error between the image's second-order probabilities and the target probabilities, performs the best for all of the texture test cases considered. Finally, with the help of psychophysical experiments, we demonstrate that the results for the texture synthesis algorithms have high correlation with the texture similarities perceived by human observers. © 2001 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1412851]

Subject terms: texture synthesis; gray-level co-occurrence models; algorithms.

Paper 010018 received Jan. 16, 2001; revised manuscript received May 3, 2001; accepted for publication May 8, 2001.

1 Introduction

Image synthesis or scene generation can be approached from two somewhat different directions. In the first, which can be called the "qualitative perceptual-cue" approach, the goal is to develop synthetic images that have the appearance of a particular type of textured surface. Capturing the appearance quality is the primary goal of such synthesis algorithms. The synthesized textures need to invoke a realistic surface appearance in the observer's eyes. Examples of this kind of synthesis are approaches used in video games and animated films. In the second type of synthesis approach, which can be called the "quantitative perceptual-cue" approach, the objective is to generate texture fields that are based on quantitative, physical models underlying important perceptual cues or physical properties. The ability to derive a texture field that has a prescribed quantitative characterization of the underlying physical and perceptual models is fundamental in developing these algorithms. Good examples of this kind of synthesis algorithm are those required for infrared and visible scene generation for design and evaluation of target recognition systems.[1]
We concentrate on texture synthesis algorithms of this kind.

Texture is an important preattentive cue in human and machine vision.[2] Most natural images are rich in texture, which is the result of regular and repetitive spatial arrangements of some characteristic tiling pattern. Texture models have been shown to form the basis of human preattentive vision.[3] We present texture synthesis algorithms based on the gray-level co-occurrence (GLC) model of a texture field. One of the important and unique features of this research is to provide psychophysical experimental support for the synthesis algorithms. This support is sought for two reasons: first, to seek a quantitative metric to compare the similarities between two textures, which can provide a useful termination criterion, and second, to correlate the performance of a synthesis algorithm with the human observer's assessment of texture similarities and differences. These texture synthesis algorithms are evaluated using extensive experimental analysis. The experiments are designed to compare various iterative algorithms for synthesizing a random texture possessing a given set of second-order probabilities as characterized in a GLC model. Three texture test cases are selected to serve as the targets for the synthesis process in the experiments, chosen to represent three different types of primitive texture: disordered, weakly ordered, and strongly ordered. For each experiment, we judge the relative quality of the algorithms by two criteria. First, we consider the quality of the final synthesized result in terms of the visual similarity to the target texture, as well as a numerical measure of the error between the GLC models of the synthesized texture and the target texture. Second, we consider the relative computational efficiency of an algorithm, in terms of how quickly the algorithm converges to the final result. We conclude that a multiresolution version of the "spin flip" algorithm (as described in Sec. 3), using a new weighted error criterion, performs the best for all of the texture test cases considered.

Opt. Eng. 40(11) 2655-2673 (November 2001) 0091-3286/2001/$15.00 © 2001 Society of Photo-Optical Instrumentation Engineers

2 Texture Representation and Experimental Methodology

2.1 GLC Model

The second-order gray-level probability distribution of a texture image can be calculated by considering the gray levels of pixels in pairs (two at a time). A second-order probability is often called a GLC probability. For a given displacement vector D = [Δx Δy], the joint probability of a pixel at location (x, y) having a gray level i, and the pixel at location (x + Δx, y + Δy) having a gray level j in the texture, is represented by P(i, j | D):

    P(i, j | D) = (1/N) Σ_{(x,y), (x,y)+D ∈ I} g[f(x, y) = i, f(x + Δx, y + Δy) = j],

        i = 0, 1, ..., G − 1,   j = 0, 1, ..., G − 1,   (1)

where N is the number of pairs of pixels separated by the displacement D such that both lie within the image I, and g[·] = 1 if f(x, y) = i and f(x + Δx, y + Δy) = j, and g[·] = 0 otherwise. Allowing i and j to take on any of the G possible integer gray-level values, the GLC probabilities for any single displacement are normally tabulated in the form of a G × G matrix, with i serving as the row index and j serving as the column index. The notation P(D) will be used to refer to the entire GLC matrix for displacement D. A GLC matrix P(D) is a discrete joint probability distribution, and as such, the sum of its elements is unity:

    Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} P(i, j | D) = 1.   (2)
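The tabulation in Eqs. (1) and (2) is straightforward to sketch in code. The following is a minimal illustration, not the authors' implementation; NumPy and the helper name `glc_matrix` are our own:

```python
import numpy as np

def glc_matrix(img, dx, dy, G):
    """Tabulate P(i, j | D) of Eq. (1) for displacement D = [dx, dy].

    Counts every pixel pair ((x, y), (x + dx, y + dy)) with both pixels
    inside the image, then normalizes by the number of such pairs N.
    """
    h, w = img.shape  # img[y, x] holds the gray level f(x, y)
    # Slice out the region where both members of each pair are in bounds.
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    P = np.zeros((G, G))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)  # unbuffered accumulation
    return P / a.size  # divide by N so the elements sum to unity, Eq. (2)

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64))
P = glc_matrix(img, dx=1, dy=0, G=8)
assert abs(P.sum() - 1.0) < 1e-9  # Eq. (2): a discrete joint distribution
```

For real use, scikit-image's `graycomatrix` provides an equivalent (and faster) tabulation.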

A texture model is developed on the basis of GLC matrices, inspired by the work of Gagalowicz and Ma on texture synthesis.[4] The model T consists of a set of GLC matrices:


    T = {P(D) : D ∈ 𝒟},   (3)

where 𝒟 is the set of displacement vectors. To simplify the nomenclature, a texture and the model computed from it will be treated as synonymous, and both will be represented as T. The texture model T is essentially a vector of GLC matrices. The set of displacement vectors 𝒟 is an ordered set, defined as

    𝒟 = { D = [Δx Δy] :  Δx, Δy, T_NX, T_NY ∈ I,
          0 ≤ |Δx| ≤ T_NX,
          0 ≤ |Δy| ≤ T_NY,
          [Δx Δy] ≠ [0 0],
          −90° < tan⁻¹(Δx/Δy) ≤ 90° }.   (4)

The displacement vectors are nonredundant, and T_NX and T_NY represent the maximum displacement in the x and y directions. For digital images, all Δx and Δy are discrete and belong to the set of integers I. Figure 1 shows the spatial axis convention for determining the displacement vector [Δx Δy] for a GLC matrix. Notice that by symmetry, P(i, j | D) = P(j, i | −D), so we need to consider only displacements in directions varying over 180 deg. Thus, while the x coordinate for the displacement vector is varied from −T_NX to +T_NX, the y coordinate is varied only from 1 to T_NY, plus there are T_NX additional displacements from [1 0] to [T_NX 0]. The number of displacements T_NGLC corresponding to the texture model T is:

    T_NGLC = 2 T_NX T_NY + T_NX + T_NY.   (5)

The number of GLC matrices comprising the model increases drastically and nonlinearly with these maximum displacement values. Higher values of T_NX and T_NY will result in a more extensive texture model, but the computational cost for a large model may overwhelm the desired quality, and a judicious compromise must be made.
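The 180-deg enumeration behind Eqs. (4) and (5) can be checked with a short sketch (the helper name is ours, plain Python):

```python
def displacement_set(TNX, TNY):
    """Enumerate the nonredundant displacement set of Eq. (4).

    By the symmetry P(i, j | D) = P(j, i | -D), dy runs over 1..TNY with
    dx over -TNX..TNX, plus the TNX purely horizontal displacements
    [1 0] .. [TNX 0].
    """
    D = [(dx, dy) for dy in range(1, TNY + 1)
                  for dx in range(-TNX, TNX + 1)]
    D += [(dx, 0) for dx in range(1, TNX + 1)]
    return D

D = displacement_set(8, 8)
assert len(D) == 2 * 8 * 8 + 8 + 8  # Eq. (5): T_NGLC = 144 for TNX = TNY = 8
assert (0, 0) not in D and len(set(D)) == len(D)  # nonredundant, no null vector
```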

After B. Julesz made the important conjecture about the role of second-order statistics in human texture discrimination, GLC models have found many useful applications in machine vision.[5] Many studies have utilized features computed from GLC matrices, such as inertia, cluster shade, entropy, and local homogeneity. Specific definitions of these measures may be found in Refs. 6-9. It is known that these measures do not gauge all the important texture context information contained in a GLC matrix. Therefore the elements of a GLC matrix themselves are also used as measures. In several studies comparing the relative power of various texture analysis techniques to perform texture discrimination, GLC matrices generally outperformed other methods.[6,10-13] GLC matrices have also been used for object detection,[7,8] scene analysis,[9,14] as well as texture synthesis.[4,15,16] Other studies have demonstrated the wealth of texture information contained within GLC matrices.[17-19] Also, the human preattentive vision mechanism has been shown to be described quantitatively by GLC matrices.[7,12] GLC matrices contain essentially the same information as some other texture analysis tools, such as Gibbs/Markov random fields,[20] in that they consist of tabulated second-order probabilities.

Fig. 1 Definition of the spatial neighborhood of a pixel for a GLC model.

Fig. 2 The three test cases for the texture synthesis experiments: (a) disordered (pigskin), (b) strongly ordered (raffia), and (c) weakly ordered (wood grain).

2.2 Experimental Methodology for Comparative Analysis of Algorithms

We discuss several experiments designed to compare various iterative algorithms for synthesizing a random texture possessing a given set of second-order probabilities as tabulated in a GLC model. For each experiment, we judge the relative quality of the algorithm in terms of two criteria. First, we consider the quality of the final synthesized result, both qualitatively and quantitatively. Qualitative judgments are made by comparing the visual appearance of the synthesized texture to that of the original target texture. Quantitative judgments are made using a numerical measure of the error between the GLC models of the synthesized texture and the target texture. The measure we use for this purpose is the simple average co-occurrence error (ACE), which has also been found to be highly correlated with human judgments of the visual distinctness between different textures.[21,22] It is defined as:

    ACE = (1/T_NGLC) Σ_{D ∈ 𝒟} Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} |P_t(i, j | D) − P_b(i, j | D)|,   (6)

where P_t and P_b denote the GLC probabilities of the target texture and of the texture being synthesized, respectively.
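A direct transcription of Eq. (6) might look as follows. This is a sketch, not the authors' code; the models are assumed to be stored as dictionaries keyed by displacement:

```python
import numpy as np

def ace(model_t, model_b):
    """Average co-occurrence error of Eq. (6) between two GLC models.

    Each model maps a displacement D to its G x G probability matrix;
    both models must share the same displacement set.
    """
    assert model_t.keys() == model_b.keys()
    return sum(np.abs(model_t[D] - model_b[D]).sum()
               for D in model_t) / len(model_t)

# Two toy 2-level "models" over a single displacement:
t = {(1, 0): np.array([[0.5, 0.0], [0.0, 0.5]])}
b = {(1, 0): np.array([[0.25, 0.25], [0.25, 0.25]])}
assert abs(ace(t, b) - 1.0) < 1e-12  # four entries, each off by 0.25
assert ace(t, t) == 0.0              # identical models have zero ACE
```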

Second, we consider the relative computational efficiency of an algorithm in terms of how quickly the algorithm converges to the final result. Computational efficiency is judged by examining how quickly the value of the ACE measure decreases versus computing time. All experiments were performed on the same machine, a Silicon Graphics Indy with a 132-MHz IP22 processor. All of the iterative algorithms discussed require the generation of pseudorandom numbers, which in itself carries a certain computational overhead. No attempt is made to separate the computational overhead of random number generation from the rest of the algorithm; we consider the generation of needed random numbers to be an implicit part of the algorithm.


Fig. 3 The three histograms reduced to eight gray levels by clustering: (a) disordered (pigskin), (b) strongly ordered (raffia), and (c) weakly ordered (wood grain).


2.3 Selection of Texture Test Patterns

We would like to analyze the effectiveness of various synthesis algorithms regardless of what texture is being synthesized. However, image textures have widely varying visual and statistical qualities. Some texture representation or synthesis methods may be more or less effective for some textures than for others. For this reason, we selected a set of three texture test cases to serve as the target textures for all of the experiments. These three textures are from the Brodatz album of textures,[23] and represent all three of the classes of primitive textures in Rao's taxonomy.[24] The three selected textures are shown in Fig. 2. The pigskin texture, D92 from Brodatz, is classified by Rao as a disordered texture; the raffia texture (D84) is classified as strongly ordered; the wood grain texture (D68) is classified as weakly ordered.

As explained previously, the co-occurrence probabilities in a complete GLC texture model are tabulated in a total of T_NGLC matrices, each of which is of size G × G. The three selected texture images were originally quantized to the standard G = 256 gray levels. To keep the texture models at a more easily manageable size, it was desired to reduce the quantization level of each of the three texture images to G = 8 levels. It was also desired to accomplish this histogram reduction with as little change in the appearance of the textures as possible. This means that we wish to minimize the change in gray level for every pixel. For this reason, the K-means clustering algorithm[25] was used to select the eight gray levels for the new histogram. The algorithm was run separately for each of the three texture test case images. Figure 3 shows the resulting reduced histograms for the three 256 × 256 texture test images. The visual appearance of the images with the reduced histograms is indistinguishable from the original G = 256 level images shown previously in Fig. 2. For all of our experiments, we use the GLC models computed from these G = 8 reduced-histogram texture images as the target models for the synthesis.
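As an illustration of this reduction step, a one-dimensional K-means over gray values might be sketched as follows. This is our own simplified version, not the authors' implementation; NumPy and the helper name are assumptions:

```python
import numpy as np

def kmeans_reduce(img, k=8, iters=50, seed=0):
    """Reduce an image's gray-level histogram to k levels with 1-D K-means.

    Each pixel is mapped to its nearest cluster centroid, which keeps the
    change in gray level per pixel small, as described in the text.
    """
    rng = np.random.default_rng(seed)
    vals = img.ravel().astype(float)
    # Initialize centers from distinct gray values present in the image.
    centers = rng.choice(np.unique(vals), size=k, replace=False)
    for _ in range(iters):
        # Assign every pixel to its nearest center, then recompute means.
        labels = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):  # leave an empty cluster where it is
                centers[c] = vals[labels == c].mean()
    return centers[labels].reshape(img.shape)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
quantized = kmeans_reduce(img)
assert len(np.unique(quantized)) <= 8  # at most eight gray levels remain
```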


Fig. 4 ACE convergence for the spin-flip and spin-exchange algorithms: (a) disordered (pigskin), (b) strongly ordered (raffia), and (c) weakly ordered (wood grain).


3 Spin-Flip Versus Spin-Exchange Synthesis Algorithms

Using a GLC model for image texture, texture synthesis consists of generating a random pattern with the particular set of second-order probabilities contained within the target GLC model. The general procedure is to iteratively modify a random image so as to bring its second-order probabilities closer to the target probabilities. There are two basic algorithms that have been used to accomplish this iterative modification of the image. In the Metropolis spin-flip algorithm,[26,27] an individual pixel's gray level is changed to the gray level that most reduces the error between the image's current second-order probabilities and the target probabilities. This is essentially the method of Gagalowicz and Ma.[4] In the Metropolis spin-exchange algorithm, introduced in image analysis by Cross and Jain[28] and applied to GLC models by Ravichandran, King, and Trivedi[29] and by Lohmann,[16] pairs of pixels are chosen randomly to be considered for a gray-level exchange. If exchanging the gray levels of the two pixels reduces the error between the current and target second-order probabilities, then the exchange is executed. A temperature annealing schedule can also be used,[30] which essentially means that the exchange occurs with probability min[1, exp(−ΔE/T)], where ΔE is a measurement of the change in probabilities and T is a temperature that is gradually reduced to zero. Annealing could also be included with the spin-flip algorithm. However, we wish to study the performance of synthesis algorithms independent of annealing schemes, so the synthesis experiments described use no annealing. This corresponds to an infinite annealing temperature, and simply means that the flip or exchange is always executed if it results in a reduction in co-occurrence error.
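The two acceptance rules just described can be stated compactly. The sketch below shows the annealed Metropolis rule and the greedy no-annealing rule used in the experiments (function names are ours):

```python
import math
import random

def accept_annealed(dE, T):
    """Metropolis rule: execute a proposed flip or exchange with
    probability min[1, exp(-dE/T)], where dE is the change in
    co-occurrence error and T the annealing temperature."""
    return dE <= 0 or random.random() < math.exp(-dE / T)

def accept_greedy(dE):
    """No-annealing rule used in the experiments: execute the
    modification only if it reduces the co-occurrence error."""
    return dE < 0

assert accept_greedy(-1) and not accept_greedy(1)
assert accept_annealed(-1, T=0.5)  # error-reducing moves are always taken
```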

In the first synthesis experiment, we compare the relative efficiency of the spin-flip and spin-exchange algorithms for iterative modification. For this experiment, the GLC models used as the targets of the synthesis include GLC matrices for all displacements up to T_NX = T_NY = 8 pixels. In actual practice, the GLC model for the current image as well as the target GLC model are stored in program memory as matrices of integers, representing the absolute number of co-occurrences for every gray-level pair and for every displacement. If we used the corresponding probabilities P(i, j | D), this would require the use of floating-point numbers. Manipulation of integer values generally requires less computation time and memory than for floating-point numbers. The GLC model for the current image is updated after every modification by simply incrementing or decrementing the appropriate elements in the matrix for each displacement. To determine how a modification would affect the co-occurrence error, we count the number of matrix elements in which the value would be brought closer to the corresponding value in the target GLC model. The result of this is that the measure of co-occurrence error used to determine whether a modification would be beneficial is essentially the same as the ACE measure given in Eq. (6).

Fig. 5 Spin flip executes a higher proportion of the modifications considered: (a) disordered (pigskin), (b) strongly ordered (raffia), and (c) weakly ordered (wood grain).

Table 1 The number of modifications considered and executed.

                             Executed   Considered      %
  Pigskin     spin flip:       43,671      327,680   13.3
              spin exchange:   31,499      424,420    7.4
  Raffia      spin flip:       52,395      327,680   16.0
              spin exchange:   38,923      429,064    9.1
  Wood grain  spin flip:       74,712      327,680   22.8
              spin exchange:   54,362      424,710   12.8

For both algorithms, the initial starting point for the iterative modification is a 256 × 256 image consisting of pixels with uncorrelated, randomly generated gray levels. The gray level is chosen for each pixel such that the probability of any particular gray level is the same as its proportion in the histogram of the target texture. Thus, we can expect that the histogram of this initial image will be approximately the same as the histogram of the target texture (shown in Fig. 3).
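Such a histogram-matched random starting image can be generated in a few lines (a sketch; NumPy assumed, helper name ours):

```python
import numpy as np

def initial_image(target_hist, shape, seed=0):
    """Draw an uncorrelated random image whose gray levels follow the
    target texture's histogram proportions, pixel by pixel."""
    rng = np.random.default_rng(seed)
    p = np.asarray(target_hist, dtype=float)
    p /= p.sum()  # normalize counts into probabilities
    return rng.choice(len(p), size=shape, p=p)

# Hypothetical 4-level target histogram with proportions 1:2:3:2.
img = initial_image([1, 2, 3, 2], shape=(256, 256), seed=42)
assert img.shape == (256, 256) and img.min() >= 0 and img.max() <= 3
```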

One iteration is counted when every pixel has been considered for a modification, and no pixel is considered twice during a single iteration. The procedure for selecting a pixel for consideration is to first select a random location in the image. If that pixel has already been considered during the current iteration, then we step through the image in raster order until we find a pixel that has not yet been considered during the current iteration. We present the result of five iterations for the spin-flip algorithm and 15 for spin exchange, requiring a roughly similar length of computation time.

Fig. 6 The result of five spin-flip iterations. Synthesized: (a) pigskin, (b) raffia, and (c) wood grain.

Fig. 7 The result of 15 spin-exchange iterations. Synthesized: (a) pigskin, (b) raffia, and (c) wood grain.

Fig. 8 There is a slight change in the image histograms as a result of the spin-flip iterative process: (a) pigskin, (b) raffia, and (c) wood grain.

For each of the three texture test cases and for both of the algorithms, a total of ten patterns was synthesized, each with a different number provided as the initial seed for the random number generator. Since the initial random number seed has an effect on the progress of the synthesis as well as on the final synthesized result, any single synthesis run is essentially a sample drawn from the space of all possible results for any given target GLC model. For this reason, it would be unwise to draw conclusions about a synthesis algorithm from only one sample of this space.

Figure 4 shows the value of the ACE measure versus computing time for both the spin-flip and the spin-exchange algorithms. The ACE measure that is plotted in each case is the average over the ten test runs. These plots indicate that the spin-flip algorithm converges more quickly than spin exchange. It is also evident that the reduction in ACE levels off to a lower final level for spin flip than for spin exchange.

Figure 5 shows the number of modifications considered and the number actually executed for this same experiment. Again, the statistics plotted are averaged over the ten test runs. In every case, we see that the spin-exchange algorithm considers modifications more quickly. The spin-flip algorithm requires more time to consider a modification because it must compute the effect on the error of changing the pixel's gray level to each of the seven other possible gray levels, while the spin-exchange algorithm must only consider one possibility: the simple exchange of gray levels between two pixels. This fact, along with the fact that the spin-exchange algorithm considers two pixels at a time, explains why 15 iterations of the spin-exchange algorithm can be completed in roughly the same computing time as only five iterations of spin flip. Also from the plots in Fig. 5, we see that although the spin-exchange algorithm is considering more modifications, spin flip is actually executing more. Because the spin-flip algorithm can change a pixel's gray level to any of the other possible gray levels, it has more flexibility and is more likely to discover a change that will reduce the error. Table 1 gives the final number of modifications considered and executed. For all three texture test cases, we see that the spin-flip algorithm executed almost double the percentage of modifications executed by spin exchange.

Fig. 9 ACE convergence for the spin-flip algorithm using weighted error and using absolute error: (a) pigskin, (b) raffia, and (c) wood grain.

Figure 6 shows the final result of the spin-flip algorithm for one of the ten test runs. Figure 7 shows the final result of the spin-exchange algorithm. All of the synthesized images exhibit some "speckle," which resembles high-frequency noise. This is an artifact left over from the random uncorrelated gray levels that served as the initial image for the synthesis. But we can see that the spin-flip results exhibit less of this speckle, and generally appear more similar to the target textures of Fig. 2.

The evidence appears to favor the spin-flip algorithm over the spin-exchange algorithm. However, we must address one important advantage of the spin-exchange algorithm. Since it only allows gray-level exchanges, the histogram of the image being synthesized never changes. For some applications, this may be required. For example, we have used the spin-exchange algorithm to generate stimuli for the texture perception experiments described in Sec. 7. Since we wished to study the perceptual qualities represented by second-order statistics separate from first-order statistics, we needed images that all possessed exactly the same histogram. In Fig. 8, we plot the average change, from initial to final image, in the number of pixels at each gray level for the spin-flip algorithm. The error bars show the spread of the amount of this change over the ten test runs. For the pigskin texture, the maximum change over all the gray levels and over all ten test runs was 0.018%; for raffia, 0.015%; and for wood grain, 0.019%. We feel that for most applications, the advantages of the spin-flip algorithm in terms of convergence speed and visual quality of the result outweigh the slight modification of the histogram. Spin flip is the algorithm used in the experiments that follow.


Fig. 10 Using weighted error executes a much higher percentage of changes: (a) pigskin, (b) raffia, and (c) wood grain.


4 GLC Similarity Measures: Absolute Error Versus Weighted Error

In the previous experiment, to determine how a modification would affect the co-occurrence error, we counted the number of matrix elements in which the value would be brought closer to the corresponding value in the target GLC model. A change by one co-occurrence in any matrix element is considered equivalent to a change of one co-occurrence in any other matrix element. We refer to this as a measure of absolute co-occurrence error. In this next experiment, we consider the use of a different, weighted measure of co-occurrence error. In this new measure, a change by one co-occurrence in any matrix element is weighted by the current surplus or deficit in co-occurrences, compared to the target model, in that matrix element. The result is that if there is a large surplus or deficit for a particular matrix element, a change there is considered more important than one in an element with a small surplus or deficit. Using absolute error, these would be considered of equal importance.
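The contrast between the two criteria can be illustrated per matrix element. The sketch below is one plausible reading of the weighting described above; the function names and the exact weighting form are our assumptions, not the paper's definition:

```python
def absolute_gain(c_cur, c_tgt, delta):
    """Absolute criterion: +1 if the element moves toward its target
    count, -1 if away. Every element counts equally."""
    return abs(c_cur - c_tgt) - abs(c_cur + delta - c_tgt)

def weighted_gain(c_cur, c_tgt, delta):
    """Weighted criterion (one plausible reading): the same change of
    `delta` co-occurrences, scored by the element's current surplus or
    deficit relative to the target."""
    return -delta * (c_cur - c_tgt)

# A large surplus (10 extra co-occurrences) vs. a small one (1 extra):
assert absolute_gain(10, 0, -1) == absolute_gain(1, 0, -1) == 1
assert weighted_gain(10, 0, -1) == 10 > weighted_gain(1, 0, -1) == 1
```

Under the absolute criterion both decrements score identically; under the weighted one, fixing the badly mismatched element is worth ten times as much.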

Five iterations of the spin-flip algorithm, using the weighted error criterion, were run for the three texture test cases. Figure 9 shows plots of the convergence of the algorithm along with plots from the spin-flip convergence from the previous experiment, using absolute error. Again, all statistics are averaged over ten test runs. During the early stages of the iterative modification, the ACE using absolute error drops more quickly than the ACE using weighted error. This is most evident in the plot for wood grain. But eventually, the algorithm using weighted error settles to a

Table 2 The number of modifications considered and executed.

                         Executed   Considered     %
Pigskin      absolute:     43,671      327,680  13.3
             weighted:    222,910      327,680  68.0
Raffia       absolute:     52,395      327,680  16.0
             weighted:    199,240      327,680  60.8
Wood grain   absolute:     74,712      327,680  22.8
             weighted:    165,370      327,680  50.5


Fig. 11 The result of five spin-flip iterations using weighted error: (a) synthesized pigskin, (b) synthesized raffia, and (c) synthesized wood grain.


lower ACE value for all three test cases. Figure 10 shows that both algorithms consider modifications at about the same rate, but the algorithm using weighted error executes many more of them. Table 2 gives the final number of modifications considered and executed. Again, we see that using weighted error greatly increases the proportion of modifications considered that are executed. Figure 11 shows the final result after five iterations of the spin-flip algorithm using weighted error for one of the ten test runs. Comparing these images to the ones for absolute error in Fig. 6, we see that using weighted error results in less of the speckle that was evident previously, and produces a final result that appears more similar to the desired target textures. Spin flip using weighted error is the algorithm used in the experiments that follow.

5 Evaluating Spatial Neighborhood Extent Effects

As mentioned previously, the number of matrices in a GLC model increases drastically and nonlinearly with higher values of T_NX and T_NY. In the next experiment, we compare the results of texture synthesis for varying values of T_NX and T_NY. In the previous experiments, we used GLC models with T_NX = T_NY = 8 pixels. In this experiment, we try values for these parameters of 2, 4, and 16 pixels. In every case, these parameters describe the extent of the spatial neighborhood for the target GLC model as well as the model of the image being synthesized.
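The "drastic and nonlinear" growth is simply the size of the displacement set D. A sketch under one common indexing convention (keeping one member of each ±d pair; the paper's exact convention is not restated in this section):

```python
def displacement_set(tnx, tny):
    """Distinct displacement vectors for a GLC model with maximum spatial
    extent (tnx, tny). Keeps one member of each +/-d pair, since P(i, j | d)
    and P(j, i | -d) carry the same information (a common convention, assumed
    here rather than quoted from the paper)."""
    return [(dx, dy)
            for dx in range(tnx + 1)
            for dy in range(-tny, tny + 1)
            if (dx, dy) != (0, 0) and (dx > 0 or dy > 0)]

# The model size grows roughly quadratically in the extent:
for t in (2, 4, 8, 16):
    print(t, len(displacement_set(t, t)))  # 12, 40, 144, 544 matrices
```

Since the synthesis must update one matrix per displacement for every candidate modification, the per-modification cost scales with this count, which is why the smaller extents converge so much faster.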


Figure 12 shows the convergence of spin-flip iterative modification using weighted error for various values for the maximum spatial extent of the texture model. It is of no benefit to compare the magnitude of ACE computed for GLC models for different values of T_NX and T_NY, since the error is averaged over a different set of displacement vectors D. Instead, these plots are presented to compare how quickly the value of ACE levels off to a final level. It is evident the ACE levels off much more quickly for maximum displacements of 2 or 4 pixels than for 8 (from the previous experiment) or 16 pixels. Figure 13 shows the number of iterative modifications considered and executed for various values of T_NX and T_NY. The smaller the spatial extent of the GLC model, the more quickly modifications are both considered and executed. This is because the algorithm must examine the GLC matrix for every displacement in the set D and consider the effect the modification would have on the error for that matrix.

6 Multiresolution Texture Synthesis Approach

The previous experiment demonstrated how much more quickly the synthesis considers and executes modifications for GLC models of lesser spatial extent. However, a GLC model using a spatial model of lesser extent is less successful at capturing information about patterns with larger texture tiles.29 In this section, we examine a multiresolution approach to the synthesis algorithm, which is capable of synthesizing larger tile patterns while still taking advantage of the increased speed of using a GLC model of small


Fig. 12 Spin-flip iterative modifications for various values of the parameters representing the maximum spatial extent of the texture model, T_NX and T_NY: (a) pigskin, (b) raffia, and (c) wood grain.


spatial extent. Figure 14 shows the general scheme for multiresolution texture synthesis. The original texture image, which will be the target of the synthesis, is at the upper left of the diagram. The GLC model is computed from the texture image at its original scale, then the resolution of the image is halved twice, and the GLC model is also computed from these two smaller images. The synthesis begins at the lowest level of resolution, at the bottom right of the diagram. The initial image subjected to the spin-flip algorithm at the lowest level consists of pixels with uncorrelated, randomly generated gray levels. The initial image for the two higher levels is the one synthesized at the next lower level, but doubled in size. In this manner, larger texture tiles can be synthesized first, and then the finer details of the texture are synthesized.
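The control flow of this scheme can be sketched as follows. The `spin_flip_pass` stub stands in for the paper's iterative GLC-matching step, and the halving and doubling filters (2x2 block averaging, pixel replication) are simple assumed choices rather than the paper's:

```python
import numpy as np

def halve(im):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    x = im.astype(np.int64)
    return ((x[0::2, 0::2] + x[0::2, 1::2]
             + x[1::2, 0::2] + x[1::2, 1::2]) // 4).astype(im.dtype)

def double(im):
    """Double the image size by pixel replication to seed the next-finer level."""
    return np.kron(im, np.ones((2, 2), dtype=im.dtype))

def spin_flip_pass(img, target):
    """Placeholder for iterative spin-flip modification against the GLC
    model of `target`; returns the image unchanged in this sketch."""
    return img

def multires_synthesize(target, levels=3, seed=0):
    """Coarse-to-fine control flow of Fig. 14: model the target at each
    scale, start from uncorrelated random gray levels at the coarsest
    scale, and seed each finer level with the doubled result below it."""
    rng = np.random.default_rng(seed)
    pyramid = [target]
    for _ in range(levels - 1):
        pyramid.append(halve(pyramid[-1]))
    synth = rng.integers(0, 256, size=pyramid[-1].shape).astype(target.dtype)
    for tgt in reversed(pyramid):            # coarsest to finest
        synth = spin_flip_pass(synth, tgt)
        if synth.shape != target.shape:
            synth = double(synth)            # seed the next-finer level
    return synth
```

The point of the structure is that the expensive GLC matching runs with a small spatial extent at every level, while the doubling step carries the large-tile layout found at coarse scales up to the full resolution.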

Figure 15 shows the result of multiresolution synthesis using a model at each resolution level with maximum spatial extent of T_NX = T_NY = 2. Figures 16 and 17 show the result for spatial extents of T_NX = T_NY = 4 and T_NX = T_NY = 8. Comparing the images in Fig. 15 to those in Fig. 11, we see that we can achieve results using the multiresolution


approach and relatively small T_NX = T_NY = 2 models comparable to the single-resolution results using a much larger T_NX = T_NY = 8 model. Also from these results, we see that for the pigskin and raffia textures, there is no noticeable difference in the multiresolution results as we increase the size of the models from T_NX = T_NY = 2 to T_NX = T_NY = 8. These texture patterns consist of relatively small tiles. For the wood grain texture, the results for T_NX = T_NY = 4 and T_NX = T_NY = 8 have large shading patterns similar to the target texture in Fig. 2, which are not evident in the results for T_NX = T_NY = 2. It is clear that some textures will require a GLC model of larger spatial extent to capture information about larger tiling patterns.

The texture synthesis algorithms presented can be used for a wide range of textures. As an illustration, Fig. 18 shows an image of stones and bushes from a natural scene and the result of multiresolution synthesis of the natural texture, using models with spatial extents of T_NX = T_NY = 4 pixels. The total computation time was 2.5 min.


Fig. 13 Iterative modifications considered and executed for various values for the maximum spatial extent of the texture model: (a) pigskin, (b) raffia, and (c) wood grain.


7 Hybrid Texture Synthesis and Psychophysical Study

The purpose of this next experiment is to assess the ability of the GLC-based error metric [Eq. (6)] to model the perceptual distance between textured patterns by considering the case where the texture patterns are hybrids of two significantly different textures. The creation of the stimuli to be used in the experiment was accomplished as follows. Two different 256×256 texture patterns were selected. After experimenting with several different pairs of textures, we settled on a wood grain texture and a grass texture, both from the Brodatz album of textures.23 These two texture patterns are shown in Fig. 19. The grass texture is considered disordered, while the wood grain texture is weakly ordered.24

The GLC models were then computed from the two texture images, considering all possible displacements of up to n_x = n_y = 8 pixels. From these two GLC models, nine new synthetic models were created by linearly combining the two real models using nine different weighting schemes. This linear combination was performed by simply taking


each element of each GLC matrix for texture model A, multiplying it by the weight w_A, and adding it to the corresponding matrix element from texture model B multiplied by weight w_B. The weights were chosen such that they always summed to unity, meaning that the resulting synthetic model represents a proper probability distribution in that the elements in each GLC matrix also sum to unity. This combination of two real GLC models to form a synthetic model is described by Eq. (7):

P_(w_A, w_B)(i, j | d) = w_A · P_A(i, j | d) + w_B · P_B(i, j | d)   for all i, j, and d,

w_A + w_B = 1.   (7)

A synthetic texture model created in this way from textures A and B can be described by the two weights, w_A and w_B. These models will be referred to as hybrid texture models.
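Equation (7) is just an element-wise convex combination of the two models' matrices. A minimal sketch (the toy two-gray-level matrices and the dictionary-of-displacements representation are assumptions of this sketch, not the paper's data structures):

```python
import numpy as np

def hybrid_model(model_a, model_b, w_a):
    """Linear combination of two GLC models per Eq. (7). Each model maps a
    displacement d to a matrix P(i, j | d) whose elements sum to 1."""
    w_b = 1.0 - w_a                       # weights constrained to sum to unity
    return {d: w_a * model_a[d] + w_b * model_b[d] for d in model_a}

# Two toy 2-gray-level models over a single displacement (illustrative):
A = {(1, 0): np.array([[0.7, 0.1], [0.1, 0.1]])}
B = {(1, 0): np.array([[0.1, 0.2], [0.2, 0.5]])}
H = hybrid_model(A, B, w_a=0.9)
assert np.isclose(H[(1, 0)].sum(), 1.0)   # still a proper distribution
```

Because each input matrix sums to unity and the weights sum to unity, every hybrid matrix automatically sums to unity as well, which is the property the text relies on.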

Designating the grass texture model as A and the wood grain model as B, the nine hybrid texture models for our


Fig. 14 The general scheme for multiresolution texture synthesis.

Fig. 15 The result of multiresolution synthesis using a model with maximum spatial extent of T_NX = T_NY = 2 pixels: (a) synthesized pigskin, (b) synthesized raffia, and (c) synthesized wood grain.


Fig. 16 The result of multiresolution synthesis using a model with maximum spatial extent of T_NX = T_NY = 4 pixels. Synthesized: (a) pigskin, (b) raffia, and (c) wood grain.

Fig. 17 The result of multiresolution synthesis using a model with maximum spatial extent of T_NX = T_NY = 8 pixels. Synthesized: (a) pigskin, (b) raffia, and (c) wood grain.


Fig. 18 A (a) natural texture and (b) the result of multiresolution synthesis for the texture.


experiment were created using the weighting schemes (w_A, w_B) as follows: (0.9,0.1), (0.8,0.2), (0.7,0.3), (0.6,0.4), (0.5,0.5), (0.4,0.6), (0.3,0.7), (0.2,0.8), and (0.1,0.9). Together with the two original models, A (1,0) and B (0,1), we now have a total of eleven GLC models. Next, eleven different 256×256 textures were synthesized with the spin-exchange method, using the eleven GLC models as inputs. The synthesis process in every case was continued until no further improvement in the results was evident. Also, a different initial seed for the random number generator was used for each of the eleven textures, resulting in a unique initial spatial distribution of pixel intensities. These eleven different synthesized textures will serve as the background patterns for the experiment stimuli.

Target patterns were again synthesized in a 96×96 square in the center of each of the 256×256 background textures. The GLC model from which the target pattern was synthesized in every case was the model computed from the original grass texture [the (1,0) model]. Thus, the stimuli all had the same target texture model and only the background textures were different. The spin-exchange algorithm was used for the synthesis, with blending along the boundary between target and background to minimize edge effects. Once again, the synthesis process in every case was


continued until no further improvement in the results was evident, and a different initial random number seed was used for each target. The eleven stimuli used in the experiment are shown in Fig. 20. Stimulus 1 should theoretically have no target visible, since the original grass (1,0) model was used for both target and background synthesis. Stimulus 11 should theoretically have the most visible target, since it consists of a texture synthesized from the grass (1,0) model against a background synthesized from the wood (0,1) model. The nine stimuli between these two extremes have targets synthesized from the grass (1,0) model against backgrounds synthesized from the hybrid texture models.

A paired-comparison experiment was conducted using the synthesized stimuli. Since there were n = 11 stimuli used in the experiment, there were a total of n(n-1)/2 = (11)(10)/2 = 55 possible pairs of stimuli. For each paired comparison, the human observer was required as before to choose the image in which he thought the target pattern is more distinct from its background. So the attribute being judged in the experiment is the ''perceptual distinctness'' of the target from the background, but no specific instructions were given on what perceptual cues the test subjects were

Fig. 19 The original texture images for the texture hybridization experiment: (a) grass and (b) wood.


Fig. 20 The stimuli used in the texture hybridization experiment.


to use when making their decisions. Ten observers were used. The data that were collected from the experiment consist of the number of times, out of ten observations, that each of the stimuli was judged greater than each of the other stimuli, i.e., its target pattern was judged more distinct. This raw data was used in a law of comparative judgment (LCJ) analysis to solve for the scale values for each


stimulus.31 The scale values resulting from this method are given in the fourth column of Table 3. Figure 21 shows graphically the relative locations of the scale values along the perceptual continuum representing target distinctness. Note that stimulus 11 could not be assigned a relative scale because it was chosen over every other stimulus by every observer. In this case, we can think of the normal curve


representing the distribution of discriminal processes for stimulus 11 as being so far separated from the distributions of the other stimuli as to have practically zero overlap, so we cannot accurately estimate its relative location on the continuum. It was impossible to foresee this situation when the stimuli for the experiment were created. There was, however, plenty of perceptual overlap among the other ten stimuli, so the LCJ analysis was successful when only these ten were analyzed. These scale values, neglecting stimulus 11, exhibit a correlation of 0.93 with the ACE measure [Eq. (6)]. Figure 22 shows the ten finite stimuli plotted using the ACE metric computations and the LCJ scale values. Also shown is the best fitting line from a linear regression analysis.
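The LCJ step, and why stimulus 11 falls off the scale, can be sketched with a standard Thurstone Case V computation; the win matrix below is hypothetical, and the paper's exact procedure is the one in its Ref. 31:

```python
import numpy as np
from statistics import NormalDist

def lcj_scale(wins, n_obs):
    """Thurstone Case V scale values from a paired-comparison win matrix.
    wins[i, j] = times stimulus i was judged more distinct than stimulus j,
    out of n_obs presentations. Requires 0 < wins/n_obs < 1 off-diagonal:
    a stimulus chosen every single time (like stimulus 11 here) maps to an
    unbounded z-score and cannot be placed on the scale."""
    n = wins.shape[0]
    z = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                z[i, j] = NormalDist().inv_cdf(wins[i, j] / n_obs)
    scale = z.sum(axis=1) / (n - 1)       # mean z-score against the others
    return scale - scale.min()            # anchor the lowest stimulus at 0

# Toy example: 3 stimuli, 10 observations per pair (hypothetical counts).
wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]])
s = lcj_scale(wins, 10)
```

The resulting values order the stimuli along a single perceptual continuum, which is exactly what Fig. 21 plots for the ten finite stimuli.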

8 Concluding Remarks

We have presented development and analysis of texture synthesis algorithms based on the GLC model of a texture

Table 3 The scale values for the 11 stimuli in the texture hybridization experiment.

Stimulus #   Target Model   Bkgnd. Model   Scale Value
 1           (1,0)          (1,0)          1.29
 2           (1,0)          (.9,.1)        0.00
 3           (1,0)          (.8,.2)        2.14
 4           (1,0)          (.7,.3)        2.14
 5           (1,0)          (.6,.4)        1.61
 6           (1,0)          (.5,.5)        2.90
 7           (1,0)          (.4,.6)        3.16
 8           (1,0)          (.3,.7)        3.77
 9           (1,0)          (.2,.8)        5.22
10           (1,0)          (.1,.9)        4.37
11           (1,0)          (0,1)          ∞


field. One of the important and unique features of this research is to provide psychophysical experimental support to the synthesis algorithms. This support is sought for two reasons: first, to seek a quantitative metric to compare the similarities between two textures, which can provide a useful termination criterion; and second, to correlate the performance of a synthesis algorithm to the human observer's assessment of texture similarities and differences. These algorithms are evaluated using extensive experimental analysis. These experiments are designed to compare various iterative algorithms for synthesizing a random texture possessing a given set of second-order probabilities as characterized in a GLC model. Three texture test cases were selected to serve as the targets for the synthesis process in the experiments. The three texture test cases are selected so as to represent three different types of primitive texture: disordered, weakly ordered, and strongly ordered. For each experiment, we judge the relative quality of the algorithms by two criteria. First, we consider the quality of the final synthesized result in terms of the visual similarity to the target texture as well as a numerical measure of the error between the GLC models of the synthesized texture and the target texture. Second, we consider the relative computational efficiency of an algorithm, in terms of how quickly the algorithm converges to the final result. We conclude that a multiresolution version of the spin-flip algorithm, using a new weighted error criterion, performs the best for all of the texture test cases considered. Also, with the help of psychophysical experiments, we were able to demonstrate a

Fig. 21 The relative locations of the scale values along the perceptual continuum representing target distinctness.

Fig. 22 The ten finite stimuli in the texture hybridization experiment plotted using the ACE metric computations and the LCJ scale values.



high degree of correlation between synthetically generated textures and how human observers perceive texture differences.

Acknowledgments

This research was partially supported by grant DAAK70-93-C-0037 from the Night Vision and Electronic Sensors Directorate of the U.S. Army. We also thank Dr. Mukul Shirvaikar for his assistance.

References

1. G. R. Gerhart, E. L. Bednarz, T. J. Meitzler, E. Sohn, and R. E. Karlsen, ''Target acquisition methodology for visual and infrared imaging sensors,'' Opt. Eng. 35(10), 3026–3036 (1996).
2. M. D. Levine, Vision in Man and Machine, McGraw-Hill, New York (1985).
3. B. Julesz and R. Bergen, ''Textons, the fundamental elements in preattentive vision and perception of textures,'' in RCV87, pp. 243–256 (1987).
4. A. Gagalowicz and S. De Ma, ''Sequential synthesis of natural textures,'' Comput. Vis. Graph. Image Process. 30, 289–315 (1985).
5. B. Julesz, ''Visual pattern discrimination,'' IRE Trans. Inf. Theory 8(2), 84–92 (1962).
6. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, ''Textural features for image classification,'' IEEE Trans. Syst. Man Cybern. 3, 610–621 (1973).
7. M. M. Trivedi, C. A. Harlow, R. W. Conners, and S. Goh, ''Object detection based on gray level cooccurrence,'' Comput. Vis. Graph. Image Process. 28, 199–219 (1984).
8. M. M. Trivedi and C. A. Harlow, ''Identification of unique objects in high resolution aerial images,'' Opt. Eng. 24(3), 502–506 (1985).
9. R. W. Conners, M. M. Trivedi, and C. A. Harlow, ''Segmentation of a high-resolution urban scene using texture operators,'' Comput. Vis. Graph. Image Process. 25(3), 273–310 (1984).
10. R. W. Conners and C. A. Harlow, ''A theoretical comparison of texture algorithms,'' IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2(3), 204–222 (1980).
11. P. P. Ohanian and R. C. Dubes, ''Performance evaluation for four classes of textural features,'' Pattern Recogn. 25(8), 819–833 (1992).
12. R. W. Conners and C. T. Ng, ''Developing a quantitative model of human preattentive vision,'' IEEE Trans. Syst. Man Cybern. 19(6), 1384–1407 (1989).
13. J. S. Weszka, C. R. Dyer, and A. Rosenfeld, ''A comparative study of texture measures for terrain classification,'' IEEE Trans. Syst. Man Cybern. 6, 269–285 (1976).
14. C. A. Harlow, M. M. Trivedi, R. W. Conners, and D. Phillips, ''Scene analysis of high resolution aerial scenes,'' Opt. Eng. 25(3), 347–355 (1986).
15. A. Gagalowicz, ''A new method for texture fields synthesis: Some applications to the study of human vision,'' IEEE Trans. Pattern Anal. Mach. Intell. PAMI-3(5), 520–533 (1981).
16. G. Lohmann, ''Co-occurrence-based analysis and synthesis of textures,'' in 12th IAPR Intl. Conf. Patt. Recog. (ICPR), Vol. 1, pp. 449–453, Jerusalem, Israel (1994).
17. A. R. Figueiras-Vidal, J. M. Paez-Borrallo, and R. Garcia-Gomez, ''On using cooccurrence matrices to detect periodicities,'' IEEE Trans. Acoust., Speech, Signal Process. 35(1), 114–116 (1987).
18. J. Parkinnen, K. Selkainaho, and E. Oja, ''Detecting texture periodicity from the cooccurrence matrix,'' Pattern Recogn. Lett. 11, 43–50 (1990).
19. C. C. Gotlieb and H. E. Kreyszig, ''Texture descriptors based on co-occurrence matrices,'' Comput. Vis. Graph. Image Process. 51, 70–86 (1990).
20. I. M. Elfadel and R. W. Picard, ''Gibbs random fields, cooccurrences, and texture modeling,'' IEEE Trans. Pattern Anal. Mach. Intell. 16(1), 24–37 (1994).
21. A. C. Copeland and M. M. Trivedi, ''Signature strength metrics for camouflaged targets corresponding to human perceptual cues,'' Opt. Eng. 37(2), 582–591 (1998).
22. A. C. Copeland and M. M. Trivedi, ''Texture perception in humans and computers: Models and psychophysical experiments,'' Proc. SPIE 2742, 436–446 (1996).
23. P. Brodatz, Textures—A Photographic Album for Artists and Designers, Dover Publications, New York (1966).
24. A. Ravishankar Rao, A Taxonomy for Texture Description and Identification, Springer-Verlag, New York (1990).
25. J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, Addison-Wesley, Reading, MA (1974).
26. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, ''Equation of state calculations by fast computing machines,'' J. Chem. Phys. 21, 1087–1092 (1953).
27. J. M. Carstensen, ''Description and Simulation of Visual Texture,'' PhD thesis, Technical University of Denmark (1992).
28. G. R. Cross and A. K. Jain, ''Markov random field texture models,'' IEEE Trans. Pattern Anal. Mach. Intell. 5(1), 25–39 (1983).
29. G. Ravichandran, E. J. King, and M. M. Trivedi, ''Texture synthesis: A multiresolution approach,'' in Proc. Ground Target Modeling and Validation Conf., Houghton, MI (1994).
30. R. W. Picard, I. M. Elfadel, and A. P. Pentland, ''Markov/Gibbs texture modeling: Aura matrices and temperature effects,'' in Proc. IEEE Computer Soc. Conf. on Computer Vision Patt. Recog. (CVPR), 371–377 (1991).
31. A. C. Copeland, M. M. Trivedi, and J. R. McManamey, ''Evaluation of image metrics for target discrimination using psychophysical experiments,'' Opt. Eng. 35(6), 1714–1722 (1996).

Anthony C. Copeland received the BSEE degree in 1988 from the US Military Academy at West Point, New York, and the MSEE and PhD degrees in 1993 and 1996 from the University of Tennessee, Knoxville. He was a signal corps officer in the US Army from 1988 to 1991, which included service as a communications node platoon leader in the 24th Infantry Division (Mechanized) during Operation Desert Shield/Storm. He was employed as a research engineer during 1996 to 1997 in the Electrical and Computer Engineering Department at the University of California, San Diego, performing research in image understanding, texture analysis, and human visual perception. Since 1997, he has developed real-time signal processing systems for hyperspectral sensors at PAR Government Systems Corporation in San Diego.

Gopalan Ravichandran is the co-founder and CTO of iTrendersInc. He received his BTech degree from the Indian Institute of Tech-nology at Madras, India in 1987, followed by his MS and PhD de-grees from Carnegie Mellon University at Pittsburgh in 1989 and1992, respectively. He was a research associate at the ComputerVision and Robotics Research (CVRR) Laboratory, Department ofElectrical Engineering, University of Tennessee, Knoxville, where hecontributed to the work on texture analysis and synthesis, before hemoved on to SRI International in 1995. His recent work includes theapplication of pattern recognition techniques to business intelligencesystems for automated detection of general market trends, and spe-cifically customer behavioral trends to streamline the positioning ofproducts and services.

Mohan M. Trivedi is a professor in theElectrical and Computer Engineering De-partment of the University of California,San Diego where he serves as the Directorof the Computer Vision and Robotics Re-search Laboratory (http://cvrr.ucsd.edu).He and his team are engaged in a broadrange of research studies in active percep-tion and novel machine vision systems, in-telligent (‘‘smart’’) environments, distrib-uted video networks, and intelligent

systems. At UCSD, Trivedi also serves on the Executive Committee of the California Institute for Telecommunications and Information Technology, Cal-(IT)2, leading the team involved in the Intelligent Transportation and Telematics projects. Trivedi serves as the Editor-in-Chief of Machine Vision and Applications, the official journal of the International Association for Pattern Recognition. He is a frequent consultant to various national and international industry and government agencies. Trivedi is a recipient of the Pioneer Award (Technical Activities) and the Meritorious Service Award of the IEEE Computer Society and the Distinguished Alumnus Award from Utah State University. He is a Fellow of SPIE.
