
Sparse sampling in array processing

Sverre Holm, Andreas Austeng, Kamran Iranpour, Jon-Fredrik Hopperstad

Department of Informatics, University of Oslo, P. O. Box 1080, N-0316 Oslo, Norway

E-mail: sverre.holm () ifi uio no

Abstract

Sparsely sampled irregular arrays and random arrays have been used or proposed in several fields such as radar, sonar, ultrasound imaging, and seismics. We start with an introduction to array processing and then consider the combinatorial problem of finding the best layout of elements in sparse 1-D and 2-D arrays. The optimization criteria are then reviewed: creation of beampatterns with low mainlobe width and low sidelobes, or a coarray that is as uniform as possible. The latter case is shown here to be nearly equivalent to finding a beampattern with minimal peak sidelobes.

We have applied several optimization methods to the layout problem, including linear programming, genetic algorithms and simulated annealing. The examples given here are both for 1-D and 2-D arrays. The largest problem considered is the selection of K = 500 elements in an aperture of 50 by 50 elements. Based on these examples we propose that an estimate of the achievable peak level in an algorithmically optimized array is inversely proportional to K and is close to the estimate of the average level in a random array.

Active array systems use both a transmitter and a receiver aperture, and they need not necessarily be the same. This gives additional freedom in the design of the thinning patterns, and favorable solutions can be found by using periodic patterns with different periodicity for the two apertures, or a periodic pattern in combination with an algorithmically optimized pattern with the condition that there be no overlap between transmitter and receiver elements. With the methods given here one has the freedom to choose a design method for a sparse array system using either the same elements for the receiver and the transmitter, no overlap between the receiver and transmitter, or partial overlap as in periodic arrays.


Ch. 19 in "Sampling theory and practice" (F. Marvasti, Ed.), Plenum, NY, 2001

1 Introduction

Sparse arrays are antenna arrays that originally were adequately sampled, but where several elements have been removed. This is called thinning, and it results in the array being undersampled. Such undersampling, in traditional sampling theory, creates aliasing. In the context of spatial sampling, and if the aliasing is discrete, it is usually referred to as grating lobes. In any case this is unwanted energy in the sidelobe region.

Why would one want to use sparse arrays rather than full arrays? The main reason is economy. Each of the elements needs to be connected to a transmitter and a preamplifier for reception, in addition to the receive and transmit beamformers. Medical ultrasound imaging, the field where most of the work to be presented here was done, illustrates this: conventional 2-D scanning is done with 1-D arrays with between 32 and 192 elements. 3-D ultrasound imaging is now in development, and this requires 2-D arrays in order to perform a volumetric scan without mechanical movement. Such arrays require thousands of elements in order to cover the desired aperture.

The purpose of the work presented here is to give a coherent presentation of sparse array properties and sparse array design. Both topics have been active areas of research for at least the last thirty years, as documented in for instance the books [1] and [2]. We have chosen to let the terminology of this chapter be consistent with the latter reference. The main contribution of this work is the application of optimization methods such as genetic programming and simulated annealing to the problem of element placement in 1-D and 2-D arrays. These methods enable one to find solutions that are believed to be near the optimal limits in terms of sidelobe performance. They also make it possible to estimate the lower limits for the peak sidelobe level of layout-optimized arrays. The estimate for these limits is proportional to 1/K, where K is the number of remaining elements in the array.

This chapter starts with an introduction to array processing based on the analogy to sampling in the time domain. Topics that do not have their parallels in time-domain sampling, such as the effect of the element response, steering, grating lobes, and the coarray, are covered. The important distinction between one-way and two-way responses is described and later used to give more degrees of freedom in the optimization. Theory for random arrays and a subclass of random arrays called binned arrays is then covered.

We then move on to optimization of either the element weights, the element layout, or both. The layout problem is shown to be a combinatorial problem of such a large magnitude that an exhaustive search will never be possible. Different criteria for optimization are then reviewed, and we show through an example that criteria in the coarray domain are nearly equivalent to minimizing the maximum sidelobe level. Examples of weight and layout optimization for relatively small 1-D arrays are then given. Some new results with a lower sidelobe level than previously reported for the problem of finding the best 25 elements in an aperture of 101 elements are then given. Large 2-D array problems are then considered, and it is shown that the optimization region in the angular domain has to include some invisible regions in order for the array to be steerable. Some results obtained from simulated annealing and genetic optimization are then presented. Finally, we give some results where the two-way beampattern is optimized, allowing one to use different sampling patterns for the transmitter and receiver.


Figure 1: Uniform linear array with element distance d, element length l, and a wave arriving from direction φ.

In the appendix the three optimization methods used here, i.e., linear programming, simulated annealing, and the genetic algorithm, are briefly described.

2 Theory

2.1 Introduction to array processing

2.1.1 The array pattern as a spatial frequency response

In time-frequency signal processing, a filter is characterized by the values of its impulse response, h_m, spaced regularly with a time T between samples. A linear shift-invariant system is also characterized by the frequency response

H(e^{j\omega T}) = \sum_{m=0}^{M-1} h_m e^{-jm\omega T}    (1)

which is given in (1) for a finite-length impulse response with M samples. The relationship between the sampling interval, T, and the angular frequency, ω, in order to avoid ambiguities, is that the argument in the exponent satisfies ωT ≤ π. This is a statement of the sampling theorem.

In array signal processing, the aperture smoothing function plays the same role in characterizing an array's performance. Assume that the M elements are regularly spaced with a distance d and are located at x_m = m·d for m = 0, ..., M−1, as in Fig. 1. This is a one-dimensional linear array or a uniform linear array. Its aperture smoothing function, when each element is weighted by the scalar w_m, is


W(u) = \sum_{m=0}^{M-1} w_m e^{-jm 2\pi (u/\lambda) d}    (2)

The variable u is defined by u = sin φ, where φ is the angle between the broadside of the array and the direction of the incoming wave (usually called the azimuth angle), λ is the wavelength, and the weights w_m form a standard window function [3]. With reference to Fig. 1, (2) can be found from geometry. For a wave coming from an infinite distance, the difference in travel distance between two neighboring elements is d sin φ. When this is converted to a phase angle, where one wavelength of travel distance corresponds to 2π, one gets the expression in the exponent of (2).

The aperture smoothing function is, therefore, the output after weighting and summing all elements in the array for a wave from an infinite distance hitting the array at an angle of incidence φ. The aperture smoothing function determines how the wavefield Fourier transform is smoothed by observation through a finite aperture [2], just like the frequency response determines how the received signal spectrum is smoothed by the filtering operation. The condition for avoiding aliasing is that the argument in the exponent satisfies

\frac{2\pi|u|}{\lambda} d = |k_x| \cdot d \le \pi    (3)

where k_x = 2\pi u/\lambda is the x-component of the wavenumber. The relationship between the array pattern for a regular 1-D array and a filter frequency response is now

\omega \leftrightarrow k_x = 2\pi u/\lambda, \qquad T \leftrightarrow d, \qquad h_m \leftrightarrow w_m

By using these parallels, the time-frequency sampling theorem T \le \pi/\omega_{\max} translates into the spatial sampling theorem d \le \lambda_{\min}/2.

2.1.2 Array pattern for arbitrary geometry

The spatial frequency k_x can be generalized for an array with elements located anywhere in space and with arbitrary irregular geometry. Let the wavenumber vector be \vec{k} \in R^3 with norm |\vec{k}| = 2\pi/\lambda, and let it be directed from the source towards the array as in Fig. 2. This figure also defines a unit direction vector \vec{s}_{\phi,\theta} = (\sin\phi\cos\theta, \sin\phi\sin\theta, \cos\phi) = (u, v, \cos\phi) in rectangular coordinates. These angles are usually called the azimuth angle for φ and the elevation angle for θ. These terms come from side-looking radar, but are used in other applications also.

The wavenumber vector is now \vec{k} = -2\pi\vec{s}_{\phi,\theta}/\lambda and the array pattern can be generalized to

W(\vec{k}) = \sum_{m=0}^{M-1} w_m e^{j\vec{k}\cdot\vec{x}_m} = \sum_{m=0}^{M-1} w_m e^{-j(2\pi/\lambda)(u x_m + v y_m + \cos\phi\, z_m)}    (4)

where the array element locations are \vec{x}_m = (x_m, y_m, z_m) \in R^3 with the corresponding weights w_m \in R.


Figure 2: A 2-D planar array with coordinate system (φ: azimuth, θ: elevation).

Figure 3: Array pattern with rectangular weights for a 128-element array with λ/2 spacing; maximum sidelobe level −13.3 dB, beamwidth (−6 dB) 1.09°.


The weighting function is often called windowing, shading, tapering, or apodization. The relationship between the general array pattern and that for the linear one-dimensional array (2) can be found by setting the element positions to be on the x-axis only: \vec{x}_m = (m \cdot d, 0, 0).

In the following, the notation W(\vec{k}) will be used for the array pattern for a general geometry, while W(u) will be used for a one-dimensional geometry, with u = sin φ. When a 2-D planar array is considered, one usually uses W(u, v), where (u, v) = (sin φ cos θ, sin φ sin θ).

An example of the array pattern of a 1-D array with uniform weighting is shown in Fig. 3. The array pattern is characterized by the properties of the mainlobe and the sidelobes. The mainlobe width is usually measured either at the −3 dB point or the −6 dB point. In this chapter we will use the latter, which for a 1-D array is given by W(u_{-6 dB}/2) = W(\sin\phi_{-6 dB}/2) = 0.5. For a full array with uniform weights, the beamwidths are given by φ_{-3 dB} ≈ 0.89λ/D and φ_{-6 dB} ≈ 1.22λ/D, where D is the extent of the aperture. The sidelobe region is characterized by, e.g., the peak value.
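A quick numerical check of the −6 dB beamwidth approximation for a uniformly weighted full array (a sketch with an arbitrary array size, not a result from the chapter):

```python
import numpy as np

M, d, lam = 128, 0.5, 1.0                  # elements, spacing, wavelength
D = M * d                                  # aperture extent
u = np.linspace(0, 0.05, 20001)            # fine grid of sin(phi) near broadside
W = np.abs(np.exp(-1j * 2 * np.pi * np.outer(np.arange(M), u) * d / lam).sum(axis=0))
W /= W[0]                                  # normalize so that W(0) = 1

u_6dB = u[np.argmax(W < 0.5)]              # first point where |W| drops below 0.5
bw_6dB = 2 * np.degrees(np.arcsin(u_6dB))  # full -6 dB beamwidth in degrees
print(f"measured {bw_6dB:.2f} deg vs. 1.22*lambda/D = {np.degrees(1.22 * lam / D):.2f} deg")
```

Both numbers come out close to the 1.09° quoted in the caption of Fig. 3.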

2.1.3 Periodic arrays and grating lobes

For the important class of arrays that have their elements on an underlying regular grid, aliasing occurs just as in time-frequency signal processing. Equation (2) gives the array pattern. Like all regularly sampled systems, the array pattern is periodic, and the periodicity is given by the argument in the exponent repeating itself by 2π. This is equivalent to

u_n = u_0 + n\frac{\lambda}{d} \qquad \text{for } n = \ldots, -1, 0, 1, \ldots    (5)

where u in (2) is now called u_n due to the possible repetition in the array response. The distance between the elements relative to the wavelength is what matters. Recall now the spatial sampling theorem, d = λ/2. In this case u_n = u_0 + 2n. Since u_0 is the sine of an angle, and in order for there to be aliasing u_n must also be a valid sine of an angle, the sampling theorem implies that only n = 0 is possible and there is no ambiguity in the array pattern.

This is changed if the system is undersampled. Let for instance d = λ. Now u_n = u_0 + n. A system with a response at u_0 = 0 will repeat the response at u_{-1} = -1 and u_1 = 1, as shown in Fig. 4. This example is actually a thinned array made from that in Fig. 3 by removing every second element. The two extra responses are called grating lobes due to the parallel with a similar phenomenon in optical diffraction gratings.

2.1.4 Element response

Consider the situation in time-domain sampling where an analog signal is sampled by a non-ideal sample-and-hold circuit. Instead of impulse sampling, the sampler will average over a small time window, resulting in a low-pass filtering of the sampled data. The low-pass response will be multiplied with the spectrum of the data in order to get the final spectrum.

This has a parallel in array processing in the element response. So far it has been assumed that there are M point elements, each of them being omnidirectional.


Figure 4: Array pattern with grating lobes due to every other element missing (d = λ); beamwidth (−6 dB) 1.09°.

However, each element may, due to its size, have its own directivity. This is described by the element response

W_e(\vec{k}) = \int_{-\infty}^{\infty} w(\vec{x})\, e^{j\vec{k}\cdot\vec{x}}\, d\vec{x}    (6)

The extent of the aperture is determined by the support of the aperture weighting function, w(\vec{x}). For a regular, linear array with element distance d and non-overlapping elements, the element may be slightly smaller than the element distance, i.e., it is defined on the interval ⟨−l/2, l/2⟩ where l ≤ d (see Fig. 1).

As in time-domain sampling, the total response for the array system is the combined effect of the element response and the array pattern. In the case that the elements are equal and one operates in the far-field of the array, it is the product of the two:

W_{total}(u, v) = W_e(u, v) \cdot W(u, v)    (7)

These conditions are only satisfied for a uniform linear or planar array. Arrays that are curved are examples of systems where (7) does not hold.

An example of an array response with grating lobes and element response is shown in Fig. 5. This example is based on the array of Fig. 4 and the element response of a uniformly weighted element with size l = λ/2. The element response for such an element is

W_e(u) = \frac{\sin(\pi(l/\lambda)u)}{\pi(l/\lambda)u} = \mathrm{sinc}\!\left(\frac{lu}{\lambda}\right)    (8)

Note the similarity between Fig. 5 and Fig. 3; however, this similarity vanishes when the array is steered, as will be seen in the next section.
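A sketch of the far-field product model (7) with the sinc element response (8); the parameter values mirror the d = λ, l = λ/2 example of Fig. 5, but the code itself is only illustrative:

```python
import numpy as np

lam, d, l = 1.0, 1.0, 0.5
x = np.arange(64) * d                    # 64 elements at d = lambda, as in Fig. 4
u = np.linspace(-1, 1, 4001)

W_array = np.exp(-1j * 2 * np.pi * np.outer(x, u) / lam).sum(axis=0)   # Eq. (2)
W_elem = np.sinc(l * u / lam)            # Eq. (8); np.sinc(x) = sin(pi x)/(pi x)
W_total = np.abs(W_elem * W_array)       # Eq. (7)
W_dB = 20 * np.log10(W_total / W_total.max())

# The element response suppresses the grating lobes at u = +/-1 by roughly
# 20*log10(sinc(0.5)) ~ -4 dB relative to broadside (cf. Fig. 5).
print(f"{W_dB[np.argmin(np.abs(u - 1.0))]:.1f} dB at u = 1")
```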


Figure 5: Response for an array with grating lobes and element response (d = λ, l = λ/2).

2.1.5 Beampattern

A beamformer sums the outputs from all elements of the array with appropriate delays and weights. The delays are found by considering a certain direction given by \vec{k}_0 and compensating for the difference in travel time between the elements. In a one-dimensional array this corresponds to a certain direction, φ_0. A beamformer may also be used to focus the beam on a point in the nearfield of the array, given by both the angle and the distance. This is routinely done in medical ultrasound imaging. In any case the delays are found from geometrical considerations by taking into account the velocity of propagation in the medium. In most applications, such as ultrasound, radar and sonar, the medium can be assumed to be homogeneous with a constant velocity of propagation.

If the array output is processed by a beamformer with delays set to match a certain direction and wavelength given by \vec{k}_0, the beampattern will simply be a shifted version of the array pattern,

W(\vec{k} - \vec{k}_0)    (9)

Consider a regular linear array with element spacing d. The vector product in (4) then simplifies to (\vec{k} - \vec{k}_0) \cdot \vec{x}_m = -\frac{2\pi}{\lambda} m d\, u, where

u = \sin\phi - \sin\phi_0    (10)

for φ_0 defined as the angle between the broadside direction and the steered direction. In this special case the beampattern is given by (2) with u defined by (10).

The total response for the array system is the combined effect of the element response and the beamforming. When they are separable, it is given by W_e(u, v) \cdot W(u - u_0, v - v_0). Note that only the beampattern is affected by the steering; the element response cannot be changed by beamforming. This is illustrated in Fig. 6, which shows the array of Figs. 4 and 5 with steering. Now the grating lobes reappear. If we had instead steered the full array of Fig. 3, the result would have been just a translation of the response along the axis.
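The reappearance of grating lobes under steering can be reproduced with the following sketch (illustrative parameters; the 25° steering angle matches Fig. 6):

```python
import numpy as np

lam, d, l = 1.0, 1.0, 0.5
x = np.arange(64) * d                         # thinned array of Fig. 4 (d = lambda)
phi0 = np.radians(25.0)                       # steering direction
u = np.linspace(-1, 1, 4001)

# Beampattern: the array pattern is shifted by sin(phi0), Eq. (10);
# the element response is not affected by the steering.
W_array = np.exp(-1j * 2 * np.pi * np.outer(x, u - np.sin(phi0)) / lam).sum(axis=0)
W_total = np.abs(np.sinc(l * u / lam) * W_array)
W_dB = 20 * np.log10(W_total / W_total.max())

# With steering, a grating lobe enters visible space at u = sin(phi0) - 1 (cf. Fig. 6).
u_g = np.sin(phi0) - 1.0
print(f"{W_dB[np.argmin(np.abs(u - u_g))]:.1f} dB at u = {u_g:.2f}")
```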


Figure 6: Beampattern with steering to φ = 25° with grating lobes and element response.

2.2 One-way and two-way beampatterns

The array patterns and beampatterns discussed so far have all been for the case depicted in Fig. 1, that is, a source that transmits a signal which is received by the array. This is a one-way scenario. Due to reciprocity in a linear medium, it could equally well have been the other way around, i.e., the array could have been configured as a transmitter and the receiver could have been located in the far-field of the array. In either case, the array patterns and beampatterns would have described the transfer function.

A two-way scenario is when a signal is transmitted by the array, reflected off a target, and then received by the array. In this case, the two-way beampattern is the product of the receiver's and the transmitter's beampatterns. If the same array is used, they will be equal and one gets

|W_{TR}(u, v)| = |W_e(u, v) \cdot W(u - u_0, v - v_0)|^2    (11)

where the subindex TR stands for transmit and receive. We are rarely interested in the phase of the beampatterns, and therefore only the magnitude is shown here.

The previous array patterns and beampatterns in Figs. 3-6 all show the one-way pattern. By simply squaring them, i.e., doubling the dB axis, the two-way patterns can be found.

Equation (11) is valid under the conditions that the receiver and transmitter arrays are the same, that the waveform is a continuous single-frequency wave, that the reflecting target is in the far-field of the array, and that the medium is linear. These conditions will be more or less satisfied in different kinds of imaging systems.

When optimizing the response from sparse arrays, more degrees of freedom are obtained if one lets the receiver and transmitter arrays be different. In that case the two-way beampattern will be

|W_{TR}(u, v)| = |W_{e,T}(u, v) \cdot W_T(u - u_0, v - v_0) \cdot W_{e,R}(u, v) \cdot W_R(u - u_0, v - v_0)|    (12)

where the subindices T and R stand for transmitter and receiver, respectively.
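As an example (a sketch under the idealizations listed above, with unit element responses), the two-way pattern of (12) can be evaluated for a 128-element aperture that transmits on every second element and receives on every third, the combination used later in Fig. 8:

```python
import numpy as np

lam, d, M = 1.0, 0.5, 128
x = np.arange(M) * d
tx = x[::2]                      # transmit on every second element (pitch = lambda)
rx = x[::3]                      # receive on every third element (pitch = 1.5*lambda)
u = np.linspace(-1, 1, 8001)

def pattern(positions):
    return np.exp(-1j * 2 * np.pi * np.outer(positions, u) / lam).sum(axis=0)

W_two_way = np.abs(pattern(tx) * pattern(rx))   # Eq. (12) with unit element responses
W_dB = 20 * np.log10(W_two_way / W_two_way.max())

# The transmit grating lobes (u = +/-1) and the receive grating lobes (u = +/-2/3)
# do not coincide, so the two-way pattern has no full-height grating lobe.
print(f"highest two-way sidelobe: {W_dB[np.abs(u) > 0.02].max():.1f} dB")
```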


2.2.1 The coarray and sparse arrays

As an alternative description of an array, the coarray may be used. Rather than describing the angular response, it describes the morphology of the array. It was first introduced in [4] and, for arrays with elements on a regular grid, the coarray is defined as the autocorrelation of the element weights,

c(l) = \sum_{m=0}^{M-|l|-1} w_m w_{m+|l|}    (13)

The coarray describes the weight with which the array samples the different lags of the incoming field's correlation function. For a linear array with element distance d, the coarray is related to the squared array pattern through the Fourier transform

|W(u)|^2 = \sum_{l=-(M-1)}^{M-1} c(l)\, e^{jl 2\pi (u/\lambda) d}    (14)

A full array has a smooth-looking coarray, and with unity weights it is triangular. Our interest here is to study it for sparse arrays with K remaining elements out of the original M elements in the full aperture.

In order to characterize the coarray, some definitions are required. A redundant lag, l, is one where the coarray of that lag is greater than unity, c(l) > 1. The opposite is a hole; in that case the coarray is zero at that lag, c(l) = 0. In order to have an even sampling of the incoming wave field, it seems natural to require a coarray with the same weight for all lags. A perfect array is such an array. It is defined as an array with a coarray with no holes or redundancies except for lag zero. Unfortunately, perfect arrays only exist for four or fewer elements in the array. Therefore, we study arrays that approximate perfect arrays: the Minimum Redundancy (MR) and the Minimum Hole (MH) arrays. They are defined by the number of redundancies, R, and holes, H. Minimum redundancy arrays are those element configurations that have no holes and minimize the number of redundancies. Minimum hole arrays minimize the number of holes in the coarray without any redundancies. These arrays are also known as Golomb rulers [5].
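These definitions are easy to check numerically. The sketch below (not from the chapter) computes the coarray (13) of a 0/1 thinning pattern and counts holes and redundancies, using the perfect K = 4 array of Table 1 as a test case:

```python
import numpy as np

def coarray(pattern):
    """Coarray c(l) of Eq. (13) for a 0/1 element pattern, for l = 0..M-1."""
    w = np.array([int(c) for c in pattern])
    M = len(w)
    return np.array([np.sum(w[:M - l] * w[l:]) for l in range(M)])

c = coarray("1100101")                                 # perfect K = 4 array from Table 1
holes = int(np.sum(c[1:] == 0))                        # lags sampled by no element pair
redundancies = int(np.sum(np.maximum(c[1:] - 1, 0)))   # extra pairs beyond one per lag
print(c, "holes:", holes, "redundancies:", redundancies)   # no holes, no redundancies
```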

The smallest minimum hole and minimum redundancy arrays are given in Table 1, where the results have been taken from [6] and [7]. Finding such arrays is a formidable task as the aperture grows. The largest proven minimum hole array that has been published is of size K = 19 [6] (as of this writing there are claims on the World Wide Web that the K = 20 and K = 21 minimum hole arrays also have been proven to be optimal). The largest known minimum redundancy array is of size K = 17 [7].

Examples of coarrays of minimum redundancy and minimum hole arrays are shown in Fig. 7. A generalization of the coarray that corresponds to the case when different apertures for transmit and receive are used is also possible. In this case one must build on (12), which describes the two-way beampattern. Disregarding the element patterns and the steering, one gets

|W_{TR}(u)| = |W_T(u) \cdot W_R(u)|    (15)


K   Minimum hole                          Minimum redundancy
3   1101 (perfect)                        1101 (perfect)
4   1100101 (perfect)                     1100101 (perfect)
5   110010000101                          1100100101
    100110000101                          1001000111
6   110010000010100001                    11001100000101
    110010000010000101                    11000010010101
    110000001001010001                    11100010001001
    110000001000101001                    -
7   11001000001000000010000101            110010000010100101
    11000001000100000000100101            111000100010001001
    10110000001000001000010001            111100001000010001
    10100010010000100000000011            111000001000101001
    10011000000010000010000101            110000001001010101
8   11001000010000010000001000000000101   110010000010000010100101
    -                                     111000000001000100100101

Table 1: The first sets of minimum hole and minimum redundancy arrays.

The inverse Fourier transform of this expression corresponds to the sum coarray [8], which is equivalent to the convolution of the transmit and receive apertures. In other contexts this has been called the effective aperture [9]:

c_{12}(l) = \sum_{m=0}^{M-|l|-1} w_m w_{m-l}    (16)

An example of such a coarray is shown in Fig. 8. An aperture of 128 elements is assumed for this example. Every second element is used for transmission (as in Fig. 4), and every third element for reception. Due to reciprocity, the roles of the receiver and transmitter arrays could have been exchanged without any effect on the result.
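The effective aperture of this example is simply the convolution of the two 0/1 aperture functions; a sketch (assuming unit weights and taking lag 0 at the aperture centre, as in Fig. 8):

```python
import numpy as np

M = 128
w_tx = np.zeros(M); w_tx[::2] = 1        # transmit on every second element
w_rx = np.zeros(M); w_rx[::3] = 1        # receive on every third element

c12 = np.convolve(w_tx, w_rx)            # sum coarray = convolution of the two apertures
lags = np.arange(len(c12)) - (M - 1)     # centre the lag axis
print("support:", lags[c12 > 0].min(), "to", lags[c12 > 0].max(),
      "| peak value:", int(c12.max()))   # roughly triangular shape, cf. Fig. 8
```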

Figure 7: Coarray of a minimum hole array (11001000001000000010000101) and a minimum redundancy array (110010000010100101). The arrays are the first entries for K = 7 in Table 1.


Figure 8: Coarray for a 128-element array with every other element used for transmission and every third element for reception.

2.3 Random arrays

Strictly speaking, a random array is described by a probability density function, p_{\vec{x}}(\vec{x}), which determines the random sensor positions. This differentiates it from a sparse array, which is based on a conventional array with regular spacing between elements, where a certain fraction of the elements are removed at random. The array patterns of the two have the same statistical properties, and one often uses the term random to apply to both of them [1]. This will be done here also.

Assuming that the elements are unweighted, the one-way array pattern is

W(\vec{k}) = \sum_{m=0}^{K-1} e^{j\vec{k}\cdot\vec{x}_m}    (17)

The position variables are now random variables. The randomness disappears when \vec{k} = \vec{0}, and then |W(\vec{0})|^2 = K^2. This occurs as one looks broadside to a 1-D array or a 2-D planar array, i.e., for φ = 0 in the direction vector. For other directions, one should sum K unit random vectors. In the case that they are uncorrelated, the power sum is K. This applies to the sidelobe region well away from the mainlobe. Thus the ratio of average sidelobe power to mainlobe power is [10], [1]

\text{Norm. Avg. Sidelobe} = K/K^2 = 1/K    (18)

By taking the expected value of the array pattern from (17), a more accurate statistical analysis can be performed:

E[W(\vec{k})] = K \cdot E[e^{j\vec{k}\cdot\vec{x}_m}] = K \int_{\text{aperture}} p_{\vec{x}}(\vec{x})\, e^{j\vec{k}\cdot\vec{x}}\, d\vec{x}    (19)


The average array pattern for a random array is, therefore, equal to the array pattern of a continuous aperture of the same size, with the probability density function playing the same role as the weighting function; compare to (6).

The variance is

var[W(\vec{k})] = K \cdot var[e^{j\vec{k}\cdot\vec{x}_m}] = K \cdot E[|e^{j\vec{k}\cdot\vec{x}_m}|^2] - K \cdot |E[e^{j\vec{k}\cdot\vec{x}_m}]|^2 = K\left(1 - |E[e^{j\vec{k}\cdot\vec{x}_m}]|^2\right)    (20)

In order to discuss these results, let us consider an example where the elements are uniformly distributed over a one-dimensional linear aperture of length L. In this case the average array pattern is

E[W(u)] = K \cdot \mathrm{sinc}(Lu/\lambda)    (21)

and the variance is

var_U[W(u)] = K \cdot (1 - \mathrm{sinc}^2(Lu/\lambda))    (22)

Thus for small arguments the average array pattern is K and the variance is 0. For large values of u, however, the average array pattern is about 0, and the variance is close to K. This confirms the result given previously, that the ratio of average sidelobe power to mainlobe power is 1/K. For the uniform distribution, this result is valid approximately after the first null of the average array pattern, or for |u| > λ/L. Similar results can be found for other probability density distributions.
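A quick Monte Carlo check of these statements (a sketch; K, L, and the number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, lam, trials = 64, 64.0, 1.0, 2000
u = np.linspace(-1, 1, 1001)

W = np.empty((trials, u.size), dtype=complex)
for t in range(trials):
    x = rng.uniform(-L / 2, L / 2, K)            # uniformly distributed element positions
    W[t] = np.exp(-1j * 2 * np.pi * np.outer(x, u) / lam).sum(axis=0)

far = np.abs(u) > 2 * lam / L                    # well outside the mainlobe region
avg = np.mean(np.abs(W[:, far]) ** 2) / K**2     # normalized average sidelobe power
print(f"average sidelobe: {10 * np.log10(avg):.1f} dB vs 1/K = {-10 * np.log10(K):.1f} dB")
print("E[W(0)] =", np.mean(W[:, u.size // 2]).real, "which equals K =", K)
```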

A comparison can be made between Fig. 9, with a uniform distribution of the element positions, and Fig. 10, with a triangular distribution. The latter has a wider mainlobe and lower first sidelobes. The conclusion is, therefore, that the probability distribution of the random element positions, or that of the thinning for a sparse array, determines the mainlobe shape and the first few sidelobes. Further away from the mainlobe the sidelobes can only be described in a statistical sense, and the number of elements determines the average level.

An estimate of the relative peak level of a 1-D random array, derived in [11] and [12], is \sqrt{K \ln K}. This estimate gives, in our experience, a fairly good estimate of the peak level and is, therefore, plotted, together with the estimate 1/K for the average value, on all of the beampatterns for thinned 1-D arrays.

Steinberg ([13], [1]) has developed a statistical description of the sidelobe pattern and the expected peak sidelobe level in the random array response. His theory suggests that the amplitude of the peak sidelobe is logarithmically proportional to the number of independent samples in the sidelobe region. Accordingly, the peak sidelobe amplitude is expected to be 3 dB higher in planar arrays than in linear arrays with the same number of elements and the same dimensions.

The hypothesis of our work here is that the achievable peak sidelobe level in an algorithmically optimized sparse array is proportional to the average value in the random case, or 1/K for 1-D arrays. For 2-D arrays our estimate is twice as high.


Figure 9: One-way array pattern of a sparse array thinned from M = 128 to K = 64 elements with a uniform probability density distribution. In this and the following sparse-array beampatterns, estimates of the average and peak levels are also plotted.

Figure 10: One-way array pattern of a sparse array thinned from M = 128 to K = 64 elements with a triangular probability density distribution.


2.4 The binned random array

An interesting variant of the random array is the binned random array. It is equivalent to jittered random time-domain sampling. Consider a one-dimensional aperture of length L. Divide the aperture into N equal-size, non-overlapping bins of length w = L/N. The position of each element can be found from

x_m = -L/2 + m \cdot w + y_m \qquad \text{for } m = 0, \ldots, K-1    (23)

The random variable y_m is distributed in the interval (0, w) according to some probability density function.

The average array pattern cannot in general be found, except for the important case of a uniform distribution in each bin. Statistically, this is equivalent to a uniform distribution over the full aperture, and the average array pattern is the same as for a random array, i.e., eq. (21) applies [11]. Therefore, the mainlobe and the nearest sidelobes are the same as for a random array with a uniform distribution of the element positions.
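A sketch of generating a binned random layout according to (23), with one element drawn uniformly in each bin (the aperture and element count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
L, K = 64.0, 64                          # aperture length and number of elements (one per bin)
w = L / K                                # bin width
y = rng.uniform(0.0, w, K)               # uniform jitter within each bin
x = -L / 2 + np.arange(K) * w + y        # Eq. (23)

# Elements from adjacent bins can end up close together, but no more than
# two elements can ever cluster, since each bin holds exactly one element.
spacing = np.diff(np.sort(x))
print("min spacing:", spacing.min(), " max spacing:", spacing.max())
```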

Under the same conditions, the variance can also be found:

var_B[W(u)] = \sum_{m=0}^{K-1} var[e^{j2\pi(-L/2 + m\cdot w + y_m)u}]    (24)
            = \sum_{m=0}^{K-1} |e^{j2\pi(-L/2 + m\cdot w)u}|^2 \cdot var[e^{j2\pi y_m u}]
            = K\, var[e^{j2\pi y_m u}] = K\, var[e^{j2\pi K y_m u/K}]

Because K \cdot y_m is uniformly distributed over the interval (0, L), just like the random variable in (22), the final result is that the variance is a scaled version of that for a uniformly distributed random linear array:

var_B[W(u)] = var_U[W(u/K)]    (25)

This is a remarkable result, because the variance does not reach its full maximum until the first zero of sin(π(L/λ)u/K), or for |u| > Kλ/L. This means that the binned array has a much larger region around the steered direction where the effect of the randomness is small. In Fig. 11 it means that the variance does not reach its full value until |u| = 0.5. Note also that the nature of the binning is such that no more than two elements can be clustered. This makes the binned array resemble an array with a nearest-neighbor restriction, and in fact the sidelobe depression just described was first reported for nearest-neighbor restricted arrays in [14].

3 Optimization of Sparse Arrays

Given that a random sparse array has such a large variation in peak sidelobe level, it is natural to ask whether it is possible to find arrays with good sidelobe behavior. Because we are dealing with sparse arrays, we are concerned with arrays with elements on a regular grid.


Figure 11: One-way array pattern of a binned sparse array thinned from M = 128 to K = 64 elements, bin size w = 2. The number of elements is the same as in Figs. 9 and 10.

Further, we restrict ourselves to one or two dimensions and to uniform linear arrays or uniform planar arrays. There is no principal problem in also dealing with arrays with their elements uniformly distributed along a regular curve, such as a part of a circle (curved linear arrays), or a 3-D array with elements on a spheroid. The variables to optimize can be the element weights (w_m in (4)) or the active element positions.

Let us first consider element weighting for sparse random arrays. This resembles the design of weighting functions for fully sampled arrays or for time series, as discussed for instance in the overview paper by Harris [3]. Many different criteria for optimization are to be found there, but the two most relevant ones are minimization of the maximum sidelobe and minimization of the sidelobe energy. For a full array, the first criterion leads to Dolph-Chebyshev weighting, and the latter leads to the prolate-spheroidal weighting, which can be approximated by the Kaiser-Bessel window.
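For reference, both classical full-array weightings are available in SciPy; a sketch (the sidelobe target, Kaiser parameter, and array size are arbitrary choices):

```python
import numpy as np
from scipy.signal.windows import chebwin, kaiser

M = 64
w_cheb = chebwin(M, at=60)       # Dolph-Chebyshev weights, 60 dB equal-level sidelobes
w_kais = kaiser(M, beta=6.0)     # Kaiser-Bessel approximation to the prolate-spheroidal window

def peak_sidelobe_db(weights, d=0.5, lam=1.0, n=8192):
    u = np.linspace(0, 1, n)
    phases = np.exp(-1j * 2 * np.pi * np.outer(np.arange(len(weights)), u) * d / lam)
    W = np.abs(phases.T @ weights)
    W /= W[0]                                    # normalize to broadside
    first_null = np.argmax(np.diff(W) > 0)       # end of the mainlobe
    return 20 * np.log10(W[first_null:].max())

print(f"Chebyshev: {peak_sidelobe_db(w_cheb):.1f} dB, Kaiser: {peak_sidelobe_db(w_kais):.1f} dB")
```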

In spectral analysis, the first criterion minimizes the effect of spectral leakage from discrete frequency components. The second criterion is related to the estimation of a low spectral level in a background of broad-band noise at the other frequencies. This situation is not so common in spectral estimation. In imaging systems such as medical ultrasound systems, minimization of the maximum sidelobe is a criterion which is related to imaging of a strongly reflecting point target in a non-reflecting background containing other point targets. A typical scenario is imaging of point targets in water. Although this is not a clinically relevant imaging scenario, it is typical for testing of imaging systems. However, in certain organs of the human body, the imaging scenario may approximate this situation. This applies for instance to imaging of valve leaflets inside the fluid-filled cardiac ventricles. The alternative criterion of minimization of the integrated sidelobe energy is directly related to image contrast when imaging a non-reflecting area, like a cyst or a ventricle, in a background of reflecting tissue. This is found much more often in the human body than the previous scenario.


The minimum sidelobe energy criterion must be combined with a restriction on the peak sidelobe for it to be tractable; see [15]. Some results on weight optimization for 1-D arrays using this criterion and quadratic optimization have been reported in [16].

Here we will first find the properties of arrays based on minimization of the peak sidelobe, because this has been the most common criterion so far, and it is straightforward to formulate optimization algorithms for it. In [17] we showed that it is possible to find apodization functions or element weights for a given thinning pattern that give the beampattern optimal properties. An important result is that these functions have little or no resemblance to the corresponding full array's apodization function. A limitation of this work was that it was not possible to optimize the full angular extent of the sidelobe region for a sparse array. This was due to the algorithm used (the Remez exchange algorithm). In [18] and [19] this approach was extended from 1-D to 2-D arrays, and improved results were reported. By using the linear programming algorithm for optimization, it was possible to optimize the whole sidelobe region. In this way it was possible to find properties of the beampattern of such arrays. Of special interest is to determine the minimum peak sidelobe level and compare it with the predictions from random array theory.

It is possible to search either for real weights or for complex, unit-norm weights. The latter is an optimization of phase and has been done for full arrays in [20]. The disadvantage is that it is essentially a single-frequency optimization. The phases will be different for different frequencies, while real weights are valid for broadband signals. Therefore, real weight optimization will be the approach used here.

We will also consider optimization of the element positions of a sparse array. The array will then no longer be random, and it is more relevant to call it algorithmically optimized. This problem is considerably more difficult than weight optimization. Joint optimization of positions and weights is also possible, usually by iterating over a sequence of position optimization followed by weight optimization [21], [19]. The reason why element position optimization is so difficult can be seen by considering the number of combinations to search. For an array with M elements, the number of combinations when a subset of K elements is to be picked is

\binom{M}{K} = \frac{M!}{(M-K)!\,K!}    (26)

An array with 50 x 50 elements is typical of the requirement for a 2-D array for medical ultrasound imaging. If between 10% and 50% of the elements are to be kept, such an array gives between 10^{350} and 10^{750} combinations. Considering that the estimated number of electrons in the universe is about 10^{80}, it is easy to understand why an exhaustive search is out of the question.
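The sizes quoted here follow directly from (26); a quick check with exact integer arithmetic:

```python
from math import comb, log10

M = 50 * 50                       # 50 x 50 element aperture
for frac in (0.10, 0.50):
    K = int(frac * M)
    print(f"K = {K}: about 10^{log10(comb(M, K)):.0f} combinations")
```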

There are several ways that this number can be reduced. First, due to the property that real functions have the same Fourier transform magnitude when they are mirrored, we can reduce the number of combinations by half. However, this reduction does not really contribute much to making the combinatorial problem more tractable. The second is to require symmetry in the array; this will result in the array pattern becoming a real function. In fact, this is required for all optimization using linear programming. This will reduce the number of elements to search over to 50% (M and K will both be reduced to 50%).


For the previous example the result is that between about 10^{175} and 10^{375} combinations will have to be searched.

Another way, which especially applies to 1-D arrays, is to require the end elements always to be active. This is a way to ensure that the aperture of the thinned array is maintained and that the algorithm does not just degenerate to finding an array with all the elements clustered at the center of the aperture of the original array. Such an array would have excellent sidelobe properties, but the width of the mainlobe would be inferior. The search space in this case does not diminish significantly, since all it means is that M and K in (26) are both decreased by 2. For a 2-D array, it is hard to think of a similar way to fix the ends of the aperture. In any case, this is not a significant source of reduction of the size of the combinatorial problem.

A final way would be to require that the array be a binned array. In this case M has to be divisible by K, and the number of combinations to search is reduced to K independent problems, each of the size of a bin, M/K. The number of combinations is

\binom{M/K}{1}^K = \left(\frac{M}{K}\right)^K    (27)

If 10% or 50% of M = 2500 elements are to be kept, this gives 10^{250} and 10^{376} combinations, which is a considerable reduction over the full problem.
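And the corresponding binned-array counts from (27):

```python
from math import log10

M = 2500
for K in (250, 1250):
    print(f"K = {K}: about 10^{K * log10(M // K):.0f} binned layouts")
```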

Other related work on joint optimization of the thinning pattern and weights has been reported in the context of sonar arrays in [22] and [21]. Like all of the previously cited papers, our approach is based on allowing elements only on a fixed underlying grid of positions, as opposed to what was done in [23]. The approach taken there is to leave out the weights and search for the element positions that give minimum peak sidelobe levels. However, due to limitations in the fabrication process, such arrays are very difficult to manufacture in many applications, for instance as a transducer for ultrasound imaging. That is why we stick to a fixed underlying grid here.

The optimization criteria are the same for the layout problem as for the weight problem, i.e.

• Minimize the maximum sidelobe in the beampattern with a condition on the maximum mainlobe width

• Minimize the integrated sidelobe energy in the beampattern with a condition on the peak sidelobe and/or on the maximum mainlobe width

In addition, there are some criteria that relate to the coarray. They are

• Minimize number of holes in the coarray

• Minimize the number of redundancies in the coarray

The four criteria given here are in two different domains, and very few investigators have compared them. In [24] we did that through an exhaustive search of small arrays of aperture M = 18 with K = 7 active elements and of aperture M = 26 with K = 7 active elements. There exist five different minimum redundancy arrays for the first case (Table 1).


Figure 12: Peak sidelobe level vs. beamwidth for all arrays with K = 7 and M = 18.

A full search of all possible arrays (4368 different ones when the end elements are fixed, eq. (26)) results in a plot of peak sidelobe versus −6 dB beamwidth as shown in Fig. 12. The interesting cases are those that have the lowest sidelobe and the smallest beamwidth, i.e., those that lie on the lower and left boundary of this figure. In Fig. 13 a line has been drawn through this optimal boundary, and the positions of the five minimum redundancy arrays are shown. Only three of the five are on the boundary, and it turns out that these are the ones that have a redundancy of not more than 2. For larger arrays, the distribution of the redundancies over the lag domain also plays a role in determining whether a minimum redundancy array will have performance on the optimal boundary. It is in particular important to avoid periodicities in the redundancies. In Fig. 14 a similar search has been done for the five different minimum hole arrays that exist for the second case with M = 26 and K = 7 (Table 1). In this case, there were 42504 different possible thinning patterns to search. Now one can see that all five minimum hole arrays are on the optimal boundary. We have concluded from this empirical study that the minimum peak sidelobe criterion seems to be equivalent to the minimum hole criterion and also that it is close to the minimum redundancy criterion. Whether this can be proved mathematically or not is not known.
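The first of these exhaustive searches is small enough to reproduce directly. The sketch below is only an illustration of the procedure (assumptions: end elements fixed, unit weights, and the peak sidelobe measured beyond the first null of the one-way pattern):

```python
import numpy as np
from itertools import combinations

M, K, lam, d = 18, 7, 1.0, 0.5
u = np.linspace(0, 1, 2001)                 # |W(-u)| = |W(u)| for real weights
steer = np.exp(-1j * 2 * np.pi * np.outer(np.arange(M), u) * d / lam)

results = []
for interior in combinations(range(1, M - 1), K - 2):     # end elements always active
    el = np.array((0,) + interior + (M - 1,))
    W = np.abs(steer[el].sum(axis=0)) / K                  # normalized, W(0) = 1
    bw = 2 * np.degrees(np.arcsin(u[np.argmax(W < 0.5)]))  # -6 dB beamwidth [deg]
    first_null = np.argmax(np.diff(W) > 0)                 # first local minimum
    peak_sl = 20 * np.log10(W[first_null:].max())          # peak sidelobe [dB]
    results.append((peak_sl, bw, el))

best = min(results, key=lambda r: r[0])
print(f"{len(results)} layouts searched; best peak sidelobe {best[0]:.2f} dB at BW {best[1]:.2f} deg")
```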

3.1 1-D arrays

In [19] and [25] a method for optimizing the weights and/or the layout of a sparse array is described. It uses linear programming and is based on the array being symmetric. Some of the 1-D examples from that paper will be described here.

An array with half-wavelength spacing, 64 elements, and Gaussian random thinning to 48 elements was optimized.


Figure 13: Optimal peak sidelobe level vs. beamwidth relative to the minimum redundancy arrays with K = 7 and M = 18.

Figure 14: Optimal peak sidelobe level vs. beamwidth relative to the minimum hole arrays for K = 7 and M = 26.


Figure 15: One-way array pattern before and after optimization for a 64-element array randomly thinned to 48 elements (optimization from 2.00°, peak −17.9 dB, −6 dB beamwidth 2.59°). Thinning and weights are shown in the center panel of Fig. 17 (from [19], ©1997 IEEE).

An example of the beam patterns before and after optimization is shown in Fig. 15. The optimization of the element weights was done by minimizing the peak sidelobe in a region extending from a start angle, φ_1, to 90 degrees. Due to the symmetry of the beam pattern, only positive angles were required. In this way, the mainlobe is not affected by the optimization. However, the peak sidelobe level is very sensitive to the start angle. This parameter influences the trade-off between the beamwidth and the peak sidelobe level after optimization. The details of the optimization algorithm are given in Appendix A. Several optimizations were performed for various beamwidths and thinning patterns. For each thinning pattern, the start angle φ_1 was varied and an optimization was performed. The resulting peak sidelobe and −6 dB beamwidth are plotted in Fig. 16. Each curve is the result of between 5 and 18 such optimizations. Fig. 16 shows two dash-dot lines, which are the results of optimizing the weights to give uniform sidelobe levels for the full arrays. The left-hand one (smallest beamwidth) is the performance for a full 64-element array, and the right-hand one (largest beamwidth) for a full 48-element array. Only thinned arrays with performance better than the 48-element curve are of interest. All the remaining curves are for a 64-element array thinned to 48 elements. The upper solid line shows the performance for the worst symmetric thinning that could be found, giving a minimum sidelobe level of about −13 dB. This array has an almost periodic thinning. If it had not been for the requirement of symmetry, this would have been a periodically thinned array with fully developed grating lobes.

The two dashed lines are two realizations of random Gaussian thinning. Both of them start leveling off at a sidelobe level of −17 to −18 dB.


Figure 16: Result of optimizing weights, given as sidelobe level as a function of beamwidth. Shown are uniform-sidelobe-level 64-element and 48-element full arrays (dash-dot lines), two realizations of random 25% thinning of the 64-element array (dashed lines), and worst-case and optimally 25% thinned arrays (solid lines) (from [19], ©1997 IEEE).

This is in the vicinity of the mean sidelobe level predicted for a random array (18), given as the inverse of the number of remaining elements, which is −10 log 48 = −16.8 dB. However, with the optimization used here this value is achieved as a peak value instead.

Finally, the two lower solid curves are the results of optimizing the weights for two near-optimal thinning patterns. They were obtained with a combined weight and layout optimization algorithm with sidelobe targets of −18 and −19.5 dB. The other values on their curves were obtained by keeping the layout and then optimizing the weights only, for different values of the start angle in the optimization. With such thinnings the peak sidelobe level can be improved down to the range −17 to −20 dB.

All the thinning patterns are shown in Table 2. Examples of the weights required are shown in Fig. 17. They are quite different from the much smoother weight functions that are obtained for full arrays (see the Dolph-Chebyshev weights of Figs. 46-49 of [3]). The restriction to a symmetric array that the linear programming algorithm imposes is really unnecessary. If a heuristic method like the genetic search algorithm or simulated annealing is used instead, any array can be optimized for weights and/or layout. However, with these algorithms one does not have any guarantee that a global minimum is reached. On the other hand, the linear programming method is limited in that it can only solve small problems. The rest of the results here will, therefore, be found with heuristic methods.


Elements enabled                   Comment
11011101110111011101110111011101   Worst-case symmetric array
11010110110110101111101111111011   Random 1 (upper dashed curve)
11011011011111111001010111110111   Random 2 (lower dashed curve)
10111100011001111101101111111111   Optimized 1 (−18 dB) (upper solid curve)
00101001111101111011101111111111   Optimized 2 (−19.5 dB) (lower solid curve)

Table 2: Left-hand part (32 elements) of the symmetric 64-element arrays. All references to relative position refer to the right-hand part of the curves in Fig. 16 (from [19], ©1997 IEEE).

3.2 1-D Layout optimization

In [21], simulated annealing was used to minimize the maximum sidelobe level for an array with aperture L = 50λ (M = 101) with N = 25 active elements lying on a grid with λ/2 element distance. This problem has 1.9146·10^{22} solutions (M = 99 and K = 23 inserted in (26), since the end elements are fixed). This is a problem that has been optimized since the sixties, and a table of the solutions obtained is given in Table 3, based on [21]. For a description of the simulated annealing algorithm, refer to Appendix B. In [29] this problem was solved with a simulated annealing procedure which is an improvement over that of [21], [30], [31] in several respects. First of all, it is faster, since an incremental procedure is used for finding the array pattern. The evaluation of (17) requires a discrete Fourier transform that can be implemented by a Fast Fourier Transform (FFT) algorithm in order to speed it up. A further speed increase can be obtained from the observation that the simulated annealing algorithm consists in perturbing just a single element at a time. Therefore, the array pattern of the perturbed array can be found by subtracting the contribution of the element that was moved and adding the contribution of that which was added. When all contributions from all the elements at all the angles are precomputed and stored in memory, this results in a speed increase. In [29] it is shown that for an N = 256 point evaluation, this results in 6.7 times faster execution than when the FFT algorithm is used.
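The incremental update at the heart of this speed-up can be sketched as follows (an illustration of the idea rather than the code of [29]; the array pattern is kept as a complex vector over the evaluation angles, and one move touches only two rows of a precomputed steering table):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, lam, d = 101, 25, 1.0, 0.5
u = np.linspace(-1, 1, 1024)
steer = np.exp(-1j * 2 * np.pi * np.outer(np.arange(M), u) * d / lam)   # precomputed once

active = np.zeros(M, dtype=bool)
active[[0, M - 1]] = True                                  # keep the end elements
active[rng.choice(np.arange(1, M - 1), K - 2, replace=False)] = True
W = steer[active].sum(axis=0)                              # full evaluation only once

def move_element(W, active, src, dst):
    """Perturb one element: an O(len(u)) update instead of a full O(K*len(u)) sum."""
    W = W - steer[src] + steer[dst]
    active[src], active[dst] = False, True
    return W

src = rng.choice(np.flatnonzero(active[1:-1])) + 1         # an interior active element
dst = rng.choice(np.flatnonzero(~active))                  # an empty grid position
W = move_element(W, active, src, dst)
assert np.allclose(W, steer[active].sum(axis=0))           # matches a full recomputation
```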

Year Min sidelobe Optimization Referencelevel method

1964 -8.8 dB Dynamic programming [26]1966 -8.9 dB Space-tapering [27]1968 -10.14 dB Dynamic programming [28]1996 -12.07 dB Simulated annealing [21]1998 -12.36 dB Simulated annealing [29]

Table 3: Table of solutions to the problem of finding the best 25 unit weight elementpositions in an aperture of L = 50λ.


Figure 17: Weights found after optimization from 2 degrees for three different element layouts (worst-case, dynamic range 27.2 dB; random, 14.7 dB; optimized, 8.8 dB). The beampattern of the random layout is shown in the lower panel of Fig. 15 (from [19], ©1997 IEEE).

The faster evaluation of the array pattern means that one can evaluate more configurations, and therefore the simulated annealing algorithm of [29] allows perturbation at an arbitrary location in the array, rather than within the interval given by the neighbors on the right-hand and left-hand sides of the element to be perturbed, as in [21]. The resulting sidelobe level depends on the sampling of the array pattern. When N = 4096 points are used for evaluating it, the optimal solution of [21] has a sidelobe level of −12.03 dB and a −6 dB beamwidth of 2.10° (see Fig. 18). Ten solutions that are slightly better are given in [29]. Two of the better ones are shown in Figs. 19 and 20. The first represents a search over 2.0·10^5 configurations and has a sidelobe level of −12.06 dB and a −6 dB beamwidth of 1.71°. The second solution is the result of a search over 2.9·10^7 configurations, and it has a sidelobe level of −12.36 dB and a −6 dB beamwidth of 2.10°. Here the sidelobe level is improved by 0.32 dB over the reference. The other eight solutions are between the two given here, i.e., with some sidelobe level improvement and some beamwidth improvement over the reference.

Other solutions using the simulated annealing algorithm are shown in Figs. 21 and 22. If one accepts a widening of the mainlobe, the last figure shows that it is actually possible to find a sparse array that has a peak level equivalent to the mean value for a random sparse array (1/K = −13.97 dB). However, in this case, most of the elements are clustered near the center of the array and the mainlobe suffers. The results of Figs. 20 and 21 are probably the best that can be obtained in terms of sidelobe level. The beamwidths (−6 dB) are 2.10° and 2.77°. This is 51% and 99% over the beamwidth of the full array (1.21λ/D = 1.39°). The sidelobe levels are −12.36 dB and −13.2 dB, which is 0.8-1.6 dB above the average level of a random array and corresponds to a level of 1.2/K to 1.5/K. This is our best estimate of the peak sidelobe level for algorithmically optimized 1-D arrays.


Figure 18: Array pattern for the optimized array with 25 elements out of 101, based on [21] (peak −12.0 dB, −6 dB beamwidth 2.10°).


4 2-D Array Optimization

A 2-D array is considerably harder to optimize than a 1-D array, due simply to its size and the vast increase in the number of combinations. In [19] we were successful in using linear programming to find weights for a thinned 2-D array. However, the much harder problem of finding layouts for arrays with thousands of elements is at present simply too hard for linear programming. It is, however, a much more important problem than weight optimization. This can be illustrated by ultrasound imaging: two-dimensional ultrasound arrays at present have a sensitivity problem that makes it unattractive to weight the individual elements, and the realization of thousands of accurate weights in a hardware implementation is also very undesirable. The layout optimization problem for 2-D arrays is therefore really the problem one would like to solve. The methods that can be used are the genetic algorithm and the simulated annealing algorithm. Before showing results, it is necessary to discuss the implications for optimization when the 2-D arrays not only look broadside, but are also required to be steered. This means that the beampattern of Sec. 2.1.5 must be optimized, not just the array pattern of Sec. 2.1.2.
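A rough count of the number of possible layouts (assuming all 2500 grid positions are free candidates, which may differ slightly from the constraints actually imposed) illustrates the size of the search space compared with the 1-D example above:

    from math import comb, log10

    n_2d = comb(50 * 50, 500)   # choose 500 of 2500 candidate positions
    n_1d = comb(99, 23)         # the 1-D example above (end elements fixed)
    print(f"2-D: about 10^{log10(n_2d):.0f} layouts, 1-D: about 10^{log10(n_1d):.0f} layouts")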

4.1 Optimization of Steered Arrays

For optimization of the unsteered beampattern, the sidelobe level should be minimized over all visible angles, except those where the mainlobe is located. This is the region defined by all elevation angles and with the azimuth angle in the range φ ∈ [φ1, π/2], where φ1 is the boundary between the mainlobe and sidelobe regions.


[Figure: one-way array pattern vs. sin(φ), 0 to −30 dB scale; peak sidelobe −12.1 dB, −6 dB beamwidth 1.71°, ISLR 3.85, average level −15.70 dB. Thinning pattern (75.2% thinned): 10000100000000000000000100001000000000001010000001 1 00010010010101111100100010000010010010000000110001]

Figure 19: One-way array pattern for optimized array with 25 elements out of 101, optimized for minimum beamwidth.

[Figure: one-way array pattern vs. sin(φ), 0 to −30 dB scale; peak sidelobe −12.4 dB, −6 dB beamwidth 2.10°, ISLR 2.79, average level −16.16 dB. Thinning pattern (75.2% thinned): 10000001000000000000011110101000101101000010110011 0 00110010001100000100100000000000000000000000000001]

Figure 20: One-way array pattern for optimized array with 25 elements out of 101, optimized for minimum sidelobe level.


[Figure: one-way array pattern vs. sin(φ), 0 to −30 dB scale; peak sidelobe −13.2 dB, −6 dB beamwidth 2.77°, ISLR −1.37, average level −18.16 dB. Thinning pattern (75.2% thinned): 10001010110011100111110111001011111101000000000000 0 00000000000000000000000000000000000000000000000001]

Figure 21: Result of optimization. One-way array pattern for 25 elements out of 101, peak sidelobe level −13.2 dB.

[Figure: one-way array pattern vs. sin(φ), 0 to −30 dB scale; peak sidelobe −14.0 dB, −6 dB beamwidth 2.77°, ISLR −2.55, average level −18.86 dB. Thinning pattern (75.2% thinned): 10000000000000000000000000000000000010010001111111 1 11110101110111100001000000000000000000000000000001]

Figure 22: Result of optimization. One-way array pattern with 25 elements out of 101, peak sidelobe level −14.0 dB, which is similar to the average sidelobe level for a random array.


[Figure: sketch in (kx, ky)-space showing the visible-region circle of radius 2π/λ, the mainlobe circle of radius (2π/λ) sin φ1, and the square region used for optimization of steered λ/2-spaced arrays.]

Figure 23: The optimization region in k-space, containing the visible region: everything inside the radius |k| = 2π/λ except the mainlobe region, which is inside a radius of |k| = (2π/λ) sin φ1 (from [19], © 1997 IEEE).

Because of the correspondence (kx, ky) = (2π/λ)(u, v) = (2π/λ)(sin φ cos θ, sin φ sin θ), this corresponds to an annular region in k-space of radius |k| = 2π/λ centered at the origin, except for the small mainlobe region in the center, as shown in Fig. 23. The mainlobe region is defined by a circle of radius |k| = (2π/λ) sin φ1. Due to the sampling of the aperture, the beampattern is repeated for arguments kx and ky larger than 2π/λ. This means that the circles will repeat along the kx-axis and the ky-axis. When the element distance is λ/2, the circles will exactly touch along the directions of the axes. If the element distance is larger than λ/2, there is undersampling and the circles will partly overlap; grating lobes may be explained in this way. When steering is applied to the array, the beampattern is W(kx − k0x, ky − k0y) (9). The visible region will shift to have its center at the steering direction (k0x, k0y), while the optimized region of the array is still centered at the origin. There is, therefore, no longer full overlap between the optimized region and the visible region. In order to deal properly with steering, one must therefore optimize over a larger region. For an array with element distance λ/2, and for all possible steering angles, one must optimize over the area not covered by the pattern of repeating circles, i.e., the square region shown in Fig. 23.
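A sketch of how such an optimization region can be represented numerically is given below. It is an illustration only: the grid resolution and the mainlobe boundary sin φ1 are assumed values, and the steered region is taken here as one (u, v) period minus the repeating mainlobe circles.

    import numpy as np

    # One period of (u, v) space for lambda/2 element spacing.
    sin_phi1 = 0.1                           # mainlobe/sidelobe boundary (assumed)
    u = np.linspace(-1.0, 1.0, 129)
    U, V = np.meshgrid(u, u)

    # Steered design: the whole square except mainlobe circles repeating at (2m, 2n).
    steered_region = np.ones_like(U, dtype=bool)
    for mu in (-2, 0, 2):
        for mv in (-2, 0, 2):
            steered_region &= (U - mu) ** 2 + (V - mv) ** 2 > sin_phi1 ** 2

    # Unsteered design: only the visible annulus needs to be optimized.
    r2 = U ** 2 + V ** 2
    unsteered_region = (r2 <= 1.0) & (r2 > sin_phi1 ** 2)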

For a 1-D array, this is greatly simplified. The only relevant variable is kx, and when there is steering, the argument in the beampattern is

kx − k0x = (2π/λ)(sin φ − sin φ0) = (2π/λ) u.                    (28)

First, there is always symmetry with respect to u = 0. When, in addition, the element locations are all on a grid with distance λ/2, there will also be symmetry with respect to u = 1. In this case, optimization over the region u ∈ [sin φ1, 1] ensures that the array can be steered to any azimuth angle [21]. A larger element distance requires a smaller region, and a smaller element distance requires a larger region than u ∈ [sin φ1, 1].
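As a small illustration (the element positions and φ1 below are arbitrary assumptions), the steering-invariant sidelobe criterion for a λ/2-gridded 1-D array can be evaluated like this:

    import numpy as np

    positions = np.array([0, 3, 7, 12, 20, 31, 45, 50])   # grid indices, units of lambda/2
    sin_phi1 = 0.08                                        # mainlobe/sidelobe boundary (assumed)
    u = np.linspace(sin_phi1, 1.0, 2048)                   # region [sin(phi1), 1] suffices by symmetry

    W = np.exp(1j * np.pi * np.outer(positions, u)).sum(axis=0)
    peak_db = 20 * np.log10(np.abs(W).max() / len(positions))
    print(f"peak sidelobe over u in [sin(phi1), 1]: {peak_db:.1f} dB")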


[Figure: 2-D element layout; x and y axes in mm, from −6 to 6 mm.]

Figure 24: Element layout for optimized array with 500 elements out of 50 x 50, optimized for minimum beamwidth. The grid is shown for a frequency of 3.5 MHz and a velocity of sound of 1540 m/s, corresponding to a wavelength of λ = 0.44 mm.

4.2 2-D array optimized with simulated annealing

The simulated annealing algorithm of [29] has been used for a 2-D array of size 50 x 50 elements with element spacing λ/2. The algorithm with precomputed contributions from each element to the array pattern at all angles was used. For 64 points in the u and v directions, this requires several hundred Mbytes of RAM for storage. One of the best 500-element thinning patterns found is shown in Fig. 24. Finding this solution took 46 hours on a single CPU of a Silicon Graphics Power Challenge computer using a MATLAB implementation of the simulated annealing algorithm. The array pattern is shown in Fig. 25. It has a −6 dB beamwidth of 3.05° and a maximum sidelobe level of −21.5 dB. In comparison, the full array has a beamwidth of 2.81°. The algorithm searched over 500 iterations with 5000 perturbations per iteration, i.e., a total of 2.5 million configurations.
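A back-of-the-envelope check of the storage requirement (the sample count is taken from the text; complex double precision is an assumption) gives a figure of the same order of magnitude as the several hundred Mbytes quoted above:

    elements = 50 * 50                 # candidate positions
    samples = 64 * 64                  # (u, v) sample points
    bytes_per_value = 16               # one complex double-precision value
    print(f"{elements * samples * bytes_per_value / 1e6:.0f} MB")   # about 160 MB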

4.3 2-D array optimized with genetic optimization

The genetic algorithm is also well suited to searching for solutions to large 2-D array layout problems. The general principles of the genetic algorithm are given in Appendix C. Each gene is coded with ones and zeros indicating whether or not an element is included.


[Figure: one-way response in dB vs. azimuth angle φ, 0 to −50 dB scale; −6 dB beamwidth 3.05°.]

Figure 25: One-way array pattern for optimized array with 500 elements out of 50 x 50, optimized for minimum beamwidth. Peak sidelobe is −21.5 dB and the −6 dB beamwidth is 3.05°. The array is steered to θ = φ = 0 and the response is shown as a function of azimuth angle, seen from the side in 3-D space, i.e., the peak values over all elevation angles are shown.

The gene is found by scanning the 2-D array row by row. In our implementation, a parent selection strategy is used where the candidates are ranked according to their sidelobe level. Proportionate selection based on the ranking (roulette-wheel selection) is then used: a probability proportional to the ranking is assigned, so that the best individual has a high probability and the poorest one zero probability. An individual is accepted as a parent with this probability, resulting in an elitist, polygamous strategy. A second distinguishing feature of our implementation is that a parent-dominant reproduction mechanism is used. A probability, p, is assigned to the reproduction so that with this probability the dominant parent is reproduced, and with probability 1 − p the rest is contributed by the other parent. The value of p is assigned so that a large number, say on average N − 2, where N is the number of elements in the array, are taken from the dominant parent. A low value is also used for the mutation probability, usually about 1/N, so that mutation plays a minor role in the algorithm.
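The selection and reproduction operators can be sketched as follows. This is a hedged illustration: the fitness evaluation, the exact value of p, and any repair step that keeps the number of active elements fixed are not specified above and are assumptions, and the per-bit reading of the parent-dominant scheme is one possible interpretation.

    import numpy as np

    rng = np.random.default_rng(1)

    def rank_roulette(sidelobe_levels):
        """Rank-based proportionate selection: the best individual gets the
        highest probability, the poorest gets probability zero."""
        order = np.argsort(sidelobe_levels)             # ascending: order[0] is best
        ranks = np.empty(len(sidelobe_levels))
        ranks[order] = np.arange(len(sidelobe_levels) - 1, -1, -1)
        return rng.choice(len(sidelobe_levels), size=2, replace=False,
                          p=ranks / ranks.sum())

    def reproduce(dominant, other, p=0.99, p_mut=None):
        """Parent-dominant crossover: copy each bit from the dominant parent
        with probability p, otherwise from the other parent, then apply a
        sparse mutation of roughly 1/N per bit."""
        take = rng.random(dominant.size) < p
        child = np.where(take, dominant, other)
        if p_mut is None:
            p_mut = 1.0 / max(int(dominant.sum()), 1)
        flip = rng.random(child.size) < p_mut
        return np.where(flip, 1 - child, child)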

The cross-over scheme of our algorithm ensures rapid convergence, but gives an increased probability of convergence to a local minimum. This can be overcome by using an improved initialization method. Genetic algorithms are usually initialized with uniform probability distributions over the array, and according to random array theory (19), one should then expect first sidelobes in the −13 dB range. The main operation in the genetic algorithm is the cross-over. However, this operation does not significantly alter the probability distribution, so the probability density remains close to uniform. The randomness introduced by the mutation operation is often not large enough to significantly alter the probability density either. Therefore one should initialize the search with density functions that already have the desirable sidelobe properties in the Fourier domain. This improves convergence time and, more importantly, makes convergence to a good solution possible.
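One way to build such an initial population is sketched below (assumptions: a separable Dolph-Chebyshev taper stands in for the circularly symmetric Chebyshev-type design used in the text, and the 40 dB design level and population size are arbitrary):

    import numpy as np
    from scipy.signal.windows import chebwin

    rng = np.random.default_rng(2)
    M, K = 50, 500
    taper = chebwin(M, at=40)                   # Dolph-Chebyshev taper (assumed level)
    density = np.outer(taper, taper).ravel()
    density = np.maximum(density, 0)            # taper is non-negative; guard for safety
    density /= density.sum()                    # inclusion probability per grid position

    def random_layout():
        idx = rng.choice(M * M, size=K, replace=False, p=density)
        layout = np.zeros(M * M, dtype=int)
        layout[idx] = 1
        return layout.reshape(M, M)

    population = [random_layout() for _ in range(50)]   # density-tapered initial population

By random array theory, the expected sidelobes of these starting layouts already follow the taper's Fourier transform, which is the point of the initialization described above.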

The previous 2-D example is now optimized using the genetic algorithm.


[Figure: one-way response in dB vs. azimuth angle φ, 0 to −50 dB scale; −6 dB beamwidth 4°.]

Figure 26: Optimized array response for 500 elements chosen from a 50 x 50 array. Peak sidelobe level is −22.2 dB and the −6 dB beamwidth is 4.0°.

The first example shown here (Fig. 26) is initialized with a circularly symmetric probability distribution that has a Chebyshev-type Fourier transform (uniform sidelobes). It results in a sidelobe level of −22.2 dB and a beamwidth of 4.0°. A layout with a narrower mainlobe and higher sidelobes, comparable to that of Fig. 25, can also be obtained (3.1° and −21.8 dB). This shows that one has freedom to trade off beamwidth against sidelobe level. An important observation is that the sidelobe level of the first example is very close to 3 dB higher than that predicted for a 1-D array, 10 log(1.5/K) = −25.2 dB, and the beamwidth is 42% higher than the full array's beamwidth. Thus this example seems to confirm the hypothesis that a value close to 10 log(1.5/K) + 3 dB is an estimate of the achievable peak level in algorithmically optimized 2-D arrays when the beamwidth is allowed to increase by 50% over that of the full array.
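A quick numerical check of the estimate quoted above (simple arithmetic, not a result from the optimization itself):

    import math

    K = 500
    est_1d = 10 * math.log10(1.5 / K)     # about -25.2 dB
    est_2d = est_1d + 3                   # about -22.2 dB, close to the level found
    print(f"1-D estimate: {est_1d:.1f} dB, 2-D estimate: {est_2d:.1f} dB")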

The next example (Fig. 27) shows the versatility of the method in that it allows the sidelobe level to increase with angle away from the mainlobe. This could serve as a partial compensation for the element response (Eqn. 7).

5 Optimization of the Two-way Beampattern

The 1-D and 2-D optimized arrays shown so far have all been one-way responses. By simply squaring them according to (11), one can find the two-way responses when the receiver and transmitter array layouts are the same. More degrees of freedom can be obtained in the optimization if one allows the layouts to be different. We will still assume that the receiver and transmitter arrays are located in the same position, but allow for partial or no overlap at all between the selected elements.

The simplest way to utilize this freedom is to use the observation of [32], later elaborated in [9] and [33], that good two-way responses can be obtained from periodic receiver and transmitter arrays if the two periodicities are different. A simple 1-D example will illustrate the idea. Assume that the transmitter periodicity is two, as in Fig. 4, and that the receiver periodicity is three.


[Figure: one-way response in dB vs. azimuth angle φ, 0 to −50 dB scale; −6 dB beamwidth 3.15°.]

Figure 27: Result of a designed probability density function for 500 elements chosen from a 50 x 50 array. The result has a suppressed sidelobe response that depends on the distance from the mainlobe. Beamwidth is 3.15°.

[Figure: two-way array pattern in dB vs. sin φ, 0 to −100 dB scale.]

Figure 28: Two-way array pattern for a 128-element array with every other element used for transmission and every third element for reception.

Due to the periodicity, this array will have a fixed overlap between the receiver and transmitter elements. Every sixth element will be shared, i.e., a total of 128/6 ≈ 21 elements out of the 128/2 = 64 transmitter elements and 128/3 ≈ 42 receiver elements. The coarray (16), which is the convolution of the two aperture functions (the effective aperture), is shown in Fig. 8. Note that it has a triangular shape similar to the one obtained from full, unweighted apertures. In addition it has some undesirable ripples. They may be reduced by weighting of the periodic apertures [34]. The transmitter has grating lobes a distance |u| = 1 away from the mainlobe. In the receiver, the grating lobes will be located a distance |u| = 2/3 away from the mainlobe according to (5). The two-way array pattern will be as in Fig. 28. In the sidelobe region, the peak values are −32.7 dB at |sin φ| = 1 and −35.2 dB at |sin φ| = 2/3. This result should be compared to the randomly thinned one-way array patterns of Figs. 9 and 10. In these figures, the number of elements is 64 after random thinning from 128. The peak sidelobe value is −14 to −15 dB.


If the same 64 elements are used both for the receiver and the transmitter, the peak sidelobe value of the two-way array pattern will be twice as large in dB, i.e., −28 to −30 dB. However, the energy in the sidelobe region is much higher. The sparse binned array of Fig. 11 is somewhat comparable to the periodic array in that there is less energy near the mainlobe, but it has a peak value in the two-way pattern of about −24 dB. The downside of the periodic array approach is the existence of the discrete sidelobes at |sin φ| = 1 and |sin φ| = 2/3, due to the partly suppressed grating lobes. When this array is steered, the first grating lobe will move even closer to the broadside direction. For instance, if the array is steered to φ0 = 30°, the first grating lobe will move to sin φ0 − 2/3, corresponding to an angle of −9.6°.
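The periodic transmit/receive idea is easy to reproduce numerically. The sketch below (sampling density and normalization are implementation choices) forms the two one-way patterns and their product, and the peaks it finds near |u| = 1 and |u| = 2/3 come out close to the values quoted above.

    import numpy as np

    M = 128
    u = np.linspace(-1.0, 1.0, 8192)
    pos = np.arange(M)

    def one_way(indices):
        W = np.exp(1j * np.pi * np.outer(pos[indices], u)).sum(axis=0)
        return np.abs(W) / len(pos[indices])       # normalized to 1 at u = 0

    tx = one_way(np.arange(0, M, 2))               # transmit periodicity two
    rx = one_way(np.arange(0, M, 3))               # receive periodicity three
    two_way_db = 20 * np.log10(tx * rx + 1e-12)    # two-way = product of one-way patterns

    for target in (1.0, 2.0 / 3.0):
        peak = two_way_db[np.abs(np.abs(u) - target) < 0.02].max()
        print(f"peak near |u| = {target:.3f}: {peak:.1f} dB")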

This approach is simple to extend to the 2-D planar case by using the same periodicity along both axes. An example is given here, based on a square 50 x 50 array with λ/2 element spacing where the corner elements are unused to make it circular (about 50 · 50 · π/4 ≈ 1963 elements). Every second element in both directions over a reduced aperture is used for the transmitter (a total of 253 elements), and every third element in both directions is used for the receiver (a total of 241 elements). The layout of this array is shown in Fig. 29 and the two-way array pattern is shown in Fig. 30. This is a two-dimensional extension of Fig. 28. In [35] we made an attempt to get rid of the discrete sidelobes of the periodic array. This was done by combining the transmitter pattern with periodicity two with an algorithmically optimized receiver pattern using 256 receiver elements. Such an array is shown in Fig. 31. The receiver pattern is designed from a criterion of minimizing the peak sidelobes in the two-way beampattern, using the genetic algorithm. The resulting beampattern is shown in Fig. 32. Compared to Fig. 30, the new response does not have the discrete peaks along the axes at |u| = 2/3 and |v| = 2/3. This is an advantage when the array is steered, because then the whole response is shifted in the (u, v)-plane. An example of this is shown in Fig. 33, where steering to φ = 30° and θ = 30° is shown. The downside of this approach is the increased average background sidelobe level.

Another advantage over the periodic approach is the flexibility in the choice of the receiver and transmitter elements. In some applications it may be important to have separate transmitter and receiver elements due to restrictions in cabling or electronics. The algorithmic approach satisfies that requirement: the proposed array in Fig. 31 was designed using the genetic algorithm with the constraint that elements already occupied by the transmitter were not allowed in the receiver layout.

6 Conclusion

Sparse arrays have traditionally been designed with two main objectives in mind: creation of beampatterns with low mainlobe width and small sidelobes, or best possible sampling of a random field. In the latter case the correlation function of the array (coarray) should be optimized and be as uniform as possible. This case is shown here to be very close to finding a beampattern with minimal peak sidelobes.

Since the search space for layout optimization of sparse arrays is so vast, heuristic search methods such as genetic optimization and simulated annealing have been applied to this problem. Both methods need to be tuned to the problem in order to speed up convergence, and sometimes even to make convergence possible.


[Figure: 2-D layout of transmit (Tx), receive (Rx) and shared (Tx/Rx) elements; #Tx = 253, #Rx = 241, #Tx/Rx = 29.]

Figure 29: Layout of transmit and receive elements for a 2-D periodic array with every other element used for transmission and every third element for reception.

[Figure: 2-D two-way beampattern, color scale from 0 to −100 dB.]

Figure 30: Two-way beampattern for a 2-D periodic array with every other element used for transmission and every third element for reception (from [35], © 1997 IEEE).


Figure 31: Element layout with the periodic transmitter pattern designated by filled squares and the algorithmically optimized receiver pattern designated by x. There is no overlap (from [35], © 1997 IEEE).

[Figure: 2-D two-way beampattern, color scale from 0 to −100 dB.]

Figure 32: Two-way beampattern for the 2-D array shown in Fig. 31 (from [35], © 1997 IEEE).


[Figure: 2-D two-way beampattern, color scale from 0 to −100 dB.]

Figure 33: Two-way steered beampattern with φ = 30° and θ = 30°. Array as in Figs. 31 and 32 (from [35], © 1997 IEEE).

Both 1-D and 2-D examples have been shown here. We propose that the estimate of the average level in a random array, 1/K, is in fact very close to an estimate of the achievable peak level in an algorithmically optimized 1-D array. A value of about 1.5/K is our best estimate for the peak value when the beamwidth is allowed to increase by 50% over that of the full array; this is 1.8 dB over the average value of a random array. For 2-D arrays the estimate is twice as large, or about 3/K, based on peak sidelobe theory for sparse arrays and the examples given here.

When different array layouts for the transmitter and the receiver are allowed, one gets additional freedom in the design of the thinning patterns, as aliasing in one of the beampatterns can be cancelled by zeros in the other one and vice versa. This has been exploited in methods based on periodic arrays and design in the coarray (effective aperture) domain, and these methods have been compared with the single-aperture designs.

With the methods given here, one has the freedom to choose a design method for a sparse array system using either the same elements for the receiver and the transmitter, no overlap, or partial overlap as in the periodic arrays.

7 Acknowledgement

This work was partly sponsored by the ESPRIT program of the European Union under contract EP 22982, and by the Norwegian Research Council. This report was completed while SH was on a sabbatical leave from the University of Oslo at the GE Corporate Research and Development Center, Schenectady, New York.


I would like to thank Dr. K. Thomenius, manager of the ultrasound program, for making that stay possible. We would also like to thank Dr. N. Aakvaag for making his genetic optimization code available to us.


Appendix: Optimization Methods

A. Linear Programming

A linear programming (LP) problem is the minimization of a linear function subject to a set of linear inequalities and linear equations [36], [37]. In matrix form an LP problem may be written as

minimize   c^T x
subject to Ax ≤ b                                              (29)

where x is a vector of n variables, and the data are given by the m × n matrix A and the vectors c and b. The weight optimization problem can be put in this form by letting the unknown weights and the unknown minimum sidelobe level be x, and the complex exponentials of (2) or (4) be contained in c for all the angles in the sidelobe region. The normalization that ensures that W(0) = 1 is handled by A and b [19]. One limitation is that the objective function has to be real and linear in the variables to be minimized. This implies that the array pattern has to be a real function, i.e., that the array has to have symmetry.
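As an illustration, the minimax weight problem can be posed for a standard LP solver as sketched below. This is only a sketch under assumptions: the element positions and the sidelobe boundary are arbitrary, and the way the variables and constraints are arranged may differ in detail from the formulation of [19].

    import numpy as np
    from scipy.optimize import linprog

    m = np.array([1, 3, 5, 8, 12, 17, 23, 30])    # right-half positions, units of lambda/2
    u = np.linspace(0.08, 1.0, 400)               # samples of the sidelobe region
    A = 2 * np.cos(np.pi * np.outer(u, m))        # symmetric array: W(u) = sum 2*w_n*cos(pi*m_n*u)

    P = len(m)
    c = np.zeros(P + 1)
    c[-1] = 1.0                                   # minimize the sidelobe level t
    A_ub = np.vstack([np.hstack([A, -np.ones((len(u), 1))]),     #  W(u) - t <= 0
                      np.hstack([-A, -np.ones((len(u), 1))])])   # -W(u) - t <= 0
    b_ub = np.zeros(2 * len(u))
    A_eq = np.hstack([2 * np.ones((1, P)), np.zeros((1, 1))])    # W(0) = 1
    b_eq = np.array([1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    weights, level_db = res.x[:P], 20 * np.log10(res.x[-1])
    print(f"optimized sidelobe level: {level_db:.1f} dB")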

The problem of optimizing the layout of an array is a mixed integer linear programming problem, i.e., a linear programming problem where some or all variables are required to be integers. In general, mixed integer LP problems are computationally very difficult, and the layout optimization problem is hard even for moderate sizes. This is mainly due to the structure of the matrix V, none of whose elements are zero.

Small-scale mixed-integer problems may be solved by the branch and bound method. This is a general method for solving such problems where the feasible region is gradually divided into finer subregions, for each of which a linear programming problem is solved.

Linear programming has been applied to the sparse optimization problem in [22] and [19].

B. Simulated Annealing

Simulated annealing is a stochastic optimization method built on an analogy with thermodynamics, as in a metal that cools and anneals [38]. At high temperatures the molecules move freely. As the temperature falls, the molecules slow down and finally line up in a crystal, which is the state of minimum energy. In stochastic optimization this state corresponds to the optimal solution. In our problem, the energy, E, is proportional to the peak sidelobe level. A temperature function is slowly decreased for each iteration, and the solutions are randomized by changing one of the active elements at a time. The energy of the perturbed system is found and compared with the previous iteration's best solution. The new solution may be accepted even if it is inferior to the previous one, based on a probability function. This helps the algorithm avoid getting stuck in a local minimum. As the system cools off, the probability of accepting an inferior solution is reduced, and eventually the system converges to a final solution which, if the optimization parameters are chosen well, may be close to the optimal solution.


The temperature function used in our optimizations is Ti = T0/i, where i is the iteration number. For each perturbation an acceptance probability p = e^(−∆E/T) is computed. The energy difference ∆E is the change in sidelobe level due to a perturbation of the array configuration. The Metropolis algorithm used for deciding which array configuration to use next is:

• If ∆E < 0, the new configuration is better and is used as the starting point for the next perturbation.

• If ∆E > 0, the new configuration will be used with a probability p.

Simulated annealing has been applied to the optimization of sparse arrays in [21], [30], [31], and [29].
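A generic sketch of this loop is given below; energy() and perturb() stand for the peak-sidelobe evaluation and the single-element move of the array problem, and are placeholders rather than the code used in [21] or [29].

    import math
    import random

    def simulated_annealing(x0, energy, perturb, T0=1.0, iterations=1000):
        x, E = x0, energy(x0)
        best_x, best_E = x, E
        for i in range(1, iterations + 1):
            T = T0 / i                        # cooling schedule T_i = T0/i
            cand = perturb(x)
            dE = energy(cand) - E
            # Metropolis acceptance: always accept improvements, accept a
            # worse configuration with probability exp(-dE/T).
            if dE < 0 or random.random() < math.exp(-dE / T):
                x, E = cand, E + dE
                if E < best_E:
                    best_x, best_E = x, E
        return best_x, best_E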

C. Genetic Optimization

The genetic algorithm is an iterative process that operates on a set of individuals (a population) [39], [40]. Each member of the population represents a potential solution to the problem. Initially, the population is randomly generated. The individuals are evaluated by means of a fitness function, a measure of their fitness with respect to some predefined evaluation function (the environment). The presence of a sensor is indicated by a one, and its absence by a zero. The steps taken in the algorithm are

• Selection based on the fitness

• Reproduction

• Replacement

The reproduction stage usually consists of two separate operations, cross-over and mutation. In the cross-over operation two or more individuals (parents) are crossed according to some method to produce one or two new individuals (offspring). Some of the new individuals are then subject to a mutation process where one or more of the bits (genes) are flipped. The resulting offspring are then either ignored or included in the new population, depending on their fitness. The algorithm stops either after reaching a predefined number of generations, or by converging to a point of no improvement.

The core of the genetic algorithm, resulting in the production of new individuals, is the cross-over operation. Mutation operates as a secondary step where only a few elements of the array change values.

Genetic optimization has been applied to optimization of sparse arrays in [41], [42], [43], and [35].


References

[1] B. Steinberg, Principles of aperture and array system design. Wiley, New York, 1976.

[2] D. H. Johnson and D. E. Dudgeon, Array Signal Processing. Englewood Cliffs, NJ: Prentice Hall, 1993.

[3] F. J. Harris, "On the use of windows for harmonic analysis with the Discrete Fourier Transform," Proc. IEEE, vol. 66, pp. 51–83, Jan. 1978.

[4] R. A. Haubrich, "Array design," Bull. Seismological Soc. of Am., vol. 58, pp. 977–991, 1968.

[5] G. S. Bloom and S. W. Golomb, "Application of numbered undirected graphs," Proc. IEEE, vol. 65, pp. 562–570, Apr. 1977.

[6] A. Dollas, W. T. Rankin, and D. McCracken, "A new algorithm for Golomb ruler derivation and proof of the 19 mark ruler," IEEE Trans. Inf. Theory, vol. 44, no. 1, 1998.

[7] D. A. Linebarger, I. H. Sudborough, and I. G. Tollis, "Difference bases and sparse sensor arrays," IEEE Trans. Inf. Theory, vol. 39, pp. 716–721, Mar. 1993.

[8] R. T. Hoctor and S. A. Kassam, "The unifying role of the coarray in aperture synthesis for coherent and incoherent imaging," Proc. IEEE, vol. 78, pp. 735–752, Apr. 1990.

[9] G. R. Lockwood, P.-C. Li, M. O'Donnell, and F. S. Foster, "Optimizing the radiation pattern of sparse periodic linear arrays," IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 43, pp. 7–14, Jan. 1996.

[10] Y. T. Lo, "A mathematical theory of antenna arrays with randomly spaced elements," IEEE Trans. Antennas Propagat., pp. 257–268, May 1964.

[11] W. J. Hendricks, "The totally random versus the bin approach for random arrays," IEEE Trans. Antennas Propagat., vol. 39, pp. 1757–1761, Dec. 1991.

[12] G. Benke and W. J. Hendricks, "Estimates for large deviation in random trigonometric polynomials," SIAM J. Math. Anal., vol. 24, pp. 1067–1085, July 1993.

[13] B. Steinberg, "The peak sidelobe of the phased array having randomly located elements," IEEE Trans. Antennas Propagat., vol. AP-20, pp. 129–136, Mar. 1972.

[14] R. L. Fante, G. A. Robertshaw, and S. Zamoscianyk, "Observation and explanation of an unusual feature of random arrays with a nearest-neighbor constraint," IEEE Trans. Antennas Propagat., vol. 39, pp. 1047–1049, July 1991.

[15] J. W. Adams, "A new optimal window," IEEE Trans. Signal Processing, vol. 39, pp. 1753–1769, Aug. 1991.


[16] S. Holm, "Maximum sidelobe energy versus minimum peak sidelobe level for sparse array optimization," in Proc. IEEE Nordic Signal Processing Symp., (Espoo, Finland), pp. 227–230, Sept. 1996.

[17] J. O. Erstad and S. Holm, "An approach to the design of sparse array systems," in Proc. IEEE Ultrason. Symp., (Cannes, France), pp. 1507–1510, 1994.

[18] S. Holm and B. Elgetun, "Optimization of the beampattern of 2D sparse arrays by weighting," in Proc. IEEE Ultrason. Symp., (Seattle, WA), pp. 1345–1348, 1995.

[19] S. Holm, B. Elgetun, and G. Dahl, "Properties of the beampattern of weight- and layout-optimized sparse arrays," IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 44, pp. 983–991, Sept. 1997.

[20] J. F. DeFord and O. P. Gandhi, "Phase-only synthesis of minimum peak sidelobe patterns for linear and planar arrays," IEEE Trans. Antennas Propagat., vol. AP-36, pp. 191–201, Feb. 1988.

[21] V. Murino, A. Trucco, and C. S. Regazzoni, "Synthesis of unequally spaced arrays by simulated annealing," IEEE Trans. Signal Processing, vol. 44, pp. 119–123, Jan. 1996.

[22] R. M. Leahy and B. D. Jeffs, "On the design of maximally sparse beamforming arrays," IEEE Trans. Antennas Propagat., vol. AP-39, pp. 1178–1187, Aug. 1991.

[23] H. Schjær-Jacobsen and K. Madsen, "Synthesis of nonuniformly spaced arrays using a general nonlinear minimax optimization method," IEEE Trans. Antennas Propagat., vol. AP-24, pp. 501–506, July 1976.

[24] J.-F. Hopperstad and S. Holm, "The coarray of sparse arrays with minimum sidelobe level," in Proc. IEEE NORSIG-98, (Vigsø, Denmark), pp. 137–140, June 1998.

[25] S. Holm, B. Elgetun, and G. Dahl, "Weight- and layout-optimized sparse arrays," in Proc. Int. Workshop on Sampling Theory and Applications, (Aveiro, Portugal), pp. 97–102, June 1997.

[26] M. I. Skolnik, G. Nemhauser, and J. W. Sherman III, "Dynamic programming applied to unequally spaced arrays," IEEE Trans. Antennas Propagat., vol. AP-12, pp. 35–43, Jan. 1964.

[27] Y. T. Lo and S. W. Lee, "A study of space-tapered arrays," IEEE Trans. Antennas Propagat., vol. AP-14, pp. 22–30, Jan. 1966.

[28] R. K. Arora and N. C. V. Krishnamacharyulu, "Synthesis of unequally spaced arrays using dynamic programming," IEEE Trans. Antennas Propagat., pp. 593–595, July 1968.

[29] J.-F. Hopperstad, "Optimization of thinned arrays (in Norwegian)," Master's thesis, Department of Informatics, University of Oslo, May 1998.


[30] A. Trucco and F. Repetto, "A stochastic approach to optimizing the aperture and the number of elements of an aperiodic array," in Proc. OCEANS '96, vol. 3, pp. 1510–1515, Sept. 1996.

[31] A. Trucco, "Synthesis of aperiodic planar arrays by a stochastic approach," in Proc. OCEANS '97, 1997.

[32] S. Bennett, D. Peterson, D. Corl, and G. Kino, "A real-time synthetic aperture digital acoustic imaging system," Acoust. Imaging, vol. 10, pp. 669–692, 1980.

[33] G. R. Lockwood and F. S. Foster, "Optimizing the radiation pattern of sparse periodic two-dimensional arrays," IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 43, pp. 15–19, Jan. 1996.

[34] S. S. Brunke and G. R. Lockwood, "Broad-bandwidth radiation pattern of sparse two-dimensional vernier arrays," IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 44, pp. 1101–1109, Sept. 1997.

[35] A. Austeng, S. Holm, P. Weber, N. Aakvaag, and K. Iranpour, "1D and 2D algorithmically optimized sparse arrays," in Proc. IEEE Ultrason. Symp., (Toronto, Canada), pp. 1683–1686, 1997.

[36] V. Chvatal, Linear Programming. San Francisco, CA: Freeman, 1983.

[37] G. Strang, Linear Algebra and Its Applications, 2nd ed. New York: Academic, 1980.

[38] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671–680, May 1983.

[39] J. H. Holland, "Genetic algorithms," Sci. Amer., pp. 66–72, July 1992.

[40] D. E. Goldberg, Genetic algorithms. Addison-Wesley, 1989.

[41] R. L. Haupt, "Thinned arrays using genetic algorithms," IEEE Trans. Antennas Propagat., vol. 42, pp. 993–999, July 1994.

[42] P. Weber, R. Schmitt, B. D. Tylkowski, and J. Steck, "Optimization of random sparse 2-D transducer arrays for 3-D electronic beam steering and focusing," in Proc. IEEE Ultrason. Symp., vol. 3, pp. 1503–1506, 1994.

[43] D. O'Neill, "Element placement in thinned arrays using genetic algorithms," in Proc. OCEANS '94, vol. 2, pp. 301–306, Sept. 1994.


Contents

1 Introduction

2 Theory
  2.1 Introduction to array processing
    2.1.1 The array pattern as a spatial frequency response
    2.1.2 Array pattern for arbitrary geometry
    2.1.3 Periodic arrays and grating lobes
    2.1.4 Element response
    2.1.5 Beampattern
  2.2 One-way and two-way beampatterns
    2.2.1 The coarray and sparse arrays
  2.3 Random arrays
  2.4 The binned random array

3 Optimization of Sparse Arrays
  3.1 1-D arrays
  3.2 1-D Layout optimization

4 2-D Array Optimization
  4.1 Optimization of Steered Arrays
  4.2 2-D array optimized with simulated annealing
  4.3 2-D array optimized with genetic optimization

5 Optimization of the Two-way Beampattern

6 Conclusion

7 Acknowledgement

Appendix: Optimization Methods
  A. Linear Programming
  B. Simulated Annealing
  C. Genetic Optimization
