Tensor Sparsity for Classifying Low-Frequency Ultra-Wideband (UWB) SAR Imagery
Tiep H. Vu1, Lam Nguyen2, Calvin Le3, Vishal Monga4
1,4 School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802
2,3 Sensors & Electron Devices Directorate, U.S. Army Research Laboratory, 2800 Powder Mill Rd, Adelphi, MD 20783
Email: 1 [email protected], 2 [email protected], 3 [email protected], 4 [email protected]
Abstract—Although a lot of progress has been made over the years, one critical challenge still facing low-frequency (UHF to L-band) ultra-wideband (UWB) synthetic aperture radar (SAR) technology is the discrimination of buried and obscured targets of interest from other natural and manmade clutter objects in the scene. The key issues are i) low-resolution SAR imagery for this frequency band, ii) targets of interest being typically small compared to the radar signal wavelengths, iii) targets having low radar cross sections (RCS), and iv) very noisy SAR imagery (e.g., target responses buried in responses from a cluttered environment). In this paper, we consider the problem of discriminating and classifying buried targets of interest (buried metal and plastic mines, 155-mm unexploded ordnance [UXO], etc.) from other natural and manmade clutter objects (soda cans, rocks, etc.) in the presence of noisy responses from the rough ground surfaces for low-frequency UWB 2-D SAR images. We generalize the traditional sparse representation-based classification (SRC) to a model capable of using the information of a shared class, and implement multichannel classification by exploiting structures of the sparse coefficients using various techniques. Here, we employ an electromagnetic (EM) SAR database generated using finite-difference, time-domain (FDTD) software, which is based on a full-wave computational EM method.
Keywords—SRC, sparse recovery, UWB, SAR, discriminator, classifier.
I. INTRODUCTION
Over the past two decades, the U.S. Army has been investigating the capability of low-frequency, ultra-wideband (UWB) synthetic aperture radar (SAR) systems for the detection of buried and obscured targets in various applications, such as foliage penetration [1], ground penetration [2], and sensing-through-the-wall [3]. These systems must operate in the low-frequency spectrum that spans from the UHF band to the L band in order to achieve both resolution and penetration capability. Although a lot of progress has been made over the years, one critical challenge that low-frequency UWB SAR technology still faces is the discrimination of targets of interest from other natural and manmade clutter objects in the scene. The key issue is that the targets of interest are typically small compared to the wavelengths of the radar signals in this frequency band and have very low radar cross sections (RCSs). Thus, it is very difficult to discriminate targets from clutter objects using the low-resolution SAR imagery.
In this paper, we consider the problem of discriminating and classifying buried targets of interest (metal and plastic mines, 155-mm unexploded ordnance [UXO], etc.) from other natural and manmade clutter objects (soda cans, rocks, etc.) in the presence of noisy responses from the rough ground surfaces for low-frequency UWB 2-D SAR images. For classification problems, sparse representation-based classification [4] (SRC) has been successfully demonstrated in other imagery domains such as medical image classification [5]–[8], hyperspectral image classification [9]–[11], high-resolution X-band synthetic aperture radar (SAR) image classification [12], recaptured image recognition [13], video anomaly detection [14], and several others [15]–[21]. However, in the low-frequency RF UWB SAR domain, although we have studied the feasibility of using SRC for higher-resolution 3-D down-looking SAR imagery [22], the application of SRC to low-frequency UWB 2-D SAR imagery has not been studied to date due to the aforementioned low-resolution issue. In this paper, we generalize the traditional SRC to incorporate target classification using either a single channel (radar polarization) or multiple channels of SAR imagery.
In sparse representations, many signals can be expressed as a linear combination of a few bases taken from a “dictionary”. Based on this theory, SRC [4] was originally developed for robust face recognition. The main idea in SRC is to represent a test sample (e.g., a face) as a linear combination of samples from the available training set. Sparsity manifests because most of the nonzero components correspond to bases whose class memberships are the same as the test sample's.
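To make the SRC decision rule concrete, the following sketch codes a test vector over a dictionary of training samples with an l1 penalty (solved here by plain ISTA for simplicity) and assigns the class whose columns alone give the smallest reconstruction residual. The dictionary, labels, and parameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||y - D x||_2^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def src_classify(D, labels, y, lam=0.1):
    """SRC rule: code y over the whole dictionary, then pick the class
    whose own columns reconstruct y with the smallest residual."""
    x = sparse_code(D, y, lam)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)
```

For a well-conditioned dictionary, most energy of the code lands on columns of the true class, so the class-wise residual rule recovers the label even though the code is computed jointly over all classes.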
The aforementioned SRC is applied only to the single-channel problem, using the l1-norm as the sparsity regularizer and without the presence of a shared class. Our main contribution: In this paper, we generalize the traditional SRC to a model that can use the information of the shared class, and implement multichannel classification problems by exploiting structures of the sparse coefficients using various techniques. With the presence of the shared class, a new test sample is presumably represented by samples from the corresponding class in conjunction with samples from the shared class. In particular, a noisy test sample of a target, e.g., T1, can be expressed as a linear combination of available clean training images from that target and some ground images at different angles. This assumption still assures the sparsity of the coding coefficient. After obtaining the sparse coefficient, we can eliminate the contribution of the ground and focus on the discriminative information of the signal, which likely enhances the performance of the SRC framework. For multichannel signals, in order to use the cross-channel correlation, we present four different sparsity regularizers imposed on the sparse codes.
We also note that our proposed models are not limited to the problem of classifying UWB SAR images; they can be applied to any kind of signal that contains multiple channels, e.g., multiple color channels in medical image processing or multiple viewpoints of objects.
II. SPARSE REPRESENTATION-BASED CLASSIFICATION
A. Notation
Each target sample is represented by a SAR image formed using either a single (co-pol) or multiple (both co-pol and cross-pol) polarization channels. Thus, one target sample is denoted by y ∈ R^{d×1×T}, where d is the total number of image pixels and T is the number of polarization channels. Note that y represents a tensor when T > 1. A collection of N samples is denoted by Y ∈ R^{d×N×T}.
Suppose we are considering a general classification problem with C different classes. Let Dc (1 ≤ c ≤ C) be the collection of all training samples from class c, D0 be the collection of samples in the shared class, and D = [D1, . . . , DC, D0] be the total dictionary. In our problem, the shared class can be seen as the collection of ground images.
For any tensor M, let M(t) be its t-th channel. For convenience, given two tensors M, N, the tensor multiplication P = MN is defined as channel-wise multiplication, i.e., P(t) = M(t)N(t). For a tensor M, we denote the sum of squares of all elements by ‖M‖²_F and the sum of absolute values of all elements by ‖M‖₁. Tensor addition/subtraction is simply element-wise addition/subtraction.
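Under these conventions, the tensor operations can be sketched in a few lines of numpy. Storing the channels along the last axis is an assumed memory layout, not something specified in the text.

```python
import numpy as np

def chanmul(M, N):
    """Channel-wise tensor product P = MN, i.e. P(t) = M(t) @ N(t) for each t.
    M has shape (d, K, T), N has shape (K, n, T); result has shape (d, n, T)."""
    return np.einsum('dkt,knt->dnt', M, N)

def fro_sq(M):
    """||M||_F^2: sum of squares of all elements of the tensor."""
    return float(np.sum(M ** 2))

def l1(M):
    """||M||_1: sum of absolute values of all elements of the tensor."""
    return float(np.sum(np.abs(M)))
```

With this layout, the data-fidelity term of the coming optimization problems is simply `fro_sq(y - chanmul(D, x))`.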
With the definition of tensor multiplication, a sparse representation of y using D can be obtained by solving:

x = arg min_x (1/2)‖y − Dx‖²_F + λ FS(x)    (1)
where λ is a positive regularization parameter and FS(x) is a function that encourages a sparsity structure in x. Denote by xi the sparse coefficient of y on Di. Then the tensor x can be divided into C + 1 tensor parts x1, x2, . . . , xC, x0.
B. Generalized sparse representation-based classification
Confuser detection:
In practical problems, the set of confusers is not limited to the training set. A confuser can be anything that is not a target; it can be solely the ground or a capture of an unseen object. In the former case, the test signal can be well represented using only the ground dictionary D0, while in the latter case, the sparsity assumption is no longer valid. Therefore, a test signal y is classified as a confuser if one of the following three conditions is satisfied: i) it is not sparsely represented by the total dictionary D; ii) most of the active elements of its sparse code are located in x0; or iii) it is similar to known confusers.
Fig. 1: Different sparsity constraints on x: a) CR (x1), b) CC (x2), c) SM (x3), d) GT (x4). A test signal y is approximated as D × x, where D = [Dtargets, Dconfusers, Dgrounds].
Algorithm 1 Generalized sparse representation-based classification with a shared class

function identity(y) = GENERALIZED_SRC(y, D, λ, FS(•), ε, τ)
INPUT: y ∈ R^{d×1×T} – a test signal; D = [D1, D2, . . . , DC, D0] ∈ R^{d×K×T} – the total dictionary with the shared dictionary D0; FS(•) – the sparsity constraint imposed on the sparse codes; λ ∈ R+ – a positive regularization parameter; ε, τ – positive thresholds.
OUTPUT: the identity of y.
1. Sparsely code y on D by solving:
   x = arg min_x { (1/2)‖y − Dx‖²_F + λ FS(x) }    (2)
2. Remove the contribution of the shared dictionary: y = y − D0 x0.
3. Calculate the class-specific residuals: rc = ‖y − Dc xc‖_F, ∀c = 1, 2, . . . , C.
4. Decision:
   if min_c(rc) > τ (an unseen object) or ‖y‖₂ < ε (a ground) then
      y is a confuser.
   else
      identity(y) = arg min_c {rc}.
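Steps 2–4 of Algorithm 1 can be sketched as follows, assuming the sparse code has already been computed and split into per-class blocks; the two-class dictionary and threshold values in the usage below are illustrative only.

```python
import numpy as np

def generalized_src_decision(y, D_parts, x_parts, eps=1e-3, tau=0.5):
    """Steps 2-4 of generalized SRC with a shared class (a sketch).
    D_parts / x_parts are lists [D1, ..., DC, D0] / [x1, ..., xC, x0] of
    per-class dictionary blocks and matching sparse-code blocks.
    Returns a class index in 0..C-1, or the string 'confuser'."""
    # Step 2: remove the contribution of the shared (ground) dictionary D0.
    y_clean = y - D_parts[-1] @ x_parts[-1]
    # ||y_clean|| < eps: the signal was essentially pure ground.
    if np.linalg.norm(y_clean) < eps:
        return 'confuser'
    # Step 3: class-specific residuals over the target classes only.
    residuals = [np.linalg.norm(y_clean - Dc @ xc)
                 for Dc, xc in zip(D_parts[:-1], x_parts[:-1])]
    # Step 4: a large minimum residual means no class explains y (unseen object).
    if min(residuals) > tau:
        return 'confuser'
    return int(np.argmin(residuals))
```

For example, a signal composed of the first class atom plus a ground component is assigned to class 0 once the ground block is subtracted, while a pure-ground signal or one far from every class falls through to 'confuser'.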
The overall algorithm of generalized SRC applied to multichannel signals in the presence of a shared class is shown in Algorithm 1. In this algorithm, FS(x) is a sparsity regularizer imposed on x, which can be defined in different ways:
a) Apply a sparsity constraint on each column of the coefficient tensor x, with no cross-channel constraint.
In this case, the active elements of x can be randomly distributed. This is similar to solving T separate sparse coding problems, one for each polarization channel. The classification rule is then based on the sum of the squares of all the residuals. We refer to this framework as SRC-cumulative residual, or SRC-CR (CR for short). (See Fig. 1a with sparse tensor x1.)
b) Concatenate all channels.
The most convenient way to convert a multichannel problem into a single-channel one is to concatenate all T channels of a signal into one long vector. By doing so, we recover the original SRC framework with l1-norm minimization. Consequently, the sparse codes of all T channels are identical. From the tensor viewpoint, the code tensor x will have few active “tubes”; moreover, all elements in a tube are the same. We refer to this framework as SRC-concatenation, or SRC-CC (CC for short). (See Fig. 1b with sparse tensor x2.)
c) Use a simultaneous model.
Similar to the SHIRC model proposed in [6], [8], we can impose a constraint on the active elements of the tensor x as follows: x also has few nonzero tubes, as in SRC-CC; however, the elements in an active tube are not necessarily the same. In other words, the locations of the nonzero coefficients of training samples in the linear combination exhibit a one-to-one correspondence across channels: if the j-th training sample in D(1) has a nonzero contribution to y(1), then for t ∈ {2, . . . , T}, y(t) has a nonzero contribution from the j-th training sample in D(t). We refer to this framework as SRC-Simultaneous, or SRC-SM (SM for short). (See Fig. 1c with sparse tensor x3.) To achieve this, we impose on the tensor x3 (with one column and T channels) the l1,2-minimization constraint, which is similar to the row-sparsity constraint applied to matrices in [12].
Remarks: While SHIRC uses l0-minimization on x and applies the modified SOMP [23], our proposed SRC-SM exploits the flexibility of the l1,2-regularizer, since it is easily modified when additional constraints are present (e.g., non-negativity). In addition, it is more efficient, especially when dealing with problems with multiple input samples.
d) Use a group tensor model.
Intuitively, since one object is ideally represented by a linear combination of the corresponding dictionary and the shared dictionary, the number of active (tensor) parts in x is expected to be small, i.e., only a few of x1, . . . , xC, x0 are nonzero tensors. This suggests a group-tensor sparsity framework that can improve classification performance, which we refer to as SRC-GT (GT for short). This idea is visualized in Fig. 1d.
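As an illustration, the CR, SM, and GT regularizers above can all be written as penalties on a code tensor reshaped to a (training samples × channels) array; SRC-CC needs no separate penalty, since concatenation forces identical codes across channels by construction. The array shapes and group layout here are assumptions for illustration.

```python
import numpy as np

def fs_cr(x):
    """CR: plain l1 on every entry; channels are coded independently."""
    return float(np.sum(np.abs(x)))

def fs_sm(x):
    """SM: l1,2 "tube" sparsity -- the sum over rows (training samples) of
    the l2-norm across channels, so a sample is active in all channels or
    in none.  x has shape (K, T)."""
    return float(np.sum(np.linalg.norm(x, axis=1)))

def fs_gt(x, groups):
    """GT: group-tensor sparsity -- the sum of l2-norms over whole class
    blocks; `groups` lists the row indices belonging to each class (the
    shared class included)."""
    return float(sum(np.linalg.norm(x[g]) for g in groups))
```

Note how the penalties nest: for a code with a single active row, CR charges the l1 mass of that row while SM and GT charge only its l2 length, which is what makes the latter two favor structured, tube- or block-shaped supports.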
C. Problem formulation and solution
While solving the main optimization problems in SRC-CC and SRC-CR is fairly straightforward, those in SRC-SM and SRC-GT require more explanation.
In order to achieve the “tube-sparsity” (SRC-SM) and “group-tensor sparsity” (SRC-GT) solutions, we propose the following optimization problem:
x = arg min_x (1/2)‖y − Dx‖²_F + λ Σ_{gi∈G} ‖x_{gi}‖₂    (3)
where G is a partition of the tensor x, and x_{gi} is the vectorization of the part corresponding to gi. In SRC-SM, G is divided into tubes, while in SRC-GT, G is partitioned into all tubes in the corresponding class, i.e., x_{gi} = xi.

Fig. 2: Sample images of five targets and five clutter objects. T1 = M15 anti-tank mine, T2 = TM62P3 plastic mine, T5 = 155-mm artillery shell, C1 = Coke can, C2 = Rocks, C3 = Rocks, C4 = Rocks, C5 = Rocks. a) Targets under a smooth ground surface. b) Targets under a rough ground surface (easy case, scale = 1). c) Targets under a rough ground surface (hard case, scale = 5). d) Confusers under a smooth ground surface. e) Confusers under a rough ground surface (easy case, scale = 1). f) Confusers under a rough ground surface (hard case, scale = 5).
This problem is similar to the group lasso problem proposed in [24]. The solution can be found using FISTA [25] and the minimum l2-norm minimization, which will be discussed in detail in our future work.
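A minimal sketch of this approach, using FISTA with the block soft-thresholding proximal operator of the group l2-norm; the step size and iteration count are illustrative, and this is a generic group-lasso solver rather than the authors' exact implementation.

```python
import numpy as np

def block_soft(v, t):
    """Proximal operator of t*||.||_2: shrink the whole block toward zero."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1.0 - t / n) * v

def fista_group(D, y, groups, lam=0.1, n_iter=300):
    """FISTA for 0.5*||y - D x||_2^2 + lam * sum_g ||x_g||_2 (group lasso).
    `groups` is a list of index arrays partitioning the coefficients."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = z - D.T @ (D @ z - y) / L           # gradient step at z
        x_new = g.copy()
        for idx in groups:                      # group-wise shrinkage
            x_new[idx] = block_soft(g[idx], lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With the SM grouping each group is one tube of x; with the GT grouping each group collects every coefficient of one class block, so entire classes are switched on or off together.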
III. EXPERIMENTAL RESULTS
A. Data
SRC is applied to a SAR database consisting of targets (metal and plastic mines, 155-mm unexploded ordnance [UXO], etc.) and clutter objects (soda cans, rocks, etc.) buried under rough ground surfaces. The electromagnetic (EM) radar data are simulated based on the full-wave computational EM method known as finite-difference, time-domain (FDTD) software [26], which was developed by the U.S. Army Research Laboratory (ARL). The software has been validated for a wide variety of radar signature calculation scenarios [27], [28]. Our volumetric rough ground surface grid – with the embedded buried targets – was generated using the surface root-mean-square (rms) height and correlation length parameters. The
Fig. 3: Classification accuracy (%) of each method (SRC-CC, SRC-CR, SRC-SM, SRC-GT) as a function of ground scale, for the SVM baseline and the VV, HH, VV-HH, VV-HV, HH-HV, and ALL polarization combinations.
targets are flush buried at 2–3 cm depth. In the easiest case of rough ground surface in our experiments, the surface rms height is 5.6 mm and the correlation length is 15 cm. The SAR images of the various targets and clutter objects are generated from the EM data by coherently integrating individual radar return signals over a range of aspect angles. The SAR images are formed using backprojection image formation [29] with an integration angle of 60◦. Fig. 2a shows the SAR images (using VV polarization) of some targets buried under a perfectly smooth ground surface. Each target is imaged at a random viewing aspect angle and an integration angle of 60◦. Figs. 2b and 2c show the same targets as Fig. 2a, except that they are buried under a rough ground surface (the easiest case corresponds to ground scale = 1 and a harder case corresponds to ground scale = 5). Similarly, Figs. 2d, 2e, and 2f show the SAR images of some clutter objects buried under smooth and rough surfaces, respectively. For training, the targets and clutter objects are buried under a smooth surface in order to generate high signal-to-clutter-ratio images. We include 12 SAR images corresponding to the 12 different aspect angles (0◦, 30◦, . . . , 330◦) for each target type. For testing, the SAR images of targets and confusers are generated at random aspect angles and buried under the rough ground surfaces. Various levels of ground surface roughness are simulated by selecting different ground surface scaling factors when embedding the test targets under the rough surfaces. Thus, the resulting test images are very noisy, with a very low signal-to-clutter ratio. Each data sample of one object is represented by either i) one SAR image using data from one co-pol (VV, HH) channel or ii) two or more images using data from co-pol (VV, HH) and cross-pol (HV) channels. For each target type, we test 100 image samples measured at random aspect angles.
B. Overall classification accuracy
We apply the four presented methods to the possible combinations of three polarizations: VV, HH, VV-HH, VV-HV, HH-HV, and VV-HH-HV (or ALL), and also compare these results with those obtained by SVM using the libsvm library [30]. The training set comprises all five target sets, four out of five confuser sets (each time we leave one confuser set out,
TABLE I: Overall classification accuracy (%) when applying the four presented models on all possible combinations of polarization channels.

        VV     HH     VV-HH  VV-HV  HH-HV  ALL
SVM     70.48  65.98  69.34  70.48  65.82  69.70
CC      77.22  82.80  81.48  86.34  87.72  87.60
CR      84.90  89.22  88.36  88.14  91.00  91.54
SM      85.26  91.80  89.56  89.40  93.20  92.26
GT      81.96  88.40  85.58  86.80  89.02  88.18
CC*     77.64  82.46  81.66  86.22  87.16  87.22
CR*     86.36  92.48  90.34  88.02  92.70  91.78
SM*     87.58  94.00  91.88  89.72  94.34  93.48
GT*     88.04  94.24  91.80  87.72  92.38  92.14

* with non-negativity constraint.
which is supposed to be unseen), and the ground set. Each target is considered one class; all confusers are gathered into a single class – the class of confusers. The test set comprises all test targets and confusers, each contaminated by one of the test grounds at scale 1 – a small disruption. We also impose non-negativity constraints on the sparse coefficients to compare all results. The results are averaged and reported in Table I. From Table I, we can observe that i) the performance of each method is, in general, improved with non-negativity constraints imposed on the sparse coefficients; ii) the HH polarization by itself and all combinations containing it yield better results; and iii) when more than two polarizations are involved, SRC-SM outperforms all other methods. It is worth noting that all SRC-like methods perform better than SVM.
C. Polarization combination comparison
To see which combination of polarizations provides good results, we conduct another experiment by applying each of the competing methods to the six polarization combinations. The data are set up as in Section III-B with slight modifications: i) we use the non-negativity constraints, which were observed to yield better performance; and ii) in the test signals, the ground scale is varied from 1 to 5 to see how each method performs under high corruption. Fig. 3 shows the results of all methods
Fig. 4: Classification accuracy (%) as a function of the number of training confuser sets and ground scale (panels: ground scale = 1, 3, 5; curves: SVM, CR-VVHH, CR-VVHV, CR-HHHV, CR-ALL, SM-VVHH, SM-VVHV, SM-HHHV, SM-ALL, GT-HH, GT-VVHH, GT-VVHV, GT-HHHV, GT-ALL).
on the different combinations. First, for all methods, the VV-HV, HH-HV, and ALL combinations outperform the others, with the gaps becoming significant as the ground scale increases. It is also evident that HH-HV consistently outperforms the other combinations in all methods. Second, among the four presented methods, SRC-CC is clearly outperformed by the others, with its best accuracy remaining under 90%. Third, SRC-GT using HH only also provides good results; it is even the best when the ground scale is equal to 1. SVM again does not provide good performance.
D. Classification results as a function of training confuser sets
In this experiment, we vary the number of training confuser sets from one to four, in conjunction with different ground scales. The results are shown in Fig. 4. It is evident that with small corruption, the SRC-based methods provide almost the same results, with GT-HH being slightly better than the others. However, when the ground scale is increased, SM-HHHV and SM-ALL offer the best results.
IV. CONCLUSION
We have applied four sparse representation-based classifiers to identify and discriminate buried targets of interest from other natural and manmade clutter objects in the presence of noisy responses from the rough ground surfaces in low-frequency UWB 2-D SAR images. The application of SRC to low-frequency UWB 2-D SAR imagery had never been studied before. The discrimination/classification results are very encouraging, even when the test images are very noisy (targets buried under rough ground surfaces), the targets have small RCSs, and the target responses have very little detailed structure (the targets are small compared to the wavelengths of the radar signals). The simultaneous sparse-based (SM) technique consistently offers the best results with the combination of co- and cross-pol data (e.g., HH-HV). On the other hand, the group tensor sparse-based (GT) technique outperforms the others when only one polarization (HH) is available. In addition, the non-negativity constraints on the sparse codes enhance the performance of the system. In future work, we aim to incorporate dictionary learning frameworks into these models to obtain further performance improvement.
REFERENCES
[1] L. H. Nguyen, R. Kapoor, and J. Sichina, “Detection algorithms for ultrawideband foliage-penetration radar,” Proceedings of the SPIE, vol. 3066, pp. 165–176, 1997.
[2] L. H. Nguyen, K. Kappra, D. Wong, R. Kapoor, and J. Sichina, “Mine field detection algorithm utilizing data from an ultrawideband wide-area surveillance radar,” Proceedings of the SPIE, vol. 3392, no. 627, 1998.
[3] L. H. Nguyen, M. Ressler, and J. Sichina, “Sensing through the wall imaging using the Army Research Lab ultra-wideband synchronous impulse reconstruction (UWB SIRE) radar,” Proceedings of SPIE, vol. 6947, no. 69470B, 2008.
[4] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, Feb. 2009.
[5] T. H. Vu, H. S. Mousavi, V. Monga, U. Rao, and G. Rao, “DFDL: Discriminative feature-oriented dictionary learning for histopathological image classification,” Proc. IEEE International Symposium on Biomedical Imaging, pp. 990–994, 2015.
[6] U. Srinivas, H. S. Mousavi, C. Jeon, V. Monga, A. Hattel, and B. Jayarao, “SHIRC: A simultaneous sparsity model for histopathological image representation and classification,” Proc. IEEE International Symposium on Biomedical Imaging, pp. 1118–1121, Apr. 2013.
[7] T. H. Vu, H. S. Mousavi, V. Monga, U. Rao, and G. Rao, “Histopathological image classification using discriminative feature-oriented dictionary learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 3, pp. 738–751, Mar. 2016.
[8] U. Srinivas, H. S. Mousavi, V. Monga, A. Hattel, and B. Jayarao, “Simultaneous sparsity model for histopathological image representation and classification,” IEEE Transactions on Medical Imaging, vol. 33, no. 5, pp. 1163–1179, May 2014.
[9] X. Sun, N. M. Nasrabadi, and T. D. Tran, “Task-driven dictionary learning for hyperspectral image classification with structured sparsity constraints,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 8, pp. 4457–4471, 2015.
[10] X. Sun, Q. Qu, N. M. Nasrabadi, and T. D. Tran, “Structured priors for sparse-representation-based hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 7, pp. 1235–1239, 2014.
[11] Y. Chen, N. M. Nasrabadi, and T. D. Tran, “Hyperspectral image classification via kernel sparse representation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 217–231, 2013.
[12] H. Zhang, N. M. Nasrabadi, Y. Zhang, and T. S. Huang, “Multi-view automatic target recognition using joint sparse representation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 3, pp. 2481–2497, 2012.
[13] T. Thongkamwitoon, H. Muammar, and P.-L. Dragotti, “An image recapture detection algorithm based on learning dictionaries of edge profiles,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 5, pp. 953–968, 2015.
[14] X. Mo, V. Monga, R. Bala, and Z. Fan, “Adaptive sparse representations for video anomaly detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 4, pp. 631–645, 2014.
[15] H. S. Mousavi, U. Srinivas, V. Monga, Y. Suo, M. Dao, and T. Tran, “Multi-task image classification via collaborative, hierarchical spike-and-slab priors,” in Proc. IEEE Conf. on Image Processing, 2014, pp. 4236–4240.
[16] U. Srinivas, Y. Suo, M. Dao, V. Monga, and T. D. Tran, “Structured sparse priors for image classification,” IEEE Transactions on Image Processing, vol. 24, no. 6, pp. 1763–1776, 2015.
[17] H. Zhang, Y. Zhang, N. M. Nasrabadi, and T. S. Huang, “Joint-structured-sparsity-based classification for multiple-measurement transient acoustic signals,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1586–1598, 2012.
[18] M. Dao, Y. Suo, S. P. Chin, and T. D. Tran, “Structured sparse representation with low-rank interference,” in 2014 48th Asilomar Conference on Signals, Systems and Computers. IEEE, 2014, pp. 106–110.
[19] M. Dao, N. H. Nguyen, N. M. Nasrabadi, and T. D. Tran, “Collaborative multi-sensor classification via sparsity-based representation,” IEEE Transactions on Signal Processing, vol. 64, no. 9, pp. 2400–2415, 2016.
[20] H. Van Nguyen, V. M. Patel, N. M. Nasrabadi, and R. Chellappa, “Design of non-linear kernel dictionaries for object recognition,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5123–5135, 2013.
[21] T. H. Vu and V. Monga, “Learning a low-rank shared dictionary for object classification,” accepted to International Conference on Image Processing, 2016.
[22] L. Nguyen and C. Le, “Sparsity driven target discrimination for ultra-wideband (UWB) down-looking ground penetration radar (GPR),” Tri-Service Radar, 2013.
[23] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
[24] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
[25] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[26] T. Dogaru, “AFDTD user's manual,” ARL Technical Report, Adelphi, MD, ARL-TR-5145, Mar. 2010.
[27] T. Dogaru, L. Nguyen, and C. Le, “Computer models of the human body signature for sensing through the wall radar applications,” ARL, Adelphi, MD, ARL-TR-4290, Sep. 2007.
[28] D. Liao and T. Dogaru, “Full-wave characterization of rough terrain surface scattering for forward-looking radar applications,” IEEE Transactions on Antennas and Propagation, vol. 60, pp. 3853–3866, Aug. 2012.
[29] J. McCorkle and L. Nguyen, “Focusing of dispersive targets using synthetic aperture radar,” ARL-TR-305, U.S. Army Research Laboratory, Adelphi, MD, Aug. 1994.
[30] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.