
DEEP LEARNING APPROACH FOR REMOTE SENSING IMAGE ANALYSIS

Amina Ben Hamida*,**, Alexandre Benoit*, Patrick Lambert*, Chokri Ben Amar**

* LISTIC, Université Savoie Mont Blanc, France
{amina.ben-hamida,alexandre.benoit,patrick.lambert}@univ-smb.fr

** REGIM, ENIS, Tunisia, chokri.benamar@ieee.org

Presentation outline

Scientific context
● Big Data
● Deep Learning (DL)
● Remote Sensing

DL for hyperspectral data
● Experimental dataset
● DL architectures
● Results

Discussion & future work

Scientific Context

100 hours of video are uploaded every minute, i.e. 2 billion each year

350 million photos are uploaded daily

1.4 million minutes of chat are saved every minute

Big Data in specific fields: medical imaging, …

Remote Sensing (RS)

Use case example: the Sentinel satellites, which provide several thousand terabytes of data over a 10-year span.

Scientific Context

Can we adapt recent methods developed in the multimedia community to RS?

Deep Learning

Modelling high-level abstractions from multiple non-linear transformations

“Rachel”

Deep Learning

Fully connected layer:
● connects all the neurons to all available inputs
● no spatial embedding

Non-linearity:
● its choice impacts convergence speed!

Deep Learning

Pooling layer:
● subsamples signals
● adds translation robustness

Convolutional layer:
● local filtering
● rich feature map generation
(see the sketch below)
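As a hedged illustration of how these building blocks chain together (a minimal PyTorch sketch with arbitrary layer sizes, not the network presented later):

```python
import torch
import torch.nn as nn

# Arbitrary toy dimensions, for illustration only.
tiny_cnn = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # local filtering -> rich feature maps
    nn.ReLU(),                    # non-linearity (its choice impacts convergence speed)
    nn.MaxPool2d(kernel_size=2),  # pooling: subsampling + translation robustness
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),  # fully connected: every neuron sees all inputs, no spatial embedding
)

x = torch.randn(1, 3, 32, 32)     # one 32x32 RGB image
logits = tiny_cnn(x)              # shape: (1, 10)
```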

Hyperspectral Data

DL for Hyperspectral Data Classification

Taking the spatial and spectral components into account (input shapes are illustrated in the sketch below):

● Only using spectral information → spatial information is forgotten
● Processing the two separately (using SAE) → ?
● Early combining of the spatial and spectral dimensions → explodes the number of parameters and requires more training data, but looks good
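As a hedged sketch of what these strategies actually feed to a network (NumPy, with Pavia-like but otherwise arbitrary sizes), compare a spectral-only input with an early-combined spatial-spectral patch:

```python
import numpy as np

# Hypothetical hyperspectral cube: height x width x bands.
cube = np.random.rand(610, 340, 103)
row, col = 200, 150                    # a labelled pixel

# 1) Spectral-only input: a 103-dim vector, spatial context is lost.
spectral_vector = cube[row, col, :]    # shape (103,)

# 2) Early spatial-spectral combining: an n x n neighborhood with all bands.
n = 5
half = n // 2
patch = cube[row - half:row + half + 1,
             col - half:col + half + 1, :]   # shape (5, 5, 103)
```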

Experimental dataset

University of Pavia dataset:
● a single image, 610×340 pixels
● 103 spectral bands
● 9 classes
(a ~5% training split is sketched below)
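The results below are obtained when training on only ~5% of the labelled pixels. A minimal sketch of such a split (NumPy; the `labels` map here is random placeholder data standing in for the real 9-class ground truth):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder ground-truth map: 0 = unlabelled, 1..9 = classes (random, for illustration only).
labels = rng.integers(0, 10, size=(610, 340))

labelled = np.argwhere(labels > 0)      # (row, col) coordinates of labelled pixels
rng.shuffle(labelled)                   # shuffle the pixel list

n_train = int(0.05 * len(labelled))     # keep ~5% for training, the rest for testing
train_pixels, test_pixels = labelled[:n_train], labelled[n_train:]
```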

DL architecture

Cascading 3D convolutions, 1D convolutions and final fully connected layers (see the sketch below).

Hyperspectral deep network architectures:
● 3 layers, 3D/1D
● 4 layers, 3D/1D
● 6 layers, 3D/1D
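A hedged sketch of what such a 3D/1D cascade could look like in PyTorch (the kernel sizes, channel counts and depth below are illustrative only; they are not the exact 3-, 4- or 6-layer configurations used in the experiments):

```python
import torch
import torch.nn as nn

class Net3D1D(nn.Module):
    """Illustrative 3D/1D cascade for (1, bands, n, n) hyperspectral patches, n = 5 here."""

    def __init__(self, n_bands=103, n_classes=9):
        super().__init__()
        # 3D convolutions combine the spectral and spatial dimensions early on.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),   # -> (8, bands-6, n-2, n-2)
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),  # -> (16, bands-10, 1, 1) for n = 5
            nn.ReLU(),
        )
        spectral_len = n_bands - 10                    # spectral length left after the 3D stage
        # 1D convolutions keep refining along the spectral axis only.
        self.conv1d = nn.Sequential(nn.Conv1d(16, 16, kernel_size=5), nn.ReLU())
        self.fc = nn.Linear(16 * (spectral_len - 4), n_classes)

    def forward(self, x):                              # x: (batch, 1, bands, n, n)
        x = self.conv3d(x)
        x = x.flatten(start_dim=3).squeeze(-1)         # drop the 1x1 spatial dims -> (batch, 16, spectral)
        x = self.conv1d(x)
        return self.fc(x.flatten(start_dim=1))

net = Net3D1D()
logits = net(torch.randn(2, 1, 103, 5, 5))             # -> (2, 9) class scores
```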

Results: accuracy vs complexity

[Figure: accuracy when training on ~5% of the data (65-100%) vs number of parameters (0-80,000), for the 3-, 4- and 6-layer 3D/1D networks and the reference CNN*, each evaluated with 1×1, 3×3 and 5×5 spatial ranges]

* Hu et al., "Deep convolutional neural networks for hyperspectral image classification," Journal of Sensors, 2015

Results: accuracy vs complexity

● Deeper models bring higher performance with fewer parameters.
● Deeper networks need more time to train.
● Spatial information does matter, but the useful spatial range depends on the use case.

Results: 6-layer deep net, 5×5 neighbors

Spectral profiles

Results: 6-layer deep net, 5×5 neighbors

Per-class accuracy is mostly stable, ~95% on average.

Classification errors are explained by:
• similar spectral profiles
• boundary effects (ROI size vs neighboring classes)

Results: confusion vs neighborhood

[Figure: confusion vs neighborhood size (1×1, 3×3, 5×5); processing times of 1h, 2h and 5h (Caffe, CPU mode, dual-core i7 processor)]

Observation: spatial information gradually corrects spectral-based errors.

Results: accuracy vs training dataset size

[Figure: accuracy on the Pavia University dataset (88-100%) vs training samples ratio (%), comparing:]

● 6 layers, 3×3 neighbors, ~4419 parameters
● 6 layers, 5×5 neighbors, ~6074 parameters
● CNN challenger, 5×5 neighbors, no pretraining, ~20000 parameters (K. Makantasis et al., "Deep supervised learning for hyperspectral data classification through convolutional neural networks," IGARSS 2015)
● SAE challenger, 7×7 neighbors, with pretraining, >>20000 parameters (X. Ma et al., "Hyperspectral image classification via contextual deep learning," EURASIP JIVP, 2015)

Conclusion

Deep Learning can do the job!

● Automatic adaptation to the context and good results
● Deeper is better… up to a limit?

Main issues:

● Expertise required
● Network architecture design
● Training procedure design
● Reducing the number of parameters

Future Work guideline

Enhance architectures:
● Learn metrics from similarity measures with Siamese networks (see the loss sketch below)
● Get lighter models: the SqueezeNet approach

Adapt to new contexts:
● Switch to multispectral data: the Sentinel use case
● Play with unlabelled data
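As a hedged sketch of the "learning metrics from similarity measures" idea, here is a generic Siamese/contrastive loss (not the authors' design; the embedding network is a placeholder):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Generic Siamese loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    dist = F.pairwise_distance(emb_a, emb_b)
    positive = same_class * dist.pow(2)
    negative = (1 - same_class) * F.relu(margin - dist).pow(2)
    return (positive + negative).mean()

# Toy usage with a shared (weight-tied) embedding network `f`:
f = torch.nn.Linear(103, 16)             # placeholder embedding of spectral vectors
xa, xb = torch.randn(4, 103), torch.randn(4, 103)
y = torch.tensor([1., 0., 1., 0.])       # 1 = same class, 0 = different
loss = contrastive_loss(f(xa), f(xb), y)
```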

What's next?

Yes, DL has been so far so good for simple RS applications.

But what gaps will it face when the task gets harder?

Questions ?

Thank you for your attention

Results: from one dataset to another

Accuracy vs dataset, depth and neighborhood:

                   Pavia University   Pavia Centre
3 layers
  1×1 neighbors        75.9 %            90.5 %
  3×3 neighbors        84.0 %            94.5 %
  5×5 neighbors        93.8 %            96.4 %
  7×7 neighbors        85.9 %            96.2 %
6 layers
  1×1 neighbors        86.5 %
  3×3 neighbors        92.3 %            98.5 %
  5×5 neighbors        93.8 %

Future Work guideline

Testing the robustness of the DL structure:
● Facing noise: inject noise into the system to test its ability to deal with noisy images (see the sketch below).
● Degrading performance: test to what extent the system can withstand a variety of attempts to degrade its performance.
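A minimal sketch of the noise-injection idea (simply perturbing test inputs with additive Gaussian noise before re-evaluating the trained model; the noise level is arbitrary):

```python
import torch

def add_gaussian_noise(patches, sigma=0.05):
    """Return a noisy copy of test patches to probe the model's robustness."""
    return patches + sigma * torch.randn_like(patches)

# e.g. evaluate the same trained network on clean vs noisy inputs
clean = torch.randn(2, 1, 103, 5, 5)
noisy = add_gaussian_noise(clean, sigma=0.1)
```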

Future Work guideline

Relying on larger ground-truth databases:
● Use other databases to create ground-truth annotated ones.
● This work can be done in collaboration with other labs.
● Goal: a larger amount of data.

Future Work guideline

Extending the work to the Sentinel databases:
● Resort to multispectral and hyperspectral data, with complex challenges to address: the Sentinel use case.
● Face the challenge of large amounts of unlabelled data.

Conv layer hints: parameters vs I/O dimensions


mi<=n

fli<=f
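As a hedged illustration of the general point that a convolutional layer's parameter count depends only on its kernel size and channel counts, whereas a fully connected mapping between the same feature maps grows with the spatial dimensions:

```python
# Illustrative parameter counts (weights + biases); the sizes are arbitrary.
in_ch, out_ch, k = 8, 16, 3
conv_params = out_ch * (in_ch * k * k + 1)   # 16 * (8*9 + 1) = 1168, independent of image size

h = w = 64                                   # spatial size of the feature maps
fc_in, fc_out = in_ch * h * w, out_ch * h * w
fc_params = fc_in * fc_out + fc_out          # ~2.1 billion, grows with h and w
print(conv_params, fc_params)
```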