
Learning Robust Global Representations by Penalizing Local Predictive Power Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton

School of Computer Science, Carnegie Mellon University

!  ImageNet-Sketch Dataset & Experiments

• First out-of-domain data set at the ImageNet validation set scale
• 1000 classes, with 50 testing images in each
• Used as a test data set to evaluate the model's generalization ability when trained on the standard ImageNet training set (a minimal evaluation sketch follows the tables below)
• Performance:

  Accuracy   AlexNet   DANN*     InfoDrop   HEX      PAR
  Top1       0.1204    0.1360*   0.1224     0.1292   0.1306
  Top5       0.2480    0.2712*   0.2560     0.2654   0.2627

• Analysis:

  AlexNet-PAR                     AlexNet
  Prediction      Confidence      Prediction     Confidence
  stethoscope     0.6608          hook           0.3903
  tricycle        0.9260          safety pin     0.5143
  Afghan hound    0.8945          swab (mop)     0.7379
  red wine        0.5999          goblet         0.7427
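Below is a minimal sketch, not the authors' released code, of how one might evaluate a model trained on the standard ImageNet training set against an ImageNet-Sketch-style test folder. The local path "imagenet-sketch", the torchvision weights tag, and the preprocessing pipeline are assumptions; the class folders must map to the same indices as the ImageNet training labels.

```python
# Hedged sketch: evaluating an ImageNet-trained model on an ImageNet-Sketch-style
# test folder (1000 class folders, 50 images each). Paths, preprocessing, and the
# weights tag (requires a recent torchvision) are placeholders/assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet evaluation preprocessing (assumed; the paper's may differ).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "imagenet-sketch" is a hypothetical local path; folder names must match the
# training-set class ordering so that labels line up with the model's outputs.
test_set = ImageFolder("imagenet-sketch", transform=preprocess)
loader = DataLoader(test_set, batch_size=64, shuffle=False, num_workers=4)

model = models.alexnet(weights="IMAGENET1K_V1").to(device).eval()

top1 = top5 = total = 0
with torch.no_grad():
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)
        _, pred5 = logits.topk(5, dim=1)          # top-5 predicted classes per image
        correct = pred5.eq(labels.unsqueeze(1))
        top1 += correct[:, 0].sum().item()
        top5 += correct.any(dim=1).sum().item()
        total += labels.size(0)

print(f"Top1 {top1 / total:.4f}  Top5 {top5 / total:.4f}")
```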

! Patch-wise Adversarial Regularization (PAR)

• Notations
  • top layers: f(·; θ)
  • patch classifier: h(·; ϕ)
  • bottom layers: g(·; δ)

• Patch-wise Adversarial Regularization
  • the patch classifier h is trained to predict the image label from each local patch of g's output
  • g and f are trained both to classify correctly and to defeat the patch classifier, penalizing reliance on local predictive signals (a code sketch follows this section)

• Training heuristics
  • first train the model conventionally until convergence
  • then train the model with the regularization

• Variants
  • PAR: 1-layer patch classifier, 1x1 local patch, applied at the first layer
  • PARB: 3x3 local patch
  • PARM: 3-layer patch classifier
  • PARH: applied at a higher layer

• Engineering-wise
  • one set of parameters
  • implemented efficiently through convolution
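A minimal PyTorch sketch of the PAR idea described above (a 1-layer patch classifier on 1x1 patches of the first-layer feature map, implemented as a 1x1 convolution, with alternating adversarial updates). This is an illustration, not the authors' implementation; the module sizes, the weight lam, and the update schedule are assumptions.

```python
# Hedged sketch of Patch-wise Adversarial Regularization (PAR), assuming a
# PyTorch CNN split into bottom layers g, top layers f, and a 1x1-conv patch
# classifier h. Layer sizes, lam, and the update schedule are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PARModel(nn.Module):
    def __init__(self, num_classes=10, feat_ch=64):
        super().__init__()
        # g: bottom layer(s) producing a spatial feature map
        self.g = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
        # f: top layers / main classifier
        self.f = nn.Sequential(
            nn.Conv2d(feat_ch, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes),
        )
        # h: patch classifier with one set of parameters shared over all positions,
        # implemented efficiently as a 1x1 convolution (the base PAR variant).
        self.h = nn.Conv2d(feat_ch, num_classes, kernel_size=1)

def patch_loss(patch_logits, labels):
    # Cross-entropy of every spatial position's prediction against the image label.
    b, c, hgt, wid = patch_logits.shape
    flat = patch_logits.permute(0, 2, 3, 1).reshape(-1, c)
    tiled = labels.repeat_interleave(hgt * wid)
    return F.cross_entropy(flat, tiled)

def par_step(model, opt_main, opt_patch, x, y, lam=0.1):
    """One alternating update, used after conventional pre-training."""
    feats = model.g(x)

    # (1) patch classifier h learns to predict the label from local patches
    loss_h = patch_loss(model.h(feats.detach()), y)
    opt_patch.zero_grad(); loss_h.backward(); opt_patch.step()

    # (2) g and f classify correctly while making local patches uninformative
    feats = model.g(x)
    loss_main = F.cross_entropy(model.f(feats), y) - lam * patch_loss(model.h(feats), y)
    opt_main.zero_grad(); loss_main.backward(); opt_main.step()
    return loss_main.item(), loss_h.item()
```

Following the training heuristic above, one would first optimize only the conventional classification loss F.cross_entropy(model.f(model.g(x)), y) until convergence, then switch to par_step, with opt_main covering g's and f's parameters and opt_patch covering h's.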

! Empirical Results

• Out-of-domain CIFAR10
  • tested with ResNet-50
  • 4 out-of-domain settings created: Greyscale, NegativeColor, RandomKernel, RadialKernel (a sketch of such perturbations follows this list)
  • best performance in comparison to standard methods
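A hedged sketch of how out-of-domain CIFAR10 test sets of this kind could be constructed. Greyscale and NegativeColor follow standard image operations; the RandomKernel and RadialKernel settings are represented only by a generic per-channel convolution with a placeholder kernel, since the exact kernel constructions are defined in the paper and not reproduced here.

```python
# Hedged sketch of out-of-domain CIFAR10 test perturbations. Greyscale and
# NegativeColor are standard operations; the kernel-based settings are shown
# only as a generic "convolve each channel with a fixed kernel" placeholder.
import torch
import torch.nn.functional as F

def to_greyscale(x):
    # x: (B, 3, H, W) in [0, 1]; standard luminance weights, replicated to 3 channels.
    grey = 0.299 * x[:, 0] + 0.587 * x[:, 1] + 0.114 * x[:, 2]
    return grey.unsqueeze(1).repeat(1, 3, 1, 1)

def to_negative(x):
    # Invert colors.
    return 1.0 - x

def convolve_with_kernel(x, kernel):
    # Apply the same 2D kernel to every channel (depthwise convolution).
    k = kernel.view(1, 1, *kernel.shape).repeat(3, 1, 1, 1)
    return F.conv2d(x, k, padding=kernel.shape[-1] // 2, groups=3).clamp(0, 1)

# Placeholder kernel (an assumption, not the paper's RandomKernel/RadialKernel):
kernel = torch.rand(3, 3)
kernel /= kernel.sum()
```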

• PACS experiment
  • tested with AlexNet (consistent with previous state-of-the-art)
  • best average performance in the domain-agnostic setting
  • best performance in the Sketch domain in comparison to any method

! Highlights

• Novel method for out-of-domain robustness
  • domain-agnostic setting (more industry-friendly)
  • simple and intuitive regularization, architecture-agnostic
• New vision data set for large-scale out-of-domain robustness testing
  • ImageNet validation set scale

! Motivation

• Neural networks are not robust enough!
• Models with high accuracy can easily fail when tested on out-of-domain data
• One reason is that the models exploit predictive local signals while ignoring the global picture
• PAR penalizes the model's tendency to predict through local signals

! Contact

• haohanw@cs.cmu.edu (@HaohanWang)
• songweig@cs.cmu.edu
• resource links