Recurrent Transformer Networks for Semantic Correspondence


Seungryong Kim, Stephen Lin, Sangryul Jeon, Dongbo Min, and Kwanghoon Sohn

Neural Information Processing Systems (NeurIPS) 2018

Semantic Correspondence
• Establishing dense correspondences between semantically similar images (different instances of the same object class)

Introduction

Background

Recurrent Transformer Networks

Experimental Results and Discussion

Challenges in Semantic Correspondence
• Photometric/geometric deformations, lack of supervision

Problem Formulation
• Given a pair of images $I^s$ and $I^t$, infer a field of affine transformations, one per pixel,
$$\mathbf{T}_i = [\mathbf{A}_i, \mathbf{f}_i]$$
that maps each pixel $i$ to $i' = i + \mathbf{f}_i$
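To make the mapping concrete, here is a minimal NumPy sketch (not from the paper) of how a per-pixel affine field $\mathbf{T}_i = [\mathbf{A}_i, \mathbf{f}_i]$ moves a local sampling grid around pixel $i$; all function and variable names are illustrative.

```python
import numpy as np

def warp_points(i_xy, A_i, f_i, local_offsets):
    """Map sampling points around pixel i into the target image.

    i_xy:          (2,)   coordinate of pixel i in the source image
    A_i:           (2, 2) affine part of T_i
    f_i:           (2,)   translation (flow) part of T_i
    local_offsets: (N, 2) sampling-grid offsets centered on i
    """
    center = i_xy + f_i                    # the flow moves i to i' = i + f_i
    return center + local_offsets @ A_i.T  # the affine part deforms the grid

# Example: identity affine, pure translation by (3, -1)
i_xy = np.array([10.0, 20.0])
grid = np.array([[dx, dy] for dy in (-1, 0, 1) for dx in (-1, 0, 1)], dtype=float)
print(warp_points(i_xy, np.eye(2), np.array([3.0, -1.0]), grid))
```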

Intuition of RTNs

Network Configuration

Feature Extraction Networks
• To extract features $D^s$ and $D^t$, the input images $I^s$ and $I^t$ are passed through convolutional networks with parameters $\mathbf{W}_F$ such that $D = F(I \mid \mathbf{W}_F)$, using CAT-FCSS, VGGNet (conv4-4), or ResNet (conv4-23)
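As a rough illustration of this step (not the authors' released code), the sketch below extracts dense features with an ImageNet-pretrained VGG-19 truncated at conv4-4 using torchvision's weights-enum API; the slice index and the frozen weights are assumptions of the sketch.

```python
import torch
import torchvision.models as models

# Truncate torchvision's VGG-19 after relu4-4 (index 26 in its `features`
# stack) to obtain a dense feature map D = F(I | W_F).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
feature_extractor = torch.nn.Sequential(*list(vgg.children())[:27])

def extract_features(image):
    """image: (B, 3, H, W) normalized tensor -> (B, 512, H/8, W/8) features."""
    with torch.no_grad():
        return feature_extractor(image)
```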

Recurrent Geometric Matching Networks
• Constrained correlation volume
$$C(D^s_i, D^t(\mathbf{T}_j)) = \left\langle D^s_i, D^t(\mathbf{T}_j) \right\rangle \Big/ \left\| \left\langle D^s_i, D^t(\mathbf{T}_j) \right\rangle \right\|_2$$
• Recurrent geometry estimation
$$\mathbf{T}^k_i - \mathbf{T}^{k-1}_i = F\!\left(C\!\left(D^s_i, D^t(\mathbf{T}^{k-1}_i)\right) \,\middle|\, \mathbf{W}_G\right)$$

Weakly-supervised Learning
• Intuition: the matching score between the source feature $D^s$ at each pixel $i$ and the target feature $D^t(\mathbf{T}_i)$ should be maximized, while keeping the scores of other transformation candidates low:
$$L(D^s_i, D^t(\mathbf{T})) = -\sum_{j \in M_i} p^*_j \log p\!\left(D^s_i, D^t(\mathbf{T}_j)\right)$$
where $p(D^s_i, D^t(\mathbf{T}_j))$ is a softmax probability
$$p(D^s_i, D^t(\mathbf{T}_j)) = \frac{\exp\!\left(C(D^s_i, D^t(\mathbf{T}_j))\right)}{\sum_{l \in M_i} \exp\!\left(C(D^s_i, D^t(\mathbf{T}_l))\right)}$$
and $p^*_j$ denotes a class label defined as 1 if $j = i$, and 0 otherwise

Ablation Study
• RTNs converge within 3-5 iterations
• Accuracy improves up to a 9 × 9 window, but larger window sizes reduce accuracy

Results on TSS Benchmark

Results on PF-WILLOW/PF-PASCAL Benchmarks

Methods for geometric invariance in the regularization steps
• Geometric matching methods [Rocco'17, '18]
• Inference using both source and target images
• $\mathbf{T}_i$ is learned with $\mathbf{T}^*_i$ using self- or meta-supervision

Methods for geometric invariance in the feature extraction steps
• STN-based methods [Choy'16, Kim'18]
• $\mathbf{A}_i$ is learned without $\mathbf{A}^*_i$; $\mathbf{f}_i$ is learned with $\mathbf{f}^*_i$
• Inference based on only the source or target image

Recurrent Transformer Networks (RTNs)
• Weave the advantages of both existing STN-based methods and geometric matching methods!

[Qualitative comparison — columns: Source, Target, DCTM, SCNet, Gmat. w/Inl., RTNs]

[Qualitative comparison — columns: Source, Target, CAT-FCSS, SCNet, Gmat. w/Inl., RTNs]

• ResNet features exhibit the best performance!

• Fine-tuned features show improved accuracy!

• Learning the feature extraction networks and geometric matching networks jointly boosts accuracy!

• RTNs achieve state-of-the-art performance!

Project webpage: http://diml.yonsei.ac.kr/~srkim/RTNs