Discrete-Continuous Transformation Matching (DCTM)


Seungryong Kim¹, Dongbo Min², Stephen Lin³, Kwanghoon Sohn⁴
¹Korea University, ²Ewha Womans University, ³MSRA, ⁴Yonsei University

[ICCV'17 paper]
[TPAMI'20 paper]
[Code]



Visualization of our DCTM results: (a) source image, (b) target image, (c), (d) ground-truth correspondences, (e), (f), (g), (h) warped images and correspondences after discrete and continuous optimization, respectively. For semantically similar images undergoing non-rigid deformations, our DCTM estimates reliable correspondences by iteratively optimizing the discrete label space via continuous regularization.


Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations, owing to the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework in which dense affine transformation fields are inferred through discrete label optimization, with the labels iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Leveraging correspondence consistency and confidence-guided filtering in each iteration further facilitates convergence. Experimental results show that our method outperforms state-of-the-art techniques for dense semantic correspondence on various benchmarks and applications.
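To make the discrete-continuous alternation concrete, below is a minimal sketch in Python/NumPy of the kind of loop the abstract describes: each pixel holds a 6-parameter affine label, the discrete step picks the best label from a small candidate set propagated from neighboring pixels, and the continuous step regularizes the winning parameter maps by filtering. Everything here is an illustrative assumption, not the authors' implementation: the function names (`dctm_sketch`, `data_cost`) are hypothetical, the PatchMatch-style candidate set is a stand-in for the paper's label space, and a plain Gaussian filter stands in for the constant-time edge-aware filtering used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dctm_sketch(data_cost, field, num_iters=5, sigma=2.0):
    """Toy discrete-continuous alternation (hypothetical API, not the
    authors' code).

    data_cost(field) -> (H, W) array of matching costs for a per-pixel
                        affine field of shape (H, W, 6).
    field            -> initial (H, W, 6) affine transformation field.
    """
    for _ in range(num_iters):
        # Discrete step: each pixel selects the best label from a small
        # candidate set -- its current field plus fields propagated from
        # its 4-neighbours (a PatchMatch-style label space).
        candidates = np.stack([
            field,
            np.roll(field, 1, axis=0), np.roll(field, -1, axis=0),
            np.roll(field, 1, axis=1), np.roll(field, -1, axis=1),
        ])                                                    # (5, H, W, 6)
        costs = np.stack([data_cost(c) for c in candidates])  # (5, H, W)
        best = np.argmin(costs, axis=0)                       # (H, W)
        field = np.take_along_axis(
            candidates, best[None, :, :, None], axis=0)[0]    # (H, W, 6)

        # Continuous step: regularize the chosen labels by smoothing each
        # affine parameter map. The paper uses constant-time edge-aware
        # filtering guided by the image; a Gaussian filter stands in here.
        field = np.stack(
            [gaussian_filter(field[..., p], sigma)
             for p in range(field.shape[-1])],
            axis=-1)
    return field

# Toy usage with a synthetic cost that penalizes large translations
# (the last column of each 2x3 affine matrix, flattened as params 2, 5).
H, W = 64, 64
init = np.random.randn(H, W, 6).astype(np.float32)
cost = lambda f: np.abs(f[..., 2]) + np.abs(f[..., 5])
refined = dctm_sketch(cost, init)
```

In the actual method, the data cost comes from the affine-varying CNN-based descriptor and the candidate labels live in the continuous affine space, so the iteratively filtered fields draw new solutions beyond any fixed discrete label set; the sketch above only mirrors the overall alternation structure.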