Presentation
Self-supervised Texture Filtering
Description
Decomposing an image I into the combination of structure S and texture T components is an important problem in computational photography and image analysis. Traditional solutions are mostly non-learning based, because it is difficult to construct datasets containing ground-truth decompositions or to find effective structure/texture supervision. In this article, we present a self-supervised framework for smoothing out textures while maintaining the image structures. At the core of our method is a texture-inversion observation: if structure S and texture T are well disentangled, then S - T will produce a texture-inverted image that is symmetric to the input image I = S + T, and the two will be visually highly similar; when structure and texture are not effectively separated, the generated texture-inverted image will be less similar to the input. Based on this observation, we propose to learn texture filtering from unlabeled data by encouraging the texture-inverted image generated from the filtering output to be visually more similar to the input via contrastive learning. Experiments show that our method robustly produces high-quality texture smoothing results and also enables various applications.
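The texture-inversion observation from the abstract can be sketched in a few lines: given an input I and a predicted structure S, the texture is T = I - S, so the texture-inverted image is S - T = 2S - I. A minimal illustration, assuming NumPy arrays for images; the `similarity_loss` below is a hypothetical stand-in (plain L2 distance) for the paper's actual contrastive objective, which compares learned features rather than pixels:

```python
import numpy as np

def texture_inverted(image, structure):
    """Texture T = image - structure, so the
    texture-inverted image is structure - T = 2*structure - image."""
    return 2.0 * structure - image

def similarity_loss(inverted, image):
    # Hypothetical stand-in for the paper's contrastive loss:
    # a plain pixel-wise L2 distance. The method itself trains
    # the filter so this similarity is high (distance is low).
    return float(np.mean((inverted - image) ** 2))

# When the decomposition is perfect on a texture-free image
# (S = I, so T = 0), the inverted image equals the input and
# the distance is zero; a poor structure estimate leaves
# residual texture and a larger distance.
I = np.random.rand(8, 8)
S_good = I.copy()          # pretend perfect smoothing: T = 0
S_bad = I + 0.1            # biased structure estimate
loss_good = similarity_loss(texture_inverted(I, S_good), I)
loss_bad = similarity_loss(texture_inverted(I, S_bad), I)
```

Under this sketch, `loss_good` is exactly zero while `loss_bad` is strictly positive, which is the signal the self-supervised training exploits.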
Technical Papers Fast Forward Presenter

Event Type
Technical Papers
Time
Tuesday, 16 December 2025, 2:04pm - 2:15pm HKT
Location
Meeting Room S423+S424, Level 4
