Presentation
Hierarchical Neural Semantic Representation for 3D Semantic Correspondence
Session
Generative 3D Modeling
Description
This paper presents a new approach to estimate accurate and robust 3D semantic correspondence with a hierarchical neural semantic representation. Our work has three key contributions. First, we design the hierarchical neural semantic representation (HNSR), which consists of a global semantic feature capturing high-level structure and multi-resolution local geometric features preserving fine details, by carefully harnessing the 3D priors from pre-trained 3D generative models. Second, we design a progressive global-to-local matching strategy, which establishes coarse semantic correspondence using the global semantic feature and then iteratively refines it with local geometric features, yielding accurate and semantically consistent mappings. Third, our framework is training-free and broadly compatible with various pre-trained 3D generative backbones, demonstrating strong generalization across diverse shape categories. Our method also supports various applications, such as shape co-segmentation, keypoint matching, and texture transfer, and generalizes well to structurally diverse shapes, with promising results even in cross-category scenarios. Both qualitative and quantitative evaluations show that our method outperforms previous state-of-the-art techniques.
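The global-to-local matching idea can be sketched as a coarse-to-fine nearest-neighbor search: global features shortlist candidate correspondences, and local features pick the best candidate. This is a minimal illustrative sketch only, not the paper's implementation; the feature arrays and the candidate count `k` are hypothetical stand-ins for the HNSR features extracted from pre-trained 3D generative models.

```python
import numpy as np

def coarse_to_fine_match(global_src, global_tgt, local_src, local_tgt, k=8):
    """Toy coarse-to-fine matching (illustrative, not the paper's method).

    global_*: (N, Dg) global semantic features per point.
    local_*:  (N, Dl) local geometric features per point.
    Returns, for each source point, the index of its matched target point.
    """
    # Coarse stage: shortlist the k nearest target points in global-feature space.
    d_global = np.linalg.norm(global_src[:, None] - global_tgt[None], axis=-1)
    coarse_idx = np.argsort(d_global, axis=1)[:, :k]

    # Fine stage: among the shortlisted candidates, pick the point whose
    # local geometric feature is closest to the source point's.
    refined = np.empty(len(local_src), dtype=int)
    for i, cands in enumerate(coarse_idx):
        d_local = np.linalg.norm(local_src[i] - local_tgt[cands], axis=-1)
        refined[i] = cands[np.argmin(d_local)]
    return refined
```

In the paper's progressive strategy this refinement is iterated across multiple feature resolutions; the sketch shows a single global-then-local pass.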

Event Type
Technical Papers
Time
Wednesday, 17 December 2025, 4:51pm - 5:02pm HKT
Location
Meeting Room S421, Level 4
