Ruqi Huang

Contact:

ruqihuang AT sz.tsinghua.edu.cn

Introduction

I obtained my PhD degree from the University of Paris-Saclay in 2016, under the supervision of Frederic Chazal. Prior to that, I obtained my bachelor's and master's degrees from Tsinghua University in 2011 and 2013, respectively. My research interests lie in 3D Computer Vision and Geometry Processing, with a strong focus on developing 3D reconstruction techniques for both static and dynamic scenes. In particular, I am interested in developing learning approaches to 3D computer vision tasks that do not depend heavily on supervision, by incorporating structural priors (especially geometric ones) into neural networks. Beyond that, I am also interested in applying geometric/topological analysis to interdisciplinary data, e.g., biological, medical and high-dimensional imaging data. My full CV can be found here.

News

  • 2024-03: Two papers were accepted by CVPR 2024.
  • 2023-10: One paper was accepted by TPAMI 2023.
  • 2023-09: One paper was accepted by NeurIPS 2023.
  • 2023-07: Two papers were accepted by ICCV 2023.

Publications

XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold

Guangyu Wang, Jinzhi Zhang, Fan Wang, Ruqi Huang, Lu Fang

Abstract: We propose XScale-NVS for high-fidelity cross-scale novel view synthesis of real-world large-scale scenes. Existing representations based on explicit surfaces suffer from limited discretization resolution or UV distortion, while implicit volumetric representations lack scalability for large scenes due to the dispersed weight distribution and surface ambiguity. In light of the above challenges, we introduce the hash featurized manifold, a novel hash-based featurization coupled with a deferred neural rendering framework. This approach fully unlocks the expressivity of the representation by explicitly concentrating the hash entries on the 2D manifold, thus effectively representing highly detailed contents independent of the discretization resolution. We also introduce a novel dataset, namely GigaNVS, to benchmark cross-scale, high-resolution novel view synthesis of real-world large-scale scenes. Our method significantly outperforms competing baselines on various real-world scenes, yielding an average LPIPS that is ∼ 40% lower than the prior state-of-the-art on the challenging GigaNVS benchmark.

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024
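To give a rough, self-contained flavor of hash-based featurization as used above (an illustrative sketch, not the paper's implementation; the table size, hashing primes and resolution schedule are placeholder assumptions), the snippet below hashes points into per-level feature tables and concatenates the looked-up features:

```python
import numpy as np

# Illustrative spatial-hash featurization in the spirit of multi-resolution
# hash encodings; all constants below are placeholder assumptions.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Hash integer grid coordinates of shape (N, 3) into [0, table_size)."""
    coords = coords.astype(np.uint64)
    h = np.zeros(coords.shape[0], dtype=np.uint64)
    for d in range(coords.shape[1]):
        h ^= coords[:, d] * PRIMES[d]
    return (h % np.uint64(table_size)).astype(np.int64)

def featurize(points, tables, resolutions):
    """Look up and concatenate per-level features for points in [0, 1]^3."""
    feats = []
    for table, res in zip(tables, resolutions):
        grid = np.floor(points * res).astype(np.int64)  # nearest-cell lookup
        idx = hash_coords(grid, table.shape[0])
        feats.append(table[idx])                        # (N, feat_dim)
    return np.concatenate(feats, axis=1)

# Usage: two levels, each with a 2^14-entry table of 2-dim (learnable) features.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)).astype(np.float32) for _ in range(2)]
points = rng.random((5, 3)).astype(np.float32)
print(featurize(points, tables, resolutions=[16, 64]).shape)  # (5, 4)
```

In the paper the hash entries are concentrated on the reconstructed 2D surface manifold and decoded by a deferred neural renderer; this sketch only shows the lookup mechanics.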

OmniSeg3D: Omniversal 3D Segmentation via Hierarchical Contrastive Learning

Haiyang Ying, Yixuan Yin, Jinzhi Zhang, Fan Wang, Tao Yu, Ruqi Huang, Lu Fang

Abstract: Towards holistic understanding of 3D scenes, a general 3D segmentation method is needed that can segment diverse objects without restrictions on object quantity or categories, while also reflecting the inherent hierarchical structure. To achieve this, we propose OmniSeg3D, an omniversal segmentation method that aims to segment anything in 3D all at once. The key insight is to lift multi-view inconsistent 2D segmentations into a consistent 3D feature field through a hierarchical contrastive learning framework, which is accomplished in two steps. Firstly, we design a novel hierarchical representation based on category-agnostic 2D segmentations to model the multi-level relationship among pixels. Secondly, image features rendered from the 3D feature field are clustered at different levels, which can be further drawn closer or pushed apart according to the hierarchical relationship between different levels. In tackling the challenges posed by inconsistent 2D segmentations, this framework yields a globally consistent 3D feature field, which further enables hierarchical segmentation, multi-object selection, and global discretization. Extensive experiments demonstrate the effectiveness of our method on high-quality 3D segmentation and accurate hierarchical structure understanding. A graphical user interface further facilitates flexible interaction for omniversal 3D segmentation.

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024
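For readers unfamiliar with the contrastive ingredient above, here is a hedged PyTorch sketch of a plain (non-hierarchical) segment-level contrastive loss over rendered pixel features; the feature dimension, temperature and sampling are placeholders, and the paper's hierarchical, multi-level loss is more involved:

```python
import torch
import torch.nn.functional as F

def segment_contrastive_loss(feats, seg_ids, temperature=0.1):
    """Pull features of pixels sharing a 2D segment id together and push
    features of different segments apart (plain supervised-contrastive style,
    without the hierarchical levels used in the paper)."""
    feats = F.normalize(feats, dim=1)                      # (N, D)
    sim = feats @ feats.t() / temperature                  # (N, N)
    n = feats.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (seg_ids[:, None] == seg_ids[None, :]) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))        # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return loss[pos_mask.any(dim=1)].mean()

# Usage with random stand-ins for rendered features and 2D segment labels:
feats = torch.randn(128, 16)
seg_ids = torch.randint(0, 8, (128,))
print(segment_contrastive_loss(feats, seg_ids).item())
```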

Unsupervised learning of pixel clustering in Mueller matrix images for mapping microstructural features in pathological tissues

Jiachen Wan, Yang Dong, Yue Yao, Weijin Xiao, Ruqi Huang, Jing-Hao Xue, Ran Peng, Haojie Pei, Xuewu Tian, Ran Liao, Honghui He, Nan Zeng, Chao Li, Hui Ma

Abstract: In histopathology, doctors identify diseases by characterizing abnormal cells and their spatial organization within tissues. Polarization microscopy and supervised learning have proven to be effective tools for extracting polarization parameters to highlight pathological features. Here, we present an alternative approach based on unsupervised learning to group polarization pixels into clusters, which correspond to distinct pathological structures. For pathological samples from different patients, it is confirmed that such an unsupervised learning technique can decompose the histological structures into a stable basis of characteristic microstructural clusters, some of which correspond to distinctive pathological features for clinical diagnosis. Using hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) samples, we demonstrate how the proposed framework can be utilized for segmentation of histological images, visualization of the microstructure composition associated with lesions, and identification of polarization-based microstructure markers that correlate with specific pathological variations. This technique is capable of unraveling microstructures that are invisible in non-polarization images, turning them into polarization features visible to pathologists and researchers.

Proc. Communications Engineering, 2023
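The pixel-clustering recipe above can be illustrated with a minimal sketch: stack per-pixel polarization parameters into feature vectors and cluster them without labels. The k-means choice, cluster count and input layout below are placeholder assumptions rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_polarization_pixels(param_maps, n_clusters=8, seed=0):
    """Group the pixels of a polarization-parameter image into clusters
    without supervision.  `param_maps` is an (H, W, P) array holding P
    per-pixel polarization parameters (assumed pre-computed)."""
    h, w, p = param_maps.shape
    X = param_maps.reshape(-1, p)
    # Standardize each parameter so that no single one dominates the distance.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    return labels.reshape(h, w)      # per-pixel cluster map for visualization

# Usage on random data standing in for Mueller-matrix-derived parameters:
print(cluster_polarization_pixels(np.random.rand(64, 64, 6)).shape)  # (64, 64)
```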

Non-Rigid Shape Registration via Deep Functional Maps Prior

Puhua Jiang, Mingze Sun, Ruqi Huang

Abstract: In this paper, we propose a learning-based framework for non-rigid shape registration without correspondence supervision. Traditional shape registration techniques typically rely on correspondences induced by extrinsic proximity, and therefore can fail in the presence of large intrinsic deformations. Spectral mapping methods overcome this challenge by embedding shapes into geometric or learned high-dimensional spaces, where shapes are easier to align. However, due to the dependency on abstract, non-linear embedding schemes, the latter can be vulnerable to perturbed or alien input. In light of this, our framework takes the best of both worlds. Namely, we deform the source mesh towards the target point cloud, guided by correspondences induced by high-dimensional embeddings learned from deep functional maps (DFM). In particular, the correspondences are dynamically updated according to the intermediate registrations and filtered by a consistency prior, which prominently robustifies the overall pipeline. Moreover, in order to alleviate the requirement of extrinsically aligned input, we train an orientation regressor on a set of aligned synthetic shapes independent of the training shapes for DFM. Empirical results show that, with as few as dozens of training shapes of limited variability, our pipeline not only achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching, but also delivers high-quality correspondences between unseen challenging shape pairs that undergo both significant extrinsic and intrinsic deformations, in which case neither traditional registration methods nor intrinsic methods work.

Proc. 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
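To make the "dynamically updated, consistency-filtered correspondences" ingredient concrete, here is a hedged NumPy sketch: matches are nearest neighbors in an embedding space, and only mutually consistent matches are kept to guide the next registration step. Mutual nearest neighbors are used here as a simple stand-in for the consistency prior described in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def filtered_correspondences(src_emb, tgt_emb):
    """Nearest-neighbor matches in embedding space, kept only when they are
    mutually consistent.  src_emb: (N, D) embeddings of source vertices,
    tgt_emb: (M, D) embeddings of target points.  Returns (K, 2) index pairs."""
    fwd = cKDTree(tgt_emb).query(src_emb)[1]    # source -> target
    bwd = cKDTree(src_emb).query(tgt_emb)[1]    # target -> source
    src_idx = np.arange(src_emb.shape[0])
    keep = bwd[fwd] == src_idx                  # keep mutual nearest neighbors
    return np.stack([src_idx[keep], fwd[keep]], axis=1)

# Usage with random embeddings standing in for DFM-learned features:
rng = np.random.default_rng(0)
pairs = filtered_correspondences(rng.normal(size=(100, 32)),
                                 rng.normal(size=(120, 32)))
print(pairs.shape)   # (K, 2) surviving correspondences
```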

GiganticNVS: Gigapixel Large-scale Neural Rendering with Implicit Meta-deformed Manifold

Guangyu Wang, Jinzhi Zhang, Kai Zhang, Ruqi Huang, Lu Fang

Abstract: Rapid advances in high-performance sensing have empowered gigapixel-level imaging/videography of large-scale scenes, yet the abundant details in gigapixel images are rarely exploited in 3D reconstruction solutions. Bridging the gap between the sensing capacity and that of reconstruction requires attacking the large-baseline challenge imposed by the large-scale scenes, while utilizing the high-resolution details provided by the gigapixel images. This paper introduces GiganticNVS for gigapixel large-scale novel view synthesis (NVS). Existing NVS methods suffer from excessively blurred artifacts and fail to fully exploit the image resolution, due to their inefficacy at recovering a faithful underlying geometry and their dependence on dense observations to accurately interpolate radiance. Our key insight is that a highly expressive implicit field with view-consistency is critical for synthesizing high-fidelity details from large-baseline observations. In light of this, we propose the meta-deformed manifold, where meta refers to the locally defined surface manifold whose geometry and appearance are embedded into a high-dimensional latent space. Technically, meta can be decoded as neural fields using an MLP (i.e., an implicit representation). Upon this novel representation, multi-view geometric correspondence can be effectively enforced with feature-metric deformation and the reflectance field can be learned purely on the surface. Experimental results verify that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively, not only on the standard datasets containing complex real-world scenes with large baseline angles, but also on the challenging gigapixel-level ultra-large-scale benchmarks.

Proc. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023

Spatially and Spectrally Consistent Deep Functional Maps

Mingze Sun, Shiwei Mao, Puhua Jiang, Maks Ovsjanikov, Ruqi Huang

Abstract: Cycle consistency has long been exploited as a powerful prior for jointly optimizing maps within a collection of shapes. In this paper, we investigate its utility in the approaches of Deep Functional Maps, which are considered state-of-the-art in non-rigid shape matching. We first justify that under certain conditions, the learned maps, when represented in the spectral domain, are already cycle consistent. Furthermore, we identify the discrepancy that spectrally consistent maps are not necessarily spatially, or point-wise, consistent. In light of this, we present a novel design of unsupervised Deep Functional Maps, which effectively enforces the harmony of learned maps under the spectral and the point-wise representation. By taking advantage of cycle consistency, our framework produces state-of-the-art results in mapping shapes even under significant distortions. Beyond that, by independently estimating maps in both spectral and spatial domains, our method naturally alleviates over-fitting in network training, yielding superior generalization performance and accuracy within an array of challenging tests for both near-isometric and non-isometric datasets.

Proc. International Conference on Computer Vision (ICCV), 2023
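The spectral-versus-spatial discrepancy discussed above can be checked numerically: functional maps composed around a cycle may be close to the identity even when the composed point-wise maps are not. A hedged NumPy sketch, assuming C_XY maps spectral coefficients from shape X to shape Y and T_XY stores, for each vertex of X, its matched vertex on Y:

```python
import numpy as np

def spectral_cycle_error(C_ab, C_bc, C_ca):
    """Frobenius deviation of the composed functional maps from the identity."""
    k = C_ab.shape[0]
    return np.linalg.norm(C_ca @ C_bc @ C_ab - np.eye(k))

def pointwise_cycle_error(T_ab, T_bc, T_ca):
    """Fraction of vertices of shape A that do not return to themselves
    after following the point maps A -> B -> C -> A."""
    n = T_ab.shape[0]
    round_trip = T_ca[T_bc[T_ab]]
    return np.mean(round_trip != np.arange(n))
```

A small spectral error does not by itself guarantee a small point-wise error, which is exactly the gap the paper's design addresses.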

RealGraph: A Multiview Dataset for 4D Real-world Context Graph Generation

Haozhe Lin, Zequn Chen, Jinzhi Zhang, Bing Bai, Yu Wang, Ruqi Huang, Lu Fang

Abstract: Understanding 4D scene context in the real world has become urgently critical for deploying sophisticated AI systems. In this paper, we propose a brand new scene understanding paradigm called “Context Graph Generation (CGG)”, aiming at abstracting holistic semantic information in the complicated 4D world. The CGG task capitalizes on calibrated multiview videos of a dynamic scene, and targets recovering the semantic information (coordinates, trajectories and relationships) of the presented objects in the form of a spatio-temporal context graph in 4D space. We also present a benchmark 4D video dataset, “RealGraph”, the first dataset tailored for the proposed CGG task. The raw data of RealGraph is composed of calibrated and synchronized multiview videos. We exclusively provide manual annotations including object 2D & 3D bounding boxes, category labels and semantic relationships. We also make sure the annotated ID for every single object is temporally and spatially consistent. We propose the first CGG baseline algorithm, the Multiview-based Context Graph Generation Network (MCGNet), to empirically investigate the legitimacy of the CGG task on the RealGraph dataset. We further reveal the great challenges behind this task and encourage the community to explore beyond our solution.

Proc. International Conference on Computer Vision (ICCV), 2023

The Group Interaction Field for Learning and Explaining Pedestrian Anticipation

Xueyang Wang, Xuecheng Chen, Puhua Jiang, Haozhe Lin, Xiaoyun Yuan, Mengqi Ji, Yuchen Guo, Ruqi Huang, Lu Fang

Abstract: Anticipating others’ actions is innate and essential in order for humans to navigate and interact well with others in dense crowds. This ability is urgently required for unmanned systems such as service robots and self-driving cars. However, existing solutions struggle to predict pedestrian anticipation accurately, because the influence of group-related social behaviors has not been well considered. While group relationships and group interactions are ubiquitous and significantly influence pedestrian anticipation, their influence is diverse and subtle, making it difficult to explicitly quantify. Here, we propose the group interaction field (GIF), a novel group-aware representation that quantifies pedestrian anticipation into a probability field of pedestrians’ future locations and attention orientations. An end-to-end neural network, GIFNet, is tailored to estimate the GIF from explicit multidimensional observations. GIFNet quantifies the influence of group behaviors by formulating a group interaction graph with propagation and graph attention that is adaptive to the group size and dynamic interaction states. The experimental results show that the GIF effectively represents the change in pedestrians’ anticipation under the prominent impact of group behaviors and accurately predicts pedestrians’ future states. Moreover, the GIF contributes to explaining various predictions of pedestrians’ behavior in different social states. The proposed GIF will eventually be able to allow unmanned systems to work in a human-like manner and comply with social norms, thereby promoting harmonious human–machine relationships.

Proc. Engineering, 2023

Fast Point Cloud Registration for Urban Scenes via Pillar-Point Representation

Siyuan Gu, Ruqi Huang

Abstract: Efficient and robust point cloud registration is an essential task for real-time applications in urban scenes. Most methods introduce keypoint sampling or detection to achieve real-time registration of large-scale point clouds. Recent advances in keypoint-free methods have succeeded in alleviating the bias and error introduced by keypoint detection via coarse-to-fine dense matching strategies. Nevertheless, the running-time performance of such a strategy turns out to be far inferior to that of keypoint-based methods. This paper proposes a novel framework that adopts a pillar-point-representation-based feature extraction pipeline and a three-stage semi-dense keypoint matching scheme. The scheme includes global coarse matching, anchor generation and local dense matching for efficient correspondence matching. Experiments on large-scale outdoor datasets, including KITTI and NuScenes, demonstrate that the proposed feature representation and matching framework achieve real-time inference and high registration recall.

Proc. CAAI International Conference on Artificial Intelligence (CICAI), 2023

Neural Intrinsic Embedding for Non-rigid Point Cloud Matching

Puhua Jiang, Mingze Sun, Ruqi Huang

Abstract: As a primitive 3D data representation, point clouds are prevailing in 3D sensing, yet short of intrinsic structural information about the underlying objects. Such a discrepancy poses great challenges to directly establishing correspondences between point clouds sampled from deformable shapes. In light of this, we propose Neural Intrinsic Embedding (NIE) to embed each vertex into a high-dimensional space in a way that respects the intrinsic structure. Based upon NIE, we further present a weakly-supervised learning framework for non-rigid point cloud registration. Unlike prior works, we do not require expensive and sensitive off-line basis construction (e.g., eigen-decomposition of Laplacians), nor do we require ground-truth correspondence labels for supervision. We empirically show that our framework performs on par with or even better than the state-of-the-art baselines, which generally require more supervision and/or more structural geometric input.

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023
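One way to read "embed each vertex in a way that respects the intrinsic structure" is as a metric-preservation objective: Euclidean distances between embedded vertices should follow geodesic distances on the shape. A hedged PyTorch sketch of such a loss follows; the actual NIE training objective, sampling and supervision scheme differ:

```python
import torch

def intrinsic_embedding_loss(emb, geo_dist):
    """Penalize the discrepancy between pairwise Euclidean distances in the
    learned embedding and pre-computed geodesic distances on the shape.

    emb:      (N, D) per-vertex embeddings produced by a network.
    geo_dist: (N, N) geodesic distance matrix (assumed pre-computed)."""
    emb_dist = torch.cdist(emb, emb)          # (N, N) pairwise distances
    return ((emb_dist - geo_dist) ** 2).mean()

# Usage with random placeholders for embeddings and geodesic distances:
emb = torch.randn(200, 16, requires_grad=True)
geo = torch.rand(200, 200)
geo = (geo + geo.t()) / 2                     # symmetrize the toy distances
intrinsic_embedding_loss(emb, geo).backward()
```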

Optical Neural Ordinary Differential Equations

Yun Zhao, Hang Chen, Min Lin, Haiou Zhang, Tao Yan, Xing Lin, Ruqi Huang, Qionghai Dai

Abstract: Increasing the number of layers in on-chip photonic neural networks (PNNs) is essential to improving model performance. However, successively cascading network hidden layers results in larger integrated photonic chip areas. To address this issue, we propose the optical neural ordinary differential equations (ON-ODE) architecture, which parameterizes the continuous dynamics of hidden layers with optical ODE solvers. The ON-ODE comprises the PNNs followed by a photonic integrator and optical feedback loop, which can be configured to represent residual neural networks (ResNet) and recurrent neural networks with effectively reduced chip area occupancy. For the interference-based optoelectronic nonlinear hidden layer, the numerical experiments demonstrate that the single-hidden-layer ON-ODE can achieve approximately the same accuracy as the two-layer optical ResNet in image classification tasks. Besides, the ON-ODE improves the model classification accuracy for the diffraction-based all-optical linear hidden layer. The time-dependent dynamics property of the ON-ODE is further applied to trajectory prediction with high accuracy.

Proc. Optics Letters, 2023
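Optics aside, the neural-ODE idea is to parameterize the hidden state as the solution of dx/dt = f(x; θ) and trade layer depth for integration steps. A minimal fixed-step Euler sketch in PyTorch (generic, not the optical implementation; the layer sizes and step count are placeholders):

```python
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Integrate dx/dt = f(x) with fixed-step Euler; a single learned f plays
    the role of an arbitrarily deep stack of residual layers."""
    def __init__(self, dim, steps=8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                               nn.Linear(dim, dim))
        self.steps = steps

    def forward(self, x, t0=0.0, t1=1.0):
        dt = (t1 - t0) / self.steps
        for _ in range(self.steps):
            x = x + dt * self.f(x)   # Euler step: x <- x + dt * f(x)
        return x

# Usage: one ODE block in place of a stack of residual layers.
block = ODEBlock(dim=32)
print(block(torch.randn(4, 32)).shape)   # torch.Size([4, 32])
```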

ElasticMVS: Learning elastic part representation for self-supervised multi-view stereopsis

Jinzhi Zhang, Ruofan Tang, Zheng Cao, Jing Xiao, Ruqi Huang, Lu Fang

Abstract: Self-supervised multi-view stereopsis (MVS) aims at learning dense surface predictions from only a set of images, without onerous ground-truth 3D training data for supervision. However, existing methods highly rely on local photometric consistency, which fails to accurately identify dense correspondences in broad textureless or reflective areas. In this paper, we show that geometric proximity such as surface connectedness and occlusion boundaries implicitly inferred from images could serve as reliable guidance for pixel-wise multi-view correspondences. With this insight, we present a novel elastic part representation, which encodes physically-connected part segmentations with elastically-varying scales, shapes and boundaries. Meanwhile, a self-supervised MVS framework, namely ElasticMVS, is proposed to learn the representation and estimate per-view depth following a part-aware propagation and evaluation scheme. Specifically, the pixel-wise part representation is trained by a contrastive learning-based strategy, which increases the representation compactness in geometrically concentrated areas and contrasts otherwise. ElasticMVS iteratively optimizes a part-level consistency loss and a surface smoothness loss, based on a set of depth hypotheses propagated from the geometrically concentrated parts. Extensive evaluations convey the superiority of ElasticMVS in reconstruction completeness and accuracy, as well as efficiency and scalability. Particularly, on the challenging large-scale reconstruction benchmark, ElasticMVS demonstrates significant performance gains over both the supervised and self-supervised approaches.

Proc. Conference on Neural Information Processing Systems (NeurIPS) (Spotlight), 2022

ParseMVS: Learning Primitive-aware Surface Representations for Sparse Multi-view Stereopsis

Haiyang Ying, Jinzhi Zhang, Yuzhe Chen, Zheng Cao, Jing Xiao, Ruqi Huang, Lu Fang

Abstract: Multi-view stereopsis (MVS) recovers 3D surfaces by finding dense photo-consistent correspondences from densely sampled images. In this paper, we tackle the challenging MVS task from sparsely sampled views (up to an order of magnitude fewer images), which is more practical and cost-efficient in applications. The major challenge comes from the significant correspondence ambiguity introduced by the severe occlusions and the highly skewed patches. On the other hand, such ambiguity can be resolved by incorporating geometric cues from the global structure. In light of this, we propose ParseMVS, boosting sparse MVS by learning the Primitive-AwaRe Surface rEpresentation. In particular, on top of being aware of the global structure, our novel representation further allows for the preservation of fine details including geometry, texture, and visibility. More specifically, the whole scene is parsed into multiple geometric primitives. On each of them, the geometry is defined as the displacement along the primitive's normal direction, together with the texture and visibility along each view direction. An unsupervised neural network is trained to learn these factors by progressively increasing the photo-consistency and render-consistency among all input images. Since the surface properties are changed locally in the 2D space of each primitive, ParseMVS can preserve global primitive structures while optimizing local details, handling the ‘incompleteness’ and the ‘inaccuracy’ problems. We experimentally demonstrate that ParseMVS consistently outperforms the state-of-the-art surface reconstruction method in both completeness and the overall score under varying sampling sparsity, especially under the extreme sparse-MVS settings. Beyond that, ParseMVS also shows great potential in compression, robustness, and efficiency.

Proc. ACM International Conference on Multimedia, 2022

Cross-Camera Deep Colorization

Yaping Zhao, Haitian Zheng, Mengqi Ji, Ruqi Huang

Abstract: In this paper, we consider the color-plus-mono dual-camera system and propose an end-to-end convolutional neural network to align and fuse images from it in an efficient and cost-effective way. Our method takes cross-domain and cross-scale images as input, and consequently synthesizes HR colorization results to facilitate the trade-off between spatial-temporal resolution and color depth in the single-camera imaging system. In contrast to the previous colorization methods, ours can adapt to color and monochrome cameras with distinctive spatial-temporal resolutions, rendering the flexibility and robustness in practical applications. The key ingredient of our method is a cross-camera alignment module that generates multi-scale correspondences for cross-domain image alignment. Through extensive experiments on various datasets and multiple settings, we validate the flexibility and effectiveness of our approach. Remarkably, our method consistently achieves substantial improvements, i.e., around 10dB PSNR gain, upon the state-of-the-art methods.

Proc. CAAI International Conference on Artificial Intelligence (Oral presentation), 2022

EFENet: Reference-based Video Super-Resolution with Enhanced Flow Estimation

Yaping Zhao, Mengqi Ji, Ruqi Huang, Bin Wang, Shengjin Wang

Abstract: In this paper, we consider the problem of reference-based video super-resolution (RefVSR), i.e., how to utilize a high-resolution (HR) reference frame to super-resolve a low-resolution (LR) video sequence. The existing approaches to RefVSR essentially attempt to align the reference and the input sequence, in the presence of a resolution gap and a long temporal range. However, they either ignore the temporal structure within the input sequence, or suffer from accumulative alignment errors. To address these issues, we propose EFENet to exploit simultaneously the visual cues contained in the HR reference and the temporal information contained in the LR sequence. EFENet first globally estimates the cross-scale flow between the reference and each LR frame. Then our novel flow refinement module of EFENet refines the flow regarding the furthest frame using all the estimated flows, which leverages the global temporal information within the sequence and therefore effectively reduces the alignment errors. We provide comprehensive evaluations to validate the strengths of our approach, and to demonstrate that the proposed framework outperforms the state-of-the-art methods.

Proc. CAAI International Conference on Artificial Intelligence, 2021

Consistent ZoomOut: Efficient Spectral Map Synchronization

Ruqi Huang, Jing Ren, Peter Wonka, Maks Ovsjanikov

Abstract: In this paper, we propose a novel method, which we call CONSISTENT ZOOMOUT, for efficiently refining correspondences among deformable 3D shape collections, while promoting the resulting map consistency. Our formulation is closely related to a recent unidirectional spectral refinement framework, but naturally integrates map consistency constraints into the refinement. Beyond that, we further show that our formulation can be adapted to recover the underlying isometry among near-isometric shape collections with a theoretical guarantee, which is absent in the other spectral map synchronization frameworks. We demonstrate that our method not only improves accuracy compared to the competing methods when synchronizing correspondences in both near-isometric and heterogeneous shape collections, but also significantly outperforms the baselines in terms of map consistency.

Proc. Symposium on Geometry Processing, 2020
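For context, the unidirectional spectral refinement that CONSISTENT ZOOMOUT builds upon alternates between converting a small functional map to a point-wise map and re-estimating the functional map in a progressively larger spectral basis. Below is a hedged NumPy sketch of that refinement for a single pair, without the map-consistency constraints that are the paper's contribution; the convention assumed here is that C maps spectral coefficients of shape 1 to shape 2, and evecs1/evecs2 are Laplace-Beltrami eigenfunctions sampled at the vertices:

```python
import numpy as np
from scipy.spatial import cKDTree

def zoomout_refine(C, evecs1, evecs2, k_final, step=1):
    """Upsample a k0 x k0 functional map C to k_final x k_final.

    evecs1: (n1, K) eigenfunctions of shape 1, evecs2: (n2, K) of shape 2,
    with K >= k_final.  Returns the refined map and the point-wise map T
    (for each vertex of shape 2, a matched vertex of shape 1)."""
    k = C.shape[0]
    T = None
    while k <= k_final:
        # functional map -> point-wise map: match rows of evecs2 @ C to evecs1
        emb2 = evecs2[:, :k] @ C
        T = cKDTree(evecs1[:, :k]).query(emb2)[1]
        if k == k_final:
            break
        k = min(k + step, k_final)
        # point-wise map -> functional map in the larger basis (least squares)
        C = np.linalg.pinv(evecs2[:, :k]) @ evecs1[T, :k]
    return C, T
```

For instance, a 20 x 20 input map can be refined to 100 x 100 by repeating this loop, which is the single-pair behavior that the paper extends with consistency constraints over a whole collection.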

OperatorNet: Recovering 3D Shapes From Difference Operators

Ruqi Huang, Marie-Julie Rakotosaona, Panos Achlioptas, Leonidas Guibas, Maks Ovsjanikov

Abstract: This paper proposes a learning-based framework for reconstructing 3D shapes from functional operators, compactly encoded as small-sized matrices. To this end we introduce a novel neural architecture, called OperatorNet, which takes as input a set of linear operators representing a shape and produces its 3D embedding. We demonstrate that this approach significantly outperforms previous purely geometric methods for the same problem. Furthermore, we introduce a novel functional operator, which encodes the extrinsic or pose-dependent shape information, and thus complements purely intrinsic pose-oblivious operators, such as the classical Laplacian. Coupled with this novel operator, our reconstruction network achieves very high reconstruction accuracy, even in the presence of incomplete information about a shape, given a soft or functional map expressed in a reduced basis. Finally, we demonstrate that the multiplicative functional algebra enjoyed by these operators can be used to synthesize entirely new unseen shapes, in the context of shape interpolation and shape analogy applications.

Proc. International Conference on Computer Vision (ICCV), 2019
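For readers unfamiliar with the operators OperatorNet consumes, the standard intrinsic shape difference operators from the functional-maps literature can be written as follows (notation assumed here: C is a functional map from a base shape to a target shape expressed in orthonormal Laplace-Beltrami bases, and Λ_1, Λ_2 are the diagonal matrices of the corresponding eigenvalues):

```latex
D_{\mathrm{area}} = C^{\top} C, \qquad
D_{\mathrm{conf}} = \Lambda_{1}^{+}\, C^{\top} \Lambda_{2}\, C,
```

where + denotes the pseudo-inverse (the first eigenvalue is zero). Such small matrices, together with the extrinsic operator introduced in the paper, form the compact input from which the network reconstructs the 3D embedding.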

Limit Shape – A Tool for Understanding Shape Differences and Variability in 3D Model Collections

Ruqi Huang, Panos Achlioptas, Leonidas Guibas, Maks Ovsjanikov

Abstract: We propose a novel construction for extracting a central or limit shape in a shape collection, connected via a functional map network. Our approach is based on enriching the latent space induced by a functional map network with an additional natural metric structure. We call this shape-like dual object the limit shape and show that its construction avoids many of the biases introduced by selecting a fixed base shape or template. We also show that shape differences between real shapes and the limit shape can be computed and characterize the unique properties of each shape in a collection – leading to a compact and rich shape representation. We demonstrate the utility of this representation in a range of shape analysis tasks, including improving functional maps in difficult situations through the mediation of limit shapes, understanding and visualizing the variability within and across different shape classes, and several others. In this way, our analysis sheds light on the missing geometric structure in previously used latent functional spaces, demonstrates how these can be addressed and finally enables a compact and meaningful shape representation useful in a variety of practical applications.

Proc. Symposium on Geometry Processing, 2019

Adjoint Map Representation for Shape Analysis and Matching

Ruqi Huang, Maks Ovsjanikov

Abstract: In this paper, we propose to consider the adjoint operators of functional maps, and demonstrate their utility in several tasks in geometry processing. Unlike a functional map, which represents a correspondence simply using the pull-back of function values, the adjoint operator reflects both the map and its distortion with respect to given inner products. We argue that this property of adjoint operators, and especially their relation to the map inverse under the choice of different inner products, can be useful in applications including bi-directional shape matching, shape exploration, and pointwise map recovery, among others. In particular, in this paper, we show that the adjoint operators can be used within the cycle-consistency framework to encode and reveal the presence or lack of consistency between distortions in a collection, in a way that is complementary to the previously used purely map-based consistency measures. We also show how the adjoint can be used for matching pairs of shapes by accounting for maps in both directions, how it can help in recovering point-to-point maps from their functional counterparts, and how it can shed light on the role of functional basis selection.

Proc. Symposium on Geometry Processing, 2017
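The basic object here is a standard linear-algebra construction. If the functional map C takes coefficients on shape 1 to coefficients on shape 2, its adjoint with respect to inner products given by symmetric positive-definite matrices M_1 and M_2 is defined by ⟨C f, g⟩_{M_2} = ⟨f, C^* g⟩_{M_1}, which yields (notation assumed here):

```latex
C^{*} = M_{1}^{-1}\, C^{\top} M_{2},
```

so the adjoint coincides with the plain transpose only when both inner products are the standard ones; changing M_1 and M_2 changes how the map's distortion is reflected, which is the property the paper exploits.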

On the Stability of Functional Maps and Shape Difference Operators

Ruqi Huang, Frederic Chazal, Maks Ovsjanikov

Abstract: In this paper, we provide stability guarantees for two frameworks that are based on the notion of functional maps – the shape difference operators and the framework used to analyze and visualize the deformations between shapes induced by a functional map. We consider two types of perturbations in our analysis: one on the input shapes and the other on the change in scale. On the theoretical side, we formulate and justify the robustness that has been observed in practical implementations of these frameworks. Inspired by our theoretical results, we propose a pipeline for constructing shape difference operators on point clouds and show numerically that the results are robust and informative. In particular, we show that both the shape difference operators and the derived areas of highest distortion are stable with respect to changes in shape representation and changes of scale. Remarkably, this is in contrast with the well-known instability of the eigenfunctions of the Laplace-Beltrami operator computed on point clouds compared to those obtained on triangle meshes.

Proc. Computer Graphics Forum, 2017

Gromov-Hausdorff Approximation of Filamentary Structures Using Reeb-Type Graphs

Frederic Chazal, Ruqi Huang, Jian Sun

Abstract: In many real-world applications, data appear to be sampled around 1-dimensional filamentary structures that can be seen as topological metric graphs. In this paper, we address the metric reconstruction problem of such filamentary structures from data sampled around them. We prove that they can be approximated, with respect to the Gromov-Hausdorff distance, by well-chosen Reeb graphs (and some of their variants), and provide an efficient and easy-to-implement algorithm to compute such approximations in almost linear time. We illustrate the performance of our algorithm on a few data sets.

Proc. Discrete & Computational Geometry, 2015
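As a rough illustration of the Reeb-type construction (a generic Mapper/Reeb-style sketch, not the paper's exact algorithm or its guarantees), one can build a coarse graph from a neighborhood graph of the data by slicing a distance-to-root function into intervals and connecting the resulting components; the interval width and the networkx-based implementation below are assumptions:

```python
import networkx as nx

def reeb_style_graph(G, root, interval=1.0):
    """Coarse Reeb-type graph of a connected neighborhood graph G: nodes are
    connected components of the preimages of intervals of the distance-to-root
    function, and edges link components joined by an edge of G."""
    dist = nx.single_source_dijkstra_path_length(G, root, weight="weight")
    level = {v: int(dist[v] // interval) for v in G}

    R, comp_of = nx.Graph(), {}
    for lv in sorted(set(level.values())):
        sub = G.subgraph([v for v in G if level[v] == lv])
        for i, comp in enumerate(nx.connected_components(sub)):
            node = (lv, i)
            R.add_node(node, members=comp)
            for v in comp:
                comp_of[v] = node
    for u, v in G.edges():
        if comp_of[u] != comp_of[v]:
            R.add_edge(comp_of[u], comp_of[v])
    return R

# Usage on a small path graph standing in for filamentary data:
print(reeb_style_graph(nx.path_graph(10), root=0).number_of_nodes())  # 10
```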

Group

PhD Students

  • Puhua Jiang, 2021-now
  • Jinzhi Zhang, 2022-now
  • Yun Zhao, 2021-now
  • Yanchen Guo, 2021-now
  • Yujia Chen, 2022-now
  • Fengdi Zhang, 2022-now
  • Xinyu Jiang, 2023-now

MPhil Students

  • Zequn Chen, 2021-now
  • Chen Guo, 2021-now
  • Ting Zhang, 2021-now
  • Mingze Sun, 2022-now
  • Yurun Chen, 2022-now
  • Jin Wang, 2022-now
  • Shiwei Mao, 2022-now
  • Guochen Shao, 2022-now
  • Yunqi Zhao, 2022-now
  • Zhangquan Chen, 2023-now
  • Zhejia Cai, 2023-now
  • Xiaoyu Hu, 2023-now
  • Kaisan Li, 2020-2023
  • Leyao Liu, 2020-2023
  • Xuechao Chen, 2020-2023