Title
BLNet: Bidirectional learning network for point clouds
Document Type
Article
Publication Date
1-1-2022
Abstract
The key challenge in processing point clouds lies in the inherent lack of ordering and the irregularity of the 3D points. By relying on per-point multi-layer perceptrons (MLPs), most existing point-based approaches address only the first issue and ignore the second: directly convolving kernels with irregular points results in a loss of shape information. This paper introduces a novel point-based bidirectional learning network (BLNet) to analyze irregular 3D points. BLNet optimizes the learning of 3D points through two iterative operations: feature-guided point shifting and feature learning from the shifted points. The former minimizes intra-class variances, leading to a more regular point distribution; the latter explicitly models point positions, yielding a new feature encoding with increased structure-awareness. An attention pooling unit then selectively combines important features. This bidirectional learning alternately regularizes the point cloud and learns its geometric features, with the two procedures iteratively promoting each other for more effective feature learning. Experiments show that BLNet learns deep point features robustly and efficiently, and outperforms the prior state of the art on multiple challenging tasks.
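The attention pooling mentioned in the abstract selectively combines per-point features using learned weights rather than a uniform max or average pool. A minimal NumPy sketch of the general idea follows; the linear scorer `W` and the softmax normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_pool(features, W):
    """Pool per-point features into one vector via attention weights.

    features: (N, C) array of per-point features
    W: (C, 1) scoring weights (a stand-in for the learned scorer)
    """
    scores = features @ W                    # (N, 1) raw attention logits
    scores -= scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over the N points
    return (weights * features).sum(axis=0)  # (C,) attention-weighted feature

rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 64))  # 128 points, 64-dim features
W = rng.normal(size=(64, 1))
pooled = attention_pool(feats, W)
print(pooled.shape)
```

Unlike max pooling, every point contributes to the pooled vector, with its contribution scaled by how important the scorer judges it to be.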
Publication Source (Journal or Book title)
Computational Visual Media
Recommended Citation
Han, W., Wu, H., Wen, C., Wang, C., & Li, X. (2022). BLNet: Bidirectional learning network for point clouds. Computational Visual Media. https://doi.org/10.1007/s41095-021-0260-6