FeatFlow: Learning geometric features for 3D motion estimation


3D motion estimation is an important prerequisite for the autonomous operation of vehicles and robots in dynamic environments. This work presents FeatFlow, a novel neural network architecture for estimating 3D motion from unstructured point clouds. Specifically, we learn deep geometric features to estimate both the dense scene flow and the ego-motion of the platform. We build a scene flow estimation pipeline with an encoder-decoder architecture that comprises three novel modules: a feature extractor, a motion embedder, and a flow decoder. Using a point-score layer that assigns learned scores to the extracted features, the feature extractor selects the keypoints and features most significant for estimating the relative transformation between two consecutive point clouds. The whole model adaptively learns robust descriptors that represent a variety of point motions at the object and scene levels. We evaluated our approach on synthetic data from FlyingThings3D and on real-world LiDAR scans from KITTI and Oxford RobotCar. Our network generalizes to datasets with different patterns, outperforming various baselines and achieving state-of-the-art performance.
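The abstract does not specify how the point-score layer is implemented; as a rough illustration of the idea, the NumPy sketch below scores each point's feature with a hypothetical linear projection and keeps the top-k highest-scoring points as keypoints. All names, the projection, and the sigmoid scoring are assumptions for illustration, not the paper's actual layer.

```python
import numpy as np

def point_score_topk(features, score_weights, k):
    """Illustrative point-score selection (hypothetical, not the paper's
    layer): score each point's feature vector and keep the top-k points
    as keypoints."""
    # Score each point via a linear projection followed by a sigmoid,
    # so scores lie in (0, 1).
    logits = features @ score_weights            # shape (N,)
    scores = 1.0 / (1.0 + np.exp(-logits))
    # Indices of the k highest-scoring points, in descending score order.
    idx = np.argsort(-scores)[:k]
    return idx, features[idx], scores[idx]

# Toy usage: 100 points with 16-dimensional features, keep 8 keypoints.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16))
w = rng.standard_normal(16)
idx, kp_feats, kp_scores = point_score_topk(feats, w, k=8)
```

In the paper's pipeline such scores are learned end-to-end so that selection favors points useful for estimating the inter-frame transformation; the fixed random projection here only stands in for that learned scoring.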

Publication Source (Journal or Book title)

Pattern Recognition
