In this paper, we propose a new distortion quantification method for point
clouds, the multiscale potential energy discrepancy (MPED). Currently, there is
a lack of effective distortion quantification for a variety of point cloud
perception tasks. Specifically, for dense point clouds, a distortion
quantification method is used to predict human subjective scores and to
optimize parameter selection in human perception tasks such as compression and
enhancement. For sparse point clouds, a distortion quantification method works
as a loss function that guides the training of deep neural networks for
unsupervised learning tasks (e.g., point cloud reconstruction, completion, and
upsampling). Therefore, an effective distortion quantification should be
differentiable, discriminative to distortion, and computationally inexpensive.
However, no current distortion quantification method satisfies all three
conditions. To fill this gap, we propose a new point cloud feature description
method, the point potential energy (PPE), inspired by classical physics. We
regard point clouds as systems with potential energy, in which distortion
changes the total potential energy. By evaluating the potential energy
discrepancy at various neighborhood sizes, the proposed MPED achieves
global-local tradeoffs,
capturing distortion in a multiscale fashion. We further theoretically show
that classical Chamfer distance is a special case of our MPED. Extensive
experiments show that the proposed MPED is superior to current methods on both
human and machine perception tasks. Our code is available at
https://github.com/Qi-Yangsjtu/MPED.
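
For reference, the classical Chamfer distance that the abstract relates to MPED can be sketched as below. This is a minimal NumPy sketch of the standard symmetric Chamfer formulation, not the authors' implementation; the function name and averaging convention are our own choices for illustration.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3).

    Standard formulation: for each point, find the squared distance to its
    nearest neighbor in the other set, then average over both directions.
    (Illustrative sketch only; not the paper's MPED code.)
    """
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M)
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbor terms in both directions, averaged and summed
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because the Chamfer distance is built from min and mean operations over pairwise distances, it is differentiable almost everywhere, which is one of the three properties the abstract requires of a distortion quantification method.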