Autonomous robots will soon play a significant role in domains such as search-and-rescue, agriculture, homes, offices, transportation, and medical surgery, where fast, safe, and optimal responses to diverse situations will be critical. However, these robots need fast algorithms to plan their motion sequences in real time under limited perception and battery life. The field of motion planning and control addresses this challenge of coordinating robot motions and enabling robots to interact with their environments to perform a variety of challenging tasks under constraints.
Planning algorithms for robot control have a long history, ranging from methods with complete to probabilistically complete worst-case theoretical guarantees. However, despite deep roots in artificial intelligence and robotics, these methods tend to be computationally inefficient in high-dimensional problems. On the other hand, advances in machine learning have produced systems that perform complex decision-making directly from raw sensory information. This thesis introduces a new class of planning methods called Neural Motion Planners (NMPs), which emerged from the cross-fertilization of classical motion planning and machine learning techniques. These methods achieve unprecedented speed and robustness in planning robot motion sequences in complex, cluttered, and partially observable environments. They retain worst-case theoretical guarantees and solve a broad range of motion planning problems under geometric collision-avoidance, kinodynamic, non-holonomic, and hard kinematic manifold constraints.
Another challenge in deploying robots into the natural world is the tedious process of defining objective functions for the underlying motion planners, and of transferring and composing their motion skills into new skills for a combinatorial expansion of a robot's skill set toward solving unseen practical problems. To address these challenges, this thesis introduces novel methods, namely variational inverse reinforcement learning and compositional reinforcement learning approaches. These methods learn unknown constraint functions and motion skills for NMPs directly from expert demonstrations and compose them into new, more complex skills for solving harder problems across different domains. Finally, this thesis presents a model-free neural task planning algorithm that works with never-before-seen objects and generalizes to real-world environments. It generates task plans for underlying motion planning and control approaches and solves challenging rearrangement tasks in unknown environments.