Intelligent algorithms for measuring and augmenting performance during robotic-assisted surgery (RAS), in both human-robot collaborative and autonomous system settings, have the potential to benefit both surgeons and patients. Successful RAS depends on the performance of both the human operator and the robotic system, and measuring that performance requires integrating, synchronizing, and analyzing contemporaneous data from humans, robots, and task environments. Safety-critical tasks in dynamic, unstructured environments, such as those in RAS, demand high-performing operators and robotic systems alike. Surgeons work in mentally and physically demanding workspaces where errors carry severe consequences, and uncertainties in operating room (OR) robotic systems, particularly in kinematics and perception for autonomous applications, have meaningful implications for clinical outcomes.
The purpose of this dissertation is to develop novel machine intelligence algorithms that quantitatively model and augment performance during RAS for both human operators and autonomous systems. For human operators, we detect intraoperative errors and analyze operator biometric data. For autonomous systems, we develop perception algorithms that measure tool localization accuracy and visual scene uncertainty in surgical environments. Our results show that intraoperative human error can be quantitatively analyzed, and that our perception algorithms more accurately localize surgical tools and measure visual scene uncertainty in surgical environments.