In an ever-changing and complex environment, the ability to regulate actions precisely and rapidly is crucial for effective behavior. This requires an online decision-making process that includes gathering information from multiple sensory sources: vision, audition, gustation, olfaction, and somatosensation; integrating this information with prior knowledge and current goals; making rapid decisions to generate motor commands; and executing those commands. During movement, we are also frequently required to stop the current movement, make new decisions based on incoming information, and switch to other action plans when needed. These processes involve a complex interplay of multiple brain regions across the cortical-subcortical network, in which information and goals are converted to action sequences while taking uncertainty into account. Although both the decision-making and motor control aspects of action regulation have been extensively investigated, how different functions (e.g., action selection, stopping, and switching) are interrelated, both behaviorally and mechanistically, remains elusive, partly because these functions have typically been explored in isolation, making it difficult to develop a unified theory of action regulation. The main goal of this work is to construct a large-scale, unified neurocomputational framework that predicts how the brain selects, stops, and switches actions.
Part of this thesis explores computational modeling of action regulation functions. In two related studies, we evaluated our model hypotheses by analyzing the motor behavior of both neurotypical individuals and Parkinson's disease (PD) patients during tasks involving action inhibition. This approach demonstrated the model's capacity to explain key features of action regulation behavior in both groups. Building on these findings, in a collaborative work, we extended this model to enable precise, adaptive, environment-aware target reaching in robotic manipulation.
We also investigated obstacle avoidance behavior in cluttered environments. Based on stochastic optimal control (SOC) theory, we constructed a computational framework that integrates value information associated with goals, obstacles, and actions, and demonstrated its ability to capture key aspects of human reaching behavior in complex environments where both obstacles and targets are present.
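As an illustrative sketch only (the specific symbols and cost terms below are assumptions for exposition, not the exact formulation used in the thesis), an SOC framework of this kind can be expressed as choosing a control policy that minimizes an expected cost combining goal attraction, obstacle repulsion, and motor effort:

```latex
% Expected cost over a movement of duration T (illustrative form)
J(u) \;=\; \mathbb{E}\!\left[\, \phi(x_T) \;+\; \int_{0}^{T}
  \Big( c_{\text{goal}}(x_t) + c_{\text{obs}}(x_t)
        + u_t^{\top} R\, u_t \Big)\, dt \right],

% subject to stochastic dynamics driven by noise W_t
dx_t \;=\; f(x_t, u_t)\, dt \;+\; \sigma\, dW_t .
```

Here $x_t$ is the state (e.g., hand position and velocity), $u_t$ the motor command, $\phi(x_T)$ a terminal cost penalizing distance from the target, $c_{\text{goal}}$ and $c_{\text{obs}}$ running costs encoding target value and obstacle penalties, and $R$ an effort weight. In such a formulation, the relative magnitudes of the goal and obstacle terms determine how strongly reaching trajectories curve away from obstacles.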