While the theoretical analysis of linear dynamical systems with finite state spaces is a mature topic, models with a very large number of dimensions force the modeler to turn to computational tools to visualize and analyze the dynamic behavior of interest. In these situations we are confronted with the Curse of Dimensionality: computational and storage complexity grow exponentially in the number of dimensions.
This doctoral project focuses on two main classes of large-scale linear systems that arise in systems biology. The Chemical Master Equation (CME) is the forward Kolmogorov equation describing the evolution of the probability mass function of a Markov process on a countable state space. Each state of the CME is labelled with an ordered S-tuple corresponding to one configuration of a well-mixed chemical system, where S is the number of distinct chemical species of interest. Even when one considers only a projection of the CME onto a finite subset of the states, one must still contend with the Curse of Dimensionality: the computational complexity grows exponentially in the number of chemical species. This dissertation describes a computational methodology for efficient solution of the CME that, in the best cases, scales linearly in the number of chemical species.
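As a minimal illustration of this exponential growth, the sketch below assembles the generator of a truncated CME for S hypothetical, mutually independent birth-death species, in which case the joint generator is the Kronecker sum of the one-species generators; the rate constants and truncation level are purely illustrative and are not taken from the dissertation.

```python
import numpy as np
from functools import reduce

def birth_death_generator(N, k=1.0, gamma=0.1):
    """CME generator for one species with constant birth rate k and
    degradation propensity gamma*n, truncated to copy numbers 0..N-1."""
    A = np.zeros((N, N))
    for n in range(N):
        if n + 1 < N:                 # birth: n -> n+1
            A[n + 1, n] += k
            A[n, n] -= k
        if n >= 1:                    # degradation: n -> n-1
            A[n - 1, n] += gamma * n
            A[n, n] -= gamma * n
    return A

def kron_sum(mats):
    """Kronecker sum of the one-species generators: the joint CME
    generator for independent species, of dimension prod_s N_s."""
    total = 0
    for s, A in enumerate(mats):
        factors = [np.eye(m.shape[0]) for m in mats]
        factors[s] = A
        total = total + reduce(np.kron, factors)
    return total

N, S = 16, 4                          # 16 copy-number states per species
A_joint = kron_sum([birth_death_generator(N) for _ in range(S)])
print(A_joint.shape)                  # (65536, 65536), i.e. (N**S, N**S)
```

Already at S = 4 species the dense generator has N**S = 65,536 rows; each additional species multiplies the state count by another factor of N, which is precisely the growth that the Tensor Train representation introduced below is designed to tame.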
The second main class of high-dimensional problems requiring computational tools is that of coupled linear reaction-diffusion equations. For this class of models, we focus primarily on the computation of certain high-dimensional matrices that quantify the input-to-state and state-to-output relationships. We describe algorithms for extracting the useful information stored in these matrices and use this information to efficiently compute both reduced-order models and open-loop control laws for steering the full system. A key feature of this approach is that it is entirely simulation- and experiment-free; in fact, in our numerical experiments, computing a reduced model or an open-loop control law on a laptop is an order of magnitude faster than simulating the full system on a 32-core node of a high-performance cluster.
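This overview does not name these matrices; in the linear systems setting they are assumed here to be the standard controllability and observability Gramians. The following small, dense sketch shows how such Gramians yield a reduced-order model via square-root balanced truncation; it is a generic illustration under that assumption, not the structured algorithm developed for the reaction-diffusion problems.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system
    dx/dt = A x + B u, y = C x, keeping r states. Assumes the
    Gramians are positive definite (controllable and observable)."""
    # Input-to-state and state-to-output Gramians:
    #   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    L = cholesky(P, lower=True)       # P = L L^T
    R = cholesky(Q, lower=True)       # Q = R R^T
    U, s, Vt = svd(R.T @ L)           # s = Hankel singular values
    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = L @ Vt[:r].T @ S_inv_sqrt     # reduction basis
    W = R @ U[:, :r] @ S_inv_sqrt     # projection basis, W.T @ T = I_r
    return W.T @ A @ T, W.T @ B, C @ T, s
```

The decay of the returned Hankel singular values indicates how many states a reduced model needs in order to capture the input-output behavior to a given accuracy.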
In both projects, the enabling computational technology is the recently proposed Tensor Train (TT) structured low-parametric representation of high-dimensional data. The TT-format exploits the low-rank structure of the "unfolding matrices" of a tensor for compression and computational efficiency. Formally, the computational complexity of basic TT arithmetic scales linearly in the number of dimensions, potentially circumventing the Curse of Dimensionality. To demonstrate the effectiveness of this approach, we performed numerous numerical experiments whose results are reported here.
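As a minimal illustration of how the low-rank unfoldings are exploited, the following sketch implements dense TT-SVD compression, i.e. successive truncated SVDs of the unfolding matrices; it is a toy, dense-input version intended only to convey the idea, not the large-scale machinery used in the dissertation.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compress a d-dimensional array into TT cores by successive
    truncated SVDs of its unfolding matrices (simplified tolerance
    handling; dense input only)."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    M = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))        # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

# A separable 6-way tensor compresses to six cores with all TT ranks 1,
# storing 6*4 = 24 parameters instead of 4**6 = 4096 entries.
x = [np.random.rand(4) for _ in range(6)]
T = np.einsum('a,b,c,d,e,f->abcdef', *x)
print([core.shape for core in tt_svd(T)])   # six cores of shape (1, 4, 1)
```

For fixed TT ranks, the number of parameters in the TT cores grows only linearly in the number of dimensions, which is the property that the CME and reaction-diffusion computations above rely on.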