Neural networks (NNs) have become the backbone of intelligent systems, with applications ranging from sensing modules to learning-based controllers. Their deployment in safety-critical domains such as healthcare and transportation underscores their significance. However, the fragility of NNs and their potential for dangerous mistakes, whether unintentional or adversarial, necessitate rigorous checks on their behavior. This thesis delves into the formal verification of NNs, a process that provides formal guarantees on their behavior and thereby ensures the reliability and robustness of these systems.
The cornerstone of this work is PeregriNN, a novel algorithm that advances the formal verification of NNs. PeregriNN has been used to verify a range of NN properties, including adversarial robustness, the safety of autonomous systems, and fairness, the last by formulating the fairness property as a formal verification problem. The thesis further explores a new type of activation function that simplifies formal verification, albeit at the cost of increased training time. This exploration, together with a collaborative effort that produced a polynomial approximation method for formally verifying ReLU NNs, provides valuable insight into the trade-off between ease of verification and computational efficiency, and both methods offer promising approaches to strengthening the robustness and reliability of NNs. The thesis also extends these theories and algorithms to the safety of autonomous systems with neural network controllers, a practical application that underscores the real-world implications of formal verification for ensuring the safety and reliability of such systems.
In summary, this thesis provides a comprehensive treatment of the formal verification of neural networks, underscoring the importance of both algorithm development and real-world application. The findings contribute significantly to this critical field, with implications for the fairness, safety, and robustness of NNs.