Fast Physics-Informed Neural Networks on Edge Devices

Abstract

Training end-to-end models that solve partial differential equations (PDEs) with deep neural networks demands substantial computing resources, including power, memory, and advanced computing platforms. Edge devices typically lack these resources, making such training paradigms impractical on them. Transfer learning, in which a model pre-trained on one dataset is fine-tuned to perform inference on another, offers a solution: the expensive pre-training runs on modern GPUs, while fine-tuning requires far fewer computing resources. By applying transfer learning to solving PDEs with neural networks, we address the demand for real-time response from PDE solvers in scientific and engineering problems. In this project, we propose using transfer learning for Physics-Informed Neural Networks (PINNs) to address problems in reachability analysis. We first pre-train a modified PINN on standard GPUs and subsequently fine-tune the model under constrained computing resources. During fine-tuning, we compute the gradients of the loss analytically to reduce dependence on existing libraries, thereby improving the method's generalizability across edge devices. Through experiments with multiple PDE examples and the reachability problem, we demonstrate that transfer learning with limited computing resources achieves accuracy comparable to the end-to-end training paradigm while requiring significantly fewer computing resources.
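
The thesis itself is not reproduced on this page, but the fine-tuning idea in the abstract can be illustrated. Below is a minimal, hypothetical NumPy sketch of last-layer PINN fine-tuning with hand-derived gradients on a 1D Poisson toy problem. The architecture, the problem, and every name in it (H, W1, features, lam) are illustrative assumptions, not the thesis's modified PINN or reachability setting; the point is only that, with the pre-trained hidden layer frozen, the PDE-residual loss gradient can be written in closed form, so fine-tuning needs no autodiff library on the edge device.

    # Hypothetical sketch, not the thesis code: fine-tune only the output
    # layer of a "pre-trained" PINN with analytic gradients, in plain NumPy.
    #
    # Toy problem: 1D Poisson equation u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
    # with f(x) = -pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
    import numpy as np

    rng = np.random.default_rng(0)

    H = 50                                # hidden width
    W1 = rng.normal(scale=3.0, size=H)    # frozen hidden weights (stand-in for
    b1 = rng.uniform(-3.0, 3.0, size=H)   # a layer pre-trained offline on a GPU)
    w2 = np.zeros(H)                      # output weights, fine-tuned on device

    def features(x):
        """phi_j(x) = tanh(W1_j x + b1_j) and its exact second derivative."""
        t = np.tanh(np.outer(x, W1) + b1)              # shape (N, H)
        # d^2/dx^2 tanh(W1 x + b1) = W1^2 * (-2 t (1 - t^2)), derived by hand
        return t, (W1**2) * (-2.0 * t * (1.0 - t**2))

    x_col = np.linspace(0.0, 1.0, 64)                  # PDE collocation points
    f_col = -np.pi**2 * np.sin(np.pi * x_col)
    _, phi_xx = features(x_col)
    phi_bdy, _ = features(np.array([0.0, 1.0]))        # boundary points

    lam = 10.0                                         # boundary-loss weight

    # With the hidden layer frozen, the loss
    #   L(w2) = mean((phi_xx @ w2 - f_col)^2) + lam * mean((phi_bdy @ w2)^2)
    # is quadratic in w2, so gradient and Hessian are available in closed form.
    Hess = (2 / len(x_col)) * phi_xx.T @ phi_xx + lam * phi_bdy.T @ phi_bdy
    lr = 1.0 / np.linalg.eigvalsh(Hess)[-1]            # safe step: 1 / L_max

    for step in range(2000):
        r_pde = phi_xx @ w2 - f_col                    # residual of u'' - f
        r_bdy = phi_bdy @ w2                           # residual of u at boundary
        grad = (2 / len(x_col)) * phi_xx.T @ r_pde \
             + lam * phi_bdy.T @ r_bdy                 # analytic, no autodiff
        w2 -= lr * grad
        if step % 500 == 0:
            print(step, np.mean(r_pde**2) + lam * np.mean(r_bdy**2))

    # Compare the fine-tuned network against the exact solution sin(pi x).
    x_test = np.linspace(0.0, 1.0, 200)
    u_hat = features(x_test)[0] @ w2
    print("max error:", np.max(np.abs(u_hat - np.sin(np.pi * x_test))))

Because the loss is quadratic in the output weights, this particular setup also admits a one-shot closed-form solve (e.g., np.linalg.lstsq on the stacked residual system), a common variant of last-layer transfer learning; the iterative loop above is kept only to mirror the gradient-based fine-tuning the abstract describes.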
