Victorita Dolean
University Côte d'Azur - Victorita.Dolean@univ-cotedazur.fr
Alexander Heinlein and Ben Moseley
Physics-informed neural networks (PINNs) [2] are a method for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to incorporate the residual of the PDE as well as the boundary conditions into the loss function of the neural network. This provides a simple and mesh-free approach for solving PDE-based problems. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions. In a more recent approach, Finite Basis Physics-Informed Neural Networks (FBPINNs) [1], the authors use ideas from domain decomposition to accelerate the learning process of PINNs and improve their accuracy in this setting. In this talk, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. Furthermore, we will present numerical experiments on the influence of these different variants on convergence and accuracy.
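To make the idea concrete, a standard PINN loss for a boundary value problem $\mathcal{N}[u] = f$ in $\Omega$ with $u = g$ on $\partial\Omega$ can be sketched as follows (the notation here is generic and illustrative, not taken verbatim from [1] or [2]):
\[
\mathcal{L}(\theta) \;=\; \frac{1}{N_r}\sum_{i=1}^{N_r} \big\lVert \mathcal{N}[u_\theta](x_i) - f(x_i) \big\rVert^2
\;+\; \lambda\,\frac{1}{N_b}\sum_{j=1}^{N_b} \big\lVert u_\theta(x_j) - g(x_j) \big\rVert^2,
\]
where $u_\theta$ denotes the neural network approximation of the solution, the $x_i \in \Omega$ and $x_j \in \partial\Omega$ are collocation points, and $\lambda > 0$ weights the boundary term. In FBPINNs, $u_\theta$ is replaced by a sum of local networks supported on overlapping subdomains and blended by window functions, which is what makes the Schwarz-like training iterations discussed in the talk natural.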
References

[1] B. Moseley, A. Markham, and T. Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871, 2021.
[2] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.