Motivation

The motivation for integrating physics-based information into neural networks (NNs) is multifaceted.
NNs are attractive for solving (nonlinear) systems of equations because their ability to represent all continuous functions enables them to accurately represent the solutions of physical systems. This property is explained by the universal approximation theorem: There "is a single hidden layer feed-forward network that approximates any measurable function to any desired degree of accuracy on some compact set K of input patterns [...]" [8]. It is therefore possible to leverage the power of NNs to capture nonlinear relationships in physical systems.
However, conventional NNs fail to converge when data is sparse or noisy [6]. In many physics-based systems the availability of good labeled data cannot be guaranteed, or such data cannot feasibly be generated in the first place, especially in the context of medicine. Patient-specific information is needed to make accurate diagnoses. This information (e.g. ultrasonic scans) is sparse and noisy, and generating good predictions with conventional deep learning is difficult.


 

Additionally, the behavior of such systems (i.e. the underlying physical behavior) is governed by physical laws already known to researchers. Good generalization can be guided by using this prior knowledge of the system's systematic behavior [4]. Furthermore, extrapolating beyond the available data can be essential in engineering and design processes, as it enables informed decision-making by estimating system behavior under extreme conditions beyond the limits of the data. Poor performance in extrapolation is one of the major drawbacks of conventional NNs. Accordingly, learning the underlying physical behavior is desirable, as this approach extends better beyond the data domain. Physics-informed NNs improve the model's accuracy and reliability not only inside the domain, but even beyond its boundaries (though they remain very limited there) [4]. This property is highly attractive in medicine, as accurate predictions are the basis on which decisions are made.



Physical systems are not only a data-driven domain, but are also (mainly) handled by physics-based modeling approaches, which motivate these techniques as well. Conventional methods include the Finite Element Method (FEM), the Finite Volume Method (FVM), Finite Differences (FD), the Boundary Element Method (BEM), and many more [20]. Their solutions are commonly generated on a grid [20]. If solutions between grid points are necessary, interpolation is used [20]. Interpolation is computationally costly and may not represent the system's true behavior [20]. Solutions of NNs with continuous independent variables are a priori grid-less and can be evaluated anywhere without the need for interpolation algorithms [4]. Moreover, numerical solvers have limitations with respect to the sampling density [20]. Some methods become unstable when there are not enough sampling points [20]. Especially in high-dimensional systems this can lead to infeasibly large systems, which cannot be solved in limited time. Though NNs suffer from this issue as well, they are often able to tackle a wide range of real-world learning problems without abundant amounts of data [22]. In addition, numerical solvers are limited when problems are not well defined [20]. Sometimes defining all boundary conditions is not possible [4]. The resulting system cannot easily be solved by conventional numerical solvers, as the resulting system of linear equations is under-determined, leading to infinitely many or no solutions [20].

The figure by Arzani et al. [1] shows a comparison of ground truth blood flow in an aneurysm. The upper image was created with a numerical method, for which the inlet and outlet boundary conditions had to be provided. The lower (physics-inspired) version shows the prediction of a Physics-Informed Neural Network, which does not have information on the inlet and outlet boundary conditions. Both boundary conditions were instead inferred (even though not completely correctly) from sparse measurement data (3 sensors). [1]

In essence, integrating physics information into NNs bridges the gap between data-driven and physics-based modeling approaches. This can produce methods that support and enhance each other, or solve problems that would not have been possible with only one approach at a time.

What is Physics?

Physics is the natural science concerned with understanding how real-world systems behave. In physics, this behavior is described using equations. Differential equations are essential in describing many fundamental laws and principles, such as Newton's laws of motion, Maxwell's equations, or the Schrödinger equation in quantum mechanics [20]. In the context of Physics-Inspired NNs they are used to characterize problems and their solutions.
In principle, differential equations relate one or more functions to their derivatives or to any other (possibly nonlinear) differential operator. The most general form of expressing differential equations is:

(1) \mathcal{F}\left(u(z); y\right) = f(z) \qquad z \in \Omega,
(2) \mathcal{B}\left(u(z)\right) = g(z) \qquad z \in \partial \Omega


where \Omega is the domain and \partial \Omega is its boundary. 
\mathcal{F} and \mathcal{B} are arbitrary differential operators acting on the solution u. z indicates the space-time coordinate of the system and y represents parameters related to the system. f and g are terms of the differential equation which do not include u; they most commonly describe specific behavior of the system, such as sources, wells, or external forces acting on it. Equation (1) is the differential equation and Equation (2) specifies the boundary conditions. As initial conditions can be understood as boundaries of the spatio-temporal domain, they are included in this term.
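
For illustration, a one-dimensional Poisson problem fits this template (a standard textbook example, not taken from the cited works): with \mathcal{F}(u) = -u'' and homogeneous Dirichlet boundary values, Equations (1) and (2) become

-\frac{\mathrm{d}^2 u}{\mathrm{d} z^2}(z) = f(z) \qquad z \in (0, 1),
u(z) = 0 \qquad z \in \{0, 1\},

where f is a source term and g \equiv 0 prescribes the value of the solution on the boundary.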


Physics Inspired Neural Networks

There are many different ways of integrating physics-based knowledge into NNs.

Physics Priors Guided Learning

Known physical laws are used to generate priors, which serve as useful conditions to guide the output of the network.

Su et al. [26] use standard equations from traditional linear modal synthesis. They generate a physics prior by representing recorded sound as damped sinusoidal modes of the spectrum. Additionally, they learn residual parameters to be able to model interactions with the environment. They use these priors to guide a conditional Denoising Diffusion Probabilistic Model. [26]

Symbolic Regression

Symbolic regression is not directly physics-inspired, but very useful in the physical domain. It aims to discover the underlying mathematical expressions for physical phenomena. NNs are used to fit the data and are then simplified, so that expressions explaining these phenomena can be derived. [28]

Udrescu et al. [28] use NNs to lower the complexity of problems for which conventional techniques fail to give good solutions in reasonable time. They create accurate high-dimensional interpolations between data points in order to solve their system.

Graph Neural Networks

Graph NNs can leverage the underlying structure and relationships in physical systems, which are often represented as graphs.

Shlomi et al. [27] give a broad review of their use in particle physics. Graph NNs are useful here because the facilities generating these data sets produce inherently sparse data. Graphs can better represent these large, high-dimensional measurements and help interpret them.

Physics Informed Neural Networks

 In Physics-Informed NNs the equations are directly embedded into the network.

They either inform the architecture directly or enforce physics conformity through constraining loss terms.


This report will mainly cover Physics-Informed NNs. Research in this domain is growing rapidly, and the approach is becoming a state-of-the-art technique for very specialized problems. Additionally, I think that these networks will become an essential part of solvers for differential equation problems. Furthermore, as space in this report is limited, I would like to delve deeper into one particular method, as this leaves more room for explaining the underlying mechanics. Compared to the report from the winter semester 2022/2023, the focus of this report is more on the underlying mechanics of Physics-Informed NNs, as applications in the medical domain have already been covered extensively.

Physics-Informed NNs

Physics-Informed NNs (PINNs) include physics information by introducing differential equations into their structure or into the way they are trained. There are two ways of doing this: soft constraints and hard constraints [4].


Soft Constrained


The first, and by far the most popular, is the soft-constrained PINN [4]. Due to its popularity this approach is also called the "vanilla" way of constraining PINNs. For vanilla PINNs, constraining terms are introduced into the loss [13]. The differential equation and the boundary equation can be redefined in residual form [13].

(3) \mathcal{R}_\mathcal{F}(u(z)) = \mathcal{F}\left(u(z); y\right) - f(z) \qquad z \in \Omega,
(4) \mathcal{R}_\mathcal{B}(u(z)) = \mathcal{B}\left(u(z)\right) - g(z) \qquad z \in \partial \Omega

The residuals are a measure of how far a prediction is from a solution which satisfies the differential/boundary equation [20]. In vanilla PINNs the independent variables are fed through the network, and the output consists of one or multiple dependent variables representing the solution of the system (or dependent variables aiding the learning process but not of interest for the solution). Automatic Differentiation (AD) is used to calculate gradients (or differential operators). The residuals are weighted and added to the conventional, data-dependent neural network loss. Backpropagation is used to update the parameters of the network. [13]
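
The following minimal sketch illustrates this setup for a one-dimensional Poisson problem, assuming PyTorch; the network size, the source term, and names such as pde_residual are illustrative and not taken from the cited papers.

    import math
    import torch

    # Small fully connected network mapping the independent variable z to the solution u(z).
    model = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

    def pde_residual(z):
        # Residual R_F(u) = -u''(z) - f(z) for a 1D Poisson problem, computed via AD.
        z = z.requires_grad_(True)
        u = model(z)
        u_z = torch.autograd.grad(u, z, torch.ones_like(u), create_graph=True)[0]
        u_zz = torch.autograd.grad(u_z, z, torch.ones_like(u_z), create_graph=True)[0]
        f = torch.sin(math.pi * z)                        # example source term
        return -u_zz - f

    # Collocation points inside the domain and on its boundary.
    z_domain = torch.rand(100, 1)
    z_boundary = torch.tensor([[0.0], [1.0]])

    loss_pde = pde_residual(z_domain).pow(2).mean()       # residual of Eq. (3)
    loss_bc = model(z_boundary).pow(2).mean()             # residual of Eq. (4) with g = 0
    loss = 1.0 * loss_pde + 1.0 * loss_bc                 # plus a data loss term, if measurements exist
    loss.backward()                                       # gradients for the optimizer step

The weights in the sum are hyperparameters; how they are chosen strongly influences how well the network trains.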

The makeup of the loss function is inherently problem-specific, with varying knowledge about the boundary, the solution inside the domain, or even the differential equations themselves. Almost all permutations of these conditions are possible and can theoretically be solved using PINNs. [4]

As data is most commonly sparse or non-existent, collocation points are introduced to serve as the points at which the residuals are evaluated [13]. These collocation points can be sampled in various ways (uniform, pseudo-random, Sobol sampling, Latin Hypercube) inside the domain as well as on the boundary [10]. It is even possible to draw new samples during the training process, further increasing the size of the "data set" [7]. Both the boundary and the domain need to be sampled sufficiently well.
Evaluating residuals at arbitrary points is what AD makes possible in the first place. While there are other ways of determining the residuals (conventional numerical methods, hard-coded derivatives, symbolic differentiation), AD is the standard choice for researchers. It is very cheap to compute and the derivatives are accurate to machine precision [2].
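
As a small sketch of the sampling step (assuming SciPy's quasi-Monte Carlo module; the two-dimensional unit-square domain and the number of points are illustrative):

    import numpy as np
    from scipy.stats import qmc

    n_points, dim = 1024, 2                       # e.g. a 2D spatial domain
    lower, upper = [0.0, 0.0], [1.0, 1.0]         # domain bounds

    # Pseudo-random (uniform) sampling
    uniform_pts = np.random.uniform(lower, upper, size=(n_points, dim))

    # Sobol sequence (low-discrepancy sampling)
    sobol_pts = qmc.scale(qmc.Sobol(d=dim, scramble=True).random(n_points), lower, upper)

    # Latin Hypercube sampling
    lhs_pts = qmc.scale(qmc.LatinHypercube(d=dim).random(n_points), lower, upper)

Boundary points are sampled analogously on the boundary, and fresh batches can be drawn every few epochs to refresh the collocation set.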

Hard Constrained

The second, which is rarely used, is the hard-constrained PINN. Here one or more additional networks are introduced which are purely trained to satisfy the boundary conditions [24], conservation laws, material properties, or any other known physics-based equation. These hard constraints explicitly force the network to satisfy the conditions throughout the training process [24]. The different physics properties are essentially encoded in the network's design. This ensures consistency with the underlying laws, which cannot be guaranteed for soft constraints [19]. This approach "can not only facilitate the learning process [...] but also produce more accurate temperature prediction" [24]. However, hard constraints are more complex to implement, have limited flexibility (especially for noisy data), and often need well-defined constraints. Only for a limited set of problems can they be used to great effect [19, 18].
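
One generic way to encode a boundary condition directly into the network design (a common construction sketched here for illustration, not the specific architecture of [24]) is to wrap the network output in an ansatz that satisfies the condition by construction:

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

    def u_hard(z):
        # Ansatz u(z) = z * (1 - z) * N(z): the homogeneous Dirichlet condition
        # u(0) = u(1) = 0 holds exactly for any network output, so no boundary
        # loss term is needed during training.
        return z * (1.0 - z) * net(z)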

Other Properties

The physics-constrained loss leads to a problem which is harder to train [9]. Finding correct hyperparameters is difficult and an issue of ongoing research [9]. The loss close to the boundaries leads to many vanishing gradients in the network, making training difficult [10]. Adaptive activation functions, such as Swish, partially solve this problem [10].
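
As a sketch of such an activation (one possible adaptive form with a trainable slope, assuming PyTorch; the exact variants differ between papers):

    import torch

    class AdaptiveSwish(torch.nn.Module):
        # Swish-like activation x * sigmoid(a * x) with a trainable slope a.
        def __init__(self):
            super().__init__()
            self.a = torch.nn.Parameter(torch.tensor(1.0))

        def forward(self, x):
            return x * torch.sigmoid(self.a * x)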

The networks are commonly optimized with ADAM [1, 16, 23, 17, 12, 4]. Once a good region of the solution space is found, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno method (L-BFGS, a quasi-Newton solver) is sometimes used to further increase the accuracy [11, 7, 4, 10, 16]. The models and data are small enough that this is possible for most problems. This optimizer provides better convergence and improves the solution significantly [11]. However, starting with this optimizer is not recommended, as it easily gets stuck in local minima [4].
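
A typical two-stage setup looks roughly like the following (assuming PyTorch's built-in optimizers; model refers to the earlier sketch, pinn_loss() is assumed to wrap the composite loss computation shown there, and the epoch counts are illustrative):

    import torch

    adam = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5000):                  # stage 1: Adam finds a good region of the loss landscape
        adam.zero_grad()
        loss = pinn_loss()
        loss.backward()
        adam.step()

    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500, line_search_fn="strong_wolfe")

    def closure():                         # L-BFGS re-evaluates the loss several times per step
        lbfgs.zero_grad()
        loss = pinn_loss()
        loss.backward()
        return loss

    lbfgs.step(closure)                    # stage 2: quasi-Newton refinement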

The figure by Markidis [10] shows different residual error curves over the training process. Markidis switches to the L-BFGS-B optimizer (an algorithm from the BFGS family) after 2000 epochs. In some cases the residual error can be improved by upwards of four orders of magnitude. [10]


Other Layers and Architectures

While there is research into different architectures, such as using convolutional layers [5], most research focuses on other aspects. Convolutions are commonly used for feature extraction in image processing, with pooling layers usually attached to compress the data. Physics-based systems neither have the structure of images, nor do they need compression [5]. Therefore convolutions are used in a different capacity in these networks. Fang uses convolutions to approximate "the differential operator to solve the PDEs instead of automatic differentiation" [5]. Ricci et al. use convolutions to create an embedding "that learns high quality, physically-meaningful representations of low-dimensional dynamical [systems] without supervision" [15].
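
As an illustration of the first idea (a generic finite-difference stencil expressed as a convolution, not Fang's exact scheme), a fixed kernel can approximate the Laplacian on a grid of network outputs instead of relying on AD:

    import torch
    import torch.nn.functional as F

    h = 0.01                                               # grid spacing (illustrative)
    # 5-point finite-difference stencil for the 2D Laplacian, written as a conv kernel.
    stencil = torch.tensor([[0.,  1., 0.],
                            [1., -4., 1.],
                            [0.,  1., 0.]]) / h**2
    kernel = stencil.view(1, 1, 3, 3)

    def laplacian(u_grid):
        # u_grid: network output sampled on a regular grid, shape (1, 1, H, W).
        return F.conv2d(u_grid, kernel)                    # interior points only (no padding)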
Recurrent NNs (RNNs) are used in the manner originally intended for them [21]. They and architectures like Long Short-Term Memory (LSTM) networks [12] are modified to replace numerical integration techniques for time-dependent problems. Mavi et al. use special LSTM cell units to mimic Euler and Runge-Kutta integration, which outperforms vanilla PINNs for temporal problems [12].
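
A minimal sketch of the underlying idea, namely wrapping a learned explicit Euler step in a recurrent update (this is not the peridynamic convolutional-LSTM architecture of [12]):

    import torch

    class EulerCell(torch.nn.Module):
        # Recurrent cell u_{t+1} = u_t + dt * f_theta(u_t), i.e. a learned explicit Euler step.
        def __init__(self, state_dim, hidden=32):
            super().__init__()
            self.f = torch.nn.Sequential(
                torch.nn.Linear(state_dim, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, state_dim),
            )

        def forward(self, u, dt):
            return u + dt * self.f(u)

    # Rolling the cell out over time replaces a numerical time integrator.
    cell, u, dt = EulerCell(state_dim=2), torch.zeros(1, 2), 0.01
    trajectory = []
    for _ in range(100):
        u = cell(u, dt)
        trajectory.append(u)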

The figure by Markidis [10] shows the performance of different activation functions for a Poisson problem with a smooth source. Here, variants of Swish and locally adaptive versions of the tanh function perform best. [10]

Papers

Trends in PINN research


PINNs were introduced in 2017 by Raissi et al. [25]. The trend in the first years of this field was to take this new concept and apply it to other problems. The approach itself was only rarely extended, and expectations of these networks were inflated. Achieving convergence, even on simple problems, is not trivial. Additionally, applying the technique to real-world problems was rarely feasible; almost all papers focus on academic examples of different differential equations.
Only recently have the approaches in this field of study been extended. New directions within the domain include testing other architectures, understanding the fundamental dynamics of the training process, and applying the technique to real-world problems.

Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations (PINN)

The field of hemodynamics focuses on the simulation and prediction of fluid-dynamic interactions of blood with itself or with vessels. Conventional Computational Fluid Dynamics (CFD) is sensitive to uncertainty in the available data, and boundary conditions are often not well known [1]. Raissi et al. uncover the hemodynamics of an aneurysm from limited observations [14].


The figure shows a schematic representation of the inference process proposed by Raissi et al. [14]. A) shows observation data, from which in B) a sampling model and the concentration can be estimated; C) shows the PINN with all physical soft constraints; D) is a comparison of the regressed results with reference results; E) compares flow streamlines from the regressed and reference solutions. [14] The independent variables t, x, y, z are passed through the network. The output of the network consists of the three spatial velocities (u, v, w), the pressure p, and the concentration c. Multiple scans of an aneurysm are used to define a 3D clone of the whole system. AD is used to calculate derivatives of the network outputs, which define different loss functions derived from the Navier-Stokes equations and no-slip boundaries. The boundary conditions on the in- and outlet are not known. The scans are a measure of the flow concentration, which is used as ground truth for the concentration and introduced to the system as a common data loss. Subfigures D) and E) show the results of their predictions. Raissi et al. achieve accurate predictions for inferring hemodynamics in a 3D aneurysm. [14]


Cardiac Activation Mapping (PINN)

Atrial fibrillation often goes unrecognized [3]. Its detection is "the basis for risk stratification for stroke and appropriate decision making about the need for anticoagulant therapy" [3]. The detection procedure relies on interpolating a complete electro-anatomic map of the heart's chambers. Currently there are no strategies for suggesting the best way of obtaining these measurements to minimize uncertainty in the interpolation; today, samples are taken in a random fashion. This leads to noisy measurements, which can result in nonphysical behavior, like artificially high conduction velocities, and increased procedure time. [16]
 
Sahli Costabal et al. [16] suggest two different vanilla PINNs constrained with the Eikonal equation. Additionally, and more importantly, they use randomized prior functions to obtain a measure of the uncertainty of the model. They introduce duplicate architectures with different prior functions whose weights have been randomly sampled with Glorot initialization. Furthermore, they add Gaussian noise to the data and compile the final prediction as the mean output of all networks. This results in an approach that is robust to noise and yields a good measure of uncertainty. [16]
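
A rough sketch of the randomized-prior idea, in one common formulation and not necessarily the exact construction in [16]: each ensemble member is a trainable network plus a fixed, Glorot-initialized prior network, and the ensemble mean and spread give the prediction and its uncertainty.

    import torch

    def make_member(in_dim=2, out_dim=1, hidden=32):
        def mlp():
            return torch.nn.Sequential(
                torch.nn.Linear(in_dim, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, out_dim),
            )
        prior = mlp()                                  # randomly initialized, then frozen
        for p in prior.parameters():
            if p.dim() > 1:
                torch.nn.init.xavier_normal_(p)        # Glorot initialization
            p.requires_grad_(False)
        trainable = mlp()                              # trained with the PINN loss on noisy data
        return prior, trainable

    members = [make_member() for _ in range(10)]       # ensemble of networks

    def predict(x):
        preds = torch.stack([train(x) + prior(x) for prior, train in members])
        return preds.mean(0), preds.std(0)             # mean prediction and uncertainty estimate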

Their model is able to simulate behavior which cannot be simulated with other state-of-the-art models, like the collision of wave-fronts. With their approach they are able to give precise suggestions for measurement locations which minimize the uncertainty in the model. This would result in more precise results and reduced procedure times. Likewise, the physical behavior is correctly captured and does not yield artifacts like artificially high conduction velocities. [16]


The figure from [16] shows the uncertainty quantification and training times for different numbers of networks on a flat 2D domain, i.e. the effect of the number of networks on the prediction by Sahli Costabal et al. [16]. The black circles display sampling points. The rightmost plot shows that there is a trade-off between entropy reduction and normalized training time, i.e. between cost and accuracy. Especially in clinical trials this arrangement would need to be carefully selected. For 100 networks, convergence takes only about four times as long as for a single network. Especially with more powerful computers (the authors used a laptop with 8 cores) this would lead to very low computational cost for significantly better measurements. [16]

Their approach still needs to be improved significantly to be ready for clinical trials, especially because they ignored the anisotropy of conduction in cardiac tissue. To account for this they would need information on the orientation of fibers in the atria and ventricles. [16]

PINNs as Linear Solvers (PINNs)

An interesting new direction in PINN research is comparing conventional methods with PINNs. Markidis [10] focuses on solving a Poisson-based problem.

He characterizes PINNs as linear solvers by using conventional soft-constrained networks. In this work he tries to quantify the effect of different parts of the PINN (such as the activation function, breadth and depth, sampling, and many more) on the training error over training time. One interesting result is that, for this problem setting, the type of sampling did not have a considerable effect on the training performance of the networks. He also highlights the effectiveness of adaptive activation functions for mitigating the vanishing gradient problem. [10]

Another aspect emphasized in greater detail is the effect of transfer learning on training time. For smooth source terms, transfer learning yields two orders of magnitude of improvement in the training error in less than 1000 epochs, achieving a kind of super-convergence. The main challenge is finding the right pre-trained PINN for a new problem. [10]
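
In practice this amounts to initializing a new PINN from the weights of one trained on a related problem; a generic sketch (with an illustrative file name, and model/pinn_loss as in the earlier sketches) could look like this:

    import torch

    # Load weights of a PINN pre-trained on a related (e.g. smooth-source) problem.
    model.load_state_dict(torch.load("pinn_smooth_source.pt"))

    # Fine-tune on the new problem; the loss is rebuilt with the new source term.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(1000):                 # far fewer epochs than training from scratch
        optimizer.zero_grad()
        loss = pinn_loss()
        loss.backward()
        optimizer.step()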

The results in [10] show that PINNs applied on their own are significantly outperformed by conventional numerical solvers in terms of accuracy and computational cost. However, PINNs converge on low-frequency components first and only then resolve higher ones (a consequence of the F-principle). Markidis therefore concludes that PINNs on their own are not useful for high-precision results. Conventional solvers, in contrast, converge on high-frequency components first and only then resolve lower-frequency components. Accordingly, he proposes combining the two to obtain a solver which excels at both. [10]


The figure shows the solution (first row) and its 2-dimensional Fourier transform (lower row). It illustrates that the numerical solver only resolves the low frequencies in the center of the image late in the iteration, whereas the PINN handles them well from the beginning.

The proposed algorithm resembles conventional multigrid solvers. Multigrid solvers leverage the fact that solutions on different grid sizes are similar: solutions on coarser grids are faster to compute, while finer grids yield more accurate results. In this algorithm the PINN is used to calculate a solution on a coarse level; the solution is then refined, and a multigrid cycle with conventional numerical methods is added. [7]
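
Conceptually, the hybrid loop could look roughly like the following high-level sketch for a 1D Poisson problem, where plain Jacobi sweeps stand in for a full multigrid cycle and the helper names are illustrative rather than taken from [7] or [10]:

    import numpy as np

    def hybrid_poisson_solve(pinn, f, n_fine=256, sweeps=100):
        # The (mesh-free) PINN provides a cheap low-frequency initial guess; classical
        # relaxation on the fine grid then resolves the high-frequency error.
        x = np.linspace(0.0, 1.0, n_fine)
        h = x[1] - x[0]

        u = np.array(pinn(x), dtype=float)        # 1) evaluate the PINN on the fine grid
        rhs = f(x)

        for _ in range(sweeps):                   # 2) Jacobi sweeps for -u'' = f, Dirichlet BCs fixed
            u[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * rhs[1:-1])
        return x, u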


The figure shows execution times of different solvers for the Poisson problem in Grossmann et al. [7], comparing the hybrid approach with a conventional solver. The hybrid multigrid solver is faster for problems on large grids. [7]

Grossmann et al. show that PINNs are still far from replacing conventional methods entirely. Nevertheless, they establish an algorithm that can keep up with state-of-the-art solvers for specific problems. They conclude that PINNs, or similar deep learning approaches, will find their way into traditional HPC solvers and will play an essential role in next-generation tools for solving linear systems. [7]

Conclusion


PINNs can be used to solve physical problems for which differential equations are available. Their greatest strength is that they are able to solve systems even with missing information; they can be employed in settings where conventional solvers fail. Another big strength emerges when they are not employed on their own but combined with conventional techniques. Furthermore, they seem promising for solving problems in high-dimensional domains, where conventional methods suffer from the curse of dimensionality.
Currently, these approaches are limited by the availability of good frameworks. Furthermore, applying PINNs to problems for which conventional methods already deliver acceptable solutions remains academic: PINNs require upwards of two orders of magnitude more time to solve the same problem, often to a lower degree of accuracy. Moreover, the field of study was stuck in one place for a long time, not innovating new techniques but applying the same technique to different problem statements. This trend seems to be on the decline, with recent promising architectures and precise research into failure modes [9].
In conclusion, PINNs are promising, but currently fail to generalize to real-world problems. Most problem statements are very academic and lack a connection to real-world applications. In my assessment, PINNs will become an integral part of solving very specialized problems and will become part of many conventional solvers. I hold the impression that they will become an important back-end tool for most research involving differential equations or solvers in general. Thus, it is important to investigate this field of study further, particularly by developing techniques and methods that can be applied to real-world problems or by focusing on the underlying mechanics of the technique itself.


References

[1] Amirhossein Arzani, Jian-Xun Wang, and Roshan M D'Souza. Uncovering near-wall blood flow from sparse data with physics-informed neural networks. Physics of Fluids, 33(7):071905, 2021.
[2] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18:1–43, 2018.
[3] Emelia J Benjamin, Salim S Virani, Clifton W Callaway, Alanna M Chamberlain, Alexander R Chang, Susan Cheng, Stephanie E Chiuve, Mary Cushman, Francesca N Delling, Rajat Deo, et al. Heart disease and stroke statistics—2018 update: a report from the American Heart Association. Circulation, 137(12):e67–e492, 2018.
[4] Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics-informed neural networks: where we are and what's next. Journal of Scientific Computing, 92(3):88, 2022.
[5] Zhiwei Fang. A high-efficient hybrid physics-informed neural networks based on convolutional neural network. IEEE Transactions on Neural Networks and Learning Systems, 33(10):5514–5526, 2021.
[6] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
[7] Tamara G Grossmann, Urszula Julia Komorowska, Jonas Latz, and Carola-Bibiane Schönlieb. Can physics-informed neural networks beat the finite element method? arXiv preprint arXiv:2302.04107, 2023.
[8] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[9] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548–26560, 2021.
[10] Stefano Markidis. The old and the new: Can physics-informed deep-learning replace traditional linear solvers? Frontiers in Big Data, 4:669097, 2021.
[11] Abhilash Mathews, Manaure Francisquez, Jerry W Hughes, David R Hatch, Ben Zhu, and Barrett N Rogers. Uncovering turbulent plasma dynamics via deep learning from partial observations. Physical Review E, 104(2), 2021.
[12] Arda Mavi, Ali Can Bekar, Ehsan Haghighat, and Erdogan Madenci. An unsupervised latent/output physics-informed convolutional-LSTM network for solving partial differential equations using peridynamic differential operator. Computer Methods in Applied Mechanics and Engineering, 407:115944, 2023.
[13] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
[14] Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026–1030, 2020.
[15] Matthew Ricci, Noa Moriel, Zoe Piran, and Mor Nitzan. Phase2vec: Dynamical systems embedding with a physics-informed convolutional network. arXiv preprint arXiv:2212.03857, 2022.
[16] Francisco Sahli Costabal, Yibo Yang, Paris Perdikaris, Daniel E Hurtado, and Ellen Kuhl. Physics-informed neural networks for cardiac activation mapping. Frontiers in Physics, 8:42, 2020.
[17] Mohammad Sarabian, Hessam Babaee, and Kaveh Laksari. Physics-informed neural networks for improving cerebral hemodynamics predictions. arXiv preprint arXiv:2108.11498, 2021.
[18] Luning Sun, Han Gao, Shaowu Pan, and Jian-Xun Wang. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Computer Methods in Applied Mechanics and Engineering, 361:112732, 2020.
[19] Mohammad Taufik, Tariq Alkhalifah, and Umair Waheed. A stable neural network-based eikonal tomography using hard-constrained measurements. Authorea Preprints, 2023.
[20] Aslak Tveito, Hans Petter Langtangen, Bjørn Frederik Nielsen, and Xing Cai. Elements of Scientific Computing. Springer, 2010.
[21] Felipe AC Viana, Renato G Nascimento, Arinan Dourado, and Yigit A Yucesan. Estimating model inadequacy in ordinary differential equations with physics-informed neural networks. Computers & Structures, 245:106458, 2021.
[22] Lechao Xiao and Jeffrey Pennington. What breaks the curse of dimensionality in deep learning?, 2021.
[23] Mykhaylo Zayats, Malgorzata J Zimoń, Kyongmin Yeo, and Sergiy Zhuk. Super resolution for turbulent flows in 2D: Stabilized physics informed neural networks. In 2022 IEEE 61st Conference on Decision and Control (CDC), pages 3377–3382. IEEE, 2022.
[24] Qiming Zhu, Zeliang Liu, and Jinhui Yan. Machine learning for metal additive manufacturing: predicting temperature and melt pool fluid dynamics using physics-informed neural networks. Computational Mechanics, 67:619–635, 2021.
[25] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
[26] Kun Su, Kaizhi Qian, Eli Shlizerman, Antonio Torralba, and Chuang Gan. Physics-driven diffusion models for impact sound synthesis from videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
[27] Jonathan Shlomi, Peter Battaglia, and Jean-Roch Vlimant. Graph neural networks in particle physics. Machine Learning: Science and Technology, 2020.
[28] Silviu-Marian Udrescu and Max Tegmark. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 2020.
