Tutorials and Doc (#191)

* Tutorial doc update
* update doc tutorial
* doc not compiling

---------

Co-authored-by: Dario Coscia <dcoscia@euclide.maths.sissa.it>
Co-authored-by: Dario Coscia <dariocoscia@Dario-Coscia.local>
This commit is contained in:
Nicola Demo
2023-10-23 12:48:09 +02:00
parent ac829aece9
commit 0c8072274e
93 changed files with 2306 additions and 1685 deletions

View File

@@ -0,0 +1,77 @@
Code Documentation
==================
Welcome to the PINA documentation! Here you can find the modules of the package, divided into different sections.
PINA Features
-------------
.. toctree::
:titlesonly:
LabelTensor <label_tensor.rst>
Condition <condition.rst>
Plotter <plotter.rst>
Problem
-------
.. toctree::
:titlesonly:
AbstractProblem <problem/abstractproblem.rst>
SpatialProblem <problem/spatialproblem.rst>
TimeDependentProblem <problem/timedepproblem.rst>
ParametricProblem <problem/parametricproblem.rst>
Solvers
-------
.. toctree::
:titlesonly:
SolverInterface <solvers/solver_interface.rst>
PINN <solvers/pinn.rst>
Models
------
.. toctree::
:titlesonly:
Network <models/network.rst>
FeedForward <models/fnn.rst>
MultiFeedForward <models/multifeedforward.rst>
ResidualFeedForward <models/fnn_residual.rst>
DeepONet <models/deeponet.rst>
FNO <models/fno.rst>
Layers
------
.. toctree::
:titlesonly:
ContinuousConv <layers/convolution.rst>
Geometries
----------
.. toctree::
:titlesonly:
Location <geometry/location.rst>
CartesianDomain <geometry/cartesian.rst>
EllipsoidDomain <geometry/ellipsoid.rst>
SimplexDomain <geometry/simplex.rst>
Loss
------
.. toctree::
:titlesonly:
LossInterface <loss/loss_interface.rst>
LpLoss <loss/lploss.rst>
PowerLoss <loss/powerloss.rst>

View File

@@ -0,0 +1,27 @@
PINA Tutorials
==============
In this folder we collect useful tutorials to help you understand the principles and the potential of **PINA**.
.. toctree::
:maxdepth: 3
:hidden:
Getting started with PINA
-------------------------
* :doc:`Introduction to PINA for Physics Informed Neural Networks training<tutorials/tutorial1/tutorial>`
* :doc:`Building custom geometries with PINA Location class<tutorials/tutorial6/tutorial>`
Physics Informed Neural Networks
--------------------------------
* :doc:`Two dimensional Poisson problem using Extra Features Learning<tutorials/tutorial2/tutorial>`
* :doc:`Two dimensional Wave problem with hard constraint<tutorials/tutorial3/tutorial>`
Neural Operator Learning
------------------------
* :doc:`Two dimensional Darcy flow using the Fourier Neural Operator<tutorials/tutorial5/tutorial>`
Supervised Learning
-------------------
* :doc:`Unstructured convolutional autoencoder via continuous convolution<tutorials/tutorial4/tutorial>`

View File

@@ -1,61 +0,0 @@
Code Documentation
==================
.. toctree::
:maxdepth: 3
PINN <pinn.rst>
LabelTensor <label_tensor.rst>
Condition <condition.rst>
Location <location.rst>
Operators <operators.rst>
Plotter <plotter.rst>
Geometries
----------
.. toctree::
:maxdepth: 3
Span <span.rst>
Ellipsoid <ellipsoid.rst>
Model
-----
.. toctree::
:maxdepth: 3
Network <network.rst>
FeedForward <fnn.rst>
DeepONet <deeponet.rst>
MultiFeedForward <multifeedforward.rst>
Layers
------
.. toctree::
:maxdepth: 3
ContinuousConv <convolution.rst>
Loss
------
.. toctree::
:maxdepth: 3
LpLoss <lploss.rst>
PowerLoss <powerloss.rst>
Problem
-------
.. toctree::
:maxdepth: 3
AbstractProblem <abstractproblem.rst>
SpatialProblem <spatialproblem.rst>
TimeDependentProblem <timedepproblem.rst>
ParametricProblem <parametricproblem.rst>

View File

@@ -1,10 +0,0 @@
Ellipsoid
===========
.. currentmodule:: pina.ellipsoid
.. automodule:: pina.ellipsoid
.. autoclass:: Ellipsoid
:members:
:show-inheritance:
:noindex:

View File

@@ -0,0 +1,10 @@
CartesianDomain
===============
.. currentmodule:: pina.geometry.cartesian
.. automodule:: pina.geometry.cartesian
.. autoclass:: CartesianDomain
:members:
:show-inheritance:
:noindex:

View File

@@ -0,0 +1,10 @@
EllipsoidDomain
===============
.. currentmodule:: pina.geometry.ellipsoid
.. automodule:: pina.geometry.ellipsoid
.. autoclass:: EllipsoidDomain
:members:
:show-inheritance:
:noindex:

View File

@@ -1,8 +1,8 @@
Location
=========
.. currentmodule:: pina.location
.. currentmodule:: pina.geometry.location
.. automodule:: pina.location
.. automodule:: pina.geometry.location
.. autoclass:: Location
:members:

View File

@@ -0,0 +1,10 @@
SimplexDomain
=============
.. currentmodule:: pina.geometry.simplex
.. automodule:: pina.geometry.simplex
.. autoclass:: SimplexDomain
:members:
:show-inheritance:
:noindex:

View File

@@ -1,10 +1,10 @@
ContinuousConv
==============
ContinuousConvBlock
===================
.. currentmodule:: pina.model.layers.convolution_2d
.. automodule:: pina.model.layers.convolution_2d
.. autoclass:: ContinuousConv
.. autoclass:: ContinuousConvBlock
:members:
:private-members:
:undoc-members:

View File

@@ -0,0 +1,10 @@
LossInterface
=============
.. currentmodule:: pina.loss
.. automodule:: pina.loss
.. autoclass:: LossInterface
:members:
:private-members:
:show-inheritance:

View File

@@ -0,0 +1,10 @@
ResidualFeedForward
===================
.. currentmodule:: pina.model.feed_forward
.. automodule:: pina.model.feed_forward
.. autoclass:: ResidualFeedForward
:members:
:private-members:
:show-inheritance:

View File

@@ -0,0 +1,10 @@
FNO
===========
.. currentmodule:: pina.model.fno
.. automodule:: pina.model.fno
.. autoclass:: FNO
:members:
:private-members:
:show-inheritance:

View File

@@ -0,0 +1,10 @@
SolverInterface
===============
.. currentmodule:: pina.solvers.solver
.. automodule:: pina.solvers.solver
.. autoclass:: SolverInterface
:members:
:show-inheritance:
:noindex:

View File

@@ -1,10 +0,0 @@
Span
===========
.. currentmodule:: pina.span
.. automodule:: pina.span
.. autoclass:: Span
:members:
:show-inheritance:
:noindex:

View File

@@ -1,279 +0,0 @@
Tutorial 1: Physics Informed Neural Networks on PINA
====================================================
In this tutorial, we will demonstrate a typical use case of PINA on a
toy problem. Specifically, the tutorial aims to introduce the following
topics:
- Defining a PINA Problem,
- Building a ``pinn`` object,
- Sampling points in a domain
These are the three main steps needed **before** training a Physics
Informed Neural Network (PINN). We will show each step in detail, and at
the end, we will solve the problem.
PINA Problem
------------
Initialize the ``Problem`` class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem definition in the PINA framework is done by building a python
``class``, which inherits from one or more problem classes
(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``)
depending on the nature of the problem. Below is an example.
Simple Ordinary Differential Equation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider the following:
.. math::
\begin{equation}
\begin{cases}
\frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
u(x=0) &= 1 \\
\end{cases}
\end{equation}
with the analytical solution :math:`u(x) = e^x`. In this case, our ODE
depends only on the spatial variable :math:`x\in(0,1)`, meaning that
our ``Problem`` class is going to be inherited from the
``SpatialProblem`` class:
.. code:: python
from pina.problem import SpatialProblem
from pina import CartesianDomain
class SimpleODE(SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
# other stuff ...
Notice that we define ``output_variables`` as a list of symbols,
indicating the output variables of our equation (in this case only
:math:`u`). The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.
What about if our equation is also time dependent? In this case, our
``class`` will inherit from both ``SpatialProblem`` and
``TimeDependentProblem``:
.. code:: ipython3
from pina.problem import SpatialProblem, TimeDependentProblem
from pina import CartesianDomain
class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
temporal_domain = CartesianDomain({'t': [0, 1]})
# other stuff ...
where we have included the ``temporal_domain`` variable, indicating the
time domain wanted for the solution.
In summary, using PINA, we can initialize a problem with a class which
inherits from three base classes: ``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``, depending on the type
of problem we are considering. For reference:
* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s)
* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation
* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation
Write the ``Problem`` class
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the ``Problem`` class is initialized, we need to represent the
differential equation in PINA. In order to do this, we need to load the
PINA operators from ``pina.operators`` module. Again, we'll consider
Equation (1) and represent it in PINA:
.. code:: ipython3
from pina.problem import SpatialProblem
from pina.operators import grad
from pina import Condition, CartesianDomain
from pina.equation.equation import Equation
import torch
class SimpleODE(SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
# defining the ode equation
def ode_equation(input_, output_):
# computing the derivative
u_x = grad(output_, input_, components=['u'], d=['x'])
# extracting the u input variable
u = output_.extract(['u'])
# calculate the residual and return it
return u_x - u
# defining the initial condition
def initial_condition(input_, output_):
# setting the initial value
value = 1.0
# extracting the u input variable
u = output_.extract(['u'])
# calculate the residual and return it
return u - value
# conditions to hold
conditions = {
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
}
# sampled points (see below)
input_pts = None
# defining the true solution
def truth_solution(self, pts):
return torch.exp(pts.extract(['x']))
After we define the ``Problem`` class, we need to write different class
methods, where each method is a function returning a residual. These
functions are the ones minimized during PINN optimization, given the
initial conditions. For example, in the domain :math:`[0,1]`, the ODE
equation (``ode_equation``) must be satisfied. We represent this by
returning its residual, i.e. the difference between the derivative of
``u`` and ``u`` itself, which the training drives towards 0. This is done
for all conditions (``ode_equation``, ``initial_condition``).
Once we have defined the function, we need to tell the neural network
where these methods are to be applied. To do so, we use the
``Condition`` class. In the ``Condition`` class, we pass the location
points and the function we want minimized on those points (other
possibilities are allowed, see the documentation for reference) as
parameters.
Finally, it's possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the predicted
solution compares to the analytical (true) one. Notice that the
``truth_solution`` function is a method of the ``Problem`` class, but it is
not mandatory for problem definition.
Build the ``PINN`` object
-------------------------
The basic requirements for building a ``PINN`` model are a ``Problem``
and a model. We have just covered the ``Problem`` definition. For the
model parameter, one can use either the default models provided in PINA
or a custom model. We will not go into the details of model definition
(see Tutorial2 and Tutorial3 for more details on model definition).
.. code:: ipython3
from pina.model import FeedForward
from pina import PINN
# initialize the problem
problem = SimpleODE()
# build the model
model = FeedForward(
layers=[10, 10],
func=torch.nn.Tanh,
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)
)
# create the PINN object
pinn = PINN(problem, model)
Creating the ``PINN`` object is fairly simple. Different optional
parameters include: optimizer, batch size, ... (see
`documentation <https://mathlab.github.io/PINA/>`__ for reference).
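For instance, a hedged sketch of a customised setup (the ``optimizer`` and
``optimizer_kwargs`` keyword names below are taken from the other tutorials in
this collection and may differ between PINA versions):
.. code:: python

    import torch

    # assumed keyword arguments: swap the optimizer and tune its hyperparameters
    pinn = PINN(
        problem,
        model,
        optimizer=torch.optim.AdamW,
        optimizer_kwargs={'lr': 0.001, 'weight_decay': 1e-8},
    )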
Sample points in the domain
---------------------------
Once the ``PINN`` object is created, we need to generate the points for
starting the optimization. To do so, we use the ``discretise_domain``
method of the problem. Below are three examples of sampling methods
on the :math:`[0,1]` domain:
.. code:: ipython3
# sampling 20 points in [0, 1] through discretization
pinn.problem.discretise_domain(n=20, mode='grid', variables=['x'])
# sampling 20 points in (0, 1) through latin hypercube sampling
pinn.problem.discretise_domain(n=20, mode='latin', variables=['x'])
# sampling 20 points in (0, 1) randomly
pinn.problem.discretise_domain(n=20, mode='random', variables=['x'])
Very simple training and plotting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once we have defined the PINA model, created a network, and sampled
points in the domain, we have everything necessary for training a PINN.
To do so, we make use of the ``Trainer`` class.
.. code:: ipython3
from pina import Trainer
# initialize trainer
trainer = Trainer(pinn)
# train the model
trainer.train()
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/u/n/ndemo/.local/lib/python3.9/site-packages/lightning/pytorch/loops/utilities.py:72: PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
rank_zero_warn(
2023-10-17 10:02:21.318700: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:02:21.345355: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:02:23.572602: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 141
----------------------------------------
141 Trainable params
0 Non-trainable params
141 Total params
0.001 Total estimated model params size (MB)
.. parsed-literal::
Training: 0it [00:00, ?it/s]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.


View File

@@ -1,204 +0,0 @@
Tutorial 3: resolution of wave equation with hard constraint PINNs.
===================================================================
The problem definition
----------------------
In this tutorial we present how to solve the wave equation using hard
constraint PINNs. To do so, we will build a custom ``torch`` model and
pass it to the ``PINN`` solver.
The problem is written in the following form:
.. raw:: latex
\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}
where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square, and the velocity in the standard wave equation is fixed to one.
First of all, some useful imports.
.. code:: ipython3
import torch
from pina.problem import SpatialProblem, TimeDependentProblem
from pina.operators import laplacian, grad
from pina.geometry import CartesianDomain
from pina.solvers import PINN
from pina.trainer import Trainer
from pina.equation import Equation
from pina.equation.equation_factory import FixedValue
from pina import Condition, Plotter
Now, the wave problem is written in PINA code as a class, inheriting
from ``SpatialProblem`` and ``TimeDependentProblem`` since we deal with
spatial, and time dependent variables. The equations are written as
``conditions`` that should be satisfied in the corresponding domains.
``truth_solution`` is the exact solution which will be compared with the
predicted one.
.. code:: ipython3
class Wave(TimeDependentProblem, SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
temporal_domain = CartesianDomain({'t': [0, 1]})
def wave_equation(input_, output_):
u_t = grad(output_, input_, components=['u'], d=['t'])
u_tt = grad(u_t, input_, components=['dudt'], d=['t'])
nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
return nabla_u - u_tt
def initial_condition(input_, output_):
u_expected = (torch.sin(torch.pi*input_.extract(['x'])) *
torch.sin(torch.pi*input_.extract(['y'])))
return output_.extract(['u']) - u_expected
conditions = {
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)),
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)),
'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)),
'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)),
}
def wave_sol(self, pts):
return (torch.sin(torch.pi*pts.extract(['x'])) *
torch.sin(torch.pi*pts.extract(['y'])) *
torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t'])))
truth_solution = wave_sol
problem = Wave()
Hard Constraint Model
---------------------
After the problem, a **torch** model is needed to solve the PINN.
Usually, many models are already implemented in ``PINA``, but the user
has the possibility to build his/her own model in ``PyTorch``. The hard
constraint we impose is on the boundary of the spatial domain.
Specifically, our solution is written as:
.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),
where :math:`NN` is the neural net output. This neural network takes as
input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
and provides the unknown field :math:`u`. By construction, it is zero on
the boundaries. The residuals of the equations are evaluated at several
sampling points (which the user can manipulate using the method
``discretise_domain``) and the loss minimized by the neural network is
the sum of the residuals.
.. code:: ipython3
class HardMLP(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
torch.nn.Tanh(),
torch.nn.Linear(20, 20),
torch.nn.Tanh(),
torch.nn.Linear(20, output_dim))
# here in the forward we implement the hard constraints
def forward(self, x):
hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
return hard*self.layers(x)
Train and Inference
-------------------
In this tutorial, the neural network is trained for 3000 epochs with a
learning rate of 0.001 (default in ``PINN``). Training takes
approximately 1 minute.
.. code:: ipython3
pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
problem.discretise_domain(1000, 'random', locations=['D','t0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
trainer = Trainer(pinn, max_epochs=3000)
trainer.train()
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial3/lightning_logs
2023-10-17 10:24:02.163746: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:24:02.218849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:24:07.063047: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 521
----------------------------------------
521 Trainable params
0 Non-trainable params
521 Total params
0.002 Total estimated model params size (MB)
.. parsed-literal::
Training: 0it [00:00, ?it/s]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=3000` reached.
Notice that the loss on the boundaries of the spatial domain is exactly
zero, as expected! After the training is completed one can now plot some
results using the ``Plotter`` class of **PINA**.
.. code:: ipython3
plotter = Plotter()
# plotting at fixed time t = 0.0
plotter.plot(trainer, fixed_variables={'t': 0.0})
# plotting at fixed time t = 0.5
plotter.plot(trainer, fixed_variables={'t': 0.5})
# plotting at fixed time t = 1.
plotter.plot(trainer, fixed_variables={'t': 1.0})
.. image:: output_14_0.png
.. image:: output_14_1.png
.. image:: output_14_2.png


View File

@@ -0,0 +1,385 @@
Tutorial: Physics Informed Neural Networks on PINA
==================================================
In this tutorial, we will demonstrate a typical use case of **PINA** on
a toy problem, following the standard API procedure.
Specifically, the tutorial aims to introduce the following topics:
- Explaining how to build a **PINA** Problem,
- Showing how to generate data for ``PINN`` training
These are the two main steps needed **before** starting the modelling
optimization (choose model and solver, and train). We will show each
step in detail, and at the end, we will solve a simple Ordinary
Differential Equation (ODE) problem using the ``PINN`` solver.
Build a PINA problem
--------------------
Problem definition in the **PINA** framework is done by building a
python ``class``, which inherits from one or more problem classes
(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
depending on the nature of the problem. Below is an example.
Simple Ordinary Differential Equation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider the following:
.. math::
\begin{equation}
\begin{cases}
\frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
u(x=0) &= 1 \\
\end{cases}
\end{equation}
with the analytical solution :math:`u(x) = e^x`. In this case, our ODE
depends only on the spatial variable :math:`x\in(0,1)`, meaning that
our ``Problem`` class is going to be inherited from the
``SpatialProblem`` class:
.. code:: python
from pina.problem import SpatialProblem
from pina import CartesianDomain
class SimpleODE(SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
# other stuff ...
Notice that we define ``output_variables`` as a list of symbols,
indicating the output variables of our equation (in this case only
:math:`u`). This is done because in **PINA** the ``torch.Tensor``\ s are
labelled, allowing the user maximal flexibility in manipulating the
tensors. The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.
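As a quick aside, the following minimal sketch shows how labelled tensors
behave (the ``LabelTensor`` constructor is assumed here from the ``pina``
package; see the ``LabelTensor`` documentation for the exact API):
.. code:: python

    import torch
    from pina import LabelTensor

    # a 4x2 tensor whose columns are labelled 'x' and 'y'
    pts = LabelTensor(torch.rand(4, 2), ['x', 'y'])
    # columns can then be selected by label rather than by index
    x_only = pts.extract(['x'])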
What about if our equation is also time dependent? In this case, our
``class`` will inherit from both ``SpatialProblem`` and
``TimeDependentProblem``:
.. code:: ipython3
from pina.problem import SpatialProblem, TimeDependentProblem
from pina import CartesianDomain
class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
temporal_domain = CartesianDomain({'t': [0, 1]})
# other stuff ...
where we have included the ``temporal_domain`` variable, indicating the
time domain wanted for the solution.
In summary, using **PINA**, we can initialize a problem with a class
which inherits from different base classes: ``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``, and so on depending on
the type of problem we are considering. Here are some examples (more on
the official documentation):
* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s)
* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation
* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation
* ``AbstractProblem`` :math:`\rightarrow` any **PINA** problem inherits from here
Write the problem class
~~~~~~~~~~~~~~~~~~~~~~~
Once the ``Problem`` class is initialized, we need to represent the
differential equation in **PINA**. In order to do this, we need to load
the **PINA** operators from the ``pina.operators`` module. Again, we'll
consider Equation (1) and represent it in **PINA**:
.. code:: ipython3
from pina.problem import SpatialProblem
from pina.operators import grad
from pina import Condition
from pina.geometry import CartesianDomain
from pina.equation import Equation, FixedValue
import torch
class SimpleODE(SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
# defining the ode equation
def ode_equation(input_, output_):
# computing the derivative
u_x = grad(output_, input_, components=['u'], d=['x'])
# extracting the u input variable
u = output_.extract(['u'])
# calculate the residual and return it
return u_x - u
# conditions to hold
conditions = {
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)), # We fix initial condition to value 1
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)), # We wrap the python equation using Equation
}
# sampled points (see below)
input_pts = None
# defining the true solution
def truth_solution(self, pts):
return torch.exp(pts.extract(['x']))
problem = SimpleODE()
After we define the ``Problem`` class, we need to write different class
methods, where each method is a function returning a residual. These
functions are the ones minimized during PINN optimization, given the
initial conditions. For example, in the domain :math:`[0,1]`, the ODE
equation (``ode_equation``) must be satisfied. We represent this by
returning its residual, i.e. the difference between the derivative of
``u`` and ``u`` itself, which the training drives towards 0. This is done
for all conditions. Notice that we do not pass directly a ``python``
function, but an ``Equation`` object, which is initialized with the
``python`` function. This is done so that all the computations, and
internal checks are done inside **PINA**.
Once we have defined the function, we need to tell the neural network
where these methods are to be applied. To do so, we use the
``Condition`` class. In the ``Condition`` class, we pass the location
points and the equation we want minimized on those points (other
possibilities are allowed, see the documentation for reference).
Finally, it's possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the predicted
solution compares to the analytical (true) one. Notice that the
``truth_solution`` function is a method of the ``Problem`` class, but it is
not mandatory for problem definition.
Generate data
-------------
Data for training can come in the form of direct numerical simulation
results, or points in the domains. In case we do unsupervised learning,
we just need the collocation points for training, i.e. points where we
want to evaluate the neural network. Sampling points in **PINA** is very
easy; here we show three examples using the ``.discretise_domain``
method of the ``AbstractProblem`` class.
.. code:: ipython3
# sampling 20 points in [0, 1] through discretization in all locations
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')
# sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])
# sampling 20 points in (0, 1) randomly
problem.discretise_domain(n=20, mode='random', variables=['x'])
We are going to use latin hypercube points for sampling. We need to
sample in the domain of each condition; in our case we sample in ``D``
and ``x0``.
.. code:: ipython3
# sampling for training
problem.discretise_domain(1, 'random', locations=['x0'])
problem.discretise_domain(20, 'lh', locations=['D'])
The points are saved in a python ``dict``, and can be accessed by
calling the attribute ``input_pts`` of the problem
.. code:: ipython3
print('Input points:', problem.input_pts)
print('Input points labels:', problem.input_pts['D'].labels)
.. parsed-literal::
Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8569]],
[[0.9478]],
[[0.3030]],
[[0.8182]],
[[0.4116]],
[[0.6687]],
[[0.5394]],
[[0.9927]],
[[0.6082]],
[[0.4605]],
[[0.2859]],
[[0.7321]],
[[0.5624]],
[[0.1303]],
[[0.2402]],
[[0.0182]],
[[0.0714]],
[[0.3697]],
[[0.7770]],
[[0.1784]]])}
Input points labels: ['x']
To visualize the sampled points we can use the ``.plot_samples`` method
of the ``Plotter`` class
.. code:: ipython3
from pina import Plotter
pl = Plotter()
pl.plot_samples(problem=problem)
.. image:: tutorial_files/tutorial_16_0.png
Perform a small training
------------------------
Once we have defined the problem and generated the data we can start the
modelling. Here we will choose a ``FeedForward`` neural network
available in ``pina.model``, and we will train using the ``PINN`` solver
from ``pina.solvers``. We highlight that this training is fairly simple;
for more advanced settings consider the tutorials in the **Physics Informed
Neural Networks** section of **Tutorials**. For training we use the
``Trainer`` class from ``pina.trainer``. Here we show a very short
training and some methods for plotting the results. Notice that by
default all relevant metrics (e.g. the MSE error during training) are
tracked using a ``lightning`` logger, by default ``CSVLogger``.
If you want to track the metrics yourself without a logger, use
``pina.callbacks.MetricTracker``.
.. code:: ipython3
from pina import PINN, Trainer
from pina.model import FeedForward
from pina.callbacks import MetricTracker
# build the model
model = FeedForward(
layers=[10, 10],
func=torch.nn.Tanh,
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)
)
# create the PINN object
pinn = PINN(problem, model)
# create the trainer
trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer.train()
.. parsed-literal::
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
.. parsed-literal::
Epoch 1499: : 1it [00:00, 143.58it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1500` reached.
.. parsed-literal::
Epoch 1499: : 1it [00:00, 65.39it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]
After the training we can inspect the trainer's logged metrics (by default
**PINA** logs the mean square error residual loss). The logged metrics can be
accessed online using one of the ``Lightning`` loggers. The final loss can be
accessed through ``trainer.logged_metrics``.
.. code:: ipython3
# inspecting final loss
trainer.logged_metrics
.. parsed-literal::
{'mean_loss': tensor(1.0938e-05),
'x0_loss': tensor(1.3328e-07),
'D_loss': tensor(2.1743e-05)}
By using the ``Plotter`` class from **PINA** we can also do some
qualitative plots of the solution.
.. code:: ipython3
# plotting the solution
pl.plot(trainer=trainer)
.. image:: tutorial_files/tutorial_23_0.png
The predicted solution overlaps the analytical one, and the two are
practically indistinguishable. We can also easily plot the loss:
.. code:: ipython3
pl.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)
.. image:: tutorial_files/tutorial_25_0.png
As we can see, the loss has not reached a minimum, suggesting that we
could train for longer.
What's next?
------------
Congratulations, you have completed the introductory tutorial of **PINA**! There
are multiple directions you can go now:
1. Train the network for longer or with different layer sizes and assess
the final accuracy
2. Train the network using other types of models (see ``pina.model``)
3. Train on GPU and benchmark the speed
4. Many more…


View File

@@ -1,27 +1,13 @@
Tutorial 2: resolution of Poisson problem and usage of extra-features
=====================================================================
The problem definition
~~~~~~~~~~~~~~~~~~~~~~
Tutorial: Two dimensional Poisson problem using Extra Features Learning
=======================================================================
This tutorial presents how to solve with Physics-Informed Neural
Networks a 2D Poisson problem with Dirichlet boundary conditions. Using
extrafeatures.
The problem is written as:
.. raw:: latex
\begin{equation}
\begin{cases}
\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}
where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square.
Networks (PINNs) a 2D Poisson problem with Dirichlet boundary
conditions. We will train both with standard PINN training and with
extra features. For more insights on extra-feature learning please read
`An extended physics informed neural network for preliminary analysis of
parametric optimal control
problems <https://www.sciencedirect.com/science/article/abs/pii/S0898122123002018>`__.
First of all, some useful imports.
@@ -41,9 +27,22 @@ First of all, some useful imports.
from pina import Condition, LabelTensor
from pina.callbacks import MetricTracker
Now, the Poisson problem is written in PINA code as a class. The
The problem definition
----------------------
The two-dimensional Poisson problem is mathematically written as:
:raw-latex:`\begin{equation}
\begin{cases}
\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square.
The Poisson problem is written in **PINA** code as a class. The
equations are written as *conditions* that should be satisfied in the
corresponding domains. *truth\_solution* is the exact solution which
corresponding domains. The *truth_solution* is the exact solution which
will be compared with the predicted one.
.. code:: ipython3
@@ -58,6 +57,7 @@ will be compared with the predicted one.
laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
return laplacian_u - force_term
# here we write the problem conditions
conditions = {
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)),
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)),
@@ -80,8 +80,8 @@ will be compared with the predicted one.
problem.discretise_domain(25, 'grid', locations=['D'])
problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'])
The problem solution
~~~~~~~~~~~~~~~~~~~~
Solving the problem with standard PINNs
---------------------------------------
After the problem, the feed-forward neural network is defined, through
the class ``FeedForward``. This neural network takes as input the
@@ -93,7 +93,9 @@ neural network is the sum of the residuals.
In this tutorial, the neural network is composed by two hidden layers of
10 neurons each, and it is trained for 1000 epochs with a learning rate
of 0.006. These parameters can be modified as desired.
of 0.006 and :math:`l_2` weight regularization set to :math:`10^{-8}`.
These parameters can be modified as desired. We use the
``MetricTracker`` class to track the metrics during training.
.. code:: ipython3
@@ -105,7 +107,7 @@ of 0.006. These parameters can be modified as desired.
input_dimensions=len(problem.input_variables)
)
pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()])
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer.train()
@@ -113,30 +115,15 @@ of 0.006. These parameters can be modified as desired.
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial2/lightning_logs
2023-10-17 10:09:18.208459: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:09:18.235849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:09:20.462393: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 151
----------------------------------------
151 Trainable params
0 Non-trainable params
151 Total params
0.001 Total estimated model params size (MB)
Missing logger folder: /u/d/dcoscia/PINA/tutorials/tutorial2/lightning_logs
@@ -162,22 +149,20 @@ and the predicted solutions is showed.
.. image:: output_11_0.png
.. image:: tutorial_files/tutorial_9_0.png
The problem solution with extra-features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Solving the problem with extra-features PINNs
---------------------------------------------
Now, the same problem is solved in a different way. A new neural network
is now defined, with an additional input variable, named extra-feature,
which coincides with the forcing term in the Poisson equation. The set
of input variables to the neural network is:
.. raw:: latex
\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}
:raw-latex:`\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}`
where :math:`x` and :math:`y` are the spatial coordinates and
:math:`k(x, y)` is the added feature.
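The definition of the ``SinSin`` feature class is elided from this hunk; a
minimal sketch consistent with the formula above (names and details assumed,
not taken verbatim from the tutorial) could look like:
.. code:: python

    import torch
    from pina import LabelTensor

    class SinSin(torch.nn.Module):
        """Extra feature: k(x, y) = sin(pi x) * sin(pi y)."""

        def forward(self, x):
            # evaluate the feature on the labelled input points
            k = (torch.sin(torch.pi * x.extract(['x'])) *
                 torch.sin(torch.pi * x.extract(['y'])))
            # return it as a labelled tensor so it can be appended to the input
            return LabelTensor(k, ['k'])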
@@ -215,7 +200,7 @@ new extra feature.
input_dimensions=len(problem.input_variables)+1
)
pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()])
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_feat.train()
@@ -223,21 +208,10 @@ new extra feature.
.. parsed-literal::
GPU available: True (cuda), used: True
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 161
----------------------------------------
161 Trainable params
0 Non-trainable params
161 Total params
0.001 Total estimated model params size (MB)
@@ -262,11 +236,11 @@ of magnitudes in accuracy.
.. image:: output_16_0.png
.. image:: tutorial_files/tutorial_14_0.png
The problem solution with learnable extra-features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Solving the problem with learnable extra-features PINNs
-------------------------------------------------------
We can still do better!
@@ -274,11 +248,9 @@ Another way to exploit the extra features is the addition of learnable
parameters inside them. In this way, the added parameters are learned
during the training phase of the neural network. In this case, we use:
.. raw:: latex
\begin{equation}
k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}
:raw-latex:`\begin{equation}
k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}`
where :math:`\alpha` and :math:`\beta` are the abovementioned
parameters. Their implementation is quite trivial: by using the class
@@ -310,8 +282,8 @@ need, and they are managed by ``autograd`` module!
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)+1
)
pinn_lean = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_lean, max_epochs=1000)
pinn_lean = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_learn.train()
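The ``SinSinAB`` class used above is defined in a part of the notebook not
shown in this hunk; a minimal sketch of a learnable feature consistent with
:math:`k(x, y) = \beta \sin(\alpha x)\sin(\alpha y)` (names and initialisation
assumed) could be:
.. code:: python

    import torch
    from pina import LabelTensor

    class SinSinAB(torch.nn.Module):
        """Learnable extra feature: k(x, y) = beta * sin(alpha x) * sin(alpha y)."""

        def __init__(self):
            super().__init__()
            # alpha and beta are trainable parameters, updated by autograd
            self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
            self.beta = torch.nn.Parameter(torch.tensor([1.0]))

        def forward(self, x):
            k = self.beta * (torch.sin(self.alpha * x.extract(['x'])) *
                             torch.sin(self.alpha * x.extract(['y'])))
            return LabelTensor(k, ['k'])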
@@ -319,21 +291,10 @@ need, and they are managed by ``autograd`` module!
.. parsed-literal::
GPU available: True (cuda), used: True
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 161
----------------------------------------
161 Trainable params
0 Non-trainable params
161 Total params
0.001 Total estimated model params size (MB)
@@ -367,8 +328,8 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)+1
)
pinn_learn = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()])
pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_learn.train()
@@ -376,21 +337,10 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
.. parsed-literal::
GPU available: True (cuda), used: True
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 4
----------------------------------------
4 Trainable params
0 Non-trainable params
4 Total params
0.000 Total estimated model params size (MB)
@@ -422,5 +372,35 @@ features.
.. image:: output_23_0.png
.. image:: tutorial_files/tutorial_21_0.png
Let us compare the training losses for the various types of training
.. code:: ipython3
plotter.plot_loss(trainer, label='Standard')
plotter.plot_loss(trainer_feat, label='Static Features')
plotter.plot_loss(trainer_learn, label='Learnable Features')
.. image:: tutorial_files/tutorial_23_0.png
What's next?
------------
Congratulations, you have completed the two dimensional Poisson tutorial of
**PINA**! There are multiple directions you can go now:
1. Train the network for longer or with different layer sizes and assess
the final accuracy
2. Propose new types of extra features and see how they affect the
learning
3. Exploit extra-feature training in more complex problems
4. Many more…

View File

@@ -0,0 +1,342 @@
Tutorial: Two dimensional Wave problem with hard constraint
===========================================================
In this tutorial we present how to solve the wave equation using hard
constraint PINNs. To do so, we will build a custom ``torch`` model
and pass it to the ``PINN`` solver.
First of all, some useful imports.
.. code:: ipython3
import torch
from pina.problem import SpatialProblem, TimeDependentProblem
from pina.operators import laplacian, grad
from pina.geometry import CartesianDomain
from pina.solvers import PINN
from pina.trainer import Trainer
from pina.equation import Equation
from pina.equation.equation_factory import FixedValue
from pina import Condition, Plotter
The problem definition
----------------------
The problem is written in the following form:
:raw-latex:`\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}`
where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square, and the velocity in the standard wave equation is fixed to one.
Now, the wave problem is written in PINA code as a class, inheriting
from ``SpatialProblem`` and ``TimeDependentProblem`` since we deal with
spatial and time-dependent variables. The equations are written as
``conditions`` that should be satisfied in the corresponding domains.
``truth_solution`` is the exact solution which will be compared with the
predicted one.
.. code:: ipython3
class Wave(TimeDependentProblem, SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
temporal_domain = CartesianDomain({'t': [0, 1]})
def wave_equation(input_, output_):
u_t = grad(output_, input_, components=['u'], d=['t'])
u_tt = grad(u_t, input_, components=['dudt'], d=['t'])
nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
return nabla_u - u_tt
def initial_condition(input_, output_):
u_expected = (torch.sin(torch.pi*input_.extract(['x'])) *
torch.sin(torch.pi*input_.extract(['y'])))
return output_.extract(['u']) - u_expected
conditions = {
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)),
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)),
'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)),
'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)),
}
def wave_sol(self, pts):
return (torch.sin(torch.pi*pts.extract(['x'])) *
torch.sin(torch.pi*pts.extract(['y'])) *
torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t'])))
truth_solution = wave_sol
problem = Wave()
Hard Constraint Model
---------------------
After defining the problem, a **torch** model is needed to build the ``PINN``
solver. Many models are already implemented in **PINA**, but the user
can also build their own model in ``torch``. The hard
constraint we impose is on the boundary of the spatial domain.
Specifically, our solution is written as:
.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),
where :math:`NN` is the neural net output. This neural network takes as
input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
and provides the unknown field :math:`u`. By construction, it is zero on
the boundaries. The residuals of the equations are evaluated at several
sampling points (which the user can manipulate using the method
``discretise_domain``) and the loss minimized by the neural network is
the sum of the residuals.
.. code:: ipython3
class HardMLP(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, output_dim))
# here in the forward we implement the hard constraints
def forward(self, x):
hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
return hard*self.layers(x)
Train and Inference
-------------------
In this tutorial, the neural network is trained for 1000 epochs with a
learning rate of 0.001 (default in ``PINN``). Training takes
approximately 3 minutes.
.. code:: ipython3
# generate the data
problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
# create the solver
pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
# create trainer and train
trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
.. parsed-literal::
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
.. parsed-literal::
Training: 0it [00:00, ?it/s]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
Notice that the loss on the boundaries of the spatial domain is exactly
zero, as expected! After the training is completed one can now plot some
results using the ``Plotter`` class of **PINA**.
.. code:: ipython3
plotter = Plotter()
# plotting at fixed time t = 0.0
print('Plotting at t=0')
plotter.plot(trainer, fixed_variables={'t': 0.0})
# plotting at fixed time t = 0.5
print('Plotting at t=0.5')
plotter.plot(trainer, fixed_variables={'t': 0.5})
# plotting at fixed time t = 1.
print('Plotting at t=1')
plotter.plot(trainer, fixed_variables={'t': 1.0})
.. parsed-literal::
Plotting at t=0
.. image:: tutorial_files/tutorial_13_1.png
.. parsed-literal::
Plotting at t=0.5
.. image:: tutorial_files/tutorial_13_3.png
.. parsed-literal::
Plotting at t=1
.. image:: tutorial_files/tutorial_13_5.png
The results are not so great, and we can clearly see that as time
progresses the solution gets worse… Can we do better?
A valid option is to impose the initial condition as hard constraint as
well. Specifically, our solution is written as:
.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y),
Let us build the network first
.. code:: ipython3
class HardMLPtime(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, output_dim))
# here in the forward we implement the hard constraints
def forward(self, x):
hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
return hard_space * self.layers(x) * x.extract(['t']) + hard_t
Now let's train with the same configuration as the previous test:
.. code:: ipython3
# generate the data
problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
# create the solver
pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables)))
# create trainer and train
trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
.. parsed-literal::
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
.. parsed-literal::
Training: 0it [00:00, ?it/s]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
We can clearly see that the loss is much lower now. Let's plot the
results:
.. code:: ipython3
plotter = Plotter()
# plotting at fixed time t = 0.0
print('Plotting at t=0')
plotter.plot(trainer, fixed_variables={'t': 0.0})
# plotting at fixed time t = 0.5
print('Plotting at t=0.5')
plotter.plot(trainer, fixed_variables={'t': 0.5})
# plotting at fixed time t = 1.
print('Plotting at t=1')
plotter.plot(trainer, fixed_variables={'t': 1.0})
.. parsed-literal::
Plotting at t=0
.. image:: tutorial_files/tutorial_19_1.png
.. parsed-literal::
Plotting at t=0.5
.. image:: tutorial_files/tutorial_19_3.png
.. parsed-literal::
Plotting at t=1
.. image:: tutorial_files/tutorial_19_5.png
We can now see that the results are much better! Previously the network
was not learning the initial condition correctly, leading to a poor
solution as time evolved. By imposing the initial condition as a hard
constraint, the network is able to solve the problem correctly.
What's next?
------------
Nice, you have completed the two dimensional Wave tutorial of **PINA**!
There are multiple directions you can go now:
1. Train the network for longer or with different layer sizes and assess
the final accuracy
2. Propose new types of hard constraints in time (a sketch is given after
this list), e.g.
.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)(1-\exp(-t)) + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y),
3. Exploit extra-feature training for models 1 and 2
4. Many more…
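A minimal sketch of point 2 above, reusing the layers defined in ``HardMLPtime`` (the class name ``HardMLPexp`` is purely illustrative and not part of the original notebook):

.. code:: python

    class HardMLPexp(HardMLPtime):
        # same architecture, different time constraint: (1 - exp(-t)) vanishes at t=0
        # and tends to 1 for large t, so the initial condition is still imposed exactly
        def forward(self, x):
            hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
            hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
            return hard_space * self.layers(x) * (1 - torch.exp(-x.extract(['t']))) + hard_t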

@@ -1,24 +1,22 @@
Tutorial 4: continuous convolutional filter
===========================================
Tutorial: Unstructured convolutional autoencoder via continuous convolution
===========================================================================
In this tutorial, we will show how to use the Continuous Convolutional
Filter, and how to build common Deep Learning architectures with it. The
implementation of the filter follows the original work `**A Continuous
implementation of the filter follows the original work `A Continuous
Convolutional Trainable Filter for Modelling Unstructured
Data** <https://arxiv.org/abs/2210.13416>`__.
Data <https://arxiv.org/abs/2210.13416>`__.
First of all we import the modules needed for the tutorial, which
include:
- ``ContinuousConv`` class from ``pina.model.layers`` which implements
the continuous convolutional filter
- ``PyTorch`` and ``Matplotlib`` for tensorial operations and
visualization respectively
First of all we import the modules needed for the tutorial:
.. code:: ipython3
import torch
import matplotlib.pyplot as plt
from pina.problem import AbstractProblem
from pina.solvers import SupervisedSolver
from pina.trainer import Trainer
from pina import Condition, LabelTensor
from pina.model.layers import ContinuousConvBlock
import torchvision # for MNIST dataset
from pina.model import FeedForward # for building AE and MNIST classification
@@ -46,7 +44,7 @@ as:
\mathcal{I}_{\rm{out}}(\mathbf{x}) = \int_{\mathcal{X}} \mathcal{I}(\mathbf{x} + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{\tau}) d\mathbf{\tau},
where :math:`\mathcal{K} : \mathcal{X} \rightarrow \mathbb{R}` is the
where :math:`\mathcal{K} : \mathcal{X} \rightarrow \mathbb{R}` is the
*continuous filter* function, and
:math:`\mathcal{I} : \Omega \subset \mathbb{R}^N \rightarrow \mathbb{R}`
is the input function. The continuous filter function is approximated
@@ -62,7 +60,7 @@ by the authors. Thus, given :math:`\{\mathbf{x}_i\}_{i=1}^{n}` points in
\mathcal{I}_{\rm{out}}(\mathbf{\tilde{x}}_i) = \sum_{{\mathbf{x}_i}\in\mathcal{X}} \mathcal{I}(\mathbf{x}_i + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{x}_i),
where :math:`\mathbf{\tau} \in \mathcal{S}`, with :math:`\mathcal{S}`
where :math:`\mathbf{\tau} \in \mathcal{S}`, with :math:`\mathcal{S}`
the set of available strides, corresponds to the current stride position
of the filter, and :math:`\mathbf{\tilde{x}}_i` points are obtained by
taking the centroid of the filter position mapped on the :math:`\Omega`
@@ -83,7 +81,7 @@ shape:
.. math:: [B \times N_{in} \times N \times D]
where :math:`B` is the batch\_size, :math:`N_{in}` is the number of
\ where :math:`B` is the batch_size, :math:`N_{in}` is the number of
input fields, :math:`N` the number of points in the mesh, :math:`D` the
dimension of the problem. In particular: \* :math:`D` is the number of
spatial variables + 1. The last column must contain the field value. For
@@ -93,7 +91,7 @@ like ``[first coordinate, second coordinate, field value]`` \*
For example a vectorial function :math:`f = [f_1, f_2]` will have
:math:`N_{in}=2`
Let's see an example to clear the ideas. We will be verbose to explain
Let's see an example to clear the ideas. We will be verbose to explain
in detail the input form. We wish to create the function:
.. math::
@@ -148,12 +146,12 @@ where to go. Here is an example for the :math:`[0,1]\times[0,5]` domain:
.. code:: python
# stride definition
stride = {"domain": [1, 5],
"start": [0, 0],
"jump": [0.1, 0.3],
"direction": [1, 1],
}
# stride definition
stride = {"domain": [1, 5],
"start": [0, 0],
"jump": [0.1, 0.3],
"direction": [1, 1],
}
This tells the filter: 1. ``domain``: square domain (the only
implemented) :math:`[0,1]\times[0,5]`. The minimum value is always zero,
@@ -198,15 +196,15 @@ fix the filter dimension to be :math:`[0.1, 0.1]`.
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
That's it! In just one line of code we have created the continuous
That's it! In just one line of code we have created the continuous
convolutional filter. By default the ``pina.model.FeedForward`` neural
network is initialised, more details in the
`documentation <https://mathlab.github.io/PINA/_rst/fnn.html>`__. In
case the mesh doesn't change during training we can set the ``optimize``
case the mesh doesn't change during training we can set the ``optimize``
flag to ``True``, to exploit optimizations for finding the points
to convolve.
@@ -220,7 +218,7 @@ to convolve.
optimize=True)
Let's try to do a forward pass
Let's try to do a forward pass
.. code:: ipython3
@@ -238,7 +236,7 @@ Let's try to do a forward pass
Filter output data has shape: torch.Size([1, 1, 169, 3])
If we don't want to use the default ``FeedForward`` neural network, we
If we don't want to use the default ``FeedForward`` neural network, we
can pass a specified torch model in the ``model`` keyword as follows:
.. code:: ipython3
@@ -270,7 +268,7 @@ Notice that we pass the class and not an already built object!
Building a MNIST Classifier
---------------------------
Let's see how we can build a MNIST classifier using a continuous
Let's see how we can build a MNIST classifier using a continuous
convolutional filter. We will use the MNIST dataset from PyTorch. In
order to keep training times small we use only 6000 samples for training
and 1000 samples for testing.
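For reference, a minimal sketch of how such a subsampling can be set up with plain PyTorch (the names ``train_data``, ``batch_size`` and ``subsample_train_indices`` are taken from the fragment visible below; everything else is an assumption):

.. code:: python

    from torch.utils.data import DataLoader, SubsetRandomSampler

    batch_size = 8  # illustrative value
    # pick 6000 random training indices out of the full MNIST training set
    subsample_train_indices = torch.randperm(len(train_data))[:6000]
    train_loader = DataLoader(train_data, batch_size=batch_size,
                              sampler=SubsetRandomSampler(subsample_train_indices))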
@@ -308,68 +306,7 @@ and 1000 samples for testing.
test_loader = DataLoader(train_data, batch_size=batch_size,
sampler=SubsetRandomSampler(subsample_train_indices))
.. parsed-literal::
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
.. parsed-literal::
100%|█████████████████████████████████| 9912422/9912422 [00:00<00:00, 59926793.62it/s]
.. parsed-literal::
Extracting ./data/MNIST/raw/train-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./data/MNIST/raw/train-labels-idx1-ubyte.gz
.. parsed-literal::
100%|██████████████████████████████████████| 28881/28881 [00:00<00:00, 2463209.03it/s]
.. parsed-literal::
Extracting ./data/MNIST/raw/train-labels-idx1-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw/t10k-images-idx3-ubyte.gz
.. parsed-literal::
100%|█████████████████████████████████| 1648877/1648877 [00:00<00:00, 46499639.59it/s]
.. parsed-literal::
Extracting ./data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz
.. parsed-literal::
100%|███████████████████████████████████████| 4542/4542 [00:00<00:00, 19761959.30it/s]
.. parsed-literal::
Extracting ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw
.. parsed-literal::
Let's now build a simple classifier. The MNIST dataset is composed by
Let's now build a simple classifier. The MNIST dataset is composed of
vectors of shape ``[batch, 1, 28, 28]``, but we can imagine them as
one-field functions where the pixels :math:`ij` are the coordinates
:math:`x=i, y=j` in a :math:`[0, 27]\times[0,27]` domain, and the pixels
@@ -448,7 +385,7 @@ filter followed by a feedforward neural network
net = ContinuousClassifier()
Let's try to train it using a simple pytorch training loop. We train for
Let's try to train it using a simple PyTorch training loop. We train for
just 1 epoch using the Adam optimizer with a :math:`0.001` learning rate.
.. code:: ipython3
@@ -487,7 +424,9 @@ juts 1 epoch using Adam optimizer with a :math:`0.001` learning rate.
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/autograd/__init__.py:200: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
@@ -510,7 +449,7 @@ juts 1 epoch using Adam optimizer with a :math:`0.001` learning rate.
batch [750/750] loss[0.040]
Let's see the performance on the train set!
Let's see the performance on the train set!
.. code:: ipython3
@@ -537,7 +476,7 @@ Let's see the performance on the train set!
As we can see, we have very good performance after training for only 1
epoch! Nevertheless, we are still using structured data... Let's see how
epoch! Nevertheless, we are still using structured data... Let's see how
we can build an autoencoder for unstructured data now.
Building a Continuous Convolutional Autoencoder
@@ -546,7 +485,7 @@ Building a Continuous Convolutional Autoencoder
Just as a toy problem, we will now build an autoencoder for the following
function :math:`f(x,y)=\sin(\pi x)\sin(\pi y)` on the unit circle domain
centered at :math:`(0.5, 0.5)`. We will also see the ability to
up-sample (once trained) the results without retraining. Let's first
up-sample (once trained) the results without retraining. Let's first
create the input and visualize it, using a mesh of
:math:`100` points.
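As a rough sketch of how such an input can be assembled in the ``[B, N_in, N, D]`` format described earlier (the sampling below is only an assumption; the notebook's actual mesh construction may differ):

.. code:: python

    N = 100
    # sample N points inside the circle of radius 0.5 centered at (0.5, 0.5)
    theta = 2 * torch.pi * torch.rand(N)
    r = 0.5 * torch.sqrt(torch.rand(N))
    x = 0.5 + r * torch.cos(theta)
    y = 0.5 + r * torch.sin(theta)
    f = torch.sin(torch.pi * x) * torch.sin(torch.pi * y)
    # columns: x coordinate, y coordinate, field value -> shape [1, 1, 100, 3]
    input_data = torch.stack([x, y, f], dim=-1).unsqueeze(0).unsqueeze(0)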
@@ -592,12 +531,12 @@ create the input and visualize it, we will use firstly a mesh of
.. image:: output_32_0.png
.. image:: tutorial_files/tutorial_32_0.png
Let's now build a simple autoencoder using the continuous convolutional
Let's now build a simple autoencoder using the continuous convolutional
filter. The data is clearly unstructured and a simple convolutional
filter might not work without projecting or interpolating first. Let's
filter might not work without projecting or interpolating first. Let's
first build an ``Encoder`` and a ``Decoder`` class, and then an
``Autoencoder`` class that contains both.
@@ -658,7 +597,7 @@ first build and ``Encoder`` and ``Decoder`` class, and then a
Very good! Notice that in the ``Decoder`` class in the ``forward`` pass
we have used the ``.transpose()`` method of the
``ContinuousConvolution`` class. This method accepts the ``weights`` for
upsampling and the ``grid`` on where to upsample. Let's now build the
upsampling and the ``grid`` on where to upsample. Let's now build the
autoencoder! We set the hidden dimension in the ``hidden_dimension``
variable. We apply a sigmoid to the output since the field value is
in :math:`[0, 1]`.
@@ -681,59 +620,50 @@ between :math:`[0, 1]`.
out = self.decoder(weights, grid)
return out
net = Autoencoder()
Let's now train the autoencoder, minimizing the mean square error loss
and optimizing using Adam.
Let's now train the autoencoder, minimizing the mean square error loss
and optimizing using Adam. We use the ``SupervisedSolver`` as solver,
and the problem is defined by simply inheriting from
``AbstractProblem``. It takes approximately two minutes to train on CPU.
.. code:: ipython3
# setting the seed
torch.manual_seed(seed)
# define the problem
class CircleProblem(AbstractProblem):
input_variables = ['x', 'y', 'f']
output_variables = input_variables
conditions = {'data' : Condition(input_points=LabelTensor(input_data, input_variables), output_points=LabelTensor(input_data, output_variables))}
# optimizer and loss function
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
criterion = torch.nn.MSELoss()
max_epochs = 150
# define the solver
solver = SupervisedSolver(problem=CircleProblem(), model=net, loss=torch.nn.MSELoss())
for epoch in range(max_epochs): # loop over the dataset multiple times
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(input_data)
loss = criterion(outputs[..., -1], input_data[..., -1])
loss.backward()
optimizer.step()
# print statistics
if epoch % 10 ==9:
print(f'epoch [{epoch + 1}/{max_epochs}] loss [{loss.item():.2}]')
# train
trainer = Trainer(solver, max_epochs=150, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
.. parsed-literal::
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
.. parsed-literal::
epoch [10/150] loss [0.012]
epoch [20/150] loss [0.0036]
epoch [30/150] loss [0.0018]
epoch [40/150] loss [0.0014]
epoch [50/150] loss [0.0012]
epoch [60/150] loss [0.001]
epoch [70/150] loss [0.0009]
epoch [80/150] loss [0.00082]
epoch [90/150] loss [0.00075]
epoch [100/150] loss [0.0007]
epoch [110/150] loss [0.00066]
epoch [120/150] loss [0.00063]
epoch [130/150] loss [0.00061]
epoch [140/150] loss [0.00059]
epoch [150/150] loss [0.00058]
Training: 0it [00:00, ?it/s]
Let's visualize the two solutions side by side!
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=150` reached.
Let's visualize the two solutions side by side!
.. code:: ipython3
@@ -757,7 +687,7 @@ Let's visualize the two solutions side by side!
.. image:: output_40_0.png
.. image:: tutorial_files/tutorial_40_0.png
As we can see the two are really similar! We can compute the :math:`l_2`
@@ -774,19 +704,19 @@ error quite easily as well:
.. parsed-literal::
l2 error: 4.22%
l2 error: 4.32%
More or less :math:`4\%` in :math:`l_2` error, which is really low
considering the fact that we use just **one** convolutional layer and a
simple feedforward to decrease the dimension. Let's see now some
simple feedforward to decrease the dimension. Let's now see some
peculiarities of the filter.
Filter for upsampling
~~~~~~~~~~~~~~~~~~~~~
Suppose we already have the hidden dimension and we want to upsample on
a differen grid with more points. Let's see how to do it:
a different grid with more points. Let's see how to do it:
.. code:: ipython3
@@ -820,11 +750,11 @@ a differen grid with more points. Let's see how to do it:
.. image:: output_45_0.png
.. image:: tutorial_files/tutorial_45_0.png
As we can see we have a very good approximation of the original
function, even thought some noise is present. Let's calculate the error
function, even though some noise is present. Let's calculate the error
now:
.. code:: ipython3
@@ -834,7 +764,7 @@ now:
.. parsed-literal::
l2 error: 8.37%
l2 error: 8.49%
Autoencoding at different resolution
@@ -844,7 +774,7 @@ In the previous example we already had the hidden dimension (of original
input) and we used it to upsample. Sometimes, however, we have a finer
mesh solution and we simply want to encode it. This can be done without
retraining! This procedure can be useful in case we have many points in
the mesh and just a smaller part of them are needed for training. Let's
the mesh and only a smaller part of them is needed for training. Let's
see the results of this:
.. code:: ipython3
@@ -883,18 +813,23 @@ see the results of this:
.. image:: output_49_0.png
.. image:: tutorial_files/tutorial_49_0.png
.. parsed-literal::
l2 error: 8.50%
l2 error: 8.59%
What's next?
What's next?
------------
We have shown the basic usage of a convolutional filter. In the next
tutorials we will show how to combine the PINA framework with the
convolutional filter to train in few lines and efficiently a Neural
Network!
We have shown the basic usage of a convolutional filter. There are
additional extensions possible:
1. Train using Physics Informed strategies
2. Use the filter to build an unstructured convolutional autoencoder for
reduced order modelling
3. Many more…

@@ -1,16 +1,15 @@
Tutorial 5: Fourier Neural Operator Learning
============================================
Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator
======================================================================
In this tutorial we are going to solve the Darcy flow 2d problem,
presented in `Fourier Neural Operator for Parametric Partial
In this tutorial we are going to solve the Darcy flow problem in two
dimensions, presented in `Fourier Neural Operator for Parametric Partial
Differential Equation <https://openreview.net/pdf?id=c8P9NQVtmnO>`__.
First of all we import the modules needed for the tutorial. Importing
``scipy`` is needed for input output operation, run
``pip install scipy`` for installing it.
``scipy`` is needed for input/output operations.
.. code:: ipython3
# !pip install scipy # install scipy
from scipy import io
import torch
from pina.model import FNO, FeedForward # let's import some models
@@ -21,13 +20,6 @@ First of all we import the modules needed for the tutorial. Importing
from pina.problem import AbstractProblem
import matplotlib.pyplot as plt
.. parsed-literal::
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
Data Generation
---------------
@@ -51,15 +43,15 @@ taken from the authors original reference.
# download the dataset
data = io.loadmat("Data_Darcy.mat")
# extract data
k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)
u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)
# extract data (we use only 100 samples for training)
k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)
u_test= torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)
x = torch.tensor(data['x'], dtype=torch.float)[0]
y = torch.tensor(data['y'], dtype=torch.float)[0]
Let's visualize some data
Let's visualize some data
.. code:: ipython3
@@ -73,7 +65,7 @@ Let's visualize some data
.. image:: output_6_0.png
.. image:: tutorial_files/tutorial_6_0.png
We now create the neural operator class. It is a very simple class,
@@ -100,43 +92,24 @@ training using supervised learning.
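The ``problem`` object used in the next cell can be defined along these lines (a sketch only: the class name, the variable labels and the extra ``Condition``/``LabelTensor`` imports are assumptions, not the notebook's exact code):

.. code:: python

    from pina import Condition, LabelTensor

    class DarcyProblem(AbstractProblem):
        # permeability field k is the input, solution field u is the output
        input_variables = ['k']
        output_variables = ['u']
        conditions = {'data': Condition(input_points=LabelTensor(k_train, ['k']),
                                        output_points=LabelTensor(u_train, ['u']))}

    problem = DarcyProblem()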
.. code:: ipython3
# make model
model=FeedForward(input_dimensions=1, output_dimensions=1)
model = FeedForward(input_dimensions=1, output_dimensions=1)
# make solver
solver = SupervisedSolver(problem=problem, model=model)
# make the trainer and train
trainer = Trainer(solver=solver, max_epochs=100)
trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial5/lightning_logs
2023-10-17 10:41:03.316644: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:41:03.333768: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-10-17 10:41:03.383188: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:41:07.712785: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 481
----------------------------------------
481 Trainable params
0 Non-trainable params
481 Total params
0.002 Total estimated model params size (MB)
@@ -147,12 +120,10 @@ training using supervised learning.
.. parsed-literal::
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/_tensor.py:1386: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3614.)
ret = func(*args, **kwargs)
`Trainer.fit` stopped: `max_epochs=100` reached.
The final loss is pretty high... We can calculate the error by importing
The final loss is pretty high... We can calculate the error by importing
``LpLoss``.
.. code:: ipython3
@@ -172,8 +143,8 @@ The final loss is pretty high... We can calculate the error by importing
.. parsed-literal::
Final error training 56.86%
Final error testing 56.82%
Final error training 56.24%
Final error testing 55.95%
Solving the problem with a Fourier Neural Operator (FNO)
@@ -199,28 +170,17 @@ operator this approach is better suited, as we shall see.
solver = SupervisedSolver(problem=problem, model=model)
# make the trainer and train
trainer = Trainer(solver=solver, max_epochs=20)
trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
.. parsed-literal::
GPU available: True (cuda), used: True
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
0 | _loss | MSELoss | 0
1 | _neural_net | Network | 591 K
----------------------------------------
591 K Trainable params
0 Non-trainable params
591 K Total params
2.364 Total estimated model params size (MB)
@@ -231,13 +191,13 @@ operator this approach is better suited, as we shall see.
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=20` reached.
`Trainer.fit` stopped: `max_epochs=100` reached.
We can clearly see that with 1/3 of the total epochs the loss is lower.
Let's see in testing.. Notice that the number of parameters is way
higher than a ``FeedForward`` network. We suggest to use GPU or TPU for
a speed up in training.
We can clearly see that the final loss is lower. Let's see how it
performs in testing. Notice that the number of parameters is much
higher than for a ``FeedForward`` network. We suggest using a GPU or
TPU for a speed-up in training when many data samples are used.
.. code:: ipython3
@@ -250,13 +210,13 @@ a speed up in training.
.. parsed-literal::
Final error training 26.19%
Final error testing 25.89%
Final error training 10.86%
Final error testing 12.77%
As we can see, the error is much lower!
What's next?
What's next?
------------
We have shown a very simple example of how to use the ``FNO`` for

@@ -1,8 +1,5 @@
Tutorial 6: How to Use Geometries in PINA
=========================================
Built-in Geometries
-------------------
Tutorial: Building custom geometries with PINA ``Location`` class
=================================================================
In this tutorial we will show how to use geometries in PINA.
Specifically, the tutorial will include how to create geometries and how
@@ -12,7 +9,7 @@ to visualize them. The topics covered are:
- Getting the Union and Difference of Geometries
- Sampling points in the domain (and visualizing them)
We import the relevant modules.
We import the relevant modules first.
.. code:: ipython3
@@ -24,8 +21,11 @@ We import the relevant modules.
ax.title.set_text(title)
ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)
Built-in Geometries
-------------------
We will create one cartesian and two ellipsoids. For the sake of
simplicity, we show here the 2-dimensional, but it's trivial the
simplicity, we show here the 2-dimensional case, but the extension to
3D (and higher) is trivial. The geometries also allow the generation of
samples belonging to the boundary. So, we will create one
ellipsoid with the border and one without.
@@ -109,7 +109,7 @@ We are now ready to visualize the samples using matplotlib.
.. image:: output_11_0.png
.. image:: tutorial_files/tutorial_10_0.png
We have now created, sampled, and visualized our first geometries! We
@@ -151,7 +151,7 @@ Among the built-in shapes, we quickly show here the usage of
.. image:: output_14_0.png
.. image:: tutorial_files/tutorial_13_0.png
Boolean Operations
@@ -161,7 +161,7 @@ To create complex shapes we can use the boolean operations, for example
to merge two default geometries. We simply need to use the ``Union``
class: it takes a list of geometries and returns the union of them.
Let's create three unions. Firstly, it will be a union of ``cartesian``
Let's create three unions. Firstly, it will be a union of ``cartesian``
and ``ellipsoid_no_border``. Next, it will be a union of
``ellipse_no_border`` and ``ellipse_border``. Lastly, it will be a union
of all three geometries.
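A sketch of what those three unions can look like (the ``Union`` import and the exact geometry variable names are assumptions; the geometries themselves are the ones defined earlier in the notebook):

.. code:: python

    # Union takes a list of geometries and returns their union
    union_two = Union([cartesian, ellipsoid_no_border])
    union_ellipsoids = Union([ellipsoid_no_border, ellipsoid_border])
    union_all = Union([cartesian, ellipsoid_no_border, ellipsoid_border])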
@@ -195,7 +195,7 @@ with.
.. image:: output_21_0.png
.. image:: tutorial_files/tutorial_20_0.png
Now, we will find the differences of the geometries. We will find the
@@ -211,7 +211,7 @@ difference of ``cartesian`` and ``ellipsoid_no_border``.
.. image:: output_23_0.png
.. image:: tutorial_files/tutorial_22_0.png
Create Custom Location
@@ -222,7 +222,7 @@ try to make is a heart defined by the function
.. math:: (x^2+y^2-1)^3-x^2y^3 \le 0
Let's start by importing what we will need to create our own geometry
Let's start by importing what we will need to create our own geometry
based on this equation.
.. code:: ipython3
@@ -244,8 +244,8 @@ Next, we will create the ``Heart(Location)`` class and initialize it.
Because the ``Location`` class we are inheriting from requires both a
sample method and ``is_inside`` method, we will create them and just add
in "pass" for the moment.
``sample`` method and ``is_inside`` method, we will create them and just
add in "pass" for the moment.
.. code:: ipython3
@@ -262,7 +262,7 @@ in "pass" for the moment.
pass
Now we have the skeleton for our ``Heart`` class. The ``is_inside``
method is where most of the work is done so let's fill it out.
method is where most of the work is done, so let's fill it out.
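A sketch of the check ``is_inside`` has to perform, directly translating the inequality above (the method signature and the ``extract`` calls are assumptions here, not the notebook's exact code):

.. code:: python

    def is_inside(self, point, check_border=False):
        x = point.extract(['x'])
        y = point.extract(['y'])
        # the point lies inside the heart when (x^2 + y^2 - 1)^3 - x^2 * y^3 <= 0
        return bool((x**2 + y**2 - 1)**3 - x**2 * y**3 <= 0)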
.. code:: ipython3
@@ -304,5 +304,5 @@ To sample from the Heart geometry we simply run:
.. image:: output_37_0.png
.. image:: tutorial_files/tutorial_36_0.png

@@ -46,6 +46,7 @@ extensions = [
'sphinx.ext.viewcode',
#'sphinx.ext.ifconfig',
'sphinx.ext.mathjax',
'sphinx.ext.autosectionlabel',
]
#autosummary_generate = True

@@ -8,13 +8,23 @@ Welcome to PINA's documentation!
|
PINA is a Python package providing an easy interface to deal with
physics-informed neural networks (PINN) for the approximation of (differential,
nonlinear, ...) functions. Based on Pytorch, PINA offers a simple and intuitive
way to formalize a specific problem and solve it using PINN. The approximated
solution of a differential equation can be implemented using PINA in a few lines
of code thanks to the intuitive and user-friendly interface.
Physics Informed Neural network for Advanced modeling (**PINA**) is
an open-source Python library providing an intuitive interface for
solving differential equations using PINNs, NOs or both together.
Based on `PyTorch <https://pytorch.org/>`_ and `PyTorchLightning <https://lightning.ai/docs/pytorch/stable/>`_,
PINA offers a simple and intuitive way to formalize a specific (differential) problem
and solve it using neural networks. The approximated solution of a differential equation
can be implemented using PINA in a few lines of code thanks to the intuitive and user-friendly interface.
Using `PyTorchLightning <https://lightning.ai/docs/pytorch/stable/>`_ as backend offers
professional AI researchers and machine learning engineers the possibility of exploiting the advanced
training strategies provided by the library, such as multi-device training, modern model compression techniques,
gradient accumulation, and so on. In addition, it provides the possibility to add arbitrary
self-contained routines (callbacks) to the training for easy extensions, without the need to touch the
underlying code.
The high-level structure of the package is depicted in our API. The pipeline to solve differential equations
with PINA follows just five steps: problem definition, model selection, data generation, solver selection, and training.
.. figure:: index_files/API_color.png
:alt: PINA application program interface
@@ -26,22 +36,30 @@ of code thanks to the intuitive and user-friendly interface.
Physics-informed neural network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PINN is a novel approach that involves neural networks to solve supervised
learning tasks while respecting any given law of physics described by general
nonlinear differential equations. Proposed in "Physics-informed neural
`PINN <https://www.sciencedirect.com/science/article/abs/pii/S0021999118307125>`_ is a novel approach that
involves neural networks to solve differential equations in an unsupervised manner, while respecting
any given law of physics described by general differential equations. Proposed in "*Physics-informed neural
networks: A deep learning framework for solving forward and inverse problems
involving nonlinear partial differential equations", such framework aims to
solve problems in a continuous and nonlinear settings. :py:class:`pina.pinn.PINN`
involving nonlinear partial differential equations*", such a framework aims to
solve problems in continuous and nonlinear settings.
Neural operator learning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Neural Operators <https://www.jmlr.org/papers/v24/21-1524.html>`_ is a novel approach involving neural networks
to learn differential operators using supervised learning strategies. By learning the differential operator, the
neural network is able to generalize across different instances of the differential equations (e.g. different forcing
terms), without the need for re-training.
.. toctree::
:maxdepth: 2
:caption: Package Documentation:
Installation <_rst/installation>
API <_rst/code>
Contributing <_rst/contributing>
License <LICENSE.rst>
API <_rst/_code>
Contributing <_rst/_contributing>
License <_LICENSE.rst>
.. the following is demo content intended to showcase some of the features you can invoke in reStructuredText
.. this can be safely deleted or commented out
@@ -50,20 +68,7 @@ solve problems in a continuous and nonlinear settings. :py:class:`pina.pinn.PINN
.. toctree::
:maxdepth: 1
:numbered:
:caption: Tutorials:
:caption: Getting Started:
Getting start with PINA <_rst/tutorial1/tutorial.rst>
Poisson problem <_rst/tutorial2/tutorial.rst>
Wave equation <_rst/tutorial3/tutorial.rst>
Continuous Convolutional Filter <_rst/tutorial4/tutorial.rst>
Fourier Neural Operator <_rst/tutorial5/tutorial.rst>
Geometry Usage <_rst/tutorial6/tutorial.rst>
.. ........................................................................................
.. toctree::
:maxdepth: 2
:numbered:
:caption: Download
.. ........................................................................................
Installation <_rst/_installation>
Tutorials <_rst/_tutorials>
