Tutorials and Doc (#191)
* Tutorial doc update
* update doc tutorial
* doc not compiling

---------

Co-authored-by: Dario Coscia <dcoscia@euclide.maths.sissa.it>
Co-authored-by: Dario Coscia <dariocoscia@Dario-Coscia.local>
77
docs/source/_rst/_code.rst
Normal file
@@ -0,0 +1,77 @@
Code Documentation
==================

Welcome to the PINA documentation! Here you can find the modules of the package divided into different sections.

PINA Features
-------------

.. toctree::
    :titlesonly:

    LabelTensor <label_tensor.rst>
    Condition <condition.rst>
    Plotter <plotter.rst>

Problem
-------

.. toctree::
    :titlesonly:

    AbstractProblem <problem/abstractproblem.rst>
    SpatialProblem <problem/spatialproblem.rst>
    TimeDependentProblem <problem/timedepproblem.rst>
    ParametricProblem <problem/parametricproblem.rst>

Solvers
-------

.. toctree::
    :titlesonly:

    SolverInterface <solvers/solver_interface.rst>
    PINN <solvers/pinn.rst>

Models
------

.. toctree::
    :titlesonly:

    Network <models/network.rst>
    FeedForward <models/fnn.rst>
    MultiFeedForward <models/multifeedforward.rst>
    ResidualFeedForward <models/fnn_residual.rst>
    DeepONet <models/deeponet.rst>
    FNO <models/fno.rst>

Layers
------

.. toctree::
    :titlesonly:

    ContinuousConv <layers/convolution.rst>

Geometries
----------

.. toctree::
    :titlesonly:

    Location <geometry/location.rst>
    CartesianDomain <geometry/cartesian.rst>
    EllipsoidDomain <geometry/ellipsoid.rst>
    SimplexDomain <geometry/simplex.rst>

Loss
----

.. toctree::
    :titlesonly:

    LossInterface <loss/loss_interface.rst>
    LpLoss <loss/lploss.rst>
    PowerLoss <loss/powerloss.rst>
27
docs/source/_rst/_tutorials.rst
Normal file
@@ -0,0 +1,27 @@
PINA Tutorials
==============

In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**.

.. toctree::
    :maxdepth: 3
    :hidden:

Getting started with PINA
-------------------------

* :doc:`Introduction to PINA for Physics Informed Neural Networks training<tutorials/tutorial1/tutorial>`
* :doc:`Building custom geometries with PINA Location class<tutorials/tutorial6/tutorial>`

Physics Informed Neural Networks
--------------------------------

* :doc:`Two dimensional Poisson problem using Extra Features Learning<tutorials/tutorial2/tutorial>`
* :doc:`Two dimensional Wave problem with hard constraint<tutorials/tutorial3/tutorial>`

Neural Operator Learning
------------------------

* :doc:`Two dimensional Darcy flow using the Fourier Neural Operator<tutorials/tutorial5/tutorial>`

Supervised Learning
-------------------

* :doc:`Unstructured convolutional autoencoder via continuous convolution<tutorials/tutorial4/tutorial>`
@@ -1,61 +0,0 @@
Code Documentation
==================

.. toctree::
    :maxdepth: 3

    PINN <pinn.rst>
    LabelTensor <label_tensor.rst>
    Condition <condition.rst>
    Location <location.rst>
    Operators <operators.rst>
    Plotter <plotter.rst>

Geometries
----------

.. toctree::
    :maxdepth: 3

    Span <span.rst>
    Ellipsoid <ellipsoid.rst>

Model
-----

.. toctree::
    :maxdepth: 3

    Network <network.rst>
    FeedForward <fnn.rst>
    DeepONet <deeponet.rst>
    MultiFeedForward <multifeedforward.rst>

Layers
------

.. toctree::
    :maxdepth: 3

    ContinuousConv <convolution.rst>

Loss
----

.. toctree::
    :maxdepth: 3

    LpLoss <lploss.rst>
    PowerLoss <powerloss.rst>

Problem
-------

.. toctree::
    :maxdepth: 3

    AbstractProblem <abstractproblem.rst>
    SpatialProblem <spatialproblem.rst>
    TimeDependentProblem <timedepproblem.rst>
    ParametricProblem <parametricproblem.rst>
@@ -1,10 +0,0 @@
Ellipsoid
===========

.. currentmodule:: pina.ellipsoid

.. automodule:: pina.ellipsoid

.. autoclass:: Ellipsoid
    :members:
    :show-inheritance:
    :noindex:
10
docs/source/_rst/geometry/cartesian.rst
Normal file
@@ -0,0 +1,10 @@
CartesianDomain
===============

.. currentmodule:: pina.geometry.cartesian

.. automodule:: pina.geometry.cartesian

.. autoclass:: CartesianDomain
    :members:
    :show-inheritance:
    :noindex:
10
docs/source/_rst/geometry/ellipsoid.rst
Normal file
@@ -0,0 +1,10 @@
EllipsoidDomain
===============

.. currentmodule:: pina.geometry.ellipsoid

.. automodule:: pina.geometry.ellipsoid

.. autoclass:: EllipsoidDomain
    :members:
    :show-inheritance:
    :noindex:
@@ -1,8 +1,8 @@
 Location
 =========
-.. currentmodule:: pina.location
+.. currentmodule:: pina.geometry.location

-.. automodule:: pina.location
+.. automodule:: pina.geometry.location

 .. autoclass:: Location
     :members:
10
docs/source/_rst/geometry/simplex.rst
Normal file
@@ -0,0 +1,10 @@
SimplexDomain
=============

.. currentmodule:: pina.geometry.simplex

.. automodule:: pina.geometry.simplex

.. autoclass:: SimplexDomain
    :members:
    :show-inheritance:
    :noindex:
@@ -1,10 +1,10 @@
-ContinuousConv
-==============
+ContinuousConvBlock
+===================
 .. currentmodule:: pina.model.layers.convolution_2d

 .. automodule:: pina.model.layers.convolution_2d

-.. autoclass:: ContinuousConv
+.. autoclass:: ContinuousConvBlock
     :members:
     :private-members:
     :undoc-members:
10
docs/source/_rst/loss/loss_interface.rst
Normal file
@@ -0,0 +1,10 @@
LossInterface
=============

.. currentmodule:: pina.loss

.. automodule:: pina.loss

.. autoclass:: LossInterface
    :members:
    :private-members:
    :show-inheritance:
10
docs/source/_rst/models/fnn_residual.rst
Normal file
@@ -0,0 +1,10 @@
ResidualFeedForward
===================

.. currentmodule:: pina.model.feed_forward

.. automodule:: pina.model.feed_forward

.. autoclass:: ResidualFeedForward
    :members:
    :private-members:
    :show-inheritance:
10
docs/source/_rst/models/fno.rst
Normal file
@@ -0,0 +1,10 @@
FNO
===

.. currentmodule:: pina.model.fno

.. automodule:: pina.model.fno

.. autoclass:: FNO
    :members:
    :private-members:
    :show-inheritance:
10
docs/source/_rst/solvers/solver_interface.rst
Normal file
@@ -0,0 +1,10 @@
SolverInterface
===============

.. currentmodule:: pina.solvers.solver

.. automodule:: pina.solvers.solver

.. autoclass:: SolverInterface
    :members:
    :show-inheritance:
    :noindex:
@@ -1,10 +0,0 @@
Span
===========

.. currentmodule:: pina.span

.. automodule:: pina.span

.. autoclass:: Span
    :members:
    :show-inheritance:
    :noindex:
@@ -1,279 +0,0 @@
Tutorial 1: Physics Informed Neural Networks on PINA
====================================================

In this tutorial, we will demonstrate a typical use case of PINA on a
toy problem. Specifically, the tutorial aims to introduce the following
topics:

- Defining a PINA Problem,
- Building a ``pinn`` object,
- Sampling points in a domain

These are the three main steps needed **before** training a Physics
Informed Neural Network (PINN). We will show each step in detail, and at
the end, we will solve the problem.

PINA Problem
------------

Initialize the ``Problem`` class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Problem definition in the PINA framework is done by building a python
``class``, which inherits from one or more problem classes
(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``)
depending on the nature of the problem. Below is an example.

Simple Ordinary Differential Equation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Consider the following:

.. math::

   \begin{equation}
   \begin{cases}
   \frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
   u(x=0) &= 1 \\
   \end{cases}
   \end{equation}

with the analytical solution :math:`u(x) = e^x`. In this case, our ODE
depends only on the spatial variable :math:`x\in(0,1)`, meaning that
our ``Problem`` class is going to be inherited from the
``SpatialProblem`` class:

.. code:: python

    from pina.problem import SpatialProblem
    from pina import CartesianDomain

    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # other stuff ...

Notice that we define ``output_variables`` as a list of symbols,
indicating the output variables of our equation (in this case only
:math:`u`). The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.

What if our equation is also time dependent? In this case, our
``class`` will inherit from both ``SpatialProblem`` and
``TimeDependentProblem``:

.. code:: ipython3

    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina import CartesianDomain

    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        # other stuff ...

where we have included the ``temporal_domain`` variable, indicating the
time domain wanted for the solution.

In summary, using PINA, we can initialize a problem with a class which
inherits from three base classes: ``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``, depending on the type
of problem we are considering. For reference:

* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s)
* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation
* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation

Write the ``Problem`` class
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the ``Problem`` class is initialized, we need to represent the
differential equation in PINA. In order to do this, we need to load the
PINA operators from the ``pina.operators`` module. Again, we'll consider
Equation (1) and represent it in PINA:

.. code:: ipython3

    from pina.problem import SpatialProblem
    from pina.operators import grad
    from pina import Condition, CartesianDomain
    from pina.equation.equation import Equation

    import torch


    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # defining the ode equation
        def ode_equation(input_, output_):

            # computing the derivative
            u_x = grad(output_, input_, components=['u'], d=['x'])

            # extracting the u input variable
            u = output_.extract(['u'])

            # calculate the residual and return it
            return u_x - u

        # defining the initial condition
        def initial_condition(input_, output_):

            # setting the initial value
            value = 1.0

            # extracting the u input variable
            u = output_.extract(['u'])

            # calculate the residual and return it
            return u - value

        # conditions to hold
        conditions = {
            'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
            'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
        }

        # sampled points (see below)
        input_pts = None

        # defining the true solution
        def truth_solution(self, pts):
            return torch.exp(pts.extract(['x']))

After we define the ``Problem`` class, we need to write different class
methods, where each method is a function returning a residual. These
functions are the ones minimized during PINN optimization, given the
initial conditions. For example, in the domain :math:`[0,1]`, the ODE
equation (``ode_equation``) must be satisfied. We represent this by
returning the difference between the gradient of ``u`` and ``u`` itself
(the residual), which we hope to minimize to 0. This is done for all
conditions (``ode_equation``, ``initial_condition``).

Once we have defined the function, we need to tell the neural network
where these methods are to be applied. To do so, we use the
``Condition`` class. In the ``Condition`` class, we pass the location
points and the function we want minimized on those points (other
possibilities are allowed, see the documentation for reference) as
parameters.

Finally, it's possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the real
solution compares to the expected (true) solution. Notice that the
``truth_solution`` function is a method of the ``Problem`` class, but is
not mandatory for problem definition.

Build the ``PINN`` object
-------------------------

The basic requirements for building a ``PINN`` model are a ``Problem``
and a model. We have just covered the ``Problem`` definition. For the
model parameter, one can use either the default models provided in PINA
or a custom model. We will not go into the details of model definition
(see Tutorial2 and Tutorial3 for more details on model definition).

.. code:: ipython3

    from pina.model import FeedForward
    from pina import PINN

    # initialize the problem
    problem = SimpleODE()

    # build the model
    model = FeedForward(
        layers=[10, 10],
        func=torch.nn.Tanh,
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)
    )

    # create the PINN object
    pinn = PINN(problem, model)

Creating the ``PINN`` object is fairly simple. Different optional
parameters include: optimizer, batch size, ... (see
`documentation <https://mathlab.github.io/PINA/>`__ for reference).

Sample points in the domain
---------------------------

Once the ``PINN`` object is created, we need to generate the points for
starting the optimization. To do so, we use the ``sample`` method of the
``CartesianDomain`` class. Below are three examples of sampling methods
on the :math:`[0,1]` domain:

.. code:: ipython3

    # sampling 20 points in [0, 1] through discretization
    pinn.problem.discretise_domain(n=20, mode='grid', variables=['x'])

    # sampling 20 points in (0, 1) through latin hypercube sampling
    pinn.problem.discretise_domain(n=20, mode='latin', variables=['x'])

    # sampling 20 points in (0, 1) randomly
    pinn.problem.discretise_domain(n=20, mode='random', variables=['x'])

Very simple training and plotting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once we have defined the PINA model, created a network, and sampled
points in the domain, we have everything necessary for training a PINN.
To do so, we make use of the ``Trainer`` class.

.. code:: ipython3

    from pina import Trainer

    # initialize trainer
    trainer = Trainer(pinn)

    # train the model
    trainer.train()

.. parsed-literal::

    /u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
      warnings.warn("Can't initialize NVML")
    GPU available: True (cuda), used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    /u/n/ndemo/.local/lib/python3.9/site-packages/lightning/pytorch/loops/utilities.py:72: PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
      rank_zero_warn(
    2023-10-17 10:02:21.318700: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
    2023-10-17 10:02:21.345355: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-10-17 10:02:23.572602: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
    /opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

      | Name        | Type    | Params
    ----------------------------------------
    0 | _loss       | MSELoss | 0
    1 | _neural_net | Network | 141
    ----------------------------------------
    141       Trainable params
    0         Non-trainable params
    141       Total params
    0.001     Total estimated model params size (MB)

.. parsed-literal::

    Training: 0it [00:00, ?it/s]

.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -1,204 +0,0 @@
Tutorial 3: resolution of wave equation with hard constraint PINNs.
===================================================================

The problem definition
----------------------

In this tutorial we present how to solve the wave equation using hard
constraint PINNs. For doing so we will build a custom torch model and
pass it to the ``PINN`` solver.

The problem is written in the following form:

.. raw:: latex

   \begin{equation}
   \begin{cases}
   \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
   u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
   u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
   \end{cases}
   \end{equation}

where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square, and the velocity in the standard wave equation is fixed to one.

First of all, some useful imports.

.. code:: ipython3

    import torch

    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina.operators import laplacian, grad
    from pina.geometry import CartesianDomain
    from pina.solvers import PINN
    from pina.trainer import Trainer
    from pina.equation import Equation
    from pina.equation.equation_factory import FixedValue
    from pina import Condition, Plotter

Now, the wave problem is written in PINA code as a class, inheriting
from ``SpatialProblem`` and ``TimeDependentProblem`` since we deal with
spatial and time-dependent variables. The equations are written as
``conditions`` that should be satisfied in the corresponding domains.
``truth_solution`` is the exact solution which will be compared with the
predicted one.

.. code:: ipython3

    class Wave(TimeDependentProblem, SpatialProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        def wave_equation(input_, output_):
            u_t = grad(output_, input_, components=['u'], d=['t'])
            u_tt = grad(u_t, input_, components=['dudt'], d=['t'])
            nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
            return nabla_u - u_tt

        def initial_condition(input_, output_):
            u_expected = (torch.sin(torch.pi*input_.extract(['x'])) *
                          torch.sin(torch.pi*input_.extract(['y'])))
            return output_.extract(['u']) - u_expected

        conditions = {
            'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
            't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)),
            'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)),
        }

        def wave_sol(self, pts):
            return (torch.sin(torch.pi*pts.extract(['x'])) *
                    torch.sin(torch.pi*pts.extract(['y'])) *
                    torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t'])))

        truth_solution = wave_sol

    problem = Wave()

Hard Constraint Model
---------------------

After the problem, a **torch** model is needed to solve the PINN.
Usually, many models are already implemented in ``PINA``, but the user
has the possibility to build his/her own model in ``PyTorch``. The hard
constraint we impose is on the boundary of the spatial domain.
Specifically, our solution is written as:

.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),

where :math:`NN` is the neural net output. This neural network takes as
input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
and provides the unknown field :math:`u`. By construction, it is zero on
the boundaries. The residuals of the equations are evaluated at several
sampling points (which the user can manipulate using the method
``discretise_domain``) and the loss minimized by the neural network is
the sum of the residuals.

.. code:: ipython3

    class HardMLP(torch.nn.Module):

        def __init__(self, input_dim, output_dim):
            super().__init__()

            self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
                                              torch.nn.Tanh(),
                                              torch.nn.Linear(20, 20),
                                              torch.nn.Tanh(),
                                              torch.nn.Linear(20, output_dim))

        # here in the forward we implement the hard constraints
        def forward(self, x):
            hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
            return hard*self.layers(x)

Train and Inference
-------------------

In this tutorial, the neural network is trained for 3000 epochs with a
learning rate of 0.001 (default in ``PINN``). Training takes
approximately 1 minute.

.. code:: ipython3

    pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
    problem.discretise_domain(1000, 'random', locations=['D','t0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
    trainer = Trainer(pinn, max_epochs=3000)
    trainer.train()

.. parsed-literal::

    /u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
      warnings.warn("Can't initialize NVML")
    GPU available: True (cuda), used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial3/lightning_logs
    2023-10-17 10:24:02.163746: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
    2023-10-17 10:24:02.218849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-10-17 10:24:07.063047: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
    /opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

      | Name        | Type    | Params
    ----------------------------------------
    0 | _loss       | MSELoss | 0
    1 | _neural_net | Network | 521
    ----------------------------------------
    521       Trainable params
    0         Non-trainable params
    521       Total params
    0.002     Total estimated model params size (MB)

.. parsed-literal::

    Training: 0it [00:00, ?it/s]

.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=3000` reached.

Notice that the loss on the boundaries of the spatial domain is exactly
zero, as expected! After the training is completed one can now plot some
results using the ``Plotter`` class of **PINA**.

.. code:: ipython3

    plotter = Plotter()

    # plotting at fixed time t = 0.0
    plotter.plot(trainer, fixed_variables={'t': 0.0})

    # plotting at fixed time t = 0.5
    plotter.plot(trainer, fixed_variables={'t': 0.5})

    # plotting at fixed time t = 1.
    plotter.plot(trainer, fixed_variables={'t': 1.0})

.. image:: output_14_0.png

.. image:: output_14_1.png

.. image:: output_14_2.png
385
docs/source/_rst/tutorials/tutorial1/tutorial.rst
Normal file
@@ -0,0 +1,385 @@
Tutorial: Physics Informed Neural Networks on PINA
==================================================

In this tutorial, we will demonstrate a typical use case of **PINA** on
a toy problem, following the standard API procedure.

Specifically, the tutorial aims to introduce the following topics:

- Explaining how to build a **PINA** ``Problem``,
- Showing how to generate data for ``PINN`` training

These are the two main steps needed **before** starting the modelling
optimization (choose model and solver, and train). We will show each
step in detail, and at the end, we will solve a simple Ordinary
Differential Equation (ODE) problem using the ``PINN`` solver.

Build a PINA problem
--------------------

Problem definition in the **PINA** framework is done by building a
python ``class``, which inherits from one or more problem classes
(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
depending on the nature of the problem. Below is an example.

Simple Ordinary Differential Equation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Consider the following:

.. math::

   \begin{equation}
   \begin{cases}
   \frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
   u(x=0) &= 1 \\
   \end{cases}
   \end{equation}

with the analytical solution :math:`u(x) = e^x`. In this case, our ODE
depends only on the spatial variable :math:`x\in(0,1)`, meaning that
our ``Problem`` class is going to be inherited from the
``SpatialProblem`` class:

.. code:: python

    from pina.problem import SpatialProblem
    from pina import CartesianDomain

    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # other stuff ...

Notice that we define ``output_variables`` as a list of symbols,
indicating the output variables of our equation (in this case only
:math:`u`). This is done because in **PINA** the ``torch.Tensor``\ s are
labelled, allowing the user maximal flexibility in the manipulation of
the tensor. The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.
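
To make the labelling concrete, here is a minimal sketch of how a
``LabelTensor`` pairs a ``torch.Tensor`` with named columns (the tensor
shape and the label names below are purely illustrative):

.. code:: python

    import torch
    from pina import LabelTensor

    # wrap a (5, 2) tensor and name its two columns 'x' and 'y'
    pts = LabelTensor(torch.rand(5, 2), labels=['x', 'y'])

    # extract returns the column(s) matching the given label(s),
    # preserving the labelling
    x_only = pts.extract(['x'])
    print(x_only.labels)  # ['x']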

What if our equation is also time dependent? In this case, our
``class`` will inherit from both ``SpatialProblem`` and
``TimeDependentProblem``:

.. code:: ipython3

    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina import CartesianDomain

    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        # other stuff ...

where we have included the ``temporal_domain`` variable, indicating the
time domain wanted for the solution.

In summary, using **PINA**, we can initialize a problem with a class
which inherits from different base classes: ``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``, and so on, depending on
the type of problem we are considering. Here are some examples (more in
the official documentation):

* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s)
* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation
* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation (a sketch is shown below)
* ``AbstractProblem`` :math:`\rightarrow` any **PINA** problem inherits from here
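
By analogy with ``spatial_domain`` and ``temporal_domain`` above, a
parametric problem declares a domain for its parameters. The snippet
below is only a sketch: the ``parameter_domain`` attribute name and the
parameter label ``alpha`` are assumptions, not taken from this diff.

.. code:: python

    from pina.problem import SpatialProblem, ParametricProblem
    from pina import CartesianDomain

    class ParametricODE(SpatialProblem, ParametricProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        # assumed attribute name, by analogy with temporal_domain
        parameter_domain = CartesianDomain({'alpha': [0, 1]})

        # other stuff ...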
Write the problem class
~~~~~~~~~~~~~~~~~~~~~~~

Once the ``Problem`` class is initialized, we need to represent the
differential equation in **PINA**. In order to do this, we need to load
the **PINA** operators from the ``pina.operators`` module. Again, we'll
consider Equation (1) and represent it in **PINA**:

.. code:: ipython3

    from pina.problem import SpatialProblem
    from pina.operators import grad
    from pina import Condition
    from pina.geometry import CartesianDomain
    from pina.equation import Equation, FixedValue

    import torch


    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # defining the ode equation
        def ode_equation(input_, output_):

            # computing the derivative
            u_x = grad(output_, input_, components=['u'], d=['x'])

            # extracting the u input variable
            u = output_.extract(['u'])

            # calculate the residual and return it
            return u_x - u

        # conditions to hold
        conditions = {
            'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)),  # we fix the initial condition to value 1
            'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),  # we wrap the python equation using Equation
        }

        # sampled points (see below)
        input_pts = None

        # defining the true solution
        def truth_solution(self, pts):
            return torch.exp(pts.extract(['x']))

    problem = SimpleODE()

After we define the ``Problem`` class, we need to write different class
methods, where each method is a function returning a residual. These
functions are the ones minimized during PINN optimization, given the
initial conditions. For example, in the domain :math:`[0,1]`, the ODE
equation (``ode_equation``) must be satisfied. We represent this by
returning the difference between the gradient of ``u`` and ``u`` itself
(the residual), which we hope to minimize to 0. This is done for all
conditions. Notice that we do not pass directly a ``python`` function,
but an ``Equation`` object, which is initialized with the ``python``
function. This is done so that all the computations and internal checks
are done inside **PINA**.
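
For instance, under the assumption that ``FixedValue(c)`` encodes the
residual :math:`u - c` (as the removed version of this tutorial wrote
out explicitly), the two conditions below are equivalent ways to impose
:math:`u(x=0) = 1`:

.. code:: python

    from pina import Condition
    from pina.geometry import CartesianDomain
    from pina.equation import Equation, FixedValue

    def initial_condition(input_, output_):
        # residual u - 1, driven to zero at x = 0
        return output_.extract(['u']) - 1.0

    # explicit residual wrapped in Equation ...
    c1 = Condition(location=CartesianDomain({'x': 0.}),
                   equation=Equation(initial_condition))

    # ... or the FixedValue shorthand
    c2 = Condition(location=CartesianDomain({'x': 0.}),
                   equation=FixedValue(1))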

Once we have defined the function, we need to tell the neural network
where these methods are to be applied. To do so, we use the
``Condition`` class. In the ``Condition`` class, we pass the location
points and the equation we want minimized on those points (other
possibilities are allowed, see the documentation for reference).

Finally, it's possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the real
solution compares to the expected (true) solution. Notice that the
``truth_solution`` function is a method of the ``Problem`` class, but is
not mandatory for problem definition.

Generate data
-------------

Data for training can come in the form of direct numerical simulation
results, or points in the domains. In case we do unsupervised learning,
we just need the collocation points for training, i.e. points where we
want to evaluate the neural network. Sampling points in **PINA** is very
easy; here we show three examples using the ``.discretise_domain``
method of the ``AbstractProblem`` class.

.. code:: ipython3

    # sampling 20 points in [0, 1] through discretization in all locations
    problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')

    # sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
    problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
    problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])

    # sampling 20 points in (0, 1) randomly
    problem.discretise_domain(n=20, mode='random', variables=['x'])

We are going to use latin hypercube points for sampling. We need to
sample in all the condition domains. In our case we sample in ``D`` and
``x0``.

.. code:: ipython3

    # sampling for training
    problem.discretise_domain(1, 'random', locations=['x0'])
    problem.discretise_domain(20, 'lh', locations=['D'])

The points are saved in a python ``dict``, and can be accessed by
calling the attribute ``input_pts`` of the problem.

.. code:: ipython3

    print('Input points:', problem.input_pts)
    print('Input points labels:', problem.input_pts['D'].labels)

.. parsed-literal::

    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8569]],
                 [[0.9478]],
                 [[0.3030]],
                 [[0.8182]],
                 [[0.4116]],
                 [[0.6687]],
                 [[0.5394]],
                 [[0.9927]],
                 [[0.6082]],
                 [[0.4605]],
                 [[0.2859]],
                 [[0.7321]],
                 [[0.5624]],
                 [[0.1303]],
                 [[0.2402]],
                 [[0.0182]],
                 [[0.0714]],
                 [[0.3697]],
                 [[0.7770]],
                 [[0.1784]]])}
    Input points labels: ['x']

To visualize the sampled points we can use the ``.plot_samples`` method
of the ``Plotter`` class.

.. code:: ipython3

    from pina import Plotter

    pl = Plotter()
    pl.plot_samples(problem=problem)

.. image:: tutorial_files/tutorial_16_0.png

Perform a small training
------------------------

Once we have defined the problem and generated the data we can start the
modelling. Here we will choose a ``FeedForward`` neural network
available in ``pina.model``, and we will train using the ``PINN`` solver
from ``pina.solvers``. We highlight that this training is fairly simple,
for more advanced stuff consider the tutorials in the **Physics Informed
Neural Networks** section of **Tutorials**. For training we use the
``Trainer`` class from ``pina.trainer``. Here we show a very short
training and some methods for plotting the results. Notice that by
default all relevant metrics (e.g. MSE error during training) are going
to be tracked using a ``lightning`` logger, by default ``CSVLogger``.
If you want to track the metric by yourself without a logger, use
``pina.callbacks.MetricTracker``.

.. code:: ipython3

    from pina import PINN, Trainer
    from pina.model import FeedForward
    from pina.callbacks import MetricTracker


    # build the model
    model = FeedForward(
        layers=[10, 10],
        func=torch.nn.Tanh,
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)
    )

    # create the PINN object
    pinn = PINN(problem, model)

    # create the trainer
    trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)

    # train
    trainer.train()

.. parsed-literal::

    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
      warnings.warn("Can't initialize NVML")
    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs

.. parsed-literal::

    Epoch 1499: : 1it [00:00, 143.58it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]

.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=1500` reached.

.. parsed-literal::

    Epoch 1499: : 1it [00:00, 65.39it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]

After the training we can inspect the trainer's logged metrics (by
default **PINA** logs the mean square error residual loss). The logged
metrics can be accessed online using one of the ``Lightning`` loggers.
The final loss can be accessed by ``trainer.logged_metrics``.

.. code:: ipython3

    # inspecting final loss
    trainer.logged_metrics

.. parsed-literal::

    {'mean_loss': tensor(1.0938e-05),
     'x0_loss': tensor(1.3328e-07),
     'D_loss': tensor(2.1743e-05)}

By using the ``Plotter`` class from **PINA** we can also do some
quantitative plots of the solution.

.. code:: ipython3

    # plotting the solution
    pl.plot(trainer=trainer)

.. image:: tutorial_files/tutorial_23_0.png

The solution is overlapped with the actual one, and they are practically
indistinguishable. We can also easily plot the loss:

.. code:: ipython3

    pl.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)

.. image:: tutorial_files/tutorial_25_0.png

As we can see, the loss has not reached a minimum, suggesting that we
could train for longer.

What's next?
------------

Nice, you have completed the introductory tutorial of **PINA**! There
are multiple directions you can go now:

1. Train the network for longer or with different layer sizes and assess
   the final accuracy

2. Train the network using other types of models (see ``pina.model``)

3. GPU training and benchmark the speed (a sketch is shown below)

4. Many more…
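
For the third direction, here is a minimal sketch (assuming a
CUDA-capable machine; the ``accelerator`` argument is forwarded to the
underlying ``lightning`` trainer, exactly as ``accelerator='cpu'`` was
above):

.. code:: python

    from pina import Trainer

    # same solver as before, but requesting a GPU accelerator
    trainer_gpu = Trainer(solver=pinn, max_epochs=1500, accelerator='gpu')
    trainer_gpu.train()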
@@ -1,27 +1,13 @@
-Tutorial 2: resolution of Poisson problem and usage of extra-features
-=====================================================================
+Tutorial: Two dimensional Poisson problem using Extra Features Learning
+=======================================================================

-The problem definition
-~~~~~~~~~~~~~~~~~~~~~~
-
 This tutorial presents how to solve with Physics-Informed Neural
-Networks a 2D Poisson problem with Dirichlet boundary conditions. Using
-extrafeatures.
-
-The problem is written as:
-
-.. raw:: latex
-
-   \begin{equation}
-   \begin{cases}
-   \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-   u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-   \end{cases}
-   \end{equation}
-
-where :math:`D` is a square domain :math:`[0,1]^2`, and
-:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
-square.
+Networks (PINNs) a 2D Poisson problem with Dirichlet boundary
+conditions. We will train with standard PINN's training, and with
+extrafeatures. For more insights on extrafeature learning please read
+`An extended physics informed neural network for preliminary analysis of
+parametric optimal control
+problems <https://www.sciencedirect.com/science/article/abs/pii/S0898122123002018>`__.

 First of all, some useful imports.
@@ -41,9 +27,22 @@ First of all, some useful imports.
     from pina import Condition, LabelTensor
     from pina.callbacks import MetricTracker

-Now, the Poisson problem is written in PINA code as a class. The
+The problem definition
+----------------------
+
+The two-dimensional Poisson problem is mathematically written as:
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
+:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
+square.
+
+The Poisson problem is written in **PINA** code as a class. The
 equations are written as *conditions* that should be satisfied in the
-corresponding domains. *truth\_solution* is the exact solution which
+corresponding domains. The *truth_solution* is the exact solution which
 will be compared with the predicted one.

 .. code:: ipython3
@@ -58,6 +57,7 @@ will be compared with the predicted one.
             laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
             return laplacian_u - force_term

+        # here we write the problem conditions
         conditions = {
             'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)),
             'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)),
@@ -80,8 +80,8 @@ will be compared with the predicted one.
     problem.discretise_domain(25, 'grid', locations=['D'])
     problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'])

-The problem solution
-~~~~~~~~~~~~~~~~~~~~
+Solving the problem with standard PINNs
+---------------------------------------

 After the problem, the feed-forward neural network is defined, through
 the class ``FeedForward``. This neural network takes as input the
@@ -93,7 +93,9 @@ neural network is the sum of the residuals.
 In this tutorial, the neural network is composed of two hidden layers of
 10 neurons each, and it is trained for 1000 epochs with a learning rate
-of 0.006. These parameters can be modified as desired.
+of 0.006 and :math:`l_2` weight regularization set to :math:`10^{-8}`.
+These parameters can be modified as desired. We use the
+``MetricTracker`` class to track the metrics during training.

 .. code:: ipython3
@@ -105,7 +107,7 @@ of 0.006. These parameters can be modified as desired.
         input_dimensions=len(problem.input_variables)
     )
     pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
-    trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()])
+    trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

     # train
     trainer.train()
@@ -113,30 +115,15 @@ of 0.006. These parameters can be modified as desired.
 .. parsed-literal::

-    /u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
+    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
       warnings.warn("Can't initialize NVML")
-    GPU available: True (cuda), used: True
+    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
+      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
+    GPU available: False, used: False
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial2/lightning_logs
+    Missing logger folder: /u/d/dcoscia/PINA/tutorials/tutorial2/lightning_logs
-    2023-10-17 10:09:18.208459: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
-    2023-10-17 10:09:18.235849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
-    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-    2023-10-17 10:09:20.462393: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
-    /opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
-      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
-    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
-
-      | Name        | Type    | Params
-    ----------------------------------------
-    0 | _loss       | MSELoss | 0
-    1 | _neural_net | Network | 151
-    ----------------------------------------
-    151       Trainable params
-    0         Non-trainable params
-    151       Total params
-    0.001     Total estimated model params size (MB)
@@ -162,22 +149,20 @@ and the predicted solutions are shown.



.. image:: tutorial_files/tutorial_9_0.png


Solving the problem with extra-features PINNs
---------------------------------------------

Now, the same problem is solved in a different way. A new neural network
is now defined, with an additional input variable, named extra-feature,
which coincides with the forcing term in the Laplace equation. The set
of input variables to the neural network is:

:raw-latex:`\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}`

where :math:`x` and :math:`y` are the spatial coordinates and
:math:`k(x, y)` is the added feature.
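
In the full tutorial the feature is implemented as a small ``torch``
module; a minimal sketch is given below. It is an illustration rather
than the tutorial's own cell: the class name matches the ``SinSin``
used in the code, but the ``LabelTensor`` output label is an assumption.

.. code:: ipython3

    import torch
    from pina import LabelTensor

    class SinSin(torch.nn.Module):
        """Extra feature k(x, y) = sin(pi x) sin(pi y) (sketch)."""
        def __init__(self):
            super().__init__()

        def forward(self, x):
            # evaluate the forcing term on the input coordinates
            t = (torch.sin(x.extract(['x']) * torch.pi) *
                 torch.sin(x.extract(['y']) * torch.pi))
            return LabelTensor(t, ['sin(x)sin(y)'])
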
@@ -215,7 +200,7 @@ new extra feature.
        input_dimensions=len(problem.input_variables)+1
    )
    pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
    trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

    # train
    trainer_feat.train()
@@ -223,21 +208,10 @@ new extra feature.

.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs

@@ -262,11 +236,11 @@ of magnitudes in accuracy.



.. image:: tutorial_files/tutorial_14_0.png


Solving the problem with learnable extra-features PINNs
-------------------------------------------------------

We can still do better!
@@ -274,11 +248,9 @@ Another way to exploit the extra features is the addition of learnable
parameters inside them. In this way, the added parameters are learned
during the training phase of the neural network. In this case, we use:

:raw-latex:`\begin{equation}
k(x, y) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}`

where :math:`\alpha` and :math:`\beta` are the aforementioned
parameters. Their implementation is quite trivial: by using the class
@@ -310,8 +282,8 @@ need, and they are managed by ``autograd`` module!
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)+1
    )
    pinn_lean = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
    trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

    # train
    trainer_learn.train()
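
The ``SinSinAB`` feature used above is defined earlier in the full
tutorial; a minimal sketch, assuming the learnable parameters are plain
``torch.nn.Parameter`` objects (the output label is again an
assumption), is:

.. code:: ipython3

    class SinSinAB(torch.nn.Module):
        """Learnable feature k(x, y) = beta * sin(alpha x) sin(alpha y) (sketch)."""
        def __init__(self):
            super().__init__()
            self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
            self.beta = torch.nn.Parameter(torch.tensor([1.0]))

        def forward(self, x):
            # alpha and beta are updated by autograd during training
            t = (self.beta * torch.sin(self.alpha * x.extract(['x']) * torch.pi) *
                 torch.sin(self.alpha * x.extract(['y']) * torch.pi))
            return LabelTensor(t, ['b*sin(a*x)sin(a*y)'])
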
@@ -319,21 +291,10 @@ need, and they are managed by ``autograd`` module!

.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs

@@ -367,8 +328,8 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)+1
    )
    pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
    trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

    # train
    trainer_learn.train()
@@ -376,21 +337,10 @@ removing all the hidden layers in the ``FeedForward``, keeping only the

.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs

@@ -422,5 +372,35 @@ features.



.. image:: tutorial_files/tutorial_21_0.png


Let us compare the training losses for the various types of training.

.. code:: ipython3

    plotter.plot_loss(trainer, label='Standard')
    plotter.plot_loss(trainer_feat, label='Static Features')
    plotter.plot_loss(trainer_learn, label='Learnable Features')



.. image:: tutorial_files/tutorial_23_0.png


What’s next?
------------

Nice, you have completed the two dimensional Poisson tutorial of
**PINA**! There are multiple directions you can go now:

1. Train the network for longer or with different layer sizes and assess
   the final accuracy

2. Propose new types of extra features and see how they affect the
   learning

3. Exploit extra-feature training in more complex problems

4. Many more…
342
docs/source/_rst/tutorials/tutorial3/tutorial.rst
Normal file
@@ -0,0 +1,342 @@
Tutorial: Two dimensional Wave problem with hard constraint
===========================================================

In this tutorial we present how to solve the wave equation using hard
constraint PINNs. To do so, we will build a custom ``torch`` model
and pass it to the ``PINN`` solver.

First of all, some useful imports.

.. code:: ipython3

    import torch

    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina.operators import laplacian, grad
    from pina.geometry import CartesianDomain
    from pina.solvers import PINN
    from pina.trainer import Trainer
    from pina.equation import Equation
    from pina.equation.equation_factory import FixedValue
    from pina import Condition, Plotter

The problem definition
----------------------

The problem is written in the following form:

:raw-latex:`\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}`

where :math:`D` is the square domain :math:`[0,1]^2`,
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square, and the velocity in the standard wave equation is fixed to one.

Now, the wave problem is written in PINA code as a class, inheriting
from ``SpatialProblem`` and ``TimeDependentProblem`` since we deal with
spatial and time dependent variables. The equations are written as
``conditions`` that should be satisfied in the corresponding domains.
``truth_solution`` is the exact solution which will be compared with the
predicted one.

.. code:: ipython3

    class Wave(TimeDependentProblem, SpatialProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        def wave_equation(input_, output_):
            u_t = grad(output_, input_, components=['u'], d=['t'])
            u_tt = grad(u_t, input_, components=['dudt'], d=['t'])
            nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
            return nabla_u - u_tt

        def initial_condition(input_, output_):
            u_expected = (torch.sin(torch.pi*input_.extract(['x'])) *
                          torch.sin(torch.pi*input_.extract(['y'])))
            return output_.extract(['u']) - u_expected

        conditions = {
            'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
            'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),
            't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)),
            'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)),
        }

        def wave_sol(self, pts):
            return (torch.sin(torch.pi*pts.extract(['x'])) *
                    torch.sin(torch.pi*pts.extract(['y'])) *
                    torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t'])))

        truth_solution = wave_sol

    problem = Wave()

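One can check by hand that the ``truth_solution`` above satisfies the
wave equation and both conditions:

.. math::

   u = \sin(\pi x)\sin(\pi y)\cos(\sqrt{2}\pi t)
   \quad\Rightarrow\quad
   \Delta u = -2\pi^2 u = \frac{\partial^2 u}{\partial t^2},

and :math:`u` vanishes on the four boundaries while reducing to
:math:`\sin(\pi x)\sin(\pi y)` at :math:`t=0`.
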
Hard Constraint Model
---------------------

After the problem, a **torch** model is needed to solve the PINN.
Many models are already implemented in **PINA**, but the user also has
the possibility to build his/her own model in ``torch``. The hard
constraint we impose is on the boundary of the spatial domain.
Specifically, our solution is written as:

.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),

where :math:`NN` is the neural net output. This neural network takes as
input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
and provides the unknown field :math:`u`. By construction, it is zero on
the boundaries. The residuals of the equations are evaluated at several
sampling points (which the user can manipulate using the method
``discretise_domain``) and the loss minimized by the neural network is
the sum of the residuals.

.. code:: ipython3

    class HardMLP(torch.nn.Module):

        def __init__(self, input_dim, output_dim):
            super().__init__()

            self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, output_dim))

        # here in the forward we implement the hard constraints
        def forward(self, x):
            hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
            return hard*self.layers(x)

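As a quick sanity check, the multiplier forces the output to vanish on
the spatial boundary. The snippet below is a sketch (the ``LabelTensor``
labels follow the problem definition above):

.. code:: ipython3

    from pina import LabelTensor

    # a point on the spatial boundary (x = 0): the multiplier vanishes there
    pts = LabelTensor(torch.tensor([[0.0, 0.3, 0.5]]), ['x', 'y', 't'])
    print(HardMLP(3, 1)(pts))  # expected: a zero tensor
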
Train and Inference
-------------------

In this tutorial, the neural network is trained for 1000 epochs with a
learning rate of 0.001 (the default in ``PINN``). Training takes
approximately 3 minutes.

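A different learning rate can be passed through ``optimizer_kwargs``,
with the same pattern used in the Poisson tutorial above (the object
below is only illustrative and not used in what follows):

.. code:: ipython3

    # optional: override the default learning rate of the PINN solver
    pinn_custom_lr = PINN(problem, HardMLP(3, 1), optimizer_kwargs={'lr': 0.006})
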
.. code:: ipython3

    # generate the data
    problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])

    # create the solver
    pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))

    # create trainer and train
    trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
    trainer.train()


.. parsed-literal::

    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
      warnings.warn("Can't initialize NVML")
    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs


.. parsed-literal::

    Training: 0it [00:00, ?it/s]


.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=1000` reached.

Notice that the loss on the boundaries of the spatial domain is exactly
zero, as expected! After the training is completed one can now plot some
results using the ``Plotter`` class of **PINA**.

.. code:: ipython3

    plotter = Plotter()

    # plotting at fixed time t = 0.0
    print('Plotting at t=0')
    plotter.plot(trainer, fixed_variables={'t': 0.0})

    # plotting at fixed time t = 0.5
    print('Plotting at t=0.5')
    plotter.plot(trainer, fixed_variables={'t': 0.5})

    # plotting at fixed time t = 1.
    print('Plotting at t=1')
    plotter.plot(trainer, fixed_variables={'t': 1.0})


.. parsed-literal::

    Plotting at t=0


.. image:: tutorial_files/tutorial_13_1.png


.. parsed-literal::

    Plotting at t=0.5


.. image:: tutorial_files/tutorial_13_3.png


.. parsed-literal::

    Plotting at t=1


.. image:: tutorial_files/tutorial_13_5.png


The results are not so great, and we can clearly see that as time
progresses the solution gets worse. Can we do better?

A valid option is to impose the initial condition as a hard constraint as
well. Specifically, our solution is written as:

.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y).

Let us build the network first.

.. code:: ipython3

    class HardMLPtime(torch.nn.Module):

        def __init__(self, input_dim, output_dim):
            super().__init__()

            self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, output_dim))

        # here in the forward we implement the hard constraints
        def forward(self, x):
            hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
            hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
            return hard_space * self.layers(x) * x.extract(['t']) + hard_t

Now let’s train with the same configuration as the previous test.

.. code:: ipython3

    # generate the data
    problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])

    # create the solver
    pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables)))

    # create trainer and train
    trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
    trainer.train()


.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs


.. parsed-literal::

    Training: 0it [00:00, ?it/s]


.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=1000` reached.


We can clearly see that the loss is way lower now. Let’s plot the
results.

.. code:: ipython3

    plotter = Plotter()

    # plotting at fixed time t = 0.0
    print('Plotting at t=0')
    plotter.plot(trainer, fixed_variables={'t': 0.0})

    # plotting at fixed time t = 0.5
    print('Plotting at t=0.5')
    plotter.plot(trainer, fixed_variables={'t': 0.5})

    # plotting at fixed time t = 1.
    print('Plotting at t=1')
    plotter.plot(trainer, fixed_variables={'t': 1.0})


.. parsed-literal::

    Plotting at t=0


.. image:: tutorial_files/tutorial_19_1.png


.. parsed-literal::

    Plotting at t=0.5


.. image:: tutorial_files/tutorial_19_3.png


.. parsed-literal::

    Plotting at t=1


.. image:: tutorial_files/tutorial_19_5.png


We can see now that the results are way better! This is due to the fact
that previously the network was not learning the initial condition
correctly, leading to a poor solution as time evolved. By imposing the
initial condition the network is able to correctly solve the problem.

What’s next?
------------

Nice, you have completed the two dimensional Wave tutorial of **PINA**!
There are multiple directions you can go now:

1. Train the network for longer or with different layer sizes and assess
   the final accuracy

2. Propose new types of hard constraints in time, e.g. (see the sketch
   after this list)

   .. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)(1-\exp(-t)) + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y)

3. Exploit extra-feature training for models 1 and 2

4. Many more…
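
A minimal sketch of the constraint suggested in point 2, reusing the
``HardMLPtime`` structure above (the class name is hypothetical):

.. code:: ipython3

    class HardMLPexp(torch.nn.Module):
        """Hard constraint with a (1 - exp(-t)) time multiplier (sketch)."""

        def __init__(self, input_dim, output_dim):
            super().__init__()
            self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, 40),
                                              torch.nn.ReLU(),
                                              torch.nn.Linear(40, output_dim))

        def forward(self, x):
            hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
            hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
            # the network term now vanishes at t = 0 through (1 - exp(-t))
            return hard_space * self.layers(x) * (1 - torch.exp(-x.extract(['t']))) + hard_t
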
@@ -1,24 +1,22 @@
Tutorial: Unstructured convolutional autoencoder via continuous convolution
===========================================================================

In this tutorial, we will show how to use the Continuous Convolutional
Filter, and how to build common Deep Learning architectures with it. The
implementation of the filter follows the original work `A Continuous
Convolutional Trainable Filter for Modelling Unstructured
Data <https://arxiv.org/abs/2210.13416>`__.

First of all we import the modules needed for the tutorial:

.. code:: ipython3

    import torch
    import matplotlib.pyplot as plt
    from pina.problem import AbstractProblem
    from pina.solvers import SupervisedSolver
    from pina.trainer import Trainer
    from pina import Condition, LabelTensor
    from pina.model.layers import ContinuousConvBlock
    import torchvision # for MNIST dataset
    from pina.model import FeedForward # for building AE and MNIST classification
@@ -46,7 +44,7 @@ as:

   \mathcal{I}_{\rm{out}}(\mathbf{x}) = \int_{\mathcal{X}} \mathcal{I}(\mathbf{x} + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{\tau}) d\mathbf{\tau},

where :math:`\mathcal{K} : \mathcal{X} \rightarrow \mathbb{R}` is the
*continuous filter* function, and
:math:`\mathcal{I} : \Omega \subset \mathbb{R}^N \rightarrow \mathbb{R}`
is the input function. The continuous filter function is approximated
@@ -62,7 +60,7 @@ by the authors. Thus, given :math:`\{\mathbf{x}_i\}_{i=1}^{n}` points in

   \mathcal{I}_{\rm{out}}(\mathbf{\tilde{x}}_i) = \sum_{{\mathbf{x}_i}\in\mathcal{X}} \mathcal{I}(\mathbf{x}_i + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{x}_i),

where :math:`\mathbf{\tau} \in \mathcal{S}`, with :math:`\mathcal{S}`
the set of available strides, corresponds to the current stride position
of the filter, and the :math:`\mathbf{\tilde{x}}_i` points are obtained by
taking the centroid of the filter position mapped on the :math:`\Omega`
@@ -83,7 +81,7 @@ shape:

.. math:: [B \times N_{in} \times N \times D]

where :math:`B` is the batch size, :math:`N_{in}` is the number of
input fields, :math:`N` the number of points in the mesh, and :math:`D` the
dimension of the problem. In particular:

* :math:`D` is the number of spatial variables + 1. The last column
  must contain the field value. For
@@ -93,7 +91,7 @@ like ``[first coordinate, second coordinate, field value]``
* For example, a vectorial function :math:`f = [f_1, f_2]` will have
  :math:`N_{in}=2`.

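For instance, a batch of 8 samples of a scalar field evaluated on a
mesh of 100 points in two spatial dimensions would be packed as:

.. code:: ipython3

    # [B, N_in, N, D] = [8, 1, 100, 3]; the columns are x, y and the field value
    dummy_input = torch.rand(8, 1, 100, 3)
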
Let’s see an example to clarify the ideas. We will be verbose to explain
the input form in detail. We wish to create the function:

.. math::
@@ -148,12 +146,12 @@ where to go. Here is an example for the :math:`[0,1]\times[0,5]` domain:

.. code:: python

    # stride definition
    stride = {"domain": [1, 5],
              "start": [0, 0],
              "jump": [0.1, 0.3],
              "direction": [1, 1],
              }

This tells the filter: 1. ``domain``: square domain (the only
implemented) :math:`[0,1]\times[0,5]`. The minimum value is always zero,
@@ -198,15 +196,15 @@ fix the filter dimension to be :math:`[0.1, 0.1]`.

.. parsed-literal::

    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
      return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]


That’s it! In just one line of code we have created the continuous
convolutional filter. By default the ``pina.model.FeedForward`` neural
network is initialised, more on the
`documentation <https://mathlab.github.io/PINA/_rst/fnn.html>`__. In
case the mesh doesn’t change during training we can set the ``optimize``
flag to ``True``, to exploit optimizations for finding the points
to convolve.

@@ -220,7 +218,7 @@ to convolve.
        optimize=True)


Let’s try to do a forward pass.

.. code:: ipython3

@@ -238,7 +236,7 @@ Let's try to do a forward pass
    Filter output data has shape: torch.Size([1, 1, 169, 3])


If we don’t want to use the default ``FeedForward`` neural network, we
can pass a specific torch model via the ``model`` keyword as follows:

.. code:: ipython3
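
    # a hedged sketch of a custom filter model (the tutorial's own cell is
    # elided in this hunk); any torch.nn.Module exposing this constructor works
    class CustomKernel(torch.nn.Module):  # hypothetical name
        def __init__(self, input_dim, output_dim):
            super().__init__()
            self.net = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
                                           torch.nn.Tanh(),
                                           torch.nn.Linear(20, output_dim))

        def forward(self, x):
            return self.net(x)

    # the class itself (not an instance) is then passed via the model keyword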
@@ -270,7 +268,7 @@ Notice that we pass the class and not an already built object!
Building a MNIST Classifier
---------------------------

Let’s see how we can build a MNIST classifier using a continuous
convolutional filter. We will use the MNIST dataset from PyTorch. In
order to keep training times small we use only 6000 samples for training
and 1000 samples for testing.
@@ -308,68 +306,7 @@ and 1000 samples for testing.
    test_loader = DataLoader(train_data, batch_size=batch_size,
                             sampler=SubsetRandomSampler(subsample_train_indices))

Let’s now build a simple classifier. The MNIST dataset is composed of
vectors of shape ``[batch, 1, 28, 28]``, but we can imagine them as
one-field functions where the pixels :math:`ij` are the coordinates
:math:`x=i, y=j` in a :math:`[0, 27]\times[0,27]` domain, and the pixels
@@ -448,7 +385,7 @@ filter followed by a feedforward neural network

    net = ContinuousClassifier()

Let’s try to train it using a simple pytorch training loop. We train for
just 1 epoch using the Adam optimizer with a :math:`0.001` learning rate.

.. code:: ipython3
@@ -487,7 +424,9 @@ just 1 epoch using the Adam optimizer with a :math:`0.001` learning rate.

.. parsed-literal::

    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/autograd/__init__.py:200: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
      Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
      warnings.warn("Can't initialize NVML")

@@ -510,7 +449,7 @@ just 1 epoch using the Adam optimizer with a :math:`0.001` learning rate.
    batch [750/750] loss[0.040]


Let’s see the performance on the train set!

.. code:: ipython3

@@ -537,7 +476,7 @@ Let's see the performance on the train set!

As we can see, the performance is very good considering we trained for
only 1 epoch! Nevertheless, we are still using structured data… Let’s
see how we can build an autoencoder for unstructured data now.

Building a Continuous Convolutional Autoencoder
@@ -546,7 +485,7 @@ Building a Continuous Convolutional Autoencoder
Just as a toy problem, we will now build an autoencoder for the
function :math:`f(x,y)=\sin(\pi x)\sin(\pi y)` on the unit circle domain
centered at :math:`(0.5, 0.5)`. We will also see the ability to
up-sample (once trained) the results without retraining. Let’s first
create the input and visualize it; we will first use a mesh of
:math:`100` points.

|
|
||||||
@@ -592,12 +531,12 @@ create the input and visualize it, we will use firstly a mesh of
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_32_0.png
|
.. image:: tutorial_files/tutorial_32_0.png
|
||||||
|
|
||||||
|
|
||||||
Let's now build a simple autoencoder using the continuous convolutional
|
Let’s now build a simple autoencoder using the continuous convolutional
|
||||||
filter. The data is clearly unstructured and a simple convolutional
|
filter. The data is clearly unstructured and a simple convolutional
|
||||||
filter might not work without projecting or interpolating first. Let's
|
filter might not work without projecting or interpolating first. Let’s
|
||||||
first build and ``Encoder`` and ``Decoder`` class, and then a
|
first build and ``Encoder`` and ``Decoder`` class, and then a
|
||||||
``Autoencoder`` class that contains both.
|
``Autoencoder`` class that contains both.
|
||||||
|
|
||||||
@@ -658,7 +597,7 @@ first build an ``Encoder`` and a ``Decoder`` class, and then an
Very good! Notice that in the ``Decoder`` class, in the ``forward`` pass,
we have used the ``.transpose()`` method of the
``ContinuousConvolution`` class. This method accepts the ``weights`` for
upsampling and the ``grid`` on which to upsample. Let’s now build the
autoencoder! We set the hidden dimension in the ``hidden_dimension``
variable. We apply the sigmoid on the output since the field value is
between :math:`[0, 1]`.

@@ -681,59 +620,50 @@ between :math:`[0, 1]`.
            out = self.decoder(weights, grid)
            return out

    net = Autoencoder()

Let’s now train the autoencoder, minimizing the mean square error loss
and optimizing using Adam. We use the ``SupervisedSolver`` as solver,
and the problem is a simple problem created by inheriting from
``AbstractProblem``. It takes approximately two minutes to train on CPU.

.. code:: ipython3

    # define the problem
    class CircleProblem(AbstractProblem):
        input_variables = ['x', 'y', 'f']
        output_variables = input_variables
        conditions = {'data' : Condition(input_points=LabelTensor(input_data, input_variables), output_points=LabelTensor(input_data, output_variables))}

    # define the solver
    solver = SupervisedSolver(problem=CircleProblem(), model=net, loss=torch.nn.MSELoss())

    # train
    trainer = Trainer(solver, max_epochs=150, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
    trainer.train()


.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs


.. parsed-literal::

    Training: 0it [00:00, ?it/s]


.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=150` reached.


Let’s visualize the two solutions side by side!

.. code:: ipython3

@@ -757,7 +687,7 @@ Let's visualize the two solutions side by side!



.. image:: tutorial_files/tutorial_40_0.png


As we can see the two are really similar! We can compute the :math:`l_2`
@@ -774,19 +704,19 @@ error quite easily as well:
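
A minimal sketch of such a relative :math:`l_2` error computation (the
helper below is hypothetical, not the tutorial's own cell):

.. code:: ipython3

    def relative_l2(pred, true):
        # relative L2 norm of the error on the field values
        return float(torch.linalg.norm(pred - true) / torch.linalg.norm(true))

    # e.g. relative_l2(net(input_data)[..., -1], input_data[..., -1])
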

.. parsed-literal::

    l2 error: 4.32%

|
|
||||||
More or less :math:`4\%` in :math:`l_2` error, which is really low
|
More or less :math:`4\%` in :math:`l_2` error, which is really low
|
||||||
considering the fact that we use just **one** convolutional layer and a
|
considering the fact that we use just **one** convolutional layer and a
|
||||||
simple feedforward to decrease the dimension. Let's see now some
|
simple feedforward to decrease the dimension. Let’s see now some
|
||||||
peculiarity of the filter.
|
peculiarity of the filter.
|
||||||
|
|
||||||
Filter for upsampling
|
Filter for upsampling
|
||||||
~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Suppose we have already the hidden dimension and we want to upsample on
|
Suppose we have already the hidden dimension and we want to upsample on
|
||||||
a differen grid with more points. Let's see how to do it:
|
a differen grid with more points. Let’s see how to do it:
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
|
|
||||||
@@ -820,11 +750,11 @@ a differen grid with more points. Let's see how to do it:
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_45_0.png
|
.. image:: tutorial_files/tutorial_45_0.png
|
||||||
|
|
||||||
|
|
||||||
As we can see we have a very good approximation of the original
|
As we can see we have a very good approximation of the original
|
||||||
function, even thought some noise is present. Let's calculate the error
|
function, even thought some noise is present. Let’s calculate the error
|
||||||
now:
|
now:
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
@@ -834,7 +764,7 @@ now:

.. parsed-literal::

    l2 error: 8.49%


Autoencoding at different resolution
@@ -844,7 +774,7 @@ In the previous example we already had the hidden dimension (of original
input) and we used it to upsample. Sometimes, however, we have a finer
mesh solution and we simply want to encode it. This can be done without
retraining! This procedure can be useful in case we have many points in
the mesh and just a smaller part of them is needed for training. Let’s
see the results of this:

.. code:: ipython3
@@ -883,18 +813,23 @@ see the results of this:



.. image:: tutorial_files/tutorial_49_0.png


.. parsed-literal::

    l2 error: 8.59%


What’s next?
------------

We have shown the basic usage of a convolutional filter. There are
additional extensions possible:

1. Train using Physics Informed strategies

2. Use the filter to build an unstructured convolutional autoencoder for
   reduced order modelling

3. Many more…
@@ -1,16 +1,15 @@
Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator
======================================================================

In this tutorial we are going to solve the Darcy flow problem in two
dimensions, presented in `Fourier Neural Operator for Parametric Partial
Differential Equation <https://openreview.net/pdf?id=c8P9NQVtmnO>`__.
First of all we import the modules needed for the tutorial. Importing
``scipy`` is needed for input output operations.

.. code:: ipython3

    # !pip install scipy # install scipy
    from scipy import io
    import torch
    from pina.model import FNO, FeedForward # let's import some models
@@ -21,13 +20,6 @@ First of all we import the modules needed for the tutorial. Importing
    from pina.problem import AbstractProblem
    import matplotlib.pyplot as plt


Data Generation
---------------

@@ -51,15 +43,15 @@ taken from the authors' original reference.
    # download the dataset
    data = io.loadmat("Data_Darcy.mat")

    # extract data (we use only 100 data for train)
    k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
    u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
    k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)
    u_test = torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)
    x = torch.tensor(data['x'], dtype=torch.float)[0]
    y = torch.tensor(data['y'], dtype=torch.float)[0]

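A quick shape check can help here (a sketch; the exact grid resolution
depends on the downloaded dataset):

.. code:: ipython3

    # expected: [100, s, s, 1] for the training tensors, with s the grid size
    print(k_train.shape, u_train.shape, k_test.shape, u_test.shape)
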
Let’s visualize some data.

.. code:: ipython3

@@ -73,7 +65,7 @@ Let's visualize some data



.. image:: tutorial_files/tutorial_6_0.png


We now create the neural operator class. It is a very simple class,
@@ -100,43 +92,24 @@ training using supervised learning.
.. code:: ipython3

    # make model
    model = FeedForward(input_dimensions=1, output_dimensions=1)

    # make solver
    solver = SupervisedSolver(problem=problem, model=model)

    # make the trainer and train
    trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
    trainer.train()


.. parsed-literal::

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs

@@ -147,12 +120,10 @@ training using supervised learning.
|
|||||||
|
|
||||||
.. parsed-literal::
|
.. parsed-literal::
|
||||||
|
|
||||||
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/_tensor.py:1386: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3614.)
|
|
||||||
ret = func(*args, **kwargs)
|
|
||||||
`Trainer.fit` stopped: `max_epochs=100` reached.
|
`Trainer.fit` stopped: `max_epochs=100` reached.
|
||||||
|
|
||||||
|
|
||||||
The final loss is pretty high... We can calculate the error by importing
|
The final loss is pretty high… We can calculate the error by importing
|
||||||
``LpLoss``.
|
``LpLoss``.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
@@ -172,8 +143,8 @@ The final loss is pretty high... We can calculate the error by importing
|
|||||||
|
|
||||||
.. parsed-literal::
|
.. parsed-literal::
|
||||||
|
|
||||||
Final error training 56.86%
|
Final error training 56.24%
|
||||||
Final error testing 56.82%
|
Final error testing 55.95%
|
||||||
|
|
||||||
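The error-computation cell itself is elided in this diff; a minimal sketch of how such numbers can be produced is below, assuming ``LpLoss`` accepts ``p`` and ``relative`` keyword arguments and that ``solver`` (callable on a batch of inputs) and the Darcy tensors are defined as above.

.. code:: python

    from pina.loss import LpLoss

    metric = LpLoss(p=2, relative=True)  # relative L2 error (assumed signature)

    err_train = metric(solver(k_train), u_train)
    err_test = metric(solver(k_test), u_test)
    print(f'Final error training {float(err_train) * 100:.2f}%')
    print(f'Final error testing {float(err_test) * 100:.2f}%')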
|
|
||||||
Solving the problem with a Fourier Neural Operator (FNO)
|
Solving the problem with a Fourier Neural Operator (FNO)
|
||||||
@@ -199,28 +170,17 @@ operator this approach is better suited, as we shall see.
|
|||||||
solver = SupervisedSolver(problem=problem, model=model)
|
solver = SupervisedSolver(problem=problem, model=model)
|
||||||
|
|
||||||
# make the trainer and train
|
# make the trainer and train
|
||||||
trainer = Trainer(solver=solver, max_epochs=20)
|
trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
trainer.train()
|
trainer.train()
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. parsed-literal::
|
.. parsed-literal::
|
||||||
|
|
||||||
GPU available: True (cuda), used: True
|
GPU available: False, used: False
|
||||||
TPU available: False, using: 0 TPU cores
|
TPU available: False, using: 0 TPU cores
|
||||||
IPU available: False, using: 0 IPUs
|
IPU available: False, using: 0 IPUs
|
||||||
HPU available: False, using: 0 HPUs
|
HPU available: False, using: 0 HPUs
|
||||||
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
|
|
||||||
|
|
||||||
| Name | Type | Params
|
|
||||||
----------------------------------------
|
|
||||||
0 | _loss | MSELoss | 0
|
|
||||||
1 | _neural_net | Network | 591 K
|
|
||||||
----------------------------------------
|
|
||||||
591 K Trainable params
|
|
||||||
0 Non-trainable params
|
|
||||||
591 K Total params
|
|
||||||
2.364 Total estimated model params size (MB)
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@@ -231,13 +191,13 @@ operator this approach is better suited, as we shall see.
|
|||||||
|
|
||||||
.. parsed-literal::
|
.. parsed-literal::
|
||||||
|
|
||||||
`Trainer.fit` stopped: `max_epochs=20` reached.
|
`Trainer.fit` stopped: `max_epochs=100` reached.
|
||||||
|
|
||||||
|
|
||||||
We can clearly see that with 1/3 of the total epochs the loss is lower.
|
We can clearly see that the final loss is lower. Let’s check the error on the test set.
|
||||||
Let's see in testing.. Notice that the number of parameters is way
|
Notice that the number of parameters is way higher than a
|
||||||
higher than a ``FeedForward`` network. We suggest to use GPU or TPU for
|
``FeedForward`` network. We suggest using a GPU or TPU for a speed-up in
|
||||||
a speed up in training.
|
training when many data samples are used.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
|
|
||||||
@@ -250,13 +210,13 @@ a speed up in training.
|
|||||||
|
|
||||||
.. parsed-literal::
|
.. parsed-literal::
|
||||||
|
|
||||||
Final error training 26.19%
|
Final error training 10.86%
|
||||||
Final error testing 25.89%
|
Final error testing 12.77%
|
||||||
|
|
||||||
|
|
||||||
As we can see the loss is way lower!
|
As we can see the loss is way lower!
|
||||||
|
|
||||||
What's next?
|
What’s next?
|
||||||
------------
|
------------
|
||||||
|
|
||||||
We have shown a very simple example of how to use the ``FNO`` for
|
We have shown a very simple example of how to use the ``FNO`` for
|
||||||
|
Before Width: | Height: | Size: 14 KiB After Width: | Height: | Size: 14 KiB |
@@ -1,8 +1,5 @@
|
|||||||
Tutorial 6: How to Use Geometries in PINA
|
Tutorial: Building custom geometries with PINA ``Location`` class
|
||||||
=========================================
|
=================================================================
|
||||||
|
|
||||||
Built-in Geometries
|
|
||||||
-------------------
|
|
||||||
|
|
||||||
In this tutorial we will show how to use geometries in PINA.
|
In this tutorial we will show how to use geometries in PINA.
|
||||||
Specifically, the tutorial will include how to create geometries and how
|
Specifically, the tutorial will include how to create geometries and how
|
||||||
@@ -12,7 +9,7 @@ to visualize them. The topics covered are:
|
|||||||
- Getting the Union and Difference of Geometries
|
- Getting the Union and Difference of Geometries
|
||||||
- Sampling points in the domain (and visualize them)
|
- Sampling points in the domain (and visualize them)
|
||||||
|
|
||||||
We import the relevant modules.
|
We import the relevant modules first.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
|
|
||||||
@@ -24,8 +21,11 @@ We import the relevant modules.
|
|||||||
ax.title.set_text(title)
|
ax.title.set_text(title)
|
||||||
ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)
|
ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)
|
||||||
|
|
||||||
|
Built-in Geometries
|
||||||
|
-------------------
|
||||||
|
|
||||||
We will create one Cartesian domain and two ellipsoids. For the sake of
|
We will create one Cartesian domain and two ellipsoids. For the sake of
|
||||||
simplicity, we show here the 2-dimensional, but it's trivial the
|
simplicity, we show here the 2-dimensional case, but the extension to
|
||||||
extension to 3D (and higher) cases. The geometries allows also the
|
3D (and higher) cases is trivial. The geometries also allow the
|
||||||
generation of samples belonging to the boundary. So, we will create one
|
generation of samples belonging to the boundary. So, we will create one
|
||||||
ellipsoid with the border and one without.
|
ellipsoid with the border and one without.
|
||||||
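A sketch of the three domains described above (the bounds are illustrative, and ``sample_surface`` is the assumed flag for border sampling):

.. code:: python

    from pina.geometry import CartesianDomain, EllipsoidDomain

    cartesian = CartesianDomain({'x': [0, 2], 'y': [0, 2]})
    ellipsoid_no_border = EllipsoidDomain({'x': [1, 3], 'y': [1, 3]})
    ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]},
                                       sample_surface=True)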
@@ -109,7 +109,7 @@ We are now ready to visualize the samples using matplotlib.
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_11_0.png
|
.. image:: tutorial_files/tutorial_10_0.png
|
||||||
|
|
||||||
|
|
||||||
We have now created, sampled, and visualized our first geometries! We
|
We have now created, sampled, and visualized our first geometries! We
|
||||||
@@ -151,7 +151,7 @@ Among the built-in shapes, we quickly show here the usage of
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_14_0.png
|
.. image:: tutorial_files/tutorial_13_0.png
|
||||||
|
|
||||||
|
|
||||||
Boolean Operations
|
Boolean Operations
|
||||||
@@ -161,7 +161,7 @@ To create complex shapes we can use the boolean operations, for example
|
|||||||
to merge two default geometries. We need to simply use the ``Union``
|
to merge two default geometries. We need to simply use the ``Union``
|
||||||
class: it takes a list of geometries and returns the union of them.
|
class: it takes a list of geometries and returns the union of them.
|
||||||
|
|
||||||
Let's create three unions. Firstly, it will be a union of ``cartesian``
|
Let’s create three unions. Firstly, it will be a union of ``cartesian``
|
||||||
and ``ellipsoid_no_border``. Next, it will be a union of
|
and ``ellipsoid_no_border``. Next, it will be a union of
|
||||||
``ellipsoid_no_border`` and ``ellipsoid_border``. Lastly, it will be a union
|
``ellipsoid_no_border`` and ``ellipsoid_border``. Lastly, it will be a union
|
||||||
of all three geometries.
|
of all three geometries.
|
||||||
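A sketch of the three unions, assuming the domain objects defined earlier in the tutorial:

.. code:: python

    from pina.geometry import Union

    cart_union = Union([cartesian, ellipsoid_no_border])
    ellipsoid_union = Union([ellipsoid_no_border, ellipsoid_border])
    all_union = Union([cartesian, ellipsoid_no_border, ellipsoid_border])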
@@ -195,7 +195,7 @@ with.
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_21_0.png
|
.. image:: tutorial_files/tutorial_20_0.png
|
||||||
|
|
||||||
|
|
||||||
Now, we will find the differences of the geometries. We will find the
|
Now, we will find the differences of the geometries. We will find the
|
||||||
@@ -211,7 +211,7 @@ difference of ``cartesian`` and ``ellipsoid_no_border``.
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_23_0.png
|
.. image:: tutorial_files/tutorial_22_0.png
|
||||||
|
|
||||||
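The cell producing the plot above is elided in this diff; a sketch of the difference operation, assuming a ``Difference`` class symmetric to ``Union``:

.. code:: python

    from pina.geometry import Difference

    cart_minus_ellipsoid = Difference([cartesian, ellipsoid_no_border])
    pts = cart_minus_ellipsoid.sample(n=1000, mode='random')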
|
|
||||||
Create Custom Location
|
Create Custom Location
|
||||||
@@ -222,7 +222,7 @@ try to make is a heart defined by the function
|
|||||||
|
|
||||||
.. math:: (x^2+y^2-1)^3-x^2y^3 \le 0
|
.. math:: (x^2+y^2-1)^3-x^2y^3 \le 0
|
||||||
|
|
||||||
Let's start by importing what we will need to create our own geometry
|
Let’s start by importing what we will need to create our own geometry
|
||||||
based on this equation.
|
based on this equation.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
@@ -244,8 +244,8 @@ Next, we will create the ``Heart(Location)`` class and initialize it.
|
|||||||
|
|
||||||
|
|
||||||
Because the ``Location`` class we are inheriting from requires both a
|
Because the ``Location`` class we are inheriting from requires both a
|
||||||
sample method and ``is_inside`` method, we will create them and just add
|
``sample`` method and an ``is_inside`` method, we will create them and just
|
||||||
in "pass" for the moment.
|
add in “pass” for the moment.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
|
|
||||||
@@ -262,7 +262,7 @@ in "pass" for the moment.
|
|||||||
pass
|
pass
|
||||||
|
|
||||||
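The full cell is elided in this diff; a hedged reconstruction of the skeleton (the exact method signatures are assumptions):

.. code:: python

    from pina.geometry import Location

    class Heart(Location):
        """Implementation of the Heart domain."""

        def __init__(self):
            super().__init__()

        def is_inside(self, point, check_border=False):
            pass

        def sample(self, n, mode='random', variables='all'):
            pass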
Now we have the skeleton for our ``Heart`` class. The ``is_inside``
|
Now we have the skeleton for our ``Heart`` class. The ``is_inside``
|
||||||
method is where most of the work is done so let's fill it out.
|
method is where most of the work is done so let’s fill it out.
|
||||||
|
|
||||||
.. code:: ipython3
|
.. code:: ipython3
|
||||||
|
|
||||||
@@ -304,5 +304,5 @@ To sample from the Heart geometry we simply run:
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. image:: output_37_0.png
|
.. image:: tutorial_files/tutorial_36_0.png
|
||||||
|
|
||||||
|
Before Width: | Height: | Size: 153 KiB After Width: | Height: | Size: 153 KiB |
Before Width: | Height: | Size: 156 KiB After Width: | Height: | Size: 156 KiB |
Before Width: | Height: | Size: 181 KiB After Width: | Height: | Size: 181 KiB |
Before Width: | Height: | Size: 137 KiB After Width: | Height: | Size: 137 KiB |
Before Width: | Height: | Size: 98 KiB After Width: | Height: | Size: 98 KiB |
@@ -46,6 +46,7 @@ extensions = [
|
|||||||
'sphinx.ext.viewcode',
|
'sphinx.ext.viewcode',
|
||||||
#'sphinx.ext.ifconfig',
|
#'sphinx.ext.ifconfig',
|
||||||
'sphinx.ext.mathjax',
|
'sphinx.ext.mathjax',
|
||||||
|
'sphinx.ext.autosectionlabel',
|
||||||
]
|
]
|
||||||
#autosummary_generate = True
|
#autosummary_generate = True
|
||||||
|
|
||||||
|
|||||||
@@ -8,13 +8,23 @@ Welcome to PINA's documentation!
|
|||||||
|
|
|
|
||||||
|
|
||||||
|
|
||||||
PINA is a Python package providing an easy interface to deal with
|
Physics Informed Neural network for Advanced modeling (**PINA**) is
|
||||||
physics-informed neural networks (PINN) for the approximation of (differential,
|
an open-source Python library providing an intuitive interface for
|
||||||
nonlinear, ...) functions. Based on Pytorch, PINA offers a simple and intuitive
|
solving differential equations using PINNs, Neural Operators (NOs), or both together.
|
||||||
way to formalize a specific problem and solve it using PINN. The approximated
|
Based on `PyTorch <https://pytorch.org/>`_ and `PyTorchLightning <https://lightning.ai/docs/pytorch/stable/>`_,
|
||||||
solution of a differential equation can be implemented using PINA in a few lines
|
PINA offers a simple and intuitive way to formalize a specific (differential) problem
|
||||||
of code thanks to the intuitive and user-friendly interface.
|
and solve it using neural networks. The approximated solution of a differential equation
|
||||||
|
can be implemented using PINA in a few lines of code thanks to the intuitive and user-friendly interface.
|
||||||
|
|
||||||
|
Using `PyTorchLightning <https://lightning.ai/docs/pytorch/stable/>`_ as backend offers
|
||||||
|
professional AI researchers and machine learning engineers the possibility of using advanced
|
||||||
|
training strategies provided by the library, such as multiple device training, modern model compression techniques,
|
||||||
|
gradient accumulation, and so on. In addition, it provides the possibility to add arbitrary
|
||||||
|
self-contained routines (callbacks) to the training for easy extensions without the need to touch the
|
||||||
|
underlying code.
|
||||||
|
|
||||||
|
The high-level structure of the package is depicted in the API diagram below. The pipeline to solve differential equations
|
||||||
|
with PINA follows just five steps: problem definition, model selection, data generation, solver selection, and training.
|
||||||
|
|
||||||
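A minimal sketch of these five steps (``SimpleODE`` stands for a problem class defined as in the tutorials; the keyword names follow the PINA API used throughout this PR):

.. code:: python

    from pina import PINN, Trainer
    from pina.model import FeedForward

    problem = SimpleODE()                                            # 1. problem definition
    model = FeedForward(input_dimensions=1, output_dimensions=1)     # 2. model selection
    problem.discretise_domain(n=20, mode='random', variables=['x'])  # 3. data generation
    solver = PINN(problem, model)                                    # 4. solver selection
    Trainer(solver=solver, max_epochs=100).train()                   # 5. training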
.. figure:: index_files/API_color.png
|
.. figure:: index_files/API_color.png
|
||||||
:alt: PINA application program interface
|
:alt: PINA application program interface
|
||||||
@@ -26,22 +36,30 @@ of code thanks to the intuitive and user-friendly interface.
|
|||||||
Physics-informed neural network
|
Physics-informed neural network
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
PINN is a novel approach that involves neural networks to solve supervised
|
`PINN <https://www.sciencedirect.com/science/article/abs/pii/S0021999118307125>`_ is a novel approach that
|
||||||
learning tasks while respecting any given law of physics described by general
|
uses neural networks to solve differential equations in an unsupervised manner, while respecting
|
||||||
nonlinear differential equations. Proposed in "Physics-informed neural
|
any given law of physics described by general differential equations. Proposed in "*Physics-informed neural
|
||||||
networks: A deep learning framework for solving forward and inverse problems
|
networks: A deep learning framework for solving forward and inverse problems
|
||||||
involving nonlinear partial differential equations", such framework aims to
|
involving nonlinear partial differential equations*", such framework aims to
|
||||||
solve problems in a continuous and nonlinear settings. :py:class:`pina.pinn.PINN`
|
solve problems in continuous and nonlinear settings.
|
||||||
|
|
||||||
|
Neural operator learning
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
`Neural Operators <https://www.jmlr.org/papers/v24/21-1524.html>`_ are a novel approach involving neural networks
|
||||||
|
to learn differential operators using supervised learning strategies. By learning the differential operator, the
|
||||||
|
neural network is able to generalize across different instances of the differential equations (e.g. different forcing
|
||||||
|
terms), without the need for re-training.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
.. toctree::
|
.. toctree::
|
||||||
:maxdepth: 2
|
:maxdepth: 2
|
||||||
:caption: Package Documentation:
|
:caption: Package Documentation:
|
||||||
|
|
||||||
Installation <_rst/installation>
|
API <_rst/_code>
|
||||||
API <_rst/code>
|
Contributing <_rst/_contributing>
|
||||||
Contributing <_rst/contributing>
|
License <_LICENSE.rst>
|
||||||
License <LICENSE.rst>
|
|
||||||
|
|
||||||
.. the following is demo content intended to showcase some of the features you can invoke in reStructuredText
|
.. the following is demo content intended to showcase some of the features you can invoke in reStructuredText
|
||||||
.. this can be safely deleted or commented out
|
.. this can be safely deleted or commented out
|
||||||
@@ -50,20 +68,7 @@ solve problems in a continuous and nonlinear settings. :py:class:`pina.pinn.PINN
|
|||||||
.. toctree::
|
.. toctree::
|
||||||
:maxdepth: 1
|
:maxdepth: 1
|
||||||
:numbered:
|
:numbered:
|
||||||
:caption: Tutorials:
|
:caption: Getting Started:
|
||||||
|
|
||||||
Getting start with PINA <_rst/tutorial1/tutorial.rst>
|
Installation <_rst/_installation>
|
||||||
Poisson problem <_rst/tutorial2/tutorial.rst>
|
Tutorials <_rst/_tutorials>
|
||||||
Wave equation <_rst/tutorial3/tutorial.rst>
|
|
||||||
Continuous Convolutional Filter <_rst/tutorial4/tutorial.rst>
|
|
||||||
Fourier Neural Operator <_rst/tutorial5/tutorial.rst>
|
|
||||||
Geometry Usage <_rst/tutorial6/tutorial.rst>
|
|
||||||
|
|
||||||
.. ........................................................................................
|
|
||||||
|
|
||||||
.. toctree::
|
|
||||||
:maxdepth: 2
|
|
||||||
:numbered:
|
|
||||||
:caption: Download
|
|
||||||
|
|
||||||
.. ........................................................................................
|
|
||||||
|
|||||||
|
Before Width: | Height: | Size: 95 KiB After Width: | Height: | Size: 121 KiB |
@@ -6,7 +6,7 @@ from torch.nn.modules.loss import _Loss
|
|||||||
import torch
|
import torch
|
||||||
from .utils import check_consistency
|
from .utils import check_consistency
|
||||||
|
|
||||||
__all__ = ['LpLoss']
|
__all__ = ['LossInterface', 'LpLoss', 'PowerLoss']
|
||||||
|
|
||||||
class LossInterface(_Loss, metaclass=ABCMeta):
|
class LossInterface(_Loss, metaclass=ABCMeta):
|
||||||
"""
|
"""
|
||||||
|
|||||||
@@ -11,7 +11,7 @@ class Plotter:
|
|||||||
Implementation of a plotter class, for easy visualizations.
|
Implementation of a plotter class, for easy visualizations.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def plot_samples(self, solver, variables=None):
|
def plot_samples(self, problem, variables=None):
|
||||||
"""
|
"""
|
||||||
Plot the training grid samples.
|
Plot the training grid samples.
|
||||||
|
|
||||||
@@ -30,11 +30,11 @@ class Plotter:
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
if variables is None:
|
if variables is None:
|
||||||
variables = solver.problem.domain.variables
|
variables = problem.domain.variables
|
||||||
elif variables == 'spatial':
|
elif variables == 'spatial':
|
||||||
variables = solver.problem.spatial_domain.variables
|
variables = problem.spatial_domain.variables
|
||||||
elif variables == 'temporal':
|
elif variables == 'temporal':
|
||||||
variables = solver.problem.temporal_domain.variables
|
variables = problem.temporal_domain.variables
|
||||||
|
|
||||||
if len(variables) not in [1, 2, 3]:
|
if len(variables) not in [1, 2, 3]:
|
||||||
raise ValueError
|
raise ValueError
|
||||||
@@ -42,11 +42,11 @@ class Plotter:
|
|||||||
fig = plt.figure()
|
fig = plt.figure()
|
||||||
proj = '3d' if len(variables) == 3 else None
|
proj = '3d' if len(variables) == 3 else None
|
||||||
ax = fig.add_subplot(projection=proj)
|
ax = fig.add_subplot(projection=proj)
|
||||||
for location in solver.problem.input_pts:
|
for location in problem.input_pts:
|
||||||
coords = solver.problem.input_pts[location].extract(
|
coords = problem.input_pts[location].extract(
|
||||||
variables).T.detach()
|
variables).T.detach()
|
||||||
if coords.shape[0] == 1: # 1D samples
|
if coords.shape[0] == 1: # 1D samples
|
||||||
ax.plot(coords[0], torch.zeros(coords[0].shape), '.',
|
ax.plot(coords.flatten(), torch.zeros(coords.flatten().shape), '.',
|
||||||
label=location)
|
label=location)
|
||||||
else:
|
else:
|
||||||
ax.plot(*coords, '.', label=location)
|
ax.plot(*coords, '.', label=location)
|
||||||
@@ -80,14 +80,15 @@ class Plotter:
|
|||||||
"""
|
"""
|
||||||
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))
|
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))
|
||||||
|
|
||||||
ax.plot(pts, pred.detach(), **kwargs)
|
ax.plot(pts, pred.detach(), label='neural net solution', **kwargs)
|
||||||
|
|
||||||
if truth_solution:
|
if truth_solution:
|
||||||
truth_output = truth_solution(pts).float()
|
truth_output = truth_solution(pts).float()
|
||||||
ax.plot(pts, truth_output.detach(), **kwargs)
|
ax.plot(pts, truth_output.detach(), label='true solution', **kwargs)
|
||||||
|
|
||||||
plt.xlabel(pts.labels[0])
|
plt.xlabel(pts.labels[0])
|
||||||
plt.ylabel(pred.labels[0])
|
plt.ylabel(pred.labels[0])
|
||||||
|
plt.legend()
|
||||||
plt.show()
|
plt.show()
|
||||||
|
|
||||||
def _2d_plot(self, pts, pred, v, res, method, truth_solution=None,
|
def _2d_plot(self, pts, pred, v, res, method, truth_solution=None,
|
||||||
|
|||||||
|
Before Width: | Height: | Size: 95 KiB After Width: | Height: | Size: 121 KiB |
29
tutorials/README.md
vendored
@@ -1,14 +1,27 @@
|
|||||||
# Tutorials
|
# PINA Tutorials
|
||||||
|
|
||||||
In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**. Please read the following table for details about the tutorials. The HTML version of all the tutorials is also available within the [documentation](http://mathlab.github.io/PINA/).
|
In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**. Please read the following table for details about the tutorials. The HTML version of all the tutorials is also available within the [documentation](http://mathlab.github.io/PINA/).
|
||||||
|
|
||||||
|
## Getting started with PINA
|
||||||
|
|
||||||
| Name | Description | Type of Problem |
|
| Description | Tutorial |
|
||||||
|-------|---------------|-------------------|
|
|---------------|-----------|
|
||||||
| Tutorial1 [[.ipynb](tutorial1/tutorial.ipynb), [.py](tutorial1/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html)]| Introduction to PINA features | `SpatialProblem` |
|
Introduction to PINA for Physics Informed Neural Networks training|[[.ipynb](tutorial1/tutorial.ipynb), [.py](tutorial1/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html)]|
|
||||||
| Tutorial2 [[.ipynb](tutorial2/tutorial.ipynb), [.py](tutorial2/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial2/tutorial.html)]| Poisson problem on regular domain using extra features | `SpatialProblem` |
|
Building custom geometries with PINA `Location` class|[[.ipynb](tutorial6/tutorial.ipynb), [.py](tutorial6/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial6/tutorial.html)]|
|
||||||
| Tutorial3 [[.ipynb](tutorial3/tutorial.ipynb), [.py](tutorial3/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial3/tutorial.html)]| Wave problem on regular domain using custom pytorch networks. | `SpatialProblem`, `TimeDependentProblem` |
|
|
||||||
| Tutorial4 [[.ipynb](tutorial4/tutorial.ipynb), [.py](tutorial4/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial4/tutorial.html)]| Continuous Convolutional Filter usage. | `None` |
|
|
||||||
| Tutorial5 [[.ipynb](tutorial5/tutorial.ipynb), [.py](tutorial5/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial5/tutorial.html)]| Fourier Neural Operator. | `AbstractProblem` |
|
|
||||||
|
|
||||||
|
|
||||||
|
## Physics Informed Neural Networks
|
||||||
|
| Description | Tutorial |
|
||||||
|
|---------------|-----------|
|
||||||
|
Two dimensional Poisson problem using Extra Features Learning |[[.ipynb](tutorial2/tutorial.ipynb), [.py](tutorial2/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial2/tutorial.html)]|
|
||||||
|
Two dimensional Wave problem with hard constraint |[[.ipynb](tutorial3/tutorial.ipynb), [.py](tutorial3/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial3/tutorial.html)]|
|
||||||
|
|
||||||
|
## Neural Operator Learning
|
||||||
|
| Description | Tutorial |
|
||||||
|
|---------------|-----------|
|
||||||
|
Two dimensional Darcy flow using the Fourier Neural Operator |[[.ipynb](tutorial5/tutorial.ipynb), [.py](tutorial5/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial5/tutorial.html)]|
|
||||||
|
|
||||||
|
## Supervised Learning
|
||||||
|
| Description | Tutorial |
|
||||||
|
|---------------|-----------|
|
||||||
|
Unstructured convolutional autoencoder via continuous convolution |[[.ipynb](tutorial4/tutorial.ipynb), [.py](tutorial4/tutorial.py), [.html](http://mathlab.github.io/PINA/_rst/tutorial4/tutorial.html)]|
|
||||||
|
|||||||
463
tutorials/tutorial1/tutorial.ipynb
vendored
199
tutorials/tutorial1/tutorial.py
vendored
@@ -1,22 +1,25 @@
|
|||||||
#!/usr/bin/env python
|
#!/usr/bin/env python
|
||||||
# coding: utf-8
|
# coding: utf-8
|
||||||
|
|
||||||
# # Tutorial 1: Physics Informed Neural Networks on PINA
|
# # Tutorial: Physics Informed Neural Networks on PINA
|
||||||
|
|
||||||
# In this tutorial, we will demonstrate a typical use case of PINA on a toy problem. Specifically, the tutorial aims to introduce the following topics:
|
# In this tutorial, we will demonstrate a typical use case of **PINA** on a toy problem, following the standard API procedure.
|
||||||
#
|
#
|
||||||
# * Defining a PINA Problem,
|
# <p align="center">
|
||||||
# * Building a `pinn` object,
|
# <img src="../../readme/API_color.png" alt="PINA API" width="400"/>
|
||||||
# * Sampling points in a domain
|
# </p>
|
||||||
#
|
#
|
||||||
# These are the three main steps needed **before** training a Physics Informed Neural Network (PINN). We will show each step in detail, and at the end, we will solve the problem.
|
# Specifically, the tutorial aims to introduce the following topics:
|
||||||
|
#
|
||||||
|
# * Explaining how to build a **PINA** Problem,
|
||||||
|
# * Showing how to generate data for `PINN` training
|
||||||
|
#
|
||||||
|
# These are the two main steps needed **before** starting the modelling optimization (choose model and solver, and train). We will show each step in detail, and at the end, we will solve a simple Ordinary Differential Equation (ODE) problem using the `PINN` solver.
|
||||||
|
|
||||||
# ## PINA Problem
|
# ## Build a PINA problem
|
||||||
|
|
||||||
# ### Initialize the `Problem` class
|
# Problem definition in the **PINA** framework is done by building a python `class`, which inherits from one or more problem classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, ...) depending on the nature of the problem. Below is an example:
|
||||||
|
# ### Simple Ordinary Differential Equation
|
||||||
# Problem definition in the PINA framework is done by building a python `class`, which inherits from one or more problem classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`) depending on the nature of the problem. Below is an example:
|
|
||||||
# #### Simple Ordinary Differential Equation
|
|
||||||
# Consider the following:
|
# Consider the following:
|
||||||
#
|
#
|
||||||
# $$
|
# $$
|
||||||
@@ -42,8 +45,8 @@
|
|||||||
# # other stuff ...
|
# # other stuff ...
|
||||||
# ```
|
# ```
|
||||||
#
|
#
|
||||||
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
|
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). This is done because in **PINA** the `torch.Tensor`s are labelled, allowing the user maximal flexibility for the manipulation of the tensor. The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
|
||||||
|
#
|
||||||
# What about if our equation is also time dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
|
# What about if our equation is also time dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
|
||||||
#
|
#
|
||||||
|
|
||||||
@@ -64,22 +67,24 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
|
|||||||
|
|
||||||
# where we have included the `temporal_domain` variable, indicating the time domain wanted for the solution.
|
# where we have included the `temporal_domain` variable, indicating the time domain wanted for the solution.
|
||||||
#
|
#
|
||||||
# In summary, using PINA, we can initialize a problem with a class which inherits from three base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, depending on the type of problem we are considering. For reference:
|
# In summary, using **PINA**, we can initialize a problem with a class which inherits from different base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, and so on, depending on the type of problem we are considering. Here are some examples (more in the official documentation):
|
||||||
# * `SpatialProblem` $\rightarrow$ a differential equation with spatial variable(s)
|
# * `SpatialProblem` $\rightarrow$ a differential equation with spatial variable(s)
|
||||||
# * `TimeDependentProblem` $\rightarrow$ a time-dependent differential equation
|
# * `TimeDependentProblem` $\rightarrow$ a time-dependent differential equation
|
||||||
# * `ParametricProblem` $\rightarrow$ a parametrized differential equation
|
# * `ParametricProblem` $\rightarrow$ a parametrized differential equation
|
||||||
|
# * `AbstractProblem` $\rightarrow$ any **PINA** problem inherits from here
|
||||||
|
|
||||||
# ### Write the `Problem` class
|
# ### Write the problem class
|
||||||
#
|
#
|
||||||
# Once the `Problem` class is initialized, we need to represent the differential equation in PINA. In order to do this, we need to load the PINA operators from `pina.operators` module. Again, we'll consider Equation (1) and represent it in PINA:
|
# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from `pina.operators` module. Again, we'll consider Equation (1) and represent it in **PINA**:
|
||||||
|
|
||||||
# In[2]:
|
# In[2]:
|
||||||
|
|
||||||
|
|
||||||
from pina.problem import SpatialProblem
|
from pina.problem import SpatialProblem
|
||||||
from pina.operators import grad
|
from pina.operators import grad
|
||||||
from pina import Condition, CartesianDomain
|
from pina import Condition
|
||||||
from pina.equation.equation import Equation
|
from pina.geometry import CartesianDomain
|
||||||
|
from pina.equation import Equation, FixedValue
|
||||||
|
|
||||||
import torch
|
import torch
|
||||||
|
|
||||||
@@ -101,22 +106,10 @@ class SimpleODE(SpatialProblem):
|
|||||||
# calculate the residual and return it
|
# calculate the residual and return it
|
||||||
return u_x - u
|
return u_x - u
|
||||||
|
|
||||||
# defining the initial condition
|
|
||||||
def initial_condition(input_, output_):
|
|
||||||
|
|
||||||
# setting the initial value
|
|
||||||
value = 1.0
|
|
||||||
|
|
||||||
# extracting the u input variable
|
|
||||||
u = output_.extract(['u'])
|
|
||||||
|
|
||||||
# calculate the residual and return it
|
|
||||||
return u - value
|
|
||||||
|
|
||||||
# conditions to hold
|
# conditions to hold
|
||||||
conditions = {
|
conditions = {
|
||||||
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
|
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)), # We fix initial condition to value 1
|
||||||
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
|
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)), # We wrap the python equation using Equation
|
||||||
}
|
}
|
||||||
|
|
||||||
# sampled points (see below)
|
# sampled points (see below)
|
||||||
@@ -126,26 +119,75 @@ class SimpleODE(SpatialProblem):
|
|||||||
def truth_solution(self, pts):
|
def truth_solution(self, pts):
|
||||||
return torch.exp(pts.extract(['x']))
|
return torch.exp(pts.extract(['x']))
|
||||||
|
|
||||||
|
problem = SimpleODE()
|
||||||
|
|
||||||
# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the difference between subtracting the variable `u` from its gradient (the residual), which we hope to minimize to 0. This is done for all conditions (`ode_equation`, `initial_condition`).
|
|
||||||
|
# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the residual, obtained by subtracting the variable `u` from its gradient, which we hope to minimize to 0. This is done for all conditions. Notice that we do not pass a `python` function directly, but an `Equation` object initialized with the `python` function. This is done so that all the computations and internal checks are done inside **PINA**.
|
||||||
#
|
#
|
||||||
# Once we have defined the function, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass the location points and the function we want minimized on those points (other possibilities are allowed, see the documentation for reference) as parameters.
|
# Once we have defined the function, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass the location points and the equation we want minimized on those points (other possibilities are allowed, see the documentation for reference).
|
||||||
#
|
#
|
||||||
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the `PINN` class, but is not mandatory for problem definition.
|
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the `PINN` class, but is not mandatory for problem definition.
|
||||||
#
|
#
|
||||||
|
|
||||||
# ## Build the `PINN` object
|
# ## Generate data
|
||||||
|
#
|
||||||
# The basic requirements for building a `PINN` model are a `Problem` and a model. We have just covered the `Problem` definition. For the model parameter, one can use either the default models provided in PINA or a custom model. We will not go into the details of model definition (see Tutorial2 and Tutorial3 for more details on model definition).
|
# Data for training can come in the form of direct numerical simulation results, or points in the domains. In case we do unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
|
||||||
|
|
||||||
# In[3]:
|
# In[3]:
|
||||||
|
|
||||||
|
|
||||||
from pina.model import FeedForward
|
# sampling 20 points in [0, 1] through discretization in all locations
|
||||||
from pina import PINN
|
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')
|
||||||
|
|
||||||
|
# sampling 20 points in (0, 1) through Latin hypercube sampling in D, and 1 point in x0
|
||||||
|
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
|
||||||
|
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])
|
||||||
|
|
||||||
|
# sampling 20 points in (0, 1) randomly
|
||||||
|
problem.discretise_domain(n=20, mode='random', variables=['x'])
|
||||||
|
|
||||||
|
|
||||||
|
# We are going to use Latin hypercube points for sampling. We need to sample in all the condition domains. In our case we sample in `D` and `x0`.
|
||||||
|
|
||||||
|
# In[4]:
|
||||||
|
|
||||||
|
|
||||||
|
# sampling for training
|
||||||
|
problem.discretise_domain(1, 'random', locations=['x0'])
|
||||||
|
problem.discretise_domain(20, 'lh', locations=['D'])
|
||||||
|
|
||||||
|
|
||||||
|
# The points are saved in a python `dict`, and can be accessed by calling the attribute `input_pts` of the problem
|
||||||
|
|
||||||
|
# In[5]:
|
||||||
|
|
||||||
|
|
||||||
|
print('Input points:', problem.input_pts)
|
||||||
|
print('Input points labels:', problem.input_pts['D'].labels)
|
||||||
|
|
||||||
|
|
||||||
|
# To visualize the sampled points we can use the `.plot_samples` method of the `Plotter` class
|
||||||
|
|
||||||
|
# In[6]:
|
||||||
|
|
||||||
|
|
||||||
|
from pina import Plotter
|
||||||
|
|
||||||
|
pl = Plotter()
|
||||||
|
pl.plot_samples(problem=problem)
|
||||||
|
|
||||||
|
|
||||||
|
# ## Perform a small training
|
||||||
|
|
||||||
|
# Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solvers`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are going to be tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics by yourself without a logger, use `pina.callbacks.MetricTracker`.
|
||||||
|
|
||||||
|
# In[7]:
|
||||||
|
|
||||||
|
|
||||||
|
from pina import PINN, Trainer
|
||||||
|
from pina.model import FeedForward
|
||||||
|
from pina.callbacks import MetricTracker
|
||||||
|
|
||||||
# initialize the problem
|
|
||||||
problem = SimpleODE()
|
|
||||||
|
|
||||||
# build the model
|
# build the model
|
||||||
model = FeedForward(
|
model = FeedForward(
|
||||||
@@ -158,38 +200,49 @@ model = FeedForward(
|
|||||||
# create the PINN object
|
# create the PINN object
|
||||||
pinn = PINN(problem, model)
|
pinn = PINN(problem, model)
|
||||||
|
|
||||||
|
# create the trainer
|
||||||
|
trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
|
|
||||||
# Creating the `PINN` object is fairly simple. Different optional parameters include: optimizer, batch size, ... (see [documentation](https://mathlab.github.io/PINA/) for reference).
|
# train
|
||||||
|
|
||||||
# ## Sample points in the domain
|
|
||||||
|
|
||||||
# Once the `PINN` object is created, we need to generate the points for starting the optimization. To do so, we use the `sample` method of the `CartesianDomain` class. Below are three examples of sampling methods on the $[0,1]$ domain:
|
|
||||||
|
|
||||||
# In[4]:
|
|
||||||
|
|
||||||
|
|
||||||
# sampling 20 points in [0, 1] through discretization
|
|
||||||
pinn.problem.discretise_domain(n=20, mode='grid', variables=['x'])
|
|
||||||
|
|
||||||
# sampling 20 points in (0, 1) through latin hypercube samping
|
|
||||||
pinn.problem.discretise_domain(n=20, mode='latin', variables=['x'])
|
|
||||||
|
|
||||||
# sampling 20 points in (0, 1) randomly
|
|
||||||
pinn.problem.discretise_domain(n=20, mode='random', variables=['x'])
|
|
||||||
|
|
||||||
|
|
||||||
# ### Very simple training and plotting
|
|
||||||
#
|
|
||||||
# Once we have defined the PINA model, created a network, and sampled points in the domain, we have everything necessary for training a PINN. To do so, we make use of the `Trainer` class.
|
|
||||||
|
|
||||||
# In[5]:
|
|
||||||
|
|
||||||
|
|
||||||
from pina import Trainer
|
|
||||||
|
|
||||||
# initialize trainer
|
|
||||||
trainer = Trainer(pinn)
|
|
||||||
|
|
||||||
# train the model
|
|
||||||
trainer.train()
|
trainer.train()
|
||||||
|
|
||||||
|
|
||||||
|
# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed by `trainer.logged_metrics`
|
||||||
|
|
||||||
|
# In[8]:
|
||||||
|
|
||||||
|
|
||||||
|
# inspecting final loss
|
||||||
|
trainer.logged_metrics
|
||||||
|
|
||||||
|
|
||||||
|
# By using the `Plotter` class from **PINA** we can also do some qualitative plots of the solution.
|
||||||
|
|
||||||
|
# In[9]:
|
||||||
|
|
||||||
|
|
||||||
|
# plotting the solution
|
||||||
|
pl.plot(trainer=trainer)
|
||||||
|
|
||||||
|
|
||||||
|
# The solution overlaps the actual one, and they are nearly indistinguishable. We can also easily plot the loss:
|
||||||
|
|
||||||
|
# In[10]:
|
||||||
|
|
||||||
|
|
||||||
|
pl.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)
|
||||||
|
|
||||||
|
|
||||||
|
# As we can see, the loss has not reached a minimum, suggesting that we could train for longer.
|
||||||
|
|
||||||
|
# ## What's next?
|
||||||
|
#
|
||||||
|
# Nice, you have completed the introductory tutorial of **PINA**! There are multiple directions you can go now:
|
||||||
|
#
|
||||||
|
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
|
||||||
|
#
|
||||||
|
# 2. Train the network using other types of models (see `pina.model`)
|
||||||
|
#
|
||||||
|
# 3. Train on GPU and benchmark the speed
|
||||||
|
#
|
||||||
|
# 4. Many more...
|
||||||
|
|||||||
218
tutorials/tutorial2/tutorial.ipynb
vendored
79
tutorials/tutorial2/tutorial.py
vendored
@@ -1,21 +1,10 @@
|
|||||||
#!/usr/bin/env python
|
#!/usr/bin/env python
|
||||||
# coding: utf-8
|
# coding: utf-8
|
||||||
|
|
||||||
# # Tutorial 2: resolution of Poisson problem and usage of extra-features
|
# # Tutorial: Two dimensional Poisson problem using Extra Features Learning
|
||||||
|
#
|
||||||
# ### The problem definition
|
# This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs) a 2D Poisson problem with Dirichlet boundary conditions. We will train with standard PINN training, and with extra features. For more insights on extra-feature learning please read [*An extended physics informed neural network for preliminary analysis of parametric optimal control problems*](https://www.sciencedirect.com/science/article/abs/pii/S0898122123002018).
|
||||||
|
|
||||||
# This tutorial presents how to solve with Physics-Informed Neural Networks a 2D Poisson problem with Dirichlet boundary conditions. Using extrafeatures.
|
|
||||||
#
|
#
|
||||||
# The problem is written as:
|
|
||||||
# \begin{equation}
|
|
||||||
# \begin{cases}
|
|
||||||
# \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
|
|
||||||
# u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
|
|
||||||
# \end{cases}
|
|
||||||
# \end{equation}
|
|
||||||
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square.
|
|
||||||
|
|
||||||
# First of all, some useful imports.
|
# First of all, some useful imports.
|
||||||
|
|
||||||
# In[1]:
|
# In[1]:
|
||||||
@@ -36,7 +25,18 @@ from pina import Condition, LabelTensor
|
|||||||
from pina.callbacks import MetricTracker
|
from pina.callbacks import MetricTracker
|
||||||
|
|
||||||
|
|
||||||
# Now, the Poisson problem is written in PINA code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. *truth_solution*
|
# ## The problem definition
|
||||||
|
|
||||||
|
# The two-dimensional Poisson problem is mathematically written as:
|
||||||
|
# \begin{equation}
|
||||||
|
# \begin{cases}
|
||||||
|
# \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
|
||||||
|
# u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
|
||||||
|
# \end{cases}
|
||||||
|
# \end{equation}
|
||||||
|
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square.
|
||||||
|
#
|
||||||
|
# The Poisson problem is written in **PINA** code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. The *truth_solution*
|
||||||
# is the exact solution which will be compared with the predicted one.
|
# is the exact solution which will be compared with the predicted one.
|
||||||
|
|
||||||
# In[2]:
|
# In[2]:
|
||||||
@@ -52,6 +52,7 @@ class Poisson(SpatialProblem):
|
|||||||
laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
|
laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
|
||||||
return laplacian_u - force_term
|
return laplacian_u - force_term
|
||||||
|
|
||||||
|
# here we write the problem conditions
|
||||||
conditions = {
|
conditions = {
|
||||||
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)),
|
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)),
|
||||||
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)),
|
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)),
|
||||||
@@ -75,11 +76,11 @@ problem.discretise_domain(25, 'grid', locations=['D'])
|
|||||||
problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'])
|
problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'])
|
||||||
|
|
||||||
|
|
||||||
# ### The problem solution
|
# ## Solving the problem with standard PINNs
|
||||||
|
|
||||||
# After the problem, the feed-forward neural network is defined through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unknown field of the Poisson problem. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.
|
# After the problem, the feed-forward neural network is defined through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unknown field of the Poisson problem. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.
|
||||||
#
|
#
|
||||||
# In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006. These parameters can be modified as desired.
|
# In this tutorial, the neural network is composed of two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-8}$. These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training.
|
||||||
|
|
||||||
# In[3]:
|
# In[3]:
|
||||||
|
|
||||||
@@ -92,7 +93,7 @@ model = FeedForward(
|
|||||||
input_dimensions=len(problem.input_variables)
|
input_dimensions=len(problem.input_variables)
|
||||||
)
|
)
|
||||||
pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
||||||
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()])
|
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
|
|
||||||
# train
|
# train
|
||||||
trainer.train()
|
trainer.train()
|
||||||
@@ -108,7 +109,7 @@ plotter = Plotter()
|
|||||||
plotter.plot(trainer)
|
plotter.plot(trainer)
|
||||||
|
|
||||||
|
|
||||||
# ### The problem solution with extra-features
|
# ## Solving the problem with extra-features PINNs
|
||||||
|
|
||||||
# Now, the same problem is solved in a different way.
|
# Now, the same problem is solved in a different way.
|
||||||
# A new neural network is now defined, with an additional input variable, named extra-feature, which coincides with the forcing term in the Laplace equation.
|
# A new neural network is now defined, with an additional input variable, named extra-feature, which coincides with the forcing term in the Laplace equation.
|
||||||
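# The cell defining the extra feature is elided in this diff; a sketch of how
# such a feature can look (a hedged reconstruction, not verbatim from the PR;
# `torch` and `LabelTensor` are imported at the top of the tutorial):

class SinSin(torch.nn.Module):
    """Extra feature: the forcing term sin(pi*x)*sin(pi*y) of the problem."""

    def __init__(self):
        super().__init__()

    def forward(self, x):
        t = (torch.sin(x.extract(['x']) * torch.pi) *
             torch.sin(x.extract(['y']) * torch.pi))
        return LabelTensor(t, ['sin(x)sin(y)'])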
@@ -147,7 +148,7 @@ model_feat = FeedForward(
|
|||||||
input_dimensions=len(problem.input_variables)+1
|
input_dimensions=len(problem.input_variables)+1
|
||||||
)
|
)
|
||||||
pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
||||||
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()])
|
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
|
|
||||||
# train
|
# train
|
||||||
trainer_feat.train()
|
trainer_feat.train()
|
||||||
@@ -162,7 +163,7 @@ trainer_feat.train()
|
|||||||
plotter.plot(trainer_feat)
|
plotter.plot(trainer_feat)
|
||||||
|
|
||||||
|
|
||||||
# ### The problem solution with learnable extra-features
|
# ## Solving the problem with learnable extra-features PINNs
|
||||||
|
|
||||||
# We can still do better!
|
# We can still do better!
|
||||||
#
|
#
|
||||||
@@ -176,7 +177,7 @@ plotter.plot(trainer_feat)
|
|||||||
# where $\alpha$ and $\beta$ are the abovementioned parameters.
|
# where $\alpha$ and $\beta$ are the abovementioned parameters.
|
||||||
# Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!
|
# Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!
|
||||||
|
|
||||||
# In[7]:
|
# In[8]:
|
||||||
|
|
||||||
|
|
||||||
class SinSinAB(torch.nn.Module):
|
class SinSinAB(torch.nn.Module):
|
||||||
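    # (body elided in this diff; a sketch with learnable alpha and beta,
    # assuming they scale the sine arguments and the amplitude)

    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
        self.beta = torch.nn.Parameter(torch.tensor([1.0]))

    def forward(self, x):
        t = (self.beta *
             torch.sin(self.alpha * x.extract(['x']) * torch.pi) *
             torch.sin(self.alpha * x.extract(['y']) * torch.pi))
        return LabelTensor(t, ['b*sin(a*x)sin(a*y)'])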
@@ -202,8 +203,8 @@ model_lean= FeedForward(
|
|||||||
output_dimensions=len(problem.output_variables),
|
output_dimensions=len(problem.output_variables),
|
||||||
input_dimensions=len(problem.input_variables)+1
|
input_dimensions=len(problem.input_variables)+1
|
||||||
)
|
)
|
||||||
pinn_lean = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
pinn_lean = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
||||||
trainer_learn = Trainer(pinn_lean, max_epochs=1000)
|
trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
|
|
||||||
# train
|
# train
|
||||||
trainer_learn.train()
|
trainer_learn.train()
|
||||||
@@ -211,7 +212,7 @@ trainer_learn.train()
|
|||||||
|
|
||||||
# Umh, the final loss is not appreciably better than the previous model (with static extra features), despite the usage of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/bias of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the faster way to reach machine precision for solving the Poisson problem is removing all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.
|
# Umh, the final loss is not appreciably better than the previous model (with static extra features), despite the usage of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/bias of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the faster way to reach machine precision for solving the Poisson problem is removing all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.
|
||||||
|
|
||||||
# In[8]:
|
# In[11]:
|
||||||
|
|
||||||
|
|
||||||
# make model + solver + trainer
|
# make model + solver + trainer
|
||||||
@@ -221,8 +222,8 @@ model_lean= FeedForward(
|
|||||||
output_dimensions=len(problem.output_variables),
|
output_dimensions=len(problem.output_variables),
|
||||||
input_dimensions=len(problem.input_variables)+1
|
input_dimensions=len(problem.input_variables)+1
|
||||||
)
|
)
|
||||||
pinn_learn = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
|
||||||
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()])
|
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
|
||||||
|
|
||||||
# train
|
# train
|
||||||
trainer_learn.train()
|
trainer_learn.train()
|
||||||
@@ -233,8 +234,30 @@ trainer_learn.train()
|
|||||||
#
|
#
|
||||||
# We conclude here by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented here: the standard PINN, PINN with extra features, and PINN with learnable extra features.
|
# We conclude here by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented here: the standard PINN, PINN with extra features, and PINN with learnable extra features.
|
||||||
|
|
||||||
# In[9]:
|
# In[12]:
|
||||||
|
|
||||||
|
|
||||||
plotter.plot(trainer_learn)
|
plotter.plot(trainer_learn)
|
||||||
|
|
||||||
|
|
||||||
|
# Let us compare the training losses for the various types of training
|
||||||
|
|
||||||
|
# In[14]:
|
||||||
|
|
||||||
|
|
||||||
|
plotter.plot_loss(trainer, label='Standard')
|
||||||
|
plotter.plot_loss(trainer_feat, label='Static Features')
|
||||||
|
plotter.plot_loss(trainer_learn, label='Learnable Features')
|
||||||
|
|
||||||
|
|
||||||
|
# ## What's next?
|
||||||
|
#
|
||||||
|
# Nice, you have completed the two-dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:
|
||||||
|
#
|
||||||
|
# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
|
||||||
|
#
|
||||||
|
# 2. Propose new types of extrafeatures and see how they affect the learning
|
||||||
|
#
|
||||||
|
# 3. Exploit extrafeature training in more complex problems
|
||||||
|
#
|
||||||
|
# 4. Many more...
|
||||||
|
|||||||
tutorials/tutorial3/tutorial.ipynb: 350 changes (vendored)
tutorials/tutorial3/tutorial.py: 137 changes (vendored)
@@ -1,24 +1,10 @@
#!/usr/bin/env python
# coding: utf-8

-# # Tutorial 3: resolution of wave equation with hard constraint PINNs.
+# # Tutorial: Two dimensional Wave problem with hard constraint

-# ## The problem definition

-# In this tutorial we present how to solve the wave equation using hard constraint PINNs. For doing so we will build a costum torch model and pass it to the `PINN` solver.
#
-# The problem is written in the following form:
+# In this tutorial we present how to solve the wave equation using hard constraint PINNs. To do so, we will build a custom `torch` model and pass it to the `PINN` solver.
#
-# \begin{equation}
-# \begin{cases}
-# \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-# u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-# u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-# \end{cases}
-# \end{equation}
-#
-# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.

# First of all, some useful imports.

# In[1]:
@@ -36,6 +22,20 @@ from pina.equation.equation_factory import FixedValue
from pina import Condition, Plotter

+# ## The problem definition

+# The problem is written in the following form:
+#
+# \begin{equation}
+# \begin{cases}
+# \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
+# u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
+# u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+# \end{cases}
+# \end{equation}
+#
+# where $D$ is the square domain $[0,1]^2$, $\Gamma_i$, with $i=1,\dots,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.

# Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem`, since we deal with spatial and time-dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution, which will be compared with the predicted one.

# In[2]:
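The `In[2]` cell with the `Wave` problem class is elided by the hunk below. As a rough, hedged sketch only (not the PR's verbatim code), and assuming PINA's `CartesianDomain`, `Equation`, and the `laplacian` operator from `pina.operators`, it could look like:

    # sketch under the stated assumptions; location names match the ones
    # sampled later ('D', 't0', 'gamma1'...'gamma4')
    class Wave(TimeDependentProblem, SpatialProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        def wave_equation(input_, output_):
            # residual of u_tt = laplacian(u): both terms are second derivatives
            u_tt = laplacian(output_, input_, components=['u'], d=['t'])
            nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
            return nabla_u - u_tt

        def initial_condition(input_, output_):
            # u(x, y, t=0) = sin(pi x) sin(pi y)
            u_expected = (torch.sin(torch.pi * input_.extract(['x'])) *
                          torch.sin(torch.pi * input_.extract(['y'])))
            return output_.extract(['u']) - u_expected

        conditions = {
            'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}),
                                equation=FixedValue(0.)),
            # ... 'gamma2', 'gamma3', 'gamma4' analogous for the other edges ...
            't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}),
                            equation=Equation(initial_condition)),
            'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
                           equation=Equation(wave_equation)),
        }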
@@ -78,7 +78,7 @@ problem = Wave()

# ## Hard Constraint Model

-# After the problem, a **torch** model is needed to solve the PINN. Usually, many models are already implemented in `PINA`, but the user has the possibility to build his/her own model in `PyTorch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:
+# After the problem, a **torch** model is needed to solve the PINN. Many models are already implemented in **PINA**, but the user also has the possibility to build his/her own model in `torch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:
#
# $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t), $$
#
@@ -92,11 +92,11 @@ class HardMLP(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()

-        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
-                                          torch.nn.Tanh(),
-                                          torch.nn.Linear(20, 20),
-                                          torch.nn.Tanh(),
-                                          torch.nn.Linear(20, output_dim))
+        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
+                                          torch.nn.ReLU(),
+                                          torch.nn.Linear(40, 40),
+                                          torch.nn.ReLU(),
+                                          torch.nn.Linear(40, output_dim))

    # here in the forward we implement the hard constraints
    def forward(self, x):
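The body of `forward` is elided by the next hunk; based on the formula above and the `extract` idiom used by `HardMLPtime` later in this file, it plausibly reads:

    def forward(self, x):
        # x*y*(1-x)*(1-y) vanishes on all four edges of the unit square,
        # so u = 0 on gamma_1...gamma_4 holds by construction
        hard = x.extract(['x'])*(1 - x.extract(['x']))*x.extract(['y'])*(1 - x.extract(['y']))
        return hard * self.layers(x)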
@@ -106,14 +106,19 @@ class HardMLP(torch.nn.Module):

# ## Train and Inference

-# In this tutorial, the neural network is trained for 3000 epochs with a learning rate of 0.001 (default in `PINN`). Training takes approximately 1 minute.
+# In this tutorial, the neural network is trained for 1000 epochs with a learning rate of 0.001 (default in `PINN`). Training takes approximately 3 minutes.

# In[4]:

+# generate the data
+problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
+
+# create the solver
pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
-problem.discretise_domain(1000, 'random', locations=['D','t0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
-trainer = Trainer(pinn, max_epochs=3000)
+# create trainer and train
+trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
@@ -125,11 +130,93 @@ trainer.train()
plotter = Plotter()

# plotting at fixed time t = 0.0
+print('Plotting at t=0')
plotter.plot(trainer, fixed_variables={'t': 0.0})

# plotting at fixed time t = 0.5
+print('Plotting at t=0.5')
plotter.plot(trainer, fixed_variables={'t': 0.5})

# plotting at fixed time t = 1.
+print('Plotting at t=1')
plotter.plot(trainer, fixed_variables={'t': 1.0})

+# The results are not so great, and we can clearly see that as time progresses the solution gets worse... Can we do better?
+#
+# A valid option is to impose the initial condition as a hard constraint as well. Specifically, our solution is written as:
+#
+# $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y), $$
+#
+# Let us build the network first.

+# In[6]:

+class HardMLPtime(torch.nn.Module):
+
+    def __init__(self, input_dim, output_dim):
+        super().__init__()
+
+        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
+                                          torch.nn.ReLU(),
+                                          torch.nn.Linear(40, 40),
+                                          torch.nn.ReLU(),
+                                          torch.nn.Linear(40, output_dim))
+
+    # here in the forward we implement the hard constraints
+    def forward(self, x):
+        hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
+        hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
+        return hard_space * self.layers(x) * x.extract(['t']) + hard_t

+# Now let's train with the same configuration as the previous test.

+# In[7]:

+# generate the data
+problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
+
+# create the solver
+pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables)))
+
+# create trainer and train
+trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+trainer.train()

+# We can clearly see that the loss is way lower now. Let's plot the results.

+# In[8]:

+plotter = Plotter()
+
+# plotting at fixed time t = 0.0
+print('Plotting at t=0')
+plotter.plot(trainer, fixed_variables={'t': 0.0})
+
+# plotting at fixed time t = 0.5
+print('Plotting at t=0.5')
+plotter.plot(trainer, fixed_variables={'t': 0.5})
+
+# plotting at fixed time t = 1.
+print('Plotting at t=1')
+plotter.plot(trainer, fixed_variables={'t': 1.0})

+# We can now see that the results are way better! This is due to the fact that previously the network was not correctly learning the initial condition, leading to a poor solution as time evolved. By imposing the initial condition, the network is able to correctly solve the problem.

+# ## What's next?
+#
+# Nice, you have completed the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:
+#
+# 1. Train the network for longer or with different layer sizes and assess the final accuracy
+#
+# 2. Propose new types of hard constraints in time (see the sketch after this list), e.g. $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)(1-\exp(-t)) + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y), $$
+#
+# 3. Exploit extra feature training for models 1 and 2
+#
+# 4. Many more...
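For suggestion 2 above, only the time factor changes; a hedged sketch of that alternative `forward` (illustrative, not part of the PR) could be:

    def forward(self, x):
        hard_space = x.extract(['x'])*(1 - x.extract(['x']))*x.extract(['y'])*(1 - x.extract(['y']))
        hard_t = (torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y']))
                  * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t'])))
        # (1 - exp(-t)) vanishes at t=0 like the factor t does, but saturates
        # to 1 instead of growing, which can help for long time horizons
        return hard_space * self.layers(x) * (1 - torch.exp(-x.extract(['t']))) + hard_t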
tutorials/tutorial4/tutorial.ipynb: 215 changes (vendored)
tutorials/tutorial5/tutorial.ipynb: 101 changes (vendored)
@@ -5,7 +5,7 @@
"id": "e80567a6",
"metadata": {},
"source": [
-"# Tutorial 5: Fourier Neural Operator Learning"
+"# Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator"
]
},
{
@@ -13,8 +13,8 @@
"id": "8762bbe5",
"metadata": {},
"source": [
-"In this tutorial we are going to solve the Darcy flow 2d problem, presented in [Fourier Neural Operator for\n",
+"In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for\n",
-"Parametric Partial Differential Equation](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operation, run `pip install scipy` for installing it."
+"Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input/output operations."
]
},
{
@@ -22,18 +22,9 @@
"execution_count": 1,
"id": "5f2744dc",
"metadata": {},
-"outputs": [
+"outputs": [],
-{
-"name": "stderr",
-"output_type": "stream",
-"text": [
-"/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)\n",
-" warnings.warn(f\"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of \"\n"
-]
-}
-],
"source": [
-"\n",
+"# !pip install scipy # install scipy\n",
"from scipy import io\n",
"import torch\n",
"from pina.model import FNO, FeedForward # let's import some models\n",
@@ -63,7 +54,7 @@
},
{
"cell_type": "code",
-"execution_count": 2,
+"execution_count": 17,
"id": "2ffb8a4c",
"metadata": {},
"outputs": [],
@@ -71,9 +62,9 @@
"# download the dataset\n",
"data = io.loadmat(\"Data_Darcy.mat\")\n",
"\n",
-"# extract data\n",
+"# extract data (we use only 100 data for train)\n",
-"k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)\n",
+"k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]\n",
-"u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)\n",
+"u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]\n",
"k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)\n",
"u_test = torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)\n",
"x = torch.tensor(data['x'], dtype=torch.float)[0]\n",
@@ -90,7 +81,7 @@
},
{
"cell_type": "code",
-"execution_count": 3,
+"execution_count": 18,
"id": "c8501b6f",
"metadata": {},
"outputs": [
@@ -125,7 +116,7 @@
},
{
"cell_type": "code",
-"execution_count": 4,
+"execution_count": 19,
"id": "8b27d283",
"metadata": {},
"outputs": [],
@@ -152,7 +143,7 @@
},
{
"cell_type": "code",
-"execution_count": 5,
+"execution_count": 20,
"id": "e34f18b0",
"metadata": {},
"outputs": [
@@ -160,35 +151,16 @@
"name": "stderr",
"output_type": "stream",
"text": [
-"/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML\n",
+"GPU available: False, used: False\n",
-" warnings.warn(\"Can't initialize NVML\")\n",
-"GPU available: True (cuda), used: True\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
-"HPU available: False, using: 0 HPUs\n",
+"HPU available: False, using: 0 HPUs\n"
-"Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial5/lightning_logs\n",
-"2023-10-17 10:41:03.316644: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
-"2023-10-17 10:41:03.333768: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
-"2023-10-17 10:41:03.383188: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
-"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
-"2023-10-17 10:41:07.712785: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
-"LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
-"\n",
-" | Name | Type | Params\n",
-"----------------------------------------\n",
-"0 | _loss | MSELoss | 0 \n",
-"1 | _neural_net | Network | 481 \n",
-"----------------------------------------\n",
-"481 Trainable params\n",
-"0 Non-trainable params\n",
-"481 Total params\n",
-"0.002 Total estimated model params size (MB)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
-"model_id": "eb573678e5d94f0490ce09817a06f5cb",
+"model_id": "40f63403b97248a88e49755e8cb096fc",
"version_major": 2,
"version_minor": 0
},
@@ -203,22 +175,20 @@
"name": "stderr",
"output_type": "stream",
"text": [
-"/u/n/ndemo/.local/lib/python3.9/site-packages/torch/_tensor.py:1386: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3614.)\n",
-" ret = func(*args, **kwargs)\n",
"`Trainer.fit` stopped: `max_epochs=100` reached.\n"
]
}
],
"source": [
"# make model\n",
-"model=FeedForward(input_dimensions=1, output_dimensions=1)\n",
+"model = FeedForward(input_dimensions=1, output_dimensions=1)\n",
"\n",
"\n",
"# make solver\n",
"solver = SupervisedSolver(problem=problem, model=model)\n",
"\n",
"# make the trainer and train\n",
-"trainer = Trainer(solver=solver, max_epochs=100)\n",
+"trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)\n",
"trainer.train()\n"
]
},
@@ -232,7 +202,7 @@
},
{
"cell_type": "code",
-"execution_count": 6,
+"execution_count": 21,
"id": "0e2a6aa4",
"metadata": {},
"outputs": [
@@ -240,8 +210,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
-"Final error training 56.86%\n",
+"Final error training 56.24%\n",
-"Final error testing 56.82%\n"
+"Final error testing 55.95%\n"
]
}
],
@@ -271,7 +241,7 @@
},
{
"cell_type": "code",
-"execution_count": 7,
+"execution_count": 22,
"id": "9af523a5",
"metadata": {},
"outputs": [
@@ -279,27 +249,16 @@
"name": "stderr",
"output_type": "stream",
"text": [
-"GPU available: True (cuda), used: True\n",
+"GPU available: False, used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
-"HPU available: False, using: 0 HPUs\n",
+"HPU available: False, using: 0 HPUs\n"
-"LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
-"\n",
-" | Name | Type | Params\n",
-"----------------------------------------\n",
-"0 | _loss | MSELoss | 0 \n",
-"1 | _neural_net | Network | 591 K \n",
-"----------------------------------------\n",
-"591 K Trainable params\n",
-"0 Non-trainable params\n",
-"591 K Total params\n",
-"2.364 Total estimated model params size (MB)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
-"model_id": "0f7225d39f7241e692c6027c72adfd5f",
+"model_id": "5328859a5d9344ddb818622fd058d2a5",
"version_major": 2,
"version_minor": 0
},
@@ -314,7 +273,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
-"`Trainer.fit` stopped: `max_epochs=20` reached.\n"
+"`Trainer.fit` stopped: `max_epochs=100` reached.\n"
]
}
],
@@ -334,7 +293,7 @@
"solver = SupervisedSolver(problem=problem, model=model)\n",
"\n",
"# make the trainer and train\n",
-"trainer = Trainer(solver=solver, max_epochs=20)\n",
+"trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)\n",
"trainer.train()\n"
]
},
@@ -343,12 +302,12 @@
"id": "84964cb9",
"metadata": {},
"source": [
-"We can clearly see that with 1/3 of the total epochs the loss is lower. Let's see in testing.. Notice that the number of parameters is way higher than a `FeedForward` network. We suggest to use GPU or TPU for a speed up in training."
+"We can clearly see that the final loss is lower. Let's see how it performs in testing. Notice that the number of parameters is way higher than in a `FeedForward` network, so we suggest using a GPU or TPU to speed up training when many data samples are used."
]
},
{
"cell_type": "code",
-"execution_count": 8,
+"execution_count": 23,
"id": "58e2db89",
"metadata": {},
"outputs": [
@@ -356,8 +315,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
-"Final error training 26.19%\n",
+"Final error training 10.86%\n",
-"Final error testing 25.89%\n"
+"Final error testing 12.77%\n"
]
}
],
tutorials/tutorial5/tutorial.py: 35 changes (vendored)
@@ -1,14 +1,15 @@
#!/usr/bin/env python
# coding: utf-8

-# # Tutorial 5: Fourier Neural Operator Learning
+# # Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator

-# In this tutorial we are going to solve the Darcy flow 2d problem, presented in [Fourier Neural Operator for
+# In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for
-# Parametric Partial Differential Equation](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operation, run `pip install scipy` for installing it.
+# Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input/output operations.

# In[1]:

+# !pip install scipy # install scipy
from scipy import io
import torch
from pina.model import FNO, FeedForward # let's import some models
@@ -31,15 +32,15 @@ import matplotlib.pyplot as plt
# Specifically, $u$ is the flow pressure, $k$ is the permeability field and $f$ is the forcing function. The Darcy flow can parameterize a variety of systems, including flow through porous media, elastic materials and heat conduction. Here you will define the domain as a 2D unit square with Dirichlet boundary conditions. The dataset is taken from the authors' original reference.
#

-# In[2]:
+# In[17]:

# download the dataset
data = io.loadmat("Data_Darcy.mat")

-# extract data
+# extract data (we use only 100 data for train)
-k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)
+k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
-u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)
+u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)
u_test = torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)
x = torch.tensor(data['x'], dtype=torch.float)[0]
@@ -48,7 +49,7 @@ y = torch.tensor(data['y'], dtype=torch.float)[0]

# Let's visualize some data

-# In[3]:
+# In[18]:

plt.subplot(1, 2, 1)
@@ -62,7 +63,7 @@ plt.show()

# We now create the neural operator class. It is a very simple class, inheriting from `AbstractProblem`.

-# In[4]:
+# In[19]:

class NeuralOperatorSolver(AbstractProblem):
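The body of this class is elided by the hunk below. A plausible sketch only (assumption: `Condition` here accepts `input_points`/`output_points` for supervised data, with `k` as input and `u` as output variable):

    # sketch under the stated assumptions, not the PR's verbatim code
    class NeuralOperatorSolver(AbstractProblem):
        input_variables = ['k']
        output_variables = ['u']
        conditions = {'data': Condition(input_points=k_train,
                                        output_points=u_train)}

    problem = NeuralOperatorSolver()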
@@ -79,24 +80,24 @@ problem = NeuralOperatorSolver()
#
# We will first solve the problem using a Feedforward neural network. We will use the `SupervisedSolver` for solving the problem, since we are training using supervised learning.

-# In[5]:
+# In[20]:

# make model
-model=FeedForward(input_dimensions=1, output_dimensions=1)
+model = FeedForward(input_dimensions=1, output_dimensions=1)

# make solver
solver = SupervisedSolver(problem=problem, model=model)

# make the trainer and train
-trainer = Trainer(solver=solver, max_epochs=100)
+trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()

# The final loss is pretty high... We can calculate the error by importing `LpLoss`.

-# In[6]:
+# In[21]:

from pina.loss import LpLoss
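The error computation itself is elided; given the `metric_err` call that survives at the end of this diff, a hedged sketch (assumption: `LpLoss` exposes a `relative` flag, with `p=2` as default):

    # relative L2 error in percent, mirroring the printed output below
    metric_err = LpLoss(relative=True)

    err = float(metric_err(u_train.squeeze(-1), solver.models[0](k_train).squeeze(-1)).mean())*100
    print(f'Final error training {err:.2f}%')

    err = float(metric_err(u_test.squeeze(-1), solver.models[0](k_test).squeeze(-1)).mean())*100
    print(f'Final error testing {err:.2f}%')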
@@ -116,7 +117,7 @@ print(f'Final error testing {err:.2f}%')
#
# We will now move to solving the problem using a FNO. Since we are learning an operator, this approach is better suited, as we shall see.

-# In[7]:
+# In[22]:

# make model
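The `FNO` construction is elided except for its opening line (visible in the hunk header below); a hedged sketch, where everything beyond the `lifting_net=lifting_net` keyword (layer sizes, the remaining keyword names, the number of modes) is an assumption for illustration only:

    # small lifting/projecting networks around the Fourier blocks (sizes are guesses)
    lifting_net = torch.nn.Linear(1, 24)
    projecting_net = torch.nn.Linear(24, 1)
    model = FNO(lifting_net=lifting_net,
                projecting_net=projecting_net,  # assumed keyword name
                n_modes=8,                      # assumed: Fourier modes kept per dimension
                dimensions=2)                   # assumed: 2D spatial problem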
@@ -134,13 +135,13 @@ model = FNO(lifting_net=lifting_net,
solver = SupervisedSolver(problem=problem, model=model)

# make the trainer and train
-trainer = Trainer(solver=solver, max_epochs=20)
+trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()

-# We can clearly see that with 1/3 of the total epochs the loss is lower. Let's see in testing.. Notice that the number of parameters is way higher than a `FeedForward` network. We suggest to use GPU or TPU for a speed up in training.
+# We can clearly see that the final loss is lower. Let's see how it performs in testing. Notice that the number of parameters is way higher than in a `FeedForward` network, so we suggest using a GPU or TPU to speed up training when many data samples are used.

-# In[8]:
+# In[23]:

err = float(metric_err(u_train.squeeze(-1), solver.models[0](k_train).squeeze(-1)).mean())*100
tutorials/tutorial6/tutorial.ipynb: 30 changes (vendored)
@@ -5,29 +5,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Tutorial 6: How to Use Geometries in PINA"
+"# Tutorial: Building custom geometries with PINA `Location` class\n",
-]
+"\n",
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Built-in Geometries"
-]
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"metadata": {},
-"source": [
"In this tutorial we will show how to use geometries in PINA. Specifically, the tutorial will include how to create geometries and how to visualize them. The topics covered are:\n",
"\n",
"* Creating CartesianDomains and EllipsoidDomains\n",
"* Getting the Union and Difference of Geometries\n",
"* Sampling points in the domain (and visualize them)\n",
"\n",
-"We import the relevant modules."
+"We import the relevant modules first."
]
},
{
@@ -45,6 +31,14 @@
" ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)"
]
},
+{
+"attachments": {},
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Built-in Geometries"
+]
+},
{
"attachments": {},
"cell_type": "markdown",
@@ -401,7 +395,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Because the `Location` class we are inherting from requires both a sample method and `is_inside` method, we will create them and just add in \"pass\" for the moment."
+"Because the `Location` class we are inheriting from requires both a `sample` method and an `is_inside` method, we will create them and just add in \"pass\" for the moment."
]
},
{
tutorials/tutorial6/tutorial.py: 12 changes (vendored)
@@ -1,17 +1,15 @@
#!/usr/bin/env python
# coding: utf-8

-# # Tutorial 6: How to Use Geometries in PINA
+# # Tutorial: Building custom geometries with PINA `Location` class
+#
-# ## Built-in Geometries

# In this tutorial we will show how to use geometries in PINA. Specifically, the tutorial will include how to create geometries and how to visualize them. The topics covered are:
#
# * Creating CartesianDomains and EllipsoidDomains
# * Getting the Union and Difference of Geometries
# * Sampling points in the domain (and visualize them)
#
-# We import the relevant modules.
+# We import the relevant modules first.

# In[1]:
@@ -25,6 +23,8 @@ def plot_scatter(ax, pts, title):
    ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)

+# ## Built-in Geometries

# We will create one cartesian domain and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional case, but the extension to 3D (and higher) cases is trivial. The geometries also allow the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without.

# In[2]:
@@ -180,7 +180,7 @@ class Heart(Location):

-# Because the `Location` class we are inherting from requires both a sample method and `is_inside` method, we will create them and just add in "pass" for the moment.
+# Because the `Location` class we are inheriting from requires both a `sample` method and an `is_inside` method, we will create them and just add in "pass" for the moment.
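The `In[13]` cell with the stubbed methods is elided; the skeleton plausibly looks like this (the signatures beyond the two method names are assumptions):

    class Heart(Location):
        """Hypothetical skeleton: satisfy the abstract interface with stubs."""

        def is_inside(self, point, check_border=False):  # assumed signature
            pass

        def sample(self, n, mode='random', variables='all'):  # assumed signature
            pass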

# In[13]: