Tutorials v0.1 (#178)

Tutorial update and small fixes

* Tutorials update + Tutorial FNO
* Create a metric tracker callback
* Update PINN for logging
* Update plotter for plotting
* Small fix LabelTensor
* Small fix FNO

---------

Co-authored-by: Dario Coscia <dariocoscia@cli-10-110-13-250.WIFIeduroamSTUD.units.it>
Co-authored-by: Dario Coscia <dariocoscia@dhcp-176.eduroam.sissa.it>
Dario Coscia
2023-09-26 17:29:37 +02:00
committed by Nicola Demo
parent 939353f517
commit a9b1bd2826
45 changed files with 2760 additions and 1321 deletions


@@ -2,39 +2,38 @@ Tutorial 1: Physics Informed Neural Networks on PINA
====================================================
In this tutorial we will show the typical use case of PINA on a toy
problem solved with a Physics Informed Neural Network. Specifically,
the tutorial aims to introduce the following topics:
- Defining a PINA Problem,
- Building a ``PINN`` Solver,
- Sampling points in the domain.
We will show each step in detail, and at the end we will solve a very
simple problem with PINA.
Defining a Problem
------------------
Initialize the Problem class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The problem definition in the PINA framework is done by building a
Python ``class`` inherited from ``AbstractProblem``. A problem is an
object which explains what the solver is supposed to solve. For Physics
Informed Neural Networks, a problem can inherit from one or more
(already implemented) problem classes (``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``), depending on the
nature of the problem treated. Let's see an example to better
understand:
Simple Ordinary Differential Equation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Consider the following:
.. math::


   \begin{equation}
   \begin{cases}
   \frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
   u(x=0) &= 1 \\
   \end{cases}
   \end{equation}
with analytical solution :math:`u(x) = e^x`. In this case, our ODE
depends only on the spatial variable :math:`x\in(0,1)`; this
@@ -44,12 +43,12 @@ means that our problem class is going to be inherited from
.. code:: python
    from pina.problem import SpatialProblem
    from pina.geometry import CartesianDomain


    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # other stuff ...
@@ -57,7 +56,7 @@ Notice that we define ``output_variables`` as a list of symbols,
indicating the output variables of our equation (in this case only
:math:`u`). The ``spatial_domain`` variable indicates the domain where
the sample points are going to be sampled, in this case
:math:`x\in(0,1)`.
What if we also have a time dependency in the equation? Well, in that
case our ``class`` will inherit from both ``SpatialProblem`` and
@@ -66,13 +65,13 @@ that case our ``class`` will inherit from both ``SpatialProblem`` and
.. code:: python
    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina.geometry import CartesianDomain


    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        # other stuff ...
@@ -82,11 +81,12 @@ time domain where we want the solution.
Summarizing, in PINA we can initialize a problem with a class which is
inherited from three base classes: ``SpatialProblem``,
``TimeDependentProblem``, ``ParametricProblem``, depending on the type
of problem we are considering. For reference:

* ``SpatialProblem`` :math:`\rightarrow` spatial variable(s) present in the differential equation
* ``TimeDependentProblem`` :math:`\rightarrow` time variable(s) present in the differential equation
* ``ParametricProblem`` :math:`\rightarrow` parameter(s) present in the differential equation
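
For instance, a hedged sketch of a problem depending on both a spatial
variable and a parameter could look as follows. The ``ParametricODE``
class and the ``parameter_domain`` attribute name are our assumptions,
written by analogy with ``spatial_domain``:

.. code:: python

    from pina.problem import SpatialProblem, ParametricProblem
    from pina.geometry import CartesianDomain

    # hypothetical sketch: an ODE whose dynamics depend on a parameter
    # 'alpha'; 'parameter_domain' is assumed analogous to 'spatial_domain'
    class ParametricODE(SpatialProblem, ParametricProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        parameter_domain = CartesianDomain({'alpha': [0, 1]})

        # other stuff ...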
Write the problem class
~~~~~~~~~~~~~~~~~~~~~~~
@@ -100,7 +100,9 @@ Equation (1) and try to write the PINA model class:
    from pina.problem import SpatialProblem
    from pina.operators import grad
    from pina.geometry import CartesianDomain
    from pina.equation import Equation
    from pina import Condition

    import torch
@@ -108,7 +110,7 @@ Equation (1) and try to write the PINA model class:
    class SimpleODE(SpatialProblem):

        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # defining the ode equation
        def ode_equation(input_, output_):
@@ -136,8 +138,8 @@ Equation (1) and try to write the PINA model class:
        # Conditions to hold
        conditions = {
            'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
            'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
        }
        # defining true solution
@@ -152,7 +154,10 @@ different conditions. For example, in the domain :math:`(0,1)` the ODE
equation (``ode_equation``) must be satisfied, so we write it by putting
the whole ODE equation on one side, such that we return the zero
residual. This is done for all the conditions (``ode_equation``,
``initial_condition``). Notice that we do not pass a ``python`` function
directly, but an ``Equation`` object, which is initialized with the
``python`` function. This is done so that all the computations and
internal checks are done inside PINA.
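
Concretely, for our ODE the residual returned by ``ode_equation`` is

.. math::


   r(x) = \frac{d}{dx}u(x) - u(x),

which vanishes exactly when the equation is satisfied; similarly,
``initial_condition`` returns the residual :math:`u(x=0) - 1`.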
Once we have defined the functions, we need to tell the network where
these methods have to be applied. For doing this we use the class
@@ -169,16 +174,17 @@ definition.
Build PINN object
-----------------
In PINA we have already developed different solvers, one of which is
``PINN``. The basic requirements for building a ``PINN`` model are a
problem and a model. We have already covered the problem definition. For
the model, one can use the default models provided in PINA or a custom
model. We will not go into the details of model definition; Tutorial2
and Tutorial3 treat the topic in detail.
.. code:: ipython3
    from pina.model import FeedForward
    from pina.solvers import PINN
    # initialize the problem
    problem = SimpleODE()
@@ -187,11 +193,11 @@ Tutorial3 treat the topic in detail.
    model = FeedForward(
        layers=[10, 10],
        func=torch.nn.Tanh,
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)
    )
    # create the PINN object, see the PINN documentation for extra arguments in the constructor
    pinn = PINN(problem, model)
@@ -199,31 +205,24 @@ Creating the pinn object is fairly simple by using the ``PINN`` class,
different optional inputs can be passed: optimizer, batch size, … (see
`documentation <https://mathlab.github.io/PINA/>`__ for reference).
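
As a hedged sketch (the keyword names below are our assumptions; check
the documentation for the exact constructor signature), a customized
solver could be built as:

.. code:: ipython3

    # hypothetical sketch: 'optimizer' and 'optimizer_kwargs' are assumed
    # constructor arguments, see the PINN documentation for the real ones
    pinn = PINN(
        problem,
        model,
        optimizer=torch.optim.Adam,       # assumed: optimizer class to use
        optimizer_kwargs={'lr': 0.001},   # assumed: options for the optimizer
    )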
Sample points in the domain and create the Trainer
--------------------------------------------------
Once the ``PINN`` object is created, we need to generate the points for
starting the optimization. For doing this we use the
``.discretise_domain`` method of the ``AbstractProblem`` class. Let's
see some methods to sample in :math:`(0,1)`.
.. code:: ipython3
    # sampling 20 points in (0, 1) with discrete step
    problem.discretise_domain(20, 'grid', locations=['D'])

    # sampling 20 points in (0, 1) with latin hypercube
    problem.discretise_domain(20, 'latin', locations=['D'])

    # sampling 20 points in (0, 1) randomly
    problem.discretise_domain(20, 'random', locations=['D'])
We are going to use equispaced points for sampling. We need to sample in
@@ -232,8 +231,8 @@ all the conditions domains. In our case we sample in ``D`` and ``x0``.
.. code:: ipython3
    # sampling for training
    problem.discretise_domain(1, 'random', locations=['x0'])
    problem.discretise_domain(20, 'grid', locations=['D'])
Very simple training and plotting
@@ -241,36 +240,68 @@ Very simple training and plotting
Once we have defined the PINA model, created a network and sampled
points in the domain, we have everything necessary for training a
``PINN``. For training we use the ``Trainer`` class. Here we show a very
short training and some methods for plotting the results. Notice that by
default all relevant metrics (e.g. the MSE error during training) are
going to be tracked using a ``lightning`` logger, by default
``CSVLogger``. If you want to track the metrics yourself without a
logger, use ``pina.callbacks.MetricTracker``.
.. code:: ipython3
    # create the trainer
    from pina.trainer import Trainer
    from pina.callbacks import MetricTracker

    trainer = Trainer(solver=pinn, max_epochs=3000, callbacks=[MetricTracker()])

    # train
    trainer.train()
.. parsed-literal::
    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    /Users/dariocoscia/anaconda3/envs/pina/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:67: UserWarning: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
      warning_cache.warn(

      | Name        | Type    | Params
    ----------------------------------------
    0 | _loss       | MSELoss | 0
    1 | _neural_net | Network | 141
    ----------------------------------------
    141       Trainable params
    0         Non-trainable params
    141       Total params
    0.001     Total estimated model params size (MB)
.. parsed-literal::

    Epoch 2999: : 1it [00:00, 226.55it/s, v_num=10, mean_loss=2.14e-5, x0_loss=4.24e-5, D_loss=2.93e-7]
.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=3000` reached.

.. parsed-literal::

    Epoch 2999: : 1it [00:00, 159.67it/s, v_num=10, mean_loss=2.14e-5, x0_loss=4.24e-5, D_loss=2.93e-7]
After the training we can inspect the trainer logged metrics (by default
PINA logs the mean square error residual loss). The logged metrics can
be accessed online using one of the ``Lightning`` loggers. The final
loss can be accessed with ``trainer.logged_metrics``.
.. code:: ipython3
    # inspecting final loss
    trainer.logged_metrics
@@ -278,12 +309,14 @@ we can inspect. By default PINA uses mean square error loss.
.. parsed-literal::
    {'mean_loss': tensor(2.1357e-05),
     'x0_loss': tensor(4.2421e-05),
     'D_loss': tensor(2.9291e-07)}
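
If you attached the ``MetricTracker`` callback as above, you can also
read the training history back from it. A hedged sketch (the ``metrics``
attribute name is an assumption):

.. code:: ipython3

    # assumed access pattern: MetricTracker is supposed here to expose the
    # tracked values through a 'metrics' attribute
    tracker = trainer.callbacks[0]   # the MetricTracker passed to the Trainer
    tracker.metrics                  # per-epoch history of the logged losses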
By using the ``Plotter`` class from PINA we can also do some qualitative
plots of the solution.
.. code:: ipython3
@@ -291,11 +324,21 @@ plots of the loss function.
    from pina import Plotter

    # plotting the solution
    plotter = Plotter()
    plotter.plot(trainer=trainer)
.. image:: tutorial_files/tutorial_21_0.png
The solution is completely overlapped with the actual one. We can also
easily plot the loss:
.. code:: ipython3
    plotter.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)
.. image:: tutorial_files/tutorial_23_0.png