update plotter

Author: Dario Coscia, 2023-11-09 18:20:51 +01:00
Committed by: Nicola Demo
Parent: 934ae409ff · Commit: 0d38de5afe
21 changed files with 171 additions and 165 deletions


@@ -28,8 +28,8 @@ Build a PINA problem
 Problem definition in the **PINA** framework is done by building a
 python ``class``, which inherits from one or more problem classes
 (``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
-depending on the nature of the problem. Below is an example. Consider the following
-simple Ordinary Differential Equation:
+depending on the nature of the problem. Below is an example: ### Simple
+Ordinary Differential Equation Consider the following:
 .. math::
@@ -49,7 +49,7 @@ our ``Problem`` class is going to be inherited from the
 .. code:: python
     from pina.problem import SpatialProblem
-    from pina import CartesianProblem
+    from pina.geometry import CartesianProblem
     class SimpleODE(SpatialProblem):
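
For context beyond the hunk, a minimal sketch of how such a ``SpatialProblem`` subclass is typically completed (the ODE, domain bounds, and equation helpers below are illustrative assumptions, not part of this commit; the condition names ``x0`` and ``D`` match the losses logged later in this tutorial):

.. code:: python

    from pina import Condition
    from pina.problem import SpatialProblem
    from pina.geometry import CartesianDomain
    from pina.operators import grad
    from pina.equation import Equation, FixedValue


    class SimpleODE(SpatialProblem):
        # illustrative ODE: u'(x) = u(x) on [0, 1] with u(0) = 1
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        def ode_equation(input_, output_):
            # residual of u' - u = 0
            u_x = grad(output_, input_, components=['u'], d=['x'])
            u = output_.extract(['u'])
            return u_x - u

        # one condition per logged loss: 'x0' (initial value), 'D' (domain residual)
        conditions = {
            'x0': Condition(location=CartesianDomain({'x': 0.0}),
                            equation=FixedValue(1.0)),
            'D': Condition(location=CartesianDomain({'x': [0, 1]}),
                           equation=Equation(ode_equation)),
        }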
@@ -73,7 +73,7 @@ What about if our equation is also time dependent? In this case, our
 .. code:: ipython3
     from pina.problem import SpatialProblem, TimeDependentProblem
-    from pina import CartesianDomain
+    from pina.geometry import CartesianDomain
     class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
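
A time-dependent problem additionally declares a temporal domain next to the spatial one; a minimal sketch under that assumption (the bounds are illustrative):

.. code:: python

    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        # TimeDependentProblem requires a temporal domain as well
        temporal_domain = CartesianDomain({'t': [0, 1]})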
@@ -215,26 +215,26 @@ calling the attribute ``input_pts`` of the problem
 .. parsed-literal::
-    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8633]],
-                  [[0.4009]],
-                  [[0.6489]],
-                  [[0.9278]],
-                  [[0.3975]],
-                  [[0.1484]],
-                  [[0.9632]],
-                  [[0.5485]],
-                  [[0.2984]],
-                  [[0.5643]],
-                  [[0.0368]],
-                  [[0.7847]],
-                  [[0.4741]],
-                  [[0.6957]],
-                  [[0.3281]],
-                  [[0.0958]],
-                  [[0.1847]],
-                  [[0.2232]],
-                  [[0.8099]],
-                  [[0.7304]]])}
+    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.7644]],
+                  [[0.2028]],
+                  [[0.1789]],
+                  [[0.4294]],
+                  [[0.3239]],
+                  [[0.6531]],
+                  [[0.1406]],
+                  [[0.6062]],
+                  [[0.4969]],
+                  [[0.7429]],
+                  [[0.8681]],
+                  [[0.3800]],
+                  [[0.5357]],
+                  [[0.0152]],
+                  [[0.9679]],
+                  [[0.8101]],
+                  [[0.0662]],
+                  [[0.9095]],
+                  [[0.2503]],
+                  [[0.5580]]])}
     Input points labels: ['x']
@@ -271,7 +271,8 @@ If you want to track the metric by yourself without a logger, use
 .. code:: ipython3
-    from pina import PINN, Trainer
+    from pina import Trainer
+    from pina.solvers import PINN
     from pina.model import FeedForward
     from pina.callbacks import MetricTracker
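
For context, a sketch of how these imports are typically wired together (the model size and the ``problem`` object are illustrative assumptions; ``max_epochs=1500`` matches the ``Epoch 1499`` logs below):

.. code:: python

    model = FeedForward(input_dimensions=1, output_dimensions=1)  # assumed sizes
    pinn = PINN(problem=problem, model=model)  # ``problem`` defined earlier
    # MetricTracker records the losses at every epoch without an external logger
    trainer = Trainer(solver=pinn, callbacks=[MetricTracker()], max_epochs=1500)
    trainer.train()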
@@ -300,12 +301,11 @@ If you want to track the metric by yourself without a logger, use
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial1/lightning_logs
 .. parsed-literal::
-    Epoch 1499: : 1it [00:00, 316.24it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
+    Epoch 1499: : 1it [00:00, 272.55it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
 .. parsed-literal::
@@ -314,7 +314,7 @@ If you want to track the metric by yourself without a logger, use
 .. parsed-literal::
-    Epoch 1499: : 1it [00:00, 166.89it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
+    Epoch 1499: : 1it [00:00, 167.14it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
 After the training we can inspect trainer logged metrics (by default
@@ -332,9 +332,9 @@ loss can be accessed by ``trainer.logged_metrics``
 .. parsed-literal::
-    {'mean_loss': tensor(5.3852e-05),
-     'x0_loss': tensor(1.2636e-06),
-     'D_loss': tensor(0.0001)}
+    {'x0_loss': tensor(7.7149e-06),
+     'D_loss': tensor(0.0007),
+     'mean_loss': tensor(0.0004)}
@@ -362,7 +362,7 @@ indistinguishable. We can also plot easily the loss:
 .. code:: ipython3
-    pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)
+    pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)
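
For context (its setup is not shown in this hunk, so this is an assumption): ``pl`` is a ``Plotter`` instance created beforehand, e.g.

.. code:: python

    from pina import Plotter

    pl = Plotter()
    # plot the tracked mean loss on a logarithmic y-axis
    pl.plot_loss(trainer=trainer, label='mean_loss', logy=True)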

(3 binary image files changed, not shown: 10 KiB → 10 KiB, 30 KiB → 31 KiB, 20 KiB → 19 KiB)

@@ -31,16 +31,12 @@ The problem definition
 ----------------------
 The two-dimensional Poisson problem is mathematically written as:
-.. math::
-    \begin{equation}
-    \begin{cases}
-    \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-    u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-    \end{cases}
-    \end{equation}
-where :math:`D` is a square domain :math:`[0,1]^2`, and
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
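
A sketch of how the residual of this equation is typically expressed in PINA (the function name and exact usage are assumptions, not part of this commit):

.. code:: python

    import torch
    from pina.operators import laplacian

    def laplace_equation(input_, output_):
        # residual of  Delta u - sin(pi x) sin(pi y) = 0  in D
        force_term = (torch.sin(input_.extract(['x']) * torch.pi) *
                      torch.sin(input_.extract(['y']) * torch.pi))
        return laplacian(output_, input_, components=['u'], d=['x', 'y']) - force_term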
@@ -127,7 +123,7 @@ These parameters can be modified as desired. We use the
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
 .. parsed-literal::
@@ -136,7 +132,7 @@ These parameters can be modified as desired. We use the
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 105.33it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
 Now the ``Plotter`` class is used to plot the results. The solution
@@ -162,10 +158,9 @@ is now defined, with an additional input variable, named extra-feature,
 which coincides with the forcing term in the Laplace equation. The set
 of input variables to the neural network is:
-.. math::
-    \begin{equation}
-    [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-    \end{equation}
+:raw-latex:`\begin{equation}
+[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+\end{equation}`
 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.
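
A minimal sketch of such an extra feature as a ``torch.nn.Module`` (the class name and the output label are illustrative assumptions):

.. code:: python

    import torch
    from pina import LabelTensor


    class SinSin(torch.nn.Module):
        """Extra feature: k(x, y) = sin(pi x) * sin(pi y)."""

        def forward(self, x):
            t = (torch.sin(x.extract(['x']) * torch.pi) *
                 torch.sin(x.extract(['y']) * torch.pi))
            return LabelTensor(t, ['k'])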
@@ -219,7 +214,7 @@ new extra feature.
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
 .. parsed-literal::
@@ -228,7 +223,7 @@ new extra feature.
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 85.62it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
 The predicted and exact solutions and the error between them are
@@ -254,10 +249,9 @@ Another way to exploit the extra features is the addition of learnable
 parameters inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:
-.. math::
-    \begin{equation}
-    k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-    \end{equation}
+:raw-latex:`\begin{equation}
+k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+\end{equation}`
 where :math:`\alpha` and :math:`\beta` are the abovementioned
 parameters. Their implementation is quite trivial: by using the class
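
A sketch of the learnable variant built on ``torch.nn.Parameter`` (class name, initial values, and output label are illustrative assumptions):

.. code:: python

    import torch
    from pina import LabelTensor


    class SinSinAB(torch.nn.Module):
        """Extra feature: k(x, y) = beta * sin(alpha x) * sin(alpha y)."""

        def __init__(self):
            super().__init__()
            # alpha and beta are optimized together with the network weights
            self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
            self.beta = torch.nn.Parameter(torch.tensor([1.0]))

        def forward(self, x):
            t = self.beta * (torch.sin(self.alpha * x.extract(['x'])) *
                             torch.sin(self.alpha * x.extract(['y'])))
            return LabelTensor(t, ['k'])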
@@ -306,7 +300,7 @@ need, and they are managed by ``autograd`` module!
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
 .. parsed-literal::
@@ -315,7 +309,7 @@ need, and they are managed by ``autograd`` module!
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 85.94it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
 Hmm, the final loss is not appreciably better than the previous model (with
@@ -355,7 +349,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
 .. parsed-literal::
@@ -364,7 +358,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 999: : 1it [00:00, 98.81it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
 In such a way, the model is able to reach a very high accuracy! Of
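
A sketch of the linear model referred to here, i.e. a ``FeedForward`` with no hidden layers so that the network reduces to a single input-to-output map on :math:`[x, y, k(x, y)]` (the exact constructor arguments are an assumption):

.. code:: python

    from pina.model import FeedForward

    model_learn = FeedForward(
        layers=[],             # no hidden layers: a single linear map
        input_dimensions=3,    # x, y and the extra feature k(x, y)
        output_dimensions=1,   # the scalar field u
    )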

(4 binary image files changed, not shown: 56 KiB → 50 KiB, 77 KiB → 61 KiB, 31 KiB → 28 KiB, 55 KiB → 57 KiB)

@@ -25,14 +25,13 @@ The problem definition
 The problem is written in the following form:
-.. math::
-    \begin{equation}
-    \begin{cases}
-    \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-    u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-    u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-    \end{cases}
-    \end{equation}
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
+u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
+u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}`
 where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
@@ -149,7 +148,7 @@ approximately 3 minutes.
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 62.13it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 84.47it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]
 .. parsed-literal::
@@ -158,7 +157,7 @@ approximately 3 minutes.
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 53.88it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 68.69it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]
 Notice that the loss on the boundaries of the spatial domain is exactly
@@ -263,7 +262,7 @@ Now let's train with the same configuration as the previous test
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 48.54it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
+    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 52.10it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]
 .. parsed-literal::
@@ -272,7 +271,7 @@ Now let's train with the same configuration as the previous test
 .. parsed-literal::
-    Epoch 999: : 1it [00:00, 43.25it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 45.78it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]
 We can clearly see that the loss is way lower now. Let's plot the

(5 binary image files changed, not shown: 49 KiB → 57 KiB, 53 KiB → 56 KiB, 52 KiB → 54 KiB, 52 KiB → 52 KiB, 47 KiB → 48 KiB)