update plotter
@@ -28,8 +28,8 @@ Build a PINA problem

Problem definition in the **PINA** framework is done by building a
python ``class``, which inherits from one or more problem classes
(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
depending on the nature of the problem. Below is an example. Consider the following
simple Ordinary Differential Equation:
depending on the nature of the problem. Below is an example: ### Simple
Ordinary Differential Equation Consider the following:

.. math::

@@ -49,7 +49,7 @@ our ``Problem`` class is going to be inherited from the

.. code:: python

    from pina.problem import SpatialProblem
    from pina import CartesianProblem
    from pina.geometry import CartesianProblem

    class SimpleODE(SpatialProblem):

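The hunk above only captures the import change. For orientation, a complete ``SimpleODE`` along the lines of the tutorial looks roughly like the sketch below. The ODE itself (``du/dx = u`` with ``u(0) = 1``), the module paths, and the helper name ``ode_equation`` are assumptions, since the equation and the rest of the class are not part of this hunk; only the condition names ``x0`` and ``D`` are taken from the sampled points shown later, and the domain class is spelled ``CartesianDomain`` here, as in the later hunks, even though this hunk writes ``CartesianProblem``.

.. code:: python

    from pina import Condition
    from pina.problem import SpatialProblem
    from pina.geometry import CartesianDomain
    from pina.operators import grad
    from pina.equation import Equation, FixedValue

    class SimpleODE(SpatialProblem):
        # one output variable u, defined on the 1D domain x in [0, 1]
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})

        # residual of the ODE du/dx - u = 0
        def ode_equation(input_, output_):
            u_x = grad(output_, input_, components=['u'], d=['x'])
            u = output_.extract(['u'])
            return u_x - u

        # one condition for the initial value, one for the ODE residual
        conditions = {
            'x0': Condition(location=CartesianDomain({'x': 0.0}),
                            equation=FixedValue(1.0)),
            'D': Condition(location=CartesianDomain({'x': [0, 1]}),
                           equation=Equation(ode_equation)),
        }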
@@ -73,7 +73,7 @@ What about if our equation is also time dependent? In this case, our

.. code:: ipython3

    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina import CartesianDomain
    from pina.geometry import CartesianDomain

    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

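Beyond the imports, the main thing the time-dependent variant adds is a temporal domain next to the spatial one. A minimal sketch, under the same API assumptions as above (the condition layout is illustrative):

.. code:: python

    class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        # inheriting from TimeDependentProblem additionally requires a temporal domain
        temporal_domain = CartesianDomain({'t': [0, 1]})

        # conditions are now defined over space *and* time, e.g. an initial
        # condition at t = 0 plus a residual on the space-time domain
        conditions = {}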
@@ -215,26 +215,26 @@ calling the attribute ``input_pts`` of the problem

.. parsed-literal::

    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8633]],
    [[0.4009]],
    [[0.6489]],
    [[0.9278]],
    [[0.3975]],
    [[0.1484]],
    [[0.9632]],
    [[0.5485]],
    [[0.2984]],
    [[0.5643]],
    [[0.0368]],
    [[0.7847]],
    [[0.4741]],
    [[0.6957]],
    [[0.3281]],
    [[0.0958]],
    [[0.1847]],
    [[0.2232]],
    [[0.8099]],
    [[0.7304]]])}
    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.7644]],
    [[0.2028]],
    [[0.1789]],
    [[0.4294]],
    [[0.3239]],
    [[0.6531]],
    [[0.1406]],
    [[0.6062]],
    [[0.4969]],
    [[0.7429]],
    [[0.8681]],
    [[0.3800]],
    [[0.5357]],
    [[0.0152]],
    [[0.9679]],
    [[0.8101]],
    [[0.0662]],
    [[0.9095]],
    [[0.2503]],
    [[0.5580]]])}
    Input points labels: ['x']

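The calls that generate these points appear further down in the tutorial script; in short, and under the same API assumptions as above (the exact sampling mode used for ``x0`` is not shown in this diff):

.. code:: python

    # latin-hypercube sampling of 20 collocation points in the 'D' condition,
    # plus the single point of the 'x0' initial condition
    problem.discretise_domain(1, 'random', locations=['x0'])
    problem.discretise_domain(20, 'lh', locations=['D'])

    # the sampled points are stored in a dict keyed by condition name
    print('Input points:', problem.input_pts)
    print('Input points labels:', problem.input_pts['D'].labels)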
@@ -271,7 +271,8 @@ If you want to track the metric by yourself without a logger, use

.. code:: ipython3

    from pina import PINN, Trainer
    from pina import Trainer
    from pina.solvers import PINN
    from pina.model import FeedForward
    from pina.callbacks import MetricTracker

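Putting these imports together, the training step sketched by the tutorial is roughly the following. The network size is illustrative; the epoch count matches the ``Epoch 1499`` lines in the logs below, and ``MetricTracker`` is the callback named in the hunk header.

.. code:: python

    # a small fully connected network mapping the input variable(s) to u
    model = FeedForward(
        input_dimensions=len(problem.input_variables),
        output_dimensions=len(problem.output_variables),
    )

    pinn = PINN(problem, model)
    trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()])
    trainer.train()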
@@ -300,12 +301,11 @@ If you want to track the metric by yourself without a logger, use

    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial1/lightning_logs


.. parsed-literal::

    Epoch 1499: : 1it [00:00, 316.24it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
    Epoch 1499: : 1it [00:00, 272.55it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]

.. parsed-literal::

@@ -314,7 +314,7 @@ If you want to track the metric by yourself without a logger, use

.. parsed-literal::

    Epoch 1499: : 1it [00:00, 166.89it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
    Epoch 1499: : 1it [00:00, 167.14it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]


After the training we can inspect trainer logged metrics (by default

@@ -332,9 +332,9 @@ loss can be accessed by ``trainer.logged_metrics``

.. parsed-literal::

    {'mean_loss': tensor(5.3852e-05),
     'x0_loss': tensor(1.2636e-06),
     'D_loss': tensor(0.0001)}
    {'x0_loss': tensor(7.7149e-06),
     'D_loss': tensor(0.0007),
     'mean_loss': tensor(0.0004)}

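Reading these numbers back and plotting the loss curve only takes a couple of calls; the ``plot_loss`` line appears verbatim in the tutorial script further down, and the rest assumes the ``trainer`` from the earlier sketch:

.. code:: python

    from pina import Plotter

    pl = Plotter()
    # final logged metrics, as a dict of tensors like the one printed above
    print(trainer.logged_metrics)
    # loss curve on a logarithmic scale
    pl.plot_loss(trainer=trainer, label='mean_loss', logy=True)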
(Three output images for this tutorial were regenerated in this PR; binary image diffs are not shown.)
@@ -31,16 +31,12 @@ The problem definition
----------------------

The two-dimensional Poisson problem is mathematically written as:

.. math::

    \begin{equation}
    \begin{cases}
    \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
    u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
    \end{cases}
    \end{equation}

where :math:`D` is a square domain :math:`[0,1]^2`, and
:raw-latex:`\begin{equation}
\begin{cases}
\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square.

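For reference, the PDE above is what the tutorial encodes as the residual of the ``D`` condition. A sketch of that residual, with module paths and the helper name assumed as in the earlier examples:

.. code:: python

    import torch
    from pina.operators import laplacian

    def laplace_equation(input_, output_):
        # forcing term sin(pi x) sin(pi y)
        force_term = (torch.sin(input_.extract(['x']) * torch.pi) *
                      torch.sin(input_.extract(['y']) * torch.pi))
        # residual of  Delta u - f = 0
        nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
        return nabla_u - force_term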
@@ -127,7 +123,7 @@ These parameters can be modified as desired. We use the

.. parsed-literal::

    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]

.. parsed-literal::

@@ -136,7 +132,7 @@ These parameters can be modified as desired. We use the

.. parsed-literal::

    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
    Epoch 999: : 1it [00:00, 105.33it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]


Now the ``Plotter`` class is used to plot the results. The solution

@@ -162,10 +158,9 @@ is now defined, with an additional input variable, named extra-feature,

which coincides with the forcing term in the Laplace equation. The set
of input variables to the neural network is:

.. math::

    \begin{equation}
    [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
    \end{equation}

:raw-latex:`\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}`

where :math:`x` and :math:`y` are the spatial coordinates and
:math:`k(x, y)` is the added feature.
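In code, the extra feature is a small ``torch.nn.Module`` that returns :math:`k(x,y)` as a labeled tensor appended to the network input; a sketch (the class and label names are illustrative):

.. code:: python

    import torch
    from pina import LabelTensor

    class SinSin(torch.nn.Module):
        """Extra feature k(x, y) = sin(pi x) * sin(pi y)."""

        def forward(self, x):
            t = (torch.sin(x.extract(['x']) * torch.pi) *
                 torch.sin(x.extract(['y']) * torch.pi))
            return LabelTensor(t, ['sin(x)sin(y)'])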
@@ -219,7 +214,7 @@ new extra feature.

.. parsed-literal::

    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]

.. parsed-literal::

@@ -228,7 +223,7 @@ new extra feature.

.. parsed-literal::

    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
    Epoch 999: : 1it [00:00, 85.62it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]


The predicted and exact solutions and the error between them are

@@ -254,10 +249,9 @@ Another way to exploit the extra features is the addition of learnable

parameters inside them. In this way, the added parameters are learned
during the training phase of the neural network. In this case, we use:

.. math::

    \begin{equation}
    k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
    \end{equation}

:raw-latex:`\begin{equation}
k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}`

where :math:`\alpha` and :math:`\beta` are the abovementioned
parameters. Their implementation is quite trivial: by using the class

@@ -306,7 +300,7 @@ need, and they are managed by ``autograd`` module!

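``SinSinAB`` (the name used for this feature in the tutorial script further down) promotes :math:`\alpha` and :math:`\beta` to ``torch.nn.Parameter`` so that ``autograd`` updates them together with the network weights; a sketch:

.. code:: python

    import torch
    from pina import LabelTensor

    class SinSinAB(torch.nn.Module):
        """Learnable extra feature k(x, y) = beta * sin(alpha x) * sin(alpha y)."""

        def __init__(self):
            super().__init__()
            self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
            self.beta = torch.nn.Parameter(torch.tensor([1.0]))

        def forward(self, x):
            t = (self.beta *
                 torch.sin(self.alpha * x.extract(['x'])) *
                 torch.sin(self.alpha * x.extract(['y'])))
            return LabelTensor(t, ['b*sin(a*x)sin(a*y)'])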
.. parsed-literal::

    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]

.. parsed-literal::

@@ -315,7 +309,7 @@ need, and they are managed by ``autograd`` module!

.. parsed-literal::

    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
    Epoch 999: : 1it [00:00, 85.94it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]


Umh, the final loss is not appreciably better than the previous model (with

@@ -355,7 +349,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the

.. parsed-literal::

    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]

.. parsed-literal::

@@ -364,7 +358,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the

.. parsed-literal::

    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
    Epoch 999: : 1it [00:00, 98.81it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]


In such a way, the model is able to reach a very high accuracy! Of
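For reference, the "no hidden layers" configuration referred to above would look roughly as follows. The exact ``FeedForward`` arguments, and whether the extra feature is attached to the solver via ``extra_features`` as sketched here, are assumptions; the corresponding hunk below only shows the surrounding comments.

.. code:: python

    # no hidden layers: the network reduces to a single linear map acting on
    # [x, y, k(x, y)], so training effectively tunes only alpha and beta
    model_learn = FeedForward(
        layers=[],
        input_dimensions=len(problem.input_variables) + 1,  # + the extra feature
        output_dimensions=len(problem.output_variables),
    )
    pinn_learn = PINN(problem, model_learn, extra_features=[SinSinAB()])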
(Four output images for this tutorial were regenerated in this PR; binary image diffs are not shown.)
@@ -25,14 +25,13 @@ The problem definition

The problem is written in the following form:

.. math::

    \begin{equation}
    \begin{cases}
    \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
    u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
    u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
    \end{cases}
    \end{equation}

:raw-latex:`\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}`

where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
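The wave equation and the initial condition above translate into residual functions along these lines (operator names from ``pina.operators``; the helper names are illustrative, since the problem definition itself is not part of this diff):

.. code:: python

    import torch
    from pina.operators import grad, laplacian

    def wave_equation(input_, output_):
        # second derivative of u in time
        u_t = grad(output_, input_, components=['u'], d=['t'])
        u_tt = grad(u_t, input_, d=['t'])
        # spatial Laplacian of u
        nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
        return nabla_u - u_tt

    def initial_condition(input_, output_):
        u_expected = (torch.sin(torch.pi * input_.extract(['x'])) *
                      torch.sin(torch.pi * input_.extract(['y'])))
        return output_.extract(['u']) - u_expected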
@@ -149,7 +148,7 @@ approximately 3 minutes.

.. parsed-literal::

    Epoch 999: : 1it [00:00, 62.13it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
    Epoch 999: : 1it [00:00, 84.47it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]

.. parsed-literal::

@@ -158,7 +157,7 @@ approximately 3 minutes.

.. parsed-literal::

    Epoch 999: : 1it [00:00, 53.88it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
    Epoch 999: : 1it [00:00, 68.69it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]


Notice that the loss on the boundaries of the spatial domain is exactly

@@ -263,7 +262,7 @@ Now let’s train with the same configuration as the previous test

.. parsed-literal::

    Epoch 999: : 1it [00:00, 48.54it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 52.10it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]

.. parsed-literal::

@@ -272,7 +271,7 @@ Now let’s train with the same configuration as the previous test

.. parsed-literal::

    Epoch 999: : 1it [00:00, 43.25it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
    Epoch 999: : 1it [00:00, 45.78it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]


We can clearly see that the loss is way lower now. Let’s plot the
(Five output images for this tutorial were regenerated in this PR; binary image diffs are not shown.)
@@ -88,7 +88,6 @@ class Plotter:
truth_output = truth_solution(pts).float()
ax.plot(pts, truth_output.detach(), label='True solution', **kwargs)

plt.xlabel(pts.labels[0])
plt.ylabel(pred.labels[0])
plt.legend()
plt.show()

@@ -120,7 +119,7 @@ class Plotter:

pred_output = pred.reshape(res, res)
if truth_solution:
truth_output = truth_solution(pts).float().reshape(res, res)
truth_output = truth_solution(pts).float().reshape(res, res).as_subclass(torch.Tensor)
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(16, 6))

cb = getattr(ax[0], method)(*grids, pred_output.cpu().detach(),
@@ -157,8 +156,7 @@ class Plotter:

:param SolverInterface solver: The ``SolverInterface`` object instance.
:param list(str) components: The output variable to plot. If None, all
the output variables of the problem are selected. Default value is
None.
the output variables of the problem are selected. Default value is None.
:param dict fixed_variables: A dictionary with all the variables that
should be kept fixed during the plot. The keys of the dictionary
are the variables name whereas the values are the corresponding

@@ -173,7 +171,11 @@ class Plotter:
"""

if components is None:
components = [solver.problem.output_variables]
components = solver.problem.output_variables

if len(components) > 1:
raise NotImplementedError('Multidimensional plots are not implemented, '
'set components to an available components of the problem.')
v = [
var for var in solver.problem.input_variables
if var not in fixed_variables.keys()
@@ -188,13 +190,9 @@ class Plotter:
pts = pts.append(fixed_pts)
pts = pts.to(device=solver.device)

predicted_output = solver.forward(pts)
if isinstance(components, str):
predicted_output = predicted_output.extract(components)
elif callable(components):
predicted_output = components(predicted_output)

predicted_output = solver.forward(pts).extract(components).as_subclass(torch.Tensor)
truth_solution = getattr(solver.problem, 'truth_solution', None)

if len(v) == 1:
self._1d_plot(pts, predicted_output, method, truth_solution,
**kwargs)

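With this change, ``plot`` works on a single output component and now raises ``NotImplementedError`` otherwise. A usage sketch consistent with the new behaviour; the solver name is taken from the tutorial code below, while the explicit ``components`` value is illustrative:

.. code:: python

    from pina import Plotter

    pl = Plotter()
    # plot one output component; any remaining input variables could be
    # pinned with the fixed_variables dict described in the docstring above
    pl.plot(solver=pinn, components=['u'])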
tutorials/tutorial1/tutorial.ipynb  (86 changes, vendored; notebook diff not shown)
tutorials/tutorial1/tutorial.py  (25 changes, vendored)
@@ -35,7 +35,7 @@
#
# ```python
# from pina.problem import SpatialProblem
# from pina import CartesianProblem
# from pina.geometry import CartesianProblem
#
# class SimpleODE(SpatialProblem):
#

@@ -54,7 +54,7 @@


from pina.problem import SpatialProblem, TimeDependentProblem
from pina import CartesianDomain
from pina.geometry import CartesianDomain

class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

@@ -77,7 +77,7 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
#
# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from the `pina.operators` module. Again, we'll consider Equation (1) and represent it in **PINA**:

# In[3]:
# In[2]:


from pina.problem import SpatialProblem

@@ -133,7 +133,7 @@ problem = SimpleODE()
#
# Data for training can come in the form of direct numerical simulation results, or points in the domains. If we do unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.

# In[4]:
# In[3]:


# sampling 20 points in [0, 1] through discretization in all locations

@@ -149,7 +149,7 @@ problem.discretise_domain(n=20, mode='random', variables=['x'])

# We are going to use Latin hypercube points for sampling. We need to sample in all the condition domains. In our case we sample in `D` and `x0`.

# In[5]:
# In[4]:


# sampling for training

@@ -159,7 +159,7 @@ problem.discretise_domain(20, 'lh', locations=['D'])

# The points are saved in a python `dict`, and can be accessed by calling the attribute `input_pts` of the problem

# In[6]:
# In[5]:


print('Input points:', problem.input_pts)

@@ -168,7 +168,7 @@ print('Input points labels:', problem.input_pts['D'].labels)

# To visualize the sampled points we can use the `.plot_samples` method of the `Plotter` class

# In[7]:
# In[6]:


from pina import Plotter

@@ -181,10 +181,11 @@ pl.plot_samples(problem=problem)

# Once we have defined the problem and generated the data, we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solvers`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callbacks.MetricTracker`.

# In[8]:
# In[7]:


from pina import PINN, Trainer
from pina import Trainer
from pina.solvers import PINN
from pina.model import FeedForward
from pina.callbacks import MetricTracker

@@ -209,7 +210,7 @@ trainer.train()

# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed by `trainer.logged_metrics`

# In[9]:
# In[8]:


# inspecting final loss

@@ -218,7 +219,7 @@ trainer.logged_metrics

# By using the `Plotter` class from **PINA** we can also do some quantitative plots of the solution.

# In[12]:
# In[9]:


# plotting the solution

@@ -227,7 +228,7 @@ pl.plot(solver=pinn)

# The solution is overlapped with the actual one, and they are practically indistinguishable. We can also easily plot the loss:

# In[14]:
# In[10]:


pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)

tutorials/tutorial2/tutorial.ipynb  (39 changes, vendored; notebook diff not shown)
tutorials/tutorial2/tutorial.py  (8 changes, vendored)
@@ -177,7 +177,7 @@ plotter.plot(solver=pinn_feat)

# where $\alpha$ and $\beta$ are the abovementioned parameters.
# Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!

# In[14]:
# In[7]:


class SinSinAB(torch.nn.Module):

@@ -212,7 +212,7 @@ trainer_learn.train()

# Umh, the final loss is not appreciably better than the previous model (with static extra features), despite the usage of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/bias of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the faster way to reach machine precision for solving the Poisson problem is removing all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.

# In[19]:
# In[8]:


# make model + solver + trainer

@@ -234,7 +234,7 @@ trainer_learn.train()
#
# We conclude here by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented here: the standard PINN, PINN with extra features, and PINN with learnable extra features.

# In[20]:
# In[9]:


plotter.plot(solver=pinn_learn)

@@ -242,7 +242,7 @@ plotter.plot(solver=pinn_learn)

# Let us compare the training losses for the various types of training

# In[21]:
# In[10]:


plotter.plot_loss(trainer, logy=True, label='Standard')