Add layer to perform RBF interpolation in reduced order modeling (#315)

* add RBF implementation in pytorch (in layers)
* add POD-RBF example (baseline for nonintrusive closure)
* Add POD only and POD+RBF implementation
* add POD-RBF in tutorial 8
Anna Ivagnes
2024-08-12 14:46:22 +02:00
committed by GitHub
parent 16261c9baf
commit d4ced3a7d7
14 changed files with 1111 additions and 224 deletions


@@ -1,7 +1,7 @@
Code Documentation
==================
Welcome to the PINA documentation! Here you can find the modules of the package divided into different sections.
The high-level structure of the package is depicted in our API.
.. figure:: ../index_files/API_color.png
   :alt: PINA application program interface
@@ -33,7 +33,7 @@ Solvers
.. toctree::
   :titlesonly:

   SolverInterface <solvers/solver_interface.rst>
   PINNInterface <solvers/basepinn.rst>
   PINN <solvers/pinn.rst>
@@ -82,13 +82,14 @@ Layers
   Proper Orthogonal Decomposition <layers/pod.rst>
   Periodic Boundary Condition Embedding <layers/pbc_embedding.rst>
   Fourier Feature Embedding <layers/fourier_embedding.rst>
   Radial Basis Function Interpolation <layers/rbf_layer.rst>
Adaptive Activation Functions
-------------------------------
.. toctree::
   :titlesonly:

   Adaptive Function Interface <adaptive_functions/AdaptiveFunctionInterface.rst>
   Adaptive ReLU <adaptive_functions/AdaptiveReLU.rst>
   Adaptive Sigmoid <adaptive_functions/AdaptiveSigmoid.rst>
@@ -102,14 +103,14 @@ Adaptive Activation Functions
   Adaptive Softmax <adaptive_functions/AdaptiveSoftmax.rst>
   Adaptive SIREN <adaptive_functions/AdaptiveSIREN.rst>
   Adaptive Exp <adaptive_functions/AdaptiveExp.rst>
Equations and Operators
-------------------------
.. toctree::
   :titlesonly:

   Equations <equations.rst>
   Differential Operators <operators.rst>
@@ -166,4 +167,4 @@ Metrics and Losses
   LossInterface <loss/loss_interface.rst>
   LpLoss <loss/lploss.rst>
   PowerLoss <loss/powerloss.rst>


@@ -43,4 +43,4 @@ Supervised Learning
   :titlesonly:

   Unstructured convolutional autoencoder via continuous convolution <tutorials/tutorial4/tutorial.rst>
   POD-RBF and POD-NN for reduced order modeling <tutorials/tutorial8/tutorial.rst>


@@ -0,0 +1,7 @@
RBFBlock
========

.. currentmodule:: pina.model.layers.rbf_layer

.. autoclass:: RBFBlock
   :members:
   :show-inheritance:
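A minimal usage sketch, following how the block is exercised in Tutorial 8 (the exact signature is documented by the class above):

.. code:: python

    import torch
    from pina.model.layers import RBFBlock

    # Fit the interpolant on (parameter, value) pairs, then evaluate it
    # at an unseen parameter.
    p = torch.rand(50, 1)                       # training parameters
    f = torch.sin(3 * p)                        # values to interpolate
    rbf = RBFBlock(kernel='thin_plate_spline')
    rbf.fit(p, f)
    f_new = rbf(torch.tensor([[0.5]]))          # prediction at a new point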


@@ -1,18 +1,20 @@
Tutorial: Reduced order model (POD-RBF or POD-NN) for parametric problems
=========================================================================
The tutorial aims to show how to employ the **PINA** library in order to
apply a reduced order modeling technique [1]. Such methodologies have
several similarities with machine learning approaches, since the main
goal consists in predicting the solution of differential equations
(typically parametric PDEs) in a real-time fashion.
In particular we are going to use the Proper Orthogonal Decomposition
with either Radial Basis Function interpolation (POD-RBF) or a Neural
Network (POD-NN) [2]. Here we basically perform a dimensional reduction
using the POD approach, and approximate the parametric solution
manifold (in the reduced space) using an interpolation (RBF) or a
regression technique (NN). In this example we use a simple multilayer
perceptron, but plenty of different architectures can be plugged in as
well.
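In formulas, both approaches build the same approximation of the full-order
solution; only the map from the parameter to the modal coefficients changes
(interpolated in the RBF case, learned by the network in the NN case):

.. math::

   u(\mu) \approx \sum_{i=1}^{r} a_i(\mu)\, \psi_i,
   \qquad
   a_i(\mu) = \sum_{j=1}^{N} w_{ij}\, \varphi\left(\lVert \mu - \mu_j \rVert\right)
   \quad \text{(RBF case)},

where the :math:`\psi_i` are the POD modes, the :math:`\mu_j` are the
training parameters and :math:`\varphi` is the chosen radial kernel.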
References
^^^^^^^^^^
@@ -30,25 +32,25 @@ minimum PINA version to run this tutorial is the ``0.1``.
.. code:: ipython3

    %matplotlib inline

    import matplotlib.pyplot as plt
    import torch

    import pina
    from pina.geometry import CartesianDomain
    from pina.problem import ParametricProblem
    from pina.model.layers import PODBlock, RBFBlock
    from pina import Condition, LabelTensor, Trainer
    from pina.model import FeedForward
    from pina.solvers import SupervisedSolver

    print(f'We are using PINA version {pina.__version__}')
.. parsed-literal::

    We are using PINA version 0.1.1
We exploit the `Smithers <www.github.com/mathLab/Smithers>`__ library to
@@ -60,26 +62,27 @@ snapshots of the velocity (along :math:`x`, :math:`y`, and the
magnitude) and pressure fields, and the corresponding parameter values.
To visually check the snapshots, let's also plot the data points and the
reference solution: this is the expected output of our model.
.. code:: ipython3

    from smithers.dataset import NavierStokesDataset
    dataset = NavierStokesDataset()

    fig, axs = plt.subplots(1, 4, figsize=(14, 3))
    for ax, p, u in zip(axs, dataset.params[:4], dataset.snapshots['mag(v)'][:4]):
        ax.tricontourf(dataset.triang, u, levels=16)
        ax.set_title(rf'$\mu$ = {p[0]:.2f}')
.. image:: tutorial_files/tutorial_5_0.png
The *snapshots* - aka the numerical solutions computed for several
parameters - and the corresponding parameters are the only data we need
to train the model, in order to predict the solution for any new test
parameter. To properly validate the accuracy, we initially split the 500
snapshots into the training dataset (90% of the original data) and the
testing one (the remaining 10%). It must be said that, to plug the
snapshots into **PINA**, we have to cast them to ``LabelTensor``
@@ -89,10 +92,10 @@ objects.
.. code:: ipython3

    u = torch.tensor(dataset.snapshots['mag(v)']).float()
    p = torch.tensor(dataset.params).float()

    p = LabelTensor(p, labels=['mu'])
    u = LabelTensor(u, labels=[f's{i}' for i in range(u.shape[1])])

    ratio_train_test = 0.9
    n = u.shape[0]
    n_train = int(u.shape[0] * ratio_train_test)
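The hunk is cut here; for completeness, a plausible continuation of the
split producing the ``p_train``, ``u_train``, ``p_test``, ``u_test``
tensors used below (a sketch assuming a simple head/tail partition, not
necessarily the original cell):

.. code:: python

    # Hypothetical completion: head/tail split with the 90/10 ratio above.
    p_train, u_train = p[:n_train], u[:n_train]
    p_test, u_test = p[n_train:], u[n_train:]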
@@ -109,17 +112,94 @@ methodology), just defining a simple *input-output* condition.
    class SnapshotProblem(ParametricProblem):
        output_variables = [f's{i}' for i in range(u.shape[1])]
        parameter_domain = CartesianDomain({'mu': [0, 100]})

        conditions = {
            'io': Condition(input_points=p_train, output_points=u_train)
        }
    poisson_problem = SnapshotProblem()

We can then build a ``PODRBF`` model (using a Radial Basis Function
interpolation as approximation) and a ``PODNN`` model (using an MLP
architecture as approximation).

POD-RBF reduced order model
---------------------------

Then, we define the model we want to use, with the POD (``PODBlock``)
and the RBF (``RBFBlock``) objects.
.. code:: ipython3

    class PODRBF(torch.nn.Module):
        """
        Proper orthogonal decomposition with Radial Basis Function
        interpolation model.
        """

        def __init__(self, pod_rank, rbf_kernel):
            """
            Build the model given the POD rank and the RBF kernel.
            """
            super().__init__()
            self.pod = PODBlock(pod_rank)
            self.rbf = RBFBlock(kernel=rbf_kernel)

        def forward(self, x):
            """
            Defines the computation performed at every call.

            :param x: The tensor to apply the forward pass.
            :type x: torch.Tensor
            :return: the output computed by the model.
            :rtype: torch.Tensor
            """
            coefficients = self.rbf(x)
            return self.pod.expand(coefficients)

        def fit(self, p, x):
            """
            Call the :meth:`pina.model.layers.PODBlock.fit` method of the
            :attr:`pina.model.layers.PODBlock` attribute to perform the POD,
            and the :meth:`pina.model.layers.RBFBlock.fit` method of the
            :attr:`pina.model.layers.RBFBlock` attribute to fit the
            interpolation.
            """
            self.pod.fit(x)
            self.rbf.fit(p, self.pod.reduce(x))
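For intuition, RBF interpolation amounts to solving a small linear
system on the training parameters and evaluating the kernel at new
ones. A minimal sketch with a Gaussian kernel (illustrative only, not
the actual ``RBFBlock`` implementation, which also supports other
kernels such as ``thin_plate_spline``):

.. code:: python

    import torch

    def rbf_fit_predict(p_train, f_train, p_new, eps=1.0):
        """Gaussian-RBF interpolation: solve K w = f, then evaluate."""
        K = torch.exp(-(eps * torch.cdist(p_train, p_train)) ** 2)
        w = torch.linalg.solve(K, f_train)      # interpolation weights
        K_new = torch.exp(-(eps * torch.cdist(p_new, p_train)) ** 2)
        return K_new @ w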
We can then fit the model and ask it to predict the required field for
unseen values of the parameters. Note that this model does not need a
``Trainer``, since it does not include any neural network or other
learnable parameters.
.. code:: ipython3

    pod_rbf = PODRBF(pod_rank=20, rbf_kernel='thin_plate_spline')
    pod_rbf.fit(p_train, u_train)
.. code:: ipython3

    u_test_rbf = pod_rbf(p_test)
    u_train_rbf = pod_rbf(p_train)

    relative_error_train = torch.norm(u_train_rbf - u_train)/torch.norm(u_train)
    relative_error_test = torch.norm(u_test_rbf - u_test)/torch.norm(u_test)

    print('Error summary for POD-RBF model:')
    print(f'  Train: {relative_error_train.item():e}')
    print(f'  Test:  {relative_error_test.item():e}')
.. parsed-literal::

    Error summary for POD-RBF model:
      Train: 1.287801e-03
      Test:  1.217041e-03
POD-NN reduced order model
--------------------------
.. code:: ipython3
@@ -127,13 +207,13 @@ space using the POD modes, which are computed and stored in the
"""
Proper orthogonal decomposition with neural network model.
"""
def __init__(self, pod_rank, layers, func):
"""
"""
super().__init__()
self.pod = PODBlock(pod_rank)
self.nn = FeedForward(
input_dimensions=1,
@@ -141,12 +221,12 @@ space using the POD modes, which are computed and stored in the
                layers=layers,
                func=func
            )

        def forward(self, x):
            """
            Defines the computation performed at every call.

            :param x: The tensor to apply the forward pass.
            :type x: torch.Tensor
            :return: the output computed by the model.
@@ -154,7 +234,7 @@ space using the POD modes, which are computed and stored in the
"""
coefficents = self.nn(x)
return self.pod.expand(coefficents)
def fit_pod(self, x):
"""
Just call the :meth:`pina.model.layers.PODBlock.fit` method of the
@@ -164,29 +244,27 @@ space using the POD modes, which are computed and stored in the
We highlight that the POD modes are directly computed by means of the
singular value decomposition (computed over the input data), and not
trained using the backpropagation approach. Only the weights of the MLP
are actually trained during the optimization loop.
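Conceptually, ``PODBlock.fit`` extracts the rank-:math:`r` basis from
the snapshot matrix via a truncated singular value decomposition; a
sketch of the idea (not the actual implementation):

.. code:: python

    # Snapshots as columns: each left singular vector is a POD mode.
    U, S, V = torch.linalg.svd(u_train.T, full_matrices=False)
    modes = U[:, :20]              # first r = 20 POD modes
    coeffs = modes.T @ u_train.T   # reduced coordinates of the snapshots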
.. code:: ipython3

    pod_nn = PODNN(pod_rank=20, layers=[10, 10, 10], func=torch.nn.Tanh)
    pod_nn.fit_pod(u_train)

    pod_nn_stokes = SupervisedSolver(
        problem=poisson_problem,
        model=pod_nn,
        optimizer=torch.optim.Adam,
        optimizer_kwargs={'lr': 0.0001})
Now that we have set the ``Problem`` and the ``Model``, we just have to
train the model and use it to predict the test snapshots.
.. code:: ipython3
    trainer = Trainer(
        solver=pod_nn_stokes,
        max_epochs=1000,
        batch_size=100,
        log_every_n_steps=5,
@@ -196,15 +274,41 @@ the model and use it for predict the test snapshots.
.. parsed-literal::

    GPU available: True (cuda), used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    /u/a/aivagnes/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:187: GPU available but not used. You can set it by doing `Trainer(accelerator='gpu')`.

      | Name        | Type    | Params
    ----------------------------------------
    0 | _loss       | MSELoss | 0
    1 | _neural_net | Network | 460
    ----------------------------------------
    460       Trainable params
    0         Non-trainable params
    460       Total params
    0.002     Total estimated model params size (MB)

    /u/a/aivagnes/anaconda3/lib/python3.8/site-packages/torch/cuda/__init__.py:152: UserWarning:
    Found GPU0 Quadro K600 which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.
    The minimum cuda capability supported by this library is 3.7.
      warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))

.. parsed-literal::

    Epoch 999: 100%|██████████| 5/5 [00:00<00:00, 248.36it/s, v_num=20, mean_loss=0.902]

.. parsed-literal::

    `Trainer.fit` stopped: `max_epochs=1000` reached.
Done! Now that the computationally expensive part is over, we can later
reload the model (simply loading the checkpoint file automatically
created by ``Lightning``) to infer new parameters or test its
performance.
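Since PINA solvers are Lightning modules, reloading should follow the
standard Lightning pattern; a sketch (the checkpoint path is
illustrative and the exact keyword arguments may differ):

.. code:: python

    # Hypothetical reload: path and kwargs are illustrative.
    ckpt_path = 'lightning_logs/version_0/checkpoints/last.ckpt'
    solver = SupervisedSolver.load_from_checkpoint(
        ckpt_path, problem=poisson_problem, model=pod_nn)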
We measure the relative error for the training and test datasets,
@@ -212,55 +316,73 @@ printing the mean one.
.. code:: ipython3
    u_test_nn = pod_nn_stokes(p_test)
    u_train_nn = pod_nn_stokes(p_train)

    relative_error_train = torch.norm(u_train_nn - u_train)/torch.norm(u_train)
    relative_error_test = torch.norm(u_test_nn - u_test)/torch.norm(u_test)

    print('Error summary for POD-NN model:')
    print(f'  Train: {relative_error_train.item():e}')
    print(f'  Test:  {relative_error_test.item():e}')
.. parsed-literal::

    Error summary for POD-NN model:
      Train: 3.767902e-02
      Test:  3.488588e-02
POD-RBF vs POD-NN
-----------------

We can of course also plot the solutions predicted by the ``PODRBF``
and the ``PODNN`` models, comparing them to the original ones. For the
``PODNN`` model we can note some differences, especially at low
velocities, but improvements can be accomplished with longer training.
.. code:: ipython3
    idx = torch.randint(0, len(u_test), (4,))
    u_idx_rbf = pod_rbf(p_test[idx])
    u_idx_nn = pod_nn_stokes(p_test[idx])

    import numpy as np
    import matplotlib
    import matplotlib.pyplot as plt

    fig, axs = plt.subplots(5, 4, figsize=(14, 9))

    relative_error_rbf = np.abs(u_test[idx] - u_idx_rbf.detach())
    relative_error_rbf = np.where(u_test[idx] < 1e-7, 1e-7, relative_error_rbf/u_test[idx])
    relative_error_nn = np.abs(u_test[idx] - u_idx_nn.detach())
    relative_error_nn = np.where(u_test[idx] < 1e-7, 1e-7, relative_error_nn/u_test[idx])

    for i, (idx_, rbf_, nn_, rbf_err_, nn_err_) in enumerate(
            zip(idx, u_idx_rbf, u_idx_nn, relative_error_rbf, relative_error_nn)):
        axs[0, i].set_title(rf'$\mu$ = {p_test[idx_].item():.2f}')
        cm = axs[0, i].tricontourf(dataset.triang, rbf_.detach())  # POD-RBF prediction
        plt.colorbar(cm, ax=axs[0, i])
        cm = axs[1, i].tricontourf(dataset.triang, nn_.detach())  # POD-NN prediction
        plt.colorbar(cm, ax=axs[1, i])
        cm = axs[2, i].tricontourf(dataset.triang, u_test[idx_].flatten())  # Truth
        plt.colorbar(cm, ax=axs[2, i])
        cm = axs[3, i].tripcolor(dataset.triang, rbf_err_, norm=matplotlib.colors.LogNorm())  # Error for POD-RBF
        plt.colorbar(cm, ax=axs[3, i])
        cm = axs[4, i].tripcolor(dataset.triang, nn_err_, norm=matplotlib.colors.LogNorm())  # Error for POD-NN
        plt.colorbar(cm, ax=axs[4, i])

    plt.show()
.. image:: tutorial_files/tutorial_27_0.png
