Fixing tutorials grammar (#242)
* grammar check and sparse rephrasing
* rst created
* meta copyright adjusted

committed by GitHub
parent 15136e13f8
commit b10e02103b
@@ -14,13 +14,13 @@ a toy problem, following the standard API procedure.

Specifically, the tutorial aims to introduce the following topics:

- Explaining how to build **PINA** Problem,
- Showing how to generate data for ``PINN`` straining
- Explaining how to build **PINA** Problems,
- Showing how to generate data for ``PINN`` training

These are the two main steps needed **before** starting the modelling
optimization (choose model and solver, and train). We will show each
step in detail, and at the end, we will solve a simple Ordinary
Differential Equation (ODE) problem busing the ``PINN`` solver.
Differential Equation (ODE) problem using the ``PINN`` solver.

Build a PINA problem
--------------------

@@ -66,9 +66,8 @@ the tensor. The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.

What about if our equation is also time dependent? In this case, our
``class`` will inherit from both ``SpatialProblem`` and
``TimeDependentProblem``:
What if our equation is also time-dependent? In this case, our ``class``
will inherit from both ``SpatialProblem`` and ``TimeDependentProblem``:

.. code:: ipython3

@@ -83,6 +82,13 @@ What about if our equation is also time dependent? In this case, our

# other stuff ...

.. parsed-literal::

Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.

where we have included the ``temporal_domain`` variable, indicating the
time domain wanted for the solution.
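For reference, a minimal sketch of how such a time-dependent problem class might look; the class name, bounds and the empty ``conditions`` dictionary below are illustrative assumptions, not the tutorial's exact code:

.. code:: ipython3

    from pina.geometry import CartesianDomain
    from pina.problem import SpatialProblem, TimeDependentProblem

    class TimeDependentODE(SpatialProblem, TimeDependentProblem):
        # one output variable, one spatial variable and one temporal variable
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        # equations and initial/boundary data would be added here,
        # exactly as for the purely spatial problem
        conditions = {}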
@@ -157,7 +163,7 @@ returning the difference between subtracting the variable ``u`` from its
gradient (the residual), which we hope to minimize to 0. This is done
for all conditions. Notice that we do not pass directly a ``python``
function, but an ``Equation`` object, which is initialized with the
``python`` function. This is done so that all the computations, and
``python`` function. This is done so that all the computations and
internal checks are done inside **PINA**.

Once we have defined the function, we need to tell the neural network

@@ -169,25 +175,25 @@ possibilities are allowed, see the documentation for reference).
Finally, it’s possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the real
solution compares to the expected (true) solution. Notice that the
``truth_solution`` function is a method of the ``PINN`` class, but is
``truth_solution`` function is a method of the ``PINN`` class, but it is
not mandatory for problem definition.
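As a concrete illustration of the residual-plus-``Equation`` pattern described above, a minimal sketch could look like the following. The ODE :math:`u' = u` with :math:`u(0)=1`, the exact import paths, and the helper names are assumptions made for illustration; the condition names ``x0`` and ``D`` mirror the ones used later in this tutorial:

.. code:: ipython3

    from pina import Condition
    from pina.geometry import CartesianDomain
    from pina.operators import grad
    from pina.equation import Equation, FixedValue

    def ode_equation(input_, output_):
        # residual of u' - u, which the PINN drives towards zero
        u_x = grad(output_, input_, components=['u'], d=['x'])
        u = output_.extract(['u'])
        return u_x - u

    conditions = {
        # initial condition u(x=0) = 1, imposed through a ready-made equation
        'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1.)),
        # ODE residual imposed on the interior domain D
        'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
    }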
Generate data
-------------

Data for training can come in form of direct numerical simulation
reusults, or points in the domains. In case we do unsupervised learning,
we just need the collocation points for training, i.e. points where we
want to evaluate the neural network. Sampling point in **PINA** is very
easy, here we show three examples using the ``.discretise_domain``
method of the ``AbstractProblem`` class.
results, or points in the domains. In case we perform unsupervised
learning, we just need the collocation points for training, i.e. points
where we want to evaluate the neural network. Sampling point in **PINA**
is very easy, here we show three examples using the
``.discretise_domain`` method of the ``AbstractProblem`` class.

.. code:: ipython3

# sampling 20 points in [0, 1] through discretization in all locations
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')

# sampling 20 points in (0, 1) through latin hypercube samping in D, and 1 point in x0
# sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])
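After discretisation it can be useful to check what was actually sampled; a small sketch along these lines, based on the ``input_pts`` dictionary used later in this tutorial (the exact printout is only an assumption):

.. code:: ipython3

    # the sampled collocation points are stored per location in `input_pts`
    print('Sampled points in D:', problem.input_pts['D'].shape)
    print('Labels of the points:', problem.input_pts['D'].labels)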
@@ -301,11 +307,13 @@ If you want to track the metric by yourself without a logger, use

.. parsed-literal::

TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/Users/alessio/opt/anaconda3/envs/pina/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:67: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
Missing logger folder: /Users/alessio/Downloads/lightning_logs

.. parsed-literal::

Epoch 1499: : 1it [00:00, 272.55it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
Epoch 1499: | | 1/? [00:00<00:00, 167.08it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]

.. parsed-literal::

@@ -314,7 +322,7 @@ If you want to track the metric by yourself without a logger, use

.. parsed-literal::

Epoch 1499: : 1it [00:00, 167.14it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
Epoch 1499: | | 1/? [00:00<00:00, 102.49it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]

After the training we can inspect trainer logged metrics (by default

@@ -332,8 +340,8 @@ loss can be accessed by ``trainer.logged_metrics``

.. parsed-literal::

{'x0_loss': tensor(7.7149e-06),
'D_loss': tensor(0.0007),
{'x0_loss': tensor(1.0674e-05),
'D_loss': tensor(0.0008),
'mean_loss': tensor(0.0004)}

@@ -347,8 +355,13 @@ quatitative plots of the solution.

pl.plot(solver=pinn)

.. parsed-literal::

.. image:: tutorial_files/tutorial_23_0.png

Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.

.. image:: tutorial_files/tutorial_23_1.png

@@ -375,14 +388,16 @@ could train for longer

What’s next?
------------

Nice you have completed the introductory tutorial of **PINA**! There are
multiple directions you can go now:
Congratulations on completing the introductory tutorial of **PINA**!
There are several directions you can go now:

1. Train the network for longer or with different layer sizes and assert
the finaly accuracy

2. Train the network using other types of models (see ``pina.model``)

3. GPU trainining and benchmark the speed
3. GPU training and speed benchmarking

4. Many more…

Binary file not shown.
Before Width: | Height: | Size: 10 KiB  After Width: | Height: | Size: 9.8 KiB
Binary file not shown.
Before Width: | Height: | Size: 31 KiB
Binary file not shown.
After Width: | Height: | Size: 31 KiB
Binary file not shown.
Before Width: | Height: | Size: 19 KiB  After Width: | Height: | Size: 18 KiB
@@ -13,7 +13,7 @@ __all__ = [

__project__ = "PINA"
__title__ = "pina"
__author__ = "PINA Contributors"
__copyright__ = "Copyright 2021-2024, PINA Contributors"
__copyright__ = "2021-2024, PINA Contributors"
__license__ = "MIT"
__version__ = "0.1.0"
__mail__ = "demo.nicola@gmail.com, dario.coscia@sissa.it" # TODO

76  tutorials/tutorial1/tutorial.ipynb  (vendored)
File diff suppressed because one or more lines are too long

32  tutorials/tutorial1/tutorial.py  (vendored)
@@ -11,10 +11,10 @@
#
# Specifically, the tutorial aims to introduce the following topics:
#
# * Explaining how to build **PINA** Problem,
# * Showing how to generate data for `PINN` straining
# * Explaining how to build **PINA** Problems,
# * Showing how to generate data for `PINN` training
#
# These are the two main steps needed **before** starting the modelling optimization (choose model and solver, and train). We will show each step in detail, and at the end, we will solve a simple Ordinary Differential Equation (ODE) problem busing the `PINN` solver.
# These are the two main steps needed **before** starting the modelling optimization (choose model and solver, and train). We will show each step in detail, and at the end, we will solve a simple Ordinary Differential Equation (ODE) problem using the `PINN` solver.

# ## Build a PINA problem

@@ -47,7 +47,7 @@
#
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$), this is done because in **PINA** the `torch.Tensor`s are labelled, allowing the user maximal flexibility for the manipulation of the tensor. The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
#
# What about if our equation is also time dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
# What if our equation is also time-dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
#

# In[1]:

@@ -122,16 +122,16 @@ class SimpleODE(SpatialProblem):
problem = SimpleODE()


# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the difference between subtracting the variable `u` from its gradient (the residual), which we hope to minimize to 0. This is done for all conditions. Notice that we do not pass directly a `python` function, but an `Equation` object, which is initialized with the `python` function. This is done so that all the computations, and internal checks are done inside **PINA**.
# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the difference between subtracting the variable `u` from its gradient (the residual), which we hope to minimize to 0. This is done for all conditions. Notice that we do not pass directly a `python` function, but an `Equation` object, which is initialized with the `python` function. This is done so that all the computations and internal checks are done inside **PINA**.
#
# Once we have defined the function, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass the location points and the equation we want minimized on those points (other possibilities are allowed, see the documentation for reference).
#
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the `PINN` class, but is not mandatory for problem definition.
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the `PINN` class, but it is not mandatory for problem definition.
#

# ## Generate data
#
# Data for training can come in form of direct numerical simulation reusults, or points in the domains. In case we do unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling point in **PINA** is very easy, here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
# Data for training can come in form of direct numerical simulation results, or points in the domains. In case we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling point in **PINA** is very easy, here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.

# In[3]:

@@ -139,7 +139,7 @@ problem = SimpleODE()
# sampling 20 points in [0, 1] through discretization in all locations
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')

# sampling 20 points in (0, 1) through latin hypercube samping in D, and 1 point in x0
# sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])

@@ -168,7 +168,7 @@ print('Input points labels:', problem.input_pts['D'].labels)

# To visualize the sampled points we can use the `.plot_samples` method of the `Plotter` class

# In[6]:
# In[5]:


from pina import Plotter

@@ -181,7 +181,7 @@ pl.plot_samples(problem=problem)

# Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solvers`. We highlight that this training is fairly simple, for more advanced stuff consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some method for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are going to be tracked using a `lightining` logger, by default `CSVLogger`. If you want to track the metric by yourself without a logger, use `pina.callbacks.MetricTracker`.
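To make this step concrete, a minimal training sketch could look like the code below. It only uses the classes named in the paragraph above (`FeedForward`, `PINN`, `Trainer`, `MetricTracker`); the layer sizes, the epoch count and the exact keyword names are assumptions for illustration, not the tutorial's verbatim code.

from pina import Trainer
from pina.model import FeedForward
from pina.solvers import PINN
from pina.callbacks import MetricTracker

# small feed-forward network mapping the problem input to the output variable u
model = FeedForward(
    input_dimensions=len(problem.input_variables),
    output_dimensions=len(problem.output_variables),
    layers=[10, 10],
)

# physics-informed solver and a short training run on CPU, with metric tracking
pinn = PINN(problem=problem, model=model)
trainer = Trainer(solver=pinn, max_epochs=1500, accelerator='cpu',
                  callbacks=[MetricTracker()])
trainer.train()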
# In[7]:
# In[6]:


from pina import Trainer

@@ -210,7 +210,7 @@ trainer.train()

# After the training we can inspect trainer logged metrics (by default **PINA** logs mean square error residual loss). The logged metrics can be accessed online using one of the `Lightinig` loggers. The final loss can be accessed by `trainer.logged_metrics`

# In[8]:
# In[7]:


# inspecting final loss

@@ -219,7 +219,7 @@ trainer.logged_metrics

# By using the `Plotter` class from **PINA** we can also do some quatitative plots of the solution.

# In[9]:
# In[8]:


# plotting the solution

@@ -228,7 +228,7 @@ pl.plot(solver=pinn)

# The solution is overlapped with the actual one, and they are barely indistinguishable. We can also plot easily the loss:

# In[10]:
# In[9]:


pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)

@@ -238,12 +238,14 @@ pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)

# ## What's next?
#
# Nice you have completed the introductory tutorial of **PINA**! There are multiple directions you can go now:
# Congratulations on completing the introductory tutorial of **PINA**! There are several directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
#
# 2. Train the network using other types of models (see `pina.model`)
#
# 3. GPU trainining and benchmark the speed
# 3. GPU training and speed benchmarking
#
# 4. Many more...

#

4  tutorials/tutorial2/tutorial.ipynb  (vendored)

@@ -116,7 +116,7 @@
"source": [
"After the problem, the feed-forward neural network is defined, through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unkwown field of the Poisson problem. The residual of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.\n",
"\n",
"In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-7}$. These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training."
"In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-8}$. These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training."
]
},
{
@@ -561,7 +561,7 @@
"source": [
"## What's next?\n",
"\n",
"Nice you have completed the two dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:\n",
"Congratulations on completing the two dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assert the finaly accuracy\n",
"\n",

4  tutorials/tutorial2/tutorial.py  (vendored)

@@ -80,7 +80,7 @@ problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', '

# After the problem, the feed-forward neural network is defined, through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unkwown field of the Poisson problem. The residual of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.
#
# In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-7}$. These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training.
# In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-8}$. These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training.
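A hedged sketch of how these hyperparameters could be wired up; the `FeedForward` keyword names and the `optimizer_kwargs` argument of the solver are assumptions for illustration rather than the tutorial's exact code.

from pina.model import FeedForward
from pina.solvers import PINN

# two hidden layers of 10 neurons each, mapping (x, y) to the unknown field u
model = FeedForward(input_dimensions=2, output_dimensions=1, layers=[10, 10])

# Adam with learning rate 0.006 and l2 weight regularization 1e-8, forwarded
# to the solver's optimizer (keyword name assumed)
pinn = PINN(problem=problem, model=model,
            optimizer_kwargs={'lr': 0.006, 'weight_decay': 1e-8})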
# In[3]:

@@ -252,7 +252,7 @@ plotter.plot_loss(trainer_learn, logy=True, label='Learnable Features')

# ## What's next?
#
# Nice you have completed the two dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:
# Congratulations on completing the two dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
#

8  tutorials/tutorial3/tutorial.ipynb  (vendored)

@@ -54,7 +54,7 @@
"\\end{cases}\n",
"\\end{equation}\n",
"\n",
"where $D$ is a square domain $[0,1]^2$, and $\\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one."
"where $D$ is a squared domain $[0,1]^2$, and $\\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one."
]
},
{
@@ -305,7 +305,7 @@
"id": "35e51649",
"metadata": {},
"source": [
"The results are not so great, and we can clearly see that as time progress the solution get worse.... Can we do better?\n",
"The results are not so great, and we can clearly see that as time progress the solution gets worse.... Can we do better?\n",
"\n",
"A valid option is to impose the initial condition as hard constraint as well. Specifically, our solution is written as:\n",
"\n",
@@ -491,7 +491,7 @@
"id": "b7338109",
"metadata": {},
"source": [
"We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when the time evolved. By imposing the initial condition the network is able to correctly solve the problem."
"We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when time evolved. By imposing the initial condition the network is able to correctly solve the problem."
]
},
{
@@ -501,7 +501,7 @@
"source": [
"## What's next?\n",
"\n",
"Nice you have completed the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:\n",
"Congratulations on completing the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assert the finaly accuracy\n",
"\n",

8  tutorials/tutorial3/tutorial.py  (vendored)

@@ -34,7 +34,7 @@ from pina import Condition, Plotter
# \end{cases}
# \end{equation}
#
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.
# where $D$ is a squared domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.

# Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem` since we deal with spatial, and time dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution which will be compared with the predicted one.

@@ -142,7 +142,7 @@ print('Plotting at t=1')
plotter.plot(pinn, fixed_variables={'t': 1.0})


# The results are not so great, and we can clearly see that as time progress the solution get worse.... Can we do better?
# The results are not so great, and we can clearly see that as time progress the solution gets worse.... Can we do better?
#
# A valid option is to impose the initial condition as hard constraint as well. Specifically, our solution is written as:
#

@@ -207,11 +207,11 @@ print('Plotting at t=1')
plotter.plot(pinn, fixed_variables={'t': 1.0})


# We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when the time evolved. By imposing the initial condition the network is able to correctly solve the problem.
# We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when time evolved. By imposing the initial condition the network is able to correctly solve the problem.
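For readers who want to see what such a hard constraint can look like in practice, here is a generic sketch (not the tutorial's actual model): the network output is multiplied by `t` and the known initial state is added back, so the initial condition holds by construction.

import torch
from pina.model import FeedForward


class HardConstraintWave(torch.nn.Module):
    """Wraps a network so that u(x, y, t=0) equals a known initial state u0."""

    def __init__(self, u0):
        super().__init__()
        # u0 is a callable returning the initial condition on (x, y); assumed for illustration
        self.u0 = u0
        self.net = FeedForward(input_dimensions=3, output_dimensions=1, layers=[20, 20])

    def forward(self, pts):
        # pts carries the labelled columns x, y, t
        x, y, t = pts.extract(['x']), pts.extract(['y']), pts.extract(['t'])
        # at t = 0 the second term vanishes and only the initial condition remains
        return self.u0(x, y) + t * self.net(pts)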
# ## What's next?
#
# Nice you have completed the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:
# Congratulations on completing the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
#

46  tutorials/tutorial4/tutorial.ipynb  (vendored)

@@ -105,7 +105,7 @@
"f(x, y) = [\\sin(\\pi x) \\sin(\\pi y), -\\sin(\\pi x) \\sin(\\pi y)] \\quad (x,y)\\in[0,1]\\times[0,1]\n",
"$$\n",
"\n",
"using a batch size of one."
"using a batch size equal to 1."
]
},
{
@@ -130,14 +130,14 @@
"# points in the mesh fixed to 200\n",
"N = 200\n",
"\n",
"# vectorial 2 dimensional function, number_input_fileds=2\n",
"number_input_fileds = 2\n",
"# vectorial 2 dimensional function, number_input_fields=2\n",
"number_input_fields = 2\n",
"\n",
"# 2 dimensional spatial variables, D = 2 + 1 = 3\n",
"D = 3\n",
"\n",
"# create the function f domain as random 2d points in [0, 1]\n",
"domain = torch.rand(size=(batch_size, number_input_fileds, N, D-1))\n",
"domain = torch.rand(size=(batch_size, number_input_fields, N, D-1))\n",
"print(f\"Domain has shape: {domain.shape}\")\n",
"\n",
"# create the functions\n",
@@ -146,7 +146,7 @@
"f2 = - torch.sin(pi * domain[:, 1, :, 0]) * torch.sin(pi * domain[:, 1, :, 1])\n",
"\n",
"# stacking the input domain and field values\n",
"data = torch.empty(size=(batch_size, number_input_fileds, N, D))\n",
"data = torch.empty(size=(batch_size, number_input_fields, N, D))\n",
"data[..., :-1] = domain # copy the domain\n",
"data[:, 0, :, -1] = f1 # copy first field value\n",
"data[:, 1, :, -1] = f1 # copy second field value\n",
@@ -174,7 +174,7 @@
"1. `domain`: square domain (the only implemented) $[0,1]\\times[0,5]$. The minimum value is always zero, while the maximum is specified by the user\n",
"2. `start`: start position of the filter, coordinate $(0, 0)$\n",
"3. `jump`: the jumps of the centroid of the filter to the next position $(0.1, 0.3)$\n",
"4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`,`-1 = left` with respect to the current position\n",
"4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`, `-1 = left` with respect to the current position\n",
"\n",
"**Note**\n",
"\n",
@@ -188,9 +188,9 @@
"source": [
"### Filter definition\n",
"\n",
"Having defined all the previous blocks we are able to construct the continuous filter.\n",
"Having defined all the previous blocks, we are now able to construct the continuous filter.\n",
"\n",
"Suppose we would like to get an ouput with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$."
"Suppose we would like to get an output with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$."
]
},
{
@@ -220,7 +220,7 @@
" }\n",
"\n",
"# creating the filter \n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
" output_numb_field=1,\n",
" filter_dim=filter_dim,\n",
" stride=stride)"
@@ -242,7 +242,7 @@
"outputs": [],
"source": [
"# creating the filter + optimization\n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
" output_numb_field=1,\n",
" filter_dim=filter_dim,\n",
" stride=stride,\n",
@@ -254,7 +254,7 @@
"id": "f99c290e",
"metadata": {},
"source": [
"Let's try to do a forward pass"
"Let's try to do a forward pass:"
]
},
{
@@ -310,7 +310,7 @@
" return self.model(x)\n",
"\n",
"\n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
" output_numb_field=1,\n",
" filter_dim=filter_dim,\n",
" stride=stride,\n",
@@ -380,7 +380,7 @@
"id": "7f076010",
"metadata": {},
"source": [
"Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\\times[0,27]$ domain, and the pixels value are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:"
"Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\\times[0,27]$ domain, and the pixels values are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:"
]
},
{
@@ -478,7 +478,7 @@
"id": "4374c15c",
"metadata": {},
"source": [
"Let's try to train it using a simple pytorch training loop. We train for juts 1 epoch using Adam optimizer with a $0.001$ learning rate."
"Let's try to train it using a simple pytorch training loop. We train for just 1 epoch using Adam optimizer with a $0.001$ learning rate."
]
},
{
@@ -556,7 +556,7 @@
"id": "47fa3d0e",
"metadata": {},
"source": [
"Let's see the performance on the train set!"
"Let's see the performance on the test set!"
]
},
{
@@ -595,7 +595,7 @@
"id": "25cf2878",
"metadata": {},
"source": [
"As we can see we have very good performance for having traing only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now."
"As we can see we have very good performance for having trained only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now."
]
},
{
@@ -876,7 +876,7 @@
"id": "206141f9",
"metadata": {},
"source": [
"As we can see the two are really similar! We can compute the $l_2$ error quite easily as well:"
"As we can see, the two solutions are really similar! We can compute the $l_2$ error quite easily as well:"
]
},
{
@@ -916,7 +916,7 @@
"source": [
"### Filter for upsampling\n",
"\n",
"Suppose we have already the hidden dimension and we want to upsample on a differen grid with more points. Let's see how to do it:"
"Suppose we have already the hidden representation and we want to upsample on a differen grid with more points. Let's see how to do it:"
]
},
{
@@ -946,7 +946,7 @@
"input_data2[0, 0, :, -1] = torch.sin(pi *\n",
" grid2[:, 0]) * torch.sin(pi * grid2[:, 1])\n",
"\n",
"# get the hidden dimension representation from original input\n",
"# get the hidden representation from original input\n",
"latent = net.encoder(input_data)\n",
"\n",
"# upsample on the second input_data2\n",
@@ -996,13 +996,13 @@
"id": "465cbd16",
"metadata": {},
"source": [
"### Autoencoding at different resolution\n",
"In the previous example we already had the hidden dimension (of original input) and we used it to upsample. Sometimes however we have a more fine mesh solution and we simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:"
"### Autoencoding at different resolutions\n",
"In the previous example we already had the hidden representation (of the original input) and we used it to upsample. Sometimes however we could have a finer mesh solution and we would simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": null,
"id": "75ed28f5",
"metadata": {},
"outputs": [
@@ -1034,7 +1034,7 @@
"input_data2[0, 0, :, -1] = torch.sin(pi *\n",
" grid2[:, 0]) * torch.sin(pi * grid2[:, 1])\n",
"\n",
"# get the hidden dimension representation from more fine mesh input\n",
"# get the hidden representation from finer mesh input\n",
"latent = net.encoder(input_data2)\n",
"\n",
"# upsample on the second input_data2\n",
111  tutorials/tutorial4/tutorial.py  (vendored)

@@ -1,20 +1,21 @@
#!/usr/bin/env python
# coding: utf-8

# # Tutorial 4: continuous convolutional filter
# # Tutorial: Unstructured convolutional autoencoder via continuous convolution

# In this tutorial, we will show how to use the Continuous Convolutional Filter, and how to build common Deep Learning architectures with it. The implementation of the filter follows the original work [**A Continuous Convolutional Trainable Filter for Modelling Unstructured Data**](https://arxiv.org/abs/2210.13416).
# In this tutorial, we will show how to use the Continuous Convolutional Filter, and how to build common Deep Learning architectures with it. The implementation of the filter follows the original work [*A Continuous Convolutional Trainable Filter for Modelling Unstructured Data*](https://arxiv.org/abs/2210.13416).

# First of all we import the modules needed for the tutorial, which include:
#
# * `ContinuousConv` class from `pina.model.layers` which implements the continuous convolutional filter
# * `PyTorch` and `Matplotlib` for tensorial operations and visualization respectively
# First of all we import the modules needed for the tutorial:

# In[1]:


import torch
import matplotlib.pyplot as plt
from pina.problem import AbstractProblem
from pina.solvers import SupervisedSolver
from pina.trainer import Trainer
from pina import Condition, LabelTensor
from pina.model.layers import ContinuousConvBlock
import torchvision # for MNIST dataset
from pina.model import FeedForward # for building AE and MNIST classification

@@ -54,7 +55,7 @@ from pina.model import FeedForward # for building AE and MNIST classification
# f(x, y) = [\sin(\pi x) \sin(\pi y), -\sin(\pi x) \sin(\pi y)] \quad (x,y)\in[0,1]\times[0,1]
# $$
#
# using a batch size of one.
# using a batch size equal to 1.

# In[2]:

@@ -65,14 +66,14 @@ batch_size = 1
# points in the mesh fixed to 200
N = 200

# vectorial 2 dimensional function, number_input_fileds=2
number_input_fileds = 2
# vectorial 2 dimensional function, number_input_fields=2
number_input_fields = 2

# 2 dimensional spatial variables, D = 2 + 1 = 3
D = 3

# create the function f domain as random 2d points in [0, 1]
domain = torch.rand(size=(batch_size, number_input_fileds, N, D-1))
domain = torch.rand(size=(batch_size, number_input_fields, N, D-1))
print(f"Domain has shape: {domain.shape}")

# create the functions
@@ -81,7 +82,7 @@ f1 = torch.sin(pi * domain[:, 0, :, 0]) * torch.sin(pi * domain[:, 0, :, 1])
f2 = - torch.sin(pi * domain[:, 1, :, 0]) * torch.sin(pi * domain[:, 1, :, 1])

# stacking the input domain and field values
data = torch.empty(size=(batch_size, number_input_fileds, N, D))
data = torch.empty(size=(batch_size, number_input_fields, N, D))
data[..., :-1] = domain # copy the domain
data[:, 0, :, -1] = f1 # copy first field value
data[:, 1, :, -1] = f1 # copy second field value
@@ -104,7 +105,7 @@ print(f"Filter input data has shape: {data.shape}")
# 1. `domain`: square domain (the only implemented) $[0,1]\times[0,5]$. The minimum value is always zero, while the maximum is specified by the user
# 2. `start`: start position of the filter, coordinate $(0, 0)$
# 3. `jump`: the jumps of the centroid of the filter to the next position $(0.1, 0.3)$
# 4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`,`-1 = left` with respect to the current position
# 4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`, `-1 = left` with respect to the current position
#
# **Note**
#
@@ -112,9 +113,9 @@ print(f"Filter input data has shape: {data.shape}")

# ### Filter definition
#
# Having defined all the previous blocks we are able to construct the continuous filter.
# Having defined all the previous blocks, we are now able to construct the continuous filter.
#
# Suppose we would like to get an ouput with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$.
# Suppose we would like to get an output with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$.

# In[3]:

@@ -130,7 +131,7 @@ stride = {"domain": [1, 1],
}

# creating the filter
cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
cConv = ContinuousConvBlock(input_numb_field=number_input_fields,
output_numb_field=1,
filter_dim=filter_dim,
stride=stride)
@@ -142,14 +143,14 @@ cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,


# creating the filter + optimization
cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
cConv = ContinuousConvBlock(input_numb_field=number_input_fields,
output_numb_field=1,
filter_dim=filter_dim,
stride=stride,
optimize=True)


# Let's try to do a forward pass
# Let's try to do a forward pass:

# In[5]:

@@ -182,7 +183,7 @@ class SimpleKernel(torch.nn.Module):
return self.model(x)


cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
cConv = ContinuousConvBlock(input_numb_field=number_input_fields,
output_numb_field=1,
filter_dim=filter_dim,
stride=stride,
@@ -231,7 +232,7 @@ test_loader = DataLoader(train_data, batch_size=batch_size,
sampler=SubsetRandomSampler(subsample_train_indices))


# Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\times[0,27]$ domain, and the pixels value are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:
# Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\times[0,27]$ domain, and the pixels values are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:
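A possible sketch of such a transform in plain PyTorch; the function name and the unnormalized pixel coordinates are illustrative assumptions, not the tutorial's actual helper.

def image_to_field(batch):
    """Turn a [batch, 1, 28, 28] image batch into [batch, 1, 784, 3] points (x, y, value)."""
    b = batch.shape[0]
    # pixel coordinates on the [0, 27] x [0, 27] grid
    x, y = torch.meshgrid(torch.arange(28.), torch.arange(28.), indexing='ij')
    coords = torch.stack([x.flatten(), y.flatten()], dim=-1)        # [784, 2]
    coords = coords.unsqueeze(0).unsqueeze(0).expand(b, 1, -1, -1)  # [b, 1, 784, 2]
    values = batch.reshape(b, 1, 784, 1)                            # pixel intensities as field values
    return torch.cat([coords, values], dim=-1)                      # [b, 1, 784, 3]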
# In[8]:

@@ -300,7 +301,7 @@ class ContinuousClassifier(torch.nn.Module):
net = ContinuousClassifier()


# Let's try to train it using a simple pytorch training loop. We train for juts 1 epoch using Adam optimizer with a $0.001$ learning rate.
# Let's try to train it using a simple pytorch training loop. We train for just 1 epoch using Adam optimizer with a $0.001$ learning rate.

# In[10]:

@@ -336,7 +337,7 @@ for epoch in range(1): # loop over the dataset multiple times
running_loss = 0.0


# Let's see the performance on the train set!
# Let's see the performance on the test set!

# In[11]:

@@ -357,7 +358,7 @@ print(
f'Accuracy of the network on the 1000 test images: {(correct / total):.3%}')


# As we can see we have very good performance for having traing only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now.
# As we can see we have very good performance for having trained only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now.

# ## Building a Continuous Convolutional Autoencoder
#
@@ -463,7 +464,7 @@ class Decoder(torch.nn.Module):

# Very good! Notice that in the `Decoder` class in the `forward` pass we have used the `.transpose()` method of the `ContinuousConvolution` class. This method accepts the `weights` for upsampling and the `grid` on where to upsample. Let's now build the autoencoder! We set the hidden dimension in the `hidden_dimension` variable. We apply the sigmoid on the output since the field value is between $[0, 1]$.

# In[14]:
# In[17]:


class Autoencoder(torch.nn.Module):
@@ -482,42 +483,32 @@ class Autoencoder(torch.nn.Module):
out = self.decoder(weights, grid)
return out


net = Autoencoder()


# Let's now train the autoencoder, minimizing the mean square error loss and optimizing using Adam.
# Let's now train the autoencoder, minimizing the mean square error loss and optimizing using Adam. We use the `SupervisedSolver` as solver, and the problem is a simple problem created by inheriting from `AbstractProblem`. It takes approximately two minutes to train on CPU.

# In[15]:
# In[19]:


# setting the seed
torch.manual_seed(seed)
# define the problem
class CircleProblem(AbstractProblem):
input_variables = ['x', 'y', 'f']
output_variables = input_variables
conditions = {'data' : Condition(input_points=LabelTensor(input_data, input_variables), output_points=LabelTensor(input_data, output_variables))}

# optimizer and loss function
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
criterion = torch.nn.MSELoss()
max_epochs = 150
# define the solver
solver = SupervisedSolver(problem=CircleProblem(), model=net, loss=torch.nn.MSELoss())

for epoch in range(max_epochs): # loop over the dataset multiple times
# train
trainer = Trainer(solver, max_epochs=150, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()

# zero the parameter gradients
optimizer.zero_grad()

# forward + backward + optimize
outputs = net(input_data)
loss = criterion(outputs[..., -1], input_data[..., -1])
loss.backward()
optimizer.step()

# print statistics
if epoch % 10 ==9:
print(f'epoch [{epoch + 1}/{max_epochs}] loss [{loss.item():.2}]')


# Let's visualize the two solutions side by side!

# In[16]:
# In[20]:


net.eval()
@@ -538,9 +529,9 @@ plt.tight_layout()
plt.show()


# As we can see the two are really similar! We can compute the $l_2$ error quite easily as well:
# As we can see, the two solutions are really similar! We can compute the $l_2$ error quite easily as well:

# In[17]:
# In[21]:


def l2_error(input_, target):
@@ -554,9 +545,9 @@ print(f'l2 error: {l2_error(input_data[0, 0, :, -1], output[0, 0, :, -1]):.2%}')

# ### Filter for upsampling
#
# Suppose we have already the hidden dimension and we want to upsample on a differen grid with more points. Let's see how to do it:
# Suppose we have already the hidden representation and we want to upsample on a differen grid with more points. Let's see how to do it:

# In[18]:
# In[22]:


# setting the seed
@@ -568,7 +559,7 @@ input_data2[0, 0, :, :-1] = grid2
input_data2[0, 0, :, -1] = torch.sin(pi *
grid2[:, 0]) * torch.sin(pi * grid2[:, 1])

# get the hidden dimension representation from original input
# get the hidden representation from original input
latent = net.encoder(input_data)

# upsample on the second input_data2
@@ -589,16 +580,16 @@ plt.show()

# As we can see we have a very good approximation of the original function, even thought some noise is present. Let's calculate the error now:

# In[19]:
# In[23]:


print(f'l2 error: {l2_error(input_data2[0, 0, :, -1], output[0, 0, :, -1]):.2%}')


# ### Autoencoding at different resolution
# In the previous example we already had the hidden dimension (of original input) and we used it to upsample. Sometimes however we have a more fine mesh solution and we simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:
# ### Autoencoding at different resolutions
# In the previous example we already had the hidden representation (of the original input) and we used it to upsample. Sometimes however we could have a finer mesh solution and we would simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:

# In[20]:
# In[ ]:


# setting the seed
@@ -610,7 +601,7 @@ input_data2[0, 0, :, :-1] = grid2
input_data2[0, 0, :, -1] = torch.sin(pi *
grid2[:, 0]) * torch.sin(pi * grid2[:, 1])

# get the hidden dimension representation from more fine mesh input
# get the hidden representation from finer mesh input
latent = net.encoder(input_data2)

# upsample on the second input_data2
@@ -635,4 +626,10 @@ print(

# ## What's next?
#
# We have shown the basic usage of a convolutional filter. In the next tutorials we will show how to combine the PINA framework with the convolutional filter to train in few lines and efficiently a Neural Network!
# We have shown the basic usage of a convolutional filter. There are additional extensions possible:
#
# 1. Train using Physics Informed strategies
#
# 2. Use the filter to build an unstructured convolutional autoencoder for reduced order modelling
#
# 3. Many more...
6  tutorials/tutorial5/tutorial.ipynb  (vendored)

@@ -14,7 +14,7 @@
"metadata": {},
"source": [
"In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for\n",
"Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operations."
"Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input-output operations."
]
},
{
@@ -42,7 +42,7 @@
"source": [
"## Data Generation\n",
"\n",
"We will focus on solving the a specfic PDE, the **Darcy Flow** equation. The Darcy PDE is a second order, elliptic PDE with the following form:\n",
"We will focus on solving a specific PDE, the **Darcy Flow** equation. The Darcy PDE is a second-order elliptic PDE with the following form:\n",
"\n",
"$$\n",
"-\\nabla\\cdot(k(x, y)\\nabla u(x, y)) = f(x) \\quad (x, y) \\in D.\n",
@@ -233,7 +233,7 @@
"id": "6b5e5aa6",
"metadata": {},
"source": [
"## Solving the problem with a Fuorier Neural Operator (FNO)\n",
"## Solving the problem with a Fourier Neural Operator (FNO)\n",
"\n",
"We will now move to solve the problem using a FNO. Since we are learning operator this approach is better suited, as we shall see."
]
},

6  tutorials/tutorial5/tutorial.py  (vendored)

@@ -4,7 +4,7 @@
# # Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator

# In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for
# Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operations.
# Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input-output operations.

# In[1]:

@@ -22,7 +22,7 @@ import matplotlib.pyplot as plt

# ## Data Generation
#
# We will focus on solving the a specfic PDE, the **Darcy Flow** equation. The Darcy PDE is a second order, elliptic PDE with the following form:
# We will focus on solving a specific PDE, the **Darcy Flow** equation. The Darcy PDE is a second-order elliptic PDE with the following form:
#
# $$
# -\nabla\cdot(k(x, y)\nabla u(x, y)) = f(x) \quad (x, y) \in D.
@@ -112,7 +112,7 @@ err = float(metric_err(u_test.squeeze(-1), solver.neural_net(k_test).squeeze(-1)
print(f'Final error testing {err:.2f}%')


# ## Solving the problem with a Fuorier Neural Operator (FNO)
# ## Solving the problem with a Fourier Neural Operator (FNO)
#
# We will now move to solve the problem using a FNO. Since we are learning operator this approach is better suited, as we shall see.

10  tutorials/tutorial6/tutorial.ipynb  (vendored)

@@ -44,7 +44,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will create one cartesian and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional, but it's trivial the extension to 3D (and higher) cases. The geometries allows also the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without."
"We will create one cartesian and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional case, but the extension to 3D (and higher) cases is trivial. The geometries allow also the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -65,7 +65,7 @@
|
||||
"source": [
|
||||
"The `{'x': [0, 2], 'y': [0, 2]}` are the bounds of the `CartesianDomain` being created. \n",
|
||||
"\n",
|
||||
"To visualize these shapes, we need to sample points on them. We will use the `sample` method of the `CartesianDomain` and `EllipsoidDomain` classes. This method takes a `n` argument which is the number of points to sample. It also takes different modes to sample such as random."
|
||||
"To visualize these shapes, we need to sample points on them. We will use the `sample` method of the `CartesianDomain` and `EllipsoidDomain` classes. This method takes a `n` argument which is the number of points to sample. It also takes different modes to sample, such as `'random'`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -84,7 +84,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can see the samples of each of the geometries to see what we are working with."
|
||||
"We can see the samples of each geometry to see what we are working with."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -255,7 +255,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can of course sample points over the new geometries, by using the `sample` method as before. We highlihgt that the available sample strategy here is only *random*."
|
||||
"We can of course sample points over the new geometries, by using the `sample` method as before. We highlight that the available sample strategy here is only *random*."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -395,7 +395,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Because the `Location` class we are inherting from requires both a `sample` method and `is_inside` method, we will create them and just add in \"pass\" for the moment."
|
||||
"Because the `Location` class we are inheriting from requires both a `sample` method and `is_inside` method, we will create them and just add in \"pass\" for the moment."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
10
tutorials/tutorial6/tutorial.py
vendored
10
tutorials/tutorial6/tutorial.py
vendored
@@ -25,7 +25,7 @@ def plot_scatter(ax, pts, title):

# ## Built-in Geometries

# We will create one cartesian and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional, but it's trivial the extension to 3D (and higher) cases. The geometries allows also the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without.
# We will create one cartesian and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional case, but the extension to 3D (and higher) cases is trivial. The geometries allow also the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without.

# In[2]:

@@ -37,7 +37,7 @@ ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]}, sample_surface=Tr

# The `{'x': [0, 2], 'y': [0, 2]}` are the bounds of the `CartesianDomain` being created.
#
# To visualize these shapes, we need to sample points on them. We will use the `sample` method of the `CartesianDomain` and `EllipsoidDomain` classes. This method takes a `n` argument which is the number of points to sample. It also takes different modes to sample such as random.
# To visualize these shapes, we need to sample points on them. We will use the `sample` method of the `CartesianDomain` and `EllipsoidDomain` classes. This method takes a `n` argument which is the number of points to sample. It also takes different modes to sample, such as `'random'`.

# In[3]:
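As a side note for readers following the diff, here is a minimal sketch of the geometry-and-sampling API the hunk above documents. The import path and the bounds of the borderless ellipsoid are assumptions (they are not part of this commit); the method calls mirror the ones visible in the diff.

```python
# Minimal sketch of the built-in geometries discussed above.
# Import path assumed from the tutorial-era PINA API; adjust if your version
# exposes these classes elsewhere.
from pina.geometry import CartesianDomain, EllipsoidDomain

cartesian = CartesianDomain({'x': [0, 2], 'y': [0, 2]})            # axis-aligned box
ellipsoid_no_border = EllipsoidDomain({'x': [1, 3], 'y': [1, 3]})  # interior only (bounds illustrative)
ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]},
                                   sample_surface=True)            # boundary points only

# `sample` takes the number of points `n` and a sampling `mode`
cartesian_samples = cartesian.sample(n=1000, mode='random')
ellipsoid_border_samples = ellipsoid_border.sample(n=1000, mode='random')
```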
@@ -47,7 +47,7 @@ ellipsoid_no_border_samples = ellipsoid_no_border.sample(n=1000, mode='random')
ellipsoid_border_samples = ellipsoid_border.sample(n=1000, mode='random')


# We can see the samples of each of the geometries to see what we are working with.
# We can see the samples of each geometry to see what we are working with.

# In[4]:

@@ -118,7 +118,7 @@ cart_ellipse_b_union = Union([cartesian, ellipsoid_border])
three_domain_union = Union([cartesian, ellipsoid_no_border, ellipsoid_border])


# We can of course sample points over the new geometries, by using the `sample` method as before. We highlihgt that the available sample strategy here is only *random*.
# We can of course sample points over the new geometries, by using the `sample` method as before. We highlight that the available sample strategy here is only *random*.

# In[8]:
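For context, a short sketch of the domain composition shown in the hunk above; only the `Union` call and the sampling mode come from the diff, the import path and bounds are assumptions.

```python
# Sketch of composing domains with Union, as in the hunk above.
from pina.geometry import CartesianDomain, EllipsoidDomain, Union  # path assumed

cartesian = CartesianDomain({'x': [0, 2], 'y': [0, 2]})
ellipsoid_no_border = EllipsoidDomain({'x': [1, 3], 'y': [1, 3]})   # illustrative bounds
ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]}, sample_surface=True)

three_domain_union = Union([cartesian, ellipsoid_no_border, ellipsoid_border])

# set-operation domains only support random sampling, as the text notes
union_samples = three_domain_union.sample(n=3000, mode='random')
```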
@@ -180,7 +180,7 @@ class Heart(Location):

# Because the `Location` class we are inherting from requires both a `sample` method and `is_inside` method, we will create them and just add in "pass" for the moment.
# Because the `Location` class we are inheriting from requires both a `sample` method and `is_inside` method, we will create them and just add in "pass" for the moment.

# In[13]:
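A sketch of the stubbed interface this paragraph describes. The class name, the two required methods, and the `pass` placeholders come from the diff; the import path and the method signatures are indicative only and should be checked against the `Location` base class in your PINA version.

```python
from pina.geometry import Location  # import path assumed

class Heart(Location):
    """Stub of a custom heart-shaped domain (interface only)."""

    def is_inside(self, point, check_border=False):
        # to be filled in later: True if `point` lies inside the heart shape
        pass

    def sample(self, n, mode='random', variables='all'):
        # to be filled in later: return `n` points sampled from the heart shape
        pass
```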
tutorials/tutorial7/tutorial.ipynb
File diff suppressed because one or more lines are too long

tutorials/tutorial7/tutorial.py
@@ -19,7 +19,7 @@
# - find the solution $u$ that satisfies the Poisson equation;
# - find the unknown parameters ($\mu_1$, $\mu_2$) that better fit some given data (third equation in the system above).
#
# In order to achieve both the goals we will need to define an `InverseProblem` in PINA.
# In order to achieve both goals we will need to define an `InverseProblem` in PINA.

# Let's start with useful imports.

@@ -63,7 +63,7 @@ plt.show()

# ### Inverse problem definition in PINA

# Then, we initialize the Poisson problem, that is inherited from the `SpatialProblem` and from the `InverseProblem` classes. We here have to define all the variables, and the domain where our unknown parameters ($\mu_1$, $\mu_2$) belong. Notice that the laplace equation takes as inputs also the unknown variables, that will be treated as parameters that the neural network optimizes during the training process.
# Then, we initialize the Poisson problem, that is inherited from the `SpatialProblem` and from the `InverseProblem` classes. We here have to define all the variables, and the domain where our unknown parameters ($\mu_1$, $\mu_2$) belong. Notice that the Laplace equation takes as inputs also the unknown variables, that will be treated as parameters that the neural network optimizes during the training process.

# In[4]:
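A hedged sketch of the problem skeleton this paragraph describes. The double inheritance and the class/attribute names `SpatialProblem`, `InverseProblem`, `output_variables`, and `spatial_domain` appear in the diff; the attribute name `unknown_parameter_domain`, the import paths, and all numeric ranges are assumptions based on the tutorial-era PINA API, not part of this commit.

```python
from pina.problem import SpatialProblem, InverseProblem  # paths assumed
from pina.geometry import CartesianDomain

class Poisson(SpatialProblem, InverseProblem):
    output_variables = ['u']
    # spatial domain of the PDE (ranges illustrative)
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    # search box for the unknown parameters mu_1, mu_2 (attribute name assumed)
    unknown_parameter_domain = CartesianDomain({'mu1': [-1, 1], 'mu2': [-1, 1]})

    # conditions (PDE residual, boundary conditions, observed data) go here,
    # exactly as in the full tutorial file.
```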
@@ -117,7 +117,7 @@ class Poisson(SpatialProblem, InverseProblem):
problem = Poisson()


# Then, we define the model of the neural network we want to use. Here we used a model which impose hard constrains on the boundary conditions, as also done in the Wave tutorial!
# Then, we define the neural network model we want to use. Here we used a model which imposes hard constraints on the boundary conditions, as also done in the Wave tutorial!

# In[5]:
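The "hard constraints" idea mentioned above, sketched in plain PyTorch under the assumption of homogeneous Dirichlet conditions on the unit square; the tutorial's actual model may differ, so treat this as an illustration of the technique rather than the commit's code.

```python
import torch

class HardConstraintNet(torch.nn.Module):
    """Toy hard-constraint wrapper: the output vanishes on the boundary of
    [0, 1]^2 by construction, so u = 0 on the boundary never has to be learned.
    Illustrative only; signatures and constraints are assumptions."""

    def __init__(self, inner_model):
        super().__init__()
        self.inner_model = inner_model

    def forward(self, pts):
        x, y = pts[..., 0:1], pts[..., 1:2]
        # x*(1-x)*y*(1-y) is zero on all four edges of the unit square
        return x * (1 - x) * y * (1 - y) * self.inner_model(pts)

# example usage with a small MLP as the trainable core
model = HardConstraintNet(torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1)))
```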
@@ -160,7 +160,7 @@ class SaveParameters(Callback):

# Then, we define the `PINN` object and train the solver using the `Trainer`.

# In[8]:
# In[ ]:


### train the problem with PINN
tutorials/tutorial8/tutorial.ipynb
@@ -15,7 +15,7 @@
"source": [
"The tutorial aims to show how to employ the **PINA** library in order to apply a reduced order modeling technique [1]. Such methodologies have several similarities with machine learning approaches, since the main goal consists of predicting the solution of differential equations (typically parametric PDEs) in a real-time fashion.\n",
"\n",
"In particular we are going to use the Proper Orthogonal Decomposition with Neural Network (PODNN) [2], which basically perform a dimensional reduction using the POD approach, approximating the parametric solution manifold (at the reduced space) using a NN. In this example, we use a simple multilayer perceptron, but the plenty of different archiutectures can be plugged as well.\n",
"In particular we are going to use the Proper Orthogonal Decomposition with Neural Network (PODNN) [2], which basically performs a dimensional reduction using the POD approach, approximating the parametric solution manifold (at the reduced space) using a NN. In this example, we use a simple multilayer perceptron, but plenty of different architectures can be plugged in as well.\n",
"\n",
"#### References\n",
"1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order Methods and Applications in Computational Fluid Dynamics, Society for Industrial and Applied Mathematics. \n",

@@ -118,7 +118,7 @@
"id": "bef4d79d",
"metadata": {},
"source": [
"The *snapshots* - aka the numerical solutions computed for several parameters - and the corresponding parameters are the only data we need to train the model, in order to predict for any new test parameter the solution.\n",
"The *snapshots* - aka the numerical solutions computed for several parameters - and the corresponding parameters are the only data we need to train the model, in order to predict the solution for any new test parameter.\n",
"To properly validate the accuracy, we initially split the 500 snapshots into the training dataset (90% of the original data) and the testing one (the reamining 10%). It must be said that, to plug the snapshots into **PINA**, we have to cast them to `LabelTensor` objects."
]
},
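A hedged sketch of the split-and-cast step described in this cell. The 500-snapshot count, the 90/10 split, and the `LabelTensor` cast come from the text; the data, labels, and the exact `LabelTensor(tensor, labels=...)` call are placeholders/assumptions based on the tutorial-era API.

```python
import torch
from pina import LabelTensor  # top-level export assumed

n_snap = 500
n_train = int(0.9 * n_snap)           # 90% training, 10% testing

# placeholder data: one parameter per snapshot, 100 dofs per solution
params = torch.rand(n_snap, 1)
snapshots = torch.rand(n_snap, 100)

params_train = LabelTensor(params[:n_train], labels=['mu'])
params_test = LabelTensor(params[n_train:], labels=['mu'])
snap_train = LabelTensor(snapshots[:n_train], labels=[f's{i}' for i in range(100)])
snap_test = LabelTensor(snapshots[n_train:], labels=[f's{i}' for i in range(100)])
```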
@@ -172,7 +172,7 @@
"id": "6b264569-57b3-458d-bb69-8e94fe89017d",
"metadata": {},
"source": [
"Then, we define the model we want to use: basically we have a MLP architecture that takes in input the parameter and return the *modal coefficients*, so the reduced dimension representation (the coordinates in the POD space). Such latent variable is the projected to the original space using the POD modes, which are computed and stored in the `PODBlock` object."
"Then, we define the model we want to use: an MLP architecture which takes in input the parameter and returns the *modal coefficients*, i.e. the interpolated coefficients of the POD expansion. Such coefficients are projected to the original space using the POD modes, which are computed and stored in the `PODBlock` object."
]
},
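A framework-agnostic sketch of the PODNN idea summarised above, in plain PyTorch (this is not PINA's `PODBlock` API): the POD modes come from an SVD of the snapshot matrix and stay fixed, while a small MLP maps parameters to the modal coefficients.

```python
import torch

snapshots = torch.rand(450, 100)   # placeholder training snapshots (n_snap x n_dof)
rank = 10

# POD modes: leading right singular vectors of the snapshot matrix
_, _, Vh = torch.linalg.svd(snapshots, full_matrices=False)
modes = Vh[:rank]                  # (rank x n_dof), computed once, not trained

mlp = torch.nn.Sequential(         # parameter -> modal coefficients
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, rank))

def podnn(mu):
    coeffs = mlp(mu)               # (batch x rank) reduced representation
    return coeffs @ modes          # project back to full space (batch x n_dof)
```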
{
@@ -227,7 +227,7 @@
"id": "16e1f085-7818-4624-92a1-bf7010dbe528",
"metadata": {},
"source": [
"We highlight that the POD modes are directly computed by means of the singular value decomposition (computed over the input data), and not trained using the back-propagation approach. Only the weights of the MLP are actually trained during the optimization loop."
"We highlight that the POD modes are directly computed by means of the singular value decomposition (computed over the input data), and not trained using the backpropagation approach. Only the weights of the MLP are actually trained during the optimization loop."
]
},
{
@@ -254,7 +254,7 @@
"id": "aab51202-36a7-40d2-b96d-47af8892cd2c",
"metadata": {},
"source": [
"Now that we set the `Problem` and the `Model`, we have just to train the model and use it for predict the test snapshots."
"Now that we have set the `Problem` and the `Model`, we just have to train the model and use it to predict the test snapshots."
]
},
{
@@ -320,7 +320,7 @@
"id": "3234710e",
"metadata": {},
"source": [
"Done! Now the computational expensive part is over, we can load in future the model to infer new parameters (simply loading the checkpoint file automatically created by `Lightning`) or test its performances. We measure the relative error for the training and test datasets, printing the mean one."
"Done! Now that the computationally expensive part is over, we can later load the model to infer new parameters (simply loading the checkpoint file automatically created by `Lightning`) or test its performance. We measure the relative error for the training and test datasets, printing the mean one."
]
},
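A minimal sketch of the error metric mentioned in this cell: the mean relative L2 error between predicted and reference snapshots. Variable names are placeholders, not the tutorial's.

```python
import torch

def mean_relative_error(predicted, reference):
    # one relative L2 error per snapshot, averaged over the dataset
    num = torch.linalg.norm(predicted - reference, dim=1)
    den = torch.linalg.norm(reference, dim=1)
    return (num / den).mean()

# err_train = mean_relative_error(model(params_train), snap_train)
# err_test = mean_relative_error(model(params_test), snap_test)
```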
{

tutorials/tutorial9/tutorial.ipynb
@@ -4,11 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: One dimensional Helmotz equation using Periodic Boundary Conditions\n",
"# Tutorial: One dimensional Helmholtz equation using Periodic Boundary Conditions\n",
"This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs)\n",
"a one dimensional Helmotz equation with periodic boundary conditions (PBC).\n",
"a one dimensional Helmholtz equation with periodic boundary conditions (PBC).\n",
"We will train with standard PINN's training by augmenting the input with\n",
"periodic expasion as presented in [*An expert’s guide to training\n",
"periodic expansion as presented in [*An expert’s guide to training\n",
"physics-informed neural networks*](\n",
"https://arxiv.org/abs/2308.08468).\n",
"\n",

@@ -41,7 +41,7 @@
"source": [
"## The problem definition\n",
"\n",
"The one-dimensional Helmotz problem is mathematically written as:\n",
"The one-dimensional Helmholtz problem is mathematically written as:\n",
"$$\n",
"\\begin{cases}\n",
"\\frac{d^2}{dx^2}u(x) - \\lambda u(x) -f(x) &= 0 \\quad x\\in(0,2)\\\\\n",

@@ -49,17 +49,17 @@
"\\end{cases}\n",
"$$\n",
"In this case we are asking the solution to be $C^{\\infty}$ periodic with\n",
"period $2$, on the inifite domain $x\\in(-\\infty, \\infty)$. Notice that the\n",
"classical PINN would need inifinite conditions to evaluate the PBC loss function,\n",
"one for each derivative, which is of course infeasable... \n",
"period $2$, on the infinite domain $x\\in(-\\infty, \\infty)$. Notice that the\n",
"classical PINN would need infinite conditions to evaluate the PBC loss function,\n",
"one for each derivative, which is of course infeasible... \n",
"A possible solution, diverging from the original PINN formulation,\n",
"is to use *coordinates augmentation*. In coordinates augmentation you seek for\n",
"a coordinates transformation $v$ such that $x\\rightarrow v(x)$ such that\n",
"the periodicity condition $ u^{(m)}(x=0) - u^{(m)}(x=2) = 0 \\quad m\\in[0, 1, \\cdots] $ is\n",
"satisfied.\n",
"\n",
"For demonstration porpuses the problem specifics are $\\lambda=-10\\pi^2$,\n",
"and $f(x)=-6\\pi^2\\sin(3\\pi x)\\cos(\\pi x)$ which gives a solution that can be\n",
"For demonstration purposes, the problem specifics are $\\lambda=-10\\pi^2$,\n",
"and $f(x)=-6\\pi^2\\sin(3\\pi x)\\cos(\\pi x)$ which give a solution that can be\n",
"computed analytically $u(x) = \\sin(\\pi x)\\cos(3\\pi x)$."
]
},
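A hedged sketch of the coordinates augmentation $v(x)$ described in this cell, using the standard Fourier embedding from the cited paper. It illustrates the idea behind `PeriodicBoundaryEmbedding` (used later in the tutorial), not its exact implementation.

```python
import torch

def periodic_embedding(x, period=2.0):
    """Map x -> (cos(2*pi*x/P), sin(2*pi*x/P)); any network fed with this
    embedding is P-periodic in x, together with all of its derivatives."""
    omega = 2.0 * torch.pi / period
    return torch.cat([torch.cos(omega * x), torch.sin(omega * x)], dim=-1)

x = torch.tensor([[0.0], [2.0]])
print(periodic_embedding(x))  # rows are (numerically) identical: x=0 and x=2 coincide
```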
@@ -69,11 +69,11 @@
"metadata": {},
"outputs": [],
"source": [
"class Helmotz(SpatialProblem):\n",
"class Helmholtz(SpatialProblem):\n",
"    output_variables = ['u']\n",
"    spatial_domain = CartesianDomain({'x': [0, 2]})\n",
"\n",
"    def helmotz_equation(input_, output_):\n",
"    def Helmholtz_equation(input_, output_):\n",
"        x = input_.extract('x')\n",
"        u_xx = laplacian(output_, input_, components=['u'], d=['x'])\n",
"        f = - 6.*torch.pi**2 * torch.sin(3*torch.pi*x)*torch.cos(torch.pi*x)\n",

@@ -83,15 +83,15 @@
"    # here we write the problem conditions\n",
"    conditions = {\n",
"        'D': Condition(location=spatial_domain,\n",
"                       equation=Equation(helmotz_equation)),\n",
"                       equation=Equation(Helmholtz_equation)),\n",
"    }\n",
"\n",
"    def helmotz_sol(self, pts):\n",
"    def Helmholtz_sol(self, pts):\n",
"        return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)\n",
"    \n",
"    truth_solution = helmotz_sol\n",
"    truth_solution = Helmholtz_sol\n",
"\n",
"problem = Helmotz()\n",
"problem = Helmholtz()\n",
"\n",
"# let's discretise the domain\n",
"problem.discretise_domain(200, 'grid', locations=['D'])"

@@ -101,11 +101,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As usual the Helmotz problem is written in **PINA** code as a class. \n",
"As usual, the Helmholtz problem is written in **PINA** code as a class. \n",
"The equations are written as `conditions` that should be satisfied in the\n",
"corresponding domains. The `truth_solution`\n",
"is the exact solution which will be compared with the predicted one. We used\n",
"latin hypercube sampling for choosing the collocation points."
"Latin Hypercube Sampling for choosing the collocation points."
]
},
{
@@ -159,11 +159,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As simple as that! Notice in higher dimension you can specify different periods\n",
"As simple as that! Notice that in higher dimensions you can specify different periods\n",
"for all dimensions using a dictionary, e.g. `periods={'x':2, 'y':3, ...}`\n",
"would indicate a periodicity of $2$ in $x$, $3$ in $y$, and so on...\n",
"\n",
"We will now sole the problem as usually with the `PINN` and `Trainer` class."
"We will now solve the problem as usual with the `PINN` and `Trainer` class."
]
},
{
@@ -209,7 +209,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, they overlap perfectly! This seeams a good result, considering the simple neural network used to some this (complex) problem. We will now test the neural network on the domain $[-4, 4]$ without retraining. In principle the periodicity should be present since the $v$ function ensures the periodicity in $(-\\infty, \\infty)$."
"Great, they overlap perfectly! This seems a good result, considering the simple neural network used to solve this (complex) problem. We will now test the neural network on the domain $[-4, 4]$ without retraining. In principle the periodicity should be present since the $v$ function ensures the periodicity in $(-\\infty, \\infty)$."
]
},
{
@@ -258,11 +258,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is pretty clear that the network is periodic, with also the error following a periodic pattern. Obviusly a longer training, and a more expressive neural network could improve the results!\n",
"It is pretty clear that the network is periodic, with the error also following a periodic pattern. Obviously a longer training and a more expressive neural network could improve the results!\n",
"\n",
"## What's next?\n",
"\n",
"Nice you have completed the one dimensional Helmotz tutorial of **PINA**! There are multiple directions you can go now:\n",
"Congratulations on completing the one dimensional Helmholtz tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assert the finaly accuracy\n",
"\n",
@@ -272,6 +272,11 @@
"\n",
"4. Many more..."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
tutorials/tutorial9/tutorial.py
@@ -1,11 +1,11 @@
#!/usr/bin/env python
# coding: utf-8

# # Tutorial: One dimensional Helmotz equation using Periodic Boundary Conditions
# # Tutorial: One dimensional Helmholtz equation using Periodic Boundary Conditions
# This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs)
# a one dimensional Helmotz equation with periodic boundary conditions (PBC).
# a one dimensional Helmholtz equation with periodic boundary conditions (PBC).
# We will train with standard PINN's training by augmenting the input with
# periodic expasion as presented in [*An expert’s guide to training
# periodic expansion as presented in [*An expert’s guide to training
# physics-informed neural networks*](
# https://arxiv.org/abs/2308.08468).
#

@@ -30,7 +30,7 @@ from pina.equation import Equation

# ## The problem definition
#
# The one-dimensional Helmotz problem is mathematically written as:
# The one-dimensional Helmholtz problem is mathematically written as:
# $$
# \begin{cases}
# \frac{d^2}{dx^2}u(x) - \lambda u(x) -f(x) &= 0 \quad x\in(0,2)\\

@@ -38,9 +38,9 @@ from pina.equation import Equation

# \end{cases}
# $$
# In this case we are asking the solution to be $C^{\infty}$ periodic with
# period $2$, on the inifite domain $x\in(-\infty, \infty)$. Notice that the
# classical PINN would need inifinite conditions to evaluate the PBC loss function,
# one for each derivative, which is of course infeasable...
# period $2$, on the infinite domain $x\in(-\infty, \infty)$. Notice that the
# classical PINN would need infinite conditions to evaluate the PBC loss function,
# one for each derivative, which is of course infeasible...
# A possible solution, diverging from the original PINN formulation,
# is to use *coordinates augmentation*. In coordinates augmentation you seek for
# a coordinates transformation $v$ such that $x\rightarrow v(x)$ such that

@@ -54,11 +54,11 @@ from pina.equation import Equation

# In[2]:


class Helmotz(SpatialProblem):
class Helmholtz(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 2]})

    def helmotz_equation(input_, output_):
    def Helmholtz_equation(input_, output_):
        x = input_.extract('x')
        u_xx = laplacian(output_, input_, components=['u'], d=['x'])
        f = - 6.*torch.pi**2 * torch.sin(3*torch.pi*x)*torch.cos(torch.pi*x)

@@ -68,21 +68,21 @@ class Helmotz(SpatialProblem):

    # here we write the problem conditions
    conditions = {
        'D': Condition(location=spatial_domain,
                       equation=Equation(helmotz_equation)),
                       equation=Equation(Helmholtz_equation)),
    }

    def helmotz_sol(self, pts):
    def Helmholtz_sol(self, pts):
        return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)

    truth_solution = helmotz_sol
    truth_solution = Helmholtz_sol

problem = Helmotz()
problem = Helmholtz()

# let's discretise the domain
problem.discretise_domain(200, 'grid', locations=['D'])


# As usual the Helmotz problem is written in **PINA** code as a class.
# As usual the Helmholtz problem is written in **PINA** code as a class.
# The equations are written as `conditions` that should be satisfied in the
# corresponding domains. The `truth_solution`
# is the exact solution which will be compared with the predicted one. We used

@@ -129,7 +129,7 @@ model = torch.nn.Sequential(PeriodicBoundaryEmbedding(input_dimension=1,

#
# We will now sole the problem as usually with the `PINN` and `Trainer` class.

# In[5]:
# In[ ]:


pinn = PINN(problem=problem, model=model)

@@ -180,7 +180,7 @@ with torch.no_grad():

#
# ## What's next?
#
# Nice you have completed the one dimensional Helmotz tutorial of **PINA**! There are multiple directions you can go now:
# Nice you have completed the one dimensional Helmholtz tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
#