Fixing tutorials grammar (#242)

* grammar check and sparse rephrasing
* rst created
* meta copyright adjusted
This commit is contained in:
Giuseppe Alessio D'Inverno
2024-03-05 10:43:34 +01:00
committed by GitHub
parent 15136e13f8
commit b10e02103b
23 changed files with 272 additions and 237 deletions


@@ -14,13 +14,13 @@ a toy problem, following the standard API procedure.
Specifically, the tutorial aims to introduce the following topics:
- - Explaining how to build **PINA** Problem,
- - Showing how to generate data for ``PINN`` straining
+ - Explaining how to build **PINA** Problems,
+ - Showing how to generate data for ``PINN`` training
These are the two main steps needed **before** starting the modelling
optimization (choose model and solver, and train). We will show each
step in detail, and at the end, we will solve a simple Ordinary
- Differential Equation (ODE) problem busing the ``PINN`` solver.
+ Differential Equation (ODE) problem using the ``PINN`` solver.
Build a PINA problem
--------------------
@@ -66,9 +66,8 @@ the tensor. The ``spatial_domain`` variable indicates where the sample
points are going to be sampled in the domain, in this case
:math:`x\in[0,1]`.
- What about if our equation is also time dependent? In this case, our
- ``class`` will inherit from both ``SpatialProblem`` and
- ``TimeDependentProblem``:
+ What if our equation is also time-dependent? In this case, our ``class``
+ will inherit from both ``SpatialProblem`` and ``TimeDependentProblem``:
.. code:: ipython3
@@ -83,6 +82,13 @@ What about if our equation is also time dependent? In this case, our
# other stuff ...
+ .. parsed-literal::
+ Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
+ Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
where we have included the ``temporal_domain`` variable, indicating the
time domain in which the solution is sought.
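For intuition, the inheritance pattern described above can be sketched in plain Python. The base classes below are hypothetical minimal stand-ins for the real ones in ``pina.problem``, and plain dictionaries stand in for PINA's domain objects:

```python
# Hypothetical minimal stand-ins for PINA's problem base classes (the
# real ones live in pina.problem); dicts replace PINA's domain objects.
class SpatialProblem:
    spatial_domain = None   # subclasses declare where x lives

class TimeDependentProblem:
    temporal_domain = None  # subclasses declare where t lives

# A time-dependent problem inherits from both and declares both domains.
class TimeDependentODE(SpatialProblem, TimeDependentProblem):
    spatial_domain = {"x": [0, 1]}
    temporal_domain = {"t": [0, 1]}

problem = TimeDependentODE()
print(problem.spatial_domain)   # {'x': [0, 1]}
print(problem.temporal_domain)  # {'t': [0, 1]}
```

The point of the multiple inheritance is simply that each base class demands one piece of the problem definition: the spatial domain and the time domain.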
@@ -157,7 +163,7 @@ returning the difference between subtracting the variable ``u`` from its
gradient (the residual), which we hope to minimize to 0. This is done
for all conditions. Notice that we do not pass directly a ``python``
function, but an ``Equation`` object, which is initialized with the
- ``python`` function. This is done so that all the computations, and
+ ``python`` function. This is done so that all the computations and
internal checks are done inside **PINA**.
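For intuition, the residual can be checked outside PINA with plain Python. Assuming, as the description above suggests, that the ODE is du/dx = u (so the residual is the gradient of u minus u itself, and u(x) = exp(x) solves it), a finite-difference sketch:

```python
import math

# Hedged sketch: assumes the tutorial's ODE is du/dx = u, so the residual
# is u'(x) - u(x). The gradient is approximated here by a central finite
# difference instead of PINA's autograd-based operators.
def residual(u, x, h=1e-6):
    du_dx = (u(x + h) - u(x - h)) / (2 * h)  # central finite difference
    return du_dx - u(x)

# u(x) = exp(x) should drive the residual to (numerically) zero.
for x in [0.0, 0.5, 1.0]:
    assert abs(residual(math.exp, x)) < 1e-4
print("residual of exp(x) vanishes on [0, 1]")
```

Training a ``PINN`` amounts to pushing this residual toward zero at the sampled collocation points, alongside the boundary/initial conditions.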
Once we have defined the function, we need to tell the neural network
@@ -169,25 +175,25 @@ possibilities are allowed, see the documentation for reference).
Finally, it's possible to define a ``truth_solution`` function, which
can be useful if we want to plot the results and see how the real
solution compares to the expected (true) solution. Notice that the
- ``truth_solution`` function is a method of the ``PINN`` class, but is
+ ``truth_solution`` function is a method of the ``PINN`` class, but it is
not mandatory for problem definition.
Generate data
-------------
Data for training can come in the form of direct numerical simulation
- reusults, or points in the domains. In case we do unsupervised learning,
- we just need the collocation points for training, i.e. points where we
- want to evaluate the neural network. Sampling point in **PINA** is very
- easy, here we show three examples using the ``.discretise_domain``
- method of the ``AbstractProblem`` class.
+ results, or points in the domains. In case we perform unsupervised
+ learning, we just need the collocation points for training, i.e. points
+ where we want to evaluate the neural network. Sampling points in **PINA**
+ is very easy; here we show three examples using the
+ ``.discretise_domain`` method of the ``AbstractProblem`` class.
.. code:: ipython3
# sampling 20 points in [0, 1] through discretization in all locations
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')
- # sampling 20 points in (0, 1) through latin hypercube samping in D, and 1 point in x0
+ # sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])
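For intuition, the three sampling modes used above (``'grid'``, ``'latin'``, ``'random'``) can be mimicked in one dimension with plain Python. These helpers are illustrative stand-ins, not PINA's implementation:

```python
import random

# Illustrative one-dimensional stand-ins for the three discretisation
# modes; PINA's .discretise_domain does this on its own domain objects.
def sample_grid(n, lo=0.0, hi=1.0):
    # n equispaced points including both endpoints (requires n >= 2)
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def sample_random(n, lo=0.0, hi=1.0, rng=random):
    # n independent uniform draws
    return [rng.uniform(lo, hi) for _ in range(n)]

def sample_latin(n, lo=0.0, hi=1.0, rng=random):
    # latin hypercube in 1D: one random point per equal-width stratum
    pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
    rng.shuffle(pts)
    return pts

print(sample_grid(5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

The latin hypercube variant guarantees exactly one point per stratum, which spreads a small budget of collocation points more evenly than purely random sampling.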
@@ -301,11 +307,13 @@ If you want to track the metric by yourself without a logger, use
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
+ /Users/alessio/opt/anaconda3/envs/pina/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:67: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
+ Missing logger folder: /Users/alessio/Downloads/lightning_logs
.. parsed-literal::
- Epoch 1499: : 1it [00:00, 272.55it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
+ Epoch 1499: | | 1/? [00:00<00:00, 167.08it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
.. parsed-literal::
@@ -314,7 +322,7 @@ If you want to track the metric by yourself without a logger, use
.. parsed-literal::
- Epoch 1499: : 1it [00:00, 167.14it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]
+ Epoch 1499: | | 1/? [00:00<00:00, 102.49it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
After the training, we can inspect the trainer's logged metrics (by default
@@ -332,8 +340,8 @@ loss can be accessed by ``trainer.logged_metrics``
.. parsed-literal::
- {'x0_loss': tensor(7.7149e-06),
- 'D_loss': tensor(0.0007),
+ {'x0_loss': tensor(1.0674e-05),
+ 'D_loss': tensor(0.0008),
'mean_loss': tensor(0.0004)}
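The ``mean_loss`` entry is, presumably, just the unweighted average of the two per-condition losses; a quick sanity check against the values logged above:

```python
# x0_loss and D_loss as logged above; mean_loss should be their average
# (assuming equal weighting of the two conditions).
x0_loss, D_loss = 1.0674e-05, 0.0008
mean_loss = (x0_loss + D_loss) / 2
print(round(mean_loss, 4))  # 0.0004
```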
@@ -347,8 +355,13 @@ quatitative plots of the solution.
pl.plot(solver=pinn)
.. parsed-literal::
.. image:: tutorial_files/tutorial_23_0.png
+ Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
.. image:: tutorial_files/tutorial_23_1.png
@@ -375,14 +388,16 @@ could train for longer
What's next?
------------
- Nice you have completed the introductory tutorial of **PINA**! There are
- multiple directions you can go now:
+ Congratulations on completing the introductory tutorial of **PINA**!
+ There are several directions you can go now:
1. Train the network for longer or with different layer sizes and assess
the final accuracy
2. Train the network using other types of models (see ``pina.model``)
- 3. GPU trainining and benchmark the speed
+ 3. GPU training and speed benchmarking
4. Many more…
