diff --git a/docs/source/_rst/_code.rst b/docs/source/_rst/_code.rst index f088fa3..fb0674e 100644 --- a/docs/source/_rst/_code.rst +++ b/docs/source/_rst/_code.rst @@ -14,8 +14,8 @@ The pipeline to solve differential equations with PINA follows just five steps: 1. Define the `Problem`_ the user aim to solve 2. Generate data using built in `Domains`_, or load high level simulation results as :doc:`LabelTensor ` 3. Choose or build one or more `Models`_ to solve the problem - 4. Choose a solver across PINA available `Solvers`_, or build one using the :doc:`SolverInterface ` - 5. Train the model with the PINA :doc:`Trainer `, enhance the train with `Callback`_ + 4. Choose a solver across PINA available `Solvers`_, or build one using the :doc:`SolverInterface ` + 5. Train the model with the PINA :doc:`Trainer `, enhance the train with `Callback`_ Trainer, Dataset and Datamodule @@ -33,6 +33,18 @@ Data Types :titlesonly: LabelTensor + Graph + + +Graphs Structures +------------------ +.. toctree:: + :titlesonly: + + Graph + GraphBuilder + RadiusGraph + KNNGraph Conditions @@ -53,17 +65,19 @@ Solvers .. toctree:: :titlesonly: - SolverInterface - PINNInterface - PINN - GPINN - CausalPINN - CompetitivePINN - SAPINN - RBAPINN - Supervised solver - ReducedOrderModelSolver - GAROM + SolverInterface + SingleSolverInterface + MultiSolverInterface + PINNInterface + PINN + GradientPINN + CausalPINN + CompetitivePINN + SelfAdaptivePINN + RBAPINN + SupervisedSolver + ReducedOrderModelSolver + GAROM Models @@ -112,6 +126,17 @@ Reduction and Embeddings Fourier Feature Embedding Radial Basis Function Interpolation +Optimizers and Schedulers +-------------------------- + +.. toctree:: + :titlesonly: + + Optimizer + Scheduler + TorchOptimizer + TorchScheduler + Adaptive Activation Functions ------------------------------- @@ -134,8 +159,8 @@ Adaptive Activation Functions Adaptive Exp -Equations -------------- +Equations and Differential Operators +--------------------------------------- .. toctree:: :titlesonly: @@ -144,16 +169,7 @@ Equations Equation SystemEquation Equation Factory - - -Differential Operators -------------------------- - -.. toctree:: - :titlesonly: - - Equations - Differential Operators + Differential Operators Problems @@ -162,10 +178,26 @@ Problems .. toctree:: :titlesonly: - AbstractProblem - SpatialProblem - TimeDependentProblem - ParametricProblem + AbstractProblem + InverseProblem + ParametricProblem + SpatialProblem + TimeDependentProblem + +Problems Zoo +-------------- + +.. toctree:: + :titlesonly: + + AdvectionProblem + AllenCahnProblem + DiffusionReactionProblem + HelmholtzProblem + InversePoisson2DSquareProblem + Poisson2DSquareProblem + SupervisedProblem + Geometrical Domains --------------------- diff --git a/docs/source/_rst/graph/graph.rst b/docs/source/_rst/graph/graph.rst new file mode 100644 index 0000000..1921f83 --- /dev/null +++ b/docs/source/_rst/graph/graph.rst @@ -0,0 +1,9 @@ +Graph +=========== +.. currentmodule:: pina.graph + + +.. autoclass:: Graph + :members: + :private-members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/graph/graph_builder.rst b/docs/source/_rst/graph/graph_builder.rst new file mode 100644 index 0000000..2508aec --- /dev/null +++ b/docs/source/_rst/graph/graph_builder.rst @@ -0,0 +1,9 @@ +GraphBuilder +============== +.. currentmodule:: pina.graph + + +.. 
autoclass:: GraphBuilder + :members: + :private-members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/graph/knn_graph.rst b/docs/source/_rst/graph/knn_graph.rst new file mode 100644 index 0000000..8ef0b19 --- /dev/null +++ b/docs/source/_rst/graph/knn_graph.rst @@ -0,0 +1,9 @@ +KNNGraph +=========== +.. currentmodule:: pina.graph + + +.. autoclass:: KNNGraph + :members: + :private-members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/graph/radius_graph.rst b/docs/source/_rst/graph/radius_graph.rst new file mode 100644 index 0000000..7414d2d --- /dev/null +++ b/docs/source/_rst/graph/radius_graph.rst @@ -0,0 +1,9 @@ +RadiusGraph +============= +.. currentmodule:: pina.graph + + +.. autoclass:: RadiusGraph + :members: + :private-members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/operator.rst b/docs/source/_rst/operator.rst new file mode 100644 index 0000000..42746a6 --- /dev/null +++ b/docs/source/_rst/operator.rst @@ -0,0 +1,8 @@ +Operators +=========== + +.. currentmodule:: pina.operator + +.. automodule:: pina.operator + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/operators.rst b/docs/source/_rst/operators.rst deleted file mode 100644 index 59f7c7a..0000000 --- a/docs/source/_rst/operators.rst +++ /dev/null @@ -1,8 +0,0 @@ -Operators -=========== - -.. currentmodule:: pina.operators - -.. automodule:: pina.operators - :members: - :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/optim/optimizer_interface.rst b/docs/source/_rst/optim/optimizer_interface.rst new file mode 100644 index 0000000..88c18e8 --- /dev/null +++ b/docs/source/_rst/optim/optimizer_interface.rst @@ -0,0 +1,7 @@ +Optimizer +============ +.. currentmodule:: pina.optim.optimizer_interface + +.. autoclass:: Optimizer + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/optim/scheduler_interface.rst b/docs/source/_rst/optim/scheduler_interface.rst new file mode 100644 index 0000000..ab8ee29 --- /dev/null +++ b/docs/source/_rst/optim/scheduler_interface.rst @@ -0,0 +1,7 @@ +Scheduler +============= +.. currentmodule:: pina.optim.scheduler_interface + +.. autoclass:: Scheduler + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/optim/torch_optimizer.rst b/docs/source/_rst/optim/torch_optimizer.rst new file mode 100644 index 0000000..3e6c9d9 --- /dev/null +++ b/docs/source/_rst/optim/torch_optimizer.rst @@ -0,0 +1,7 @@ +TorchOptimizer +=============== +.. currentmodule:: pina.optim.torch_optimizer + +.. autoclass:: TorchOptimizer + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/optim/torch_scheduler.rst b/docs/source/_rst/optim/torch_scheduler.rst new file mode 100644 index 0000000..5c3e4df --- /dev/null +++ b/docs/source/_rst/optim/torch_scheduler.rst @@ -0,0 +1,7 @@ +TorchScheduler +=============== +.. currentmodule:: pina.optim.torch_scheduler + +.. autoclass:: TorchScheduler + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/plotter.rst b/docs/source/_rst/plotter.rst deleted file mode 100644 index b6e94a7..0000000 --- a/docs/source/_rst/plotter.rst +++ /dev/null @@ -1,8 +0,0 @@ -Plotter -=========== -.. currentmodule:: pina.plotter - -.. 
automodule:: pina.plotter - :members: - :show-inheritance: - :noindex: diff --git a/docs/source/_rst/problem/abstractproblem.rst b/docs/source/_rst/problem/abstract_problem.rst similarity index 100% rename from docs/source/_rst/problem/abstractproblem.rst rename to docs/source/_rst/problem/abstract_problem.rst diff --git a/docs/source/_rst/problem/inverse_problem.rst b/docs/source/_rst/problem/inverse_problem.rst new file mode 100644 index 0000000..5ce306f --- /dev/null +++ b/docs/source/_rst/problem/inverse_problem.rst @@ -0,0 +1,9 @@ +InverseProblem +============== +.. currentmodule:: pina.problem.inverse_problem + +.. automodule:: pina.problem.inverse_problem + +.. autoclass:: InverseProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/parametricproblem.rst b/docs/source/_rst/problem/parametric_problem.rst similarity index 100% rename from docs/source/_rst/problem/parametricproblem.rst rename to docs/source/_rst/problem/parametric_problem.rst diff --git a/docs/source/_rst/problem/spatialproblem.rst b/docs/source/_rst/problem/spatial_problem.rst similarity index 100% rename from docs/source/_rst/problem/spatialproblem.rst rename to docs/source/_rst/problem/spatial_problem.rst diff --git a/docs/source/_rst/problem/timedepproblem.rst b/docs/source/_rst/problem/time_dependent_problem.rst similarity index 52% rename from docs/source/_rst/problem/timedepproblem.rst rename to docs/source/_rst/problem/time_dependent_problem.rst index 93b8cb5..db94121 100644 --- a/docs/source/_rst/problem/timedepproblem.rst +++ b/docs/source/_rst/problem/time_dependent_problem.rst @@ -1,8 +1,8 @@ TimeDependentProblem ==================== -.. currentmodule:: pina.problem.timedep_problem +.. currentmodule:: pina.problem.time_dependent_problem -.. automodule:: pina.problem.timedep_problem +.. automodule:: pina.problem.time_dependent_problem .. autoclass:: TimeDependentProblem :members: diff --git a/docs/source/_rst/problem/zoo/advection.rst b/docs/source/_rst/problem/zoo/advection.rst new file mode 100644 index 0000000..b83cc9d --- /dev/null +++ b/docs/source/_rst/problem/zoo/advection.rst @@ -0,0 +1,9 @@ +AdvectionProblem +================== +.. currentmodule:: pina.problem.zoo.advection + +.. automodule:: pina.problem.zoo.advection + +.. autoclass:: AdvectionProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/allen_cahn.rst b/docs/source/_rst/problem/zoo/allen_cahn.rst new file mode 100644 index 0000000..ada3465 --- /dev/null +++ b/docs/source/_rst/problem/zoo/allen_cahn.rst @@ -0,0 +1,9 @@ +AllenCahnProblem +================== +.. currentmodule:: pina.problem.zoo.allen_cahn + +.. automodule:: pina.problem.zoo.allen_cahn + +.. autoclass:: AllenCahnProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/diffusion_reaction.rst b/docs/source/_rst/problem/zoo/diffusion_reaction.rst new file mode 100644 index 0000000..0cad0fd --- /dev/null +++ b/docs/source/_rst/problem/zoo/diffusion_reaction.rst @@ -0,0 +1,9 @@ +DiffusionReactionProblem +========================= +.. currentmodule:: pina.problem.zoo.diffusion_reaction + +.. automodule:: pina.problem.zoo.diffusion_reaction + +.. autoclass:: DiffusionReactionProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/helmholtz.rst b/docs/source/_rst/problem/zoo/helmholtz.rst new file mode 100644 index 0000000..af4ec7d --- /dev/null +++ b/docs/source/_rst/problem/zoo/helmholtz.rst @@ -0,0 +1,9 @@ +HelmholtzProblem +================== +.. 
currentmodule:: pina.problem.zoo.helmholtz + +.. automodule:: pina.problem.zoo.helmholtz + +.. autoclass:: HelmholtzProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/inverse_poisson_2d_square.rst b/docs/source/_rst/problem/zoo/inverse_poisson_2d_square.rst new file mode 100644 index 0000000..727c17b --- /dev/null +++ b/docs/source/_rst/problem/zoo/inverse_poisson_2d_square.rst @@ -0,0 +1,9 @@ +InversePoisson2DSquareProblem +============================== +.. currentmodule:: pina.problem.zoo.inverse_poisson_2d_square + +.. automodule:: pina.problem.zoo.inverse_poisson_2d_square + +.. autoclass:: InversePoisson2DSquareProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/poisson_2d_square.rst b/docs/source/_rst/problem/zoo/poisson_2d_square.rst new file mode 100644 index 0000000..718c33c --- /dev/null +++ b/docs/source/_rst/problem/zoo/poisson_2d_square.rst @@ -0,0 +1,9 @@ +Poisson2DSquareProblem +======================== +.. currentmodule:: pina.problem.zoo.poisson_2d_square + +.. automodule:: pina.problem.zoo.poisson_2d_square + +.. autoclass:: Poisson2DSquareProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/problem/zoo/supervised_problem.rst b/docs/source/_rst/problem/zoo/supervised_problem.rst new file mode 100644 index 0000000..aad7d5a --- /dev/null +++ b/docs/source/_rst/problem/zoo/supervised_problem.rst @@ -0,0 +1,9 @@ +SupervisedProblem +================== +.. currentmodule:: pina.problem.zoo.supervised_problem + +.. automodule:: pina.problem.zoo.supervised_problem + +.. autoclass:: SupervisedProblem + :members: + :show-inheritance: diff --git a/docs/source/_rst/solvers/garom.rst b/docs/source/_rst/solver/garom.rst similarity index 64% rename from docs/source/_rst/solvers/garom.rst rename to docs/source/_rst/solver/garom.rst index 5fcd97f..0e5820f 100644 --- a/docs/source/_rst/solvers/garom.rst +++ b/docs/source/_rst/solver/garom.rst @@ -1,6 +1,6 @@ GAROM ====== -.. currentmodule:: pina.solvers.garom +.. currentmodule:: pina.solver.garom .. autoclass:: GAROM :members: diff --git a/docs/source/_rst/solver/multi_solver_interface.rst b/docs/source/_rst/solver/multi_solver_interface.rst new file mode 100644 index 0000000..7f68c83 --- /dev/null +++ b/docs/source/_rst/solver/multi_solver_interface.rst @@ -0,0 +1,8 @@ +MultiSolverInterface +====================== +.. currentmodule:: pina.solver.solver + +.. autoclass:: MultiSolverInterface + :show-inheritance: + :members: + diff --git a/docs/source/_rst/solvers/causalpinn.rst b/docs/source/_rst/solver/physic_informed_solver/causal_pinn.rst similarity index 57% rename from docs/source/_rst/solvers/causalpinn.rst rename to docs/source/_rst/solver/physic_informed_solver/causal_pinn.rst index 28f7f15..a418776 100644 --- a/docs/source/_rst/solvers/causalpinn.rst +++ b/docs/source/_rst/solver/physic_informed_solver/causal_pinn.rst @@ -1,6 +1,6 @@ CausalPINN ============== -.. currentmodule:: pina.solvers.pinns.causalpinn +.. currentmodule:: pina.solver.physic_informed_solver.causalpinn .. 
autoclass:: CausalPINN :members: diff --git a/docs/source/_rst/solvers/competitivepinn.rst b/docs/source/_rst/solver/physic_informed_solver/competitive_pinn.rst similarity index 58% rename from docs/source/_rst/solvers/competitivepinn.rst rename to docs/source/_rst/solver/physic_informed_solver/competitive_pinn.rst index 2bbe242..5bfa431 100644 --- a/docs/source/_rst/solvers/competitivepinn.rst +++ b/docs/source/_rst/solver/physic_informed_solver/competitive_pinn.rst @@ -1,6 +1,6 @@ CompetitivePINN ================= -.. currentmodule:: pina.solvers.pinns.competitive_pinn +.. currentmodule:: pina.solver.physic_informed_solver.competitive_pinn .. autoclass:: CompetitivePINN :members: diff --git a/docs/source/_rst/solver/physic_informed_solver/gradient_pinn.rst b/docs/source/_rst/solver/physic_informed_solver/gradient_pinn.rst new file mode 100644 index 0000000..ea70393 --- /dev/null +++ b/docs/source/_rst/solver/physic_informed_solver/gradient_pinn.rst @@ -0,0 +1,7 @@ +GradientPINN +============== +.. currentmodule:: pina.solver.physic_informed_solver.gradient_pinn + +.. autoclass:: GradientPINN + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/solvers/pinn.rst b/docs/source/_rst/solver/physic_informed_solver/pinn.rst similarity index 53% rename from docs/source/_rst/solvers/pinn.rst rename to docs/source/_rst/solver/physic_informed_solver/pinn.rst index e1c2b59..974cddd 100644 --- a/docs/source/_rst/solvers/pinn.rst +++ b/docs/source/_rst/solver/physic_informed_solver/pinn.rst @@ -1,6 +1,6 @@ PINN ====== -.. currentmodule:: pina.solvers.pinns.pinn +.. currentmodule:: pina.solver.physic_informed_solver.pinn .. autoclass:: PINN :members: diff --git a/docs/source/_rst/solvers/basepinn.rst b/docs/source/_rst/solver/physic_informed_solver/pinn_interface.rst similarity index 58% rename from docs/source/_rst/solvers/basepinn.rst rename to docs/source/_rst/solver/physic_informed_solver/pinn_interface.rst index c650795..e9b83ac 100644 --- a/docs/source/_rst/solvers/basepinn.rst +++ b/docs/source/_rst/solver/physic_informed_solver/pinn_interface.rst @@ -1,6 +1,6 @@ PINNInterface ================= -.. currentmodule:: pina.solvers.pinns.basepinn +.. currentmodule:: pina.solver.physic_informed_solver.pinn_interface .. autoclass:: PINNInterface :members: diff --git a/docs/source/_rst/solvers/rba_pinn.rst b/docs/source/_rst/solver/physic_informed_solver/rba_pinn.rst similarity index 54% rename from docs/source/_rst/solvers/rba_pinn.rst rename to docs/source/_rst/solver/physic_informed_solver/rba_pinn.rst index b964cce..18899bd 100644 --- a/docs/source/_rst/solvers/rba_pinn.rst +++ b/docs/source/_rst/solver/physic_informed_solver/rba_pinn.rst @@ -1,6 +1,6 @@ RBAPINN ======== -.. currentmodule:: pina.solvers.pinns.rbapinn +.. currentmodule:: pina.solver.physic_informed_solver.rbapinn .. autoclass:: RBAPINN :members: diff --git a/docs/source/_rst/solver/physic_informed_solver/self_adaptive_pinn.rst b/docs/source/_rst/solver/physic_informed_solver/self_adaptive_pinn.rst new file mode 100644 index 0000000..dd242bb --- /dev/null +++ b/docs/source/_rst/solver/physic_informed_solver/self_adaptive_pinn.rst @@ -0,0 +1,7 @@ +SelfAdaptivePINN +================== +.. currentmodule:: pina.solver.physic_informed_solver.self_adaptive_pinn + +.. 
autoclass:: SelfAdaptivePINN + :members: + :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/solvers/rom.rst b/docs/source/_rst/solver/reduced_order_model.rst similarity index 71% rename from docs/source/_rst/solvers/rom.rst rename to docs/source/_rst/solver/reduced_order_model.rst index 3ee534b..33a9095 100644 --- a/docs/source/_rst/solvers/rom.rst +++ b/docs/source/_rst/solver/reduced_order_model.rst @@ -1,6 +1,6 @@ ReducedOrderModelSolver ========================== -.. currentmodule:: pina.solvers.rom +.. currentmodule:: pina.solver.reduced_order_model .. autoclass:: ReducedOrderModelSolver :members: diff --git a/docs/source/_rst/solver/single_solver_interface.rst b/docs/source/_rst/solver/single_solver_interface.rst new file mode 100644 index 0000000..5b85f11 --- /dev/null +++ b/docs/source/_rst/solver/single_solver_interface.rst @@ -0,0 +1,8 @@ +SingleSolverInterface +====================== +.. currentmodule:: pina.solver.solver + +.. autoclass:: SingleSolverInterface + :show-inheritance: + :members: + diff --git a/docs/source/_rst/solvers/solver_interface.rst b/docs/source/_rst/solver/solver_interface.rst similarity index 70% rename from docs/source/_rst/solvers/solver_interface.rst rename to docs/source/_rst/solver/solver_interface.rst index 363e1db..9bb1178 100644 --- a/docs/source/_rst/solvers/solver_interface.rst +++ b/docs/source/_rst/solver/solver_interface.rst @@ -1,7 +1,8 @@ SolverInterface ================= -.. currentmodule:: pina.solvers.solver +.. currentmodule:: pina.solver.solver .. autoclass:: SolverInterface :show-inheritance: :members: + diff --git a/docs/source/_rst/solvers/supervised.rst b/docs/source/_rst/solver/supervised.rst similarity index 70% rename from docs/source/_rst/solvers/supervised.rst rename to docs/source/_rst/solver/supervised.rst index 895759e..19978f9 100644 --- a/docs/source/_rst/solvers/supervised.rst +++ b/docs/source/_rst/solver/supervised.rst @@ -1,6 +1,6 @@ SupervisedSolver =================== -.. currentmodule:: pina.solvers.supervised +.. currentmodule:: pina.solver.supervised .. autoclass:: SupervisedSolver :members: diff --git a/docs/source/_rst/solvers/gpinn.rst b/docs/source/_rst/solvers/gpinn.rst deleted file mode 100644 index ee076a5..0000000 --- a/docs/source/_rst/solvers/gpinn.rst +++ /dev/null @@ -1,7 +0,0 @@ -GPINN -====== -.. currentmodule:: pina.solvers.pinns.gpinn - -.. autoclass:: GPINN - :members: - :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/solvers/sapinn.rst b/docs/source/_rst/solvers/sapinn.rst deleted file mode 100644 index b20891f..0000000 --- a/docs/source/_rst/solvers/sapinn.rst +++ /dev/null @@ -1,7 +0,0 @@ -SAPINN -====== -.. currentmodule:: pina.solvers.pinns.sapinn - -.. autoclass:: SAPINN - :members: - :show-inheritance: \ No newline at end of file diff --git a/docs/source/_rst/tutorials/tutorial1/tutorial.rst b/docs/source/_rst/tutorials/tutorial1/tutorial.rst deleted file mode 100644 index d15cb63..0000000 --- a/docs/source/_rst/tutorials/tutorial1/tutorial.rst +++ /dev/null @@ -1,385 +0,0 @@ -Tutorial: Physics Informed Neural Networks on PINA -================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb - -In this tutorial, we will demonstrate a typical use case of **PINA** on -a toy problem, following the standard API procedure. - -.. 
raw:: html
- -Specifically, the tutorial aims to introduce the following topics: - -- Explaining how to build **PINA** Problems, -- Showing how to generate data for ``PINN`` training - -These are the two main steps needed **before** starting the modelling -optimization (choose model and solver, and train). We will show each -step in detail, and at the end, we will solve a simple Ordinary -Differential Equation (ODE) problem using the ``PINN`` solver. - -Build a PINA problem --------------------- - -Problem definition in the **PINA** framework is done by building a -python ``class``, which inherits from one or more problem classes -(``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …) -depending on the nature of the problem. Below is an example: - -Simple Ordinary Differential Equation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Consider the following: - -.. math:: - - - \begin{equation} - \begin{cases} - \frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\ - u(x=0) &= 1 \\ - \end{cases} - \end{equation} - -with the analytical solution :math:`u(x) = e^x`. In this case, our ODE -depends only on the spatial variable :math:`x\in(0,1)` , meaning that -our ``Problem`` class is going to be inherited from the -``SpatialProblem`` class: - -.. code:: python - - from pina.problem import SpatialProblem - from pina.geometry import CartesianProblem - - class SimpleODE(SpatialProblem): - - output_variables = ['u'] - spatial_domain = CartesianProblem({'x': [0, 1]}) - - # other stuff ... - -Notice that we define ``output_variables`` as a list of symbols, -indicating the output variables of our equation (in this case only -:math:`u`), this is done because in **PINA** the ``torch.Tensor``\ s are -labelled, allowing the user maximal flexibility for the manipulation of -the tensor. The ``spatial_domain`` variable indicates where the sample -points are going to be sampled in the domain, in this case -:math:`x\in[0,1]`. - -What if our equation is also time-dependent? In this case, our ``class`` -will inherit from both ``SpatialProblem`` and ``TimeDependentProblem``: - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - from pina.problem import SpatialProblem, TimeDependentProblem - from pina.geometry import CartesianDomain - - class TimeSpaceODE(SpatialProblem, TimeDependentProblem): - - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1]}) - temporal_domain = CartesianDomain({'t': [0, 1]}) - - # other stuff ... - - -where we have included the ``temporal_domain`` variable, indicating the -time domain wanted for the solution. - -In summary, using **PINA**, we can initialize a problem with a class -which inherits from different base classes: ``SpatialProblem``, -``TimeDependentProblem``, ``ParametricProblem``, and so on depending on -the type of problem we are considering. 
Here are some examples (more on -the official documentation): - -* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s) ``spatial_domain`` -* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation with temporal variable(s) ``temporal_domain`` -* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation with parametric variable(s) ``parameter_domain`` -* ``AbstractProblem`` :math:`\rightarrow` any **PINA** problem inherits from here - -Write the problem class -~~~~~~~~~~~~~~~~~~~~~~~ - -Once the ``Problem`` class is initialized, we need to represent the -differential equation in **PINA**. In order to do this, we need to load -the **PINA** operators from ``pina.operators`` module. Again, we’ll -consider Equation (1) and represent it in **PINA**: - -.. code:: ipython3 - - from pina.problem import SpatialProblem - from pina.operators import grad - from pina import Condition - from pina.geometry import CartesianDomain - from pina.equation import Equation, FixedValue - - import torch - - - class SimpleODE(SpatialProblem): - - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1]}) - - # defining the ode equation - def ode_equation(input_, output_): - - # computing the derivative - u_x = grad(output_, input_, components=['u'], d=['x']) - - # extracting the u input variable - u = output_.extract(['u']) - - # calculate the residual and return it - return u_x - u - - # conditions to hold - conditions = { - 'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)), # We fix initial condition to value 1 - 'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)), # We wrap the python equation using Equation - } - - # sampled points (see below) - input_pts = None - - # defining the true solution - def truth_solution(self, pts): - return torch.exp(pts.extract(['x'])) - - problem = SimpleODE() - -After we define the ``Problem`` class, we need to write different class -methods, where each method is a function returning a residual. These -functions are the ones minimized during PINN optimization, given the -initial conditions. For example, in the domain :math:`[0,1]`, the ODE -equation (``ode_equation``) must be satisfied. We represent this by -returning the difference between subtracting the variable ``u`` from its -gradient (the residual), which we hope to minimize to 0. This is done -for all conditions. Notice that we do not pass directly a ``python`` -function, but an ``Equation`` object, which is initialized with the -``python`` function. This is done so that all the computations and -internal checks are done inside **PINA**. - -Once we have defined the function, we need to tell the neural network -where these methods are to be applied. To do so, we use the -``Condition`` class. In the ``Condition`` class, we pass the location -points and the equation we want minimized on those points (other -possibilities are allowed, see the documentation for reference). - -Finally, it’s possible to define a ``truth_solution`` function, which -can be useful if we want to plot the results and see how the real -solution compares to the expected (true) solution. Notice that the -``truth_solution`` function is a method of the ``PINN`` class, but it is -not mandatory for problem definition. - -Generate data -------------- - -Data for training can come in form of direct numerical simulation -results, or points in the domains. 
In case we perform unsupervised -learning, we just need the collocation points for training, i.e. points -where we want to evaluate the neural network. Sampling point in **PINA** -is very easy, here we show three examples using the -``.discretise_domain`` method of the ``AbstractProblem`` class. - -.. code:: ipython3 - - # sampling 20 points in [0, 1] through discretization in all locations - problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all') - - # sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0 - problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D']) - problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0']) - - # sampling 20 points in (0, 1) randomly - problem.discretise_domain(n=20, mode='random', variables=['x']) - -We are going to use latin hypercube points for sampling. We need to -sample in all the conditions domains. In our case we sample in ``D`` and -``x0``. - -.. code:: ipython3 - - # sampling for training - problem.discretise_domain(1, 'random', locations=['x0']) - problem.discretise_domain(20, 'lh', locations=['D']) - -The points are saved in a python ``dict``, and can be accessed by -calling the attribute ``input_pts`` of the problem - -.. code:: ipython3 - - print('Input points:', problem.input_pts) - print('Input points labels:', problem.input_pts['D'].labels) - - -.. parsed-literal:: - - Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.7644]], - [[0.2028]], - [[0.1789]], - [[0.4294]], - [[0.3239]], - [[0.6531]], - [[0.1406]], - [[0.6062]], - [[0.4969]], - [[0.7429]], - [[0.8681]], - [[0.3800]], - [[0.5357]], - [[0.0152]], - [[0.9679]], - [[0.8101]], - [[0.0662]], - [[0.9095]], - [[0.2503]], - [[0.5580]]])} - Input points labels: ['x'] - - -To visualize the sampled points we can use the ``.plot_samples`` method -of the ``Plotter`` class - -.. code:: ipython3 - - from pina import Plotter - - pl = Plotter() - pl.plot_samples(problem=problem) - - - -.. image:: tutorial_files/tutorial_16_0.png - - -Perform a small training ------------------------- - -Once we have defined the problem and generated the data we can start the -modelling. Here we will choose a ``FeedForward`` neural network -available in ``pina.model``, and we will train using the ``PINN`` solver -from ``pina.solvers``. We highlight that this training is fairly simple, -for more advanced stuff consider the tutorials in the **Physics Informed -Neural Networks** section of **Tutorials**. For training we use the -``Trainer`` class from ``pina.trainer``. Here we show a very short -training and some method for plotting the results. Notice that by -default all relevant metrics (e.g. MSE error during training) are going -to be tracked using a ``lightining`` logger, by default ``CSVLogger``. -If you want to track the metric by yourself without a logger, use -``pina.callbacks.MetricTracker``. - -.. 
code:: ipython3 - - from pina import Trainer - from pina.solvers import PINN - from pina.model import FeedForward - from pina.callbacks import MetricTracker - - - # build the model - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - - # create the PINN object - pinn = PINN(problem, model) - - # create the trainer - trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - - # train - trainer.train() - -After the training we can inspect trainer logged metrics (by default -**PINA** logs mean square error residual loss). The logged metrics can -be accessed online using one of the ``Lightinig`` loggers. The final -loss can be accessed by ``trainer.logged_metrics`` - -.. code:: ipython3 - - # inspecting final loss - trainer.logged_metrics - - - - -.. parsed-literal:: - - {'x0_loss': tensor(1.0674e-05), - 'D_loss': tensor(0.0008), - 'mean_loss': tensor(0.0004)} - - - -By using the ``Plotter`` class from **PINA** we can also do some -quatitative plots of the solution. - -.. code:: ipython3 - - # plotting the solution - pl.plot(solver=pinn) - - - -.. image:: tutorial_files/tutorial_23_1.png - - - -.. parsed-literal:: - -
- - -The solution is overlapped with the actual one, and they are barely -indistinguishable. We can also plot easily the loss: - -.. code:: ipython3 - - pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True) - - - -.. image:: tutorial_files/tutorial_25_0.png - - -As we can see the loss has not reached a minimum, suggesting that we -could train for longer - -What’s next? ------------- - -Congratulations on completing the introductory tutorial of **PINA**! -There are several directions you can go now: - -1. Train the network for longer or with different layer sizes and assert - the finaly accuracy - -2. Train the network using other types of models (see ``pina.model``) - -3. GPU training and speed benchmarking - -4. Many more… - - diff --git a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_16_0.png b/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_16_0.png deleted file mode 100644 index 3c90635..0000000 Binary files a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_16_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_23_1.png b/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_23_1.png deleted file mode 100644 index e4d92c2..0000000 Binary files a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_23_1.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_25_0.png b/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_25_0.png deleted file mode 100644 index 64bd43a..0000000 Binary files a/docs/source/_rst/tutorials/tutorial1/tutorial_files/tutorial_25_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial10/tutorial.rst b/docs/source/_rst/tutorials/tutorial10/tutorial.rst deleted file mode 100644 index 4692354..0000000 --- a/docs/source/_rst/tutorials/tutorial10/tutorial.rst +++ /dev/null @@ -1,366 +0,0 @@ -Tutorial: Averaging Neural Operator for solving Kuramoto Sivashinsky equation -============================================================================= - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial10/tutorial.ipynb - -In this tutorial we will build a Neural Operator using the -``AveragingNeuralOperator`` model and the ``SupervisedSolver``. At the -end of the tutorial you will be able to train a Neural Operator for -learning the operator of time dependent PDEs. - -First of all, some useful imports. Note we use ``scipy`` for i/o -operations. - -.. 
code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - !mkdir "data" - !wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS.mat" -O "data/Data_KS.mat" - !wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS2.mat" -O "data/Data_KS2.mat" - - - import torch - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - from scipy import io - from pina import Condition, LabelTensor - from pina.problem import AbstractProblem - from pina.model import AveragingNeuralOperator - from pina.solvers import SupervisedSolver - from pina.trainer import Trainer - -Data Generation ---------------- - -We will focus on solving a specific PDE, the **Kuramoto Sivashinsky** -(KS) equation. The KS PDE is a fourth-order nonlinear PDE with the -following form: - -.. math:: - - - \frac{\partial u}{\partial t}(x,t) = -u(x,t)\frac{\partial u}{\partial x}(x,t)- \frac{\partial^{4}u}{\partial x^{4}}(x,t) - \frac{\partial^{2}u}{\partial x^{2}}(x,t). - -In the above :math:`x\in \Omega=[0, 64]` represents a spatial location, -:math:`t\in\mathbb{T}=[0,50]` the time and :math:`u(x, t)` is the value -of the function :math:`u:\Omega \times\mathbb{T}\in\mathbb{R}`. We -indicate with :math:`\mathbb{U}` a suitable space for :math:`u`, i.e. we -have that the solution :math:`u\in\mathbb{U}`. - -We impose Dirichlet boundary conditions on the derivative of :math:`u` -on the border of the domain :math:`\partial \Omega` - -.. math:: - - - \frac{\partial u}{\partial x}(x,t)=0 \quad \forall (x,t)\in \partial \Omega\times\mathbb{T}. - - -Initial conditions are sampled from a distribution over truncated -Fourier series with random coefficients -:math:`\{A_k, \ell_k, \phi_k\}_k` as - -.. math:: - - - u(x,0) = \sum_{k=1}^N A_k \sin(2 \pi \ell_k x / L + \phi_k) \ , - -where :math:`A_k \in [-0.4, -0.3]`, :math:`\ell_k = 2`, -:math:`\phi_k = 2\pi \quad \forall k=1,\dots,N`. - -We have already generated some data for differenti initial conditions, -and our objective will be to build a Neural Operator that, given -:math:`u(x, t)` will output :math:`u(x, t+\delta)`, where :math:`\delta` -is a fixed time step. We will come back on the Neural Operator -architecture, for now we first need to import the data. - -**Note:** *The numerical integration is obtained by using pseudospectral -method for spatial derivative discratization and implicit Runge Kutta 5 -for temporal dynamics.* - -.. code:: ipython3 - - # load data - data=io.loadmat("dat/Data_KS.mat") - - # converting to label tensor - initial_cond_train = LabelTensor(torch.tensor(data['initial_cond_train'], dtype=torch.float), ['t','x','u0']) - initial_cond_test = LabelTensor(torch.tensor(data['initial_cond_test'], dtype=torch.float), ['t','x','u0']) - sol_train = LabelTensor(torch.tensor(data['sol_train'], dtype=torch.float), ['u']) - sol_test = LabelTensor(torch.tensor(data['sol_test'], dtype=torch.float), ['u']) - - print('Data Loaded') - print(f' shape initial condition: {initial_cond_train.shape}') - print(f' shape solution: {sol_train.shape}') - - -.. 
parsed-literal:: - - Data Loaded - shape initial condition: torch.Size([100, 12800, 3]) - shape solution: torch.Size([100, 12800, 1]) - - -The data are saved in the form ``B \times N \times D``, where ``B`` is -the batch_size (basically how many initial conditions we sample), ``N`` -the number of points in the mesh (which is the product of the -discretization in ``x`` timese the one in ``t``), and ``D`` the -dimension of the problem (in this case we have three variables -``[u, t, x]``). - -We are now going to plot some trajectories! - -.. code:: ipython3 - - # helper function - def plot_trajectory(coords, real, no_sol=None): - # find the x-t shapes - dim_x = len(torch.unique(coords.extract('x'))) - dim_t = len(torch.unique(coords.extract('t'))) - # if we don't have the Neural Operator solution we simply plot the real one - if no_sol is None: - fig, axs = plt.subplots(1, 1, figsize=(15, 5), sharex=True, sharey=True) - c = axs.imshow(real.reshape(dim_t, dim_x).T.detach(),extent=[0, 50, 0, 64], cmap='PuOr_r', aspect='auto') - axs.set_title('Real solution') - fig.colorbar(c, ax=axs) - axs.set_xlabel('t') - axs.set_ylabel('x') - # otherwise we plot the real one, the Neural Operator one, and their difference - else: - fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharex=True, sharey=True) - axs[0].imshow(real.reshape(dim_t, dim_x).T.detach(),extent=[0, 50, 0, 64], cmap='PuOr_r', aspect='auto') - axs[0].set_title('Real solution') - axs[1].imshow(no_sol.reshape(dim_t, dim_x).T.detach(),extent=[0, 50, 0, 64], cmap='PuOr_r', aspect='auto') - axs[1].set_title('NO solution') - c = axs[2].imshow((real - no_sol).abs().reshape(dim_t, dim_x).T.detach(),extent=[0, 50, 0, 64], cmap='PuOr_r', aspect='auto') - axs[2].set_title('Absolute difference') - fig.colorbar(c, ax=axs.ravel().tolist()) - for ax in axs: - ax.set_xlabel('t') - ax.set_ylabel('x') - plt.show() - - # a sample trajectory (we use the sample 5, feel free to change) - sample_number = 20 - plot_trajectory(coords=initial_cond_train[sample_number].extract(['x', 't']), - real=sol_train[sample_number].extract('u')) - - - - -.. image:: tutorial_files/tutorial_5_0.png - - -As we can see, as the time progresses the solution becomes chaotic, -which makes it really hard to learn! We will now focus on building a -Neural Operator using the ``SupervisedSolver`` class to tackle the -problem. - -Averaging Neural Operator -------------------------- - -We will build a neural operator :math:`\texttt{NO}` which takes the -solution at time :math:`t=0` for any :math:`x\in\Omega`, the time -:math:`(t)` at which we want to compute the solution, and gives back the -solution to the KS equation :math:`u(x, t)`, mathematically: - -.. math:: - - - \texttt{NO}_\theta : \mathbb{U} \rightarrow \mathbb{U}, - -such that - -.. math:: - - - \texttt{NO}_\theta[u(t=0)](x, t) \rightarrow u(x, t). - -There are many ways on approximating the following operator, e.g. by 2D -`FNO `__ (for -regular meshes), a -`DeepOnet `__, -`Continuous Convolutional Neural -Operator `__, -`MIONet `__. In -this tutorial we will use the *Averaging Neural Operator* presented in -`The Nonlocal Neural Operator: Universal -Approximation `__ which is a `Kernel -Neural -Operator `__ -with integral kernel: - -.. 
math:: - - - K(v) = \sigma\left(Wv(x) + b + \frac{1}{|\Omega|}\int_\Omega v(y)dy\right) - -where: - -- :math:`v(x)\in\mathbb{R}^{\rm{emb}}` is the update for a function - :math:`v` with :math:`\mathbb{R}^{\rm{emb}}` the embedding (hidden) - size -- :math:`\sigma` is a non-linear activation -- :math:`W\in\mathbb{R}^{\rm{emb}\times\rm{emb}}` is a tunable matrix. -- :math:`b\in\mathbb{R}^{\rm{emb}}` is a tunable bias. - -If PINA many Kernel Neural Operators are already implemented, and the -modular componets of the `Kernel Neural -Operator `__ -class permits to create new ones by composing base kernel layers. - -**Note:**\ \* We will use the already built class\* -``AveragingNeuralOperator``, *as constructive excercise try to use the* -`KernelNeuralOperator `__ -*class for building a kernel neural operator from scratch. You might -employ the different layers that we have in pina, e.g.* -`FeedForward `__, -*and* -`AveragingNeuralOperator `__ -*layers*. - -.. code:: ipython3 - - class SIREN(torch.nn.Module): - def forward(self, x): - return torch.sin(x) - - embedding_dimesion = 40 # hyperparameter embedding dimension - input_dimension = 3 # ['u', 'x', 't'] - number_of_coordinates = 2 # ['x', 't'] - lifting_net = torch.nn.Linear(input_dimension, embedding_dimesion) # simple linear layers for lifting and projecting nets - projecting_net = torch.nn.Linear(embedding_dimesion + number_of_coordinates, 1) - model = AveragingNeuralOperator(lifting_net=lifting_net, - projecting_net=projecting_net, - coordinates_indices=['x', 't'], - field_indices=['u0'], - n_layers=4, - func=SIREN - ) - -Super easy! Notice that we use the ``SIREN`` activation function, more -on `Implicit Neural Representations with Periodic Activation -Functions `__. - -Solving the KS problem ----------------------- - -We will now focus on solving the KS equation using the -``SupervisedSolver`` class and the ``AveragingNeuralOperator`` model. As -done in the `FNO -tutorial `__ -we now create the ``NeuralOperatorProblem`` class with -``AbstractProblem``. - -.. code:: ipython3 - - # expected running time ~ 1 minute - - class NeuralOperatorProblem(AbstractProblem): - input_variables = initial_cond_train.labels - output_variables = sol_train.labels - conditions = {'data' : Condition(input_points=initial_cond_train, - output_points=sol_train)} - - - # initialize problem - problem = NeuralOperatorProblem() - # initialize solver - solver = SupervisedSolver(problem=problem, model=model,optimizer_kwargs={"lr":0.001}) - # train, only CPU and avoid model summary at beginning of training (optional) - trainer = Trainer(solver=solver, max_epochs=40, accelerator='cpu', enable_model_summary=False, log_every_n_steps=-1, batch_size=5) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - -.. parsed-literal:: - - Epoch 39: 100%|██████████| 20/20 [00:01<00:00, 13.59it/s, v_num=3, mean_loss=0.118] - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=40` reached. - - -.. parsed-literal:: - - Epoch 39: 100%|██████████| 20/20 [00:01<00:00, 13.56it/s, v_num=3, mean_loss=0.118] - - -We can now see some plots for the solutions - -.. 
code:: ipython3 - - sample_number = 2 - no_sol = solver(initial_cond_test) - plot_trajectory(coords=initial_cond_test[sample_number].extract(['x', 't']), - real=sol_test[sample_number].extract('u'), - no_sol=no_sol[5]) - - - -.. image:: tutorial_files/tutorial_11_0.png - - -As we can see we can obtain nice result considering the small trainint -time and the difficulty of the problem! Let’s see how the training and -testing error: - -.. code:: ipython3 - - from pina.loss import PowerLoss - - error_metric = PowerLoss(p=2) # we use the MSE loss - - with torch.no_grad(): - no_sol_train = solver(initial_cond_train) - err_train = error_metric(sol_train.extract('u'), no_sol_train).mean() # we average the error over trajectories - no_sol_test = solver(initial_cond_test) - err_test = error_metric(sol_test.extract('u'),no_sol_test).mean() # we average the error over trajectories - print(f'Training error: {float(err_train):.3f}') - print(f'Testing error: {float(err_test):.3f}') - - -.. parsed-literal:: - - Training error: 0.128 - Testing error: 0.119 - - -as we can see the error is pretty small, which agrees with what we can -see from the previous plots. - -What’s next? ------------- - -Now you know how to solve a time dependent neural operator problem in -**PINA**! There are multiple directions you can go now: - -1. Train the network for longer or with different layer sizes and assert - the finaly accuracy - -2. We left a more challenging dataset - `Data_KS2.mat `__ where - :math:`A_k \in [-0.5, 0.5]`, :math:`\ell_k \in [1, 2, 3]`, - :math:`\phi_k \in [0, 2\pi]` for loger training - -3. Compare the performance between the different neural operators (you - can even try to implement your favourite one!) diff --git a/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_11_0.png b/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_11_0.png deleted file mode 100644 index 2f7e8cc..0000000 Binary files a/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_11_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_5_0.png b/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_5_0.png deleted file mode 100644 index 0b355c3..0000000 Binary files a/docs/source/_rst/tutorials/tutorial10/tutorial_files/tutorial_5_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial11/logging.png b/docs/source/_rst/tutorials/tutorial11/logging.png deleted file mode 100644 index c4b421e..0000000 Binary files a/docs/source/_rst/tutorials/tutorial11/logging.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial11/tutorial.rst b/docs/source/_rst/tutorials/tutorial11/tutorial.rst deleted file mode 100644 index daed289..0000000 --- a/docs/source/_rst/tutorials/tutorial11/tutorial.rst +++ /dev/null @@ -1,550 +0,0 @@ -Tutorial: PINA and PyTorch Lightning, training tips and visualizations -====================================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial11/tutorial.ipynb - -In this tutorial, we will delve deeper into the functionality of the -``Trainer`` class, which serves as the cornerstone for training **PINA** -`Solvers `__. 
-The ``Trainer`` class offers a plethora of features aimed at improving -model accuracy, reducing training time and memory usage, facilitating -logging visualization, and more thanks to the amazing job done by the PyTorch Lightning team! -Our leading example will revolve around solving the ``SimpleODE`` -problem, as outlined in the `Introduction to PINA for Physics Informed -Neural Networks -training `__. -If you haven’t already explored it, we highly recommend doing so before -diving into this tutorial. -Let’s start by importing useful modules, define the ``SimpleODE`` -problem and the ``PINN`` solver. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import torch - - from pina import Condition, Trainer - from pina.solvers import PINN - from pina.model import FeedForward - from pina.problem import SpatialProblem - from pina.operators import grad - from pina.geometry import CartesianDomain - from pina.equation import Equation, FixedValue - - class SimpleODE(SpatialProblem): - - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1]}) - - # defining the ode equation - def ode_equation(input_, output_): - u_x = grad(output_, input_, components=['u'], d=['x']) - u = output_.extract(['u']) - return u_x - u - - # conditions to hold - conditions = { - 'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)), # We fix initial condition to value 1 - 'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)), # We wrap the python equation using Equation - } - - # defining the true solution - def truth_solution(self, pts): - return torch.exp(pts.extract(['x'])) - - - # sampling for training - problem = SimpleODE() - problem.discretise_domain(1, 'random', locations=['x0']) - problem.discretise_domain(20, 'lh', locations=['D']) - - # build the model - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - - # create the PINN object - pinn = PINN(problem, model) - -Till now we just followed the extact step of the previous tutorials. The -``Trainer`` object can be initialized by simiply passing the ``PINN`` -solver - -.. code:: ipython3 - - trainer = Trainer(solver=pinn) - - -.. parsed-literal:: - - GPU available: True (mps), used: True - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - -Trainer Accelerator -------------------- - -When creating the trainer, **by defualt** the ``Trainer`` will choose -the most performing ``accelerator`` for training which is available in -your system, ranked as follow: - -1. `TPU `__ -2. `IPU `__ -3. `HPU `__ -4. `GPU `__ or `MPS `__ -5. CPU - -For setting manually the ``accelerator`` run: - -- ``accelerator = {'gpu', 'cpu', 'hpu', 'mps', 'cpu', 'ipu'}`` sets the - accelerator to a specific one - -.. code:: ipython3 - - trainer = Trainer(solver=pinn, - accelerator='cpu') - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - -as you can see, even if in the used system ``GPU`` is available, it is -not used since we set ``accelerator='cpu'``. - -Trainer Logging ---------------- - -In **PINA** you can log metrics in different ways. 
The simplest approach -is to use the ``MetricTraker`` class from ``pina.callbacks`` as seen in -the `Introduction to PINA for Physics Informed Neural Networks -training `__ -tutorial. - -However, expecially when we need to train multiple times to get an -average of the loss across multiple runs, ``pytorch_lightning.loggers`` -might be useful. Here we will use ``TensorBoardLogger`` (more on -`logging `__ -here), but you can choose the one you prefer (or make your own one). - -We will now import ``TensorBoardLogger``, do three runs of training and -then visualize the results. Notice we set ``enable_model_summary=False`` -to avoid model summary specifications (e.g. number of parameters), set -it to true if needed. - -.. code:: ipython3 - - from pytorch_lightning.loggers import TensorBoardLogger - - # three run of training, by default it trains for 1000 epochs - # we reinitialize the model each time otherwise the same parameters will be optimized - for _ in range(3): - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - logger=TensorBoardLogger(save_dir='simpleode'), - enable_model_summary=False) - trainer.train() - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - `Trainer.fit` stopped: `max_epochs=1000` reached. - Epoch 999: 100%|██████████| 1/1 [00:00<00:00, 133.46it/s, v_num=6, x0_loss=1.48e-5, D_loss=0.000655, mean_loss=0.000335] - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - `Trainer.fit` stopped: `max_epochs=1000` reached. - Epoch 999: 100%|██████████| 1/1 [00:00<00:00, 154.49it/s, v_num=7, x0_loss=6.21e-6, D_loss=0.000221, mean_loss=0.000114] - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - `Trainer.fit` stopped: `max_epochs=1000` reached. - Epoch 999: 100%|██████████| 1/1 [00:00<00:00, 62.60it/s, v_num=8, x0_loss=1.44e-5, D_loss=0.000572, mean_loss=0.000293] - - -We can now visualize the logs by simply running -``tensorboard --logdir=simpleode/`` on terminal, you should obtain a -webpage as the one shown below: - -.. image:: logging.png - -as you can see, by default, **PINA** logs the losses which are shown in -the progress bar, as well as the number of epochs. You can always insert -more loggings by either defining a **callback** (`more on -callbacks `__), -or inheriting the solver and modify the programs with different -**hooks** (`more on -hooks `__). - -Trainer Callbacks ------------------ - -Whenever we need to access certain steps of the training for logging, do -static modifications (i.e. not changing the ``Solver``) or updating -``Problem`` hyperparameters (static variables), we can use -``Callabacks``. Notice that ``Callbacks`` allow you to add arbitrary -self-contained programs to your training. At specific points during the -flow of execution (hooks), the Callback interface allows you to design -programs that encapsulate a full set of functionality. It de-couples -functionality that does not need to be in **PINA** ``Solver``\ s. 
-Lightning has a callback system to execute them when needed. Callbacks -should capture NON-ESSENTIAL logic that is NOT required for your -lightning module to run. - -The following are best practices when using/designing callbacks. - -- Callbacks should be isolated in their functionality. -- Your callback should not rely on the behavior of other callbacks in - order to work properly. -- Do not manually call methods from the callback. -- Directly calling methods (eg. on_validation_end) is strongly - discouraged. -- Whenever possible, your callbacks should not depend on the order in - which they are executed. - -We will try now to implement a naive version of ``MetricTraker`` to show -how callbacks work. Notice that this is a very easy application of -callbacks, fortunately in **PINA** we already provide more advanced -callbacks in ``pina.callbacks``. - -.. raw:: html - - - -.. code:: ipython3 - - from pytorch_lightning.callbacks import Callback - import torch - - # define a simple callback - class NaiveMetricTracker(Callback): - def __init__(self): - self.saved_metrics = [] - - def on_train_epoch_end(self, trainer, __): # function called at the end of each epoch - self.saved_metrics.append( - {key: value for key, value in trainer.logged_metrics.items()} - ) - -Let’s see the results when applyed to the ``SimpleODE`` problem. You can -define callbacks when initializing the ``Trainer`` by the ``callbacks`` -argument, which expects a list of callbacks. - -.. code:: ipython3 - - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - enable_model_summary=False, - callbacks=[NaiveMetricTracker()]) # adding a callbacks - trainer.train() - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - `Trainer.fit` stopped: `max_epochs=1000` reached. - Epoch 999: 100%|██████████| 1/1 [00:00<00:00, 149.27it/s, v_num=1, x0_loss=7.27e-5, D_loss=0.0016, mean_loss=0.000838] - - -We can easily access the data by calling -``trainer.callbacks[0].saved_metrics`` (notice the zero representing the -first callback in the list given at initialization). - -.. code:: ipython3 - - trainer.callbacks[0].saved_metrics[:3] # only the first three epochs - - - - -.. parsed-literal:: - - [{'x0_loss': tensor(0.9141), - 'D_loss': tensor(0.0304), - 'mean_loss': tensor(0.4722)}, - {'x0_loss': tensor(0.8906), - 'D_loss': tensor(0.0287), - 'mean_loss': tensor(0.4596)}, - {'x0_loss': tensor(0.8674), - 'D_loss': tensor(0.0274), - 'mean_loss': tensor(0.4474)}] - - - -PyTorch Lightning also has some built in ``Callbacks`` which can be used -in **PINA**, `here an extensive -list `__. - -We can for example try the ``EarlyStopping`` routine, which -automatically stops the training when a specific metric converged (here -the ``mean_loss``). In order to let the training keep going forever set -``max_epochs=-1``. - -.. 
code:: ipython3 - - # ~2 mins - from pytorch_lightning.callbacks import EarlyStopping - - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - max_epochs = -1, - enable_model_summary=False, - callbacks=[EarlyStopping('mean_loss')]) # adding a callbacks - trainer.train() - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - Epoch 6157: 100%|██████████| 1/1 [00:00<00:00, 139.84it/s, v_num=9, x0_loss=4.21e-9, D_loss=9.93e-6, mean_loss=4.97e-6] - - -As we can see the model automatically stop when the logging metric -stopped improving! - -Trainer Tips to Boost Accuracy, Save Memory and Speed Up Training ------------------------------------------------------------------ - -Untill now we have seen how to choose the right ``accelerator``, how to -log and visualize the results, and how to interface with the program in -order to add specific parts of code at specific points by ``callbacks``. -Now, we well focus on how boost your training by saving memory and -speeding it up, while mantaining the same or even better degree of -accuracy! - -There are several built in methods developed in PyTorch Lightning which -can be applied straight forward in **PINA**, here we report some: - -- `Stochastic Weight - Averaging `__ - to boost accuracy -- `Gradient - Clippling `__ to - reduce computational time (and improve accuracy) -- `Gradient - Accumulation `__ - to save memory consumption -- `Mixed Precision - Training `__ - to save memory consumption - -We will just demonstrate how to use the first two, and see the results -compared to a standard training. We use the -`Timer `__ -callback from ``pytorch_lightning.callbacks`` to take the times. Let’s -start by training a simple model without any optimization (train for -2000 epochs). - -.. code:: ipython3 - - from pytorch_lightning.callbacks import Timer - from pytorch_lightning import seed_everything - - # setting the seed for reproducibility - seed_everything(42, workers=True) - - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - deterministic=True, # setting deterministic=True ensure reproducibility when a seed is imposed - max_epochs = 2000, - enable_model_summary=False, - callbacks=[Timer()]) # adding a callbacks - trainer.train() - print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s') - - -.. parsed-literal:: - - Seed set to 42 - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - - `Trainer.fit` stopped: `max_epochs=2000` reached. - Epoch 1999: 100%|██████████| 1/1 [00:00<00:00, 163.58it/s, v_num=31, x0_loss=1.12e-6, D_loss=0.000127, mean_loss=6.4e-5] - Total training time 17.36381 s - - -Now we do the same but with StochasticWeightAveraging - -.. 
code:: ipython3 - - from pytorch_lightning.callbacks import StochasticWeightAveraging - - # setting the seed for reproducibility - seed_everything(42, workers=True) - - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - deterministic=True, - max_epochs = 2000, - enable_model_summary=False, - callbacks=[Timer(), - StochasticWeightAveraging(swa_lrs=0.005)]) # adding StochasticWeightAveraging callbacks - trainer.train() - print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s') - - -.. parsed-literal:: - - Seed set to 42 - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - - Epoch 1598: 100%|██████████| 1/1 [00:00<00:00, 210.04it/s, v_num=47, x0_loss=4.17e-6, D_loss=0.000204, mean_loss=0.000104] - Swapping scheduler `ConstantLR` for `SWALR` - `Trainer.fit` stopped: `max_epochs=2000` reached. - Epoch 1999: 100%|██████████| 1/1 [00:00<00:00, 120.85it/s, v_num=47, x0_loss=1.56e-7, D_loss=7.49e-5, mean_loss=3.75e-5] - Total training time 17.10627 s - - -As you can see, the training time does not change at all! Notice that -around epoch ``1600`` the scheduler is switched from the defalut one -``ConstantLR`` to the Stochastic Weight Average Learning Rate -(``SWALR``). This is because by default ``StochasticWeightAveraging`` -will be activated after ``int(swa_epoch_start * max_epochs)`` with -``swa_epoch_start=0.7`` by default. Finally, the final ``mean_loss`` is -lower when ``StochasticWeightAveraging`` is used. - -We will now now do the same but clippling the gradient to be relatively -small. - -.. code:: ipython3 - - # setting the seed for reproducibility - seed_everything(42, workers=True) - - model = FeedForward( - layers=[10, 10], - func=torch.nn.Tanh, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model) - trainer = Trainer(solver=pinn, - accelerator='cpu', - max_epochs = 2000, - enable_model_summary=False, - gradient_clip_val=0.1, # clipping the gradient - callbacks=[Timer(), - StochasticWeightAveraging(swa_lrs=0.005)]) - trainer.train() - print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s') - - -.. parsed-literal:: - - Seed set to 42 - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - Epoch 1598: 100%|██████████| 1/1 [00:00<00:00, 261.80it/s, v_num=46, x0_loss=9e-8, D_loss=2.39e-5, mean_loss=1.2e-5] - Swapping scheduler `ConstantLR` for `SWALR` - `Trainer.fit` stopped: `max_epochs=2000` reached. - Epoch 1999: 100%|██████████| 1/1 [00:00<00:00, 148.99it/s, v_num=46, x0_loss=7.08e-7, D_loss=1.77e-5, mean_loss=9.19e-6] - Total training time 17.01149 s - - -As we can see we by applying gradient clipping we were able to even -obtain lower error! - -What’s next? ------------- - -Now you know how to use efficiently the ``Trainer`` class **PINA**! -There are multiple directions you can go now: - -1. Explore training times on different devices (e.g.) ``TPU`` - -2. Try to reduce memory cost by mixed precision training and gradient - accumulation (especially useful when training Neural Operators) - -3. Benchmark ``Trainer`` speed for different precisions. 
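If you want a concrete starting point for direction 2, the sketch below is one
possible way to enable mixed precision and gradient accumulation. It is only a
hedged example: it assumes that the **PINA** ``Trainer`` forwards the PyTorch
Lightning keyword arguments ``precision`` and ``accumulate_grad_batches`` to
the underlying ``pytorch_lightning.Trainer``, so names and accepted values may
differ across versions (``'16-mixed'`` is the usual choice on GPUs, while
``'bf16-mixed'`` is typically used on CPU).

.. code:: ipython3

    # hedged sketch (not run in this tutorial): memory-saving options assumed to be
    # forwarded to the underlying PyTorch Lightning trainer
    model = FeedForward(
        layers=[10, 10],
        func=torch.nn.Tanh,
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)
    )
    pinn = PINN(problem, model)
    trainer = Trainer(solver=pinn,
                      accelerator='cpu',
                      max_epochs=2000,
                      enable_model_summary=False,
                      precision='bf16-mixed',        # mixed precision training
                      accumulate_grad_batches=4)     # accumulate gradients over 4 batches
    trainer.train()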
diff --git a/docs/source/_rst/tutorials/tutorial12/tutorial.rst b/docs/source/_rst/tutorials/tutorial12/tutorial.rst deleted file mode 100644 index 0542132..0000000 --- a/docs/source/_rst/tutorials/tutorial12/tutorial.rst +++ /dev/null @@ -1,176 +0,0 @@ -Tutorial: The ``Equation`` Class -================================ - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial12/tutorial.ipynb - -In this tutorial, we will show how to use the ``Equation`` Class in -PINA. Specifically, we will see how use the Class and its inherited -classes to enforce residuals minimization in PINNs. - -Example: The Burgers 1D equation --------------------------------- - -We will start implementing the viscous Burgers 1D problem Class, -described as follows: - -.. math:: - - - \begin{equation} - \begin{cases} - \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} &= \nu \frac{\partial^2 u}{ \partial x^2}, \quad x\in(0,1), \quad t>0\\ - u(x,0) &= -\sin (\pi x)\\ - u(x,t) &= 0 \quad x = \pm 1\\ - \end{cases} - \end{equation} - -where we set :math:`\nu = \frac{0.01}{\pi}` . - -In the class that models this problem we will see in action the -``Equation`` class and one of its inherited classes, the ``FixedValue`` -class. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - #useful imports - from pina.problem import SpatialProblem, TimeDependentProblem - from pina.equation import Equation, FixedValue, FixedGradient, FixedFlux - from pina.geometry import CartesianDomain - import torch - from pina.operators import grad, laplacian - from pina import Condition - - - -.. code:: ipython3 - - class Burgers1D(TimeDependentProblem, SpatialProblem): - - # define the burger equation - def burger_equation(input_, output_): - du = grad(output_, input_) - ddu = grad(du, input_, components=['dudx']) - return ( - du.extract(['dudt']) + - output_.extract(['u'])*du.extract(['dudx']) - - (0.01/torch.pi)*ddu.extract(['ddudxdx']) - ) - - # define initial condition - def initial_condition(input_, output_): - u_expected = -torch.sin(torch.pi*input_.extract(['x'])) - return output_.extract(['u']) - u_expected - - # assign output/ spatial and temporal variables - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [-1, 1]}) - temporal_domain = CartesianDomain({'t': [0, 1]}) - - # problem condition statement - conditions = { - 'gamma1': Condition(location=CartesianDomain({'x': -1, 't': [0, 1]}), equation=FixedValue(0.)), - 'gamma2': Condition(location=CartesianDomain({'x': 1, 't': [0, 1]}), equation=FixedValue(0.)), - 't0': Condition(location=CartesianDomain({'x': [-1, 1], 't': 0}), equation=Equation(initial_condition)), - 'D': Condition(location=CartesianDomain({'x': [-1, 1], 't': [0, 1]}), equation=Equation(burger_equation)), - } - -The ``Equation`` class takes as input a function (in this case it -happens twice, with ``initial_condition`` and ``burger_equation``) which -computes a residual of an equation, such as a PDE. In a problem class -such as the one above, the ``Equation`` class with such a given input is -passed as a parameter in the specified ``Condition``. 
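To make the mechanism concrete, the short sketch below (not part of the
original tutorial) shows two conceptually equivalent ways of writing the
homogeneous Dirichlet condition used above: the ``FixedValue`` shortcut
(described next), and the same residual written explicitly and wrapped in
``Equation``.

.. code:: ipython3

    # conceptually equivalent conditions on the boundary x = -1:
    # FixedValue(0.) builds the residual "output - 0" for us, while the explicit
    # Equation wraps the same residual written by hand
    boundary = CartesianDomain({'x': -1, 't': [0, 1]})

    cond_shortcut = Condition(location=boundary, equation=FixedValue(0.))
    cond_explicit = Condition(
        location=boundary,
        equation=Equation(lambda input_, output_: output_.extract(['u']) - 0.0)
    )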
- -The ``FixedValue`` class takes as input a value of same dimensions of -the output functions; this class can be used to enforced a fixed value -for a specific condition, e.g. Dirichlet boundary conditions, as it -happens for instance in our example. - -Once the equations are set as above in the problem conditions, the PINN -solver will aim to minimize the residuals described in each equation in -the training phase. - -Available classes of equations include also: - ``FixedGradient`` and -``FixedFlux``: they work analogously to ``FixedValue`` class, where we -can require a constant value to be enforced, respectively, on the -gradient of the solution or the divergence of the solution; - -``Laplace``: it can be used to enforce the laplacian of the solution to -be zero; - ``SystemEquation``: we can enforce multiple conditions on the -same subdomain through this class, passing a list of residual equations -defined in the problem. - -Defining a new Equation class ------------------------------ - -``Equation`` classes can be also inherited to define a new class. As -example, we can see how to rewrite the above problem introducing a new -class ``Burgers1D``; during the class call, we can pass the viscosity -parameter :math:`\nu`: - -.. code:: ipython3 - - class Burgers1DEquation(Equation): - - def __init__(self, nu = 0.): - """ - Burgers1D class. This class can be - used to enforce the solution u to solve the viscous Burgers 1D Equation. - - :param torch.float32 nu: the viscosity coefficient. Default value is set to 0. - """ - self.nu = nu - - def equation(input_, output_): - return grad(output_, input_, d='t') +\ - output_*grad(output_, input_, d='x') -\ - self.nu*laplacian(output_, input_, d='x') - - - super().__init__(equation) - -Now we can just pass the above class as input for the last condition, -setting :math:`\nu= \frac{0.01}{\pi}`: - -.. code:: ipython3 - - class Burgers1D(TimeDependentProblem, SpatialProblem): - - # define initial condition - def initial_condition(input_, output_): - u_expected = -torch.sin(torch.pi*input_.extract(['x'])) - return output_.extract(['u']) - u_expected - - # assign output/ spatial and temporal variables - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [-1, 1]}) - temporal_domain = CartesianDomain({'t': [0, 1]}) - - # problem condition statement - conditions = { - 'gamma1': Condition(location=CartesianDomain({'x': -1, 't': [0, 1]}), equation=FixedValue(0.)), - 'gamma2': Condition(location=CartesianDomain({'x': 1, 't': [0, 1]}), equation=FixedValue(0.)), - 't0': Condition(location=CartesianDomain({'x': [-1, 1], 't': 0}), equation=Equation(initial_condition)), - 'D': Condition(location=CartesianDomain({'x': [-1, 1], 't': [0, 1]}), equation=Burgers1DEquation(0.01/torch.pi)), - } - -What’s next? ------------- - -Congratulations on completing the ``Equation`` class tutorial of -**PINA**! As we have seen, you can build new classes that inherits -``Equation`` to store more complex equations, as the Burgers 1D -equation, only requiring to pass the characteristic coefficients of the -problem. From now on, you can: - define additional complex equation -classes (e.g. ``SchrodingerEquation``, ``NavierStokeEquation``..) - -define more ``FixedOperator`` (e.g. 
``FixedCurl``) diff --git a/docs/source/_rst/tutorials/tutorial13/tutorial.rst b/docs/source/_rst/tutorials/tutorial13/tutorial.rst deleted file mode 100644 index 1b93290..0000000 --- a/docs/source/_rst/tutorials/tutorial13/tutorial.rst +++ /dev/null @@ -1,327 +0,0 @@ -Tutorial: Multiscale PDE learning with Fourier Feature Network -============================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial13/tutorial.ipynb - -This tutorial presents how to solve with Physics-Informed Neural -Networks (PINNs) a PDE characterized by multiscale behaviour, as -presented in `On the eigenvector bias of Fourier feature networks: From -regression to solving multi-scale PDEs with physics-informed neural -networks `__. - -First of all, some useful imports. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import torch - - from pina import Condition, Plotter, Trainer, Plotter - from pina.problem import SpatialProblem - from pina.operators import laplacian - from pina.solvers import PINN, SAPINN - from pina.model.layers import FourierFeatureEmbedding - from pina.loss import LpLoss - from pina.geometry import CartesianDomain - from pina.equation import Equation, FixedValue - from pina.model import FeedForward - - -Multiscale Problem ------------------- - -We begin by presenting the problem which also can be found in Section 2 -of `On the eigenvector bias of Fourier feature networks: From regression -to solving multi-scale PDEs with physics-informed neural -networks `__. The -one-dimensional Poisson problem we aim to solve is mathematically -written as: - -.. math:: - - \begin{equation} - \begin{cases} - \Delta u (x) + f(x) = 0 \quad x \in [0,1], \\ - u(x) = 0 \quad x \in \partial[0,1], \\ - \end{cases} - \end{equation} - -We impose the solution as -:math:`u(x) = \sin(2\pi x) + 0.1 \sin(50\pi x)` and obtain the force -term -:math:`f(x) = (2\pi)^2 \sin(2\pi x) + 0.1 (50 \pi)^2 \sin(50\pi x)`. -Though this example is simple and pedagogical, it is worth noting that -the solution exhibits low frequency in the macro-scale and high -frequency in the micro-scale, which resembles many practical scenarios. - -In **PINA** this problem is written, as always, as a class `see here for -a tutorial on the Problem -class `__. -Below you can find the ``Poisson`` problem which is mathmatically -described above. - -.. 
code:: ipython3 - - class Poisson(SpatialProblem): - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1]}) - - def poisson_equation(input_, output_): - x = input_.extract('x') - u_xx = laplacian(output_, input_, components=['u'], d=['x']) - f = ((2*torch.pi)**2)*torch.sin(2*torch.pi*x) + 0.1*((50*torch.pi)**2)*torch.sin(50*torch.pi*x) - return u_xx + f - - # here we write the problem conditions - conditions = { - 'gamma0' : Condition(location=CartesianDomain({'x': 0}), - equation=FixedValue(0)), - 'gamma1' : Condition(location=CartesianDomain({'x': 1}), - equation=FixedValue(0)), - 'D': Condition(location=spatial_domain, - equation=Equation(poisson_equation)), - } - - def truth_solution(self, x): - return torch.sin(2*torch.pi*x) + 0.1*torch.sin(50*torch.pi*x) - - problem = Poisson() - - # let's discretise the domain - problem.discretise_domain(128, 'grid') - -A standard PINN approach would be to fit this model using a Feed Forward -(fully connected) Neural Network. For a conventional fully-connected -neural network is easy to approximate a function :math:`u`, given -sufficient data inside the computational domain. However solving -high-frequency or multi-scale problems presents great challenges to -PINNs especially when the number of data cannot capture the different -scales. - -Below we run a simulation using the ``PINN`` solver and the self -adaptive ``SAPINN`` solver, using a -``FeedForward`` model. We used a ``MultiStepLR`` scheduler to decrease the learning rate -slowly during training (it takes around 2 minutes to run on CPU). - -.. code:: ipython3 - - # training with PINN and visualize results - pinn = PINN(problem=problem, - model=FeedForward(input_dimensions=1, output_dimensions=1, layers=[100, 100, 100]), - scheduler=torch.optim.lr_scheduler.MultiStepLR, - scheduler_kwargs={'milestones' : [1000, 2000, 3000, 4000], 'gamma':0.9}) - trainer = Trainer(pinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False) - trainer.train() - - # training with PINN and visualize results - sapinn = SAPINN(problem=problem, - model=FeedForward(input_dimensions=1, output_dimensions=1, layers=[100, 100, 100]), - scheduler_model=torch.optim.lr_scheduler.MultiStepLR, - scheduler_model_kwargs={'milestones' : [1000, 2000, 3000, 4000], 'gamma':0.9}) - trainer_sapinn = Trainer(sapinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False) - trainer_sapinn.train() - - # plot results - pl = Plotter() - pl.plot(pinn, title='PINN Solution') - pl.plot(sapinn, title='Self Adaptive PINN Solution') - - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - Epoch 4999: 100%|██████████| 1/1 [00:00<00:00, 97.66it/s, v_num=69, gamma0_loss=2.61e+3, gamma1_loss=2.61e+3, D_loss=409.0, mean_loss=1.88e+3] - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - Epoch 4999: 100%|██████████| 1/1 [00:00<00:00, 65.77it/s, v_num=70, gamma0_loss=151.0, gamma1_loss=148.0, D_loss=6.38e+5, mean_loss=2.13e+5] - - - -.. image:: tutorial_files/tutorial_5_8.png - - - -.. image:: tutorial_files/tutorial_5_9.png - - -We can clearly see that the solution has not been learned by the two -different solvers. Indeed the big problem is not in the optimization -strategy (i.e. the solver), but in the model used to solve the problem. 
-A simple ``FeedForward`` network can hardly handle multiscales if not -enough collocation points are used! - -We can also compute the :math:`l_2` relative error for the ``PINN`` and -``SAPINN`` solutions: - -.. code:: ipython3 - - # l2 loss from PINA losses - l2_loss = LpLoss(p=2, relative=True) - - # sample new test points - pts = pts = problem.spatial_domain.sample(100, 'grid') - print(f'Relative l2 error PINN {l2_loss(pinn(pts), problem.truth_solution(pts)).item():.2%}') - print(f'Relative l2 error SAPINN {l2_loss(sapinn(pts), problem.truth_solution(pts)).item():.2%}') - - -.. parsed-literal:: - - Relative l2 error PINN 95.76% - Relative l2 error SAPINN 124.26% - - -Which is indeed very high! - -Fourier Feature Embedding in PINA ---------------------------------- - -Fourier Feature Embedding is a way to transform the input features, to -help the network in learning multiscale variations in the output. It was -first introduced in `On the eigenvector bias of Fourier feature -networks: From regression to solving multi-scale PDEs with -physics-informed neural -networks `__ showing great -results for multiscale problems. The basic idea is to map the input -:math:`\mathbf{x}` into an embedding :math:`\tilde{\mathbf{x}}` where: - -.. math:: \tilde{\mathbf{x}} =\left[\cos\left( \mathbf{B} \mathbf{x} \right), \sin\left( \mathbf{B} \mathbf{x} \right)\right] - -and :math:`\mathbf{B}_{ij} \sim \mathcal{N}(0, \sigma^2)`. This simple -operation allow the network to learn on multiple scales! - -In PINA we already have implemented the feature as a ``layer`` called -```FourierFeatureEmbedding`` `__. -Below we will build the *Multi-scale Fourier Feature Architecture*. In -this architecture multiple Fourier feature embeddings (initialized with -different :math:`\sigma`) are applied to input coordinates and then -passed through the same fully-connected neural network, before the -outputs are finally concatenated with a linear layer. - -.. code:: ipython3 - - class MultiscaleFourierNet(torch.nn.Module): - def __init__(self): - super().__init__() - self.embedding1 = FourierFeatureEmbedding(input_dimension=1, - output_dimension=100, - sigma=1) - self.embedding2 = FourierFeatureEmbedding(input_dimension=1, - output_dimension=100, - sigma=10) - self.layers = FeedForward(input_dimensions=100, output_dimensions=100, layers=[100]) - self.final_layer = torch.nn.Linear(2*100, 1) - - def forward(self, x): - e1 = self.layers(self.embedding1(x)) - e2 = self.layers(self.embedding2(x)) - return self.final_layer(torch.cat([e1, e2], dim=-1)) - - MultiscaleFourierNet() - - - - -.. parsed-literal:: - - MultiscaleFourierNet( - (embedding1): FourierFeatureEmbedding() - (embedding2): FourierFeatureEmbedding() - (layers): FeedForward( - (model): Sequential( - (0): Linear(in_features=100, out_features=100, bias=True) - (1): Tanh() - (2): Linear(in_features=100, out_features=100, bias=True) - ) - ) - (final_layer): Linear(in_features=200, out_features=1, bias=True) - ) - - - -We will train the ``MultiscaleFourierNet`` with the ``PINN`` solver (and -feel free to try also with our PINN variants (``SAPINN``, ``GPINN``, -``CompetitivePINN``, …). - -.. 
code:: ipython3 - - multiscale_pinn = PINN(problem=problem, - model=MultiscaleFourierNet(), - scheduler=torch.optim.lr_scheduler.MultiStepLR, - scheduler_kwargs={'milestones' : [1000, 2000, 3000, 4000], 'gamma':0.9}) - trainer = Trainer(multiscale_pinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - -.. parsed-literal:: - - GPU available: True (mps), used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - Epoch 4999: 100%|██████████| 1/1 [00:00<00:00, 72.21it/s, v_num=71, gamma0_loss=3.91e-5, gamma1_loss=3.91e-5, D_loss=0.000151, mean_loss=0.000113] - - -Let us now plot the solution and compute the relative :math:`l_2` again! - -.. code:: ipython3 - - # plot the solution - pl.plot(multiscale_pinn, title='Solution PINN with MultiscaleFourierNet') - - # sample new test points - pts = pts = problem.spatial_domain.sample(100, 'grid') - print(f'Relative l2 error PINN with MultiscaleFourierNet {l2_loss(multiscale_pinn(pts), problem.truth_solution(pts)).item():.2%}') - - - -.. image:: tutorial_files/tutorial_15_0.png - - -.. parsed-literal:: - - Relative l2 error PINN with MultiscaleFourierNet 2.72% - - -It is pretty clear that the network has learned the correct solution, -with also a very law error. Obviously a longer training and a more -expressive neural network could improve the results! - -What’s next? ------------- - -Congratulations on completing the one dimensional Poisson tutorial of -**PINA** using ``FourierFeatureEmbedding``! There are multiple -directions you can go now: - -1. Train the network for longer or with different layer sizes and assert - the finaly accuracy - -2. Understand the role of ``sigma`` in ``FourierFeatureEmbedding`` (see - original paper for a nice reference) - -3. Code the *Spatio-temporal multi-scale Fourier feature architecture* - for a more complex time dependent PDE (section 3 of the original - reference) - -4. Many more… diff --git a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_15_0.png b/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_15_0.png deleted file mode 100644 index c6f0e50..0000000 Binary files a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_15_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_8.png b/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_8.png deleted file mode 100644 index 470a571..0000000 Binary files a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_8.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_9.png b/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_9.png deleted file mode 100644 index 1cfc02b..0000000 Binary files a/docs/source/_rst/tutorials/tutorial13/tutorial_files/tutorial_5_9.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial.rst b/docs/source/_rst/tutorials/tutorial2/tutorial.rst deleted file mode 100644 index 9ed0eae..0000000 --- a/docs/source/_rst/tutorials/tutorial2/tutorial.rst +++ /dev/null @@ -1,385 +0,0 @@ -Tutorial: Two dimensional Poisson problem using Extra Features Learning -======================================================================= - -|Open In Colab| - -.. 
|Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial2/tutorial.ipynb - - -This tutorial presents how to solve with Physics-Informed Neural -Networks (PINNs) a 2D Poisson problem with Dirichlet boundary -conditions. We will train with standard PINN’s training, and with -extrafeatures. For more insights on extrafeature learning please read -`An extended physics informed neural network for preliminary analysis of -parametric optimal control -problems `__. - -First of all, some useful imports. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import torch - from torch.nn import Softplus - - from pina.problem import SpatialProblem - from pina.operators import laplacian - from pina.model import FeedForward - from pina.solvers import PINN - from pina.trainer import Trainer - from pina.plotter import Plotter - from pina.geometry import CartesianDomain - from pina.equation import Equation, FixedValue - from pina import Condition, LabelTensor - from pina.callbacks import MetricTracker - -The problem definition ----------------------- - -The two-dimensional Poisson problem is mathematically written as: - -.. math:: - - \begin{equation} - \begin{cases} - \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\ - u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4, - \end{cases} - \end{equation} - -where :math:`D` is a square domain :math:`[0,1]^2`, and -:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the -square. - -The Poisson problem is written in **PINA** code as a class. The -equations are written as *conditions* that should be satisfied in the -corresponding domains. The *truth_solution* is the exact solution which -will be compared with the predicted one. - -.. code:: ipython3 - - class Poisson(SpatialProblem): - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]}) - - def laplace_equation(input_, output_): - force_term = (torch.sin(input_.extract(['x'])*torch.pi) * - torch.sin(input_.extract(['y'])*torch.pi)) - laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y']) - return laplacian_u - force_term - - # here we write the problem conditions - conditions = { - 'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)), - 'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)), - 'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1]}), equation=FixedValue(0.)), - 'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1]}), equation=FixedValue(0.)), - 'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1]}), equation=Equation(laplace_equation)), - } - - def poisson_sol(self, pts): - return -( - torch.sin(pts.extract(['x'])*torch.pi)* - torch.sin(pts.extract(['y'])*torch.pi) - )/(2*torch.pi**2) - - truth_solution = poisson_sol - - problem = Poisson() - - # let's discretise the domain - problem.discretise_domain(25, 'grid', locations=['D']) - problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4']) - -Solving the problem with standard PINNs ---------------------------------------- - -After the problem, the feed-forward neural network is defined, through -the class ``FeedForward``. 
This neural network takes as input the -coordinates (in this case :math:`x` and :math:`y`) and provides the -unkwown field of the Poisson problem. The residual of the equations are -evaluated at several sampling points (which the user can manipulate -using the method ``CartesianDomain_pts``) and the loss minimized by the -neural network is the sum of the residuals. - -In this tutorial, the neural network is composed by two hidden layers of -10 neurons each, and it is trained for 1000 epochs with a learning rate -of 0.006 and :math:`l_2` weight regularization set to :math:`10^{-7}`. -These parameters can be modified as desired. We use the -``MetricTracker`` class to track the metrics during training. - -.. code:: ipython3 - - # make model + solver + trainer - model = FeedForward( - layers=[10, 10], - func=Softplus, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8}) - trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - - # train - trainer.train() - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. parsed-literal:: - - Epoch 999: : 1it [00:00, 105.33it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304] - - -Now the ``Plotter`` class is used to plot the results. The solution -predicted by the neural network is plotted on the left, the exact one is -represented at the center and on the right the error between the exact -and the predicted solutions is showed. - -.. code:: ipython3 - - plotter = Plotter() - plotter.plot(solver=pinn) - - - -.. image:: tutorial_files/tutorial_9_0.png - - -Solving the problem with extra-features PINNs ---------------------------------------------- - -Now, the same problem is solved in a different way. A new neural network -is now defined, with an additional input variable, named extra-feature, -which coincides with the forcing term in the Laplace equation. The set -of input variables to the neural network is: - -.. math:: - - \begin{equation} - [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)}, - \end{equation} - -where :math:`x` and :math:`y` are the spatial coordinates and -:math:`k(x, y)` is the added feature. - -This feature is initialized in the class ``SinSin``, which needs to be -inherited by the ``torch.nn.Module`` class and to have the ``forward`` -method. After declaring such feature, we can just incorporate in the -``FeedForward`` class thanks to the ``extra_features`` argument. **NB**: -``extra_features`` always needs a ``list`` as input, you you have one -feature just encapsulated it in a class, as in the next cell. - -Finally, we perform the same training as before: the problem is -``Poisson``, the network is composed by the same number of neurons and -optimizer parameters are equal to previous test, the only change is the -new extra feature. - -.. 
code:: ipython3 - - class SinSin(torch.nn.Module): - """Feature: sin(x)*sin(y)""" - def __init__(self): - super().__init__() - - def forward(self, x): - t = (torch.sin(x.extract(['x'])*torch.pi) * - torch.sin(x.extract(['y'])*torch.pi)) - return LabelTensor(t, ['sin(x)sin(y)']) - - - # make model + solver + trainer - model_feat = FeedForward( - layers=[10, 10], - func=Softplus, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables)+1 - ) - pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8}) - trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - - # train - trainer_feat.train() - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. parsed-literal:: - - Epoch 999: : 1it [00:00, 85.62it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6] - - -The predicted and exact solutions and the error between them are -represented below. We can easily note that now our network, having -almost the same condition as before, is able to reach additional order -of magnitudes in accuracy. - -.. code:: ipython3 - - plotter.plot(solver=pinn_feat) - - - -.. image:: tutorial_files/tutorial_14_0.png - - -Solving the problem with learnable extra-features PINNs -------------------------------------------------------- - -We can still do better! - -Another way to exploit the extra features is the addition of learnable -parameter inside them. In this way, the added parameters are learned -during the training phase of the neural network. In this case, we use: - -.. math:: - - \begin{equation} - k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)}, - \end{equation} - -where :math:`\alpha` and :math:`\beta` are the abovementioned -parameters. Their implementation is quite trivial: by using the class -``torch.nn.Parameter`` we cam define all the learnable parameters we -need, and they are managed by ``autograd`` module! - -.. code:: ipython3 - - class SinSinAB(torch.nn.Module): - """ """ - def __init__(self): - super().__init__() - self.alpha = torch.nn.Parameter(torch.tensor([1.0])) - self.beta = torch.nn.Parameter(torch.tensor([1.0])) - - - def forward(self, x): - t = ( - self.beta*torch.sin(self.alpha*x.extract(['x'])*torch.pi)* - torch.sin(self.alpha*x.extract(['y'])*torch.pi) - ) - return LabelTensor(t, ['b*sin(a*x)sin(a*y)']) - - - # make model + solver + trainer - model_lean= FeedForward( - layers=[10, 10], - func=Softplus, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables)+1 - ) - pinn_lean = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8}) - trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - - # train - trainer_learn.train() - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. 
parsed-literal:: - - Epoch 999: : 1it [00:00, 85.94it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7] - - -Umh, the final loss is not appreciabily better than previous model (with -static extra features), despite the usage of learnable parameters. This -is mainly due to the over-parametrization of the network: there are many -parameter to optimize during the training, and the model in unable to -understand automatically that only the parameters of the extra feature -(and not the weights/bias of the FFN) should be tuned in order to fit -our problem. A longer training can be helpful, but in this case the -faster way to reach machine precision for solving the Poisson problem is -removing all the hidden layers in the ``FeedForward``, keeping only the -:math:`\alpha` and :math:`\beta` parameters of the extra feature. - -.. code:: ipython3 - - # make model + solver + trainer - model_lean= FeedForward( - layers=[], - func=Softplus, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables)+1 - ) - pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.01, 'weight_decay':1e-8}) - trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - - # train - trainer_learn.train() - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. parsed-literal:: - - Epoch 999: : 1it [00:00, 98.81it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14] - - -In such a way, the model is able to reach a very high accuracy! Of -course, this is a toy problem for understanding the usage of extra -features: similar precision could be obtained if the extra features are -very similar to the true solution. The analyzed Poisson problem shows a -forcing term very close to the solution, resulting in a perfect problem -to address with such an approach. - -We conclude here by showing the graphical comparison of the unknown -field and the loss trend for all the test cases presented here: the -standard PINN, PINN with extra features, and PINN with learnable extra -features. - -.. code:: ipython3 - - plotter.plot(solver=pinn_learn) - - - -.. image:: tutorial_files/tutorial_21_0.png - - -Let us compare the training losses for the various types of training - -.. code:: ipython3 - - plotter.plot_loss(trainer, logy=True, label='Standard') - plotter.plot_loss(trainer_feat, logy=True,label='Static Features') - plotter.plot_loss(trainer_learn, logy=True, label='Learnable Features') - - - - -.. image:: tutorial_files/tutorial_23_0.png - - -What’s next? ------------- - -Nice you have completed the two dimensional Poisson tutorial of -**PINA**! There are multiple directions you can go now: - -1. Train the network for longer or with different layer sizes and assert - the finaly accuracy - -2. Propose new types of extrafeatures and see how they affect the - learning - -3. Exploit extrafeature training in more complex problems - -4. 
Many more… diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_14_0.png b/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_14_0.png deleted file mode 100644 index 4974131..0000000 Binary files a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_14_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_21_0.png b/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_21_0.png deleted file mode 100644 index acaece6..0000000 Binary files a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_21_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_23_0.png b/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_23_0.png deleted file mode 100644 index 5960e46..0000000 Binary files a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_23_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_9_0.png b/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_9_0.png deleted file mode 100644 index 4dd8b3b..0000000 Binary files a/docs/source/_rst/tutorials/tutorial2/tutorial_files/tutorial_9_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial.rst b/docs/source/_rst/tutorials/tutorial3/tutorial.rst deleted file mode 100644 index 54172f4..0000000 --- a/docs/source/_rst/tutorials/tutorial3/tutorial.rst +++ /dev/null @@ -1,335 +0,0 @@ -Tutorial: Two dimensional Wave problem with hard constraint -=========================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial3/tutorial.ipynb - - -In this tutorial we present how to solve the wave equation using hard -constraint PINNs. For doing so we will build a costum ``torch`` model -and pass it to the ``PINN`` solver. - -First of all, some useful imports. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import torch - - from pina.problem import SpatialProblem, TimeDependentProblem - from pina.operators import laplacian, grad - from pina.geometry import CartesianDomain - from pina.solvers import PINN - from pina.trainer import Trainer - from pina.equation import Equation - from pina.equation.equation_factory import FixedValue - from pina import Condition, Plotter - -The problem definition ----------------------- - -The problem is written in the following form: - -.. math:: - \begin{equation} - \begin{cases} - \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\ - u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\ - u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4, - \end{cases} - \end{equation} - -where :math:`D` is a square domain :math:`[0,1]^2`, and -:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the -square, and the velocity in the standard wave equation is fixed to one. - -Now, the wave problem is written in PINA code as a class, inheriting -from ``SpatialProblem`` and ``TimeDependentProblem`` since we deal with -spatial, and time dependent variables. The equations are written as -``conditions`` that should be satisfied in the corresponding domains. 
-``truth_solution`` is the exact solution which will be compared with the -predicted one. - -.. code:: ipython3 - - class Wave(TimeDependentProblem, SpatialProblem): - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]}) - temporal_domain = CartesianDomain({'t': [0, 1]}) - - def wave_equation(input_, output_): - u_t = grad(output_, input_, components=['u'], d=['t']) - u_tt = grad(u_t, input_, components=['dudt'], d=['t']) - nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y']) - return nabla_u - u_tt - - def initial_condition(input_, output_): - u_expected = (torch.sin(torch.pi*input_.extract(['x'])) * - torch.sin(torch.pi*input_.extract(['y']))) - return output_.extract(['u']) - u_expected - - conditions = { - 'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)), - 'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)), - 'gamma3': Condition(location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)), - 'gamma4': Condition(location=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)), - 't0': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)), - 'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)), - } - - def wave_sol(self, pts): - return (torch.sin(torch.pi*pts.extract(['x'])) * - torch.sin(torch.pi*pts.extract(['y'])) * - torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t']))) - - truth_solution = wave_sol - - problem = Wave() - -Hard Constraint Model ---------------------- - -After the problem, a **torch** model is needed to solve the PINN. -Usually, many models are already implemented in **PINA**, but the user -has the possibility to build his/her own model in ``torch``. The hard -constraint we impose is on the boundary of the spatial domain. -Specifically, our solution is written as: - -.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t), - -where :math:`NN` is the neural net output. This neural network takes as -input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`) -and provides the unknown field :math:`u`. By construction, it is zero on -the boundaries. The residuals of the equations are evaluated at several -sampling points (which the user can manipulate using the method -``discretise_domain``) and the loss minimized by the neural network is -the sum of the residuals. - -.. code:: ipython3 - - class HardMLP(torch.nn.Module): - - def __init__(self, input_dim, output_dim): - super().__init__() - - self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40), - torch.nn.ReLU(), - torch.nn.Linear(40, 40), - torch.nn.ReLU(), - torch.nn.Linear(40, output_dim)) - - # here in the foward we implement the hard constraints - def forward(self, x): - hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y'])) - return hard*self.layers(x) - -Train and Inference -------------------- - -In this tutorial, the neural network is trained for 1000 epochs with a -learning rate of 0.001 (default in ``PINN``). Training takes -approximately 3 minutes. - -.. 
code:: ipython3 - - # generate the data - problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4']) - - # crete the solver - pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables))) - - # create trainer and train - trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. parsed-literal:: - - Epoch 999: : 1it [00:00, 68.69it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121] - - -Notice that the loss on the boundaries of the spatial domain is exactly -zero, as expected! After the training is completed one can now plot some -results using the ``Plotter`` class of **PINA**. - -.. code:: ipython3 - - plotter = Plotter() - - # plotting at fixed time t = 0.0 - print('Plotting at t=0') - plotter.plot(pinn, fixed_variables={'t': 0.0}) - - # plotting at fixed time t = 0.5 - print('Plotting at t=0.5') - plotter.plot(pinn, fixed_variables={'t': 0.5}) - - # plotting at fixed time t = 1. - print('Plotting at t=1') - plotter.plot(pinn, fixed_variables={'t': 1.0}) - - -.. parsed-literal:: - - Plotting at t=0 - - - -.. image:: tutorial_files/tutorial_13_1.png - - -.. parsed-literal:: - - Plotting at t=0.5 - - - -.. image:: tutorial_files/tutorial_13_3.png - - -.. parsed-literal:: - - Plotting at t=1 - - - -.. image:: tutorial_files/tutorial_13_5.png - - -The results are not so great, and we can clearly see that as time -progress the solution get worse…. Can we do better? - -A valid option is to impose the initial condition as hard constraint as -well. Specifically, our solution is written as: - -.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y), - -Let us build the network first - -.. code:: ipython3 - - class HardMLPtime(torch.nn.Module): - - def __init__(self, input_dim, output_dim): - super().__init__() - - self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40), - torch.nn.ReLU(), - torch.nn.Linear(40, 40), - torch.nn.ReLU(), - torch.nn.Linear(40, output_dim)) - - # here in the foward we implement the hard constraints - def forward(self, x): - hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y'])) - hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t'])) - return hard_space * self.layers(x) * x.extract(['t']) + hard_t - -Now let’s train with the same configuration as thre previous test - -.. code:: ipython3 - - # generate the data - problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4']) - - # crete the solver - pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables))) - - # create trainer and train - trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=1000` reached. - - -.. 
parsed-literal:: - - Epoch 999: : 1it [00:00, 45.78it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8] - - -We can clearly see that the loss is way lower now. Let’s plot the -results - -.. code:: ipython3 - - plotter = Plotter() - - # plotting at fixed time t = 0.0 - print('Plotting at t=0') - plotter.plot(pinn, fixed_variables={'t': 0.0}) - - # plotting at fixed time t = 0.5 - print('Plotting at t=0.5') - plotter.plot(pinn, fixed_variables={'t': 0.5}) - - # plotting at fixed time t = 1. - print('Plotting at t=1') - plotter.plot(pinn, fixed_variables={'t': 1.0}) - - -.. parsed-literal:: - - Plotting at t=0 - - - -.. image:: tutorial_files/tutorial_19_1.png - - -.. parsed-literal:: - - Plotting at t=0.5 - - - -.. image:: tutorial_files/tutorial_19_3.png - - -.. parsed-literal:: - - Plotting at t=1 - - - -.. image:: tutorial_files/tutorial_19_5.png - - -We can see now that the results are way better! This is due to the fact -that previously the network was not learning correctly the initial -conditon, leading to a poor solution when the time evolved. By imposing -the initial condition the network is able to correctly solve the -problem. - -What’s next? ------------- - -Nice you have completed the two dimensional Wave tutorial of **PINA**! -There are multiple directions you can go now: - -1. Train the network for longer or with different layer sizes and assert - the finaly accuracy - -2. Propose new types of hard constraints in time, e.g.  - - .. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)(1-\exp(-t)) + \cos(\sqrt{2}\pi t)sin(\pi x)\sin(\pi y), - -3. Exploit extrafeature training for model 1 and 2 - -4. Many more… diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_1.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_1.png deleted file mode 100644 index 795610f..0000000 Binary files a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_1.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_3.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_3.png deleted file mode 100644 index c260215..0000000 Binary files a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_3.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_5.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_5.png deleted file mode 100644 index ebd27a0..0000000 Binary files a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_13_5.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_1.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_1.png deleted file mode 100644 index c9ed12f..0000000 Binary files a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_1.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_3.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_3.png deleted file mode 100644 index 2523fcf..0000000 Binary files a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_3.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_5.png b/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_5.png deleted file mode 100644 index c6448a6..0000000 Binary files 
a/docs/source/_rst/tutorials/tutorial3/tutorial_files/tutorial_19_5.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial.rst b/docs/source/_rst/tutorials/tutorial4/tutorial.rst deleted file mode 100644 index 2900c3e..0000000 --- a/docs/source/_rst/tutorials/tutorial4/tutorial.rst +++ /dev/null @@ -1,820 +0,0 @@ -Tutorial: Unstructured convolutional autoencoder via continuous convolution -=========================================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial4/tutorial.ipynb - -In this tutorial, we will show how to use the Continuous Convolutional -Filter, and how to build common Deep Learning architectures with it. The -implementation of the filter follows the original work `A Continuous -Convolutional Trainable Filter for Modelling Unstructured -Data `__. - -First of all we import the modules needed for the tutorial: - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import torch - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - from pina.problem import AbstractProblem - from pina.solvers import SupervisedSolver - from pina.trainer import Trainer - from pina import Condition, LabelTensor - from pina.model.layers import ContinuousConvBlock - import torchvision # for MNIST dataset - from pina.model import FeedForward # for building AE and MNIST classification - -The tutorial is structured as follow: - -* `Continuous filter background <#continuous-filter-background>`__: understand how the convolutional filter works and how to use it. -* `Building a MNIST Classifier <#building-a-mnist-classifier>`__: show how to build a simple - classifier using the MNIST dataset and how to combine a continuous - convolutional layer with a feedforward neural network. -* `Building a Continuous Convolutional Autoencoder <#building-a-continuous-convolutional-autoencoder>`__: show - show to use the continuous filter to work with unstructured data for - autoencoding and up-sampling. - -Continuous filter background ----------------------------- - -As reported by the authors in the original paper: in contrast to -discrete convolution, continuous convolution is mathematically defined -as: - -.. math:: - - - \mathcal{I}_{\rm{out}}(\mathbf{x}) = \int_{\mathcal{X}} \mathcal{I}(\mathbf{x} + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{\tau}) d\mathbf{\tau}, - -where :math:`\mathcal{K} : \mathcal{X} \rightarrow \mathbb{R}` is the -*continuous filter* function, and -:math:`\mathcal{I} : \Omega \subset \mathbb{R}^N \rightarrow \mathbb{R}` -is the input function. The continuous filter function is approximated -using a FeedForward Neural Network, thus trainable during the training -phase. The way in which the integral is approximated can be different, -currently on **PINA** we approximate it using a simple sum, as suggested -by the authors. Thus, given :math:`\{\mathbf{x}_i\}_{i=1}^{n}` points in -:math:`\mathbb{R}^N` of the input function mapped on the -:math:`\mathcal{X}` filter domain, we approximate the above equation as: - -.. 
math:: - - - \mathcal{I}_{\rm{out}}(\mathbf{\tilde{x}}_i) = \sum_{{\mathbf{x}_i}\in\mathcal{X}} \mathcal{I}(\mathbf{x}_i + \mathbf{\tau}) \cdot \mathcal{K}(\mathbf{x}_i), - -where :math:`\mathbf{\tau} \in \mathcal{S}`, with :math:`\mathcal{S}` -the set of available strides, corresponds to the current stride position -of the filter, and :math:`\mathbf{\tilde{x}}_i` points are obtained by -taking the centroid of the filter position mapped on the :math:`\Omega` -domain. - -We will now try to pratically see how to work with the filter. From the -above definition we see that what is needed is: 1. A domain and a -function defined on that domain (the input) 2. A stride, corresponding -to the positions where the filter needs to be :math:`\rightarrow` -``stride`` variable in ``ContinuousConv`` 3. The filter rectangular -domain :math:`\rightarrow` ``filter_dim`` variable in ``ContinuousConv`` - -Input function -~~~~~~~~~~~~~~ - -The input function for the continuous filter is defined as a tensor of -shape: - -.. math:: [B \times N_{in} \times N \times D] - -\ where :math:`B` is the batch_size, :math:`N_{in}` is the number of -input fields, :math:`N` the number of points in the mesh, :math:`D` the -dimension of the problem. In particular: \* :math:`D` is the number of -spatial variables + 1. The last column must contain the field value. For -example for 2D problems :math:`D=3` and the tensor will be something -like ``[first coordinate, second coordinate, field value]`` \* -:math:`N_{in}` represents the number of vectorial function presented. -For example a vectorial function :math:`f = [f_1, f_2]` will have -:math:`N_{in}=2` - -Let’s see an example to clear the ideas. We will be verbose to explain -in details the input form. We wish to create the function: - -.. math:: - - - f(x, y) = [\sin(\pi x) \sin(\pi y), -\sin(\pi x) \sin(\pi y)] \quad (x,y)\in[0,1]\times[0,1] - -using a batch size of one. - -.. code:: ipython3 - - # batch size fixed to 1 - batch_size = 1 - - # points in the mesh fixed to 200 - N = 200 - - # vectorial 2 dimensional function, number_input_fileds=2 - number_input_fileds = 2 - - # 2 dimensional spatial variables, D = 2 + 1 = 3 - D = 3 - - # create the function f domain as random 2d points in [0, 1] - domain = torch.rand(size=(batch_size, number_input_fileds, N, D-1)) - print(f"Domain has shape: {domain.shape}") - - # create the functions - pi = torch.acos(torch.tensor([-1.])) # pi value - f1 = torch.sin(pi * domain[:, 0, :, 0]) * torch.sin(pi * domain[:, 0, :, 1]) - f2 = - torch.sin(pi * domain[:, 1, :, 0]) * torch.sin(pi * domain[:, 1, :, 1]) - - # stacking the input domain and field values - data = torch.empty(size=(batch_size, number_input_fileds, N, D)) - data[..., :-1] = domain # copy the domain - data[:, 0, :, -1] = f1 # copy first field value - data[:, 1, :, -1] = f1 # copy second field value - print(f"Filter input data has shape: {data.shape}") - - -.. parsed-literal:: - - Domain has shape: torch.Size([1, 2, 200, 2]) - Filter input data has shape: torch.Size([1, 2, 200, 3]) - - -Stride -~~~~~~ - -The stride is passed as a dictionary ``stride`` which tells the filter -where to go. Here is an example for the :math:`[0,1]\times[0,5]` domain: - -.. code:: python - - # stride definition - stride = {"domain": [1, 5], - "start": [0, 0], - "jump": [0.1, 0.3], - "direction": [1, 1], - } - -This tells the filter: - -1. ``domain``: square domain (the only implemented) :math:`[0,1]\times[0,5]`. The minimum value is always zero, - while the maximum is specified by the user -2. 
``start``: start position - of the filter, coordinate :math:`(0, 0)` -3. ``jump``: the jumps of the - centroid of the filter to the next position :math:`(0.1, 0.3)` -4. ``direction``: the directions of the jump, with ``1 = right``, - ``0 = no jump``,\ ``-1 = left`` with respect to the current position - -**Note** - -We are planning to release the possibility to directly pass a list of -possible strides! - -Filter definition -~~~~~~~~~~~~~~~~~ - -Having defined all the previous blocks we are able to construct the -continuous filter. Suppose we would like to get an ouput with only one field, and let us -fix the filter dimension to be :math:`[0.1, 0.1]`. - -.. code:: ipython3 - - # filter dim - filter_dim = [0.1, 0.1] - - # stride - stride = {"domain": [1, 1], - "start": [0, 0], - "jump": [0.08, 0.08], - "direction": [1, 1], - } - - # creating the filter - cConv = ContinuousConvBlock(input_numb_field=number_input_fileds, - output_numb_field=1, - filter_dim=filter_dim, - stride=stride) - - -That’s it! In just one line of code we have created the continuous -convolutional filter. By default the ``pina.model.FeedForward`` neural -network is intitialised, more on the -`documentation `__. In -case the mesh doesn’t change during training we can set the ``optimize`` -flag equals to ``True``, to exploit optimizations for finding the points -to convolve. - -.. code:: ipython3 - - # creating the filter + optimization - cConv = ContinuousConvBlock(input_numb_field=number_input_fileds, - output_numb_field=1, - filter_dim=filter_dim, - stride=stride, - optimize=True) - - -Let’s try to do a forward pass - -.. code:: ipython3 - - print(f"Filter input data has shape: {data.shape}") - - #input to the filter - output = cConv(data) - - print(f"Filter output data has shape: {output.shape}") - - -.. parsed-literal:: - - Filter input data has shape: torch.Size([1, 2, 200, 3]) - Filter output data has shape: torch.Size([1, 1, 169, 3]) - - -If we don’t want to use the default ``FeedForward`` neural network, we -can pass a specified torch model in the ``model`` keyword as follow: - -.. code:: ipython3 - - class SimpleKernel(torch.nn.Module): - def __init__(self) -> None: - super().__init__() - self. model = torch.nn.Sequential( - torch.nn.Linear(2, 20), - torch.nn.ReLU(), - torch.nn.Linear(20, 20), - torch.nn.ReLU(), - torch.nn.Linear(20, 1)) - - def forward(self, x): - return self.model(x) - - - cConv = ContinuousConvBlock(input_numb_field=number_input_fileds, - output_numb_field=1, - filter_dim=filter_dim, - stride=stride, - optimize=True, - model=SimpleKernel) - - -Notice that we pass the class and not an already built object! - -Building a MNIST Classifier ---------------------------- - -Let’s see how we can build a MNIST classifier using a continuous -convolutional filter. We will use the MNIST dataset from PyTorch. In -order to keep small training times we use only 6000 samples for training -and 1000 samples for testing. - -.. 
code:: ipython3 - - from torch.utils.data import DataLoader, SubsetRandomSampler - - numb_training = 6000 # get just 6000 images for training - numb_testing= 1000 # get just 1000 images for training - seed = 111 # for reproducibility - batch_size = 8 # setting batch size - - # setting the seed - torch.manual_seed(seed) - - # downloading the dataset - train_data = torchvision.datasets.MNIST('./data/', train=True, download=True, - transform=torchvision.transforms.Compose([ - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize( - (0.1307,), (0.3081,)) - ])) - subsample_train_indices = torch.randperm(len(train_data))[:numb_training] - train_loader = DataLoader(train_data, batch_size=batch_size, - sampler=SubsetRandomSampler(subsample_train_indices)) - - test_data = torchvision.datasets.MNIST('./data/', train=False, download=True, - transform=torchvision.transforms.Compose([ - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize( - (0.1307,), (0.3081,)) - ])) - subsample_test_indices = torch.randperm(len(train_data))[:numb_testing] - test_loader = DataLoader(train_data, batch_size=batch_size, - sampler=SubsetRandomSampler(subsample_train_indices)) - -Let’s now build a simple classifier. The MNIST dataset is composed by -vectors of shape ``[batch, 1, 28, 28]``, but we can image them as one -field functions where the pixels :math:`ij` are the coordinate -:math:`x=i, y=j` in a :math:`[0, 27]\times[0,27]` domain, and the pixels -value are the field values. We just need a function to transform the -regular tensor in a tensor compatible for the continuous filter: - -.. code:: ipython3 - - def transform_input(x): - batch_size = x.shape[0] - dim_grid = tuple(x.shape[:-3:-1]) - - # creating the n dimensional mesh grid for a single channel image - values_mesh = [torch.arange(0, dim).float() for dim in dim_grid] - mesh = torch.meshgrid(values_mesh) - coordinates_mesh = [x.reshape(-1, 1) for x in mesh] - coordinates = torch.cat(coordinates_mesh, dim=1).unsqueeze( - 0).repeat((batch_size, 1, 1)).unsqueeze(1) - - return torch.cat((coordinates, x.flatten(2).unsqueeze(-1)), dim=-1) - - - # let's try it out - image, s = next(iter(train_loader)) - print(f"Original MNIST image shape: {image.shape}") - - image_transformed = transform_input(image) - print(f"Transformed MNIST image shape: {image_transformed.shape}") - - - -.. parsed-literal:: - - Original MNIST image shape: torch.Size([8, 1, 28, 28]) - Transformed MNIST image shape: torch.Size([8, 1, 784, 3]) - - -We can now build a simple classifier! We will use just one convolutional -filter followed by a feedforward neural network - -.. code:: ipython3 - - # setting the seed - torch.manual_seed(seed) - - class ContinuousClassifier(torch.nn.Module): - def __init__(self): - super().__init__() - - # number of classes for classification - numb_class = 10 - - # convolutional block - self.convolution = ContinuousConvBlock(input_numb_field=1, - output_numb_field=4, - stride={"domain": [27, 27], - "start": [0, 0], - "jumps": [4, 4], - "direction": [1, 1.], - }, - filter_dim=[4, 4], - optimize=True) - # feedforward net - self.nn = FeedForward(input_dimensions=196, - output_dimensions=numb_class, - layers=[120, 64], - func=torch.nn.ReLU) - - def forward(self, x): - # transform input + convolution - x = transform_input(x) - x = self.convolution(x) - # feed forward classification - return self.nn(x[..., -1].flatten(1)) - - - net = ContinuousClassifier() - -Let’s try to train it using a simple pytorch training loop. 
We train for -juts 1 epoch using Adam optimizer with a :math:`0.001` learning rate. - -.. code:: ipython3 - - # setting the seed - torch.manual_seed(seed) - - # optimizer and loss function - optimizer = torch.optim.Adam(net.parameters(), lr=0.001) - criterion = torch.nn.CrossEntropyLoss() - - for epoch in range(1): # loop over the dataset multiple times - - running_loss = 0.0 - for i, data in enumerate(train_loader, 0): - # get the inputs; data is a list of [inputs, labels] - inputs, labels = data - - # zero the parameter gradients - optimizer.zero_grad() - - # forward + backward + optimize - outputs = net(inputs) - loss = criterion(outputs, labels) - loss.backward() - optimizer.step() - - # print statistics - running_loss += loss.item() - if i % 50 == 49: - print( - f'batch [{i + 1}/{numb_training//batch_size}] loss[{running_loss / 500:.3f}]') - running_loss = 0.0 - - -.. parsed-literal:: - - batch [50/750] loss[0.161] - batch [100/750] loss[0.073] - batch [150/750] loss[0.063] - batch [200/750] loss[0.051] - batch [250/750] loss[0.044] - batch [300/750] loss[0.050] - batch [350/750] loss[0.053] - batch [400/750] loss[0.049] - batch [450/750] loss[0.046] - batch [500/750] loss[0.034] - batch [550/750] loss[0.036] - batch [600/750] loss[0.040] - batch [650/750] loss[0.028] - batch [700/750] loss[0.040] - batch [750/750] loss[0.040] - - -Let’s see the performance on the train set! - -.. code:: ipython3 - - correct = 0 - total = 0 - with torch.no_grad(): - for data in test_loader: - images, labels = data - # calculate outputs by running images through the network - outputs = net(images) - # the class with the highest energy is what we choose as prediction - _, predicted = torch.max(outputs.data, 1) - total += labels.size(0) - correct += (predicted == labels).sum().item() - - print( - f'Accuracy of the network on the 1000 test images: {(correct / total):.3%}') - - - -.. parsed-literal:: - - Accuracy of the network on the 1000 test images: 92.733% - - -As we can see we have very good performance for having traing only for 1 -epoch! Nevertheless, we are still using structured data… Let’s see how -we can build an autoencoder for unstructured data now. - -Building a Continuous Convolutional Autoencoder ------------------------------------------------ - -Just as toy problem, we will now build an autoencoder for the following -function :math:`f(x,y)=\sin(\pi x)\sin(\pi y)` on the unit circle domain -centered in :math:`(0.5, 0.5)`. We will also see the ability to -up-sample (once trained) the results without retraining. Let’s first -create the input and visualize it, we will use firstly a mesh of -:math:`100` points. - -.. 
code:: ipython3 - - # create inputs - def circle_grid(N=100): - """Generate points withing a unit 2D circle centered in (0.5, 0.5) - - :param N: number of points - :type N: float - :return: [x, y] array of points - :rtype: torch.tensor - """ - - PI = torch.acos(torch.zeros(1)).item() * 2 - R = 0.5 - centerX = 0.5 - centerY = 0.5 - - r = R * torch.sqrt(torch.rand(N)) - theta = torch.rand(N) * 2 * PI - - x = centerX + r * torch.cos(theta) - y = centerY + r * torch.sin(theta) - - return torch.stack([x, y]).T - - # create the grid - grid = circle_grid(500) - - # create input - input_data = torch.empty(size=(1, 1, grid.shape[0], 3)) - input_data[0, 0, :, :-1] = grid - input_data[0, 0, :, -1] = torch.sin(pi * grid[:, 0]) * torch.sin(pi * grid[:, 1]) - - # visualize data - plt.title("Training sample with 500 points") - plt.scatter(grid[:, 0], grid[:, 1], c=input_data[0, 0, :, -1]) - plt.colorbar() - plt.show() - - - - -.. image:: tutorial_files/tutorial_32_0.png - - -Let’s now build a simple autoencoder using the continuous convolutional -filter. The data is clearly unstructured and a simple convolutional -filter might not work without projecting or interpolating first. Let’s -first build and ``Encoder`` and ``Decoder`` class, and then a -``Autoencoder`` class that contains both. - -.. code:: ipython3 - - class Encoder(torch.nn.Module): - def __init__(self, hidden_dimension): - super().__init__() - - # convolutional block - self.convolution = ContinuousConvBlock(input_numb_field=1, - output_numb_field=2, - stride={"domain": [1, 1], - "start": [0, 0], - "jumps": [0.05, 0.05], - "direction": [1, 1.], - }, - filter_dim=[0.15, 0.15], - optimize=True) - # feedforward net - self.nn = FeedForward(input_dimensions=400, - output_dimensions=hidden_dimension, - layers=[240, 120]) - - def forward(self, x): - # convolution - x = self.convolution(x) - # feed forward pass - return self.nn(x[..., -1]) - - - class Decoder(torch.nn.Module): - def __init__(self, hidden_dimension): - super().__init__() - - # convolutional block - self.convolution = ContinuousConvBlock(input_numb_field=2, - output_numb_field=1, - stride={"domain": [1, 1], - "start": [0, 0], - "jumps": [0.05, 0.05], - "direction": [1, 1.], - }, - filter_dim=[0.15, 0.15], - optimize=True) - # feedforward net - self.nn = FeedForward(input_dimensions=hidden_dimension, - output_dimensions=400, - layers=[120, 240]) - - def forward(self, weights, grid): - # feed forward pass - x = self.nn(weights) - # transpose convolution - return torch.sigmoid(self.convolution.transpose(x, grid)) - - -Very good! Notice that in the ``Decoder`` class in the ``forward`` pass -we have used the ``.transpose()`` method of the -``ContinuousConvolution`` class. This method accepts the ``weights`` for -upsampling and the ``grid`` on where to upsample. Let’s now build the -autoencoder! We set the hidden dimension in the ``hidden_dimension`` -variable. We apply the sigmoid on the output since the field value is -between :math:`[0, 1]`. - -.. code:: ipython3 - - class Autoencoder(torch.nn.Module): - def __init__(self, hidden_dimension=10): - super().__init__() - - self.encoder = Encoder(hidden_dimension) - self.decoder = Decoder(hidden_dimension) - - def forward(self, x): - # saving grid for later upsampling - grid = x.clone().detach() - # encoder - weights = self.encoder(x) - # decoder - out = self.decoder(weights, grid) - return out - - net = Autoencoder() - -Let’s now train the autoencoder, minimizing the mean square error loss -and optimizing using Adam. 
We use the ``SupervisedSolver`` as solver, -and the problem is a simple problem created by inheriting from -``AbstractProblem``. It takes approximately two minutes to train on CPU. - -.. code:: ipython3 - - # define the problem - class CircleProblem(AbstractProblem): - input_variables = ['x', 'y', 'f'] - output_variables = input_variables - conditions = {'data' : Condition(input_points=LabelTensor(input_data, input_variables), output_points=LabelTensor(input_data, output_variables))} - - # define the solver - solver = SupervisedSolver(problem=CircleProblem(), model=net, loss=torch.nn.MSELoss()) - - # train - trainer = Trainer(solver, max_epochs=150, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=150` reached. - - -Let’s visualize the two solutions side by side! - -.. code:: ipython3 - - net.eval() - - # get output and detach from computational graph for plotting - output = net(input_data).detach() - - # visualize data - fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 3)) - pic1 = axes[0].scatter(grid[:, 0], grid[:, 1], c=input_data[0, 0, :, -1]) - axes[0].set_title("Real") - fig.colorbar(pic1) - plt.subplot(1, 2, 2) - pic2 = axes[1].scatter(grid[:, 0], grid[:, 1], c=output[0, 0, :, -1]) - axes[1].set_title("Autoencoder") - fig.colorbar(pic2) - plt.tight_layout() - plt.show() - - - - -.. image:: tutorial_files/tutorial_40_0.png - - -As we can see the two are really similar! We can compute the :math:`l_2` -error quite easily as well: - -.. code:: ipython3 - - def l2_error(input_, target): - return torch.linalg.norm(input_-target, ord=2)/torch.linalg.norm(input_, ord=2) - - - print(f'l2 error: {l2_error(input_data[0, 0, :, -1], output[0, 0, :, -1]):.2%}') - - -.. parsed-literal:: - - l2 error: 4.32% - - -More or less :math:`4\%` in :math:`l_2` error, which is really low -considering the fact that we use just **one** convolutional layer and a -simple feedforward to decrease the dimension. Let’s see now some -peculiarity of the filter. - -Filter for upsampling -~~~~~~~~~~~~~~~~~~~~~ - -Suppose we have already the hidden dimension and we want to upsample on -a differen grid with more points. Let’s see how to do it: - -.. code:: ipython3 - - # setting the seed - torch.manual_seed(seed) - - grid2 = circle_grid(1500) # triple number of points - input_data2 = torch.zeros(size=(1, 1, grid2.shape[0], 3)) - input_data2[0, 0, :, :-1] = grid2 - input_data2[0, 0, :, -1] = torch.sin(pi * - grid2[:, 0]) * torch.sin(pi * grid2[:, 1]) - - # get the hidden dimension representation from original input - latent = net.encoder(input_data) - - # upsample on the second input_data2 - output = net.decoder(latent, input_data2).detach() - - # show the picture - fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 3)) - pic1 = axes[0].scatter(grid2[:, 0], grid2[:, 1], c=input_data2[0, 0, :, -1]) - axes[0].set_title("Real") - fig.colorbar(pic1) - plt.subplot(1, 2, 2) - pic2 = axes[1].scatter(grid2[:, 0], grid2[:, 1], c=output[0, 0, :, -1]) - axes[1].set_title("Up-sampling") - fig.colorbar(pic2) - plt.tight_layout() - plt.show() - - - - -.. image:: tutorial_files/tutorial_45_0.png - - -As we can see we have a very good approximation of the original -function, even thought some noise is present. Let’s calculate the error -now: - -.. code:: ipython3 - - print(f'l2 error: {l2_error(input_data2[0, 0, :, -1], output[0, 0, :, -1]):.2%}') - - -.. 
parsed-literal:: - - l2 error: 8.49% - - -Autoencoding at different resolution -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In the previous example we already had the hidden dimension (of original -input) and we used it to upsample. Sometimes however we have a more fine -mesh solution and we simply want to encode it. This can be done without -retraining! This procedure can be useful in case we have many points in -the mesh and just a smaller part of them are needed for training. Let’s -see the results of this: - -.. code:: ipython3 - - # setting the seed - torch.manual_seed(seed) - - grid2 = circle_grid(3500) # very fine mesh - input_data2 = torch.zeros(size=(1, 1, grid2.shape[0], 3)) - input_data2[0, 0, :, :-1] = grid2 - input_data2[0, 0, :, -1] = torch.sin(pi * - grid2[:, 0]) * torch.sin(pi * grid2[:, 1]) - - # get the hidden dimension representation from more fine mesh input - latent = net.encoder(input_data2) - - # upsample on the second input_data2 - output = net.decoder(latent, input_data2).detach() - - # show the picture - fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 3)) - pic1 = axes[0].scatter(grid2[:, 0], grid2[:, 1], c=input_data2[0, 0, :, -1]) - axes[0].set_title("Real") - fig.colorbar(pic1) - plt.subplot(1, 2, 2) - pic2 = axes[1].scatter(grid2[:, 0], grid2[:, 1], c=output[0, 0, :, -1]) - axes[1].set_title("Autoencoder not re-trained") - fig.colorbar(pic2) - plt.tight_layout() - plt.show() - - # calculate l2 error - print( - f'l2 error: {l2_error(input_data2[0, 0, :, -1], output[0, 0, :, -1]):.2%}') - - - - -.. image:: tutorial_files/tutorial_49_0.png - - -.. parsed-literal:: - - l2 error: 8.59% - - -What’s next? ------------- - -We have shown the basic usage of a convolutional filter. There are -additional extensions possible: - -1. Train using Physics Informed strategies - -2. Use the filter to build an unstructured convolutional autoencoder for - reduced order modelling - -3. 
Many more… diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_32_0.png b/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_32_0.png deleted file mode 100644 index 229df27..0000000 Binary files a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_32_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_40_0.png b/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_40_0.png deleted file mode 100644 index 55dea5b..0000000 Binary files a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_40_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_45_0.png b/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_45_0.png deleted file mode 100644 index a3246f9..0000000 Binary files a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_45_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_49_0.png b/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_49_0.png deleted file mode 100644 index 9a15d87..0000000 Binary files a/docs/source/_rst/tutorials/tutorial4/tutorial_files/tutorial_49_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial5/tutorial.rst b/docs/source/_rst/tutorials/tutorial5/tutorial.rst deleted file mode 100644 index 59eb62a..0000000 --- a/docs/source/_rst/tutorials/tutorial5/tutorial.rst +++ /dev/null @@ -1,249 +0,0 @@ -Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator -====================================================================== - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial5/tutorial.ipynb - -In this tutorial we are going to solve the Darcy flow problem in two -dimensions, presented in `Fourier Neural Operator for Parametric Partial -Differential Equations `__. -First of all, we import the modules needed for the tutorial. Importing -``scipy`` is needed for input/output operations. - -.. code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - !pip install scipy - # get the data - !wget https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial5/Data_Darcy.mat - - - # !pip install scipy # install scipy - from scipy import io - import torch - from pina.model import FNO, FeedForward # let's import some models - from pina import Condition, LabelTensor - from pina.solvers import SupervisedSolver - from pina.trainer import Trainer - from pina.problem import AbstractProblem - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - -Data Generation --------------- - -We will focus on solving a specific PDE, the **Darcy Flow** equation. -The Darcy PDE is a second order, elliptic PDE with the following form: - -.. math:: - - - -\nabla\cdot(k(x, y)\nabla u(x, y)) = f(x) \quad (x, y) \in D. - -Specifically, :math:`u` is the flow pressure, :math:`k` is the -permeability field and :math:`f` is the forcing function. The Darcy flow -can parameterize a variety of systems including flow through porous -media, elastic materials and heat conduction. Here you will define the -domain as a 2D unit square with Dirichlet boundary conditions. The dataset is -taken from the authors' original reference. - -.. 
code:: ipython3 - - # download the dataset - data = io.loadmat("Data_Darcy.mat") - - # extract data (we use only 100 data for train) - k_train = LabelTensor(torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1), ['u0']) - u_train = LabelTensor(torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1), ['u']) - k_test = LabelTensor(torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1), ['u0']) - u_test= LabelTensor(torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1), ['u']) - x = torch.tensor(data['x'], dtype=torch.float)[0] - y = torch.tensor(data['y'], dtype=torch.float)[0] - -Let’s visualize some data - -.. code:: ipython3 - - plt.subplot(1, 2, 1) - plt.title('permeability') - plt.imshow(k_train.squeeze(-1)[0]) - plt.subplot(1, 2, 2) - plt.title('field solution') - plt.imshow(u_train.squeeze(-1)[0]) - plt.show() - - - -.. image:: tutorial_files/tutorial_6_0.png - - -We now create the neural operator class. It is a very simple class, -inheriting from ``AbstractProblem``. - -.. code:: ipython3 - - class NeuralOperatorSolver(AbstractProblem): - input_variables = k_train.labels - output_variables = u_train.labels - conditions = {'data' : Condition(input_points=k_train, - output_points=u_train)} - - # make problem - problem = NeuralOperatorSolver() - -Solving the problem with a FeedForward Neural Network ------------------------------------------------------ - -We will first solve the problem using a Feedforward neural network. We -will use the ``SupervisedSolver`` for solving the problem, since we are -training using supervised learning. - -.. code:: ipython3 - - # make model - model = FeedForward(input_dimensions=1, output_dimensions=1) - - - # make solver - solver = SupervisedSolver(problem=problem, model=model) - - # make the trainer and train - trainer = Trainer(solver=solver, max_epochs=10, accelerator='cpu', enable_model_summary=False, batch_size=10) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - - -.. parsed-literal:: - - GPU available: False, used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - -.. parsed-literal:: - - Epoch 9: : 100it [00:00, 357.28it/s, v_num=1, mean_loss=0.108] - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=10` reached. - - -.. parsed-literal:: - - Epoch 9: : 100it [00:00, 354.81it/s, v_num=1, mean_loss=0.108] - - -The final loss is pretty high… We can calculate the error by importing -``LpLoss``. - -.. code:: ipython3 - - from pina.loss import LpLoss - - # make the metric - metric_err = LpLoss(relative=True) - - - err = float(metric_err(u_train.squeeze(-1), solver.neural_net(k_train).squeeze(-1)).mean())*100 - print(f'Final error training {err:.2f}%') - - err = float(metric_err(u_test.squeeze(-1), solver.neural_net(k_test).squeeze(-1)).mean())*100 - print(f'Final error testing {err:.2f}%') - - -.. parsed-literal:: - - Final error training 56.04% - Final error testing 56.01% - - -Solving the problem with a Fuorier Neural Operator (FNO) --------------------------------------------------------- - -We will now move to solve the problem using a FNO. Since we are learning -operator this approach is better suited, as we shall see. - -.. 
code:: ipython3 - - # make model - lifting_net = torch.nn.Linear(1, 24) - projecting_net = torch.nn.Linear(24, 1) - model = FNO(lifting_net=lifting_net, - projecting_net=projecting_net, - n_modes=8, - dimensions=2, - inner_size=24, - padding=8) - - - # make solver - solver = SupervisedSolver(problem=problem, model=model) - - # make the trainer and train - trainer = Trainer(solver=solver, max_epochs=10, accelerator='cpu', enable_model_summary=False, batch_size=10) # we train on CPU and avoid model summary at beginning of training (optional) - trainer.train() - - - -.. parsed-literal:: - - GPU available: False, used: False - TPU available: False, using: 0 TPU cores - IPU available: False, using: 0 IPUs - HPU available: False, using: 0 HPUs - - -.. parsed-literal:: - - Epoch 0: : 0it [00:00, ?it/s]Epoch 9: : 100it [00:02, 47.76it/s, v_num=4, mean_loss=0.00106] - -.. parsed-literal:: - - `Trainer.fit` stopped: `max_epochs=10` reached. - - -.. parsed-literal:: - - Epoch 9: : 100it [00:02, 47.65it/s, v_num=4, mean_loss=0.00106] - - -We can clearly see that the final loss is lower. Let’s see in testing.. -Notice that the number of parameters is way higher than a -``FeedForward`` network. We suggest to use GPU or TPU for a speed up in -training, when many data samples are used. - -.. code:: ipython3 - - err = float(metric_err(u_train.squeeze(-1), solver.neural_net(k_train).squeeze(-1)).mean())*100 - print(f'Final error training {err:.2f}%') - - err = float(metric_err(u_test.squeeze(-1), solver.neural_net(k_test).squeeze(-1)).mean())*100 - print(f'Final error testing {err:.2f}%') - - -.. parsed-literal:: - - Final error training 4.83% - Final error testing 5.16% - - -As we can see the loss is way lower! - -What’s next? ------------- - -We have made a very simple example on how to use the ``FNO`` for -learning neural operator. Currently in **PINA** we implement 1D/2D/3D -cases. We suggest to extend the tutorial using more complex problems and -train for longer, to see the full potential of neural operators. diff --git a/docs/source/_rst/tutorials/tutorial5/tutorial_files/tutorial_6_0.png b/docs/source/_rst/tutorials/tutorial5/tutorial_files/tutorial_6_0.png deleted file mode 100644 index fec83e2..0000000 Binary files a/docs/source/_rst/tutorials/tutorial5/tutorial_files/tutorial_6_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial.rst b/docs/source/_rst/tutorials/tutorial6/tutorial.rst deleted file mode 100644 index d021adf..0000000 --- a/docs/source/_rst/tutorials/tutorial6/tutorial.rst +++ /dev/null @@ -1,330 +0,0 @@ -Tutorial: Building custom geometries with PINA ``Location`` class -================================================================= - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial6/tutorial.ipynb - -In this tutorial we will show how to use geometries in PINA. -Specifically, the tutorial will include how to create geometries and how -to visualize them. The topics covered are: - -- Creating CartesianDomains and EllipsoidDomains -- Getting the Union and Difference of Geometries -- Sampling points in the domain (and visualize them) - -We import the relevant modules first. - -.. 
code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - from pina.geometry import EllipsoidDomain, Difference, CartesianDomain, Union, SimplexDomain - from pina.label_tensor import LabelTensor - - def plot_scatter(ax, pts, title): - ax.title.set_text(title) - ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5) - -Built-in Geometries -------------------- - -We will create one cartesian and two ellipsoids. For the sake of -simplicity, we show here the 2-dimensional, but it’s trivial the -extension to 3D (and higher) cases. The geometries allows also the -generation of samples belonging to the boundary. So, we will create one -ellipsoid with the border and one without. - -.. code:: ipython3 - - cartesian = CartesianDomain({'x': [0, 2], 'y': [0, 2]}) - ellipsoid_no_border = EllipsoidDomain({'x': [1, 3], 'y': [1, 3]}) - ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]}, sample_surface=True) - -The ``{'x': [0, 2], 'y': [0, 2]}`` are the bounds of the -``CartesianDomain`` being created. - -To visualize these shapes, we need to sample points on them. We will use -the ``sample`` method of the ``CartesianDomain`` and ``EllipsoidDomain`` -classes. This method takes a ``n`` argument which is the number of -points to sample. It also takes different modes to sample such as -random. - -.. code:: ipython3 - - cartesian_samples = cartesian.sample(n=1000, mode='random') - ellipsoid_no_border_samples = ellipsoid_no_border.sample(n=1000, mode='random') - ellipsoid_border_samples = ellipsoid_border.sample(n=1000, mode='random') - -We can see the samples of each of the geometries to see what we are -working with. - -.. code:: ipython3 - - print(f"Cartesian Samples: {cartesian_samples}") - print(f"Ellipsoid No Border Samples: {ellipsoid_no_border_samples}") - print(f"Ellipsoid Border Samples: {ellipsoid_border_samples}") - - -.. parsed-literal:: - - Cartesian Samples: labels(['x', 'y']) - LabelTensor([[[0.2300, 1.6698]], - [[1.7785, 0.4063]], - [[1.5143, 1.8979]], - ..., - [[0.0905, 1.4660]], - [[0.8176, 1.7357]], - [[0.0475, 0.0170]]]) - Ellipsoid No Border Samples: labels(['x', 'y']) - LabelTensor([[[1.9341, 2.0182]], - [[1.5503, 1.8426]], - [[2.0392, 1.7597]], - ..., - [[1.8976, 2.2859]], - [[1.8015, 2.0012]], - [[2.2713, 2.2355]]]) - Ellipsoid Border Samples: labels(['x', 'y']) - LabelTensor([[[3.3413, 3.9400]], - [[3.9573, 2.7108]], - [[3.8341, 2.4484]], - ..., - [[2.7251, 2.0385]], - [[3.8654, 2.4990]], - [[3.2292, 3.9734]]]) - - -Notice how these are all ``LabelTensor`` objects. You can read more -about these in the -`documentation `__. -At a very high level, they are tensors where each element in a tensor -has a label that we can access by doing ``.labels``. We can -also access the values of the tensor by doing -``.extract(['x'])``. - -We are now ready to visualize the samples using matplotlib. - -.. code:: ipython3 - - fig, axs = plt.subplots(1, 3, figsize=(16, 4)) - pts_list = [cartesian_samples, ellipsoid_no_border_samples, ellipsoid_border_samples] - title_list = ['Cartesian Domain', 'Ellipsoid Domain', 'Ellipsoid Border Domain'] - for ax, pts, title in zip(axs, pts_list, title_list): - plot_scatter(ax, pts, title) - - - -.. image:: tutorial_files/tutorial_10_0.png - - -We have now created, sampled, and visualized our first geometries! 
We -can see that the ``EllipsoidDomain`` with the border has a border around -it. We can also see that the ``EllipsoidDomain`` without the border is -just the ellipse. We can also see that the ``CartesianDomain`` is just a -square. - -Simplex Domain -~~~~~~~~~~~~~~ - -Among the built-in shapes, we quickly show here the usage of -``SimplexDomain``, which can be used for polygonal domains! - -.. code:: ipython3 - - import torch - spatial_domain = SimplexDomain( - [ - LabelTensor(torch.tensor([[0, 0]]), labels=["x", "y"]), - LabelTensor(torch.tensor([[1, 1]]), labels=["x", "y"]), - LabelTensor(torch.tensor([[0, 2]]), labels=["x", "y"]), - ] - ) - - spatial_domain2 = SimplexDomain( - [ - LabelTensor(torch.tensor([[ 0., -2.]]), labels=["x", "y"]), - LabelTensor(torch.tensor([[-.5, -.5]]), labels=["x", "y"]), - LabelTensor(torch.tensor([[-2., 0.]]), labels=["x", "y"]), - ] - ) - - pts = spatial_domain2.sample(100) - fig, axs = plt.subplots(1, 2, figsize=(16, 6)) - for domain, ax in zip([spatial_domain, spatial_domain2], axs): - pts = domain.sample(1000) - plot_scatter(ax, pts, 'Simplex Domain') - - - -.. image:: tutorial_files/tutorial_13_0.png - - -Boolean Operations ------------------- - -To create complex shapes we can use the boolean operations, for example -to merge two default geometries. We need to simply use the ``Union`` -class: it takes a list of geometries and returns the union of them. - -Let’s create three unions. Firstly, it will be a union of ``cartesian`` -and ``ellipsoid_no_border``. Next, it will be a union of -``ellipse_no_border`` and ``ellipse_border``. Lastly, it will be a union -of all three geometries. - -.. code:: ipython3 - - cart_ellipse_nb_union = Union([cartesian, ellipsoid_no_border]) - cart_ellipse_b_union = Union([cartesian, ellipsoid_border]) - three_domain_union = Union([cartesian, ellipsoid_no_border, ellipsoid_border]) - -We can of course sample points over the new geometries, by using the -``sample`` method as before. We highlihgt that the available sample -strategy here is only *random*. - -.. code:: ipython3 - - c_e_nb_u_points = cart_ellipse_nb_union.sample(n=2000, mode='random') - c_e_b_u_points = cart_ellipse_b_union.sample(n=2000, mode='random') - three_domain_union_points = three_domain_union.sample(n=3000, mode='random') - -We can plot the samples of each of the unions to see what we are working -with. - -.. code:: ipython3 - - fig, axs = plt.subplots(1, 3, figsize=(16, 4)) - pts_list = [c_e_nb_u_points, c_e_b_u_points, three_domain_union_points] - title_list = ['Cartesian with Ellipsoid No Border Union', 'Cartesian with Ellipsoid Border Union', 'Three Domain Union'] - for ax, pts, title in zip(axs, pts_list, title_list): - plot_scatter(ax, pts, title) - - - -.. image:: tutorial_files/tutorial_20_0.png - - -Now, we will find the differences of the geometries. We will find the -difference of ``cartesian`` and ``ellipsoid_no_border``. - -.. code:: ipython3 - - cart_ellipse_nb_difference = Difference([cartesian, ellipsoid_no_border]) - c_e_nb_d_points = cart_ellipse_nb_difference.sample(n=2000, mode='random') - - fig, ax = plt.subplots(1, 1, figsize=(8, 6)) - plot_scatter(ax, c_e_nb_d_points, 'Difference') - - - -.. image:: tutorial_files/tutorial_22_0.png - - -Create Custom Location ----------------------- - -We will take a look on how to create our own geometry. The one we will -try to make is a heart defined by the function - -.. 
math:: (x^2+y^2-1)^3-x^2y^3 \le 0 - -Let’s start by importing what we will need to create our own geometry -based on this equation. - -.. code:: ipython3 - - import torch - from pina import Location - from pina import LabelTensor - import random - -Next, we will create the ``Heart(Location)`` class and initialize it. - -.. code:: ipython3 - - class Heart(Location): - """Implementation of the Heart Domain.""" - - def __init__(self, sample_border=False): - super().__init__() - - -Because the ``Location`` class we are inherting from requires both a -``sample`` method and ``is_inside`` method, we will create them and just -add in “pass” for the moment. - -.. code:: ipython3 - - class Heart(Location): - """Implementation of the Heart Domain.""" - - def __init__(self, sample_border=False): - super().__init__() - - def is_inside(self): - pass - - def sample(self): - pass - -Now we have the skeleton for our ``Heart`` class. The ``sample`` -method is where most of the work is done so let’s fill it out. - -.. code:: ipython3 - - - class Heart(Location): - """Implementation of the Heart Domain.""" - - def __init__(self, sample_border=False): - super().__init__() - - def is_inside(self): - pass - - def sample(self, n, mode='random', variables='all'): - sampled_points = [] - - while len(sampled_points) < n: - x = torch.rand(1)*3.-1.5 - y = torch.rand(1)*3.-1.5 - if ((x**2 + y**2 - 1)**3 - (x**2)*(y**3)) <= 0: - sampled_points.append([x.item(), y.item()]) - - return LabelTensor(torch.tensor(sampled_points), labels=['x','y']) - -To create the Heart geometry we simply run: - -.. code:: ipython3 - - heart = Heart() - -To sample from the Heart geometry we simply run: - -.. code:: ipython3 - - pts_heart = heart.sample(1500) - - fig, ax = plt.subplots() - plot_scatter(ax, pts_heart, 'Heart Domain') - - - -.. image:: tutorial_files/tutorial_36_0.png - - -What’s next? ------------- - -We have made a very simple tutorial on how to build custom geometries -and use domain operation to compose base geometries. Now you can play -around with different geometries and build your own! 
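As a small starting point for such experiments, here is a hedged sketch of how the ``is_inside`` method (left as ``pass`` in the ``Heart`` class above) could be completed, reusing the same inequality already used in ``sample``. The method signature and the assumption that ``point`` is a single-point ``LabelTensor`` carrying ``'x'`` and ``'y'`` labels are illustrative choices, not part of the original tutorial, and may need to be adapted to the exact ``Location`` interface.

.. code:: ipython3

    class Heart(Location):
        """Implementation of the Heart Domain, with a sketched ``is_inside``."""

        def __init__(self, sample_border=False):
            super().__init__()

        def is_inside(self, point, check_border=False):
            # point is assumed to be a single-point LabelTensor with labels 'x', 'y'
            x = point.extract(['x'])
            y = point.extract(['y'])
            # the point lies inside (or on) the heart when the defining inequality holds
            return bool(((x**2 + y**2 - 1)**3 - (x**2) * (y**3)) <= 0)

        def sample(self, n, mode='random', variables='all'):
            # same rejection sampling as in the tutorial above
            sampled_points = []
            while len(sampled_points) < n:
                x = torch.rand(1) * 3. - 1.5
                y = torch.rand(1) * 3. - 1.5
                if ((x**2 + y**2 - 1)**3 - (x**2) * (y**3)) <= 0:
                    sampled_points.append([x.item(), y.item()])
            return LabelTensor(torch.tensor(sampled_points), labels=['x', 'y'])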
diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_10_0.png b/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_10_0.png deleted file mode 100644 index b253ffa..0000000 Binary files a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_10_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_13_0.png b/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_13_0.png deleted file mode 100644 index a64e90b..0000000 Binary files a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_13_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_20_0.png b/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_20_0.png deleted file mode 100644 index 42862ad..0000000 Binary files a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_20_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_22_0.png b/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_22_0.png deleted file mode 100644 index 5a573bb..0000000 Binary files a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_22_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_36_0.png b/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_36_0.png deleted file mode 100644 index 8584602..0000000 Binary files a/docs/source/_rst/tutorials/tutorial6/tutorial_files/tutorial_36_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial7/tutorial.rst b/docs/source/_rst/tutorials/tutorial7/tutorial.rst deleted file mode 100644 index ac5ace3..0000000 --- a/docs/source/_rst/tutorials/tutorial7/tutorial.rst +++ /dev/null @@ -1,240 +0,0 @@ -Tutorial: Resolution of an inverse problem -============================================ - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial7/tutorial.ipynb - -Introduction to the inverse problem -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial shows how to solve an inverse Poisson problem with -Physics-Informed Neural Networks. The problem definition is that of a -Poisson problem with homogeneous boundary conditions and it reads: - -.. math:: - - \begin{equation} - \begin{cases} - \Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\ - u = 0 \text{ on }\partial \Omega,\\ - u(\mu_1, \mu_2) = \text{ data} - \end{cases} - \end{equation} - -where :math:`\Omega` is a square domain -:math:`[-2, 2] \times [-2, 2]`, and -:math:`\partial \Omega=\Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4` -is the union of the boundaries of the domain. - -This kind of problem, namely the “inverse problem”, has two main goals: - -* find the solution :math:`u` that satisfies the Poisson equation -* find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that better fit some given data (third equation in the system above). - -In order to achieve both the goals we will need to define an -``InverseProblem`` in PINA. Let’s start with useful imports. - -.. 
code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - # get the data - !mkdir "data" - !wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial7/data/pinn_solution_0.5_0.5" -O "data/pinn_solution_0.5_0.5" - !wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial7/data/pts_0.5_0.5" -O "data/pts_0.5_0.5" - - - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - import torch - from pytorch_lightning.callbacks import Callback - from pina.problem import SpatialProblem, InverseProblem - from pina.operators import laplacian - from pina.model import FeedForward - from pina.equation import Equation, FixedValue - from pina import Condition, Trainer - from pina.solvers import PINN - from pina.geometry import CartesianDomain - -Then, we import the pre-saved data, for (:math:`\mu_1`, -:math:`\mu_2`)=(:math:`0.5`, :math:`0.5`). These two values are the -optimal parameters that we want to find through the neural network -training. In particular, we import the ``input_points``\ (the spatial -coordinates), and the ``output_points`` (the corresponding :math:`u` -values evaluated at the ``input_points``). - -.. code:: ipython3 - - data_output = torch.load('data/pinn_solution_0.5_0.5').detach() - data_input = torch.load('data/pts_0.5_0.5') - -Moreover, let’s plot also the data points and the reference solution: -this is the expected output of the neural network. - -.. code:: ipython3 - - points = data_input.extract(['x', 'y']).detach().numpy() - truth = data_output.detach().numpy() - - plt.scatter(points[:, 0], points[:, 1], c=truth, s=8) - plt.axis('equal') - plt.colorbar() - plt.show() - - - -.. image:: tutorial_files/output_8_0.png - - -Inverse problem definition in PINA -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Then, we initialize the Poisson problem, that is inherited from the -``SpatialProblem`` and from the ``InverseProblem`` classes. We here have -to define all the variables, and the domain where our unknown parameters -(:math:`\mu_1`, :math:`\mu_2`) belong. Notice that the laplace equation -takes as inputs also the unknown variables, that will be treated as -parameters that the neural network optimizes during the training -process. - -.. code:: ipython3 - - ### Define ranges of variables - x_min = -2 - x_max = 2 - y_min = -2 - y_max = 2 - - class Poisson(SpatialProblem, InverseProblem): - ''' - Problem definition for the Poisson equation. - ''' - output_variables = ['u'] - spatial_domain = CartesianDomain({'x': [x_min, x_max], 'y': [y_min, y_max]}) - # define the ranges for the parameters - unknown_parameter_domain = CartesianDomain({'mu1': [-1, 1], 'mu2': [-1, 1]}) - - def laplace_equation(input_, output_, params_): - ''' - Laplace equation with a force term. 
- ''' - force_term = torch.exp( - - 2*(input_.extract(['x']) - params_['mu1'])**2 - - 2*(input_.extract(['y']) - params_['mu2'])**2) - delta_u = laplacian(output_, input_, components=['u'], d=['x', 'y']) - - return delta_u - force_term - - # define the conditions for the loss (boundary conditions, equation, data) - conditions = { - 'gamma1': Condition(location=CartesianDomain({'x': [x_min, x_max], - 'y': y_max}), - equation=FixedValue(0.0, components=['u'])), - 'gamma2': Condition(location=CartesianDomain({'x': [x_min, x_max], 'y': y_min - }), - equation=FixedValue(0.0, components=['u'])), - 'gamma3': Condition(location=CartesianDomain({'x': x_max, 'y': [y_min, y_max] - }), - equation=FixedValue(0.0, components=['u'])), - 'gamma4': Condition(location=CartesianDomain({'x': x_min, 'y': [y_min, y_max] - }), - equation=FixedValue(0.0, components=['u'])), - 'D': Condition(location=CartesianDomain({'x': [x_min, x_max], 'y': [y_min, y_max] - }), - equation=Equation(laplace_equation)), - 'data': Condition(input_points=data_input.extract(['x', 'y']), output_points=data_output) - } - - problem = Poisson() - -Then, we define the model of the neural network we want to use. Here we -used a model which impose hard constrains on the boundary conditions, as -also done in the Wave tutorial! - -.. code:: ipython3 - - model = FeedForward( - layers=[20, 20, 20], - func=torch.nn.Softplus, - output_dimensions=len(problem.output_variables), - input_dimensions=len(problem.input_variables) - ) - -After that, we discretize the spatial domain. - -.. code:: ipython3 - - problem.discretise_domain(20, 'grid', locations=['D'], variables=['x', 'y']) - problem.discretise_domain(1000, 'random', locations=['gamma1', 'gamma2', - 'gamma3', 'gamma4'], variables=['x', 'y']) - -Here, we define a simple callback for the trainer. We use this callback -to save the parameters predicted by the neural network during the -training. The parameters are saved every 100 epochs as ``torch`` tensors -in a specified directory (``tmp_dir`` in our case). The goal is to read -the saved parameters after training and plot their trend across the -epochs. - -.. code:: ipython3 - - # temporary directory for saving logs of training - tmp_dir = "tmp_poisson_inverse" - - class SaveParameters(Callback): - ''' - Callback to save the parameters of the model every 100 epochs. - ''' - def on_train_epoch_end(self, trainer, __): - if trainer.current_epoch % 100 == 99: - torch.save(trainer.solver.problem.unknown_parameters, '{}/parameters_epoch{}'.format(tmp_dir, trainer.current_epoch)) - -Then, we define the ``PINN`` object and train the solver using the -``Trainer``. - -.. code:: ipython3 - - ### train the problem with PINN - max_epochs = 5000 - pinn = PINN(problem, model, optimizer_kwargs={'lr':0.005}) - # define the trainer for the solver - trainer = Trainer(solver=pinn, accelerator='cpu', max_epochs=max_epochs, - default_root_dir=tmp_dir, callbacks=[SaveParameters()]) - trainer.train() - -One can now see how the parameters vary during the training by reading -the saved solution and plotting them. The plot shows that the parameters -stabilize to their true value before reaching the epoch :math:`1000`! - -.. 
code:: ipython3 - - epochs_saved = range(99, max_epochs, 100) - parameters = torch.empty((int(max_epochs/100), 2)) - for i, epoch in enumerate(epochs_saved): - params_torch = torch.load('{}/parameters_epoch{}'.format(tmp_dir, epoch)) - for e, var in enumerate(pinn.problem.unknown_variables): - parameters[i, e] = params_torch[var].data - - # Plot parameters - plt.close() - plt.plot(epochs_saved, parameters[:, 0], label='mu1', marker='o') - plt.plot(epochs_saved, parameters[:, 1], label='mu2', marker='s') - plt.ylim(-1, 1) - plt.grid() - plt.legend() - plt.xlabel('Epoch') - plt.ylabel('Parameter value') - plt.show() - - - -.. image:: tutorial_files/output_21_0.png - - diff --git a/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_21_0.png b/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_21_0.png deleted file mode 100644 index 39f313b..0000000 Binary files a/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_21_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_8_0.png b/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_8_0.png deleted file mode 100644 index 4f706c3..0000000 Binary files a/docs/source/_rst/tutorials/tutorial7/tutorial_files/output_8_0.png and /dev/null differ diff --git a/docs/source/_rst/tutorials/tutorial8/tutorial.rst b/docs/source/_rst/tutorials/tutorial8/tutorial.rst deleted file mode 100644 index 6be60b4..0000000 --- a/docs/source/_rst/tutorials/tutorial8/tutorial.rst +++ /dev/null @@ -1,403 +0,0 @@ -Tutorial: Reduced order model (POD-RBF or POD-NN) for parametric problems -========================================================================= - -|Open In Colab| - -.. |Open In Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial8/tutorial.ipynb - -The tutorial aims to show how to employ the **PINA** library in order to -apply a reduced order modeling technique [1]. Such methodologies have -several similarities with machine learning approaches, since the main -goal consists in predicting the solution of differential equations -(typically parametric PDEs) in a real-time fashion. - -In particular we are going to use the Proper Orthogonal Decomposition -with either Radial Basis Function Interpolation(POD-RBF) or Neural -Network (POD-NN) [2]. Here we basically perform a dimensional reduction -using the POD approach, and approximating the parametric solution -manifold (at the reduced space) using an interpolation (RBF) or a -regression technique (NN). In this example, we use a simple multilayer -perceptron, but the plenty of different architectures can be plugged as -well. - -References -^^^^^^^^^^ - -1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order - Methods and Applications in Computational Fluid Dynamics, Society for - Industrial and Applied Mathematics. -2. Hesthaven, J. S., & Ubbiali, S. (2018). Non-intrusive reduced order - modeling of nonlinear problems using neural networks. Journal of - Computational Physics, 363, 55-78. - -Let’s start with the necessary imports. It’s important to note the -minimum PINA version to run this tutorial is the ``0.1``. - -.. 
code:: ipython3 - - ## routine needed to run the notebook on Google Colab - try: - import google.colab - IN_COLAB = True - except: - IN_COLAB = False - if IN_COLAB: - !pip install "pina-mathlab" - - %matplotlib inline - - import matplotlib.pyplot as plt - plt.style.use('tableau-colorblind10') - import torch - import pina - - from pina.geometry import CartesianDomain - - from pina.problem import ParametricProblem - from pina.model.layers import PODBlock, RBFBlock - from pina import Condition, LabelTensor, Trainer - from pina.model import FeedForward - from pina.solvers import SupervisedSolver - - print(f'We are using PINA version {pina.__version__}') - - -.. parsed-literal:: - - We are using PINA version 0.1.1 - - -We exploit the `Smithers `__ library to -collect the parametric snapshots. In particular, we use the -``NavierStokesDataset`` class that contains a set of parametric -solutions of the Navier-Stokes equations in a 2D L-shape domain. The -parameter is the inflow velocity. The dataset is composed by 500 -snapshots of the velocity (along :math:`x`, :math:`y`, and the -magnitude) and pressure fields, and the corresponding parameter values. - -To visually check the snapshots, let’s plot also the data points and the -reference solution: this is the expected output of our model. - -.. code:: ipython3 - - from smithers.dataset import NavierStokesDataset - dataset = NavierStokesDataset() - - fig, axs = plt.subplots(1, 4, figsize=(14, 3)) - for ax, p, u in zip(axs, dataset.params[:4], dataset.snapshots['mag(v)'][:4]): - ax.tricontourf(dataset.triang, u, levels=16) - ax.set_title(f'$\mu$ = {p[0]:.2f}') - - - -.. image:: tutorial_files/tutorial_5_0.png - - -The *snapshots* - aka the numerical solutions computed for several -parameters - and the corresponding parameters are the only data we need -to train the model, in order to predict the solution for any new test -parameter. To properly validate the accuracy, we initially split the 500 -snapshots into the training dataset (90% of the original data) and the -testing one (the reamining 10%). It must be said that, to plug the -snapshots into **PINA**, we have to cast them to ``LabelTensor`` -objects. - -.. code:: ipython3 - - u = torch.tensor(dataset.snapshots['mag(v)']).float() - p = torch.tensor(dataset.params).float() - - p = LabelTensor(p, labels=['mu']) - u = LabelTensor(u, labels=[f's{i}' for i in range(u.shape[1])]) - - ratio_train_test = 0.9 - n = u.shape - n_train = int(u.shape[0] * ratio_train_test) - n_test = u - n_train - u_train, u_test = u[:n_train], u[n_train:] - p_train, p_test = p[:n_train], p[n_train:] - -It is now time to define the problem! We inherit from -``ParametricProblem`` (since the space invariant typically of this -methodology), just defining a simple *input-output* condition. - -.. code:: ipython3 - - class SnapshotProblem(ParametricProblem): - output_variables = [f's{i}' for i in range(u.shape[1])] - parameter_domain = CartesianDomain({'mu': [0, 100]}) - - conditions = { - 'io': Condition(input_points=p_train, output_points=u_train) - } - - poisson_problem = SnapshotProblem() - -We can then build a ``PODRBF`` model (using a Radial Basis Function -interpolation as approximation) and a ``PODNN`` approach (using an MLP -architecture as approximation). - -POD-RBF reduced order model ---------------------------- - -Then, we define the model we want to use, with the POD (``PODBlock``) -and the RBF (``RBFBlock``) objects. - -.. 
code:: ipython3 - - class PODRBF(torch.nn.Module): - """ - Proper orthogonal decomposition with Radial Basis Function interpolation model. - """ - - def __init__(self, pod_rank, rbf_kernel): - """ - - """ - super().__init__() - - self.pod = PODBlock(pod_rank) - self.rbf = RBFBlock(kernel=rbf_kernel) - - - def forward(self, x): - """ - Defines the computation performed at every call. - - :param x: The tensor to apply the forward pass. - :type x: torch.Tensor - :return: the output computed by the model. - :rtype: torch.Tensor - """ - coefficents = self.rbf(x) - return self.pod.expand(coefficents) - - def fit(self, p, x): - """ - Call the :meth:`pina.model.layers.PODBlock.fit` method of the - :attr:`pina.model.layers.PODBlock` attribute to perform the POD, - and the :meth:`pina.model.layers.RBFBlock.fit` method of the - :attr:`pina.model.layers.RBFBlock` attribute to fit the interpolation. - """ - self.pod.fit(x) - self.rbf.fit(p, self.pod.reduce(x)) - -We can then fit the model and ask it to predict the required field for -unseen values of the parameters. Note that this model does not need a -``Trainer`` since it does not include any neural network or learnable -parameters. - -.. code:: ipython3 - - pod_rbf = PODRBF(pod_rank=20, rbf_kernel='thin_plate_spline') - pod_rbf.fit(p_train, u_train) - -.. code:: ipython3 - - u_test_rbf = pod_rbf(p_test) - u_train_rbf = pod_rbf(p_train) - - relative_error_train = torch.norm(u_train_rbf - u_train)/torch.norm(u_train) - relative_error_test = torch.norm(u_test_rbf - u_test)/torch.norm(u_test) - - print('Error summary for POD-RBF model:') - print(f' Train: {relative_error_train.item():e}') - print(f' Test: {relative_error_test.item():e}') - - -.. parsed-literal:: - - Error summary for POD-RBF model: - Train: 1.287801e-03 - Test: 1.217041e-03 - - -POD-NN reduced order model --------------------------- - -.. code:: ipython3 - - class PODNN(torch.nn.Module): - """ - Proper orthogonal decomposition with neural network model. - """ - - def __init__(self, pod_rank, layers, func): - """ - - """ - super().__init__() - - self.pod = PODBlock(pod_rank) - self.nn = FeedForward( - input_dimensions=1, - output_dimensions=pod_rank, - layers=layers, - func=func - ) - - - def forward(self, x): - """ - Defines the computation performed at every call. - - :param x: The tensor to apply the forward pass. - :type x: torch.Tensor - :return: the output computed by the model. - :rtype: torch.Tensor - """ - coefficents = self.nn(x) - return self.pod.expand(coefficents) - - def fit_pod(self, x): - """ - Just call the :meth:`pina.model.layers.PODBlock.fit` method of the - :attr:`pina.model.layers.PODBlock` attribute. - """ - self.pod.fit(x) - -We highlight that the POD modes are directly computed by means of the -singular value decomposition (computed over the input data), and not -trained using the backpropagation approach. Only the weights of the MLP -are actually trained during the optimization loop. - -.. code:: ipython3 - - pod_nn = PODNN(pod_rank=20, layers=[10, 10, 10], func=torch.nn.Tanh) - pod_nn.fit_pod(u_train) - - pod_nn_stokes = SupervisedSolver( - problem=poisson_problem, - model=pod_nn, - optimizer=torch.optim.Adam, - optimizer_kwargs={'lr': 0.0001}) - -Now that we have set the ``Problem`` and the ``Model``, we have just to -train the model and use it for predicting the test snapshots. - -.. code:: ipython3 - - trainer = Trainer( - solver=pod_nn_stokes, - max_epochs=1000, - batch_size=100, - log_every_n_steps=5, - accelerator='cpu') - trainer.train() - - -.. 
-POD-NN reduced order model
---------------------------
-
-We now define the POD-NN model, where the RBF interpolation is replaced by
-a feed-forward neural network mapping the parameter to the POD
-coefficients.
-
-.. code:: ipython3
-
-    class PODNN(torch.nn.Module):
-        """
-        Proper orthogonal decomposition with neural network model.
-        """
-
-        def __init__(self, pod_rank, layers, func):
-            """
-            :param int pod_rank: number of POD modes to retain.
-            :param list(int) layers: hidden layer sizes of the MLP.
-            :param torch.nn.Module func: activation function of the MLP.
-            """
-            super().__init__()
-
-            self.pod = PODBlock(pod_rank)
-            self.nn = FeedForward(
-                input_dimensions=1,
-                output_dimensions=pod_rank,
-                layers=layers,
-                func=func
-            )
-
-        def forward(self, x):
-            """
-            Defines the computation performed at every call.
-
-            :param x: The tensor to apply the forward pass.
-            :type x: torch.Tensor
-            :return: the output computed by the model.
-            :rtype: torch.Tensor
-            """
-            coefficients = self.nn(x)
-            return self.pod.expand(coefficients)
-
-        def fit_pod(self, x):
-            """
-            Just call the :meth:`pina.model.layers.PODBlock.fit` method of
-            the :attr:`pina.model.layers.PODBlock` attribute.
-            """
-            self.pod.fit(x)
-
-We highlight that the POD modes are computed directly by means of the
-singular value decomposition of the input data, and are not trained with
-backpropagation. Only the weights of the MLP are actually trained during
-the optimization loop.
-
-.. code:: ipython3
-
-    pod_nn = PODNN(pod_rank=20, layers=[10, 10, 10], func=torch.nn.Tanh)
-    pod_nn.fit_pod(u_train)
-
-    pod_nn_stokes = SupervisedSolver(
-        problem=poisson_problem,
-        model=pod_nn,
-        optimizer=torch.optim.Adam,
-        optimizer_kwargs={'lr': 0.0001})
-
-Now that we have set the ``Problem`` and the ``Model``, we just have to
-train the model and use it to predict the test snapshots.
-
-.. code:: ipython3
-
-    trainer = Trainer(
-        solver=pod_nn_stokes,
-        max_epochs=1000,
-        batch_size=100,
-        log_every_n_steps=5,
-        accelerator='cpu')
-    trainer.train()
-
-
-.. parsed-literal::
-
-    GPU available: True (cuda), used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-    /u/a/aivagnes/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:187: GPU available but not used. You can set it by doing `Trainer(accelerator='gpu')`.
-
-      | Name        | Type    | Params
-    ----------------------------------------
-    0 | _loss       | MSELoss | 0
-    1 | _neural_net | Network | 460
-    ----------------------------------------
-    460       Trainable params
-    0         Non-trainable params
-    460       Total params
-    0.002     Total estimated model params size (MB)
-    /u/a/aivagnes/anaconda3/lib/python3.8/site-packages/torch/cuda/__init__.py:152: UserWarning:
-    Found GPU0 Quadro K600 which is of cuda capability 3.0.
-        PyTorch no longer supports this GPU because it is too old.
-        The minimum cuda capability supported by this library is 3.7.
-
-      warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
-
-
-.. parsed-literal::
-
-    Training: |          | 0/? [00:00`__.
-
-First of all, some useful imports.
-
-.. code:: ipython3
-
-    ## routine needed to run the notebook on Google Colab
-    try:
-        import google.colab
-        IN_COLAB = True
-    except:
-        IN_COLAB = False
-    if IN_COLAB:
-        !pip install "pina-mathlab"
-
-    import torch
-    import matplotlib.pyplot as plt
-    plt.style.use('tableau-colorblind10')
-
-    from pina import Condition, Plotter
-    from pina.problem import SpatialProblem
-    from pina.operators import laplacian
-    from pina.model import FeedForward
-    from pina.model.layers import PeriodicBoundaryEmbedding # The PBC module
-    from pina.solvers import PINN
-    from pina.trainer import Trainer
-    from pina.geometry import CartesianDomain
-    from pina.equation import Equation
-
-The problem definition
-----------------------
-
-The one-dimensional Helmholtz problem is mathematically written as:
-
-.. math::
-
-    \begin{cases}
-    \frac{d^2}{dx^2}u(x) - \lambda u(x) - f(x) &= 0 \quad x\in(0,2)\\
-    u^{(m)}(x=0) - u^{(m)}(x=2) &= 0 \quad m\in\{0, 1, \cdots\}\\
-    \end{cases}
-
-In this case we are asking the solution to be :math:`C^{\infty}` periodic
-with period :math:`2`, on the infinite domain
-:math:`x\in(-\infty, \infty)`. Notice that a classical PINN would need
-infinitely many conditions to evaluate the PBC loss function, one for
-each derivative order, which is of course infeasible… A possible solution,
-diverging from the original PINN formulation, is to use *coordinates
-augmentation*. In coordinates augmentation we seek a coordinate
-transformation :math:`v`, with :math:`x\rightarrow v(x)`, such that the
-periodicity condition
-:math:`u^{(m)}(x=0) - u^{(m)}(x=2) = 0, \; m\in\{0, 1, \cdots\}` is
-satisfied.
-
-For demonstration purposes the problem specifics are
-:math:`\lambda=-10\pi^2` and
-:math:`f(x)=-6\pi^2\sin(3\pi x)\cos(\pi x)`, which give a solution that
-can be computed analytically, :math:`u(x) = \sin(\pi x)\cos(3\pi x)`.
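-Before coding the problem, we can quickly verify this claim numerically.
-The snippet below is not part of the original tutorial: it is a small
-sanity check that uses plain ``torch.autograd`` to evaluate the residual
-:math:`\frac{d^2 u}{dx^2} - \lambda u - f` of the stated analytical
-solution on a set of points in :math:`(0, 2)`.
-
-.. code:: ipython3
-
-    # sanity check (not in the original tutorial): the residual of the
-    # analytical solution should vanish up to single-precision round-off
-    x = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)
-    u = torch.sin(torch.pi * x) * torch.cos(3 * torch.pi * x)
-    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
-    u_xx = torch.autograd.grad(u_x.sum(), x)[0]
-    lam = -10 * torch.pi**2
-    f = -6 * torch.pi**2 * torch.sin(3 * torch.pi * x) * torch.cos(torch.pi * x)
-    print('max |residual|:', (u_xx - lam * u - f).abs().max().item())
-
-We can now write the problem as a **PINA** class.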
-.. code:: ipython3
-
-    class Helmholtz(SpatialProblem):
-        output_variables = ['u']
-        spatial_domain = CartesianDomain({'x': [0, 2]})
-
-        def helmholtz_equation(input_, output_):
-            x = input_.extract('x')
-            u_xx = laplacian(output_, input_, components=['u'], d=['x'])
-            f = - 6.*torch.pi**2 * torch.sin(3*torch.pi*x)*torch.cos(torch.pi*x)
-            lambda_ = - 10. * torch.pi ** 2
-            return u_xx - lambda_ * output_ - f
-
-        # here we write the problem conditions
-        conditions = {
-            'D': Condition(location=spatial_domain,
-                           equation=Equation(helmholtz_equation)),
-        }
-
-        def helmholtz_sol(self, pts):
-            return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)
-
-        truth_solution = helmholtz_sol
-
-    problem = Helmholtz()
-
-    # let's discretise the domain
-    problem.discretise_domain(200, 'grid', locations=['D'])
-
-As usual, the Helmholtz problem is written in **PINA** code as a class.
-The equations are written as ``conditions`` that should be satisfied in
-the corresponding domains. The ``truth_solution`` is the exact solution,
-which will be compared with the predicted one. Here the collocation
-points are sampled on a grid of 200 points in the spatial domain.
-
-Solving the problem with a Periodic Network
--------------------------------------------
-
-Any :math:`\mathcal{C}^{\infty}` periodic function
-:math:`u : \mathbb{R} \rightarrow \mathbb{R}` with period
-:math:`L\in\mathbb{N}` can be constructed by composition of an arbitrary
-smooth function :math:`f : \mathbb{R}^n \rightarrow \mathbb{R}` and a
-given smooth periodic function
-:math:`v : \mathbb{R} \rightarrow \mathbb{R}^n` with period :math:`L`,
-that is :math:`u(x) = f(v(x))`. The formulation generalizes to arbitrary
-dimension, see `A method for representing periodic functions and
-enforcing exactly periodic boundary conditions with deep neural
-networks `__.
-
-In our case, we choose
-:math:`v(x) = \left[1, \cos\left(\frac{2\pi}{L} x\right), \sin\left(\frac{2\pi}{L} x\right)\right]`,
-i.e. the coordinates augmentation, and
-:math:`f(\cdot) = NN_{\theta}(\cdot)`, i.e. a neural network. The network
-obtained by composing :math:`f` with :math:`v` gives the PINN approximate
-solution, that is
-:math:`u(x) \approx u_{\theta}(x)=NN_{\theta}(v(x))`.
-
-In **PINA** this translates into using the ``PeriodicBoundaryEmbedding``
-layer for :math:`v`, and any ``pina.model`` for :math:`NN_{\theta}`.
-Let's see it in action!
-
-.. code:: ipython3
-
-    # we encapsulate all modules in a torch.nn.Sequential container
-    model = torch.nn.Sequential(PeriodicBoundaryEmbedding(input_dimension=1,
-                                                          periods=2),
-                                FeedForward(input_dimensions=3, # output of PeriodicBoundaryEmbedding = 3 * input_dimension
-                                            output_dimensions=1,
-                                            layers=[10, 10]))
-
-As simple as that! Notice that in higher dimensions you can specify
-different periods for the different coordinates using a dictionary,
-e.g. ``periods={'x':2, 'y':3, ...}`` would indicate a periodicity of
-:math:`2` in :math:`x`, :math:`3` in :math:`y`, and so on…
-
-We will now solve the problem, as usual, with the ``PINN`` and ``Trainer``
-classes.
-
-.. code:: ipython3
-
-    pinn = PINN(problem=problem, model=model)
-    trainer = Trainer(pinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
-    trainer.train()
-
-
-.. parsed-literal::
-
-    GPU available: True (mps), used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    `Trainer.fit` stopped: `max_epochs=5000` reached.
-
-
-.. parsed-literal::
-
-    Epoch 4999: 100%|██████████| 1/1 [00:00<00:00, 155.47it/s, v_num=20, D_loss=0.0123, mean_loss=0.0123]
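-The training loss already gives an idea of the fit but, as an extra step
-not included in the original tutorial, we can also quantify the accuracy
-against the analytical solution directly. The sketch below samples the
-training domain on a grid and computes the relative :math:`L^2` error
-between the network prediction and ``problem.truth_solution``.
-
-.. code:: ipython3
-
-    # extra check (not in the original tutorial): relative L2 error on [0, 2]
-    with torch.no_grad():
-        pts = problem.spatial_domain.sample(512, mode='grid')
-        u_pred = pinn(pts)
-        u_true = problem.truth_solution(pts)
-        rel_l2 = torch.norm(u_pred - u_true) / torch.norm(u_true)
-    print(f'relative L2 error on the training domain: {rel_l2.item():.2e}')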
-We are going to plot the solution now!
-
-.. code:: ipython3
-
-    pl = Plotter()
-    pl.plot(pinn)
-
-
-
-.. image:: tutorial_files/tutorial_11_0.png
-
-
-Great, they overlap perfectly! This seems a good result, considering the
-simple neural network used to solve this (complex) problem. We will now
-test the neural network on the extended domain :math:`[0, 4]`, without
-retraining. In principle the periodicity should carry over, since the
-:math:`v` function enforces periodicity on the whole of
-:math:`(-\infty, \infty)`.
-
-.. code:: ipython3
-
-    # plotting solution
-    with torch.no_grad():
-        # notice that we now sample [0, 4], twice the training domain!
-        new_domain = CartesianDomain({'x' : [0, 4]})
-        x = new_domain.sample(1000, mode='grid')
-        fig, axes = plt.subplots(1, 3, figsize=(15, 5))
-        # Plot 1
-        axes[0].plot(x, problem.truth_solution(x), label=r'$u(x)$', color='blue')
-        axes[0].set_title(r'True solution $u(x)$')
-        axes[0].legend(loc="upper right")
-        # Plot 2
-        axes[1].plot(x, pinn(x), label=r'$u_{\theta}(x)$', color='green')
-        axes[1].set_title(r'PINN solution $u_{\theta}(x)$')
-        axes[1].legend(loc="upper right")
-        # Plot 3
-        diff = torch.abs(problem.truth_solution(x) - pinn(x))
-        axes[2].plot(x, diff, label=r'$|u(x) - u_{\theta}(x)|$', color='red')
-        axes[2].set_title(r'Absolute difference $|u(x) - u_{\theta}(x)|$')
-        axes[2].legend(loc="upper right")
-        # Adjust layout
-        plt.tight_layout()
-        # Show the plots
-        plt.show()
-
-
-
-.. image:: tutorial_files/tutorial_13_0.png
-
-
-It is pretty clear that the network output is periodic, and the error
-follows a periodic pattern as well. Obviously, a longer training or a more
-expressive neural network could further improve the results!
-
-What’s next?
-------------
-
-Nice, you have completed the one-dimensional Helmholtz tutorial of
-**PINA**! There are multiple directions you can go now:
-
-1. Train the network for longer or with different layer sizes and assess
-   the final accuracy
-
-2. Apply the ``PeriodicBoundaryEmbedding`` layer to a time-dependent
-   problem (see the reference in the documentation)
-
-3. Exploit extra-feature training
-
-4. Many more…
diff --git a/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_11_0.png b/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_11_0.png
deleted file mode 100644
index baf10c7..0000000
Binary files a/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_11_0.png and /dev/null differ
diff --git a/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_13_0.png b/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_13_0.png
deleted file mode 100644
index 4178e82..0000000
Binary files a/docs/source/_rst/tutorials/tutorial9/tutorial_files/tutorial_13_0.png and /dev/null differ