From 9ac57241e3077f74f820b5ac86c5a3570a50330f Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Sat, 14 Jun 2025 16:58:03 +0200 Subject: [PATCH] export tutorials changed in 7bf7d34 (#586) Co-authored-by: dario-coscia --- .../source/tutorials/tutorial11/tutorial.html | 78 +++---- .../source/tutorials/tutorial14/tutorial.html | 18 +- .../source/tutorials/tutorial17/tutorial.html | 46 ++-- .../source/tutorials/tutorial21/tutorial.html | 22 +- docs/source/tutorials/tutorial9/tutorial.html | 20 +- tutorials/tutorial11/tutorial.py | 68 +++--- tutorials/tutorial14/tutorial.py | 88 +++---- tutorials/tutorial17/tutorial.py | 217 +++++++++--------- tutorials/tutorial21/tutorial.py | 3 +- tutorials/tutorial8/tutorial.py | 38 +-- tutorials/tutorial9/tutorial.py | 90 ++++---- 11 files changed, 344 insertions(+), 344 deletions(-) diff --git a/docs/source/tutorials/tutorial11/tutorial.html b/docs/source/tutorials/tutorial11/tutorial.html index aa46e05..912f398 100644 --- a/docs/source/tutorials/tutorial11/tutorial.html +++ b/docs/source/tutorials/tutorial11/tutorial.html @@ -7658,7 +7658,7 @@ can be initialized by simiply passing the SupervisedSolver solver
-You are using the plain ModelCheckpoint callback. Consider using LitModelCheckpoint which with seamless uploading to Model registry.
+Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
 
@@ -7728,7 +7728,7 @@ can be initialized by simiply passing the SupervisedSolver solver
-You are using the plain ModelCheckpoint callback. Consider using LitModelCheckpoint which with seamless uploading to Model registry.
+Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
 
@@ -7822,7 +7822,7 @@ can be initialized by simiply passing the SupervisedSolver solver
-You are using the plain ModelCheckpoint callback. Consider using LitModelCheckpoint which with seamless uploading to Model registry.
+Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
 
@@ -7849,12 +7849,12 @@ can be initialized by simiply passing the SupervisedSolver solver
- @@ -7868,7 +7868,7 @@ var element = document.getElementById('c16a712e-f688-43ca-a903-6cee010328fe'); @@ -7895,12 +7895,12 @@ var element = document.getElementById('c16a712e-f688-43ca-a903-6cee010328fe');
- @@ -7914,7 +7914,7 @@ var element = document.getElementById('ceb00614-5d6d-460a-b4af-18d66a0f25ee'); @@ -7941,12 +7941,12 @@ var element = document.getElementById('ceb00614-5d6d-460a-b4af-18d66a0f25ee');
- @@ -7978,7 +7978,7 @@ var element = document.getElementById('43056923-3b48-4c3f-b445-bca760636f1c'); @@ -8084,7 +8084,7 @@ var element = document.getElementById('43056923-3b48-4c3f-b445-bca760636f1c'); @@ -8111,12 +8111,12 @@ var element = document.getElementById('43056923-3b48-4c3f-b445-bca760636f1c');
- @@ -8161,9 +8161,9 @@ var element = document.getElementById('36bf2717-e046-4cd9-9199-14267a168ee7');
@@ -8219,7 +8219,7 @@ var element = document.getElementById('36bf2717-e046-4cd9-9199-14267a168ee7');
@@ -8329,7 +8329,7 @@ We use the @@ -8356,12 +8356,12 @@ We use the - @@ -8375,7 +8375,7 @@ var element = document.getElementById('44f01de4-1367-4dec-9d8d-1a9a91935f0b'); @@ -8441,7 +8441,7 @@ var element = document.getElementById('44f01de4-1367-4dec-9d8d-1a9a91935f0b'); @@ -8468,12 +8468,12 @@ var element = document.getElementById('44f01de4-1367-4dec-9d8d-1a9a91935f0b');
- @@ -8614,7 +8614,7 @@ var element = document.getElementById('87bf78da-0c7b-4cb3-98fd-9c41cafa10d8'); @@ -8647,6 +8647,6 @@ var element = document.getElementById('87bf78da-0c7b-4cb3-98fd-9c41cafa10d8'); diff --git a/docs/source/tutorials/tutorial14/tutorial.html b/docs/source/tutorials/tutorial14/tutorial.html index 5d54218..760a93f 100644 --- a/docs/source/tutorials/tutorial14/tutorial.html +++ b/docs/source/tutorials/tutorial14/tutorial.html @@ -7599,7 +7599,7 @@ a.anchor-link {

Deep Ensemble

Deep Ensemble methods improve model performance by leveraging the diversity of predictions generated by multiple neural networks trained on the same problem. Each network in the ensemble is trained independently—typically with different weight initializations or even slight variations in the architecture or data sampling. By combining their outputs (e.g., via averaging or majority voting), ensembles reduce overfitting, increase robustness, and improve generalization.

This approach allows the ensemble to capture different perspectives of the problem, leading to more accurate and reliable predictions.

-PINA Workflow
+Deep ensemble

The image above illustrates a Deep Ensemble setup, where multiple models attempt to predict the text from an image. While individual models may make errors (e.g., predicting "PONY" instead of "PINA"), combining their outputs—such as taking the majority vote—often leads to the correct result. This ensemble effect improves reliability by mitigating the impact of individual model biases.
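To make the combination step concrete, here is a minimal, library-agnostic PyTorch sketch of the idea (the member architecture, sizes, and seed are illustrative assumptions, not taken from the tutorial): several independently initialized networks are evaluated on the same inputs, their outputs are averaged, and the spread across members gives a rough uncertainty estimate. For classification, the same pattern applies with a majority vote over predicted labels.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Illustrative ensemble members: same architecture, independent random initializations.
def make_member():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

ensemble = [make_member() for _ in range(3)]

x = torch.linspace(-1.0, 1.0, 5).reshape(-1, 1)
with torch.no_grad():
    # Shape (n_members, n_points, 1): one prediction per member.
    predictions = torch.stack([member(x) for member in ensemble])

mean_prediction = predictions.mean(dim=0)  # combined (averaged) output
spread = predictions.std(dim=0)            # disagreement across members
print(mean_prediction.squeeze(-1), spread.squeeze(-1))
```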

Deep Ensemble Physics-Informed Networks

In the context of Physics-Informed Neural Networks (PINNs), Deep Ensembles help the network discover different branches or multiple solutions of a PDE that exhibits bifurcating behavior.
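For the Bratu problem used later in this tutorial (with λ = 1), the bifurcation is already visible in the analytical condition cosh(α) − 2√2 α = 0, which admits two roots. The quick check below is a sketch that assumes SciPy is available (it is not one of the tutorial's imports); it recovers the two values of α quoted later and the two distinct values of u(0.5) that the ensemble is expected to find.

```python
import numpy as np
from scipy.optimize import brentq

# Roots of cosh(alpha) - 2*sqrt(2)*alpha = 0, the lambda = 1 case of the Bratu problem.
f = lambda alpha: np.cosh(alpha) - 2.0 * np.sqrt(2.0) * alpha

alpha_1 = brentq(f, 0.1, 1.0)  # ~0.37929
alpha_2 = brentq(f, 2.0, 4.0)  # ~2.73468

# Analytical solution u(t) = 2*log(cosh(alpha) / cosh(alpha*(1 - 2t))), evaluated at t = 0.5.
u_mid = lambda alpha: 2.0 * np.log(np.cosh(alpha) / np.cosh(alpha * 0.0))
print(alpha_1, alpha_2)                # two admissible alphas -> two solution branches
print(u_mid(alpha_1), u_mid(alpha_2))  # clearly different u(0.5) on each branch
```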

@@ -7760,7 +7760,7 @@ $$

@@ -7833,7 +7833,7 @@ $$

@@ -7860,12 +7860,12 @@ $$

- @@ -7916,7 +7916,7 @@ var element = document.getElementById('05dd19c2-9f33-4eae-b4f3-ce462c88181a');
@@ -7969,7 +7969,7 @@ var element = document.getElementById('05dd19c2-9f33-4eae-b4f3-ce462c88181a');
@@ -7998,6 +7998,6 @@ var element = document.getElementById('05dd19c2-9f33-4eae-b4f3-ce462c88181a'); diff --git a/docs/source/tutorials/tutorial17/tutorial.html b/docs/source/tutorials/tutorial17/tutorial.html index 1da0242..5150885 100644 --- a/docs/source/tutorials/tutorial17/tutorial.html +++ b/docs/source/tutorials/tutorial17/tutorial.html @@ -7546,7 +7546,7 @@ a.anchor-link {

Tutorial: Introductory Tutorial: A Beginner’s Guide to PINA

Open In Colab

-PINA Logo
+PINA logo

Welcome to PINA!

PINA [1] is an open-source Python library designed for Scientific Machine Learning (SciML) tasks, particularly involving:

@@ -7570,7 +7570,7 @@ a.anchor-link { @@ -7893,7 +7893,7 @@ $$

@@ -7938,12 +7938,12 @@ $$

- @@ -8013,7 +8013,7 @@ var element = document.getElementById('25886227-8301-4c42-875e-1150a47a9b3d');
@@ -8137,7 +8137,7 @@ $$

@@ -8355,7 +8355,7 @@ But you can easily sample by running .discretise_domain:
@@ -8382,12 +8382,12 @@ But you can easily sample by running .discretise_domain:
- @@ -8457,7 +8457,7 @@ var element = document.getElementById('f2e65935-a566-4be6-a06d-ef6182afa5cf');
@@ -8494,6 +8494,6 @@ var element = document.getElementById('f2e65935-a566-4be6-a06d-ef6182afa5cf'); diff --git a/docs/source/tutorials/tutorial21/tutorial.html b/docs/source/tutorials/tutorial21/tutorial.html index da4fe65..4f03456 100644 --- a/docs/source/tutorials/tutorial21/tutorial.html +++ b/docs/source/tutorials/tutorial21/tutorial.html @@ -7681,7 +7681,7 @@ $$

@@ -7695,7 +7695,7 @@ $$

Solving the Neural Operator Problem

At their core, Neural Operators transform an input function $a$ into an output function $u$. The general structure of a Neural Operator consists of three key components:

-Neural Operators +Neural Operators

  1. Encoder: The encoder maps the input into a specific embedding space.

    @@ -7894,12 +7894,12 @@ The Graph Neural Operator leverages Graph Neural Networ
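The encoder, processor, and decoder described above can be sketched as a plain PyTorch module. The snippet below is a deliberately simplified stand-in (pointwise linear layers where a real Neural Operator such as FNO or GNO would use integral or spectral kernels); it only shows where each of the three components sits in the forward pass.

```python
import torch
from torch import nn

class ToyNeuralOperator(nn.Module):
    """Simplified encoder -> processor -> decoder skeleton (illustrative only)."""

    def __init__(self, in_channels=1, hidden=32, out_channels=1, n_layers=3):
        super().__init__()
        # 1. Encoder: lift samples of the input function into an embedding space.
        self.encoder = nn.Linear(in_channels, hidden)
        # 2. Processor: stacked layers; real Neural Operators perform function
        #    convolutions / kernel integrations here instead of plain Linear maps.
        self.processor = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers))
        # 3. Decoder: project the embedding back to the output function space.
        self.decoder = nn.Linear(hidden, out_channels)

    def forward(self, a):
        # a has shape (batch, n_points, in_channels): pointwise samples of the input function.
        v = self.encoder(a)
        for layer in self.processor:
            v = torch.tanh(layer(v))
        return self.decoder(v)

u = ToyNeuralOperator()(torch.rand(4, 100, 1))
print(u.shape)  # torch.Size([4, 100, 1]): samples of the output function
```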
@@ -7979,7 +7979,7 @@ var element = document.getElementById('85b1c83a-f254-4d97-ad0b-775b2eef70fb'); @@ -8009,6 +8009,6 @@ var element = document.getElementById('85b1c83a-f254-4d97-ad0b-775b2eef70fb'); diff --git a/docs/source/tutorials/tutorial9/tutorial.html b/docs/source/tutorials/tutorial9/tutorial.html index 1803aaa..22c61fe 100644 --- a/docs/source/tutorials/tutorial9/tutorial.html +++ b/docs/source/tutorials/tutorial9/tutorial.html @@ -7544,7 +7544,7 @@ a.anchor-link { @@ -7943,7 +7943,7 @@ var element = document.getElementById('eb0f4463-8d27-4ae5-b729-96947b42329f'); @@ -7975,6 +7975,6 @@ var element = document.getElementById('eb0f4463-8d27-4ae5-b729-96947b42329f'); diff --git a/tutorials/tutorial11/tutorial.py b/tutorials/tutorial11/tutorial.py index 597f93c..2ea6e1e 100644 --- a/tutorials/tutorial11/tutorial.py +++ b/tutorials/tutorial11/tutorial.py @@ -3,15 +3,15 @@ # # Tutorial: Introduction to `Trainer` class # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial11/tutorial.ipynb) -# -# In this tutorial, we will delve deeper into the functionality of the `Trainer` class, which serves as the cornerstone for training **PINA** [Solvers](https://mathlab.github.io/PINA/_rst/_code.html#solvers). -# +# +# In this tutorial, we will delve deeper into the functionality of the `Trainer` class, which serves as the cornerstone for training **PINA** [Solvers](https://mathlab.github.io/PINA/_rst/_code.html#solvers). +# # The `Trainer` class offers a plethora of features aimed at improving model accuracy, reducing training time and memory usage, facilitating logging visualization, and more thanks to the amazing job done by the PyTorch Lightning team! -# +# # Our leading example will revolve around solving a simple regression problem where we want to approximate the following function with a Neural Net model $\mathcal{M}_{\theta}$: # $$y = x^3$$ # by having only a set of $20$ observations $\{x_i, y_i\}_{i=1}^{20}$, with $x_i \sim\mathcal{U}[-3, 3]\;\;\forall i\in(1,\dots,20)$. -# +# # Let's start by importing useful modules! # In[ ]: @@ -70,16 +70,16 @@ trainer = Trainer(solver=solver) # ## Trainer Accelerator -# +# # When creating the `Trainer`, **by default** the most performing `accelerator` for training which is available in your system will be chosen, ranked as follows: # 1. [TPU](https://cloud.google.com/tpu/docs/intro-to-tpu) # 2. [IPU](https://www.graphcore.ai/products/ipu) # 3. [HPU](https://habana.ai/) # 4. [GPU](https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html#:~:text=What%20does%20GPU%20stand%20for,video%20editing%2C%20and%20gaming%20applications) or [MPS](https://developer.apple.com/metal/pytorch/) # 5. CPU -# +# # For setting manually the `accelerator` run: -# +# # * `accelerator = {'gpu', 'cpu', 'hpu', 'mps', 'cpu', 'ipu'}` sets the accelerator to a specific one # In[15]: @@ -91,11 +91,11 @@ trainer = Trainer(solver=solver, accelerator="cpu") # As you can see, even if a `GPU` is available on the system, it is not used since we set `accelerator='cpu'`. # ## Trainer Logging -# +# # In **PINA** you can log metrics in different ways. The simplest approach is to use the `MetricTracker` class from `pina.callbacks`, as seen in the [*Introduction to Physics Informed Neural Networks training*](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb) tutorial. 
-# +# # However, especially when we need to train multiple times to get an average of the loss across multiple runs, `lightning.pytorch.loggers` might be useful. Here we will use `TensorBoardLogger` (more on [logging](https://lightning.ai/docs/pytorch/stable/extensions/logging.html) here), but you can choose the one you prefer (or make your own one). -# +# # We will now import `TensorBoardLogger`, do three runs of training, and then visualize the results. Notice we set `enable_model_summary=False` to avoid model summary specifications (e.g. number of parameters); set it to `True` if needed. # In[17]: @@ -129,25 +129,25 @@ for _ in range(3): # We can now visualize the logs by simply running `tensorboard --logdir=training_log/` in the terminal. You should obtain a webpage similar to the one shown below if running for 1000 epochs: #

-# \"Logging +# \"Logging #

# As you can see, by default, **PINA** logs the losses which are shown in the progress bar, as well as the number of epochs. You can always insert more loggings by either defining a **callback** ([more on callbacks](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html)), or inheriting the solver and modifying the programs with different **hooks** ([more on hooks](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#hooks)). -# +# # ## Trainer Callbacks -# +# # Whenever we need to access certain steps of the training for logging, perform static modifications (i.e. not changing the `Solver`), or update `Problem` hyperparameters (static variables), we can use **Callbacks**. Notice that **Callbacks** allow you to add arbitrary self-contained programs to your training. At specific points during the flow of execution (hooks), the Callback interface allows you to design programs that encapsulate a full set of functionality. It de-couples functionality that does not need to be in **PINA** `Solver`s. -# +# # Lightning has a callback system to execute them when needed. **Callbacks** should capture NON-ESSENTIAL logic that is NOT required for your lightning module to run. -# +# # The following are best practices when using/designing callbacks: -# +# # * Callbacks should be isolated in their functionality. # * Your callback should not rely on the behavior of other callbacks in order to work properly. # * Do not manually call methods from the callback. # * Directly calling methods (e.g., on_validation_end) is strongly discouraged. # * Whenever possible, your callbacks should not depend on the order in which they are executed. -# +# # We will try now to implement a naive version of `MetricTraker` to show how callbacks work. Notice that this is a very easy application of callbacks, fortunately in **PINA** we already provide more advanced callbacks in `pina.callbacks`. # In[18]: @@ -172,7 +172,7 @@ class NaiveMetricTracker(Callback): # Let's see the results when applied to the problem. You can define **callbacks** when initializing the `Trainer` by using the `callbacks` argument, which expects a list of callbacks. -# +# # In[19]: @@ -206,8 +206,8 @@ trainer.train() trainer.callbacks[0].saved_metrics[:3] # only the first three epochs -# PyTorch Lightning also has some built-in `Callbacks` which can be used in **PINA**, [here is an extensive list](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html#built-in-callbacks). -# +# PyTorch Lightning also has some built-in `Callbacks` which can be used in **PINA**, [here is an extensive list](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html#built-in-callbacks). +# # We can, for example, try the `EarlyStopping` routine, which automatically stops the training when a specific metric converges (here the `train_loss`). In order to let the training keep going forever, set `max_epochs=-1`. # In[22]: @@ -237,17 +237,17 @@ trainer.train() # As we can see the model automatically stop when the logging metric stopped improving! # ## Trainer Tips to Boost Accuracy, Save Memory and Speed Up Training -# +# # Until now we have seen how to choose the right `accelerator`, how to log and visualize the results, and how to interface with the program in order to add specific parts of code at specific points via `callbacks`. # Now, we will focus on how to boost your training by saving memory and speeding it up, while maintaining the same or even better degree of accuracy! 
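A rough picture of what such a callback can look like is sketched below. The class and attribute names mirror the tutorial's `NaiveMetricTracker` and `saved_metrics`, but the hook body is an assumption about the spirit of the implementation rather than a copy of it.

```python
from lightning.pytorch.callbacks import Callback

class NaiveMetricTracker(Callback):
    """Store the metrics logged by the solver at the end of every training epoch."""

    def __init__(self):
        super().__init__()
        self.saved_metrics = []

    def on_train_epoch_end(self, trainer, pl_module):
        # trainer.logged_metrics holds whatever the solver logged (e.g. train_loss).
        self.saved_metrics.append(
            {key: value.item() for key, value in trainer.logged_metrics.items()}
        )
```

It is then attached exactly as described above, e.g. `Trainer(solver=solver, callbacks=[NaiveMetricTracker()])`.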
-# +# # There are several built-in methods developed in PyTorch Lightning which can be applied straightforward in **PINA**. Here we report some: -# +# # * [Stochastic Weight Averaging](https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/) to boost accuracy # * [Gradient Clipping](https://deepgram.com/ai-glossary/gradient-clipping) to reduce computational time (and improve accuracy) # * [Gradient Accumulation](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption # * [Mixed Precision Training](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption -# +# # We will just demonstrate how to use the first two and see the results compared to standard training. # We use the [`Timer`](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.Timer.html#lightning.pytorch.callbacks.Timer) callback from `pytorch_lightning.callbacks` to track the times. Let's start by training a simple model without any optimization (train for 500 epochs). @@ -312,7 +312,7 @@ print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s') # As you can see, the training time does not change at all! Notice that around epoch 350 # the scheduler is switched from the defalut one `ConstantLR` to the Stochastic Weight Average Learning Rate (`SWALR`). # This is because by default `StochasticWeightAveraging` will be activated after `int(swa_epoch_start * max_epochs)` with `swa_epoch_start=0.7` by default. Finally, the final `train_loss` is lower when `StochasticWeightAveraging` is used. -# +# # We will now do the same but clippling the gradient to be relatively small. # In[25]: @@ -341,18 +341,18 @@ print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s') # As we can see, by applying gradient clipping, we were able to achieve even lower error! -# +# # ## What's Next? -# +# # Now you know how to use the `Trainer` class efficiently in **PINA**! There are several directions you can explore next: -# +# # 1. **Explore Training on Different Devices**: Test training times on various devices (e.g., `TPU`) to compare performance. -# +# # 2. **Reduce Memory Costs**: Experiment with mixed precision training and gradient accumulation to optimize memory usage, especially when training Neural Operators. -# +# # 3. **Benchmark `Trainer` Speed**: Benchmark the training speed of the `Trainer` class for different precisions to identify potential optimizations. -# +# # 4. **...and many more!**: Consider expanding to **multi-GPU** setups or other advanced configurations for large-scale training. -# +# # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/). 
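As a compact recap of the two optimizations benchmarked above, the Lightning pieces can be combined in a single `Trainer` call. This is a sketch: it assumes the PINA `Trainer` forwards `callbacks` and `gradient_clip_val` to the underlying Lightning trainer, which is what the tutorial's timing experiments rely on.

```python
from lightning.pytorch.callbacks import StochasticWeightAveraging, Timer

trainer = Trainer(
    solver=solver,
    max_epochs=500,
    callbacks=[
        Timer(),                                   # track wall-clock training time
        StochasticWeightAveraging(swa_lrs=0.005),  # activates after a fraction of max_epochs
    ],
    gradient_clip_val=0.1,   # keep gradients small, as in the last experiment above
    enable_model_summary=False,
)
trainer.train()
print(f'Total training time {trainer.callbacks[0].time_elapsed("train"):.5f} s')
```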
-# +# diff --git a/tutorials/tutorial14/tutorial.py b/tutorials/tutorial14/tutorial.py index 7c0b94d..666d7f7 100644 --- a/tutorials/tutorial14/tutorial.py +++ b/tutorials/tutorial14/tutorial.py @@ -2,11 +2,11 @@ # coding: utf-8 # # Tutorial: Learning Bifurcating PDE Solutions with Physics-Informed Deep Ensembles -# +# # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial14/tutorial.ipynb) -# +# # This tutorial demonstrates how to use the Deep Ensemble Physics Informed Network (DeepEnsemblePINN) to learn PDEs exhibiting bifurcating behavior, as discussed in [*Learning and Discovering Multiple Solutions Using Physics-Informed Neural Networks with Random Initialization and Deep Ensemble*](https://arxiv.org/abs/2503.06320). -# +# # Let’s begin by importing the necessary libraries. # In[ ]: @@ -41,62 +41,62 @@ warnings.filterwarnings("ignore") # ## Deep Ensemble -# +# # Deep Ensemble methods improve model performance by leveraging the diversity of predictions generated by multiple neural networks trained on the same problem. Each network in the ensemble is trained independently—typically with different weight initializations or even slight variations in the architecture or data sampling. By combining their outputs (e.g., via averaging or majority voting), ensembles reduce overfitting, increase robustness, and improve generalization. -# +# # This approach allows the ensemble to capture different perspectives of the problem, leading to more accurate and reliable predictions. -# +# #

-# PINA Workflow
+# Deep ensemble
#

-# +# # The image above illustrates a Deep Ensemble setup, where multiple models attempt to predict the text from an image. While individual models may make errors (e.g., predicting "PONY" instead of "PINA"), combining their outputs—such as taking the majority vote—often leads to the correct result. This ensemble effect improves reliability by mitigating the impact of individual model biases. -# -# +# +# # ## Deep Ensemble Physics-Informed Networks -# +# # In the context of Physics-Informed Neural Networks (PINNs), Deep Ensembles help the network discover different branches or multiple solutions of a PDE that exhibits bifurcating behavior. -# +# # By training a diverse set of models with different initializations, Deep Ensemble methods overcome the limitations of single-initialization models, which may converge to only one of the possible solutions. This approach is particularly useful when the solution space of the problem contains multiple valid physical states or behaviors. -# -# +# +# # ## The Bratu Problem -# +# # In this tutorial, we'll train a `DeepEnsemblePINN` solver to solve a bifurcating ODE known as the **Bratu problem**. The ODE is given by: -# +# # $$ # \frac{d^2u}{dt^2} + \lambda e^u = 0, \quad t \in (0, 1) # $$ -# +# # with boundary conditions: -# +# # $$ # u(0) = u(1) = 0, # $$ -# +# # where $\lambda > 0$ is a scalar parameter. The analytical solutions to the 1D Bratu problem can be expressed as: -# +# # $$ # u(t, \alpha) = 2 \log\left(\frac{\cosh(\alpha)}{\cosh(\alpha(1 - 2t))}\right), # $$ -# +# # where $\alpha$ satisfies: -# +# # $$ # \cosh(\alpha) - 2\sqrt{2}\alpha = 0. # $$ -# +# # When $\lambda < 3.513830719$, the equation admits two solutions $\alpha_1$ and $\alpha_2$, which correspond to two distinct solutions of the original ODE: $u_1$ and $u_2$. -# +# # In this tutorial, we set $\lambda = 1$, which leads to: -# +# # - $\alpha_1 \approx 0.37929$ # - $\alpha_2 \approx 2.73468$ -# +# # We first write the problem class, we do not write the boundary conditions as we will hard impose them. -# +# # > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem — have a look if you're interested!** -# +# # > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial3/tutorial.html) to teach how to impose hard constraints — have a look if you're interested!** # In[80]: @@ -135,11 +135,11 @@ problem.discretise_domain(n=101, mode="grid", domains="interior") # ## Defining the Deep Ensemble Models -# +# # Now that the problem setup is complete, we move on to creating an **ensemble of models**. Each ensemble member will be a standard `FeedForward` neural network, wrapped inside a custom `Model` class. -# +# # Each model's weights are initialized using a **normal distribution** with mean 0 and standard deviation 2. This random initialization is crucial to promote diversity across the ensemble members, allowing the models to converge to potentially different solutions of the PDE. -# +# # The final ensemble is simply a **list of PyTorch models**, which we will later pass to the `DeepEnsemblePINN` # In[81]: @@ -179,15 +179,15 @@ with torch.no_grad(): # As you can see we get different output since the neural networks are initialized differently. -# +# # ## Training with `DeepEnsemblePINN` -# +# # Now that everything is ready, we can train the models using the `DeepEnsemblePINN` solver! 🎯 -# +# # This solver is constructed by combining multiple neural network models that all aim to solve the same PDE. 
Each model $\mathcal{M}_{i \in \{1, \dots, 10\}}$ in the ensemble contributes a unique perspective due to different random initializations. -# +# # This diversity allows the ensemble to **capture multiple branches or bifurcating solutions** of the problem, making it especially powerful for PDEs like the Bratu problem. -# +# # Once the `DeepEnsemblePINN` solver is defined with all the models, we train them using the `Trainer` class, as with any other solver in **PINA**. We also build a callback to store the value of `u(0.5)` during training iterations. # In[83]: @@ -243,11 +243,11 @@ with torch.no_grad(): # As you can see, different networks in the ensemble converge to different values pf $u(0.5)$ — this means we can actually **spot the bifurcation** in the solution space! -# +# # This is a powerful demonstration of how **Deep Ensemble Physics-Informed Neural Networks** are capable of learning **multiple valid solutions** of a PDE that exhibits bifurcating behavior. -# +# # We can also visualize the ensemble predictions to better observe the multiple branches: -# +# # In[88]: @@ -270,13 +270,13 @@ with torch.no_grad(): # ## What's Next? -# +# # You have completed the tutorial on deep ensemble PINNs for bifurcating PDEs, well don! There are many potential next steps you can explore: -# +# # 1. **Train the network longer or with different hyperparameters**: Experiment with different configurations of the single model, you can compose an ensemble by also stacking models with different layers, activation, ... to improve accuracy. -# +# # 2. **Solve more complex problems**: The original paper provides very complex problems that can be solved with PINA, we suggest you to try implement and solve them! -# +# # 3. **...and many more!**: There are countless directions to further explore, for example, what does it happen when you vary the network initialization hyperparameters? -# +# # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/). diff --git a/tutorials/tutorial17/tutorial.py b/tutorials/tutorial17/tutorial.py index 629023d..7ad946c 100644 --- a/tutorials/tutorial17/tutorial.py +++ b/tutorials/tutorial17/tutorial.py @@ -2,79 +2,80 @@ # coding: utf-8 # # Tutorial: Introductory Tutorial: A Beginner’s Guide to PINA -# +# # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial17/tutorial.ipynb) -# +# #

-# PINA Logo
+# PINA logo
#

-# +# +# # Welcome to **PINA**! -# +# # PINA [1] is an open-source Python library designed for **Scientific Machine Learning (SciML)** tasks, particularly involving: -# +# # - **Physics-Informed Neural Networks (PINNs)** # - **Neural Operators (NOs)** # - **Reduced Order Models (ROMs)** # - **Graph Neural Networks (GNNs)** # - ... -# +# # Built on **PyTorch**, **PyTorch Lightning**, and **PyTorch Geometric**, it provides a **user-friendly, intuitive interface** for formulating and solving differential problems using neural networks. -# +# # This tutorial offers a **step-by-step guide** to using PINA—starting from basic to advanced techniques—enabling users to tackle a broad spectrum of differential problems with minimal code. -# -# -# +# +# +# -# ## The PINA Workflow -# +# ## The PINA Workflow +# #

-# PINA Workflow +# PINA Workflow #

-# +# # Solving a differential problem in **PINA** involves four main steps: -# +# # 1. ***Problem & Data*** -# Define the mathematical problem and its physical constraints using PINA’s base classes: +# Define the mathematical problem and its physical constraints using PINA’s base classes: # - `AbstractProblem` # - `SpatialProblem` -# - `InverseProblem` +# - `InverseProblem` # - ... -# +# # Then prepare inputs by discretizing the domain or importing numerical data. PINA provides essential tools like the `Conditions` class and the `pina.domain` module to facilitate domain sampling and ensure that the input data aligns with the problem's requirements. -# +# # > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem from scratch — have a look if you're interested!** -# -# 2. ***Model Design*** +# +# 2. ***Model Design*** # Build neural network models as **PyTorch modules**. For graph-structured data, use **PyTorch Geometric** to build Graph Neural Networks. You can also import models from `pina.model` module! -# -# 3. ***Solver Selection*** +# +# 3. ***Solver Selection*** # Choose and configure a solver to optimize your model. Options include: # - **Supervised solvers**: `SupervisedSolver`, `ReducedOrderModelSolver` # - **Physics-informed solvers**: `PINN` and (many) variants -# - **Generative solvers**: `GAROM` +# - **Generative solvers**: `GAROM` # Solvers can be used out-of-the-box, extended, or fully customized. -# -# 4. ***Training*** +# +# 4. ***Training*** # Train your model using the `Trainer` class (built on **PyTorch Lightning**), which enables scalable and efficient training with advanced features. -# -# +# +# # By following these steps, PINA simplifies applying deep learning to scientific computing and differential problems. -# -# +# +# # ## A Simple Regression Problem in PINA # We'll start with a simple regression problem [2] of approximating the following function with a Neural Net model $\mathcal{M}_{\theta}$: -# $$y = x^3 + \epsilon, \quad \epsilon \sim \mathcal{N}(0, 9)$$ -# using only 20 samples: -# +# $$y = x^3 + \epsilon, \quad \epsilon \sim \mathcal{N}(0, 9)$$ +# using only 20 samples: +# # $$x_i \sim \mathcal{U}[-3, 3], \; \forall i \in \{1, \dots, 20\}$$ -# +# # Using PINA, we will: -# +# # - Generate a synthetic dataset. # - Implement a **Bayesian regressor**. # - Use **Monte Carlo (MC) Dropout** for **Bayesian inference** and **uncertainty estimation**. -# +# # This example highlights how PINA can be used for classic regression tasks with probabilistic modeling capabilities. Let's first import useful modules! # In[ ]: @@ -102,14 +103,14 @@ from pina.geometry import CartesianDomain # #### ***Problem & Data*** -# +# # We'll start by defining a `BayesianProblem` inheriting from `AbstractProblem` to handle input/output data. This is suitable when data is available. For other cases like PDEs without data, use: -# +# # - `SpatialProblem` – for spatial variables # - `TimeDependentProblem` – for temporal variables # - `ParametricProblem` – for parametric inputs # - `InverseProblem` – for parameter estimation from observations -# +# # but we will see this more in depth in a while! # In[21]: @@ -139,14 +140,14 @@ problem = BayesianProblem() # We highlight two very important features of PINA -# -# 1. **`LabelTensor` Structure** -# - Alongside the standard `torch.Tensor`, PINA introduces the `LabelTensor` structure, which allows **string-based indexing**. 
-# - Ideal for managing and stacking tensors with different labels (e.g., `"x"`, `"t"`, `"u"`) for improved clarity and organization. +# +# 1. **`LabelTensor` Structure** +# - Alongside the standard `torch.Tensor`, PINA introduces the `LabelTensor` structure, which allows **string-based indexing**. +# - Ideal for managing and stacking tensors with different labels (e.g., `"x"`, `"t"`, `"u"`) for improved clarity and organization. # - You can still use standard PyTorch tensors if needed. -# -# 2. **`Condition` Object** -# - The `Condition` object enforces the **constraints** that the model $\mathcal{M}_{\theta}$ must satisfy, such as boundary or initial conditions. +# +# 2. **`Condition` Object** +# - The `Condition` object enforces the **constraints** that the model $\mathcal{M}_{\theta}$ must satisfy, such as boundary or initial conditions. # - It ensures that the model adheres to the specific requirements of the problem, making constraint handling more intuitive and streamlined. # In[63]: @@ -167,14 +168,14 @@ print(f"Similarly to: \n {label_tensor[:, 0]=}") # #### ***Model Design*** -# -# We will now solve the problem using a **simple PyTorch Neural Network** with **Dropout**, which we will implement from scratch following [2]. +# +# We will now solve the problem using a **simple PyTorch Neural Network** with **Dropout**, which we will implement from scratch following [2]. # It's important to note that PINA provides a wide range of **state-of-the-art (SOTA)** architectures in the `pina.model` module, which you can explore further [here](https://mathlab.github.io/PINA/_rst/_code.html#models). -# +# # #### ***Solver Selection*** -# -# For this task, we will use a straightforward **supervised learning** approach by importing the `SupervisedSolver` from `pina.solvers`. The solver is responsible for defining the training strategy. -# +# +# For this task, we will use a straightforward **supervised learning** approach by importing the `SupervisedSolver` from `pina.solvers`. The solver is responsible for defining the training strategy. +# # The `SupervisedSolver` is designed to handle typical regression tasks effectively by minimizing the following loss function: # $$ # \mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N @@ -184,14 +185,14 @@ print(f"Similarly to: \n {label_tensor[:, 0]=}") # $$ # \mathcal{L}(v) = \| v \|^2_2. # $$ -# +# # #### **Training** -# +# # Next, we will use the `Trainer` class to train the model. The `Trainer` class, based on **PyTorch Lightning**, offers many features that help: # - **Improve model accuracy** # - **Reduce training time and memory usage** -# - **Facilitate logging and visualization** -# +# - **Facilitate logging and visualization** +# # The great work done by the PyTorch Lightning team ensures a streamlined training process. # In[64]: @@ -230,15 +231,15 @@ trainer.train() # #### ***Model Training Complete! Now Visualize the Solutions*** -# +# # The model has been trained! Since we used **Dropout** during training, the model is probabilistic (Bayesian) [3]. This means that each time we evaluate the forward pass on the input points $x_i$, the results will differ due to the stochastic nature of Dropout. -# +# # To visualize the model's predictions and uncertainty, we will: -# +# # 1. **Evaluate the Forward Pass**: Perform multiple forward passes to get different predictions for each input $x_i$. # 2. **Compute the Mean**: Calculate the average prediction $\mu_\theta$ across all forward passes. # 3. 
**Compute the Standard Deviation**: Calculate the variability of the predictions $\sigma_\theta$, which indicates the model's uncertainty. -# +# # This allows us to understand not only the predicted values but also the confidence in those predictions. # In[65]: @@ -266,32 +267,32 @@ plt.show() # ## PINA for Physics-Informed Machine Learning -# +# # In the previous section, we used PINA for **supervised learning**. However, one of its main strengths lies in **Physics-Informed Machine Learning (PIML)**, specifically through **Physics-Informed Neural Networks (PINNs)**. -# +# # ### What Are PINNs? -# +# # PINNs are deep learning models that integrate the laws of physics directly into the training process. By incorporating **differential equations** and **boundary conditions** into the loss function, PINNs allow the modeling of complex physical systems while ensuring the predictions remain consistent with scientific laws. -# +# # ### Solving a 2D Poisson Problem -# +# # In this section, we will solve a **2D Poisson problem** with **Dirichlet boundary conditions** on an **hourglass-shaped domain** using a simple PINN [4]. You can explore other PINN variants, e.g. [5] or [6] in PINA by visiting the [PINA solvers documentation](https://mathlab.github.io/PINA/_rst/_code.html#solvers). We aim to solve the following 2D Poisson problem: -# +# # $$ # \begin{cases} # \Delta u(x, y) = \sin{(\pi x)} \sin{(\pi y)} & \text{in } D, \\ -# u(x, y) = 0 & \text{on } \partial D +# u(x, y) = 0 & \text{on } \partial D # \end{cases} # $$ -# +# # where $D$ is an **hourglass-shaped domain** defined as the difference between a **Cartesian domain** and two intersecting **ellipsoids**, and $\partial D$ is the boundary of the domain. -# +# # ### Building Complex Domains -# +# # PINA allows you to build complex geometries easily. It provides many built-in domain shapes and Boolean operators for combining them. For this problem, we will define the hourglass-shaped domain using the existing `CartesianDomain` and `EllipsoidDomain` classes, with Boolean operators like `Difference` and `Union`. -# +# # > **👉 If you are interested in exploring the `domain` module in more detail, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorials/tutorial6/tutorial.html).** -# +# # In[66]: @@ -332,7 +333,7 @@ border_samples = border.sample(n=1000, mode="random") # #### Plotting the domain -# +# # Nice! Now that we have built the domain, let's try to plot it # In[67]: @@ -359,11 +360,11 @@ plt.show() # #### Writing the Poisson Problem Class -# -# Very good! Now we will implement the problem class for the 2D Poisson problem. Unlike the previous examples, where we inherited from `AbstractProblem`, for this problem, we will inherit from the `SpatialProblem` class. -# +# +# Very good! Now we will implement the problem class for the 2D Poisson problem. Unlike the previous examples, where we inherited from `AbstractProblem`, for this problem, we will inherit from the `SpatialProblem` class. +# # The reason for this is that the Poisson problem involves **spatial variables** as input, so we use `SpatialProblem` to handle such cases. -# +# # This will allow us to define the problem with spatial dependencies and set up the neural network model accordingly. # In[69]: @@ -401,12 +402,12 @@ poisson_problem = Poisson() # As you can see, writing the problem class for a differential equation in PINA is straightforward! 
The main differences are: -# +# # - We inherit from **`SpatialProblem`** instead of `AbstractProblem` to account for spatial variables. # - We use **`domain`** and **`equation`** inside the `Condition` to define the problem. -# +# # The `Equation` class can be very useful for creating modular problem classes. If you're interested, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorial12/tutorial.html) for more details. There's also a dedicated [tutorial](https://mathlab.github.io/PINA/_rst/tutorial16/tutorial.html) for building custom problems! -# +# # Once the problem class is set, we need to **sample the domain** to obtain the data. PINA will automatically handle this, and if you forget to sample, an error will be raised before training begins 😉. # In[70]: @@ -421,13 +422,13 @@ print(f" {poisson_problem.are_all_domains_discretised=}") # ### Building the Model -# +# # After setting the problem and sampling the domain, the next step is to **build the model** $\mathcal{M}_{\theta}$. -# +# # For this, we will use the custom PINA models available [here](https://mathlab.github.io/PINA/_rst/_code.html#models). Specifically, we will use a **feed-forward neural network** by importing the `FeedForward` class. -# -# This neural network takes the **coordinates** (in this case `['x', 'y']`) as input and outputs the unknown field of the Poisson problem. -# +# +# This neural network takes the **coordinates** (in this case `['x', 'y']`) as input and outputs the unknown field of the Poisson problem. +# # In this tutorial, the neural network is composed of 2 hidden layers, each with 120 neurons and tanh activation. # In[72]: @@ -444,30 +445,30 @@ model = FeedForward( # ### Solver Selection -# +# # The thir part of the PINA pipeline involves using a **Solver**. -# +# # In this tutorial, we will use the **classical PINN** solver. However, many other variants are also available and we invite to try them! -# +# # #### Loss Function in PINA -# +# # The loss function in the **classical PINN** is defined as follows: -# +# # $$\theta_{\rm{best}}=\min_{\theta}\mathcal{L}_{\rm{problem}}(\theta), \quad \mathcal{L}_{\rm{problem}}(\theta)= \frac{1}{N_{D}}\sum_{i=1}^N # \mathcal{L}(\Delta\mathcal{M}_{\theta}(\mathbf{x}_i, \mathbf{y}_i) - \sin(\pi x_i)\sin(\pi y_i)) + # \frac{1}{N}\sum_{i=1}^N # \mathcal{L}(\mathcal{M}_{\theta}(\mathbf{x}_i, \mathbf{y}_i))$$ -# +# # This loss consists of: # 1. The **differential equation residual**: Ensures the model satisfies the Poisson equation. # 2. The **boundary condition**: Ensures the model satisfies the Dirichlet boundary condition. -# +# # ### Training -# +# # For the last part of the pipeline we need a `Trainer`. We will train the model for **1000 epochs** using the default optimizer parameters. These parameters can be adjusted as needed. For more details, check the solvers documentation [here](https://mathlab.github.io/PINA/_rst/_code.html#solvers). -# +# # To track metrics during training, we use the **`MetricTracker`** class. -# +# # > **👉 Want to know more about `Trainer` and how to boost PINA performance, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorials/tutorial11/tutorial.html).** # In[73]: @@ -526,28 +527,28 @@ with torch.no_grad(): # ## What's Next? -# +# # Congratulations on completing the introductory tutorial of **PINA**! Now that you have a solid foundation, here are a few directions you can explore: -# +# # 1. 
**Explore Advanced Solvers**: Dive into more advanced solvers like **SAPINN** or **RBAPINN** and experiment with different variations of Physics-Informed Neural Networks. # 2. **Apply PINA to New Problems**: Try solving other types of differential equations or explore inverse problems and parametric problems using the PINA framework. # 3. **Optimize Model Performance**: Use the `Trainer` class to enhance model performance by exploring features like dynamic learning rates, early stopping, and model checkpoints. -# +# # 4. **...and many more!** — There are countless directions to further explore, from testing on different problems to refining the model architecture! -# +# # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/). -# -# +# +# # ### References -# +# # [1] *Coscia, Dario, et al. "Physics-informed neural networks for advanced modeling." Journal of Open Source Software, 2023.* -# +# # [2] *Hernández-Lobato, José Miguel, and Ryan Adams. "Probabilistic backpropagation for scalable learning of bayesian neural networks." International conference on machine learning, 2015.* -# +# # [3] *Gal, Yarin, and Zoubin Ghahramani. "Dropout as a bayesian approximation: Representing model uncertainty in deep learning." International conference on machine learning, 2016.* -# +# # [4] *Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics, 2019.* -# +# # [5] *McClenny, Levi D., and Ulisses M. Braga-Neto. "Self-adaptive physics-informed neural networks." Journal of Computational Physics, 2023.* -# +# # [6] *Anagnostopoulos, Sokratis J., et al. "Residual-based attention in physics-informed neural networks." Computer Methods in Applied Mechanics and Engineering, 2024.* diff --git a/tutorials/tutorial21/tutorial.py b/tutorials/tutorial21/tutorial.py index 3da40ff..713150b 100644 --- a/tutorials/tutorial21/tutorial.py +++ b/tutorials/tutorial21/tutorial.py @@ -127,10 +127,9 @@ plt.grid(True) # At their core, **Neural Operators** transform an input function $a$ into an output function $u$. The general structure of a Neural Operator consists of three key components: # #

-# Neural Operators +# Neural Operators #

# -# # 1. **Encoder**: The encoder maps the input into a specific embedding space. # # 2. **Processor**: The processor consists of multiple layers performing **function convolutions**, which is the core computational unit in a Neural Operator. diff --git a/tutorials/tutorial8/tutorial.py b/tutorials/tutorial8/tutorial.py index b873226..a59d41a 100644 --- a/tutorials/tutorial8/tutorial.py +++ b/tutorials/tutorial8/tutorial.py @@ -2,13 +2,13 @@ # coding: utf-8 # # Tutorial: Reduced Order Modeling with POD-RBF and POD-NN Approaches for Fluid Dynamics -# +# # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial8/tutorial.ipynb) # The goal of this tutorial is to demonstrate how to use the **PINA** library to apply a reduced-order modeling technique, as outlined in [1]. These methods share several similarities with machine learning approaches, as they focus on predicting the solution to differential equations, often parametric PDEs, in real-time. -# +# # In particular, we will utilize **Proper Orthogonal Decomposition** (POD) in combination with two different regression techniques: **Radial Basis Function Interpolation** (POD-RBF) and **Neural Networks**(POD-NN) [2]. This process involves reducing the dimensionality of the parametric solution manifold through POD and then approximating it in the reduced space using a regression model (either a neural network or an RBF interpolation). In this example, we'll use a simple multilayer perceptron (MLP) as the regression model, but various architectures can be easily substituted. -# +# # Let's start with the necessary imports. # In[ ]: @@ -42,9 +42,9 @@ warnings.filterwarnings("ignore") # We utilize the [Smithers](https://github.com/mathLab/Smithers) library to gather the parametric snapshots. Specifically, we use the `NavierStokesDataset` class, which contains a collection of parametric solutions to the Navier-Stokes equations in a 2D L-shaped domain. The parameter in this case is the inflow velocity. -# +# # The dataset comprises 500 snapshots of the velocity fields (along the $x$, $y$ axes, and the magnitude), as well as the pressure fields, along with their corresponding parameter values. -# +# # To visually inspect the snapshots, let's also plot the data points alongside the reference solution. This reference solution represents the expected output of our model. # In[ ]: @@ -61,7 +61,7 @@ for ax, p, u in zip(axs, dataset.params[:4], dataset.snapshots["mag(v)"][:4]): # The *snapshots*—i.e., the numerical solutions computed for several parameters—and the corresponding parameters are the only data we need to train the model, enabling us to predict the solution for any new test parameter. To properly validate the accuracy, we will split the 500 snapshots into the training dataset (90% of the original data) and the testing dataset (the remaining 10%) inside the `Trainer`. -# +# # It is now time to define the problem! # In[ ]: @@ -73,7 +73,7 @@ problem = SupervisedProblem(input_=p, output_=u) # We can then build a `POD-NN` model (using an MLP architecture as approximation) and compare it with a `POD-RBF` model (using a Radial Basis Function interpolation as approximation). -# +# # ## POD-NN reduced order model # Let's build the `PODNN` class @@ -163,7 +163,7 @@ print(f" Test: {relative_error_test.item():e}") # ## POD-RBF Reduced Order Model -# +# # Next, we define the model we want to use, incorporating the `PODBlock` and `RBFBlock` objects. 
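Before looking at the PINA implementation in the next cell, the idea behind `PODBlock` and `RBFBlock` can be written down with NumPy and SciPy alone. The sketch below uses a toy snapshot matrix and an illustrative rank; it is a library-agnostic outline of POD followed by RBF interpolation, not the PINA code itself.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Toy data: 50 snapshots of a parametric field sampled on 200 points.
params = rng.uniform(0.5, 2.0, size=(50, 1))
grid = np.linspace(0.0, 1.0, 200)
snapshots = np.sin(grid[None, :] * params)           # shape (50, 200)

# Offline phase, POD: a truncated SVD of the snapshots gives the reduced basis.
rank = 5
_, _, vt = np.linalg.svd(snapshots, full_matrices=False)
modes = vt[:rank]                                     # (rank, n_dofs)
coefficients = snapshots @ modes.T                    # reduced coordinates, (50, rank)

# Offline phase, RBF: interpolate parameter -> reduced coordinates.
rbf = RBFInterpolator(params, coefficients)

# Online phase: predict the reduced coordinates for a new parameter and lift them back.
new_param = np.array([[1.3]])
prediction = rbf(new_param) @ modes                   # approximated field, (1, 200)
print(prediction.shape)
```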
# In[ ]: @@ -210,9 +210,9 @@ print(f" Test: {relative_error_test.item():e}") # ## POD-RBF vs POD-NN -# +# # We can compare the solutions predicted by the `POD-RBF` and the `POD-NN` models with the original reference solution. By plotting these predicted solutions against the true solution, we can observe how each model performs. -# +# # ### Observations: # - **POD-RBF**: The solution predicted by the `POD-RBF` model typically offers a smooth approximation for the parametric solution, as RBF interpolation is well-suited for capturing smooth variations. # - **POD-NN**: The `POD-NN` model, while more flexible due to the neural network architecture, may show some discrepancies—especially for low velocities or in regions where the training data is sparse. However, with longer training times and adjustments in the network architecture, we can improve the predictions. @@ -274,21 +274,21 @@ plt.show() # ## What's Next? -# +# # Congratulations on completing this tutorial using **PINA** to apply reduced order modeling techniques with **POD-RBF** and **POD-NN**! There are several directions you can explore next: -# +# # 1. **Extend to More Complex Problems**: Try using more complex parametric domains or PDEs. For example, you can explore Navier-Stokes equations in 3D or more complex boundary conditions. -# +# # 2. **Combine POD with Deep Learning Techniques**: Investigate hybrid methods, such as combining **POD-NN** with convolutional layers or recurrent layers, to handle time-dependent problems or more complex spatial dependencies. -# +# # 3. **Evaluate Performance on Larger Datasets**: Work with larger datasets to assess how well these methods scale. You may want to test on datasets from simulations or real-world problems. -# +# # 4. **Hybrid Models with Physics Informed Networks (PINN)**: Integrate **POD** models with PINN frameworks to include physics-based regularization in your model and improve predictions for more complex scenarios, such as turbulent fluid flow. -# +# # 5. **...and many more!**: The potential applications of reduced order models are vast, ranging from material science simulations to real-time predictions in engineering applications. -# +# # For more information and advanced tutorials, refer to the [PINA Documentation](https://mathlab.github.io/PINA/). -# +# # ### References -# 1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order Methods and Applications in Computational Fluid Dynamics, Society for Industrial and Applied Mathematics. +# 1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order Methods and Applications in Computational Fluid Dynamics, Society for Industrial and Applied Mathematics. # 2. Hesthaven, J. S., & Ubbiali, S. (2018). Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics, 363, 55-78. 
diff --git a/tutorials/tutorial9/tutorial.py b/tutorials/tutorial9/tutorial.py index 5901166..6797708 100644 --- a/tutorials/tutorial9/tutorial.py +++ b/tutorials/tutorial9/tutorial.py @@ -1,15 +1,15 @@ #!/usr/bin/env python # coding: utf-8 -# # Tutorial: Applying Periodic Boundary Conditions in PINNs to solve the Helmotz Problem -# +# # Tutorial: Applying Periodic Boundary Conditions in PINNs to solve the Helmholtz Problem +# # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial9/tutorial.ipynb) -# -# This tutorial demonstrates how to solve a one-dimensional Helmholtz equation with periodic boundary conditions (PBC) using Physics-Informed Neural Networks (PINNs). +# +# This tutorial demonstrates how to solve a one-dimensional Helmholtz equation with periodic boundary conditions (PBC) using Physics-Informed Neural Networks (PINNs). # We will use standard PINN training, augmented with a periodic input expansion as introduced in [*An Expert’s Guide to Training Physics-Informed Neural Networks*](https://arxiv.org/abs/2308.08468). -# +# # Let's start with some useful imports: -# +# # In[1]: @@ -42,33 +42,33 @@ warnings.filterwarnings("ignore") # ## Problem Definition -# +# # The one-dimensional Helmholtz problem is mathematically expressed as: -# +# # $$ # \begin{cases} # \frac{d^2}{dx^2}u(x) - \lambda u(x) - f(x) &= 0 \quad \text{for } x \in (0, 2) \\ # u^{(m)}(x = 0) - u^{(m)}(x = 2) &= 0 \quad \text{for } m \in \{0, 1, \dots\} # \end{cases} # $$ -# -# In this case, we seek a solution that is $C^{\infty}$ (infinitely differentiable) and periodic with period 2, over the infinite domain $x \in (-\infty, \infty)$. -# +# +# In this case, we seek a solution that is $C^{\infty}$ (infinitely differentiable) and periodic with period 2, over the infinite domain $x \in (-\infty, \infty)$. +# # A classical PINN approach would require enforcing periodic boundary conditions (PBC) for all derivatives—an infinite set of constraints—which is clearly infeasible. -# +# # To address this, we adopt a strategy known as *coordinate augmentation*. In this approach, we apply a coordinate transformation $v(x)$ such that the transformed inputs naturally satisfy the periodicity condition: -# +# # $$ # u^{(m)}(x = 0) - u^{(m)}(x = 2) = 0 \quad \text{for } m \in \{0, 1, \dots\} # $$ -# +# # For demonstration purposes, we choose the specific parameters: -# +# # - $\lambda = -10\pi^2$ # - $f(x) = -6\pi^2 \sin(3\pi x) \cos(\pi x)$ -# +# # These yield an analytical solution: -# +# # $$ # u(x) = \sin(\pi x) \cos(3\pi x) # $$ @@ -111,39 +111,39 @@ problem.discretise_domain(200, "grid", domains=["phys_cond"]) # As usual, the Helmholtz problem is implemented in **PINA** as a class. The governing equations are defined as `conditions`, which must be satisfied within their respective domains. The `solution` represents the exact analytical solution, which will be used to evaluate the accuracy of the predicted solution. 
-# -# For selecting collocation points, we use Latin Hypercube Sampling (LHS), a common strategy for efficient space-filling in high-dimensional domains -# +# +# For selecting collocation points, we use Latin Hypercube Sampling (LHS), a common strategy for efficient space-filling in high-dimensional domains +# # ## Solving the Problem with a Periodic Network -# -# Any $\mathcal{C}^{\infty}$ periodic function $u : \mathbb{R} \rightarrow \mathbb{R}$ with period $L \in \mathbb{N}$ +# +# Any $\mathcal{C}^{\infty}$ periodic function $u : \mathbb{R} \rightarrow \mathbb{R}$ with period $L \in \mathbb{N}$ # can be constructed by composing an arbitrary smooth function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ with a smooth, periodic mapping$v : \mathbb{R} \rightarrow \mathbb{R}^n$ of the same period $L$. That is, -# +# # $$ # u(x) = f(v(x)). # $$ -# -# This formulation is general and can be extended to arbitrary dimensions. +# +# This formulation is general and can be extended to arbitrary dimensions. # For more details, see [*A Method for Representing Periodic Functions and Enforcing Exactly Periodic Boundary Conditions with Deep Neural Networks*](https://arxiv.org/pdf/2007.07442). -# +# # In our specific case, we define the periodic embedding as: -# +# # $$ # v(x) = \left[1, \cos\left(\frac{2\pi}{L} x\right), \sin\left(\frac{2\pi}{L} x\right)\right], # $$ -# +# # which constitutes the coordinate augmentation. The function $f(\cdot)$ is approximated by a neural network $NN_{\theta}(\cdot)$, resulting in the approximate PINN solution: -# +# # $$ # u(x) \approx u_{\theta}(x) = NN_{\theta}(v(x)). # $$ -# -# In **PINA**, this is implemented using the `PeriodicBoundaryEmbedding` layer for $v(x)$, -# paired with any `pina.model` to define the neural network $NN_{\theta}$. -# +# +# In **PINA**, this is implemented using the `PeriodicBoundaryEmbedding` layer for $v(x)$, +# paired with any `pina.model` to define the neural network $NN_{\theta}$. +# # Let’s see how this is put into practice! -# -# +# +# # In[18]: @@ -160,11 +160,11 @@ model = torch.nn.Sequential( # As simple as that! -# -# In higher dimensions, you can specify different periods for each coordinate using a dictionary. -# For example, `periods = {'x': 2, 'y': 3, ...}` indicates a periodicity of 2 in the $x$ direction, +# +# In higher dimensions, you can specify different periods for each coordinate using a dictionary. +# For example, `periods = {'x': 2, 'y': 3, ...}` indicates a periodicity of 2 in the $x$ direction, # 3 in the $y$ direction, and so on. -# +# # We will now solve the problem using the usual `PINN` and `Trainer` classes. After training, we'll examine the losses using the `MetricTracker` callback from `pina.callback`. # In[ ]: @@ -240,15 +240,15 @@ with torch.no_grad(): # It's clear that the network successfully captures the periodicity of the solution, with the error also exhibiting a periodic pattern. Naturally, training for a longer duration or using a more expressive neural network could further improve the results. # ## What's next? -# +# # Congratulations on completing the one-dimensional Helmholtz tutorial with **PINA**! Here are a few directions you can explore next: -# +# # 1. **Train longer or with different architectures**: Experiment with extended training or modify the network's depth and width to evaluate improvements in accuracy. -# +# # 2. **Apply `PeriodicBoundaryEmbedding` to time-dependent problems**: Explore more complex scenarios such as spatiotemporal PDEs (see the official documentation for examples). 
-# +# # 3. **Try extra feature training**: Integrate additional physical or domain-specific features to guide the learning process more effectively. -# +# # 4. **...and many more!**: Extend to higher dimensions, test on other PDEs, or even develop custom embeddings tailored to your problem. -# +# # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).
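To close, here is a small self-contained check of the coordinate augmentation used in the Helmholtz tutorial above: composing any network with v(x) = [1, cos(2πx/L), sin(2πx/L)] makes the output exactly periodic with period L. The embedding below is hand-written for illustration only; in PINA the `PeriodicBoundaryEmbedding` layer plays this role.

```python
import torch
from torch import nn

class HandmadePeriodicEmbedding(nn.Module):
    """v(x) = [1, cos(2*pi*x/L), sin(2*pi*x/L)] for a one-dimensional input."""

    def __init__(self, period=2.0):
        super().__init__()
        self.omega = 2.0 * torch.pi / period

    def forward(self, x):
        return torch.cat(
            [torch.ones_like(x), torch.cos(self.omega * x), torch.sin(self.omega * x)],
            dim=-1,
        )

torch.manual_seed(0)
model = nn.Sequential(
    HandmadePeriodicEmbedding(period=2.0),
    nn.Linear(3, 16),
    nn.Tanh(),
    nn.Linear(16, 1),
)

with torch.no_grad():
    # For any weights, the outputs at x = 0 and x = 2 coincide up to floating point error.
    print(model(torch.tensor([[0.0]])), model(torch.tensor([[2.0]])))
```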