export tutorials changed in 85b9edc (#634)

Co-authored-by: dario-coscia <dario-coscia@users.noreply.github.com>
github-actions[bot]
2025-09-10 12:10:39 +02:00
committed by GitHub
parent 85b9edc74d
commit f3ccfd4598
7 changed files with 349 additions and 349 deletions


@@ -2,11 +2,11 @@
# coding: utf-8
# # Tutorial: Learning Bifurcating PDE Solutions with Physics-Informed Deep Ensembles
#
#
# [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial14/tutorial.ipynb)
#
#
# This tutorial demonstrates how to use the Deep Ensemble Physics Informed Network (DeepEnsemblePINN) to learn PDEs exhibiting bifurcating behavior, as discussed in [*Learning and Discovering Multiple Solutions Using Physics-Informed Neural Networks with Random Initialization and Deep Ensemble*](https://arxiv.org/abs/2503.06320).
#
#
# Let's begin by importing the necessary libraries.
# In[ ]:
@@ -41,62 +41,62 @@ warnings.filterwarnings("ignore")
# ## Deep Ensemble
#
#
# Deep Ensemble methods improve model performance by leveraging the diversity of predictions generated by multiple neural networks trained on the same problem. Each network in the ensemble is trained independently—typically with different weight initializations or even slight variations in the architecture or data sampling. By combining their outputs (e.g., via averaging or majority voting), ensembles reduce overfitting, increase robustness, and improve generalization.
#
#
# This approach allows the ensemble to capture different perspectives of the problem, leading to more accurate and reliable predictions.
#
#
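# As a quick, self-contained illustration of the combination step (plain PyTorch; the tiny
# networks and inputs below are made up for demonstration and are not part of the original
# notebook), an ensemble prediction is simply an aggregate of the members' outputs:

import torch

torch.manual_seed(0)
# three independently initialized networks mapping R -> R
members = [
    torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
    for _ in range(3)
]
x = torch.linspace(0, 1, 5).reshape(-1, 1)
with torch.no_grad():
    outputs = torch.stack([m(x) for m in members])  # shape: (members, points, 1)
ensemble_mean = outputs.mean(dim=0)  # combined prediction (averaging)
ensemble_std = outputs.std(dim=0)    # spread across members, a rough uncertainty proxy
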
# <p align="center">
# <img src="http://raw.githubusercontent.com/mathLab/PINA/master/tutorials/static/deep_ensemble.png" alt="Deep ensemble" width="600"/>
# </p>
#
#
# The image above illustrates a Deep Ensemble setup, where multiple models attempt to predict the text from an image. While individual models may make errors (e.g., predicting "PONY" instead of "PINA"), combining their outputs—such as taking the majority vote—often leads to the correct result. This ensemble effect improves reliability by mitigating the impact of individual model biases.
#
#
#
#
# ## Deep Ensemble Physics-Informed Networks
#
#
# In the context of Physics-Informed Neural Networks (PINNs), Deep Ensembles help the network discover different branches or multiple solutions of a PDE that exhibits bifurcating behavior.
#
#
# By training a diverse set of models with different initializations, Deep Ensemble methods overcome the limitations of single-initialization models, which may converge to only one of the possible solutions. This approach is particularly useful when the solution space of the problem contains multiple valid physical states or behaviors.
#
#
#
#
# ## The Bratu Problem
#
#
# In this tutorial, we'll train a `DeepEnsemblePINN` solver to solve a bifurcating ODE known as the **Bratu problem**. The ODE is given by:
#
#
# $$
# \frac{d^2u}{dt^2} + \lambda e^u = 0, \quad t \in (0, 1)
# $$
#
#
# with boundary conditions:
#
#
# $$
# u(0) = u(1) = 0,
# $$
#
#
# where $\lambda > 0$ is a scalar parameter. The analytical solutions to the 1D Bratu problem can be expressed as:
#
#
# $$
# u(t, \alpha) = 2 \log\left(\frac{\cosh(\alpha)}{\cosh(\alpha(1 - 2t))}\right),
# $$
#
#
# where $\alpha$ satisfies:
#
#
# $$
# \cosh(\alpha) - \frac{2\sqrt{2}}{\sqrt{\lambda}}\,\alpha = 0.
# $$
#
#
# When $\lambda < 3.513830719$, the equation admits two solutions $\alpha_1$ and $\alpha_2$, which correspond to two distinct solutions of the original ODE: $u_1$ and $u_2$.
#
#
# In this tutorial, we set $\lambda = 1$, which leads to:
#
#
# - $\alpha_1 \approx 0.37929$
# - $\alpha_2 \approx 2.73468$
#
#
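# If you want to double-check these values, you can solve the transcendental equation for
# $\lambda = 1$ numerically; here is a small side computation with SciPy (not part of the
# original notebook):

import numpy as np
from scipy.optimize import brentq

# cosh(alpha) - 2*sqrt(2)*alpha, i.e. the condition above with lambda = 1
f = lambda a: np.cosh(a) - 2 * np.sqrt(2) * a
alpha_1 = brentq(f, 0.1, 1.0)  # root of the lower branch
alpha_2 = brentq(f, 2.0, 3.0)  # root of the upper branch
print(alpha_1, alpha_2)        # approximately 0.37929 and 2.73468
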
# We first write the problem class. We do not include the boundary conditions, since we will impose them as hard constraints.
#
#
# > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem — have a look if you're interested!**
#
#
# > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial3/tutorial.html) to teach how to impose hard constraints — have a look if you're interested!**
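#
# As a sketch of what the interior condition encodes, here is the Bratu residual written with
# plain `torch.autograd` instead of PINA's operators (the function name and signature below
# are ours, not the tutorial's):

import torch

def bratu_residual(model, t, lam=1.0):
    """Residual of u'' + lam * exp(u) = 0 for a network u = model(t)."""
    t = t.clone().requires_grad_(True)
    u = model(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    return d2u + lam * torch.exp(u)

# The physics loss minimized by each ensemble member is the mean squared residual
# over the collocation points.
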
# In[80]:
@@ -135,11 +135,11 @@ problem.discretise_domain(n=101, mode="grid", domains="interior")
# ## Defining the Deep Ensemble Models
#
#
# Now that the problem setup is complete, we move on to creating an **ensemble of models**. Each ensemble member will be a standard `FeedForward` neural network, wrapped inside a custom `Model` class.
#
#
# Each model's weights are initialized using a **normal distribution** with mean 0 and standard deviation 2. This random initialization is crucial to promote diversity across the ensemble members, allowing the models to converge to potentially different solutions of the PDE.
#
#
# The final ensemble is simply a **list of PyTorch models**, which we will later pass to the `DeepEnsemblePINN` solver.
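#
# Here is a rough sketch of this step using plain PyTorch in place of PINA's `FeedForward`
# (the layer sizes are illustrative, and the multiplicative factor $t(1-t)$ is one common way
# to hard-impose $u(0) = u(1) = 0$, which we assume here):

import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # small MLP standing in for PINA's FeedForward
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 20), torch.nn.Tanh(),
            torch.nn.Linear(20, 20), torch.nn.Tanh(),
            torch.nn.Linear(20, 1),
        )
        # diverse initialization: normal with mean 0 and standard deviation 2
        for p in self.net.parameters():
            torch.nn.init.normal_(p, mean=0.0, std=2.0)

    def forward(self, t):
        # hard-imposed boundary conditions: u(0) = u(1) = 0
        return t * (1 - t) * self.net(t)

ensemble_models = [Model() for _ in range(10)]
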
# In[81]:
@@ -179,15 +179,15 @@ with torch.no_grad():
# As you can see, we get different outputs, since the neural networks are initialized differently.
#
#
# ## Training with `DeepEnsemblePINN`
#
#
# Now that everything is ready, we can train the models using the `DeepEnsemblePINN` solver! 🎯
#
#
# This solver is constructed by combining multiple neural network models that all aim to solve the same PDE. Each model $\mathcal{M}_i$, $i \in \{1, \dots, 10\}$, in the ensemble contributes a unique perspective due to different random initializations.
#
#
# This diversity allows the ensemble to **capture multiple branches or bifurcating solutions** of the problem, making it especially powerful for PDEs like the Bratu problem.
#
#
# Once the `DeepEnsemblePINN` solver is defined with all the models, we train them using the `Trainer` class, as with any other solver in **PINA**. We also build a callback to store the value of `u(0.5)` during training iterations.
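#
# As a sketch of the callback idea, here is a standard Lightning-style callback that records
# each member's prediction at $t = 0.5$ at the end of every epoch (the class name and the way
# it reaches the models are our assumptions; the actual solver and `Trainer` setup is in the
# cell below):

import torch
from lightning.pytorch.callbacks import Callback

class TrackMidpoint(Callback):
    """Store each ensemble member's prediction at t = 0.5 after every epoch."""

    def __init__(self, models):
        super().__init__()
        self.models = models
        self.history = []

    def on_train_epoch_end(self, trainer, pl_module):
        t_mid = torch.tensor([[0.5]])
        with torch.no_grad():
            self.history.append([float(m(t_mid)) for m in self.models])

# the callback instance is then passed to the Trainer together with the
# DeepEnsemblePINN solver, e.g. through its callbacks argument
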
# In[83]:
@@ -243,11 +243,11 @@ with torch.no_grad():
# As you can see, different networks in the ensemble converge to different values of $u(0.5)$ — this means we can actually **spot the bifurcation** in the solution space!
#
#
# This is a powerful demonstration of how **Deep Ensemble Physics-Informed Neural Networks** are capable of learning **multiple valid solutions** of a PDE that exhibits bifurcating behavior.
#
#
# We can also visualize the ensemble predictions to better observe the multiple branches:
#
#
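# One possible way to draw this comparison with matplotlib (here `ensemble_models` is a
# placeholder name for the trained members, and the two analytical branches use the values of
# $\alpha$ given above):

import torch
import matplotlib.pyplot as plt

t = torch.linspace(0, 1, 200).reshape(-1, 1)
with torch.no_grad():
    predictions = [m(t).squeeze() for m in ensemble_models]

# ensemble members
for u in predictions:
    plt.plot(t.squeeze(), u, color="gray", alpha=0.5)

# analytical branches u(t, alpha) = 2 log(cosh(alpha) / cosh(alpha (1 - 2t)))
for a in (0.37929, 2.73468):
    exact = 2 * torch.log(torch.cosh(torch.tensor(a)) / torch.cosh(a * (1 - 2 * t.squeeze())))
    plt.plot(t.squeeze(), exact, "--", label=f"analytical, alpha = {a}")

plt.xlabel("t")
plt.ylabel("u(t)")
plt.legend()
plt.show()
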
# In[88]:
@@ -270,13 +270,13 @@ with torch.no_grad():
# ## What's Next?
#
#
# You have completed the tutorial on deep ensemble PINNs for bifurcating PDEs, well done! There are many potential next steps you can explore:
#
#
# 1. **Train the networks longer or with different hyperparameters**: Experiment with different configurations of the individual models; you can also compose an ensemble from models with different layers, activations, and so on, to improve accuracy.
#
#
# 2. **Solve more complex problems**: The original paper presents more challenging problems that can be solved with PINA; we suggest you try implementing and solving them!
#
#
# 3. **...and many more!**: There are countless directions to explore further; for example, what happens when you vary the network initialization hyperparameters?
#
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).