export tutorials changed in db9df8b

This commit is contained in:
dario-coscia
2025-05-05 08:59:15 +00:00
committed by Dario Coscia
parent a94791f0ff
commit e3d4c2fc1a
23 changed files with 737 additions and 727 deletions


@@ -2,13 +2,13 @@
# coding: utf-8
# # Tutorial: Learning Multiscale PDEs Using Fourier Feature Networks
#
# [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial13/tutorial.ipynb)
#
# This tutorial demonstrates how to solve a PDE with multiscale behavior using Physics-Informed Neural Networks (PINNs), as discussed in [*On the Eigenvector Bias of Fourier Feature Networks: From Regression to Solving Multi-Scale PDEs with Physics-Informed Neural Networks*](https://doi.org/10.1016/j.cma.2021.113938).
#
# Let's begin by importing the necessary libraries.
#
# In[ ]:
@@ -41,30 +41,30 @@ warnings.filterwarnings("ignore")
# ## Multiscale Problem
#
# We begin by presenting the problem, which is also discussed in Section 2 of [*On the Eigenvector Bias of Fourier Feature Networks: From Regression to Solving Multi-Scale PDEs with Physics-Informed Neural Networks*](https://doi.org/10.1016/j.cma.2021.113938). The one-dimensional Poisson problem we aim to solve is mathematically defined as:
#
# \begin{equation}
# \begin{cases}
# \Delta u(x) + f(x) = 0 \quad x \in [0,1], \\
# u(x) = 0 \quad x \in \partial[0,1],
# \end{cases}
# \end{equation}
#
# We define the solution as:
#
# $$
# u(x) = \sin(2\pi x) + 0.1 \sin(50\pi x),
# $$
#
# which leads to the corresponding force term:
#
# $$
# f(x) = (2\pi)^2 \sin(2\pi x) + 0.1 (50 \pi)^2 \sin(50\pi x).
# $$
#
# While this example is simple and pedagogical, note that the solution exhibits low-frequency behavior on the macro scale and high-frequency behavior on the micro scale. This characteristic is common in many practical scenarios.
#
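# As a quick sanity check (a sketch, not part of the original notebook), we can verify numerically that this forcing term satisfies $\Delta u(x) + f(x) = 0$, by differentiating $u$ twice with plain `torch` autograd:
#
# In[ ]:


import torch  # torch is already imported in the notebook preamble

x = torch.linspace(0.01, 0.99, 50, requires_grad=True)
u = torch.sin(2 * torch.pi * x) + 0.1 * torch.sin(50 * torch.pi * x)
f = (2 * torch.pi) ** 2 * torch.sin(2 * torch.pi * x) + 0.1 * (
    50 * torch.pi
) ** 2 * torch.sin(50 * torch.pi * x)

# first and second derivatives of u via autograd (each u_i depends only on x_i)
(du,) = torch.autograd.grad(u.sum(), x, create_graph=True)
(d2u,) = torch.autograd.grad(du.sum(), x)

print(f"max |Delta u + f| = {(d2u + f).abs().max():.2e}")  # ~0 up to float32 round-off (|f| is O(2e3))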
# Below is the implementation of the `Poisson` problem as described mathematically above.
# > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem from scratch — have a look if you're interested!**
@@ -112,12 +112,12 @@ problem.discretise_domain(128, "grid", domains=["phys_cond"])
problem.discretise_domain(1, "grid", domains=["bound_cond0", "bound_cond1"])
# A standard PINN approach would involve fitting the model using a Feed Forward (fully connected) Neural Network. For a conventional fully-connected neural network, it is relatively easy to approximate a function $u$, given sufficient data inside the computational domain.
#
# However, solving high-frequency or multi-scale problems presents significant challenges to PINNs, especially when the number of data points is insufficient to capture the different scales effectively.
#
# Below, we run a simulation using both the `PINN` solver and the self-adaptive `SAPINN` solver, employing a [`FeedForward`](https://mathlab.github.io/PINA/_modules/pina/model/feed_forward.html#FeedForward) model.
#
# In[3]:
@@ -182,10 +182,10 @@ plt.figure()
plot_solution(sapinn, "Self Adaptive PINN solution")
# We can clearly observe that neither of the two solvers has successfully learned the solution.
# The issue is not with the optimization strategy (i.e., the solver), but rather with the model used to solve the problem.
# A simple `FeedForward` network struggles to handle multiscale problems, especially when there are not enough collocation points to capture the different scales effectively.
#
# Next, let's compute the $l_2$ relative error for both the `PINN` and `SAPINN` solutions:
# In[5]:
@@ -205,20 +205,20 @@ print(
# Which is indeed very high!
#
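# For reference, the relative $l_2$ error used above is $\|u_{\rm pred} - u_{\rm true}\|_2 / \|u_{\rm true}\|_2$. A minimal sketch of the metric, assuming hypothetical tensors `u_pred` and `u_true` evaluated on the same points:
#
# In[ ]:


def l2_relative_error(u_pred, u_true):
    # relative error in the Euclidean norm; smaller values mean a better fit
    return (torch.linalg.norm(u_pred - u_true) / torch.linalg.norm(u_true)).item()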
# ## Fourier Feature Embedding in PINA
# Fourier Feature Embedding is a technique used to transform the input features, aiding the network in learning multiscale variations in the output. It was first introduced in [*On the Eigenvector Bias of Fourier Feature Networks: From Regression to Solving Multi-Scale PDEs with Physics-Informed Neural Networks*](https://doi.org/10.1016/j.cma.2021.113938), where it demonstrated excellent results for multiscale problems.
#
# The core idea behind Fourier Feature Embedding is to map the input $\mathbf{x}$ into an embedding $\tilde{\mathbf{x}}$, defined as:
#
# $$
# \tilde{\mathbf{x}} = \left[\cos\left( \mathbf{B} \mathbf{x} \right), \sin\left( \mathbf{B} \mathbf{x} \right)\right],
# $$
#
# where $\mathbf{B}_{ij} \sim \mathcal{N}(0, \sigma^2)$. This simple operation allows the network to learn across multiple scales!
#
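# The embedding itself takes only a few lines of `torch`. Below is a minimal, self-contained sketch of the idea (the class name and arguments are illustrative, not from the library):
#
# In[ ]:


class SimpleFourierEmbedding(torch.nn.Module):
    """Map x to [cos(Bx), sin(Bx)], with B_ij ~ N(0, sigma^2) fixed at init."""

    def __init__(self, input_dim, embed_dim, sigma):
        super().__init__()
        # B is sampled once and registered as a buffer, i.e. it is not trained
        self.register_buffer("B", sigma * torch.randn(embed_dim // 2, input_dim))

    def forward(self, x):
        proj = x @ self.B.T  # shape: (n_points, embed_dim // 2)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)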
# In **PINA**, this feature is already implemented as a `layer` called [`FourierFeatureEmbedding`](https://mathlab.github.io/PINA/_rst/layers/fourier_embedding.html). Below, we will build the *Multi-scale Fourier Feature Architecture*. In this architecture, multiple Fourier feature embeddings (initialized with different $\sigma$ values) are applied to the input coordinates. These embeddings are then passed through the same fully-connected neural network, and their outputs are concatenated and passed through a final linear layer.
#
# In[6]:
@@ -243,7 +243,7 @@ class MultiscaleFourierNet(torch.nn.Module):
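# concatenate both Fourier embeddings (e1, e2), then map them to the output dimension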
return self.final_layer(torch.cat([e1, e2], dim=-1))
# We will train the `MultiscaleFourierNet` using the `PINN` solver.
# Feel free to experiment with other PINN variants as well, such as `SAPINN`, `GPINN`, `CompetitivePINN`, and others, to see how they perform on this multiscale problem.
# In[7]:
@@ -278,17 +278,17 @@ print(
# It is clear that the network has learned the correct solution, with a very low error. Of course, longer training and a more expressive neural network could further improve the results!
#
# ## What's Next?
#
# Congratulations on completing the one-dimensional Poisson tutorial of **PINA** using `FourierFeatureEmbedding`! There are many potential next steps you can explore:
#
# 1. **Train the network longer or with different layer sizes**: Experiment with different configurations to improve accuracy.
#
# 2. **Understand the role of `sigma` in `FourierFeatureEmbedding`**: The original paper provides insightful details on the impact of `sigma`; it's a good next step to dive deeper into its effect. See the short sketch right after this list for a first intuition.
#
# 3. **Implement the *Spatio-temporal Multi-scale Fourier Feature Architecture***: Code this architecture for a more complex, time-dependent PDE (refer to Section 3 of the original paper).
#
# 4. **...and many more!**: There are countless directions to further explore, from testing on different problems to refining the model architecture.
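#
# As a tiny illustration of point 2 (a hypothetical sketch reusing the `SimpleFourierEmbedding` defined earlier, not code from the original paper): the standard deviation $\sigma$ of $\mathbf{B}$ sets the frequency of the features the network sees, so small $\sigma$ favors the macro scale and large $\sigma$ the micro scale.
#
# In[ ]:


x = torch.linspace(0, 1, 5).unsqueeze(-1)
slow = SimpleFourierEmbedding(input_dim=1, embed_dim=4, sigma=1.0)
fast = SimpleFourierEmbedding(input_dim=1, embed_dim=4, sigma=50.0)
# with sigma = 1 the features vary slowly over [0, 1]; with sigma = 50 they
# oscillate rapidly, matching the sin(50 pi x) component of the solution
print(slow(x))
print(fast(x))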
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).