{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: One-dimensional Helmholtz equation with Periodic Boundary Conditions\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial9/tutorial.ipynb)\n",
"\n",
"This tutorial shows how to solve a one-dimensional Helmholtz equation with\n",
"periodic boundary conditions (PBC) using Physics-Informed Neural Networks (PINNs).\n",
"We will use the standard PINN training, augmenting the network input with a\n",
"periodic expansion as presented in [*An expert’s guide to training\n",
"physics-informed neural networks*](\n",
"https://arxiv.org/abs/2308.08468).\n",
"\n",
"First of all, some useful imports."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## routine needed to run the notebook on Google Colab\n",
"try:\n",
"    import google.colab\n",
"    IN_COLAB = True\n",
"except ImportError:\n",
"    IN_COLAB = False\n",
"if IN_COLAB:\n",
"    !pip install \"pina-mathlab\"\n",
"\n",
"import torch\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('tableau-colorblind10')\n",
"from pina import Condition\n",
"from pina.problem import SpatialProblem\n",
"from pina.operator import laplacian\n",
"from pina.model import FeedForward\n",
"from pina.model.block import PeriodicBoundaryEmbedding  # The PBC module\n",
"from pina.solver import PINN\n",
"from pina.trainer import Trainer\n",
"from pina.domain import CartesianDomain\n",
"from pina.equation import Equation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The problem definition\n",
"\n",
"The one-dimensional Helmholtz problem is mathematically written as:\n",
"$$\n",
"\\begin{cases}\n",
"\\frac{d^2}{dx^2}u(x) - \\lambda u(x) - f(x) &= 0 \\quad x\\in(0,2)\\\\\n",
"u^{(m)}(x=0) - u^{(m)}(x=2) &= 0 \\quad m\\in\\{0, 1, \\cdots\\}\\\\\n",
"\\end{cases}\n",
"$$\n",
"In this case we are asking the solution to be $C^{\\infty}$ periodic with\n",
"period $2$ on the infinite domain $x\\in(-\\infty, \\infty)$. Notice that a\n",
"classical PINN would need infinitely many conditions to evaluate the PBC loss,\n",
"one for each derivative, which is of course infeasible.\n",
"A possible solution, diverging from the original PINN formulation,\n",
"is to use *coordinate augmentation*. In coordinate augmentation one seeks a\n",
"coordinate transformation $v$, $x\\rightarrow v(x)$, such that\n",
"the periodicity condition $u^{(m)}(x=0) - u^{(m)}(x=2) = 0, \\; m\\in\\{0, 1, \\cdots\\}$ is\n",
"satisfied.\n",
"\n",
"For demonstration purposes, the problem specifics are $\\lambda=-10\\pi^2$\n",
"and $f(x)=-6\\pi^2\\sin(3\\pi x)\\cos(\\pi x)$, which give a solution that can be\n",
"computed analytically: $u(x) = \\sin(\\pi x)\\cos(3\\pi x)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class Helmholtz(SpatialProblem):\n",
"    output_variables = ['u']\n",
"    spatial_domain = CartesianDomain({'x': [0, 2]})\n",
"\n",
"    def Helmholtz_equation(input_, output_):\n",
"        x = input_.extract('x')\n",
"        u_xx = laplacian(output_, input_, components=['u'], d=['x'])\n",
"        f = -6. * torch.pi**2 * torch.sin(3 * torch.pi * x) * torch.cos(torch.pi * x)\n",
"        lambda_ = -10. * torch.pi**2\n",
"        return u_xx - lambda_ * output_ - f\n",
"\n",
"    # here we write the problem conditions\n",
"    conditions = {\n",
"        'phys_cond': Condition(domain=spatial_domain,\n",
"                               equation=Equation(Helmholtz_equation)),\n",
"    }\n",
"\n",
"    def Helmholtz_sol(self, pts):\n",
"        return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)\n",
"\n",
"    truth_solution = Helmholtz_sol\n",
"\n",
"problem = Helmholtz()\n",
"\n",
"# let's discretise the domain\n",
"problem.discretise_domain(200, 'grid', domains=['phys_cond'])"
]
},
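{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, we can quickly sanity-check the analytical solution stated\n",
"above. The next cell is only an illustrative check (it is not part of the\n",
"problem definition): it uses `torch.autograd` to differentiate\n",
"$u(x) = \\sin(\\pi x)\\cos(3\\pi x)$ twice and verifies that the residual\n",
"$u'' - \\lambda u - f$ is numerically zero for $\\lambda=-10\\pi^2$ and the $f$\n",
"defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative sanity check of the analytical solution (not needed for training)\n",
"x_check = torch.linspace(0, 2, 200, requires_grad=True)\n",
"u = torch.sin(torch.pi * x_check) * torch.cos(3. * torch.pi * x_check)\n",
"u_x = torch.autograd.grad(u.sum(), x_check, create_graph=True)[0]\n",
"u_xx = torch.autograd.grad(u_x.sum(), x_check)[0]\n",
"f = -6. * torch.pi**2 * torch.sin(3 * torch.pi * x_check) * torch.cos(torch.pi * x_check)\n",
"residual = u_xx - (-10. * torch.pi**2) * u - f\n",
"print(f\"max |residual| = {residual.abs().max():.2e}\")  # close to zero, up to floating-point error"
]
},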
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As usual, the Helmholtz problem is written in **PINA** code as a class.\n",
"The equations are written as `conditions` that should be satisfied in the\n",
"corresponding domains. The `truth_solution`\n",
"is the exact solution, which will be compared with the predicted one. We used\n",
"a uniform grid for sampling the collocation points."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Solving the problem with a Periodic Network"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Any $\\mathcal{C}^{\\infty}$ periodic function\n",
"$u : \\mathbb{R} \\rightarrow \\mathbb{R}$ with period\n",
"$L\\in\\mathbb{N}$ can be constructed by composing an\n",
"arbitrary smooth function $f : \\mathbb{R}^n \\rightarrow \\mathbb{R}$ with a\n",
"given smooth periodic function $v : \\mathbb{R} \\rightarrow \\mathbb{R}^n$ of\n",
"period $L$, that is $u(x) = f(v(x))$. The formulation generalizes to\n",
"arbitrary dimensions, see [*A method for representing periodic functions and\n",
"enforcing exactly periodic boundary conditions with\n",
"deep neural networks*](https://arxiv.org/pdf/2007.07442).\n",
"\n",
"In our case, we choose\n",
"$v(x) = \\left[1, \\cos\\left(\\frac{2\\pi}{L} x\\right),\n",
"\\sin\\left(\\frac{2\\pi}{L} x\\right)\\right]$, i.e.\n",
"the coordinate augmentation, and $f(\\cdot) = NN_{\\theta}(\\cdot)$, i.e. a neural\n",
"network. The neural network obtained by composing $f$ with $v$ gives\n",
"the PINN approximate solution, that is\n",
"$u(x) \\approx u_{\\theta}(x)=NN_{\\theta}(v(x))$. Since $v$ is $L$-periodic, so is\n",
"$u_{\\theta}$ together with all its derivatives, hence the PBC are satisfied\n",
"exactly by construction.\n",
"\n",
"In **PINA** this translates into using the `PeriodicBoundaryEmbedding` layer for $v$, and any\n",
"`pina.model` for $NN_{\\theta}$. Let's see it in action!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# we encapsulate all modules in a torch.nn.Sequential container\n",
"model = torch.nn.Sequential(\n",
"    PeriodicBoundaryEmbedding(input_dimension=1, periods=2),\n",
"    FeedForward(input_dimensions=3,  # output of PeriodicBoundaryEmbedding = 3 * input_dimension\n",
"                output_dimensions=1,\n",
"                layers=[10, 10]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As simple as that! Notice that in higher dimensions you can specify a different\n",
"period for each dimension using a dictionary, e.g. `periods={'x':2, 'y':3, ...}`\n",
"would indicate a periodicity of $2$ in $x$, $3$ in $y$, and so on (a purely\n",
"illustrative sketch of this follows below).\n",
"\n",
"We will then solve the problem as usual with the `PINN` and `Trainer` classes."
]
},
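{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is only an illustrative sketch and is not used in the rest of\n",
"the tutorial: assuming the dictionary form of `periods` described above, and\n",
"the same $3\\times$ expansion per input coordinate seen in the 1D case, a\n",
"two-dimensional periodic model could be assembled as follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# purely illustrative sketch (not used below): a 2D periodic embedding with a\n",
"# different period per variable; we assume the embedding output size is still\n",
"# 3 * input_dimension, as in the 1D model above\n",
"model_2d = torch.nn.Sequential(\n",
"    PeriodicBoundaryEmbedding(input_dimension=2, periods={'x': 2, 'y': 3}),\n",
"    FeedForward(input_dimensions=6,  # 3 * input_dimension\n",
"                output_dimensions=1,\n",
"                layers=[10, 10]))"
]
},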
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pinn = PINN(problem=problem, model=model)\n",
"trainer = Trainer(pinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are going to plot the solution now!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pts = pinn.problem.spatial_domain.sample(256, 'grid', variables='x')\n",
"predicted_output = pinn.forward(pts).extract('u').as_subclass(torch.Tensor).cpu().detach()\n",
"true_output = pinn.problem.truth_solution(pts).cpu().detach()\n",
"pts = pts.cpu()\n",
"fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))\n",
"ax.plot(pts.extract(['x']), predicted_output, label='Neural Network solution')\n",
"ax.plot(pts.extract(['x']), true_output, label='True solution')\n",
"plt.legend()"
]
},
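{
"cell_type": "markdown",
"metadata": {},
"source": [
"To put a number on the visual agreement, the short cell below (an illustrative\n",
"addition, reusing the tensors computed above) reports the relative $L^2$ error\n",
"between the network prediction and the exact solution on the sampled points."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative: relative L2 error on the sampled points (reuses predicted_output, true_output)\n",
"l2_error = torch.linalg.norm(predicted_output - true_output) / torch.linalg.norm(true_output)\n",
"print(f\"relative L2 error: {l2_error.item():.2e}\")"
]
},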
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, they overlap perfectly! This is a good result, considering the simple neural network used to solve this (complex) problem. We will now test the neural network on the domain $[-4, 4]$ without retraining. In principle, the solution should remain periodic, since the $v$ function enforces periodicity on $(-\\infty, \\infty)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plotting solution\n",
"with torch.no_grad():\n",
"    # Notice here we put [-4, 4]!!!\n",
"    new_domain = CartesianDomain({'x': [-4, 4]})\n",
"    x = new_domain.sample(1000, mode='grid')\n",
"    fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
"    # Plot 1\n",
"    axes[0].plot(x, problem.truth_solution(x), label=r'$u(x)$', color='blue')\n",
"    axes[0].set_title(r'True solution $u(x)$')\n",
"    axes[0].legend(loc=\"upper right\")\n",
"    # Plot 2\n",
"    axes[1].plot(x, pinn(x), label=r'$u_{\\theta}(x)$', color='green')\n",
"    axes[1].set_title(r'PINN solution $u_{\\theta}(x)$')\n",
"    axes[1].legend(loc=\"upper right\")\n",
"    # Plot 3\n",
"    diff = torch.abs(problem.truth_solution(x) - pinn(x))\n",
"    axes[2].plot(x, diff, label=r'$|u(x) - u_{\\theta}(x)|$', color='red')\n",
"    axes[2].set_title(r'Absolute difference $|u(x) - u_{\\theta}(x)|$')\n",
"    axes[2].legend(loc=\"upper right\")\n",
"    # Adjust layout\n",
"    plt.tight_layout()\n",
"    # Show the plots\n",
"    plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is pretty clear that the network is periodic, with the error also following a periodic pattern. Of course, longer training and a more expressive neural network could improve the results!\n",
"\n",
"## What's next?\n",
"\n",
"Congratulations on completing the one-dimensional Helmholtz tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assess the final accuracy\n",
"\n",
"2. Apply the `PeriodicBoundaryEmbedding` layer to a time-dependent problem (see reference in the documentation)\n",
"\n",
"3. Exploit extra-feature training\n",
"\n",
"4. Many more..."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}