Fixing tutorials grammar (#242)

* grammar check and sparse rephrasing
* rst created
* meta copyright adjusted
This commit is contained in:
Giuseppe Alessio D'Inverno
2024-03-05 10:43:34 +01:00
committed by GitHub
parent 15136e13f8
commit b10e02103b
23 changed files with 272 additions and 237 deletions


@@ -4,11 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: One dimensional Helmotz equation using Periodic Boundary Conditions\n",
"# Tutorial: One dimensional Helmholtz equation using Periodic Boundary Conditions\n",
"This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs)\n",
"a one dimensional Helmotz equation with periodic boundary conditions (PBC).\n",
"a one dimensional Helmholtz equation with periodic boundary conditions (PBC).\n",
"We will train with the standard PINN training procedure by augmenting the input with\n",
"periodic expasion as presented in [*An experts guide to training\n",
"periodic expansion as presented in [*An expert's guide to training\n",
"physics-informed neural networks*](\n",
"https://arxiv.org/abs/2308.08468).\n",
"\n",
@@ -41,7 +41,7 @@
"source": [
"## The problem definition\n",
"\n",
"The one-dimensional Helmotz problem is mathematically written as:\n",
"The one-dimensional Helmholtz problem is mathematically written as:\n",
"$$\n",
"\\begin{cases}\n",
"\\frac{d^2}{dx^2}u(x) - \\lambda u(x) -f(x) &= 0 \\quad x\\in(0,2)\\\\\n",
@@ -49,17 +49,17 @@
"\\end{cases}\n",
"$$\n",
"In this case we are asking the solution to be $C^{\\infty}$ periodic with\n",
"period $2$, on the inifite domain $x\\in(-\\infty, \\infty)$. Notice that the\n",
"classical PINN would need inifinite conditions to evaluate the PBC loss function,\n",
"one for each derivative, which is of course infeasable... \n",
"period $2$, on the infinite domain $x\\in(-\\infty, \\infty)$. Notice that the\n",
"classical PINN would need infinite conditions to evaluate the PBC loss function,\n",
"one for each derivative, which is of course infeasible... \n",
"A possible solution, diverging from the original PINN formulation,\n",
"is to use *coordinate augmentation*. In coordinate augmentation you seek a\n",
"coordinate transformation $v$, $x\rightarrow v(x)$, such that\n",
"the periodicity condition $ u^{(m)}(x=0) - u^{(m)}(x=2) = 0 \\quad m\\in[0, 1, \\cdots] $ is\n",
"satisfied.\n",
"\n",
"For demonstration porpuses the problem specifics are $\\lambda=-10\\pi^2$,\n",
"and $f(x)=-6\\pi^2\\sin(3\\pi x)\\cos(\\pi x)$ which gives a solution that can be\n",
"For demonstration purposes, the problem specifics are $\\lambda=-10\\pi^2$,\n",
"and $f(x)=-6\\pi^2\\sin(3\\pi x)\\cos(\\pi x)$ which give a solution that can be\n",
"computed analytically $u(x) = \\sin(\\pi x)\\cos(3\\pi x)$."
]
},
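Since the tutorial states the analytic solution, it can be sanity-checked with a few lines of plain PyTorch autograd (a standalone sketch, independent of PINA and of the tutorial's objects):

```python
import torch

# Verify that u(x) = sin(pi x) cos(3 pi x) satisfies
#   u''(x) - lambda * u(x) - f(x) = 0
# with lambda = -10 pi^2 and f(x) = -6 pi^2 sin(3 pi x) cos(pi x).
x = torch.linspace(0.0, 2.0, 101, dtype=torch.float64, requires_grad=True)
u = torch.sin(torch.pi * x) * torch.cos(3 * torch.pi * x)
u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x.sum(), x)[0]

lam = -10 * torch.pi ** 2
f = -6 * torch.pi ** 2 * torch.sin(3 * torch.pi * x) * torch.cos(torch.pi * x)
residual = u_xx - lam * u - f
print(residual.abs().max().item())  # vanishes up to floating-point error
```

The residual is zero at every collocation point, confirming the stated solution.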
@@ -69,11 +69,11 @@
"metadata": {},
"outputs": [],
"source": [
"class Helmotz(SpatialProblem):\n",
"class Helmholtz(SpatialProblem):\n",
" output_variables = ['u']\n",
" spatial_domain = CartesianDomain({'x': [0, 2]})\n",
"\n",
" def helmotz_equation(input_, output_):\n",
" def helmholtz_equation(input_, output_):\n",
" x = input_.extract('x')\n",
" u_xx = laplacian(output_, input_, components=['u'], d=['x'])\n",
" f = - 6.*torch.pi**2 * torch.sin(3*torch.pi*x)*torch.cos(torch.pi*x)\n",
@@ -83,15 +83,15 @@
" # here we write the problem conditions\n",
" conditions = {\n",
" 'D': Condition(location=spatial_domain,\n",
" equation=Equation(helmotz_equation)),\n",
" equation=Equation(helmholtz_equation)),\n",
" }\n",
"\n",
" def helmotz_sol(self, pts):\n",
" def helmholtz_sol(self, pts):\n",
" return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)\n",
" \n",
" truth_solution = helmotz_sol\n",
" truth_solution = helmholtz_sol\n",
"\n",
"problem = Helmotz()\n",
"problem = Helmholtz()\n",
"\n",
"# let's discretise the domain\n",
"problem.discretise_domain(200, 'grid', locations=['D'])"
@@ -101,11 +101,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As usual the Helmotz problem is written in **PINA** code as a class. \n",
"As usual, the Helmholtz problem is written in **PINA** code as a class. \n",
"The equations are written as `conditions` that should be satisfied in the\n",
"corresponding domains. The `truth_solution`\n",
"is the exact solution which will be compared with the predicted one. We used\n",
"latin hypercube sampling for choosing the collocation points."
"an equispaced grid for choosing the collocation points."
]
},
{
@@ -159,11 +159,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As simple as that! Notice in higher dimension you can specify different periods\n",
"As simple as that! Notice that in higher dimensions you can specify different periods\n",
"for all dimensions using a dictionary, e.g. `periods={'x':2, 'y':3, ...}`\n",
"would indicate a periodicity of $2$ in $x$, $3$ in $y$, and so on...\n",
"\n",
"We will now sole the problem as usually with the `PINN` and `Trainer` class."
"We will now solve the problem as usual with the `PINN` and `Trainer` class."
]
},
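The idea behind the embedding can be sketched in a few lines of plain PyTorch (an illustration only; PINA's `PeriodicBoundaryEmbedding` is the layer actually used in this tutorial):

```python
import torch

def periodic_embedding(x, period=2.0):
    # v(x) = [cos(2 pi x / L), sin(2 pi x / L)]: smooth, and exactly
    # L-periodic together with all of its derivatives.
    omega = 2 * torch.pi / period
    return torch.cat([torch.cos(omega * x), torch.sin(omega * x)], dim=-1)

x = torch.linspace(-1.0, 1.0, 5, dtype=torch.float64).reshape(-1, 1)
# shifting the input by one period leaves the embedding unchanged
print(torch.allclose(periodic_embedding(x), periodic_embedding(x + 2.0)))
```

Because the network only ever sees $v(x)$, every derivative of the composed model inherits the periodicity for free.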
{
@@ -209,7 +209,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, they overlap perfectly! This seeams a good result, considering the simple neural network used to some this (complex) problem. We will now test the neural network on the domain $[-4, 4]$ without retraining. In principle the periodicity should be present since the $v$ function ensures the periodicity in $(-\\infty, \\infty)$."
"Great, they overlap perfectly! This seems like a good result, considering the simple neural network used to solve this (complex) problem. We will now test the neural network on the domain $[-4, 4]$ without retraining. In principle, the periodicity should be present since the $v$ function ensures the periodicity in $(-\\infty, \\infty)$."
]
},
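Why no retraining is needed can be seen with a minimal standalone sketch (a hypothetical untrained network, not the tutorial's model): any network composed with a period-2 embedding is exactly 2-periodic on the whole real line.

```python
import torch

torch.manual_seed(0)
# hypothetical small network, for illustration only
net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
).double()

def v(x):
    # period-2 coordinate augmentation
    return torch.cat([torch.cos(torch.pi * x), torch.sin(torch.pi * x)], dim=-1)

x = torch.linspace(-4.0, 4.0, 9, dtype=torch.float64).reshape(-1, 1)
# net(v(x)) is identical at x and x + 2, even far outside [0, 2]
print(torch.allclose(net(v(x)), net(v(x + 2.0))))
```

Training only tunes the weights; the periodicity is built into the architecture itself.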
{
@@ -258,11 +258,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is pretty clear that the network is periodic, with also the error following a periodic pattern. Obviusly a longer training, and a more expressive neural network could improve the results!\n",
"It is pretty clear that the network is periodic, with the error also following a periodic pattern. Obviously, a longer training and a more expressive neural network could improve the results!\n",
"\n",
"## What's next?\n",
"\n",
"Nice you have completed the one dimensional Helmotz tutorial of **PINA**! There are multiple directions you can go now:\n",
"Congratulations on completing the one dimensional Helmholtz tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assess the final accuracy\n",
"\n",
@@ -272,6 +272,11 @@
"\n",
"4. Many more..."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {


@@ -1,11 +1,11 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial: One dimensional Helmotz equation using Periodic Boundary Conditions
# # Tutorial: One dimensional Helmholtz equation using Periodic Boundary Conditions
# This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs)
# a one dimensional Helmotz equation with periodic boundary conditions (PBC).
# a one dimensional Helmholtz equation with periodic boundary conditions (PBC).
# We will train with the standard PINN training procedure by augmenting the input with
# periodic expasion as presented in [*An experts guide to training
# periodic expansion as presented in [*An expert's guide to training
# physics-informed neural networks*](
# https://arxiv.org/abs/2308.08468).
#
@@ -30,7 +30,7 @@ from pina.equation import Equation
# ## The problem definition
#
# The one-dimensional Helmotz problem is mathematically written as:
# The one-dimensional Helmholtz problem is mathematically written as:
# $$
# \begin{cases}
# \frac{d^2}{dx^2}u(x) - \lambda u(x) -f(x) &= 0 \quad x\in(0,2)\\
@@ -38,9 +38,9 @@ from pina.equation import Equation
# \end{cases}
# $$
# In this case we are asking the solution to be $C^{\infty}$ periodic with
# period $2$, on the inifite domain $x\in(-\infty, \infty)$. Notice that the
# classical PINN would need inifinite conditions to evaluate the PBC loss function,
# one for each derivative, which is of course infeasable...
# period $2$, on the infinite domain $x\in(-\infty, \infty)$. Notice that the
# classical PINN would need infinite conditions to evaluate the PBC loss function,
# one for each derivative, which is of course infeasible...
# A possible solution, diverging from the original PINN formulation,
# is to use *coordinate augmentation*. In coordinate augmentation you seek a
# coordinate transformation $v$, $x\rightarrow v(x)$, such that
@@ -54,11 +54,11 @@ from pina.equation import Equation
# In[2]:
class Helmotz(SpatialProblem):
class Helmholtz(SpatialProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 2]})
def helmotz_equation(input_, output_):
def helmholtz_equation(input_, output_):
x = input_.extract('x')
u_xx = laplacian(output_, input_, components=['u'], d=['x'])
f = - 6.*torch.pi**2 * torch.sin(3*torch.pi*x)*torch.cos(torch.pi*x)
@@ -68,21 +68,21 @@ class Helmotz(SpatialProblem):
# here we write the problem conditions
conditions = {
'D': Condition(location=spatial_domain,
equation=Equation(helmotz_equation)),
equation=Equation(helmholtz_equation)),
}
def helmotz_sol(self, pts):
def helmholtz_sol(self, pts):
return torch.sin(torch.pi * pts) * torch.cos(3. * torch.pi * pts)
truth_solution = helmotz_sol
truth_solution = helmholtz_sol
problem = Helmotz()
problem = Helmholtz()
# let's discretise the domain
problem.discretise_domain(200, 'grid', locations=['D'])
# As usual the Helmotz problem is written in **PINA** code as a class.
# As usual, the Helmholtz problem is written in **PINA** code as a class.
# The equations are written as `conditions` that should be satisfied in the
# corresponding domains. The `truth_solution`
# is the exact solution which will be compared with the predicted one. We used
@@ -129,7 +129,7 @@ model = torch.nn.Sequential(PeriodicBoundaryEmbedding(input_dimension=1,
#
# We will now solve the problem as usual with the `PINN` and `Trainer` class.
# In[5]:
# In[ ]:
pinn = PINN(problem=problem, model=model)
@@ -180,7 +180,7 @@ with torch.no_grad():
#
# ## What's next?
#
# Nice you have completed the one dimensional Helmotz tutorial of **PINA**! There are multiple directions you can go now:
# Congratulations on completing the one dimensional Helmholtz tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#