{
"cells": [
{
"cell_type": "markdown",
"id": "6a739a84",
"metadata": {},
"source": [
"# Tutorial: Two dimensional Wave problem with hard constraint\n",
"\n",
"[Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial3/tutorial.ipynb)\n",
"\n",
"In this tutorial, we show how to solve the wave equation with hard-constraint PINNs. To do so, we will build a custom `torch` model and pass it to the `PINN` solver.\n",
"\n",
"First of all, some useful imports."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d93daba0",
"metadata": {},
"outputs": [],
"source": [
"## routine needed to run the notebook on Google Colab\n",
"try:\n",
"    import google.colab\n",
"    IN_COLAB = True\n",
"except ImportError:\n",
"    IN_COLAB = False\n",
"if IN_COLAB:\n",
"    !pip install \"pina-mathlab\"\n",
"\n",
"import torch\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('tableau-colorblind10')\n",
"\n",
"from pina.problem import SpatialProblem, TimeDependentProblem\n",
"from pina.operator import laplacian, grad\n",
"from pina.domain import CartesianDomain\n",
"from pina.solver import PINN\n",
"from pina.trainer import Trainer\n",
"from pina.equation import Equation\n",
"from pina.equation.equation_factory import FixedValue\n",
"from pina import Condition, LabelTensor"
]
},
{
"cell_type": "markdown",
"id": "2316f24e",
"metadata": {},
"source": [
"## The problem definition"
]
},
{
"cell_type": "markdown",
"id": "bc2bbf62",
"metadata": {},
"source": [
"The problem is written in the following form:\n",
"\n",
"\\begin{equation}\n",
"\\begin{cases}\n",
"\\Delta u(x,y,t) = \\frac{\\partial^2}{\\partial t^2} u(x,y,t) \\quad \\text{in } D, \\\\\\\\\n",
"u(x, y, t=0) = \\sin(\\pi x)\\sin(\\pi y), \\\\\\\\\n",
"u(x, y, t) = 0 \\quad \\text{on } \\Gamma_1 \\cup \\Gamma_2 \\cup \\Gamma_3 \\cup \\Gamma_4,\n",
"\\end{cases}\n",
"\\end{equation}\n",
"\n",
"where $D$ is the square domain $[0,1]^2$, $\\Gamma_i$, with $i=1,\\dots,4$, are the four edges of the square, and the wave velocity in the standard wave equation is fixed to one."
]
},
{
"cell_type": "markdown",
"id": "cbc50741",
"metadata": {},
"source": [
"Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem`, since we deal with both spatial and time-dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution, in this case $u(x,y,t) = \\sin(\\pi x)\\sin(\\pi y)\\cos(\\sqrt{2}\\pi t)$, which will be compared with the predicted one."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b60176c4",
"metadata": {},
"outputs": [],
"source": [
"class Wave(TimeDependentProblem, SpatialProblem):\n",
"    output_variables = ['u']\n",
"    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})\n",
"    temporal_domain = CartesianDomain({'t': [0, 1]})\n",
"\n",
"    def wave_equation(input_, output_):\n",
"        u_t = grad(output_, input_, components=['u'], d=['t'])\n",
"        u_tt = grad(u_t, input_, components=['dudt'], d=['t'])\n",
"        nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])\n",
"        return nabla_u - u_tt\n",
"\n",
"    def initial_condition(input_, output_):\n",
"        u_expected = (torch.sin(torch.pi*input_.extract(['x'])) *\n",
"                      torch.sin(torch.pi*input_.extract(['y'])))\n",
"        return output_.extract(['u']) - u_expected\n",
"\n",
"    conditions = {\n",
"        'bound_cond1': Condition(domain=CartesianDomain({'x': [0, 1], 'y': 1, 't': [0, 1]}), equation=FixedValue(0.)),\n",
"        'bound_cond2': Condition(domain=CartesianDomain({'x': [0, 1], 'y': 0, 't': [0, 1]}), equation=FixedValue(0.)),\n",
"        'bound_cond3': Condition(domain=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),\n",
"        'bound_cond4': Condition(domain=CartesianDomain({'x': 0, 'y': [0, 1], 't': [0, 1]}), equation=FixedValue(0.)),\n",
"        'time_cond': Condition(domain=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}), equation=Equation(initial_condition)),\n",
"        'phys_cond': Condition(domain=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}), equation=Equation(wave_equation)),\n",
"    }\n",
"\n",
"    def wave_sol(self, pts):\n",
"        return (torch.sin(torch.pi*pts.extract(['x'])) *\n",
"                torch.sin(torch.pi*pts.extract(['y'])) *\n",
"                torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*pts.extract(['t'])))\n",
"\n",
"    truth_solution = wave_sol\n",
"\n",
"problem = Wave()"
]
},
{
"cell_type": "markdown",
"id": "03557e0c-1f82-4dad-b611-5d33fddfd0ef",
"metadata": {},
"source": [
"## Hard Constraint Model"
]
},
{
"cell_type": "markdown",
"id": "356fe363",
"metadata": {},
"source": [
"After defining the problem, a **torch** model is needed to solve it with a PINN. Many models are already implemented in **PINA**, but users also have the possibility to build their own model in `torch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, the solution is written as:\n",
"\n",
"$$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t), $$\n",
"\n",
"where $NN$ is the neural network output. The network takes the coordinates (in this case $x$, $y$ and $t$) as input and provides the unknown field $u$; by construction, the output is zero on the boundaries. The residuals of the equations are evaluated at several sampling points (which the user can control with the `discretise_domain` method), and the loss minimized by the neural network is the sum of the residuals."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9fbbb74f",
"metadata": {},
"outputs": [],
"source": [
"class HardMLP(torch.nn.Module):\n",
"\n",
"    def __init__(self, input_dim, output_dim):\n",
"        super().__init__()\n",
"\n",
"        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, output_dim))\n",
"\n",
"    # the hard constraint is implemented here in the forward pass\n",
"    def forward(self, x):\n",
"        hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))\n",
"        return hard*self.layers(x)"
]
},
{
"cell_type": "markdown",
"id": "f79fc901-4720-4fac-8b72-84ac5f7d2ec3",
"metadata": {},
"source": [
"## Train and Inference"
]
},
{
"cell_type": "markdown",
"id": "b465bebd",
"metadata": {},
"source": [
"In this tutorial, the neural network is trained for 1000 epochs with a learning rate of 0.001 (the default in `PINN`). Training takes approximately 3 minutes on CPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0be8e7f5",
"metadata": {},
"outputs": [],
"source": [
"# generate the data\n",
"problem.discretise_domain(1000, 'random', domains=['phys_cond', 'time_cond', 'bound_cond1', 'bound_cond2', 'bound_cond3', 'bound_cond4'])\n",
"\n",
"# create the solver\n",
"pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))\n",
"\n",
"# create trainer and train\n",
"trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False)  # we train on CPU and skip the model summary at the beginning of training (optional)\n",
"trainer.train()"
]
},
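{
"cell_type": "markdown",
"id": "boundary-check-md",
"metadata": {},
"source": [
"As a quick sanity check, we can evaluate the trained model on points of the spatial boundary and verify that the hard constraint forces the output to be (numerically) zero there. The cell below is only a minimal sketch: the probed edge ($y=0$) and the number of points are arbitrary illustrative choices."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "boundary-check-code",
"metadata": {},
"outputs": [],
"source": [
"# sanity check: the hard constraint should make u vanish on the spatial boundary.\n",
"# Here we probe the edge y = 0 at a few (x, t) values; the edge and the number\n",
"# of points are arbitrary illustrative choices.\n",
"n = 100\n",
"boundary_pts = torch.stack([torch.linspace(0, 1, n),   # x in [0, 1]\n",
"                            torch.zeros(n),            # y = 0 (one of the four edges)\n",
"                            torch.linspace(0, 1, n)],  # t in [0, 1]\n",
"                           dim=1)\n",
"boundary_pts = boundary_pts.as_subclass(LabelTensor)\n",
"boundary_pts.labels = ['x', 'y', 't']\n",
"u_boundary = pinn.forward(boundary_pts).extract('u')\n",
"print(f'max |u| on the edge y=0: {u_boundary.abs().max().item():.2e}')"
]
},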
{
"cell_type": "markdown",
"id": "c2a5c405",
"metadata": {},
"source": [
"Notice that the loss on the boundaries of the spatial domain is exactly zero, as expected! After training is completed, we can plot some results using `matplotlib`. We plot the predicted output on the left, the true solution in the center, and the difference between the two on the right."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c086c05f",
"metadata": {},
"outputs": [],
"source": [
"def fixed_time_plot(fixed_variables, pinn):\n",
"    # sample spatial points on a grid and append the fixed variables (e.g. the time)\n",
"    pts = pinn.problem.spatial_domain.sample(256, 'grid', variables=['x', 'y'])\n",
"    fixed_pts = torch.ones(pts.shape[0], len(fixed_variables))\n",
"    fixed_pts *= torch.tensor(list(fixed_variables.values()))\n",
"    fixed_pts = fixed_pts.as_subclass(LabelTensor)\n",
"    fixed_pts.labels = list(fixed_variables.keys())\n",
"    pts = pts.append(fixed_pts).to(device=pinn.device)\n",
"    predicted_output = pinn.forward(pts).extract('u').as_subclass(torch.Tensor).cpu().detach().reshape(256, 256)\n",
"    # get the true solution\n",
"    true_output = pinn.problem.truth_solution(pts).cpu().detach().reshape(256, 256)\n",
"    pts = pts.cpu()\n",
"    # plot the prediction, the true solution and their difference\n",
"    grids = [p_.reshape(256, 256) for p_ in pts.extract(['x', 'y']).T]\n",
"    fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(16, 6))\n",
"    cb = ax[0].contourf(*grids, predicted_output)\n",
"    fig.colorbar(cb, ax=ax[0])\n",
"    ax[0].title.set_text('Neural Network prediction')\n",
"    cb = ax[1].contourf(*grids, true_output)\n",
"    fig.colorbar(cb, ax=ax[1])\n",
"    ax[1].title.set_text('True solution')\n",
"    cb = ax[2].contourf(*grids, (true_output - predicted_output))\n",
"    fig.colorbar(cb, ax=ax[2])\n",
"    ax[2].title.set_text('Difference')\n",
"    plt.show(block=True)"
]
},
{
"cell_type": "markdown",
"id": "910c55d8",
"metadata": {},
"source": [
"Let's take a look at the results at different times, for example `0.0`, `0.5` and `1.0`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0265003f",
"metadata": {},
"outputs": [],
"source": [
"print('Plotting at t=0')\n",
"fixed_time_plot(fixed_variables={'t': 0.0}, pinn=pinn)\n",
"print('Plotting at t=0.5')\n",
"fixed_time_plot(fixed_variables={'t': 0.5}, pinn=pinn)\n",
"print('Plotting at t=1.0')\n",
"fixed_time_plot(fixed_variables={'t': 1.0}, pinn=pinn)"
]
},
{
"cell_type": "markdown",
"id": "35e51649",
"metadata": {},
"source": [
"The results are not great, and we can clearly see that the solution gets worse as time progresses. Can we do better?\n",
"\n",
"A valid option is to impose the initial condition as a hard constraint as well. Specifically, the solution is written as:\n",
"\n",
"$$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t)\\cdot t + \\cos(\\sqrt{2}\\pi t)\\sin(\\pi x)\\sin(\\pi y). $$\n",
"\n",
"At $t=0$ the first term vanishes and the second term reduces exactly to the initial condition, so both the boundary and the initial conditions are satisfied by construction. Let us build the network first."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33e43412",
"metadata": {},
"outputs": [],
"source": [
"class HardMLPtime(torch.nn.Module):\n",
"\n",
"    def __init__(self, input_dim, output_dim):\n",
"        super().__init__()\n",
"\n",
"        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, output_dim))\n",
"\n",
"    # the hard constraints (boundary and initial condition) are implemented here in the forward pass\n",
"    def forward(self, x):\n",
"        hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))\n",
"        hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))\n",
"        return hard_space * self.layers(x) * x.extract(['t']) + hard_t"
]
},
{
"cell_type": "markdown",
"id": "5d3dc67b",
"metadata": {},
"source": [
"Now let's train with the same configuration as in the previous test."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4bc6be2",
"metadata": {},
"outputs": [],
"source": [
"# generate the data\n",
"problem.discretise_domain(1000, 'random', domains=['phys_cond', 'time_cond', 'bound_cond1', 'bound_cond2', 'bound_cond3', 'bound_cond4'])\n",
"\n",
"# create the solver\n",
"pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables)))\n",
"\n",
"# create trainer and train\n",
"trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False)  # we train on CPU and skip the model summary at the beginning of training (optional)\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"id": "a0f80cb8",
"metadata": {},
"source": [
"We can clearly see that the loss is much lower now. Let's plot the results."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019767e5",
"metadata": {},
"outputs": [],
"source": [
"print('Plotting at t=0')\n",
"fixed_time_plot(fixed_variables={'t': 0.0}, pinn=pinn)\n",
"print('Plotting at t=0.5')\n",
"fixed_time_plot(fixed_variables={'t': 0.5}, pinn=pinn)\n",
"print('Plotting at t=1.0')\n",
"fixed_time_plot(fixed_variables={'t': 1.0}, pinn=pinn)\n"
]
},
{
"cell_type": "markdown",
"id": "b7338109",
"metadata": {},
"source": [
"We can now see that the results are much better! Previously, the network was not learning the initial condition correctly, which led to a poor solution as time evolved. By imposing the initial condition as a hard constraint, the network is able to solve the problem correctly."
]
},
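{
"cell_type": "markdown",
"id": "rel-l2-md",
"metadata": {},
"source": [
"To put a number on the improvement, the cell below sketches a simple accuracy check: the relative $L^2$ error of the current model with respect to `truth_solution`, evaluated on randomly sampled space-time points. The number of test points (10000) is an arbitrary choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "rel-l2-code",
"metadata": {},
"outputs": [],
"source": [
"# relative L2 error of the trained model against the exact solution,\n",
"# computed on randomly sampled (x, y, t) points in [0, 1]^3\n",
"test_pts = torch.rand(10000, 3).as_subclass(LabelTensor)\n",
"test_pts.labels = ['x', 'y', 't']\n",
"u_pred = pinn.forward(test_pts).extract('u').as_subclass(torch.Tensor).detach()\n",
"u_true = pinn.problem.truth_solution(test_pts).as_subclass(torch.Tensor)\n",
"rel_l2 = torch.linalg.norm(u_pred - u_true) / torch.linalg.norm(u_true)\n",
"print(f'relative L2 error: {rel_l2.item():.3e}')"
]
},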
{
"cell_type": "markdown",
"id": "61195b1f",
"metadata": {},
"source": [
"## What's next?\n",
"\n",
"Congratulations on completing the two-dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:\n",
"\n",
"1. Train the network for longer or with different layer sizes and assess the final accuracy\n",
"\n",
"2. Propose new types of hard constraints in time, e.g. $$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t)(1-\\exp(-t)) + \\cos(\\sqrt{2}\\pi t)\\sin(\\pi x)\\sin(\\pi y) $$ (a minimal sketch of such a model is given below)\n",
"\n",
"3. Exploit extra-feature training for models 1 and 2\n",
"\n",
"4. Many more..."
]
},
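{
"cell_type": "markdown",
"id": "exp-hard-md",
"metadata": {},
"source": [
"Following up on point 2, here is a minimal sketch of such a model, where the time constraint is imposed through a $(1-e^{-t})$ factor instead of $t$. The class name `HardMLPexp` and the layer sizes are illustrative choices; the model can be passed to `PINN` and trained exactly like the ones above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "exp-hard-code",
"metadata": {},
"outputs": [],
"source": [
"# sketch for point 2 above: hard constraint in time via a (1 - exp(-t)) factor;\n",
"# the class name and the layer sizes are illustrative choices\n",
"class HardMLPexp(torch.nn.Module):\n",
"\n",
"    def __init__(self, input_dim, output_dim):\n",
"        super().__init__()\n",
"\n",
"        self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, 40),\n",
"                                          torch.nn.ReLU(),\n",
"                                          torch.nn.Linear(40, output_dim))\n",
"\n",
"    def forward(self, x):\n",
"        hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))\n",
"        hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))\n",
"        return hard_space * self.layers(x) * (1 - torch.exp(-x.extract(['t']))) + hard_t\n",
"\n",
"# e.g.: pinn = PINN(problem, HardMLPexp(len(problem.input_variables), len(problem.output_variables)))"
]
}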
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}