{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "6a739a84",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Tutorial: Two dimensional Wave problem with hard constraint\n",
|
|
"\n",
|
|
"[](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial3/tutorial.ipynb)\n",
|
|
"\n",
|
|
"In this tutorial we present how to solve the wave equation using hard constraint PINNs. For doing so we will build a costum `torch` model and pass it to the `PINN` solver.\n",
|
|
"\n",
|
|
"First of all, some useful imports."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "d93daba0",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"## routine needed to run the notebook on Google Colab\n",
|
|
"try:\n",
|
|
" import google.colab\n",
|
|
" IN_COLAB = True\n",
|
|
"except:\n",
|
|
" IN_COLAB = False\n",
|
|
"if IN_COLAB:\n",
|
|
" !pip install \"pina-mathlab\"\n",
|
|
" \n",
|
|
"import torch\n",
|
|
"import matplotlib.pyplot as plt\n",
|
|
"import warnings\n",
|
|
"\n",
|
|
"from pina import Condition, LabelTensor\n",
|
|
"from pina.problem import SpatialProblem, TimeDependentProblem\n",
|
|
"from pina.operator import laplacian, grad\n",
|
|
"from pina.domain import CartesianDomain\n",
|
|
"from pina.solver import PINN\n",
|
|
"from pina.trainer import Trainer\n",
|
|
"from pina.equation import Equation, FixedValue\n",
|
|
"\n",
|
|
"from lightning.pytorch.loggers import TensorBoardLogger\n",
|
|
"\n",
|
|
"warnings.filterwarnings('ignore')"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2316f24e",
|
|
"metadata": {},
|
|
"source": [
|
|
"## The problem definition "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "bc2bbf62",
|
|
"metadata": {},
|
|
"source": [
|
|
"The problem is written in the following form:\n",
|
|
"\n",
|
|
"\\begin{equation}\n",
|
|
"\\begin{cases}\n",
|
|
"\\Delta u(x,y,t) = \\frac{\\partial^2}{\\partial t^2} u(x,y,t) \\quad \\text{in } D, \\\\\\\\\n",
|
|
"u(x, y, t=0) = \\sin(\\pi x)\\sin(\\pi y), \\\\\\\\\n",
|
|
"u(x, y, t) = 0 \\quad \\text{on } \\Gamma_1 \\cup \\Gamma_2 \\cup \\Gamma_3 \\cup \\Gamma_4,\n",
|
|
"\\end{cases}\n",
|
|
"\\end{equation}\n",
|
|
"\n",
|
|
"where $D$ is a squared domain $[0,1]^2$, and $\\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one."
|
|
]
|
|
},
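  {
   "cell_type": "markdown",
   "id": "analytic-solution-check",
   "metadata": {},
   "source": [
    "For reference, this problem admits the analytical solution\n",
    "\n",
    "$$ u(x, y, t) = \\sin(\\pi x)\\sin(\\pi y)\\cos(\\sqrt{2}\\pi t), $$\n",
    "\n",
    "which is used as `truth_solution` below. A quick check: $\\Delta u = -2\\pi^2 u$ and $\\frac{\\partial^2 u}{\\partial t^2} = -2\\pi^2 u$, so the equation is satisfied; at $t=0$ the solution reduces to $\\sin(\\pi x)\\sin(\\pi y)$ (with zero initial velocity, which is implicitly assumed here); and it vanishes on the four sides of the square."
   ]
  },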
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "cbc50741",
|
|
"metadata": {},
|
|
"source": [
|
|
"Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem` since we deal with spatial, and time dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution which will be compared with the predicted one."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "b60176c4",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"class Wave(TimeDependentProblem, SpatialProblem):\n",
|
|
" output_variables = [\"u\"]\n",
|
|
" spatial_domain = CartesianDomain({\"x\": [0, 1], \"y\": [0, 1]})\n",
|
|
" temporal_domain = CartesianDomain({\"t\": [0, 1]})\n",
|
|
"\n",
|
|
" def wave_equation(input_, output_):\n",
|
|
" u_t = grad(output_, input_, components=[\"u\"], d=[\"t\"])\n",
|
|
" u_tt = grad(u_t, input_, components=[\"dudt\"], d=[\"t\"])\n",
|
|
" nabla_u = laplacian(output_, input_, components=[\"u\"], d=[\"x\", \"y\"])\n",
|
|
" return nabla_u - u_tt\n",
|
|
"\n",
|
|
" def initial_condition(input_, output_):\n",
|
|
" u_expected = torch.sin(torch.pi * input_.extract([\"x\"])) * torch.sin(\n",
|
|
" torch.pi * input_.extract([\"y\"])\n",
|
|
" )\n",
|
|
" return output_.extract([\"u\"]) - u_expected\n",
|
|
"\n",
|
|
" conditions = {\n",
|
|
" \"bound_cond1\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": [0, 1], \"y\": 1, \"t\": [0, 1]}),\n",
|
|
" equation=FixedValue(0.0),\n",
|
|
" ),\n",
|
|
" \"bound_cond2\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": [0, 1], \"y\": 0, \"t\": [0, 1]}),\n",
|
|
" equation=FixedValue(0.0),\n",
|
|
" ),\n",
|
|
" \"bound_cond3\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": 1, \"y\": [0, 1], \"t\": [0, 1]}),\n",
|
|
" equation=FixedValue(0.0),\n",
|
|
" ),\n",
|
|
" \"bound_cond4\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": 0, \"y\": [0, 1], \"t\": [0, 1]}),\n",
|
|
" equation=FixedValue(0.0),\n",
|
|
" ),\n",
|
|
" \"time_cond\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": [0, 1], \"y\": [0, 1], \"t\": 0}),\n",
|
|
" equation=Equation(initial_condition),\n",
|
|
" ),\n",
|
|
" \"phys_cond\": Condition(\n",
|
|
" domain=CartesianDomain({\"x\": [0, 1], \"y\": [0, 1], \"t\": [0, 1]}),\n",
|
|
" equation=Equation(wave_equation),\n",
|
|
" ),\n",
|
|
" }\n",
|
|
"\n",
|
|
" def truth_solution(self, pts):\n",
|
|
" f = (\n",
|
|
" torch.sin(torch.pi * pts.extract([\"x\"]))\n",
|
|
" * torch.sin(torch.pi * pts.extract([\"y\"]))\n",
|
|
" * torch.cos(\n",
|
|
" torch.sqrt(torch.tensor(2.0)) * torch.pi * pts.extract([\"t\"])\n",
|
|
" )\n",
|
|
" )\n",
|
|
" return LabelTensor(f, self.output_variables)\n",
|
|
"\n",
|
|
"\n",
|
|
"# define problem\n",
|
|
"problem = Wave()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "03557e0c-1f82-4dad-b611-5d33fddfd0ef",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Hard Constraint Model"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "356fe363",
|
|
"metadata": {},
|
|
"source": [
|
|
"After the problem, a **torch** model is needed to solve the PINN. Usually, many models are already implemented in **PINA**, but the user has the possibility to build his/her own model in `torch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:\n",
|
|
"\n",
|
|
"$$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t), $$\n",
|
|
"\n",
|
|
"where $NN$ is the neural net output. This neural network takes as input the coordinates (in this case $x$, $y$ and $t$) and provides the unknown field $u$. By construction, it is zero on the boundaries. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `discretise_domain`) and the loss minimized by the neural network is the sum of the residuals."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "9fbbb74f",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"class HardMLP(torch.nn.Module):\n",
|
|
"\n",
|
|
" def __init__(self, input_dim, output_dim):\n",
|
|
" super().__init__()\n",
|
|
"\n",
|
|
" self.layers = torch.nn.Sequential(\n",
|
|
" torch.nn.Linear(input_dim, 40),\n",
|
|
" torch.nn.ReLU(),\n",
|
|
" torch.nn.Linear(40, 40),\n",
|
|
" torch.nn.ReLU(),\n",
|
|
" torch.nn.Linear(40, output_dim),\n",
|
|
" )\n",
|
|
"\n",
|
|
" # here in the foward we implement the hard constraints\n",
|
|
" def forward(self, x):\n",
|
|
" hard = (\n",
|
|
" x.extract([\"x\"])\n",
|
|
" * (1 - x.extract([\"x\"]))\n",
|
|
" * x.extract([\"y\"])\n",
|
|
" * (1 - x.extract([\"y\"]))\n",
|
|
" )\n",
|
|
" return hard * self.layers(x)"
|
|
]
|
|
},
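  {
   "cell_type": "markdown",
   "id": "hard-constraint-check-md",
   "metadata": {},
   "source": [
    "Before training, we can quickly verify the hard constraint: the prefactor $x(1-x)y(1-y)$ vanishes whenever $x \\in \\{0, 1\\}$ or $y \\in \\{0, 1\\}$, so the model output is exactly zero on the spatial boundary regardless of the network weights. The cell below is a minimal sanity check of this property; the boundary points are chosen arbitrarily and the check is not needed for training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "hard-constraint-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# minimal sanity check (illustrative only): evaluate an untrained HardMLP\n",
    "# on a few points lying on the spatial boundary of the domain\n",
    "check_model = HardMLP(len(problem.input_variables), len(problem.output_variables))\n",
    "boundary_points = LabelTensor(\n",
    "    torch.tensor([[0.0, 0.3, 0.7], [1.0, 0.5, 0.2], [0.4, 1.0, 0.9]]),\n",
    "    [\"x\", \"y\", \"t\"],\n",
    ")\n",
    "print(check_model(boundary_points))  # expected: zeros (up to floating point)"
   ]
  },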
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f79fc901-4720-4fac-8b72-84ac5f7d2ec3",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Train and Inference"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b465bebd",
|
|
"metadata": {},
|
|
"source": [
|
|
"In this tutorial, the neural network is trained for 1000 epochs with a learning rate of 0.001 (default in `PINN`). As always, we will log using `Tensorboard`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "0be8e7f5",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# generate the data\n",
|
|
"problem.discretise_domain(\n",
|
|
" 1000,\n",
|
|
" \"random\",\n",
|
|
" domains=[\n",
|
|
" \"phys_cond\",\n",
|
|
" \"time_cond\",\n",
|
|
" \"bound_cond1\",\n",
|
|
" \"bound_cond2\",\n",
|
|
" \"bound_cond3\",\n",
|
|
" \"bound_cond4\",\n",
|
|
" ],\n",
|
|
")\n",
|
|
"\n",
|
|
"# define model\n",
|
|
"model = HardMLP(len(problem.input_variables), len(problem.output_variables))\n",
|
|
"\n",
|
|
"# crete the solver\n",
|
|
"pinn = PINN(problem=problem, model=model)\n",
|
|
"\n",
|
|
"# create trainer and train\n",
|
|
"trainer = Trainer(\n",
|
|
" solver=pinn,\n",
|
|
" max_epochs=1000,\n",
|
|
" accelerator=\"cpu\",\n",
|
|
" enable_model_summary=False,\n",
|
|
" train_size=1.0,\n",
|
|
" val_size=0.0,\n",
|
|
" test_size=0.0,\n",
|
|
" logger=TensorBoardLogger(\"tutorial_logs\"),\n",
|
|
" enable_progress_bar=False,\n",
|
|
")\n",
|
|
"trainer.train()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4c6dbfac",
|
|
"metadata": {},
|
|
"source": [
|
|
"Let's now plot the logging to see how the losses vary during training. For this, we will use `TensorBoard`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "77bfcb6e",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Load the TensorBoard extension\n",
|
|
"%load_ext tensorboard\n",
|
|
"%tensorboard --logdir 'tutorial_logs'\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c2a5c405",
|
|
"metadata": {},
|
|
"source": [
|
|
"Notice that the loss on the boundaries of the spatial domain is exactly zero, as expected! After the training is completed one can now plot some results using the `matplotlib`. We plot the predicted output on the left side, the true solution at the center and the difference on the right side using the `plot_solution` function."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "c086c05f",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"@torch.no_grad()\n",
|
|
"def plot_solution(solver, time):\n",
|
|
" # get the problem\n",
|
|
" problem = solver.problem\n",
|
|
" # get spatial points\n",
|
|
" spatial_samples = problem.spatial_domain.sample(30, \"grid\")\n",
|
|
" # get temporal value\n",
|
|
" time = LabelTensor(torch.tensor([[time]]), \"t\")\n",
|
|
" # cross data\n",
|
|
" points = spatial_samples.append(time, mode=\"cross\")\n",
|
|
" # compute pinn solution, true solution and absolute difference\n",
|
|
" data = {\n",
|
|
" \"PINN solution\": solver(points),\n",
|
|
" \"True solution\": problem.truth_solution(points),\n",
|
|
" \"Absolute Difference\": torch.abs(\n",
|
|
" solver(points) - problem.truth_solution(points)\n",
|
|
" )\n",
|
|
" }\n",
|
|
" # plot the solution\n",
|
|
" plt.suptitle(f'Solution for time {time.item()}')\n",
|
|
" for idx, (title, field) in enumerate(data.items()):\n",
|
|
" plt.subplot(1, 3, idx + 1)\n",
|
|
" plt.title(title)\n",
|
|
" plt.tricontourf( # convert to torch tensor + flatten\n",
|
|
" points.extract(\"x\").tensor.flatten(),\n",
|
|
" points.extract(\"y\").tensor.flatten(),\n",
|
|
" field.tensor.flatten(),\n",
|
|
" )\n",
|
|
" plt.colorbar(), plt.tight_layout()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "910c55d8",
|
|
"metadata": {},
|
|
"source": [
|
|
"Let's take a look at the results at different times, for example `0.0`, `0.5` and `1.0`:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "0265003f",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=0)\n",
|
|
"\n",
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=0.5)\n",
|
|
"\n",
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=1)"
|
|
]
|
|
},
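  {
   "cell_type": "markdown",
   "id": "l2-error-check-md",
   "metadata": {},
   "source": [
    "To put a number on what the plots show, the cell below is a minimal sketch that estimates the relative $L^2$ error of the trained model against the analytical solution on a space-time grid; it samples the temporal domain in the same way `plot_solution` samples the spatial one. The same check can be rerun after training the improved model below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "l2-error-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# estimate the relative L2 error on a space-time grid (illustrative sketch)\n",
    "space_pts = problem.spatial_domain.sample(30, \"grid\")\n",
    "time_pts = problem.temporal_domain.sample(30, \"grid\")\n",
    "pts = space_pts.append(time_pts, mode=\"cross\")\n",
    "\n",
    "with torch.no_grad():\n",
    "    u_pred = pinn(pts)\n",
    "    u_true = problem.truth_solution(pts)\n",
    "    rel_err = torch.linalg.norm(u_pred - u_true) / torch.linalg.norm(u_true)\n",
    "print(f\"Relative L2 error: {rel_err.item():.3e}\")"
   ]
  },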
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "35e51649",
|
|
"metadata": {},
|
|
"source": [
|
|
"The results are not so great, and we can clearly see that as time progresses the solution gets worse.... Can we do better?\n",
|
|
"\n",
|
|
"A valid option is to impose the initial condition as hard constraint as well. Specifically, our solution is written as:\n",
|
|
"\n",
|
|
"$$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t)\\cdot t + \\cos(\\sqrt{2}\\pi t)\\sin(\\pi x)\\sin(\\pi y), $$\n",
|
|
"\n",
|
|
"Let us build the network first"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "33e43412",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"class HardMLPtime(torch.nn.Module):\n",
|
|
"\n",
|
|
" def __init__(self, input_dim, output_dim):\n",
|
|
" super().__init__()\n",
|
|
"\n",
|
|
" self.layers = torch.nn.Sequential(\n",
|
|
" torch.nn.Linear(input_dim, 40),\n",
|
|
" torch.nn.ReLU(),\n",
|
|
" torch.nn.Linear(40, 40),\n",
|
|
" torch.nn.ReLU(),\n",
|
|
" torch.nn.Linear(40, output_dim),\n",
|
|
" )\n",
|
|
"\n",
|
|
" # here in the foward we implement the hard constraints\n",
|
|
" def forward(self, x):\n",
|
|
" hard_space = (\n",
|
|
" x.extract([\"x\"])\n",
|
|
" * (1 - x.extract([\"x\"]))\n",
|
|
" * x.extract([\"y\"])\n",
|
|
" * (1 - x.extract([\"y\"]))\n",
|
|
" )\n",
|
|
" hard_t = (\n",
|
|
" torch.sin(torch.pi * x.extract([\"x\"]))\n",
|
|
" * torch.sin(torch.pi * x.extract([\"y\"]))\n",
|
|
" * torch.cos(\n",
|
|
" torch.sqrt(torch.tensor(2.0)) * torch.pi * x.extract([\"t\"])\n",
|
|
" )\n",
|
|
" )\n",
|
|
" return hard_space * self.layers(x) * x.extract([\"t\"]) + hard_t"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5d3dc67b",
|
|
"metadata": {},
|
|
"source": [
|
|
"Now let's train with the same configuration as the previous test"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "f4bc6be2",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# define model\n",
|
|
"model = HardMLPtime(len(problem.input_variables), len(problem.output_variables))\n",
|
|
"\n",
|
|
"# crete the solver\n",
|
|
"pinn = PINN(problem=problem, model=model)\n",
|
|
"\n",
|
|
"# create trainer and train\n",
|
|
"trainer = Trainer(\n",
|
|
" solver=pinn,\n",
|
|
" max_epochs=1000,\n",
|
|
" accelerator=\"cpu\",\n",
|
|
" enable_model_summary=False,\n",
|
|
" train_size=1.0,\n",
|
|
" val_size=0.0,\n",
|
|
" test_size=0.0,\n",
|
|
" logger=TensorBoardLogger(\"tutorial_logs\"),\n",
|
|
" enable_progress_bar=False,\n",
|
|
")\n",
|
|
"trainer.train()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "a0f80cb8",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can clearly see that the loss is way lower now. Let's plot the results"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "019767e5",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=0)\n",
|
|
"\n",
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=0.5)\n",
|
|
"\n",
|
|
"plt.figure(figsize=(12, 6))\n",
|
|
"plot_solution(solver=pinn, time=1)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b7338109",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when time evolved. By imposing the initial condition the network is able to correctly solve the problem. We can also see using Tensorboard how the two losses decreased:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "7ce34dac",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"%tensorboard --logdir 'tutorial_logs'"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "61195b1f",
|
|
"metadata": {},
|
|
"source": [
|
|
"## What's next?\n",
|
|
"\n",
|
|
"Congratulations on completing the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:\n",
|
|
"\n",
|
|
"1. Train the network for longer or with different layer sizes and assert the finaly accuracy\n",
|
|
"\n",
|
|
"2. Propose new types of hard constraints in time, e.g. $$ u_{\\rm{pinn}} = xy(1-x)(1-y)\\cdot NN(x, y, t)(1-\\exp(-t)) + \\cos(\\sqrt{2}\\pi t)sin(\\pi x)\\sin(\\pi y), $$\n",
|
|
"\n",
|
|
"3. Exploit extrafeature training for model 1 and 2\n",
|
|
"\n",
|
|
"4. Many more..."
|
|
]
|
|
}
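  {
   "cell_type": "markdown",
   "id": "exp-constraint-md",
   "metadata": {},
   "source": [
    "As a starting point for suggestion 2, the cell below sketches a model identical to `HardMLPtime` except that the factor $t$ is replaced by $1 - \\exp(-t)$, which also vanishes at $t = 0$ but saturates to one for large times. It is only a template: it is not trained here, and the architecture is copied unchanged from the models above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "exp-constraint-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "class HardMLPexp(torch.nn.Module):\n",
    "\n",
    "    def __init__(self, input_dim, output_dim):\n",
    "        super().__init__()\n",
    "\n",
    "        self.layers = torch.nn.Sequential(\n",
    "            torch.nn.Linear(input_dim, 40),\n",
    "            torch.nn.ReLU(),\n",
    "            torch.nn.Linear(40, 40),\n",
    "            torch.nn.ReLU(),\n",
    "            torch.nn.Linear(40, output_dim),\n",
    "        )\n",
    "\n",
    "    # same hard constraints as HardMLPtime, with t replaced by (1 - exp(-t))\n",
    "    def forward(self, x):\n",
    "        hard_space = (\n",
    "            x.extract([\"x\"])\n",
    "            * (1 - x.extract([\"x\"]))\n",
    "            * x.extract([\"y\"])\n",
    "            * (1 - x.extract([\"y\"]))\n",
    "        )\n",
    "        hard_t = (\n",
    "            torch.sin(torch.pi * x.extract([\"x\"]))\n",
    "            * torch.sin(torch.pi * x.extract([\"y\"]))\n",
    "            * torch.cos(\n",
    "                torch.sqrt(torch.tensor(2.0)) * torch.pi * x.extract([\"t\"])\n",
    "            )\n",
    "        )\n",
    "        time_factor = 1 - torch.exp(-x.extract([\"t\"]))\n",
    "        return hard_space * self.layers(x) * time_factor + hard_t"
   ]
  }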
|
|
],
|
|
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "pina",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.9.21"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 5
|
|
}
|