{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "6f71ca5c", "metadata": {}, "source": [ "# Tutorial: Introductory Tutorial: A Beginner’s Guide to PINA\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial17/tutorial.ipynb)\n", "\n", "
*PINA logo*
\n", "\n", "\n", "Welcome to **PINA**!\n", "\n", "PINA [1] is an open-source Python library designed for **Scientific Machine Learning (SciML)** tasks, particularly involving:\n", "\n", "- **Physics-Informed Neural Networks (PINNs)**\n", "- **Neural Operators (NOs)**\n", "- **Reduced Order Models (ROMs)**\n", "- **Graph Neural Networks (GNNs)**\n", "- ...\n", "\n", "Built on **PyTorch**, **PyTorch Lightning**, and **PyTorch Geometric**, it provides a **user-friendly, intuitive interface** for formulating and solving differential problems using neural networks.\n", "\n", "This tutorial offers a **step-by-step guide** to using PINA—starting from basic to advanced techniques—enabling users to tackle a broad spectrum of differential problems with minimal code.\n", "\n", "\n" ] }, { "cell_type": "markdown", "id": "3014129d", "metadata": {}, "source": [ "## The PINA Workflow \n", "\n", "
*PINA workflow diagram*
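\n", "\n", "In code, the whole pipeline boils down to a few lines. Here is a schematic preview (the placeholder names `MyProblem` and `MyModel` are stand-ins; each step is explained below):\n", "\n", "```python\n", "problem = MyProblem()          # 1. problem & data\n", "model = MyModel()              # 2. model design (a torch.nn.Module)\n", "solver = PINN(problem, model)  # 3. solver selection\n", "Trainer(solver).train()        # 4. training\n", "```\n", "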
\n", "\n", "Solving a differential problem in **PINA** involves four main steps:\n", "\n", "1. ***Problem & Data***\n", " Define the mathematical problem and its physical constraints using PINA’s base classes: \n", " - `AbstractProblem`\n", " - `SpatialProblem`\n", " - `InverseProblem` \n", " - ...\n", "\n", " Then prepare inputs by discretizing the domain or importing numerical data. PINA provides essential tools like the `Conditions` class and the `pina.domain` module to facilitate domain sampling and ensure that the input data aligns with the problem's requirements.\n", "\n", "> **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem from scratch — have a look if you're interested!**\n", "\n", "2. ***Model Design*** \n", " Build neural network models as **PyTorch modules**. For graph-structured data, use **PyTorch Geometric** to build Graph Neural Networks. You can also import models from `pina.model` module!\n", "\n", "3. ***Solver Selection*** \n", " Choose and configure a solver to optimize your model. Options include:\n", " - **Supervised solvers**: `SupervisedSolver`, `ReducedOrderModelSolver`\n", " - **Physics-informed solvers**: `PINN` and (many) variants\n", " - **Generative solvers**: `GAROM` \n", " Solvers can be used out-of-the-box, extended, or fully customized.\n", "\n", "4. ***Training*** \n", " Train your model using the `Trainer` class (built on **PyTorch Lightning**), which enables scalable and efficient training with advanced features.\n", "\n", "\n", "By following these steps, PINA simplifies applying deep learning to scientific computing and differential problems.\n", "\n", "\n", "## A Simple Regression Problem in PINA\n", "We'll start with a simple regression problem [2] of approximating the following function with a Neural Net model $\\mathcal{M}_{\\theta}$:\n", "$$y = x^3 + \\epsilon, \\quad \\epsilon \\sim \\mathcal{N}(0, 9)$$ \n", "using only 20 samples: \n", "\n", "$$x_i \\sim \\mathcal{U}[-3, 3], \\; \\forall i \\in \\{1, \\dots, 20\\}$$\n", "\n", "Using PINA, we will:\n", "\n", "- Generate a synthetic dataset.\n", "- Implement a **Bayesian regressor**.\n", "- Use **Monte Carlo (MC) Dropout** for **Bayesian inference** and **uncertainty estimation**.\n", "\n", "This example highlights how PINA can be used for classic regression tasks with probabilistic modeling capabilities. Let's first import useful modules!" ] }, { "cell_type": "code", "execution_count": null, "id": "0981f1e9", "metadata": {}, "outputs": [], "source": [ "## routine needed to run the notebook on Google Colab\n", "try:\n", " import google.colab\n", "\n", " IN_COLAB = True\n", "except:\n", " IN_COLAB = False\n", "if IN_COLAB:\n", " !pip install \"pina-mathlab[tutorial]\"\n", "\n", "import warnings\n", "import torch\n", "import matplotlib.pyplot as plt\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "from pina import Condition, LabelTensor\n", "from pina.problem import AbstractProblem\n", "from pina.domain import CartesianDomain" ] }, { "cell_type": "markdown", "id": "7b91de38", "metadata": {}, "source": [ "#### ***Problem & Data***\n", "\n", "We'll start by defining a `BayesianProblem` inheriting from `AbstractProblem` to handle input/output data. This is suitable when data is available. 
For other cases like PDEs without data, use:\n", "\n", "- `SpatialProblem` – for spatial variables\n", "- `TimeDependentProblem` – for temporal variables\n", "- `ParametricProblem` – for parametric inputs\n", "- `InverseProblem` – for parameter estimation from observations\n", " \n", "but we will look at these in more depth in a while!" ] }, { "cell_type": "code", "execution_count": null, "id": "014bbd86", "metadata": {}, "outputs": [], "source": [ "# (a) Data generation\n", "domain = CartesianDomain({\"x\": [-3, 3]})\n", "x = domain.sample(n=20, mode=\"random\")\n", "y = LabelTensor(x.pow(3) + 3 * torch.randn_like(x), \"y\")\n", "\n", "\n", "# (b) PINA Problem formulation\n", "class BayesianProblem(AbstractProblem):\n", "\n", " output_variables = [\"y\"]\n", " input_variables = [\"x\"]\n", " conditions = {\"data\": Condition(input=x, target=y)}\n", "\n", "\n", "problem = BayesianProblem()\n", "\n", "# # EXTRA!\n", "# # alternatively you can do the following, which is easier\n", "# # uncomment to try it!\n", "# from pina.problem.zoo import SupervisedProblem\n", "# problem = SupervisedProblem(input_=x, output_=y)" ] }, { "cell_type": "markdown", "id": "b1b1e4c4", "metadata": {}, "source": [ "We highlight two very important features of PINA:\n", "\n", "1. **`LabelTensor` Structure** \n", " - Alongside the standard `torch.Tensor`, PINA introduces the `LabelTensor` structure, which allows **string-based indexing**. \n", " - Ideal for managing and stacking tensors with different labels (e.g., `\"x\"`, `\"t\"`, `\"u\"`) for improved clarity and organization. \n", " - You can still use standard PyTorch tensors if needed.\n", "\n", "2. **`Condition` Object** \n", " - The `Condition` object enforces the **constraints** that the model $\\mathcal{M}_{\\theta}$ must satisfy, such as boundary or initial conditions. \n", " - It ensures that the model adheres to the specific requirements of the problem, making constraint handling more intuitive and streamlined." ] }, { "cell_type": "code", "execution_count": null, "id": "6f25d3a6", "metadata": {}, "outputs": [], "source": [ "# EXTRA - on the use of LabelTensor\n", "\n", "# We define a 2D tensor, and we index its columns with ['a', 'b', 'c', 'd']\n", "label_tensor = LabelTensor(torch.rand(3, 4), [\"a\", \"b\", \"c\", \"d\"])\n", "\n", "print(\"The LabelTensor object, a very short introduction... \\n\")\n", "print(label_tensor, \"\\n\")\n", "print(f\"Torch methods can be used, {label_tensor.shape=}\")\n", "print(f\"also {label_tensor.requires_grad=} \\n\")\n", "print(f\"But we have labels as well, e.g. {label_tensor.labels=}\")\n", "print(f'And we can slice with labels: \\n {label_tensor[\"a\"]=}')\n", "print(f\"Similarly to: \\n {label_tensor[:, 0]=}\")" ] }, { "cell_type": "markdown", "id": "98cba096", "metadata": {}, "source": [ "#### ***Model Design***\n", "\n", "We will now solve the problem using a **simple PyTorch Neural Network** with **Dropout**, which we will implement from scratch following [2]. \n", "It's important to note that PINA provides a wide range of **state-of-the-art (SOTA)** architectures in the `pina.model` module, which you can explore further [here](https://mathlab.github.io/PINA/_rst/_code.html#models).\n", "\n", "#### ***Solver Selection***\n", "\n", "For this task, we will use a straightforward **supervised learning** approach by importing the `SupervisedSolver` from `pina.solver`. The solver is responsible for defining the training strategy. 
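\n", "\n", "Most solvers share the same `Solver(problem, model)` constructor pattern, and the training loss can usually be customized. A minimal sketch (assuming the solver exposes a `loss` argument accepting a torch loss, an assumption to verify against the PINA docs for your version):\n", "\n", "```python\n", "# sketch: swap the default MSE for an L1 loss\n", "solver = SupervisedSolver(problem, model, loss=torch.nn.L1Loss())\n", "```\n", "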
\n", "\n", "The `SupervisedSolver` is designed to handle typical regression tasks effectively by minimizing the following loss function:\n", "$$\n", "\\mathcal{L}_{\\rm{problem}} = \\frac{1}{N}\\sum_{i=1}^N\n", "\\mathcal{L}(y_i - \\mathcal{M}_{\\theta}(x_i))\n", "$$\n", "where $\\mathcal{L}$ is the loss function, with the default being **Mean Squared Error (MSE)**:\n", "$$\n", "\\mathcal{L}(v) = \\| v \\|^2_2.\n", "$$\n", "\n", "#### **Training**\n", "\n", "Next, we will use the `Trainer` class to train the model. The `Trainer` class, based on **PyTorch Lightning**, offers many features that help:\n", "- **Improve model accuracy**\n", "- **Reduce training time and memory usage**\n", "- **Facilitate logging and visualization** \n", "\n", "The great work done by the PyTorch Lightning team ensures a streamlined training process." ] }, { "cell_type": "code", "execution_count": null, "id": "5388aaaa", "metadata": {}, "outputs": [], "source": [ "from pina.solver import SupervisedSolver\n", "from pina.trainer import Trainer\n", "\n", "\n", "# define problem & data (step 1)\n", "class BayesianModel(torch.nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " self.layers = torch.nn.Sequential(\n", " torch.nn.Linear(1, 100),\n", " torch.nn.ReLU(),\n", " torch.nn.Dropout(0.5),\n", " torch.nn.Linear(100, 1),\n", " )\n", "\n", " def forward(self, x):\n", " return self.layers(x)\n", "\n", "\n", "problem = BayesianProblem()\n", "\n", "# model design (step 2)\n", "model = BayesianModel()\n", "\n", "# solver selection (step 3)\n", "solver = SupervisedSolver(problem, model)\n", "\n", "# training (step 4)\n", "trainer = Trainer(solver=solver, max_epochs=2000, accelerator=\"cpu\")\n", "trainer.train()" ] }, { "cell_type": "markdown", "id": "5bf9b0d5", "metadata": {}, "source": [ "#### ***Model Training Complete! Now Visualize the Solutions***\n", "\n", "The model has been trained! Since we used **Dropout** during training, the model is probabilistic (Bayesian) [3]. This means that each time we evaluate the forward pass on the input points $x_i$, the results will differ due to the stochastic nature of Dropout.\n", "\n", "To visualize the model's predictions and uncertainty, we will:\n", "\n", "1. **Evaluate the Forward Pass**: Perform multiple forward passes to get different predictions for each input $x_i$.\n", "2. **Compute the Mean**: Calculate the average prediction $\\mu_\\theta$ across all forward passes.\n", "3. **Compute the Standard Deviation**: Calculate the variability of the predictions $\\sigma_\\theta$, which indicates the model's uncertainty.\n", "\n", "This allows us to understand not only the predicted values but also the confidence in those predictions." 
] }, { "cell_type": "code", "execution_count": null, "id": "f2555911", "metadata": {}, "outputs": [], "source": [ "x_test = LabelTensor(torch.linspace(-4, 4, 100).reshape(-1, 1), \"x\")\n", "y_test = torch.stack([solver(x_test) for _ in range(1000)], dim=0)\n", "y_mean, y_std = y_test.mean(0).detach(), y_test.std(0).detach()\n", "# plot\n", "x_test = x_test.flatten()\n", "y_mean = y_mean.flatten()\n", "y_std = y_std.flatten()\n", "plt.plot(x_test, y_mean, label=r\"$\\mu_{\\theta}$\")\n", "plt.fill_between(\n", " x_test,\n", " y_mean - 3 * y_std,\n", " y_mean + 3 * y_std,\n", " alpha=0.3,\n", " label=r\"3$\\sigma_{\\theta}$\",\n", ")\n", "plt.plot(x_test, x_test.pow(3), label=\"true\")\n", "plt.scatter(x, y, label=\"train data\")\n", "plt.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "ea79c71d", "metadata": {}, "source": [ "## PINA for Physics-Informed Machine Learning\n", "\n", "In the previous section, we used PINA for **supervised learning**. However, one of its main strengths lies in **Physics-Informed Machine Learning (PIML)**, specifically through **Physics-Informed Neural Networks (PINNs)**.\n", "\n", "### What Are PINNs?\n", "\n", "PINNs are deep learning models that integrate the laws of physics directly into the training process. By incorporating **differential equations** and **boundary conditions** into the loss function, PINNs allow the modeling of complex physical systems while ensuring the predictions remain consistent with scientific laws.\n", "\n", "### Solving a 2D Poisson Problem\n", "\n", "In this section, we will solve a **2D Poisson problem** with **Dirichlet boundary conditions** on an **hourglass-shaped domain** using a simple PINN [4]. You can explore other PINN variants, e.g. [5] or [6] in PINA by visiting the [PINA solvers documentation](https://mathlab.github.io/PINA/_rst/_code.html#solvers). We aim to solve the following 2D Poisson problem:\n", "\n", "$$\n", "\\begin{cases}\n", "\\Delta u(x, y) = \\sin{(\\pi x)} \\sin{(\\pi y)} & \\text{in } D, \\\\\n", "u(x, y) = 0 & \\text{on } \\partial D \n", "\\end{cases}\n", "$$\n", "\n", "where $D$ is an **hourglass-shaped domain** defined as the difference between a **Cartesian domain** and two intersecting **ellipsoids**, and $\\partial D$ is the boundary of the domain.\n", "\n", "### Building Complex Domains\n", "\n", "PINA allows you to build complex geometries easily. It provides many built-in domain shapes and Boolean operators for combining them. 
For this problem, we will define the hourglass-shaped domain using the existing `CartesianDomain` and `EllipsoidDomain` classes, with Boolean operators like `Difference` and `Union`.\n", "\n", "> **👉 If you are interested in exploring the `domain` module in more detail, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorials/tutorial6/tutorial.html).**\n" ] }, { "cell_type": "code", "execution_count": null, "id": "02518706", "metadata": {}, "outputs": [], "source": [ "from pina.domain import EllipsoidDomain, Difference, CartesianDomain, Union\n", "\n", "# (a) Building the interior of the hourglass-shaped domain\n", "cartesian = CartesianDomain({\"x\": [-3, 3], \"y\": [-3, 3]})\n", "ellipsoid_1 = EllipsoidDomain({\"x\": [-5, -1], \"y\": [-3, 3]})\n", "ellipsoid_2 = EllipsoidDomain({\"x\": [1, 5], \"y\": [-3, 3]})\n", "interior = Difference([cartesian, ellipsoid_1, ellipsoid_2])\n", "\n", "# (b) Building the boundary of the hourglass-shaped domain\n", "border_ellipsoid_1 = EllipsoidDomain(\n", " {\"x\": [-5, -1], \"y\": [-3, 3]}, sample_surface=True\n", ")\n", "border_ellipsoid_2 = EllipsoidDomain(\n", " {\"x\": [1, 5], \"y\": [-3, 3]}, sample_surface=True\n", ")\n", "border_1 = CartesianDomain({\"x\": [-3, 3], \"y\": 3})\n", "border_2 = CartesianDomain({\"x\": [-3, 3], \"y\": -3})\n", "ex_1 = CartesianDomain({\"x\": [-5, -3], \"y\": [-3, 3]})\n", "ex_2 = CartesianDomain({\"x\": [3, 5], \"y\": [-3, 3]})\n", "border_ells = Union([border_ellipsoid_1, border_ellipsoid_2])\n", "border = Union(\n", " [\n", " border_1,\n", " border_2,\n", " Difference([border_ells, ex_1, ex_2]),\n", " ]\n", ")\n", "\n", "# (c) Sample the domains\n", "interior_samples = interior.sample(n=1000, mode=\"random\")\n", "border_samples = border.sample(n=1000, mode=\"random\")" ] }, { "cell_type": "markdown", "id": "b0da3d52", "metadata": {}, "source": [ "#### Plotting the domain\n", "\n", "Nice! Now that we have built the domain, let's plot it!" ] }, { "cell_type": "code", "execution_count": null, "id": "47459922", "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(8, 4))\n", "plt.subplot(1, 2, 1)\n", "plt.scatter(\n", " interior_samples.extract(\"x\"),\n", " interior_samples.extract(\"y\"),\n", " c=\"blue\",\n", " alpha=0.5,\n", ")\n", "plt.title(\"Hourglass Interior\")\n", "plt.subplot(1, 2, 2)\n", "plt.scatter(\n", " border_samples.extract(\"x\"),\n", " border_samples.extract(\"y\"),\n", " c=\"blue\",\n", " alpha=0.5,\n", ")\n", "plt.title(\"Hourglass Border\")\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "4d2e59a9", "metadata": {}, "source": [ "#### Writing the Poisson Problem Class\n", "\n", "Very good! Now we will implement the problem class for the 2D Poisson problem. Unlike the previous example, where we inherited from `AbstractProblem`, here we will inherit from the `SpatialProblem` class. \n", "\n", "The reason for this is that the Poisson problem involves **spatial variables** as input, so we use `SpatialProblem` to handle such cases.\n", "\n", "This will allow us to define the problem with spatial dependencies and set up the neural network model accordingly." 
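] }, { "cell_type": "markdown", "id": "5e9a0c21", "metadata": {}, "source": [ "Before writing the class, note that the residual below relies on the `laplacian` operator from `pina.operator`. As a quick sanity check of its call signature (a sketch, assuming the operator differentiates any `LabelTensor` built from the input points), the Laplacian of $u = x^2 + y^2$ should be $\\approx 4$ everywhere:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9d4b7f32", "metadata": {}, "outputs": [], "source": [ "from pina.operator import laplacian\n", "\n", "# toy check (sketch): the laplacian of u = x^2 + y^2 is 4 at every point\n", "pts = LabelTensor(torch.rand(5, 2, requires_grad=True), [\"x\", \"y\"])\n", "u = LabelTensor(pts.extract(\"x\") ** 2 + pts.extract(\"y\") ** 2, [\"u\"])\n", "print(laplacian(u, pts, components=[\"u\"], d=[\"x\", \"y\"]))"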
] }, { "cell_type": "code", "execution_count": null, "id": "e1eb5a09", "metadata": {}, "outputs": [], "source": [ "from pina.problem import SpatialProblem\n", "from pina.operator import laplacian\n", "from pina.equation import FixedValue, Equation\n", "\n", "\n", "def poisson_equation(input_, output_):\n", " force_term = torch.sin(input_.extract([\"x\"]) * torch.pi) * torch.sin(\n", " input_.extract([\"y\"]) * torch.pi\n", " )\n", " laplacian_u = laplacian(output_, input_, components=[\"u\"], d=[\"x\", \"y\"])\n", " return laplacian_u - force_term\n", "\n", "\n", "class Poisson(SpatialProblem):\n", " # define output_variables and spatial_domain\n", " output_variables = [\"u\"]\n", " spatial_domain = Union([interior, border])\n", " # define the domains\n", " domains = {\"border\": border, \"interior\": interior}\n", " # define the conditions\n", " conditions = {\n", " \"border\": Condition(domain=\"border\", equation=FixedValue(0.0)),\n", " \"interior\": Condition(\n", " domain=\"interior\", equation=Equation(poisson_equation)\n", " ),\n", " }\n", "\n", "\n", "poisson_problem = Poisson()" ] }, { "cell_type": "markdown", "id": "f49a8307", "metadata": {}, "source": [ "As you can see, writing the problem class for a differential equation in PINA is straightforward! The main differences are:\n", "\n", "- We inherit from **`SpatialProblem`** instead of `AbstractProblem` to account for spatial variables.\n", "- We use **`domain`** and **`equation`** inside the `Condition` to define the problem.\n", "\n", "The `Equation` class can be very useful for creating modular problem classes. If you're interested, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorial12/tutorial.html) for more details. There's also a dedicated [tutorial](https://mathlab.github.io/PINA/_rst/tutorial16/tutorial.html) for building custom problems!\n", "\n", "Once the problem class is set, we need to **sample the domain** to obtain the data. PINA will automatically handle this, and if you forget to sample, an error will be raised before training begins 😉." ] }, { "cell_type": "code", "execution_count": null, "id": "a95bb250", "metadata": {}, "outputs": [], "source": [ "print(\"Points are not automatically sampled, you can see this by:\")\n", "print(f\" {poisson_problem.are_all_domains_discretised=}\\n\")\n", "print(\"But you can easily sample by running .discretise_domain:\")\n", "poisson_problem.discretise_domain(n=1000, domains=[\"interior\"])\n", "poisson_problem.discretise_domain(n=100, domains=[\"border\"])\n", "print(f\" {poisson_problem.are_all_domains_discretised=}\")" ] }, { "cell_type": "markdown", "id": "a2c7b406", "metadata": {}, "source": [ "### Building the Model\n", "\n", "After setting the problem and sampling the domain, the next step is to **build the model** $\\mathcal{M}_{\\theta}$.\n", "\n", "For this, we will use the custom PINA models available [here](https://mathlab.github.io/PINA/_rst/_code.html#models). Specifically, we will use a **feed-forward neural network** by importing the `FeedForward` class.\n", "\n", "This neural network takes the **coordinates** (in this case `['x', 'y']`) as input and outputs the unknown field of the Poisson problem. \n", "\n", "In this tutorial, the neural network is composed of 2 hidden layers, each with 120 neurons and tanh activation." 
] }, { "cell_type": "code", "execution_count": null, "id": "b893232b", "metadata": {}, "outputs": [], "source": [ "from pina.model import FeedForward\n", "\n", "model = FeedForward(\n", " func=torch.nn.Tanh,\n", " layers=[120] * 2,\n", " output_dimensions=len(poisson_problem.output_variables),\n", " input_dimensions=len(poisson_problem.input_variables),\n", ")" ] }, { "cell_type": "markdown", "id": "37b09ea9", "metadata": {}, "source": [ "### Solver Selection\n", "\n", "The thir part of the PINA pipeline involves using a **Solver**.\n", "\n", "In this tutorial, we will use the **classical PINN** solver. However, many other variants are also available and we invite to try them!\n", "\n", "#### Loss Function in PINA\n", "\n", "The loss function in the **classical PINN** is defined as follows:\n", "\n", "$$\\theta_{\\rm{best}}=\\min_{\\theta}\\mathcal{L}_{\\rm{problem}}(\\theta), \\quad \\mathcal{L}_{\\rm{problem}}(\\theta)= \\frac{1}{N_{D}}\\sum_{i=1}^N\n", "\\mathcal{L}(\\Delta\\mathcal{M}_{\\theta}(\\mathbf{x}_i, \\mathbf{y}_i) - \\sin(\\pi x_i)\\sin(\\pi y_i)) +\n", "\\frac{1}{N}\\sum_{i=1}^N\n", "\\mathcal{L}(\\mathcal{M}_{\\theta}(\\mathbf{x}_i, \\mathbf{y}_i))$$\n", "\n", "This loss consists of:\n", "1. The **differential equation residual**: Ensures the model satisfies the Poisson equation.\n", "2. The **boundary condition**: Ensures the model satisfies the Dirichlet boundary condition.\n", "\n", "### Training\n", "\n", "For the last part of the pipeline we need a `Trainer`. We will train the model for **1000 epochs** using the default optimizer parameters. These parameters can be adjusted as needed. For more details, check the solvers documentation [here](https://mathlab.github.io/PINA/_rst/_code.html#solvers).\n", "\n", "To track metrics during training, we use the **`MetricTracker`** class.\n", "\n", "> **👉 Want to know more about `Trainer` and how to boost PINA performance, check out [this tutorial](https://mathlab.github.io/PINA/_rst/tutorials/tutorial11/tutorial.html).**" ] }, { "cell_type": "code", "execution_count": null, "id": "0f135cc4", "metadata": {}, "outputs": [], "source": [ "from pina.solver import PINN\n", "from pina.callback import MetricTracker\n", "\n", "# define the solver\n", "solver = PINN(poisson_problem, model)\n", "\n", "# define trainer\n", "trainer = Trainer(\n", " solver,\n", " max_epochs=1500,\n", " callbacks=[MetricTracker()],\n", " accelerator=\"cpu\",\n", " enable_model_summary=False,\n", ")\n", "\n", "# train\n", "trainer.train()" ] }, { "cell_type": "markdown", "id": "a3d9fc51", "metadata": {}, "source": [ "Done! We can plot the solution and its residual" ] }, { "cell_type": "code", "execution_count": null, "id": "dea7acf4", "metadata": {}, "outputs": [], "source": [ "# sample points in the domain. 
remember to set requires_grad!\n", "pts = poisson_problem.spatial_domain.sample(1000).requires_grad_(True)\n", "# compute the solution\n", "solution = solver(pts)\n", "# compute the residual in the interior\n", "equation = poisson_problem.conditions[\"interior\"].equation\n", "residual = solver.compute_residual(pts, equation)\n", "# simple plot\n", "with torch.no_grad():\n", " plt.subplot(1, 2, 1)\n", " plt.scatter(\n", " pts.extract(\"x\").flatten(),\n", " pts.extract(\"y\").flatten(),\n", " c=solution.extract(\"u\").flatten(),\n", " )\n", " plt.colorbar()\n", " plt.title(\"Solution\")\n", " plt.subplot(1, 2, 2)\n", " plt.scatter(\n", " pts.extract(\"x\").flatten(),\n", " pts.extract(\"y\").flatten(),\n", " c=residual.flatten(),\n", " )\n", " plt.colorbar()\n", " plt.tight_layout()\n", " plt.title(\"Residual\")" ] }, { "cell_type": "markdown", "id": "487c1d47", "metadata": {}, "source": [ "## What's Next?\n", "\n", "Congratulations on completing the introductory tutorial of **PINA**! Now that you have a solid foundation, here are a few directions you can explore:\n", "\n", "1. **Explore Advanced Solvers**: Dive into more advanced solvers like **SAPINN** or **RBAPINN** and experiment with different variations of Physics-Informed Neural Networks.\n", "2. **Apply PINA to New Problems**: Try solving other types of differential equations or explore inverse problems and parametric problems using the PINA framework.\n", "3. **Optimize Model Performance**: Use the `Trainer` class to enhance model performance by exploring features like dynamic learning rates, early stopping, and model checkpoints.\n", "\n", "4. **...and many more!** — There are countless directions to further explore, from testing on different problems to refining the model architecture!\n", "\n", "For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).\n", "\n", "\n", "### References\n", "\n", "[1] *Coscia, Dario, et al. \"Physics-informed neural networks for advanced modeling.\" Journal of Open Source Software, 2023.*\n", "\n", "[2] *Hernández-Lobato, José Miguel, and Ryan Adams. \"Probabilistic backpropagation for scalable learning of bayesian neural networks.\" International conference on machine learning, 2015.*\n", "\n", "[3] *Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a bayesian approximation: Representing model uncertainty in deep learning.\" International conference on machine learning, 2016.*\n", "\n", "[4] *Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. \"Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\" Journal of Computational Physics, 2019.*\n", "\n", "[5] *McClenny, Levi D., and Ulisses M. Braga-Neto. \"Self-adaptive physics-informed neural networks.\" Journal of Computational Physics, 2023.*\n", "\n", "[6] *Anagnostopoulos, Sokratis J., et al. \"Residual-based attention in physics-informed neural networks.\" Computer Methods in Applied Mechanics and Engineering, 2024.*" ] } ], "metadata": { "kernelspec": { "display_name": "deep", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.11" } }, "nbformat": 4, "nbformat_minor": 5 }