diff --git a/docs/source/_rst/tutorials/tutorial1/tutorial.rst b/docs/source/_rst/tutorials/tutorial1/tutorial.rst
index b0ed833..1a75301 100644
--- a/docs/source/_rst/tutorials/tutorial1/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial1/tutorial.rst
@@ -28,8 +28,12 @@ Build a PINA problem
 Problem definition in the **PINA** framework is done by building a
 python ``class``, which inherits from one or more problem classes
 (``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
-depending on the nature of the problem. Below is an example: ### Simple
-Ordinary Differential Equation Consider the following:
+depending on the nature of the problem. Below is an example:
+
+Simple Ordinary Differential Equation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Consider the following:
 
 .. math::
 
@@ -83,12 +87,6 @@ will inherit from both ``SpatialProblem`` and ``TimeDependentProblem``:
 
         # other stuff ...
 
-.. parsed-literal::
-
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-
-
 where we have included the ``temporal_domain`` variable, indicating the
-time domain wanted for the solution.
+desired time domain for the solution.
 
@@ -96,12 +94,29 @@ In summary, using **PINA**, we can initialize a problem with a class
 which inherits from different base classes: ``SpatialProblem``,
 ``TimeDependentProblem``, ``ParametricProblem``, and so on depending on
 the type of problem we are considering. Here are some examples (more on
-the official documentation): \* ``SpatialProblem`` :math:`\rightarrow` a
-differential equation with spatial variable(s) \*
-``TimeDependentProblem`` :math:`\rightarrow` a time-dependent
-differential equation \* ``ParametricProblem`` :math:`\rightarrow` a
-parametrized differential equation \* ``AbstractProblem``
-:math:`\rightarrow` any **PINA** problem inherits from here
+the official documentation):
+
+* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s) ``spatial_domain``
+* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation with temporal variable(s) ``temporal_domain``
+* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation with parametric variable(s) ``parameter_domain``
+* ``AbstractProblem`` :math:`\rightarrow` any **PINA** problem inherits from here
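+
+As a minimal sketch, a problem that is both spatial and time-dependent
+combines the corresponding base classes and exposes one domain attribute
+for each. The class below is illustrative only (and the
+``CartesianDomain`` import path may differ between **PINA** releases):
+
+.. code:: python
+
+    from pina.problem import SpatialProblem, TimeDependentProblem
+    from pina.geometry import CartesianDomain
+
+    class SpaceTimeODE(SpatialProblem, TimeDependentProblem):
+        output_variables = ['u']
+        spatial_domain = CartesianDomain({'x': [0, 1]})
+        temporal_domain = CartesianDomain({'t': [0, 1]})
+        # initial, boundary and physics conditions would be listed here
+        conditions = {}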
 
 Write the problem class
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -300,31 +315,6 @@ If you want to track the metric by yourself without a logger, use
 
     # train
     trainer.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-    /Users/alessio/opt/anaconda3/envs/pina/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:67: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
-    Missing logger folder: /Users/alessio/Downloads/lightning_logs
-
-
-.. parsed-literal::
-
-    Epoch 1499: | | 1/? [00:00<00:00, 167.08it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
-
-.. parsed-literal::
-
-    `Trainer.fit` stopped: `max_epochs=1500` reached.
-
-
-.. parsed-literal::
-
-    Epoch 1499: | | 1/? [00:00<00:00, 102.49it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
-
 After the training we can inspect trainer logged metrics (by default
 **PINA** logs mean square error residual loss). The logged metrics can
-be accessed online using one of the ``Lightinig`` loggers. The final
+be accessed online using one of the ``Lightning`` loggers. The final
@@ -355,11 +345,6 @@ quantitative plots of the solution.
 
     pl.plot(solver=pinn)
 
-
-.. parsed-literal::
-
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-
 
 .. image:: tutorial_files/tutorial_23_1.png
 
diff --git a/docs/source/_rst/tutorials/tutorial2/tutorial.rst b/docs/source/_rst/tutorials/tutorial2/tutorial.rst
index f7c7a45..cc001f5 100644
--- a/docs/source/_rst/tutorials/tutorial2/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial2/tutorial.rst
@@ -31,12 +31,26 @@ The problem definition
 ----------------------
 
 The two-dimensional Poisson problem is mathematically written as:
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
+
+.. math::
+
+   \begin{equation}
+   \begin{cases}
+   \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+   u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+   \end{cases}
+   \end{equation}
+
+where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
 
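+For reference, this problem admits the closed-form solution
+
+.. math::
+
+   u(x, y) = -\frac{\sin{(\pi x)}\sin{(\pi y)}}{2\pi^2},
+
+which is convenient for assessing the accuracy of the trained network:
+a quick check gives :math:`\Delta u = \sin{(\pi x)}\sin{(\pi y)}`, and
+:math:`u` vanishes on all four boundaries.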
@@ -112,19 +126,6 @@ These parameters can be modified as desired. We use the
 
     # train
     trainer.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -158,9 +159,11 @@ is now defined, with an additional input variable, named
 extra-feature, which coincides with the forcing term in the Laplace
 equation. The set of input variables to the neural network is:
 
-:raw-latex:`\begin{equation}
-[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-\end{equation}`
+.. math::
+
+   \begin{equation}
+   [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+   \end{equation}
 
 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.
@@ -203,19 +206,6 @@ new extra feature.
 
     # train
     trainer_feat.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -249,9 +239,11 @@ Another way to exploit the extra features is the addition of learnable
-parameter inside them. In this way, the added parameters are learned
+parameters inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:
 
-:raw-latex:`\begin{equation}
-k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-\end{equation}`
+.. math::
+
+   \begin{equation}
+   k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+   \end{equation}
 
 where :math:`\alpha` and :math:`\beta` are the abovementioned
 parameters. Their implementation is quite trivial: by using the class
@@ -289,19 +281,6 @@ need, and they are managed by ``autograd`` module!
 
     # train
     trainer_learn.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -338,19 +317,6 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 
     # train
     trainer_learn.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
 
diff --git a/docs/source/_rst/tutorials/tutorial3/tutorial.rst b/docs/source/_rst/tutorials/tutorial3/tutorial.rst
index 01b3c1c..e7c0c8f 100644
--- a/docs/source/_rst/tutorials/tutorial3/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial3/tutorial.rst
@@ -25,13 +25,14 @@ The problem definition
 
 The problem is written in the following form:
 
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}`
+.. math::
+   \begin{equation}
+   \begin{cases}
+   \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\
+   u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\
+   u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+   \end{cases}
+   \end{equation}
 
 where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
@@ -136,20 +137,6 @@ approximately 3 minutes.
 
     trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
     trainer.train()
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial3/lightning_logs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 84.47it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -214,7 +201,14 @@ progress the solution gets worse… Can we do better? A valid option
 is to impose the initial condition as hard constraint as well.
 Specifically, our solution is written as:
 
-.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)sin(\pi x)\sin(\pi y),
+.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y),
 
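+This ansatz works because the second term alone already satisfies the
+wave equation, the boundary conditions and the initial condition (a
+quick check gives :math:`\Delta u = u_{tt} = -2\pi^2 u` for
+:math:`u=\cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y)`), while the first
+term vanishes at :math:`t=0` and on the whole boundary. The network
+therefore only learns a correction that cannot violate these constraints.
+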
 Let us build the network first
 
@@ -252,18 +246,6 @@ Now let’s train with the same configuration as the previous test
 
     trainer.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 52.10it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
 
diff --git a/docs/source/_rst/tutorials/tutorial4/tutorial.rst b/docs/source/_rst/tutorials/tutorial4/tutorial.rst
index f93c2fe..409a460 100644
--- a/docs/source/_rst/tutorials/tutorial4/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial4/tutorial.rst
@@ -415,15 +415,6 @@ just 1 epoch using Adam optimizer with a :math:`0.001` learning rate.
 
         running_loss = 0.0
 
-
-.. parsed-literal::
-
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/autograd/__init__.py:200: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
-      Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
-      warnings.warn("Can't initialize NVML")
-
-
 .. parsed-literal::
 
     batch [50/750] loss[0.161]
@@ -637,21 +628,6 @@ and the problem is a simple problem created by inheriting from
 
     trainer.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-
-.. parsed-literal::
-
-    Training: 0it [00:00, ?it/s]
-
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=150` reached.
 
diff --git a/docs/source/_rst/tutorials/tutorial7/tutorial.rst b/docs/source/_rst/tutorials/tutorial7/tutorial.rst
index 6750ffc..b69d5b7 100644
--- a/docs/source/_rst/tutorials/tutorial7/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial7/tutorial.rst
@@ -1,4 +1,4 @@
-Tutorial 7: Resolution of an inverse problem
+Tutorial: Resolution of an inverse problem
 ============================================
 
 Introduction to the inverse problem
@@ -7,26 +7,49 @@ Introduction to the inverse problem
 This tutorial shows how to solve an inverse Poisson problem with
 Physics-Informed Neural Networks. The problem definition is that of a Poisson problem with homogeneous boundary conditions and it reads:
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\
-u = 0 \text{ on }\partial \Omega,\\
-u(\mu_1, \mu_2) = \text{ data}
-\end{cases}
-\end{equation}` where :math:`\Omega` is a square domain
+
+.. math::
+
+   \begin{equation}
+   \begin{cases}
+   \Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\
+   u = 0 \text{ on }\partial \Omega,\\
+   u(\mu_1, \mu_2) = \text{ data}
+   \end{cases}
+   \end{equation}
+
+where :math:`\Omega` is a square domain
 :math:`[-2, 2] \times [-2, 2]`, and
 :math:`\partial \Omega=\Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4`
 is the union of the boundaries of the domain.
 
 This kind of problem, namely the “inverse problem”, has two main goals:
 
-- find the solution :math:`u` that satisfies the Poisson equation; -
-find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that better
-fit some given data (third equation in the system above).
+
+* find the solution :math:`u` that satisfies the Poisson equation
+* find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that best fit some given data (third equation in the system above).
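+
+The key idea, sketched below in plain PyTorch rather than with PINA's
+``InverseProblem`` API, is that the unknown parameters are simply extra
+learnable tensors optimized jointly with the network weights (all names
+here are illustrative):
+
+.. code:: python
+
+    import torch
+
+    net = torch.nn.Sequential(
+        torch.nn.Linear(2, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
+    mu = torch.nn.Parameter(torch.zeros(2))  # (mu_1, mu_2), to be learned
+    optimizer = torch.optim.Adam([*net.parameters(), mu], lr=1e-3)
+
+    def forcing(pts):  # e^(-2(x-mu_1)^2 - 2(y-mu_2)^2), depends on mu
+        return torch.exp(-2*(pts[:, 0]-mu[0])**2 - 2*(pts[:, 1]-mu[1])**2)
+
+    # a training step would minimize PDE residual + boundary loss + data
+    # misfit with respect to both the network weights and mu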
 
-In order to achieve both the goals we will need to define an
-``InverseProblem`` in PINA.
-
-Let’s start with useful imports.
+In order to achieve both goals we will need to define an
+``InverseProblem`` in PINA. Let’s start with useful imports.
 
 .. code:: ipython3
 
diff --git a/docs/source/_rst/tutorials/tutorial8/tutorial.rst b/docs/source/_rst/tutorials/tutorial8/tutorial.rst
index 5d7f3a9..b160e09 100644
--- a/docs/source/_rst/tutorials/tutorial8/tutorial.rst
+++ b/docs/source/_rst/tutorials/tutorial8/tutorial.rst
@@ -1,4 +1,4 @@
-Tutorial 8: Reduced order model (PODNN) for parametric problems
+Tutorial: Reduced order model (PODNN) for parametric problems
 ===============================================================
 
 The tutorial aims to show how to employ the **PINA** library in order to
@@ -73,17 +73,6 @@ reference solution: this is the expected output of the neural network.
 
         ax.set_title(f'$\mu$ = {p[0]:.2f}')
 
-.. parsed-literal::
-
-    Epoch 0: 0%| | 0/5 [48:45