diff --git a/docs/source/tutorials/tutorial1/tutorial.html b/docs/source/tutorials/tutorial1/tutorial.html index d8010e1..3d8bffc 100644 --- a/docs/source/tutorials/tutorial1/tutorial.html +++ b/docs/source/tutorials/tutorial1/tutorial.html @@ -7544,7 +7544,10 @@ a.anchor-link {
+⚠️ Before starting:¶
We assume you are already familiar with the concepts covered in the Getting started with PINA tutorials. If not, we strongly recommend reviewing them before exploring this advanced topic.
+
In this tutorial, we will demonstrate a typical use case of PINA on a toy problem, following the standard API procedure.
-
-
-
Specifically, the tutorial aims to introduce the following topics:
-PINN trainingThese are the two main steps needed before starting the modelling optimization (choose model and solver, and train). We will show each step in detail, and at the end, we will solve a simple Ordinary Differential Equation (ODE) problem using the PINN solver.
In this tutorial, we will demonstrate a typical use case of PINA for Physics-Informed Neural Network (PINN) training. We will cover the basics of training a PINN with PINA; if you want to go deeper into PINNs, take a look at our dedicated tutorials on the topic.
+Let's start by importing the necessary modules:
Problem definition in the PINA framework is done by building a Python class which inherits from one or more problem classes (SpatialProblem, TimeDependentProblem, ParametricProblem, ...), depending on the nature of the problem. Below is an example:
Consider the following:
-$$
-\begin{equation}
-\begin{cases}
-\frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
-u(x=0) &= 1 \\
-\end{cases}
-\end{equation}
-$$
-with the analytical solution $u(x) = e^x$. In this case, our ODE depends only on the spatial variable $x\in(0,1)$ , meaning that our Problem class is going to be inherited from the SpatialProblem class:
from pina.problem import SpatialProblem
-from pina.domain import CartesianProblem
-
-class SimpleODE(SpatialProblem):
-
- output_variables = ['u']
- spatial_domain = CartesianProblem({'x': [0, 1]})
-
- # other stuff ...
-Notice that we define output_variables as a list of symbols, indicating the output variables of our equation (in this case only $u$), this is done because in PINA the torch.Tensors are labelled, allowing the user maximal flexibility for the manipulation of the tensor. The spatial_domain variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
What if our equation is also time-dependent? In this case, our class will inherit from both SpatialProblem and TimeDependentProblem:
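A minimal sketch of such a class might look as follows (the time interval $t\in[0,1]$ is illustrative, not taken from the problem above):
from pina.problem import SpatialProblem, TimeDependentProblem
from pina.domain import CartesianDomain

class TimeDependentODE(SpatialProblem, TimeDependentProblem):

    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})

    # other stuff ...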
where we have included the temporal_domain variable, indicating the desired time domain for the solution.
In summary, using PINA, we can initialize a problem with a class which inherits from different base classes: SpatialProblem, TimeDependentProblem, ParametricProblem, and so on depending on the type of problem we are considering. Here are some examples (more on the official documentation):
SpatialProblem $\rightarrow$ a differential equation with spatial variable(s) spatial_domain
TimeDependentProblem $\rightarrow$ a time-dependent differential equation with temporal variable(s) temporal_domain
ParametricProblem $\rightarrow$ a parametrized differential equation with parametric variable(s) parameter_domain
AbstractProblem $\rightarrow$ any PINA problem inherits from here
Once the Problem class is initialized, we need to represent the differential equation in PINA. In order to do this, we need to load the PINA operators from the pina.operator module. Again, we'll consider Equation (1) and represent it in PINA:
We will use a simple Ordinary Differential Equation as pedagogical example:
+$$
+\begin{equation}
+\begin{cases}
+\frac{d}{dx}u(x) &= u(x) \quad x\in(0,1)\\
+u(x=0) &= 1 \\
+\end{cases}
+\end{equation}
+$$
+with the analytical solution $u(x) = e^x$.
+The PINA problem is easily written as:
import torch
-import matplotlib.pyplot as plt
-
-from pina.problem import SpatialProblem
-from pina.operator import grad
-from pina import Condition
-from pina.domain import CartesianDomain
-from pina.equation import Equation, FixedValue
-
-
-# defining the ode equation
-def ode_equation(input_, output_):
-
- # computing the derivative
+def ode_equation(input_, output_):
u_x = grad(output_, input_, components=["u"], d=["x"])
-
- # extracting the u input variable
u = output_.extract(["u"])
-
- # calculate the residual and return it
return u_x - u
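As a quick consistency check, plugging the analytical solution $u(x) = e^x$ into this residual makes it vanish, which is exactly what the optimizer drives towards:
$$
\frac{d}{dx}e^x - e^x = e^x - e^x = 0
$$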
@@ -7720,13 +7646,11 @@ $$
"D": CartesianDomain({"x": [0, 1]}),
}
- # conditions to hold
conditions = {
"bound_cond": Condition(domain="x0", equation=FixedValue(1.0)),
"phys_cond": Condition(domain="D", equation=Equation(ode_equation)),
}
- # defining the true solution
def solution(self, pts):
return torch.exp(pts.extract(["x"]))
@@ -7744,9 +7668,23 @@ $$
-After we define the Problem class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (ode_equation) must be satisfied. We represent this by returning the difference between subtracting the variable u from its gradient (the residual), which we hope to minimize to 0. This is done for all conditions. Notice that we do not pass directly a python function, but an Equation object, which is initialized with the python function. This is done so that all the computations and internal checks are done inside PINA.
-Once we have defined the function, we need to tell the neural network where these methods are to be applied. To do so, we use the Condition class. In the Condition class, we pass the location points and the equation we want minimized on those points (other possibilities are allowed, see the documentation for reference).
-Finally, it's possible to define a solution function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the solution function is a method of the PINN class, but it is not mandatory for problem definition.
+We are going to use Latin hypercube sampling for the training points. We need to sample in the domains of all conditions; in our case, we sample in the domain D and in x0:
+
+
+# sampling for training
+problem.discretise_domain(1, "lh", domains=["x0"])
+problem.discretise_domain(20, "lh", domains=["D"])
+# here we sample 1 boundary point in x0 and 20 collocation points in D
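+Other sampling strategies can be selected through the same call; the mode names below are a sketch assumed from the PINA documentation, so double-check them before use:
+
+# alternative sampling modes (names assumed, left commented out so they do
+# not overwrite the latin hypercube points sampled above)
+# problem.discretise_domain(20, "grid", domains=["D"])    # equispaced grid
+# problem.discretise_domain(20, "random", domains=["D"])  # uniform random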
@@ -7799,7 +7737,7 @@ $$
-In [4]:
+In [5]:
# sampling for training
@@ -7811,65 +7749,6 @@ $$
-
-
-
-
-
-
-The points are saved in a python dict, and can be accessed by calling the attribute input_pts of the problem
-
-
-
-
-
-
-
-
-In [5]:
-
-
-print("Input points:", problem.discretised_domains)
-print("Input points labels:", problem.discretised_domains["D"].labels)
-
-
-
-
-
-
-
-
-
-
-
-
-Input points: {'x0': LabelTensor([[0.]]), 'D': LabelTensor([[0.9337],
- [0.0857],
- [0.7990],
- [0.8456],
- [0.2606],
- [0.1254],
- [0.5825],
- [0.6755],
- [0.2170],
- [0.9972],
- [0.8914],
- [0.4642],
- [0.4323],
- [0.1694],
- [0.6003],
- [0.0351],
- [0.5070],
- [0.3535],
- [0.7230],
- [0.3159]])}
-Input points labels: ['x']
-
-
-
-
-
-
@@ -7893,7 +7772,7 @@ Input points labels: ['x']
problem.input_pts[location].extract(problem.spatial_variables).flatten()
)
plt.scatter(coords, torch.zeros_like(coords), s=10, label=location)
-plt.legend()
+_ = plt.legend()
@@ -7903,16 +7782,10 @@ Input points labels: ['x']
-
-Out[6]:
-
-<matplotlib.legend.Legend at 0x7fc94be8e940>
-
-
-
+
@@ -7924,7 +7797,7 @@ Input points labels: ['x']
Once we have defined the problem and generated the data we can start the modelling. Here we will choose a FeedForward neural network available in pina.model, and we will train using the PINN solver from pina.solver. We highlight that this training is fairly simple, for more advanced stuff consider the tutorials in the Physics Informed Neural Networks section of Tutorials. For training we use the Trainer class from pina.trainer. Here we show a very short training and some method for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are going to be tracked using a lightning logger, by default CSVLogger. If you want to track the metric by yourself without a logger, use pina.callback.MetricTracker.
Once the problem is defined and the data is generated, we can move on to modeling. This process consists of three key steps:
+Choosing a Model: use a model from the pina.model module (see here for a full list), or define a custom PyTorch module (more on this here).
+Choosing a PINN Solver & Defining the Trainer: use a solver from the pina.solver module to solve the problem using the specified model. We have already implemented most state-of-the-art solvers for you; have a look if interested. Today we will use the standard PINN solver.
+Training: train the model with the Trainer class. The Trainer class provides powerful features to enhance model accuracy, optimize training time and memory, and simplify logging and visualization, thanks to PyTorch Lightning's excellent work; see our dedicated tutorial for further details. By default, training metrics (e.g., MSE error) are logged using a lightning logger (CSVLogger). If you prefer manual tracking, use pina.callback.MetricTracker.
+Let's cover all steps one by one!
+First we build the model, in this case a FeedForward neural network, with two layers of size 10 and hyperbolic tangent activation:
from pina import Trainer
-from pina.solver import PINN
-from pina.model import FeedForward
-from lightning.pytorch.loggers import TensorBoardLogger
-from pina.optim import TorchOptimizer
-
-
-# build the model
+# build the model
model = FeedForward(
layers=[10, 10],
func=torch.nn.Tanh,
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables),
)
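As a minimal sanity check (a sketch run on the instantiated problem, assuming the imports above), the network maps the single input variable $x$ to the single output variable $u$:

# quick shape check: 1 input ('x') -> 1 output ('u')
x_test = torch.rand(4, len(problem.input_variables))
print(model(x_test).shape)  # expected: torch.Size([4, 1])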
-
-# create the PINN object
-pinn = PINN(problem, model, TorchOptimizer(torch.optim.Adam, lr=0.005))
-
-# create the trainer
+
+Then we build the solver. The Physics-Informed Neural Network (PINN) solver class needs to be initialised with a model and a specific problem to be solved. It also takes extra arguments, such as the optimizer, scheduler, loss type, and weighting for the different conditions, all of which are set to their default values.
++💡Bonus tip:¶
All physics solvers in PINA can handle both forward and inverse problems without requiring any changes to the model or solver structure! See our tutorial on inverse problems for more info.
+
# create the PINN object with the RAdam optimizer; notice that the optimizer
+# needs to be wrapped with the pina.optim.TorchOptimizer class
+pinn = PINN(problem, model, TorchOptimizer(torch.optim.RAdam, lr=0.005))
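+For completeness, the extra arguments can also be spelled out explicitly; the keyword names below (optimizer, loss) are a sketch based on our reading of the PINN interface, and the defaults are used whenever they are omitted:
+
+# a sketch with extra arguments made explicit; keyword names are assumptions
+pinn_explicit = PINN(
+    problem,
+    model,
+    optimizer=TorchOptimizer(torch.optim.RAdam, lr=0.005),
+    loss=torch.nn.MSELoss(),
+)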
+Finally, we train the model using the Trainer API. The trainer offers various options to customize your training; refer to the official documentation for details. Here, we highlight the MetricTracker from pina.callback, which helps track metrics during training. To train, just call the .train() method.
++⚠️ Important Note:¶
In PINA you can log metrics in different ways. The simplest approach is to use the
+MetricTracker class from pina.callback, as we will see today. However, especially when we need to train multiple times to get an average of the loss across multiple runs, we suggest using lightning.pytorch.loggers (see here for reference).
# create the trainer
trainer = Trainer(
- solver=pinn,
- max_epochs=1500,
- logger=TensorBoardLogger("tutorial_logs"),
- accelerator="cpu",
- train_size=1.0,
- test_size=0.0,
- val_size=0.0,
- enable_model_summary=False,
-) # we train on CPU and avoid model summary at beginning of training (optional)
+ solver=pinn, # The PINN solver to be used for training
+ max_epochs=1500, # Maximum number of training epochs
+ logger=True, # Enables logging (default logger is CSVLogger)
+ callbacks=[MetricTracker()], # Tracks training metrics using MetricTracker
+ accelerator="cpu", # Specifies the computing device ("cpu", "gpu", ...)
+ train_size=1.0, # Fraction of the dataset used for training (100%)
+ test_size=0.0, # Fraction of the dataset used for testing (0%)
+ val_size=0.0, # Fraction of the dataset used for validation (0%)
+ enable_model_summary=False, # Disables model summary printing
+)
# train
trainer.train()
@@ -7991,6 +7925,13 @@ Input points labels: ['x']
+You are using the plain ModelCheckpoint callback. Consider using LitModelCheckpoint which with seamless uploading to Model registry.
+
+
+
+
+
+
GPU available: False, used: False
@@ -8011,19 +7952,12 @@ Input points labels: ['x']
-
-Missing logger folder: tutorial_logs/lightning_logs
-
-
-
-
-
-
@@ -8052,7 +7986,7 @@ var element = document.getElementById('47625d80-86e1-4553-8f28-55bdcc99682b');
-In [8]:
+In [10]:
# inspecting final loss
@@ -8067,11 +8001,11 @@ var element = document.getElementById('47625d80-86e1-4553-8f28-55bdcc99682b');
-Out[8]:
+Out[10]:
-{'bound_cond_loss': tensor(2.0141e-08),
- 'phys_cond_loss': tensor(1.1210e-05),
- 'train_loss': tensor(1.1231e-05)}
+{'bound_cond_loss': tensor(1.2807e-07),
+ 'phys_cond_loss': tensor(3.4339e-05),
+ 'train_loss': tensor(3.4467e-05)}
@@ -8092,16 +8026,16 @@ var element = document.getElementById('47625d80-86e1-4553-8f28-55bdcc99682b');
-In [9]:
+In [11]:
pts = pinn.problem.spatial_domain.sample(256, "grid", variables="x")
predicted_output = pinn.forward(pts).extract("u").tensor.detach()
true_output = pinn.problem.solution(pts).detach()
-fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))
+fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(pts.extract(["x"]), predicted_output, label="Neural Network solution")
ax.plot(pts.extract(["x"]), true_output, label="True solution")
-plt.legend()
+_ = plt.legend()
@@ -8111,16 +8045,10 @@ var element = document.getElementById('47625d80-86e1-4553-8f28-55bdcc99682b');
-
-Out[9]:
-
-<matplotlib.legend.Legend at 0x7fc84f9891c0>
-
-
-
+
@@ -8132,54 +8060,7 @@ var element = document.getElementById('47625d80-86e1-4553-8f28-55bdcc99682b');
-The solution is overlapped with the actual one, and they are barely indistinguishable. We can also take a look at the loss using TensorBoard:
-
-
-
-
-
-
-
-
-In [10]:
-
-
-print("\nTo load TensorBoard run load_ext tensorboard on your terminal")
-print(
- "To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal\n"
-)
-# # uncomment for running tensorboard
-# %load_ext tensorboard
-# %tensorboard --logdir=tutorial_logs
-
-
-
-
-
-
-
-
-
-
-
-
-
-To load TensorBoard run load_ext tensorboard on your terminal
-To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal
-
-
-
-
-
-
-
-
-
-
-
-
-
-As we can see the loss has not reached a minimum, suggesting that we could train for longer! Alternatively, we can also take look at the loss using callbacks. Here we use MetricTracker from pina.callback:
+The neural network solution overlaps the true one; the two are nearly indistinguishable. We can also visualize the loss during training using the MetricTracker:
@@ -8188,42 +8069,11 @@ To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your t
-In [11]:
+In [12]:
-from pina.callback import MetricTracker
-
-# create the model
-newmodel = FeedForward(
- layers=[10, 10],
- func=torch.nn.Tanh,
- output_dimensions=len(problem.output_variables),
- input_dimensions=len(problem.input_variables),
-)
-
-# create the PINN object
-newpinn = PINN(
- problem, newmodel, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.005)
-)
-
-# create the trainer
-newtrainer = Trainer(
- solver=newpinn,
- max_epochs=1500,
- logger=True, # enable parameter logging
- callbacks=[MetricTracker()],
- accelerator="cpu",
- train_size=1.0,
- test_size=0.0,
- val_size=0.0,
- enable_model_summary=False,
-) # we train on CPU and avoid model summary at beginning of training (optional)
-
-# train
-newtrainer.train()
-
-# plot loss
-trainer_metrics = newtrainer.callbacks[0].metrics
+# plot loss
+trainer_metrics = trainer.callbacks[0].metrics
loss = trainer_metrics["train_loss"]
epochs = range(len(loss))
plt.plot(epochs, loss.cpu())
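# optional additions (not in the original cell): axis labels and a log
# scale make the decaying loss curve easier to read
plt.xlabel("epoch")
plt.ylabel("train_loss")
plt.yscale("log")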
@@ -8242,54 +8092,8 @@ To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your t
-
-GPU available: False, used: False
-
-
-
-
-
-
-TPU available: False, using: 0 TPU cores
-
-
-
-
-
-
-HPU available: False, using: 0 HPUs
-
-
-
-
-
-
-Missing logger folder: /home/runner/work/PINA/PINA/tutorials/tutorial1/lightning_logs
-
-
-
-
-
-
-
-
-
-
-`Trainer.fit` stopped: `max_epochs=1500` reached.
-
-
-
-
-
-
+
@@ -8301,17 +8105,20 @@ var element = document.getElementById('9a0a1fa7-2934-4a4a-9969-e79b2e0c6c87');
-What's next?¶
Congratulations on completing the introductory tutorial of PINA! There are several directions you can go now:
+What's Next?¶
Congratulations on completing the introductory tutorial on Physics-Informed Training! Now that you have a solid foundation, here are several exciting directions you can explore:
-Train the network for longer or with different layer sizes and assert the finaly accuracy
+Experiment with Training Duration & Network Architecture: Try different training durations and tweak the network architecture to optimize performance.
-Train the network using other types of models (see pina.model)
+Explore Other Models in pina.model: Check out other models available in pina.model or design your own custom PyTorch module to suit your needs.
-GPU training and speed benchmarking
+Run Training on a GPU: Speed up your training by running on a GPU and compare the performance improvements.
-Many more...
+Test Various Solvers: Explore and evaluate different solvers to assess their performance on various types of problems.
+
+... and many more: The possibilities are vast! Continue experimenting with advanced configurations, solvers, and other features in PINA.
+For more resources and tutorials, check out the PINA Documentation.
@@ -8319,6 +8126,6 @@ var element = document.getElementById('9a0a1fa7-2934-4a4a-9969-e79b2e0c6c87');