Tutorials and Doc (#191)

* Tutorial doc update
* update doc tutorial
* doc not compiling

---------

Co-authored-by: Dario Coscia <dcoscia@euclide.maths.sissa.it>
Co-authored-by: Dario Coscia <dariocoscia@Dario-Coscia.local>
This commit is contained in:
Nicola Demo
2023-10-23 12:48:09 +02:00
parent ac829aece9
commit 0c8072274e
93 changed files with 2306 additions and 1685 deletions

tutorials/README.md vendored
View File

@@ -1,14 +1,27 @@
# Tutorials
# PINA Tutorials
In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**. Please read the following table for details about the tutorials. The HTML version of all the tutorials is also available within the [documentation](http://mathlab.github.io/PINA/).
## Getting started with PINA
| Name | Description | Type of Problem |
|-------|---------------|-------------------|
| Tutorial1&#160;[[.ipynb](tutorial1/tutorial.ipynb),&#160;[.py](tutorial1/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html)]| Introduction to PINA features | `SpatialProblem` |
| Tutorial2&#160;[[.ipynb](tutorial2/tutorial.ipynb),&#160;[.py](tutorial2/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial2/tutorial.html)]| Poisson problem on regular domain using extra features | `SpatialProblem` |
| Tutorial3&#160;[[.ipynb](tutorial3/tutorial.ipynb),&#160;[.py](tutorial3/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial3/tutorial.html)]| Wave problem on regular domain using custom pytorch networks. | `SpatialProblem`, `TimeDependentProblem` |
| Tutorial4&#160;[[.ipynb](tutorial4/tutorial.ipynb),&#160;[.py](tutorial4/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial4/tutorial.html)]| Continuous Convolutional Filter usage. | `None` |
| Tutorial5&#160;[[.ipynb](tutorial5/tutorial.ipynb),&#160;[.py](tutorial5/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial5/tutorial.html)]| Fourier Neural Operator. | `AbstractProblem` |
| Description | Tutorial |
|---------------|-----------|
| Introduction to PINA for Physics Informed Neural Networks training |[[.ipynb](tutorial1/tutorial.ipynb),&#160;[.py](tutorial1/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html)]|
| Building custom geometries with PINA `Location` class |[[.ipynb](tutorial6/tutorial.ipynb),&#160;[.py](tutorial6/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial6/tutorial.html)]|
## Physics Informed Neural Networks
| Description | Tutorial |
|---------------|-----------|
| Two dimensional Poisson problem using Extra Features Learning |[[.ipynb](tutorial2/tutorial.ipynb),&#160;[.py](tutorial2/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial2/tutorial.html)]|
| Two dimensional Wave problem with hard constraint |[[.ipynb](tutorial3/tutorial.ipynb),&#160;[.py](tutorial3/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial3/tutorial.html)]|
## Neural Operator Learning
| Description | Tutorial |
|---------------|-----------|
| Two dimensional Darcy flow using the Fourier Neural Operator |[[.ipynb](tutorial5/tutorial.ipynb),&#160;[.py](tutorial5/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial5/tutorial.html)]|
## Supervised Learning
| Description | Tutorial |
|---------------|-----------|
| Unstructured convolutional autoencoder via continuous convolution |[[.ipynb](tutorial4/tutorial.ipynb),&#160;[.py](tutorial4/tutorial.py),&#160;[.html](http://mathlab.github.io/PINA/_rst/tutorial4/tutorial.html)]|

File diff suppressed because one or more lines are too long

View File

@@ -1,22 +1,25 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial 1: Physics Informed Neural Networks on PINA
# # Tutorial: Physics Informed Neural Networks on PINA
# In this tutorial, we will demonstrate a typical use case of PINA on a toy problem. Specifically, the tutorial aims to introduce the following topics:
# In this tutorial, we will demonstrate a typical use case of **PINA** on a toy problem, following the standard API procedure.
#
# * Defining a PINA Problem,
# * Building a `pinn` object,
# * Sampling points in a domain
# <p align="center">
# <img src="../../readme/API_color.png" alt="PINA API" width="400"/>
# </p>
#
# These are the three main steps needed **before** training a Physics Informed Neural Network (PINN). We will show each step in detail, and at the end, we will solve the problem.
# Specifically, the tutorial aims to introduce the following topics:
#
# * Explaining how to build a **PINA** Problem,
# * Showing how to generate data for `PINN` training
#
# These are the two main steps needed **before** starting the modelling optimization (choosing the model and solver, and training). We will show each step in detail, and at the end, we will solve a simple Ordinary Differential Equation (ODE) problem by using the `PINN` solver.
# ## PINA Problem
# ## Build a PINA problem
# ### Initialize the `Problem` class
# Problem definition in the PINA framework is done by building a python `class`, which inherits from one or more problem classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`) depending on the nature of the problem. Below is an example:
# #### Simple Ordinary Differential Equation
# Problem definition in the **PINA** framework is done by building a Python `class`, which inherits from one or more problem classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, ...) depending on the nature of the problem. Below is an example:
# ### Simple Ordinary Differential Equation
# Consider the following:
#
# $$
@@ -42,8 +45,8 @@
# # other stuff ...
# ```
#
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). This is done because in **PINA** the `torch.Tensor`s are labelled, allowing the user maximal flexibility in manipulating the tensor. The `spatial_domain` variable indicates where the sample points are going to be sampled, in this case $x\in[0,1]$.
#
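# As a small illustration of labelled tensors (a sketch; `LabelTensor` is the labelled-tensor class exported by **PINA**, and the exact call signature should be checked against the documentation):
#
# ```python
# import torch
# from pina import LabelTensor
#
# pts = LabelTensor(torch.rand(10, 2), ['x', 'y'])  # label the two columns
# x_only = pts.extract(['x'])                       # keep only the 'x' column, still labelled
# ```
#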
# What about if our equation is also time dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
#
@@ -64,22 +67,24 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
# where we have included the `temporal_domain` variable, indicating the time domain in which we want the solution.
#
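# A minimal sketch of such a class (a hedged reconstruction — the full cell is elided in this diff), combining a spatial and a temporal domain:
#
# ```python
# class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
#     output_variables = ['u']
#     spatial_domain = CartesianDomain({'x': [0, 1]})
#     temporal_domain = CartesianDomain({'t': [0, 1]})
#
#     # equations and conditions as before ...
# ```
#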
# In summary, using PINA, we can initialize a problem with a class which inherits from three base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, depending on the type of problem we are considering. For reference:
# In summary, using **PINA**, we can initialize a problem with a class which inherits from different base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, and so on, depending on the type of problem we are considering. Here are some examples (more in the official documentation):
# * `SpatialProblem` $\rightarrow$ a differential equation with spatial variable(s)
# * `TimeDependentProblem` $\rightarrow$ a time-dependent differential equation
# * `ParametricProblem` $\rightarrow$ a parametrized differential equation
# * `AbstractProblem` $\rightarrow$ any **PINA** problem inherits from here
# ### Write the `Problem` class
# ### Write the problem class
#
# Once the `Problem` class is initialized, we need to represent the differential equation in PINA. In order to do this, we need to load the PINA operators from `pina.operators` module. Again, we'll consider Equation (1) and represent it in PINA:
# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from `pina.operators` module. Again, we'll consider Equation (1) and represent it in **PINA**:
# In[2]:
from pina.problem import SpatialProblem
from pina.operators import grad
from pina import Condition, CartesianDomain
from pina.equation.equation import Equation
from pina import Condition
from pina.geometry import CartesianDomain
from pina.equation import Equation, FixedValue
import torch
@@ -101,22 +106,10 @@ class SimpleODE(SpatialProblem):
# calculate the residual and return it
return u_x - u
# defining the initial condition
def initial_condition(input_, output_):
# setting the initial value
value = 1.0
# extracting the u input variable
u = output_.extract(['u'])
# calculate the residual and return it
return u - value
# conditions to hold
conditions = {
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1)), # We fix initial condition to value 1
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)), # We wrap the python equation using Equation
}
# sampled points (see below)
@@ -125,27 +118,76 @@ class SimpleODE(SpatialProblem):
# defining the true solution
def truth_solution(self, pts):
return torch.exp(pts.extract(['x']))
problem = SimpleODE()
# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the difference between subtracting the variable `u` from its gradient (the residual), which we hope to minimize to 0. This is done for all conditions (`ode_equation`, `initial_condition`).
# After we define the `Problem` class, we need to write different class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, given the initial conditions. For example, in the domain $[0,1]$, the ODE equation (`ode_equation`) must be satisfied. We represent this by returning the residual, i.e. the difference between the gradient of the variable `u` and `u` itself, which we hope to minimize to 0. This is done for all conditions. Notice that we do not pass a Python function directly, but an `Equation` object initialized with the Python function. This is done so that all the computations and internal checks are performed inside **PINA**.
#
# Once we have defined the function, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass the location points and the function we want minimized on those points (other possibilities are allowed, see the documentation for reference) as parameters.
# Once we have defined the functions, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass the location points and the equation we want minimized on those points (other possibilities are allowed, see the documentation for reference).
#
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the problem class, but it is not mandatory for problem definition.
#
# ## Build the `PINN` object
# The basic requirements for building a `PINN` model are a `Problem` and a model. We have just covered the `Problem` definition. For the model parameter, one can use either the default models provided in PINA or a custom model. We will not go into the details of model definition (see Tutorial2 and Tutorial3 for more details on model definition).
# ## Generate data
#
# Data for training can come in the form of direct numerical simulation results, or points in the domain. In case we do unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
# In[3]:
from pina.model import FeedForward
from pina import PINN
# sampling 20 points in [0, 1] through discretization in all locations
problem.discretise_domain(n=20, mode='grid', variables=['x'], locations='all')
# sampling 20 points in (0, 1) through latin hypercube sampling in D, and 1 point in x0
problem.discretise_domain(n=20, mode='latin', variables=['x'], locations=['D'])
problem.discretise_domain(n=1, mode='random', variables=['x'], locations=['x0'])
# sampling 20 points in (0, 1) randomly
problem.discretise_domain(n=20, mode='random', variables=['x'])
# We are going to use latin hypercube points for sampling. We need to sample in all the conditions' domains; in our case we sample in `D` and `x0`.
# In[4]:
# sampling for training
problem.discretise_domain(1, 'random', locations=['x0'])
problem.discretise_domain(20, 'lh', locations=['D'])
# The points are saved in a Python `dict`, and can be accessed by calling the attribute `input_pts` of the problem.
# In[5]:
print('Input points:', problem.input_pts)
print('Input points labels:', problem.input_pts['D'].labels)
# To visualize the sampled points we can use the `.plot_samples` method of the `Plotter` class
# In[6]:
from pina import Plotter
pl = Plotter()
pl.plot_samples(problem=problem)
# ## Perform a small training
# Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solvers`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callbacks.MetricTracker`.
# In[7]:
from pina import PINN, Trainer
from pina.model import FeedForward
from pina.callbacks import MetricTracker
# initialize the problem
problem = SimpleODE()
# build the model
model = FeedForward(
@@ -158,38 +200,49 @@ model = FeedForward(
# create the PINN object
pinn = PINN(problem, model)
# create the trainer
trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# Creating the `PINN` object is fairly simple. Different optional parameters include: optimizer, batch size, ... (see [documentation](https://mathlab.github.io/PINA/) for reference).
# ## Sample points in the domain
# Once the `PINN` object is created, we need to generate the points for starting the optimization. To do so, we use the `sample` method of the `CartesianDomain` class. Below are three examples of sampling methods on the $[0,1]$ domain:
# In[4]:
# sampling 20 points in [0, 1] through discretization
pinn.problem.discretise_domain(n=20, mode='grid', variables=['x'])
# sampling 20 points in (0, 1) through latin hypercube samping
pinn.problem.discretise_domain(n=20, mode='latin', variables=['x'])
# sampling 20 points in (0, 1) randomly
pinn.problem.discretise_domain(n=20, mode='random', variables=['x'])
# ### Very simple training and plotting
#
# Once we have defined the PINA model, created a network, and sampled points in the domain, we have everything necessary for training a PINN. To do so, we make use of the `Trainer` class.
# In[5]:
from pina import Trainer
# initialize trainer
trainer = Trainer(pinn)
# train the model
# train
trainer.train()
# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed via `trainer.logged_metrics`.
# In[8]:
# inspecting final loss
trainer.logged_metrics
# By using the `Plotter` class from **PINA** we can also do some qualitative plots of the solution.
# In[9]:
# plotting the solution
pl.plot(trainer=trainer)
# The predicted solution overlaps the true one, and they are practically indistinguishable. We can also easily plot the loss:
# In[10]:
pl.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)
# As we can see, the loss has not reached a minimum, suggesting that we could train for longer.
# ## What's next?
#
# Nice, you have completed the introductory tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#
# 2. Train the network using other types of models (see `pina.model`)
#
# 3. Train on GPU and benchmark the speed
#
# 4. Many more...

File diff suppressed because one or more lines are too long

View File

@@ -1,21 +1,10 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial 2: resolution of Poisson problem and usage of extra-features
# ### The problem definition
# This tutorial presents how to solve with Physics-Informed Neural Networks a 2D Poisson problem with Dirichlet boundary conditions. Using extrafeatures.
# # Tutorial: Two dimensional Poisson problem using Extra Features Learning
#
# This tutorial presents how to solve with Physics-Informed Neural Networks (PINNs) a 2D Poisson problem with Dirichlet boundary conditions. We will first train with the standard PINN training, and then with extra features. For more insights on extra-feature learning please read [*An extended physics informed neural network for preliminary analysis of parametric optimal control problems*](https://www.sciencedirect.com/science/article/abs/pii/S0898122123002018).
#
# The problem is written as:
# \begin{equation}
# \begin{cases}
# \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
# u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
# \end{cases}
# \end{equation}
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square.
# First of all, some useful imports.
# In[1]:
@@ -36,7 +25,18 @@ from pina import Condition, LabelTensor
from pina.callbacks import MetricTracker
# Now, the Poisson problem is written in PINA code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. *truth_solution*
# ## The problem definition
# The two-dimensional Poisson problem is mathematically written as:
# \begin{equation}
# \begin{cases}
# \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
# u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
# \end{cases}
# \end{equation}
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square.
#
# The Poisson problem is written in **PINA** code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. The *truth_solution*
# is the exact solution which will be compared with the predicted one.
# In[2]:
@@ -52,6 +52,7 @@ class Poisson(SpatialProblem):
laplacian_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
return laplacian_u - force_term
# here we write the problem conditions
conditions = {
'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1}), equation=FixedValue(0.)),
'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0}), equation=FixedValue(0.)),
@@ -75,11 +76,11 @@ problem.discretise_domain(25, 'grid', locations=['D'])
problem.discretise_domain(25, 'grid', locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'])
# ### The problem solution
# ## Solving the problem with standard PINNs
# After the problem, the feed-forward neural network is defined, through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unknown field of the Poisson problem. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `discretise_domain`) and the loss minimized by the neural network is the sum of the residuals.
#
# In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006. These parameters can be modified as desired.
# In this tutorial, the neural network is composed of two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $\ell_2$ weight regularization set to $10^{-8}$ (the `weight_decay` passed below). These parameters can be modified as desired. We use the `MetricTracker` class to track the metrics during training.
# In[3]:
@@ -92,7 +93,7 @@ model = FeedForward(
input_dimensions=len(problem.input_variables)
)
pinn = PINN(problem, model, optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()])
trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer.train()
@@ -108,7 +109,7 @@ plotter = Plotter()
plotter.plot(trainer)
# ### The problem solution with extra-features
# ## Solving the problem with extra-features PINNs
# Now, the same problem is solved in a different way.
# A new neural network is now defined, with an additional input variable, named *extra-feature*, which coincides with the forcing term of the Poisson equation.
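#
# A minimal sketch of such an extra feature (hedged — the actual `SinSin` cell is elided in this diff; it returns the forcing term as an extra labelled input, using the `LabelTensor` imported above):
#
# ```python
# class SinSin(torch.nn.Module):
#     """Extra feature: sin(pi*x) * sin(pi*y), i.e. the forcing term."""
#     def __init__(self):
#         super().__init__()
#
#     def forward(self, x):
#         t = (torch.sin(x.extract(['x']) * torch.pi) *
#              torch.sin(x.extract(['y']) * torch.pi))
#         return LabelTensor(t, ['sin(x)sin(y)'])
# ```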
@@ -147,7 +148,7 @@ model_feat = FeedForward(
input_dimensions=len(problem.input_variables)+1
)
pinn_feat = PINN(problem, model_feat, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()])
trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_feat.train()
@@ -162,7 +163,7 @@ trainer_feat.train()
plotter.plot(trainer_feat)
# ### The problem solution with learnable extra-features
# ## Solving the problem with learnable extra-features PINNs
# We can still do better!
#
@@ -176,7 +177,7 @@ plotter.plot(trainer_feat)
# where $\alpha$ and $\beta$ are the above-mentioned parameters.
# Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!
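#
# A minimal sketch of the learnable feature (hedged — the full `SinSinAB` cell below is elided in this diff):
#
# ```python
# class SinSinAB(torch.nn.Module):
#     """Extra feature: beta * sin(alpha*pi*x) * sin(alpha*pi*y), with learnable alpha, beta."""
#     def __init__(self):
#         super().__init__()
#         self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
#         self.beta = torch.nn.Parameter(torch.tensor([1.0]))
#
#     def forward(self, x):
#         t = self.beta * (torch.sin(self.alpha * x.extract(['x']) * torch.pi) *
#                          torch.sin(self.alpha * x.extract(['y']) * torch.pi))
#         return LabelTensor(t, ['b*sin(a*x)sin(a*y)'])
# ```
#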
# In[7]:
# In[8]:
class SinSinAB(torch.nn.Module):
@@ -202,8 +203,8 @@ model_lean= FeedForward(
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)+1
)
pinn_lean = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_lean, max_epochs=1000)
pinn_lean = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_learn.train()
@@ -211,7 +212,7 @@ trainer_learn.train()
# Umh, the final loss is not appreciably better than the previous model (with static extra features), despite the use of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/biases of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the fastest way to reach machine precision for solving the Poisson problem is to remove all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.
# In[8]:
# In[11]:
# make model + solver + trainer
@@ -221,8 +222,8 @@ model_lean= FeedForward(
output_dimensions=len(problem.output_variables),
input_dimensions=len(problem.input_variables)+1
)
pinn_learn = PINN(problem, model_lean, extra_features=[SinSin()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()])
pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
# train
trainer_learn.train()
@@ -233,8 +234,30 @@ trainer_learn.train()
#
# We conclude by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented: the standard PINN, the PINN with extra features, and the PINN with learnable extra features.
# In[9]:
# In[12]:
plotter.plot(trainer_learn)
# Let us compare the training losses for the various types of training:
# In[14]:
plotter.plot_loss(trainer, label='Standard')
plotter.plot_loss(trainer_feat, label='Static Features')
plotter.plot_loss(trainer_learn, label='Learnable Features')
# ## What's next?
#
# Nice, you have completed the two dimensional Poisson tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#
# 2. Propose new types of extra features and see how they affect the learning
#
# 3. Exploit extra-feature training in more complex problems
#
# 4. Many more...

File diff suppressed because one or more lines are too long

View File

@@ -1,24 +1,10 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial 3: resolution of wave equation with hard constraint PINNs.
# ## The problem definition
# In this tutorial we present how to solve the wave equation using hard constraint PINNs. For doing so we will build a costum torch model and pass it to the `PINN` solver.
# # Tutorial: Two dimensional Wave problem with hard constraint
#
# The problem is written in the following form:
# In this tutorial we present how to solve the wave equation using hard constraint PINNs. To do so, we will build a custom `torch` model and pass it to the `PINN` solver.
#
# \begin{equation}
# \begin{cases}
# \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
# u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
# u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
# \end{cases}
# \end{equation}
#
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.
# First of all, some useful imports.
# In[1]:
@@ -36,6 +22,20 @@ from pina.equation.equation_factory import FixedValue
from pina import Condition, Plotter
# ## The problem definition
# The problem is written in the following form:
#
# \begin{equation}
# \begin{cases}
# \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
# u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
# u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
# \end{cases}
# \end{equation}
#
# where $D$ is a square domain $[0,1]^2$, and $\Gamma_i$, with $i=1,...,4$, are the boundaries of the square, and the velocity in the standard wave equation is fixed to one.
# Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem` since we deal with spatial and time-dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution which will be compared with the predicted one.
# In[2]:
@@ -78,7 +78,7 @@ problem = Wave()
# ## Hard Constraint Model
# After the problem, a **torch** model is needed to solve the PINN. Usually, many models are already implemented in `PINA`, but the user has the possibility to build his/her own model in `PyTorch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:
# After the problem, a **torch** model is needed to solve the PINN. Usually, many models are already implemented in **PINA**, but the user has the possibility to build his/her own model in `torch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:
#
# $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t), $$
#
@@ -92,11 +92,11 @@ class HardMLP(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 20),
torch.nn.Tanh(),
torch.nn.Linear(20, 20),
torch.nn.Tanh(),
torch.nn.Linear(20, output_dim))
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, output_dim))
# here in the forward we implement the hard constraints
def forward(self, x):
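# (sketch of the elided body, mirroring the HardMLPtime network later in this
# tutorial) the prefactor x*y*(1-x)*(1-y) vanishes on the boundary of the unit
# square, so u = 0 is enforced there by construction:
hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
return hard * self.layers(x)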
@@ -106,14 +106,19 @@ class HardMLP(torch.nn.Module):
# ## Train and Inference
# In this tutorial, the neural network is trained for 3000 epochs with a learning rate of 0.001 (default in `PINN`). Training takes approximately 1 minute.
# In this tutorial, the neural network is trained for 1000 epochs with a learning rate of 0.001 (default in `PINN`). Training takes approximately 3 minutes.
# In[4]:
# generate the data
problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
# create the solver
pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
problem.discretise_domain(1000, 'random', locations=['D','t0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
trainer = Trainer(pinn, max_epochs=3000)
# create trainer and train
trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
@@ -125,11 +130,93 @@ trainer.train()
plotter = Plotter()
# plotting at fixed time t = 0.0
print('Plotting at t=0')
plotter.plot(trainer, fixed_variables={'t': 0.0})
# plotting at fixed time t = 0.5
print('Plotting at t=0.5')
plotter.plot(trainer, fixed_variables={'t': 0.5})
# plotting at fixed time t = 1.
print('Plotting at t=1')
plotter.plot(trainer, fixed_variables={'t': 1.0})
# The results are not so great, and we can clearly see that as time progresses the solution gets worse... Can we do better?
#
# A valid option is to impose the initial condition as hard constraint as well. Specifically, our solution is written as:
#
# $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y), $$
#
# Let us build the network first
# In[6]:
class HardMLPtime(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.layers = torch.nn.Sequential(torch.nn.Linear(input_dim, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, 40),
torch.nn.ReLU(),
torch.nn.Linear(40, output_dim))
# here in the forward we implement the hard constraints
def forward(self, x):
hard_space = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
hard_t = torch.sin(torch.pi*x.extract(['x'])) * torch.sin(torch.pi*x.extract(['y'])) * torch.cos(torch.sqrt(torch.tensor(2.))*torch.pi*x.extract(['t']))
return hard_space * self.layers(x) * x.extract(['t']) + hard_t
# Now let's train with the same configuration as the previous test
# In[7]:
# generate the data
problem.discretise_domain(1000, 'random', locations=['D', 't0', 'gamma1', 'gamma2', 'gamma3', 'gamma4'])
# create the solver
pinn = PINN(problem, HardMLPtime(len(problem.input_variables), len(problem.output_variables)))
# create trainer and train
trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
# We can clearly see that the loss is way lower now. Let's plot the results
# In[8]:
plotter = Plotter()
# plotting at fixed time t = 0.0
print('Plotting at t=0')
plotter.plot(trainer, fixed_variables={'t': 0.0})
# plotting at fixed time t = 0.5
print('Plotting at t=0.5')
plotter.plot(trainer, fixed_variables={'t': 0.5})
# plotting at fixed time t = 1.
print('Plotting at t=1')
plotter.plot(trainer, fixed_variables={'t': 1.0})
# We can see now that the results are way better! This is due to the fact that previously the network was not correctly learning the initial condition, leading to a poor solution as time evolved. By imposing the initial condition as a hard constraint, the network is able to correctly solve the problem.
# ## What's next?
#
# Nice, you have completed the two dimensional Wave tutorial of **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#
# 2. Propose new types of hard constraints in time, e.g. $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)(1-\exp(-t)) + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y), $$
#
# 3. Exploit extra-feature training for models 1 and 2
#
# 4. Many more...

File diff suppressed because one or more lines are too long

View File

@@ -5,7 +5,7 @@
"id": "e80567a6",
"metadata": {},
"source": [
"# Tutorial 5: Fourier Neural Operator Learning"
"# Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator"
]
},
{
@@ -13,8 +13,8 @@
"id": "8762bbe5",
"metadata": {},
"source": [
"In this tutorial we are going to solve the Darcy flow 2d problem, presented in [Fourier Neural Operator for\n",
"Parametric Partial Differential Equation](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operation, run `pip install scipy` for installing it."
"In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for\n",
"Parametric Partial Differential Equation*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operations."
]
},
{
@@ -22,18 +22,9 @@
"execution_count": 1,
"id": "5f2744dc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)\n",
" warnings.warn(f\"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of \"\n"
]
}
],
"outputs": [],
"source": [
"\n",
"# !pip install scipy # install scipy\n",
"from scipy import io\n",
"import torch\n",
"from pina.model import FNO, FeedForward # let's import some models\n",
@@ -63,7 +54,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 17,
"id": "2ffb8a4c",
"metadata": {},
"outputs": [],
@@ -71,9 +62,9 @@
"# download the dataset\n",
"data = io.loadmat(\"Data_Darcy.mat\")\n",
"\n",
"# extract data\n",
"k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)\n",
"u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)\n",
"# extract data (we use only 100 data for train)\n",
"k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]\n",
"u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]\n",
"k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)\n",
"u_test= torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)\n",
"x = torch.tensor(data['x'], dtype=torch.float)[0]\n",
@@ -90,7 +81,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 18,
"id": "c8501b6f",
"metadata": {},
"outputs": [
@@ -125,7 +116,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 19,
"id": "8b27d283",
"metadata": {},
"outputs": [],
@@ -152,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 20,
"id": "e34f18b0",
"metadata": {},
"outputs": [
@@ -160,35 +151,16 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML\n",
" warnings.warn(\"Can't initialize NVML\")\n",
"GPU available: True (cuda), used: True\n",
"GPU available: False, used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
"HPU available: False, using: 0 HPUs\n",
"Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial5/lightning_logs\n",
"2023-10-17 10:41:03.316644: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2023-10-17 10:41:03.333768: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
"2023-10-17 10:41:03.383188: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
"2023-10-17 10:41:07.712785: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
"\n",
" | Name | Type | Params\n",
"----------------------------------------\n",
"0 | _loss | MSELoss | 0 \n",
"1 | _neural_net | Network | 481 \n",
"----------------------------------------\n",
"481 Trainable params\n",
"0 Non-trainable params\n",
"481 Total params\n",
"0.002 Total estimated model params size (MB)\n"
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "eb573678e5d94f0490ce09817a06f5cb",
"model_id": "40f63403b97248a88e49755e8cb096fc",
"version_major": 2,
"version_minor": 0
},
@@ -203,22 +175,20 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/u/n/ndemo/.local/lib/python3.9/site-packages/torch/_tensor.py:1386: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3614.)\n",
" ret = func(*args, **kwargs)\n",
"`Trainer.fit` stopped: `max_epochs=100` reached.\n"
]
}
],
"source": [
"# make model\n",
"model=FeedForward(input_dimensions=1, output_dimensions=1)\n",
"model = FeedForward(input_dimensions=1, output_dimensions=1)\n",
"\n",
"\n",
"# make solver\n",
"solver = SupervisedSolver(problem=problem, model=model)\n",
"\n",
"# make the trainer and train\n",
"trainer = Trainer(solver=solver, max_epochs=100)\n",
"trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)\n",
"trainer.train()\n"
]
},
@@ -232,7 +202,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 21,
"id": "0e2a6aa4",
"metadata": {},
"outputs": [
@@ -240,8 +210,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Final error training 56.86%\n",
"Final error testing 56.82%\n"
"Final error training 56.24%\n",
"Final error testing 55.95%\n"
]
}
],
@@ -271,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 22,
"id": "9af523a5",
"metadata": {},
"outputs": [
@@ -279,27 +249,16 @@
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (cuda), used: True\n",
"GPU available: False, used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
"HPU available: False, using: 0 HPUs\n",
"LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
"\n",
" | Name | Type | Params\n",
"----------------------------------------\n",
"0 | _loss | MSELoss | 0 \n",
"1 | _neural_net | Network | 591 K \n",
"----------------------------------------\n",
"591 K Trainable params\n",
"0 Non-trainable params\n",
"591 K Total params\n",
"2.364 Total estimated model params size (MB)\n"
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "0f7225d39f7241e692c6027c72adfd5f",
"model_id": "5328859a5d9344ddb818622fd058d2a5",
"version_major": 2,
"version_minor": 0
},
@@ -314,7 +273,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=20` reached.\n"
"`Trainer.fit` stopped: `max_epochs=100` reached.\n"
]
}
],
@@ -334,7 +293,7 @@
"solver = SupervisedSolver(problem=problem, model=model)\n",
"\n",
"# make the trainer and train\n",
"trainer = Trainer(solver=solver, max_epochs=20)\n",
"trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)\n",
"trainer.train()\n"
]
},
@@ -343,12 +302,12 @@
"id": "84964cb9",
"metadata": {},
"source": [
"We can clearly see that with 1/3 of the total epochs the loss is lower. Let's see in testing.. Notice that the number of parameters is way higher than a `FeedForward` network. We suggest to use GPU or TPU for a speed up in training."
"We can clearly see that the final loss is lower. Let's see in testing.. Notice that the number of parameters is way higher than a `FeedForward` network. We suggest to use GPU or TPU for a speed up in training, when many data samples are used."
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 23,
"id": "58e2db89",
"metadata": {},
"outputs": [
@@ -356,8 +315,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Final error training 26.19%\n",
"Final error testing 25.89%\n"
"Final error training 10.86%\n",
"Final error testing 12.77%\n"
]
}
],

View File

@@ -1,14 +1,15 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial 5: Fourier Neural Operator Learning
# # Tutorial: Two dimensional Darcy flow using the Fourier Neural Operator
# In this tutorial we are going to solve the Darcy flow 2d problem, presented in [Fourier Neural Operator for
# Parametric Partial Differential Equation](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input output operation, run `pip install scipy` for installing it.
# In this tutorial we are going to solve the Darcy flow problem in two dimensions, presented in [*Fourier Neural Operator for
# Parametric Partial Differential Equations*](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all we import the modules needed for the tutorial. Importing `scipy` is needed for input/output operations.
# In[1]:
# !pip install scipy # install scipy
from scipy import io
import torch
from pina.model import FNO, FeedForward # let's import some models
@@ -31,15 +32,15 @@ import matplotlib.pyplot as plt
# Specifically, $u$ is the flow pressure, $k$ is the permeability field and $f$ is the forcing function. The Darcy flow can parameterize a variety of systems including flow through porous media, elastic materials and heat conduction. Here the domain is the 2D unit square with Dirichlet boundary conditions. The dataset is taken from the authors' original reference.
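#
# For reference, the steady-state Darcy flow problem (in the standard form used by the FNO paper; the exact statement sits in a cell elided from this diff) reads:
#
# $$
# \begin{cases}
# -\nabla \cdot (k(x, y) \nabla u(x, y)) = f(x, y), & (x, y) \in (0, 1)^2, \\
# u(x, y) = 0, & (x, y) \in \partial (0, 1)^2.
# \end{cases}
# $$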
#
# In[2]:
# In[17]:
# download the dataset
data = io.loadmat("Data_Darcy.mat")
# extract data
k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)
u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)
# extract data (we use only 100 samples for training)
k_train = torch.tensor(data['k_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
u_train = torch.tensor(data['u_train'], dtype=torch.float).unsqueeze(-1)[:100, ...]
k_test = torch.tensor(data['k_test'], dtype=torch.float).unsqueeze(-1)
u_test = torch.tensor(data['u_test'], dtype=torch.float).unsqueeze(-1)
x = torch.tensor(data['x'], dtype=torch.float)[0]
@@ -48,7 +49,7 @@ y = torch.tensor(data['y'], dtype=torch.float)[0]
# Let's visualize some data
# In[3]:
# In[18]:
plt.subplot(1, 2, 1)
@@ -62,7 +63,7 @@ plt.show()
# We now create the problem class for the neural operator. It is a very simple class, inheriting from `AbstractProblem`.
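#
# A minimal sketch of such a problem (hedged — the full cell is elided in this diff, and the `Condition(input_points=..., output_points=...)` signature for data-driven conditions is an assumption to be checked against the documentation):
#
# ```python
# class NeuralOperatorSolver(AbstractProblem):
#     input_variables = ['k']
#     output_variables = ['u']
#     conditions = {'data': Condition(input_points=k_train,
#                                     output_points=u_train)}
# ```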
# In[4]:
# In[19]:
class NeuralOperatorSolver(AbstractProblem):
@@ -79,24 +80,24 @@ problem = NeuralOperatorSolver()
#
# We will first solve the problem using a Feedforward neural network. We will use the `SupervisedSolver` for solving the problem, since we are training using supervised learning.
# In[5]:
# In[20]:
# make model
model=FeedForward(input_dimensions=1, output_dimensions=1)
model = FeedForward(input_dimensions=1, output_dimensions=1)
# make solver
solver = SupervisedSolver(problem=problem, model=model)
# make the trainer and train
trainer = Trainer(solver=solver, max_epochs=100)
trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
# The final loss is pretty high... We can calculate the error by importing `LpLoss`.
# In[6]:
# In[21]:
from pina.loss import LpLoss
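# a sketch of the (elided) error computation, mirroring the pattern visible later
# in this diff; `LpLoss(relative=True)` as a relative L2 metric is an assumption
metric_err = LpLoss(relative=True)
err = float(metric_err(u_train.squeeze(-1), solver.models[0](k_train).squeeze(-1)).mean()) * 100
print(f'Final error training {err:.2f}%')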
@@ -116,7 +117,7 @@ print(f'Final error testing {err:.2f}%')
#
# We will now move to solve the problem using a FNO. Since we are learning operator this approach is better suited, as we shall see.
# In[7]:
# In[22]:
# make model
@@ -134,13 +135,13 @@ model = FNO(lifting_net=lifting_net,
solver = SupervisedSolver(problem=problem, model=model)
# make the trainer and train
trainer = Trainer(solver=solver, max_epochs=20)
trainer = Trainer(solver=solver, max_epochs=100, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
# We can clearly see that with 1/3 of the total epochs the loss is lower. Let's see in testing.. Notice that the number of parameters is way higher than a `FeedForward` network. We suggest to use GPU or TPU for a speed up in training.
# We can clearly see that the final loss is lower. Let's see how it performs in testing. Notice that the number of parameters is way higher than in a `FeedForward` network. We suggest using a GPU or TPU for a speed-up in training when many data samples are used.
# In[8]:
# In[23]:
err = float(metric_err(u_train.squeeze(-1), solver.models[0](k_train).squeeze(-1)).mean())*100

View File

@@ -5,29 +5,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial 6: How to Use Geometries in PINA"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Built-in Geometries"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Building custom geometries with PINA `Location` class\n",
"\n",
"In this tutorial we will show how to use geometries in PINA. Specifically, the tutorial will include how to create geometries and how to visualize them. The topics covered are:\n",
"\n",
"* Creating CartesianDomains and EllipsoidDomains\n",
"* Getting the Union and Difference of Geometries\n",
"* Sampling points in the domain (and visualize them)\n",
"\n",
"We import the relevant modules."
"We import the relevant modules first."
]
},
{
@@ -45,6 +31,14 @@
" ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Built-in Geometries"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -401,7 +395,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the `Location` class we are inherting from requires both a sample method and `is_inside` method, we will create them and just add in \"pass\" for the moment."
"Because the `Location` class we are inherting from requires both a `sample` method and `is_inside` method, we will create them and just add in \"pass\" for the moment."
]
},
{

View File

@@ -1,17 +1,15 @@
#!/usr/bin/env python
# coding: utf-8
# # Tutorial 6: How to Use Geometries in PINA
# ## Built-in Geometries
# # Tutorial: Building custom geometries with PINA `Location` class
#
# In this tutorial we will show how to use geometries in PINA. Specifically, the tutorial will include how to create geometries and how to visualize them. The topics covered are:
#
# * Creating CartesianDomains and EllipsoidDomains
# * Getting the Union and Difference of Geometries
# * Sampling points in the domain (and visualize them)
#
# We import the relevant modules.
# We import the relevant modules first.
# In[1]:
@@ -25,6 +23,8 @@ def plot_scatter(ax, pts, title):
ax.scatter(pts.extract('x'), pts.extract('y'), color='blue', alpha=0.5)
# ## Built-in Geometries
# We will create one Cartesian domain and two ellipsoids. For the sake of simplicity, we show here the 2-dimensional case, but the extension to 3D (and higher) cases is trivial. The geometries also allow the generation of samples belonging to the boundary. So, we will create one ellipsoid with the border and one without.
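#
# A minimal sketch (hedged — the actual cell is elided in this diff; the `sample_surface` flag for boundary-only sampling is an assumption to be checked against the documentation):
#
# ```python
# from pina.geometry import CartesianDomain, EllipsoidDomain
#
# cartesian = CartesianDomain({'x': [0, 2], 'y': [0, 2]})
# ellipsoid = EllipsoidDomain({'x': [1, 3], 'y': [1, 3]})
# ellipsoid_border = EllipsoidDomain({'x': [2, 4], 'y': [2, 4]}, sample_surface=True)
# ```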
# In[2]:
@@ -180,7 +180,7 @@ class Heart(Location):
# Because the `Location` class we are inherting from requires both a sample method and `is_inside` method, we will create them and just add in "pass" for the moment.
# Because the `Location` class we are inheriting from requires both a `sample` method and an `is_inside` method, we will create them and just add in "pass" for the moment.
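#
# A minimal skeleton (sketch — the full `Heart` implementation is elided in this diff; only the two required methods are shown):
#
# ```python
# class Heart(Location):
#     """Custom 2D geometry built on the PINA `Location` interface."""
#     def __init__(self):
#         super().__init__()
#
#     def is_inside(self, point, check_border=False):
#         pass  # to be implemented later
#
#     def sample(self, n, mode='random', variables='all'):
#         pass  # to be implemented later
# ```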
# In[13]: