tutorial validation (#185)

Co-authored-by: Ben Volokh <89551265+benv123@users.noreply.github.com>
This commit is contained in:
Nicola Demo
2023-10-17 10:54:31 +02:00
parent 2e2fe93458
commit 32ff5de1f4
38 changed files with 1072 additions and 1006 deletions

File diff suppressed because one or more lines are too long


@@ -3,18 +3,19 @@
# # Tutorial 1: Physics Informed Neural Networks on PINA
# In this tutorial we will show the typical use case of PINA on a toy problem solved by Physics Informed Problems. Specifically, the tutorial aims to introduce the following topics:
# In this tutorial, we will demonstrate a typical use case of PINA on a toy problem. Specifically, the tutorial aims to introduce the following topics:
#
# * Defining a PINA Problem,
# * Build a `PINN` Solver,
# * Building a `pinn` object,
# * Sampling points in a domain
#
# We will show in detailed each step, and at the end we will solve a very simple problem with PINA.
# These are the three main steps needed **before** training a Physics Informed Neural Network (PINN). We will show each step in detail, and at the end, we will solve the problem.
# ## Defining a Problem
# ## PINA Problem
# ### Initialize the Problem class
# The problem definition in the PINA framework is done by building a phython `class`, inherited from `AbsractProblem`. A problem is an object which explains what the solver is supposed to solve. For Physics Informed Neural Networks, a problem can be inherited from one or more problem (already implemented) classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`), depending on the nature of the problem treated.
# Let's see an example to better understand:
# ### Initialize the `Problem` class
# Problem definition in the PINA framework is done by building a python `class`, which inherits from one or more problem classes (`SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`) depending on the nature of the problem. Below is an example:
# #### Simple Ordinary Differential Equation
# Consider the following:
#
@@ -27,55 +28,58 @@
# \end{equation}
# $$
#
# with analytical solution $u(x) = e^x$. In this case we have that our ODE depends only on the spatial variable $x\in(0,1)$ , this means that our problem class is going to be inherited from `SpatialProblem` class:
# with the analytical solution $u(x) = e^x$. In this case, our ODE depends only on the spatial variable $x\in(0,1)$, meaning that our `Problem` class will inherit from the `SpatialProblem` class:
#
# ```python
# from pina.problem import SpatialProblem
# from pina.geometry import CartesianDomain
# from pina import CartesianDomain
#
# class SimpleODE(SpatialProblem):
#
# output_variables = ['u']
# spatial_domain = CartesianDomain({'x': [0, 1]})
# spatial_domain = CartesianDomain({'x': [0, 1]})
#
# # other stuff ...
# ```
#
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in(0,1)$
#
# What about if we also have a time depencency in the equation? Well in that case our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
# ```python
# from pina.problem import SpatialProblem, TimeDependentProblem
# from pina.geometry import CartesianDomain
#
# class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
#
# output_variables = ['u']
# spatial_domain = CartesianDomain({'x': [0, 1]})
# temporal_domain = CartesianDomain({'x': [0, 1]})
#
# # other stuff ...
# ```
# where we have included the `temporal_domain` variable indicating the time domain where we want the solution.
#
# Summarizing, in PINA we can initialize a problem with a class which is inherited from three base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, depending on the type of problem we are considering. For reference:
# * `SpatialProblem` $\rightarrow$ spatial variable(s) presented in the differential equation
# * `TimeDependentProblem` $\rightarrow$ time variable(s) presented in the differential equation
# * `ParametricProblem` $\rightarrow$ parameter(s) presented in the differential equation
# Notice that we define `output_variables` as a list of symbols, indicating the output variables of our equation (in this case only $u$). The `spatial_domain` variable indicates where the sample points are going to be sampled in the domain, in this case $x\in[0,1]$.
# What if our equation is also time dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
#
# ### Write the problem class
# In[1]:
from pina.problem import SpatialProblem, TimeDependentProblem
from pina import CartesianDomain
class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
output_variables = ['u']
spatial_domain = CartesianDomain({'x': [0, 1]})
temporal_domain = CartesianDomain({'t': [0, 1]})
# other stuff ...
# where we have included the `temporal_domain` variable, indicating the time interval over which we want the solution.
#
# Once the problem class is initialized we need to write the differential equation in PINA language. For doing this we need to load the pina operators found in `pina.operators` module. Let's again consider the Equation (1) and try to write the PINA model class:
# In summary, in PINA we can initialize a problem with a class that inherits from one or more of three base classes: `SpatialProblem`, `TimeDependentProblem`, `ParametricProblem`, depending on the type of problem we are considering. For reference (a minimal parametric sketch follows the list):
# * `SpatialProblem` $\rightarrow$ a differential equation with spatial variable(s)
# * `TimeDependentProblem` $\rightarrow$ a time-dependent differential equation
# * `ParametricProblem` $\rightarrow$ a parametrized differential equation
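# As a minimal sketch (not part of the original tutorial), a problem depending on a parameter as well as on space could combine `SpatialProblem` and `ParametricProblem`; here we assume `ParametricProblem` requires a `parameter_domain` attribute:
#
# ```python
# from pina.problem import SpatialProblem, ParametricProblem
# from pina import CartesianDomain
#
# class ParametricODE(SpatialProblem, ParametricProblem):
#
#     output_variables = ['u']
#     spatial_domain = CartesianDomain({'x': [0, 1]})
#     # assumption: ParametricProblem expects the parameter range in `parameter_domain`
#     parameter_domain = CartesianDomain({'alpha': [0, 1]})
#
#     # other stuff ...
# ```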
# ### Write the `Problem` class
#
# Once the `Problem` class is initialized, we need to represent the differential equation in PINA. In order to do this, we need to load the PINA operators from the `pina.operators` module. Again, we'll consider Equation (1) and write it in the PINA language:
# In[2]:
from pina.problem import SpatialProblem
from pina.operators import grad
from pina.geometry import CartesianDomain
from pina.equation import Equation
from pina import Condition
from pina import Condition, CartesianDomain
from pina.equation.equation import Equation
import torch
@@ -91,50 +95,54 @@ class SimpleODE(SpatialProblem):
# computing the derivative
u_x = grad(output_, input_, components=['u'], d=['x'])
# extracting u input variable
# extracting the u input variable
u = output_.extract(['u'])
# calculate residual and return it
# calculate the residual and return it
return u_x - u
# defining initial condition
# defining the initial condition
def initial_condition(input_, output_):
# setting initial value
# setting the initial value
value = 1.0
# extracting u input variable
# extracting the u input variable
u = output_.extract(['u'])
# calculate residual and return it
# calculate the residual and return it
return u - value
# Conditions to hold
# conditions to hold
conditions = {
'x0': Condition(location=CartesianDomain({'x': 0.}), equation=Equation(initial_condition)),
'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
}
# defining true solution
# sampled points (see below)
input_pts = None
# defining the true solution
def truth_solution(self, pts):
return torch.exp(pts.extract(['x']))
# After the defition of the Class we need to write different class methods, where each method is a function returning a residual. This functions are the ones minimized during the PINN optimization, for the different conditions. For example, in the domain $(0,1)$ the ODE equation (`ode_equation`) must be satisfied, so we write it by putting all the ODE equation on the right hand side, such that we return the zero residual. This is done for all the conditions (`ode_equation`, `initial_condition`). Notice that we do not pass directly a `python` function, but an `Equation` object, which is initialized with the `python` function. This is done so that all the computations, and internal checks are done inside PINA.
# After we define the `Problem` class, we need to write the class methods, where each method is a function returning a residual. These functions are the ones minimized during PINN optimization, one for each condition. For example, in the domain $[0,1]$ the ODE (`ode_equation`) must be satisfied, so we return the residual obtained by subtracting the variable `u` from its derivative, which we want to drive to zero. This is done for all conditions (`ode_equation`, `initial_condition`).
#
# Once we have defined the function we need to tell the network where these methods have to be applied. For doing this we use the class `Condition`. In `Condition` we pass the location points and the function to be minimized on those points (other possibilities are allowed, see the documentation for reference).
# Once we have defined the functions, we need to tell the neural network where these methods are to be applied. To do so, we use the `Condition` class. In the `Condition` class, we pass as parameters the location points and the equation to be minimized on those points (other possibilities are allowed; see the documentation for reference).
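# For instance (a sketch, assuming the `FixedValue` helper is available in `pina.equation`), the initial condition above could be written without a custom residual function:
#
# ```python
# from pina.equation import FixedValue
#
# # assumption: FixedValue(1.0) builds the residual u - 1.0 internally
# conditions = {
#     'x0': Condition(location=CartesianDomain({'x': 0.}), equation=FixedValue(1.0)),
#     'D': Condition(location=CartesianDomain({'x': [0, 1]}), equation=Equation(ode_equation)),
# }
# ```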
#
# Finally, it's possible to define a `truth_solution` function, which can be useful if we want to plot the results and see how the real solution compares to the expected (true) solution. Notice that the `truth_solution` function is a method of the `PINN` class, but is not mandatory for problem definition.
#
# Finally, it's possible to define the `truth_solution` function, which can be useful if we want to plot the results and see a comparison of the real vs the expected solution. Notice that the `truth_solution` function is a method of the `PINN` class, but it is not mandatory for the problem definition.
# ## Build PINN object
# ## Build the `PINN` object
# In PINA we have already developed different solvers, one of them is `PINN`. The basics requirements for building a `PINN` model are a problem and a model. We have already covered the problem definition. For the model one can use the default models provided in PINA or use a custom model. We will not go into the details of model definition, Tutorial2 and Tutorial3 treat the topic in detail.
# The basic requirements for building a `PINN` model are a `Problem` and a model. We have just covered the `Problem` definition. For the model parameter, one can use either the default models provided in PINA or a custom model. We will not go into the details of model definition (see Tutorial 2 and Tutorial 3 for more details).
# In[3]:
from pina.model import FeedForward
from pina.solvers import PINN
from pina import PINN
# initialize the problem
problem = SimpleODE()
@@ -147,81 +155,41 @@ model = FeedForward(
input_dimensions=len(problem.input_variables)
)
# create the PINN object, see the PINN documentation for extra argument in the constructor
# create the PINN object
pinn = PINN(problem, model)
# Creating the pinn object is fairly simple by using the `PINN` class, different optional inputs can be passed: optimizer, batch size, ... (see [documentation](https://mathlab.github.io/PINA/) for reference).
# Creating the `PINN` object is fairly simple. Different optional parameters include: optimizer, batch size, ... (see [documentation](https://mathlab.github.io/PINA/) for reference).
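# For example (a sketch; the keyword names `optimizer` and `optimizer_kwargs` are assumptions, check the documentation for your PINA version):
#
# ```python
# import torch
#
# # assumption: the learning rate is passed through `optimizer_kwargs`
# pinn = PINN(problem, model,
#             optimizer=torch.optim.Adam,
#             optimizer_kwargs={'lr': 0.001})
# ```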
# ## Sample points in the domain and create the Trainer
# ## Sample points in the domain
# Once the `PINN` object is created, we need to generate the points for starting the optimization. For doing this we use the `.discretise_domain` method of the `AbstractProblem` class. Let's see some methods to sample in $(0,1 )$.
# Once the `PINN` object is created, we need to generate the points used to start the optimization. To do so, we use the `discretise_domain` method of the problem. Below are three examples of sampling methods on the $[0,1]$ domain:
# In[4]:
# sampling 20 points in (0, 1) with discrite step
problem.discretise_domain(20, 'grid', locations=['D'])
# sampling 20 points in [0, 1] through discretization
pinn.problem.discretise_domain(n=20, mode='grid', variables=['x'])
# sampling 20 points in (0, 1) with latin hypercube
problem.discretise_domain(20, 'latin', locations=['D'])
# sampling 20 points in [0, 1] through latin hypercube sampling
pinn.problem.discretise_domain(n=20, mode='latin', variables=['x'])
# sampling 20 points in (0, 1) randomly
problem.discretise_domain(20, 'random', locations=['D'])
# We are going to use equispaced points for sampling. We need to sample in all the conditions domains. In our case we sample in `D` and `x0`.
# In[5]:
# sampling for training
problem.discretise_domain(1, 'random', locations=['x0'])
problem.discretise_domain(20, 'grid', locations=['D'])
pinn.problem.discretise_domain(n=20, mode='random', variables=['x'])
# ### Very simple training and plotting
#
# Once we have defined the PINA model, created a network and sampled points in the domain, we have everything that is necessary for training a `PINN`. For training we use the `Trainer` class. Here we show a very short training and some method for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) is going to be tracked using a `lightining` logger, by default `CSVLogger`. If you want to track the metric by yourself without a logger, use `pina.callbacks.MetricTracker`.
# Once we have defined the PINA model, created a network, and sampled points in the domain, we have everything necessary for training a PINN. To do so, we make use of the `Trainer` class.
# In[6]:
# In[5]:
# create the trainer
from pina.trainer import Trainer
from pina.callbacks import MetricTracker
from pina import Trainer
trainer = Trainer(solver=pinn, max_epochs=3000, callbacks=[MetricTracker()])
# initialize trainer
trainer = Trainer(pinn)
# train
# train the model
trainer.train()
# After training, we can inspect the metrics logged by the trainer (by default, PINA logs the mean square error residual loss). The logged metrics can be accessed using one of the `Lightning` loggers. The final loss can be accessed via `trainer.logged_metrics`.
# In[7]:
# inspecting final loss
trainer.logged_metrics
# By using the `Plotter` class from PINA we can also make some qualitative plots of the solution.
# In[8]:
from pina.plotter import Plotter
# plotting the loss
plotter = Plotter()
plotter.plot(trainer=trainer)
# The predicted solution is almost completely overlapped with the actual one. We can also easily plot the loss:
# In[9]:
plotter.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)

File diff suppressed because one or more lines are too long


@@ -98,7 +98,7 @@ trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()])
trainer.train()
# Now the *Plotter* class is used to plot the results.
# Now the `Plotter` class is used to plot the results.
# The solution predicted by the neural network is plotted on the left, the exact solution is shown in the center, and the error between the exact and the predicted solutions is shown on the right.
# In[4]:
@@ -238,18 +238,3 @@ trainer_learn.train()
plotter.plot(trainer_learn)
# In[10]:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 6))
plotter.plot_loss(trainer, label='Standard')
plotter.plot_loss(trainer_feat, label='Static Features')
plotter.plot_loss(trainer_learn, label='Learnable Features')
plt.grid()
plt.legend()
plt.show()

File diff suppressed because one or more lines are too long


@@ -3,7 +3,7 @@
# # Tutorial 3: resolution of wave equation with hard constraint PINNs.
# ### The problem solution
# ## The problem definition
# In this tutorial we present how to solve the wave equation using hard constraint PINNs. To do so, we will build a custom torch model and pass it to the `PINN` solver.
#
@@ -76,11 +76,13 @@ class Wave(TimeDependentProblem, SpatialProblem):
problem = Wave()
# After the problem, a **torch** model is needed to solve the PINN. Usually many models are already implemented in `PINA`, but the user has the possibility to build his/her own model in `pyTorch`. The hard constraint we impose are on the boundary of the spatial domain. Specificly our solution is written as:
# ## Hard Constraint Model
# After the problem, a **torch** model is needed to solve the PINN. Usually, many models are already implemented in `PINA`, but the user has the possibility to build his/her own model in `PyTorch`. The hard constraint we impose is on the boundary of the spatial domain. Specifically, our solution is written as:
#
# $$ u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t), $$
#
# where $NN$ is the neural net output. This neural network takes as input the coordinates (in this case $x$, $y$ and $t$) and provides the unkwown field of the Wave problem. By construction it is zero on the boundaries. The residual of the equations are evaluated at several sampling points (which the user can manipulate using the method `discretise_domain`) and the loss minimized by the neural network is the sum of the residuals.
# where $NN$ is the neural net output. This neural network takes as input the coordinates (in this case $x$, $y$ and $t$) and provides the unknown field $u$. By construction, it is zero on the boundaries. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `discretise_domain`) and the loss minimized by the neural network is the sum of the residuals.
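# A minimal sketch of how such a constraint can be imposed in the `forward` pass of a custom model (hypothetical class and variable names; the tutorial's own `HardMLP` follows):
#
# ```python
# import torch
#
# class HardConstraintNet(torch.nn.Module):
#     def __init__(self, input_dim, output_dim):
#         super().__init__()
#         self.layers = torch.nn.Sequential(
#             torch.nn.Linear(input_dim, 20), torch.nn.Tanh(),
#             torch.nn.Linear(20, output_dim))
#
#     def forward(self, pts):
#         x, y = pts.extract(['x']), pts.extract(['y'])
#         # multiplier that vanishes on the boundary of the unit square
#         hard = x * y * (1 - x) * (1 - y)
#         return hard * self.layers(pts)
# ```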
# In[3]:
@@ -102,9 +104,11 @@ class HardMLP(torch.nn.Module):
return hard*self.layers(x)
# ## Train and Inference
# In this tutorial, the neural network is trained for 3000 epochs with a learning rate of 0.001 (default in `PINN`). Training takes approximately 1 minute.
# In[7]:
# In[4]:
pinn = PINN(problem, HardMLP(len(problem.input_variables), len(problem.output_variables)))
@@ -115,7 +119,7 @@ trainer.train()
# Notice that the loss on the boundaries of the spatial domain is exactly zero, as expected! After the training is completed, one can plot some results using the `Plotter` class of **PINA**.
# In[11]:
# In[5]:
plotter = Plotter()

File diff suppressed because one or more lines are too long


@@ -3,7 +3,7 @@
# # Tutorial 4: continuous convolutional filter
# In this tutorial we will show how to use the Continouous Convolutional Filter, and how to build common Deep Learning architectures with it. The implementation of the filter follows the original work [**A Continuous Convolutional Trainable Filter for Modelling Unstructured Data**](https://arxiv.org/abs/2210.13416) of Coscia Dario, Laura Meneghetti, Nicola Demo, Giovanni Stabile, and Gianluigi Rozza.
# In this tutorial, we will show how to use the Continuous Convolutional Filter, and how to build common Deep Learning architectures with it. The implementation of the filter follows the original work [**A Continuous Convolutional Trainable Filter for Modelling Unstructured Data**](https://arxiv.org/abs/2210.13416).
# First of all we import the modules needed for the tutorial, which include:
#
@@ -116,7 +116,7 @@ print(f"Filter input data has shape: {data.shape}")
#
# Suppose we would like to get an output with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$.
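# A sketch of this construction (the import path and the keyword names other than `input_numb_field` are assumptions; `number_input_fileds` and `stride` are defined earlier in the tutorial):
#
# ```python
# from pina.model.layers import ContinuousConvBlock
#
# filter_dim = [0.1, 0.1]  # size of the filter window in each direction
# cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
#                             output_numb_field=1,   # single output field
#                             filter_dim=filter_dim,
#                             stride=stride)
# ```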
# In[4]:
# In[3]:
# filter dim
@@ -138,7 +138,7 @@ cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
# That's it! In just one line of code we have created the continuous convolutional filter. By default, the `pina.model.FeedForward` neural network is initialised; more details are in the [documentation](https://mathlab.github.io/PINA/_rst/fnn.html). In case the mesh doesn't change during training, we can set the `optimize` flag to `True` to exploit optimizations for finding the points to convolve.
# In[5]:
# In[4]:
# creating the filter + optimization
@@ -151,7 +151,7 @@ cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
# Let's try to do a forward pass
# In[6]:
# In[5]:
print(f"Filter input data has shape: {data.shape}")
@@ -165,7 +165,7 @@ print(f"Filter output data has shape: {output.shape}")
# If we don't want to use the default `FeedForward` neural network, we can pass a custom torch model via the `model` keyword, as follows:
#
# In[7]:
# In[6]:
class SimpleKernel(torch.nn.Module):
@@ -196,7 +196,7 @@ cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,
#
# Let's see how we can build an MNIST classifier using a continuous convolutional filter. We will use the MNIST dataset from PyTorch. To keep training times short, we use only 6000 samples for training and 1000 samples for testing.
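# A sketch of how such a restricted loader can be built with `SubsetRandomSampler` (dataset path and batch size are placeholders):
#
# ```python
# import torch
# import torchvision
# from torch.utils.data import DataLoader, SubsetRandomSampler
#
# transform = torchvision.transforms.ToTensor()
# train_data = torchvision.datasets.MNIST('data', train=True, download=True, transform=transform)
# test_data = torchvision.datasets.MNIST('data', train=False, download=True, transform=transform)
# # keep only the first 6000 / 1000 samples
# train_loader = DataLoader(train_data, batch_size=8, sampler=SubsetRandomSampler(torch.arange(6000)))
# test_loader = DataLoader(test_data, batch_size=8, sampler=SubsetRandomSampler(torch.arange(1000)))
# ```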
# In[8]:
# In[7]:
from torch.utils.data import DataLoader, SubsetRandomSampler
@@ -233,7 +233,7 @@ test_loader = DataLoader(train_data, batch_size=batch_size,
# Let's now build a simple classifier. The MNIST dataset is composed of tensors of shape `[batch, 1, 28, 28]`, but we can imagine them as single-field functions where pixel $ij$ has coordinates $x=i, y=j$ in a $[0, 27]\times[0,27]$ domain, and the pixel values are the field values. We just need a function to transform the regular tensor into a tensor compatible with the continuous filter:
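# A sketch of one way to perform this transformation (the layout expected by the filter, assumed here to be `[batch, fields, n_points, dim + 1]` with coordinates first and the pixel value last, should be checked against the filter documentation; the tutorial's own implementation follows):
#
# ```python
# def transform_input(x):
#     batch_size = x.shape[0]
#     # pixel coordinates of the 28 x 28 grid
#     idx = torch.arange(28, dtype=x.dtype)
#     xx, yy = torch.meshgrid(idx, idx, indexing='ij')
#     coords = torch.stack([xx.flatten(), yy.flatten()], dim=-1)   # [784, 2]
#     coords = coords.expand(batch_size, 1, -1, -1)                # [batch, 1, 784, 2]
#     values = x.reshape(batch_size, 1, 28 * 28, 1)                # [batch, 1, 784, 1]
#     return torch.cat([coords, values], dim=-1)                   # [batch, 1, 784, 3]
# ```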
# In[9]:
# In[8]:
def transform_input(x):
@@ -260,7 +260,7 @@ print(f"Transformed MNIST image shape: {image_transformed.shape}")
# We can now build a simple classifier! We will use just one convolutional filter followed by a feedforward neural network
# In[11]:
# In[9]:
# setting the seed
@@ -302,7 +302,7 @@ net = ContinuousClassifier()
# Let's try to train it using a simple PyTorch training loop. We train for just 1 epoch using the Adam optimizer with a $0.001$ learning rate.
# In[14]:
# In[10]:
# setting the seed
@@ -338,7 +338,7 @@ for epoch in range(1): # loop over the dataset multiple times
# Let's see the performance on the train set!
# In[15]:
# In[11]:
correct = 0
@@ -363,7 +363,7 @@ print(
#
# As a toy problem, we will now build an autoencoder for the function $f(x,y)=\sin(\pi x)\sin(\pi y)$ on the unit circle domain centered at $(0.5, 0.5)$. We will also see the ability to up-sample the results (once trained) without retraining. Let's first create the input and visualize it; we will start with a mesh of $100$ points.
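# A sketch of how such an input could be generated, sampling points uniformly inside the circle of radius $0.5$ centered at $(0.5, 0.5)$ and evaluating $f$ on them (the actual tutorial code follows):
#
# ```python
# import math
# import torch
#
# n_points = 100
# theta = 2 * math.pi * torch.rand(n_points)
# r = 0.5 * torch.sqrt(torch.rand(n_points))   # sqrt gives a uniform density over the disc
# x = 0.5 + r * torch.cos(theta)
# y = 0.5 + r * torch.sin(theta)
# f = torch.sin(math.pi * x) * torch.sin(math.pi * y)
# ```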
# In[16]:
# In[12]:
# create inputs
@@ -406,7 +406,7 @@ plt.show()
# Let's now build a simple autoencoder using the continuous convolutional filter. The data is clearly unstructured, and a standard convolutional filter might not work without projecting or interpolating first. Let's first build an `Encoder` and a `Decoder` class, and then an `Autoencoder` class that contains both.
# In[19]:
# In[13]:
class Encoder(torch.nn.Module):
@@ -463,7 +463,7 @@ class Decoder(torch.nn.Module):
# Very good! Notice that in the `forward` pass of the `Decoder` class we have used the `.transpose()` method of the `ContinuousConvolution` class. This method accepts the `weights` for upsampling and the `grid` on which to upsample. Let's now build the autoencoder! We set the hidden dimension in the `hidden_dimension` variable. We apply a sigmoid to the output since the field values lie in $[0, 1]$.
# In[20]:
# In[14]:
class Autoencoder(torch.nn.Module):
@@ -488,7 +488,7 @@ net = Autoencoder()
# Let's now train the autoencoder, minimizing the mean square error loss and optimizing using Adam.
# In[21]:
# In[15]:
# setting the seed
@@ -517,7 +517,7 @@ for epoch in range(max_epochs): # loop over the dataset multiple times
# Let's visualize the two solutions side by side!
# In[22]:
# In[16]:
net.eval()
@@ -540,7 +540,7 @@ plt.show()
# As we can see, the two are really similar! We can compute the $l_2$ error quite easily as well:
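# For reference, a relative $l_2$ error can be computed as follows (a sketch; the tutorial's own helper is defined below):
#
# ```python
# def l2_error(input_, target):
#     return torch.linalg.norm(input_ - target) / torch.linalg.norm(target)
# ```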
# In[23]:
# In[17]:
def l2_error(input_, target):
@@ -556,7 +556,7 @@ print(f'l2 error: {l2_error(input_data[0, 0, :, -1], output[0, 0, :, -1]):.2%}')
#
# Suppose we already have the hidden representation and we want to upsample it on a different grid with more points. Let's see how to do it:
# In[24]:
# In[18]:
# setting the seed
@@ -589,7 +589,7 @@ plt.show()
# As we can see, we have a very good approximation of the original function, even though some noise is present. Let's calculate the error now:
# In[25]:
# In[19]:
print(f'l2 error: {l2_error(input_data2[0, 0, :, -1], output[0, 0, :, -1]):.2%}')
@@ -598,7 +598,7 @@ print(f'l2 error: {l2_error(input_data2[0, 0, :, -1], output[0, 0, :, -1]):.2%}'
# ### Autoencoding at different resolution
# In the previous example we already had the hidden representation (of the original input) and we used it to upsample. Sometimes, however, we have a solution on a finer mesh and we simply want to encode it. This can be done without retraining! This procedure can be useful when we have many points in the mesh and only a smaller subset of them is needed for training. Let's see the results:
# In[26]:
# In[20]:
# setting the seed

File diff suppressed because one or more lines are too long


@@ -6,8 +6,7 @@
# In this tutorial we are going to solve the 2D Darcy flow problem, presented in [Fourier Neural Operator for Parametric Partial Differential Equations](https://openreview.net/pdf?id=c8P9NQVtmnO). First of all, we import the modules needed for the tutorial. `scipy` is needed for input/output operations; run `pip install scipy` to install it.
# In[29]:
# In[1]:
from scipy import io
@@ -32,7 +31,7 @@ import matplotlib.pyplot as plt
# Specifically, $u$ is the flow pressure, $k$ is the permeability field and $f$ is the forcing function. The Darcy flow can parameterize a variety of systems including flow through porous media, elastic materials and heat conduction. Here we define the domain as a 2D unit square with Dirichlet boundary conditions. The dataset is taken from the authors' original reference.
#
# In[36]:
# In[2]:
# download the dataset
@@ -49,7 +48,7 @@ y = torch.tensor(data['y'], dtype=torch.float)[0]
# Let's visualize some data
# In[88]:
# In[3]:
plt.subplot(1, 2, 1)
@@ -63,7 +62,7 @@ plt.show()
# We now create the neural operator class. It is a very simple class, inheriting from `AbstractProblem`.
# In[69]:
# In[4]:
class NeuralOperatorSolver(AbstractProblem):
@@ -80,7 +79,7 @@ problem = NeuralOperatorSolver()
#
# We will first solve the problem using a feedforward neural network. We will use the `SupervisedSolver`, since we are training with supervised learning.
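# A sketch of this setup (the model sizes are placeholders and the `SupervisedSolver` keyword names are assumptions; see the documentation):
#
# ```python
# from pina.model import FeedForward
# from pina.solvers import SupervisedSolver
# from pina.trainer import Trainer
#
# # placeholder dimensions: in the tutorial they match the flattened grid of the dataset
# model = FeedForward(input_dimensions=64, output_dimensions=64)
# solver = SupervisedSolver(problem=problem, model=model)
# trainer = Trainer(solver=solver, max_epochs=10)
# trainer.train()
# ```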
# In[78]:
# In[5]:
# make model
@@ -97,7 +96,7 @@ trainer.train()
# The final loss is pretty high... We can calculate the error by importing `LpLoss`.
# In[79]:
# In[6]:
from pina.loss import LpLoss
@@ -117,7 +116,7 @@ print(f'Final error testing {err:.2f}%')
#
# We will now move on to solving the problem using an FNO. Since we are learning an operator, this approach is better suited, as we shall see.
# In[70]:
# In[7]:
# make model
@@ -141,7 +140,7 @@ trainer.train()
# We can clearly see that with 1/3 of the total epochs the loss is already lower. Let's see how it performs in testing. Notice that the number of parameters is much higher than for a `FeedForward` network; we suggest using a GPU or TPU to speed up training.
# In[77]:
# In[8]:
err = float(metric_err(u_train.squeeze(-1), solver.models[0](k_train).squeeze(-1)).mean())*100