Updates to tutorial and run post-Codacy changes

Matteo Bertocchi
2025-03-10 17:18:48 +01:00
committed by Nicola Demo
parent 9e55746546
commit b38b0894b1
27 changed files with 952 additions and 393 deletions


@@ -9,7 +9,7 @@
#
# First of all, some useful imports.
-# In[ ]:
+# In[1]:
## routine needed to run the notebook on Google Colab
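For readers following along outside the notebook, the Colab routine mentioned above is typically a guarded install; a minimal sketch, assuming PINA is installed from PyPI under the name `pina-mathlab` (check the release you need):

try:
    import google.colab  # this module only exists inside Colab
    IN_COLAB = True
except ImportError:
    IN_COLAB = False
if IN_COLAB:
    get_ipython().system('pip install pina-mathlab')  # install PINA on Colab only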
@@ -54,7 +54,7 @@ warnings.filterwarnings('ignore')
# Now, the wave problem is written in PINA code as a class, inheriting from `SpatialProblem` and `TimeDependentProblem`, since we deal with spatial and time-dependent variables. The equations are written as `conditions` that should be satisfied in the corresponding domains. `truth_solution` is the exact solution, which will be compared with the predicted one.
-# In[ ]:
+# In[2]:
class Wave(TimeDependentProblem, SpatialProblem):
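As a reminder of the shape such a problem class takes, here is a minimal sketch. The module paths (`pina.geometry`, `pina.operators`, `pina.equation`) and the `Condition(location=..., equation=...)` signature follow older PINA releases and are assumptions here, since they vary between versions; the domains, labels, and the exact set of conditions in the tutorial may differ:

from pina import Condition
from pina.problem import SpatialProblem, TimeDependentProblem
from pina.geometry import CartesianDomain
from pina.equation import Equation, FixedValue
from pina.operators import grad, laplacian

class Wave(TimeDependentProblem, SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})

    def wave_equation(input_, output_):
        # residual of u_tt - (u_xx + u_yy) = 0
        u_t = grad(output_, input_, components=['u'], d=['t'])
        u_tt = grad(u_t, input_, components=['dudt'], d=['t'])
        nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
        return nabla_u - u_tt

    conditions = {
        't0': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0}),
            equation=FixedValue(0.0)),          # placeholder initial condition
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
            equation=Equation(wave_equation)),  # PDE residual in the interior
    }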
@@ -124,7 +124,7 @@ problem = Wave()
#
# where $NN$ is the neural net output. This neural network takes as input the coordinates (in this case $x$, $y$ and $t$) and provides the unknown field $u$. By construction, it is zero on the boundaries. The residuals of the equations are evaluated at several sampling points (which the user can control using the method `discretise_domain`), and the loss minimized by the neural network is the sum of these residuals.
-# In[ ]:
+# In[3]:
class HardMLP(torch.nn.Module):
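The hard-constraint trick itself is plain PyTorch: multiply the raw network output by a mask such as $x(1-x)y(1-y)$, which vanishes on the boundary of the unit square, so the predicted $u$ is zero on the boundary by construction. A minimal sketch (layer sizes and the column ordering of the input points are assumptions, not the tutorial's verbatim code):

import torch

class HardMLP(torch.nn.Module):
    def __init__(self, input_dim=3, hidden=64, output_dim=1):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, output_dim))

    def forward(self, pts):
        # columns assumed ordered as (x, y, t) on the unit square
        x, y = pts[:, 0:1], pts[:, 1:2]
        mask = x * (1 - x) * y * (1 - y)  # vanishes on the spatial boundary
        return mask * self.net(pts)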
@@ -155,7 +155,7 @@ class HardMLP(torch.nn.Module):
# In this tutorial, the neural network is trained for 1000 epochs with a learning rate of 0.001 (the default in `PINN`). As always, we will log using `TensorBoard`.
-# In[ ]:
+# In[4]:
# generate the data
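Before training, the collocation points are sampled with `discretise_domain`; a hedged usage sketch, with the `n`/`mode`/`locations` keywords assumed from older PINA releases (newer versions may name them differently):

problem = Wave()
# sample 1000 collocation points in the domain of every condition
problem.discretise_domain(n=1000, mode='random', locations='all')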
@@ -187,15 +187,14 @@ trainer = Trainer(
train_size=1.0,
val_size=0.0,
test_size=0.0,
-logger=TensorBoardLogger("tutorial_logs"),
-enable_progress_bar=False,
+logger=TensorBoardLogger("tutorial_logs")
)
trainer.train()
# Let's now inspect the logs to see how the losses vary during training. For this, we will use `TensorBoard`.
-# In[ ]:
+# In[5]:
# Load the TensorBoard extension
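For context, the full training cell after this change corresponds roughly to the sketch below; the `PINN` and `Trainer` import paths and the `PINN(problem=..., model=...)` signature are assumptions based on recent PINA releases:

from pina import Trainer
from pina.solvers import PINN
from lightning.pytorch.loggers import TensorBoardLogger

model = HardMLP()
pinn = PINN(problem=problem, model=model)  # learning rate defaults to 0.001
trainer = Trainer(
    solver=pinn,
    max_epochs=1000,
    train_size=1.0,
    val_size=0.0,
    test_size=0.0,
    logger=TensorBoardLogger("tutorial_logs")
)
trainer.train()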
@@ -205,7 +204,7 @@ get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")
# Notice that the loss on the boundaries of the spatial domain is exactly zero, as expected! After training is complete, one can plot some results using `matplotlib`. We plot the predicted output on the left, the true solution in the center, and the difference on the right using the `plot_solution` function.
-# In[ ]:
+# In[6]:
@torch.no_grad()
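A sketch of what such a helper can look like, assuming the solver can be evaluated on a plain tensor of $(x, y, t)$ points (PINA solvers may instead require a labelled tensor) and with the exact solution passed explicitly as a callable `exact`, whereas the tutorial's version reads it from the problem:

import matplotlib.pyplot as plt
import torch

@torch.no_grad()
def plot_solution(solver, exact, time, n=100):
    # evaluate predicted and exact fields on an n x n grid at a fixed time
    x = torch.linspace(0, 1, n)
    X, Y = torch.meshgrid(x, x, indexing='ij')
    T = torch.full_like(X, time)
    pts = torch.stack([X.ravel(), Y.ravel(), T.ravel()], dim=1)
    pred = solver(pts).reshape(n, n)
    true = exact(X, Y, T)
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    fields = (pred, true, (pred - true).abs())
    titles = ('prediction', 'exact', '|error|')
    for ax, field, title in zip(axes, fields, titles):
        im = ax.pcolormesh(X, Y, field)
        ax.set_title(f'{title}, t={time}')
        fig.colorbar(im, ax=ax)
    plt.show()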
@@ -241,7 +240,7 @@ def plot_solution(solver, time):
# Let's take a look at the results at different times, for example `0.0`, `0.5` and `1.0`:
-# In[ ]:
+# In[7]:
plt.figure(figsize=(12, 6))
@@ -262,7 +261,7 @@ plot_solution(solver=pinn, time=1)
#
# Let us build the network first:
-# In[ ]:
+# In[8]:
class HardMLPtime(torch.nn.Module):
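The same trick extends to time: writing $u = u_0(x,y) + t\,x(1-x)y(1-y)\,NN(x,y,t)$ makes $u$ equal to the initial profile $u_0$ at $t=0$ and keeps it zero on the spatial boundary whenever $u_0$ is. A sketch under these assumptions (the tutorial's actual ansatz may differ, e.g. using a $t^2$ factor to also fix the initial velocity):

import torch

class HardMLPtime(torch.nn.Module):
    def __init__(self, u0, input_dim=3, hidden=64, output_dim=1):
        super().__init__()
        self.u0 = u0  # callable returning the initial profile u0(x, y)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, output_dim))

    def forward(self, pts):
        x, y, t = pts[:, 0:1], pts[:, 1:2], pts[:, 2:3]
        mask = x * (1 - x) * y * (1 - y)  # zero on the spatial boundary
        # equals u0 at t = 0, so the initial condition holds by construction
        return self.u0(x, y) + t * mask * self.net(pts)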
@@ -298,7 +297,7 @@ class HardMLPtime(torch.nn.Module):
# Now let's train with the same configuration as in the previous test.
-# In[ ]:
+# In[9]:
# define model
@@ -316,15 +315,14 @@ trainer = Trainer(
train_size=1.0,
val_size=0.0,
test_size=0.0,
-logger=TensorBoardLogger("tutorial_logs"),
-enable_progress_bar=False,
+logger=TensorBoardLogger("tutorial_logs")
)
trainer.train()
# We can clearly see that the loss is much lower now. Let's plot the results:
-# In[ ]:
+# In[10]:
plt.figure(figsize=(12, 6))
@@ -339,7 +337,7 @@ plot_solution(solver=pinn, time=1)
# We can now see that the results are much better! This is because the network was previously not learning the initial condition correctly, leading to a poor solution as time evolved. By imposing the initial condition exactly, the network is able to solve the problem correctly. We can also see in TensorBoard how the two losses decreased:
-# In[ ]:
+# In[11]:
get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")