update plotter
committed by Nicola Demo
parent 934ae409ff
commit 0d38de5afe
tutorials/tutorial1/tutorial.py (25 changed lines)
@@ -35,7 +35,7 @@
 #
 # ```python
 # from pina.problem import SpatialProblem
-# from pina import CartesianProblem
+# from pina.geometry import CartesianProblem
 #
 # class SimpleODE(SpatialProblem):
 #
@@ -54,7 +54,7 @@
 
 
 from pina.problem import SpatialProblem, TimeDependentProblem
-from pina import CartesianDomain
+from pina.geometry import CartesianDomain
 
 class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 
@@ -77,7 +77,7 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 #
 # Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. To do this, we load the **PINA** operators from the `pina.operators` module. Again, we consider Equation (1) and represent it in **PINA**:
 
-# In[3]:
+# In[2]:
 
 
 from pina.problem import SpatialProblem
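For context, only the import line of this cell is visible in the diff. The full `SimpleODE` class the tutorial builds here presumably looks like the sketch below, which writes the residual of Equation (1) (the ODE `du/dx = u` with `u(0) = 1`) using the `grad` operator; the `pina.equation` imports and the exact class body are assumptions based on the PINA API of this period, not lines from this commit:

```python
from pina import Condition
from pina.problem import SpatialProblem
from pina.geometry import CartesianDomain
from pina.operators import grad
from pina.equation import Equation, FixedValue  # assumed module path


class SimpleODE(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1]})

    # residual of Equation (1): du/dx - u = 0
    def ode_equation(input_, output_):
        u_x = grad(output_, input_, components=['u'], d=['x'])
        u = output_.extract(['u'])
        return u_x - u

    # one condition for the initial value, one for the ODE residual
    conditions = {
        'x0': Condition(location=CartesianDomain({'x': 0.}),
                        equation=FixedValue(1.)),
        'D': Condition(location=CartesianDomain({'x': [0, 1]}),
                       equation=Equation(ode_equation)),
    }
```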
@@ -133,7 +133,7 @@ problem = SimpleODE()
 #
 # Data for training can come in the form of direct numerical simulation results, or of points sampled in the domains. For unsupervised learning we only need the collocation points for training, i.e. the points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
 
-# In[4]:
+# In[3]:
 
 
 # sampling 20 points in [0, 1] through discretization in all locations
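The three sampling calls the prose refers to are only partially visible in this diff; a consolidated sketch might look like the following, where `'random'` and `'lh'` appear in the hunks below but the `'grid'` mode string is an assumption:

```python
# three ways to sample 20 collocation points in [0, 1]
problem.discretise_domain(n=20, mode='grid', variables=['x'])    # equispaced grid (assumed mode)
problem.discretise_domain(n=20, mode='random', variables=['x'])  # uniform random
problem.discretise_domain(n=20, mode='lh', variables=['x'])      # latin hypercube
```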
@@ -149,7 +149,7 @@ problem.discretise_domain(n=20, mode='random', variables=['x'])
 
 # We are going to use Latin hypercube points for sampling. We need to sample in the domain of every condition; in our case we sample in `D` and `x0`.
 
-# In[5]:
+# In[4]:
 
 
 # sampling for training
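Since the prose says to sample in both `D` and `x0`, the cell presumably contains a call per condition. Only the `D` call is visible in the next hunk header; the `x0` line below is an assumption by symmetry:

```python
# sampling for training: points in every condition domain
problem.discretise_domain(20, 'lh', locations=['D'])
problem.discretise_domain(20, 'lh', locations=['x0'])  # assumed companion call
```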
@@ -159,7 +159,7 @@ problem.discretise_domain(20, 'lh', locations=['D'])
 
 # The points are saved in a Python `dict` and can be accessed through the `input_pts` attribute of the problem.
 
-# In[6]:
+# In[5]:
 
 
 print('Input points:', problem.input_pts)
@@ -168,7 +168,7 @@ print('Input points labels:', problem.input_pts['D'].labels)
 
 # To visualize the sampled points we can use the `.plot_samples` method of the `Plotter` class.
 
-# In[7]:
+# In[6]:
 
 
 from pina import Plotter
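Put together with the import above, the visualization step is just the pair below; the instance name `pl` matches the hunk headers, while the bare `Plotter()` construction is assumed:

```python
from pina import Plotter

# scatter the collocation points sampled for each condition
pl = Plotter()
pl.plot_samples(problem=problem)
```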
@@ -181,10 +181,11 @@ pl.plot_samples(problem=problem)
 
 # Once we have defined the problem and generated the data, we can start the modelling. Here we choose a `FeedForward` neural network available in `pina.model`, and we train it using the `PINN` solver from `pina.solvers`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE residual error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callbacks.MetricTracker`.
 
-# In[8]:
+# In[7]:
 
 
-from pina import PINN, Trainer
+from pina import Trainer
+from pina.solvers import PINN
 from pina.model import FeedForward
 from pina.callbacks import MetricTracker
 
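For reference, a minimal end-to-end sketch of the training step described above, using the imports as rewritten by this commit; the layer sizes and `max_epochs` value are illustrative assumptions, not values taken from this diff:

```python
from pina import Trainer
from pina.solvers import PINN
from pina.model import FeedForward
from pina.callbacks import MetricTracker

# a small fully connected network mapping x -> u (sizes are assumptions)
model = FeedForward(
    input_dimensions=len(problem.input_variables),
    output_dimensions=len(problem.output_variables),
    layers=[10, 10],
)

# physics-informed solver wrapping the problem and the model
pinn = PINN(problem=problem, model=model)

# short training run; MetricTracker stores the metrics without a logger
trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()])
trainer.train()
```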
@@ -209,7 +210,7 @@ trainer.train()
 
 # After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed through `trainer.logged_metrics`.
 
-# In[9]:
+# In[8]:
 
 
 # inspecting final loss
@@ -218,7 +219,7 @@ trainer.logged_metrics
 
 # By using the `Plotter` class from **PINA** we can also make some qualitative plots of the solution.
 
-# In[12]:
+# In[9]:
 
 
 # plotting the solution
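The plotting call itself appears in the next hunk header; completed with a comment, it reads:

```python
# plot the PINN solution (and the true solution, when the problem defines one)
pl.plot(solver=pinn)
```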
@@ -227,7 +228,7 @@ pl.plot(solver=pinn)
 
 # The computed solution is plotted on top of the true one, and the two are practically indistinguishable. We can also easily plot the loss:
 
-# In[14]:
+# In[10]:
 
 
 pl.plot_loss(trainer=trainer, label='mean_loss', logy=True)