Update Tensorboard use

committed by Nicola Demo
parent b38b0894b1
commit 67a2b0796c
tutorials/tutorial1/tutorial.ipynb (8 changes, vendored)
@@ -505,7 +505,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 10,
+    "execution_count": null,
     "id": "fcac93e4",
     "metadata": {},
     "outputs": [
@@ -546,10 +546,8 @@
     }
    ],
    "source": [
-    "# Load the TensorBoard extension\n",
-    "%load_ext tensorboard\n",
-    "# Show saved losses\n",
-    "%tensorboard --logdir 'tutorial_logs'"
+    "print('\\nTo load TensorBoard run load_ext tensorboard on your terminal')\n",
+    "print(\"To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal\\n\")"
    ]
   },
   {
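The replacement cell only prints instructions rather than embedding TensorBoard in the notebook. As a minimal sketch of the workflow those instructions point to (assuming the `tensorboard` package is installed and the trainer wrote its logs to `tutorial_logs`), the server can be started from Python or, equivalently, with `tensorboard --logdir 'tutorial_logs'` in a terminal:

import subprocess

# Start the TensorBoard server against the saved logs; by default the
# dashboard is served on http://localhost:6006.
proc = subprocess.Popen(["tensorboard", "--logdir", "tutorial_logs"])

# ...inspect the loss curves in a browser, then shut the server down.
proc.terminate()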
tutorials/tutorial1/tutorial.py (8 changes, vendored)
@@ -261,13 +261,11 @@ plt.legend()
 
 # The solution is overlapped with the actual one, and they are barely indistinguishable. We can also take a look at the loss using `TensorBoard`:
 
-# In[10]:
+# In[ ]:
 
 
-# Load the TensorBoard extension
-get_ipython().run_line_magic('load_ext', 'tensorboard')
-# Show saved losses
-get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")
+print('\nTo load TensorBoard run load_ext tensorboard on your terminal')
+print("To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal\n")
 
 
 # As we can see the loss has not reached a minimum, suggesting that we could train for longer! Alternatively, we can also take look at the loss using callbacks. Here we use `MetricTracker` from `pina.callback`:
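The exported script now prints the same hint instead of launching TensorBoard. For readers who want the logged loss values without opening the dashboard at all, the event files under `tutorial_logs` can be parsed with TensorBoard's own reader; a minimal sketch, assuming the default event-file layout (the exact scalar tag names depend on how the PINA trainer logs them):

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Parse the event files written during training.
acc = EventAccumulator("tutorial_logs")
acc.Reload()

# List the scalar tags that were logged, then print one series.
tags = acc.Tags()["scalars"]
print(tags)
for event in acc.Scalars(tags[0]):
    print(event.step, event.value)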
tutorials/tutorial2/tutorial.ipynb (56 changes, vendored)
File diff suppressed because one or more lines are too long
tutorials/tutorial2/tutorial.py (7 changes, vendored)
@@ -311,12 +311,11 @@ trainer_learn.train()
 
 # Let us compare the training losses for the various types of training
 
-# In[10]:
+# In[ ]:
 
 
-# Load the TensorBoard extension
-get_ipython().run_line_magic('load_ext', 'tensorboard')
-get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")
+print('To load TensorBoard run load_ext tensorboard on your terminal')
+print("To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal")
 
 
 # ## What's next?
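The `get_ipython().run_line_magic(...)` calls removed here are simply the exported form of the notebook magics, which is why they fail when the `.py` tutorial runs under plain Python. A guarded sketch of that equivalence (illustrative only, not part of the tutorials):

from IPython import get_ipython

ip = get_ipython()  # returns None outside an IPython session
if ip is not None:
    # Same effect as %load_ext tensorboard and %tensorboard in the notebook.
    ip.run_line_magic("load_ext", "tensorboard")
    ip.run_line_magic("tensorboard", "--logdir 'tutorial_logs'")
else:
    print("Run tensorboard --logdir 'tutorial_logs' from a terminal instead.")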
tutorials/tutorial3/tutorial.ipynb (99 changes, vendored)
File diff suppressed because one or more lines are too long
tutorials/tutorial3/tutorial.py (13 changes, vendored)
@@ -194,12 +194,11 @@ trainer.train()
 
 # Let's now plot the logging to see how the losses vary during training. For this, we will use `TensorBoard`.
 
-# In[5]:
+# In[ ]:
 
 
-# Load the TensorBoard extension
-get_ipython().run_line_magic('load_ext', 'tensorboard')
-get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")
+print('\nTo load TensorBoard run load_ext tensorboard on your terminal')
+print("To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal\n")
 
 
 # Notice that the loss on the boundaries of the spatial domain is exactly zero, as expected! After the training is completed one can now plot some results using the `matplotlib`. We plot the predicted output on the left side, the true solution at the center and the difference on the right side using the `plot_solution` function.
@@ -335,12 +334,12 @@ plt.figure(figsize=(12, 6))
 plot_solution(solver=pinn, time=1)
 
 
-# We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when time evolved. By imposing the initial condition the network is able to correctly solve the problem. We can also see using Tensorboard how the two losses decreased:
+# We can see now that the results are way better! This is due to the fact that previously the network was not learning correctly the initial conditon, leading to a poor solution when time evolved. By imposing the initial condition the network is able to correctly solve the problem. We can also see how the two losses decreased using Tensorboard.
 
-# In[11]:
+# In[ ]:
 
 
-get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")
+print("To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal")
 
 
 # ## What's next?
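For notebook users who still want the dashboard inline after this change, TensorBoard ships a notebook helper that can stand in for the removed `%tensorboard` magic; a minimal sketch, assuming a Jupyter environment with `tensorboard` installed:

from tensorboard import notebook

# Start (or reuse) a TensorBoard instance and embed it in the output cell.
notebook.start("--logdir tutorial_logs")

# List the TensorBoard instances currently running in this environment.
notebook.list()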