{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Introduction to `Trainer` class\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial11/tutorial.ipynb)\n",
"\n",
"In this tutorial, we will delve deeper into the functionality of the `Trainer` class, which serves as the cornerstone for training **PINA** [Solvers](https://mathlab.github.io/PINA/_rst/_code.html#solvers). \n",
"\n",
"The `Trainer` class offers a plethora of features aimed at improving model accuracy, reducing training time and memory usage, facilitating logging visualization, and more thanks to the amazing job done by the PyTorch Lightning team!\n",
"\n",
"Our leading example will revolve around solving a simple regression problem where we want to approximate the following function with a Neural Net model $\\mathcal{M}_{\\theta}$:\n",
"$$y = x^3$$\n",
"by having only a set of $20$ observations $\\{x_i, y_i\\}_{i=1}^{20}$, with $x_i \\sim\\mathcal{U}[-3, 3]\\;\\;\\forall i\\in(1,\\dots,20)$.\n",
"\n",
"Let's start by importing useful modules!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" import google.colab\n",
"\n",
" IN_COLAB = True\n",
"except:\n",
" IN_COLAB = False\n",
"if IN_COLAB:\n",
" !pip install \"pina-mathlab[tutorial]\"\n",
"\n",
"import torch\n",
"import warnings\n",
"\n",
"from pina import Trainer\n",
"from pina.solver import SupervisedSolver\n",
"from pina.model import FeedForward\n",
"from pina.problem.zoo import SupervisedProblem\n",
"\n",
"warnings.filterwarnings(\"ignore\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define problem and solver."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# defining the problem\n",
"x_train = torch.empty((20, 1)).uniform_(-3, 3)\n",
"y_train = x_train.pow(3) + 3 * torch.randn_like(x_train)\n",
"\n",
"problem = SupervisedProblem(x_train, y_train)\n",
"\n",
"# build the model\n",
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"\n",
"# create the SupervisedSolver object\n",
"solver = SupervisedSolver(problem, model, use_lt=False)"
]
},
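{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before training, it helps to look at the data. The following is a quick sketch that plots the noisy observations against the underlying function $y = x^3$ (it assumes `matplotlib` is available, as it is with the tutorial extras)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"# scatter the noisy training observations against the underlying function\n",
"x_plot = torch.linspace(-3, 3, 200).reshape(-1, 1)\n",
"plt.scatter(x_train, y_train, label=\"noisy observations\")\n",
"plt.plot(x_plot, x_plot.pow(3), \"r--\", label=\"$y = x^3$\")\n",
"plt.xlabel(\"x\")\n",
"plt.ylabel(\"y\")\n",
"plt.legend()\n",
"plt.show()"
]
},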
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Till now we just followed the extact step of the previous tutorials. The `Trainer` object\n",
"can be initialized by simiply passing the `SupervisedSolver` solver"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (mps), used: True\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
}
],
"source": [
"trainer = Trainer(solver=solver)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trainer Accelerator\n",
"\n",
"When creating the `Trainer`, **by default** the most performing `accelerator` for training which is available in your system will be chosen, ranked as follows:\n",
"1. [TPU](https://cloud.google.com/tpu/docs/intro-to-tpu)\n",
"2. [IPU](https://www.graphcore.ai/products/ipu)\n",
"3. [HPU](https://habana.ai/)\n",
"4. [GPU](https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html#:~:text=What%20does%20GPU%20stand%20for,video%20editing%2C%20and%20gaming%20applications) or [MPS](https://developer.apple.com/metal/pytorch/)\n",
"5. CPU\n",
"\n",
"For setting manually the `accelerator` run:\n",
"\n",
"* `accelerator = {'gpu', 'cpu', 'hpu', 'mps', 'cpu', 'ipu'}` sets the accelerator to a specific one"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
}
],
"source": [
"trainer = Trainer(solver=solver, accelerator=\"cpu\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, even if a `GPU` is available on the system, it is not used since we set `accelerator='cpu'`."
]
},
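{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are unsure which hardware is available, `accelerator='auto'` lets Lightning choose for you. Below is a short sketch; the commented line uses `devices`, the standard Lightning argument for selecting how many devices to use, which we assume the **PINA** `Trainer` forwards just like `accelerator`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# let Lightning pick the best available accelerator automatically\n",
"trainer = Trainer(solver=solver, accelerator=\"auto\")\n",
"\n",
"# on a multi-GPU machine you could also request a specific number of devices\n",
"# (uncomment if the hardware is available):\n",
"# trainer = Trainer(solver=solver, accelerator=\"gpu\", devices=2)"
]
},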
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trainer Logging\n",
"\n",
"In **PINA** you can log metrics in different ways. The simplest approach is to use the `MetricTracker` class from `pina.callbacks`, as seen in the [*Introduction to Physics Informed Neural Networks training*](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb) tutorial.\n",
"\n",
"However, especially when we need to train multiple times to get an average of the loss across multiple runs, `lightning.pytorch.loggers` might be useful. Here we will use `TensorBoardLogger` (more on [logging](https://lightning.ai/docs/pytorch/stable/extensions/logging.html) here), but you can choose the one you prefer (or make your own one).\n",
"\n",
"We will now import `TensorBoardLogger`, do three runs of training, and then visualize the results. Notice we set `enable_model_summary=False` to avoid model summary specifications (e.g. number of parameters); set it to `True` if needed."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "775a2d088e304b2589631b176c9e99e2",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=100` reached.\n",
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d858dc0a31214f5f86aae78823525b56",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=100` reached.\n",
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "739bf2009f7a48a1b59b7df695276672",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=100` reached.\n"
]
}
],
"source": [
"from lightning.pytorch.loggers import TensorBoardLogger\n",
"\n",
"# three run of training, by default it trains for 1000 epochs, we set the max to 100\n",
"# we reinitialize the model each time otherwise the same parameters will be optimized\n",
"for _ in range(3):\n",
" model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
" )\n",
" solver = SupervisedSolver(problem, model, use_lt=False)\n",
" trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" logger=TensorBoardLogger(save_dir=\"training_log\"),\n",
" enable_model_summary=False,\n",
" train_size=1.0,\n",
" val_size=0.0,\n",
" test_size=0.0,\n",
" max_epochs=100\n",
" )\n",
" trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now visualize the logs by simply running `tensorboard --logdir=training_log/` in the terminal. You should obtain a webpage similar to the one shown below if running for 1000 epochs:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p align=\\\"center\\\">\n",
"<img src=\"../static/logging.png\" alt=\\\"Logging API\\\" width=\\\"400\\\"/>\n",
"</p>"
]
},
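{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to inspect the logs programmatically, for example to average the final loss across the three runs, the event files can be read back with TensorBoard's `EventAccumulator`. This is a hedged sketch: it assumes the `tensorboard` package is installed, the default `lightning_logs/version_*` folder layout of `TensorBoardLogger`, and the `train_loss` tag logged by **PINA**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import glob\n",
"\n",
"from tensorboard.backend.event_processing.event_accumulator import (\n",
"    EventAccumulator,\n",
")\n",
"\n",
"final_losses = []\n",
"for run_dir in sorted(glob.glob(\"training_log/lightning_logs/version_*\")):\n",
"    ea = EventAccumulator(run_dir)\n",
"    ea.Reload()  # load the event file from disk\n",
"    if \"train_loss\" in ea.Tags().get(\"scalars\", []):\n",
"        losses = [event.value for event in ea.Scalars(\"train_loss\")]\n",
"        final_losses.append(losses[-1])\n",
"        print(f\"{run_dir}: final train_loss = {losses[-1]:.4f}\")\n",
"\n",
"if final_losses:\n",
"    mean_loss = sum(final_losses) / len(final_losses)\n",
"    print(f\"mean final train_loss across runs: {mean_loss:.4f}\")"
]
},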
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, by default, **PINA** logs the losses which are shown in the progress bar, as well as the number of epochs. You can always insert more loggings by either defining a **callback** ([more on callbacks](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html)), or inheriting the solver and modifying the programs with different **hooks** ([more on hooks](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#hooks)).\n",
"\n",
"## Trainer Callbacks\n",
"\n",
"Whenever we need to access certain steps of the training for logging, perform static modifications (i.e. not changing the `Solver`), or update `Problem` hyperparameters (static variables), we can use **Callbacks**. Notice that **Callbacks** allow you to add arbitrary self-contained programs to your training. At specific points during the flow of execution (hooks), the Callback interface allows you to design programs that encapsulate a full set of functionality. It de-couples functionality that does not need to be in **PINA** `Solver`s.\n",
"\n",
"Lightning has a callback system to execute them when needed. **Callbacks** should capture NON-ESSENTIAL logic that is NOT required for your lightning module to run.\n",
"\n",
"The following are best practices when using/designing callbacks:\n",
"\n",
"* Callbacks should be isolated in their functionality.\n",
"* Your callback should not rely on the behavior of other callbacks in order to work properly.\n",
"* Do not manually call methods from the callback.\n",
"* Directly calling methods (e.g., on_validation_end) is strongly discouraged.\n",
"* Whenever possible, your callbacks should not depend on the order in which they are executed.\n",
"\n",
"We will try now to implement a naive version of `MetricTraker` to show how callbacks work. Notice that this is a very easy application of callbacks, fortunately in **PINA** we already provide more advanced callbacks in `pina.callbacks`."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"from lightning.pytorch.callbacks import Callback\n",
"from lightning.pytorch.callbacks import EarlyStopping\n",
"import torch\n",
"\n",
"\n",
"# define a simple callback\n",
"class NaiveMetricTracker(Callback):\n",
" def __init__(self):\n",
" self.saved_metrics = []\n",
"\n",
" def on_train_epoch_end(\n",
" self, trainer, __\n",
" ): # function called at the end of each epoch\n",
" self.saved_metrics.append(\n",
" {key: value for key, value in trainer.logged_metrics.items()}\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see the results when applied to the problem. You can define **callbacks** when initializing the `Trainer` by using the `callbacks` argument, which expects a list of callbacks.\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "f38442d749ad4702a0c99715ecf08c59",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=10` reached.\n"
]
}
],
"source": [
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"solver = SupervisedSolver(problem, model, use_lt=False)\n",
"trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" logger=True,\n",
" callbacks=[NaiveMetricTracker()], # adding a callbacks\n",
" enable_model_summary=False,\n",
" train_size=1.0,\n",
" val_size=0.0,\n",
" test_size=0.0,\n",
" max_epochs=10, # training only for 10 epochs\n",
")\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can easily access the data by calling `trainer.callbacks[0].saved_metrics` (notice the zero representing the first callback in the list given at initialization)."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'data_loss': tensor(126.2887), 'train_loss': tensor(126.2887)},\n",
" {'data_loss': tensor(126.2346), 'train_loss': tensor(126.2346)},\n",
" {'data_loss': tensor(126.1805), 'train_loss': tensor(126.1805)}]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trainer.callbacks[0].saved_metrics[:3] # only the first three epochs"
]
},
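{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `saved_metrics` is just a list of dictionaries, it is easy to post-process, e.g. to plot the loss history (a short sketch, assuming `matplotlib` is available):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"# extract the train_loss logged at the end of each epoch\n",
"history = [m[\"train_loss\"].item() for m in trainer.callbacks[0].saved_metrics]\n",
"plt.plot(history)\n",
"plt.xlabel(\"epoch\")\n",
"plt.ylabel(\"train_loss\")\n",
"plt.show()"
]
},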
{
"cell_type": "markdown",
"metadata": {},
"source": [
"PyTorch Lightning also has some built-in `Callbacks` which can be used in **PINA**, [here is an extensive list](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html#built-in-callbacks). \n",
"\n",
"We can, for example, try the `EarlyStopping` routine, which automatically stops the training when a specific metric converges (here the `train_loss`). In order to let the training keep going forever, set `max_epochs=-1`."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
}
],
"source": [
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"solver = SupervisedSolver(problem, model, use_lt=False)\n",
"trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" max_epochs=-1,\n",
" enable_model_summary=False,\n",
" enable_progress_bar=False,\n",
" val_size=0.2,\n",
" train_size=0.8,\n",
" test_size=0.0,\n",
" callbacks=[EarlyStopping(\"val_loss\")],\n",
") # adding a callbacks\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see the model automatically stop when the logging metric stopped improving!"
]
},
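{
"cell_type": "markdown",
"metadata": {},
"source": [
"`EarlyStopping` can also be tuned through its standard Lightning arguments, e.g. `patience` (how many validation checks without improvement to tolerate) and `min_delta` (the minimum change that counts as an improvement). A hedged sketch of a more tolerant stopping criterion:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# stop only after 50 validation checks without an improvement of at least 1e-4\n",
"early_stop = EarlyStopping(monitor=\"val_loss\", patience=50, min_delta=1e-4)\n",
"\n",
"trainer = Trainer(\n",
"    solver=solver,\n",
"    accelerator=\"cpu\",\n",
"    max_epochs=-1,\n",
"    enable_model_summary=False,\n",
"    enable_progress_bar=False,\n",
"    val_size=0.2,\n",
"    train_size=0.8,\n",
"    test_size=0.0,\n",
"    callbacks=[early_stop],\n",
")\n",
"# trainer.train()  # uncomment to run with the customized criterion"
]
},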
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trainer Tips to Boost Accuracy, Save Memory and Speed Up Training\n",
"\n",
"Until now we have seen how to choose the right `accelerator`, how to log and visualize the results, and how to interface with the program in order to add specific parts of code at specific points via `callbacks`.\n",
"Now, we will focus on how to boost your training by saving memory and speeding it up, while maintaining the same or even better degree of accuracy!\n",
"\n",
"There are several built-in methods developed in PyTorch Lightning which can be applied straightforward in **PINA**. Here we report some:\n",
"\n",
"* [Stochastic Weight Averaging](https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/) to boost accuracy\n",
"* [Gradient Clipping](https://deepgram.com/ai-glossary/gradient-clipping) to reduce computational time (and improve accuracy)\n",
"* [Gradient Accumulation](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption\n",
"* [Mixed Precision Training](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption\n",
"\n",
"We will just demonstrate how to use the first two and see the results compared to standard training.\n",
"We use the [`Timer`](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.Timer.html#lightning.pytorch.callbacks.Timer) callback from `pytorch_lightning.callbacks` to track the times. Let's start by training a simple model without any optimization (train for 500 epochs)."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Seed set to 42\n",
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "822b8c60e73f49a486d3d702d413d6ff",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=500` reached.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total training time 15.49781 s\n"
]
}
],
"source": [
"from lightning.pytorch.callbacks import Timer\n",
"from lightning.pytorch import seed_everything\n",
"\n",
"# setting the seed for reproducibility\n",
"seed_everything(42, workers=True)\n",
"\n",
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"\n",
"solver = SupervisedSolver(problem, model, use_lt=False)\n",
"trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" deterministic=True, # setting deterministic=True ensure reproducibility when a seed is imposed\n",
" max_epochs=500,\n",
" enable_model_summary=False,\n",
" callbacks=[Timer()],\n",
") # adding a callbacks\n",
"trainer.train()\n",
"print(f'Total training time {trainer.callbacks[0].time_elapsed(\"train\"):.5f} s')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we do the same but with `StochasticWeightAveraging` enabled"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Seed set to 42\n",
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "dc5f3b47abff4facae7a60d0871f3bfe",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Swapping scheduler `ConstantLR` for `SWALR`\n",
"`Trainer.fit` stopped: `max_epochs=500` reached.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total training time 15.52474 s\n"
]
}
],
"source": [
"from lightning.pytorch.callbacks import StochasticWeightAveraging\n",
"\n",
"# setting the seed for reproducibility\n",
"seed_everything(42, workers=True)\n",
"\n",
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"solver = SupervisedSolver(problem, model, use_lt=False)\n",
"trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" deterministic=True,\n",
" max_epochs=500,\n",
" enable_model_summary=False,\n",
" callbacks=[Timer(), StochasticWeightAveraging(swa_lrs=0.005)],\n",
") # adding StochasticWeightAveraging callbacks\n",
"trainer.train()\n",
"print(f'Total training time {trainer.callbacks[0].time_elapsed(\"train\"):.5f} s')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the training time does not change at all! Notice that around epoch 350\n",
"the scheduler is switched from the defalut one `ConstantLR` to the Stochastic Weight Average Learning Rate (`SWALR`).\n",
"This is because by default `StochasticWeightAveraging` will be activated after `int(swa_epoch_start * max_epochs)` with `swa_epoch_start=0.7` by default. Finally, the final `train_loss` is lower when `StochasticWeightAveraging` is used.\n",
"\n",
"We will now do the same but clippling the gradient to be relatively small."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Seed set to 42\n",
"GPU available: True (mps), used: False\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d475613ad7f34fe6abd182eed8907004",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: | | 0/? [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Swapping scheduler `ConstantLR` for `SWALR`\n",
"`Trainer.fit` stopped: `max_epochs=500` reached.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total training time 15.94719 s\n"
]
}
],
"source": [
"# setting the seed for reproducibility\n",
"seed_everything(42, workers=True)\n",
"\n",
"model = FeedForward(\n",
" layers=[10, 10],\n",
" func=torch.nn.Tanh,\n",
" output_dimensions=1,\n",
" input_dimensions=1,\n",
")\n",
"solver = SupervisedSolver(problem, model, use_lt=False)\n",
"trainer = Trainer(\n",
" solver=solver,\n",
" accelerator=\"cpu\",\n",
" max_epochs=500,\n",
" enable_model_summary=False,\n",
" gradient_clip_val=0.1, # clipping the gradient\n",
" callbacks=[Timer(), StochasticWeightAveraging(swa_lrs=0.005)],\n",
")\n",
"trainer.train()\n",
"print(f'Total training time {trainer.callbacks[0].time_elapsed(\"train\"):.5f} s')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, by applying gradient clipping, we were able to achieve even lower error!\n",
"\n",
"## What's Next?\n",
"\n",
"Now you know how to use the `Trainer` class efficiently in **PINA**! There are several directions you can explore next:\n",
"\n",
"1. **Explore Training on Different Devices**: Test training times on various devices (e.g., `TPU`) to compare performance.\n",
"\n",
"2. **Reduce Memory Costs**: Experiment with mixed precision training and gradient accumulation to optimize memory usage, especially when training Neural Operators.\n",
"\n",
"3. **Benchmark `Trainer` Speed**: Benchmark the training speed of the `Trainer` class for different precisions to identify potential optimizations.\n",
"\n",
"4. **...and many more!**: Consider expanding to **multi-GPU** setups or other advanced configurations for large-scale training.\n",
"\n",
"For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pina",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.21"
}
},
"nbformat": 4,
"nbformat_minor": 2
}