Fixing tutorials grammar (#242)

* grammar check and sparse rephrasing
* rst created
* meta copyright adjusted
This commit is contained in:
Giuseppe Alessio D'Inverno
2024-03-05 10:43:34 +01:00
committed by GitHub
parent 15136e13f8
commit b10e02103b
23 changed files with 272 additions and 237 deletions


@@ -15,7 +15,7 @@
"source": [
"The tutorial aims to show how to employ the **PINA** library in order to apply a reduced order modeling technique [1]. Such methodologies have several similarities with machine learning approaches, since the main goal consists of predicting the solution of differential equations (typically parametric PDEs) in a real-time fashion.\n",
"\n",
"In particular we are going to use the Proper Orthogonal Decomposition with Neural Network (PODNN) [2], which basically perform a dimensional reduction using the POD approach, approximating the parametric solution manifold (at the reduced space) using a NN. In this example, we use a simple multilayer perceptron, but the plenty of different archiutectures can be plugged as well.\n",
"In particular, we are going to use the Proper Orthogonal Decomposition with Neural Network (PODNN) [2], which basically performs a dimensional reduction using the POD approach, approximating the parametric solution manifold (in the reduced space) with a NN. In this example, we use a simple multilayer perceptron, but plenty of different architectures can be plugged in as well.\n",
"\n",
"#### References\n",
"1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order Methods and Applications in Computational Fluid Dynamics, Society for Industrial and Applied Mathematics. \n",
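The reduced-order idea described above can be sketched on synthetic data: a truncated SVD of the snapshot matrix yields the POD basis, and each snapshot is represented by a few modal coefficients in that basis. All names, shapes, and data here are illustrative, not PINA's API.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a PDE solution for one parameter.
rng = np.random.default_rng(0)
n_dofs, n_snapshots, rank = 200, 50, 4
modes_true = np.linalg.qr(rng.standard_normal((n_dofs, rank)))[0]
snapshots = modes_true @ rng.standard_normal((rank, n_snapshots))

# POD: a truncated SVD of the snapshot matrix gives the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
pod_modes = U[:, :rank]            # basis of the reduced space
coeffs = pod_modes.T @ snapshots   # modal coefficients (what the NN will predict)
reconstruction = pod_modes @ coeffs

print(np.allclose(reconstruction, snapshots))  # exact here: the data has rank 4
```

In the PODNN approach, a neural network then learns the map from the parameter to these coefficients, replacing the projection step at prediction time.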
@@ -118,7 +118,7 @@
"id": "bef4d79d",
"metadata": {},
"source": [
"The *snapshots* - aka the numerical solutions computed for several parameters - and the corresponding parameters are the only data we need to train the model, in order to predict for any new test parameter the solution.\n",
"The *snapshots* - i.e. the numerical solutions computed for several parameters - and the corresponding parameters are the only data we need to train the model, in order to predict the solution for any new test parameter.\n",
"To properly validate the accuracy, we initially split the 500 snapshots into the training dataset (90% of the original data) and the testing one (the remaining 10%). It must be said that, to plug the snapshots into **PINA**, we have to cast them to `LabelTensor` objects."
]
},
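The 90%/10% split of the 500 snapshots can be sketched as follows; the parameter and snapshot arrays are random placeholders (in the tutorial, the resulting arrays would then be wrapped into `LabelTensor` objects).

```python
import numpy as np

# Illustrative 90% train / 10% test split of 500 snapshots.
rng = np.random.default_rng(42)
n = 500
params = rng.uniform(size=(n, 3))        # hypothetical 3-dimensional parameters
snaps = rng.standard_normal((n, 256))    # hypothetical solution snapshots

idx = rng.permutation(n)                 # shuffle before splitting
n_train = int(0.9 * n)
train_idx, test_idx = idx[:n_train], idx[n_train:]
params_train, snaps_train = params[train_idx], snaps[train_idx]
params_test, snaps_test = params[test_idx], snaps[test_idx]

print(len(train_idx), len(test_idx))  # 450 50
```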
@@ -172,7 +172,7 @@
"id": "6b264569-57b3-458d-bb69-8e94fe89017d",
"metadata": {},
"source": [
"Then, we define the model we want to use: basically we have a MLP architecture that takes in input the parameter and return the *modal coefficients*, so the reduced dimension representation (the coordinates in the POD space). Such latent variable is the projected to the original space using the POD modes, which are computed and stored in the `PODBlock` object."
"Then, we define the model we want to use: an MLP architecture which takes the parameter as input and returns the *modal coefficients*, i.e. the interpolated coefficients of the POD expansion. Such coefficients are projected to the original space using the POD modes, which are computed and stored in the `PODBlock` object."
]
},
{
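A minimal sketch of that forward pass, written as a plain NumPy one-hidden-layer MLP rather than the tutorial's actual model (layer sizes, weights, and names are all illustrative): the network maps the parameter to modal coefficients, and the POD modes lift those coefficients back to the full space.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dofs, n_modes, n_params = 128, 5, 2
# Orthonormal columns standing in for precomputed POD modes.
pod_modes = np.linalg.qr(rng.standard_normal((n_dofs, n_modes)))[0]

# Toy MLP weights (untrained, for shape illustration only).
W1 = rng.standard_normal((n_params, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, n_modes)); b2 = np.zeros(n_modes)

def podnn_forward(mu):
    hidden = np.tanh(mu @ W1 + b1)   # MLP hidden layer
    coeffs = hidden @ W2 + b2        # predicted modal coefficients
    return coeffs @ pod_modes.T      # projection back to the full space

u = podnn_forward(rng.uniform(size=(10, n_params)))
print(u.shape)  # (10, 128): 10 parameters -> 10 full-order fields
```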
@@ -227,7 +227,7 @@
"id": "16e1f085-7818-4624-92a1-bf7010dbe528",
"metadata": {},
"source": [
"We highlight that the POD modes are directly computed by means of the singular value decomposition (computed over the input data), and not trained using the back-propagation approach. Only the weights of the MLP are actually trained during the optimization loop."
"We highlight that the POD modes are directly computed by means of the singular value decomposition (computed over the input data), and not trained using the backpropagation approach. Only the weights of the MLP are actually trained during the optimization loop."
]
},
{
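The point that the modes come from an SVD rather than from backpropagation can be checked directly on toy data: the dominant left singular vectors form an orthonormal basis with no training loop involved.

```python
import numpy as np

# Toy snapshot matrix (dofs x snapshots); values are random placeholders.
rng = np.random.default_rng(2)
snapshots = rng.standard_normal((100, 30))
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :6]  # keep the 6 dominant modes

# The basis is orthonormal by construction; nothing here was "learned".
print(np.allclose(modes.T @ modes, np.eye(6)))  # True
```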
@@ -254,7 +254,7 @@
"id": "aab51202-36a7-40d2-b96d-47af8892cd2c",
"metadata": {},
"source": [
"Now that we set the `Problem` and the `Model`, we have just to train the model and use it for predict the test snapshots."
"Now that we have set the `Problem` and the `Model`, we just have to train the model and use it to predict the test snapshots."
]
},
{
@@ -320,7 +320,7 @@
"id": "3234710e",
"metadata": {},
"source": [
"Done! Now the computational expensive part is over, we can load in future the model to infer new parameters (simply loading the checkpoint file automatically created by `Lightning`) or test its performances. We measure the relative error for the training and test datasets, printing the mean one."
"Done! Now that the computationally expensive part is over, we can later load the model to infer new parameters (simply loading the checkpoint file automatically created by `Lightning`) or test its performance. We measure the relative error for the training and test datasets, printing the mean."
]
},
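The relative error mentioned above can be computed per snapshot and then averaged; this is a generic NumPy sketch of that metric, not the tutorial's exact code.

```python
import numpy as np

def relative_errors(pred, true):
    # Relative L2 error of each snapshot (rows), as used for train/test sets.
    return np.linalg.norm(pred - true, axis=1) / np.linalg.norm(true, axis=1)

# Tiny worked example: each snapshot is off by 10% in norm.
true = np.array([[1.0, 0.0], [0.0, 2.0]])
pred = np.array([[1.1, 0.0], [0.0, 1.8]])
errs = relative_errors(pred, true)
print(errs.mean())  # ~0.1, i.e. a 10% mean relative error
```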
{