Fixing tutorials grammar (#242)
* grammar check and sparse rephrasing
* rst created
* meta copyright adjusted
committed by GitHub
parent 15136e13f8
commit b10e02103b
46 changes: tutorials/tutorial4/tutorial.ipynb (vendored)
@@ -105,7 +105,7 @@
 "f(x, y) = [\\sin(\\pi x) \\sin(\\pi y), -\\sin(\\pi x) \\sin(\\pi y)] \\quad (x,y)\\in[0,1]\\times[0,1]\n",
 "$$\n",
 "\n",
-"using a batch size of one."
+"using a batch size equal to 1."
 ]
 },
 {
@@ -130,14 +130,14 @@
 "# points in the mesh fixed to 200\n",
 "N = 200\n",
 "\n",
-"# vectorial 2 dimensional function, number_input_fileds=2\n",
-"number_input_fileds = 2\n",
+"# vectorial 2 dimensional function, number_input_fields=2\n",
+"number_input_fields = 2\n",
 "\n",
 "# 2 dimensional spatial variables, D = 2 + 1 = 3\n",
 "D = 3\n",
 "\n",
 "# create the function f domain as random 2d points in [0, 1]\n",
-"domain = torch.rand(size=(batch_size, number_input_fileds, N, D-1))\n",
+"domain = torch.rand(size=(batch_size, number_input_fields, N, D-1))\n",
 "print(f\"Domain has shape: {domain.shape}\")\n",
 "\n",
 "# create the functions\n",
@@ -146,7 +146,7 @@
 "f2 = - torch.sin(pi * domain[:, 1, :, 0]) * torch.sin(pi * domain[:, 1, :, 1])\n",
 "\n",
 "# stacking the input domain and field values\n",
-"data = torch.empty(size=(batch_size, number_input_fileds, N, D))\n",
+"data = torch.empty(size=(batch_size, number_input_fields, N, D))\n",
 "data[..., :-1] = domain # copy the domain\n",
 "data[:, 0, :, -1] = f1 # copy first field value\n",
 "data[:, 1, :, -1] = f1 # copy second field value\n",
@@ -174,7 +174,7 @@
 "1. `domain`: square domain (the only implemented) $[0,1]\\times[0,5]$. The minimum value is always zero, while the maximum is specified by the user\n",
 "2. `start`: start position of the filter, coordinate $(0, 0)$\n",
 "3. `jump`: the jumps of the centroid of the filter to the next position $(0.1, 0.3)$\n",
-"4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`,`-1 = left` with respect to the current position\n",
+"4. `direction`: the directions of the jump, with `1 = right`, `0 = no jump`, `-1 = left` with respect to the current position\n",
 "\n",
 "**Note**\n",
 "\n",
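Editor's note: the four entries above combine into the single stride dictionary that the notebook passes to the filter. A minimal sketch of its shape, with illustrative values taken from the bullet list rather than from this diff:

    # hypothetical stride configuration; keys follow the list above,
    # values are examples only
    stride = {
        "domain": [1, 1],     # maxima of the square domain (minima are zero)
        "start": [0, 0],      # filter starts at the origin
        "jump": [0.1, 0.3],   # jump of the filter centroid to the next position
        "direction": [1, 1],  # 1 = right, 0 = no jump, -1 = left
    }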
@@ -188,9 +188,9 @@
 "source": [
 "### Filter definition\n",
 "\n",
-"Having defined all the previous blocks we are able to construct the continuous filter.\n",
+"Having defined all the previous blocks, we are now able to construct the continuous filter.\n",
 "\n",
-"Suppose we would like to get an ouput with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$."
+"Suppose we would like to get an output with only one field, and let us fix the filter dimension to be $[0.1, 0.1]$."
 ]
 },
 {
@@ -220,7 +220,7 @@
 " }\n",
 "\n",
 "# creating the filter \n",
-"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
+"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
 " output_numb_field=1,\n",
 " filter_dim=filter_dim,\n",
 " stride=stride)"
@@ -242,7 +242,7 @@
 "outputs": [],
 "source": [
 "# creating the filter + optimization\n",
-"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
+"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
 " output_numb_field=1,\n",
 " filter_dim=filter_dim,\n",
 " stride=stride,\n",
@@ -254,7 +254,7 @@
 "id": "f99c290e",
 "metadata": {},
 "source": [
-"Let's try to do a forward pass"
+"Let's try to do a forward pass:"
 ]
 },
 {
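Editor's note: the code cell that follows this markdown cell is not part of the diff. A hedged sketch of what the forward pass looks like, assuming the `cConv` block and the `data` tensor constructed in the earlier cells:

    # data has shape [batch_size, number_input_fields, N, D]
    output = cConv(data)
    print(output.shape)  # a single output field, since output_numb_field=1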
@@ -310,7 +310,7 @@
 " return self.model(x)\n",
 "\n",
 "\n",
-"cConv = ContinuousConvBlock(input_numb_field=number_input_fileds,\n",
+"cConv = ContinuousConvBlock(input_numb_field=number_input_fields,\n",
 " output_numb_field=1,\n",
 " filter_dim=filter_dim,\n",
 " stride=stride,\n",
@@ -380,7 +380,7 @@
 "id": "7f076010",
 "metadata": {},
 "source": [
-"Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\\times[0,27]$ domain, and the pixels value are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:"
+"Let's now build a simple classifier. The MNIST dataset is composed by vectors of shape `[batch, 1, 28, 28]`, but we can image them as one field functions where the pixels $ij$ are the coordinate $x=i, y=j$ in a $[0, 27]\\times[0,27]$ domain, and the pixels values are the field values. We just need a function to transform the regular tensor in a tensor compatible for the continuous filter:"
 ]
 },
 {
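Editor's note: the transform itself is outside this diff. One possible implementation, sketched under the assumption that the coordinates fill the first two columns and the pixel intensities the last; the helper name `image_to_continuous` is illustrative, not from the notebook:

    import torch

    def image_to_continuous(x):
        # map MNIST images [batch, 1, 28, 28] to [batch, 1, 784, 3]
        batch = x.shape[0]
        # pixel coordinates (i, j) on the [0, 27] x [0, 27] grid
        coords = torch.cartesian_prod(torch.arange(28.0), torch.arange(28.0))
        data = torch.empty(batch, 1, 28 * 28, 3)
        data[..., :2] = coords                    # x = i, y = j
        data[:, 0, :, -1] = x.reshape(batch, -1)  # pixel values as field values
        return data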
@@ -478,7 +478,7 @@
 "id": "4374c15c",
 "metadata": {},
 "source": [
-"Let's try to train it using a simple pytorch training loop. We train for juts 1 epoch using Adam optimizer with a $0.001$ learning rate."
+"Let's try to train it using a simple pytorch training loop. We train for just 1 epoch using Adam optimizer with a $0.001$ learning rate."
 ]
 },
 {
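Editor's note: the loop described here could look roughly like this. A sketch only: `net`, `train_loader`, and `image_to_continuous` are assumed from the surrounding notebook and are not shown in this diff; the optimizer and learning rate come from the text above:

    optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
    loss_fn = torch.nn.CrossEntropyLoss()
    for images, labels in train_loader:  # one full pass = 1 epoch
        optimizer.zero_grad()
        logits = net(image_to_continuous(images))
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()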
@@ -556,7 +556,7 @@
 "id": "47fa3d0e",
 "metadata": {},
 "source": [
-"Let's see the performance on the train set!"
+"Let's see the performance on the test set!"
 ]
 },
 {
@@ -595,7 +595,7 @@
 "id": "25cf2878",
 "metadata": {},
 "source": [
-"As we can see we have very good performance for having traing only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now."
+"As we can see we have very good performance for having trained only for 1 epoch! Nevertheless, we are still using structured data... Let's see how we can build an autoencoder for unstructured data now."
 ]
 },
 {
@@ -876,7 +876,7 @@
 "id": "206141f9",
 "metadata": {},
 "source": [
-"As we can see the two are really similar! We can compute the $l_2$ error quite easily as well:"
+"As we can see, the two solutions are really similar! We can compute the $l_2$ error quite easily as well:"
 ]
 },
 {
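Editor's note: one common way to compute that error, as a sketch (`output` and `target` are assumed to hold the reconstructed and the original field values respectively):

    # relative l2 error between reconstruction and ground truth
    l2_error = torch.linalg.norm(output - target) / torch.linalg.norm(target)
    print(f"l2 error: {l2_error.item():.4f}")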
@@ -916,7 +916,7 @@
 "source": [
 "### Filter for upsampling\n",
 "\n",
-"Suppose we have already the hidden dimension and we want to upsample on a differen grid with more points. Let's see how to do it:"
+"Suppose we have already the hidden representation and we want to upsample on a differen grid with more points. Let's see how to do it:"
 ]
 },
 {
@@ -946,7 +946,7 @@
 "input_data2[0, 0, :, -1] = torch.sin(pi *\n",
 " grid2[:, 0]) * torch.sin(pi * grid2[:, 1])\n",
 "\n",
-"# get the hidden dimension representation from original input\n",
+"# get the hidden representation from original input\n",
 "latent = net.encoder(input_data)\n",
 "\n",
 "# upsample on the second input_data2\n",
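Editor's note: the upsampling line that closes this cell is cut off by the diff. A hedged sketch of what it presumably does, assuming the notebook's autoencoder exposes a decoder symmetric to `net.encoder` whose exact signature is not shown here:

    # decode the latent representation onto the finer grid carried by input_data2
    # (hypothetical call; the real decoder interface is defined in the notebook)
    upsampled = net.decoder(latent, input_data2)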
@@ -996,13 +996,13 @@
 "id": "465cbd16",
 "metadata": {},
 "source": [
-"### Autoencoding at different resolution\n",
-"In the previous example we already had the hidden dimension (of original input) and we used it to upsample. Sometimes however we have a more fine mesh solution and we simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:"
+"### Autoencoding at different resolutions\n",
+"In the previous example we already had the hidden representation (of the original input) and we used it to upsample. Sometimes however we could have a finer mesh solution and we would simply want to encode it. This can be done without retraining! This procedure can be useful in case we have many points in the mesh and just a smaller part of them are needed for training. Let's see the results of this:"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 24,
+"execution_count": null,
 "id": "75ed28f5",
 "metadata": {},
 "outputs": [
@@ -1034,7 +1034,7 @@
 "input_data2[0, 0, :, -1] = torch.sin(pi *\n",
 " grid2[:, 0]) * torch.sin(pi * grid2[:, 1])\n",
 "\n",
-"# get the hidden dimension representation from more fine mesh input\n",
+"# get the hidden representation from finer mesh input\n",
 "latent = net.encoder(input_data2)\n",
 "\n",
 "# upsample on the second input_data2\n",