{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6f71ca5c",
   "metadata": {},
   "source": [
    "# Tutorial: Data structure for SciML: `Tensor`, `LabelTensor`, `Data` and `Graph`\n",
    "\n",
    "[Open In Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial19/tutorial.ipynb)\n",
    "\n",
    "In this tutorial, we’ll quickly go through the basics of Data Structures for Scientific Machine Learning, covering:\n",
    "1. **PyTorch Tensors** / **PINA LabelTensors**\n",
    "2. **PyTorch Geometric Data** / **PINA Graph**\n",
    "\n",
    "First, let's import the data structures we will use!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0981f1e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "## routine needed to run the notebook on Google Colab\n",
    "try:\n",
    "    import google.colab\n",
    "    IN_COLAB = True\n",
    "except:\n",
    "    IN_COLAB = False\n",
    "if IN_COLAB:\n",
    "    !pip install \"pina-mathlab[tutorial]\"\n",
    "\n",
    "import warnings\n",
    "import torch\n",
    "from torch_geometric.data import Data\n",
    "\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "from pina import LabelTensor, Graph"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8afae117",
   "metadata": {},
   "source": [
    "## PyTorch Tensors\n",
    "\n",
    "A **tensor** is a multi-dimensional matrix used for storing and manipulating data in PyTorch. It's the basic building block for all computations in PyTorch, including deep learning models.\n",
    "\n",
    "You can create a tensor in several ways:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "6558c37a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1, 2, 3, 4])\n",
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "tensor([[1., 1., 1.],\n",
      "        [1., 1., 1.]])\n",
      "tensor([[-0.4420, 0.9948, 0.3727],\n",
      "        [-0.2328, 0.0719, -0.1929]])\n"
     ]
    }
   ],
   "source": [
    "# Creating a tensor from a list\n",
    "tensor_1 = torch.tensor([1, 2, 3, 4])\n",
    "print(tensor_1)\n",
    "\n",
    "# Creating a tensor of zeros\n",
    "tensor_zeros = torch.zeros(2, 3)  # 2x3 tensor of zeros\n",
    "print(tensor_zeros)\n",
    "\n",
    "# Creating a tensor of ones\n",
    "tensor_ones = torch.ones(2, 3)  # 2x3 tensor of ones\n",
    "print(tensor_ones)\n",
    "\n",
    "# Creating a random tensor\n",
    "tensor_random = torch.randn(2, 3)  # 2x3 tensor with random values\n",
    "print(tensor_random)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f015f61d",
   "metadata": {},
   "source": [
    "### Basic Tensor Operations\n",
    "Tensors support a variety of operations, such as element-wise arithmetic, matrix operations, and more:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "d5369bf3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Addition\n",
    "sum_tensor = tensor_1 + tensor_1\n",
    "\n",
    "# Matrix multiplication\n",
    "result = torch.matmul(tensor_zeros, tensor_ones.T)\n",
    "\n",
    "# Element-wise multiplication\n",
    "elementwise_prod = tensor_1 * tensor_1"
   ]
  },
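  {
   "cell_type": "markdown",
   "id": "a1f20c3d",
   "metadata": {},
   "source": [
    "Beyond arithmetic, you will often reshape tensors, rely on broadcasting, and reduce along dimensions. The cell below is a minimal sketch of these standard PyTorch operations, reusing the tensors defined above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reshaping: same data, different arrangement of the dimensions\n",
    "reshaped = tensor_ones.reshape(3, 2)  # from shape [2, 3] to [3, 2]\n",
    "flattened = tensor_random.view(-1)  # flatten to a 1D tensor with 6 entries\n",
    "\n",
    "# Broadcasting: the row vector is automatically expanded over the rows\n",
    "row = torch.tensor([10.0, 20.0, 30.0])\n",
    "broadcast_sum = tensor_zeros + row  # shape [2, 3]\n",
    "\n",
    "# Reductions: aggregate over the whole tensor or along a dimension\n",
    "total = tensor_random.sum()\n",
    "col_mean = tensor_random.mean(dim=0)  # mean over the rows, shape [3]\n",
    "\n",
    "print(reshaped.shape, flattened.shape, broadcast_sum.shape)\n",
    "print(total, col_mean)"
   ]
  },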
  {
   "cell_type": "markdown",
   "id": "619364cc",
   "metadata": {},
   "source": [
    "### Device Management\n",
    "PyTorch allows you to move tensors to different devices (CPU or GPU). For instance:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "6b82839b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Move tensor to GPU\n",
    "if torch.cuda.is_available():\n",
    "    tensor_gpu = tensor_1.cuda()"
   ]
  },
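  {
   "cell_type": "markdown",
   "id": "b7d81e4f",
   "metadata": {},
   "source": [
    "A more portable pattern, sketched below, is to select the device once and move tensors with `.to(device)`; on a CPU-only machine this simply keeps everything on the CPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7d81e50",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Device-agnostic pattern: choose the device once, then move data with .to()\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "\n",
    "tensor_on_device = tensor_1.to(device)  # no-op on CPU-only machines\n",
    "back_on_cpu = tensor_on_device.cpu()  # move back, e.g. before calling .numpy()\n",
    "\n",
    "print(device, tensor_on_device.device, back_on_cpu.device)"
   ]
  },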
  {
   "cell_type": "markdown",
   "id": "75fd37ca",
   "metadata": {},
   "source": [
    "To learn more about PyTorch Tensors, see the dedicated tutorial by the PyTorch team [here](https://pytorch.org/tutorials/beginner/introyt/tensors_deeper_tutorial.html)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6073dc6d",
   "metadata": {},
   "source": [
    "## Label Tensors\n",
    "\n",
    "In scientific machine learning, especially when working with **Physics-Informed Neural Networks (PINNs)**, handling tensors effectively is crucial. Often, we deal with many indices that represent physical quantities such as spatial and temporal coordinates, making it vital to ensure we use the correct indexing.\n",
    "\n",
    "For instance, in PINNs, if the wrong index is used to represent the coordinates of a physical domain, it could lead to incorrect calculations of derivatives, integrals, or residuals. This can significantly affect the accuracy and correctness of the model.\n",
    "\n",
    "### What are Label Tensors?\n",
    "\n",
    "**Label Tensors** are a specialized type of tensor that keeps track of the labels associated with each index. As with torch tensors, we can perform operations on them, but slicing is simplified by using labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "25e8353e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# standard torch tensor\n",
    "tensor = torch.randn(10, 2)\n",
    "\n",
    "# PINA LabelTensor\n",
    "label_tensor = LabelTensor(tensor, labels=[\"x\", \"y\"])"
   ]
  },
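  {
   "cell_type": "markdown",
   "id": "c2e93a10",
   "metadata": {},
   "source": [
    "To see why this matters for the indexing issues mentioned above, the short sketch below contrasts positional indexing on the plain tensor with label-based access on the `LabelTensor` we just created: asking for `\"y\"` by name leaves no integer index to get wrong (label-based slicing is covered in detail later in this tutorial)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2e93a11",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain tensor: you must remember that column 0 is \"x\" and column 1 is \"y\";\n",
    "# nothing stops an accidental use of the wrong integer index.\n",
    "y_by_position = tensor[:, 1]\n",
    "print(y_by_position)\n",
    "\n",
    "# LabelTensor: the quantity is requested by name, so there is no index to mix up.\n",
    "# The result keeps an explicit column dimension.\n",
    "y_by_label = label_tensor[\"y\"]\n",
    "print(y_by_label)"
   ]
  },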
  {
   "cell_type": "markdown",
   "id": "bb21b45c",
   "metadata": {},
   "source": [
    "The label tensor is initialized by passing the tensor and a set of labels. Specifically, the labels must satisfy the following conditions:\n",
    "\n",
    "- At each dimension, the number of labels must match the size of the dimension.\n",
    "- At each dimension, the labels must be unique.\n",
    "\n",
    "For example:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0e9dc23e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# full labels\n",
    "tensor = LabelTensor(\n",
    "    torch.rand((2000, 3)), {1: {\"name\": \"space\", \"dof\": [\"a\", \"b\", \"c\"]}}\n",
    ")\n",
    "# to label only the last dimension, you can simply pass a list\n",
    "tensor = LabelTensor(torch.rand((2000, 3)), [\"a\", \"b\", \"c\"])"
   ]
  },
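  {
   "cell_type": "markdown",
   "id": "d4b56c20",
   "metadata": {},
   "source": [
    "As a quick sanity check of the first condition above (a sketch; the exact exception raised may differ between PINA versions), passing fewer labels than entries in a dimension is rejected:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4b56c21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The tensor has 3 columns but only 2 labels are given:\n",
    "# this violates the \"one label per entry of the dimension\" rule.\n",
    "try:\n",
    "    LabelTensor(torch.rand((2000, 3)), [\"a\", \"b\"])\n",
    "except Exception as error:  # the exact exception type may vary\n",
    "    print(f\"Labels rejected: {error}\")"
   ]
  },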
  {
   "cell_type": "markdown",
   "id": "cfe2d8dd",
   "metadata": {},
   "source": [
    "You can access the labels of the last dimension via the `.labels` attribute, or use `.full_labels` to access the labels of all dimensions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "235b92d4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor.labels=['a', 'b', 'c']\n",
      "tensor.full_labels={0: {'dof': range(0, 2000), 'name': 0}, 1: {'dof': ['a', 'b', 'c'], 'name': 1}}\n"
     ]
    }
   ],
   "source": [
    "print(f\"{tensor.labels=}\")\n",
    "print(f\"{tensor.full_labels=}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8b230ea",
   "metadata": {},
   "source": [
    "### Label Tensor Slicing\n",
    "\n",
    "One of the powerful features of label tensors is the ability to easily slice and extract specific parts of the tensor by label, just like regular PyTorch tensors but without having to remember positional indices.\n",
    "\n",
    "Here’s how slicing works with label tensors. Suppose we have a label tensor that contains both spatial and temporal data, and we want to slice specific parts of this data to focus on certain time intervals or spatial regions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "45365ea8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor:\n",
      " tensor([[0.0000, 0.0000],\n",
      "        [1.0000, 0.5000],\n",
      "        [2.0000, 1.0000],\n",
      "        [3.0000, 1.5000]])\n",
      "Torch methods can be used, label_tensor.shape=torch.Size([4, 2])\n",
      "also label_tensor.requires_grad=False \n",
      "\n",
      "We can slice with labels: \n",
      " label_tensor[\"x\"]=LabelTensor([[0.],\n",
      " [1.],\n",
      " [2.],\n",
      " [3.]])\n",
      "Similarly to: \n",
      " label_tensor[:, 0]=LabelTensor([[0.],\n",
      " [1.],\n",
      " [2.],\n",
      " [3.]])\n"
     ]
    }
   ],
   "source": [
    "# Create a label tensor containing spatial and temporal coordinates\n",
    "x = torch.tensor([0.0, 1.0, 2.0, 3.0])  # Spatial coordinates\n",
    "t = torch.tensor([0.0, 0.5, 1.0, 1.5])  # Time coordinates\n",
    "\n",
    "# Combine x and t into a label tensor (2D tensor)\n",
    "tensor = torch.stack([x, t], dim=-1)  # Shape: [4, 2]\n",
    "print(\"Tensor:\\n\", tensor)\n",
    "\n",
    "# Build the LabelTensor\n",
    "label_tensor = LabelTensor(tensor, [\"x\", \"t\"])\n",
    "\n",
    "print(f\"Torch methods can be used, {label_tensor.shape=}\")\n",
    "print(f\"also {label_tensor.requires_grad=} \\n\")\n",
    "print(f'We can slice with labels: \\n {label_tensor[\"x\"]=}')\n",
    "print(f\"Similarly to: \\n {label_tensor[:, 0]=}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea4adc6e",
   "metadata": {},
   "source": [
    "You can do more complex slicing by using the `extract` method. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "caec2d14",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extract labels: label_tensor.extract({\"points\" : [0, 2]})=LabelTensor([[[0., 0.]],\n",
      " [[2., 1.]]])\n",
      "Similar to: label_tensor[slice(0, 4, 2), :]=LabelTensor([[[0., 0.]],\n",
      " [[2., 1.]]])\n"
     ]
    }
   ],
   "source": [
    "label_tensor = LabelTensor(\n",
    "    tensor,\n",
    "    {\n",
    "        0: {\"dof\": range(4), \"name\": \"points\"},\n",
    "        1: {\"dof\": [\"x\", \"t\"], \"name\": \"coords\"},\n",
    "    },\n",
    ")\n",
    "\n",
    "print(f'Extract labels: {label_tensor.extract({\"points\" : [0, 2]})=}')\n",
    "print(f'Similar to: {label_tensor[slice(0, 4, 2), :]=}')"
   ]
  },
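  {
   "cell_type": "markdown",
   "id": "f1a7d930",
   "metadata": {},
   "source": [
    "The same dictionary syntax should also work along the other labeled dimension; as a sketch, here we extract only the `\"t\"` coordinate along `\"coords\"`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a7d931",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Extract along the labeled feature dimension instead of the point dimension\n",
    "t_only = label_tensor.extract({\"coords\": [\"t\"]})\n",
    "print(t_only)"
   ]
  },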
  {
   "cell_type": "markdown",
   "id": "331d6080",
   "metadata": {},
   "source": [
    "## PyTorch Geometric Data\n",
    "PyTorch Geometric (PyG) extends PyTorch to handle graph-structured data. It provides utilities to represent graphs and perform graph-based learning tasks such as node classification, graph classification, and more.\n",
    "\n",
    "### Graph Data Structure\n",
    "PyTorch Geometric uses a custom `Data` object to store graph data. The `Data` object contains the following attributes:\n",
    "\n",
    "- **x**: Node features (tensor of shape `[num_nodes, num_features]`)\n",
    "\n",
    "- **edge_index**: Edge indices (tensor of shape `[2, num_edges]`), representing the graph's connectivity\n",
    "\n",
    "- **edge_attr**: Edge features (optional, tensor of shape `[num_edges, num_edge_features]`)\n",
    "\n",
    "- **y**: Target labels for nodes/graphs (optional)\n",
    "\n",
    "The first example below uses only `x` and `edge_index`; the optional `edge_attr` and `y` are shown in a follow-up example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "9427b274",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Data(x=[2, 3], edge_index=[2, 2])\n"
     ]
    }
   ],
   "source": [
    "# Node features: [2 nodes, 3 features]\n",
    "x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float)\n",
    "\n",
    "# Edge indices: representing a graph with two edges (node 0 to node 1, node 1 to node 0)\n",
    "edge_index = torch.tensor([[0, 1], [1, 0]], dtype=torch.long)\n",
    "\n",
    "# Create a PyG data object\n",
    "data = Data(x=x, edge_index=edge_index)\n",
    "\n",
    "print(data)"
   ]
  },
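  {
   "cell_type": "markdown",
   "id": "e8c04d52",
   "metadata": {},
   "source": [
    "The optional attributes listed above can be attached in the same way; the sketch below adds per-edge features (`edge_attr`) and a graph-level target (`y`) to the same two-node graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e8c04d53",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional attributes: one feature per edge and a graph-level target\n",
    "edge_attr = torch.tensor([[0.5], [0.5]], dtype=torch.float)  # shape [num_edges, 1]\n",
    "y = torch.tensor([[1.0]])  # e.g. a single graph-level label\n",
    "\n",
    "data_full = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, y=y)\n",
    "print(data_full)"
   ]
  },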
  {
   "cell_type": "markdown",
   "id": "fde2dcc7",
   "metadata": {},
   "source": [
    "Once you have your graph in a `Data` object, you can easily perform graph-based operations using PyTorch Geometric’s built-in functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "bdebb42e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 2., 3.],\n",
      "        [4., 5., 6.]])\n",
      "tensor([[0, 1],\n",
      "        [1, 0]])\n",
      "tensor([[ 7.4528, -3.2700],\n",
      "        [ 7.4528, -3.2700]], grad_fn=<AddBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# Accessing node features\n",
    "print(data.x)  # Node features\n",
    "\n",
    "# Accessing edge list\n",
    "print(data.edge_index)  # Edge indices\n",
    "\n",
    "# Applying Graph Convolution (Graph Neural Networks - GCN)\n",
    "from torch_geometric.nn import GCNConv\n",
    "\n",
    "# Define a simple GCN layer\n",
    "conv = GCNConv(3, 2)  # 3 input features, 2 output features\n",
    "out = conv(data.x, data.edge_index)\n",
    "print(out)  # Output node features after applying GCN"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "287a0d4f",
   "metadata": {},
   "source": [
    "## PINA Graph\n",
    "\n",
    "If you've understood Label Tensors and PyTorch Geometric `Data`, then you're well on your way to grasping how **PINA Graph** works. Simply put, a **Graph** in PINA is a `Data` object with extra methods for handling label tensors. We highly recommend using `Graph` instead of `Data` in PINA, especially when working with label tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "27f5c9ac",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Graph(x=[2, 3], edge_index=[2, 2])\n",
      "tensor([[1., 2., 3.],\n",
      "        [4., 5., 6.]])\n",
      "tensor([[0, 1],\n",
      "        [1, 0]])\n",
      "tensor([[-0.0606, 5.7191],\n",
      "        [-0.0606, 5.7191]], grad_fn=<AddBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# Node features: [2 nodes, 3 features]\n",
    "x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float)\n",
    "\n",
    "# Edge indices: representing a graph with two edges (node 0 to node 1, node 1 to node 0)\n",
    "edge_index = torch.tensor([[0, 1], [1, 0]], dtype=torch.long)\n",
    "\n",
    "# Create a PINA graph object (similar to PyG)\n",
    "data = Graph(x=x, edge_index=edge_index)\n",
    "\n",
    "print(data)\n",
    "\n",
    "# Accessing node features\n",
    "print(data.x)  # Node features\n",
    "\n",
    "# Accessing edge list\n",
    "print(data.edge_index)  # Edge indices\n",
    "\n",
    "# Applying Graph Convolution (Graph Neural Networks - GCN)\n",
    "from torch_geometric.nn import GCNConv\n",
    "\n",
    "# Define a simple GCN layer\n",
    "conv = GCNConv(3, 2)  # 3 input features, 2 output features\n",
    "out = conv(data.x, data.edge_index)\n",
    "print(out)  # Output node features after applying GCN"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ee7cc14",
   "metadata": {},
   "source": [
    "But we can also use `LabelTensor`s as node features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "3866a8ae",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Graph(x=[2, 3], edge_index=[2, 2])\n",
      "Graph(x=[2, 1], edge_index=[2, 2])\n"
     ]
    }
   ],
   "source": [
    "# Node features: [2 nodes, 3 features]\n",
    "x = LabelTensor(\n",
    "    torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float), [\"a\", \"b\", \"c\"]\n",
    ")\n",
    "\n",
    "# Edge indices: representing a graph with two edges (node 0 to node 1, node 1 to node 0)\n",
    "edge_index = torch.tensor([[0, 1], [1, 0]], dtype=torch.long)\n",
    "\n",
    "# Create a PINA graph object (similar to PyG)\n",
    "data = Graph(x=x, edge_index=edge_index)\n",
    "\n",
    "print(data)\n",
    "print(data.extract(attr=\"x\", labels=[\"a\"]))  # here we extract 1 feature"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a2ef072",
   "metadata": {},
   "source": [
    "In PINA Conditions, you always need to pass a list of `Graph` or `Data` objects; see [here]() for details. If you are loading a PyG dataset, remember to put it in this format!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8edb68f",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Downloading https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/qm7b.mat\n",
      "Processing...\n",
      "Done!\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "Data(edge_index=[2, 324], edge_attr=[324], y=[1, 14], num_nodes=18)"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from torch_geometric.datasets import QM7b\n",
    "\n",
    "dataset = QM7b(root=\"./tutorial_logs\").shuffle()\n",
    "\n",
    "# store the dataset as a list of Data objects\n",
    "input_ = [data for data in dataset]\n",
    "input_[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "487c1d47",
   "metadata": {},
   "source": [
    "## What's Next?\n",
    "\n",
    "Congratulations on completing the tutorial on the **PINA Data Structures**! You now have a solid foundation in using the different data structures within PINA, such as **Tensors**, **Label Tensors**, and **Graphs**. Here are some exciting next steps you can take to continue your learning journey:\n",
    "\n",
    "1. **Deep Dive into Label Tensors**: Check the documentation of [`LabelTensor`](https://mathlab.github.io/PINA/_rst/label_tensor.html) to learn more about the available methods.\n",
    "\n",
    "2. **Working with Graphs in PINA**: PINA implements many graph structures, e.g. `KNNGraph`, `RadiusGraph`, ...; see [here](https://mathlab.github.io/PINA/_rst/_code.html#graphs-structures) for further details.\n",
    "\n",
    "3. **...and many more!**: Consider exploring `LabelTensor` for PINNs!\n",
    "\n",
    "For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pina",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.21"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}