Dataset extraction API #483

Merged · 11 commits · Feb 26, 2020
234 changes: 234 additions & 0 deletions examples/Image-Data-Extraction-API.ipynb
@@ -0,0 +1,234 @@
{
Contributor: We banished notebooks a while ago as they don't play nice with git, CC @abhiskk

Contributor: I had asked Michael to make one so that it serves as a tutorial. What would be better if not a notebook?

Contributor: We banished them from git, they still exist, just not in git.

Author: Oleksandr said I should make a tutorial on the habitat website that reflects the same information.

"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Image Extraction in Habitat Sim\n",
"\n",
"author: Michael Piseno (mpiseno@gatech.edu)\n",
"\n",
"This notebook will go over how to use the Image Extraction API in Habitat Sim and the different user options available."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Helper functions\n",
"def display_sample(sample):\n",
" img = sample['rgba']\n",
" depth = sample['depth']\n",
" semantic = sample['semantic']\n",
" label = sample['label']\n",
"\n",
" arr = [img, depth, semantic]\n",
" titles = ['rgba', 'depth', 'semantic']\n",
" plt.figure(figsize=(12 ,8))\n",
" for i, data in enumerate(arr):\n",
" ax = plt.subplot(1, 3, i+1)\n",
" ax.axis('off')\n",
" ax.set_title(titles[i])\n",
" plt.imshow(data)\n",
" \n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting up the Extractor\n",
"\n",
"The main class that handles image data extraction in Habitat Sim is called ImageExtractor. The user needs to provide a scene filepath (either a .glb or .ply file) to the constructor. This is the only required constructor argument.\n",
"\n",
"Habitat Sim does not currently support multiple instances or extractors, so if you're done using an extractor you need to call the close method before instantiating a new one. Below we will go over some optional arguments for the extractor class.\n",
"\n",
"* scene_filepath: The filepath to the scene file as explained above\n",
"* labels: Class labels of the type of images the user wants to extract. Currently we only support extracting images of 'unnavigable points' like walls. In the future we hope to extend this functionality to allow the user to specify more unique class labels, but for now this argument is not that useful.\n",
"* img_size: The size of images to be output in the format (height, width)\n",
"* output: A list of the different output image types the user can obtain. Default is rgba.\n",
"\n",
"### Using the Extractor\n",
"\n",
"#### Indexing and Slicing\n",
"\n",
"The extractor can be indexed and sliced like a normal python list. Internaly, indexing into the extractor sets an agent position and rotation within the simulator and returns the corresponding agent observation. Indexing returns a dictionary that contains an image of each type specified in the \"output\" argument given by the user in the constructor. The dictionary also contains a key \"label\" which is the class label (specified by the user in the constructor) of the image. Note: for the scene in this example there is no semantic data which is why the semantic output below is not represented."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from habitat_sim.utils.data.dataextractor import ImageExtractor\n",
"\n",
"# Give the extractor a path to the scene\n",
"scene_filepath = \"../data/scene_datasets/habitat-test-scenes/skokloster-castle.glb\"\n",
"\n",
"# Instantiate an extractor. The only required argument is the scene filepath\n",
"extractor = ImageExtractor(scene_filepath, labels=[0.0], img_size=(512, 512),\n",
" output=['rgba', 'depth', 'semantic'])\n",
"\n",
"# Index in to the extractor like a normal python list\n",
"sample = extractor[0]\n",
"display_sample(sample)\n",
"\n",
"# Or use slicing\n",
"samples = extractor[1:4]\n",
"for sample in samples:\n",
" display_sample(sample)"
]
},
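{
"cell_type": "markdown",
"metadata": {},
"source": [
"The extractor also supports `len`, which the PyTorch `Dataset` wrapper in the next section relies on. The length is the total number of samples available, i.e. the number of poses found by the PoseExtractor (see the Appendix)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Total number of samples that can be extracted from this scene\n",
"print(len(extractor))"
]
},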
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Integrating with PyTorch Datasets\n",
"\n",
"It is very easy to plug an ImageExtractor into a PyTorch Datasets and DataLoaders for end to end training in PyTorch models without writing to disk. For a great tutorial on how to use PyTorch Dataset and DataLoader, refer to [this guide](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"from torch.utils.data import Dataset, DataLoader\n",
"\n",
"class HabitatDataset(Dataset):\n",
" def __init__(self, extractor):\n",
" self.extractor = extractor\n",
" \n",
" def __len__(self):\n",
" return len(self.extractor)\n",
" \n",
" def __getitem__(self, idx):\n",
" sample = self.extractor[idx]\n",
" output = {\n",
" 'rgba': sample['rgba'].astype(np.float32), # dataloader requires certain types\n",
" 'label': sample['label']\n",
" }\n",
" return output\n",
"\n",
"\n",
"class TrivialNet(nn.Module):\n",
" def __init__(self):\n",
" super(TrivialNet, self).__init__()\n",
" self.conv1 = nn.Conv2d(4, 8, 10, 10)\n",
" self.fc1 = nn.Linear(20808, 10)\n",
"\n",
" def forward(self, x):\n",
" x = self.conv1(x)\n",
" x = F.relu(x)\n",
" x = x.view(-1, self.get_flat_dim(x))\n",
" x = self.fc1(x)\n",
" return x\n",
" \n",
" def get_flat_dim(self, x):\n",
" dim_size = 1\n",
" for dim in x.size()[1:]:\n",
" dim_size *= dim\n",
" \n",
" return dim_size\n",
" \n",
"dataset = HabitatDataset(extractor)\n",
"dataloader = DataLoader(dataset, batch_size=2)\n",
"net = TrivialNet()\n",
"\n",
"for i, sample_batch in enumerate(dataloader):\n",
" img, label = sample_batch['rgba'], sample_batch['label']\n",
" img = img.permute(0, 3, 1, 2) # Reshape to PyTorch format for convolutions\n",
" out = net(img)\n",
" if i % 5 == 0:\n",
" print(\"TrivialNet: Batch: {}, Output: {}\\n\".format(i, out))"
]
},
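{
"cell_type": "markdown",
"metadata": {},
"source": [
"The loop above only runs forward passes. As a minimal sketch of a full training step, a loss and an optimizer can be added; the choice of CrossEntropyLoss and SGD here is arbitrary and just for illustration (every label in this example is 0.0, so the targets are degenerate):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch.optim as optim\n",
"\n",
"criterion = nn.CrossEntropyLoss()\n",
"optimizer = optim.SGD(net.parameters(), lr=0.01)\n",
"\n",
"for i, sample_batch in enumerate(dataloader):\n",
"    img = sample_batch['rgba'].permute(0, 3, 1, 2)\n",
"    label = sample_batch['label'].long()  # CrossEntropyLoss expects integer class indices\n",
"    optimizer.zero_grad()\n",
"    out = net(img)\n",
"    loss = criterion(out, label)\n",
"    loss.backward()\n",
"    optimizer.step()\n",
"    if i % 5 == 0:\n",
"        print(\"Batch {}: loss {:.4f}\".format(i, loss.item()))"
]
},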
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Appendix\n",
"\n",
"In this section I'll explain briefly how the image extraction is actually done so that others can make changes if necessary. When the user creates a ImageExtractor, the following sequence of events happen:\n",
"\n",
"1. A Simulator class is created and a 2D topdown view of the scene is generated\n",
"2. Using the topdown view, the PoseExtractor class creates a grid of points spaced equally across the topdown view\n",
"3. For each grid point, the PoseExtractor uses breadth-first search to find the closest 'point of interest'. A point of interest is a point specified by the class labels argument to ImageExtractor.\n",
"4. The PoseExtractor returns a list of poses, where each pose contains (position, rotation, label) information. When it comes time for the ImageExtractor to return an image to the user, these poses are used to set the agent state within the simulator."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"extractor.pose_extractor._show_topdown_view()"
]
},
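{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make step 3 concrete, here is a simplified, self-contained sketch of the breadth-first search idea. This is illustrative only, not the actual PoseExtractor implementation: starting from a grid point, expand outward until a cell whose label is a point of interest is found."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import deque\n",
"\n",
"def closest_point_of_interest(topdown, start, labels):\n",
"    # topdown: 2D grid of per-cell class labels\n",
"    # start: (row, col) grid point to search from\n",
"    # labels: set of label values that count as points of interest\n",
"    visited = {start}\n",
"    queue = deque([start])\n",
"    while queue:\n",
"        r, c = queue.popleft()\n",
"        if topdown[r][c] in labels:\n",
"            return (r, c)\n",
"        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n",
"            nr, nc = r + dr, c + dc\n",
"            if 0 <= nr < len(topdown) and 0 <= nc < len(topdown[0]) and (nr, nc) not in visited:\n",
"                visited.add((nr, nc))\n",
"                queue.append((nr, nc))\n",
"    return None\n",
"\n",
"# Toy example: label 0.0 marks a point of interest\n",
"grid = [[1, 1, 1], [1, 1, 0.0], [1, 1, 1]]\n",
"print(closest_point_of_interest(grid, (0, 0), {0.0}))  # -> (1, 2)"
]
},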
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Make sure to close the simulator after using it (explained above) if you want to instantiate another one at a later time!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"extractor.close()"
]
},
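{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the previous extractor closed, a new one can now be instantiated, for example with a smaller image size and only rgba output (a quick sketch reusing the scene path from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"new_extractor = ImageExtractor(scene_filepath, img_size=(256, 256), output=['rgba'])\n",
"\n",
"sample = new_extractor[0]\n",
"plt.imshow(sample['rgba'])\n",
"plt.show()\n",
"\n",
"# Close this extractor too when finished with it\n",
"new_extractor.close()"
]
}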
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}