Tutorial for image data extraction #518

Merged
merged 4 commits into from
Mar 18, 2020
Changes from 1 commit
158 changes: 158 additions & 0 deletions docs/pages/image-extractor.rst
@@ -0,0 +1,158 @@
Habitat Sim Image Extractor Demo
################################

.. contents::
    :class: m-block m-default


This tutorial goes over how to use the Image Extraction API in Habitat Sim and the different options available to the user.

.. code:: py

    import os

    import numpy as np
    import matplotlib.pyplot as plt

    # Helper functions
    def display_sample(sample):
        img = sample['rgba']
        depth = sample['depth']
        semantic = sample['semantic']
        label = sample['label']

        arr = [img, depth, semantic]
        titles = ['rgba', 'depth', 'semantic']
        plt.figure(figsize=(12, 8))
        for i, data in enumerate(arr):
            ax = plt.subplot(1, 3, i+1)
            ax.axis('off')
            ax.set_title(titles[i])
            plt.imshow(data)

        plt.show()

`Setting up the Extractor`_
===========================

The main class that handles image data extraction in Habitat Sim is called ImageExtractor.
The user needs to provide a scene filepath (either a .glb or .ply file) to the constructor;
this is the only required argument. The constructor arguments are:

* scene_filepath: The filepath to the scene file, as explained above.
* labels: Class labels of the type of images the user wants to extract. Currently we only
  support extracting images of 'unnavigable points' like walls. In the future we hope to extend
  this functionality to let the user specify more specific class labels, but for now this
  argument is not very useful.
* img_size: The size of the output images, in the format (height, width).
* output: A list of the different image types the user wants returned. Default is rgba.

Habitat Sim does not currently support multiple extractor instances, so when you are done using
an extractor you need to call its close method before instantiating a new one, as shown in the
sketch below.
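
Here is a minimal sketch of this close-then-recreate pattern; the scene paths are placeholders
used only for illustration, not files shipped with Habitat Sim.

.. code:: py

    from habitat_sim.utils.data.data_extractor import ImageExtractor

    # Hypothetical scene paths, used only for illustration
    extractor = ImageExtractor("path/to/first_scene.glb")

    # ... extract images from the first scene ...

    # Only one extractor may exist at a time, so close it before creating another
    extractor.close()
    extractor = ImageExtractor("path/to/second_scene.glb", img_size=(256, 256))
    extractor.close()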


`Using the Extractor`_
======================

The extractor can be indexed and sliced like a normal Python list. Internally, indexing into
the extractor sets an agent position and rotation within the simulator and returns the corresponding
agent observation. Indexing returns a dictionary that contains an image for each type specified in
the "output" argument given to the constructor, plus a key "label" holding the class label
(specified by the user in the constructor) of the image. Note: the scene in this example has no
semantic data, which is why the semantic output is not represented in the example below.

.. code:: py

    from habitat_sim.utils.data.data_extractor import ImageExtractor

    # Give the extractor a path to the scene
    scene_filepath = "../data/scene_datasets/habitat-test-scenes/skokloster-castle.glb"
(Review comment, Contributor) Suggested change: use
"data/scene_datasets/habitat-test-scenes/skokloster-castle.glb" instead of
"../data/scene_datasets/habitat-test-scenes/skokloster-castle.glb".

    # Instantiate an extractor. The only required argument is the scene filepath
    extractor = ImageExtractor(scene_filepath, labels=[0.0], img_size=(512, 512),
                               output=['rgba', 'depth', 'semantic'])

    # Index into the extractor like a normal Python list
    sample = extractor[0]

    # Or use slicing
    samples = extractor[1:4]
    for sample in samples:
        display_sample(sample)


.. image:: /static/extractor-example-output.png


`Integrating with PyTorch Datasets`_
====================================

An ImageExtractor can be plugged directly into PyTorch Datasets and DataLoaders for end-to-end
training of PyTorch models without writing images to disk. For a great tutorial on how to use
PyTorch Datasets and DataLoaders, refer to this guide: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html

.. code:: py

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import Dataset, DataLoader

    class HabitatDataset(Dataset):
        def __init__(self, extractor):
            self.extractor = extractor

        def __len__(self):
            return len(self.extractor)

        def __getitem__(self, idx):
            sample = self.extractor[idx]
            output = {
                'rgba': sample['rgba'].astype(np.float32),  # dataloader requires certain types
                'label': sample['label']
            }
            return output


    class TrivialNet(nn.Module):
        def __init__(self):
            super(TrivialNet, self).__init__()

        def forward(self, x):
            x = F.relu(x)
            return x


    dataset = HabitatDataset(extractor)
    dataloader = DataLoader(dataset, batch_size=2)
(Review comment, Contributor) Worth mentioning that num_workers must be set to 0 currently.

    net = TrivialNet()

    for i, sample_batch in enumerate(dataloader):
        img, label = sample_batch['rgba'], sample_batch['label']
        img = img.permute(0, 3, 1, 2)  # Reshape to PyTorch format for convolutions
        out = net(img)
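
Per the review note above, and assuming that constraint still applies, a minimal sketch of
constructing the DataLoader with num_workers set explicitly to 0:

.. code:: py

    # Per the review note above: num_workers must currently be set to 0
    dataloader = DataLoader(dataset, batch_size=2, num_workers=0)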


`Appendix`_
===========

I'll explain briefly how the image extraction is actually done so that others can make changes
if necessary. When the user creates an ImageExtractor, the following sequence of events happens:

1. A Simulator class is created and a 2D topdown view of the scene is generated.
2. Using the topdown view, the PoseExtractor class creates a grid of points spaced equally across
   the topdown view.
3. For each grid point, the PoseExtractor uses a predefined method for extracting good camera
   poses to find the closest 'point of interest'. For example, one method for pose extraction is
   to breadth-first search from every grid point to find the closest point of interest (a
   simplified sketch of this idea follows the list). A point of interest is a point specified by
   the class labels argument to ImageExtractor.
4. The PoseExtractor returns a list of poses, where each pose contains (position, rotation, label)
   information. When it comes time for the ImageExtractor to return an image to the user, these
   poses are used to set the agent state within the simulator.
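
To make steps 2 and 3 concrete, here is a minimal, illustrative sketch of the grid-and-BFS idea.
It is not the actual PoseExtractor implementation; the toy topdown map, the grid spacing, and the
closest_point_of_interest helper are made up purely for illustration.

.. code:: py

    from collections import deque

    import numpy as np

    def closest_point_of_interest(topdown, start):
        """Breadth-first search from a grid point to the nearest point of interest."""
        height, width = topdown.shape
        visited = {start}
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            if topdown[r, c]:  # this cell is a point of interest
                return (r, c)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < height and 0 <= nc < width and (nr, nc) not in visited:
                    visited.add((nr, nc))
                    queue.append((nr, nc))
        return None  # no point of interest reachable

    # Toy topdown map: True marks a 'point of interest' (e.g. an unnavigable wall cell)
    topdown = np.zeros((5, 5), dtype=bool)
    topdown[:, 4] = True

    # Grid points spaced equally across the map, each paired with its closest point of interest
    grid_points = [(r, c) for r in range(0, 5, 2) for c in range(0, 5, 2)]
    poses = [(p, closest_point_of_interest(topdown, p)) for p in grid_points]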


Make sure to close the extractor after using it (as explained above) if you want to instantiate
another one at a later time!

.. code:: py

    extractor.close()