Add support for importing semantics from https://3dscenegraph.stanford.edu for use with gibson dataset #374
The 3dscenegraph semantic dataset is currently limited to gibson_tiny. However, semantics for gibson_medium are expected to be released soon.
The mesh used for semantics is different from the mesh used for Habitat. The coordinate system also differs between the two meshes: the Y and Z axes are swapped, but the origin is the same. We should be able to generate a semantic .ply mesh from the original .obj mesh by transforming vertex coordinates.
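A minimal sketch of that vertex transform, assuming the only difference really is swapped Y and Z axes (the exact sign convention would still need verifying against both meshes):

```python
import numpy as np

# Hypothetical vertex array loaded from the Gibson .obj mesh (N x 3).
vertices = np.array([
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
])

# Swap the Y and Z axes; the origin stays the same. Whether one axis
# also needs negating (to keep a right-handed frame) is unverified.
transformed = vertices[:, [0, 2, 1]]

print(transformed.tolist())
```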
The semantic data from 3DSceneGraph is available via an npz file. To access the data:
This will return the following dictionary:
building
room
object
camera
panorama
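A self-contained sketch of that loading step. This builds a tiny stand-in .npz with the same pickled-dict layout (real files come from https://3dscenegraph.stanford.edu); storing the dict under an `output` key mirrors the public 3DSceneGraph loader, but verify against the actual release:

```python
import io
import numpy as np

# Stand-in for a 3DSceneGraph .npz: a dict with the keys listed above.
data = {
    'building': {}, 'room': {}, 'object': {}, 'camera': {}, 'panorama': {},
}
buf = io.BytesIO()
np.savez(buf, output=data)  # the dict is stored as a pickled 0-d object array
buf.seek(0)

# allow_pickle=True is required because the dict is pickled inside the
# archive -- this is exactly why cnpy cannot read these files in C++.
loaded = np.load(buf, allow_pickle=True)['output'].item()
print(sorted(loaded.keys()))
```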
Was hoping to load the .npz file in C++ using cnpy, but the .npz contains pickled data which cnpy can't handle. I could potentially handle the pickled data using http://www.picklingtools.com/. Alternatively, I could do all the processing in python but the python tools for writing a mesh are cumbersome and likely slow. So for now, I think I will write a python script to convert the data I need into a format that can be easily loaded in C++ and then do the processing in C++. |
The semantic mask information is located in:
which is an array with one entry per mesh face. Each element contains the semantic object_id for that face; if we don't have semantic information for a face, the id is 0. The format of the array:
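A hypothetical illustration of that per-face array (the values here are made up; the real array comes out of the .npz):

```python
import numpy as np

# Hypothetical per-face semantic array: one object_id per mesh face,
# with 0 meaning "no semantic annotation for this face".
face_object_ids = np.array([0, 3, 3, 7, 0, 12])

labeled = face_object_ids != 0
print(int(labeled.sum()), "of", face_object_ids.size, "faces are labeled")
print("object ids present:", np.unique(face_object_ids[labeled]).tolist())
```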
We should be able to write out the object ids using the following code:
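One way to sketch that write-out step (the filename and dtype are my choices, not from the thread): dumping the ids as raw little-endian int32 gives a format that a plain `fread`/`ifstream` can consume on the C++ side without any npz or pickle handling.

```python
import os
import tempfile

import numpy as np

# Hypothetical per-face object_id array extracted from the .npz.
face_object_ids = np.array([0, 3, 3, 7, 0, 12])

# Fixed little-endian int32 avoids platform surprises when the file is
# later read back in C++.
path = os.path.join(tempfile.mkdtemp(), 'object_ids.bin')
face_object_ids.astype('<i4').tofile(path)

# Round-trip check of the same file.
back = np.fromfile(path, dtype='<i4')
print(back.tolist())
```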
Bounding box information: Looking at the object schema:
location and size may be the axis-aligned bounding box. This will have to be verified.
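A quick way to verify that hypothesis: if `location`/`size` really are the axis-aligned bounding box, the center and extents computed from an object's vertices should match them. All names and values below are hypothetical.

```python
import numpy as np

# Hypothetical object entry and the vertices assigned to that object.
obj = {'location': np.array([1.0, 1.0, 1.0]),
       'size': np.array([2.0, 4.0, 6.0])}
verts = np.array([[0.0, -1.0, -2.0],
                  [2.0,  3.0,  4.0]])

# AABB center and extents from the vertex data.
mins, maxs = verts.min(axis=0), verts.max(axis=0)
center, extents = (mins + maxs) / 2.0, maxs - mins

print(bool(np.allclose(center, obj['location'])),
      bool(np.allclose(extents, obj['size'])))
```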
Semantics seem to be working with the following transformation: x1 = x0
Creating a Gibson semantic mesh is a two-step process. First you extract the object_id table from the .npz file. Then you create the semantic mesh from the extracted ids file and the .obj file the .npz is based on. Addresses: Issue #374
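The second step can be sketched as follows. This is not the actual converter; it is a minimal illustration of emitting an ASCII .ply whose faces carry an `object_id` property (treating a per-face integer property as the target format is my assumption):

```python
import os
import tempfile

import numpy as np

def write_semantic_ply(path, vertices, faces, face_object_ids):
    """Write an ASCII .ply where each face carries its semantic object_id."""
    with open(path, 'w') as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("property int object_id\n")
        f.write("end_header\n")
        for v in vertices:
            f.write(f"{v[0]} {v[1]} {v[2]}\n")
        for face, oid in zip(faces, face_object_ids):
            idx = " ".join(str(i) for i in face)
            f.write(f"{len(face)} {idx} {oid}\n")

# Toy triangle mesh with one face labeled object_id 3.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = [[0, 1, 2]]
ply_path = os.path.join(tempfile.mkdtemp(), 'semantic.ply')
write_semantic_ply(ply_path, verts, faces, [3])

content = open(ply_path).read()
print("property int object_id" in content)
```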
…is missing (#406) With the current code state, 3dscenegraph semantic annotation files (*.scn) won't load, as our semantic loading pipeline triggers only on *.house files. To enable the functionality implemented in #393 and #374, this adds loading of the Gibson semantics scene if the MP3D semantics are missing. To test semantic loading end-to-end, an integration test is added that will run only when *.scn test data is available.
Hello, I want to know whether we can get the rooms' centers and bounding boxes from Habitat in the Gibson dataset. I used 3DSceneGraph as Gibson semantics but only get the SemanticObject class. Thanks!
…ibson semantic scenes (facebookresearch#407) To leverage the spatial information in the 3dscenegraph semantic annotations, this adds support for object bounding boxes in Gibson semantic scenes. Related to issue facebookresearch#374 and depends on PR facebookresearch#406.
…#430) This is to implement the Encoder-Decoder CNN feature extractor to be used in the EQA baseline implementation, as in facebookresearch#374. Feature extraction from scene images is the first part of each of the subsequent trainers (VQA, PACMAN) in the EQA implementation. Implementation based on EmbodiedQA, Das et al, CVPR 2018 (paper, code)
🚀 Feature
We want to be able to use the semantic dataset from https://3dscenegraph.stanford.edu/ with the scene dataset from http://gibsonenv.stanford.edu/