Merge pull request #474 from fxia22/ig-develop
merge ig-develop into master for 2.0.1 release
fxia22 committed Sep 8, 2021
2 parents b12a370 + 4b6ca9d commit 64ad570
Showing 19 changed files with 294 additions and 130 deletions.
1 change: 0 additions & 1 deletion docs/objects.md
@@ -11,7 +11,6 @@ We provide a wide variety of **Objects** that can be imported into the **Simulat
- `SoftObject`
- `Cube`
- `VisualMarker`
- `VisualShape`

Typically, they take in the name or the path of an object (in `igibson.assets_path`) and provide a `load` function that can be invoked externally (usually by `import_object` of the `Simulator`). The `load` function imports the object into PyBullet. Some **Objects** (e.g. `ArticulatedObject`) also provide APIs to get and set the object pose.
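
As a hedged illustration of this flow (the asset path and constructor arguments below are assumptions based on the standard iGibson asset layout, not something this commit asserts), a minimal sketch of importing an `ArticulatedObject` might look like:

```python
import os

import igibson
from igibson.objects.articulated_object import ArticulatedObject
from igibson.simulator import Simulator

s = Simulator(mode="headless")
try:
    # Any articulated asset under igibson.assets_path works here; this
    # cabinet path is only an example and may differ in your asset release.
    urdf_path = os.path.join(igibson.assets_path, "models/cabinet2/cabinet_0007.urdf")
    obj = ArticulatedObject(filename=urdf_path)
    # import_object() invokes obj.load() and registers the object with PyBullet.
    s.import_object(obj)
    obj.set_position([0, 0, 0.5])
    print(obj.get_position())
finally:
    s.disconnect()
```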

76 changes: 76 additions & 0 deletions docs/virtual_reality.md
@@ -0,0 +1,76 @@
### Virtual Reality Overview

Virtual reality is currently supported only on Windows 10/11, with the HTC Vive and the Oculus Rift/Quest (with Link).

Linux support is planned, but the HTC Vive Pro Eye tracking driver is not available for Linux, and asynchronous projection/motion smoothing is currently not supported by the NVIDIA proprietary drivers on Linux.

### Setup
1. Set up the HTC Vive VR hardware according to the [setup guide](https://support.steampowered.com/steamvr/HTC_Vive/).

2. (Optional) If you plan to use eye tracking on Windows, create a [Vive developer account](https://hub.vive.com/sso/login), then download and install the [SRAnipal runtime](https://developer.vive.com/resources/vive-sense/sdk/vive-eye-and-facial-tracking-sdk/). Note that you should [calibrate](https://developer.vive.com/us/support/sdk/category_howto/how-to-calibrate-eye-tracking.html) the Vive eye tracker before each recording session.

3. Ensure you have installed iGibson according to the installation [instructions](http://svl.stanford.edu/igibson/docs/installation.html#installation-method).

### VR examples

We have several examples showing how to use our VR interface:

* vr demo files: `igibson/examples/demo/vr_demos`

* multi-user VR (experimental): `igibson/examples/demo/vr_demos/muvr`

* benchmark: `igibson/examples/demo/vr_demos/in_development`
- `vr_hand_dex_benchmark.py` -- Demonstrates various challenging manipulation tasks
- `vr_hand_speed_benchmark.py` -- Demonstrates various tasks to assess user speed at working in VR


* data_save_replay: `igibson/examples/demo/vr_demos/data_save_replay`
- This folder demonstrates how to save VR recordings and "replay" them to reproduce the recorded trajectory.

* robot_embodiment: `igibson/examples/demo/vr_demos/robot_embodiment`
- This folder demonstrates the VR interface to the Fetch robot (WIP)

Benchmarks:

We have two benchmarks, a hand benchmark and a speed benchmark, both of which can be found at the top level of the `vr_demos` folder. In these demos you can time yourself performing various challenges, such as putting objects away into specific containers/cabinets. Please see the comments in these demo files for more information.

### VR config and button mapping

You can find the global VR settings in `vr_config.yaml` in the `igibson` folder. We highly recommend reading through the default config, as it is heavily documented. The three most crucial settings are **current_device**, **use_tracked_body**, and **torso_tracker_serial**; a short sketch showing how to load and inspect them follows the list below.

* `current_device`: determines which of the `device_settings` keys will be used, and is used to set options specific to Oculus or the HTC Vive. The currently available keys, as seen in `device_settings`, are `HTC_VIVE_PRO_EYE` and `OCULUS`.
* `use_tracked_body`: determines if we will use [HTC Vive Trackers](https://developer.vive.com/us/support/sdk/category_howto/how-to-calibrate-eye-tracking.html) to track the body instead of inferring body position from the headset position.
* `torso_tracker_serial`: the serial number of the tracker used if `use_tracked_body` is `True`.
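
As a quick sanity check, you can load the config and look for these settings. This is only a sketch: the file location is taken from the description above, but the exact key layout inside `vr_config.yaml` is not asserted here, so the snippet simply searches for the keys it expects.

```python
import os

import yaml

import igibson

# vr_config.yaml lives in the installed igibson package folder (see above).
vr_config_path = os.path.join(os.path.dirname(igibson.__file__), "vr_config.yaml")
with open(vr_config_path) as f:
    vr_config = yaml.safe_load(f)

print("Top-level keys:", sorted(vr_config.keys()))


def find_key(node, key):
    # The settings may be top-level or nested under a sub-section depending
    # on your release, so search the whole mapping recursively.
    if isinstance(node, dict):
        if key in node:
            return node[key]
        for value in node.values():
            found = find_key(value, key)
            if found is not None:
                return found
    return None


for key in ("current_device", "use_tracked_body", "torso_tracker_serial"):
    print(key, "->", find_key(vr_config, key))
```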

Some additional options you may be interested in changing:
* `use_vr` (default True): whether to render to the HMD and use VR system or just render to screen (used for debugging)
* `eye_tracking` (default True): whether to use eye tracking. Turn this off if your headset does not support eye tracking (only the HTC Vive Pro Eye does)
* `touchpad_movement` (default True): whether to enable use of the touchpad to move - this helps you get around large scenes if your play area is small
* `movement_controller` (default 'right'): device to control movement - can be `right` or `left` (representing the corresponding controllers). The default is for right-handed people - please change it to `left` if that is more comfortable.
* `relative_movement_device` (default 'hmd'): which device to use to control touchpad movement direction (can be any VR device). You should not need to change this.
* `movement_speed` (default 0.01): how fast you move when using the touchpad. This number has been carefully calibrated; feel free to change it if you want to speed up or slow down.

We recommend looking at `igibson/render/mesh_renderer/mesh_renderer_vr.py` to see the `VrSettings` class, which reads from `vr_config.yaml`. A single `VrSettings` object is created and passed into the `Simulator` constructor.
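
A minimal sketch of that wiring is shown below. It assumes the `mode="vr"` and `vr_settings` constructor arguments and the scene-import call used in the iGibson 2.x examples; check `simulator.py` in your release for the exact signatures.

```python
from igibson.render.mesh_renderer.mesh_renderer_vr import VrSettings
from igibson.scenes.igibson_indoor_scene import InteractiveIndoorScene
from igibson.simulator import Simulator

# VrSettings reads vr_config.yaml when constructed; the use_vr flag mirrors
# the config option of the same name described above (argument name assumed).
vr_settings = VrSettings(use_vr=True)

s = Simulator(mode="vr", vr_settings=vr_settings)
try:
    # Scene and import call names follow the 2.0 examples (assumption).
    s.import_ig_scene(InteractiveIndoorScene("Rs_int"))
    for _ in range(3000):
        s.step()
finally:
    s.disconnect()
```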

Note (optional): If you are using a device that is not already mapped, please run `igibson/examples/demo/vr_demos/in_development/vr_button_mapping.py` to figure out which physical controller buttons correspond to which indices in OpenVR.

### Mirroring the VR view on the monitor

iGibson VR utilizes Steam VR's built-in companion window to visualize what the user sees in their headset. To open this window:
* launch Steam VR
* click on the hamburger menu in the top-left corner of the VR status menu (the dark blue window with icons for the VR devices)
* click the "Display VR View" button.

From this view, you can change which eye you are looking at (or look at both), and can make the window fullscreen.
Note that this window will be black until an application is running and the headset is detected by the lighthouse sensors. We also support a custom-built companion window that can run in iGibson; this can be enabled in the vr_config file described above (although it is off by default).

Note: Press ESCAPE to force the fullscreen rendering window to close during program execution (fullscreen is disabled by default).

### Contributing
* Most VR functions can be found in `igibson/simulator.py`
* The BehaviorRobot is located in `igibson/robots/behavior_robot.py`
* VR utility functions are found in `igibson/utils/vr_utils.py`
* The VR renderer can be found in `igibson/render/mesh_renderer.py`
* The underlying VR C++ code (querying controller states from OpenVR, and the VR renderer) can be found in `igibson/render/cpp/vr_mesh_renderer{.cpp,.h}`

91 changes: 81 additions & 10 deletions igibson/challenge/behavior_challenge.py
@@ -1,12 +1,15 @@
import json
import logging
import os
import shutil
from collections import defaultdict

import bddl
import numpy as np

import igibson
from igibson.envs.behavior_env import BehaviorEnv
from igibson.metrics.agent import AgentMetric
from igibson.metrics.agent import BehaviorRobotMetric
from igibson.metrics.disarrangement import KinematicDisarrangement, LogicalDisarrangement
from igibson.metrics.task import TaskMetric
from igibson.utils.utils import parse_config
@@ -18,7 +21,7 @@ def get_metrics_callbacks():
metrics = [
KinematicDisarrangement(),
LogicalDisarrangement(),
AgentMetric(),
BehaviorRobotMetric(),
TaskMetric(),
]

@@ -33,14 +36,13 @@ def get_metrics_callbacks():
class BehaviorChallenge(object):
def __init__(self):
self.config_file = os.environ["CONFIG_FILE"]
self.phase = os.environ["PHASE"]
self.split = os.environ["SPLIT"]
self.output_dir = os.environ["OUTPUT_DIR"]

def submit(self, agent):
env_config = parse_config(self.config_file)

per_episode_metrics = {}

if self.phase == "minival":
if self.split == "minival":
# Only eval one activity in the config file
tasks = [env_config["task"]]
else:
@@ -54,28 +56,55 @@ def submit(self, agent):
)
assert len(tasks) == 100

log_path = os.path.join(self.output_dir, "per_episode_metrics.json")
summary_log_path = os.path.join(self.output_dir, "aggregated_metrics.json")

self_reported_log_path = os.path.join(
self.output_dir, "..", "participant_reported_results", "per_episode_metrics.json"
)
self_reported_summary_log_path = os.path.join(
self.output_dir, "..", "participant_reported_results", "aggregated_metrics.json"
)
if os.path.exists(self_reported_log_path):
shutil.copyfile(self_reported_log_path, log_path)
print("Per episode eval results copied from self-reported results %s" % log_path)
with open(self_reported_log_path) as f:
self_reported_log = json.load(f)
assert len(self_reported_log) == len(tasks) * 9

if os.path.exists(self_reported_summary_log_path):
shutil.copyfile(self_reported_summary_log_path, summary_log_path)
print("Aggregated eval results copied from self-reported results %s" % summary_log_path)
return

episode = 0
per_episode_metrics = {}
scene_json = os.path.join(os.path.dirname(bddl.__file__), "../utils", "activity_to_preselected_scenes.json")

with open(scene_json) as f:
activity_to_scenes = json.load(f)

with open(os.path.join(igibson.ig_dataset_path, "metadata", "behavior_activity_statistics.json")) as f:
activity_metadata = json.load(f)

for task in tasks:
assert task in activity_to_scenes
scenes = sorted(set(activity_to_scenes[tasks[0]]))
num_scenes = len(scenes)
human_demo_mean_step = activity_metadata[task]["mean"]
env_config["max_step"] = human_demo_mean_step * 2 # adjust env_config['max_step'] based on the human
# demonstration, we give agent 2x steps of average human demonstration across all possible scenes

assert num_scenes <= 3

# Evaluate 9 activity instances in the training set for now
if num_scenes == 3:
scene_instance_ids = {scenes[0]: range(1), scenes[1]: range(0), scenes[2]: range(0)}
scene_instance_ids = {scenes[0]: range(3), scenes[1]: range(3), scenes[2]: range(3)}
elif num_scenes == 2:
scene_instance_ids = {scenes[0]: range(4), scenes[1]: range(5)}
else:
scene_instance_ids = {scenes[0]: range(9)}

# TODO: adjust env_config['episode_length'] based on the activity
for scene_id, instance_ids in scene_instance_ids.items():
env_config["scene_id"] = scene_id
for instance_id in instance_ids:
@@ -103,14 +132,56 @@ def submit(self, agent):
metrics_summary = {}
for callback in data_callbacks:
metrics_summary.update(callback())

metrics_summary["task"] = task
per_episode_metrics[episode] = metrics_summary
episode += 1
env.close()

log_path = "eval.json"
with open(log_path, "w+") as f:
json.dump(per_episode_metrics, f)
print("Eval results saved to %s" % log_path)
print("Per episode eval results saved to %s" % log_path)

aggregated_metrics = {}
success_score = []
simulator_time = []
kinematic_disarrangement = []
logical_disarrangement = []
distance_navigated = []
displacement_of_hands = []

task_to_mean_success_score = defaultdict(list)
task_scores = []

for episode, metric in per_episode_metrics.items():
task_to_mean_success_score[metric["task"]].append(metric["q_score"]["timestep"][-1])

for task, scores in task_to_mean_success_score.items():
task_scores.append(np.mean(scores))

task_scores = sorted(task_scores, reverse=True)

for episode, metric in per_episode_metrics.items():
success_score.append(metric["q_score"]["timestep"][-1])
simulator_time.append(metric["time"]["simulator_time"])
kinematic_disarrangement.append(metric["kinematic_disarrangement"]["relative"])
logical_disarrangement.append(metric["logical_disarrangement"]["relative"])
distance_navigated.append(np.sum(metric["agent_distance"]["timestep"]["body"]))
displacement_of_hands.append(
np.sum(metric["grasp_distance"]["timestep"]["left_hand"])
+ np.sum(metric["grasp_distance"]["timestep"]["right_hand"])
)

aggregated_metrics["Success Score"] = np.mean(success_score)
aggregated_metrics["Success Score Top 5"] = np.mean(np.array(task_scores)[:5])
aggregated_metrics["Simulated Time"] = np.mean(simulator_time)
aggregated_metrics["Kinematic Disarrangement"] = np.mean(kinematic_disarrangement)
aggregated_metrics["Logical Disarrangement"] = np.mean(logical_disarrangement)
aggregated_metrics["Distance Navigated"] = np.mean(distance_navigated)
aggregated_metrics["Displacement of Hands"] = np.mean(displacement_of_hands)
with open(summary_log_path, "w+") as f:
json.dump(aggregated_metrics, f)
print("Aggregated eval results saved to %s" % summary_log_path)


if __name__ == "__main__":
3 changes: 0 additions & 3 deletions igibson/examples/configs/behavior_full_observability.yaml
@@ -6,9 +6,6 @@ load_texture: true
pybullet_load_texture: true
should_open_all_doors: true

# Ignore task, use EmptyScene instead
debug: False

# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
11 changes: 5 additions & 6 deletions igibson/examples/configs/behavior_full_observability_fetch.yaml
@@ -1,20 +1,19 @@
# scene
scene_id: Rs_int
scene_id: Beechwood_1_int
clutter: false
build_graph: true
load_texture: true
pybullet_load_texture: true
should_open_all_doors: true

# Ignore task, use EmptyScene instead
debug: False

# domain randomization
texture_randomization_freq: null
object_randomization_freq: null

# robot
robot: FetchGripper
use_ag: true
ag_strict_mode: true
default_arm_pose: diagonal30
trunk_offset: 0.085
controller:
@@ -24,13 +23,13 @@ controller:
input_min: [-1, -1, -1, -1, -1, -1]
output_max: [0.2, 0.2, 0.2, 0.5, 0.5, 0.5]
output_min: [-0.2, -0.2, -0.2, -0.5, -0.5, -0.5]
eef_always_in_frame: true # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions
eef_always_in_frame: false # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions
neutral_xy: [0.25, 0] # (x,y) relative pos values taken in robot base frame from which radius limits will be computed
radius_limit: 0.5 # x-y reaching limit
height_limits: [0.2, 1.5] # min, max height limits

# task
task: re-shelving_library_books
task: cleaning_cupboards
task_id: 0
online_sampling: false
target_dist_min: 1.0
3 changes: 0 additions & 3 deletions igibson/examples/configs/behavior_onboard_sensing.yaml
@@ -6,9 +6,6 @@ load_texture: true
pybullet_load_texture: true
should_open_all_doors: true

# Ignore task, use EmptyScene instead
debug: False

# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
11 changes: 5 additions & 6 deletions igibson/examples/configs/behavior_onboard_sensing_fetch.yaml
@@ -1,20 +1,19 @@
# scene
scene_id: Rs_int
scene_id: Beechwood_1_int
clutter: false
build_graph: true
load_texture: true
pybullet_load_texture: true
should_open_all_doors: true

# Ignore task, use EmptyScene instead
debug: False

# domain randomization
texture_randomization_freq: null
object_randomization_freq: null

# robot
robot: FetchGripper
use_ag: true
ag_strict_mode: true
default_arm_pose: diagonal30
trunk_offset: 0.085
controller:
@@ -24,13 +23,13 @@ controller:
input_min: [-1, -1, -1, -1, -1, -1]
output_max: [0.2, 0.2, 0.2, 0.5, 0.5, 0.5]
output_min: [-0.2, -0.2, -0.2, -0.5, -0.5, -0.5]
eef_always_in_frame: true # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions
eef_always_in_frame: false # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions
neutral_xy: [0.25, 0] # (x,y) relative pos values taken in robot base frame from which radius limits will be computed
radius_limit: 0.5 # x-y reaching limit
height_limits: [0.2, 1.5] # min, max height limits

# task
task: re-shelving_library_books
task: cleaning_cupboards
task_id: 0
online_sampling: false
target_dist_min: 1.0
17 changes: 13 additions & 4 deletions igibson/object_states/heat_source_or_sink.py
@@ -1,5 +1,8 @@
import os

import numpy as np
import pybullet as p

import igibson
from igibson.object_states.aabb import AABB
from igibson.object_states.inside import Inside
@@ -9,11 +12,11 @@
from igibson.object_states.toggle import ToggledOn

# The name of the heat source link inside URDF files.
from igibson.objects.visual_shape import VisualShape
from igibson.objects.visual_marker import VisualMarker

_HEATING_ELEMENT_LINK_NAME = "heat_source"

_HEATING_ELEMENT_MARKER_SCALE = 1.0
_HEATING_ELEMENT_MARKER_SCALE = [1.0] * 3
_HEATING_ELEMENT_MARKER_FILENAME = os.path.join(igibson.assets_path, "models/fire/fire.obj")

# TODO: Delete default values for this and make them required.
@@ -120,7 +123,9 @@ def _compute_state_and_position(self):
def _initialize(self):
super(HeatSourceOrSink, self)._initialize()
self.initialize_link_mixin()
self.marker = VisualShape(_HEATING_ELEMENT_MARKER_FILENAME, _HEATING_ELEMENT_MARKER_SCALE)
self.marker = VisualMarker(
visual_shape=p.GEOM_MESH, filename=_HEATING_ELEMENT_MARKER_FILENAME, scale=_HEATING_ELEMENT_MARKER_SCALE
)
self.simulator.import_object(self.marker, use_pbr=False, use_pbr_mapping=False)
self.marker.set_position([0, 0, -100])

@@ -131,7 +136,11 @@ def _update(self):
marker_position = [0, 0, -100]
if self.position is not None:
marker_position = self.position
self.marker.set_position(marker_position)

# Lazy update of marker position
if not np.all(np.isclose(marker_position, self.marker.get_position())):
self.marker.set_position(marker_position)
self.marker.force_wakeup()

def _get_value(self):
return self.status, self.position
