Commit

Merge branch 'develop'

btalb committed May 12, 2020
2 parents 0120fff + 5e5121b commit dbad812
Showing 2 changed files with 13 additions and 11 deletions.
README.md (8 changes: 4 additions & 4 deletions)
@@ -1,7 +1,7 @@
-**NOTE: this software is part of the BenchBot software stack, and not intended to be run in isolation (although it can be installed independently through pip & run on results files if desired). For a working BenchBot system, please install the BenchBot software stack by following the instructions [here](https://github.com/RoboticVisionOrg/benchbot).**
+**NOTE: this software is part of the BenchBot software stack, and not intended to be run in isolation (although it can be installed independently through pip & run on results files if desired). For a working BenchBot system, please install the BenchBot software stack by following the instructions [here](https://github.com/roboticvisionorg/benchbot).**
 
 # BenchBot Evaluation
-BenchBot Evaluation is a library of functions used to evaluate the performance of a BenchBot system in two core semantic scene understanding tasks: semantic SLAM, and scene change detection. The easiest way to use this module is through the helper scripts provided with the [BenchBot software stack](https://github.com/RoboticVisionOrg/benchbot).
+BenchBot Evaluation is a library of functions used to evaluate the performance of a BenchBot system in two core semantic scene understanding tasks: semantic SLAM, and scene change detection. The easiest way to use this module is through the helper scripts provided with the [BenchBot software stack](https://github.com/roboticvisionorg/benchbot).
 
 ## Installing & performing evaluation with BenchBot Evaluation
 
@@ -88,7 +88,7 @@ Notes:
 }
 ```
 
-The above dicts can be obtained at runtime through the `BenchBot.task_details` & `BenchBot.environment_details` [API properties](https://github.com/RoboticVisionOrg/benchbot_api).
+The above dicts can be obtained at runtime through the `BenchBot.task_details` & `BenchBot.environment_details` [API properties](https://github.com/roboticvisionorg/benchbot_api).
 - For `'task_details'`:
   - `'type'` must be either `'semantic_slam'` or `'scd'`
   - `'control_mode'` must be either `'passive'` or `'active'`
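
As a hedged illustration of the two constraints listed above, a valid `'task_details'` dict might look like the following. Any further fields in the real schema are omitted because they are not visible in this excerpt, so treat the exact layout as an assumption.

```python
# Illustrative sketch only: shows the two constrained fields described above.
# The full 'task_details' schema may contain additional fields not shown in this diff.
task_details = {
    'type': 'semantic_slam',    # must be 'semantic_slam' or 'scd'
    'control_mode': 'passive',  # must be 'passive' or 'active'
}
```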
@@ -110,7 +110,7 @@ Notes:
 
 ## Generating results for evaluation
 
-An algorithm attempting to solve a semantic scene understanding task only has to fill in the list of `'objects'` and the `'class_list'` field (only if a custom class list has been used); everything else can be pre-populated using the [provided BenchBot API methods](https://github.com/RoboticVisionOrg/benchbot_api). Using these helper methods, only a few lines of code is needed to create results that can be used with our evaluator:
+An algorithm attempting to solve a semantic scene understanding task only has to fill in the list of `'objects'` and the `'class_list'` field (only if a custom class list has been used); everything else can be pre-populated using the [provided BenchBot API methods](https://github.com/roboticvisionorg/benchbot_api). Using these helper methods, only a few lines of code are needed to create results that can be used with our evaluator:
 
 ```python
 from benchbot_api import BenchBot
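
The README's own example continues beyond what is shown in this hunk. Purely as a hedged sketch of what assembling such a results dict might look like: the `BenchBot()` constructor call, the flat placement of `'objects'` and `'class_list'`, and the `my_results.json` filename are illustrative assumptions rather than the documented API; only the field names mentioned in the README text above are taken from the source.

```python
# Hedged sketch only: the exact results schema and helper methods are defined by
# benchbot_api and the full README, which are not visible in this diff.
import json

from benchbot_api import BenchBot

benchbot = BenchBot()

results = {
    # Both dicts are available at runtime via the API properties noted above
    'task_details': benchbot.task_details,
    'environment_details': benchbot.environment_details,
    # The only parts the algorithm has to fill in itself:
    'objects': [],     # one entry per reported object
    'class_list': [],  # only needed if a custom class list was used
}

# ... run the algorithm, appending detections to results['objects'] ...

with open('my_results.json', 'w') as f:
    json.dump(results, f)
```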
benchbot_eval/evaluator.py (16 changes: 9 additions & 7 deletions)
@@ -331,19 +331,21 @@ def _validate_results_set(results_set,
                     (results_set[0][0], task_str, f, s))
             elif s != task_str:
                 raise ValueError(
-                    "Evaluator was configured only accept results for task "
+                    "Evaluator was configured to only accept results for task "
                     "'%s', but results file '%s' is for task '%s'" %
                     (required_task, f, s))
 
-            env_strs.append(Evaluator._get_env_string(
-                d['environment_details']))
-            if (required_envs is not None and
-                    env_strs[-1] not in required_envs):
+            s = Evaluator._get_env_string(d['environment_details'])
+            if (required_envs is not None and s not in required_envs):
                 raise ValueError(
                     "Evaluator was configured to require environments: %s. "
                     "Results file '%s' is for environment '%s' which is not "
-                    "in the list." %
-                    (", ".join(required_envs), f, env_strs[-1]))
+                    "in the list." % (", ".join(required_envs), f, s))
+            elif s in env_strs:
+                raise ValueError(
+                    "Evaluator received multiple results for environment '%s'. "
+                    "Only one result is permitted per environment." % s)
+            env_strs.append(s)
 
         # Lastly, ensure we have all required environments if relevant
         if required_envs is not None:
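The net effect of this hunk is that the evaluator now refuses a second results file for an environment it has already seen, rather than silently keeping both. Below is a standalone, hedged sketch of that same validation idea (not the actual `Evaluator` code, whose surrounding method body is only partially shown above), using hypothetical environment strings.

```python
# Standalone sketch of the duplicate-environment rejection introduced above.
# 'house:1' / 'house:2' are hypothetical environment strings for illustration only.
def check_unique_environments(env_strings):
    """Raise ValueError if any environment string appears more than once."""
    seen = []
    for s in env_strings:
        if s in seen:
            raise ValueError(
                "Evaluator received multiple results for environment '%s'. "
                "Only one result is permitted per environment." % s)
        seen.append(s)
    return seen


check_unique_environments(['house:1', 'house:2'])  # fine: all unique
check_unique_environments(['house:1', 'house:1'])  # raises ValueError
```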
