Flake8 + Bugfixes + Linter #459

Merged · 29 commits · Aug 22, 2020
Commits (changes shown from 24 of 29 commits):
c0894dc
Ran autoflake
Skylion007 Aug 21, 2020
6bacb37
flake8 examples
Skylion007 Aug 21, 2020
af4c4ef
flake8 habitat
Skylion007 Aug 21, 2020
d434532
Flake8 Habitat_baselines
Skylion007 Aug 21, 2020
e40aa13
flake8 tests
Skylion007 Aug 21, 2020
aa7ddf7
Add flake8 linter
Skylion007 Aug 21, 2020
6334471
Fix eqa __init__
Skylion007 Aug 21, 2020
f5389fb
Fix vln datasets.py
Skylion007 Aug 21, 2020
822e605
fix nav __init__
Skylion007 Aug 21, 2020
7dcf2f9
Fix .circleci linter
Skylion007 Aug 21, 2020
895a135
Fix EQA __init__.py
Skylion007 Aug 21, 2020
a0c0293
Re-enable pre-commit hook
Skylion007 Aug 21, 2020
24209ef
Refix vln
Skylion007 Aug 21, 2020
762fc83
More __init__ repair
Skylion007 Aug 21, 2020
2750961
Fix two inits
Skylion007 Aug 21, 2020
a5770fe
Fix typo
Skylion007 Aug 21, 2020
4ae3050
Finalize __init__s and tests fixes
Skylion007 Aug 22, 2020
1c51adf
merge profile removal
Skylion007 Aug 22, 2020
ab73efc
Update deprecated isort pre-commit hook
Skylion007 Aug 22, 2020
d8a4746
Add missing autoflake flag
Skylion007 Aug 22, 2020
0a3d833
clean up errors on pyrobot import
Skylion007 Aug 22, 2020
bd8d32d
Apparently seed-isort pre-commit is deprecated
Skylion007 Aug 22, 2020
dda4d4c
More bugfixes
Skylion007 Aug 22, 2020
dd991b4
reuse __init__ in vocabdict and fix m_docstring
Skylion007 Aug 22, 2020
2051171
Fix typo
Skylion007 Aug 22, 2020
2089ffb
Fix typo
Skylion007 Aug 22, 2020
13e8adf
Address comments
Skylion007 Aug 22, 2020
4e780d2
Add back known_first_party
Skylion007 Aug 22, 2020
51dd96b
Fix isort tutorial
Skylion007 Aug 22, 2020
8 changes: 6 additions & 2 deletions .circleci/config.yml
@@ -15,7 +15,7 @@ jobs:
- run:
name: setup
command: |
sudo pip install black "isort[pyproject]" numpy --progress-bar off
sudo pip install black flake8 "isort[pyproject]" numpy --progress-bar off
sudo pip install -r requirements.txt --progress-bar off
- run:
name: run black
@@ -28,7 +28,11 @@ jobs:
isort --version
isort -rc habitat/. habitat_baselines/. examples/. test/. setup.py --diff
isort -rc habitat/. habitat_baselines/. examples/. test/. setup.py --check-only

- run:
name: run flake8
command: |
flake8 --version
flake8 habitat/. habitat_baselines/. examples/. tests/. setup.py
install_and_test_ubuntu:
<<: *gpu
steps:
17 changes: 8 additions & 9 deletions .pre-commit-config.yaml
@@ -20,14 +20,8 @@ repos:
- id: mixed-line-ending
args: ['--fix=lf']

- repo: https://github.com/asottile/seed-isort-config
rev: v2.2.0
hooks:
- id: seed-isort-config
language_version: python3

- repo: https://github.com/pre-commit/mirrors-isort
rev: v5.0.7
- repo: https://github.com/timothycrosley/isort
rev: 5.4.2
hooks:
- id: isort
exclude: docs/
@@ -43,9 +37,14 @@ repos:
rev: master
hooks:
- id: autoflake
args: ['--expand-star-imports', '--ignore-init-module-imports', '--in-place']
args: ['--expand-star-imports', '--ignore-init-module-imports', '--in-place', '-c']
exclude: docs/

- repo: https://gitlab.com/pycqa/flake8
rev: 3.8.3
hooks:
- id: flake8

- repo: https://github.com/kynan/nbstripout
rev: master
hooks:
4 changes: 2 additions & 2 deletions examples/example.py
@@ -14,12 +14,12 @@ def example():
config=habitat.get_config("configs/tasks/pointnav.yaml")
) as env:
print("Environment creation successful")
observations = env.reset()
observations = env.reset() # noqa: F841

print("Agent stepping around inside environment.")
count_steps = 0
while not env.episode_over:
observations = env.step(env.action_space.sample())
observations = env.step(env.action_space.sample()) # noqa: F841
count_steps += 1
print("Episode finished after {} steps.".format(count_steps))

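A note on the "# noqa: F841" comments added above: F841 is flake8's warning for a local variable that is assigned but never read, which fires on example code that calls env.reset() and env.step() purely for their side effects. The snippet below is an illustration only (FakeEnv is a stand-in, not Habitat's Env API) of the two usual ways to satisfy the check.

    class FakeEnv:  # stand-in environment, not Habitat's API
        def reset(self):
            return {"rgb": None}

        def step(self, action):
            return {"rgb": None}


    env = FakeEnv()

    # Option 1: keep the named variable for readability and silence F841 inline,
    # as the example above does.
    observations = env.reset()  # noqa: F841

    # Option 2: drop the binding entirely so there is nothing for flake8 to flag.
    env.step(action=0)
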
45 changes: 19 additions & 26 deletions examples/tutorials/colabs/Habitat_Interactive_Tasks.ipynb
@@ -87,7 +87,6 @@
"import habitat_sim\n",
"from habitat.config import Config\n",
"from habitat.core.registry import registry\n",
"from habitat_sim.utils import common as ut\n",
"from habitat_sim.utils import viz_utils as vut\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
@@ -145,10 +144,6 @@
" video_file = output_path + prefix + \".mp4\"\n",
" print(\"Encoding the video: %s \" % video_file)\n",
" writer = vut.get_fast_video_writer(video_file, fps=fps)\n",
" thumb_size = (int(videodims[0] / 5), int(videodims[1] / 5))\n",
" outline_frame = (\n",
" np.ones((thumb_size[1] + 2, thumb_size[0] + 2, 3), np.uint8) * 150\n",
" )\n",
" for ob in observations:\n",
" # If in RGB/RGBA format, remove the alpha channel\n",
" rgb_im_1st_person = cv2.cvtColor(ob[\"rgb\"], cv2.COLOR_RGBA2RGB)\n",
@@ -815,7 +810,7 @@
" sim, \"rgb\", crosshair_pos=[128, 190], max_distance=1.0\n",
" )\n",
" print(f\"Closest Object ID: {closest_object} using 1.0 threshold\")\n",
" assert closest_object == -1, f\"Agent shoud not be able to pick any object\""
" assert closest_object == -1, \"Agent shoud not be able to pick any object\""
]
},
{
@@ -1129,7 +1124,6 @@
"from habitat.core.embodied_task import Measure\n",
"from habitat.core.simulator import Observations, Sensor, SensorTypes, Simulator\n",
"from habitat.tasks.nav.nav import PointGoalSensor\n",
"from habitat_sim.utils.common import quat_from_magnum\n",
"\n",
"\n",
"@registry.register_sensor\n",
@@ -1555,7 +1549,7 @@
" action_name == \"GRAB_RELEASE\"\n",
" and observations[\"gripped_object_id\"] >= 0\n",
" ):\n",
" obj_id = observations[\"gripped_object_id\"]\n",
" obj_id = observations[\"gripped_object_id\"] # noqa: 841\n",
" self._prev_measure[\"gripped_object_count\"] += 1\n",
"\n",
" gripped_success_reward = (\n",
@@ -1674,19 +1668,17 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import time\n",
"from collections import defaultdict, deque\n",
"from typing import Any, Dict, List, Optional\n",
"import os # noqa: 841\n",
"import time # noqa: 841\n",
"from collections import defaultdict, deque # noqa: 841\n",
"from typing import Any, Dict, List, Optional # noqa :841\n",
"\n",
"import numpy as np\n",
"import torch\n",
"import numpy as np # noqa: 841\n",
"import torch # noqa: 841\n",
"from torch.optim.lr_scheduler import LambdaLR\n",
"\n",
"from habitat import Config, logger\n",
"from habitat.core.vector_env import ThreadedVectorEnv\n",
"from habitat.utils.visualizations.utils import observations_to_image\n",
"from habitat_baselines.common.base_trainer import BaseRLTrainer\n",
"from habitat_baselines.common.baseline_registry import baseline_registry\n",
"from habitat_baselines.common.env_utils import construct_envs, make_env_fn\n",
"from habitat_baselines.common.environments import get_env_class\n",
@@ -1698,12 +1690,12 @@
" linear_decay,\n",
")\n",
"from habitat_baselines.rl.models.rnn_state_encoder import RNNStateEncoder\n",
"from habitat_baselines.rl.ppo import PPO, PointNavBaselinePolicy\n",
"from habitat_baselines.rl.ppo import PPO\n",
"from habitat_baselines.rl.ppo.policy import Net, Policy\n",
"from habitat_baselines.rl.ppo.ppo_trainer import PPOTrainer\n",
"\n",
"\n",
"def construct_envs(\n",
"def construct_envs( # noqa: 841\n",
" config, env_class, workers_ignore_signals=False,\n",
"):\n",
" r\"\"\"Create VectorEnv object with specified config and env class type.\n",
Expand All @@ -1721,7 +1713,7 @@
" num_processes = config.NUM_PROCESSES\n",
" configs = []\n",
" env_classes = [env_class for _ in range(num_processes)]\n",
" dataset = make_dataset(config.TASK_CONFIG.DATASET.TYPE)\n",
" dataset = habitat.datasets.make_dataset(config.TASK_CONFIG.DATASET.TYPE)\n",
" scenes = config.TASK_CONFIG.DATASET.CONTENT_SCENES\n",
" if \"*\" in config.TASK_CONFIG.DATASET.CONTENT_SCENES:\n",
" scenes = dataset.get_scenes_to_load(config.TASK_CONFIG.DATASET)\n",
@@ -2154,15 +2146,16 @@
"# @title Train an RL agent on a single episode\n",
"!if [ -d \"data/tb\" ]; then rm -r data/tb; fi\n",
"\n",
"import random\n",
"import random # noqa: 841\n",
"\n",
"import numpy as np\n",
"import torch\n",
"import numpy as np # noqa: 841\n",
"import torch # noqa: 841\n",
"\n",
"import habitat\n",
"from habitat import Config, Env, RLEnv, VectorEnv, make_dataset\n",
"from habitat.config import get_config\n",
"from habitat_baselines.config.default import get_config as get_baseline_config\n",
"import habitat # noqa: 841\n",
"from habitat import Config, make_dataset # noqa: 841\n",
"from habitat_baselines.config.default import (\n",
" get_config as get_baseline_config, # noqa: 841\n",
")\n",
"\n",
"baseline_config = get_baseline_config(\n",
" \"habitat_baselines/config/pointnav/ppo_pointnav.yaml\"\n",
45 changes: 19 additions & 26 deletions examples/tutorials/nb_python/Habitat_Interactive_Tasks.py
@@ -90,7 +90,6 @@
import habitat_sim
from habitat.config import Config
from habitat.core.registry import registry
from habitat_sim.utils import common as ut
from habitat_sim.utils import viz_utils as vut

if "google.colab" in sys.modules:
@@ -141,10 +140,6 @@ def make_video_cv2(
video_file = output_path + prefix + ".mp4"
print("Encoding the video: %s " % video_file)
writer = vut.get_fast_video_writer(video_file, fps=fps)
thumb_size = (int(videodims[0] / 5), int(videodims[1] / 5))
outline_frame = (
np.ones((thumb_size[1] + 2, thumb_size[0] + 2, 3), np.uint8) * 150
)
for ob in observations:
# If in RGB/RGBA format, remove the alpha channel
rgb_im_1st_person = cv2.cvtColor(ob["rgb"], cv2.COLOR_RGBA2RGB)
@@ -749,7 +744,7 @@ def raycast(sim, sensor_name, crosshair_pos=[128, 128], max_distance=2.0):
sim, "rgb", crosshair_pos=[128, 190], max_distance=1.0
)
print(f"Closest Object ID: {closest_object} using 1.0 threshold")
assert closest_object == -1, f"Agent shoud not be able to pick any object"
assert closest_object == -1, "Agent shoud not be able to pick any object"


# %%
@@ -1039,7 +1034,6 @@ def step(self, action: int):
from habitat.core.embodied_task import Measure
from habitat.core.simulator import Observations, Sensor, SensorTypes, Simulator
from habitat.tasks.nav.nav import PointGoalSensor
from habitat_sim.utils.common import quat_from_magnum


@registry.register_sensor
@@ -1445,7 +1439,7 @@ def get_reward(self, observations):
action_name == "GRAB_RELEASE"
and observations["gripped_object_id"] >= 0
):
obj_id = observations["gripped_object_id"]
obj_id = observations["gripped_object_id"] # noqa: 841
self._prev_measure["gripped_object_count"] += 1

gripped_success_reward = (
@@ -1559,19 +1553,17 @@ def get_info(self, observations):


# %%
import os
import time
from collections import defaultdict, deque
from typing import Any, Dict, List, Optional
import os # noqa: 841
import time # noqa: 841
from collections import defaultdict, deque # noqa: 841
from typing import Any, Dict, List, Optional # noqa :841

import numpy as np
import torch
import numpy as np # noqa: 841
import torch # noqa: 841
from torch.optim.lr_scheduler import LambdaLR

from habitat import Config, logger
from habitat.core.vector_env import ThreadedVectorEnv
from habitat.utils.visualizations.utils import observations_to_image
from habitat_baselines.common.base_trainer import BaseRLTrainer
from habitat_baselines.common.baseline_registry import baseline_registry
from habitat_baselines.common.env_utils import construct_envs, make_env_fn
from habitat_baselines.common.environments import get_env_class
@@ -1583,12 +1575,12 @@ def get_info(self, observations):
linear_decay,
)
from habitat_baselines.rl.models.rnn_state_encoder import RNNStateEncoder
from habitat_baselines.rl.ppo import PPO, PointNavBaselinePolicy
from habitat_baselines.rl.ppo import PPO
from habitat_baselines.rl.ppo.policy import Net, Policy
from habitat_baselines.rl.ppo.ppo_trainer import PPOTrainer


def construct_envs(
def construct_envs( # noqa: 841
config, env_class, workers_ignore_signals=False,
):
r"""Create VectorEnv object with specified config and env class type.
Expand All @@ -1606,7 +1598,7 @@ def construct_envs(
num_processes = config.NUM_PROCESSES
configs = []
env_classes = [env_class for _ in range(num_processes)]
dataset = make_dataset(config.TASK_CONFIG.DATASET.TYPE)
dataset = habitat.datasets.make_dataset(config.TASK_CONFIG.DATASET.TYPE)
scenes = config.TASK_CONFIG.DATASET.CONTENT_SCENES
if "*" in config.TASK_CONFIG.DATASET.CONTENT_SCENES:
scenes = dataset.get_scenes_to_load(config.TASK_CONFIG.DATASET)
@@ -2028,15 +2020,16 @@ def eval(self) -> None:
# @title Train an RL agent on a single episode
# !if [ -d "data/tb" ]; then rm -r data/tb; fi

import random
import random # noqa: 841

import numpy as np
import torch
import numpy as np # noqa: 841
import torch # noqa: 841

import habitat
from habitat import Config, Env, RLEnv, VectorEnv, make_dataset
from habitat.config import get_config
from habitat_baselines.config.default import get_config as get_baseline_config
import habitat # noqa: 841
from habitat import Config, make_dataset # noqa: 841
from habitat_baselines.config.default import (
get_config as get_baseline_config, # noqa: 841
)

baseline_config = get_baseline_config(
"habitat_baselines/config/pointnav/ppo_pointnav.yaml"
1 change: 1 addition & 0 deletions examples/vln_benchmark.py
@@ -6,6 +6,7 @@

import argparse
from collections import defaultdict
from typing import Dict

import habitat
from habitat.config.default import get_config
2 changes: 1 addition & 1 deletion habitat/__init__.py
@@ -12,7 +12,7 @@
from habitat.core.embodied_task import EmbodiedTask, Measure, Measurements
from habitat.core.env import Env, RLEnv
from habitat.core.logging import logger
from habitat.core.registry import registry
from habitat.core.registry import registry # noqa : F401
from habitat.core.simulator import Sensor, SensorSuite, SensorTypes, Simulator
from habitat.core.vector_env import ThreadedVectorEnv, VectorEnv
from habitat.datasets import make_dataset
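The "# noqa : F401" marker above flags an intentional re-export: registry is imported here so it stays importable from the top-level package, and flake8 would otherwise report the import as unused (F401). A minimal illustration of the pattern, with a stdlib module standing in for the Habitat submodule:

    # Illustrative module-level re-export (math stands in for the Habitat submodule).
    from math import sqrt  # noqa: F401  (re-exported; otherwise flake8 reports F401)

    # Alternative to the inline comment: declare the public API explicitly.
    # Pyflakes treats names listed in __all__ as used, so no suppression is needed.
    __all__ = ["sqrt"]
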
2 changes: 1 addition & 1 deletion habitat/core/benchmark.py
@@ -46,7 +46,7 @@ def remote_evaluate(
import pickle
import time

import evalai_environment_habitat
import evalai_environment_habitat # noqa: F401
import evaluation_pb2
import evaluation_pb2_grpc
import grpc
2 changes: 1 addition & 1 deletion habitat/core/embodied_task.py
@@ -320,7 +320,7 @@ def step(self, action: Union[int, Dict[str, Any]], episode: Type[Episode]):

def get_action_name(self, action_index: int):
if action_index >= len(self.actions):
raise ValueError(f"Action index '{action}' is out of range.")
raise ValueError(f"Action index '{action_index}' is out of range.")
return self._action_keys[action_index]

@property
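The rename above is a genuine bug fix that the new linting catches: the f-string referenced "action", a name that does not exist in get_action_name, so the out-of-range branch would have raised NameError instead of the intended ValueError. Pyflakes reports this statically as F821 (undefined name). A small, self-contained reproduction of the bug class (hypothetical function, not the Habitat code):

    def get_action_name(actions, action_index):
        if action_index >= len(actions):
            # Before the fix, the message interpolated an undefined name ("action"),
            # so this branch raised NameError at runtime; flake8 flags it as F821.
            raise ValueError(f"Action index '{action_index}' is out of range.")
        return actions[action_index]


    print(get_action_name(["stop", "move_forward", "turn_left"], 1))  # move_forward
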
2 changes: 1 addition & 1 deletion habitat/core/utils.py
@@ -8,7 +8,7 @@
from typing import List

import numpy as np
import quaternion
import quaternion # noqa: F401

from habitat.utils.geometry_utils import quaternion_to_list

2 changes: 1 addition & 1 deletion habitat/datasets/__init__.py
@@ -4,4 +4,4 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

from habitat.datasets.registration import make_dataset
from habitat.datasets.registration import make_dataset # noqa: F401 .
11 changes: 3 additions & 8 deletions habitat/datasets/eqa/__init__.py
@@ -10,17 +10,12 @@

def _try_register_mp3d_eqa_dataset():
try:
from habitat.datasets.eqa.mp3d_eqa_dataset import Matterport3dDatasetV1

has_mp3deqa = True
from habitat.datasets.eqa.mp3d_eqa_dataset import ( # noqa: F401 isort:skip
Matterport3dDatasetV1,
)
except ImportError as e:
has_mp3deqa = False
mp3deqa_import_error = e

if has_mp3deqa:
from habitat.datasets.eqa.mp3d_eqa_dataset import Matterport3dDatasetV1
else:

@registry.register_dataset(name="MP3DEQA-v1")
class Matterport3dDatasetImportError(Dataset):
def __init__(self, *args, **kwargs):
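For context on the simplification above: the dataset class is imported inside try so that the import (which registers the dataset via decorators in the imported module) only happens when the optional dependency is installed; if the import fails, a stub is registered that re-raises the original ImportError when the dataset is actually requested. The "# noqa: F401" is needed because the import exists only for that side effect. Below is a self-contained sketch of the same pattern with stand-in names (not Habitat's registry):

    _DATASETS = {}  # toy registry; Habitat's real registry is more elaborate


    def register_dataset(name):
        def wrapper(cls):
            _DATASETS[name] = cls
            return cls

        return wrapper


    def _try_register_optional_dataset():
        try:
            # Importing the real implementation registers it as a side effect,
            # hence the unused-import suppression on this line.
            from optional_pkg.dataset import OptionalDatasetV1  # noqa: F401
        except ImportError as e:
            import_error = e

            @register_dataset(name="OptionalDataset-v1")
            class OptionalDatasetImportError:
                def __init__(self, *args, **kwargs):
                    # Fail only when someone actually asks for the dataset,
                    # surfacing the original import error.
                    raise import_error


    _try_register_optional_dataset()
    print("OptionalDataset-v1" in _DATASETS)  # True: the stub was registered
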
2 changes: 1 addition & 1 deletion habitat/datasets/eqa/mp3d_eqa_dataset.py
@@ -13,7 +13,7 @@
from habitat.core.dataset import Dataset
from habitat.core.registry import registry
from habitat.core.simulator import AgentState
from habitat.datasets.utils import VocabDict, VocabFromText
from habitat.datasets.utils import VocabDict
from habitat.tasks.eqa.eqa import EQAEpisode, QuestionData
from habitat.tasks.nav.nav import ShortestPathPoint
from habitat.tasks.nav.object_nav_task import ObjectGoal