
[BUG] Sokoban envs crash consistently #104

Open
leor-c opened this issue Jul 4, 2024 · 0 comments
Labels: bug (Something isn't working)

Comments


leor-c commented Jul 4, 2024

🐛 Bug

The Sokoban environments crash after a number of step calls. I've reproduced this behavior with MiniHack-Sokoban1a-v0, MiniHack-Sokoban1b-v0, and MiniHack-Sokoban2a-v0 (I didn't test the other variants, but the behavior looks consistent).

To Reproduce

Steps to reproduce the behavior:

  1. Reset the environment.
  2. Take a sequence of random actions; at some point the step call crashes (before the episode ends).

Code:

import gym
import minihack  # noqa: F401  (registers the MiniHack environments)
import numpy as np

env = gym.make("MiniHack-Sokoban1b-v0")
num_actions = env.action_space.n
env.reset()
done = False

# Take random actions until the episode ends; step() crashes before that happens.
while not done:
    _, _, done, _ = env.step(np.random.randint(num_actions))

Trace:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/root/miniconda3/lib/python3.10/site-packages/minihack/envs/sokoban.py", line 30, in step
    return super().step(action)
  File "/root/miniconda3/lib/python3.10/site-packages/minihack/base.py", line 396, in step
    return super().step(action)
  File "/root/miniconda3/lib/python3.10/site-packages/nle/env/base.py", line 373, in step
    end_status = self._is_episode_end(observation)
  File "/root/miniconda3/lib/python3.10/site-packages/minihack/envs/sokoban.py", line 41, in _is_episode_end
    agent_pos = list(self._object_positions(observation, "@"))[0]
IndexError: list index out of range

Expected behavior

The environment should not crash; step() should run normally until the episode terminates.

Environment

MiniHack version: 0.1.6
NLE version: 0.9.0
Gym version: 0.22.0
PyTorch version: 2.3.1
Is debug build: No
CUDA used to build PyTorch: 12.1

OS: Ubuntu 22.04.4 LTS
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
CMake version: version 3.22.1

Python version: 3.10
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.06
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] pytorch 2.3.1 py3.10_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.3.1 py310_cu121 pytorch
[conda] torchtriton 2.3.1 py310 pytorch
[conda] torchvision 0.18.1 py310_cu121 pytorch

Additional context
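
A possible interim workaround (just a sketch, not a proper fix) is to guard the indexing that fails in the trace above by monkeypatching _is_episode_end on the unwrapped env instance. The snippet below assumes the crash is triggered by the "@" glyph being absent from the observation and that returning StepStatus.RUNNING is a safe fallback; both are assumptions on my part.

import gym
import minihack  # noqa: F401  (registers the MiniHack environments)

env = gym.make("MiniHack-Sokoban1b-v0")
inner = env.unwrapped
original_is_episode_end = inner._is_episode_end

def safe_is_episode_end(observation):
    # If no "@" glyph is found in the observation (the condition that raises
    # the IndexError above), report the episode as still running instead of
    # indexing into an empty list. Workaround sketch only.
    if not list(inner._object_positions(observation, "@")):
        return inner.StepStatus.RUNNING
    return original_is_episode_end(observation)

inner._is_episode_end = safe_is_episode_end

This only masks the symptom; why the agent glyph goes missing in the Sokoban levels still needs investigation.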

leor-c added the bug label on Jul 4, 2024