Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation
Code and Data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation
PyTorch code for the ICRA 2021 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation
Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation
Repository for Vision-and-Language Navigation via Causal Learning (Accepted by CVPR 2024)
Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty
Code for ORAR Agent for Vision and Language Navigation on Touchdown and map2seq
Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation
Official implementation of the NAACL 2024 paper "Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning"
Official repository of "Mind the Error! Detection and Localization of Instruction Errors in Vision-and-Language Navigation". We present R2R-IE-CE, the first dataset to benchmark instruction errors in VLN, and propose a method, IEDL.