
ICML2024-FSTTA

Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation

Introduction


Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation

Junyu Gao, Xuan Yao, Changsheng Xu

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences.

Paper link (ICML 2024)

Usage

Prerequisites

  1. Install the Matterport3D simulator: follow the instructions here. We use the latest version, the same as DUET.
export PYTHONPATH=Matterport3DSimulator/build:$PYTHONPATH
  2. Install requirements:
conda create --name fsvln python=3.8.5
conda activate fsvln
  • Required packages are listed in requirements.txt. You can install them by running:
pip install -r requirements.txt
  3. Download the data from Dropbox, including processed annotations, features, and pretrained models for the REVERIE and R2R datasets. Before running the code, put the data in the `datasets` directory.

  4. Download the pretrained LXMERT model by running:

mkdir -p datasets/pretrained 
wget https://nlp.cs.unc.edu/data/model_LXRT.pth -P datasets/pretrained

Pretraining (Base Model)

Combine behavior cloning and auxiliary proxy tasks in pretraining:

cd pretrain_src
bash run_reverie.sh 
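For intuition only, combining behavior cloning with auxiliary proxy tasks amounts to optimizing a weighted sum of per-task losses. A minimal sketch in plain Python; the task names and weights below are illustrative placeholders, not the repo's actual configuration (see pretrain_src for the real objectives):

```python
# Hypothetical sketch: a pretraining objective as a weighted sum of the
# behavior-cloning loss and auxiliary proxy-task losses.

def total_pretrain_loss(losses, weights):
    """Combine per-task loss values into one scalar training objective."""
    return sum(weights[task] * value for task, value in losses.items())

# Illustrative values: behavior cloning (bc) plus proxy tasks such as
# masked language modeling (mlm) and object grounding (og).
losses = {"bc": 2.1, "mlm": 1.4, "og": 0.8}    # per-task loss values
weights = {"bc": 1.0, "mlm": 0.5, "og": 0.5}   # mixing weights

print(round(total_pretrain_loss(losses, weights), 2))  # 2.1 + 0.7 + 0.4 = 3.2
```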

Fine-tuning (Base Model)

Use the pseudo interactive demonstrator to fine-tune the model:

cd map_nav_src
bash scripts/run_reverie.sh 
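For intuition, a pseudo interactive demonstrator (as in DUET) supervises each step with an action that moves toward the goal from the agent's current node, rather than replaying a fixed expert trajectory. A toy sketch of selecting such a teacher action on a navigation graph via a BFS first hop; all names here are illustrative, not the repo's API:

```python
from collections import deque

def teacher_action(graph, current, goal):
    """Return the neighbor of `current` on a shortest path to `goal` (BFS),
    or None for the stop action when the goal is already reached."""
    if current == goal:
        return None
    prev = {current: None}
    queue = deque([current])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                if nxt == goal:
                    # Backtrack to the first hop taken from `current`.
                    while prev[nxt] != current:
                        nxt = prev[nxt]
                    return nxt
                queue.append(nxt)
    return None  # goal unreachable

# Toy viewpoint graph: shortest path a -> d goes through b.
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(teacher_action(graph, "a", "d"))  # "b": first hop on a -> b -> d
```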

Test-time Adaptation & Evaluation

Use the pseudo interactive demonstrator to equip the model with our FSTTA:

cd map_nav_src
bash scripts/run_reverie_tta.sh
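Conceptually, fast-slow test-time adaptation makes small "fast" parameter updates from per-step gradients within an episode, plus a more conservative "slow" update along a consolidated direction across steps. A minimal toy sketch on a scalar parameter; the learning rates and the mean-gradient aggregation below are illustrative, not the paper's exact decomposition-based procedure:

```python
# Toy fast-slow adaptation on a single scalar parameter.
# Fast phase: small updates from each step's gradient within an episode.
# Slow phase: one conservative update along the episode's mean gradient.

def fast_slow_adapt(theta, episode_grads, fast_lr=0.01, slow_lr=0.1):
    theta_fast = theta
    for g in episode_grads:            # fast updates, step by step
        theta_fast -= fast_lr * g
    mean_g = sum(episode_grads) / len(episode_grads)
    theta_slow = theta - slow_lr * mean_g  # consolidated slow update
    return theta_slow, theta_fast

theta_slow, theta_fast = fast_slow_adapt(1.0, [0.2, 0.4, 0.6])
print(round(theta_slow, 3), round(theta_fast, 3))
```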

Acknowledgements

Our implementation is partially based on VLN-DUET, HM3DAutoVLN, and VLN-BEVBert. Thanks to the authors for sharing their code.

Related Work

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{Gao2024Fast,
  title={Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation},
  author={Junyu Gao and Xuan Yao and Changsheng Xu},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  year={2024}
}

If you have any questions, comments, or suggestions, please feel free to contact us at junyu.gao@nlpr.ia.ac.cn.
