> **Note:** This repository has been archived by the owner on Aug 2, 2024. It is now read-only.

# Awesome Vision-Language Navigation

A curated list of research papers in Vision-Language Navigation (VLN). Links to code and project websites are included where available. You can also find more embodied vision papers in awesome-embodied-vision.

## Contributing

Please feel free to contact me via email (liudq@mail.ustc.edu.cn), open an issue, or submit a pull request.

To add a new paper via pull request:

  1. Fork the repo and edit `README.md`.

  2. Insert the new paper at the correct chronological position, in the following format:

        - **Paper Title** <br>
           *Author(s)* <br>
           Conference, Year. [[Paper]](link) [[Code]](link) [[Website]](link)

  3. Send a pull request. Ideally, I will review it within a week.
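The steps above map onto a standard fork-and-PR workflow. The sketch below demonstrates it on a local scratch repository so the commands run anywhere; for a real contribution, you would `git clone` your own fork of `daqingliu/awesome-vln` instead (the branch name and commit messages here are just examples):

```shell
# Stand-in for "fork and clone": a local scratch repo (replace with a clone
# of your fork of daqingliu/awesome-vln for a real contribution).
tmp=$(mktemp -d)
cd "$tmp"
git init -q awesome-vln && cd awesome-vln
git config user.email "you@example.com" && git config user.name "You"
printf '# Awesome Vision-Language Navigation\n' > README.md
git add README.md && git commit -qm "Initial commit"

# Steps 1-2: create a branch, then add the new entry (in chronological
# position) using the entry format from the Contributing section.
git checkout -qb add-new-paper
cat >> README.md <<'EOF'
- **Paper Title** <br>
   *Author(s)* <br>
   Conference, Year. [[Paper]](link) [[Code]](link) [[Website]](link)
EOF
git add README.md && git commit -qm "Add Paper Title (Conference, Year)"

# Step 3: push the branch to your fork and open a pull request on GitHub,
# e.g. `git push origin add-new-paper` (omitted here: no remote configured).
git log --oneline
```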

## Papers

### Tasks

- **Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments** <br>
  *Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel* <br>
  CVPR, 2018. [Paper] [Code] [Website]

- **HoME: a Household Multimodal Environment** <br>
  *Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville* <br>
  NIPS Workshop, 2017. [Paper] [Code]

- **Talk the Walk: Navigating New York City through Grounded Dialogue** <br>
  *Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, Douwe Kiela* <br>
  arXiv, 2019. [Paper] [Code]

- **Touchdown: Natural Language Navigation and Spatial Reasoning in Visual Street Environments** <br>
  *Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi* <br>
  CVPR, 2019. [Paper] [Code] [Website]

- **Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention** <br>
  *Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan* <br>
  CVPR, 2019. [Paper] [Code] [Video]

- **Learning To Follow Directions in Street View** <br>
  *Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Raia Hadsell* <br>
  AAAI, 2020. [Paper] [Website]

- **REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments** <br>
  *Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, Anton van den Hengel* <br>
  CVPR, 2020. [Paper]

- **Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation** <br>
  *Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, Jason Baldridge* <br>
  ACL, 2019. [Paper] [Code]

- **Vision-and-Dialog Navigation** <br>
  *Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer* <br>
  CoRL, 2019. [Paper] [Website]

- **Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning** <br>
  *Khanh Nguyen, Hal Daumé III* <br>
  EMNLP, 2019. [Paper] [Code] [Video]

- **Talk2Nav: Long-Range Vision-and-Language Navigation with Dual Attention and Spatial Memory** <br>
  *Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool* <br>
  arXiv, 2019. [Paper] [Website]

- **Cross-Lingual Vision-Language Navigation** <br>
  *An Yan, Xin Wang, Jiangtao Feng, Lei Li, William Yang Wang* <br>
  arXiv, 2019. [Paper] [Code]

- **Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments** <br>
  *Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee* <br>
  arXiv, 2020. [Paper] [Code]

- **Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation** <br>
  *Muhammad Zubair Irshad, Chih-Yao Ma, Zsolt Kira* <br>
  ICRA, 2021. [Paper] [Code]

### Roadmap (Chronological Order)

- **Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments** <br>
  *Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel* <br>
  CVPR, 2018. [Paper] [Code] [Website]

- **Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation** <br>
  *Xin Wang, Wenhan Xiong, Hongmin Wang, William Yang Wang* <br>
  ECCV, 2018. [Paper]

- **Speaker-Follower Models for Vision-and-Language Navigation** <br>
  *Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell* <br>
  NeurIPS, 2018. [Paper] [Code] [Website]

- **Shifting the Baseline: Single Modality Performance on Visual Navigation & QA** <br>
  *Jesse Thomason, Daniel Gordon, Yonatan Bisk* <br>
  NAACL, 2019. [Paper] [Poster]

- **Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation** <br>
  *Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang* <br>
  CVPR, 2019. [Paper]

- **Self-Monitoring Navigation Agent via Auxiliary Progress Estimation** <br>
  *Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, Caiming Xiong* <br>
  ICLR, 2019. [Paper] [Code] [Website]

- **The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation** <br>
  *Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, Zsolt Kira* <br>
  CVPR, 2019. [Paper] [Code] [Website]

- **Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation** <br>
  *Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa* <br>
  CVPR, 2019. [Paper] [Code] [Video]

- **Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout** <br>
  *Hao Tan, Licheng Yu, Mohit Bansal* <br>
  NAACL, 2019. [Paper] [Code]

- **Multi-modal Discriminative Model for Vision-and-Language Navigation** <br>
  *Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie* <br>
  NAACL Workshop, 2019. [Paper]

- **Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation** <br>
  *Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko* <br>
  ACL, 2019. [Paper]

- **Chasing Ghosts: Instruction Following as Bayesian State Tracking** <br>
  *Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, Stefan Lee* <br>
  NeurIPS, 2019. [Paper] [Code] [Video]

- **Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters** <br>
  *Federico Landi, Lorenzo Baraldi, Massimiliano Corsini, Rita Cucchiara* <br>
  BMVC, 2019. [Paper] [Code]

- **Transferable Representation Learning in Vision-and-Language Navigation** <br>
  *Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie* <br>
  ICCV, 2019. [Paper]

- **Robust Navigation with Language Pretraining and Stochastic Sampling** <br>
  *Xiujun Li, Chunyuan Li, Qiaolin Xia, Yonatan Bisk, Asli Celikyilmaz, Jianfeng Gao, Noah Smith, Yejin Choi* <br>
  EMNLP, 2019. [Paper] [Code]

- **Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling** <br>
  *Tsu-Jui Fu, Xin Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang* <br>
  arXiv, 2019. [Paper]

- **Unsupervised Reinforcement Learning of Transferable Meta-Skills for Embodied Navigation** <br>
  *Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang* <br>
  CVPR, 2020. [Paper]

- **Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks** <br>
  *Fengda Zhu, Yi Zhu, Xiaojun Chang, Xiaodan Liang* <br>
  CVPR, 2020. [Paper]

- **Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation** <br>
  *Federico Landi, Lorenzo Baraldi, Marcella Cornia, Massimiliano Corsini, Rita Cucchiara* <br>
  arXiv, 2019. [Paper] [Code]

- **Just Ask: An Interactive Learning Framework for Vision and Language Navigation** <br>
  *Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-Tur* <br>
  AAAI, 2020. [Paper]

- **Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training** <br>
  *Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao* <br>
  CVPR, 2020. [Paper] [Code]

- **Multi-View Learning for Vision-and-Language Navigation** <br>
  *Qiaolin Xia, Xiujun Li, Chunyuan Li, Yonatan Bisk, Zhifang Sui, Jianfeng Gao, Yejin Choi, Noah A. Smith* <br>
  arXiv, 2020. [Paper]

- **Vision-Dialog Navigation by Exploring Cross-modal Memory** <br>
  *Yi Zhu, Fengda Zhu, Zhaohuan Zhan, Bingqian Lin, Jianbin Jiao, Xiaojun Chang, Xiaodan Liang* <br>
  CVPR, 2020. [Paper] [Code]

- **Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation** <br>
  *Felix Yu, Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky* <br>
  arXiv, 2020. [Paper]

- **Sub-Instruction Aware Vision-and-Language Navigation** <br>
  *Yicong Hong, Cristian Rodriguez-Opazo, Qi Wu, Stephen Gould* <br>
  arXiv, 2020. [Paper]

- **Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments** <br>
  *Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee* <br>
  ECCV, 2020. [Paper] [Code] [Website]

- **Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling** <br>
  *Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang* <br>
  ECCV, 2020. [Paper]

- **Improving Vision-and-Language Navigation with Image-Text Pairs from the Web** <br>
  *Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, Dhruv Batra* <br>
  ECCV, 2020. [Paper]

- **Soft Expert Reward Learning for Vision-and-Language Navigation** <br>
  *Hu Wang, Qi Wu, Chunhua Shen* <br>
  ECCV, 2020. [Paper]

- **Active Visual Information Gathering for Vision-Language Navigation** <br>
  *Hanqing Wang, Wenguan Wang, Tianmin Shu, Wei Liang, Jianbing Shen* <br>
  ECCV, 2020. [Paper] [Code]

- **Environment-agnostic Multitask Learning for Natural Language Grounded Navigation** <br>
  *Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi* <br>
  ECCV, 2020. [Paper]

- **Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation** <br>
  *Muhammad Zubair Irshad, Chih-Yao Ma, Zsolt Kira* <br>
  ICRA, 2021. [Paper] [Code] [Website] [Video]