
[ICLR 2024] Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models

This repo holds the code, data, and instructions to reproduce the Thought Propagation (TP) results.

Introduction

[Figure: overview of the Thought Propagation (TP) framework]

Analogical reasoning is fundamental to human cognition: humans usually solve new problems by reusing the experience gained from handling similar problems. Motivated by this reasoning process, Thought Propagation (TP) teaches Large Language Models (LLMs) to explore analogous problems related to the input problem and to distill useful experience from them to facilitate solving the input problem.

What is a thought in TP?

In most prompting methods, a thought refers to the solution to a sub-problem of the input problem, and the solution to the input problem is produced by chaining such thoughts together. In TP, however, a thought refers to the solution to a complete analogous problem rather than a sub-problem.

Why propagate thoughts?

Many prompt-based reasoning methods, such as IO prompting and Chain-of-Thought (CoT) prompting, teach LLMs to reason from scratch. As a result, they cannot reuse the insights gained from solving similar problems to:

  1. ease the difficulty of solving complex problems by drawing on prior knowledge of such insights, or
  2. refine the initial solutions to input problems, since reasoning from scratch is sensitive to hallucinations and mistakes made by LLMs.

TP therefore propagates the thoughts from solving similar problems (a.k.a. analogous problems) to remedy these limitations of reasoning from scratch.

The TP framework is inspired by the message-passing mechanism in deep graph learning. The TP code builds on the ToT and Reflexion codebases.
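As a rough illustration of this message-passing analogy, here is a minimal Python sketch of one TP round. The llm callable and the prompt wording are placeholders for illustration only, not the prompts or code shipped in this repository.

# Hypothetical sketch of one Thought Propagation round.
# `llm` is a placeholder for any text-completion callable (e.g., an API wrapper);
# the prompt strings are illustrative, not the prompts used in this repository.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM call here")

def thought_propagation(problem: str, num_analogies: int = 3) -> str:
    # 1. Propose analogous problems related to the input problem.
    analogous = [
        llm(f"Propose a problem analogous to the following one:\n{problem}")
        for _ in range(num_analogies)
    ]
    # 2. Solve each analogous problem; these solutions are the "thoughts" in TP.
    thoughts = [llm(f"Solve this problem step by step:\n{p}") for p in analogous]
    # 3. Aggregate: propagate the thoughts back to produce or refine the solution
    #    to the input problem, analogous to message aggregation in graph learning.
    context = "\n\n".join(
        f"Analogous problem: {p}\nSolution: {s}" for p, s in zip(analogous, thoughts)
    )
    return llm(
        f"Using the experience from these solved analogous problems:\n{context}\n\n"
        f"Solve the input problem:\n{problem}"
    )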

Experiments

Graph Algorithm Reasoning

Data Preparation

First, generate the undirected graphs for this task by running the following commands:

cd graph-algorithm-reasoning/data
python data_generator.py

Change the parameters in data_generator.py to customize the generated graphs. You can also use our pre-generated graphs located in graph-algorithm-reasoning/data/shortest_path/easy/.
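If you prefer to sanity-check the setup outside of data_generator.py, a minimal sketch of producing random undirected shortest-path instances could look like the following. The parameter names and the output layout here are assumptions for illustration, not the format produced by data_generator.py.

# Hypothetical sketch of generating random undirected shortest-path instances.
# Parameter names and the JSON layout are illustrative assumptions; see
# data_generator.py for the format actually used in this repository.
import json
import random
import networkx as nx

def generate_instance(num_nodes=10, edge_prob=0.3, seed=0):
    rng = random.Random(seed)
    g = nx.gnp_random_graph(num_nodes, edge_prob, seed=seed)
    # Pick the source/target pair inside the largest connected component so a
    # shortest path always exists.
    source, target = rng.sample(sorted(max(nx.connected_components(g), key=len)), 2)
    return {
        "edges": sorted(map(list, g.edges())),
        "source": source,
        "target": target,
        "shortest_path_length": nx.shortest_path_length(g, source, target),
    }

if __name__ == "__main__":
    print(json.dumps(generate_instance(), indent=2))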

Prompt

The way graphs are converted into strings affects the performance of prompting methods. We use three formats: vanilla, graph-modeling-language, and edge-description. The prompts are located in graph-algorithm-reasoning/prompts/.
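For intuition only, here is a hedged sketch of how one small graph might be rendered under each of the three formats; the exact wording in graph-algorithm-reasoning/prompts/ may differ.

# Illustrative (not repository-exact) serializations of one undirected graph
# under the three formats; consult graph-algorithm-reasoning/prompts/ for the
# actual wording used by the prompts.
edges = [(0, 1), (1, 2), (2, 3)]

# Vanilla: describe the graph in plain natural language.
vanilla = "The graph has nodes 0-3 and edges " + ", ".join(f"({u}, {v})" for u, v in edges) + "."

# Graph-modeling-language (GML-like): a structured edge listing.
gml = "graph [\n" + "".join(f"  edge [ source {u} target {v} ]\n" for u, v in edges) + "]"

# Edge-description: one sentence per edge.
edge_description = "\n".join(f"Node {u} is connected to node {v}." for u, v in edges)

print(vanilla, gml, edge_description, sep="\n\n")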

Run

To run the IO, CoT, and Build-a-Graph prompting methods on this task, run:

cd graph-algorithm-reasoning
bash run_bash.sh

To run the ToT prompting method on this task, run:

cd graph-algorithm-reasoning
bash run_tot.sh

To run the TP prompting method on this task, run:

cd graph-algorithm-reasoning
bash run_tp.sh

After running the experiments, the results are saved in graph-algorithm-reasoning/logs/. Use the two evaluation .py scripts to obtain the evaluation results.
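If you want a quick sanity check on a single shortest-path prediction outside of those scripts, a minimal sketch could look like the following; the argument names are hypothetical and do not reflect the repository's log format.

# Hypothetical sanity check for one shortest-path prediction; the argument
# names are illustrative and do not match this repository's log schema.
import networkx as nx

def is_optimal_path(edges, source, target, predicted_path):
    g = nx.Graph(edges)
    # The prediction must start at the source, end at the target, and use only
    # edges that exist in the graph.
    if not predicted_path or predicted_path[0] != source or predicted_path[-1] != target:
        return False
    if any(not g.has_edge(u, v) for u, v in zip(predicted_path, predicted_path[1:])):
        return False
    # It must also match the true shortest-path length.
    return len(predicted_path) - 1 == nx.shortest_path_length(g, source, target)

print(is_optimal_path([(0, 1), (1, 2), (0, 2)], 0, 2, [0, 2]))  # True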

Creative Writing

Follow the instructions below to run experiments:

cd creative-writing
bash bfs.sh                # ToT
bash tp.sh                 # TP
bash cot_sampling.sh       # CoT
bash standard_sampling.sh  # IO

LLM-Agent Planning

First, follow the ALFWorld repo to install ALFWorld. Then move the planning directory into the ALFWorld directory. Note: I failed to run this experiment on a MacBook with an M2 Pro chip, but I managed to run it on a MacBook with an Intel Core i5; this is most likely due to the dependencies of the ALFWorld environment.

Run Reflexion:

python reflexion_alfworld.py

Run TP:

python tp_alfworld.py --use_memory --use_simulation

Activating or deactivating memory and simulation results in four variant models of TP; the corresponding flag combinations are listed below.
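For reference, the four variants correspond to the following flag combinations (assuming each flag defaults to off when omitted):

python tp_alfworld.py                                 # no memory, no simulation
python tp_alfworld.py --use_memory                    # memory only
python tp_alfworld.py --use_simulation                # simulation only
python tp_alfworld.py --use_memory --use_simulation   # memory + simulation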

Cite

Please cite this work if you find it helpful:

@inproceedings{yu2023thought,
  title={Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models},
  author={Yu, Junchi and He, Ran and Ying, Zhitao},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
