
Official Implementation of "Tree of Thoughts: Deliberate Problem Solving with Large Language Models"


Official Repo of Tree of Thoughts (ToT)


Note: https://github.com/kyegomez/tree-of-thoughts CANNOT replicate paper results.

In fact, people have reported that his code does not run properly and appears to be automatically generated by ChatGPT, and that kyegomez has done the same for other popular ML methods while intentionally refusing to link to the official implementations, for his own interests (see https://github.com/kyegomez/tree-of-thoughts/issues/54, https://github.com/kyegomez/tree-of-thoughts/issues/55, https://github.com/kyegomez/tree-of-thoughts/issues/56). Unfortunately, Google/GitHub searches surface kyegomez's malicious repo by default because it has more stars. Please DE-STAR his repo and STAR this one to help other people avoid being misled, thanks!


Official implementation for the paper Tree of Thoughts: Deliberate Problem Solving with Large Language Models, with code, prompts, and model outputs. Also check out its tweet thread for a one-minute summary.

Setup

  • Set up OpenAI API key and store in environment variable OPENAI_API_KEY (see here).

  • Install dependencies and tot package (PyPI package coming soon):

git clone https://github.com/princeton-nlp/tree-of-thought-llm
cd tree-of-thought-llm
pip install -r requirements.txt
pip install -e .  # install `tot` package
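After installation, a quick sanity check can confirm that the tot package is importable and the API key is visible to Python. This snippet is illustrative and not part of the repo:

```python
import importlib.util
import os

def ready():
    """Return (package installed?, API key set?) before running experiments."""
    has_tot = importlib.util.find_spec("tot") is not None
    has_key = bool(os.environ.get("OPENAI_API_KEY"))
    return has_tot, has_key

print(ready())
```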

Quick Start

The following minimal script attempts to solve Game of 24 with the puzzle 4 5 6 10 (it may be a bit slow, as it uses GPT-4):

import argparse
from tot.methods.bfs import solve
from tot.tasks.game24 import Game24Task

args = argparse.Namespace(
    backend='gpt-4',
    temperature=0.7,
    task='game24',
    naive_run=False,
    prompt_sample=None,
    method_generate='propose',  # propose sequential thoughts
    method_evaluate='value',    # value each state independently
    method_select='greedy',
    n_generate_sample=1,
    n_evaluate_sample=3,
    n_select_sample=5,
)

task = Game24Task()
ys, infos = solve(args, task, 900)  # 900 is the index of the puzzle 4 5 6 10
print(ys[0])

The output will look something like the following (note it is not deterministic, and the output can sometimes be wrong):

10 - 4 = 6 (left: 5 6 6)
5 * 6 = 30 (left: 6 30)
30 - 6 = 24 (left: 24)
Answer: (5 * (10 - 4)) - 6 = 24
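The trajectory above can be checked mechanically; a minimal snippet verifying each intermediate step and the final expression:

```python
# Verify the sample Game of 24 trajectory step by step.
steps = [
    (10 - 4, 6),   # 10 - 4 = 6 (left: 5 6 6)
    (5 * 6, 30),   # 5 * 6 = 30 (left: 6 30)
    (30 - 6, 24),  # 30 - 6 = 24 (left: 24)
]
for got, expected in steps:
    assert got == expected

answer = (5 * (10 - 4)) - 6
assert answer == 24  # uses each of 4, 5, 6, 10 exactly once
```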

Paper Experiments

Run experiments via sh scripts/{game24, text, crosswords}/{standard_sampling, cot_sampling, bfs}.sh. The exception is crosswords, where ToT uses a DFS algorithm instead; it can be run via scripts/crosswords/search_crosswords-dfs.ipynb.

The simple run.py implements the ToT + BFS algorithm, as well as naive IO/CoT sampling. Some key arguments:

  • --naive_run: if True, run naive IO/CoT sampling instead of ToT + BFS.
  • --prompt_sample (choices=[standard, cot]): sampling prompt.
  • --method_generate (choices=[sample, propose]): thought generator; whether to sample independent thoughts (used in Creative Writing) or propose sequential thoughts (used in Game of 24).
  • --method_evaluate (choices=[value, vote]): state evaluator; whether to value states independently (used in Game of 24) or vote on states together (used in Creative Writing).
  • --n_generate_sample: number of times to prompt for thought generation.
  • --n_evaluate_sample: number of times to prompt for state evaluation.
  • --n_select_sample: number of states to keep at each step (i.e., b in the paper's ToT + BFS algorithm).
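Conceptually, these arguments parameterize a beam search over partial solutions. The real loop lives in tot/methods/bfs.py and prompts an LLM; the toy, self-contained sketch below uses stand-in generate/evaluate functions purely to illustrate how n_generate_sample and n_select_sample interact:

```python
def bfs_solve(init_state, steps, n_generate_sample, n_select_sample,
              generate, evaluate):
    """Keep the b = n_select_sample best states at each step (greedy select)."""
    frontier = [init_state]
    for _ in range(steps):
        # Thought generation: each state proposes n_generate_sample children.
        candidates = [c for s in frontier for c in generate(s, n_generate_sample)]
        # State evaluation + greedy selection of the top b candidates.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:n_select_sample]
    return frontier

# Toy stand-ins: states are digit strings; "value" is the number they form.
gen = lambda s, n: [s + str(d) for d in range(n)]
val = lambda s: int(s or 0)

best = bfs_solve("", steps=3, n_generate_sample=3, n_select_sample=2,
                 generate=gen, evaluate=val)
print(best)  # → ['222', '221']
```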

Paper Trajectories

logs/ contains all the trajectories from the paper's experiments, except for logs/game24/gpt-4_0.7_propose1_value3_greedy5_start900_end1000.json, which was reproduced after the paper (the original experiment was run in a notebook) and scored 69% instead of the original 74% due to randomness in GPT decoding. We hope to aggregate multiple runs in the future to account for sampling randomness and update the paper, but this should not affect the paper's main conclusions.

How to Add A New Task

Setting up a new task is easy and mainly involves two steps.

  • Set up a new task class in tot/tasks/ and task files in tot/data/. See tot/tasks/game24.py for an example. Add the task to tot/tasks/__init__.py.
  • Set up task-specific prompts in tot/prompts/. See tot/prompts/game24.py for an example. Depending on the nature of the task, choose --method_generate (choices=[sample, propose]) and --method_evaluate (choices=[value, vote]) and their corresponding prompts.
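To make the two steps concrete, here is a standalone sketch of what a new task class might look like. The task and its method names are hypothetical and only mirror the style of tot/tasks/game24.py; consult that file for the actual interface the solver expects.

```python
class SortWordsTask:
    """Hypothetical example task: alphabetize the words of the input.

    Illustrative only; see tot/tasks/game24.py for the real interface,
    and remember to register the class in tot/tasks/__init__.py.
    """
    def __init__(self):
        # In the real repo, task inputs would live under tot/data/.
        self.data = ["cherry banana apple", "delta bravo alpha charlie"]
        self.steps = 1  # single-step task: one thought produces the answer

    def __len__(self):
        return len(self.data)

    def get_input(self, idx):
        return self.data[idx]

    def test_output(self, idx, output):
        # Score 1 if the model's output is the correctly sorted word list.
        target = " ".join(sorted(self.data[idx].split()))
        return {"r": int(output.strip() == target)}

task = SortWordsTask()
```

A simple one-shot task like this would pair naturally with --method_generate sample and --method_evaluate vote.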

Citations

Please cite the paper and star this repo if you use ToT and find it interesting/useful, thanks! Feel free to contact shunyuyao.cs@gmail.com or open an issue if you have any questions.

@misc{yao2023tree,
      title={{Tree of Thoughts}: Deliberate Problem Solving with Large Language Models}, 
      author={Shunyu Yao and Dian Yu and Jeffrey Zhao and Izhak Shafran and Thomas L. Griffiths and Yuan Cao and Karthik Narasimhan},
      year={2023},
      eprint={2305.10601},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}


