Tree of Thoughts - Pytorch
Paper link | Author's implementation
Introduction
Tree of Thoughts (ToT) is a powerful and flexible algorithm that searches over intermediate reasoning steps and can substantially improve model problem solving; in the original paper (Yao et al., 2023), ToT raised GPT-4's success rate on the Game of 24 task from 4% with chain-of-thought prompting to 74%. This plug-and-play version lets you connect your own models and run ToT search over their outputs.
Install
pip install tree-of-thoughts
Usage
import os
from tree_of_thoughts import ToTAgent, MonteCarloSearch
from dotenv import load_dotenv
from swarms import Agent, OpenAIChat
load_dotenv()
# Get the API key from the environment
api_key = os.environ.get("OPENAI_API_KEY")
# Initialize an agent from swarms
agent = Agent(
    agent_name="tree_of_thoughts",
    agent_description="This agent uses the tree_of_thoughts library to generate thoughts.",
    system_prompt=None,
    llm=OpenAIChat(),
)
# Wrap the swarms agent in the ToTAgent class
model = ToTAgent(
    agent,
    strategy="cot",
    evaluation_strategy="value",
    enable_react=True,
    k=3,
)
# Initialize the MonteCarloSearch class with the model
tree_of_thoughts = MonteCarloSearch(model)
# Define the initial prompt
initial_prompt = """
Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:
"""
# Define the search parameters
num_thoughts = 1
max_steps = 3
max_states = 4
pruning_threshold = 0.5
# Run the tree-of-thoughts search
solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
    num_thoughts=num_thoughts,
    max_steps=max_steps,
    max_states=max_states,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)
print(f"Solution: {solution}")
ToT with HF LLM
To run Hugging Face Transformers with Tree of Thoughts:
from tree_of_thoughts import ToTAgent, MonteCarloSearch
from swarms import Agent, HuggingfaceLLM

# No API key is needed here: the Hugging Face model runs locally.
# Initialize an agent from swarms
agent = Agent(
    agent_name="tree_of_thoughts",
    agent_description=(
        "This agent uses the tree_of_thoughts library to generate thoughts."
    ),
    system_prompt=None,
    llm=HuggingfaceLLM(
        "EleutherAI/gpt-neo-2.7B",
    ),
)
# Wrap the swarms agent in the ToTAgent class
model = ToTAgent(
    agent,
    strategy="cot",
    evaluation_strategy="value",
    enable_react=True,
    k=3,
)
# Initialize the MonteCarloSearch class with the model
tree_of_thoughts = MonteCarloSearch(model)
# Define the initial prompt
initial_prompt = """
Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:
"""
# Define the search parameters
num_thoughts = 1
max_steps = 3
max_states = 4
pruning_threshold = 0.5
# Run the tree-of-thoughts search
solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
    num_thoughts=num_thoughts,
    max_steps=max_steps,
    max_states=max_states,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)
print(f"Solution: {solution}")
Basic Prompts
Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they're wrong at any point then they leave. The question is...
################
Simulate three brilliant, logical experts collaboratively answering a question. Each one verbosely explains their thought process in real-time, considering the prior explanations of others and openly acknowledging mistakes. At each step, whenever possible, each expert refines and builds upon the thoughts of others, acknowledging their contributions. They continue until there is a definitive answer to the question. For clarity, your entire response should be in a markdown table. The question is...
################
Imagine three highly intelligent experts working together to answer a question. They will follow a tree of thoughts approach, where each expert shares their thought process step by step. They will consider the input from others, refine their thoughts, and build upon the group's collective knowledge. If an expert realizes their thought is incorrect, they will acknowledge it and withdraw from the discussion. Continue this process until a definitive answer is reached. Present the entire response in a markdown table. The question is...
################
Three experts with exceptional logical thinking skills are collaboratively answering a question using a tree of thoughts method. Each expert will share their thought process in detail, taking into account the previous thoughts of others and admitting any errors. They will iteratively refine and expand upon each other's ideas, giving credit where it's due. The process continues until a conclusive answer is found. Organize the entire response in a markdown table format. The question is...
################
Envision a group of three experts working in unison to tackle a question by employing a tree of thoughts strategy. Each expert will thoroughly explain their line of thinking at every step, while also considering the insights provided by their peers. They will openly recognize any mistakes and build upon the group's shared understanding. This iterative process will continue until a definitive solution is reached. Structure the entire response as a markdown table. The question is...
################
"Three experts with exceptional logical thinking skills are collaboratively answering a question using the tree of thoughts method. Each expert will share their thought process in detail, taking into account the previous thoughts of others and admitting any errors. They will iteratively refine and expand upon each other's ideas, giving credit where it's due. The process continues until a conclusive answer is found. Organize the entire response in a markdown table format. The task is:
Acknowledgements
Thanks to Shunyu Yao (Princeton University), Dian Yu (Google DeepMind), Jeffrey Zhao (Google DeepMind), Izhak Shafran (Google DeepMind), Thomas L. Griffiths (Princeton University), Yuan Cao (Google DeepMind), and Karthik Narasimhan (Princeton University) for sharing this amazing work with the world!
And thanks to Phil Wang (lucidrains) for inspiring me to devote myself to open-source AI research.
License
Apache