# LLM MCTS Inference
An experimental project using Monte Carlo Tree Search (MCTS) to refine Large Language Model (LLM) responses for better accuracy and decision-making.
## Overview
This project leverages MCTS to explore multiple answer candidates generated by a language model. By generating an initial answer, evaluating it, and iteratively refining it based on targeted feedback, the system improves response quality and decision-making. The approach trades additional test-time compute for more precise and robust model outputs.
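To make the search concrete, here is a minimal, self-contained sketch of a generate-evaluate-refine loop driven by UCT selection. The function names (`critique`, `refine`, `score`) are illustrative placeholders standing in for LLM calls, not this project's actual API:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    answer: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

def uct(node: Node, c: float = 1.41) -> float:
    # Standard UCT: prefer high-scoring answers, but keep
    # exploring rarely visited ones.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

# Placeholder "LLM" calls; a real system would query a model here.
def critique(answer: str) -> str:
    return f"Feedback on: {answer}"

def refine(answer: str, feedback: str) -> str:
    return answer + " (refined)"

def score(answer: str) -> float:
    return min(1.0, len(answer) / 100)

def search(prompt: str, iterations: int = 8) -> str:
    root = Node(answer=f"Initial answer to: {prompt}", visits=1)
    for _ in range(iterations):
        node = root
        while node.children:                  # selection
            node = max(node.children, key=uct)
        feedback = critique(node.answer)      # evaluation
        child = Node(refine(node.answer, feedback), parent=node)  # expansion
        node.children.append(child)
        reward = score(child.answer)
        while child is not None:              # backpropagation
            child.visits += 1
            child.value += reward
            child = child.parent
    return max(root.children, key=lambda n: n.visits).answer
```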
## Features
- Initial Answer Generation: Uses greedy decoding to generate an initial response.
- Feedback Generation: Provides constructive, concise feedback on generated answers.
- Iterative Improvement: Refines responses based on feedback through additional model queries.
- Monte Carlo Tree Search: Employs MCTS to explore and evaluate multiple answer paths.
- Structured Response Handling: Validates responses against JSON schemas using tools like pydantic (see the sketch after this list).
- Modular Codebase: Organized into modules for inference, prompts, MCTS logic, configuration, and utilities.
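As an illustration of the structured response handling mentioned above, the following sketch validates a model's JSON output against a pydantic schema. The `ScoredAnswer` fields are hypothetical, not this project's actual schema:

```python
from pydantic import BaseModel, ValidationError

class ScoredAnswer(BaseModel):
    answer: str
    score: float

raw = '{"answer": "Paris", "score": 0.92}'  # e.g. JSON returned by the model

try:
    parsed = ScoredAnswer.model_validate_json(raw)
except ValidationError as err:
    # Malformed model output can be rejected or retried here.
    print(err)
```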
## Installation

### Dependencies
- Python: Version 3.11 or higher
The project mainly depends on the following packages (see the sketch after this list):
- instructor for guided generation
- litellm for a unified API to interact with multiple LLM providers
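For illustration, this is how instructor and litellm are commonly combined: instructor wraps litellm's `completion` function so responses are parsed into validated pydantic models. The model name and `Capital` schema below are assumptions for the example, not taken from this project:

```python
import instructor
import litellm
from pydantic import BaseModel

class Capital(BaseModel):
    city: str
    country: str

# instructor patches litellm's completion call, adding the
# response_model keyword for schema-guided generation.
client = instructor.from_litellm(litellm.completion)

capital = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=Capital,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(capital.city, capital.country)
```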
### Setup Instructions
1. Clone the Repository:

   ```bash
   git clone https://github.com/brotSchimmelt/llm-mcts-inference.git
   cd llm-mcts-inference
   ```

2. Install the Project Dependencies:

   If you use uv, run the following commands to create a virtual environment and install all requirements:

   ```bash
   uv venv --python 3.11
   uv sync
   ```

   Otherwise, install the project and its dependencies with pip:

   ```bash
   pip install .
   ```
3. Configure Environment Variables: Rename the provided example.env file to .env and update it with your API keys or other configuration details as needed (see the example below).
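Since litellm reads provider credentials from standard environment variables, a .env file might look like the following. The keys shown are illustrative; set only the providers you actually use:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
```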
## Usage
Use the MonteCarloLLM class to generate and improve responses via MCTS:
```python
from llm_mcts_inference.MonteCarloLLM import MonteCarloLLM

# Initialize with a specific model; defaults are defined in settings
llm = MonteCarloLLM(model_name="gpt-3.5-turbo")

# Define your prompt
prompt = "What is the capital of France?"

# Generate a response using Monte Carlo Tree Search
result = llm.generate(prompt)

# Output the final improved answer
print("Final Answer:", result.answer)

# Optionally, display the sequence of nodes (answers) along the best path
print("Best Path:", [node.answer for node in result.valid_path])
```
## License
This project is licensed under the MIT license.
## File details

Details for the file llm_mcts_inference-0.1.3.tar.gz.

### File metadata
- Download URL: llm_mcts_inference-0.1.3.tar.gz
- Upload date:
- Size: 12.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.5.30
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4bbfd365684c38b52c9fb6cd5608265374ec9853cefd70ed06161dfdb4518029` |
| MD5 | `8221e153f6f76cd01e5b5ecaf66b5005` |
| BLAKE2b-256 | `a3f613f34045554c86084102be01b624348736a25229afd33a6babab555191d1` |
## File details

Details for the file llm_mcts_inference-0.1.3-py3-none-any.whl.

### File metadata
- Download URL: llm_mcts_inference-0.1.3-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.5.30
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `63695118a9165eadece05c228361b736f510e88bfee50ada000b54aad7a597ad` |
| MD5 | `5d182c44b228509dbd70c6573113b578` |
| BLAKE2b-256 | `29bbd7f5d3ac69342c63880512b5fb5827f16a5056ee2da1f7fdd2113f0a0d8e` |