LLM MCTS Inference

An experimental project using Monte Carlo Tree Search (MCTS) to refine large language model (LLM) responses for better accuracy and decision-making.

Overview

This project leverages MCTS to explore multiple answer candidates generated by a language model. By iteratively generating an initial answer, evaluating it, and refining it based on targeted feedback, the system aims to improve response quality and decision-making. This approach trades additional test-time compute for more precise and robust model outputs.
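The generate–evaluate–refine loop described above can be sketched as a small Monte Carlo Tree Search over candidate answers. This is only a minimal illustration: `refine` and `evaluate` are stub stand-ins for the LLM and feedback calls, and all names here are hypothetical, not the project's actual implementation.

```python
import math
import random

random.seed(0)


class Node:
    """One candidate answer in the search tree."""

    def __init__(self, answer, parent=None):
        self.answer = answer
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def uct(self, c=1.4):
        # Unvisited nodes are always explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.total_reward / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore


def refine(answer):
    # Stand-in for an LLM call that rewrites the answer using feedback.
    return answer + "+"


def evaluate(answer):
    # Stand-in for a critic/reward model that scores an answer.
    return len(answer) + random.random()


def mcts(initial_answer, iterations=20):
    root = Node(initial_answer)
    root.visits = 1
    best_answer, best_reward = initial_answer, float("-inf")
    for _ in range(iterations):
        # Selection: walk down the tree by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda n: n.uct())
        # Expansion: refine the selected answer into a new child node.
        child = Node(refine(node.answer), parent=node)
        node.children.append(child)
        # Evaluation and backpropagation up to the root.
        reward = evaluate(child.answer)
        if reward > best_reward:
            best_answer, best_reward = child.answer, reward
        while child is not None:
            child.visits += 1
            child.total_reward += reward
            child = child.parent
    return best_answer


print(mcts("draft answer"))
```

In the real system, `refine` and `evaluate` would each be model queries, so the iteration budget directly controls how much test-time compute is spent per prompt.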

Features

  • Initial Answer Generation: Uses greedy decoding to generate an initial response.
  • Feedback Generation: Provides constructive, concise feedback on generated answers.
  • Iterative Improvement: Refines responses based on feedback through additional model queries.
  • Monte Carlo Tree Search: Employs MCTS to explore and evaluate multiple answer paths.
  • Structured Response Handling: Validates responses against JSON schemas using tools like pydantic.
  • Modular Codebase: Organized into modules for inference, prompts, MCTS logic, configuration, and utilities.
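As an illustration of the structured-response feature, a pydantic model can enforce that a raw model reply matches the expected JSON schema. The field names below are hypothetical, chosen only to show the validate-or-reject pattern.

```python
from pydantic import BaseModel, ValidationError


class AnswerResponse(BaseModel):
    # Hypothetical schema: a refined answer plus a self-assessed score.
    answer: str
    score: float


# A well-formed reply parses into a typed object.
parsed = AnswerResponse.model_validate_json('{"answer": "Paris", "score": 0.9}')
print(parsed.answer)

# A malformed reply (missing "score") is rejected with a clear error,
# so downstream MCTS logic never sees unstructured text.
try:
    AnswerResponse.model_validate_json('{"answer": "Paris"}')
except ValidationError:
    print("rejected malformed response")
```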

Installation

Dependencies

  • Python: Version 3.11 or higher

The project depends mainly on the following packages:

  • instructor for guided (structured) generation
  • litellm for a unified API across multiple LLM providers

Setup Instructions

  1. Clone the Repository:

    git clone https://github.com/brotSchimmelt/llm-mcts-inference.git
    cd llm-mcts-inference
    
  2. Install the Project Dependencies:

    If you use uv, run the following commands to create a virtualenv and install all requirements:

    uv venv --python 3.11
    uv sync
    

    Otherwise, install the required packages with pip:

    pip install .
    
  3. Configure Environment Variables: Rename the provided example.env file to .env and update it with your API keys or other configuration details as needed.
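For example, a minimal .env might look like the following. The exact variable names depend on which provider litellm should call, so treat these keys as placeholders:

```
# Set the key(s) for your chosen provider(s).
OPENAI_API_KEY=sk-...
```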

Usage

Use the MonteCarloLLM class to generate and improve responses via MCTS:

from llm_mcts_inference.MonteCarloLLM import MonteCarloLLM

# Initialize with a specific model; defaults are defined in settings
llm = MonteCarloLLM(model_name="gpt-3.5-turbo")

# Define your prompt
prompt = "What is the capital of France?"

# Generate a response using Monte Carlo Tree Search
result = llm.generate(prompt)

# Output the final improved answer
print("Final Answer:", result.answer)

# Optionally, display the sequence of nodes (answers) along the best path
print("Best Path:", [node.answer for node in result.valid_path])

License

This project is licensed under the MIT license.
