
LLM MCTS Inference

An experimental project using Monte Carlo Tree Search (MCTS) to refine large language model (LLM) responses for better accuracy and decision-making.

Overview

This project leverages MCTS to explore multiple answer candidates generated by a language model. By iteratively generating an initial answer, evaluating it, and refining it based on targeted feedback, the system aims to improve response quality and decision-making. This approach trades additional test-time compute for more precise and robust model outputs.
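To make the search idea concrete, the candidate-selection step in MCTS is typically driven by the UCT (Upper Confidence bound for Trees) rule, which balances revisiting strong answers against exploring untried ones. The sketch below is a minimal illustration of that rule only; the `Node` class, its field names, and the exploration constant are hypothetical and do not reflect this project's actual internals.

```python
import math

class Node:
    """A search-tree node holding one candidate answer (illustrative only)."""

    def __init__(self, answer, parent=None):
        self.answer = answer
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def uct_score(node, c=1.41):
    """UCT: average reward (exploitation) plus an exploration bonus."""
    if node.visits == 0:
        return float("inf")  # unvisited candidates are always tried first
    exploit = node.total_reward / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def select_child(node):
    """Pick the child with the highest UCT score."""
    return max(node.children, key=uct_score)
```

With equal visit counts the rule prefers the higher-reward child, while any unvisited child is selected immediately thanks to its infinite score.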

Features

  • Initial Answer Generation: Uses greedy decoding to generate an initial response.
  • Feedback Generation: Provides constructive, concise feedback on generated answers.
  • Iterative Improvement: Refines responses based on feedback through additional model queries.
  • Monte Carlo Tree Search: Employs MCTS to explore and evaluate multiple answer paths.
  • Structured Response Handling: Validates responses against JSON schemas using tools like pydantic.
  • Modular Codebase: Organized into modules for inference, prompts, MCTS logic, configuration, and utilities.
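As an illustration of the structured-response idea, a pydantic model can validate a model's JSON output against a schema before it is used further. The `AnswerResponse` schema below is a hypothetical example, not the project's actual response model.

```python
from pydantic import BaseModel, ValidationError

# Hypothetical response schema for illustration only.
class AnswerResponse(BaseModel):
    answer: str
    confidence: float

# A well-formed payload validates into a typed object.
parsed = AnswerResponse.model_validate_json(
    '{"answer": "Paris", "confidence": 0.95}'
)

# A payload missing a required field raises a ValidationError
# instead of silently propagating bad data downstream.
try:
    AnswerResponse.model_validate_json('{"answer": "Paris"}')
    malformed_rejected = False
except ValidationError:
    malformed_rejected = True
```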

Installation

Dependencies

  • Python: Version 3.11 or higher

The project depends mainly on the following packages:

  • instructor for guided, schema-constrained generation
  • litellm for a unified API across multiple LLM providers

Setup Instructions

  1. Clone the Repository:

    git clone https://github.com/brotSchimmelt/llm-mcts-inference.git
    cd llm-mcts-inference
    
  2. Install the Project Dependencies:

    If you use uv, run the following commands to create a virtualenv and install all requirements:

    uv venv --python 3.11
    uv sync
    

    Otherwise, install the required packages with pip:

    pip install .
    
  3. Configure Environment Variables: Rename the provided example.env file to .env and update it with your API keys or other configuration details as needed.
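To catch a misconfigured .env early, a small helper can check that the expected variables are actually visible to the process. The variable name `LLM_MCTS_API_KEY` below is purely illustrative; use whatever keys your chosen provider and the example.env file call for.

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, failing loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Simulate a variable that would normally be loaded from the .env file.
os.environ["LLM_MCTS_API_KEY"] = "example-key"
key = require_env("LLM_MCTS_API_KEY")
```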

Usage

Use the MonteCarloLLM class to generate and improve responses via MCTS:

from llm_mcts_inference.MonteCarloLLM import MonteCarloLLM

# Initialize with a specific model; defaults are defined in settings
llm = MonteCarloLLM(model_name="gpt-3.5-turbo")

# Define your prompt
prompt = "What is the capital of France?"

# Generate a response using Monte Carlo Tree Search
result = llm.generate(prompt)

# Output the final improved answer
print("Final Answer:", result.answer)

# Optionally, display the sequence of nodes (answers) along the best path
print("Best Path:", [node.answer for node in result.valid_path])

License

This project is licensed under the MIT license.

