# LiTS — LLM Inference via Tree Search

A modular Python framework for building LLM agents with tree search (MCTS, BFS) and chain reasoning (ReAct), supporting multi-provider LLMs and tool use.
## Why LiTS?
| Concern | Challenge | LiTS Solution |
|---|---|---|
| Reusability | Reimplementing search algorithms for each new task | Task-agnostic data structures (Action → Step → State → Node) that hide search procedures from task-specific logic |
| Extensibility | Adding new tasks requires modifying many files | Modular components (Policy, Transition, RewardModel) + decorator-based registry — add a task by registering prompts and a transition |
| Observability | Tree search is expensive and hard to debug | Built-in InferenceLogger tracks token usage at component, instance, and search-phase levels; incremental checkpointing for fault tolerance |
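The task-agnostic hierarchy named in the table (Action → Step → State → Node) can be sketched with plain dataclasses. This is an illustrative shape only; the field names and types are assumptions, not the exact LiTS definitions:

```python
# Illustrative sketch of the Action -> Step -> State -> Node hierarchy;
# field names are assumptions, not the exact LiTS definitions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Action:
    text: str                      # e.g. one reasoning move or tool call

@dataclass
class Step:
    action: Action
    observation: str = ""          # what executing the action produced

@dataclass
class State:
    steps: list[Step] = field(default_factory=list)   # trajectory so far

@dataclass
class Node:
    state: State
    parent: Optional["Node"] = None
    children: list["Node"] = field(default_factory=list)
    reward: float = 0.0            # search algorithms read/write this

# A search tree grows by attaching child nodes with extended states:
root = Node(state=State())
child = Node(state=State(steps=[Step(Action("25*17 = 425"))]), parent=root)
root.children.append(child)
print(len(root.children), child.state.steps[0].action.text)
```

Because the search algorithm only touches Node/State and never the task-specific payload, the same MCTS or BFS code can drive any task that supplies these structures.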
## Installation

```bash
pip install -e .         # editable install
pip install -e ".[dev]"  # with dev extras (quotes needed in zsh)
```

Requires Python >= 3.11.
## Quick Start — CLI

LiTS installs four CLI commands via pip:

```bash
lits-search      # Run tree search experiments
lits-eval        # Evaluate tree search results
lits-chain       # Run chain agents (ReAct, EnvChain)
lits-eval-chain  # Evaluate chain results
```

All example CLI commands below assume you are in the demos/ directory, which contains lits_benchmark (example benchmarks) and sample data files:

```bash
cd demos
```
### Run MCTS on MATH500

```bash
lits-search --include lits_benchmark.math_qa \
    --dataset math500 \
    --policy concat --transition concat --reward generative \
    --search-arg n_iters=50 n_actions=3 max_steps=10 \
    --var limit=5
```

### Swap to RAP (different components, same algorithm)

```bash
lits-search --include lits_benchmark.math_qa lits_benchmark.formulations.rap \
    --dataset math500 \
    --policy rap --transition rap --reward rap \
    --search-arg n_iters=10 n_confidence=3
```
### Swap to BFS (different algorithm, same components)

```bash
lits-search --include lits_benchmark.math_qa \
    --dataset math500 \
    --cfg search_algorithm=bfs \
    --policy concat --transition concat --reward generative \
    --search-arg roll_out_steps=2 n_actions=3 max_steps=10
```

### Environment-grounded task (BlocksWorld)

```bash
lits-search --include lits_benchmark.blocksworld \
    --dataset blocksworld \
    --transition blocksworld \
    --search-arg max_steps=6 n_iters=50
```
### Tool-use task (MapEval-SQL)

```bash
lits-search --include lits_benchmark.mapeval \
    --dataset mapeval-sql
```

No component flags are needed — the framework auto-selects tool-use components.

### Evaluate results

```bash
lits-eval --result_dir <result_dir>
```

### Dry run (validate config without inference)

```bash
lits-search --include lits_benchmark.math_qa \
    --dataset math500 --dry-run
```
## Quick Start — Python API

Tree search algorithms are class-based, inheriting from BaseTreeSearch:

```python
from lits.agents.tree.mcts import MCTSSearch, MCTSConfig
from lits.lm import get_lm

# Load model
model = get_lm("bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0")

# Configure search
config = MCTSConfig(
    max_steps=10,
    n_actions=3,
    n_iters=50,
)

# Create search instance with components (policy, transition, and reward
# are component instances constructed for your task)
search = MCTSSearch(
    config=config,
    policy=policy,           # generates candidate actions
    world_model=transition,  # executes actions, produces new states
    reward_model=reward,     # evaluates action quality
)

# Run search
result = search.run(query="What is 25 * 17?", query_idx=0)

# Extract answers from terminal nodes
from lits.agents.tree.common import extract_answers_from_terminal_nodes

vote_answers, answer_rewards, best_node, trace = extract_answers_from_terminal_nodes(
    terminal_nodes_collected=result.terminal_nodes_collected,
    retrieve_answer=retrieve_answer_fn,
    question="What is 25 * 17?",
)
```
### ReAct Agent (tool use)

```python
from lits.agents import create_tool_use_agent

agent = create_tool_use_agent(tools=tool_list, max_iter=50)
state = agent.run(query="Find restaurants near Sydney Opera House")
```
## Supported LLM Providers

```python
from lits.lm import get_lm

model = get_lm("bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0")  # AWS Bedrock
model = get_lm("openai/gpt-4")                                         # OpenAI
model = get_lm("Qwen/Qwen2.5-0.5B-Instruct", device="cuda")            # HuggingFace
model = get_lm("groq/llama-3.1-8b-instant")                            # Groq
model = get_lm("tgi:///meta-llama/Meta-Llama-3-8B")                    # TGI
```
## Architecture

Three core component abstractions compose into agents:

- Policy → generates candidate actions from states
- Transition → executes actions, produces new states
- RewardModel → evaluates action quality (optional)

Search frameworks bundle these with an algorithm:

| Framework | Algorithm | Components |
|---|---|---|
| ReST-MCTS* | MCTS | ConcatPolicy + ConcatTransition + GenerativePRM |
| RAP | MCTS | RAPPolicy + RAPTransition + RapPRM |
| ToT-BFS | BFS | ConcatPolicy + ConcatTransition + GenerativePRM |
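The table shows the same components recombining under different algorithms. How an algorithm composes the three abstractions can be sketched with toy components; the interfaces below are illustrative, not the exact LiTS signatures:

```python
# Conceptual sketch of an algorithm composing Policy / Transition / RewardModel;
# interfaces are illustrative, not the exact LiTS signatures.
class GreedyChain:
    """Degenerate 'search' that always takes the highest-reward action."""
    def __init__(self, policy, transition, reward_model, max_steps=10):
        self.policy = policy
        self.transition = transition
        self.reward_model = reward_model
        self.max_steps = max_steps

    def run(self, state):
        for _ in range(self.max_steps):
            actions = self.policy.get_actions(state)       # Policy proposes
            if not actions:
                break
            best = max(actions, key=lambda a: self.reward_model.score(state, a))
            state = self.transition.step(state, best)      # Transition executes
        return state

# Toy components over integer states; actions are increments.
class CountPolicy:
    def get_actions(self, state):
        return [1, 2, 3] if state < 10 else []

class AddTransition:
    def step(self, state, action):
        return state + action

class BigStepReward:
    def score(self, state, action):
        return action                                      # prefer larger steps

search = GreedyChain(CountPolicy(), AddTransition(), BigStepReward(), max_steps=4)
print(search.run(0))  # greedy takes +3 each step: 0 -> 3 -> 6 -> 9 -> 12
```

Swapping `GreedyChain` for MCTS or BFS changes only the algorithm class; the three components are reused unchanged, which is what lets RAP and ReST-MCTS* share MCTS while differing only in components.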
## Extending with Custom Components

Register components via decorators — no core code changes needed:

```python
from lits.components.registry import register_transition, register_dataset

@register_transition("my_domain")
class MyTransition(Transition):
    def step(self, example, state, action, **kwargs):
        ...

    def is_terminal(self, state, example, **kwargs):
        ...

@register_dataset("my_dataset", task_type="env_grounded")
def load_my_dataset(**kwargs):
    ...
```

Then use via CLI:

```bash
lits-search --include my_package \
    --dataset my_dataset --transition my_domain
```
## Task Types

| Task Type | State Space | Examples |
|---|---|---|
| language_grounded | Text context | Math reasoning (GSM8K, MATH500) |
| env_grounded | Symbolic/physical state | BlocksWorld, Crosswords |
| tool_use | Context + tool state | SQL queries, web search, APIs |
## Project Structure

```
lits/                # Core framework
├── agents/          # MCTS, BFS, ReAct, EnvChain
├── components/      # Policy, Transition, RewardModel
├── lm/              # Multi-provider LLM interface
├── structures/      # State, Action, Step, Node
├── cli/             # CLI entry points
├── eval/            # Evaluation utilities
└── tools/           # Tool implementations
```
## License

Apache License 2.0