A reasoning framework for all LLMs

Reasoning Framework

A Python package that adds R1-style reasoning capabilities to any large language model (LLM). The framework enables step-by-step reasoning and verification of responses through a two-step process:

  1. Initial reasoning and response generation
  2. Verification and refinement of the response
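Conceptually, the two steps chain together like this. This is a hypothetical sketch of the flow, not the package's internals; the prompt wording and helper name are illustrative only:

```python
def two_step(question, reason_llm, verify_llm):
    """Illustrative sketch of the two-step process described above."""
    # Step 1: initial reasoning and response generation
    initial = reason_llm(f"Think step by step, then answer: {question}")
    # Step 2: verification and refinement of that response
    final = verify_llm(f"Verify and refine this answer to '{question}': {initial}")
    return initial, final
```

In the actual package, the two callables correspond to the `reasoning_llm_call` and `verification_llm_call` arguments shown in the Quick Start below.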

Installation

Basic installation:

pip install reasoning

With OpenAI support:

pip install "reasoning[openai]"

With Anthropic support:

pip install "reasoning[anthropic]"

With all supported APIs:

pip install "reasoning[openai,anthropic]"

Environment Variables

Depending on which API you're using, you'll need to set the appropriate environment variables:

  • For OpenAI: OPENAI_API_KEY
  • For Anthropic: ANTHROPIC_API_KEY
  • For OpenRouter: OPENROUTER_API_KEY

You can set these in your shell:

export OPENAI_API_KEY='your-api-key'
export ANTHROPIC_API_KEY='your-api-key'
export OPENROUTER_API_KEY='your-api-key'

Or in Python:

import os
os.environ['OPENAI_API_KEY'] = 'your-api-key'

Quick Start

Using OpenAI

from reasoning import ReasoningFramework
from reasoning.examples.openai_example import create_openai_call

# Create model-specific callers
gpt4_call = create_openai_call("gpt-4")
gpt35_call = create_openai_call("gpt-3.5-turbo")

# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=gpt4_call,
    verification_llm_call=gpt35_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)

print("Original Message:", response.message)
print("\nReasoning Process:", response.reasoning)
print("\nInitial Response:", response.initial_response)
print("\nVerified Response:", response.final_response)

Using Anthropic

from reasoning import ReasoningFramework
from reasoning.examples.anthropic_example import create_anthropic_call

# Create model-specific callers
sonnet_call = create_anthropic_call("claude-3-sonnet")
opus_call = create_anthropic_call("claude-3-opus")

# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=sonnet_call,
    verification_llm_call=opus_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)

Using OpenRouter

from reasoning import ReasoningFramework
from reasoning.examples.openrouter_example import create_openrouter_call

# Create model-specific callers
r1_call = create_openrouter_call("deepseek/deepseek-r1")
claude_call = create_openrouter_call("anthropic/claude-3-sonnet")

# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=r1_call,
    verification_llm_call=claude_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)

Features

  • Flexible integration with any LLM through callback functions
  • Built-in support for OpenAI, Anthropic, and OpenRouter APIs
  • Structured reasoning process with verification
  • Customizable system prompts for both reasoning and verification
  • Type-safe implementation using Pydantic models
  • Comprehensive logging for debugging
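Because integration happens through plain callback functions, any model can be wired in. A minimal sketch; the callback signature `(prompt, system_prompt=None, **kwargs) -> str` is an assumption, so consult the package source for the exact interface:

```python
# Hypothetical custom callback standing in for a real model call
# (a local model, an HTTP API, etc.). The signature is an assumption.
def my_llm_call(prompt, system_prompt=None, **kwargs):
    temperature = kwargs.get("temperature", 1.0)
    # A real implementation would send `prompt` (and `system_prompt`) to a model
    # and return its text response.
    return f"[answer at T={temperature}] {prompt}"

# The same callable would then be passed to the framework:
# framework = ReasoningFramework(
#     reasoning_llm_call=my_llm_call,
#     verification_llm_call=my_llm_call,
# )
```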

Advanced Usage

Custom System Prompts

framework = ReasoningFramework(
    reasoning_llm_call=my_llm_call,
    verification_llm_call=my_verification_call,
    reasoning_system_prompt="You are an expert at breaking down complex problems...",
    verification_system_prompt="You are a critical thinker who verifies conclusions..."
)

Error Handling

The framework includes built-in error handling and logging:

import logging
logging.basicConfig(level=logging.DEBUG)  # Set to see detailed logs
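For more control, you can attach a dedicated handler instead of configuring the root logger. A minimal sketch using the standard library; the logger name "reasoning" is an assumption based on the package name:

```python
import logging

# Route logs to a file with timestamps. The logger name "reasoning" is an
# assumption based on the package name; adjust it if the package logs under
# a different name.
logger = logging.getLogger("reasoning")
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler("reasoning.log")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
logger.addHandler(handler)
```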

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
