Project description

llmscope

llmscope is a Python library designed to simplify interactions with Large Language Models (LLMs) by providing a stateful, fluent interface for managing conversation history, tool usage, and response parsing. It leverages libraries like mirascope for LLM calls and pydantic for data validation and parsing.

A Taste of llmscope

from typing import Literal

from pydantic import BaseModel, Field
import llmscope

class CodeItemDoc(BaseModel):
    summary: str = Field(description="A concise description of this function/struct/enum...")
    example: str = Field(description="An example of how this item is used.")

@llmscope.fn("openai", model="gpt-4o")
def generate_doc(llm, code: str, item_type: Literal["function", "struct"]) -> CodeItemDoc:
    # Organize your prompts like `print`
    llm.system("You are a professional software engineer writing documentation. ")
    if item_type == "function":
        llm.system("* For functions, start with a verb and describe the functionality.")
    else:
        llm.system("* For structs, start with a noun phrase summarizing this type.")
    
    llm.user(code)

    # Elegant structured output selectors.
    # Uses `os.fork` to collect different JSON schemas in different places.
    # Equivalent to mirascope.llm.call(tools=[ViewCodeSpace], response_model=CodeItemDoc).
    # (ViewCodeSpace is a mirascope-style BaseTool defined elsewhere.)
    while tool := llm.try_tool(ViewCodeSpace):
        llm.assistant(tool.call())

    return llm.parse(CodeItemDoc) 

Installation

pip install llmscope[openai,anthropic,...] 

(Note: like Mirascope, llmscope relies on different optional dependencies for different LLM providers; see the full list of supported providers at https://mirascope.com/api/llm/call/.)

Usage

Use the @fn decorator to wrap a function that defines the agent's behavior. The decorator automatically injects an LLMState instance as the function's first argument.

from llmscope import fn, BaseTool, Field
# Assuming necessary imports for Provider, BaseMessageParam, etc.

# Define a tool (using mirascope's BaseTool)
class EmotionTool(BaseTool):
    """Tool to represent a chosen emotion."""
    emotion: str = Field(..., description="The name of the emotion.")
    reason: str = Field(..., description="A brief reason for choosing this emotion.")

    def call(self):
        print(f"Tool Call: Emotion={self.emotion}, Reason={self.reason}")
        return f"Emotion {self.emotion} acknowledged."

# Define the agent function
@fn(provider="openai", model="gpt-4o") # Configure provider and model
def emotion_agent(llm, initial_prompt: str):
    llm.system("You are an llm that chooses emotions when asked.")
    llm.user(initial_prompt)

    # Loop while the LLM decides to use the EmotionTool
    while tool_call := llm.try_tool(EmotionTool):
        result = tool_call.call() # Execute the tool
        print(f"Tool Result: {result}")
        # Add tool execution result back to the conversation
        llm.assistant(f"Okay, I chose {tool_call.emotion}.") # Or use llm.tool(tool_call=..., content=result) with mirascope
        llm.user("Okay, choose another different emotion and explain why.")

    # If no tool is called, get the final text response
    try:
        final_response = llm.generate()
        print(f"Agent's final text response: {final_response.content}")
    except Exception as e:
        # Handle cases where generate might fail or is used incorrectly (e.g., after try_parse)
        print(f"Could not generate final response: {e}")

    return "Agent finished."

# Run the agent
result = emotion_agent("Choose an emotion and explain why.")
print(result)

Key LLMState Methods

  • config(provider=..., model=..., call_params=...): Sets the LLM provider, model, and optional call parameters.
  • msg(role, *message): Adds a message to the history.
  • system(*message), user(*message), assistant(*message): Convenience methods for msg.
  • try_tool(ToolClass): Attempts to get the LLM to use the specified tool. Returns the tool instance if successful, None otherwise. Can be used in a loop.
  • try_tools(*ToolClasses): Similar to try_tool but for multiple possible tools. Returns a list of successful tool calls.
  • try_parse(PydanticModel): Attempts to parse the LLM response into the given Pydantic model without finalizing the request. Useful for checking intermediate structured outputs.
  • parse(PydanticModel): Finalizes the request and parses the LLM response into the given Pydantic model. Raises ValidationError on failure.
  • generate(): Finalizes the request and returns the raw LLM response (BaseCallResponse from mirascope), typically used when no specific parsing or tool use is expected at the end.

Important: Methods like try_tool, try_parse, parse, and generate trigger internal state management and potentially LLM calls. Avoid calling config or msg between a try_ call and its corresponding parse or generate call within the same logical block.
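Both parse and try_parse build on pydantic validation. As a minimal sketch of that parsing contract, here is what a response model does with a well-formed versus a malformed LLM response, using plain pydantic and independent of llmscope (the JSON payloads are hypothetical examples, not real model output):

```python
from pydantic import BaseModel, Field, ValidationError

# Response model mirroring CodeItemDoc from the example above.
class CodeItemDoc(BaseModel):
    summary: str = Field(description="A concise description of this item.")
    example: str = Field(description="An example of how this item is used.")

# A well-formed LLM response validates into a typed object,
# which is the kind of value parse(CodeItemDoc) returns.
doc = CodeItemDoc.model_validate_json(
    '{"summary": "Parses a config file.", "example": "load(path)"}'
)
print(doc.summary)  # → Parses a config file.

# A malformed response raises ValidationError, the same error
# parse() is documented to raise on failure; try_parse() would
# instead report the miss without finalizing the request.
try:
    CodeItemDoc.model_validate_json('{"summary": "Missing the example field."}')
except ValidationError as exc:
    print(len(exc.errors()))  # → 1
```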

License

MIT

Download files

Download the file for your platform.

Source Distribution

llmscope-0.1.0.tar.gz (60.2 kB)

Uploaded Source

Built Distribution


llmscope-0.1.0-py3-none-any.whl (9.0 kB)

Uploaded Python 3

File details

Details for the file llmscope-0.1.0.tar.gz.

File metadata

  • Download URL: llmscope-0.1.0.tar.gz
  • Size: 60.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for llmscope-0.1.0.tar.gz
  • SHA256: e327cfb96e82b2b79b868b141b1b1380e6857e5c16c6df0017d701447d3b5c5f
  • MD5: 2617a3d6141078adadc2d2d7738465f6
  • BLAKE2b-256: fb0cc6427e6177b05ebff50b091a7b256bcd3b251de42131719ba494ae036483

See more details on using hashes here.

File details

Details for the file llmscope-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llmscope-0.1.0-py3-none-any.whl
  • Size: 9.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for llmscope-0.1.0-py3-none-any.whl
  • SHA256: ed577460557313f4b6f1ef8c19c2c9cf18b52c47b78479ee4f666ec1bc6dafb3
  • MD5: 185ff2b4f7dbe7af0c9118d13735649c
  • BLAKE2b-256: 18a391f869543d1da6569a77322ee832f7c834a6eeb81349e1a21c536433c6c5

