llmscope


llmscope is a Python library designed to simplify interactions with Large Language Models (LLMs) by providing a stateful, fluent interface for managing conversation history, tool usage, and response parsing. It leverages libraries like mirascope for LLM calls and pydantic for data validation and parsing.

A Taste of llmscope

from typing import Literal

from pydantic import BaseModel, Field

import llmscope

class CodeItemDoc(BaseModel):
    summary: str = Field(description="A concise description of this function/struct/enum...")
    example: str = Field(description="An example of how this item is used.")

@llmscope.fn("openai", model="gpt-4o")
def generate_doc(llm, code: str, item_type: Literal["function", "struct"]) -> CodeItemDoc:
    # Organize your prompts like `print`
    llm.system("You are a professional software engineer writing documentation.")
    if item_type == "function":
        llm.system("* For functions, start with a verb and describe the functionality.")
    else:
        llm.system("* For structs, start with a noun phrase summarizing this type.")

    llm.user(code)

    # Elegant structured-output selectors.
    # Uses `os.fork` to collect different JSON schemas in different places.
    # Equivalent to mirascope.llm.call(tools=[ViewCodeSpace], response_model=CodeItemDoc).
    while tool := llm.try_tool(ViewCodeSpace):  # ViewCodeSpace: a tool class defined elsewhere
        llm.assistant(tool.call())

    return llm.parse(CodeItemDoc)
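The `while tool := llm.try_tool(...)` loop drains tool calls until the model stops requesting one. The control flow can be sketched with stdlib-only stand-ins (the `ScriptedLLM` and stub tool below are hypothetical illustrations, not llmscope's API):

```python
class ViewCodeSpace:
    """Stub tool; a real one would fetch surrounding code context."""
    def call(self):
        return "fn add(a: i32, b: i32) -> i32 { a + b }"

class ScriptedLLM:
    """Hypothetical stand-in that 'requests' the tool a fixed number of times."""
    def __init__(self, tool_requests: int):
        self.remaining = tool_requests
        self.history = []

    def try_tool(self, tool_cls):
        if self.remaining > 0:
            self.remaining -= 1
            return tool_cls()
        return None  # the model produced a final answer instead of a tool call

    def assistant(self, content: str):
        self.history.append(("assistant", content))

llm = ScriptedLLM(tool_requests=2)
while tool := llm.try_tool(ViewCodeSpace):
    llm.assistant(tool.call())  # feed each tool result back into the conversation
```

Each iteration executes one tool call and appends its result to the history; the loop exits as soon as `try_tool` returns `None`.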

Installation

pip install "llmscope[openai,anthropic,...]"

(Note: like Mirascope, llmscope relies on different optional dependencies for different LLM providers; see the full list of providers at https://mirascope.com/api/llm/call/)

Usage

Use the @fn decorator to wrap a function that defines the agent's behavior. The decorator automatically injects an LLMState instance as the function's first argument.

from llmscope import fn, BaseTool, Field
# Assuming necessary imports for Provider, BaseMessageParam, etc.

# Define a tool (using mirascope's BaseTool)
class EmotionTool(BaseTool):
    """Tool to represent a chosen emotion."""
    emotion: str = Field(..., description="The name of the emotion.")
    reason: str = Field(..., description="A brief reason for choosing this emotion.")

    def call(self):
        print(f"Tool Call: Emotion={self.emotion}, Reason={self.reason}")
        return f"Emotion {self.emotion} acknowledged."

# Define the agent function
@fn(provider="openai", model="gpt-4o") # Configure provider and model
def emotion_agent(llm, initial_prompt: str):
    llm.system("You are an llm that chooses emotions when asked.")
    llm.user(initial_prompt)

    # Loop while the LLM decides to use the EmotionTool
    while tool_call := llm.try_tool(EmotionTool):
        result = tool_call.call() # Execute the tool
        print(f"Tool Result: {result}")
        # Add tool execution result back to the conversation
        llm.assistant(f"Okay, I chose {tool_call.emotion}.") # Or use llm.tool(tool_call=..., content=result) with mirascope
        llm.user("Okay, choose another different emotion and explain why.")

    # If no tool is called, get the final text response
    try:
        final_response = llm.generate()
        print(f"Agent's final text response: {final_response.content}")
    except Exception as e:
        # Handle cases where generate might fail or is used incorrectly (e.g., after try_parse)
        print(f"Could not generate final response: {e}")

    return "Agent finished."

# Run the agent
result = emotion_agent("Choose an emotion and explain why.")
print(result)

Key LLMState Methods

  • config(provider=..., model=..., call_params=...): Sets the LLM provider, model, and optional call parameters.
  • msg(role, *message): Adds a message to the history.
  • system(*message), user(*message), assistant(*message): Convenience methods for msg.
  • try_tool(ToolClass): Attempts to get the LLM to use the specified tool. Returns the tool instance if successful, None otherwise. Can be used in a loop.
  • try_tools(*ToolClasses): Similar to try_tool but for multiple possible tools. Returns a list of successful tool calls.
  • try_parse(PydanticModel): Attempts to parse the LLM response into the given Pydantic model without finalizing the request. Useful for checking intermediate structured outputs.
  • parse(PydanticModel): Finalizes the request and parses the LLM response into the given Pydantic model. Raises ValidationError on failure.
  • generate(): Finalizes the request and returns the raw LLM response (BaseCallResponse from mirascope), typically used when no specific parsing or tool use is expected at the end.

Important: Methods like try_tool, try_parse, parse, and generate trigger internal state management and potentially LLM calls. Avoid calling config or msg between a try_ call and its corresponding parse or generate call within the same logical block.
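The distinction between try_parse and parse follows a common validation pattern: attempt-and-return-None versus validate-or-raise. A minimal stdlib-only sketch of those semantics (an illustration of the pattern, not llmscope's implementation, which parses actual LLM responses):

```python
import json
from dataclasses import dataclass

@dataclass
class CodeItemDoc:
    summary: str
    example: str

def parse(raw: str) -> CodeItemDoc:
    """Validate-or-raise, like LLMState.parse."""
    data = json.loads(raw)
    return CodeItemDoc(summary=data["summary"], example=data["example"])

def try_parse(raw: str):
    """Attempt-and-return-None, like LLMState.try_parse."""
    try:
        return parse(raw)
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

good = '{"summary": "Adds two integers.", "example": "add(1, 2)"}'
bad = '{"summary": "missing the example field"}'
```

Here `try_parse(good)` yields a populated CodeItemDoc while `try_parse(bad)` yields None, which is what makes try_parse suitable for checking intermediate structured outputs without aborting the conversation.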

License

MIT
