
Serapeum Core

Serapeum Core is the provider-agnostic foundation for the Serapeum ecosystem. It defines the core LLM abstractions, prompt templates, structured-output parsers, and tool-execution utilities used by integration packages (OpenAI, Ollama, etc.).

If you are implementing a new LLM backend or want a consistent set of models and helpers for prompts, output parsing, and tool calling, this is the package to use.

Highlights

  • Unified LLM interfaces for chat/completions, streaming, and async variants.
  • Prompt templates for string and chat workflows with variable mapping utilities.
  • Structured output parsing with Pydantic models and JSON schema hints.
  • Tool metadata, schema generation, and safe tool execution utilities.
  • Orchestrators for tool calling and structured outputs.

Package layout

  • serapeum.core.base.llms: Base interfaces and data models (messages, chunks, responses, metadata).
  • serapeum.core.llms: High-level LLM class with predict/stream helpers and structured output utilities.
  • serapeum.core.prompts: Prompt templates for text and chat models.
  • serapeum.core.output_parsers: Parsers for structured outputs (e.g., Pydantic).
  • serapeum.core.tools: Tool metadata, tool interfaces, and tool execution.
  • serapeum.core.llms.orchestrators: Higher-level programs for structured outputs and function-calling orchestration.
  • serapeum.core.utils: JSON/schema helpers and async utilities.
  • serapeum.core.configs: Global configuration singleton (Configs).
  • serapeum.core.chat: Agent response container with streaming helpers.

Installation

From the repo:

cd serapeum-core
python -m pip install -e .

Python 3.11+ is required.

Quick start

1) Build a minimal LLM implementation

from serapeum.core.llms import LLM, CompletionResponse, Metadata
from serapeum.core.prompts import PromptTemplate


class EchoLLM(LLM):
    # Minimal metadata: this backend exposes completion endpoints only.
    metadata = Metadata.model_construct(is_chat_model=False)

    def chat(self, messages, **kwargs):
        raise NotImplementedError

    def stream_chat(self, messages, **kwargs):
        raise NotImplementedError

    async def achat(self, messages, **kwargs):
        raise NotImplementedError

    async def astream_chat(self, messages, **kwargs):
        raise NotImplementedError

    def complete(self, prompt, formatted=False, **kwargs):
        # Echo the prompt back; a real backend would call its provider here.
        return CompletionResponse(text=prompt, delta=prompt)

    def stream_complete(self, prompt, formatted=False, **kwargs):
        raise NotImplementedError

    async def acomplete(self, prompt, formatted=False, **kwargs):
        return CompletionResponse(text=prompt, delta=prompt)

    async def astream_complete(self, prompt, formatted=False, **kwargs):
        raise NotImplementedError


llm = EchoLLM()
prompt = PromptTemplate("Hello, {name}!")
result = llm.predict(prompt, name="Serapeum")
print(result)

2) Parse structured outputs with Pydantic

from pydantic import BaseModel
from serapeum.core.output_parsers import PydanticParser
from serapeum.core.prompts import PromptTemplate


class Greeting(BaseModel):
    message: str


parser = PydanticParser(output_cls=Greeting)
prompt = PromptTemplate(
    'Return a JSON object with a "message" field. Text: {text}',
    output_parser=parser,
)

# With a real LLM backend, this returns a validated Greeting model.
# result = llm.predict(prompt, text="Hello")

3) Define tools and execute them safely

from serapeum.core.tools import BaseTool, ToolMetadata, ToolOutput, ToolExecutor


class EchoTool(BaseTool):
    @property
    def metadata(self) -> ToolMetadata:
        return ToolMetadata(description="Echo input", name="echo")

    def __call__(self, input_values: dict) -> ToolOutput:
        return ToolOutput(tool_name="echo", content=input_values.get("input", ""))


tool = EchoTool()
executor = ToolExecutor()
output = executor.execute(tool, {"input": "hi"})
print(output.content)

Core concepts

LLM interface and orchestration

  • BaseLLM defines the provider contract for chat/completion endpoints, streaming, and async variants.
  • LLM builds on BaseLLM with helpers for prompt formatting, prediction, and structured output modes.
  • FunctionCallingLLM extends LLM with tool-calling helper methods.
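
The division of labor above can be sketched in a few lines. This is a conceptual illustration, not the library's actual code: `predict` is a hypothetical stand-in for the helper layer that `LLM` adds on top of a raw completion endpoint.

```python
# Conceptual sketch of a predict() helper layered over a raw
# completion endpoint: format the prompt, call the provider,
# optionally post-process the output.

def predict(complete, template: str, output_parser=None, **variables) -> str:
    """Format the template, call the completion endpoint, optionally parse."""
    prompt = template.format(**variables)   # prompt formatting
    raw = complete(prompt)                  # provider call
    return output_parser(raw) if output_parser else raw


# A trivial upper-casing "endpoint" stands in for a real provider.
result = predict(lambda p: p.upper(), "Hello, {name}!", name="Serapeum")
print(result)  # HELLO, SERAPEUM!
```

The point is the layering: `BaseLLM` owns the `complete`-style provider contract, while `LLM` owns everything around it (formatting, prediction, parsing).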

Prompts

Use PromptTemplate for string prompts and ChatPromptTemplate for message templates. Both support template variable mappings and optional output parsers.

Output parsers

PydanticParser injects a compact JSON schema into prompts and parses LLM output into validated Pydantic models. output_parsers.utils also provides markdown JSON/code extraction helpers.
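
The extraction step can be illustrated with a stdlib-only sketch. The function name is hypothetical; it only shows the kind of work the markdown-JSON helpers do, not their actual implementation:

```python
import json
import re

# Conceptual sketch: pull a JSON object out of an LLM reply that may
# wrap it in a ```json fenced block, then decode it.

def extract_json(llm_output: str) -> dict:
    """Return the first JSON object found in a fenced block, or parse the raw text."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", llm_output, re.DOTALL)
    payload = match.group(1) if match else llm_output
    return json.loads(payload)


raw = 'Here you go:\n```json\n{"message": "Hello"}\n```'
print(extract_json(raw))  # {'message': 'Hello'}
```

A Pydantic-based parser would then feed the resulting dict into the target model for validation.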

Tools and schemas

  • ToolMetadata produces OpenAI-style tool specs and JSON schema for tool inputs.
  • CallableTool (in serapeum.core.tools.callable_tool) can wrap functions or Pydantic models into tool definitions.
  • ToolExecutor runs tools safely with optional auto-unpacking and error normalization.
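
The schema-generation idea behind ToolMetadata and CallableTool can be sketched with `inspect`. This is a simplified illustration under assumed conventions (the `tool_spec` helper and its type table are hypothetical), not the package's actual code:

```python
import inspect

# Conceptual sketch: derive an OpenAI-style tool spec from a plain
# function's signature and annotations.

PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_spec(fn) -> dict:
    """Build a minimal OpenAI-style function spec from type annotations."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": props,
                "required": [n for n, p in sig.parameters.items()
                             if p.default is inspect.Parameter.empty],
            },
        },
    }


def echo(text: str, repeat: int = 1) -> str:
    """Echo the input text."""
    return text * repeat

spec = tool_spec(echo)
print(spec["function"]["name"])                    # echo
print(spec["function"]["parameters"]["required"])  # ['text']
```

Parameters without defaults become required fields, which is exactly the information a function-calling model needs to fill in tool arguments.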

Structured programs

  • TextCompletionLLM runs a prompt + parser + LLM pipeline to return Pydantic outputs for completion-style models.
  • ToolOrchestratingLLM uses function-calling models to produce structured outputs via tools.

Configuration

serapeum.core.configs.Configs is a small global configuration holder. You can set a default LLM instance and control structured output mode:

from serapeum.core.configs import Configs
from serapeum.core.types import StructuredLLMMode


Configs.llm = llm
Configs.pydantic_program_mode = StructuredLLMMode.OPENAI
