Serapeum Core
Serapeum Core is the provider-agnostic foundation for the Serapeum ecosystem. It defines the core LLM abstractions, prompt templates, structured-output parsers, and tool-execution utilities used by integration packages (OpenAI, Ollama, etc.).
If you are implementing a new LLM backend or want a consistent set of models and helpers for prompts, output parsing, and tool calling, this is the package to use.
Highlights
- Unified LLM interfaces for chat/completions, streaming, and async variants.
- Prompt templates for string and chat workflows with variable mapping utilities.
- Structured output parsing with Pydantic models and JSON schema hints.
- Tool metadata, schema generation, and safe tool execution utilities.
- Orchestrators for tool calling and structured outputs.
Package layout
- `serapeum.core.base.llms`: Base interfaces and data models (messages, chunks, responses, metadata).
- `serapeum.core.llms`: High-level LLM class with predict/stream helpers and structured output utilities.
- `serapeum.core.prompts`: Prompt templates for text and chat models.
- `serapeum.core.output_parsers`: Parsers for structured outputs (e.g., Pydantic).
- `serapeum.core.tools`: Tool metadata, tool interfaces, and tool execution.
- `serapeum.core.llms.orchestrators`: Higher-level programs for structured outputs and function-calling orchestration.
- `serapeum.core.utils`: JSON/schema helpers and async utilities.
- `serapeum.core.configs`: Global configuration singleton (`Configs`).
- `serapeum.core.chat`: Agent response container with streaming helpers.
Installation
From the repo root:
```bash
cd libs/core
python -m pip install -e .
```
Python 3.11+ is required.
Quick start
1) Build a minimal LLM implementation
```python
from serapeum.core.llms import LLM, CompletionResponse, Metadata
from serapeum.core.prompts import PromptTemplate


class EchoLLM(LLM):
    metadata = Metadata.model_construct(is_chat_model=False)

    def chat(self, messages, stream=False, **kwargs):
        raise NotImplementedError()

    async def achat(self, messages, stream=False, **kwargs):
        raise NotImplementedError()

    def complete(self, prompt, formatted=False, stream=False, **kwargs):
        if stream:
            raise NotImplementedError()
        return CompletionResponse(text=prompt, delta=prompt)

    async def acomplete(self, prompt, formatted=False, stream=False, **kwargs):
        if stream:
            raise NotImplementedError()
        return CompletionResponse(text=prompt, delta=prompt)


llm = EchoLLM()
prompt = PromptTemplate("Hello, {name}!")
result = llm.predict(prompt, name="Serapeum")
print(result)
```
2) Parse structured outputs with Pydantic
```python
from pydantic import BaseModel

from serapeum.core.output_parsers import PydanticParser
from serapeum.core.prompts import PromptTemplate


class Greeting(BaseModel):
    message: str


parser = PydanticParser(output_cls=Greeting)
prompt = PromptTemplate(
    'Return JSON like {"message": "<text>"}. Text: {text}',
    output_parser=parser,
)

# With a real LLM backend, this returns a validated Greeting model.
# result = llm.predict(prompt, text="Hello")
```
3) Define tools and execute them safely
```python
from serapeum.core.tools import BaseTool, ToolExecutor, ToolMetadata, ToolOutput


class EchoTool(BaseTool):
    @property
    def metadata(self) -> ToolMetadata:
        return ToolMetadata(description="Echo input", name="echo")

    def __call__(self, input_values: dict) -> ToolOutput:
        return ToolOutput(tool_name="echo", content=input_values.get("input", ""))


tool = EchoTool()
executor = ToolExecutor()
output = executor.execute(tool, {"input": "hi"})
print(output.content)
```
Core concepts
LLM interface and orchestration
- `BaseLLM` defines the provider contract for chat/completion endpoints, streaming, and async variants.
- `LLM` builds on `BaseLLM` with helpers for prompt formatting, prediction, and structured output modes.
- `FunctionCallingLLM` extends `LLM` with tool-calling helper methods.
Prompts
Use `PromptTemplate` for string prompts and `ChatPromptTemplate` for message templates. Both support template variable mappings and optional output parsers.
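To make the variable-mapping idea concrete, here is a plain-Python sketch of the concept (not the serapeum API): a template keeps its internal variable names, while a mapping lets callers pass arguments under different names.

```python
# Concept sketch in plain Python, not the serapeum API.
# The template uses `document`, but callers are allowed to pass `text`;
# the mapping translates caller-facing names to template names.
TEMPLATE = "Summarize the following passage:\n{document}"
var_mappings = {"text": "document"}


def render(template: str, mappings: dict[str, str], **kwargs) -> str:
    # Rename each keyword argument according to the mapping, then format.
    remapped = {mappings.get(key, key): value for key, value in kwargs.items()}
    return template.format(**remapped)


print(render(TEMPLATE, var_mappings, text="LLMs are neural networks."))
# Summarize the following passage:
# LLMs are neural networks.
```

This is the pattern such mappings serve: prompt text can be refactored without breaking every call site that fills it in.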
Output parsers
`PydanticParser` injects a compact JSON schema into prompts and parses LLM output into validated Pydantic models. `output_parsers.utils` also provides Markdown JSON/code extraction helpers.
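The extraction step matters because models often wrap JSON in Markdown fences. The sketch below illustrates the idea in plain Python and Pydantic; it is not the serapeum helpers themselves, whose exact signatures are not shown here.

```python
import json
import re

from pydantic import BaseModel


class Greeting(BaseModel):
    message: str


def extract_markdown_json(text: str) -> dict:
    # Pull the first ```json ... ``` block out of the LLM output;
    # fall back to treating the whole text as JSON.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


raw = 'Here you go:\n```json\n{"message": "hello"}\n```'
greeting = Greeting.model_validate(extract_markdown_json(raw))
print(greeting.message)  # hello
```

Validating the extracted dict with `model_validate` is what turns free-form model output into a typed object the rest of the program can trust.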
Tools and schemas
- `ToolMetadata` produces OpenAI-style tool specs and JSON schema for tool inputs.
- `CallableTool` (in `serapeum.core.tools.callable_tool`) can wrap functions or Pydantic models into tool definitions.
- `ToolExecutor` runs tools safely with optional auto-unpacking and error normalization.
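The core trick behind wrapping a function as a tool is deriving a JSON schema from its signature. The sketch below shows one way to do that with `inspect` and Pydantic; it illustrates the mechanism, not serapeum's internal implementation.

```python
import inspect

from pydantic import create_model


def echo(input: str, repeat: int = 1) -> str:
    """Echo the input, optionally repeated."""
    return input * repeat


# Build a Pydantic model from the function signature: parameters without
# defaults become required fields (marked with `...`).
fields = {}
for name, param in inspect.signature(echo).parameters.items():
    default = param.default if param.default is not inspect.Parameter.empty else ...
    fields[name] = (param.annotation, default)
InputModel = create_model("EchoInput", **fields)

# Emit an OpenAI-style tool spec from the model's JSON schema.
tool_spec = {
    "type": "function",
    "function": {
        "name": echo.__name__,
        "description": echo.__doc__,
        "parameters": InputModel.model_json_schema(),
    },
}
print(tool_spec["function"]["parameters"]["required"])  # ['input']
```

A function-calling model receives this spec, returns arguments matching the schema, and the executor can then validate and unpack them before invoking the function.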
Structured programs
- `TextCompletionLLM` runs a prompt + parser + LLM pipeline to return Pydantic outputs for completion-style models.
- `ToolOrchestratingLLM` uses function-calling models to produce structured outputs via tools.
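The completion-style pipeline can be pictured as three steps: format the prompt, call the model, parse the reply into a typed object. Here is a minimal plain-Python sketch of that loop with a stand-in model call; the real `TextCompletionLLM` interface is not reproduced here.

```python
import json

from pydantic import BaseModel


class Answer(BaseModel):
    value: int


def fake_complete(prompt: str) -> str:
    # Stand-in for a real LLM completion call.
    return '{"value": 42}'


def run_structured(prompt_template: str, **variables) -> Answer:
    # 1) format the prompt, 2) call the model, 3) parse into a Pydantic object
    raw = fake_complete(prompt_template.format(**variables))
    return Answer.model_validate(json.loads(raw))


result = run_structured("Answer with JSON. Question: {q}", q="6 x 7?")
print(result.value)  # 42
```

The parse step is also where retries or error normalization would hook in when the model returns malformed output.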
Configuration
`serapeum.core.configs.Configs` is a small global configuration holder. You can set a default LLM instance and control the structured output mode:
```python
from serapeum.core.configs import Configs
from serapeum.core.types import StructuredOutputMode

Configs.llm = llm
Configs.structured_output_mode = StructuredOutputMode.OPENAI
```