SilkLoom Core: minimal stateful batch engine for LLM and VLM workloads

SilkLoom Core

SilkLoom Core V1.0.0 is a minimal, stateful LLM batch engine.

The public surface is intentionally small:

  • LLMTask
  • ResultSet
  • TaskResult

This README is split into two parts:

  1. User Guide: installation, input formats, prompt rules, and examples.
  2. API Reference: constructor arguments, method signatures, and returned objects.

Prompt templates use Jinja2 with strict rendering. user_prompt and system_prompt are rendered against each input item, so template variables must match the keys in that item's dictionary. A missing variable raises an error instead of rendering as empty text. For a plain list of strings, SilkLoom wraps each item as {"text": "..."}.
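
The strict-rendering behavior can be reproduced with plain Jinja2 (a sketch of the rule above, not SilkLoom's internal code):

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

# StrictUndefined makes missing template variables raise
# instead of silently rendering as empty text.
env = Environment(undefined=StrictUndefined)
template = env.from_string("Translate into English: {{ text }}")

print(template.render({"text": "你好"}))  # variable matches the item key

try:
    template.render({"txt": "typo in key name"})  # "text" is missing
except UndefinedError as err:
    print("template error:", err)
```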

Install

pip install silkloom-core

From source:

git clone https://github.com/LeLiu-GeoAI/silkloom-core.git
cd silkloom-core
pip install -e .

User Guide

Quick Start

from openai import OpenAI
from silkloom_core import LLMTask

client = OpenAI(api_key="your_key")

task = LLMTask(
    model="gpt-4o-mini",
    user_prompt="Translate into English: {{ text }}",
    client=client,
)

results = task.map(["你好", "今天天气不错"])
print(results[0])
print(results.success_count, results.failed_count)

Input Formats

LLMTask.map() accepts three common input shapes:

  • list[str]: each string is wrapped as {"text": ...}
  • list[dict]: each dict becomes one prompt context
  • pandas.DataFrame: optional; each row becomes one prompt context and the column names become template variables

pandas is optional and not required for normal usage; install it separately only if you want to pass a DataFrame.
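
All three shapes can be reduced to one list of dictionaries before rendering; a minimal sketch of that normalization (the helper name normalize_inputs is illustrative, not part of the public API):

```python
def normalize_inputs(items):
    """Reduce the supported input shapes to a list of dicts (illustrative)."""
    # A DataFrame exposes to_dict; one dict per row, columns as keys.
    if hasattr(items, "to_dict"):
        return items.to_dict(orient="records")
    # Plain strings are wrapped as {"text": ...}; dicts pass through.
    return [{"text": it} if isinstance(it, str) else dict(it) for it in items]

print(normalize_inputs(["你好"]))  # [{'text': '你好'}]
print(normalize_inputs([{"text": "hi", "lang": "en"}]))
```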

Dictionary list example:

from silkloom_core import LLMTask

task = LLMTask(
    model="gpt-4o-mini",
    user_prompt="Extract name and intent from text: {{ text }}",
)

results = task.map([
    {"text": "My name is Alice. I want a refund."},
    {"text": "Bob asks about delivery."},
])

Pandas DataFrame

Each DataFrame row is treated as one input item, and the column names are available as template variables.

import pandas as pd
from silkloom_core import LLMTask

df = pd.DataFrame(
    [
        {"text": "Urban heat island is intensifying.", "lang": "en"},
        {"text": "城市更新需要兼顾公平。", "lang": "zh"},
    ]
)

task = LLMTask(
    model="gpt-4o-mini",
    user_prompt="Rewrite the following {{ lang }} text: {{ text }}",
)

results = task.map(df)

Prompt Template Rules

Template variables must match the keys in the input context.

task = LLMTask(
    model="gpt-4o-mini",
    user_prompt="Rewrite the following {{ lang }} text: {{ text }}",
)

For a DataFrame, this row exposes text and lang to the template:

{"text": "Urban heat is rising.", "lang": "en"}

Structured Output

from pydantic import BaseModel
from silkloom_core import LLMTask


class ExtractInfo(BaseModel):
    name: str
    intent: str


task = LLMTask(
    model="gpt-4o-mini",
    user_prompt="Extract name and intent from text: {{ text }}",
    response_model=ExtractInfo,
)

results = task.map([
    {"text": "My name is Alice. I want a refund."},
    {"text": "Bob asks about delivery."},
])

print(results[0].name)

GLM and Ollama

GLM-4-Flash

import os
from openai import OpenAI
from silkloom_core import LLMTask

glm_client = OpenAI(
    api_key=os.environ["ZHIPUAI_API_KEY"],
    base_url="https://open.bigmodel.cn/api/paas/v4/",
)

task = LLMTask(
    model="glm-4-flash",
    user_prompt="Summarize this text: {{ text }}",
    client=glm_client,
)

results = task.map(["Urban renewal should balance efficiency and equity."])

Ollama

from openai import OpenAI
from silkloom_core import LLMTask

ollama_client = OpenAI(
    api_key="ollama",
    base_url="http://localhost:11434/v1",
)

task = LLMTask(
    model="qwen2.5:7b",
    user_prompt="Rewrite in academic tone: {{ text }}",
    client=ollama_client,
)

results = task.map(["Traffic is usually worst in evening peak."])

Multimodal Input

Pass image sources in images (supports local path, URL, or base64/data URI):

from silkloom_core import LLMTask

task = LLMTask(
    model="gpt-4o",
    user_prompt="Describe these images and answer: {{ text }}",
)

results = task.map([
    {
        "text": "What is shown?",
        "images": ["./pic1.jpg", "https://example.com/pic2.png"],
    }
])
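
A local path would typically be read and encoded before being sent to the API; one way that conversion could look (a hypothetical helper, not SilkLoom's actual encoding code):

```python
import base64
import mimetypes

def to_data_uri(path):
    """Encode a local image file as a base64 data URI (illustrative)."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

URLs and data URIs can be forwarded to the API as-is; only local paths need an encoding step like this.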

Resumability

map supports resumable runs backed by SQLite via db_path and run_id:

results = task.map(
    [{"text": "a"}, {"text": "b"}],
    db_path="my_run.db",
    run_id="demo_001",
    workers=5,
)

Running again with the same run_id reuses successful records.
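
The pattern behind this can be sketched with plain sqlite3 (the schema and names here are assumptions for illustration, not SilkLoom's actual cache layout):

```python
import json
import sqlite3

def run_resumable(db_path, run_id, items, process):
    """Process items, skipping indices already recorded for this run_id."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(run_id TEXT, idx INTEGER, output TEXT, PRIMARY KEY (run_id, idx))"
    )
    done = {row[0] for row in
            con.execute("SELECT idx FROM results WHERE run_id = ?", (run_id,))}
    for i, item in enumerate(items):
        if i in done:
            continue  # succeeded in a previous run; skip
        out = process(item)
        con.execute("INSERT INTO results VALUES (?, ?, ?)",
                    (run_id, i, json.dumps(out)))
        con.commit()  # persist immediately so a crash loses at most one item
    con.close()
```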

Exporting Results

ResultSet supports in-memory access and file export:

results.run_id
results.success_count
results.failed_count
results.total_tokens
results.errors
results[0]
results.export_jsonl("out.jsonl")
results.export_csv("out.csv", flatten=True)

API Reference

LLMTask

Constructor:

LLMTask(
    model: str,
    user_prompt: str,
    system_prompt: str | None = None,
    response_model: type[BaseModel] | None = None,
    max_retries: int = 3,
    client: Any | None = None,
)

Arguments:

  • model: target model name, such as gpt-4o-mini
  • user_prompt: required Jinja2 template for the user message
  • system_prompt: optional Jinja2 template for the system message
  • response_model: optional Pydantic model for structured output parsing
  • max_retries: number of attempts for one item
  • client: optional OpenAI-compatible client; defaults to the official client

Method:

map(sequence, db_path=".silkloom_cache.db", run_id=None, workers=5) -> ResultSet

Accepted inputs:

  • list[str]
  • list[dict]
  • pandas.DataFrame

ResultSet

ResultSet behaves like a sequence aligned with the input order.

Properties:

  • run_id
  • success_count
  • failed_count
  • total_tokens
  • errors

Indexing and methods:

  • results[0]: returns the result at the same index as the input
  • export_jsonl(path): write successful results to JSONL
  • export_csv(path, flatten=False, include_usage=True): write a CSV export
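
flatten=True implies that nested fields in structured output are spread into flat CSV columns; a sketch of the kind of flattening involved (dotted-key naming is an assumption here, not the library's documented scheme):

```python
def flatten(record, parent="", sep="."):
    """Flatten nested dicts into one level with dotted keys (illustrative)."""
    flat = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, name, sep))
        else:
            flat[name] = value
    return flat

print(flatten({"data": {"name": "Alice", "intent": "refund"},
               "usage": {"total_tokens": 42}}))
```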

TaskResult

Each raw task result contains:

  • is_success
  • data
  • error
  • usage
  • input_data

License

MIT
