# Smelt AI
LLM-powered structured data transformation. Feed in rows of data, get back strictly typed Pydantic models or free-text responses — batched, concurrent, and validated.
```python
from smelt import Model, Job
from pydantic import BaseModel


class Classification(BaseModel):
    sector: str
    sub_sector: str
    is_public: bool


model = Model(provider="openai", name="gpt-4.1-mini")

job = Job(
    prompt="Classify each company by industry sector and whether it's publicly traded.",
    output_model=Classification,
)

result = job.run(model, data=[
    {"name": "Apple", "desc": "Consumer electronics and software"},
    {"name": "Stripe", "desc": "Payment processing platform"},
    {"name": "Mayo Clinic", "desc": "Nonprofit medical center"},
])

for row in result.data:
    print(row)  # Classification(sector='Technology', sub_sector='Consumer Electronics', is_public=True)
```
Free-text mode — skip the schema, get plain text back:
```python
job = Job(prompt="Write a one-paragraph summary for each company")

result = job.run(model, data=[
    {"name": "Apple", "desc": "Consumer electronics and software"},
    {"name": "Stripe", "desc": "Payment processing platform"},
])

for text in result.data:
    print(text)  # "Apple is a multinational technology company..."
```
## Install

```bash
pip install smelt-ai[openai]     # OpenAI models
pip install smelt-ai[anthropic]  # Anthropic models
pip install smelt-ai[google]     # Google Gemini models
pip install smelt-ai[vision]     # Image support (Pillow)
```

Combine extras: `pip install smelt-ai[openai,vision]`

Requires Python 3.10+.
## How It Works

```
list[dict] → Tag with row_id → Split into batches → Concurrent LLM calls → Validate → Reorder → SmeltResult[T]
```
- Each input row gets a `row_id` for tracking
- Rows are split into batches of configurable size
- Batches run concurrently through the LLM with structured output
- Each response is validated (schema, row IDs, count)
- Results are reordered to match original input order
- Everything is returned as a typed `SmeltResult` with metrics
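For intuition, here is a conceptual sketch of the tagging and batching steps in plain Python. This is illustrative only, not smelt's internal implementation; the helper names are made up:

```python
# Illustrative only; not smelt's actual internals.
def tag_rows(rows: list[dict]) -> list[dict]:
    # Attach a row_id so each response can be matched back to its input.
    return [{"row_id": i, **row} for i, row in enumerate(rows)]


def make_batches(rows: list[dict], batch_size: int) -> list[list[dict]]:
    # Split tagged rows into fixed-size chunks for concurrent LLM calls.
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]


tagged = tag_rows([{"name": "Apple"}, {"name": "Stripe"}, {"name": "Mayo Clinic"}])
batches = make_batches(tagged, batch_size=2)
# [[{'row_id': 0, ...}, {'row_id': 1, ...}], [{'row_id': 2, ...}]]
```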
## Vision (Image Support)
Pass PIL images directly in your data dicts — smelt auto-detects them, base64-encodes them, and sends multimodal content blocks to vision-capable LLMs.
```bash
pip install smelt-ai[anthropic,vision]
```
```python
from PIL import Image
from pydantic import BaseModel, Field
from smelt import Model, Job


class ECGAnalysis(BaseModel):
    heart_rhythm: str = Field(description="Detected heart rhythm")
    heart_rate_bpm: int = Field(description="Estimated heart rate in bpm")
    abnormalities: list[str] = Field(description="List of detected abnormalities")


model = Model(provider="anthropic", name="claude-sonnet-4-6")

job = Job(
    prompt="Analyze the ECG image and provide a structured cardiac assessment.",
    output_model=ECGAnalysis,
    batch_size=1,
)

result = job.run(model, data=[
    {"patient_id": "P001", "ecg": Image.open("ecg_1.jpeg")},
])

print(result.data[0])
# ECGAnalysis(heart_rhythm='Sinus Tachycardia', heart_rate_bpm=120, abnormalities=[...])
```
Works with any vision-capable model — OpenAI GPT-4o, Anthropic Claude, Google Gemini, etc. Use `batch_size=1` for image-heavy payloads.
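For many images, a reasonable pattern is to keep `batch_size=1` and raise `concurrency`. A sketch reusing the model and schema above; the file names and patient IDs are placeholders:

```python
paths = ["ecg_1.jpeg", "ecg_2.jpeg", "ecg_3.jpeg"]  # placeholder file names

job = Job(
    prompt="Analyze the ECG image and provide a structured cardiac assessment.",
    output_model=ECGAnalysis,
    batch_size=1,   # one image per LLM request
    concurrency=4,  # up to four requests in flight at once
)

result = job.run(model, data=[
    {"patient_id": f"P{i:03d}", "ecg": Image.open(p)}
    for i, p in enumerate(paths, start=1)
])
```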
## Aggregate (Many-to-One)
Reduce an entire dataset to a single output using tree-parallel reduction:
```python
from pydantic import BaseModel
from smelt import AggregateJob, Model


class PortfolioSummary(BaseModel):
    total_companies: int
    sectors: list[str]
    total_revenue_millions: float
    top_5_by_revenue: list[str]


model = Model(provider="anthropic", name="claude-sonnet-4-6")

job = AggregateJob(
    prompt="Analyze this portfolio of companies. Count totals, list all sectors, sum revenues.",
    output_model=PortfolioSummary,
    batch_size=15,
)

result = job.run(model, data=companies)  # 60 companies → 1 summary

print(result.data[0])
# PortfolioSummary(total_companies=60, sectors=['Technology', 'Finance', ...], ...)
```
How it works: batches are mapped in parallel, then merged pairwise in a tree until one result remains. Design your schema with additive fields (lists, counts) for best results.
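Conceptually, the reduction looks like the sketch below, with plain Python standing in for the LLM calls. `summarize_batch` and `merge_pair` are hypothetical stand-ins, not smelt APIs:

```python
# Illustrative tree reduction: each level halves the number of partial results.
def tree_reduce(batches, summarize_batch, merge_pair):
    results = [summarize_batch(b) for b in batches]    # map step (parallel in smelt)
    while len(results) > 1:
        pairs = zip(results[::2], results[1::2])
        merged = [merge_pair(a, b) for a, b in pairs]  # merge step (parallel in smelt)
        if len(results) % 2:                           # odd partial carries to the next level
            merged.append(results[-1])
        results = merged
    return results[0]


total = tree_reduce(
    batches=[[1, 2], [3, 4], [5]],
    summarize_batch=sum,             # map: batch -> partial summary
    merge_pair=lambda a, b: a + b,   # merge: two partials -> one
)
# 15
```

Additive schema fields (lists, counts, sums) survive this pairwise merging cleanly, which is why they work best.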
## API

### Model
Wraps a LangChain chat model provider. Any LangChain-supported provider works.
```python
model = Model(
    provider="openai",          # LangChain provider name
    name="gpt-4.1-mini",        # Model identifier
    api_key="sk-...",           # Optional — falls back to env var (e.g. OPENAI_API_KEY)
    params={"temperature": 0},  # Forwarded to the chat model constructor
)
```
### Job
Defines what transformation to run and how to batch it.
```python
job = Job(
    prompt="Your transformation instructions here",
    output_model=MyPydanticModel,  # Schema for each output row (None for free-text)
    batch_size=10,                 # Rows per LLM request (default: 10)
    concurrency=3,                 # Max concurrent requests (default: 3)
    max_retries=3,                 # Retries per failed batch (default: 3)
    shuffle=False,                 # Shuffle rows before batching (default: False)
    stop_on_exhaustion=True,       # Raise on failure vs collect errors (default: True)
)
```
Run:
```python
result = job.run(model, data=rows)         # Sync
result = await job.arun(model, data=rows)  # Async
```
Test with a single row first:
```python
result = job.test(model, data=rows)         # Sync — runs only the first row
result = await job.atest(model, data=rows)  # Async
```
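Since `arun` and `atest` are coroutines, they need a running event loop. A minimal sketch using the standard library, assuming `job`, `model`, and `rows` are defined as above:

```python
import asyncio


async def main() -> None:
    # atest hits only the first row: a cheap sanity check before the full run.
    preview = await job.atest(model, data=rows)
    print(preview.data[0])

    result = await job.arun(model, data=rows)
    print(result.success)


asyncio.run(main())
```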
### SmeltResult[T]
```python
result.data     # list[T] — transformed rows in original order
result.errors   # list[BatchError] — failed batches
result.metrics  # SmeltMetrics — tokens, timing, retries
result.success  # bool — True if no errors
```
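A minimal sketch of inspecting a finished run, using only the attributes listed above:

```python
result = job.run(model, data=rows)

if result.success:
    print(f"{len(result.data)} rows transformed")
else:
    print(f"{len(result.errors)} batches failed")

print(result.metrics)  # token, timing, and retry stats for the run
```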
## Error Handling
All exceptions inherit from SmeltError.
| Exception | When |
|---|---|
| `SmeltConfigError` | Invalid config (bad provider, empty prompt, etc.) |
| `SmeltValidationError` | LLM output fails schema validation |
| `SmeltAPIError` | Non-retriable API error (401, 403) |
| `SmeltExhaustionError` | Batch exhausted all retries (`stop_on_exhaustion=True`) |
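Because everything inherits from `SmeltError`, a single broad handler works when you just want to log and move on. A sketch, assuming `SmeltError` is importable from `smelt.errors` like the other exceptions:

```python
from smelt.errors import SmeltError

try:
    result = job.run(model, data=rows)
except SmeltError as exc:
    # Catches config, validation, API, and exhaustion errors alike.
    print(f"smelt job failed: {exc}")
```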
Catch exhaustion specifically to recover the rows that did succeed:

```python
from smelt.errors import SmeltExhaustionError

try:
    result = job.run(model, data=rows)
except SmeltExhaustionError as e:
    print(f"Partial: {len(e.partial_result.data)} rows succeeded")
```
Or collect errors without raising:
```python
job = Job(prompt="...", output_model=MyModel, stop_on_exhaustion=False)
result = job.run(model, data=rows)

if not result.success:
    for err in result.errors:
        print(f"Batch {err.batch_index} failed: {err.message}")
```
## Supported Providers
| Provider | `provider` value | Example models |
|---|---|---|
| OpenAI | `"openai"` | `gpt-5.2`, `gpt-4.1-mini`, `gpt-4.1`, `gpt-4o`, `o4-mini` |
| Anthropic | `"anthropic"` | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
| Google Gemini | `"google_genai"` | `gemini-3.1-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |
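The same `Job` can be pointed at any of these providers by swapping the `Model`. A sketch using the provider values and example models from the table, with API keys read from the usual environment variables:

```python
# One job definition, three runs against different backends.
for provider, name in [
    ("openai", "gpt-4.1-mini"),
    ("anthropic", "claude-sonnet-4-6"),
    ("google_genai", "gemini-2.5-flash"),
]:
    model = Model(provider=provider, name=name)
    result = job.run(model, data=rows)
    print(provider, result.success)
```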
## License
MIT
## File details

### smelt_ai-0.4.0.tar.gz (source distribution)

- Download URL: smelt_ai-0.4.0.tar.gz
- Upload date:
- Size: 2.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19

| Algorithm | Hash digest |
|---|---|
| SHA256 | `1df802bddd6b84d6f4227412456f07f041fdbd5b3250abc8895cc28b65b8063b` |
| MD5 | `f1a12e486d495dd53cbb4c08e02e491f` |
| BLAKE2b-256 | `afd0d0c105dfb84939670ac8f24f022c3192bab33a06a3f306e28707df975375` |
### smelt_ai-0.4.0-py3-none-any.whl (built distribution)

- Download URL: smelt_ai-0.4.0-py3-none-any.whl
- Upload date:
- Size: 27.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f0022870ae0d52f706abd117bdde9d911ed0dc79652011be3fe462a640e6be0f` |
| MD5 | `65a0f174e96a343b53c2b7c67f5adfb2` |
| BLAKE2b-256 | `da1892d23f2fcb871f417ecb208c5d5f9f8b8e328d424e8c9d199beff53d0a1f` |