pydantic-pick
Dynamically extract and subset Pydantic V2 models using dot-notation, while preserving your validators, methods, and constraints.
In modern API development (especially with FastAPI) and AI Agent frameworks, it's common to have a "fat" data model that contains heavy internal data (like password_hash or massive tool_responses) and a "thin" model for JSON responses or LLM context windows. Manually writing and maintaining dozens of subset models is tedious.
While some existing libraries allow you to subset Pydantic models, they usually drop all your custom validation logic and methods when generating the new class.
pydantic-pick is different. It recursively rebuilds your models while safely copying over your @field_validators, @computed_fields, Field constraints, and user-defined methods.
Installation
```
pip install pydantic-pick
```
Note: This library requires pydantic >= 2.0.0 and Python 3.10+. It is deeply tied to Pydantic V2's core architecture and is not compatible with Pydantic V1.
Quick Start
Pass your base model, a tuple of dot-notation paths to keep, and the name for the new dynamically generated class.
```python
from pydantic import BaseModel, Field, field_validator
from pydantic_pick import create_subset

class DBUser(BaseModel):
    id: int = Field(..., ge=1)
    username: str
    password_hash: str
    is_active: bool = True

    @field_validator("username")
    @classmethod
    def check_username(cls, v: str) -> str:
        if "admin" in v.lower():
            raise ValueError("Reserved username")
        return v

# Create a subset keeping only 'id' and 'username'
PublicUser = create_subset(DBUser, ("id", "username"), "PublicUser")

# The new model works exactly as expected
user = PublicUser(id=10, username="alice")
print(user.model_dump())
# {'id': 10, 'username': 'alice'}

# AND your validators/constraints survived!
PublicUser(id=-5, username="bob")      # Raises ValidationError: id must be >= 1
PublicUser(id=1, username="admin123")  # Raises ValidationError: Reserved username
```
Deep Nesting & Complex Types
pydantic-pick handles deeply nested models and complex standard library types natively. You can drill into models wrapped in List, Dict, Tuple, Set, Union, Optional, and Annotated.
```python
class Profile(BaseModel):
    avatar_url: str
    billing_secret: str

class Account(BaseModel):
    user_id: int
    # Works through Lists, Dicts, Unions, and Optionals!
    profiles: list[Profile]

# Use dot-notation to drill down into the nested lists
paths = (
    "user_id",
    "profiles.avatar_url",  # Keeps the avatar, drops the billing_secret
)

PublicAccount = create_subset(Account, paths, "PublicAccount")
```
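Drilling through container types like this generally relies on the standard `typing` introspection helpers. The snippet below is an illustrative sketch of that mechanism, not pydantic-pick's actual code: it unwraps wrappers like `list[Profile]` or `Optional[Profile]` to reach the inner types.

```python
from typing import Optional, Union, get_args, get_origin

def unwrap(tp):
    """Recursively strip container/Union wrappers to reach the inner types."""
    origin = get_origin(tp)
    if origin is None:
        return [tp]  # a plain type, e.g. a BaseModel subclass
    return [
        inner
        for arg in get_args(tp)
        if arg is not type(None)  # skip the None half of Optional[...]
        for inner in unwrap(arg)
    ]

print(unwrap(list[int]))                   # [<class 'int'>]
print(unwrap(Optional[dict[str, float]]))  # [<class 'str'>, <class 'float'>]
print(unwrap(Union[int, str]))             # [<class 'int'>, <class 'str'>]
```

A real implementation must also rebuild the wrapper around the subsetted model (e.g. turn `list[Profile]` into `list[PublicProfile]`), but the unwrapping step is the core of path resolution.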
Advanced Use Case: LLM Context Compression
When building autonomous AI agents, tool responses (like executing a Python script or scraping a webpage) can return thousands of lines of raw output. Appending this directly to your LLM's conversation history quickly exhausts the context window and skyrockets API costs.
You can use pydantic-pick to maintain a "Fat History" for your database, but dynamically generate a "Thin History" before calling the LLM.
```python
from pydantic import BaseModel
from pydantic_pick import create_subset

# 1. Your "fat" schema that gets saved to your database
class ToolResponse(BaseModel):
    tool_response: str  # Might contain 10,000 tokens of raw terminal output
    tool_close_instructions: str = "Analyze the tool_response above. Trigger ToolComplete next."

# 2. Dynamically drop the heavy data, but keep the structural instructions
CompressedToolResponse = create_subset(
    ToolResponse,
    ("tool_close_instructions",),  # Keeps instructions, DROPS 'tool_response'
    "CompressedToolResponse",
)

# Now, when you build your LLM prompt payload:
history_for_llm = []
for event in database_history:  # your stored "fat" events
    if isinstance(event, ToolResponse):
        # Convert to the thin model, saving thousands of tokens instantly
        thin_event = CompressedToolResponse(**event.model_dump())
        history_for_llm.append(thin_event.model_dump_json())
```
💡 Performance Tip: The create_subset function uses functools.lru_cache. Generating a model dynamically takes a few milliseconds, but subsequent calls requesting the exact same subset of the same model return instantly from memory. It is completely safe to use inside fast-paced API endpoints or intensive AI agent loops.
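The caching behavior described above can be demonstrated with plain `functools.lru_cache`. This stand-in factory (not the library's real build function) shows why repeated calls with identical arguments return the very same class object instead of rebuilding it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def make_subset(name: str, fields: tuple):
    # Stand-in for an expensive dynamic model build:
    # a new class is constructed only on a cache miss.
    return type(name, (), {"__fields_kept__": fields})

A = make_subset("PublicUser", ("id", "username"))
B = make_subset("PublicUser", ("id", "username"))
print(A is B)  # True: the second call is a cache hit
```

Note that `lru_cache` requires hashable arguments, which is one practical reason the include paths are passed as a tuple rather than a list.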
What Survives Extraction?
Unlike naive create_model wrappers, this library actively preserves your business logic:
- ✅ Field Constraints: Everything inside `Field(...)` (like `ge`, `max_length`, `alias`).
- ✅ Field Validators: `@field_validator` logic is preserved (as long as the fields it targets were not omitted).
- ✅ Computed Fields: `@computed_field` properties are safely carried over.
- ✅ Methods: Custom instance methods, `@classmethod`, `@staticmethod`, and custom wrappers.
- ✅ ClassVars: `typing.ClassVar` attributes are safely mapped.
- ✅ Config: Your `model_config` (like `frozen=True` or `alias_generator`) is inherited.
Intelligent Dependency Resolution (AST Parsing)
What happens if you have a @computed_field or a custom method that relies on a data field, but you omit that data field during extraction?
Instead of letting your application crash randomly at runtime with a cryptic Python error, pydantic-pick uses Abstract Syntax Tree (AST) parsing to peek inside your methods and wrappers.
It maps exactly which self attributes your functions access. If a method relies on a field that you omitted, pydantic-pick gracefully and silently omits the method as well! This cascades, so if method_b relies on method_a, and method_a was dropped, method_b is safely dropped too.
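This kind of dependency mapping can be sketched with the standard `ast` and `inspect` modules. The simplified function below (real library internals may differ) collects which `self.<attr>` names a method reads, which is exactly the information needed to decide whether the method can survive extraction:

```python
import ast
import inspect
import textwrap

def self_attrs(func):
    """Return the set of attribute names accessed via `self.` in a function."""
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    return {
        node.attr
        for node in ast.walk(tree)
        if isinstance(node, ast.Attribute)
        and isinstance(node.value, ast.Name)
        and node.value.id == "self"
    }

class Demo:
    def display_name(self):
        return f"{self.first} {self.last}"

print(sorted(self_attrs(Demo.display_name)))  # ['first', 'last']
```

If any name in that set was omitted from the subset, the method is dropped; repeating the check after each drop yields the cascade described above.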
Clean Developer Experience Errors
If another developer on your team tries to call a method or field that was dynamically dropped, pydantic-pick intercepts it via a custom __getattr__ and provides a beautiful, clear traceback:
```python
PublicUser = create_subset(DBUser, ("id", "username"), "PublicUser")
user = PublicUser(id=1, username="alice")
user.check_password("secret")
```
Output:
```
AttributeError: 'PublicUser' object has no attribute 'check_password'.
-> This field/method was intentionally omitted by pydantic-pick during extraction.
```
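The interception mechanism can be approximated in a few lines. This is a hypothetical, simplified sketch of such a `__getattr__` hook, not the library's actual implementation:

```python
class OmittedLookupDemo:
    """Minimal stand-in showing how omitted names can be intercepted."""
    _omitted_names = {"check_password", "password_hash"}

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails,
        # so surviving fields and methods are unaffected.
        if name in self._omitted_names:
            raise AttributeError(
                f"{type(self).__name__!r} object has no attribute {name!r}.\n"
                "-> This field/method was intentionally omitted during extraction."
            )
        raise AttributeError(name)

obj = OmittedLookupDemo()
try:
    obj.check_password
except AttributeError as exc:
    print(exc)
```

Because `__getattr__` only fires after ordinary attribute lookup fails, this hook adds no overhead to normal field access.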
Truthful Limitations & Quirks
Because dynamic AST generation and Pydantic's Rust-based core have strict boundaries, there are a few edge cases this library does not currently handle. Be aware of these before using it in production:
⚠️ Warning: Model Validators are Dropped: Both `@model_validator` and `@model_serializer` are intentionally ignored during extraction. Because `mode="before"` model validators inspect dictionary state rather than `self.attribute` state, the AST parser cannot reliably map their dependencies. Copying them to a subset class where fields might be missing would cause fatal dictionary/attribute errors at runtime, so pydantic-pick safely drops them.
- Forward References: If you use string-based forward references for circular imports (e.g., `leader: "User"`), the extraction engine cannot peek inside the string to extract nested fields.
- Private Attributes: `PrivateAttr()` definitions are currently lost during extraction.
- Field Aliases in Paths: When defining your include paths, you must use the actual internal Python variable name, not the Pydantic alias (e.g., use `"first_name"`, not `"firstName"`).
- Sets and `model_dump`: If you extract a model containing a `Set[NestedModel]`, remember that Pydantic V2 requires you to use `model_dump(mode="json")` to serialize sets. Standard `model_dump()` will throw a standard Python `TypeError: unhashable type: 'dict'`.
- Generic Models: Dynamically creating a subset of a `Generic[T]` model results in a standard model; it will lose its generic subscriptable properties.
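The `Set[NestedModel]` quirk is rooted in plain-Python hashability rules: the default `model_dump()` turns nested models into dicts, and dicts cannot live in a set. A pydantic-free illustration:

```python
# A set of dicts is impossible in Python, which is why dumping a
# Set[NestedModel] to Python objects (the model_dump() default) fails:
try:
    {{"avatar_url": "x.png"}}  # set literal containing a dict
except TypeError as exc:
    print(exc)  # unhashable type: 'dict'

# model_dump(mode="json") sidesteps this by serializing sets as lists.
```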
Links
- GitHub: https://github.com/StoneSteel27/pydantic-pick
- Issues: https://github.com/StoneSteel27/pydantic-pick/issues
License
MIT License