Create nested data structures easily
Project description
A hierarchical solution for data fetching and processing
- use a declarative way to define view data, which is easy to maintain and develop
- use a main query and loader queries to break down complex queries for better reuse
- provide various tools to construct view data precisely, with no overhead
If you are using pydantic v2, please use pydantic2-resolve instead.
It is the key to the composition-oriented development pattern (WIP): https://github.com/allmonday/composition-oriented-development-pattern
Install
pip install pydantic-resolve
Concept
It takes 5 steps to convert root data into view data.
- define the schema of the root data and its descendants, and load the root data
- forward-resolve all descendant data
- backward-process the data
- tree-shake the fields marked with exclude=True
- get the output
How resolve works, level by level:
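The level-by-level traversal can be sketched in plain Python. This is a simplified toy, not the library's actual implementation: at each node, resolve_* hooks (possibly async) are run to fill fields on the way down, then post_* hooks run on the way back up once descendants are complete.

```python
import asyncio
import inspect

async def resolve(node):
    # Forward pass: run resolve_* hooks (possibly async) and recurse into results.
    for attr in dir(node):
        if attr.startswith('resolve_'):
            value = getattr(node, attr)()
            if inspect.isawaitable(value):
                value = await value
            if isinstance(value, list):
                value = [await resolve(v) for v in value]
            setattr(node, attr[len('resolve_'):], value)
    # Backward pass: run post_* hooks now that descendants are filled.
    for attr in dir(node):
        if attr.startswith('post_'):
            setattr(node, attr[len('post_'):], getattr(node, attr)())
    return node

class Comment:
    def __init__(self, content):
        self.content = content

class Blog:
    comments: list = []
    comment_count: int = 0
    async def resolve_comments(self):
        return [Comment('its interesting'), Comment('why? how?')]
    def post_comment_count(self):
        return len(self.comments)

blog = asyncio.run(resolve(Blog()))
print(blog.comment_count)  # 2
```

The real Resolver does much more (pydantic validation, loaders, context, exclusion), but the forward-resolve / backward-process ordering is the core idea.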
Quick start
Basic usage: resolve your fields. (An N+1 query will happen in the batch scenario.)
Let's build a blog site with blogs and comments.
import asyncio
from pydantic import BaseModel
from pydantic_resolve import Resolver
comments_table = [
dict(id=1, blog_id=1, content='its interesting'),
dict(id=2, blog_id=1, content='i dont understand'),
dict(id=3, blog_id=2, content='why? how?'),
dict(id=4, blog_id=2, content='wow!')]
async def query_comments(blog_id: int):
print(f'run query - {blog_id}')
return [c for c in comments_table if c['blog_id'] == blog_id]
async def get_blogs():
return [
dict(id=1, title='what is pydantic-resolve'),
dict(id=2, title='what is composition oriented development pattern'),
]
class Comment(BaseModel):
id: int
content: str
class Blog(BaseModel):
id: int
title: str
comments: list[Comment] = []
async def resolve_comments(self):
return await query_comments(self.id)
comment_count: int = 0
def post_comment_count(self):
return len(self.comments)
class MyBlogSite(BaseModel):
name: str
blogs: list[Blog] = []
async def resolve_blogs(self):
return await get_blogs()
comment_count: int = 0
def post_comment_count(self):
return sum([b.comment_count for b in self.blogs])
async def single():
blog = Blog(id=1, title='what is pydantic-resolve')
blog = await Resolver().resolve(blog)
print(blog)
async def batch():
my_blog_site = MyBlogSite(name="tangkikodo's blog")
my_blog_site = await Resolver().resolve(my_blog_site)
print(my_blog_site.json(indent=4))
async def main():
await single()
# query_comments():
# >> run query - 1
await batch()
# query_comments(): Oops! N+1 happens
# >> run query - 1
# >> run query - 2
asyncio.run(main())
output of my_blog_site:
{
"name": "tangkikodo's blog",
"blogs": [
{
"id": 1,
"title": "what is pydantic-resolve",
"comments": [
{
"id": 1,
"content": "its interesting"
},
{
"id": 2,
"content": "i dont understand"
}
]
},
{
"id": 2,
"title": "what is composition oriented development pattarn",
"comments": [
{
"id": 3,
"content": "why? how?"
},
{
"id": 4,
"content": "wow!"
}
]
}
]
}
Optimize N+1 with dataloader
- change query_comments to blog_to_comments_loader (a loader function)
- use the loader in resolve_comments
from pydantic_resolve import LoaderDepend, build_list
async def blog_to_comments_loader(blog_ids: list[int]):
print(blog_ids)
return build_list(comments_table, blog_ids, lambda c: c['blog_id'])
class Blog(BaseModel):
id: int
title: str
comments: list[Comment] = []
def resolve_comments(self, loader=LoaderDepend(blog_to_comments_loader)):
return loader.load(self.id)
comment_count: int = 0
def post_comment_count(self):
return len(self.comments)
# run main again
async def main():
await batch()
# blog_to_comments_loader():
# >> [1, 2]
# N+1 fixed
asyncio.run(main())
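The build_list helper used above groups the queried rows back onto their parent keys, which is exactly the shape a dataloader must return: one list per key, in key order. Conceptually it behaves roughly like the following sketch (a simplified stand-in, not the library's actual implementation):

```python
from collections import defaultdict
from typing import Callable, TypeVar

T = TypeVar('T')
K = TypeVar('K')

def build_list(items: list[T], keys: list[K], get_key: Callable[[T], K]) -> list[list[T]]:
    # Group items by key, then emit one list per requested key,
    # preserving the order of `keys` (empty list when nothing matched).
    grouped: dict[K, list[T]] = defaultdict(list)
    for item in items:
        grouped[get_key(item)].append(item)
    return [grouped[k] for k in keys]

comments_table = [
    dict(id=1, blog_id=1, content='its interesting'),
    dict(id=3, blog_id=2, content='why? how?'),
]
print(build_list(comments_table, [1, 2], lambda c: c['blog_id']))
# [[{'id': 1, 'blog_id': 1, ...}], [{'id': 3, 'blog_id': 2, ...}]]
```

Keeping the result aligned with the input keys is what lets the dataloader hand each blog exactly its own comments from a single batched query.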
Simple demo:
cd examples
python -m readme_demo.0_basic
python -m readme_demo.1_filter
python -m readme_demo.2_post_methods
python -m readme_demo.3_context
python -m readme_demo.4_loader_instance
python -m readme_demo.5_subset
python -m readme_demo.6_mapper
python -m readme_demo.7_single
Advanced demo:
https://github.com/allmonday/composition-oriented-development-pattern
API Reference
https://allmonday.github.io/pydantic-resolve/reference_api/
Run FastAPI example
poetry shell
cd examples
uvicorn fastapi_demo.main:app
# http://localhost:8000/docs#/default/get_tasks_tasks_get
Unittest
poetry run python -m unittest # or
poetry run pytest # or
poetry run tox
Coverage
poetry run coverage run -m pytest
poetry run coverage report -m
File details
Details for the file pydantic_resolve-1.10.0.tar.gz.
File metadata
- Download URL: pydantic_resolve-1.10.0.tar.gz
- Upload date:
- Size: 14.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.10.4 Darwin/22.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a56205f4d0132ea137a31c1c4fdd8577f23f3d51de9ba84ec04d8bd10f7e156f |
| MD5 | 857b72b350b018cb4305688553270f07 |
| BLAKE2b-256 | 68c84d6374a50c65477913db13ed4775e32a1b4591c075ba9f493ad178c7b2c9 |
File details
Details for the file pydantic_resolve-1.10.0-py3-none-any.whl.
File metadata
- Download URL: pydantic_resolve-1.10.0-py3-none-any.whl
- Upload date:
- Size: 14.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.10.4 Darwin/22.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 702be57d4a858f7fb34a83605cb2c43063cdb3bc041db66fe6bc266408c86649 |
| MD5 | 022390e754cd5b8687e6b70f7f50ff7b |
| BLAKE2b-256 | 82d67640ab392300e33cbdef30b2d20e8cca47a3de86b1eb156036494b649017 |