
FastAPI AST Inference


Automatic response model inference for FastAPI using AST analysis.

🇷🇺 Russian version

This library analyzes your FastAPI endpoint functions at startup time and automatically generates Pydantic models for OpenAPI documentation based on the dictionary structures you return.

Why?

Before (Standard Pydantic Approach)

from fastapi import FastAPI
from pydantic import BaseModel
from typing import List

# You must define a separate model class for every response structure
class Item(BaseModel):
    item_id: int
    name: str
    price: float
    in_stock: bool

class CustomerInfo(BaseModel):
    name: str
    vip_status: bool

class OrderResponse(BaseModel):
    order_id: str
    status: str
    total_amount: float
    items: List[Item]
    customer_info: CustomerInfo

app = FastAPI()

@app.get("/orders/{order_id}", response_model=OrderResponse)
async def get_order(order_id: str):
    return {
        "order_id": order_id,
        "status": "processing",
        "total_amount": 150.50,
        "items": [{"item_id": 1, "name": "Laptop", "price": 45.0, "in_stock": True}],
        "customer_info": {"name": "John", "vip_status": False}
    }

After (With AST Inference)

from fastapi import FastAPI
from fastapi_ast_inference import infer_response

app = FastAPI()

# No model definition needed! Types are inferred from the return statement.
@app.get("/orders/{order_id}")
@infer_response
async def get_order(order_id: str):
    return {
        "order_id": order_id,
        "status": "processing",
        "total_amount": 150.50,
        "items": [{"item_id": 1, "name": "Laptop", "price": 45.0, "in_stock": True}],
        "customer_info": {"name": "John", "vip_status": False}
    }

Result: Full OpenAPI schema with typed fields, zero boilerplate!
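
To verify what was generated, dump the relevant slice of the app's OpenAPI document (a verification sketch using FastAPI's standard app.openapi() method; the exact schema contents depend on what the library inferred):

import json

# Print the documented 200-response schema for the endpoint above
spec = app.openapi()
print(json.dumps(spec["paths"]["/orders/{order_id}"]["get"]["responses"]["200"], indent=2))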

Installation

pip install fastapi-ast-inference

Usage

Option 1: Decorator (Recommended)

Use the @infer_response decorator on specific endpoints — no additional configuration needed:

from fastapi import FastAPI
from fastapi_ast_inference import infer_response

app = FastAPI()

@app.get("/")
@infer_response
async def root():
    return {"message": "Hello, World!", "count": 42}

@app.get("/users/{user_id}")
@infer_response
async def get_user(user_id: int):
    return {"id": user_id, "name": "John", "active": True}

Option 2: App-Wide Configuration

⚠️ Note: This affects ALL endpoints in your application.

Apply AST inference to all routes automatically:

from fastapi import FastAPI
from fastapi_ast_inference import InferredAPIRoute

app = FastAPI()
app.router.route_class = InferredAPIRoute  # Affects entire app

@app.get("/")
async def root():
    return {"message": "Hello, World!", "count": 42}

Option 3: Router-Level Configuration

⚠️ Note: This affects all endpoints in the router.

Apply to specific routers:

from fastapi import APIRouter
from fastapi_ast_inference import InferredAPIRoute, create_inferred_router

# Using the helper function
router = create_inferred_router(prefix="/api/v1", tags=["api"])

# Or manually
router = APIRouter(route_class=InferredAPIRoute)  # Affects entire router

@router.get("/items")
async def get_items():
    return {"items": ["a", "b", "c"], "total": 3}

Option 4: Programmatic API

Use the inference function directly for custom use cases:

from fastapi_ast_inference import infer_response_model_from_ast

def my_endpoint():
    return {"name": "test", "value": 123}

model = infer_response_model_from_ast(my_endpoint)
# model is now a Pydantic BaseModel with 'name: str' and 'value: int' fields
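
If you are on Pydantic v2, the generated class can be inspected like any other model (a usage sketch continuing the snippet above; assumes inference succeeded):

assert model is not None
print(model.model_fields)         # field names mapped to FieldInfo(annotation=str / int, ...)
print(model.model_json_schema())  # the JSON Schema FastAPI would publish in the docs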

Supported Patterns

✅ Direct Dictionary Literals

@app.get("/")
async def endpoint():
    return {"key": "value", "number": 42, "flag": True}

✅ Variable Returns

@app.get("/")
async def endpoint():
    data = {"status": "ok", "items": [1, 2, 3]}
    return data

✅ Annotated Variables

@app.get("/")
async def endpoint():
    result: Dict[str, Any] = {"count": 100, "active": True}
    return result

✅ Nested Structures

@app.get("/")
async def endpoint():
    return {
        "user": {"name": "John", "age": 30},
        "settings": {"theme": "dark", "notifications": True}
    }

✅ Type Inference from Arguments

@app.get("/items/{item_id}")
async def get_item(item_id: int, name: str):
    return {"id": item_id, "name": name}  # Types inferred from parameters

❌ Not Supported

  • Multiple return statements (different structures in if/else)
  • Dynamic dictionary construction (e.g., dict(key=value))
  • Function call returns (e.g., return some_function())
  • Non-string dictionary keys
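
For example, an endpoint with two structurally different returns is skipped and falls back to standard FastAPI behavior (a hypothetical endpoint matching the get_data log line in the Logging section below):

from fastapi import FastAPI
from fastapi_ast_inference import infer_response

app = FastAPI()

@app.get("/data")
@infer_response
async def get_data(full: bool = False):
    # Two return statements with different shapes: inference is skipped,
    # so the route is documented without a generated response model.
    if full:
        return {"status": "ok", "detail": "everything"}
    return {"status": "ok"}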

How It Works

  1. At Application Startup: When FastAPI registers routes, the library intercepts endpoint functions.

  2. AST Analysis: The source code is parsed into an Abstract Syntax Tree.

  3. Type Inference: Return statements are analyzed to extract the dictionary structure and infer field types (see the sketch after this list):

    • Constants → their Python types ("hello" → str, 42 → int)
    • Lists → List[T] where T is inferred from elements
    • Nested dicts → nested Pydantic models
    • Function arguments → types from annotations
  4. Model Generation: A Pydantic model is dynamically created with the inferred fields.

  5. OpenAPI Integration: The generated model is used for response documentation.
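
A minimal sketch of steps 2-4 for the simplest case, a single dict-literal return of constants, using Pydantic's create_model. This is not the library's actual implementation, which also handles lists, nested dicts, variables, and annotated arguments:

import ast
import inspect
import textwrap
from typing import Optional, Type

from pydantic import BaseModel, create_model

def sketch_infer_model(func) -> Optional[Type[BaseModel]]:
    # Step 2: parse the function's source into an AST
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    returns = [node for node in ast.walk(tree) if isinstance(node, ast.Return)]
    if len(returns) != 1 or not isinstance(returns[0].value, ast.Dict):
        return None  # unsupported pattern: fall back to standard behavior
    fields = {}
    for key, value in zip(returns[0].value.keys, returns[0].value.values):
        if not (isinstance(key, ast.Constant) and isinstance(key.value, str)):
            return None  # non-string keys are unsupported
        if not isinstance(value, ast.Constant):
            return None  # this sketch only infers constant values
        # Step 3: a constant maps to its Python type ("hello" -> str, 42 -> int)
        fields[key.value] = (type(value.value), ...)
    # Step 4: dynamically create a Pydantic model with the inferred fields
    return create_model(f"{func.__name__}_Response", **fields)

def my_endpoint():
    return {"name": "test", "value": 123}

Model = sketch_infer_model(my_endpoint)
print(Model.model_fields)  # 'name' annotated as str, 'value' as int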

Performance

  • Zero runtime overhead: AST analysis happens once at startup, not per request
  • Cached results: Inferred models are stored and reused
  • Graceful fallback: If inference fails, standard FastAPI behavior is preserved
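
The caching idea amounts to computing the model once per function and reusing it on every lookup; an illustration with functools.cache, not the library's internal cache:

import functools

@functools.cache
def cached_model(func):
    # First call per endpoint runs the AST analysis; later calls are dict lookups
    return sketch_infer_model(func)  # sketch from "How It Works" above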

Logging

Enable debug logging to see inference decisions:

import logging

logging.basicConfig(level=logging.DEBUG)  # ensure a handler is configured
logging.getLogger("fastapi_ast_inference").setLevel(logging.DEBUG)

Example output:

DEBUG:fastapi_ast_inference:AST inference skipped for 'get_data': multiple return statements detected (2)
DEBUG:fastapi_ast_inference:AST inference skipped for 'get_external': return value is not a dict literal

API Reference

infer_response_model_from_ast(func) -> Optional[Type[BaseModel]]

Analyze a function and return an inferred Pydantic model, or None if inference fails.

@infer_response

Decorator that pre-computes the inferred model and sets it as the function's return annotation. Works independently without InferredAPIRoute.

InferredAPIRoute

Custom route class that automatically applies AST inference to endpoints.

create_inferred_router(**kwargs) -> APIRouter

Create an APIRouter with InferredAPIRoute as the default route class.

get_inferred_model(func) -> Optional[Type[BaseModel]]

Retrieve the inferred model from a decorated function.
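
Putting the decorator and the lookup together (a usage sketch based on the signatures above):

from fastapi_ast_inference import get_inferred_model, infer_response

@infer_response
async def endpoint():
    return {"ok": True, "count": 1}

model = get_inferred_model(endpoint)  # the model @infer_response pre-computed, or None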

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
