
easyapi-django

A REST API generator for Django. Define a class, point it at a model, get full async CRUD endpoints with authentication, filtering, pagination, caching, rate limiting, multi-tenancy, Pydantic validation and OpenAPI docs out of the box.

Why

Most Django REST resources end up as hundreds of lines of plumbing: list/detail views, write handlers with field whitelists, session auth, rate limiting, Redis caching with invalidation, multi-tenant DB switching. easyapi packages all of that as a class with attributes — usually under 30 lines per resource.

Install

pip install easyapi-django

Optional Pydantic schemas for input validation and response shaping:

pip install 'easyapi-django[schemas]'

Required environment

REDIS_SERVER=localhost
REDIS_DB=0
REDIS_PREFIX=myapp           # optional; namespaces all Redis keys

Redis is used for sessions, cache, rate limiting and abuse blocking.

Add middleware in Django settings

MIDDLEWARE = [
    ...
    'easyapi.SecurityMiddleware',    # pattern/UA/4xx-flood instant block
    'easyapi.AuthMiddleware',        # session-based auth from Redis
    'easyapi.ExceptionMiddleware',
]

EASYAPI = {
    'TRUSTED_PROXIES': ['10.0.0.0/8'],   # only trust X-Real-IP from these
    # 'COOKIE_ID': 'sessionid', 'ENFORCE_TOKEN': True, ...
}
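The TRUSTED_PROXIES setting drives the client-IP resolution described later: X-Real-IP is honoured only when the direct peer is a trusted proxy. A minimal standalone sketch of that logic, assuming nothing about easyapi's actual get_client_ip internals (function and parameter names here are illustrative):

```python
import ipaddress

def client_ip(remote_addr, x_real_ip, trusted_proxies=('10.0.0.0/8',)):
    """Return x_real_ip only when the direct peer is a trusted proxy.

    Otherwise fall back to the socket peer address, so an untrusted
    client cannot spoof its IP by sending an X-Real-IP header."""
    peer = ipaddress.ip_address(remote_addr)
    trusted = any(peer in ipaddress.ip_network(net) for net in trusted_proxies)
    if trusted and x_real_ip:
        return x_real_ip
    return remote_addr

# Header honoured only behind the trusted proxy range:
print(client_ip('10.1.2.3', '203.0.113.9'))      # 203.0.113.9
print(client_ip('198.51.100.7', '203.0.113.9'))  # 198.51.100.7
```

The key property: the spoofable header never wins on its own; the unspoofable socket peer address decides whether to trust it.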

Create a resource

from easyapi import BaseResource
from your_models import YourModel

class YourResource(BaseResource):
    model = YourModel

Wire up routes

from easyapi import get_routes
from your_resources import YourResource

endpoints = {
    r'yourendpoint(.*)$': YourResource,
}
urlpatterns = [...] + get_routes(endpoints)

GET, POST, PATCH, DELETE are ready. You also get:

  • GET /openapi.json — OpenAPI 3.0.3 spec
  • GET /docs — interactive Scalar UI

Configuration cheat sheet

class YourResource(BaseResource):
    model = YourModel

    authenticated = True               # default; set False to allow anonymous
    allowed_methods = ['get', 'post', 'patch', 'delete']

    # Listing
    list_fields = ['id', 'name']
    list_related_fields = {'account': ['name', 'plan']}
    list_exclude_fields = []
    normalize_list = False             # return {id: {...}} instead of [{...}]

    # Filtering / searching / ordering
    filter_fields = ['name', 'active']
    search_fields = ['name', 'email']
    search_operator = 'icontains'
    order_fields = ['id', 'name']

    # Detail / write
    edit_fields = ['id', 'name']
    update_fields = ['name']
    create_fields = ['name']
    normalize_obj = False              # return {id: {...}} from PATCH/POST

    # Ownership (DELETE/PATCH scoped to rows owned by user)
    owner_field = 'owner_id'

    # Pagination
    limit = 25                         # 0 returns everything
    order_by = 'id'

    # Cache
    cache = True
    cache_ttl = 600                    # explicit TTL in seconds; when omitted, the default is 120s (or EASYAPI['CACHE_TTL'])

Querystrings

  • ?count=true: return only {count: N}
  • ?search=value: search across search_fields with OR
  • ?field=value / ?field__gte=...: filter on whitelisted fields
  • ?fields=a,b: restrict returned fields (filtered by list_fields)
  • ?filter=<json>: advanced filter expression on whitelisted fields
  • ?<stored_filter_param>=N: apply a server-side stored filter expression (see below)
  • ?page=N&limit=M&order_by=field: pagination + ordering
  • ?normalize=true: return the list as {id: {...}} instead of an array
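These parameters compose freely on one request. A quick standard-library sketch of building a filtered, paginated URL (the endpoint path is illustrative):

```python
from urllib.parse import urlencode

params = {
    'active': 'true',           # plain whitelisted filter
    'name__icontains': 'acme',  # double-underscore lookup on a whitelisted field
    'search': 'bob',            # OR across search_fields
    'page': 2,
    'limit': 25,
    'order_by': '-id',
    'fields': 'id,name',        # restrict returned columns
}
url = '/api/clients?' + urlencode(params)
print(url)
```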

Saved filter expressions

Layer-2 expressions can be persisted server-side and reapplied by id — saved views, marketing audiences, dashboard presets. The framework owns the URL plumbing; the project owns the storage, lookup and policy.

Two pieces on the resource:

class ClientResource(BaseResource):
    model = Client
    filter_fields = ['active', 'name']

    stored_filter_param = 'segment_id'   # any name — view_id, audience_id, …

    async def resolve_stored_filter(self, value):
        seg = await Segment.objects.filter(
            id=value, account=self.account_id   # tenant scoping etc.
        ).afirst()
        return seg.conditions if seg else None

When a request carries ?segment_id=42, easyapi calls the hook, trusts its return value, and applies it through the same Layer-2 pipeline. Returning None raises 404. The hook can raise HTTPException(403, ...) itself for "exists but you can't access it."

?segment_id= and ?filter= compose with AND — saved view plus ad-hoc narrowing. The URL JSON still validates against filter_fields; the stored conditions skip that check because the hook owns them.

For projects that let end users author stored expressions, validate at write time with the public helper:

from easyapi import validate_conditions

class SegmentResource(BaseResource):
    model = Segment
    create_fields = ['name', 'conditions', 'context_id']

    async def hydrate(self, body):
        conditions = body.get('conditions')
        if conditions:
            allowed = ALLOWED_FIELDS_BY_CONTEXT[body['context_id']]
            validate_conditions(conditions, allowed)
        return body

Server-side admin-only expressions can skip the write-time check; only user-authored ones need it.

For projects with one Segment row type targeting many list resources, factor out a mixin so each resource opts in with two lines and the filter_fields whitelist auto-extends from a project registry:

# modules/segment/mixin.py
from django.db.models import Q
from .constants import INCLUDE_FIELDS
from .models import Segment, CONTEXT

class SegmentMixin:
    stored_filter_param = 'segment_id'
    segment_context_id = None

    def __init__(self):
        super().__init__()
        if self.segment_context_id is None:
            return
        label = CONTEXT.LABEL[self.segment_context_id]
        bag = INCLUDE_FIELDS.get(label, {})
        extra  = list(bag.get('segment_fields') or [])
        extra += [m.split('__')[0] for m in bag.get('related_models') or []]
        self.filter_fields = list(
            dict.fromkeys((self.filter_fields or []) + extra)
        )

    async def resolve_stored_filter(self, value, **kwargs):
        is_master = bool(self.user and (
            self.user.get('is_admin') or self.user.get('is_owner')
        ))
        qs = Segment.objects.filter(
            id=value, context_id=self.segment_context_id,
        )
        if not is_master:
            qs = qs.filter(
                Q(public=True) | Q(created_by_id=self.user['id'])
            )
        seg = await qs.afirst()
        return seg.conditions if seg else None

class AgentResource(SegmentMixin, BaseResource):
    segment_context_id = CONTEXT.AGENT
    model = Agent
    filter_fields = ['agency_id']    # explicit URL shorthands
    # The mixin unions in the segment_fields whitelist + related prefixes,
    # so ?filter=<json> and segment authoring share one source of truth.

The minimum storage model is a JSON column:

class Segment(models.Model):
    name = models.CharField(max_length=120)
    conditions = models.JSONField()

Pydantic schemas (optional)

Set any of create_schema, update_schema, list_schema and easyapi validates inputs and shapes outputs through the schema. Resources without schemas keep the legacy field-list behaviour.

from pydantic import BaseModel, EmailStr, Field

class UserCreate(BaseModel):
    email: EmailStr
    password: str = Field(min_length=8)

class UserOut(BaseModel):
    id: int
    email: EmailStr
    name: str

class UserResource(BaseResource):
    model = User
    create_schema = UserCreate         # validates POST body, 422 on failure
    list_schema = UserOut              # shapes GET responses

Validation errors are returned as HTTPException(422, [...]):

{
  "success": false,
  "status": 422,
  "detail": [
    {"field": "email", "message": "value is not a valid email address"}
  ]
}
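For reference, that envelope can be produced from a list of (field, message) pairs; this is a standalone sketch of the shaping, not easyapi's internal code:

```python
def validation_error_body(errors, status=422):
    """Shape field-level errors into the {success, status, detail} envelope."""
    return {
        'success': False,
        'status': status,
        'detail': [{'field': f, 'message': m} for f, m in errors],
    }

body = validation_error_body([('email', 'value is not a valid email address')])
print(body)
```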

OpenAPI

get_routes() always registers two routes:

  • /openapi.json — generated from your resources. Pydantic schemas are emitted as JSON Schema; resources without schemas fall back to Django model introspection.
  • /docs — Scalar API reference (two-column layout, search, dark mode, try-it-out). The Scalar AI assistant is disabled in this build.

Custom routes can be enriched with the @openapi(...) decorator:

from easyapi import openapi

class UserResource(BaseResource):
    routes = [{'path': r'/me$', 'func': 'me', 'allowed_methods': ['get']}]

    @openapi(summary='Current user', response=UserOut)
    async def me(self, request, match=None):
        return {'id': self.user['id'], 'email': self.user['email']}

Custom routes

class YourResource(BaseResource):
    model = YourModel
    routes = [
        {'path': r'(\d+)/accept$', 'func': 'accept', 'allowed_methods': ['patch']},
        {'path': r'me$',           'func': 'get_me', 'cache': True},
    ]

    async def accept(self, request, match=None, body=None):
        ...

    async def get_me(self, request, match=None):
        ...

Cache

Per-resource opt-in Redis cache. Namespaced invalidation — editing row 5 does not drop the cache for row 7.

  • GET /spaces: cached under the list:<model> namespace
  • GET /spaces/5: cached under detail:<model>:5
  • PATCH /spaces/5: invalidates list:<model> + detail:<model>:5
  • DELETE /spaces/5: same as PATCH
  • POST /spaces: invalidates list:<model> only

Cache key includes a hash of the querystring, so different filters do not collide.
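A minimal sketch of how such a key can be derived (illustrative only; easyapi's real key layout may differ): the model namespace plus a short hash of the canonicalized querystring, with the tenant folded in when account_id is set.

```python
import hashlib
from urllib.parse import urlencode

def list_cache_key(model, params, account_id=None):
    # Sort params so ?a=1&b=2 and ?b=2&a=1 share one key.
    qs = urlencode(sorted(params.items()))
    digest = hashlib.sha256(qs.encode()).hexdigest()[:16]
    key = f'list:{model}:{digest}'
    if account_id is not None:   # mirrors the auto-scope rule: 0 is a real value
        key = f'acct:{account_id}:{key}'
    return key

k1 = list_cache_key('Space', {'page': 1, 'limit': 25})
k2 = list_cache_key('Space', {'limit': 25, 'page': 1})  # same key as k1
k3 = list_cache_key('Space', {'page': 2, 'limit': 25})  # different key
```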

Tenant isolation is automatic. Multi-tenant deployments share Redis, so _build_cache_key folds self.account_id into the key whenever it is set — different tenants hitting the same path get different keys. No configuration needed; it just works for any project that uses aset_tenant. The auto-fold is keyed by account_id is not None, so an explicit account_id = 0 still produces a per-tenant key (real value, not absence). Disable globally via EASYAPI = {'AUTO_SCOPE_CACHE_BY_ACCOUNT': False} if you have a single-tenant deployment and want the legacy key shape.

If you override _build_cache_key in a project, call self._account_cache_segment() and append the result so the override inherits the tenant isolation.

TTL settings. Two project-level knobs in the EASYAPI bag:

  • CACHE_TTL — default 120s; overrides the framework default for resources that don't declare an explicit cache_ttl.
  • CACHE_TTL_ENABLE — default True; flip to False for a global kill switch (every cache=True resource becomes cache=False at runtime, no Redis read or write).
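How the two knobs interact with a resource's own cache_ttl can be sketched as follows (function and parameter names are illustrative, not easyapi internals):

```python
FRAMEWORK_DEFAULT_TTL = 120  # seconds

def effective_cache(resource_cache, resource_ttl, easyapi):
    """Resolve (cache_enabled, ttl) for one resource.

    resource_ttl is None when the resource does not declare cache_ttl."""
    if not easyapi.get('CACHE_TTL_ENABLE', True):
        return False, 0                   # global kill switch
    if not resource_cache:
        return False, 0
    if resource_ttl is not None:
        return True, resource_ttl         # explicit per-resource value wins
    return True, easyapi.get('CACHE_TTL', FRAMEWORK_DEFAULT_TTL)

print(effective_cache(True, 600, {'CACHE_TTL': 300}))            # (True, 600)
print(effective_cache(True, None, {'CACHE_TTL': 300}))           # (True, 300)
print(effective_cache(True, None, {'CACHE_TTL_ENABLE': False}))  # (False, 0)
```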

Every easyapi setting lives inside the EASYAPI = {...} dict (DRF/Celery-style namespace):

# settings.py
EASYAPI = {
    'CACHE_TTL': 300,
    'CACHE_TTL_ENABLE': True,
    'ENFORCE_TOKEN': True,
    'COOKIE_ID': 'sessionid',
    'RATE_LIMITS': {...},
}

Inside the bag the historical EASYAPI_ prefix is redundant — EASYAPI_API_KEY_RESOLVER and API_KEY_RESOLVER resolve to the same setting.

CACHE_TTL only sets the default — resources that declare cache_ttl = N keep that explicit value. CACHE_TTL_ENABLE = False is a kill switch that forces self.cache = False for every request, useful for incident response without code edits.

Per-scope caching. When the response varies on a user/account dimension inside the same tenant — role, space, plan, country — declare it with cache_scope_fields so users sharing the same scope share the cache and different scopes get isolated keys:

class TaskResource(BaseResource):
    model = Task
    cache = True
    # Strings are shorthand for `self.user[field]`. Tuples select the
    # source explicitly: ('user', ...) or ('account', ...).
    # Don't add ('account', 'id') — tenant isolation is already automatic.
    cache_scope_fields = ['space_id', ('account', 'plan_id')]

When a request has authenticated context but a configured scope field is missing from the session payload, the framework logs a WARNING (logger easyapi.base) and disables cache for that request — the response is neither read from nor written to Redis. Sharing a key across users when the scope can't be resolved would be a silent leak across whatever dimension the operator was trying to protect. Anonymous requests skip the fold cleanly (no warning, no leak). None, 0 and '' count as present (a real value).
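The presence rule ("in the payload, whatever the value") can be sketched as below; this simplified version folds only user-sourced string fields, not the ('account', ...) tuple form:

```python
import logging

log = logging.getLogger('example.scope')

def scope_segment(user, scope_fields):
    """Return a cache-key suffix for the configured scope fields,
    or None to disable cache for this request.

    Presence means the key exists in the session payload: None, 0 and ''
    are real values; a missing key disables caching rather than risk a
    cross-scope leak."""
    if user is None:                 # anonymous: skip the fold cleanly
        return ''
    parts = []
    for field in scope_fields:
        if field not in user:        # configured but unresolvable
            log.warning('scope field %r missing; cache disabled', field)
            return None
        parts.append(f'{field}={user[field]}')
    return ':' + ':'.join(parts) if parts else ''

print(scope_segment({'space_id': 7}, ['space_id']))  # ':space_id=7'
print(scope_segment({'space_id': 0}, ['space_id']))  # ':space_id=0' (0 is present)
print(scope_segment({}, ['space_id']))               # None (cache disabled)
print(scope_segment(None, ['space_id']))             # '' (anonymous, no fold)
```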

Use before_cache for the rare case that needs context outside self.user / self.account:

async def before_cache(self, request):
    """Escape hatch for scope sources not covered by cache_scope_fields."""
    feature = await get_feature_flag(self.user)
    self.cache_key += f':flag={feature}'

Hit/miss stats:

from easyapi import get_cache_stats

stats = await get_cache_stats()
# {'hits': ..., 'misses': ..., 'total': ..., 'ratio': ..., 'by_model': {...}}

Authentication

Two mechanisms, both Redis-backed:

  • Session cookie: Cookie: <COOKIE_ID>=<key>, validated against a strict regex before any Redis lookup.
  • API key: X-Api-Key: <token>. Format is your project's choice; easyapi resolves the token to a session via your UserApi model. See the docs for the default resolution flow and how to issue keys.

When both are present, the API key wins. authenticated = False opts a resource out of authentication while keeping rate limit and security middleware in effect.

Security defaults

  • Session cookie validated against ^[a-zA-Z0-9_\-:]{5,100}$.
  • ?fields= is filtered against list_fields to prevent attribute leakage.
  • ?filter= is validated against filter_fields. Stored filter expressions are validated by the project (typically at write time via validate_conditions).
  • owner_field scopes PATCH/DELETE to rows owned by the authenticated user.
  • Request rate limiting runs inside BaseResource.dispatch; edge scanner blocking runs in SecurityMiddleware before the view.
  • Both layers converge on the same blocked-IP store in Redis (rate_limit:blocked:<ip>), with automatic 24h blocking.
  • SecurityMiddleware instant-blocks scanner paths/UAs and 4xx floods.
  • get_client_ip honours X-Real-IP only from TRUSTED_PROXIES.
  • Unhandled handler exceptions return a sanitized JSON 500 in production (no stack trace in the response). Full trace still goes to logger.exception.
  • Optional anti-replay token via ENFORCE_TOKEN=True (X-Token header). Server validates HMAC, timestamp drift and a Redis-tracked nonce (SET NX PX, TTL = 2× drift). Replayed nonces inside the window are rejected. Helpers: make_token (mint), validate_token (sync HMAC check), validate_token_async (HMAC + nonce reservation).
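The anti-replay scheme above (HMAC + timestamp drift + single-use nonce) can be sketched end to end. A dict stands in for the Redis SET NX PX nonce store, and the wire format here is illustrative, not easyapi's actual token layout:

```python
import hashlib
import hmac
import secrets
import time

SECRET = b'demo-secret'
DRIFT = 30  # seconds of allowed clock skew

def make_token(now=None):
    ts = str(int(now if now is not None else time.time()))
    nonce = secrets.token_hex(8)
    sig = hmac.new(SECRET, f'{ts}.{nonce}'.encode(), hashlib.sha256).hexdigest()
    return f'{ts}.{nonce}.{sig}'

_seen = {}  # nonce -> expiry; Redis would use SET NX PX with TTL = 2 * DRIFT

def validate_token(token, now=None):
    now = now if now is not None else time.time()
    try:
        ts, nonce, sig = token.split('.')
        ts_i = int(ts)
    except ValueError:
        return False
    good = hmac.new(SECRET, f'{ts}.{nonce}'.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False                  # forged or corrupted signature
    if abs(now - ts_i) > DRIFT:
        return False                  # timestamp drift exceeded
    if nonce in _seen and _seen[nonce] > now:
        return False                  # replay inside the window
    _seen[nonce] = now + 2 * DRIFT    # reserve the nonce
    return True

t = make_token()
print(validate_token(t))   # True  (first use)
print(validate_token(t))   # False (replay rejected)
```

The nonce TTL of twice the drift window matters: once the timestamp check alone would reject a token, its nonce can safely expire from the store.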

Tenancy

Multi-tenant database routing through easyapi.DBRouter and aset_tenant(account_id). Configure in your settings:

DEFAULT_DATABASE = DATABASES['default']
TENANT_ACCOUNT_MODEL = 'core.Account'
TENANT_USER_MODEL = 'core.User'
TENANT_USER_API_MODEL = 'core.UserApi'
TENANT_DB_PREFIX = 'tenant'
HASH_LENGTH = 32
DATABASE_ROUTERS = ['easyapi.DBRouter']

set_default(account_id) and unset_default(account_id) are script-only — they mutate the global default connection and are unsafe inside ASGI request handling. They are not re-exported from the top-level easyapi package; import them from easyapi.tenant.tenant when you really need them in a management command or one-off script. For per-request tenant switching, use aset_tenant.

MCP server (agent-callable tools)

Optional. Expose every resource as a typed tool that LLM agents can call — same auth, same rate limit, same Pydantic schemas, same dispatch.

pip install 'easyapi-django[mcp]'

One-liner — adds POST /api/mcp:

urlpatterns = [
    path('api/', include(get_routes(endpoints, mcp=True))),
]

Or subclass for custom behaviour:

from easyapi import MCPResource

class MyMCP(MCPResource):
    endpoints = my_endpoints
    summary = 'agent-tools'

    async def post_process(self, response):
        await audit_log(self.user, self.body, response)
        return response

urlpatterns = [path('mcp/', MyMCP.as_view())]

For desktop agents (Claude Desktop, Cursor) over stdio:

EASYAPI_MCP_API_KEY="<key>" python manage.py mcp_serve myapp.urls.endpoints

Tool calls run through the same BaseResource.dispatch as REST — no parallel handlers, no schema duplication. The bridge also wraps the view in your project's settings.MIDDLEWARE, so SecurityMiddleware, AuthMiddleware, ExceptionMiddleware and any custom async-capable middleware run exactly as on a REST hit. Sync-only middleware is skipped — mark it async_capable = True or enforce the equivalent invariant inside dispatch if it is critical. Hide a resource from MCP with mcp_expose = False; restrict to read-only with mcp_expose = ['list', 'get']. See the docs for details.

Metrics endpoint

get_routes() automatically registers POST /metrics for aggregations and group-bys. Useful for charts, dashboards and reports — one endpoint covers what would otherwise be dozens of bespoke routes.

POST /metrics
{
  "model": "myapp.Order",
  "calc": {"formula": ["sum"], "field": "total"},
  "group_by": {"date": {"field": "created_at", "group_by": "month"}},
  "filter_by": {"period": "this_year"}
}

Supports count, sum, avg, min, max, variance and std dev, with optional grouping by field and by date period (year/quarter/month/day/weekday/hour).
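The request above (sum of total, grouped by the month of created_at) computes the same thing as this pure-Python sketch over in-memory rows; the data is made up for illustration:

```python
from collections import defaultdict
from datetime import date

orders = [
    {'total': 100, 'created_at': date(2024, 1, 5)},
    {'total': 250, 'created_at': date(2024, 1, 20)},
    {'total': 40,  'created_at': date(2024, 2, 2)},
]

def sum_by_month(rows, field='total', date_field='created_at'):
    out = defaultdict(float)
    for row in rows:
        bucket = row[date_field].strftime('%Y-%m')  # month grouping
        out[bucket] += row[field]
    return dict(out)

print(sum_by_month(orders))  # {'2024-01': 350.0, '2024-02': 40.0}
```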

WebSocket consumer

from easyapi import BaseWSConsumer

class MyConsumer(BaseWSConsumer):
    # Defaults — override per consumer when needed
    allow_unauthenticated = False        # UUID-based connections opt-in
    track_online = False                 # Redis-backed presence tracking

    async def on_connect(self, user):
        await self.send_state(['ready'], True)

    async def allowed_channels(self, user):
        # Return an iterable of channel suffix names this user may
        # subscribe to. Channel names are also gated server-side by
        # ^[A-Za-z0-9_\-.]{1,64}$. Return None to allow any well-formed
        # name (legacy default), an empty list to block extra subs.
        return ['inbox', 'alerts']

Requires Django Channels. allow_unauthenticated defaults to False since 0.30 — set it to True explicitly on consumers that need the UUID-based signup flow.
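The subscription gate described in allowed_channels combines the server-side format check with the per-user whitelist; a standalone sketch of that decision (not the consumer's actual code):

```python
import re

CHANNEL_RE = re.compile(r'^[A-Za-z0-9_\-.]{1,64}$')

def may_subscribe(name, allowed):
    """allowed=None permits any well-formed name (legacy default);
    an empty list blocks all extra subscriptions."""
    if not CHANNEL_RE.fullmatch(name):
        return False                 # malformed or too long: always rejected
    if allowed is None:
        return True
    return name in allowed

print(may_subscribe('inbox', ['inbox', 'alerts']))  # True
print(may_subscribe('admin', ['inbox', 'alerts']))  # False
print(may_subscribe('x' * 65, None))                # False (over 64 chars)
```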

Hooks

Override on your resource:

  • pre_process: after auth, before body parsing
  • before_cache: before the cache lookup (GET)
  • hydrate(body): before write (POST/PATCH)
  • dehydrate(row): per row before serialization
  • alter_list: mutate the list result
  • alter_detail: mutate the detail result
  • post_process: last chance before save_cache + the response
  • add_m2m(result): custom M2M handling

BaseTagsResource and BaseCustomResource are ready-made subclasses for projects that use tags and user-defined custom attributes.

Tests

pip install -r requirements-dev.txt
pytest

301 tests covering util, redis, cache (incl. per-account auto-fold and per-scope keys), filters, filter validation, init, auth tokens (incl. nonce replay), schemas, openapi, helpers, serializer (incl. per-call timezone subclass), client_ip, allowed-domain checks, SecurityMiddleware, dispatch error handling, tenant connection and registry, MCP middleware chain, route gating, WS subscription hardening, public exports, and WebSocket optional import.

Author

Stamatios Stamou Jr — github.com/ssjunior

Version

0.37