
graphql-sync-dataloaders

Use DataLoaders in your Python GraphQL servers that have to run in a sync context (e.g. Django).

Requirements

  • Python 3.8+
  • graphql-core >=3.2.0

Installation

This package can be installed from PyPI by running:

pip install graphql-sync-dataloaders

Strawberry setup

When creating your Strawberry Schema, pass DeferredExecutionContext as the execution_context_class argument:

# schema.py
import strawberry
from graphql_sync_dataloaders import DeferredExecutionContext

# Query is your root type (defined below)
schema = strawberry.Schema(Query, execution_context_class=DeferredExecutionContext)

Then create your dataloaders using the SyncDataLoader class:

from typing import List, Optional

from graphql_sync_dataloaders import SyncDataLoader

from .app import models  # your Django models

def load_users(keys: List[int]) -> List[Optional[models.User]]:
    qs = models.User.objects.filter(id__in=keys)
    user_map = {user.id: user for user in qs}
    # Return users in the same order as the requested keys; missing keys yield None
    return [user_map.get(key) for key in keys]

user_loader = SyncDataLoader(load_users)

You can then use the loader in your resolvers, and calls to it will automatically be batched to reduce the number of SQL queries:

import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def get_user(self, id: strawberry.ID) -> User:
        # .load() returns a SyncFuture; the DeferredExecutionContext resolves it
        # (batching the underlying SQL query) before the response is returned.
        return user_loader.load(id)

Note: You probably want to set up your loaders in the request context, as sketched below. See https://strawberry.rocks/docs/guides/dataloaders#usage-with-context for more details.
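
For example, with Strawberry's Django integration you can build a fresh loader per request in the view's context. The following is a minimal sketch of that pattern; the CustomGraphQLView class, the .loaders module, and the "user_loader" context key are illustrative names, not part of this library:

# views.py (illustrative sketch using Strawberry's Django view)
from strawberry.django.views import GraphQLView
from graphql_sync_dataloaders import SyncDataLoader

from .loaders import load_users  # hypothetical module holding the batch function above

class CustomGraphQLView(GraphQLView):
    def get_context(self, request, response):
        # A fresh loader per request keeps the per-key cache request-scoped.
        return {"user_loader": SyncDataLoader(load_users)}

# In a resolver the loader is then read from the context, e.g.:
#   def get_user(self, info: strawberry.types.Info, id: strawberry.ID) -> User:
#       return info.context["user_loader"].load(id)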

The following query will only make 1 SQL query:

fragment UserDetails on User {
  username
}

query {
  user1: getUser(id: "1") {
    ...UserDetails
  }
  user2: getUser(id: "2") {
    ...UserDetails
  }
  user3: getUser(id: "3") {
    ...UserDetails
  }
}
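
With the loader above, the three getUser selections are collected into a single batch before the loader function runs. Conceptually (an illustration, not literal library output):

# The three getUser fields are coalesced into one call to the batch function:
#   load_users(["1", "2", "3"])
# and the example implementation then issues a single ORM query:
#   models.User.objects.filter(id__in=["1", "2", "3"])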

Graphene-Django setup

Requires graphene-django >=3.0.0b8

When setting up your GraphQLView, pass DeferredExecutionContext as the execution_context_class argument:

# urls.py
from django.urls import path
from django.views.decorators.csrf import csrf_exempt
from graphene_django.views import GraphQLView
from graphql_sync_dataloaders import DeferredExecutionContext

from .schema import schema
urlpatterns = [
    path(
        "graphql",
        csrf_exempt(
            GraphQLView.as_view(
                schema=schema, 
                execution_context_class=DeferredExecutionContext
            )
        ),
    ),
]

Then create your dataloaders using the SyncDataLoader class:

from typing import List, Optional

from graphql_sync_dataloaders import SyncDataLoader

from .app import models  # your Django models

def load_users(keys: List[int]) -> List[Optional[models.User]]:
    qs = models.User.objects.filter(id__in=keys)
    user_map = {user.id: user for user in qs}
    # Return users in the same order as the requested keys; missing keys yield None
    return [user_map.get(key) for key in keys]

user_loader = SyncDataLoader(load_users)

You can then use the loader in your resolvers, and calls to it will automatically be batched to reduce the number of SQL queries:

import graphene

class Query(graphene.ObjectType):
    get_user = graphene.Field(User, id=graphene.ID())

    def resolve_get_user(root, info, id):
        # .load() returns a SyncFuture; the DeferredExecutionContext resolves it
        # (batching the underlying SQL query) before the response is returned.
        return user_loader.load(id)
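
The schema imported in urls.py above can then be built from this Query type. A minimal sketch (assuming the User ObjectType is defined elsewhere):

# schema.py (minimal sketch; Query is the ObjectType defined above)
import graphene

schema = graphene.Schema(query=Query)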

The following query will only make 1 SQL query:

fragment UserDetails on User {
  username
}

query {
  user1: getUser(id: "1") {
    ...UserDetails
  }
  user2: getUser(id: "2") {
    ...UserDetails
  }
  user3: getUser(id: "3") {
    ...UserDetails
  }
}

How it works

This library implements a custom version of the graphql-core ExecutionContext class that is aware of the SyncFuture objects defined in this library. A SyncFuture represents a value that hasn't been resolved yet (similar to asyncio Futures or JavaScript Promises), and it is what SyncDataLoader returns when you call its .load method.

When the custom ExecutionContext encounters a SyncFuture returned from a resolver, it keeps track of it. After the first pass of the execution it triggers the SyncFuture callbacks until there are none left; at that point the data is fully resolved and can be returned to the caller synchronously. This lets the DataLoader pattern, which batches calls to a loader function, be implemented in a fully synchronous way.
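
As a rough mental model (a simplified conceptual sketch, not the library's actual source), the pattern looks something like this:

# Conceptual sketch of the batching pattern (names are illustrative).
from typing import Any, Callable, Dict, List

class TinySyncFuture:
    """A placeholder for a value that will be filled in later."""
    def __init__(self):
        self.done = False
        self.result = None

    def set_result(self, value):
        self.done = True
        self.result = value

class TinyBatchLoader:
    """Collects keys during a pass and resolves them all in one batch call."""
    def __init__(self, batch_fn: Callable[[List[Any]], List[Any]]):
        self.batch_fn = batch_fn
        self.queue: Dict[Any, TinySyncFuture] = {}

    def load(self, key) -> TinySyncFuture:
        # Return a future immediately; no query is made yet.
        return self.queue.setdefault(key, TinySyncFuture())

    def dispatch(self):
        # Called after the first pass: one batch call resolves every pending future.
        keys = list(self.queue)
        for key, value in zip(keys, self.batch_fn(keys)):
            self.queue.pop(key).set_result(value)

# The custom ExecutionContext plays the role of the driver below: it runs the
# resolvers (collecting futures), then keeps triggering callbacks until no
# unresolved futures remain, and finally returns the fully resolved result.
loader = TinyBatchLoader(lambda keys: [f"user-{k}" for k in keys])
futures = [loader.load(k) for k in ("1", "2", "3")]  # first pass: three resolvers
loader.dispatch()                                    # one batch call
print([f.result for f in futures])                   # ['user-1', 'user-2', 'user-3']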

Credits

@Cito for graphql-core and for implementing the first version of this in https://github.com/graphql-python/graphql-core/pull/155
