Project description

graphql-sync-dataloaders

Use DataLoaders in your Python GraphQL servers that have to run in a sync context (i.e. Django).

Requirements

  • Python 3.8+
  • graphql-core >=3.2.0

Installation

This package can be installed from PyPI by running:

pip install graphql-sync-dataloaders

Strawberry setup

When creating your Strawberry Schema pass DeferredExecutionContext as the execution_context_class argument:

# schema.py
import strawberry
from graphql_sync_dataloaders import DeferredExecutionContext

schema = strawberry.Schema(Query, execution_context_class=DeferredExecutionContext)

Then create your dataloaders using the SyncDataLoader class:

from typing import List, Optional

from graphql_sync_dataloaders import SyncDataLoader

from .app import models  # your Django models

def load_users(keys: List[int]) -> List[Optional[models.User]]:
    # Fetch all requested users in one query, then return them in the
    # same order as the keys (None for any id that doesn't exist).
    qs = models.User.objects.filter(id__in=keys)
    user_map = {user.id: user for user in qs}
    return [user_map.get(key) for key in keys]

user_loader = SyncDataLoader(load_users)

You can then use the loader in your resolvers and it will automatically be batched to reduce the number of SQL queries:

import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def get_user(self, id: strawberry.ID) -> User:
        return user_loader.load(id)

Note: You probably want to set up your loaders in the request context so they are not shared between requests. See https://strawberry.rocks/docs/guides/dataloaders#usage-with-context for more details; a sketch follows below.
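
For example, a fresh loader can be built for every request in a custom view context. The sketch below is illustrative, not part of this library: it assumes Strawberry's Django integration, and the CustomGraphQLView name and "user_loader" context key are invented for the example.

# views.py (illustrative sketch)
from strawberry.django.views import GraphQLView
from graphql_sync_dataloaders import SyncDataLoader

from .loaders import load_users  # the batch function shown above

class CustomGraphQLView(GraphQLView):
    def get_context(self, request, response):
        # Build a new loader per request so cached results are not
        # shared between requests.
        return {"user_loader": SyncDataLoader(load_users)}

Resolvers can then read the loader from info.context["user_loader"] instead of using a module-level instance.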

The following query will only make 1 SQL query:

fragment UserDetails on User {
  username
}

query {
  user1: getUser(id: "1") {
    ...UserDetails
  }
  user2: getUser(id: "2") {
    ...UserDetails
  }
  user3: getUser(id: "3") {
    ...UserDetails
  }
}
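
If you want to verify the batching, a Django test along these lines could work (a sketch only: the test module, fixture data, and the use of Strawberry's schema.execute_sync are assumptions, not part of this library):

# tests.py (illustrative sketch)
from django.test import TestCase

from .app import models
from .schema import schema

class BatchingTest(TestCase):
    def setUp(self):
        for i in (1, 2, 3):
            models.User.objects.create(id=i, username=f"user{i}")

    def test_three_gets_one_sql_query(self):
        query = """
            query {
              user1: getUser(id: "1") { username }
              user2: getUser(id: "2") { username }
              user3: getUser(id: "3") { username }
            }
        """
        # All three .load() calls are collected and resolved with a
        # single SELECT ... WHERE id IN (...) query.
        with self.assertNumQueries(1):
            result = schema.execute_sync(query)
        assert result.errors is None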

Graphene-Django setup

Requires graphene-django >=3.0.0b8

When setting up your GraphQLView pass DeferredExecutionContext as the execution_context_class argument:

# urls.py
from django.urls import path
from django.views.decorators.csrf import csrf_exempt
from graphene_django.views import GraphQLView
from graphql_sync_dataloaders import DeferredExecutionContext

from .schema import schema

urlpatterns = [
    path(
        "graphql",
        csrf_exempt(
            GraphQLView.as_view(
                schema=schema,
                execution_context_class=DeferredExecutionContext,
            )
        ),
    ),
]

Then create your dataloaders using the SyncDataLoader class:

from typing import List, Optional

from graphql_sync_dataloaders import SyncDataLoader

from .app import models  # your Django models

def load_users(keys: List[int]) -> List[Optional[models.User]]:
    # Fetch all requested users in one query, then return them in the
    # same order as the keys (None for any id that doesn't exist).
    qs = models.User.objects.filter(id__in=keys)
    user_map = {user.id: user for user in qs}
    return [user_map.get(key) for key in keys]

user_loader = SyncDataLoader(load_users)

You can then use the loader in your resolvers and it will automatically be batched to reduce the number of SQL queries:

import graphene

class Query(graphene.ObjectType):
    get_user = graphene.Field(User, id=graphene.ID())

    def resolve_get_user(root, info, id):
        return user_loader.load(id)

The following query will only make 1 SQL query:

fragment UserDetails on User {
  username
}

query {
  user1: getUser(id: "1") {
    ...UserDetails
  }
  user2: getUser(id: "2") {
    ...UserDetails
  }
  user3: getUser(id: "3") {
    ...UserDetails
  }
}
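
As with Strawberry, you will probably want to build the loaders per request rather than at module level, for example via graphene-django's get_context hook. A minimal sketch (the view subclass and the user_loader attribute are illustrative, not part of this library):

# views.py (illustrative sketch)
from graphene_django.views import GraphQLView
from graphql_sync_dataloaders import SyncDataLoader

from .loaders import load_users  # the batch function shown above

class LoaderGraphQLView(GraphQLView):
    def get_context(self, request):
        # Attach a fresh loader to the request so cached results are
        # scoped to a single request.
        request.user_loader = SyncDataLoader(load_users)
        return request

Resolvers can then access it as info.context.user_loader.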

How it works

This library implements a custom version of the graphql-core ExecutionContext class that is aware of the SyncFuture objects defined in this library. A SyncFuture represents a value that hasn't been resolved yet (similar to an asyncio Future or a JavaScript Promise), and it is what a SyncDataLoader returns when you call its .load method.

When the custom ExecutionContext encounters a SyncFuture returned from a resolver, it keeps track of it. After the first pass of the execution it triggers the SyncFuture callbacks until none are left; at that point the data is fully resolved and can be returned to the caller synchronously. This lets us implement the DataLoader pattern, batching calls to a loader function, in a fully synchronous way.
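
Stripped down to a toy example, the pattern looks roughly like this (purely illustrative; this is not the library's actual implementation):

# Toy illustration of the deferred/batching idea -- not the real code.
class ToyFuture:
    """A value that will be filled in later, with done-callbacks."""

    _PENDING = object()

    def __init__(self):
        self.result = self._PENDING
        self._callbacks = []

    def set_result(self, value):
        self.result = value
        for callback in self._callbacks:
            callback(value)

    def add_done_callback(self, callback):
        if self.result is self._PENDING:
            self._callbacks.append(callback)
        else:
            callback(self.result)

class ToyBatchLoader:
    """Collects keys during one pass, then resolves them all at once."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.queue = []  # pending (key, future) pairs

    def load(self, key):
        future = ToyFuture()
        self.queue.append((key, future))
        return future  # returned immediately, still unresolved

    def dispatch(self):
        # Called after the first execution pass: one batch call
        # resolves every pending future.
        keys = [key for key, _ in self.queue]
        values = self.batch_fn(keys)
        for (_, future), value in zip(self.queue, values):
            future.set_result(value)
        self.queue.clear()

loader = ToyBatchLoader(lambda keys: [key * 10 for key in keys])
futures = [loader.load(key) for key in (1, 2, 3)]  # nothing resolved yet
loader.dispatch()                                   # one batched call
assert [future.result for future in futures] == [10, 20, 30]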

Credits

@Cito for graphql-core and for implementing the first version of this in https://github.com/graphql-python/graphql-core/pull/155

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

graphql-sync-dataloaders-0.1.0.tar.gz (8.5 kB)

Uploaded Source

Built Distribution

graphql_sync_dataloaders-0.1.0-py3-none-any.whl (8.0 kB)

Uploaded Python 3

File details

Details for the file graphql-sync-dataloaders-0.1.0.tar.gz.

File metadata

  • Download URL: graphql-sync-dataloaders-0.1.0.tar.gz
  • Upload date:
  • Size: 8.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.1 CPython/3.10.2 Linux/5.15.0-1019-azure

File hashes

Hashes for graphql-sync-dataloaders-0.1.0.tar.gz

  • SHA256: c1a75d72abd7e81f120ca59d1532ac7ab58be64ac8ffed3e382fa65ecd457d97
  • MD5: 4942a3778365daf61bf00c3b7174f406
  • BLAKE2b-256: 83943c64641b7675896a4eee3d712f192cab4e986211ee8514f8032fc5aebebc

See more details on using hashes here.

File details

Details for the file graphql_sync_dataloaders-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for graphql_sync_dataloaders-0.1.0-py3-none-any.whl

  • SHA256: 6b2e23ac73aaafa77e41ae4d32fdaaca57c64d2e54f3c89c10f3a529f90eb07a
  • MD5: 708a3655067b553f8dafb3c10c3151d1
  • BLAKE2b-256: 4ac750e1c0eab83fbb240a12bb38f4ce8a47f175a1c585b71b0d5797e15c8a2c

See more details on using hashes here.
