
A batteries-included framework to build high performance, async GraphQL APIs


Turbulette


Turbulette packages all you need to build great GraphQL APIs:

ASGI framework, GraphQL library, ORM and data validation


Documentation: https://turbulette.netlify.app

Features:

  • Split your API in small, independent applications
  • Generate Pydantic models from GraphQL types
  • JWT authentication with refresh and fresh tokens
  • Declarative, powerful and extendable policy-based access control (PBAC)
  • Extendable auth user model with role management
  • Async caching (provided by async-caches)
  • Built-in CLI to manage project, apps, and DB migrations
  • Built-in pytest plugin to quickly test your resolvers
  • Settings management at project and app-level (thanks to simple-settings)
  • CSRF middleware
  • 100% test coverage
  • 100% typed, your IDE will thank you ;)
  • Handcrafted with ❤️, from 🇫🇷

Requirements

Python 3.6+

๐Ÿ‘ Turbulette makes use of great tools/frameworks and wouldn't exist without them :

  • Ariadne - Schema-first GraphQL library
  • Starlette - The little ASGI framework that shines
  • GINO - Lightweight, async ORM
  • Pydantic - Powerful data validation with type annotations
  • Alembic - Lightweight database migration tool
  • simple-settings - A generic settings system inspired by Django's
  • async-caches - Async caching library
  • Click - A "Command Line Interface Creation Kit"

Installation

pip install turbulette

You will also need an ASGI server, such as uvicorn:

pip install uvicorn

🚀 Quick Start

Here is a short example that demonstrates a minimal project setup.

We will see how to scaffold a simple Turbulette project, create a Turbulette application, and write a GraphQL schema with its resolver. It's advisable to start the project in a virtualenv to isolate your dependencies. Here, we will be using poetry:

poetry init

Then, install Turbulette from PyPI:

poetry add turbulette

For the rest of the tutorial, we will assume that commands are executed inside the virtualenv. To spawn a shell inside the virtualenv, run:

poetry shell

1: Create a project

First, create a directory that will contain the whole project.

Now, inside this folder, create your Turbulette project using the turb CLI:

turb project eshop

You should end up with something like this:

.
└── 📁 eshop
    ├── 📁 alembic
    │   ├── 📄 env.py
    │   └── 📄 script.py.mako
    ├── 📄 .env
    ├── 📄 alembic.ini
    ├── 📄 app.py
    └── 📄 settings.py

Let's break down the structure:

  • 📁 eshop : The Turbulette project folder; it will contain applications and project-level configuration files
  • 📁 alembic : Contains the Alembic scripts used to generate and apply DB migrations
    • 📄 env.py
    • 📄 script.py.mako
  • 📄 .env : The actual project settings live here
  • 📄 app.py : Your API entrypoint; it contains the ASGI app
  • 📄 settings.py : Loads settings from the .env file

Why have both .env and settings.py?

You don't have to. You can also put all your settings in settings.py. But Turbulette encourages you to follow the twelve-factor methodology, which recommends separating settings from code, because config varies substantially across deploys while code does not. This way, you can untrack .env from version control and keep tracking only settings.py, which loads settings from .env using Starlette's Config object.
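As a sketch of that pattern, settings.py can read values from the .env file with Starlette's Config object (the variable names below are illustrative, not the ones generated by the turb CLI):

```python
# settings.py -- illustrative sketch of loading settings from .env
# with Starlette's Config (names here are examples, not generated code)
from starlette.config import Config

config = Config(".env")

DEBUG = config("DEBUG", cast=bool, default=False)
DB_HOST = config("DB_HOST", default="localhost")
SECRET_KEY = config("SECRET_KEY")  # no default: must be set in .env
```

Keys without a default raise an error at startup if they are missing from .env, which keeps configuration mistakes loud.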

2: Create the first app

Now it's time to create a Turbulette application!

Run this command under the project directory (eshop):

turb app --name account

You need to run turb app under the project directory because the CLI needs access to the alembic.ini file to create the initial database migration.

You should see your new app under the project folder:

.
└── 📁 eshop
    ...
    |
    └── 📁 account
        ├── 📁 graphql
        ├── 📁 migrations
        │   └── 📄 20200926_1508_auto_ef7704f9741f_initial.py
        ├── 📁 resolvers
        └── 📄 models.py

Details:

  • 📁 graphql : All the GraphQL schema will live here
  • 📁 migrations : Will contain database migrations generated by Alembic
  • 📁 resolvers : Python package where you will write resolvers bound to the schema
  • 📄 models.py : Will hold the GINO models for this app

What is this "initial" Python file under 📁 migrations?

We won't cover database connections in this quickstart, but note that this is the initial database migration for the account app: it creates the app's dedicated Alembic branch, which is needed to generate and apply per-app migrations.

Before writing some code, the only thing left to do is to make Turbulette aware of our lovely account app.

To do this, open 📄 eshop/settings.py and add "eshop.account" to INSTALLED_APPS, so the application is registered and picked up by Turbulette at startup:

# List installed Turbulette apps that define some GraphQL schema
INSTALLED_APPS = ["eshop.account"]

3: GraphQL schema

Now that we have our project scaffold, we can start writing actual schema/code.

Create a schema.gql file in the 📁 graphql folder and add this base schema:

extend type Mutation {
    registerCard(input: CreditCard!): SuccessOut!
}

input CreditCard {
    number: String!
    expiration: Date!
    name: String!
}

type SuccessOut {
    success: Boolean
    errors: [String]
}

Note that we extend the Mutation type because Turbulette already defines it. The same goes for the Query type.

Notice that we use the Date scalar, one of the custom scalars provided by Turbulette. It parses strings in the ISO 8601 date format YYYY-MM-DD.
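As an illustration of what such a scalar does when parsing a value (this is not Turbulette's actual implementation), the conversion boils down to the standard library's ISO 8601 handling:

```python
from datetime import date

def parse_date(value: str) -> date:
    """Parse an ISO 8601 date string (YYYY-MM-DD) into a date object."""
    return date.fromisoformat(value)

print(parse_date("2023-05-12"))  # 2023-05-12
```

An invalid string such as "12/05/2023" would raise a ValueError, which the GraphQL layer surfaces as a query error.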

4: Add pydantic model

We want to validate our CreditCard input to ensure the user has entered a valid card number and date. Fortunately, Turbulette integrates with Pydantic, a data validation library that uses Python type annotations, and offers a convenient way to generate a Pydantic model from a schema type.

Create a new 📄 pyd_models.py under 📁 account:

from turbulette.validation import GraphQLModel
from pydantic import PaymentCardNumber


class CreditCard(GraphQLModel):
    class GraphQL:
        gql_type = "CreditCard"
        fields = {"number": PaymentCardNumber}

What's happening here?

The inherited GraphQLModel class is a pydantic model that knows about the GraphQL schema and can produce pydantic fields from a given GraphQL type. We specify the GraphQL type with the gql_type attribute; it's the only one required.

But we also add a fields attribute to override the type of the number field, because it is typed as a string in our schema. If we don't add this, Turbulette will assume that number is a string and will annotate the field as str. fields is a mapping between GraphQL field names and the types that override those from the schema.

Let's add another validation check: the expiration date. We want to ensure the user has entered a valid date (i.e., at least greater than now) :

from datetime import datetime
from pydantic import PaymentCardNumber
from turbulette.validation import GraphQLModel, validator


class CreditCard(GraphQLModel):
    class GraphQL:
        gql_type = "CreditCard"
        fields = {"number": PaymentCardNumber}

    @validator("expiration")
    def check_expiration_date(cls, value):
        if value < datetime.now():
            raise ValueError("Expiration date is invalid")
        return value

Why don't we use the @validator from Pydantic?

For those who have already used Pydantic, you probably know about the @validator decorator used to add custom validation rules on fields.

But here, we use a @validator imported from turbulette.validation, why?

They're almost identical. Turbulette's validator is just a shortcut to Pydantic's, with check_fields=False as the default (instead of True) because we use an inherited BaseModel. The above snippet would work correctly with Pydantic's validator if we explicitly set @validator("expiration", check_fields=False).
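Stripped of the Pydantic machinery, the rule itself is plain Python; as a standalone sketch (not Turbulette code), the check amounts to:

```python
from datetime import datetime

def check_expiration_date(value: datetime) -> datetime:
    """Reject expiration dates that are already in the past."""
    if value < datetime.now():
        raise ValueError("Expiration date is invalid")
    return value
```

Inside the model, Pydantic calls the validator with the already-parsed field value and reports any ValueError it raises as a validation error.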

5: Add a resolver

The last missing piece is the resolver for our mutation, so that the API returns something when it is queried.

The GraphQL part is handled by Ariadne, a schema-first GraphQL library that allows binding the logic to the schema with minimal code.

As you may have guessed, we will create a new Python module in our 📁 resolvers package.

Let's call it 📄 user.py:

from turbulette import mutation
from ..pyd_models import CreditCard

@mutation.field("registerCard")
async def register(obj, info, **kwargs):
    return {"success": True}

mutation is the base mutation type defined by Turbulette; it is used to register all mutation resolvers (hence the extend type Mutation in the schema). For now, our resolver is very simple: it doesn't validate the input data or handle errors.

Turbulette has a @validate decorator that can be used to validate resolver input using a pydantic model (like the one defined in Step 4).

Here's how to use it:

from turbulette import mutation
from ..pyd_models import CreditCard
from turbulette.validation import validate

@mutation.field("registerCard")
@validate(CreditCard)
async def register(obj, info, **kwargs):
    return {"success": True}

If the validation succeeds, you can access the validated input data in kwargs["_val_data"]. But what happens otherwise? Normally, when validation fails, Pydantic raises a ValidationError, but here the @validate decorator handles the exception and adds the error messages returned by Pydantic to a dedicated error field in the GraphQL response.
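As a rough sketch of that mechanism (a simplified stand-in, not Turbulette's actual implementation; card_model and its rule are invented for illustration), such a decorator could look like this:

```python
import asyncio
import functools

def validate(model):
    """Simplified sketch of a @validate-style decorator: run validation on
    the resolver's input and, on failure, short-circuit the resolver with
    the messages placed in the error field. Here `model` is any callable
    that returns validated data or raises ValueError -- an illustrative
    stand-in for a Pydantic model."""
    def decorator(resolver):
        @functools.wraps(resolver)
        async def wrapper(obj, info, **kwargs):
            try:
                # Expose the validated data to the resolver
                kwargs["_val_data"] = model(kwargs.get("input", {}))
            except ValueError as exc:
                # Collect the message into the GraphQL error field
                return {"errors": [str(exc)]}
            return await resolver(obj, info, **kwargs)
        return wrapper
    return decorator

def card_model(data):
    """Toy validator; a real setup would use the CreditCard Pydantic model."""
    if len(data.get("number", "")) != 16:
        raise ValueError("number: invalid card number")
    return data

@validate(card_model)
async def register(obj, info, **kwargs):
    return {"success": True}

print(asyncio.run(register(None, None, input={"number": "4000000000000002"})))
# {'success': True}
print(asyncio.run(register(None, None, input={"number": "123"})))
# {'errors': ['number: invalid card number']}
```

The resolver body never runs when validation fails, which is why success comes back null and only errors is populated in that case.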

6: Run it

Our registerCard mutation is now bound to the schema, so let's test it.

Start the server from the root directory (the one containing the 📁 eshop folder):

uvicorn eshop.app:app --port 8000

Now, go to http://localhost:8000/graphql; you will see the GraphQL Playground IDE. Finally, run the registerCard mutation, for example:

mutation card {
  registerCard(
    input: {
      number: "4000000000000002"
      expiration: "2023-05-12"
      name: "John Doe"
    }
  ) {
    success
    errors
  }
}

This should give you the following result:

{
  "data": {
    "registerCard": {
      "success": true,
      "errors": null
    }
  }
}

Now, try entering an invalid date (one in the past). You should see the validation error as expected:

{
  "data": {
    "registerCard": {
      "success": null,
      "errors": [
        "expiration: Expiration date is invalid"
      ]
    }
  }
}

How did the error message end up in the errors key?

Indeed, we didn't specify anywhere that validation errors should be passed to the errors key in our SuccessOut GraphQL type. That is because Turbulette has a setting called ERROR_FIELD, which defaults to "errors". This setting indicates the error field on the GraphQL output type that Turbulette uses when collecting query errors.
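For instance, if your output types use a differently named error field, the setting can be overridden in 📄 eshop/settings.py (a hedged sketch; "customErrors" is an illustrative name, not a value from this tutorial):

```python
# settings.py -- override Turbulette's default error field name ("errors")
ERROR_FIELD = "customErrors"
```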

This means that if the GraphQL output type doesn't have the field named by ERROR_FIELD, you will get an exception telling you that the field is missing.

It's the default (and recommended) way of handling errors in Turbulette. Still, as it all happens in the @validate decorator, you can always remove it and instantiate your Pydantic models manually in your resolvers.

Good job! 👍

That was a straightforward example showing a simple Turbulette API setup. To get the most out of it, follow the User Guide.

Project details

Files for turbulette, version 0.5.1:

  • turbulette-0.5.1-py3-none-any.whl (65.7 kB, wheel, py3)
  • turbulette-0.5.1.tar.gz (55.6 kB, source)
