
Django REST framework views using the pipeline pattern.


Django REST Framework Pipeline Views


pip install drf-pipeline-views

Inspired by a talk on The Clean Architecture in Python by Brandon Rhodes, drf-pipeline-views aims to simplify writing testable API endpoints with Django REST framework using the Pipeline Design Pattern.

The main idea behind the pipeline pattern is to process data in steps. The output of each step is passed as the input to the next, resulting in a chain of data-in, data-out functions. These functions are easy to unit test, since none of them depend on the state of objects elsewhere in the pipeline. Furthermore, IO can be separated into its own step, making the rest of the logic simpler and faster to test: there is no need to mock or do any other special setup around the IO. This also means that the IO step, or in fact any other part of the application, can be replaced as long as the data flowing through the pipeline stays the same.
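
The pattern described above can be sketched in plain Python. This is an illustrative sketch only, not the library's actual implementation; `run_pipeline`, `parse`, and `double` are hypothetical names:

```python
# Conceptual sketch of the pipeline pattern: each step is a plain function
# that receives the previous step's output dict as keyword arguments and
# returns a dict for the next step. Not the library's actual implementation.

def run_pipeline(steps, data):
    """Run 'data' through 'steps', feeding each step's output to the next."""
    for step in steps:
        # A step that returns None passes no arguments to the next step.
        data = step(**(data or {}))
    return data

# Hypothetical steps: parse the raw input, then transform it.
def parse(raw):
    return {"number": int(raw)}

def double(number):
    return {"doubled": number * 2}

result = run_pipeline([parse, double], {"raw": "21"})
# result == {"doubled": 42}
```

Because each step only sees plain data, swapping one step for another (for example, replacing an IO step with a stub) requires no changes to the rest of the chain.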

Basic Usage:

Let's create a basic pipeline:

def step1(step1_input1, step1_input2):
    # Process the data...
    return {"step2_input1": ..., "step2_input2": ...}

def step2(step2_input1, step2_input2):
    # Maybe do some IO...
    return {"step3_input1": ..., "step3_input2": ...}

def step3(step3_input1, step3_input2):
    # Process the data, but do not pass on anything...
    return

def step4():
    # Build some response...
    return {"end_result1": ..., "end_result2": ...}

Next, we'll create input and output serializers for our endpoint:

from rest_framework import fields
from rest_framework.serializers import Serializer

class InputSerializer(Serializer):
    step1_input1 = fields.CharField()
    step1_input2 = fields.DateField()

class OutputSerializer(Serializer):
    end_result1 = fields.CharField()
    end_result2 = fields.FloatField()

Finally, we can create our view:

from pipeline_views import BaseAPIView, GetMixin


class SomeView(GetMixin, BaseAPIView):

    pipelines = {
        "GET": [
            InputSerializer,
            [
                step1,
                step2,
                step3,
                step4,
            ],
            OutputSerializer,
        ],
    }

Using input and output serializers like this forces validation of the incoming and outgoing data, so that if something changes in the logic, or some unexpected values are returned, the endpoint will fail loudly instead of causing side effects in the application consuming the API.
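
Since every step is a plain data-in, data-out function, the logic can be unit tested directly, without constructing a request, view, or serializer. A hypothetical step and its test might look like this:

```python
# A pipeline step is just a function, so it can be tested with plain
# assertions -- no request, view, or mocking required. The step shown
# here is a hypothetical example, not part of the library.

import datetime

def step1(step1_input1: str, step1_input2: datetime.date):
    # Derive a label and an age in days from the validated input.
    age_days = (datetime.date(2021, 1, 1) - step1_input2).days
    return {"step2_input1": step1_input1.upper(), "step2_input2": age_days}

output = step1("hello", datetime.date(2020, 12, 31))
assert output == {"step2_input1": "HELLO", "step2_input2": 1}
```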


Using serializers is totally optional. A pipeline like this will work just as well:

class SomeView(GetMixin, BaseAPIView):

    pipelines = {
        "GET": [
            step1,
            step2,
            step3,
            step4,
        ],
    }

BaseAPIView will try to infer a serializer with the correct serializer fields based on the type hints of the first function, step1.

from rest_framework.fields import CharField, IntegerField
from pipeline_views import MockSerializer

# Callable
def logic_callable(name: str, age: int):
    ...

# Inferred Serializer
class LogicCallableSerializer(MockSerializer):
    name = CharField()
    age = IntegerField()

This is only used by the Django REST Framework Browsable API to create forms. MockSerializer makes sure the fields are only used for input and not validation.
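
The idea behind the inference can be sketched in plain Python as a mapping from type hints to field types. This is purely illustrative and not the library's code; `TYPE_TO_FIELD` and `infer_fields` are hypothetical names:

```python
# Illustrative sketch of inferring serializer fields from type hints.
# The library builds a MockSerializer; this only demonstrates the idea.
import typing

# Hypothetical mapping of Python types to DRF field class names.
TYPE_TO_FIELD = {str: "CharField", int: "IntegerField", float: "FloatField"}

def infer_fields(func):
    """Map each type-hinted parameter of 'func' to a field class name."""
    hints = typing.get_type_hints(func)
    hints.pop("return", None)  # only parameters become input fields
    return {name: TYPE_TO_FIELD[hint] for name, hint in hints.items()}

def logic_callable(name: str, age: int):
    ...

fields = infer_fields(logic_callable)
# fields == {"name": "CharField", "age": "IntegerField"}
```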


Pipeline logic can be grouped into blocks:

class SomeView(GetMixin, BaseAPIView):

    pipelines = {
        "GET": [
            [
                block1_step1,
                block1_step2,
            ],
            [
                block2_step1,
                block2_step2,
            ],
        ],
    }

Logic blocks are useful if you want to skip some logic methods under certain conditions, e.g., to return a cached result. This can be accomplished by raising a NextLogicBlock exception. The exception can be initialized with any number of keyword arguments that will be passed to the next step in the logic, or to the response if it's the last step in the logic.

from pipeline_views import NextLogicBlock


def block1_step1(step1_input1, step1_input2):
    if condition:
        raise NextLogicBlock(step3_input1=..., step3_input2=...)
    ...

def block1_step2(step2_input1, step2_input2):
    ...

def block2_step1(step3_input1, step3_input2):
    ...

def block2_step2():
    ...
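
Conceptually, this block-skipping behavior amounts to catching the exception at the block boundary. The following is an illustrative pure-Python sketch, not the library's code; `run_blocks` and the step functions are hypothetical:

```python
# Illustrative sketch of exception-based block skipping; the real
# NextLogicBlock comes from pipeline_views.

class NextLogicBlock(Exception):
    def __init__(self, **kwargs):
        self.output = kwargs

def run_blocks(blocks, data):
    for block in blocks:
        try:
            for step in block:
                data = step(**(data or {}))
        except NextLogicBlock as skip:
            # Skip the rest of this block; its output feeds the next block.
            data = skip.output
    return data

def cached(key):
    raise NextLogicBlock(value="from-cache")  # skip the expensive step

def expensive(key):
    return {"value": "computed"}

def respond(value):
    return {"result": value}

result = run_blocks([[cached, expensive], [respond]], {"key": "x"})
# result == {"result": "from-cache"}
```

Here `expensive` never runs, because `cached` raises the exception first and its keyword arguments become the input of the next block.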

If you wish to add data to a request, you can do so at the endpoint level by overriding _process_request, or at the HTTP method level by overriding the specific method, such as get.

from rest_framework.exceptions import NotAuthenticated
from rest_framework.authentication import get_authorization_header
from pipeline_views import BaseAPIView, GetMixin


class BasicView(GetMixin, BaseAPIView):

    pipelines = {"GET": ...}

    def get(self, request, *args, **kwargs):
        # Add language to every get request for this endpoint
        kwargs["lang"] = request.LANGUAGE_CODE
        return super().get(request, *args, **kwargs)

    def _process_request(self, data):
        # Add authorization token to every http method
        data["token"] = self._token_from_headers()
        return super()._process_request(data)

    def _token_from_headers(self):
        auth_header = get_authorization_header(self.request)
        if not auth_header:
            raise NotAuthenticated("You must be logged in for this endpoint.")
        return auth_header.split()[1].decode()
