
Project description



A collection of various tools used to improve Django and Django Rest Framework developments

This project is intended to contain a set of improvements/addons for Django and DRF that I’ve used and/or developed during using DRF.

Credits

Some of these tweaks have been partially (or largely…) inspired by other toolsets. I've tried to always respect the code licensing (all third-party code was under an MIT license or equivalent).

Here follows the list of contributors:

Current tweaks


Extended Serializers

There are a few improvements that the standard DRF Serializer could benefit from. Each improvement, how to use it, and the rationale behind it are described in the sections below.

One-step validation

The standard serializer validates the data in three steps:

  • field-level validation (required, blank, validators)
  • custom field-level validation (the validate_fieldname(…) method)
  • custom general validation (the validate(…) method)

So for example, if you have a serializer with 4 required fields - first_name, email, password & confirm_password - and you pass data without first_name and with a wrong confirm_password, you will first get the error for first_name, and only after you correct it will you get the error for confirm_password, instead of getting both errors at once. This results in a bad user experience, and that's why we've changed all validation to run in one step.

Validation of our Serializer runs all three phases and merges the errors from all of them. If a given field produced an error at two different stages, only the error from the earlier stage is returned.

When using our Serializer/ModelSerializer and writing a "validate" method, you need to remember that a given field may not be present in the data dictionary, so the validation must be a bit more defensive:

# assumes: from django.utils.translation import gettext_lazy as _
#          from rest_framework import serializers
def validate(self, data):
    errors = {}
    # wrong - password & confirm_password may raise KeyError
    if data["password"] != data["confirm_password"]:
        errors["confirm_password"] = [_("This field must match")]

    # correct - a missing key simply compares as None
    if data.get("password") != data.get("confirm_password"):
        errors["confirm_password"] = [_("Passwords must match")]

    if errors:
        raise serializers.ValidationError(errors)

    return data

Making fields required

The standard ModelSerializer takes the "required" state from the corresponding Model field. To make a model field that is not required become required in the serializer, you have to declare it explicitly on the serializer. So if the field first_name is not required in the model, you need to do:

class MySerializer(serializers.ModelSerializer):
    first_name = serializers.CharField(..., required=True)

This is quite annoying when you have to do it often, which is why our ModelSerializer allows you to override this by simply specifying the list of fields you want to make required:

from my_django_tweaks.serializers import ModelSerializer

class MySerializer(ModelSerializer):
    required_fields = ["first_name"]

Custom errors

Our serializers provide a simple way to override the blank & required error messages, either by specifying a default error for all fields or by specifying an error for a specific field. Each error message receives "fieldname" as a format parameter. Example:

from my_django_tweaks.serializers import ModelSerializer

class MySerializer(ModelSerializer):
    required_error = blank_error = "{fieldname} is required"
    custom_required_errors = custom_blank_errors = {
        "credit_card_number": "You make me a saaaad Panda."
    }

Passing context to subserializers

Rationale: In DRF the context is not passed to sub-serializers. So for example, in the standard serializer you will have "request" in the context for the main object (say, Message), but the context for a sub-serializer (say, the sender's Account) will be empty. To work around this you could, for example, re-initialize sub-serializers in the serializer's __init__, or use a SerializerMethodField instead of a sub-serializer and initialize the sub-serializer inside it, etc. The problem is described here: https://github.com/encode/django-rest-framework/issues/2471

Our serializers include a mechanism that passes the context to sub-serializers, working around the problem stated above.

If for any reason you are using a SerializerMethodField with a Serializer inside, and you want to pass the context, use the pass_context method so that the fields & include_fields filtering is propagated properly.

from rest_framework import serializers

from my_django_tweaks.serializers import pass_context

class SomeSerializer(serializers.Serializer):
    some_field = serializers.SerializerMethodField()

    def get_some_field(self, obj):
        # pass_context narrows the parent context down to what "some_field" needs
        return OtherSerializer(obj, context=pass_context("some_field", self.context)).data

WARNING: passing the context may cause some unexpected behaviours, since sub-serializers will start receiving the main context (which they previously did not get).

Control over serialized fields

Our serializers provide control over which fields get serialized. It may be useful in the following cases:

  • You have a quite heavy serializer (many fields, foreign keys, db calls, etc.) that you need in one place, while in another place you just need some basic data from it - say just name & id. You could provide a separate serializer for such a case, or even a separate endpoint, but it is easier if the client can control which fields get serialized.
  • You have some fields that should be serialized only for some states of the serialized object, and not for others.

Both things can be achieved with our serializers. By default they check whether ".fields" was passed in the context or as a GET parameter (in which case "request" must be present in the context), but you can define custom behaviour by overriding the following method in the Serializer:

def get_fields_for_serialization(self, fields):  # fields must be in (".fields", ".include_fields")
    return {"name", "id"}

This also works with sub-serializers (using context-passing). Here is an example usage:

https://your.url?.fields=some_field,other_field,nested_serializer__some_field,nested_serializer__other_field
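
The same restriction can also be applied programmatically - a minimal sketch, assuming ".fields" in the context accepts an iterable of field names, as described above:

# "instance" stands for any model instance being serialized
serializer = MySerializer(instance, context={".fields": {"name", "id"}})
data = serializer.data  # only "name" and "id" are serialized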

Making fields available only on demand

Rationale: it is good practice to minimize the number of APIs by making them as generic as possible. This however creates a performance problem when the amount of data being serialized grows by including sub-serializers (which can include sub-serializers themselves). Using control over serialized fields, as described above, should be sufficient; in practice, however, that mechanism will not be used as frequently as it should. That's why we've introduced another mechanism: on-demand fields. These are fields, specified on the serializer, that will be returned only if requested - either by passing their name in ".fields" (see the previous chapter) or in the ".include_fields" parameter.

class MySerializer(serializers.ModelSerializer):
    some_subserializer = OtherSerializer()

    class Meta:
        model = MyModel
        fields = ["some_property", "some_subserializer"]
        on_demand_fields = ["some_subserializer"]
https://your.url?.include_fields=some_subserializer

Auto filtering and ordering

Rationale

There are nice OrderingFilter and DjangoFilterBackend backends in place, however sorting and filtering fields have to be declared explicitly, which is sometimes time consuming. That's why we've created a decorator that allows sorting & filtering (with some extra lookup methods by default) by all the indexed fields present both in the model and in the serializer class (as non write-only). Non-indexed fields may also be added to sorting & filtering, but this must be done explicitly - the idea is that ordering or filtering by a non-indexed field is not optimal from the DB perspective, so if a field is not included in sorting/filtering you should rather create an index on it than declare it explicitly.

The decorator works with explicitly defined FilterBackends, as well as with explicitly defined ordering_fields, filter_fields or filter_class. In order to work, it requires a ModelSerializer (obtained through either serializer_class or get_serializer_class), from which the fields & model class are extracted.

Usage

@autofilter()
class SomeAPI(...):
    serializer_class = SomeModelSerializer

# it works well with autodoc:
@autodoc()  # autodoc should be before autofilter, so it operates on the result from autofilter
@autofilter()
class SomeAPI(...):
    serializer_class = SomeModelSerializer

# you can add some extra fields to sort or filter
@autofilter(extra_filter=("non_indexed_field", ), extra_ordering=("non_indexed_field", ), exclude_fields=("some_field", ))
class SomeAPI(...):
    serializer_class = SomeModelSerializer
    ordering_fields = ("other_non_indexed_field", )
    filter_fields = ("other_non_indexed_field", )

# it works also when you have a custom filter_class set
class SomeFilter(filters.FilterSet):
    class Meta:
        model = SomeModel
        fields = ("non_indexed_field", )

@autofilter()
class SomeAPI(...):
    serializer_class = SomeModelSerializer
    filter_class = SomeFilter
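
With the decorator in place, a client can combine ordering with the generated filter lookups directly in the query string - a hypothetical example (assuming first_name is an indexed field present in the serializer):

https://your.url?ordering=-id&first_name__icontains=jo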

Pagination without counts

Rationale

Calling "count" each time a queryset gets paginated is inefficient - especially for large datasets. Moreover, in most cases counts are unnecessary (for example for endless scrolls). The fastest pagination in such cases is CursorPagination, however it is not as easy to use as LimitOffsetPagination/PageNumberPagination and does not allow sorting.

Usage

from my_django_tweaks.pagination import NoCountsLimitOffsetPagination
from my_django_tweaks.pagination import NoCountsPageNumberPagination

Use them as standard pagination classes - the only difference is that they do not return "count" in the dictionary. The page indicated by "next" may be empty. The next page URL is present if the current page size is as requested - if the page contains fewer items than requested, it means we're on the last page.
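
A minimal sketch of wiring one of these paginators into a view (the view, serializer and queryset names are hypothetical):

from my_django_tweaks.pagination import NoCountsPageNumberPagination
from rest_framework.generics import ListAPIView

class AccountListApi(ListAPIView):
    # behaves like standard page-number pagination, but the response has no "count" key
    pagination_class = NoCountsPageNumberPagination
    serializer_class = AccountSerializer
    queryset = Account.objects.all()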

NoCountsLimitOffsetPagination

A limit/offset based pagination, without performing counts. For example:

  • http://api.example.org/accounts/?limit=100 - will return the first 100 items
  • http://api.example.org/accounts/?offset=400&limit=100 - will return 100 items starting from the 401st
  • http://api.example.org/accounts/?offset=-50&limit=100 - will return the first 50 items

HTML is not handled (no get_html_context).

Pros:

  • no counts
  • easier to use than cursor pagination (especially if you need sorting)
  • works with angular ui-scroll (which requires negative offsets)

Cons:

  • skip is a relatively slow operation, so this paginator is not as fast as cursor paginator when you use large offsets

NoCountsPageNumberPagination

A standard page number pagination, without performing counts.

HTML is not handled (no get_html_context).

Pros:

  • no counts
  • easier to use than cursor pagination (especially if you need sorting)

Cons:

  • skip is a relatively slow operation, so this paginator is not as fast as cursor paginator when you use large page numbers

Versioning extensions

Rationale

DRF provides a nice versioning mechanism, however there are two things that could be more automated, and that is the point of this extension:

  • Handling deprecation & obsolescence: when you don't have control over upgrading the client app, it is best to set up a deprecation/obsolescence mechanism at the very beginning of your project - something that will start reminding users that they are using an old app and should update it, or, in the case of obsolescence, inform them that the app is outdated and must be upgraded in order to be used. This extension adds a warning header if the API version the client is using is deprecated, and responds with a 410: Gone error when the API version is obsolete.

  • Choosing the serializer: in DRF you have to override get_serializer_class to provide different serializers for different versions. This extension allows you to simply define a dictionary for it: versioning_serializer_classess. You may still override get_serializer_class if you choose to.

Configuration

In order to make the deprecation warning work, you need to add the DeprecationMiddleware to MIDDLEWARE or MIDDLEWARE_CLASSES (depending on the Django version you're using):

# django >= 1.10
MIDDLEWARE = (
    ...
    "my_django_tweaks.versioning.DeprecationMiddleware"
)

It is highly recommended to add DEFAULT_VERSION along with DEFAULT_VERSIONING_CLASS to the DRF settings:

REST_FRAMEWORK = {
    ...
    "DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.AcceptHeaderVersioning",
    "DEFAULT_VERSION": "1",
}

By default DEFAULT_VERSION is None, which in effect works as "latest" - it is safer to make clients pass the version explicitly.

ApiVersionMixin

Use this as the first class in the inheritance chain when creating your own API classes, for example:

class MyApi(ApiVersionMixin, GenericApiView):
    ...

The serializer is then chosen based on versioning_serializer_classess and the request version:

versioning_serializer_classess = {
    1: "x",
    2: "x",
}
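
As a hedged sketch (the view and serializer names are hypothetical), a versioned API could look like this:

from rest_framework.generics import RetrieveAPIView

class AccountApi(ApiVersionMixin, RetrieveAPIView):
    # one serializer class per supported API version
    versioning_serializer_classess = {
        1: AccountSerializerV1,
        2: AccountSerializerV2,
    }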

You can set custom deprecated/obsolete versions at the class level:

CUSTOM_DEPRECATED_VERSION = X
CUSTOM_OBSOLETE_VERSION = Y

It can also be configured at the settings level, either as a fixed version:

API_DEPRECATED_VERSION = X
API_OBSOLETE_VERSION = Y

or as an offset - for example:

API_VERSION_DEPRECATION_OFFSET = 6
API_VERSION_OBSOLETE_OFFSET = 10

The offset is calculated from the highest version number, and only if versioning_serializer_classess is defined:

deprecated = max(self.versioning_serializer_classess.keys()) - API_VERSION_DEPRECATION_OFFSET
obsolete = max(self.versioning_serializer_classess.keys()) - API_VERSION_OBSOLETE_OFFSET
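
For example, assuming versions 1 through 12 are defined in versioning_serializer_classess and the offsets shown above:

# a worked example under the assumptions above
deprecated = 12 - 6   # = 6, the computed deprecation threshold
obsolete = 12 - 10    # = 2, the computed obsolescence threshold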

If none of these is set, deprecation/obsolescence handling will not work. Only the first applicable setting is taken into account (in the order presented above).

Autodocumentation

Rationale

Django Rest Swagger is an awesome tool that generates swagger documentation out of your DRF API. There is however one deficiency - it does not offer any hooks that would allow you to automatically generate additional documentation. For example, if you want pagination parameters to be visible in the docs, you have to set them explicitly:

class SomeApi(ListAPIView):
    def get(...):
        """ page_number -- optional, page number """

You may also want to generate part of the description based on some fields of the API and have it change automatically each time you update them. Django Rest Swagger does not offer any hooks for that either, which is why this extension was created.

Since there are no hooks available for adding custom documentation, this extension takes the form of a class decorator that creates a facade for each API method (get/post/patch/put - defined at the Autodoc class level) and builds a docstring for it based on the original docstring (if present) & the applicable Autodoc classes.

Usage & Configuration

@autodoc("List or create an account")
class SomeApi(ApiVersionMixin, ListCreateAPIView):
    ...

# you can skip certain classes:
@autodoc("Base docstring", skip_classess=[PaginationAutodoc])

# or add certain classes:
@autodoc("Base docstring", add_classess=[CustomAutodoc])

# you can also override the autodoc classes - this one cannot be used with skip_classess or add_classess:
@autodoc("Base docstring", classess=[PaginationAutodoc])

Available Classes

Classes are applied in the same order they are defined.

BaseInfo

This one adds the basic info (the text passed to the decorator itself), as well as custom text or yaml if defined, as in the following examples:

@autodoc("some caption")
class SomeApi(RetrieveUpdateAPIView):

    @classmethod
    def get_custom_get_doc(cls):
        return "custom get doc"

    @classmethod
    def get_custom_patch_doc_yaml(cls):
        return "some yaml"

Pagination

This one adds parameters to the "get" method in swagger in the following format:

page_number -- optional, page number
page_size -- optional, page size

It adds every "*_query_param" from the pagination class, as long as it has a name defined - so for the standard PageNumberPagination, which has page_size_query_param set to None, that parameter will not be included.

If a default pagination class is defined and you don't want it to be documented, you can simply do:

class SomeClassWithoutPagination(RetrieveAPIView):
    pagination_class = None

OrderingAndFiltering

This one adds ordering & filtering information, based on OrderingFilter and DjangoFilterBackend, to the "get" method in swagger in the following format:

Sorting:
    usage: ?ordering=FIELD_NAME,-OTHER_FIELD_NAME
    available fields: id, first_name, last_name, date_of_birth

Filtering:
    id: exact, __gt, __gte, __lt, __lte, __in, __isnull
    date_of_birth: exact, __gt, __gte, __lt, __lte, __in
    first_name: exact, __gt, __gte, __lt, __lte, __in, __icontains, __istartswith
    last_name: exact, __gt, __gte, __lt, __lte, __in, __icontains, __istartswith

Versioning

Autodoc for versioning - applied only when ApiVersionMixin is present and the decorated class uses rest_framework.versioning.AcceptHeaderVersioning and has versioning_serializer_classess defined. It adds all available versions to swagger, so you can make calls from it using different API versions.

Permissions

Autodoc for permissions - adds the permission class name & its docstring to the method description.

Adding custom classes

A custom class should inherit from AutodocBase:

class CustomAutodoc(AutodocBase):
    applies_to = ("get", "post", "put", "patch", "delete")

    @classmethod
    def _generate_yaml(cls, documented_cls, method_name):
        return ""  # your implementation goes here

    @classmethod
    def _generate_text(cls, documented_cls, base_doc, method_name):
        return ""  # your implementation goes here

Autooptimization

You can discover the select_related & prefetch_related structure automatically just by using the AutoOptimizeMixin. It takes the fields & include_fields parameters into account, so if a related object is not going to be serialized, it will not be queried.

The structure is discovered based on the serializer retrieved by get_serializer_class(), with the context obtained from get_serializer_context(). The optimization discovery itself is run in get_queryset.

from my_django_tweaks.optimizator import AutoOptimizeMixin

class MyAPI(AutoOptimizeMixin, ListCreateAPIView):
    serializer_class = SerializerClassWithManyLevelsOfSubserializers

Linting database usage

Rationale

It is important to make sure your web application is efficient and can work well under high load. The my_django_tweaks.test_utils.DatabaseAccessLintingApiTestCase can detect two potential gotchas:

  • a large number of queries: it prints out warnings and raises an Exception based on query-count thresholds set via project settings,
  • multi-table select_for_update: it raises an Exception if the code tries to lock more than one table, unless it's a combination whitelisted in the project settings.

Usage & Configuration

from django.urls import reverse_lazy
from my_django_tweaks.test_utils import DatabaseAccessLintingApiTestCase

class TestFoo(DatabaseAccessLintingApiTestCase):
    def test_bar(self):
        # the linter will raise an Exception or print out a warning when it detects one of the gotchas, as configured in settings
        self.client.post(reverse_lazy("some-post-url"))
        # ...

To configure, set in your settings:

TEST_QUERY_NUMBER_SHOW_WARNING

Print out a warning if the count of queries in a single view reaches this threshold. Default: 10.

TEST_QUERY_NUMBER_RAISE_ERROR

Raise an Exception if the count of queries in a single view reaches this threshold. Default: 15.

TEST_QUERY_NUMBER_PRINT_QUERIES

Set to True to print out queries stack (with tracebacks). Default: False.

TEST_QUERY_COUNTER_IGNORE_PATTERNS

Exclude some queries from counting. Set as a list of strings containing regular expressions. Default: [".*SAVEPOINT.*"].

TEST_SELECT_FOR_UPDATE_LIMITER_ENABLED

Raise an Exception if the view tries to select_for_update more than one table. Default: False.

TEST_SELECT_FOR_UPDATE_WHITELISTED_TABLE_SETS

Allows select_for_update on specified combinations of multiple tables. Default: []. Example: [("table1", "table2"), …]

To override those settings in tests, use the django.test.override_settings decorator (check the docs).
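
For instance, a hedged sketch of relaxing the query-count threshold for a single heavy test case (the test class, URL name and threshold value are hypothetical):

from django.test import override_settings
from django.urls import reverse_lazy
from my_django_tweaks.test_utils import DatabaseAccessLintingApiTestCase

@override_settings(TEST_QUERY_NUMBER_RAISE_ERROR=30)
class TestHeavyEndpoint(DatabaseAccessLintingApiTestCase):
    def test_heavy_view(self):
        # the linter raises only once this view reaches 30 queries
        self.client.get(reverse_lazy("some-heavy-url"))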

To temporarily disable query counting (for example, so that queries executed in Celery tasks are not counted), use TestQueryCounter.freeze:

with TestQueryCounter.freeze():
    # the query counter will ignore all queries executed within this block
    ...

Bulk edit API mixin

Bulk edit/create/delete can be easily enabled for any model. All you need are a list serializer and a details serializer.

class BulkEditAPI(BulkEditAPIMixin, ListCreateAPIView):
    queryset = SomeModel.objects.all()
    serializer_class = SomeModelSerializer
    details_serializer_class = SomeModelDetailsSerializer
    BULK_EDIT_ALLOW_DELETE_ITEMS = True  # default: False
    BULK_EDIT_MAX_ITEMS = 10  # API will not be limited if set to None

Creating

To create a new object, a "temp_id" key must be passed along with the object's data. The temporary id is required to match validation errors to the appropriate object. The create method uses the serializer_class serializer to create new objects. The view must implement the create method to be able to add new items - if the method is not present, the view will still work, but adding new items will not be allowed.
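
For example, a hedged sketch of a creation payload (the "name" field is hypothetical); any validation errors are matched back to each item through its temp_id:

[{"temp_id": "tmp-1", "name": "First new item"}, {"temp_id": "tmp-2", "name": "Second new item"}]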

Editing

The details_serializer_class is used for editing items. If one item does not pass validation, none of the items will be edited.

Deleting

The BULK_EDIT_ALLOW_DELETE_ITEMS flag must be set to True to enable deleting objects. To mark that an object should be deleted, add "delete_object": True next to its id in the payload, for example:

[{"id": 1, "delete_object": True}]

Log configurator

This module contains a single function which allows you to configure Python logging using a configuration file. A default configuration can be provided (and committed with the source). A usage sketch follows the parameter list below.

def configure_logging(log_name, LOG_CONFIG_PATH, LOG_PATH, DEFAULT_LOG_FORMAT, RUNNING_UNITTEST):
    ...

Parameters

  • log_name: base name for the configuration filename. It will be used for both custom settings and default settings. Example: if set to 'my-app', the settings file lookup will use these names:

    • logging-my-app.py (for new python module format)

    • logging-my-app.ini (for old config format)

    • logging-my-app.default.py (fallback config, can be committed with sources)

  • LOG_CONFIG_PATH: path to the folder containing logging config files

  • LOG_PATH: path to the logs folder (used to configure the fallback logger in case all config files fail to load)

  • DEFAULT_LOG_FORMAT: format used by fallback formatter

  • RUNNING_UNITTEST: set to True if unit tests are running. Used to disable logging during unit tests (except for critical messages).
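
A minimal usage sketch, called from settings.py (the import path, directory layout and log format are assumptions):

import os
import sys

from my_django_tweaks.log_configurator import configure_logging  # import path is an assumption

configure_logging(
    log_name="my-app",
    LOG_CONFIG_PATH=os.path.join(BASE_DIR, "config", "logging"),  # BASE_DIR as in a standard Django settings module
    LOG_PATH=os.path.join(BASE_DIR, "logs"),
    DEFAULT_LOG_FORMAT="%(asctime)s %(levelname)s %(name)s: %(message)s",
    RUNNING_UNITTEST="test" in sys.argv,  # simple heuristic for detecting a test run
)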

WARNING

If you use django.utils.log.AdminEmailHandler as a log handler, the django.utils.log.AdminEmailHandler() function must be called after the SECRET_KEY generation in settings.py.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

my_django_tweaks-0.0.6.tar.gz (38.0 kB)

Uploaded Source

Built Distribution

my_django_tweaks-0.0.6-py2.py3-none-any.whl (78.1 kB)

Uploaded Python 2 Python 3

File details

Details for the file my_django_tweaks-0.0.6.tar.gz.

File metadata

  • Download URL: my_django_tweaks-0.0.6.tar.gz
  • Upload date:
  • Size: 38.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.1 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.8.5

File hashes

Hashes for my_django_tweaks-0.0.6.tar.gz
Algorithm Hash digest
SHA256 f134230f93fced82009eb4538f8b5ef3fa44264aff00cdaf32ffe874127ec6aa
MD5 3102f049559ab9bb1931e3cb53371ea3
BLAKE2b-256 7ed63d3b24e2d1a4844aa94cf826228ecf7f638aaae7676f4562096055a6ef8d

See more details on using hashes here.

File details

Details for the file my_django_tweaks-0.0.6-py2.py3-none-any.whl.

File metadata

  • Download URL: my_django_tweaks-0.0.6-py2.py3-none-any.whl
  • Upload date:
  • Size: 78.1 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.1 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.8.5

File hashes

Hashes for my_django_tweaks-0.0.6-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 d542d5e9290bb392c2e4de7b7c610a952d5d8b9c1b71a43577f932764ce057fa
MD5 99e87ed3e4b59d5925c0948266380580
BLAKE2b-256 9a473c9bea71dcfa94dbee25d27d7fffe2e355abb5d5d75b7e71445483a7044a

See more details on using hashes here.
