
dipdup

PyPI version Tests Docker Build Status Made With License: MIT

Python SDK for developing indexers of Tezos smart contracts inspired by The Graph.

Installation

Python 3.8+ is required for dipdup to run.

$ pip install dipdup
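
If you prefer to keep dependencies isolated, the usual virtual environment workflow applies (a minimal sketch using the standard venv module; adjust the interpreter name to your Python 3.8+ installation):

$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install dipdup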

Creating indexer

If you want to see dipdup in action before diving into details, you can run a demo project and use it as a reference. Clone this repo and run the following command in its root directory:

$ dipdup -c src/demo_hic_et_nunc/dipdup.yml run

Examples in this guide are based on a simplified Hic Et Nunc demo.

Write configuration file

Create a new YAML file and adapt the following example to your needs:

spec_version: 0.1
package: demo_hic_et_nunc

database:
  kind: sqlite
  path: db.sqlite3

contracts:
  HEN_objkts: 
    address: ${HEN_OBJKTS:-KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton}
    typename: hen_objkts
  HEN_minter: 
    address: ${HEN_MINTER:-KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9}
    typename: hen_minter

datasources:
  tzkt_mainnet:
    kind: tzkt
    url: ${TZKT_URL:-https://staging.api.tzkt.io}

indexes:
  hen_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    contracts:
      - HEN_minter
    handlers:
      - callback: on_mint
        pattern:
          - destination: HEN_minter
            entrypoint: mint_OBJKT
          - destination: HEN_objkts
            entrypoint: mint

Each handler in the index config matches an operation group based on the entrypoints and destination addresses listed in its pattern. Matched operation groups are passed to the handlers you define.

Initialize project structure

Run the following command, replacing config.yml with the path to the YAML file you just created:

$ dipdup -c config.yml init

This command will create a new package with the following structure (some lines are omitted for readability):

demo_hic_et_nunc/
├── handlers
│   ├── on_mint.py
│   └── on_rollback.py
├── hasura-metadata.json
├── models.py
└── types
    ├── hen_minter
    │   ├── storage.py
    │   └── parameter
    │       └── mint_OBJKT.py
    └── hen_objkts
        ├── storage.py
        └── parameter
            └── mint.py

The types directory contains Pydantic dataclasses for contract storage and parameters. This directory is autogenerated; you shouldn't modify any files in it. The models and handlers modules are placeholders for your future code and will be discussed later.

You can invoke the init command on an existing project (it must be in your PYTHONPATH). Do this each time you update contract addresses or models; the code you've written won't be overwritten.
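
For example, if your package lives under src/ as in the demo project, re-running init might look like this (the PYTHONPATH value is an assumption about your layout):

$ export PYTHONPATH="$PWD/src:$PYTHONPATH"
$ dipdup -c config.yml init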

Define models

Dipdup uses Tortoise ORM under the hood, a fast asynchronous ORM supporting all major database engines. Check out the examples to learn how to use it.

Now open the models.py file in your project and define some models:

from tortoise import Model, fields


class Holder(Model):
    address = fields.CharField(58, pk=True)


class Token(Model):
    id = fields.BigIntField(pk=True)
    creator = fields.ForeignKeyField('models.Holder', 'tokens')
    supply = fields.IntField()
    level = fields.BigIntField()
    timestamp = fields.DatetimeField()
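
These are plain Tortoise models, so the usual query API applies outside of handlers as well. A minimal sketch of querying them (the filter, order_by, limit and count calls are standard Tortoise, not dipdup-specific):

import demo_hic_et_nunc.models as models


async def count_tokens_by(address: str) -> int:
    # Count tokens whose creator has the given address.
    return await models.Token.filter(creator__address=address).count()


async def latest_tokens(limit: int = 10) -> list:
    # Most recently indexed tokens, newest level first.
    return await models.Token.all().order_by('-level').limit(limit)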

Write event handlers

Now take a look at the handlers module generated by the init command. When an operation group matching the pattern block of a handler in your config arrives, the corresponding callback is fired. This example simply saves minted Hic Et Nunc tokens and their owners to the database:

import demo_hic_et_nunc.models as models
from demo_hic_et_nunc.types.hen_minter.parameter.mint_objkt import MintOBJKTParameter
from demo_hic_et_nunc.types.hen_minter.storage import HenMinterStorage
from demo_hic_et_nunc.types.hen_objkts.parameter.mint import MintParameter
from demo_hic_et_nunc.types.hen_objkts.storage import HenObjktsStorage
from dipdup.models import OperationContext, OperationHandlerContext


async def on_mint(
    ctx: OperationHandlerContext,
    mint_objkt: OperationContext[MintOBJKTParameter, HenMinterStorage],
    mint: OperationContext[MintParameter, HenObjktsStorage],
) -> None:
    holder, _ = await models.Holder.get_or_create(address=mint.parameter.address)
    token = models.Token(
        id=mint.parameter.token_id,
        creator=holder,
        supply=mint.parameter.amount,
        level=mint.data.level,
        timestamp=mint.data.timestamp,
    )
    await token.save()

The handler name on_rollback is reserved by dipdup; this special handler is discussed in the next section.

Atomicity and persistency

Here are a few important things to know before running your indexer:

  • WARNING! Make sure that the database you're connecting to is used by dipdup exclusively. When the index configuration or models change, the whole database will be dropped and indexing will start from scratch.
  • Do not rename existing indexes in the config file without cleaning up the database first; dipdup won't handle this renaming automatically and will treat a renamed index as a new one.
  • Multiple indexes pointing to different contracts must not reuse the same models because synchronization is performed by index first and then by block.
  • Reorg messages signal chain reorganizations, when some blocks, including all their operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so this is not something you can ignore. You have to handle such messages correctly, otherwise you will likely accumulate duplicate or, worse, invalid data. By default, dipdup will start indexing from scratch on such messages. To implement your own rollback logic, edit the generated on_rollback handler (a sketch follows this list).
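
As an illustration only, a custom rollback could delete everything indexed above the level being rolled back to. This is a hedged sketch: the signature of the generated on_rollback stub may differ between dipdup versions, so adapt it to what init produced for you:

import demo_hic_et_nunc.models as models


async def on_rollback(from_level: int, to_level: int) -> None:
    # Hypothetical strategy: drop rows indexed above the rollback target level,
    # so they can be re-indexed once the new branch arrives.
    await models.Token.filter(level__gt=to_level).delete()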

Run your dapp

Now everything is ready to run your indexer:

$ dipdup -c config.yml run

Parameters wrapped with ${VARIABLE:-default_value} in the config can be set via the corresponding environment variables. For example, if you want to use another TzKT instance:

$ TZKT_URL=https://api.tzkt.io dipdup -c config.yml run

You can interrupt indexing at any moment; it will resume from the last processed block the next time you run your app.

Use the docker-compose.yml included in this repo if you prefer to run dipdup in Docker:

$ docker-compose build
$ # example target, edit volumes section to change dipdup config
$ docker-compose up hic_et_nunc

For debugging purposes, you can index only a specific block range and skip realtime indexing. To do this, set the first_block and last_block fields in the index config.
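
For example, restricting the demo index to an arbitrary range could look like this (the block numbers are placeholders; the contracts and handlers sections stay exactly as shown earlier):

indexes:
  hen_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    first_block: 1000000
    last_block: 1001000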

Index templates

Sometimes you need to run multiple indexes with similar configs whose only difference is the contract addresses. In this case, you can use index templates:

templates:
  trades:
    kind: operation
    datasource: tzkt_staging
    contracts:
      - <dex>
    handlers:
      - callback: on_fa12_token_to_tez
        pattern:
          - destination: <dex>
            entrypoint: tokenToTezPayment
          - destination: <token>
            entrypoint: transfer
      - callback: on_fa20_tez_to_token
        pattern:
          - destination: <dex>
            entrypoint: tezToTokenPayment
          - destination: <token>
            entrypoint: transfer

indexes:
  trades_fa12:
    template: trades
    values:
      dex: FA12_dex
      token: FA12_token

  trades_fa20:
    template: trades
    values:
      dex: FA20_dex
      token: FA20_token

The template values mapping can be accessed from within handlers via ctx.template_values.
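
For instance, a handler spawned from the trades template above could read its values like this (a minimal sketch; the handler arguments are abbreviated, and only the dex and token keys come from the template definition):

async def on_fa12_token_to_tez(ctx, token_to_tez, transfer) -> None:
    # Values substituted into the template for this particular index.
    dex_contract = ctx.template_values['dex']      # e.g. 'FA12_dex'
    token_contract = ctx.template_values['token']  # e.g. 'FA12_token'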

Optional: configure Hasura GraphQL Engine

When using PostgreSQL as a storage solution, you can use the Hasura integration to get a GraphQL API out of the box. Add the following section to your config; Hasura will be configured automatically when you run your indexer.

hasura:
  url: http://hasura:8080
  admin_secret: changeme

When using the included docker-compose example, make sure you run Hasura first:

$ docker-compose up -d hasura

Then run your indexer and navigate to 127.0.0.1:8080.
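
Keep in mind that Hasura implies a PostgreSQL database section instead of the sqlite one used above. A hedged sketch of what it might look like; the exact field names may vary between dipdup versions, so check the documentation for yours:

database:
  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: changeme
  database: dipdup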

Optional: configure logging

You may want to tune logging to get notifications on errors or to enable debug messages. Specify the path to a Python logging config in YAML format via the --logging-config argument. A default config to start with:

  version: 1
  disable_existing_loggers: false
  formatters:
    brief:
      format: "%(levelname)-8s %(name)-35s %(message)s"
  handlers:
    console:
      level: INFO
      formatter: brief
      class: logging.StreamHandler
      stream : ext://sys.stdout
  loggers:
    SignalRCoreClient:
      formatter: brief
    dipdup.datasources.tzkt.datasource:
      level: INFO
    dipdup.datasources.tzkt.cache:
      level: INFO
  root:
    level: INFO
    handlers:
      - console
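
Assuming you saved this config as logging.yml, the run command could then look like this (the flag placement mirrors the -c option; adjust to your CLI version if needed):

$ dipdup -c config.yml --logging-config logging.yml run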

Contribution

To set up the development environment, you need to install the poetry package manager and GNU Make. Then run one of the following commands at the project's root:

$ # install project dependencies
$ make install
$ # run linters
$ make lint
$ # run tests
$ make test cover
$ # run full CI pipeline
$ make

Contact

About

This project is maintained by the Baking Bad team. Development is supported by the Tezos Foundation.

