dipdup

Python SDK for developing indexers of Tezos smart contracts inspired by The Graph.

Installation

Python 3.8+ is required for dipdup to run.

$ pip install dipdup

Creating indexer

If you want to see dipdup in action before diving into the details, you can run the demo project and use it as a reference:

$ dipdup -c src/demo_hic_et_nunc/dipdup.yml run

Write configuration file

Create a new YAML file and adapt the following example to your needs:

spec_version: 0.0.1
package: dipdup_hic_et_nunc

database:
  kind: sqlite
  path: db.sqlite3

contracts:
  HEN_objkts:
    address: ${HEN_OBJKTS:-KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9}
  HEN_minter:
    address: ${HEN_MINTER:-KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton}

datasources:
  tzkt_mainnet:
    kind: tzkt
    url: ${TZKT_URL:-https://staging.api.tzkt.io}

indexes:
  operations_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    contract: HEN_objkts
    first_block: 0
    handlers:

      - callback: on_mint
        pattern:
          - destination: HEN_objkts
            entrypoint: mint_OBJKT
          - destination: HEN_minter
            entrypoint: mint

      - callback: on_transfer
        pattern:
          - destination: HEN_minter
            entrypoint: transfer

Initialize project structure

Run the following command, replacing config.yml with the path to the YAML file you just created:

$ dipdup -c config.yml init

This command will create a new package with the following structure (some lines were omitted for readability):

dipdup_hic_et_nunc/
├── handlers
│   ├── on_mint.py
│   ├── on_rollback.py
│   └── on_transfer.py
├── hasura-metadata.json
├── models.py
├── schemas
│   ├── KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9
│   │   └── parameter
│   │       └── mint_OBJKT.json
│   └── KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton
│       └── parameter
│           └── mint.json
└── types
    ├── KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9
    │   └── parameter
    │       └── mint_OBJKT.py
    └── KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton
        └── parameter
            └── mint.py

The schemas directory contains JSON schemas describing the parameters of the corresponding contract entrypoints. types contains Pydantic dataclasses generated from those schemas. Both directories are autogenerated; you don't need to modify them. The models and handlers modules will be discussed later.
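To give an idea of what ends up in types, here is a hand-written stand-in for a generated parameter class, using only the standard library (the generated code uses Pydantic, and the actual field names come from the contract's JSON schema, so these are illustrative):

```python
from dataclasses import dataclass
from typing import Dict


# Simplified stand-in for the generated class in
# types/.../parameter/mint.py. Field names are illustrative;
# the real ones are derived from the entrypoint's JSON schema.
@dataclass
class Mint:
    address: str
    amount: str
    token_id: str
    token_info: Dict[str, str]


mint = Mint(
    address="tz1...",
    amount="1",
    token_id="42",
    token_info={"": "ipfs://Qm..."},
)
```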

You can invoke the init command on an existing project (it must be in your PYTHONPATH). Do this each time you update contract addresses or models. Code you've written won't be overwritten.

Define models

Dipdup uses Tortoise under the hood, a fast asynchronous ORM supporting all major database engines. Check out the examples to learn how to use it.

Now open the models.py file in your project and define some models:

from tortoise import Model, fields


class Address(Model):
    address = fields.CharField(58, pk=True)


class Token(Model):
    id = fields.IntField(pk=True)
    token_id = fields.IntField()
    token_info = fields.CharField(255)
    holder = fields.ForeignKeyField('models.Address', 'tokens')
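With the sqlite backend from the example config, Tortoise maps these models to tables roughly like the following. This is a sketch of the implied schema using the stdlib sqlite3 module; the exact DDL Tortoise generates may differ in naming and constraints:

```python
import sqlite3

# Approximate schema implied by the models above. Tortoise generates
# the real DDL; table and column names here are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE address (
    address VARCHAR(58) NOT NULL PRIMARY KEY
);
CREATE TABLE token (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    token_id INTEGER NOT NULL,
    token_info VARCHAR(255) NOT NULL,
    holder_id VARCHAR(58) NOT NULL REFERENCES address (address)
);
""")

conn.execute("INSERT INTO address VALUES ('tz1abc')")
conn.execute(
    "INSERT INTO token (token_id, token_info, holder_id) VALUES (?, ?, ?)",
    (42, "ipfs://Qm...", "tz1abc"),
)
rows = conn.execute("SELECT token_id, holder_id FROM token").fetchall()
```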

Write event handlers

Now take a look at the handlers module generated by the init command. When an operation group matching the pattern block of a handler in the config arrives, the corresponding callback is fired. This example simply saves minted Hic Et Nunc tokens and their owners to the database:

from dipdup.models import HandlerContext, OperationContext
from dipdup_hic_et_nunc.models import *
from dipdup_hic_et_nunc.types.KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9.parameter.mint_OBJKT import MintObjkt
from dipdup_hic_et_nunc.types.KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton.parameter.mint import Mint


async def on_mint(
        ctx: HandlerContext,
        mint_OBJKT: OperationContext[MintObjkt],
        mint: OperationContext[Mint],
) -> None:
    address, _ = await Address.get_or_create(address=mint.parameter.address)

    for _ in range(int(mint.parameter.amount)):
        token = Token(
            token_id=int(mint.parameter.token_id),
            token_info=mint.parameter.token_info[''],
            holder=address,
        )
        await token.save()

The handler name on_rollback is reserved by dipdup; this special handler will be discussed later.
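Conceptually, a handler fires when the (destination, entrypoint) pairs of its pattern appear, in order, among the operations of a group. A simplified stdlib-only illustration of the idea (not dipdup's actual matching code):

```python
# Simplified illustration: an operation group matches a pattern when
# each (destination, entrypoint) step of the pattern appears in the
# group's operations, in order. Not dipdup's actual algorithm.
def matches(pattern, operations):
    it = iter(operations)
    return all(
        any(op == step for op in it)  # consume ops until the step is found
        for step in pattern
    )


on_mint_pattern = [
    ("HEN_objkts", "mint_OBJKT"),
    ("HEN_minter", "mint"),
]
group = [
    ("HEN_objkts", "mint_OBJKT"),
    ("HEN_minter", "mint"),
]
```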

Atomicity and persistency

Here are a few important things to know before running your indexer:

  • WARNING! Make sure the database you're connecting to is used by dipdup exclusively. When the index configuration or models change, the whole database is dropped and indexing starts from scratch.
  • Do not rename existing indexes in the config file without cleaning up the database first; dipdup won't handle the renaming automatically and will treat a renamed index as a new one.
  • Multiple indexes pointing to different contracts must not reuse the same models, because synchronization is performed index by index and then block by block.
  • Reorg messages signal chain reorganizations: some blocks, including all their operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so they're not something you can ignore. You have to handle these messages correctly, otherwise you will likely accumulate duplicate or, worse, invalid data. By default, dipdup starts indexing from scratch on such messages. To implement your own rollback logic, edit the generated on_rollback handler.
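One common rollback strategy is to tag every stored record with the block level it was indexed at and delete everything above the level being rolled back to. A minimal sketch of the idea in plain Python (the real on_rollback handler receives dipdup-specific context objects and would issue the equivalent ORM delete):

```python
# Sketch of level-based rollback: each record carries the block level
# it was indexed at, so rolling back to `to_level` means deleting
# every record above it.
def rollback(records, to_level):
    return [r for r in records if r["level"] <= to_level]


records = [
    {"token_id": 1, "level": 100},
    {"token_id": 2, "level": 101},
    {"token_id": 3, "level": 102},
]
survivors = rollback(records, to_level=100)
```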

Run your dapp

Now everything is ready to run your indexer:

$ dipdup -c config.yml run

Parameters wrapped in ${VARIABLE:-default_value} in the config can be overridden via the corresponding environment variables. For example, to use another TzKT instance:

$ TZKT_URL=https://api.tzkt.io dipdup -c config.yml run
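The ${VARIABLE:-default_value} syntax mirrors POSIX shell parameter expansion: use the environment value if it is set, otherwise fall back to the default. A rough stdlib sketch of how such substitution can be implemented (dipdup's actual config parser may differ):

```python
import re


# Expand ${VAR:-default} placeholders the way a POSIX shell would:
# take the value from `env` if present, otherwise use the default.
def expand(text, env):
    pattern = re.compile(r"\$\{(\w+):-([^}]*)\}")
    return pattern.sub(lambda m: env.get(m.group(1), m.group(2)), text)


line = "url: ${TZKT_URL:-https://staging.api.tzkt.io}"
expanded = expand(line, env={"TZKT_URL": "https://api.tzkt.io"})
defaulted = expand(line, env={})
```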

You can interrupt indexing at any moment; it will resume from the last processed block the next time you run your app.

Use docker-compose.yml included in this repo if you prefer to run dipdup in Docker:

$ docker-compose build
$ cp secrets.env.example secrets.env
$ # edit `secrets.env` file, change credentials
$ docker-compose up dipdup

Index templates

Sometimes you need to run multiple indexes with similar configs that differ only in contract addresses. In this case you can use index templates:

templates:
  trades:
    kind: operation
    datasource: tzkt_staging
    contract: <dex>
    handlers:
      - callback: on_fa12_token_to_tez
        pattern:
          - destination: <dex>
            entrypoint: tokenToTezPayment
          - destination: <token>
            entrypoint: transfer
      - callback: on_fa20_tez_to_token
        pattern:
          - destination: <dex>
            entrypoint: tezToTokenPayment
          - destination: <token>
            entrypoint: transfer

indexes:
  trades_fa12:
    template: trades
    values:
      dex: FA12_dex
      token: FA12_token

  trades_fa20:
    template: trades
    values:
      dex: FA20_dex
      token: FA20_token
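Resolving a template boils down to substituting each <placeholder> with the matching entry from the index's values mapping. A simplified sketch of that substitution (dipdup resolves templates internally; this is just the concept):

```python
import re


# Replace <placeholder> tokens in a template string with entries
# from an index's `values` mapping (simplified illustration).
def resolve(template_text, values):
    return re.sub(r"<\s*(\w+)\s*>", lambda m: values[m.group(1)], template_text)


template = "contract: <dex>\ndestination: <token>"
resolved = resolve(template, {"dex": "FA12_dex", "token": "FA12_token"})
```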

Optional: configure Hasura GraphQL Engine

The init command generates Hasura metadata JSON in the package root. You can use the configure-graphql command to apply it to a running GraphQL Engine instance:

$ dipdup -c config.yml configure-graphql --url http://127.0.0.1:8080 --admin-secret changeme

Or if using included docker-compose.yml example:

$ docker-compose up -d graphql-engine
$ docker-compose up configure-graphql

Optional: configure logging

You may want to tune logging to get notified of errors or to enable debug messages. Pass the path to a Python logging config in YAML format via the --logging-config argument. A default config to start with:

  version: 1
  disable_existing_loggers: false
  formatters:
    brief:
      format: "%(levelname)-8s %(name)-35s %(message)s"
  handlers:
    console:
      level: INFO
      formatter: brief
      class: logging.StreamHandler
      stream: ext://sys.stdout
  loggers:
    SignalRCoreClient:
      formatter: brief
    dipdup.datasources.tzkt.datasource:
      level: INFO
    dipdup.datasources.tzkt.cache:
      level: INFO
  root:
    level: INFO
    handlers:
      - console
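The YAML above is a standard Python logging dictConfig document. For reference, the same configuration expressed as a Python dict and applied with logging.config.dictConfig:

```python
import logging.config

# The same configuration as the YAML above, expressed as a dict.
config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "brief": {"format": "%(levelname)-8s %(name)-35s %(message)s"},
    },
    "handlers": {
        "console": {
            "level": "INFO",
            "formatter": "brief",
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        "dipdup.datasources.tzkt.datasource": {"level": "INFO"},
        "dipdup.datasources.tzkt.cache": {"level": "INFO"},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logger = logging.getLogger("dipdup.datasources.tzkt.datasource")
```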

Contribution

To set up the development environment you need to install the poetry package manager. Then run one of the following commands at the project's root:

$ # install project dependencies
$ make install
$ # run linters
$ make lint
$ # run tests
$ make test cover
$ # run full CI pipeline
$ make

About

This project is maintained by Baking Bad team. Development is supported by Tezos Foundation.
