dipdup
Python SDK for developing indexers of Tezos smart contracts inspired by The Graph.
Installation
Python 3.8+ is required for dipdup to run.
$ pip install dipdup
Creating indexer
If you want to see dipdup in action before diving into the details, you can run a demo project and use it as a reference. Clone this repo and run the following command in its root directory:
$ dipdup -c src/demo_hic_et_nunc/dipdup.yml run
Examples in this guide are based on a simplified Hic Et Nunc demo.
Write configuration file
Create a new YAML file and adapt the following example to your needs:
spec_version: 0.1
package: demo_hic_et_nunc

database:
  kind: sqlite
  path: db.sqlite3

contracts:
  HEN_objkts:
    address: ${HEN_OBJKTS:-KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton}
    typename: hen_objkts
  HEN_minter:
    address: ${HEN_MINTER:-KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9}
    typename: hen_minter

datasources:
  tzkt_mainnet:
    kind: tzkt
    url: ${TZKT_URL:-https://staging.api.tzkt.io}

indexes:
  hen_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    contracts:
      - HEN_minter
    handlers:
      - callback: on_mint
        pattern:
          - type: transaction
            destination: HEN_minter
            entrypoint: mint_OBJKT
          - type: transaction
            destination: HEN_objkts
            entrypoint: mint
Each handler in the index config matches an operation group based on the entrypoints and destination addresses listed in its pattern. Matched operation groups are passed to the handlers you define.
Initialize project structure
Run the following command, replacing config.yml with the path to the YAML file you just created:
$ dipdup -c config.yml init
This command will create a new package with the following structure (some lines were omitted for readability):
demo_hic_et_nunc/
├── handlers
│   ├── on_mint.py
│   └── on_rollback.py
├── hasura-metadata.json
├── models.py
└── types
    ├── hen_minter
    │   ├── storage.py
    │   └── parameter
    │       └── mint_OBJKT.py
    └── hen_objkts
        ├── storage.py
        └── parameter
            └── mint.py
The types directory contains Pydantic dataclasses for contract storage and parameters. This directory is autogenerated; you shouldn't modify any files in it. The models and handlers modules are placeholders for your future code and will be discussed later.

You can invoke the init command on an existing project (it must be in your PYTHONPATH). Do it each time you update contract addresses or models. Code you've written won't be overwritten.
Define models
Dipdup uses Tortoise ORM under the hood, a fast asynchronous ORM supporting all major database engines. Check out the examples to learn how to use it.
Now open models.py
file in your project and define some models:
from tortoise import Model, fields


class Holder(Model):
    address = fields.CharField(58, pk=True)


class Token(Model):
    id = fields.BigIntField(pk=True)
    creator = fields.ForeignKeyField('models.Holder', 'tokens')
    supply = fields.IntField()
    level = fields.BigIntField()
    timestamp = fields.DatetimeField()
Write event handlers
Now take a look at the handlers module generated by the init command. When an operation group matching the pattern block of a handler in the config arrives, its callback is fired. This example simply saves minted Hic Et Nunc tokens and their owners to the database:
import demo_hic_et_nunc.models as models

from demo_hic_et_nunc.types.hen_minter.parameter.mint_objkt import MintOBJKTParameter
from demo_hic_et_nunc.types.hen_minter.storage import HenMinterStorage
from demo_hic_et_nunc.types.hen_objkts.parameter.mint import MintParameter
from demo_hic_et_nunc.types.hen_objkts.storage import HenObjktsStorage
from dipdup.models import TransactionContext, OperationHandlerContext


async def on_mint(
    ctx: OperationHandlerContext,
    mint_objkt: TransactionContext[MintOBJKTParameter, HenMinterStorage],
    mint: TransactionContext[MintParameter, HenObjktsStorage],
) -> None:
    holder, _ = await models.Holder.get_or_create(address=mint.parameter.address)
    token = models.Token(
        id=mint.parameter.token_id,
        creator=holder,
        supply=mint.parameter.amount,
        level=mint.data.level,
        timestamp=mint.data.timestamp,
    )
    await token.save()
The handler name on_rollback is reserved by dipdup; this special handler will be discussed later.
Atomicity and persistency
Here are a few important things to know before running your indexer:
- WARNING! Make sure the database you're connecting to is used by dipdup exclusively. When the index configuration or models change, the whole database will be dropped and indexing will start from scratch.
- Do not rename existing indexes in the config file without cleaning up the database first; dipdup won't handle this renaming automatically and will treat the renamed index as a new one.
- Multiple indexes pointing to different contracts must not reuse the same models, because synchronization is performed index by index first, then block by block.
- Reorg messages signal chain reorganizations, in which some blocks, including all their operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so this is not something you can ignore. You have to handle such messages correctly, otherwise you will likely accumulate duplicate or, worse, invalid data. By default, dipdup starts indexing from scratch on such messages. To implement your own rollback logic, edit the generated on_rollback handler.
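Since the Token model above stores the block level of every row, one possible custom rollback strategy is to delete rows indexed above the level the chain reverted to. Below is a minimal, database-free sketch of that idea; the IndexedToken dataclass and rollback function are illustrative and not part of dipdup's API:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class IndexedToken:
    token_id: int
    level: int  # block level at which the row was indexed


def rollback(rows: List[IndexedToken], to_level: int) -> List[IndexedToken]:
    """Drop everything indexed above `to_level`, keeping earlier rows intact."""
    return [row for row in rows if row.level <= to_level]


rows = [IndexedToken(1, 100), IndexedToken(2, 101), IndexedToken(3, 102)]
rows = rollback(rows, to_level=100)
print([r.token_id for r in rows])  # → [1]
```

With a real database the same idea becomes a single filtered delete query; the key point is that every model you want to roll back selectively must record the level it was indexed at.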
Run your dapp
Now everything is ready to run your indexer:
$ dipdup -c config.yml run
Parameters wrapped with ${VARIABLE:-default_value} in the config can be set via the corresponding environment variables. For example, if you want to use another TzKT instance:
$ TZKT_URL=https://api.tzkt.io dipdup -c config.yml run
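This substitution behaves like shell parameter expansion: the environment variable wins if set, otherwise the default after `:-` is used. A minimal sketch of the mechanics (not dipdup's actual implementation):

```python
import os
import re

# Matches ${VAR:-default} and captures the variable name and the default value.
_PATTERN = re.compile(r'\$\{(\w+):-(.*?)\}')


def expand(value: str) -> str:
    """Replace ${VAR:-default} with the environment value, or the default."""
    return _PATTERN.sub(lambda m: os.environ.get(m.group(1), m.group(2)), value)


os.environ.pop('TZKT_URL', None)
print(expand('${TZKT_URL:-https://staging.api.tzkt.io}'))  # → https://staging.api.tzkt.io

os.environ['TZKT_URL'] = 'https://api.tzkt.io'
print(expand('${TZKT_URL:-https://staging.api.tzkt.io}'))  # → https://api.tzkt.io
```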
You can interrupt indexing at any moment; it will resume from the last processed block the next time you run your app.
Use the docker-compose.yml included in this repo if you prefer to run dipdup in Docker:
$ docker-compose build
$ # example target, edit volumes section to change dipdup config
$ docker-compose up hic_et_nunc
For debugging purposes, you can index a specific block range only and skip realtime indexing. To do this, set the first_block and last_block fields in the index config.
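An index restricted to a fixed range might look like this (the block numbers are illustrative placeholders, not values from the demo):

```yaml
indexes:
  hen_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    first_block: 1000000
    last_block: 1001000
    contracts:
      - HEN_minter
```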
Index templates
Sometimes you need to run multiple indexes with similar configs whose only difference is the contract addresses. In this case you can use index templates like this:
templates:
  trades:
    kind: operation
    datasource: tzkt_staging
    contracts:
      - <dex>
    handlers:
      - callback: on_fa12_token_to_tez
        pattern:
          - type: transaction
            destination: <dex>
            entrypoint: tokenToTezPayment
          - type: transaction
            destination: <token>
            entrypoint: transfer
      - callback: on_fa20_tez_to_token
        pattern:
          - type: transaction
            destination: <dex>
            entrypoint: tezToTokenPayment
          - type: transaction
            destination: <token>
            entrypoint: transfer

indexes:
  trades_fa12:
    template: trades
    values:
      dex: FA12_dex
      token: FA12_token
  trades_fa20:
    template: trades
    values:
      dex: FA20_dex
      token: FA20_token
The template values mapping can be accessed from within handlers via ctx.template_values.
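Conceptually, instantiating a template amounts to substituting each `<placeholder>` in the template body with the corresponding entry from values. A simplified illustration of that substitution (not dipdup's actual implementation):

```python
from typing import Any, Dict


def resolve_template(template: Any, values: Dict[str, str]) -> Any:
    """Recursively substitute <key> placeholders with their template values."""
    if isinstance(template, str):
        for key, value in values.items():
            template = template.replace(f'<{key}>', value)
        return template
    if isinstance(template, dict):
        return {k: resolve_template(v, values) for k, v in template.items()}
    if isinstance(template, list):
        return [resolve_template(item, values) for item in template]
    return template


pattern = [
    {'type': 'transaction', 'destination': '<dex>', 'entrypoint': 'tokenToTezPayment'},
    {'type': 'transaction', 'destination': '<token>', 'entrypoint': 'transfer'},
]
resolved = resolve_template(pattern, {'dex': 'FA12_dex', 'token': 'FA12_token'})
print(resolved[0]['destination'])  # → FA12_dex
```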
Optional: configure Hasura GraphQL Engine
When using PostgreSQL as a storage solution, you can use the Hasura integration to get a GraphQL API out of the box. Add the following section to your config; Hasura will be configured automatically when you run your indexer.
hasura:
  url: http://hasura:8080
  admin_secret: changeme
When using the included docker-compose example, make sure you run Hasura first:
$ docker-compose up -d hasura
Then run your indexer and navigate to 127.0.0.1:8080.
Optional: configure logging
You may want to tune logging to get notified of errors or to enable debug messages. Pass the path to a Python logging config in YAML format via the --logging-config argument. A default config to start with:
version: 1
disable_existing_loggers: false
formatters:
  brief:
    format: "%(levelname)-8s %(name)-35s %(message)s"
handlers:
  console:
    level: INFO
    formatter: brief
    class: logging.StreamHandler
    stream: ext://sys.stdout
loggers:
  SignalRCoreClient:
    formatter: brief
  dipdup.datasources.tzkt.datasource:
    level: INFO
  dipdup.datasources.tzkt.cache:
    level: INFO
root:
  level: INFO
  handlers:
    - console
Contribution
To set up the development environment you need to install the poetry package manager and GNU Make. Then run one of the following commands at the project's root:
$ # install project dependencies
$ make install
$ # run linters
$ make lint
$ # run tests
$ make test cover
$ # run full CI pipeline
$ make
Contact
- Telegram chat: @baking_bad_chat
- Slack channel: #baking-bad
About
This project is maintained by Baking Bad team. Development is supported by Tezos Foundation.