Pulsar-odm is built on top of the pulsar, sqlalchemy and greenlet libraries to provide an implicit asynchronous object data mapper for code written with asyncio. Currently only one dialect is implemented and tested: postgresql (via psycopg2).
To use the object data mapper within standard blocking code, one needs to run it inside pulsar's GreenPool, as the following snippet highlights:
```python
from datetime import datetime

from sqlalchemy import Integer, Column, String, DateTime, Boolean

from pulsar.apps.greenio import GreenPool

import odm


class Task(odm.Model):
    id = Column(Integer, primary_key=True)
    subject = Column(String(250))
    done = Column(Boolean, default=False)
    created = Column(DateTime, default=datetime.utcnow)

    def __str__(self):
        return '%d: %s' % (self.id, self.subject)


def example(mp):
    # Make sure table is available
    mp.table_create()
    # Insert a new Task in the table
    with mp.begin() as session:
        task = mp.task(subject='my task')
        session.add(task)
    return task


if __name__ == '__main__':
    pool = GreenPool()
    mp = odm.Mapper('postgresql+green://odm:firstname.lastname@example.org:5432/odmtests')
    mp.register(Task)
    task = pool._loop.run_until_complete(pool.submit(example, mp))
    print(task)
```
The example function is executed in a greenlet other than the main one. This is important; calling it directly from the main greenlet fails:
```
>>> example(mp)
Traceback (most recent call last):
  ...
RuntimeError: acquire in main greenlet
```
Running the function in the greenlet pool guarantees correct asynchronous execution. When psycopg2 executes a command against the database on a child greenlet, it switches control to the parent (main) greenlet, which is controlled by the asyncio event loop, so that other asynchronous operations can be carried out. Once the result of the execution is ready, control switches back to the original child greenlet and the example function continues.
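The switching pattern described above can be illustrated with the greenlet library alone. This is a minimal sketch, not pulsar's or psycopg2's actual implementation: the names (`await_in_child`, `example_like_function`) are hypothetical, and `asyncio.sleep` stands in for a database round-trip. The child greenlet hands a coroutine to the parent, the parent drives it on the asyncio event loop, then resumes the child with the result:

```python
import asyncio
from greenlet import getcurrent, greenlet


def await_in_child(coro):
    # Runs in a child greenlet: hand the coroutine to the parent
    # greenlet and suspend until the parent switches back with a result.
    return getcurrent().parent.switch(coro)


def example_like_function():
    # Looks synchronous, but the "blocking" call below actually yields
    # control to the asyncio event loop via the parent greenlet.
    return await_in_child(asyncio.sleep(0.01, result='query result'))


async def main():
    child = greenlet(example_like_function)
    value = child.switch()               # run the child until it needs I/O
    while asyncio.iscoroutine(value):
        result = await value             # parent awaits the I/O on the loop
        value = child.switch(result)     # resume the child with the result
    return value                         # the child's return value


if __name__ == '__main__':
    print(asyncio.run(main()))
```

This is the trick that lets `example` in the snippet above read like ordinary blocking code while the event loop stays free to run other tasks.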
To run the tests, create a new role and database. For postgresql:
```
psql -a -f tests/db.sql
```
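The contents of `tests/db.sql` are not shown here; a setup of roughly this shape would match the mapper's connection string above (role name `odm` and database name `odmtests` are taken from the snippet, the password is a placeholder, and the actual script may differ):

```sql
-- Hypothetical sketch only; not the actual tests/db.sql
CREATE ROLE odm WITH LOGIN PASSWORD 'your-password-here';
CREATE DATABASE odmtests OWNER odm;
```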