ORM for the dataset library
Fast Active Record ORM for the dataset library
Why?
The dataset library is a great and easy tool for working with any SQL database. Unfortunately, it lacks an object mapper (ORM); if you need one, you are left with the complexity that is SQLAlchemy.
Enter dataset-orm
How?
Installation
$ pip install dataset-orm
Define model classes that map to a dataset.Table:
from dataset_orm import Model, Column, types, connect

class User(Model):
    username = Column(types.string, unique=True)
    data = Column(types.json)

connect('sqlite:///mydb.sqlite')
Alternatively, use the functional API, e.g. to create models dynamically:
import dataset_orm as ds

# db refers to a previously created database connection
User = ds.Model.from_spec(name='User',
                          columns=[ds.Column(ds.types.string, 'name', unique=True),
                                   ds.Column(ds.types.json, 'data')],
                          db=db)
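A dynamically created model behaves like a declared one. As a minimal sketch, using the find_one() query shown below (the 'alice' record and its data are illustrative only):

user = User(name='alice', data={'active': True})
user.save()
User.objects.find_one(name='alice').data
=>
{'active': True}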
Then create rows directly from Python objects:
user = User(username='dave', data={'sports': ['football', 'tennis']})
user.save()
user = User.objects.find_one(username='dave')
user.data
=>
{'sports': ['football', 'tennis']}
Query existing tables, ORM-style:
User = Model.from_table(db['customer'])
User.objects.all()
=>
[User(pk=1), User(pk=2), User(pk=3)]
user = User.objects.find_one(name='John Walker')
print(user.pk, user.name)
=>
1 John Walker
Update and delete
user = User.objects.find_one(name='John Walker')
user.place = 'New York'
user.save()
users = User.objects.find(place='London')
users.delete()
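Bulk updates can be sketched on top of find(); this assumes the query result iterates over model instances, which is not shown explicitly above:

# update every matching row in turn (sketch; place names are illustrative)
for user in User.objects.find(place='London'):
    user.place = 'Greater London'
    user.save()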
Store and access any data type, including JSON and binary values:
class User(Model):
    # in some dbs, unique strings must be limited in length
    username = Column(types.string(length=100), unique=True)
    picture = Column(types.binary)
user = User.objects.get(username='Dave')
with open('image.png', 'rb') as fimg:
    user.picture = fimg.read()
user.save()
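Reading the value back works the same way; a short sketch (file names are illustrative):

user = User.objects.find_one(username='Dave')
with open('image_copy.png', 'wb') as fout:
    fout.write(user.picture)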
Use the file column type for transparently storing binary data:
class Image(Model):
    imagefile = Column(types.file)
Usage:
img = Image()
with open('/path/to/image', 'rb') as f:
    img.imagefile.write(f)
img.save()
data = img.imagefile.read()
Here the imagefile field provides a file-like API. This is an efficient way to store binary data in the database: the file's data is split into chunks and written to the database in multiple parts. On reading back, the chunks are retrieved from the db in parallel in order to improve performance for large files. Tests indicate a speed-up of about 25% is possible compared to a plain binary field.
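To copy a stored file back out to disk, read() returns the reassembled bytes. A minimal sketch, assuming rows can be looked up by the pk attribute shown earlier:

img = Image.objects.find_one(pk=1)
with open('/path/to/copy', 'wb') as fout:
    fout.write(img.imagefile.read())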
You may use the dataset_orm.files API to get a filesystem-like interface to binary data stored in the database, without the need to define a model:
from dataset_orm import files
connect('sqlite:///test.sqlite')
files.write('myfile', b'some data')
files.read('myfile')
=> b'some data'
files.exists('myfile')
=> True
files.list()
=> ['myfile']
files.find('*file*')
=> ['myfile']
files.remove('myfile')
The convenience methods put() and get() allow for an even simpler use of the files API:
files.put(b'some data', 'myfile')
data = files.get('myfile').read()
=> b'some data'
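For example, to round-trip a local file through the database using only put() and get() as shown above (file names are illustrative):

with open('image.png', 'rb') as f:
    files.put(f.read(), 'image.png')
data = files.get('image.png').read()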