Automate CRUD actions with a Falcon API
Makes RESTful CRUD easier.
Acknowledgements
This is a Falcon 2 compatible adaptation of Gary Monson’s Falcon AutoCRUD package – huge props to him for building out an amazing set of features! I very much plan to keep this package in the same spirit and format as the original.
Quick start for contributing
```
virtualenv -p `which python3` virtualenv
source virtualenv/bin/activate
pip install -r requirements.txt
pip install -r dev_requirements.txt
nosetests
```
This runs the tests with SQLite. To run the tests with Postgres (using pg8000), you must have a Postgres server running, and a postgres user with permission to create databases:
```
export BIONIC_DSN=postgresql+pg8000://myuser:mypassword@localhost:5432
nosetests
```
Some tests run only against Postgres, since they cover features specific to Postgres, such as Postgres data types.
Usage
Declare your SQLAlchemy models:
```python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    age = Column(Integer)
```
Declare your resources:
```python
from bionic_falcon.resource import CollectionResource, SingleResource

class EmployeeCollectionResource(CollectionResource):
    model = Employee

class EmployeeResource(SingleResource):
    model = Employee
```
Apply them to your app, ensuring you pass an SQLAlchemy engine to the resource classes:
```python
from sqlalchemy import create_engine
import falcon
from bionic_falcon.middleware import Middleware

db_engine = create_engine('sqlite:///stuff.db')

app = falcon.API(
    middleware=[Middleware()],
)
app.add_route('/employees', EmployeeCollectionResource(db_engine))
app.add_route('/employees/{id}', EmployeeResource(db_engine))
```
This automatically creates RESTful endpoints for your resources:
```
http GET http://localhost/employees
http GET http://localhost/employees?name=Bob
http GET http://localhost/employees?age__gt=24
http GET http://localhost/employees?age__gte=25
http GET http://localhost/employees?age__lt=25
http GET http://localhost/employees?age__lte=24
http GET http://localhost/employees?name__contains=John
http GET http://localhost/employees?name__startswith=John
http GET http://localhost/employees?name__endswith=Smith
http GET http://localhost/employees?name__icontains=john
http GET http://localhost/employees?name__istartswith=john
http GET http://localhost/employees?name__iendswith=smith
http GET http://localhost/employees?name__in=[Grace Hopper,Ada Lovelace]
http GET http://localhost/employees?company_id__null=1
http GET http://localhost/employees?company_id__null=0
echo '{"name": "Jim"}' | http POST http://localhost/employees
http GET http://localhost/employees/100
echo '{"name": "Jim"}' | http PUT http://localhost/employees/100
echo '{"name": "Jim"}' | http PATCH http://localhost/employees/100
http DELETE http://localhost/employees/100

# POST an array to add entities in bulk
echo '[{"name": "Carol"}, {"name": "Elisa"}]' | http POST http://localhost/employees
```
Note that by default, PUT will only update, and will not insert a new resource if a matching one does not exist at the address. If you wish new resources to be created, then add the following to your resource:
allow_put_insert = True
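For example, a single-resource class that allows PUT to insert might look like this (a minimal sketch based on the Employee model above):

```python
class EmployeeResource(SingleResource):
    model = Employee
    # PUT /employees/{id} will now create the row if none exists at that id
    allow_put_insert = True
```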
Limiting methods
By default collections will autogenerate methods GET, POST and PATCH, while single resources will autogenerate methods GET, PUT, PATCH, DELETE.
To limit which methods are autogenerated for your resource, simply list method names as follows:
```python
# Able to create and search the collection:
class AccountCollectionResource(CollectionResource):
    model = Account
    methods = ['GET', 'POST']

# Only able to read individual accounts:
class AccountResource(SingleResource):
    model = Account
    methods = ['GET']
```
Pre-method functionality
To do something before a POST, PATCH or DELETE is applied, add special methods as follows:
```python
class AccountCollectionResource(CollectionResource):
    model = Account

    def before_post(self, req, resp, db_session, resource, *args, **kwargs):
        # Anything you do with db_session is in the same transaction as the
        # resource creation. 'resource' is the new resource, not yet added to
        # the database.
        pass

class AccountResource(SingleResource):
    model = Account

    def before_patch(self, req, resp, db_session, resource, *args, **kwargs):
        # Anything you do with db_session is in the same transaction as the
        # resource update. 'resource' is the modified resource, not yet saved
        # to the database.
        pass

    def before_delete(self, req, resp, db_session, resource, *args, **kwargs):
        # Anything you do with db_session is in the same transaction as the
        # resource delete. 'resource' is the resource to be deleted (or
        # "marked as deleted" - see the section on "Not really deleting").
        pass
```
Post-method functionality
To do something after a method has succeeded, add special methods as follows:
```python
class AccountCollectionResource(CollectionResource):
    model = Account

    def after_get(self, req, resp, collection, *args, **kwargs):
        # 'collection' is the SQLAlchemy collection resulting from the search
        pass

    def after_post(self, req, resp, new, *args, **kwargs):
        # 'new' is the created SQLAlchemy instance
        pass

    def after_patch(self, req, resp, *args, **kwargs):
        pass

class AccountResource(SingleResource):
    model = Account

    def after_get(self, req, resp, item, *args, **kwargs):
        # 'item' is the retrieved SQLAlchemy instance
        pass

    def after_put(self, req, resp, item, *args, **kwargs):
        # 'item' is the changed SQLAlchemy instance
        pass

    def after_patch(self, req, resp, item, *args, **kwargs):
        # 'item' is the patched SQLAlchemy instance
        pass

    def after_delete(self, req, resp, item, *args, **kwargs):
        pass
```
Be careful not to raise an exception in any of the above methods, as it will propagate to the client as a 500 Internal Server Error.
Modifying a patch
If you want to modify the patched resource before it is saved (e.g. to set default values), you can override the default empty method in SingleResource:
```python
class AccountResource(SingleResource):
    model = Account

    def modify_patch(self, req, resp, resource, *args, **kwargs):
        """Add 'arino' to people's names."""
        resource.name = resource.name + 'arino'
```
Filters/Preconditions
You may filter on GET, and set preconditions on single resource PATCH or DELETE:
```python
class AccountCollectionResource(CollectionResource):
    model = Account

    def get_filter(self, req, resp, query, *args, **kwargs):
        # Only allow getting accounts below id 5
        return query.filter(Account.id < 5)

class AccountResource(SingleResource):
    model = Account

    def get_filter(self, req, resp, query, *args, **kwargs):
        # Only allow getting accounts below id 5
        return query.filter(Account.id < 5)

    def patch_precondition(self, req, resp, query, *args, **kwargs):
        # Only allow setting the owner of a non-owned account
        if 'owner' in req.context['doc'] and req.context['doc']['owner'] is not None:
            return query.filter(Account.owner == None)
        else:
            return query

    def delete_precondition(self, req, resp, query, *args, **kwargs):
        # Only allow deletes of non-owned accounts
        return query.filter(Account.owner == None)
```
Note that there is an opportunity for a race condition here, where another process updates the row AFTER the check triggered by patch_precondition is run, but BEFORE the row update. This would leave inconsistent data in your application if the other update would make the precondition no longer hold.
To prevent this, add a versioning column to your model. When your model contains such a column, and your precondition checks the required conditions before updating, you are guaranteed that if another process changes the row in the meantime the update will fail and a 409 response will be returned. A 409 does not necessarily mean the row no longer satisfies the precondition, so the caller can retry the update, and it will succeed if the precondition still holds.
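One way to add such a column is SQLAlchemy's standard version counter. The sketch below is an assumption-laden example: it presumes bionic-falcon relies on SQLAlchemy's version_id_col mechanism to detect concurrent updates, so check the package source if in doubt.

```python
from sqlalchemy import Column, Integer, String

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    owner = Column(String(50), nullable=True)
    version = Column(Integer, nullable=False)

    # Assumption: the standard SQLAlchemy version counter - every UPDATE
    # increments 'version' and fails if the row changed under the session.
    __mapper_args__ = {'version_id_col': version}
```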
This versioning only helps you on an UPDATE, not a DELETE, so if you want a delete_precondition to be protected, you will need to use mark_deleted to update the row (see “not really deleting”, next), instead of doing a true delete.
Not really deleting
If you want to just mark a resource as deleted in the database, rather than actually deleting the row, define a mark_deleted method in your SingleResource subclass:
```python
from datetime import datetime

class AccountResource(SingleResource):
    model = Account

    def mark_deleted(self, req, resp, instance, *args, **kwargs):
        instance.deleted = datetime.utcnow()
```
This will cause the changed instance to be updated in the database instead of doing a DELETE.
Of course, the database row will still be accessible via GET, but you can automatically filter out “deleted” rows like this:
```python
from datetime import datetime

class AccountCollectionResource(CollectionResource):
    model = Account

    def get_filter(self, req, resp, resources, *args, **kwargs):
        return resources.filter(Account.deleted == None)

class AccountResource(SingleResource):
    model = Account

    def get_filter(self, req, resp, resources, *args, **kwargs):
        return resources.filter(Account.deleted == None)

    def mark_deleted(self, req, resp, instance, *args, **kwargs):
        instance.deleted = datetime.utcnow()
```
You could also look at the request to only filter out “deleted” rows for some users.
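For example, here is a minimal sketch that hides deleted rows unless the caller sends an include_deleted=true query parameter; the parameter name is an illustrative assumption, not something the package defines.

```python
class AccountCollectionResource(CollectionResource):
    model = Account

    def get_filter(self, req, resp, resources, *args, **kwargs):
        # 'include_deleted' is an illustrative query parameter; remove it from
        # req.params so it is not treated as a filter on an Account column.
        include_deleted = req.params.get('include_deleted') == 'true'
        if 'include_deleted' in req.params:
            del req.params['include_deleted']
        if include_deleted:
            return resources
        return resources.filter(Account.deleted == None)
```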
Joins
If you want to add query parameters to your collection queries that do not refer to a resource attribute, but instead to an attribute in a linked table, you can do this in get_filter, as in the example below. Ensure that you remove the extra parameter from req.params before returning from get_filter, as bionic-falcon will otherwise try (and fail) to look up the parameter on the main resource class.
```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship

class Company(Base):
    __tablename__ = 'companies'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    employees = relationship('Employee')

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    company_id = Column(Integer, ForeignKey('companies.id'), nullable=True)
    company = relationship('Company', back_populates='employees')

class EmployeeCollectionResource(CollectionResource):
    model = Employee

    def get_filter(self, req, resp, query, *args, **kwargs):
        if 'company_name' in req.params:
            company_name = req.params['company_name']
            del req.params['company_name']
            query = query.join(Employee.company).filter(Company.name == company_name)
        return query
```
Alternatively, for arguments that are part of the URL you may use lookup_attr_map directly (note that attr_map is now deprecated - see below):
```python
class CompanyEmployeeCollectionResource(CollectionResource):
    model = Employee
    lookup_attr_map = {
        'company_id': lambda req, resp, query, *args, **kwargs:
            query.join(Employee.company).filter(Company.id == kwargs['company_id'])
    }
```
This is useful for the following sort of URL:
GET /companies/{company_id}/employees
Mapping
Mapping used to be done with attr_map. This is now deprecated in favour of lookup_attr_map and inbound_attr_map (since attr_map was used for two different purposes before).
To look up an entry via part of the URL:
GET /companies/{company_id}/employees
Use the name of the column to map to:
```python
class CompanyEmployeeCollectionResource(CollectionResource):
    model = Employee
    lookup_attr_map = {
        'company_id': 'coy_id'
    }
```
Or use a lambda to return a modified query:
```python
class CompanyEmployeeCollectionResource(CollectionResource):
    model = Employee
    lookup_attr_map = {
        'company_id': lambda req, resp, query, *args, **kwargs:
            query.join(Employee.company).filter(Company.id == kwargs['company_id'])
    }
```
You may use inbound_attr_map to map the value from a URL component onto another field of the inbound resource:
```python
class CompanyEmployeeCollectionResource(CollectionResource):
    model = Employee
    inbound_attr_map = {
        'company_id': 'coy_id'
    }
```
Both lookup_attr_map and inbound_attr_map may have a mapping value set to None, in which case the mapping key in the URL component is ignored.
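For example, a minimal sketch that accepts the company_id URL component but ignores it when looking up employees (inbound_attr_map works the same way):

```python
class CompanyEmployeeCollectionResource(CollectionResource):
    model = Employee
    # The company_id captured from the URL is ignored for lookups.
    lookup_attr_map = {
        'company_id': None,
    }
```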
Sorting
You can specify a default sorting of results from the collection search. The below example sorts firstly by name, then by salary descending:
```python
class EmployeeCollectionResource(CollectionResource):
    model = Employee
    default_sort = ['name', '-salary']
```
The caller can specify a sort (which overrides the default if defined):
GET /path/to/collection?__sort=name,-salary
Paging
The caller can specify an offset and/or limit to collection GET to provide paging of search results.
GET /path/to/collection?__offset=10&__limit=10
This is generally most useful in combination with __sort to ensure consistency of sorting.
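For example, combining the documented parameters to page through results in a stable order:

GET /path/to/collection?__sort=name,-salary&__offset=10&__limit=10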
Limiting response fields
You can limit which fields are returned to the client like this:
```python
class EmployeeCollectionResource(CollectionResource):
    model = Employee
    response_fields = ['id', 'name']
```
Or you can limit them programmatically like this:
```python
class EmployeeCollectionResource(CollectionResource):
    model = Employee

    def response_fields(self, req, resp, resource, *args, **kwargs):
        # Determine the response fields based on e.g. the authenticated user
        return ['id', 'name']
```
Creating linked resources
The collection POST method allows creation of linked resources in a single POST call. If your model includes a relationship to the linked resource, you can include the attributes to use for the new linked resources, and the link will be made automatically in the database:
```python
class Company(Base):
    __tablename__ = 'companies'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    employees = relationship('Employee')

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)
    company_id = Column(Integer, ForeignKey('companies.id'), nullable=True)
    company = relationship('Company', back_populates='employees')

class CompanyCollectionResource(CollectionResource):
    model = Company
    allow_subresources = True
```
```
cat post.json
{
    "name": "Initech",
    "employees": [
        {"name": "Alice"},
        {"name": "Bob"}
    ]
}

cat post.json | http POST http://localhost/companies
```
This will create a company called Initech and two employees, who will be linked to Initech via Employee.company_id. Note that the CollectionResource subclass must set the allow_subresources attribute to True for this feature to be enabled.
Bulk operations
You can bulk-add entities using a PATCH request to a collection. If the collection is defined in the standard way, you are limited to adding only to that collection's model:
```python
class EmployeeCollectionResource(CollectionResource):
    model = Employee
```
To add to the employee collection, each operation’s path must be ‘/’:
echo '{"patches": [{"op": "add", "path": "/", "value": {"name": "Jim"}}]}' | http PATCH http://localhost/employees
If you would like to be able to add to multiple types of collection in one bulk update, define the path and model for each in a special collection:
```python
class RootResource(CollectionResource):
    patch_paths = {
        '/employees': Employee,
        '/accounts': Account,
    }

app.add_route('/', RootResource(db_engine))
```
To add to the collections, each operation’s path must be in the defined patch_paths:
```
cat patches.json
{
    "patches": [
        {"op": "add", "path": "/employees", "value": {"name": "Jim"}},
        {"op": "add", "path": "/accounts", "value": {"name": "Sales"}}
    ]
}

cat patches.json | http PATCH http://localhost/
```
All the operations done in a single PATCH are performed within a transaction.
Naive datetimes
Normally datetimes are assumed to be in UTC, so they are expected in the format 'YYYY-mm-ddTHH:MM:SSZ' and are output in the same format.
Sometimes (not often!) you need to store a "naive" datetime, where the time zone is not relevant - for example, the date and time of a nationwide public holiday, which is simply in the local time zone, whatever that might be; the client can treat it as being in their own local time.
For cases such as this, set the naive_datetimes class variable as a list of the column names to be treated as naive datetimes:
```python
class PublicHolidayCollectionResource(CollectionResource):
    model = PublicHoliday
    naive_datetimes = ['start', 'end']
```
These fields will then be parsed and returned in the format ‘YYYY-mm-ddTHH:MM:SS’, i.e. without the ‘Z’ suffix.
Additionally, when a numeric datetime is desired rather than a datetime string, you can set the datetime_in_ms class variable to a list of column names whose input and output should be treated as a number of milliseconds since the Unix epoch.
```python
class DeadlineCollectionResource(CollectionResource):
    model = Deadlines
    datetime_in_ms = ['started_on', 'due_by']
```
Meta-information
To add meta-information to each resource in a collection response, assuming your models are:
```python
class Team(Base):
    __tablename__ = 'teams'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    characters = relationship('Character')

class Character(Base):
    __tablename__ = 'characters'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    team_id = Column(Integer, ForeignKey('teams.id'), nullable=True)
    team = relationship('Team', back_populates='characters')
```
Then include the following:
```python
catchphrases = {
    'Oliver': 'You have failed this city',
    'Cisco': "OK, you don't get to pick the names",
}

class CharacterCollectionResource(CollectionResource):
    model = Character
    resource_meta = {
        'catchphrase': lambda resource: catchphrases.get(resource.name, None)
    }
```
To add meta-information to the top level of a single resource response, include the following:
```python
catchphrases = {
    'Oliver': 'You have failed this city',
    'Cisco': "OK, you don't get to pick the names",
}

class CharacterResource(SingleResource):
    model = Character
    meta = {
        'catchphrase': lambda resource: catchphrases.get(resource.name, None)
    }
```
You can join another table to get the meta information:
```python
class CharacterCollectionResource(CollectionResource):
    model = Character
    resource_meta = {
        'catchphrase': lambda resource, team_name: catchphrases.get(resource.name, None),
        'team_name': lambda resource, team_name: team_name,
    }
    extra_select = [Team.name]

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Team)

class CharacterResource(SingleResource):
    model = Character
    meta = {
        'catchphrase': lambda resource, team_name: catchphrases.get(resource.name, None),
        'team_name': lambda resource, team_name: team_name,
    }
    extra_select = [Team.name]

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Team)
```
You can even use SQL functions to calculate the values in the meta-information:
```python
from sqlalchemy import func

class TeamCollectionResource(CollectionResource):
    model = Team
    resource_meta = {
        'team_size': lambda resource, team_size: team_size,
    }
    extra_select = [func.count(Character.id)]

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Character).group_by(Team.id)

class TeamResource(SingleResource):
    model = Team
    meta = {
        'team_size': lambda resource, team_size: team_size,
    }
    extra_select = [func.count(Character.id)]

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Character).group_by(Team.id)
```
Or you can determine them entirely programmatically like this:
```python
class TeamCollectionResource(CollectionResource):
    model = Team
    extra_select = [func.count(Character.id)]

    def resource_meta(self, req, resp, resource, team_size, *args, **kwargs):
        return {
            'team_size': team_size,
        }

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Character).group_by(Team.id)

class TeamResource(SingleResource):
    model = Team
    extra_select = [func.count(Character.id)]

    def meta(self, req, resp, resource, team_size, *args, **kwargs):
        return {
            'team_size': team_size,
        }

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Character).group_by(Team.id)
```
The advantage of using the above method is that the keys can also be determined at runtime, and may change in different circumstances (e.g. according to query parameters, or the permissions of the caller). To include no meta at all for the resource, return None from the resource_meta or meta functions.
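For example, a sketch that only includes meta-information when the caller sends an X-Include-Meta header; the header name is an illustrative assumption, not part of the package.

```python
class TeamCollectionResource(CollectionResource):
    model = Team
    extra_select = [func.count(Character.id)]

    def resource_meta(self, req, resp, resource, team_size, *args, **kwargs):
        # Returning None omits the meta block for this resource entirely;
        # here meta is only included when the caller sends the illustrative
        # X-Include-Meta header.
        if req.get_header('X-Include-Meta') != 'true':
            return None
        return {'team_size': team_size}

    def get_filter(self, req, resp, query, *args, **kwargs):
        return query.join(Character).group_by(Team.id)
```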
Access to submitted data
Note that the request body can be accessed (e.g. in the pre-method functions) either as parsed JSON via req.context['doc'], or as the original binary body content via req.context['request_body'], provided you specify which HTTP methods should retain it:
```python
class TeamResource(CollectionResource):
    model = Team
    keep_request_body = ['POST']
```
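For example, a minimal sketch (the class and logger names are illustrative) that inspects both forms of the body in a pre-method hook:

```python
import logging

logger = logging.getLogger(__name__)

class TeamCollectionResource(CollectionResource):
    model = Team
    keep_request_body = ['POST']

    def before_post(self, req, resp, db_session, resource, *args, **kwargs):
        # req.context['doc'] holds the parsed JSON document, while
        # req.context['request_body'] holds the original binary body because
        # 'POST' is listed in keep_request_body.
        logger.debug('raw body: %r', req.context['request_body'])
        logger.debug('parsed doc: %r', req.context['doc'])
```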