Flask Audit Logger

Audit logs using Flask-SQLAlchemy and PostgreSQL.
Auditing extension for Flask using Flask-SQLAlchemy, Alembic, and PostgreSQL. This package tracks every version of a set of target tables along with the ID of the actor who made each modification. This project was forked from PostgreSQL-Audit but makes no attempt to maintain backwards compatibility. It is inspired by other projects such as SQLAlchemy-Continuum, Papertrail, and Audit Trigger.
This package has the following characteristics:
- Stores versioned records with old and changed data in an `activity` table
- Uses triggers to record activity records, keeping INSERTs, UPDATEs, and DELETEs as fast as possible
- Uses SQLAlchemy events to track actor IDs in a `transaction` table
Installation
```
pip install flask-audit-logger
```
Setup
Create a single `AuditLogger()` instance after your models have been declared, e.g. in `app/models.py`:
```python
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import BigInteger, Column, String

from flask_audit_logger import AuditLogger

db = SQLAlchemy()

class User(db.Model):
    __tablename__ = "users"
    __table_args__ = ({"info": {"versioned": {}}},)

    id = Column(BigInteger, primary_key=True, autoincrement=True)
    name = Column(String)

# ... more model declarations ...

audit_logger = AuditLogger(db)
```
Identify the tables you want audited using `__table_args__`. This determines which tables will have triggers generated by the migration.
Next, configure `migrations/env.py` so the AuditLogger can rewrite migration files. This assumes you have already initialized your project to handle database migrations.
```python
from alembic import context

from app.models import audit_logger

def run_migrations_online():
    # ...
    context.configure(
        # ...
        process_revision_directives=audit_logger.process_revision_directives,
    )
```
There's typically a lot going on in the `env.py` file. You may need to call `audit_logger.process_revision_directives()` inside an existing function.
Finally, run the migration, which will create the audit tables, functions, and triggers. Here I'm using Flask-Migrate, but you can use Alembic directly if you wish.
```
flask db migrate -m 'setup audit_logger'
flask db upgrade
```
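The generated migration will contain trigger-installation operations along these lines (an illustrative sketch only; the actual revision file also contains the table-creation commands, and its exact contents depend on your models):

```python
# migrations/versions/xxxx_setup_audit_logger.py (illustrative sketch)

def upgrade_():
    # installs the audit triggers for each versioned table
    op.init_audit_logger_triggers("users")

def downgrade_():
    op.remove_audit_logger_triggers("users")
```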
If you need an audit trail for another table in the future, add `{"info": {"versioned": {}}}` to the `__table_args__` tuple. When you generate the next migration, the newly versioned table will be detected and the correct triggers will be created.
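For example, here is a hypothetical `Post` model marked as versioned (shown with plain SQLAlchemy so the snippet is self-contained; in an app you would subclass `db.Model` as above):

```python
from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Post(Base):
    __tablename__ = "posts"
    # the "versioned" marker is what the migration scans for
    __table_args__ = ({"info": {"versioned": {}}},)

    id = Column(BigInteger, primary_key=True, autoincrement=True)
    title = Column(String)

# the marker ends up in the table's info dict
print(Post.__table__.info)  # {'versioned': {}}
```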
Features
Determining actor_id
By default, `transaction.actor_id` will be set using `flask_login.current_user`. Use the `get_actor_id` constructor option if you want to change this behavior:
```python
from flask import g

def get_current_user_id():
    return g.current_user.id

audit_logger = AuditLogger(db, get_actor_id=get_current_user_id)
```
Changing the AuditLogger schema
The `activity` and `transaction` tables are created in the `public` schema by default. If you want these tables to live in a different schema, pass the schema name when you instantiate the AuditLogger.
```python
audit_logger = AuditLogger(db, schema="audit_logs")
```
You will also need to make sure Alembic supports multiple schemas. This can be done through an `env.py` configuration:
```python
def run_migrations_online():
    # ...
    context.configure(
        # ...
        include_schemas=True,
        process_revision_directives=audit_logger.process_revision_directives,
    )
```
Customizing actor_cls
The `AuditLogger.actor_cls` should align with your `current_user` type. By default, this package assumes the `User` model is also the actor class. This can be customized by passing the model name as a string when the AuditLogger is created:
```python
audit_logger = AuditLogger(db, actor_cls="SuperUser")
```
This model will be used to populate the `AuditLogTransaction.actor` relationship. For example, the following query loads the first activity and its responsible actor:
```python
from sqlalchemy import select
from sqlalchemy.orm import joinedload

AuditLogActivity = audit_logger.activity_cls
AuditLogTransaction = audit_logger.transaction_cls

activity = db.session.scalar(
    select(AuditLogActivity)
    .options(
        joinedload(AuditLogActivity.transaction).joinedload(AuditLogTransaction.actor)
    )
    .limit(1)
)
print(activity.transaction.actor)
# <SuperUser 123456>
```
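Assuming the activity table records `table_name` and `verb` columns, as in PostgreSQL-Audit (the project this package was forked from), you could narrow a query to a specific table and operation. This is a hypothetical sketch; check the generated activity model for the actual column names:

```python
# all recorded deletions of rows in the users table (hypothetical columns)
deletes = db.session.scalars(
    select(AuditLogActivity)
    .where(AuditLogActivity.table_name == "users")
    .where(AuditLogActivity.verb == "delete")
).all()
```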
Excluding Columns
You may want to ignore version tracking on specific database columns. This can be done by adding an `"exclude"` key with a list of column names to `__table_args__`:
```python
# app/models.py

class User(db.Model):
    __tablename__ = "users"
    __table_args__ = ({"info": {"versioned": {"exclude": ["hair_color"]}}},)

    id = Column(BigInteger, primary_key=True, autoincrement=True)
    name = Column(String)
    hair_color = Column(String)
```
```python
# flask db migrate -m 'exclude hair_color'
# migrations/versions/xxxx_exclude_hair_color.py

def upgrade_():
    # ### commands auto generated by Alembic - please adjust! ###
    op.init_audit_logger_triggers("users", excluded_columns=["hair_color"])
    # ### end Alembic commands ###

def downgrade_():
    # ### commands auto generated by Alembic - please adjust! ###
    op.remove_audit_logger_triggers("users")
    # ### end Alembic commands ###
```
Known Limitations
- This package does not play nicely with Alembic Utils
- Changes to `excluded_columns` are not remembered. You will need to edit `downgrade_()` manually to properly revert this change
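For instance, if the previous revision installed triggers with no excluded columns, a manually corrected `downgrade_()` might reinstall them without the exclusion. This is a hypothetical sketch; adjust it to mirror whatever your earlier revision actually created:

```python
def downgrade_():
    # manually edited: restore the triggers as the previous revision created them,
    # i.e. without excluded_columns
    op.remove_audit_logger_triggers("users")
    op.init_audit_logger_triggers("users")
```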