A package that integrates pandas and SQLAlchemy with change tracking and optimized SQL operations.
Project description
pandalchemy
Pandas + SQLAlchemy = Smart DataFrames with Automatic Database Sync
Work with database tables as pandas DataFrames while pandalchemy automatically tracks changes and syncs to your database with optimized SQL operations.
Why pandalchemy?
import pandalchemy as pa
# Connect to your database
db = pa.DataBase(engine)
# Work with tables like DataFrames
users = db['users']
users.loc[users['age'] > 30, 'senior'] = True
# Changes are automatically tracked and synced
db.push() # One line, optimized transaction
No more manual SQL. No more tracking what changed. Just work with your data.
Version 1.6.0: All raw SQL has been eliminated in favor of SQLAlchemy ORM/Table API methods, improving type safety, maintainability, and database portability.
Quick Start
Installation
pip install pandalchemy
30-Second Example
from sqlalchemy import create_engine
import pandalchemy as pa
# Connect
engine = create_engine('postgresql://localhost/mydb')
db = pa.DataBase(engine)
# Read, modify, sync
users = db['users']
users['age'] = users['age'] + 1
users.push() # All changes synced automatically
Key Features
Automatic Change Tracking
users['age'] = users['age'] + 1 # Tracked
users.push() # All changes synced
Immutable Primary Keys
users.update_row(1, {'name': 'Alice'}) # ✅ Works
users.update_row(1, {'id': 999}) # ❌ Raises error
Composite Primary Keys
memberships.update_row(('user1', 'org1'), {'role': 'admin'}) # Tuple keys
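A fuller sketch of the same idea, assuming an illustrative memberships table and the TableDataFrame constructor signature documented in the API Reference below:
import pandas as pd
from sqlalchemy import create_engine
import pandalchemy as pa
engine = create_engine('sqlite:///example.db')  # illustrative engine
df = pd.DataFrame({
    'user_id': ['user1', 'user2'],
    'org_id': ['org1', 'org1'],
    'role': ['member', 'member'],
})
# A list of columns makes the primary key composite
memberships = pa.TableDataFrame('memberships', df, ['user_id', 'org_id'], engine)
memberships.push()
# Rows are then addressed by a tuple of key values
memberships.update_row(('user1', 'org1'), {'role': 'admin'})
memberships.push()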
Auto-Increment Support
users.add_row({'name': 'Bob'}, auto_increment=True) # ID auto-generated
Conditional Updates & Deletes
users.update_where(users._data['age'] > 65, {'senior': True})
users.delete_where(users._data['status'] == 'inactive')
Schema Evolution
users.add_column_with_default('verified', False)
users.rename_column_safe('old_name', 'new_name')
users.drop_column_safe('legacy_field')
users.push() # Executes ALTER TABLE
Transaction Safety
users['age'] = users['age'] + 1
users.push() # Atomic transaction with rollback on error
pandas to_sql Compatibility
# pandas-compatible to_sql with enhanced features
df = pd.DataFrame({'name': ['Alice', 'Bob'], 'age': [30, 25]})
tdf = pa.TableDataFrame(data=df, primary_key='id')
# Create table with auto-increment primary key
tdf.to_sql('users', engine, primary_key='id', auto_increment=True, if_exists='replace')
# Append to existing table (infers primary key automatically)
tdf.to_sql('users', engine, if_exists='append')
Core API
DataBase
db = pa.DataBase(engine)
users = db['users'] # Access table
db.push() # Push all changes
db.create_table('products', df, 'id') # Create new table
TableDataFrame
# Create from DataFrame
df = pd.DataFrame({'name': ['Alice', 'Bob']}, index=[1, 2])
users = pa.TableDataFrame('users', df, 'id', engine, auto_increment=True)
users.push()
# CRUD
users.add_row({'name': 'Charlie'}, auto_increment=True)
users.update_row(1, {'name': 'Alice Updated'})
users.delete_row(2)
users.push()
# Use full pandas API
users._data['age'] = users._data['age'] + 1
filtered = users._data[users._data['age'] > 30]
See CRUD Operations notebook for comprehensive examples.
Best Practices
- Multi-table changes: Use db.push() instead of individual table.push() calls
- Schema changes: Push schema first, then pull and update data
- Validation: Check has_changes() and get_changes_summary() before push (see the sketch below)
- Bulk operations: Use update_where() and delete_where() instead of loops
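A minimal sketch of that workflow, assuming a hypothetical orders table (with status and flagged columns) alongside users:
users = db['users']
orders = db['orders']
# Make all edits across tables first
users['age'] = users['age'] + 1
orders.update_where(orders._data['status'] == 'pending', {'flagged': True})
# Validate before committing
if users.has_changes():
    print(users.get_changes_summary())
# One optimized transaction for every modified table
db.push()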
See Interactive Examples for detailed patterns and workflows.
Known Limitations
- Boolean columns: SQLite BOOLEAN doesn't accept NaN - use explicit defaults
- Schema changes: Push schema changes separately from data updates
- Primary keys: Cannot be updated (delete + insert instead)
See Full Limitations Guide for details and workarounds.
Troubleshooting
Error: "Cannot update primary key"
Problem: You're trying to update a primary key column, which is immutable.
Solution:
# ❌ This fails
users.update_row(1, {'id': 999}) # Raises DataValidationError
# ✅ Instead, delete and re-insert
old_data = users.get_row(1)
users.delete_row(1)
users.add_row({**old_data, 'id': 999})
users.push()
Enhanced Error Message: The error now includes detailed context showing which table, operation, and suggested fix.
Error: "Boolean column errors in SQLite"
Problem: SQLite doesn't accept NaN values for BOOLEAN columns.
Solution:
# ❌ This may fail
users['active'] = None # Becomes NaN, may fail on push
# ✅ Use explicit False instead
users.add_column_with_default('active', False)
# Or handle None explicitly
users['active'] = users['active'].fillna(False)
Issue: "Schema changes not visible after push"
Problem: Schema changes (add/drop/rename columns) need to be pushed separately, then you need to pull to refresh.
Solution:
# Add column
users.add_column_with_default('email', '')
users.push() # Push schema change first
# Pull to refresh with new schema
users.pull()
# Now you can update the new column
users['email'] = 'user@example.com'
users.push() # Push data changes
Best Practice: Push schema changes and data changes in separate transactions for reliability.
Issue: "Memory issues with large tables"
Problem: Working with very large tables can consume significant memory.
Solutions:
- Use lazy change computation (already implemented):
# Changes are only computed when needed
users['age'] = users['age'] + 1  # No computation yet
if users.has_changes():  # Computes here if needed
    users.push()
- Use bulk operations instead of loops:
# ✅ Fast - single bulk operation
users.update_where(users._data['age'] > 65, {'senior': True})
users.bulk_insert(new_rows)
# ❌ Slow - many individual operations
for row in new_rows:
    users.add_row(row)
- Batch your changes:
# Make all changes first
users['age'] = users['age'] + 1
users['status'] = 'active'
# Then push once
users.push()  # Single transaction
Issue: "Transaction rollback not working as expected"
Problem: Understanding when transactions rollback and when they commit.
Solution:
- push() automatically wraps all changes in a transaction
- If any error occurs, the entire transaction rolls back
- Schema changes and data changes happen in the correct order automatically
try:
    users['age'] = users['age'] + 1
    products['price'] = products['price'] * 1.1
    db.push()  # All changes in one transaction
except Exception as e:
    # All changes rolled back automatically
    print(f"Error: {e}. No changes were committed.")
Error: "No row found with primary key value"
Problem: Trying to update or delete a row that doesn't exist.
Solution: Check if the row exists first, or use upsert_row():
# Check first
if users.row_exists(pk_value):
    users.update_row(pk_value, updates)
else:
    # Row doesn't exist, create it
    users.add_row({**updates, 'id': pk_value})
# Or use upsert (update if exists, insert if not)
users.upsert_row({**updates, 'id': pk_value})
Enhanced Error: The error message now shows the table name, operation, and suggests using get_row() or row_exists() to verify.
Issue: "Performance is slow with many updates"
Problem: Many individual update_row() calls are slow.
Solution: Use update_where() for bulk conditional updates:
# ✅ Fast - single SQL operation
users.update_where(
    users._data['age'] > 65,
    {'senior': True, 'discount': 0.1}
)
# ❌ Slow - many SQL operations
for idx in old_users.index:
    users.update_row(idx, {'senior': True, 'discount': 0.1})
Error: "Column 'X' does not exist"
Problem: Trying to access or modify a column that doesn't exist in the DataFrame.
Enhanced Error: The error now shows:
- Which table you're working with
- The operation that failed
- Available columns in the table
- Suggested fix
Solution: Check column names and use add_column_with_default() if you need to add it:
# Check available columns
print(users.columns.tolist())
# Add missing column if needed
if 'new_column' not in users.columns:
    users.add_column_with_default('new_column', default_value=0)
    users.push()  # Push schema change
    users.pull()  # Refresh
For more detailed troubleshooting, see the Full Limitations Guide below.
Installation & Setup
Requirements
- Python 3.9+
- pandas >= 1.5.0
- SQLAlchemy >= 2.0.0
Install
pip install pandalchemy
Supported Databases
Works with any SQLAlchemy-supported database:
- PostgreSQL (fully tested)
- MySQL/MariaDB (fully tested)
- SQLite
- Oracle
- SQL Server
- And more
Multi-Database Testing: pandalchemy is extensively tested on PostgreSQL and MySQL, with 534 tests ensuring cross-platform compatibility; more than 150 of these run on multiple database backends (SQLite, PostgreSQL, MySQL) to validate consistent behavior across platforms.
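Connection setup is plain SQLAlchemy. A short sketch of typical engine URLs (the credentials are placeholders, and driver packages such as psycopg2 and PyMySQL must be installed separately):
from sqlalchemy import create_engine
import pandalchemy as pa
pg_engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')
mysql_engine = create_engine('mysql+pymysql://user:password@localhost/mydb')
sqlite_engine = create_engine('sqlite:///local.db')
db = pa.DataBase(pg_engine)  # any of the engines above works the same way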
API Reference
DataBase Class
db = pa.DataBase(
    engine,        # SQLAlchemy engine
    lazy=False,    # Lazy load tables
    schema=None    # Optional schema
)
Methods:
- db['table_name'] - Access table
- db.push() - Push all changes
- db.pull() - Refresh all tables
- db.create_table(name, df, primary_key) - Create new table
- db.add_table(table, push=False) - Add existing table
- db.table_names - List all tables
TableDataFrame Class
table = pa.TableDataFrame(
    name,                   # Table name (or DataFrame for standalone)
    data=None,              # DataFrame data
    primary_key='id',       # PK column(s)
    engine=None,            # SQLAlchemy engine
    auto_increment=False    # Enable auto-increment
)
CRUD Methods:
- add_row(row_data, auto_increment=False) - Insert row
- update_row(pk_value, updates) - Update row
- delete_row(pk_value) - Delete row
- upsert_row(row_data) - Update or insert
- get_row(pk_value) - Get row as dict
- row_exists(pk_value) - Check if exists
- bulk_insert(records) - Bulk insert
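A short sketch combining several of these calls; the users table and the specific id values are illustrative:
users = db['users']
# Insert a new row; the primary key is generated automatically
users.add_row({'name': 'Dana'}, auto_increment=True)
# Update the row with id 101 if it exists, insert it otherwise
users.upsert_row({'id': 101, 'name': 'Eve'})
# Read a single row back as a dict, guarded by an existence check
if users.row_exists(101):
    row = users.get_row(101)
# Insert many records in one operation
users.bulk_insert([{'id': 102, 'name': 'Frank'}, {'id': 103, 'name': 'Grace'}])
users.push()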
Conditional Methods:
- update_where(condition, updates) - Bulk conditional update
- delete_where(condition) - Bulk conditional delete
Schema Methods:
- add_column_with_default(name, default) - Add column
- rename_column_safe(old, new) - Rename column
- drop_column_safe(name) - Drop column
- convert_column_type(name, new_type) - Change type
- set_primary_key(columns) - Change PK
Database Methods:
- push() - Sync to database
- pull() - Refresh from database
- to_sql(name, con, ...) - Write DataFrame to SQL table (pandas-compatible with enhanced features)
- get_next_pk_value() - Next auto-increment value
Inspection Methods:
- has_changes() - Check if modified
- get_changes_summary() - Change statistics
- validate_data() - Check data integrity
- to_pandas() - Get underlying DataFrame
pandas Compatibility: All standard pandas DataFrame operations work and are automatically tracked.
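A brief sketch of mixing ordinary pandas operations with the inspection methods above; the column names are illustrative:
users = db['users']
# Ordinary pandas operations are tracked automatically
users['age'] = users['age'] + 1
users['name'] = users['name'].str.title()
# to_pandas() exposes the underlying DataFrame for read-only analysis
summary = users.to_pandas().describe()
# Check integrity and review tracked changes before committing
users.validate_data()
if users.has_changes():
    print(users.get_changes_summary())
    users.push()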
Interactive Examples
Comprehensive Jupyter notebooks demonstrating all features (with pre-executed outputs):
- CRUD Operations - Create, read, update, delete
- Change Tracking - Automatic change detection
- Composite Primary Keys - Multi-column keys with MultiIndex
- Auto-Increment - Automatic ID generation
- Conditional Operations - Bulk update_where/delete_where
- Schema Evolution - Add, drop, rename columns
- Transactions - ACID guarantees and rollback
- Index-based PKs - Using DataFrame index as primary key ⭐ NEW
- Immutable Primary Keys - PK constraints
- Pandas Integration - Full pandas API
- E-Commerce System - Real-world workflow
All notebooks include working code with outputs and can be run interactively.
Example: Data Cleaning
users = db['users']
# Clean with pandas operations
users._data['email'] = users._data['email'].str.lower().str.strip()
users._data['phone'] = users._data['phone'].str.replace(r'\D', '', regex=True)
# Add derived column
users._data['full_name'] = users._data['first_name'] + ' ' + users._data['last_name']
# Remove invalid rows
deleted = users.delete_where(users._data['email'].isna())
users.push() # All changes in one transaction
See Interactive Examples for 11 comprehensive notebooks with live outputs.
Performance Tips
Quick Performance Wins
- Bulk inserts: Use bulk_insert() instead of looping add_row()
- Minimize push(): Batch all changes, then push once
- Conditional updates: Use update_where() instead of looping update_row()
# ✅ Fast
table.bulk_insert(rows)
users.update_where(users._data['age'] > 65, {'senior': True})
# ❌ Slow
for row in rows: table.add_row(row)
for idx in old_users.index: users.update_row(idx, {'senior': True})
Performance Tuning Guide
When to Use bulk_insert() vs add_row()
Use bulk_insert() when inserting multiple rows:
# ✅ Fast - Single SQL operation
users.bulk_insert([
    {'name': 'Alice', 'age': 30},
    {'name': 'Bob', 'age': 25},
    {'name': 'Charlie', 'age': 35}
])
# ❌ Slow - Multiple SQL operations
for row in rows:
    users.add_row(row)
Benchmark: bulk_insert() is typically 10-100x faster for 100+ rows.
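A rough way to check this against your own database, assuming a throwaway users table you can freely insert into (timings vary widely with database and row size):
import time
rows = [{'name': f'user{i}', 'age': 20 + i % 50} for i in range(1000)]
start = time.perf_counter()
users.bulk_insert(rows)  # single bulk operation
users.push()
bulk_seconds = time.perf_counter() - start
start = time.perf_counter()
for row in rows:
    users.add_row(row, auto_increment=True)  # one operation per row
users.push()
loop_seconds = time.perf_counter() - start
print(f'bulk_insert: {bulk_seconds:.2f}s  add_row loop: {loop_seconds:.2f}s')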
Batching Strategies for Large Updates
- Batch by condition: Use update_where() for conditional bulk updates
from datetime import datetime
# ✅ Process all matching rows in one operation
users.update_where(
    users._data['status'] == 'pending',
    {'processed': True, 'processed_at': datetime.now()}
)
- Batch by chunks: For very large datasets, process in chunks
# Process 1000 rows at a time
chunk_size = 1000
for i in range(0, len(users), chunk_size):
    chunk = users.iloc[i:i+chunk_size]
    # Process chunk
    chunk['age'] = chunk['age'] + 1
    chunk.push()  # Push this chunk
Memory Optimization Tips
- Use lazy computation: Changes are computed only when needed
# Make changes
users['age'] = users['age'] + 1
# Computation hasn't happened yet - no performance cost
# Only computes when you check or push
if users.has_changes():  # Computes here if needed
    users.push()
- Pull only what you need: If working with large tables, filter before pulling
# ❌ Pulls entire table into memory
users = db['users'] # Could be millions of rows
# ✅ Pull with filter (if supported by your workflow)
# Or work with specific columns
users = db['users']
subset = users[users['department'] == 'Sales'] # Work with subset
- Connection pooling: Use SQLAlchemy connection pooling for better performance
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool
engine = create_engine(
    'postgresql://localhost/mydb',
    poolclass=QueuePool,
    pool_size=5,
    max_overflow=10
)
Performance Benchmarks
Typical performance characteristics:
- bulk_insert(): ~10,000 rows/second (depends on row size and database)
- update_where(): ~5,000-10,000 rows/second (depends on condition complexity)
- push(): ~1,000-5,000 operations/second (combines all changes into optimized SQL)
Note: Actual performance depends on:
- Database type (PostgreSQL, MySQL, SQLite)
- Network latency (for remote databases)
- Row size and complexity
- Index presence on affected columns (a sketch for adding one follows)
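Where update_where() or delete_where() filters on an un-indexed column, adding an index at the database level can help. A sketch using plain SQLAlchemy (this is not a pandalchemy API; the table and column names are illustrative):
from sqlalchemy import Index, MetaData, Table
metadata = MetaData()
users_table = Table('users', metadata, autoload_with=engine)
# Index the column used in update_where()/delete_where() conditions
Index('ix_users_age', users_table.c.age).create(bind=engine)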
Full Limitations
Click to expand detailed limitations and workarounds
Primary Key Operations
Cannot Update Primary Keys
# PK values are immutable
users.update_row(1, {'id': 999}) # Raises DataValidationError
# Workaround: delete + insert
old_data = users.get_row(1)
users.delete_row(1)
users.add_row({**old_data, 'id': 999})
Schema Changes
Column Additions Timing
# New columns should be pushed before updating
users.add_column_with_default('new_col', 0)
users.push() # Commit schema
users.pull() # Refresh
users['new_col'] = 100 # Now safe
users.push()
Renaming Newly Added Columns
# Can't rename columns added in same transaction
users.add_column_with_default('temp_name', 0)
users.rename_column_safe('temp_name', 'final_name') # May fail on push
# Workaround: push between operations
users.add_column_with_default('temp_name', 0)
users.push()
users.rename_column_safe('temp_name', 'final_name')
users.push()
Type Constraints
Boolean NULL Values
# SQLite BOOLEAN doesn't accept NaN
users['active'] = None # Becomes NaN, fails on push
# Workaround: use explicit False or create as nullable
users.add_column_with_default('active', False)
String to Numeric Conversion
# Automatic type inference may fail
# Manually convert if needed:
users['numeric_string'] = pd.to_numeric(users['string_col'], errors='coerce')
Composite Keys
Auto-Increment with Composite Keys
# Auto-increment only works with single-column integer PKs
table = pa.TableDataFrame('t', df, ['user_id', 'org_id'], engine, auto_increment=True)
# Raises ValueError
# Workaround: use single-column surrogate key
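A minimal sketch of that workaround, assuming df holds the composite columns as ordinary data; uniqueness of the pair would need to be enforced separately at the database level:
df = df.reset_index(drop=True)
df.insert(0, 'id', range(1, len(df) + 1))  # add a surrogate key column
table = pa.TableDataFrame('t', df, 'id', engine, auto_increment=True)
table.push()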
Data Integrity
Duplicate Column Names
# DataFrame with duplicate columns will fail validation
df.columns = ['id', 'name', 'name'] # Fails on push
# Workaround: ensure unique column names
NULL in Primary Keys
# Primary keys cannot be NULL
users.add_row({'id': None, 'name': 'Test'}) # Raises DataValidationError
# Always provide valid PK values
Comparison with Similar Tools
pandalchemy is the only package offering the complete workflow of automatic change tracking, schema evolution, and transaction safety for pandas-to-database synchronization.
Feature Comparison Matrix
| Feature | pandalchemy | pandas + SQLAlchemy | Pangres | pandabase | pandas-gbq | pandera |
|---|---|---|---|---|---|---|
| Automatic Change Tracking | ✅ | ❌ Manual | ❌ | ❌ | ❌ | ❌ |
| Schema Evolution (DDL) | ✅ | ❌ Manual ALTER | ⚠️ Limited | ❌ | ❌ | ❌ |
| Transaction Safety & Rollback | ✅ | ❌ Manual | ❌ | ❌ | ❌ | N/A |
| Primary Key Immutability | ✅ Enforced | ❌ | ❌ | ❌ | ❌ | N/A |
| Composite Primary Keys | ✅ Full Support | ✅ Manual | ✅ | ⚠️ Limited | ⚠️ Limited | N/A |
| Auto-Increment PKs | ✅ | ❌ Manual | ❌ | ❌ | ❌ | N/A |
| Delete Tracking | ✅ | ❌ Manual | ❌ | ❌ | ❌ | N/A |
| Column Type Change Tracking | ✅ | ❌ Manual | ⚠️ Limited | ❌ | ❌ | N/A |
| Optimized Execution Plans | ✅ | ❌ | ⚠️ Upserts only | ❌ | ⚠️ Bulk only | N/A |
| Change Audit Trail | ✅ | ❌ | ❌ | ❌ | ❌ | N/A |
| Conditional Updates (update_where) | ✅ | ❌ Manual | ❌ | ❌ | ❌ | N/A |
| Full pandas API | ✅ | ✅ | ✅ | ⚠️ Partial | ✅ | ✅ |
| Multi-Database Support | ✅ All SQLAlchemy | ✅ All SQLAlchemy | ✅ Postgres/MySQL/SQLite | ✅ SQLite/Postgres | ❌ BigQuery only | ✅ |
| Database Integration | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ Validation only |
| Active Maintenance | ✅ | ✅ | ✅ | ❌ Deprecated | ✅ | ✅ |
Legend: ✅ = Included | ❌ = Missing | ⚠️ = Partial/Limited | N/A = Not Applicable
Package Links
- pandalchemy - This package
- pandas + SQLAlchemy - Manual integration baseline
- Pangres - Upsert-focused alternative
- pandabase - Deprecated (use Pangres instead)
- pandas-gbq - Google BigQuery integration
- pandera - DataFrame validation library
Choose pandalchemy when you need:
- Automatic tracking of all DataFrame changes (inserts, updates, deletes)
- Schema evolution with automatic DDL generation
- Transaction safety with automatic rollback on errors
- Composite primary key support with immutability enforcement
- Production-ready workflows with minimal boilerplate
Development
Setup
git clone https://github.com/eddiethedean/pandalchemy
cd pandalchemy
pip install -e ".[dev]"
Run Tests
# Run all tests (SQLite only)
pytest tests/
# Run PostgreSQL tests (requires testing.postgresql)
pytest -m postgres
# Run MySQL tests (requires testing.mysqld)
pytest -m mysql
# Run all multi-database tests
pytest -m multidb
# See tests/TESTING_MULTI_DB.md for setup instructions
Code Quality
# Type checking
mypy src/pandalchemy
# Linting
ruff check src/pandalchemy
# Format
ruff format src/pandalchemy
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Ensure all tests pass
- Submit a pull request
See CONTRIBUTING.rst for details.
License
MIT License - see LICENSE file for details.
Links
- Documentation: Read the Docs (coming soon)
- Source Code: GitHub
- Issue Tracker: GitHub Issues
- PyPI: pandalchemy
Acknowledgments
Built with:
- pandas - Data manipulation
- SQLAlchemy - Database toolkit
- fullmetalalchemy - SQL helpers
- transmutation - Schema operations
Version History
1.6.0 (Latest)
- Raw SQL Elimination: All raw SQL replaced with SQLAlchemy ORM/Table API methods, improving type safety, maintainability, and database portability
1.5.0
- pandas to_sql Compatibility: Full pandas-compatible to_sql method with enhanced features
- Primary key creation and auto-increment support for new tables
- Automatic primary key inference from DataFrame index (named or MultiIndex)
- Support for all pandas to_sql parameters (index, index_label, chunksize, dtype, if_exists)
- Works seamlessly with existing TableDataFrame change tracking
- Async to_sql Support: AsyncTableDataFrame.to_sql() for async database operations
- Composite Primary Key Support in create_table: DataBase.create_table() now accepts str | list[str] for composite keys
- Improved Type Safety: Fixed type annotations for composite primary keys throughout the codebase
- Enhanced MySQL Async Support: Fixed async URL conversion and sync engine caching for MySQL databases
- Testing: 986 tests passing with comprehensive to_sql coverage across SQLite, PostgreSQL, and MySQL
1.4.0
- Enhanced Async Robustness: Retry logic with exponential backoff for transient failures, connection health checks, and auto-reconnection
- Async Performance: Sync engine caching to avoid redundant engine creation, batch operations for large datasets, and parallel execution with concurrency limits
- Async Schema Changes: Native async schema change support for PostgreSQL and MySQL (add_column, drop_column, rename_column)
- Transaction Improvements: Transaction isolation level support, deadlock detection with automatic retry, and improved conflict resolution for concurrent operations
- Better Error Context: Enhanced error messages with connection pool status, retry attempt tracking, and operation timing information
- Greenlet Context Management: Cleaner greenlet context handling with AsyncGreenletContext manager, removing workarounds
- Code Quality: Removed unused dependencies (tabulate, pytest-async-sqlalchemy), all ruff checks passing, ty type checker configuration
- Test Suite Optimization: 30x faster robustness tests, dynamic database detection (no test skips), 950 tests passing
- SQLite Async Guidance: Added warning recommending sync version for SQLite databases (async provides no benefit due to database-level locking)
1.3.0
- AsyncIO Support: Full async/await support with AsyncDataBase and AsyncTableDataFrame classes for async database operations
- Enhanced Error Handling: Detailed error context with error codes, suggested fixes, and structured error information
- Memory Optimization: Incremental change tracking mode to reduce memory usage for large datasets
- Lazy Change Computation: Changes are computed only when needed (on has_changes(), get_changes_summary(), or push())
- Conflict Resolution: Configurable conflict resolution strategies (last_writer_wins, first_writer_wins, abort, merge, custom) for concurrent modifications
- Adaptive Batch Sizing: Dynamic batch sizing for SQL operations based on operation type and record count for optimal performance
- Improved Validation: Comprehensive validation before transactions to catch schema errors early
- Code Quality: All ruff checks passing (0 errors), all mypy checks passing (0 errors), full code formatting compliance
- Testing: 973 tests passing with comprehensive PostgreSQL and MySQL test infrastructure
1.2.0
- Dependency Upgrades: Upgraded to fullmetalalchemy 2.4.0 and transmutation 1.1.0 for improved SQL operations and schema evolution
- Code Modernization: Replaced all SQLAlchemy Core API usage with fullmetalalchemy functions for consistency and better abstraction
- Type Safety: Added fast type checking with ty (Rust-based type checker) and fixed all type issues for better code quality
- Improved Schema Operations: Leveraged transmutation 1.1.0 features including improved column operations, better transaction handling, and MySQL VARCHAR length support
- Performance: Optimized MAX aggregation queries using fullmetalalchemy's select_column_max for efficient primary key generation
- Code Quality: Full ruff formatting and linting compliance, improved type annotations throughout the codebase
- Testing: 453 tests passing with improved test coverage and reliability
1.1.0
- Multi-Database Support: Full PostgreSQL and MySQL compatibility with 534 tests, 150+ running on multiple databases
- Database-Specific Optimizations: Raw SQL paths for PostgreSQL/MySQL to avoid metadata lock issues
- Schema Evolution Improvements: Proper handling of MySQL VARCHAR length requirements and column rename operations
- Connection Management: Improved connection pooling and transaction handling for production databases
- Transaction Fixes: Fixed DELETE operations in complex transactions with schema changes
- Testing Infrastructure: Added testing.postgresql and testing.mysqld for isolated test environments
- Performance: Optimized table introspection using inspect(engine) and autoload_with for better transaction visibility
- Code Quality: Full ruff and mypy compliance with 0 errors
1.0.0
- Major refactoring: Merged Table and TrackedDataFrame into unified TableDataFrame
- New feature: Column type change tracking with ALTER COLUMN support
- New methods: update_where() and delete_where() for conditional operations
- Code quality: Eliminated ~185 lines of duplicate code, created pk_utils module
- Security: Fixed SQL injection vulnerabilities
- Type safety: Full mypy compliance (0 errors)
- Testing: 446 comprehensive tests passing
- Documentation: Complete README revamp (34% more concise)
0.2.0
- Added composite primary key support
- Immutable primary keys as index
- Auto-increment functionality
- Enhanced CRUD operations
0.1.0
- Initial release
- Basic change tracking
- Simple CRUD operations
Made with ❤️ by Odos Matthews
Project details
File details
Details for the file pandalchemy-1.6.0.tar.gz.
File metadata
- Download URL: pandalchemy-1.6.0.tar.gz
- Upload date:
- Size: 261.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 390285fe001918beaf4137fe7e2fe9668563c46072c2f66ef7867f208d311cd5 |
| MD5 | 72699e318b76fe73fa6af9d0dcffcad4 |
| BLAKE2b-256 | 45d56519521a1c1fe1ad6c11ba4bd4c3a22a2458c80985455db64594900f5fca |
File details
Details for the file pandalchemy-1.6.0-py3-none-any.whl.
File metadata
- Download URL: pandalchemy-1.6.0-py3-none-any.whl
- Upload date:
- Size: 79.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 941bf520ea6280553a1d99570a66712b313ad89d37f505658faa628125a7794e |
| MD5 | 88353d2d78f9080aefd38a38511de412 |
| BLAKE2b-256 | 144cb62432da7f618e7c7b193e20071bdac3d5aba189b615c5c64421e596aa70 |