A robust, high-performance asynchronous connection pool for SQLite.
aiosqlitepool
aiosqlitepool is a connection pool for asyncio SQLite database applications. Instead of creating a new database connection for every operation, it maintains a pool of reusable connections that dramatically improve your application's performance and responsiveness.
Important: aiosqlitepool does not implement SQLite driver functionality. It's a complementary pooling layer built on top of existing asyncio SQLite libraries like aiosqlite. You still need an underlying SQLite driver - aiosqlitepool just makes it faster by pooling connections.
It works as a layer on top of any asyncio SQLite driver that implements a simple protocol. Currently tested with aiosqlite, but designed to work with other compatible libraries.
Table of Contents
- Why Connection Pooling?
- Quick Start
- Installation
- Usage
- Configuration
- Performance Benchmarks
- Connection Protocol
- Common Issues & Solutions
- How It Works
- License
aiosqlitepool in three points:
- Eliminates connection overhead: Reuses long-lived connections instead of creating new ones for each operation
- Simplifies configuration: Connections stay configured with your pragma settings and ready to use
- Improves concurrent performance: Handles more requests with the same hardware through efficient connection reuse
When You Need This
You should use aiosqlitepool if your application:
- Makes frequent database queries (more than a few per minute)
- Handles concurrent requests (web APIs, background workers)
- Uses pragma settings like WAL mode or custom cache sizes
- Needs predictable database response times under load
Skip it if: You're building a CLI tool that makes 1-2 database calls and exits.
Why Connection Pooling?
The Problem: Opening a new SQLite connection for every database operation is expensive:
# Without pooling - MEASURED: ~29ms average
async def get_user(user_id):
    conn = await aiosqlite.connect("app.db")          # Connection overhead
    await conn.execute("PRAGMA journal_mode=WAL")     # Setup overhead
    cursor = await conn.execute("SELECT * FROM users WHERE id=?", (user_id,))
    user = await cursor.fetchone()
    await conn.close()                                # Cleanup overhead
    return user                                       # Measured: 28.98ms average in concurrent load
The Solution: Reuse connections from a pool:
# With pooling - MEASURED: ~17ms average
async def get_user(user_id):
    async with pool.connection() as conn:  # Fast: connection already ready
        cursor = await conn.execute("SELECT * FROM users WHERE id=?", (user_id,))
        return await cursor.fetchone()     # Measured: 17.13ms average in concurrent load
Real Impact: In our benchmark with 100 concurrent workers on a database with millions of records, connection pooling delivered 5,731 requests/second vs 3,325 without pooling - a 72% improvement in throughput for the same hardware.
Quick Start
import asyncio
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
async def connection_factory():
    return await aiosqlite.connect("example.db")

async def main():
    pool = SQLiteConnectionPool(connection_factory)

    async with pool.connection() as conn:
        await conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        await conn.execute("INSERT INTO users VALUES (?)", ("Alice",))
        # You must handle transaction management yourself
        await conn.commit()

    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT name FROM users")
        row = await cursor.fetchone()
        print(f"Found user: {row[0]}")

    await pool.close()

if __name__ == "__main__":
    asyncio.run(main())
Note: The pool manages connections, not transactions. You're responsible for calling commit() or rollback() as needed. The pool ensures connections are safely reused but doesn't interfere with your transaction logic.
Installation
pip install aiosqlitepool
Usage
Basic Usage
You must provide a connection_factory - an async function that creates and returns a database connection:
import asyncio
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
async def create_connection():
    return await aiosqlite.connect("my_app.db")

async def main():
    pool = SQLiteConnectionPool(create_connection)

    # The pool manages connections automatically
    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT * FROM users")
        users = await cursor.fetchall()
        # No commit needed for read operations

    # For write operations, you handle transactions
    async with pool.connection() as conn:
        try:
            await conn.execute("INSERT INTO users VALUES (?)", ("Bob",))
            await conn.execute("INSERT INTO users VALUES (?)", ("Carol",))
            await conn.commit()    # Your responsibility to commit
        except Exception:
            await conn.rollback()  # Your responsibility to rollback
            raise

    await pool.close()
Row Factory Support
You can configure connections with row factories or other settings in your connection factory:
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
async def create_connection_with_row_factory():
    conn = await aiosqlite.connect("my_app.db")
    conn.row_factory = aiosqlite.Row
    return conn

async def main():
    pool = SQLiteConnectionPool(create_connection_with_row_factory)

    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT id, name FROM users WHERE id = ?", (1,))
        user = await cursor.fetchone()
        print(f"User: {user['name']}")  # Access by column name

    await pool.close()
Using as Context Manager
The pool can be used as an async context manager for automatic cleanup:
async def main():
    async with SQLiteConnectionPool(create_connection) as pool:
        async with pool.connection() as conn:
            # Do database work
            pass
    # Pool is automatically closed
FastAPI Integration
Here are two common patterns for using aiosqlitepool with FastAPI:
Manual Dependency Injection
from fastapi import FastAPI, Depends, HTTPException
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
app = FastAPI()
# Global pool instance
pool = None
async def create_connection():
    conn = await aiosqlite.connect("app.db")
    conn.row_factory = aiosqlite.Row
    return conn

@app.on_event("startup")
async def startup():
    global pool
    pool = SQLiteConnectionPool(create_connection, pool_size=10)

@app.on_event("shutdown")
async def shutdown():
    if pool:
        await pool.close()

def get_pool():
    return pool

@app.get("/users/{user_id}")
async def get_user(user_id: int, pool: SQLiteConnectionPool = Depends(get_pool)):
    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        user = await cursor.fetchone()
        if not user:
            raise HTTPException(status_code=404, detail="User not found")
        return dict(user)

@app.post("/users")
async def create_user(name: str, pool: SQLiteConnectionPool = Depends(get_pool)):
    async with pool.connection() as conn:
        try:
            cursor = await conn.execute(
                "INSERT INTO users (name) VALUES (?) RETURNING id", (name,)
            )
            result = await cursor.fetchone()
            await conn.commit()    # Your responsibility to commit
            return {"id": result["id"], "name": name}
        except Exception:
            await conn.rollback()  # Your responsibility to rollback
            raise HTTPException(status_code=500, detail="Failed to create user")
Context Manager Pattern
from fastapi import FastAPI, HTTPException
from contextlib import asynccontextmanager
import aiosqlite
from aiosqlitepool import SQLiteConnectionPool
async def create_connection():
    conn = await aiosqlite.connect("app.db")
    conn.row_factory = aiosqlite.Row
    return conn

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    pool = SQLiteConnectionPool(create_connection, pool_size=10)
    app.state.pool = pool
    yield
    # Shutdown
    await pool.close()

app = FastAPI(lifespan=lifespan)

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    async with app.state.pool.connection() as conn:
        cursor = await conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        user = await cursor.fetchone()
        if not user:
            raise HTTPException(status_code=404, detail="User not found")
        return dict(user)

@app.post("/users")
async def create_user(name: str):
    async with app.state.pool.connection() as conn:
        try:
            cursor = await conn.execute(
                "INSERT INTO users (name) VALUES (?) RETURNING id", (name,)
            )
            result = await cursor.fetchone()
            await conn.commit()    # Your responsibility to commit
            return {"id": result["id"], "name": name}
        except Exception:
            await conn.rollback()  # Your responsibility to rollback
            raise HTTPException(status_code=500, detail="Failed to create user")
Configuration
SQLiteConnectionPool accepts these parameters:
- connection_factory (required): Async function that returns a database connection
- pool_size (int): Maximum number of connections in the pool (default: 20)
- acquisition_timeout (int): Seconds to wait for a connection (default: 30)
- idle_timeout (int): Seconds before idle connections are replaced (default: 86400)
Recommended Settings by Use Case
Web API (FastAPI/Django):
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=10,            # Usually enough for most web apps
    acquisition_timeout=30,  # HTTP timeout compatible
    idle_timeout=3600        # 1 hour - good for web traffic patterns
)
Background Workers/Heavy Load:
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=50,            # More connections for concurrent processing
    acquisition_timeout=60,  # Longer timeout for batch jobs
    idle_timeout=7200        # 2 hours - longer running processes
)
Low Traffic Applications:
pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=5,             # Fewer connections needed
    acquisition_timeout=10,  # Quick timeout for responsiveness
    idle_timeout=1800        # 30 minutes - conserve resources
)
Performance Benchmarks
All benchmarks performed on a realistic database with 1.2M users, 120K posts, 6M comments, and 12M likes.
Real-World Load Test
1,000 concurrent requests across 100 workers
| Metric | Without Pool | With Pool | Improvement |
|---|---|---|---|
| Requests/sec | 3,325 | 5,731 | +72% |
| Average latency | 28.98ms | 17.13ms | -41% |
| Median latency | 28.10ms | 13.57ms | -52% |
| P90 latency | 37.39ms | 18.25ms | -51% |
| P99 latency | 42.17ms | 58.76ms | Variable* |
*P99 latency shows pool contention under extreme load (100 workers, pool size 100) where 1% of requests must wait for connection availability
Key takeaway: In realistic concurrent scenarios, connection pooling delivers 1.7x throughput improvement and 2x faster response times for 99% of requests.
Connection Overhead Analysis
Micro-benchmark: 10,000 simple SELECT operations across 5 workers
| Approach | Avg Latency | Median Latency | Total Time | Operations/sec |
|---|---|---|---|---|
| Open/close per query | 1,019μs | 1,006μs | 2.04s | 4,902 |
| Persistent connections | 396μs | 389μs | 0.79s | 12,658 |
| Improvement | -61% | -61% | -61% | +158% |
Pure connection overhead: Each connection create/destroy cycle costs 623μs (1,019 - 396 = 623μs) of pure overhead per database operation.
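If you want to sanity-check this kind of overhead on your own hardware, a rough micro-benchmark along the following lines can reproduce the comparison. This is a sketch, not the project's benchmark harness; the database path bench.db and the iteration count are arbitrary.

import asyncio
import time

import aiosqlite


async def timed_open_close(db_path: str, iterations: int) -> float:
    # Open a fresh connection, run one trivial query, and close it each time.
    start = time.perf_counter()
    for _ in range(iterations):
        conn = await aiosqlite.connect(db_path)
        await conn.execute("SELECT 1")
        await conn.close()
    return (time.perf_counter() - start) / iterations


async def timed_persistent(db_path: str, iterations: int) -> float:
    # Reuse a single connection for every query, as a pool would.
    conn = await aiosqlite.connect(db_path)
    start = time.perf_counter()
    for _ in range(iterations):
        await conn.execute("SELECT 1")
    elapsed = (time.perf_counter() - start) / iterations
    await conn.close()
    return elapsed


async def main():
    per_open_close = await timed_open_close("bench.db", 1000)
    per_persistent = await timed_persistent("bench.db", 1000)
    print(f"open/close per query: {per_open_close * 1e6:.0f}us")
    print(f"persistent per query: {per_persistent * 1e6:.0f}us")
    print(f"approx. overhead:     {(per_open_close - per_persistent) * 1e6:.0f}us")


asyncio.run(main())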
Performance Impact Summary
Without pooling, each database operation requires:
Connect → Apply pragmas → Execute query → Close = ~29ms average
With pooling:
Get pooled connection → Execute query → Return to pool = ~17ms average
Scaling impact: This 623μs per operation overhead compounds with usage:
- 100 queries/second = 62ms of pure connection overhead per second
- 1,000 queries/second = 623ms of pure connection overhead per second
- In high-throughput applications, pooling becomes essential for maintaining responsiveness
Connection Protocol
aiosqlitepool works with any connection object that implements:
async def execute(*args, **kwargs): ...
async def rollback(): ...
async def close(): ...
This makes it compatible with aiosqlite and other async SQLite libraries.
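For illustration, that contract could be written down as a typing.Protocol. The class name ConnectionLike is hypothetical and simply mirrors the three methods listed above; aiosqlite.Connection provides all three as coroutine methods, which is why a plain aiosqlite.connect(...) factory works out of the box.

from typing import Any, Protocol


class ConnectionLike(Protocol):
    """Hypothetical shape of a poolable connection (illustrative only)."""

    async def execute(self, *args: Any, **kwargs: Any) -> Any: ...
    async def rollback(self) -> None: ...
    async def close(self) -> None: ...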
Common Issues & Solutions
"Pool connection timeout" errors:
- Increase pool_size if you have high concurrency
- Check for forgotten await keywords in your code
- Make sure you're not holding connections too long
Database locked errors:
- Ensure you're calling commit() or rollback() in all code paths
- Check that connections are properly returned to the pool (use async with)
- Consider increasing acquisition_timeout for heavy workloads
Memory usage growing over time:
- Reduce idle_timeout to close idle connections sooner
- Reduce pool_size if you have fewer concurrent operations than expected
- Ensure await pool.close() is called during shutdown
Slower than expected performance:
- Make sure you're reusing the same pool instance across requests (see the sketch below)
- Verify your connection_factory isn't doing expensive setup repeatedly
- Check that you're not creating new pools for each operation
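To make the last two points concrete, here is a sketch contrasting the anti-pattern with the intended usage. The names create_connection, get_user_slow, get_user_fast, and shared_pool are illustrative; the key point is that the pool object itself must outlive individual requests, relying on the on-demand connection creation described under How It Works.

import aiosqlite
from aiosqlitepool import SQLiteConnectionPool


async def create_connection():
    return await aiosqlite.connect("app.db")


# Anti-pattern: a new pool (and therefore fresh connections) for every call,
# and the pool is never closed.
async def get_user_slow(user_id: int):
    pool = SQLiteConnectionPool(create_connection)  # re-created each request
    async with pool.connection() as conn:
        cursor = await conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        return await cursor.fetchone()


# Intended usage: one pool created at startup and shared across requests.
shared_pool = SQLiteConnectionPool(create_connection)


async def get_user_fast(user_id: int):
    async with shared_pool.connection() as conn:
        cursor = await conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        return await cursor.fetchone()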
How It Works
The pool automatically:
- Creates connections on-demand up to the pool size limit
- Reuses idle connections to avoid creation overhead
- Performs health checks to detect broken connections
- Rolls back any uncommitted transactions when connections are returned
- Replaces connections that have been idle too long
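The following is a deliberately simplified, conceptual sketch of that lifecycle. It is not aiosqlitepool's actual implementation; the class and method names are invented for illustration only.

import asyncio
import time


class NaivePool:
    """Conceptual sketch of the lifecycle above - not the library's real code."""

    def __init__(self, factory, pool_size=20, idle_timeout=86400):
        self._factory = factory
        self._pool_size = pool_size
        self._idle_timeout = idle_timeout
        self._idle: asyncio.Queue = asyncio.Queue(maxsize=pool_size)
        self._created = 0

    async def _acquire(self):
        # Prefer an existing idle connection over creating a new one.
        while not self._idle.empty():
            conn, last_used = self._idle.get_nowait()
            if time.monotonic() - last_used > self._idle_timeout:
                await conn.close()              # replace connections idle too long
                self._created -= 1
                continue
            try:
                await conn.execute("SELECT 1")  # lightweight health check
                return conn
            except Exception:
                await conn.close()              # discard broken connections
                self._created -= 1
        if self._created < self._pool_size:
            self._created += 1
            return await self._factory()        # create on demand, up to the limit
        conn, _ = await self._idle.get()        # otherwise wait for a returned one
        return conn

    async def _release(self, conn):
        await conn.rollback()                   # drop any uncommitted transaction
        await self._idle.put((conn, time.monotonic()))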
License
aiosqlitepool is available under the MIT License.