A robust, high-performance asynchronous connection pool for SQLite.


aiosqlitepool


aiosqlitepool is a high-performance connection pool for asyncio SQLite applications. By managing a pool of reusable database connections, it eliminates connection overhead and delivers significant performance gains.

Important: aiosqlitepool is not a SQLite database driver.

It's a performance-boosting layer that works with an asyncio driver like aiosqlite, not as a replacement for it.

aiosqlitepool in three points:

  • Eliminates connection overhead: It avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.
  • Faster queries via "hot" cache: Long-lived connections keep SQLite's in-memory page cache "hot." This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.
  • Maximizes concurrent throughput: Allows your application to process significantly more database queries per second under heavy load.


Installation

aiosqlitepool does not bundle a driver, so install it alongside aiosqlite (or another compatible asyncio driver).

Install with your preferred package manager:

pip

pip install aiosqlite aiosqlitepool

uv

uv add aiosqlite aiosqlitepool

Poetry

poetry add aiosqlite aiosqlitepool

Usage

Basic usage

You must provide a connection_factory - an async function that creates and returns a database connection:

import asyncio
import aiosqlite

from aiosqlitepool import SQLiteConnectionPool


async def connection_factory():
    return await aiosqlite.connect("basic.db")


async def main():
    pool = SQLiteConnectionPool(connection_factory)
    
    async with pool.connection() as conn:
        await conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        await conn.execute("INSERT INTO users VALUES (?)", ("Alice",))
        # You must handle transaction management yourself
        await conn.commit()
        cursor = await conn.execute("SELECT name FROM users")
        row = await cursor.fetchone()
        print(f"Found user: {row[0]}")
    
    await pool.close()


if __name__ == "__main__":
    asyncio.run(main())

Note: The pool manages connections, not transactions. You're responsible for calling commit() or rollback() as needed. The pool ensures connections are safely reused but doesn't interfere with your transaction logic.
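Since the pool never commits or rolls back for you mid-use, the usual pattern is to commit on success and roll back on failure. A minimal sketch of that pattern using the standard-library sqlite3 module (the same calls apply to an aiosqlite connection, awaited):

```python
import sqlite3


def insert_user(conn: sqlite3.Connection, name: str) -> None:
    """Commit on success, roll back on any database error."""
    try:
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()
    except sqlite3.Error:
        conn.rollback()
        raise


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
insert_user(conn, "Alice")
```

The same try/commit/rollback structure drops into the `async with pool.connection()` block above unchanged, with each call awaited.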

Using as Context Manager

The pool can be used as an async context manager for automatic cleanup:

async def main():
    async with SQLiteConnectionPool(connection_factory) as pool:
        async with pool.connection() as conn:
            # Do database work
            pass
    # Pool is automatically closed

High-performance SQLite connection configuration

For high-performance applications, configure your connection factory with optimized SQLite pragmas.

import aiosqlite

from aiosqlitepool import SQLiteConnectionPool


async def sqlite_connection() -> aiosqlite.Connection:
    # Connect to your database
    conn = await aiosqlite.connect("your_database.db")
    # Apply high-performance pragmas
    await conn.execute("PRAGMA journal_mode = WAL")
    await conn.execute("PRAGMA synchronous = NORMAL")
    await conn.execute("PRAGMA cache_size = 10000")
    await conn.execute("PRAGMA temp_store = MEMORY")
    await conn.execute("PRAGMA foreign_keys = ON")
    await conn.execute("PRAGMA mmap_size = 268435456")
    
    return conn


async def main():
    # Initialize the connection pool with your high-performance connection factory
    pool = SQLiteConnectionPool(
        connection_factory=sqlite_connection,
    )
    
    # Use the pool
    async with pool.connection() as conn:
        # Your database operations here
        # cursor = await conn.execute("SELECT ...")
        # rows = await cursor.fetchall()
        pass
    
    # Clean up
    await pool.close()

PRAGMA journal_mode = WAL - Writes go to a separate WAL file, reads continue from main database. Multiple readers can work simultaneously with one writer.

PRAGMA synchronous = NORMAL - SQLite syncs to disk at critical moments, but not after every write. ~2-3x faster writes than FULL synchronization.

PRAGMA cache_size = 10000 - Keeps 10,000 database pages (~40MB) in memory. Frequently accessed data is served from RAM, not disk.

PRAGMA temp_store = MEMORY - Stores temporary tables, indexes, and sorting operations in RAM. Eliminates disk I/O for temporary operations.

PRAGMA foreign_keys = ON - Enforces foreign key constraints automatically. Prevents data corruption and reduces application-level checks.

PRAGMA mmap_size = 268435456 - Maps the database file directly into process memory (256MB here). Reduces system calls and speeds up access to large databases.
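Each pragma can be read back to confirm it was applied. A quick check with the standard-library sqlite3 module (the file path is illustrative):

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database; an in-memory database reports "memory".
path = os.path.join(tempfile.mkdtemp(), "perf.db")
conn = sqlite3.connect(path)

conn.execute("PRAGMA journal_mode = WAL")
conn.execute("PRAGMA synchronous = NORMAL")
conn.execute("PRAGMA foreign_keys = ON")

# Reading a pragma without a value returns its current setting.
journal_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
foreign_keys = conn.execute("PRAGMA foreign_keys").fetchone()[0]
```

Reading pragmas back is a cheap sanity check, since SQLite silently ignores pragmas it does not recognize.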

FastAPI integration

This section demonstrates an effective pattern for integrating aiosqlitepool with FastAPI applications.

The pattern addresses three key requirements:

  1. Lifecycle management: The pool is created during application startup and gracefully closed on shutdown using FastAPI's lifespan context manager
  2. Global access: The pool is stored in the application's state, making it accessible to all route handlers
  3. Dependency injection: A reusable dependency function provides clean access to pooled connections with automatic resource management

from contextlib import asynccontextmanager
from typing import AsyncGenerator

import aiosqlite

from aiosqlitepool import SQLiteConnectionPool
from fastapi import Depends, FastAPI, HTTPException, Request


async def sqlite_connection() -> aiosqlite.Connection:
    """A factory for creating new connections."""
    conn = await aiosqlite.connect("app.db")
    conn.row_factory = aiosqlite.Row  # so fetched rows can be converted to dicts

    return conn


@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    Manage the connection pool's lifecycle.
    The pool is created when the application starts and gracefully closed when it stops.
    """
    db_pool = SQLiteConnectionPool(connection_factory=sqlite_connection, pool_size=10)
    app.state.db_pool = db_pool
    yield
    await db_pool.close()


app = FastAPI(lifespan=lifespan)


async def get_db_connection(request: Request) -> AsyncGenerator[aiosqlite.Connection, None]:
    """
    A dependency that provides a connection from the pool.
    It accesses the pool from the application state.
    """
    db_pool = request.app.state.db_pool

    async with db_pool.connection() as conn:
        yield conn


@app.get("/users/{user_id}")
async def get_user(
    user_id: int, db_conn: aiosqlite.Connection = Depends(get_db_connection)
) -> dict:
    cursor = await db_conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
    user = await cursor.fetchone()

    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    
    return dict(user)

Configuration

SQLiteConnectionPool accepts these parameters:

Required parameters

  • connection_factory - An async function that creates and returns a new database connection. This function will be called whenever the pool needs to create a new connection.

Optional parameters

  • pool_size (int) - Maximum number of connections to maintain in the pool (default: 5)

  • acquisition_timeout (int) - Maximum seconds to wait for an available connection (default: 30)

  • idle_timeout (int) - Maximum seconds a connection can remain idle before replacement (default: 86400 - 24 hours)

Recommended configurations

Most web applications work well with these settings:

pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=10,
    acquisition_timeout=30
)

For read-heavy workloads like analytics or reporting:

pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=20,
    acquisition_timeout=15
)

For write-heavy workloads:

pool = SQLiteConnectionPool(
    connection_factory,
    pool_size=5,
    acquisition_timeout=60
)

How it works

The pool automatically:

  • Creates connections on-demand up to the pool size limit
  • Reuses idle connections to avoid creation overhead
  • Performs health checks to detect broken connections
  • Rolls back any uncommitted transactions when connections are returned
  • Replaces connections that have been idle too long
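The health check can be pictured as issuing a trivial query and discarding the connection if it fails. A sketch of the idea using the standard-library sqlite3 module (the pool's actual internals may differ):

```python
import sqlite3


def is_alive(conn: sqlite3.Connection) -> bool:
    """Return True if the connection can still execute a trivial query."""
    try:
        conn.execute("SELECT 1").fetchone()
        return True
    except sqlite3.Error:
        return False


conn = sqlite3.connect(":memory:")
alive_before = is_alive(conn)  # an open connection passes the check
conn.close()
alive_after = is_alive(conn)   # a closed connection fails it
```

A connection that fails the check is simply replaced with a fresh one from the factory, so callers never see the broken connection.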

Do you need a connection pool with SQLite?

For server-based databases like PostgreSQL or MySQL, the answer is always yes. Connecting over a network is slow and expensive. A connection pool is a critical pattern to minimize latency and manage server resources.

But SQLite is different. It's an embedded, in-process database. There's no network, no sockets, just a file on disk. The overhead of creating a connection is measured in microseconds, not milliseconds.

So, is a connection pool just needless complexity?

The primary challenge with SQLite in a concurrent environment (like an asyncio web application) is not connection time, but write contention. SQLite uses a database-level lock for writes. When multiple asynchronous tasks try to write to the database simultaneously through their own separate connections, they will collide. This contention leads to a cascade of SQLITE_BUSY or SQLITE_LOCKED errors.
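The collision is easy to reproduce with two raw connections and no busy-wait, using the standard-library sqlite3 module (autocommit mode and a zero timeout make the conflict immediate):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "contention.db")

# Two independent connections; timeout=0 means "fail instead of waiting".
writer_a = sqlite3.connect(path, timeout=0, isolation_level=None)
writer_b = sqlite3.connect(path, timeout=0, isolation_level=None)
writer_a.execute("CREATE TABLE t (x INTEGER)")

writer_a.execute("BEGIN IMMEDIATE")  # writer_a takes the write lock

error = None
try:
    writer_b.execute("BEGIN IMMEDIATE")  # writer_b collides with the lock
except sqlite3.OperationalError as exc:
    error = str(exc)  # SQLITE_BUSY surfaces as "database is locked"
```

A pool sidesteps this by funneling work through a bounded set of connections instead of letting every task open its own and fight over the lock.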

Most applications will never encounter these errors; they surface only under sustained heavy concurrent load.

Here's a quick checklist. Use aiosqlitepool if:

  • Your service handles steady web traffic: Your application serves more than 5-10 requests per second from the database.
  • Your workers process high-throughput jobs: Your background workers run more than ~30 queries per second.
  • Your application requires predictable low latency: Your service operates under a tight performance budget (e.g., p99 latency < 50ms).
  • You aim for a minimal footprint: You design your applications to be resource-efficient, knowing that reducing CPU and I/O load contributes to leaner, more sustainable infrastructure.

You don't need aiosqlitepool if your application is:

  • A short-lived script that runs a few queries and exits.
  • A very low-traffic service with fewer than a few requests per minute.

Benchmarks

All benchmarks performed on a realistic database with:

  • 1.2M users
  • 120K posts
  • 6M comments
  • 12M likes

Load test

1,000 concurrent requests across 100 workers

Metric           Without Pool   With Pool   Improvement
Queries/sec      3,325          5,731       +72%
Average latency  28.98ms        17.13ms     -41%
Median latency   28.10ms        13.57ms     -52%
P90 latency      37.39ms        18.25ms     -51%
P99 latency      42.17ms        58.76ms     Variable*

*P99 latency shows pool contention under extreme load (100 workers, pool size 100) where 1% of requests must wait for connection availability

Key takeaway: In realistic concurrent scenarios, connection pooling delivers 1.7x throughput improvement and 2x faster response times for 99% of requests.

Connection overhead

10,000 simple SELECT operations across 5 workers

Approach                 Avg Latency   Median Latency   Total Time   Operations/sec
Open/close per query     1,019μs       1,006μs          2.04s        4,902
Persistent connections   396μs         389μs            0.79s        12,658
Improvement              -61%          -61%             -61%         +158%

Pure connection overhead: Each connection create/destroy cycle costs 623 microseconds (1,019 - 396 = 623μs) of pure overhead per database operation.
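The overhead is easy to measure yourself. A rough micro-benchmark with the standard-library sqlite3 module (absolute numbers vary by machine and filesystem; only the relative gap matters):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
sqlite3.connect(path).close()  # create the database file

N = 500

# Open and close a fresh connection for every query.
start = time.perf_counter()
for _ in range(N):
    conn = sqlite3.connect(path)
    conn.execute("SELECT 1").fetchone()
    conn.close()
per_query_open = time.perf_counter() - start

# Reuse one persistent connection, as a pool would.
conn = sqlite3.connect(path)
start = time.perf_counter()
for _ in range(N):
    conn.execute("SELECT 1").fetchone()
persistent = time.perf_counter() - start
conn.close()
```

On any machine, the persistent-connection loop should finish well ahead of the open/close loop, since it pays the connection setup and teardown cost only once.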

Compatibility

aiosqlitepool is designed to be a flexible pooling layer that works with different asyncio SQLite drivers.

To be compatible, a driver's connection object must implement the following three async methods:

class Connection:
    async def execute(self, *args, **kwargs):
        ...

    async def rollback(self) -> None:
        ...

    async def close(self) -> None:
        ...

Note on commit: The commit method is intentionally not part of the protocol. Transaction management is considered the responsibility of the application developer, not the pool. aiosqlitepool never commits on your behalf.
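To make the protocol concrete, here is a hypothetical adapter that wraps the standard-library sqlite3 module and exposes just those three methods (real drivers like aiosqlite do considerably more; this is a sketch, not a production driver):

```python
import asyncio
import sqlite3


class MinimalConnection:
    """Hypothetical adapter exposing only the protocol the pool requires."""

    def __init__(self, path: str) -> None:
        # check_same_thread=False because calls run in worker threads below.
        self._conn = sqlite3.connect(path, check_same_thread=False)

    async def execute(self, sql: str, parameters: tuple = ()):
        # Run the blocking call in a worker thread to stay async-friendly.
        return await asyncio.to_thread(self._conn.execute, sql, parameters)

    async def rollback(self) -> None:
        await asyncio.to_thread(self._conn.rollback)

    async def close(self) -> None:
        await asyncio.to_thread(self._conn.close)


async def demo() -> int:
    conn = MinimalConnection(":memory:")
    await conn.execute("CREATE TABLE t (x INTEGER)")
    cursor = await conn.execute("SELECT 41 + 1")
    (value,) = cursor.fetchone()
    await conn.close()
    return value


result = asyncio.run(demo())
```

Any object with this surface can be returned from your `connection_factory`, which is all the pool needs to acquire, health-check, reset, and retire connections.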

Officially supported drivers

The following libraries are tested and confirmed to work out-of-the-box with aiosqlitepool:

  • aiosqlite
Using other drivers

If you are using another asyncio SQLite library that follows the protocol, it should work seamlessly. Just pass your driver's connection function to the connection_factory.

If you encounter an issue with a specific driver, please let us know by opening a GitHub issue.

License

aiosqlitepool is available under the MIT License.
