
ApexBase

High-performance HTAP embedded database with Rust core and Python API

ApexBase is an embedded columnar database designed for Hybrid Transactional/Analytical Processing (HTAP) workloads. It combines a high-throughput columnar storage engine written in Rust with an ergonomic Python API, delivering analytical query performance that surpasses DuckDB and SQLite on most benchmarks — all stored as one .apex file per table with zero external dependencies.


Features

  • HTAP architecture — V4 Row Group columnar storage with DeltaStore for cell-level updates; fast inserts and fast analytical scans in one engine
  • Single-file storage — custom .apex format per table, no server process, no external dependencies
  • Comprehensive SQL — DDL, DML, JOINs (INNER/LEFT/RIGHT/FULL/CROSS), subqueries (IN/EXISTS/scalar), CTEs (WITH ... AS), UNION/UNION ALL, window functions, EXPLAIN/ANALYZE, multi-statement execution
  • 70+ built-in functions — math (ABS, SQRT, POWER, LOG, trig), string (UPPER, LOWER, SUBSTR, REPLACE, CONCAT, REGEXP_REPLACE, ...), date (YEAR, MONTH, DAY, DATEDIFF, DATE_ADD, ...), conditional (COALESCE, IFNULL, NULLIF, CASE WHEN, GREATEST, LEAST)
  • Aggregation and analytics — COUNT, SUM, AVG, MIN, MAX, COUNT(DISTINCT), GROUP BY, HAVING, ORDER BY with NULLS FIRST/LAST
  • Window functions — ROW_NUMBER, RANK, DENSE_RANK, NTILE, PERCENT_RANK, CUME_DIST, LAG, LEAD, FIRST_VALUE, LAST_VALUE, NTH_VALUE, RUNNING_SUM, and windowed SUM/AVG/COUNT/MIN/MAX with PARTITION BY and ORDER BY
  • Transactions — BEGIN / COMMIT / ROLLBACK with OCC (Optimistic Concurrency Control), SAVEPOINT / ROLLBACK TO / RELEASE, statement-level auto-rollback
  • MVCC — multi-version concurrency control with snapshot isolation, version store, and garbage collection
  • Indexing — B-Tree and Hash indexes with CREATE INDEX / DROP INDEX / REINDEX; automatic multi-index AND intersection for compound predicates
  • Full-text search — built-in NanoFTS integration with fuzzy matching
  • JIT compilation — Cranelift-based JIT for predicate evaluation and SIMD-vectorized aggregations
  • Zero-copy Python bridge — Arrow IPC between Rust and Python; direct conversion to Pandas, Polars, and PyArrow
  • Durability levels — configurable fast / safe / max with WAL support and crash recovery
  • Compact storage — dictionary encoding for low-cardinality strings, LZ4 and Zstd compression
  • Parquet interop — COPY TO / COPY FROM Parquet files
  • PostgreSQL wire protocol — built-in server for DBeaver, psql, DataGrip, pgAdmin, Navicat, and any PostgreSQL-compatible client; two distribution modes (Python CLI or standalone Rust binary)
  • Cross-platform — Linux, macOS, and Windows; x86_64 and ARM64; Python 3.9–3.13

Installation

pip install apexbase

Build from source (requires Rust toolchain):

maturin develop --release

Quick Start

from apexbase import ApexClient

# Open (or create) a database directory
client = ApexClient("./data")

# Create a table
client.create_table("users")

# Store records
client.store({"name": "Alice", "age": 30, "city": "Beijing"})
client.store([
    {"name": "Bob", "age": 25, "city": "Shanghai"},
    {"name": "Charlie", "age": 35, "city": "Beijing"},
])

# SQL query
results = client.execute("SELECT * FROM users WHERE age > 28 ORDER BY age DESC")

# Convert to DataFrame
df = results.to_pandas()

client.close()

Usage Guide

Table Management

Each table is stored as a separate .apex file. Tables must be created before use.

# Create with optional schema
client.create_table("orders", schema={
    "order_id": "int64",
    "product": "string",
    "price": "float64",
})

# Switch tables
client.use_table("users")

# List / drop
tables = client.list_tables()
client.drop_table("orders")

Data Ingestion

import pandas as pd
import polars as pl
import pyarrow as pa

# Columnar dict (fastest for bulk data)
client.store({
    "name": ["D", "E", "F"],
    "age": [22, 32, 42],
})

# From pandas / polars / PyArrow (auto-creates table when table_name given)
client.from_pandas(pd.DataFrame({"name": ["G"], "age": [28]}), table_name="users")
client.from_polars(pl.DataFrame({"name": ["H"], "age": [38]}), table_name="users")
client.from_pyarrow(pa.table({"name": ["I"], "age": [48]}), table_name="users")

SQL

ApexBase supports a broad SQL dialect. Examples:

# DDL
client.execute("CREATE TABLE IF NOT EXISTS products")
client.execute("ALTER TABLE products ADD COLUMN name STRING")
client.execute("DROP TABLE IF EXISTS products")

# DML
client.execute("INSERT INTO users (name, age) VALUES ('Zoe', 29)")
client.execute("UPDATE users SET age = 31 WHERE name = 'Alice'")
client.execute("DELETE FROM users WHERE age < 20")

# SELECT with full clause support
client.execute("""
    SELECT city, COUNT(*) AS cnt, AVG(age) AS avg_age
    FROM users
    WHERE age BETWEEN 20 AND 40
    GROUP BY city
    HAVING cnt > 1
    ORDER BY avg_age DESC
    LIMIT 10
""")

# JOINs
client.execute("""
    SELECT u.name, o.product
    FROM users u
    INNER JOIN orders o ON u._id = o.user_id
""")

# Subqueries
client.execute("SELECT * FROM users WHERE age > (SELECT AVG(age) FROM users)")
client.execute("SELECT * FROM users WHERE city IN (SELECT city FROM cities WHERE pop > 1000000)")

# CTEs
client.execute("""
    WITH seniors AS (SELECT * FROM users WHERE age >= 30)
    SELECT city, COUNT(*) FROM seniors GROUP BY city
""")

# Window functions
client.execute("""
    SELECT name, age,
           ROW_NUMBER() OVER (ORDER BY age DESC) AS rank,
           AVG(age) OVER (PARTITION BY city) AS city_avg
    FROM users
""")

# UNION
client.execute("""
    SELECT name FROM users WHERE city = 'Beijing'
    UNION ALL
    SELECT name FROM users WHERE city = 'Shanghai'
""")

# Multi-statement
client.execute("""
    INSERT INTO users (name, age) VALUES ('New1', 20);
    INSERT INTO users (name, age) VALUES ('New2', 21);
    SELECT COUNT(*) FROM users
""")

# INSERT ... ON CONFLICT (upsert)
client.execute("""
    INSERT INTO users (name, age) VALUES ('Alice', 31)
    ON CONFLICT (name) DO UPDATE SET age = 31
""")

# CREATE TABLE AS
client.execute("CREATE TABLE seniors AS SELECT * FROM users WHERE age >= 30")

# EXPLAIN / EXPLAIN ANALYZE
client.execute("EXPLAIN SELECT * FROM users WHERE age > 25")

# Parquet interop
client.execute("COPY users TO '/tmp/users.parquet'")
client.execute("COPY users FROM '/tmp/users.parquet'")

Transactions

client.execute("BEGIN")
client.execute("INSERT INTO users (name, age) VALUES ('Tx1', 20)")
client.execute("SAVEPOINT sp1")
client.execute("INSERT INTO users (name, age) VALUES ('Tx2', 21)")
client.execute("ROLLBACK TO sp1")   # undo Tx2 only
client.execute("COMMIT")            # Tx1 persisted

Transactions use OCC validation — concurrent writes are detected at commit time.
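Because OCC conflicts only surface at COMMIT, callers generally wrap write transactions in a retry loop. A minimal sketch (the exact exception type ApexBase raises on a failed commit is an assumption here; narrow the broad `Exception` to whatever your version throws):

```python
def run_transaction_with_retry(client, statements, attempts=3):
    """Run statements inside one transaction, retrying on commit conflicts.

    Sketch only: whether ROLLBACK is needed (or is a no-op) after a failed
    COMMIT may depend on the ApexBase version.
    """
    for _ in range(attempts):
        client.execute("BEGIN")
        try:
            for stmt in statements:
                client.execute(stmt)
            client.execute("COMMIT")
            return True
        except Exception:
            client.execute("ROLLBACK")  # discard this attempt and retry
    return False
```

Usage: `run_transaction_with_retry(client, ["UPDATE users SET age = 32 WHERE name = 'Alice'"])`.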

Indexes

client.execute("CREATE INDEX idx_age ON users (age)")
client.execute("CREATE UNIQUE INDEX idx_name ON users (name)")

# Queries automatically use indexes when applicable
client.execute("SELECT * FROM users WHERE age = 30")  # index scan

client.execute("DROP INDEX idx_age ON users")
client.execute("REINDEX users")

Full-Text Search

client.init_fts(index_fields=["name", "city"])

ids = client.search_text("Alice")
records = client.search_and_retrieve("Beijing", limit=10)
fuzzy = client.fuzzy_search_text("Alic")  # tolerates typos

client.get_fts_stats()
client.drop_fts()

Record-Level Operations

record = client.retrieve(0)               # by internal _id
records = client.retrieve_many([0, 1, 2])
all_data = client.retrieve_all()

client.replace(0, {"name": "Alice2", "age": 31})
client.delete(0)
client.delete([1, 2, 3])

Column Operations

client.add_column("email", "String")
client.rename_column("email", "email_addr")
client.drop_column("email_addr")
client.get_column_dtype("age")    # "Int64"
client.list_fields()              # ["name", "age", "city"]

ResultView

Query results are returned as ResultView objects with multiple output formats:

results = client.execute("SELECT * FROM users")

df = results.to_pandas()       # pandas DataFrame (zero-copy by default)
pl_df = results.to_polars()    # polars DataFrame
arrow = results.to_arrow()     # PyArrow Table
dicts = results.to_dict()      # list of dicts

results.shape                  # (rows, columns)
results.columns                # column names
len(results)                   # row count
results.first()                # first row as dict
results.scalar()               # single value (for aggregates)
results.get_ids()              # numpy array of _id values

Context Manager

with ApexClient("./data") as client:
    client.create_table("tmp")
    client.store({"key": "value"})
    # Automatically closed on exit

Performance

ApexBase vs SQLite vs DuckDB (1M rows)

Three-way comparison on macOS 26.2, Apple M1 Pro (10 cores), 32 GB RAM. Python 3.11.10, ApexBase v1.1.0, SQLite v3.45.3, DuckDB v1.1.3, PyArrow 19.0.0.

Dataset: 1,000,000 rows x 5 columns (name, age, score, city, category). Average of 5 timed iterations after 2 warmup runs.

| Query | ApexBase | SQLite | DuckDB | vs Best Other |
|---|---|---|---|---|
| Bulk Insert (1M rows) | 357ms | 976ms | 927ms | 2.6x faster |
| COUNT(*) | 0.068ms | 9.05ms | 0.49ms | 7.2x faster |
| SELECT * LIMIT 100 | 0.13ms | 0.12ms | 0.50ms | ~tied |
| SELECT * LIMIT 10K | 0.031ms | 7.46ms | 5.27ms | 170x faster |
| Filter (string =) | 0.020ms | 53.6ms | 1.73ms | 87x faster |
| Filter (BETWEEN) | 0.018ms | 191ms | 94.7ms | 5300x faster |
| GROUP BY (10 groups) | 0.026ms | 358ms | 3.70ms | 142x faster |
| GROUP BY + HAVING | 0.030ms | 439ms | 4.40ms | 147x faster |
| ORDER BY + LIMIT | 0.027ms | 67.4ms | 38.7ms | 1400x faster |
| Aggregation (5 funcs) | 0.48ms | 85.9ms | 1.59ms | 3.3x faster |
| Complex (Filter+Group+Order) | 0.029ms | 175ms | 3.59ms | 124x faster |
| Point Lookup (by ID) | 0.39ms | 0.050ms | 4.29ms | 7.9x slower |
| Insert 1K rows | 1.01ms | 1.45ms | 2.95ms | 1.4x faster |

Summary: ApexBase wins 11 of the 13 benchmarks and ties 1. No query loses to both competitors at once; Point Lookup trails SQLite but still beats DuckDB by 11x.
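For reference, the "vs Best Other" column is simply the faster competitor's time divided by ApexBase's time, which can be checked against any row of the table:

```python
def speedup_vs_best_other(apex_ms, sqlite_ms, duckdb_ms):
    """Ratio of the best competing time to ApexBase's time (>1 means faster)."""
    return min(sqlite_ms, duckdb_ms) / apex_ms

print(round(speedup_vs_best_other(357, 976, 927), 1))      # Bulk Insert -> 2.6
print(round(speedup_vs_best_other(0.068, 9.05, 0.49), 1))  # COUNT(*)    -> 7.2
```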

Reproduce: python benchmarks/bench_vs_sqlite_duckdb.py --rows 1000000


PostgreSQL Wire Protocol Server

ApexBase includes a built-in PostgreSQL wire protocol server, allowing you to connect using DBeaver, psql, DataGrip, pgAdmin, Navicat, and any other tool that supports the PostgreSQL protocol.

Starting the Server

Method 1: Python CLI (after pip install apexbase)

apexbase-server --dir /path/to/data --port 5432

Options:

| Flag | Default | Description |
|---|---|---|
| `--dir`, `-d` | `.` | Directory containing `.apex` database files |
| `--host` | `127.0.0.1` | Host to bind to (use `0.0.0.0` for remote access) |
| `--port`, `-p` | `5432` | Port to listen on |

Method 2: Standalone Rust binary (no Python required)

# Build
cargo build --release --bin apexbase-server --no-default-features --features server

# Run
./target/release/apexbase-server --dir /path/to/data --port 5432

Connecting with Database Tools

The server emulates PostgreSQL 15.0, reports a pg_catalog and information_schema compatible metadata layer, and supports the Simple Query protocol. No username or password is required (authentication is disabled).

DBeaver

  1. New Database Connection → choose PostgreSQL
  2. Fill in connection details:
    • Host: 127.0.0.1 (or the --host you specified)
    • Port: 5432 (or the --port you specified)
    • Database: apexbase (any value accepted)
    • Authentication: select No Authentication or leave username/password empty
  3. Click Test Connection → Finish
  4. DBeaver will discover tables and columns automatically via pg_catalog / information_schema

psql

psql -h 127.0.0.1 -p 5432 -d apexbase

DataGrip / IntelliJ IDEA

  1. Database tool window → + → Data Source → PostgreSQL
  2. Set Host, Port, Database as above; leave User and Password empty
  3. Click Test Connection → OK

pgAdmin

  1. Add New Server → General tab: give it a name
  2. Connection tab: set Host and Port; leave Username as postgres (ignored) and Password empty
  3. Save — tables appear under Databases > apexbase > Schemas > public > Tables

Navicat for PostgreSQL

  1. Connection → PostgreSQL
  2. Set Host, Port; leave User and Password blank
  3. Test Connection → OK

Other Compatible Tools

Any tool or library that speaks the PostgreSQL wire protocol (libpq) can connect, including:

  • TablePlus, Beekeeper Studio, HeidiSQL
  • Python: psycopg2 / asyncpg
  • Node.js: pg (node-postgres)
  • Go: pgx / lib/pq
  • Rust: tokio-postgres / sqlx
  • Java: JDBC PostgreSQL driver

Example with psycopg2:

import psycopg2

conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="apexbase")
cur = conn.cursor()
cur.execute("SELECT * FROM users LIMIT 10")
print(cur.fetchall())
conn.close()

Supported SQL over Wire Protocol

The wire protocol server passes SQL directly to the ApexBase query engine. All SQL features listed in Usage Guide are available, including JOINs, CTEs, window functions, transactions, and DDL.

Metadata Compatibility

The server implements a pg_catalog compatibility layer that responds to common catalog queries:

| Catalog / View | Purpose |
|---|---|
| `pg_catalog.pg_namespace` | Schema listing |
| `pg_catalog.pg_database` | Database listing |
| `pg_catalog.pg_class` | Table discovery |
| `pg_catalog.pg_attribute` | Column metadata |
| `pg_catalog.pg_type` | Type information |
| `pg_catalog.pg_settings` | Server settings |
| `information_schema.tables` | Standard table listing |
| `information_schema.columns` | Standard column listing |
| `SET` / `SHOW` statements | Client configuration probes |

This enables GUI tools to browse tables, inspect columns, and display data types without modification.
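A client can use the same views programmatically, for example to enumerate every table and column over the wire protocol (a sketch; it assumes the server is running locally and psycopg2 is installed):

```python
COLUMNS_QUERY = """
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    ORDER BY table_name, column_name
"""

def list_columns(host="127.0.0.1", port=5432):
    """Return (table, column, type) tuples from the running ApexBase server."""
    import psycopg2  # deferred so the module loads without the driver
    conn = psycopg2.connect(host=host, port=port, dbname="apexbase")
    try:
        cur = conn.cursor()
        cur.execute(COLUMNS_QUERY)
        return cur.fetchall()
    finally:
        conn.close()
```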

Limitations

  • Extended Query Protocol (prepared statements with binary parameters) is not yet supported; tools should use the Simple Query protocol
  • Authentication is not implemented — the server accepts all connections
  • SSL/TLS is not yet supported — use SSH tunneling for remote access if needed
  • Single-database — all .apex files in the data directory appear as tables under the public schema

Architecture

Python (ApexClient)
  |
  |-- Arrow IPC / columnar dict --------> ResultView (Pandas / Polars / PyArrow)
  |
Rust Core (PyO3 bindings)
  |
  +-- SQL Parser -----> Query Planner -----> Query Executor
  |                                              |
  |   +-- JIT Compiler (Cranelift)               |
  |   +-- Expression Evaluator (70+ functions)   |
  |   +-- Window Function Engine                 |
  |                                              |
  +-- Storage Engine                             |
  |     +-- V4 Row Group Format (.apex)          |
  |     +-- DeltaStore (cell-level updates)      |
  |     +-- WAL (write-ahead log)                |
  |     +-- Mmap on-demand reads                 |
  |     +-- LZ4 / Zstd compression               |
  |     +-- Dictionary encoding                  |
  |                                              |
  +-- Index Manager (B-Tree, Hash)               |
  +-- TxnManager (OCC + MVCC)                    |
  +-- NanoFTS (full-text search)                 |
  +-- PG Wire Protocol Server (pgwire) <---------+
      +-- DBeaver / psql / DataGrip / pgAdmin
      +-- pg_catalog & information_schema compat

Storage Format

ApexBase uses a custom V4 Row Group format:

  • Each table is a single .apex file containing a header, row groups, and a footer
  • Row groups store columns contiguously with per-column compression (LZ4 or Zstd)
  • Low-cardinality string columns are dictionary-encoded on disk
  • Null bitmaps are stored per column per row group
  • A DeltaStore file (.deltastore) holds cell-level updates that are merged on read and compacted automatically
  • WAL records provide crash recovery with idempotent replay
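The DeltaStore's merge-on-read behavior can be pictured as overlaying cell-level updates on immutable row-group values (a conceptual Python sketch of the idea, not the actual Rust implementation):

```python
def merge_on_read(base_column, delta):
    """Overlay DeltaStore cell updates on a base row-group column.

    base_column: list of stored values; delta: {row_index: new_value}.
    """
    return [delta.get(i, value) for i, value in enumerate(base_column)]

ages = [30, 25, 35]     # values persisted in the row group
delta = {0: 31}         # e.g. UPDATE users SET age = 31 for row 0
print(merge_on_read(ages, delta))  # -> [31, 25, 35]
```

Compaction then rewrites the row group with the merged values and empties the delta.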

Query Execution

  • The SQL parser produces an AST that the query planner analyzes for optimization strategy
  • Fast paths bypass the full executor for common patterns (COUNT(*), SELECT * LIMIT N, point lookups, single-column GROUP BY)
  • Arrow RecordBatch is the internal data representation; results flow to Python via Arrow IPC with zero-copy when possible
  • Repeated identical read queries are served from an in-process result cache
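The result cache in the last bullet can be thought of as memoization keyed on the SQL text and invalidated by writes (a conceptual sketch; the real cache lives in the Rust core and its invalidation rules are more precise):

```python
class CachingExecutor:
    """Illustration of an in-process result cache keyed on SQL text."""

    def __init__(self, execute_fn):
        self._execute = execute_fn
        self._cache = {}
        self.misses = 0

    def execute(self, sql):
        if sql.lstrip().upper().startswith("SELECT"):
            if sql not in self._cache:      # identical read queries hit the cache
                self.misses += 1
                self._cache[sql] = self._execute(sql)
            return self._cache[sql]
        self._cache.clear()                 # any write invalidates cached reads
        return self._execute(sql)
```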

API Reference

ApexClient

Constructor

ApexClient(
    dirpath="./data",           # data directory
    drop_if_exists=False,       # clear existing data on open
    batch_size=1000,            # batch size for operations
    enable_cache=True,          # enable query cache
    cache_size=10000,           # cache capacity
    prefer_arrow_format=True,   # prefer Arrow format for results
    durability="fast",          # "fast" | "safe" | "max"
)

Table Management

| Method | Description |
|---|---|
| `create_table(name, schema=None)` | Create a new table, optionally with a predefined schema |
| `drop_table(name)` | Drop a table |
| `use_table(name)` | Switch the active table |
| `list_tables()` | List all tables |
| `current_table` | Property: current table name |

Data Storage

| Method | Description |
|---|---|
| `store(data)` | Store data (dict, list, DataFrame, Arrow Table) |
| `from_pandas(df, table_name=None)` | Import from pandas DataFrame |
| `from_polars(df, table_name=None)` | Import from polars DataFrame |
| `from_pyarrow(table, table_name=None)` | Import from PyArrow Table |

Data Retrieval

| Method | Description |
|---|---|
| `execute(sql)` | Execute SQL statement(s) |
| `query(where, limit)` | Query with WHERE expression |
| `retrieve(id)` | Get record by `_id` |
| `retrieve_many(ids)` | Get multiple records by `_id` |
| `retrieve_all()` | Get all records |
| `count_rows(table)` | Count rows in table |

Data Modification

| Method | Description |
|---|---|
| `replace(id, data)` | Replace a record |
| `batch_replace({id: data})` | Batch replace records |
| `delete(id)` or `delete([ids])` | Delete record(s) |

Column Operations

| Method | Description |
|---|---|
| `add_column(name, type)` | Add a column |
| `drop_column(name)` | Drop a column |
| `rename_column(old, new)` | Rename a column |
| `get_column_dtype(name)` | Get column data type |
| `list_fields()` | List all fields |

Full-Text Search

| Method | Description |
|---|---|
| `init_fts(fields, lazy_load, cache_size)` | Initialize FTS |
| `search_text(query)` | Search documents |
| `fuzzy_search_text(query)` | Fuzzy search |
| `search_and_retrieve(query, limit, offset)` | Search and return records |
| `search_and_retrieve_top(query, n)` | Top N results |
| `get_fts_stats()` | FTS statistics |
| `disable_fts()` / `drop_fts()` | Disable or drop FTS |

Utility

| Method | Description |
|---|---|
| `flush()` | Flush data to disk |
| `set_auto_flush(rows, bytes)` | Set auto-flush thresholds |
| `get_auto_flush()` | Get auto-flush config |
| `estimate_memory_bytes()` | Estimate memory usage |
| `close()` | Close the client |
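A typical use of the flush controls is to derive thresholds from an estimated row width. The sizing heuristic below is purely illustrative (not an ApexBase recommendation); only `set_auto_flush(rows, bytes)` comes from the table above:

```python
def flush_thresholds(avg_row_bytes, target_mb=64):
    """Flush roughly every target_mb of buffered data (illustrative heuristic)."""
    byte_limit = target_mb * 1024 * 1024
    row_limit = max(1, byte_limit // avg_row_bytes)
    return row_limit, byte_limit

rows, nbytes = flush_thresholds(avg_row_bytes=128, target_mb=64)
# client.set_auto_flush(rows, nbytes)
```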

ResultView

| Method / Property | Description |
|---|---|
| `to_pandas(zero_copy=True)` | Convert to pandas DataFrame |
| `to_polars()` | Convert to polars DataFrame |
| `to_arrow()` | Convert to PyArrow Table |
| `to_dict()` | Convert to list of dicts |
| `scalar()` | Get single scalar value |
| `first()` | Get first row as dict |
| `get_ids(return_list=False)` | Get record IDs |
| `shape` | (rows, columns) |
| `columns` | Column names |
| `__len__()` | Row count |
| `__iter__()` | Iterate over rows |
| `__getitem__(idx)` | Index access |

Documentation

Additional documentation is available in the docs/ directory.

License

Apache-2.0
