
Embedded knowledge graph database for AI and RAG applications


Lattice Python Bindings

Python bindings for LatticeDB, an embedded knowledge graph database for AI/RAG applications.

Installation

From Source

# Build the native library first
cd /path/to/latticedb
zig build shared

# Install the Python package
cd bindings/python
pip install -e .

Quick Start

from lattice import Database

# Create a new database
with Database("mydb.ltdb", create=True) as db:
    # Write transaction
    with db.write() as txn:
        # Create nodes with properties
        alice = txn.create_node(
            labels=["Person"],
            properties={"name": "Alice", "age": 30}
        )
        bob = txn.create_node(
            labels=["Person"],
            properties={"name": "Bob", "age": 25}
        )

        # Create relationships
        txn.create_edge(alice.id, bob.id, "KNOWS")

        txn.commit()

    # Query with Cypher
    result = db.query("MATCH (n:Person) RETURN n.name")
    for row in result:
        print(row)  # {'n.name': 'Alice'}, {'n.name': 'Bob'}

    # Query with parameters (safe from injection)
    result = db.query(
        "MATCH (n:Person) WHERE n.name = $name RETURN n",
        parameters={"name": "Alice"}
    )

API Reference

Database

Database(
    path: str | Path,
    *,
    create: bool = False,        # Create the database if it doesn't exist
    read_only: bool = False,     # Open in read-only mode
    cache_size_mb: int = 100,    # Page cache size
    enable_vector: bool = False, # Enable vector storage
    vector_dimensions: int = 128 # Vector dimensions
)

Methods

  • open() - Open the database connection
  • close() - Close the database connection
  • read() - Start a read-only transaction (context manager)
  • write() - Start a read-write transaction (context manager)
  • query(cypher: str, parameters: dict = None) - Execute a Cypher query with optional parameters

Transaction

Read Operations

  • is_read_only - True if the transaction is read-only
  • is_active - True if the transaction is still active
  • get_node(node_id: int) - Get a node by ID, returns Node or None
  • get_property(node_id: int, key: str) - Get a property value, returns value or None
  • node_exists(node_id: int) - Check if a node exists, returns True or False

Write Operations

  • create_node(labels: list[str], properties: dict = None) - Create a node with labels and optional properties
  • delete_node(node_id: int) - Delete a node
  • set_property(node_id: int, key: str, value) - Set a property on a node
  • set_vector(node_id: int, key: str, vector: np.ndarray) - Set a vector embedding
  • create_edge(source_id: int, target_id: int, edge_type: str) - Create an edge
  • delete_edge(source_id: int, target_id: int, edge_type: str) - Delete an edge
  • commit() - Commit the transaction
  • rollback() - Roll back the transaction
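The Quick Start calls commit() explicitly inside the with block, which suggests that a transaction left uncommitted when the block exits is discarded. A toy stand-in class (hypothetical — not LatticeDB's actual implementation) sketches that commit-or-rollback pattern:

```python
# Toy stand-in for a write transaction; NOT LatticeDB's actual class.
# It illustrates the common pattern: commit explicitly, or the changes
# are rolled back when the with block exits (including on an exception).
class Txn:
    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def commit(self):
        self.committed = True

    def rollback(self):
        self.rolled_back = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if not self.committed:
            self.rollback()  # discard changes if nothing was committed
        return False  # never swallow exceptions

# Happy path: explicit commit keeps the changes.
with Txn() as t1:
    t1.commit()

# Error path: the exception reaches __exit__ before commit, so we roll back.
try:
    with Txn() as t2:
        raise ValueError("boom")
except ValueError:
    pass

print(t1.committed, t1.rolled_back)  # True False
print(t2.committed, t2.rolled_back)  # False True
```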

Querying

Basic Queries

# Simple match
result = db.query("MATCH (n:Person) RETURN n")

# Return properties
result = db.query("MATCH (n:Person) RETURN n.name")

# With WHERE clause
result = db.query("MATCH (n:Person) WHERE n.age > 25 RETURN n.name")

Data Mutation

# Create nodes and relationships
db.query("CREATE (a:Person {name: 'Alice'})-[:KNOWS]->(b:Person {name: 'Bob'})")

# Update properties
db.query("MATCH (n:Person {name: 'Alice'}) SET n.age = 31, n.city = 'NYC'")

# Add labels
db.query("MATCH (n:Person {name: 'Alice'}) SET n:Admin:Verified")

# Remove properties and labels
db.query("MATCH (n:Person {name: 'Alice'}) REMOVE n.city, n:Verified")

# Delete nodes (DETACH removes connected edges)
db.query("MATCH (n:Person {name: 'Bob'}) DETACH DELETE n")

Parameterized Queries

Use parameters to safely pass values into queries:

# String parameter
result = db.query(
    "MATCH (n:Person) WHERE n.name = $name RETURN n",
    parameters={"name": "Alice"}
)

# Integer parameter
result = db.query(
    "MATCH (n:Person) WHERE n.age = $age RETURN n.name",
    parameters={"age": 30}
)

# Multiple parameters
result = db.query(
    "MATCH (n:Person) WHERE n.name = $name AND n.age > $min_age RETURN n",
    parameters={"name": "Alice", "min_age": 20}
)

# Vector parameter (requires numpy)
import numpy as np
query_vec = np.random.rand(384).astype(np.float32)
result = db.query(
    "MATCH (n:Document) WHERE n.embedding <=> $vec < 0.5 RETURN n",
    parameters={"vec": query_vec}
)

Working with Results

result = db.query("MATCH (n:Person) RETURN n.name")

# Get column names
print(result.columns)  # ['n.name']

# Iterate rows
for row in result:
    print(row)  # {'n.name': 'Alice'}

# Get all rows as list
rows = list(result)

# Get row count
print(len(result))

Reading Node Data

with db.read() as txn:
    # Get a node by ID
    node = txn.get_node(node_id)
    if node:
        print(f"ID: {node.id}")
        print(f"Labels: {node.labels}")

    # Get individual properties
    name = txn.get_property(node_id, "name")
    age = txn.get_property(node_id, "age")

    # Returns None if property doesn't exist
    unknown = txn.get_property(node_id, "nonexistent")  # None

Vector Operations

To use vector embeddings, enable vector storage when opening the database:

import numpy as np

with Database("mydb.ltdb", create=True, enable_vector=True, vector_dimensions=384) as db:
    # Store vectors
    with db.write() as txn:
        node1 = txn.create_node(labels=["Document"])
        txn.set_property(node1.id, "title", "Introduction to ML")
        embedding1 = np.random.rand(384).astype(np.float32)
        txn.set_vector(node1.id, "embedding", embedding1)

        node2 = txn.create_node(labels=["Document"])
        txn.set_property(node2.id, "title", "Deep Learning Guide")
        embedding2 = np.random.rand(384).astype(np.float32)
        txn.set_vector(node2.id, "embedding", embedding2)

        txn.commit()

    # Search for similar vectors (HNSW approximate nearest neighbor)
    query_vector = np.random.rand(384).astype(np.float32)
    results = db.vector_search(query_vector, k=10, ef_search=64)

    for result in results:
        print(f"Node {result.node_id}: distance={result.distance:.4f}")

Vector Search Parameters

  • vector: Query vector (numpy array of float32)
  • k: Number of nearest neighbors to return (default: 10)
  • ef_search: HNSW exploration factor - higher values are slower but more accurate (default: 64)
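For intuition about what the HNSW index approximates: an exact search scores the query against every stored vector and keeps the k smallest distances. A self-contained brute-force sketch — it uses cosine distance, which is an assumption, since the metric behind `<=>` isn't stated here:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def brute_force_search(query, vectors, k=10):
    # Exact k-nearest-neighbor scan: score everything, keep the k closest.
    scored = sorted((cosine_distance(query, v), i) for i, v in enumerate(vectors))
    return [(i, d) for d, i in scored[:k]]

vectors = [
    [1.0, 0.0, 0.0],   # node 0
    [0.0, 1.0, 0.0],   # node 1
    [0.9, 0.1, 0.0],   # node 2
]
query = [1.0, 0.05, 0.0]
top2 = brute_force_search(query, vectors, k=2)
print(top2)  # node 0 is closest, node 2 second
```

HNSW replaces this O(n) scan with a graph walk; ef_search bounds how much of the graph the walk explores, which is why larger values cost more time but recover more of the exact result.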

Full-Text Search

Index text content and search with BM25 scoring:

with Database("mydb.ltdb", create=True) as db:
    # Index documents
    with db.write() as txn:
        doc1 = txn.create_node(labels=["Document"])
        txn.set_property(doc1.id, "title", "Introduction to ML")
        txn.fts_index(doc1.id, "Machine learning is a subset of artificial intelligence")

        doc2 = txn.create_node(labels=["Document"])
        txn.set_property(doc2.id, "title", "Deep Learning Guide")
        txn.fts_index(doc2.id, "Deep learning uses neural networks")

        txn.commit()

    # Search for documents
    results = db.fts_search("machine learning", limit=10)

    for result in results:
        print(f"Node {result.node_id}: score={result.score:.4f}")

FTS Search Parameters

  • query: Search query text
  • limit: Maximum number of results to return (default: 10)
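To make the scoring concrete, here is a minimal BM25 sketch in plain Python. The standard parameters k1=1.2 and b=0.75 and the whitespace tokenization are assumptions — LatticeDB's actual tokenizer and tuning are not documented here:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each document against the query with the standard BM25 formula."""
    tokenized = [doc.lower().split() for doc in docs]
    avg_len = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            # Document frequency: how many documents contain the term.
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (
                freq + k1 * (1 - b + b * len(tokens) / avg_len)
            )
        scores.append(score)
    return scores

docs = [
    "machine learning is a subset of artificial intelligence",
    "deep learning uses neural networks",
]
scores = bm25_scores("machine learning", docs)
print(scores[0] > scores[1])  # the doc matching both query terms ranks higher
```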

Supported Property Types

  • None - Null value
  • bool - Boolean
  • int - 64-bit integer
  • float - 64-bit float
  • str - UTF-8 string
  • bytes - Binary data
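Anything outside this list (e.g. lists or dicts) cannot be stored directly, so it can be handy to validate a properties dict up front. A small convenience sketch — check_properties is hypothetical, not part of the lattice API:

```python
# The property types listed above, as Python types.
SUPPORTED_TYPES = (type(None), bool, int, float, str, bytes)

def check_properties(properties: dict) -> None:
    """Raise TypeError for any value outside the supported property types."""
    for key, value in properties.items():
        if not isinstance(value, SUPPORTED_TYPES):
            raise TypeError(
                f"property {key!r} has unsupported type {type(value).__name__}"
            )

# All supported types pass silently.
check_properties({"name": "Alice", "age": 30, "active": True,
                  "score": 0.9, "blob": b"\x00", "note": None})

try:
    check_properties({"tags": ["a", "b"]})  # lists are not in the list above
except TypeError as e:
    print(e)  # property 'tags' has unsupported type list
```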

Error Handling

The library raises typed exceptions:

from lattice import LatticeError, LatticeNotFoundError, LatticeIOError

try:
    with Database("nonexistent.ltdb") as db:
        pass
except LatticeNotFoundError:
    print("Database not found")
except LatticeIOError:
    print("I/O error")
except LatticeError as e:
    print(f"Error: {e}")

Utilities

from lattice import version, library_available

# Check if the native library is available
if library_available():
    print("Library found")

# Get the native library version
print(f"Lattice version: {version()}")

Requirements

  • Python 3.9+
  • NumPy (optional, for vector operations)
  • The native LatticeDB library (liblattice.dylib / liblattice.so)

Building from Source

See CONTRIBUTING.md for build instructions.

License

Same license as LatticeDB.



Download files

Download the file for your platform.

Source Distribution

latticedb-0.2.0.tar.gz (25.5 kB)

Built Distribution

latticedb-0.2.0-py3-none-any.whl (2.8 MB)

File details

Details for the file latticedb-0.2.0.tar.gz.

File metadata

  • Download URL: latticedb-0.2.0.tar.gz
  • Size: 25.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for latticedb-0.2.0.tar.gz:

  • SHA256: f72f37ba89fb112ee23b7746709bf4972e37d3b7080512577001a41f0eba17d3
  • MD5: 0bcb55af7d96d8eb2b93d42507397500
  • BLAKE2b-256: 733b0147e759f4978da57c68777df0104b373a6fb1ccdba0cb259db38cac77ba


Provenance

The following attestation bundles were made for latticedb-0.2.0.tar.gz:

Publisher: release.yml on jeffhajewski/latticedb

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file latticedb-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: latticedb-0.2.0-py3-none-any.whl
  • Size: 2.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for latticedb-0.2.0-py3-none-any.whl:

  • SHA256: 9330fe45ab0bd19db70e4790f5642badfe1881454a4ddabdbea0bfa4f6e1c544
  • MD5: 24075bad9b71c151722349f1248a9acc
  • BLAKE2b-256: b70e765d406847f7e5e8d834ad9b465f781208b53e08540a483daa5dad6c3651


Provenance

The following attestation bundles were made for latticedb-0.2.0-py3-none-any.whl:

Publisher: release.yml on jeffhajewski/latticedb

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
