
A high-performance graph database library with Python bindings written in Rust


KGLite

License: MIT

An embedded knowledge graph engine for Python.

Use it for: local analytics, ETL pipelines, notebooks, embedding in apps, fast prototyping. Not for: multi-user server deployments, cross-call transactions, HA/replication.

- Embedded, in-process: no server, no network; import and go
- In-memory: persistence via save()/load() snapshots
- Cypher subset: querying + mutations; returns dicts or a DataFrame
- Single-label nodes: each node has exactly one type
- Single-threaded: designed for single-threaded use (see Threading)

Requirements: Python 3.10+ (CPython) | macOS (ARM/Intel), Linux (x86_64/aarch64), Windows (x86_64) | pandas >= 1.5

pip install kglite

Feature Matrix

| Feature | Status |
| --- | --- |
| Embedded / in-process | Yes |
| Cypher queries (MATCH, CREATE, SET, DELETE, MERGE, ...) | Yes |
| DataFrame output (to_df=True) | Yes |
| Graph algorithms (shortest path, centrality, communities) | Yes |
| Persistence (binary snapshots) | Yes |
| Multi-label nodes | No |
| Transactions across calls | No |
| Concurrency / thread safety | No |
| Server mode | No |

Quick Start

import kglite

graph = kglite.KnowledgeGraph()

# Create nodes and relationships
graph.cypher("CREATE (:Person {name: 'Alice', age: 28, city: 'Oslo'})")
graph.cypher("CREATE (:Person {name: 'Bob', age: 35, city: 'Bergen'})")
graph.cypher("CREATE (:Person {name: 'Charlie', age: 42, city: 'Oslo'})")
graph.cypher("""
    MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'})
    CREATE (a)-[:KNOWS]->(b)
""")

# Query — returns list[dict]
result = graph.cypher("""
    MATCH (p:Person) WHERE p.age > 30
    RETURN p.name AS name, p.city AS city
    ORDER BY p.age DESC
""")
for row in result:
    print(row['name'], row['city'])

# Or get a pandas DataFrame
df = graph.cypher("MATCH (p:Person) RETURN p.name, p.age ORDER BY p.age", to_df=True)

# Mutations return stats
result = graph.cypher("CREATE (:Person {name: 'Dave', age: 22})")
print(result['stats'])  # {'nodes_created': 1, 'relationships_created': 0, ...}

# Check graph size
print(graph.graph_info())  # {'node_count': 4, 'edge_count': 1, ...}

# Persist to disk and reload
graph.save("my_graph.kgl")
loaded = kglite.load("my_graph.kgl")

Loading Data from DataFrames

For bulk loading (thousands of rows), use the fluent API:

import pandas as pd

users_df = pd.DataFrame({
    'user_id': [1001, 1002, 1003],
    'name': ['Alice', 'Bob', 'Charlie'],
    'age': [28, 35, 42]
})

graph.add_nodes(data=users_df, node_type='User', unique_id_field='user_id', node_title_field='name')

edges_df = pd.DataFrame({'source_id': [1001, 1002], 'target_id': [1002, 1003]})
graph.add_connections(data=edges_df, connection_type='KNOWS', source_type='User',
                      source_id_field='source_id', target_type='User', target_id_field='target_id')

graph.cypher("MATCH (u:User) WHERE u.age > 30 RETURN u.name, u.age")

Core Concepts

Nodes have four built-in fields: id (unique within type), title (display name), type (label), plus arbitrary properties. Each node has exactly one type — labels(n) returns a string, not a list.

Relationships connect two nodes with a type (e.g., :KNOWS) and optional properties. The Cypher API calls them "relationships"; the fluent API calls them "connections" — they're the same thing.

Return shape. Read queries return list[dict] — each dict is one row keyed by column alias. Mutation queries (CREATE, SET, DELETE, REMOVE, MERGE) return {'stats': {...}} with counts like nodes_created, properties_set, etc. Mutations with a RETURN clause return {'rows': [...], 'stats': {...}}. Pass to_df=True to get a pandas DataFrame instead (read queries only).

Atomicity. Each cypher() call is atomic at the statement level — if any clause fails, the graph remains unchanged (copy-on-write internally). There are no multi-statement transactions: two separate cypher() calls are independent. Durability only via explicit save().
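The statement-level guarantee can be sketched in plain Python. This is an illustration of the copy-on-write idea only, not kglite's internals; run_statement and the dict-based store are hypothetical stand-ins:

```python
import copy

def run_statement(store, mutate):
    """Apply `mutate` to a copy of `store`; commit only if every step succeeds.

    Mirrors the documented guarantee: if any clause fails,
    the original graph is left unchanged.
    """
    working = copy.deepcopy(store)  # mutations target a private copy
    try:
        mutate(working)
    except Exception:
        return store   # failure: caller keeps the untouched original
    return working     # success: the copy becomes the new state

store = {"nodes": [{"id": 1, "type": "Person"}]}

def bad_mutation(s):
    s["nodes"].append({"id": 2, "type": "Person"})
    raise ValueError("second clause fails")

store = run_statement(store, bad_mutation)
print(len(store["nodes"]))  # 1: the partial append was discarded
```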

Selections (fluent API only) are lightweight views — a set of node indices that flow through chained operations like type_filter().filter().traverse(). They don't copy data. Use explain() to see the pipeline.

Tombstones. DELETE leaves empty slots in the internal graph storage. After heavy deletion, check graph.graph_info()['fragmentation_ratio'] and call graph.vacuum() if it exceeds 0.3 (see Graph Maintenance).
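The figures reported by graph_info() relate directly: assuming fragmentation_ratio is node_tombstones over node_capacity (consistent with the example in Graph Maintenance), the maintenance check is a one-liner:

```python
def needs_vacuum(info, threshold=0.3):
    """Assumes fragmentation_ratio = node_tombstones / node_capacity."""
    return info["fragmentation_ratio"] > threshold

# Figures mirror the graph_info() example under Graph Maintenance
info = {"node_count": 950, "node_capacity": 1000,
        "node_tombstones": 50, "fragmentation_ratio": 0.05}
print(needs_vacuum(info))  # False (0.05 is well under the 0.3 threshold)
```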


When to Use What

| Interface | Best For | Key Benefits |
| --- | --- | --- |
| Cypher (recommended) | Ad-hoc queries, exploration, analytics, mutations | Standard syntax, declarative, familiar to Neo4j users |
| Fluent API (advanced) | Bulk loading from DataFrames, multi-step pipelines | Chainable operations, explain(), computed properties |
| Pattern matching (specialized) | Quick structural checks without full Cypher overhead | Lightweight, minimal parsing |

Start with Cypher for most tasks. Use the fluent API when bulk-loading from pandas DataFrames or building pipelines that store intermediate computed properties. Use pattern matching for simple structural queries where you don't need WHERE/RETURN clauses.


Common Recipes

Upsert with MERGE

graph.cypher("""
    MERGE (p:Person {email: 'alice@example.com'})
    ON CREATE SET p.created = '2024-01-01', p.name = 'Alice'
    ON MATCH SET p.last_seen = '2024-01-15'
""")

Top-K Nodes by Centrality

top_nodes = graph.pagerank(top_k=10)
for node in top_nodes:
    print(f"{node['title']}: {node['score']:.3f}")

2-Hop Neighborhood

graph.cypher("""
    MATCH (me:Person {name: 'Alice'})-[:KNOWS*2]-(fof:Person)
    WHERE fof <> me
    RETURN DISTINCT fof.name
""")

Export Subgraph

subgraph = (
    graph.type_filter('Person')
    .filter({'name': 'Alice'})
    .expand(hops=2)
    .to_subgraph()
)
subgraph.export('alice_network.graphml', format='graphml')

Create Index for Speed

graph.create_index('Product', 'category')

# ~3x faster on 100k+ node graphs (equality only; depends on selectivity)
result = graph.cypher("MATCH (p:Product) WHERE p.category = 'Electronics' RETURN p.name")

Parameterized Queries

graph.cypher(
    "MATCH (p:Person) WHERE p.city = $city AND p.age > $min_age RETURN p.name",
    params={'city': 'Oslo', 'min_age': 25}
)

Delete Subgraph

graph.cypher("""
    MATCH (u:User) WHERE u.status = 'inactive'
    DETACH DELETE u
""")

Aggregation with Relationship Properties

graph.cypher("""
    MATCH (p:Person)-[r:RATED]->(m:Movie)
    RETURN p.name, avg(r.score) AS avg_rating, count(m) AS movies_rated
    ORDER BY avg_rating DESC
""")

Cypher Queries

kglite implements a substantial Cypher subset; see the Supported Cypher Subset table for exact coverage.

Single-label note: Each node has exactly one type. labels(n) returns a string, not a list. SET n:OtherLabel is not supported.

result = graph.cypher("""
    MATCH (p:Person)-[:KNOWS]->(f:Person)
    WHERE p.age > 30 AND f.city = 'Oslo'
    RETURN p.name AS person, f.name AS friend, p.age AS age
    ORDER BY p.age DESC
    LIMIT 10
""")

# Read queries → list[dict]
for row in result:
    print(f"{row['person']} knows {row['friend']}")

# Pass to_df=True for a DataFrame
df = graph.cypher("MATCH (n:Person) RETURN n.name, n.age ORDER BY n.age", to_df=True)

WHERE Clause

# Comparisons: =, <>, <, >, <=, >=
graph.cypher("MATCH (n:Product) WHERE n.price >= 500 RETURN n.title, n.price")

# Boolean operators: AND, OR, NOT
graph.cypher("MATCH (n:Person) WHERE n.age > 25 AND NOT n.city = 'Oslo' RETURN n.name")

# Null checks
graph.cypher("MATCH (n:Person) WHERE n.email IS NOT NULL RETURN n.name")

# String predicates: CONTAINS, STARTS WITH, ENDS WITH
graph.cypher("MATCH (n:Person) WHERE n.name CONTAINS 'ali' RETURN n.name")

# IN lists
graph.cypher("MATCH (n:Person) WHERE n.city IN ['Oslo', 'Bergen'] RETURN n.name")

# Regex matching with =~
graph.cypher("MATCH (n:Person) WHERE n.name =~ '(?i)^ali.*' RETURN n.name")
graph.cypher("MATCH (n:Person) WHERE n.email =~ '.*@example\\.com$' RETURN n.name")
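These patterns use standard regex syntax, so they can be sanity-checked with Python's re module before being embedded in a query (the engines are not guaranteed identical, but these constructs behave the same):

```python
import re

# (?i) is an inline case-insensitive flag, as in the query above
assert re.match(r'(?i)^ali.*', 'Alice')
assert not re.match(r'(?i)^ali.*', 'Bob')

# The escaped dot matches a literal '.', not any character
assert re.search(r'.*@example\.com$', 'alice@example.com')
assert not re.search(r'.*@example\.com$', 'alice@example.org')
```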

Relationship Properties

Relationships can have properties. Access them with r.property syntax:

# Create relationships with properties
graph.cypher("""
    MATCH (p:Person {name: 'Alice'}), (m:Movie {title: 'Inception'})
    CREATE (p)-[:RATED {score: 5, comment: 'Excellent'}]->(m)
""")

# Access, filter, aggregate, sort by relationship properties
graph.cypher("MATCH (p)-[r:RATED]->(m) RETURN p.name, r.score, r.comment, type(r)")
graph.cypher("MATCH (p)-[r:RATED]->(m) WHERE r.score >= 4 RETURN p.name, m.title")
graph.cypher("MATCH (p)-[r:RATED]->(m) RETURN avg(r.score) AS avg_rating")
graph.cypher("MATCH ()-[r:RATED]->(m) RETURN m.title, r.score ORDER BY r.score DESC")

Aggregation

graph.cypher("MATCH (n:Person) RETURN n.city, count(*) AS population ORDER BY population DESC")
graph.cypher("MATCH (n:Person) RETURN avg(n.age) AS avg_age, min(n.age), max(n.age)")

# DISTINCT
graph.cypher("MATCH (n:Person) RETURN DISTINCT n.city")
graph.cypher("MATCH (n:Person) RETURN count(DISTINCT n.city) AS unique_cities")

WITH Clause

graph.cypher("""
    MATCH (p:Person)-[:KNOWS]->(f:Person)
    WITH p, count(f) AS friend_count
    WHERE friend_count > 3
    RETURN p.name, friend_count
    ORDER BY friend_count DESC
""")

OPTIONAL MATCH

Left outer join — keeps rows even when no match:

graph.cypher("""
    MATCH (p:Person)
    OPTIONAL MATCH (p)-[:KNOWS]->(f:Person)
    RETURN p.name, count(f) AS friends
""")
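The left-outer-join semantics can be mimicked in plain Python: a person with no KNOWS edges still produces a row, with a count of 0 rather than being dropped:

```python
people = ['Alice', 'Bob']
knows = [('Alice', 'Carol')]  # Bob knows nobody

rows = []
for p in people:
    friends = [t for (s, t) in knows if s == p]
    # OPTIONAL MATCH keeps the row even when there is no match;
    # count(f) over zero matches yields 0
    rows.append({'name': p, 'friends': len(friends)})

print(rows)  # [{'name': 'Alice', 'friends': 1}, {'name': 'Bob', 'friends': 0}]
```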

Built-in Functions

| Function | Description |
| --- | --- |
| toUpper(expr) | Convert to uppercase |
| toLower(expr) | Convert to lowercase |
| toString(expr) | Convert to string |
| toInteger(expr) | Convert to integer |
| toFloat(expr) | Convert to float |
| size(expr) | Length of string or list |
| type(r) | Relationship type |
| id(n) | Node ID |
| labels(n) | Node type (string, not list; single-label) |
| coalesce(a, b, ...) | First non-null argument |
| length(p) | Path hop count |
| nodes(p) | Nodes in a path |
| relationships(p) | Relationships in a path |

Arithmetic

graph.cypher("MATCH (n:Product) RETURN n.title, n.price * 1.25 AS price_with_tax")

CASE Expressions

# Generic form
graph.cypher("""
    MATCH (n:Person)
    RETURN n.name,
           CASE WHEN n.age >= 18 THEN 'adult' ELSE 'minor' END AS category
""")

# Simple form
graph.cypher("""
    MATCH (n:Person)
    RETURN n.name,
           CASE n.city WHEN 'Oslo' THEN 'capital' WHEN 'Bergen' THEN 'west coast' ELSE 'other' END AS region
""")

List Comprehensions

[x IN list WHERE predicate | expression] syntax:

# Map: double each number
graph.cypher("UNWIND [1] AS _ RETURN [x IN [1, 2, 3, 4, 5] | x * 2] AS doubled")
# [2, 4, 6, 8, 10]

# Filter only
graph.cypher("UNWIND [1] AS _ RETURN [x IN [1, 2, 3, 4, 5] WHERE x > 3] AS filtered")
# [4, 5]

# Filter + map
graph.cypher("UNWIND [1] AS _ RETURN [x IN [1, 2, 3, 4, 5] WHERE x > 3 | x * 2] AS result")
# [8, 10]

# With collect() — transform aggregated values
graph.cypher("""
    MATCH (p:Person)
    WITH collect(p.name) AS names
    RETURN [x IN names | toUpper(x)] AS upper_names
""")

Note: List comprehensions require at least one row in the pipeline. Use UNWIND [1] AS _ or a preceding MATCH/WITH to provide the row context.
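The Cypher form maps one-to-one onto Python's own comprehension syntax, which is a quick way to predict what a query will return:

```python
values = [1, 2, 3, 4, 5]

doubled  = [x * 2 for x in values]           # [x IN values | x * 2]
filtered = [x for x in values if x > 3]      # [x IN values WHERE x > 3]
combined = [x * 2 for x in values if x > 3]  # WHERE + map combined

print(doubled, filtered, combined)  # [2, 4, 6, 8, 10] [4, 5] [8, 10]
```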

Map Projections

n {.prop1, .prop2, alias: expr} syntax — select specific properties from a node:

# Select only name and age (returns a dict per row)
graph.cypher("MATCH (p:Person) RETURN p {.name, .age} AS info")
# [{'info': {'name': 'Alice', 'age': 30}}, {'info': {'name': 'Bob', 'age': 25}}]

# Mix shorthand properties with computed values
graph.cypher("""
    MATCH (p:Person)-[:WORKS_AT]->(c:Company)
    RETURN p {.name, .age, company: c.name} AS info
""")
# [{'info': {'name': 'Alice', 'age': 30, 'company': 'Acme'}}, ...]

# System properties (id, type) work too
graph.cypher("MATCH (p:Person) RETURN p {.name, .type, .id} AS info LIMIT 1")
# [{'info': {'name': 'Alice', 'type': 'Person', 'id': 1}}]

Parameters

graph.cypher(
    "MATCH (n:Person) WHERE n.age > $min_age RETURN n.name, n.age",
    params={'min_age': 25}
)

# Parameters in inline pattern properties
graph.cypher(
    "MATCH (n:Person {name: $name}) RETURN n.age",
    params={'name': 'Alice'}
)

# Parameters with DataFrame output
df = graph.cypher(
    "MATCH (n:Person) WHERE n.age > $min_age RETURN n.name, n.age ORDER BY n.age",
    params={'min_age': 20}, to_df=True
)

UNWIND

Expand a list into rows:

graph.cypher("UNWIND [1, 2, 3] AS x RETURN x, x * 2 AS doubled")

UNION

graph.cypher("""
    MATCH (n:Person) WHERE n.city = 'Oslo' RETURN n.name AS name
    UNION
    MATCH (n:Person) WHERE n.age > 30 RETURN n.name AS name
""")

Variable-Length Paths

# 1 to 3 hops
graph.cypher("MATCH (a:Person)-[:KNOWS*1..3]->(b:Person) WHERE a.name = 'Alice' RETURN b.name")

# Exact 2 hops
graph.cypher("MATCH (a:Person)-[:KNOWS*2]->(b:Person) RETURN a.name, b.name")

WHERE EXISTS

Check for subpattern existence. The outer variable (p) is bound from MATCH. Both brace { } and parenthesis (( )) syntax are supported:

# Brace syntax
graph.cypher("MATCH (p:Person) WHERE EXISTS { (p)-[:KNOWS]->(:Person) } RETURN p.name")

# Parenthesis syntax (equivalent)
graph.cypher("MATCH (p:Person) WHERE EXISTS((p)-[:KNOWS]->(:Person)) RETURN p.name")

# Negation
graph.cypher("""
    MATCH (p:Person)
    WHERE NOT EXISTS { (p)-[:PURCHASED]->(:Product) }
    RETURN p.name
""")

# With property filter in inner pattern
graph.cypher("""
    MATCH (p:Person)
    WHERE EXISTS { (p)-[:KNOWS]->(:Person {city: 'Oslo'}) }
    RETURN p.name
""")

shortestPath()

BFS shortest path between two nodes. Supports directed (->) and undirected (-) syntax:

# Directed — only follows edges in their defined direction
result = graph.cypher("""
    MATCH p = shortestPath((a:Person {name: 'Alice'})-[:KNOWS*..10]->(b:Person {name: 'Dave'}))
    RETURN length(p), nodes(p), relationships(p), a.name, b.name
""")

# Undirected — traverses edges in both directions (same as fluent API)
result = graph.cypher("""
    MATCH p = shortestPath((a:Person {name: 'Alice'})-[:KNOWS*..10]-(b:Person {name: 'Dave'}))
    RETURN length(p), nodes(p), relationships(p)
""")

# No path → empty list (not an error)

Path functions: length(p) returns hop count, nodes(p) returns node list, relationships(p) returns edge type list.
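Since shortestPath() is documented as BFS, its directed behavior matches a textbook breadth-first search over an adjacency list. A minimal sketch (not kglite's implementation); max_hops plays the role of *..10:

```python
from collections import deque

def bfs_shortest_path(adj, start, goal, max_hops=10):
    """Return the node list of a shortest directed path, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        if len(path) - 1 >= max_hops:  # *..10 caps the hop count
            continue
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # mirrors the documented "no path -> empty result"

adj = {'Alice': ['Bob'], 'Bob': ['Carol'], 'Carol': ['Dave']}
path = bfs_shortest_path(adj, 'Alice', 'Dave')
print(path, len(path) - 1)  # ['Alice', 'Bob', 'Carol', 'Dave'] 3
```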

CREATE / SET / DELETE / REMOVE / MERGE

# CREATE — returns stats
result = graph.cypher("CREATE (n:Person {name: 'Alice', age: 30, city: 'Oslo'})")
print(result['stats']['nodes_created'])  # 1

# CREATE relationship between existing nodes
graph.cypher("""
    MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'})
    CREATE (a)-[:KNOWS]->(b)
""")

# SET — update properties
result = graph.cypher("MATCH (n:Person {name: 'Bob'}) SET n.age = 26, n.city = 'Stavanger'")
print(result['stats']['properties_set'])  # 2

# DELETE — plain DELETE errors if node has relationships; DETACH removes all
graph.cypher("MATCH (n:Person {name: 'Alice'}) DETACH DELETE n")

# REMOVE — remove properties (id/type are immutable)
graph.cypher("MATCH (n:Person {name: 'Alice'}) REMOVE n.city")

# MERGE — match or create
graph.cypher("""
    MERGE (n:Person {name: 'Alice'})
    ON CREATE SET n.created = 'today'
    ON MATCH SET n.updated = 'today'
""")

Error example: DELETE on a node with relationships returns: "Cannot delete node with existing relationships. Use DETACH DELETE to remove the node and all its relationships."

Mutation Semantics

Atomicity: Each cypher() call is atomic at the statement level — if any clause fails, the graph remains unchanged (copy-on-write internally). There are no multi-statement transactions; two separate cypher() calls are independent. Durability only via explicit save().

Index maintenance: Property and composite indexes are updated automatically by all mutation operations (CREATE, SET, DELETE, REMOVE, MERGE).

DataFrame Output

df = graph.cypher("""
    MATCH (p:Person)-[:KNOWS]->(f:Person)
    WITH p, count(f) AS friends
    RETURN p.name, p.city, friends
    ORDER BY friends DESC
""", to_df=True)

EXPLAIN

Prefix any Cypher query with EXPLAIN to see the query plan without executing it:

plan = graph.cypher("""
    EXPLAIN
    MATCH (p:Person)
    OPTIONAL MATCH (p)-[:KNOWS]->(f:Person)
    WITH p, count(f) AS friends
    RETURN p.name, friends
""")
print(plan)
# Query Plan:
#   1. NodeScan (MATCH) :Person
#   2. FusedOptionalMatchAggregate (optimized OPTIONAL MATCH + count)
#   3. Projection (RETURN) [p.name, friends]
# Optimizations: optional_match_fusion=1

Returns a string (not data). Mutation queries with EXPLAIN are not executed.

Supported Cypher Subset

| Category | Supported |
| --- | --- |
| Clauses | MATCH, OPTIONAL MATCH, WHERE, RETURN, WITH, ORDER BY, SKIP, LIMIT, UNWIND, UNION/UNION ALL, CREATE, SET, DELETE, DETACH DELETE, REMOVE, MERGE, EXPLAIN |
| Patterns | Node (n:Type), relationship -[:REL]->, variable-length *1..3, undirected -[:REL]-, properties {key: val}, p = shortestPath(...) |
| WHERE | =, <>, <, >, <=, >=, =~ (regex), AND, OR, NOT, IS NULL, IS NOT NULL, IN [...], CONTAINS, STARTS WITH, ENDS WITH, EXISTS { pattern }, EXISTS(( pattern )) |
| RETURN | n.prop, r.prop, AS aliases, DISTINCT, arithmetic + - * /, map projections n {.prop1, .prop2} |
| Aggregation | count(*), count(expr), sum, avg/mean, min, max, collect, std |
| Expressions | CASE WHEN...THEN...ELSE...END, $param, [x IN list WHERE ... \| expr] |
| Functions | toUpper, toLower, toString, toInteger, toFloat, size, length, type, id, labels, coalesce, nodes(p), relationships(p) |
| Mutations | CREATE (n:Label {props}), CREATE (a)-[:TYPE]->(b), SET n.prop = expr, DELETE, DETACH DELETE, REMOVE n.prop, MERGE ... ON CREATE SET ... ON MATCH SET |
| Not supported | CALL/stored procedures, FOREACH, subqueries, SET n:Label (label mutation), REMOVE n:Label, multi-label |

Advanced API: Data Management

For most use cases, use Cypher queries. The fluent API below is for bulk operations from DataFrames or complex data pipelines.

Adding Nodes

products_df = pd.DataFrame({
    'product_id': [101, 102, 103],
    'title': ['Laptop', 'Phone', 'Tablet'],
    'price': [999.99, 699.99, 349.99],
    'stock': [45, 120, 30]
})

graph.add_nodes(
    data=products_df,
    node_type='Product',
    unique_id_field='product_id',
    node_title_field='title',
    columns=['product_id', 'title', 'price', 'stock'],
    column_types={'launch_date': 'datetime'},  # explicit type hints (see Working with Dates)
    conflict_handling='update'  # 'update' | 'replace' | 'skip' | 'preserve'
)

Property Mapping

When adding nodes, unique_id_field and node_title_field are renamed to id and title. The original column names no longer exist as properties.

| Your DataFrame Column | Stored As | Why |
| --- | --- | --- |
| unique_id_field (e.g., user_id) | id | Canonical identifier |
| node_title_field (e.g., name) | title | Display/label field |
| All other columns | Same name | Preserved as-is |

# After adding with unique_id_field='user_id', node_title_field='name':
graph.type_filter('User').filter({'user_id': 1001})  # WRONG — field was renamed
graph.type_filter('User').filter({'id': 1001})       # CORRECT

Use explain() to verify node counts at each step:

result = graph.type_filter('User').filter({'id': 1001})
print(result.explain())
# TYPE_FILTER User (1000 nodes) -> FILTER (1 nodes)

Retrieving Nodes

products = graph.type_filter('Product')
products.get_nodes()                       # all properties
products.get_properties(['price', 'stock'])  # specific properties
products.get_titles()                       # just titles

Working with Dates

graph.add_nodes(
    data=estimates_df,
    node_type='Estimate',
    unique_id_field='estimate_id',
    node_title_field='name',
    column_types={'valid_from': 'datetime', 'valid_to': 'datetime'}
)

graph.type_filter('Estimate').filter({'valid_from': {'>=': '2020-06-01'}})
graph.type_filter('Estimate').valid_at('2020-06-15')
graph.type_filter('Estimate').valid_during('2020-01-01', '2020-06-30')

Creating Connections

purchases_df = pd.DataFrame({
    'user_id': [1001, 1001, 1002],
    'product_id': [101, 103, 102],
    'date': ['2023-01-15', '2023-02-10', '2023-01-20'],
    'quantity': [1, 2, 1]
})

graph.add_connections(
    data=purchases_df,
    connection_type='PURCHASED',
    source_type='User',
    source_id_field='user_id',
    target_type='Product',
    target_id_field='product_id',
    columns=['date', 'quantity']
)

Note: source_type and target_type each refer to a single node type. To connect nodes of the same type, set both to the same value (e.g., source_type='Person', target_type='Person').

Batch Property Updates

result = graph.type_filter('Prospect').filter({'status': 'Inactive'}).update({
    'is_active': False,
    'deactivation_reason': 'status_inactive'
})

updated_graph = result['graph']
print(f"Updated {result['nodes_updated']} nodes")

Advanced API: Querying

For most queries, prefer Cypher. The fluent API below is for building reusable query chains or when you need explain() and selection-based workflows.

Filtering

graph.type_filter('Product').filter({'price': 999.99})
graph.type_filter('Product').filter({'price': {'<': 500.0}, 'stock': {'>': 50}})
graph.type_filter('Product').filter({'id': {'in': [101, 103]}})
graph.type_filter('Product').filter({'category': {'is_null': True}})

# Orphan nodes (no connections)
graph.filter_orphans(include_orphans=True)

Sorting

graph.type_filter('Product').sort('price')
graph.type_filter('Product').sort('price', ascending=False)
graph.type_filter('Product').sort([('stock', False), ('price', True)])

Traversing the Graph

alice = graph.type_filter('User').filter({'title': 'Alice'})
alice_products = alice.traverse(connection_type='PURCHASED', direction='outgoing')

# Filter and sort traversal targets
expensive = alice.traverse(
    connection_type='PURCHASED',
    filter_target={'price': {'>=': 500.0}},
    sort_target='price',
    max_nodes=10
)

# Get connection information
alice.get_connections(include_node_properties=True)

Set Operations

n3 = graph.type_filter('Prospect').filter({'geoprovince': 'N3'})
m3 = graph.type_filter('Prospect').filter({'geoprovince': 'M3'})

n3.union(m3)                    # all nodes from both (OR)
n3.intersection(m3)             # nodes in both (AND)
n3.difference(m3)               # nodes in n3 but not m3
n3.symmetric_difference(m3)     # nodes in exactly one (XOR)

Pattern Matching

For simpler pattern-based queries without full Cypher clause support:

results = graph.match_pattern(
    '(p:Play)-[:HAS_PROSPECT]->(pr:Prospect)-[:BECAME_DISCOVERY]->(d:Discovery)'
)

for match in results:
    print(f"Play: {match['p']['title']}, Discovery: {match['d']['title']}")

# With property conditions
graph.match_pattern('(u:User)-[:PURCHASED]->(p:Product {category: "Electronics"})')

# Limit results for large graphs
graph.match_pattern('(a:Person)-[:KNOWS]->(b:Person)', max_matches=100)

Graph Algorithms

Shortest Path

result = graph.shortest_path(source_type='Person', source_id=1, target_type='Person', target_id=100)
if result:
    for node in result["path"]:
        print(f"{node['type']}: {node['title']}")
    print(f"Connections: {result['connections']}")
    print(f"Path length: {result['length']}")

All Paths

paths = graph.all_paths(
    source_type='Play', source_id=1,
    target_type='Wellbore', target_id=100,
    max_hops=4,
    max_results=100  # Prevent OOM on dense graphs
)

Connected Components

components = graph.connected_components()
# Returns list of lists: [[node_indices...], [node_indices...], ...]
print(f"Found {len(components)} connected components")
print(f"Largest component: {len(components[0])} nodes")

graph.are_connected(source_type='Person', source_id=1, target_type='Person', target_id=100)

Centrality Algorithms

All centrality methods return a list of dicts with type, title, id, and score keys, sorted by score descending.

graph.betweenness_centrality(top_k=10)
graph.betweenness_centrality(normalized=True, sample_size=500)
graph.pagerank(top_k=10, damping_factor=0.85)
graph.degree_centrality(top_k=10)
graph.closeness_centrality(top_k=10)

Community Detection

Identify clusters of densely connected nodes.

# Louvain modularity optimization (recommended)
result = graph.louvain_communities()
# {'communities': {0: [{title, type, id}, ...], 1: [...]},
#  'modularity': 0.45, 'num_communities': 2}

for comm_id, members in result['communities'].items():
    names = [m['title'] for m in members]
    print(f"Community {comm_id}: {names}")

# With edge weights and resolution tuning
result = graph.louvain_communities(weight_property='strength', resolution=1.5)

# Label propagation (faster, less precise)
result = graph.label_propagation(max_iterations=100)

Node Degrees

degrees = graph.type_filter('Person').get_degrees()
# Returns: {'Alice': 5, 'Bob': 3, ...}

Spatial Operations

Bounding Box

graph.type_filter('Discovery').within_bounds(
    lat_field='latitude', lon_field='longitude',
    min_lat=58.0, max_lat=62.0, min_lon=1.0, max_lon=5.0
)

Distance Queries (Haversine)

graph.type_filter('Wellbore').near_point_km(
    center_lat=60.5, center_lon=3.2, max_distance_km=50.0,
    lat_field='latitude', lon_field='longitude'
)
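near_point_km is documented as Haversine; the standard form of that great-circle computation looks like this (a reference sketch, not kglite's code):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

print(round(haversine_km(60.5, 3.2, 60.5, 3.2)))  # 0 (same point)
```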

WKT Geometry Intersection

graph.type_filter('Field').intersects_geometry(
    'POLYGON((1 58, 5 58, 5 62, 1 62, 1 58))',
    geometry_field='wkt_geometry'
)

Point-in-Polygon

graph.type_filter('Block').contains_point(lat=60.5, lon=3.2, geometry_field='wkt_geometry')

Analytics

Statistics

price_stats = graph.type_filter('Product').statistics('price')
unique_cats = graph.type_filter('Product').unique_values(property='category', max_length=10)

Calculations

graph.type_filter('Product').calculate(expression='price * 1.1', store_as='price_with_tax')

graph.type_filter('User').traverse('PURCHASED').calculate(
    expression='sum(price * quantity)', store_as='total_spent'
)

graph.type_filter('User').traverse('PURCHASED').count(store_as='product_count', group_by_parent=True)

Connection Aggregation

graph.type_filter('Discovery').traverse('EXTENDS_INTO').calculate(
    expression='sum(share_pct)',
    aggregate_connections=True
)

Supported: sum, avg/mean, min, max, count, std.


Schema and Indexes

Schema Definition

graph.define_schema({
    'nodes': {
        'Prospect': {
            'required': ['npdid_prospect', 'prospect_name'],
            'optional': ['prospect_status'],
            'types': {'npdid_prospect': 'integer', 'prospect_name': 'string'}
        }
    },
    'connections': {
        'HAS_ESTIMATE': {'source': 'Prospect', 'target': 'ProspectEstimate'}
    }
})

errors = graph.validate_schema()
schema = graph.get_schema()

Indexes

Indexes accelerate equality lookups only (WHERE n.prop = value). Range conditions (<, >, <=, >=) always scan.

graph.create_index('Prospect', 'prospect_geoprovince')
graph.create_composite_index('Person', ['city', 'age'])

graph.list_indexes()
graph.drop_index('Prospect', 'prospect_geoprovince')

Indexes are maintained automatically by all mutation operations.
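Why equality only: a property index is conceptually a hash map from value to node indices, so an exact lookup is a single probe, while a range predicate cannot use unordered hash keys and falls back to a scan. A toy sketch of the idea:

```python
from collections import defaultdict

nodes = [{'idx': i, 'category': c}
         for i, c in enumerate(['Electronics', 'Books', 'Electronics', 'Toys'])]

# Build: value -> list of node indices (what create_index conceptually does)
index = defaultdict(list)
for n in nodes:
    index[n['category']].append(n['idx'])

# Equality: one hash lookup instead of scanning every node
print(index['Electronics'])  # [0, 2]

# A range predicate (e.g. price < 500) cannot use the hash index:
# its keys are unordered, so the engine scans instead
```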


Import and Export

Saving and Loading

graph.save("my_graph.kgl")
loaded_graph = kglite.load("my_graph.kgl")

Portability: Save files use bincode serialization and are not guaranteed portable across OS, CPU architecture, or library versions. Always re-export via a portable format (GraphML, CSV) when sharing across machines. Each file includes a format version and the library version that wrote it — check with graph_info()['format_version'] and graph_info()['library_version'] after loading. If the internal data structures change between releases, loading will fail with a clear version mismatch error rather than silent corruption.
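A defensive pattern after loading, using the documented graph_info() keys (the expected format value here is purely illustrative):

```python
def check_snapshot(info, expected_format=None):
    """Raise if a loaded snapshot's format version isn't what we expect."""
    if expected_format is not None and info['format_version'] != expected_format:
        raise RuntimeError(
            f"snapshot format {info['format_version']} "
            f"(written by kglite {info['library_version']}), "
            f"expected {expected_format}"
        )
    return info

# Illustrative values; in practice pass graph.graph_info() after kglite.load()
info = {'format_version': 1, 'library_version': '0.5.2'}
check_snapshot(info, expected_format=1)  # passes silently
```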

Export Formats

graph.export('my_graph.graphml', format='graphml')  # Gephi, yEd
graph.export('my_graph.gexf', format='gexf')        # Gephi native
graph.export('my_graph.json', format='d3')           # D3.js
graph.export('my_graph.csv', format='csv')           # creates _nodes.csv + _edges.csv

graphml_string = graph.export_string(format='graphml')

Subgraph Extraction

subgraph = (
    graph.type_filter('Company')
    .filter({'title': 'Acme Corp'})
    .expand(hops=2)
    .to_subgraph()
)
subgraph.export('acme_network.graphml', format='graphml')

Performance

Tips

  1. Batch operations — add nodes/connections in batches, not individually
  2. Specify columns — only include columns you need to reduce memory
  3. Filter by type first — type_filter() before filter() for narrower scans
  4. Create indexes — on frequently filtered equality conditions (~3x on 100k+ nodes; depends on selectivity)
  5. Use lightweight methods — node_count(), indices(), get_node_by_id() skip property materialization
  6. Cypher LIMIT — use LIMIT to avoid scanning entire result sets

Lightweight Methods

| Method | Returns | Speed |
| --- | --- | --- |
| node_count() | Integer count (total graph if no filter, selection count after filter) | Fastest |
| indices() | List of node indices | Fast |
| id_values() | List of ID values | Fast |
| get_ids() | List of {id, title, type} dicts | Medium |
| get_nodes() | List of full node dicts | Slowest |

Lightweight path methods: shortest_path_length(), shortest_path_indices(), shortest_path_ids().

Performance Model

kglite is optimized for knowledge graph workloads — complex multi-step queries on heterogeneous, property-rich graphs. Operations have overhead compared to raw graph algorithms because they build selections, materialize Python dicts, and support the full query API.

Speed claims caveat: The "~3x" index speedup was measured on equality-filtered queries over 100k+ node graphs. Actual improvement depends on graph size, selectivity, and property cardinality. On small graphs (<1k nodes) the overhead of index lookup may not be noticeable. Always benchmark on your own data.

Threading

Designed for single-threaded use. The Rust code does not release the Python GIL during operations. If you share a graph instance across threads, guard access with your own lock.
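If sharing is unavoidable, wrap every call in a single lock. A sketch; safe_cypher is a hypothetical helper, and graph is any shared KnowledgeGraph instance:

```python
import threading

graph_lock = threading.Lock()

def safe_cypher(graph, query, **kwargs):
    """Serialize all access to a shared graph instance through one lock."""
    with graph_lock:
        return graph.cypher(query, **kwargs)
```

Every thread must go through the same helper (and the same lock) for the guard to mean anything.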


Graph Maintenance

After heavy mutation workloads (DELETE, REMOVE), the internal graph storage accumulates tombstones. Use graph_info() to monitor storage health.

Diagnostics

info = graph.graph_info()
# {'node_count': 950, 'node_capacity': 1000, 'node_tombstones': 50,
#  'edge_count': 2800, 'fragmentation_ratio': 0.05,
#  'type_count': 3, 'property_index_count': 2, 'composite_index_count': 0}

Vacuum — Compact Storage

if info['fragmentation_ratio'] > 0.3:
    result = graph.vacuum()
    print(f"Reclaimed {result['tombstones_removed']} slots, remapped {result['nodes_remapped']} nodes")

vacuum() rebuilds the graph with contiguous indices and rebuilds all indexes. Resets the current selection — call between query chains.

Reindex — Rebuild Indexes

graph.reindex()

Recovery tool, not routine maintenance. Indexes are maintained automatically by all mutations. Use reindex() only if you suspect corruption (e.g., after a crash during save()).

Recommended Workflow

info = graph.graph_info()
if info['fragmentation_ratio'] > 0.3:
    graph.vacuum()

Operation Reports

Operations that modify the graph return detailed reports:

report = graph.add_nodes(data=df, node_type='Product', unique_id_field='product_id')
print(f"Created {report['nodes_created']} nodes in {report['processing_time_ms']}ms")

if report['has_errors']:
    print(f"Errors: {report['errors']}")

Node report fields: operation, timestamp, nodes_created, nodes_updated, nodes_skipped, processing_time_ms, has_errors, errors.

Connection report fields: connections_created, connections_skipped, property_fields_tracked.

graph.get_last_report()
graph.get_operation_index()
graph.get_report_history()

Project details


Release history Release notifications | RSS feed

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

Wheels for kglite 0.5.2 are published for CPython 3.10, 3.11, 3.12, and 3.13, each on four platforms:

  • Windows x86-64 (win_amd64, ~1.7 MB)
  • Linux x86-64 (manylinux, glibc 2.35+, ~2.0 MB)
  • macOS 11.0+ ARM64 (~1.8 MB)
  • macOS 10.12+ x86-64 (~1.9 MB)
