Project description

This is a Python implementation of the neuronal typing algorithm described in:

Gregory Schwartzman, Ben Jourdan, David García-Soriano, Arie Matsliah. NTAC: Neuronal Type Assignment from Connectivity. Nature Communications 17, 1284 (2026).

Neuronal Type Assignment from Connectivity

NTAC (Neuronal Type Assignment from Connectivity) groups neurons into cell types based solely on synaptic connectivity. It comes in two variants:

  • Seeded (semi-supervised): Requires a small fraction of neurons with known labels.
  • Unseeded (unsupervised): Requires no labels.

The seeded implementation also supports an optional nonnegative node-feature block. When provided, the feature block is appended to the structural NTAC embedding and therefore influences every seeded iteration through the same weighted-Jaccard similarity used by baseline NTAC.
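To make the mechanism concrete, here is a minimal sketch of weighted Jaccard similarity and of how appending a nonnegative feature block lets features enter the same comparison. The embedding and feature values below are hypothetical illustrations, not taken from the library.

```python
import numpy as np

def weighted_jaccard(u, v):
    # Weighted Jaccard similarity between two nonnegative vectors:
    # sum of elementwise minima over sum of elementwise maxima.
    num = np.minimum(u, v).sum()
    den = np.maximum(u, v).sum()
    return num / den if den > 0 else 0.0

# Hypothetical structural embedding rows for two nodes.
emb_u = np.array([2.0, 0.0, 1.0])
emb_v = np.array([1.0, 1.0, 1.0])

# Appending a nonnegative feature block means the features are compared
# through the exact same similarity as the structural part.
feat_u = np.array([0.5, 0.0])
feat_v = np.array([0.4, 0.1])
sim = weighted_jaccard(np.concatenate([emb_u, feat_u]),
                       np.concatenate([emb_v, feat_v]))
```

Because minima and maxima are taken coordinate-wise, the relative scale of the feature block against the structural embedding directly controls how much the features influence the similarity.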

Installation:

Install NTAC with pip: pip install ntac

For optional GPU acceleration of the unseeded variant, install cudatoolkit with conda: conda install -c conda-forge cudatoolkit

Quickstart with NTAC:

import numpy as np
import scipy.sparse as sp
from ntac import Ntac, sbm, GraphData

# Generate an adjacency matrix and labels from an SBM graph 
A, labels = sbm(n=1000, k=4)
# NTAC requires a CSR array and labels as a string array
A_csr = sp.csr_array(A)
labels = np.array([str(l) for l in labels])
# Ntac accepts a CSR matrix and labels directly, but the GraphData
# class makes train/test splitting and metric computation easier
data = GraphData(A_csr, labels=labels)

############################################
# Seeded NTAC
# Use only 10% of the labels for training
train_indices, test_indices = data.test_train_split(train_size=0.1)
labels[test_indices] = "?"  # "?" marks a node as unlabeled
# Initialize NTAC with the data and labels
nt = Ntac(data=data, labels=labels)
# nt = Ntac(data=A_csr, labels=labels)  # to use the CSR matrix directly
for i in range(5):
    print(f"Step {i}")
    nt.step()
    partition = nt.get_partition()
    # partition = nt.get_topk_partition(5)  # top 5 candidate labels per node
    metrics = data.get_metrics(partition, test_indices, data.labels)
    # get_metrics also reports ARI, weighted F1, and top-k accuracy
    # (the latter when using get_topk_partition)
    print(f"Accuracy: {metrics['acc']:.3f}")

############################################
# Unseeded NTAC
print("Unseeded NTAC")
# This ignores the labels, even when provided
nt.solve_unseeded(max_k=4)
# Use the Hungarian algorithm to map the partition to the ground-truth labels
nt.map_partition_to_gt_labels(data.labels)
partition = nt.get_partition()  # unseeded does not support top-k partitions
metrics = data.get_metrics(partition, range(data.n), data.labels)
print(f"Accuracy: {metrics['acc']:.3f} ARI: {metrics['ari']:.3f} Weighted F1: {metrics['f1']:.3f}")
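The map_partition_to_gt_labels step above relies on the Hungarian algorithm. As a rough sketch of the idea (not NTAC's internal code; the helper name here is hypothetical), one can build a confusion matrix between predicted clusters and ground-truth labels and find the assignment maximizing total overlap with scipy's linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_clusters_to_labels(pred, gt):
    # Build a confusion matrix between predicted clusters and ground-truth
    # labels, then pick the one-to-one assignment maximizing total overlap.
    clusters = np.unique(pred)
    labels = np.unique(gt)
    C = np.zeros((len(clusters), len(labels)), dtype=int)
    for i, c in enumerate(clusters):
        for j, l in enumerate(labels):
            C[i, j] = np.sum((pred == c) & (gt == l))
    row, col = linear_sum_assignment(-C)  # negate to maximize overlap
    mapping = {clusters[r]: labels[c] for r, c in zip(row, col)}
    return np.array([mapping[c] for c in pred])

pred = np.array([0, 0, 1, 1, 2, 2])
gt = np.array(["A", "A", "B", "B", "C", "C"])
mapped = map_clusters_to_labels(pred, gt)
```

Only after this relabeling do accuracy-style metrics against the ground truth become meaningful for the unseeded variant, since its cluster IDs are arbitrary.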

Seeded NTAC with external node features:

import numpy as np
import scipy.sparse as sp
from ntac import Ntac

A = sp.csr_array(
    [
        [0.0, 1.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0],
    ]
)
labels = np.array(["A", "B", "?"], dtype=object)

# Features must be nonnegative because seeded NTAC still uses weighted Jaccard.
node_features = np.array(
    [
        [1.0, 0.0],
        [0.0, 1.0],
        [0.8, 0.2],
    ],
    dtype=float,
)

nt = Ntac(
    data=A,
    labels=labels,
    node_features=node_features,
    feature_weight=1.0,
)
nt.step()
print(nt.get_partition())

In practice, it is usually worth preprocessing external features before passing them to NTAC so they are nonnegative and on a scale comparable to the structural embedding.
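One simple way to do this preprocessing is a per-column shift-and-scale to [0, 1], which guarantees nonnegativity and keeps any single feature from dominating. This is a minimal sketch, not an NTAC utility; the helper name is illustrative.

```python
import numpy as np

def preprocess_features(X):
    # Shift each column so all values are nonnegative (weighted Jaccard
    # requires this), then scale each column to [0, 1].
    X = np.asarray(X, dtype=float)
    X = X - X.min(axis=0, keepdims=True)   # make every column nonnegative
    col_max = X.max(axis=0, keepdims=True)
    col_max[col_max == 0] = 1.0            # guard against constant columns
    return X / col_max
```

After this, a feature_weight argument (as in the example above) can be used to tune how strongly the feature block contributes relative to the structural embedding.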

Documentation & Examples using the Flywire dataset:

Link to Docs and further examples

