Project description

NCUT

🌐Documentation | 🤗HuggingFace Demo

NCUT: Nyström Normalized Cut

Normalized Cut, also known as spectral clustering, is a graph-based method for analyzing data grouping in the eigenvector space of the affinity matrix. It was widely used for unsupervised segmentation in the 2000s.

Nyström Normalized Cut is a new approximation algorithm developed for large-scale graph cuts: a large graph of one million nodes can be processed in under 10 seconds (CPU) or 2 seconds (GPU).
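
For reference, the classic Normalized Cut of Shi and Malik (2000) relaxes the graph-cut objective into a generalized eigenproblem on the affinity matrix W with degree matrix D:

(D - W) \, y = \lambda \, D \, y

The Nyström method estimates these eigenvectors from a sampled subset of nodes, so the full affinity matrix never has to be materialized.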


Installation & Quick Start

Install from PyPI. The package is built on PyTorch; please install PyTorch first:

pip install ncut-pytorch
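
For example, a default PyTorch build can also be installed from PyPI (check pytorch.org for the command matching your platform and CUDA version):

pip install torch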

A minimal example of how to run NCUT:

import torch
from ncut_pytorch import NCUT, rgb_from_tsne_3d

model_features = torch.rand(20, 64, 64, 768)  # (B, H, W, C)

inp = model_features.reshape(-1, 768)  # flatten
eigvectors, eigvalues = NCUT(num_eig=100, device='cuda:0').fit_transform(inp)
tsne_x3d, tsne_rgb = rgb_from_tsne_3d(eigvectors, device='cuda:0')

eigvectors = eigvectors.reshape(20, 64, 64, 100)  # (B, H, W, num_eig)
tsne_rgb = tsne_rgb.reshape(20, 64, 64, 3)  # (B, H, W, 3)
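
To eyeball the result, here is a minimal sketch for plotting the color map (assuming matplotlib is installed and that tsne_rgb is a CPU tensor with values in [0, 1]):

import matplotlib.pyplot as plt

# Show the NCUT + t-SNE pseudo-color map of the first image in the batch.
plt.imshow(tsne_rgb[0].cpu().numpy())
plt.axis('off')
plt.show()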

We have implemented several backbone models that can be used as feature extractors. Here is the list of available models:

from ncut_pytorch.backbone import list_models
print(list_models())
[
  'SAM2(sam2_hiera_t)', 'SAM2(sam2_hiera_s)', 'SAM2(sam2_hiera_b+)', 'SAM2(sam2_hiera_l)', 
  'SAM(sam_vit_b)', 'SAM(sam_vit_l)', 'SAM(sam_vit_h)', 'MobileSAM(TinyViT)', 
  'DiNOv2reg(dinov2_vits14_reg)', 'DiNOv2reg(dinov2_vitb14_reg)', 'DiNOv2reg(dinov2_vitl14_reg)', 'DiNOv2reg(dinov2_vitg14_reg)', 
  'DiNOv2(dinov2_vits14)', 'DiNOv2(dinov2_vitb14)', 'DiNOv2(dinov2_vitl14)', 'DiNOv2(dinov2_vitg14)', 
  'DiNO(dino_vits8_896)', 'DiNO(dino_vitb8_896)', 'DiNO(dino_vits8_672)', 'DiNO(dino_vitb8_672)', 'DiNO(dino_vits8_448)', 'DiNO(dino_vitb8_448)', 'DiNO(dino_vits16_448)', 'DiNO(dino_vitb16_448)',
  'CLIP(ViT-B-16/openai)', 'CLIP(ViT-L-14/openai)', 'CLIP(ViT-H-14/openai)', 'CLIP(ViT-B-16/laion2b_s34b_b88k)', 
  'CLIP(convnext_base_w_320/laion_aesthetic_s13b_b82k)', 'CLIP(convnext_large_d_320/laion2b_s29b_b131k_ft_soup)', 'CLIP(convnext_xxlarge/laion2b_s34b_b82k_augreg_soup)', 
  'CLIP(eva02_base_patch14_448/mim_in22k_ft_in1k)', 'CLIP(eva02_large_patch14_448/mim_m38m_ft_in22k_in1k)',
  'MAE(vit_base)', 'MAE(vit_large)', 'MAE(vit_huge)', 
  'ImageNet(vit_base)'
]

An example that runs with a real backbone model:

import torch
from ncut_pytorch import NCUT, rgb_from_tsne_3d
from ncut_pytorch.backbone import load_model, extract_features

model = load_model(model_name="SAM(sam_vit_b)")
images = torch.rand(20, 3, 1024, 1024)
model_features = extract_features(images, model, node_type='attn', layer=6)
# model_features = model(images)['attn'][6]  # this also works

inp = model_features.reshape(-1, 768)  # flatten
eigvectors, eigvalues = NCUT(num_eig=100, device='cuda:0').fit_transform(inp)
tsne_x3d, tsne_rgb = rgb_from_tsne_3d(eigvectors, device='cuda:0')

eigvectors = eigvectors.reshape(20, 64, 64, 100)  # (B, H, W, num_eig)
tsne_rgb = tsne_rgb.reshape(20, 64, 64, 3)  # (B, H, W, 3)
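
A common downstream step (not part of ncut_pytorch; sketched here with scikit-learn's KMeans) is to turn the eigenvectors into a discrete segmentation by clustering them, as in classic spectral clustering:

from sklearn.cluster import KMeans

# Cluster the NCUT eigenvectors; each patch position gets one cluster id.
flat = eigvectors.reshape(-1, 100).cpu().numpy()  # (B*H*W, num_eig)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(flat)
segmentation = labels.reshape(20, 64, 64)  # (B, H, W) cluster ids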

A text model example:

from ncut_pytorch import NCUT, rgb_from_tsne_3d
from ncut_pytorch.backbone_text import load_model

llama = load_model("meta-llama/Meta-Llama-3.1-8B").cuda()
output_dict = llama("The quick white fox jumps over the lazy cat.")

model_features = output_dict['block'][31].squeeze(0)  # 32nd block output
token_texts = output_dict['token_texts']
eigvectors, eigvalues = NCUT(num_eig=5, device='cuda:0').fit_transform(model_features)
tsne_x3d, tsne_rgb = rgb_from_tsne_3d(eigvectors, device='cuda:0')
# eigvectors.shape[0] == tsne_rgb.shape[0] == len(token_texts)
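
A minimal sketch (not part of the package) to pair each token with its NCUT/t-SNE color for quick inspection, assuming tsne_rgb is a CPU tensor with one RGB row per token:

# Print each token next to its 3D color coordinate.
for token, rgb in zip(token_texts, tsne_rgb.tolist()):
    print(f"{token!r:>12}  RGB = ({rgb[0]:.2f}, {rgb[1]:.2f}, {rgb[2]:.2f})")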

Paper in preparation, Yang, 2024

AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space, Huzheng Yang, James Gee*, Jianbo Shi*, 2024

Normalized Cuts and Image Segmentation, Jianbo Shi and Jitendra Malik, 2000

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ncut_pytorch-1.2.6.tar.gz (21.6 kB)

Uploaded Source

Built Distribution

ncut_pytorch-1.2.6-py3-none-any.whl (21.1 kB)

Uploaded Python 3

File details

Details for the file ncut_pytorch-1.2.6.tar.gz.

File metadata

  • Download URL: ncut_pytorch-1.2.6.tar.gz
  • Upload date:
  • Size: 21.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.16

File hashes

Hashes for ncut_pytorch-1.2.6.tar.gz

  • SHA256: c537695ba5b57e5ad2bd6a79f69bbad51bc827c213280b3d7b32bfe4b6532b81
  • MD5: 138b854599a8da0451ac2d491212d6a6
  • BLAKE2b-256: 6e9b4131907d1d96c0b5986b8ff0622bfd1d938589fd3ffec92070ad50617b75
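
As a minimal sketch (not part of the PyPI page), the downloaded source distribution can be checked against the SHA256 digest above using only the standard library:

import hashlib

expected = "c537695ba5b57e5ad2bd6a79f69bbad51bc827c213280b3d7b32bfe4b6532b81"
with open("ncut_pytorch-1.2.6.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")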

File details

Details for the file ncut_pytorch-1.2.6-py3-none-any.whl.

File metadata

  • Download URL: ncut_pytorch-1.2.6-py3-none-any.whl
  • Upload date:
  • Size: 21.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.16

File hashes

Hashes for ncut_pytorch-1.2.6-py3-none-any.whl

  • SHA256: e0ccb773b4844ab2a73b08394c2120ccdef0ad63d2592ab088d259874e91f22b
  • MD5: 6b5f07dbffc1443191225763f24a0a72
  • BLAKE2b-256: f1e6169f14daf017d333e631dc25538b54ecc20e64c1a28a1437f1512afb77c2
