A Python library for exploring chemical space using Generative Topographic Mapping.
ChemographyKit
Mapping high-dimensional biology & chemistry into intuitive, navigable spaces with Generative Topographic Mapping (GTM).
Overview
ChemographyKit is a comprehensive Python library for exploring chemical space and high-dimensional data using Generative Topographic Mapping (GTM). GTM is a probabilistic dimensionality reduction technique that creates non-linear mappings from high-dimensional data spaces to interpretable low-dimensional latent spaces using a generative model with radial basis functions. ChemographyKit provides a PyTorch-based, GPU-capable implementation of the GTM algorithm, together with utilities for building landscapes and computing GTM-specific metrics.
[!TIP] Pair GTM maps with interactive notebooks or dashboards to let users zoom from global chemical space down to neighborhood-level structure–activity patterns.
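For orientation, the generative model behind GTM can be summarized in standard notation (after Bishop, Svensén and Williams; this is background math, not code from this library):

```latex
% A latent point x is mapped through M fixed radial basis functions
% \phi_m into data space by a learned weight matrix W:
\mathbf{y}(\mathbf{x}; \mathbf{W}) = \mathbf{W}\,\boldsymbol{\phi}(\mathbf{x})
% The data density is a constrained mixture of Gaussians centred on the
% images of the K latent grid nodes x_k, with shared inverse variance \beta:
p(\mathbf{t} \mid \mathbf{W}, \beta)
  = \frac{1}{K} \sum_{k=1}^{K}
    \mathcal{N}\!\left(\mathbf{t} \,\middle|\, \mathbf{y}(\mathbf{x}_k; \mathbf{W}),\ \beta^{-1}\mathbf{I}\right)
% The "responsibilities" used throughout this library are the posterior
% probabilities p(k \mid t) of each node k for a data point t (Bayes' rule).
```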
Key Features
- GPU-Accelerated: PyTorch-based implementation with CUDA support for fast computation
- Multiple GTM Variants:
  - VanillaGTM: Basic GTM implementation with random initialization
  - GTM: Enhanced version with PCA-based initialization for better convergence
- Comprehensive Visualization:
  - Interactive landscapes using Plotly (smooth heatmaps)
  - Static visualizations using Altair (discrete grid-based plots)
  - Support for density, classification, and regression landscapes
- Advanced Analytics:
  - Responsibility Patterns (RP) for chemical space coverage analysis
  - Classification and regression landscape analysis
Installation
Requirements
- Python ≥ 3.11
- PyTorch ≥ 2.7.1
- NumPy ≥ 2.3.2
- Pandas ≥ 2.3.2
- Scikit-learn ≥ 1.7.1
- Altair ≥ 5.5.0
- Plotly ≥ 6.3.0
- tqdm ≥ 4.67.1
- matplotlib ≥ 3.7.0 (for tutorials)
Using PDM (Recommended)
```bash
git clone https://github.com/your-username/ChemographyKit.git
cd ChemographyKit
pdm install
```
Using pip
```bash
git clone <repository-url>
cd ChemographyKit
pip install -e .
```
Or install dependencies manually:
```bash
# Quote the specifiers so the shell does not treat ">=" as a redirection
pip install "numpy>=2.3.2" "torch>=2.7.1" "pandas>=2.3.2" "altair>=5.5.0" "plotly>=6.3.0" "scikit-learn>=1.7.1" "tqdm>=4.67.1" "matplotlib>=3.7.0"
```
[!NOTE] For GPU acceleration, install a CUDA-enabled build of PyTorch appropriate for your system.
Quick Start
Basic GTM Training
```python
import torch
import numpy as np

from chemographykit.gtm import GTM

# Generate sample data
data = torch.randn(1000, 50, dtype=torch.float64)  # 1000 samples, 50 features

# Create GTM model
gtm = GTM(
    num_nodes=100,            # 10x10 grid in latent space
    num_basis_functions=25,   # 5x5 RBF centers
    basis_width=0.3,          # RBF width parameter
    reg_coeff=0.01,           # Regularization coefficient
    device="cuda",            # Use GPU if available
)

# Fit model and transform data
latent_coords = gtm.fit_transform(data)
print(f"Latent coordinates shape: {latent_coords.shape}")  # (2, 1000)

# Get responsibilities for landscape analysis
responsibilities, log_likelihoods = gtm.project(data)
print(f"Responsibilities shape: {responsibilities.shape}")  # (100, 1000)
```
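The responsibilities returned by project are posterior node probabilities: each point's responsibilities sum to 1, and its latent position is the responsibility-weighted mean of the node coordinates. A minimal pure-Python illustration of that averaging, with hypothetical toy numbers (not this library's API):

```python
# Toy example: 4 latent nodes on a 2x2 grid, one data point.
node_coords = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]

# Hypothetical posterior responsibilities of each node for the point (sum to 1).
resp = [0.1, 0.2, 0.3, 0.4]

# Mean latent position: responsibility-weighted average of node coordinates.
x = sum(r * cx for r, (cx, _) in zip(resp, node_coords))
y = sum(r * cy for r, (_, cy) in zip(resp, node_coords))
print(round(x, 6), round(y, 6))  # 0.2 0.4
```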
Complete Example: S-curve Analysis
This example demonstrates the full workflow using synthetic S-curve data (adapted from tutorials/Synthetic_data.ipynb):
```python
import os

import numpy as np
import torch
import altair as alt
from sklearn.datasets import make_s_curve

# GTM and utilities
from chemographykit.gtm import GTM
from chemographykit.utils.molecules import calculate_latent_coords
from chemographykit.utils.regression import get_reg_density_matrix, reg_density_to_table
from chemographykit.plots.altair_landscapes import (
    altair_points_chart,
    altair_discrete_regression_landscape,
)

# 1. Create S-curve dataset
rng = np.random.RandomState(0)
n_samples = 5000
s_curve_3d, s_curve_color = make_s_curve(n_samples, random_state=rng)

# Add an extra noise dimension and convert to a tensor
extra_dims = rng.randn(n_samples, 1)
X = np.hstack([s_curve_3d, extra_dims])
device = "cuda" if torch.cuda.is_available() else "cpu"
X_t = torch.tensor(X, dtype=torch.float64, device=device)

# 2. Fit GTM model
gtm = GTM(
    num_nodes=225,            # 15 x 15 grid
    num_basis_functions=100,  # 10 x 10 RBF centers
    basis_width=1.0,
    reg_coeff=1.0,
    device=device,
    standardize=False,
    pca_scale=True,
    pca_engine="torch",
    max_iter=200,
)
gtm.fit(X_t)

# 3. Transform data and create visualizations
Z = gtm.transform(X_t)  # Latent coordinates
responsibilities, _ = gtm.project(X_t)
R_np = responsibilities.detach().to("cpu").numpy()
if R_np.shape[0] != n_samples:
    R_np = R_np.T

# 4. Create regression landscape using the curve parameter as target
density, reg_density = get_reg_density_matrix(R_np, s_curve_color)
reg_table = reg_density_to_table(density, reg_density, node_threshold=0.10)

# 5. Generate visualizations
coords = calculate_latent_coords(R_np, correction=True, return_node=True)
coords["color"] = s_curve_color

# Create points and landscape charts
points_chart = altair_points_chart(
    coords.sample(min(5000, len(coords)), random_state=0),
    num_nodes=15,  # sqrt(225)
    points_size=120,
    coloring_scheme="viridis",
    coloring_column="color",
)
reg_chart = altair_discrete_regression_landscape(
    reg_table,
    title="GTM Regression Landscape",
    colorset="viridis",
)

# Combine and save
combined = alt.hconcat(points_chart, reg_chart).properties(
    title="Latent points (colored by target) — GTM Regression Landscape"
)
os.makedirs("plots", exist_ok=True)
combined.save("plots/gtm_scurve_regression.html")
print("Saved visualization to plots/gtm_scurve_regression.html")
```
Running the Tutorial
To run the complete tutorial notebook:
```bash
# Using PDM
pdm run jupyter notebook tutorials/Synthetic_data.ipynb

# Or with a pip installation
jupyter notebook tutorials/Synthetic_data.ipynb
```
The tutorial demonstrates:
- Data preparation: Creating synthetic S-curve data with additional noise dimensions
- GTM training: Fitting a 15×15 grid GTM with 10×10 RBF centers
- Visualization: Both matplotlib 3D/2D plots and interactive Altair landscapes
- Regression analysis: Using the intrinsic curve parameter as regression target
- Interactive landscapes: Generating HTML visualizations saved to plots/
Advanced Features
Responsibility Pattern (RP) Fingerprints
```python
import numpy as np

from chemographykit.metrics import resp_to_pattern, compute_rp_coverage

# responsibilities_np: per-sample responsibility vectors from gtm.project
# Convert responsibilities to RP fingerprints
rp_fingerprints = np.array([
    resp_to_pattern(resp, n_bins=10, threshold=0.01)
    for resp in responsibilities_np
])

# Calculate coverage between datasets
reference_fps = rp_fingerprints[:500]  # First 500 as reference
test_fps = rp_fingerprints[500:]       # Last 500 as test
coverage = compute_rp_coverage(reference_fps, test_fps, use_weight=True)
print(f"Weighted coverage: {coverage:.3f}")
```
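Conceptually, an RP fingerprint discretizes each node's responsibility into an integer level, and coverage measures how much of the reference pattern the test set reaches. A simplified pure-Python sketch of that idea (the real resp_to_pattern and compute_rp_coverage may differ in details):

```python
import math

def resp_to_pattern_sketch(resp, n_bins=10, threshold=0.01):
    """Bin each node responsibility into an integer level in [0, n_bins];
    values below the threshold are zeroed out."""
    return [0 if r < threshold else math.ceil(r * n_bins) for r in resp]

def coverage_sketch(ref_fp, test_fp, use_weight=True):
    """Fraction of the reference pattern that the test pattern occupies.
    With use_weight=True, bins are weighted by their integer levels."""
    if use_weight:
        total = sum(ref_fp)
        hit = sum(min(r, t) for r, t in zip(ref_fp, test_fp))
    else:
        total = sum(1 for r in ref_fp if r > 0)
        hit = sum(1 for r, t in zip(ref_fp, test_fp) if r > 0 and t > 0)
    return hit / total if total else 0.0

ref = resp_to_pattern_sketch([0.5, 0.3, 0.15, 0.005, 0.045])
test = resp_to_pattern_sketch([0.25, 0.0, 0.7, 0.05, 0.0])
print(ref, test)  # [5, 3, 2, 0, 1] [3, 0, 7, 1, 0]
print(round(coverage_sketch(ref, test), 3))  # 5/11 -> 0.455
```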
Molecular Coordinate Calculation
```python
from chemographykit.utils.molecules import calculate_latent_coords

# Calculate molecular coordinates for plotting
mol_coords = calculate_latent_coords(
    responsibilities_np,
    correction=True,   # Adjust for visualization
    return_node=True,  # Include the most responsible node
)
print(mol_coords.head())
```
Using Altair for Static Visualizations
```python
from chemographykit.plots.altair_landscapes import (
    altair_discrete_density_landscape,
    altair_discrete_class_landscape,
    altair_points_chart,
)

# density_table and mol_coords come from the density and coordinate
# utilities shown above

# Create discrete density landscape
density_chart = altair_discrete_density_landscape(
    density_table,
    title="GTM Density Map",
)

# Overlay molecular points
points_chart = altair_points_chart(
    mol_coords,
    num_nodes=100,
    points_size=50,
    points_color="red",
)

# Combine charts
combined = density_chart + points_chart
combined.show()
```
Model Configuration
GTM Parameters
- num_nodes: Number of latent space grid nodes (must be perfect square for 2D)
- num_basis_functions: Number of RBF centers (must be perfect square for 2D)
- basis_width: RBF width parameter (controls smoothness)
- reg_coeff: Regularization coefficient (prevents overfitting)
- standardize: Whether to standardize input data (recommended: True)
- max_iter: Maximum EM algorithm iterations
- tolerance: Convergence tolerance
- device: Computation device ("cpu" or "cuda")
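Because num_nodes and num_basis_functions must be perfect squares, the 2D latent grid is built from their square roots. A minimal sketch of how such a node grid can be laid out (illustrative only, not this library's internals):

```python
import math

def square_grid(num_nodes):
    """Lay out num_nodes points on a regular side x side grid over [-1, 1]^2.
    num_nodes must be a perfect square."""
    side = math.isqrt(num_nodes)
    if side * side != num_nodes:
        raise ValueError(f"num_nodes={num_nodes} is not a perfect square")
    return [(-1.0 + 2.0 * j / (side - 1), -1.0 + 2.0 * i / (side - 1))
            for i in range(side) for j in range(side)]

grid = square_grid(100)              # 10 x 10 grid, as in the Quick Start
print(len(grid), grid[0], grid[-1])  # 100 (-1.0, -1.0) (1.0, 1.0)
```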
PCA Initialization Options (GTM class)
- pca_engine: PCA implementation ("sklearn" or "torch")
- pca_scale: Scale eigenvectors by sqrt of eigenvalues
- pca_lowrank: Use low-rank PCA approximation for large datasets
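Scaling eigenvectors by the square root of their eigenvalues means each principal axis is stretched by the data's standard deviation along it, so the initial mapping spans the data's variance. A toy numerical illustration (hypothetical numbers, not the library's code):

```python
import math

# Hypothetical eigenvalues of the data covariance for the top two PCs
eigenvalues = [4.0, 1.0]
# Corresponding unit-length eigenvectors (toy 2D example)
eigenvectors = [[1.0, 0.0], [0.0, 1.0]]

# pca_scale: stretch each axis by sqrt(eigenvalue), i.e. its std dev
scaled_axes = [[math.sqrt(lam) * v for v in vec]
               for lam, vec in zip(eigenvalues, eigenvectors)]
print(scaled_axes)  # [[2.0, 0.0], [0.0, 1.0]]
```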
Architecture
Core Components
- gtm.py: Main GTM implementations (BaseGTM, VanillaGTM, GTM)
- metrics.py: RP fingerprints and coverage metrics
- utils/: Specialized analysis modules
  - classification.py: Binary/multi-class analysis
  - regression.py: Continuous property analysis
  - density.py: Density calculations and grid mapping
  - molecules.py: Molecular coordinate calculations
- plots/: Visualization modules
  - plotly_landscapes.py: Interactive smooth heatmaps
  - altair_landscapes.py: Static discrete visualizations
Key Classes
- DataStandardizer: Robust data preprocessing with NaN handling
- BaseGTM: Abstract base class defining the GTM interface
- VanillaGTM: Basic GTM with random initialization
- GTM: Enhanced GTM with PCA-based initialization
Performance Tips
- Use GPU: Set device="cuda" for a significant speedup on large datasets
- Choose an appropriate grid size: Balance resolution against computational cost
- PCA initialization: Use the GTM class instead of VanillaGTM for better convergence
- Data standardization: Always enable for numerical stability
- Batch processing: Process large datasets in chunks if memory is limited
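The batch-processing tip amounts to projecting slices of the data and concatenating the results. A hedged pure-Python sketch of the chunking pattern (gtm.project here stands in for any per-batch call):

```python
def iter_chunks(n_items, chunk_size):
    """Yield (start, stop) index pairs covering range(n_items) in order."""
    for start in range(0, n_items, chunk_size):
        yield start, min(start + chunk_size, n_items)

# e.g. 1050 samples processed in chunks of 256
chunks = list(iter_chunks(1050, 256))
print(chunks[-1])  # (1024, 1050)
```

In practice each (start, stop) slice would be passed to a call such as gtm.project(data[start:stop]) and the per-chunk responsibilities stacked afterwards.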
Testing
Run the comprehensive test suite:
```bash
pdm run pytest tests/ -v
```
Run with coverage:
```bash
pdm run pytest tests/ --cov=src/chemographykit --cov-report=html
```
Development
Contributing
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Run quality checks: pdm run pre-commit run --all-files
- Submit a pull request
Citation
If you use this code, please cite: future_url
Applications of GTM
GTM has been applied to biological datasets and studied extensively for analyzing large chemical datasets and exploring chemical space, including virtual screening, library comparison and design, de novo compound design, and multi-scale visualization.
Biological datasets
| Domain | Representative study |
|---|---|
| Genomes | Molecular Informatics (2023) |
| Proteins | Bioinformatics (2022) |
| Peptides | bioRxiv (2024) |
Chemical space & big chemical data
Virtual screening
Library comparison & design
- Molecular Informatics (2011)
- Journal of Chemical Information and Modeling (2015)
- PubMed (2019)
- Molecular Informatics (2021)
- JCIM (2023)
De novo design of chemical compounds
Multi-scale visualization of large chemical spaces
Acknowledgements
The authors thank Dr. Arkadii Lin and Dr. Yuliana Zabolotna for their contributions to the development of the initial versions of the functions for density and classification landscape building, as well as GTM-derived metric calculation.
License
This project is licensed under the MIT License — see the LICENSE file for details.
Contact
varnek@unistra.fr