Transformer Library for 3D Mesh Processing
polymesh-ai Library Usage Guide
A complete guide to using polymesh-ai, a transformer library for 3D mesh processing.
📦 Installation
Install from PyPI (Recommended)
# Basic installation
pip install polymesh-ai
# Full installation with all features
pip install polymesh-ai[full]
# Development installation
pip install polymesh-ai[dev]
Install from Source
git clone https://github.com/MatN23/polymesh-ai.git
cd polymesh-ai
pip install -e .
Installation Options
- Basic: pip install polymesh-ai (core functionality only)
- Full: pip install polymesh-ai[full] (adds wandb, matplotlib, scipy, scikit-learn)
- Dev: pip install polymesh-ai[dev] (development tools: pytest, black, flake8, jupyter)
- Docs: pip install polymesh-ai[docs] (documentation tools: sphinx, themes)
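To verify the install, note that the import name uses an underscore (polymesh_ai) even though the PyPI package name uses a hyphen:
import polymesh_ai
# Importing successfully is the real check; __version__ is a common
# convention but may not be exposed by every release.
print(getattr(polymesh_ai, '__version__', 'unknown'))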
🚀 Quick Start
1. Basic Mesh Classification
import numpy as np
import torch
from polymesh_ai import VertexTokenizer, MeshTransformer

# Create sample mesh data (you'll need actual mesh data)
# This is just for demonstration - replace with your mesh loading code
class SimpleVertex:
    def __init__(self, position, normal=None):
        self.position = np.array(position, dtype=np.float32)
        self.normal = np.array(normal, dtype=np.float32) if normal is not None else None

class SimpleMesh:
    def __init__(self):
        # Create a simple triangle mesh
        self.vertices = [
            SimpleVertex([0.0, 0.0, 0.0], [0, 0, 1]),
            SimpleVertex([1.0, 0.0, 0.0], [0, 0, 1]),
            SimpleVertex([0.5, 1.0, 0.0], [0, 0, 1]),
        ]

# Load or create your mesh
mesh = SimpleMesh()
# Initialize tokenizer
tokenizer = VertexTokenizer(
include_normals=True,
include_colors=False
)
# Create model
model = MeshTransformer(
feature_dim=6, # 3 position + 3 normal
d_model=256,
nhead=8,
num_layers=6
)
# Tokenize and process
tokens = tokenizer.tokenize(mesh)
print(f"Generated {len(tokens)} tokens")
# Run classification
with torch.no_grad():
    output = model(tokens, task='classification')
print(f"Classification output shape: {output.shape}")
2. Advanced Model with Adaptive Attention
from polymesh_ai import AdaptiveMeshTransformer
# Create adaptive model that switches attention mechanisms
model = AdaptiveMeshTransformer(
d_model=512,
nhead=8,
num_layers=6,
dim_feedforward=2048,
dropout=0.1
)
# Prepare tensor inputs for adaptive model
batch_size, seq_len = 1, len(tokens)
features = torch.tensor([token.features for token in tokens]).unsqueeze(0)
positions = torch.tensor([token.position for token in tokens]).unsqueeze(0)
# Forward pass
output = model(features, positions)
print(f"Adaptive model output shape: {output.shape}")
🎯 Core Components
Tokenization Strategies
Vertex Tokenizer
from polymesh_ai import VertexTokenizer
# Basic vertex tokenization
tokenizer = VertexTokenizer(
include_normals=True,
include_colors=True,
quantize_positions=False
)
tokens = tokenizer.tokenize(mesh)
Face Tokenizer
from polymesh_ai import FaceTokenizer
# Face-based tokenization
face_tokenizer = FaceTokenizer(
max_face_vertices=4,
include_face_normal=True
)
face_tokens = face_tokenizer.tokenize(mesh)
Patch Tokenizer
from polymesh_ai import PatchTokenizer
# Patch-based hierarchical tokenization
patch_tokenizer = PatchTokenizer(
patch_size=16,
overlap=4,
feature_aggregation='mean'
)
patch_tokens = patch_tokenizer.tokenize(mesh)
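All three tokenizers expose the same tokenize(mesh) entry point, so comparing their sequence lengths on one mesh is straightforward (assuming, as in the quick start, that the token sequence supports len()):
for name, tok in [('vertex', tokenizer), ('face', face_tokenizer), ('patch', patch_tokenizer)]:
    # Shorter sequences mean cheaper attention; patches trade detail for length
    print(f"{name}: {len(tok.tokenize(mesh))} tokens")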
Attention Mechanisms
Geometric Attention
from polymesh_ai import GeometricAttention
# Distance-aware attention
geo_attention = GeometricAttention(
d_model=256,
nhead=8,
max_distance=5.0,
distance_bins=32
)
Graph Attention
from polymesh_ai import GraphAttention
# Graph-based attention for mesh connectivity
graph_attention = GraphAttention(
d_model=256,
nhead=8,
edge_dim=16
)
Multi-Scale Attention
from polymesh_ai import MultiScaleAttention
# Hierarchical multi-scale processing
multiscale_attention = MultiScaleAttention(
d_model=256,
scales=[1, 2, 4, 8],
nhead=8
)
Sparse Attention
from polymesh_ai import SparseAttention
# Efficient attention for large meshes
sparse_attention = SparseAttention(
d_model=256,
nhead=8,
neighborhood_size=16,
sparse_pattern='spatial'
)
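These attention modules are torch.nn.Module subclasses, so a forward pass follows the usual PyTorch pattern. The (features, positions) call signature below is an assumption borrowed from the AdaptiveMeshTransformer example above, not a confirmed API; check each module's docstring for the exact interface:
# Hypothetical forward pass - the (features, positions) signature is an assumption
x = torch.randn(1, 128, 256)   # (batch, seq_len, d_model)
pos = torch.randn(1, 128, 3)   # 3D vertex positions
out = geo_attention(x, pos)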
🏋️ Training Pipeline
Complete Training Setup
from polymesh_ai import (
MeshTransformerTrainingPipeline,
MeshTransformerDataset,
MeshAugmentation
)
# Training configuration
config = {
'model_type': 'adaptive',
'tokenizer_type': 'vertex',
'd_model': 256,
'nhead': 8,
'num_layers': 6,
'learning_rate': 1e-4,
'batch_size': 16,
'max_epochs': 100,
'early_stopping_patience': 15,
'device': 'cuda' if torch.cuda.is_available() else 'cpu',
'use_wandb': False, # Set True for experiment tracking
'task_type': 'classification'
}
# Initialize training pipeline
pipeline = MeshTransformerTrainingPipeline(config)
# Prepare your dataset
train_data = [
{'mesh': mesh1, 'label': 0, 'id': 'mesh_001'},
{'mesh': mesh2, 'label': 1, 'id': 'mesh_002'},
# ... more training data
]
val_data = [
{'mesh': val_mesh1, 'label': 0, 'id': 'val_001'},
# ... validation data
]
# Create datasets with augmentation
train_dataset = MeshTransformerDataset(
train_data,
pipeline.tokenizer,
task_type='classification',
augmentation_fn=MeshAugmentation.random_augment,
max_seq_len=512
)
val_dataset = MeshTransformerDataset(
val_data,
pipeline.tokenizer,
task_type='classification',
max_seq_len=512
)
# Train the model
pipeline.train(train_dataset, val_dataset)
Data Augmentation
from polymesh_ai import MeshAugmentation
# Individual augmentations
rotated_mesh = MeshAugmentation.random_rotation(mesh, angle_range=30.0)
scaled_mesh = MeshAugmentation.random_scale(mesh, scale_range=(0.8, 1.2))
translated_mesh = MeshAugmentation.random_translation(mesh, translation_range=0.1)
noisy_mesh = MeshAugmentation.add_noise(mesh, noise_std=0.01)
# Combined random augmentation
augmented_mesh = MeshAugmentation.random_augment(mesh)
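You can also compose the individual transforms into your own augmentation_fn for MeshTransformerDataset; a minimal sketch using only the functions shown above:
def my_augment(mesh):
    # Milder than random_augment: a small rotation plus light vertex noise
    mesh = MeshAugmentation.random_rotation(mesh, angle_range=15.0)
    return MeshAugmentation.add_noise(mesh, noise_std=0.005)

# Pass augmentation_fn=my_augment when constructing the dataset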
🎨 Use Cases & Applications
3D Shape Classification
# Configure for classification
config = {
'model_type': 'standard',
'task_type': 'classification',
'feature_dim': 6, # Position + Normal
'd_model': 512,
'num_layers': 8
}
pipeline = MeshTransformerTrainingPipeline(config)
# Your mesh dataset with labels
mesh_classes = ['chair', 'table', 'lamp', 'sofa']
# Train classifier...
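Mapping class names to the integer labels used in the training entries is plain Python:
label_map = {name: idx for idx, name in enumerate(mesh_classes)}
# {'chair': 0, 'table': 1, 'lamp': 2, 'sofa': 3}
train_data = [{'mesh': chair_mesh, 'label': label_map['chair'], 'id': 'mesh_001'}]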
Mesh Reconstruction/Autoencoder
# Configure for reconstruction
config = {
'model_type': 'standard',
'task_type': 'reconstruction',
'feature_dim': 6,
'd_model': 512,
'learning_rate': 1e-5,
'max_epochs': 200
}
pipeline = MeshTransformerTrainingPipeline(config)
# Train autoencoder for mesh compression/denoising
model = pipeline.model
with torch.no_grad():
    reconstructed = model(tokens, task='reconstruction')
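To score reconstruction quality, mean-squared error against the original token features is a reasonable choice. This sketch assumes the model's reconstruction output matches the shape of the stacked input features:
import torch.nn.functional as F

# Assumes `reconstructed` has the same (batch, seq_len, feature_dim) shape
original = torch.tensor([token.features for token in tokens]).unsqueeze(0)
loss = F.mse_loss(reconstructed, original)
print(f"Reconstruction MSE: {loss.item():.6f}")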
Mesh Generation
# Configure for generation
config = {
'model_type': 'adaptive',
'task_type': 'generation',
'd_model': 768,
'num_layers': 12
}
# Pre-training tasks
from polymesh_ai import MeshTransformerPreTrainer
trainer = MeshTransformerPreTrainer(model, tokenizer)
# Masked mesh modeling
masked_input, positions, targets = trainer.masked_mesh_modeling(tokens, mask_ratio=0.15)
# Mesh completion
partial_input, complete_target = trainer.mesh_completion_task(partial_tokens, complete_tokens)
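A single pre-training step could then look like the sketch below. The shapes returned by masked_mesh_modeling and the loss choice are assumptions for illustration, not the library's prescribed loop:
import torch.nn.functional as F

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
prediction = model(masked_input, positions)  # assumed (features, positions) call
loss = F.mse_loss(prediction, targets)       # reconstruct the masked features
loss.backward()
optimizer.step()
optimizer.zero_grad()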
🔧 Advanced Features
Custom Attention Mechanisms
from polymesh_ai import MeshTransformerLayer
# Create custom transformer layers
custom_layer = MeshTransformerLayer(
d_model=256,
nhead=8,
attention_type='geometric' # or 'graph', 'multiscale', 'sparse'
)
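A stack of such layers behaves like any other torch module list; the sketch below assumes each layer maps a (batch, seq_len, d_model) tensor to the same shape, as standard transformer encoder layers do:
import torch.nn as nn

layers = nn.ModuleList([
    MeshTransformerLayer(d_model=256, nhead=8, attention_type='geometric')
    for _ in range(4)
])
x = torch.randn(1, 128, 256)
for layer in layers:
    x = layer(x)  # assumed tensor-in/tensor-out forward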
3D Positional Encoding
from polymesh_ai import MeshPositionalEncoding
# Custom 3D positional encoding
pos_encoding = MeshPositionalEncoding(
d_model=256,
max_freq=10.0,
num_freq_bands=10
)
# Apply to 3D positions
positions = torch.randn(batch_size, seq_len, 3)
pos_embeddings = pos_encoding(positions)
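Positional embeddings are typically added to the projected token features before the first transformer layer; a minimal sketch, assuming pos_embeddings comes back with shape (batch_size, seq_len, d_model):
# Hypothetical projection of raw 6-dim features up to d_model
feature_proj = torch.nn.Linear(6, 256)
x = feature_proj(torch.randn(batch_size, seq_len, 6)) + pos_embeddings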
Model Checkpointing
# Save model checkpoint
pipeline.save_checkpoint('my_model_epoch_50.pth')
# Load checkpoint
pipeline.load_checkpoint('my_model_epoch_50.pth')
# Save just the model state
torch.save(pipeline.model.state_dict(), 'model_weights.pth')
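Restoring a bare state dict into a freshly constructed model uses standard PyTorch; the constructor arguments must match those used at training time:
model = MeshTransformer(
    feature_dim=6,
    d_model=256,
    nhead=8,
    num_layers=6
)
model.load_state_dict(torch.load('model_weights.pth', map_location='cpu'))
model.eval()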
📊 Monitoring & Logging
WandB Integration
# Enable WandB logging
config = {
'use_wandb': True,
'wandb_project': 'mesh-transformer-experiments',
# ... other config
}
# Your training will automatically log to WandB
pipeline = MeshTransformerTrainingPipeline(config)
Custom Metrics
# Track custom metrics during training
from sklearn.metrics import accuracy_score, f1_score

def compute_custom_metrics(predictions, targets):
    # Logits -> class indices, scored with scikit-learn (part of the full install)
    preds = predictions.argmax(dim=-1).cpu().numpy()
    return {'custom_accuracy': accuracy_score(targets.cpu().numpy(), preds),
            'custom_f1': f1_score(targets.cpu().numpy(), preds, average='macro')}
# Extend the training pipeline with custom metrics
🐛 Troubleshooting
Common Issues
- Memory Issues with Large Meshes
# For very large meshes, prefer the sparse attention components shown
# above (SparseAttention, or MeshTransformerLayer(attention_type='sparse')),
# or keep the model small:
model = MeshTransformer(
    feature_dim=6,
    d_model=256,
    # ... other params
)
# Or chunk your processing
def process_large_mesh(mesh, chunk_size=1000):
    tokens = tokenizer.tokenize(mesh)
    results = []
    for i in range(0, len(tokens), chunk_size):
        chunk = tokens[i:i + chunk_size]
        result = model(chunk)
        results.append(result)
    return torch.cat(results, dim=0)
- GPU Memory Management
# Clear cache between batches
torch.cuda.empty_cache()
# Use mixed precision training
config['use_amp'] = True # If implemented
- Custom Mesh Loading
# Implement your own mesh class compatible with tokenizers
class CustomVertex:
    def __init__(self, position, normal=None, color=None):
        self.position = np.array(position, dtype=np.float32)
        self.normal = np.array(normal, dtype=np.float32) if normal is not None else None
        self.color = np.array(color, dtype=np.float32) if color is not None else None

class CustomMesh:
    def __init__(self, vertices, faces=None):
        self.vertices = [CustomVertex(v) for v in vertices]
        self.faces = faces
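A mesh built this way can then be tokenized like the quick-start example. Since these vertices carry no normals, configure the tokenizer to skip them:
mesh = CustomMesh(vertices=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
tokenizer = VertexTokenizer(include_normals=False, include_colors=False)
tokens = tokenizer.tokenize(mesh)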
📚 Examples & Tutorials
Check the library's examples/ directory for:
- Basic classification tutorial
- Mesh autoencoder training
- Custom tokenizer implementation
- Advanced attention mechanism usage
- Large-scale training pipelines
🤝 Contributing
The library is actively developed! Contribute by:
- Reporting issues on GitHub
- Submitting pull requests
- Adding new attention mechanisms
- Improving documentation
- Creating examples and tutorials
Happy mesh processing with polymesh-ai! 🎉
Project details
Download files
Source Distribution
- polymesh_ai-0.3.4.tar.gz (6.7 kB)
Built Distribution
- polymesh_ai-0.3.4-py3-none-any.whl (6.0 kB)
File details
Details for the file polymesh_ai-0.3.4.tar.gz.
File metadata
- Download URL: polymesh_ai-0.3.4.tar.gz
- Upload date:
- Size: 6.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes

Algorithm | Hash digest
---|---
SHA256 | f71bf04185558d8ea67f14ee69d89fbabd4bb7591580844d9cccb0fae41cb501
MD5 | 1928c2fadaea4bf20adbaaae2406939e
BLAKE2b-256 | bfb32183d7056089246677d79a361645f1771b490e8a63483d5fd8b308279c34
File details
Details for the file polymesh_ai-0.3.4-py3-none-any.whl.
File metadata
- Download URL: polymesh_ai-0.3.4-py3-none-any.whl
- Upload date:
- Size: 6.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes

Algorithm | Hash digest
---|---
SHA256 | 2234d2ad5359a57896d8a5a58262c6972fbce06d65c5027eaaa56e6259973e28
MD5 | 1784d74d6692b0e71e1e5d0f5750edb0
BLAKE2b-256 | f6333bf693c6b8c632ada875d1c029ad66ecbae8a7911776724cbb0a08506c49