
Cordon

Semantic anomaly detection for system log files

Cordon uses transformer-based embeddings and density-based scoring to identify semantically unusual patterns in large log files, reducing massive logs to their most anomalous sections for analysis.

Key principle: Repetitive patterns (even errors) are considered "normal background." Cordon surfaces unusual, rare, or clustered events that stand out semantically from the bulk of the logs.

Features

  • Semantic Analysis: Uses transformer models to understand log content meaning, not just keyword matching
  • Density-Based Scoring: Identifies anomalies using k-NN distance in embedding space
  • Noise Reduction: Filters out repetitive logs, keeping only unusual patterns
  • Multiple Backends: sentence-transformers (default) or llama.cpp for containers

Requirements

GPU Requirements (Optional but Recommended)

For GPU acceleration, you need:

  • NVIDIA GPU: Pascal architecture or newer (GTX 10-series, RTX series, Tesla P/V/A/H series)
  • Compute Capability: 6.0 or higher
  • Compatible GPUs: GTX 1050+, RTX 20/30/40 series, Tesla P100+, V100, A100, H100

Not compatible: GTX 900-series or older (Maxwell/Kepler architectures)

CPU mode is always available as a fallback.

Installation

From PyPI (Recommended)

# With uv (recommended)
uv pip install cordon

# With pip
pip install cordon

From Source

# Clone the repository
git clone https://github.com/calebevans/cordon.git
cd cordon

# With uv (recommended)
uv pip install -e .

# With pip
pip install -e .

For development:

uv pip install -e ".[dev]"
pre-commit install

For llama.cpp backend (GPU acceleration in containers):

uv pip install -e ".[llama-cpp]"

Container Installation

make container-build

See Container Guide for GPU support and advanced usage.

Quick Start

Command Line

# Basic usage
cordon system.log

# Multiple files
cordon app.log error.log

# With options
cordon --window-size 10 --k-neighbors 10 --anomaly-percentile 0.05 app.log

# With GPU acceleration (scoring batch size auto-detected)
cordon --device cuda --batch-size 64 large.log

# Override auto-detection if needed
cordon --device cuda --batch-size 64 --scoring-batch-size 50000 large.log

# Save results to file
cordon --output anomalies.xml system.log

# Show detailed statistics and save results
cordon --detailed --output results.xml app.log

# llama.cpp backend (for containers)
cordon --backend llama-cpp system.log

Python Library

from pathlib import Path
from cordon import SemanticLogAnalyzer, AnalysisConfig

# Basic usage
analyzer = SemanticLogAnalyzer()
output = analyzer.analyze_file(Path("system.log"))
print(output)

# Advanced configuration with GPU acceleration
config = AnalysisConfig(
    window_size=10,
    k_neighbors=10,
    anomaly_percentile=0.05,
    device="cuda",           # GPU for embedding and scoring
    batch_size=64,           # Embedding batch size
    scoring_batch_size=None  # Auto-detect optimal batch size (default)
)
analyzer = SemanticLogAnalyzer(config)
result = analyzer.analyze_file_detailed(Path("app.log"))

Backend Options

sentence-transformers (Default)

Best for native installations with GPU access.

cordon system.log  # Auto-detects GPU (MPS/CUDA)
cordon --device cuda system.log
cordon --device cpu system.log

llama.cpp Backend

Best for container deployments with GPU acceleration via Vulkan.

# Auto-downloads model on first run
cordon --backend llama-cpp system.log

# With GPU acceleration
cordon --backend llama-cpp --n-gpu-layers 10 system.log

# Custom model
cordon --backend llama-cpp --model-path ./model.gguf system.log

See llama.cpp Guide for details on models, performance, and GPU setup.

Container Usage

Build

# Build locally
make container-build

Run

# Pull published image from GitHub Container Registry
podman pull ghcr.io/calebevans/cordon:latest  # or :dev for development builds

# Run with published image
podman run --rm -v /path/to/logs:/logs:Z ghcr.io/calebevans/cordon:latest /logs/system.log

# Run with locally built image
make container-run DIR=/path/to/logs ARGS="/logs/system.log"

# With GPU (requires Podman with libkrun)
podman run --device /dev/dri -v /path/to/logs:/logs:Z ghcr.io/calebevans/cordon:latest \
  --backend llama-cpp --n-gpu-layers 10 /logs/system.log

See Container Guide for full details.

Primary Use Case: LLM Context Reduction

Cordon addresses the problem of log files that are too large for LLM context windows by reducing them to their semantically significant sections.

Real-world reduction rates from benchmarks:

  • 1M-line HDFS logs → 20K lines (98% reduction with p=0.02 threshold)
  • 5M-line HDFS logs → 100K lines (98% reduction with p=0.02 threshold)

Example workflow:

from pathlib import Path
from cordon import SemanticLogAnalyzer

# Extract anomalies
analyzer = SemanticLogAnalyzer()
anomalies = analyzer.analyze_file(Path("production.log"))

# Send the curated context to the LLM (it now fits in the context window)

The output is intentionally lossy—it discards repetitive patterns to focus on semantically unusual events.

How It Works

Pipeline

  1. Ingestion: Read log file line-by-line
  2. Segmentation: Create overlapping windows of N lines
  3. Vectorization: Embed windows using transformer models
  4. Scoring: Calculate k-NN density scores
  5. Thresholding: Select top X% based on scores
  6. Merging: Combine overlapping significant windows
  7. Formatting: Generate XML-tagged output
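The segmentation and merging steps can be sketched as follows. This is a simplified illustration, not Cordon's actual implementation; it assumes non-overlapping windows of `window_size` lines, matching the defaults in the configuration table below.

```python
def segment(lines, window_size=4):
    """Split log lines into consecutive, non-overlapping windows."""
    return [lines[i:i + window_size] for i in range(0, len(lines), window_size)]

def merge_ranges(ranges):
    """Combine overlapping or touching (start, end) line ranges into blocks."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Extend the previous block instead of starting a new one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

lines = [f"line {i}" for i in range(10)]
print(segment(lines, 4))                          # three windows: 4 + 4 + 2 lines
print(merge_ranges([(0, 4), (3, 8), (12, 16)]))   # [(0, 8), (12, 16)]
```

Merging is what turns several adjacent high-scoring windows into one contiguous anomalous block in the output.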

Scoring

  • Higher score = Semantically unique = Anomalous
  • Lower score = Repetitive = Normal background noise

The score for each window is the average cosine distance to its k nearest neighbors in the embedding space.
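The scoring rule above can be sketched with NumPy. This is an illustrative reimplementation of the idea (average cosine distance to the k nearest neighbors), not Cordon's internal code, which uses PyTorch:

```python
import numpy as np

def knn_scores(embeddings, k=5):
    """Average cosine distance from each window embedding to its k nearest neighbors."""
    # Normalize rows so cosine distance reduces to 1 - dot product.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - normed @ normed.T          # pairwise cosine distances
    np.fill_diagonal(dist, np.inf)          # exclude each window's distance to itself
    nearest = np.sort(dist, axis=1)[:, :k]  # k smallest distances per row
    return nearest.mean(axis=1)

# Nine near-identical "background" windows and one semantic outlier.
rng = np.random.default_rng(0)
background = rng.normal(0, 0.01, size=(9, 8)) + np.ones(8)
outlier = -np.ones((1, 8))
scores = knn_scores(np.vstack([background, outlier]), k=3)
print(scores.argmax())  # the outlier (index 9) has the highest score
```

Repeated windows sit in a dense cluster, so their nearest-neighbor distances are tiny and their scores low; a one-off window sits far from everything and scores high.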

GPU Acceleration: Both embedding and scoring phases automatically leverage GPU acceleration (CUDA/MPS) when available, providing significant speedups for large log files.

Important: Repetitive patterns are filtered even if critical. The same FATAL error repeated 100 times scores as "normal" because it's semantically similar to itself.

See Cordon's architecture for full details.

Configuration

Analysis Parameters

| Parameter | Default | CLI Flag | Description |
|-----------|---------|----------|-------------|
| window_size | 4 | --window-size | Lines per window (non-overlapping) |
| k_neighbors | 5 | --k-neighbors | Number of neighbors for density calculation |
| anomaly_percentile | 0.1 | --anomaly-percentile | Top N% to keep (0.1 = 10%) |
| batch_size | 32 | --batch-size | Batch size for embedding generation |
| scoring_batch_size | Auto | --scoring-batch-size | Batch size for k-NN scoring (auto-detects based on GPU memory) |

Backend Options

| Parameter | Default | CLI Flag | Description |
|-----------|---------|----------|-------------|
| backend | sentence-transformers | --backend | Embedding backend |
| model_name | all-MiniLM-L6-v2 | --model-name | HuggingFace model |
| device | Auto | --device | Device for embedding and scoring (cuda/mps/cpu) |
| model_path | None | --model-path | GGUF model path (llama-cpp) |
| n_gpu_layers | 0 | --n-gpu-layers | GPU layers (llama-cpp) |

Output Options

| Parameter | Default | CLI Flag | Description |
|-----------|---------|----------|-------------|
| detailed | False | --detailed | Show detailed statistics (timing, score distribution) |
| output | None | --output, -o | Save anomalous blocks to file (default: stdout) |

Run cordon --help for full CLI documentation.

⚠️ Important: Token Limits and Window Sizing

Transformer models have token limits that cap how much of each window is analyzed. Windows that exceed the limit are automatically truncated: only the first tokens up to the model's maximum are embedded.

Cordon will warn you if significant truncation is detected and suggest better settings for your logs.

Default model (all-MiniLM-L6-v2) has a 256-token limit:

  • Compact logs (20-30 tokens/line): Can increase to window_size=8 for more context
  • Standard logs (40-50 tokens/line): Default works well
  • Verbose logs (50-70 tokens/line): Default works, or use larger model for bigger windows
  • Very verbose logs (80+ tokens/line): Reduce to window_size=3 or use larger-context model
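A rough way to pick a window size is to estimate tokens per line from a sample of your log. The sketch below approximates token counts with a whitespace split times a fudge factor (`tokens_per_word=1.3` is an assumption; real transformer tokenizers typically emit more tokens than words, so treat the result as an upper bound and prefer the more conservative recommendations above):

```python
def estimate_window_size(lines, token_limit=256, tokens_per_word=1.3):
    """Rough heuristic: how many log lines fit in the model's token limit."""
    avg_words = sum(len(line.split()) for line in lines) / max(len(lines), 1)
    tokens_per_line = avg_words * tokens_per_word
    return max(1, int(token_limit // tokens_per_line))

sample = ["Jan 01 00:00:00 host kernel: usb 1-1: new high-speed USB device"] * 100
print(estimate_window_size(sample))
```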

For verbose system logs, use larger-context models:

# BAAI/bge-base-en-v1.5 supports 512 tokens (~8-10 verbose lines)
cordon --model-name "BAAI/bge-base-en-v1.5" --window-size 8 your.log

See Configuration Guidelines for detailed recommendations.

Use Cases

What Cordon Is Good For

  • LLM Pre-processing: Reduce large logs to small anomalous sections prior to analysis
  • Initial Triage: First-pass screening of unfamiliar logs to find "what's unusual here?"
  • Anomaly Detection: Surface semantically unique events (rare errors, state transitions, unusual clusters)
  • Exploratory Analysis: Discover unexpected patterns without knowing what to search for

What Cordon Is NOT Good For

  • Complete error analysis (repetitive errors filtered)
  • Specific error hunting (use grep/structured logging)
  • Compliance logging (this is lossy by design)

Performance

GPU Acceleration

Cordon automatically leverages GPU acceleration for both embedding and scoring phases when available:

  • Embedding: Uses PyTorch/sentence-transformers with CUDA or MPS
  • Scoring: Uses PyTorch for GPU-accelerated k-NN computation
  • Speedup: 5-15x faster scoring on GPU compared to CPU for large datasets

For large log files (millions of lines), GPU acceleration can reduce total processing time from hours to minutes.

Memory Management

Cordon uses PyTorch for all k-NN scoring operations:

| Strategy | When | RAM Usage | Speed |
|----------|------|-----------|-------|
| PyTorch GPU | GPU available (CUDA/MPS) | Moderate | Fastest |
| PyTorch CPU | No GPU / CPU forced | Moderate | Fast |

What's a "window"? A window is a non-overlapping chunk of N consecutive log lines (default: 4 lines). A 10,000-line log with window_size=4 creates 2,500 windows.
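The window count follows directly from ceiling division (a trailing partial window still counts as a window):

```python
import math

lines, window_size = 10_000, 4
windows = math.ceil(lines / window_size)
print(windows)  # 2500
```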
