
Video Comprehension Score (VCS) - A comprehensive metric for evaluating narrative similarity between reference and generated text

Project description

VCS Process Flow

A Comprehensive Python Library for Narrative Similarity Evaluation Between Long-Form Descriptions

PyPI version Python 3.10+ License: MIT Documentation

📄 Research Paper · 📓 Interactive Notebook


🤔 What is VCS?

Recent advances in Large Video Language Models (LVLMs) have significantly enhanced automated video understanding, enabling detailed, long-form narratives of complex video content. However, accurately evaluating whether these models genuinely comprehend the video's narrative (its events, entities, interactions, and chronological coherence) remains challenging.

Why Existing Metrics Fall Short:

  • N-gram Metrics (e.g., BLEU, ROUGE, CIDEr): Primarily measure lexical overlap, penalizing valid linguistic variations and inadequately evaluating narrative chronology.

  • Embedding-based Metrics (e.g., BERTScore, SBERT): Improve semantic sensitivity but struggle with extended context, detailed content alignment, and narrative sequencing.

  • LLM-based Evaluations: Often inconsistent, lacking clear criteria for narrative structure and chronology assessments.

Moreover, traditional benchmarks largely rely on question-answering tasks, which only test isolated events or entities rather than holistic video comprehension. A model answering specific questions correctly does not necessarily demonstrate understanding of the overall narrative or the intricate interplay of events.

Introducing VCS (Video Comprehension Score):

VCS is a Python library specifically designed to overcome these challenges by evaluating narrative comprehension through direct comparison of extensive, detailed video descriptions generated by LVLMs against human-written references. Unlike traditional metrics, VCS assesses whether models capture the overall narrative structure, event sequencing, and thematic coherence, not just lexical or isolated semantic matches.

Core Components of VCS:

  • ๐ŸŒ Global Alignment Score (GAS): Captures overall thematic alignment, tolerating stylistic variations without penalizing valid linguistic differences.

  • ๐ŸŽฏ Local Alignment Score (LAS): Checks detailed semantic correspondence at a chunk-level, allowing minor descriptive variations while penalizing significant inaccuracies or omissions.

  • ๐Ÿ“– Narrative Alignment Score (NAS): Evaluates chronological consistency, balancing the need for both strict event sequencing and permissible narrative flexibility.

Initially developed for evaluating video comprehension by comparing generated and human-written video narratives, VCS is versatile enough for broader applications, including document-level narrative comparisons, analysis of extensive narrative content, and various other narrative similarity tasks.

🚀 Key Applications

🎯 Transform Your Narrative Analysis Across Every Domain

🎬 Video Intelligence

Evaluate video language models' narrative comprehension

📄 Document Similarity

Compare semantic alignment between long-form documents

📖 Story Analysis

Measure narrative similarity between different stories

🎓 Academic Research

Detect conceptual plagiarism and idea overlap

📝 Paragraph Analysis

Evaluate narrative coherence within text sections

🎯 Short Caption Evaluation

Evaluate short captions and brief descriptions


🌟 Key Features

Explore the comprehensive capabilities that make VCS a powerful narrative evaluation toolkit. To understand these features in detail, read our research paper, then visit our interactive playground to see them in action.

🧮 Comprehensive Metric Suite

Computes VCS along with detailed breakdowns: GAS (global thematic similarity), LAS with precision/recall components, and NAS with distance-based and line-based sub-metrics. Access all internal calculations including penalty systems, mapping windows, and alignment paths.

📊 Advanced Visualization Engine

11 specialized visualization functions including similarity heatmaps, alignment analysis, best-match visualizations, narrative flow diagrams, and precision/recall breakdowns. Each metric component can be visualized with publication-quality plots.

📋 Professional PDF Reports

Generate comprehensive multi-page PDF reports with all metrics, visualizations, and analysis details. Supports both complete reports and customizable selective reports. Professional formatting suitable for research publications.

⚙️ Flexible Configuration System

Fine-tune evaluation with configurable parameters: chunk sizes, similarity thresholds, context windows, and Local Chronology Tolerance (LCT). Supports custom segmentation and embedding functions for domain-specific applications.


⚡ Getting Started

Welcome to VCS Metrics! This guide will walk you through everything you need to start analyzing narrative similarity between texts. We'll cover installation, setup, and your first VCS analysis step by step.


📦 Step 1: Installation

Choose the installation method that fits your needs:

🎯 For Most Users

Recommended if you just want to use VCS

🖱️ Click to expand installation steps

Terminal Installation:

pip install video-comprehension-score

Jupyter/Colab Installation:

!pip install video-comprehension-score

✅ Ready in 30 seconds
🔥 Zero configuration needed
⚡ Instant access to all features

🛠️ For Developers

If you want to contribute or modify VCS

🖱️ Click to expand development setup

Terminal Installation:

git clone https://github.com/hdubey-debug/vcs.git
cd vcs
pip install -e ".[dev]"
pre-commit install

Jupyter/Colab Installation:

!git clone https://github.com/hdubey-debug/vcs.git
%cd vcs
!pip install -e ".[dev]"
!pre-commit install

🔧 Latest features first
🧪 Testing capabilities
🤝 Contribution ready


🛠️ System Requirements

Before installing VCS, make sure your system meets these requirements:

๐Ÿ Python


Required: Python 3.10 or higher

VCS Metrics uses modern Python features and requires Python 3.10+. We recommend Python 3.11+ for optimal performance.

🔥 PyTorch


Required: PyTorch 1.9.0+

VCS needs PyTorch but doesn't install it automatically to avoid conflicts. Get it from the official PyTorch website.

💡 Pro Tip: In Google Colab, PyTorch is pre-installed!

💡 Note: VCS automatically installs dependencies: numpy≥1.20.0, matplotlib≥3.5.0, seaborn≥0.11.0


🔧 Step 2: Prepare Your Functions

Now that VCS is installed, you need to define two functions before you can use the VCS API. Here's how VCS works:

📋 VCS API Overview

from vcs import compute_vcs_score

result = compute_vcs_score(
    reference_text="Your reference text here",
    generated_text="Your generated text here", 
    segmenter_fn=your_segmenter_function,        # ← You provide this
    embedding_fn_las=your_embedding_function,    # ← You provide this
    embedding_fn_gas=your_embedding_function,    # ← You provide this
    return_all_metrics=True
)

print(f"VCS Score: {result['VCS']:.4f}")

As you can see, VCS requires two custom functions from you. Let's understand what each should do:

🔪 Segmenter Function

What it does: Splits text into meaningful segments (sentences, paragraphs, etc.)

Required signature:

def your_segmenter(text: str) -> List[str]:
    # Your implementation here
    return list_of_text_segments

Arguments: text (str) - Input text to be segmented
Returns: List[str] - List of text segments
You can use: Any library or model (NLTK, spaCy, custom logic, etc.)
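For instance, a minimal segmenter satisfying this signature could split on sentence-ending punctuation. This is a simplified sketch (the function name is illustrative); in practice you would likely use NLTK, spaCy, or SAT:

```python
import re
from typing import List

def regex_segmenter(text: str) -> List[str]:
    """Illustrative segmenter: splits on whitespace that follows
    sentence-ending punctuation (., !, ?)."""
    segments = [s.strip()
                for s in re.split(r'(?<=[.!?])\s+', text.strip())
                if s.strip()]
    return segments

# Example: three sentences become three segments
print(regex_segmenter("A fox ran. It was fast! Then it slept."))
```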

🧠 Embedding Function

What it does: Converts text segments into numerical vectors (embeddings)

Required signature:

def your_embedder(texts: List[str]) -> torch.Tensor:
    # Your implementation here  
    return tensor_of_embeddings

Arguments: texts (List[str]) - List of text segments to embed
Returns: torch.Tensor - Tensor of shape (len(texts), embedding_dim)
You can use: Any embedding model (sentence-transformers, OpenAI, etc.)
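A toy embedder that satisfies the required signature (illustrative only: it builds deterministic bag-of-characters vectors, not real semantic embeddings; use a real model such as sentence-transformers in practice). The key point is the return contract, a tensor of shape `(len(texts), embedding_dim)`:

```python
import torch
from typing import List

def toy_embedder(texts: List[str]) -> torch.Tensor:
    """Illustrative embedder: L2-normalized letter-count vectors of dim 26.
    Matches the required (len(texts), embedding_dim) output shape."""
    dim = 26
    rows = []
    for t in texts:
        v = torch.zeros(dim)
        for ch in t.lower():
            if 'a' <= ch <= 'z':
                v[ord(ch) - ord('a')] += 1.0
        rows.append(v / (v.norm() + 1e-8))  # normalize for cosine similarity
    return torch.stack(rows)

emb = toy_embedder(["abc", "hello world"])
print(emb.shape)  # torch.Size([2, 26])
```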

🌟 Author Recommendations (2025)

For best results, we recommend these state-of-the-art models:

⚠️ Note: These recommendations are current as of 2025. Always research the latest SOTA options.

🔪 Segmentation Champion

🏆 Recommended: Segment Any Text (SAT)

✨ Why we recommend SAT:

  • 🎯 State-of-the-art segmentation accuracy
  • ⚡ Intelligent boundary detection
  • 🧠 Context-aware text splitting
  • 🔬 Research-grade performance

🔗 Repository: github.com/segment-any-text/wtpsplit

🧠 Embedding Powerhouse

🥇 Recommended: nv-embed-v2

🌟 Why we recommend nv-embed-v2:

  • 📊 Top performer on MTEB Leaderboard
  • 🚀 Superior semantic understanding
  • 💪 Robust multilingual support
  • ⚡ Excellent for VCS analysis

🔗 Model: nvidia/NV-Embed-v2

💡 Alternative Options: NLTK, spaCy, sentence-transformers, or build your own custom functions!


💻 Step 3: Run Your First VCS Analysis

Now let's see VCS in action with a complete working example:

⚡ Performance Notes
SOTA models require a GPU. For CPU-only testing, this example uses lightweight alternatives.

🚀 Quick Example - Click to expand complete tutorial

🎯 Complete Working Example

Copy, paste, and run this code to see VCS in action

# Fix import path issue if running from vcs/ root directory
import sys
import os
if os.path.basename(os.getcwd()) == 'vcs' and os.path.exists('src/vcs'):
    sys.path.insert(0, 'src')
    print("🔧 Fixed import path for development directory")

# Test the installation
try:
    import vcs
    print("✅ VCS package imported successfully!")

    # Test main function availability
    if hasattr(vcs, 'compute_vcs_score'):
        print("✅ Main function 'compute_vcs_score' is available!")
    else:
        print("⚠️ Main function not found - there might be an installation issue")

    # Try to get version
    try:
        print(f"📦 Version: {vcs.__version__}")
    except AttributeError:
        print("📦 Version: Unable to determine (this is normal for development installs)")

except ImportError as e:
    print(f"❌ Import failed: {e}")
    print("💡 Make sure you:")
    print("   1. Installed VCS correctly: pip install -e .[dev]")
    print("   2. Restarted your notebook kernel")
    print("   3. You're NOT in the root vcs/ directory (this causes import conflicts)")

# Import required libraries
import torch
from typing import List

# Define lightweight segmenter function
def simple_segmenter(text: str) -> List[str]:
    """
    Simple sentence segmenter using period splitting.
    
    Args:
        text: Input text to segment
        
    Returns:
        List of text segments
    """
    # Split by periods and clean up
    segments = [s.strip() for s in text.split('.') if s.strip()]
    return segments

# Define lightweight embedding function using sentence-transformers
def lightweight_embedding_function(texts: List[str]) -> torch.Tensor:
    """
    Lightweight embedding function using sentence-transformers.
    
    Args:
        texts: List of text segments to embed
        
    Returns:
        PyTorch tensor of shape (len(texts), embedding_dim)
    """
    try:
        from sentence_transformers import SentenceTransformer
        
        # Use a lightweight model (only downloads ~80MB)
        model = SentenceTransformer('all-MiniLM-L6-v2')
        
        # Generate embeddings
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)
        
    except ImportError:
        print("⚠️ sentence-transformers not found. Installing...")
        import subprocess
        import sys
        subprocess.check_call([sys.executable, "-m", "pip", "install", "sentence-transformers"])
        
        # Try again after installation
        from sentence_transformers import SentenceTransformer
        model = SentenceTransformer('all-MiniLM-L6-v2')
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)

# Example texts
reference_text = """
The quick brown fox jumps over the lazy dog.
It was a beautiful sunny day in the forest.
The fox was looking for food for its family.
"""

generated_text = """
A brown fox jumped over a sleeping dog.
The weather was nice and sunny in the woods.
The fox needed to find food for its cubs.
"""

# Compute VCS score
print("🧠 Computing VCS score...")
try:
    result = vcs.compute_vcs_score(
        reference_text=reference_text,
        generated_text=generated_text,
        segmenter_fn=simple_segmenter,
        embedding_fn_las=lightweight_embedding_function,
        embedding_fn_gas=lightweight_embedding_function,
        return_all_metrics=True,
        return_internals=True
    )
    
    print("🎯 VCS Results:")
    print(f"VCS Score: {result['VCS']:.4f}")
    print(f"GAS Score: {result['GAS']:.4f}")
    print(f"LAS Score: {result['LAS']:.4f}")
    print(f"NAS Score: {result['NAS']:.4f}")
    print("✅ VCS is working correctly!")
    
    # Generate visualization (optional)
    if 'internals' in result:
        try:
            fig = vcs.visualize_metrics_summary(result['internals'])
            print("📊 Visualization generated successfully!")
            # fig.show()  # Uncomment to display
        except Exception as viz_error:
            print(f"⚠️ Visualization failed (this is normal in some environments): {viz_error}")
    
except Exception as e:
    print(f"โŒ Error running VCS: {e}")
    print("๐Ÿ’ก Make sure PyTorch is installed and try restarting your kernel")

๐Ÿ“ Scale Note: This example uses small text for illustration - VCS excels with long-form content!
โš ๏ธ Import Tip: Running from vcs/ root? The example includes an automatic path fix.


โš™๏ธ Advanced Configuration

Once you're comfortable with the basics, you can fine-tune VCS behavior for your specific use case:

🎯 Core Parameters

🎛️ Essential Controls:

Parameter | Default | Purpose
chunk_size | 1 | Segment grouping
context_cutoff_value | 0.6 | Similarity threshold
context_window_control | 4.0 | Context window size
lct | 0 | Narrative reordering tolerance

📊 Return All Metrics

🎛️ Control Parameter:

Parameter | Default | Purpose
return_all_metrics | False | Return detailed metric breakdown

When set to True, you get:

  • Individual GAS, LAS, NAS scores
  • LAS precision and recall components
  • Distance-based and line-based NAS sub-metrics
  • Complete metric breakdown for analysis

๐Ÿ” Return Internals

๐ŸŽ›๏ธ Control Parameter:

Parameter Default Purpose
return_internals False Return internal computation data

When set to True, you get:

  • Similarity matrices and alignment paths
  • Mapping windows and penalty calculations
  • Text chunks and segmentation details
  • All data needed for visualization

🚀 Example Configuration

# 🎯 Comprehensive configuration with all features enabled
result = compute_vcs_score(
    reference_text=ref_text,
    generated_text=gen_text,
    segmenter_fn=segmenter,
    embedding_fn_las=embedder,
    embedding_fn_gas=embedder,
    chunk_size=2,                  # Group segments
    context_cutoff_value=0.7,      # Higher threshold
    context_window_control=3.0,    # Tighter windows
    lct=1,                         # Some reordering OK
    return_all_metrics=True,       # Get detailed breakdown
    return_internals=True          # Get visualization data
)

📚 For complete API documentation and visualization guides, visit our API Documentation


โ“ Frequently Asked Questions

๐Ÿค” How does VCS differ from BLEU/ROUGE?

Unlike BLEU and ROUGE which rely on hard n-gram matching, VCS utilizes latent space matching by comparing embeddings at both global and local chunk levels. VCS also evaluates the chronological order of content chunks and combines these three dimensions to generate a comprehensive final score that better captures semantic similarity and narrative structure.

⚡ What's the minimum text length for VCS?

VCS works with any text length, but it's optimized for longer texts (100+ words) where narrative structure is important. For very short texts, simpler metrics might be more appropriate.

๐Ÿ“ What's the maximum text length for VCS?

There is no upper limit on text length for VCS. The framework is designed to handle texts of any size, from short paragraphs to extensive documents, making it suitable for large-scale narrative evaluation tasks.

🧠 Which embedding models work best?

We recommend checking the MTEB leaderboard for the latest SOTA models. As of 2024, nv-embed-v2 and similar transformer-based models provide excellent results.

🎯 How do I control the granularity of comparison?

Use the chunk_size parameter to control the granularity of text comparison. A smaller chunk size provides more fine-grained analysis, while a larger chunk size offers broader, more general comparisons. The default value is 1 for maximum granularity.
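The grouping idea behind chunk_size can be pictured with a small sketch (conceptual only, not the library's internal implementation):

```python
from typing import List

def group_into_chunks(segments: List[str], chunk_size: int) -> List[str]:
    """Conceptual sketch of chunk_size: adjacent segments are joined into
    groups of `chunk_size` before chunk-level comparison."""
    return [" ".join(segments[i:i + chunk_size])
            for i in range(0, len(segments), chunk_size)]

segments = ["The fox ran.", "It was fast.", "Then it slept."]
print(group_into_chunks(segments, 1))  # fine-grained: one segment per chunk
print(group_into_chunks(segments, 2))  # coarser: two segments per chunk
```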

โฑ๏ธ How do I control the strictness of chronological matching?

Use the lct (Local Chronology Tolerance) parameter to control chronological matching strictness. A higher LCT value means more lenient chronological ordering, allowing for greater flexibility in narrative sequence evaluation. The default value is 0 for strict chronological matching.

🔗 Can I use different embedding functions for GAS and LAS?

Yes, you can specify different embedding functions for Global Alignment Score (GAS) and Local Alignment Score (LAS) using the embedding_fn_gas and embedding_fn_las parameters respectively. This allows you to optimize each component with models best suited for their specific evaluation tasks.


๐Ÿ—๏ธ Project Structure

vcs/
├── 📁 src/vcs/                  # Main package source code
│   ├── 📄 __init__.py           # Package initialization
│   ├── 📄 scorer.py             # Main VCS API entry point
│   ├── 📄 _config.py            # Configuration settings
│   ├── 📁 _metrics/             # Core VCS metrics implementations
│   │   ├── 📁 _gas/             # Global Alignment Score
│   │   ├── 📁 _las/             # Local Alignment Score
│   │   ├── 📁 _nas/             # Narrative Alignment Score with components
│   │   │   └── 📁 _nas_components/  # Distance NAS, Line NAS, Regularize NAS
│   │   └── 📁 _vcs/             # Combined VCS computation
│   ├── 📁 _visualize_vcs/       # Comprehensive visualization suite
│   │   ├── 📁 _similarity_matrix/  # Similarity matrix visualizations
│   │   ├── 📁 _best_match/      # Best match analysis plots
│   │   ├── 📁 _distance_nas/    # Distance-based NAS visualizations
│   │   ├── 📁 _line_nas/        # Line-based NAS visualizations
│   │   ├── 📁 _mapping_windows/ # Context window visualizations
│   │   ├── 📁 _metrics_summary/ # Overall metrics summary plots
│   │   ├── 📁 _pdf_report/      # PDF report generation
│   │   ├── 📁 _text_chunks/     # Text chunk visualizations
│   │   ├── 📁 _window_regularizer/ # Window regularizer plots
│   │   ├── 📁 _las/             # LAS-specific visualizations
│   │   └── 📁 _config/          # Visualization configuration
│   ├── 📁 _segmenting/          # Text segmentation utilities
│   ├── 📁 _matching/            # Optimal text matching algorithms
│   ├── 📁 _mapping_windows/     # Context window management
│   └── 📁 _utils/               # Helper utilities
├── 📁 docs/                     # Documentation and interactive demos
│   ├── 📄 index.html            # Main documentation website
│   ├── 📁 pages/                # Documentation pages
│   │   ├── 📄 api.html          # API reference
│   │   ├── 📄 playground.html   # Interactive playground
│   │   └── 📄 example.html      # Usage examples
│   ├── 📁 widgets/              # Interactive visualization widgets
│   ├── 📁 sphinx/               # Sphinx documentation source
│   └── 📁 assets/               # Documentation assets (CSS, JS, videos)
├── 📁 .github/                  # GitHub configuration
│   ├── 📁 assets/               # README assets (images, gifs)
│   ├── 📁 scripts/              # GitHub automation scripts
│   └── 📁 workflows/            # CI/CD automation pipelines
│       ├── 📄 test.yml          # Continuous testing
│       ├── 📄 publish.yml       # Package publishing
│       └── 📄 deploy-docs.yml   # Documentation deployment
├── 📄 pyproject.toml            # Package configuration & dependencies
├── 📄 CONTRIBUTING.md           # Development contribution guide
├── 📄 DEPLOYMENT.md             # Release and deployment guide
├── 📄 CHANGELOG.md              # Version history and changes
├── 📄 MANIFEST.in               # Package manifest
├── 📄 tag_version.py            # Version tagging script
├── 📄 LICENSE                   # MIT license
└── 📄 README.md                 # This documentation

🚀 Development & Contributing

We welcome contributions to VCS Metrics! Whether you're fixing bugs, adding features, or improving documentation, here's how to get started.

๐Ÿ› ๏ธ Quick Development Setup

๐Ÿ–ฑ๏ธ Click to expand development setup
# 1. Clone and setup
git clone https://github.com/hdubey-debug/vcs.git
cd vcs
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 2. Install development dependencies
pip install -e .[dev]

# 3. Create your feature branch
git checkout -b feature/your-feature-name

# 4. Make your changes
# Edit files in src/vcs/
# Add tests if needed
# Update docs if necessary

# 5. Run quality checks
black src/ && isort src/ && flake8 src/ && mypy src/

# 6. Commit with semantic format
git commit -m "minor: add new awesome feature"

# 7. Push and create PR
git push origin feature/your-feature-name

📋 Contribution Workflow

🔄 Development Process

1. Fork & Clone
2. Create Feature Branch
3. Make Changes
4. Write Tests
5. Submit PR
6. Code Review
7. Merge to Main

✅ Automated testing on every PR
✅ Fast feedback in ~2-3 minutes

📦 Release Process

1. Semantic Commit Messages
2. GitHub Release Creation
3. Automated Version Calculation
4. Package Building
5. TestPyPI Publishing
6. Production Release

🚀 Industry-standard CI/CD pipeline
⚡ Zero manual version management

💡 Semantic Commit Format

We use semantic commits for automatic version bumping:

Commit Type | Version Bump | Example
minor: description | New features | 1.0.4 → 1.1.0
major: description | Breaking changes | 1.0.4 → 2.0.0
anything else | Bug fixes (default) | 1.0.4 → 1.0.5
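The bump rules in the table above can be sketched as follows (illustrative only; the actual automation lives in the CI workflows):

```python
def next_version(current: str, commit_msg: str) -> str:
    """Sketch of the semantic-commit version-bump rules:
    'major:' bumps major, 'minor:' bumps minor, anything else bumps patch."""
    major, minor, patch = (int(x) for x in current.split("."))
    if commit_msg.startswith("major:"):
        return f"{major + 1}.0.0"
    if commit_msg.startswith("minor:"):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.0.4", "minor: add new awesome feature"))  # 1.1.0
```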

🔧 Automated Testing & CI/CD

Our comprehensive CI/CD pipeline ensures code quality and reliability on every commit:

🚀 What Gets Tested

✅ Matrix Testing - Python 3.11 & 3.12 compatibility
✅ Package Validation - Import testing & API availability
✅ Integration Testing - Full getting-started example
✅ Code Quality - Flake8 linting & complexity checks
✅ Build Testing - Package build verification

🔄 Triggers: Every push and pull request to main

✅ Automated testing ensures every change is production-ready

📖 Detailed Guides

For comprehensive information about contributing and development:

Contributing Guide Deployment Guide

๐Ÿค Getting Help

๐Ÿ› Bug Reports
Create GitHub Issue

๐Ÿ’ฌ Questions
GitHub Discussions

๐Ÿ’ก Feature Requests
Feature Request Issue


📚 Citation

If you use VCS Metrics in your research, please cite:

@software{vcs_metrics_2024,
  title = {VCS Metrics: Video Comprehension Score for Text Similarity Evaluation},
  author = {Dubey, Harsh and Ali, Mukhtiar and Mishra, Sugam and Pack, Chulwoo},
  year = {2024},
  institution = {South Dakota State University},
  url = {https://github.com/hdubey-debug/vcs},
  note = {Python package for narrative similarity evaluation}
}

🤖 CLIP-CC Ecosystem Integration

VCS is designed to work seamlessly with CLIP-CC Dataset for comprehensive video understanding evaluation.

CLIP-CC Dataset

🔄 Perfect Integration: VCS + CLIP-CC

  • 🎥 CLIP-CC provides the data → Rich video dataset with human summaries
  • 🔍 VCS provides the evaluation → Advanced narrative comprehension metrics
  • 🏆 Together: Complete research pipeline → From data loading to evaluation

๐Ÿ† Meet Our Contributors

๐ŸŒŸ The VCS Team - Building the Future of Text Similarity

Harsh Dubey
Lead Developer & Research Scientist
South Dakota State University

Commits: 2 · Lines: 49K · Files: 171

📋 Key Work:
  • VCS Algorithm Architecture
  • Visualization Engine
  • LAS, GAS, and NAS Metrics

GitHub

🤖 Automated Contributors

Contributor | Role | Contributions | Badge
🤖 GitHub Actions | CI/CD Automation | Clean history setup | Bot

📊 Contribution Analytics


📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🌟 Made with ❤️ by the VCS Team

Authors: Harsh Dubey, Mukhtiar Ali, Sugam Mishra, and Chulwoo Pack
Institution: South Dakota State University
Year: 2024

โญ Star this repo โ€ข ๐Ÿ› Report Bug โ€ข ๐Ÿ’ก Request Feature โ€ข ๐Ÿ’ฌ Community Q&A



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

video_comprehension_score-1.0.0.tar.gz (94.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

video_comprehension_score-1.0.0-py3-none-any.whl (103.6 kB)

Uploaded Python 3

File details

Details for the file video_comprehension_score-1.0.0.tar.gz.

File metadata

File hashes

Hashes for video_comprehension_score-1.0.0.tar.gz

Algorithm | Hash digest
SHA256 | cdb6bef762d034418e2fa7a9a27e889a6a34716c3a49a98eccffd4c1d18cf7aa
MD5 | 4d37ac11db5c4d399904fadbd5ac5580
BLAKE2b-256 | 423ce44e8b66aef053b5e36b86dee3bd1574a39b17ab57915dfc9757ff22a1e5

See more details on using hashes here.

Provenance

The following attestation bundles were made for video_comprehension_score-1.0.0.tar.gz:

Publisher: publish.yml on Multimodal-Intelligence-Lab/Video-Comprehension-Score

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file video_comprehension_score-1.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for video_comprehension_score-1.0.0-py3-none-any.whl

Algorithm | Hash digest
SHA256 | bac13a30e56e20e499938619de08b566c85a475f581903fb96d93f342f59a31c
MD5 | 690ffca23da2f95ec719ca39a2d672c4
BLAKE2b-256 | 1e3e2b36e491b33edcedf25be60c4fe384ea18d52b4355d4b0b9f82b52b81813

See more details on using hashes here.

Provenance

The following attestation bundles were made for video_comprehension_score-1.0.0-py3-none-any.whl:

Publisher: publish.yml on Multimodal-Intelligence-Lab/Video-Comprehension-Score

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
