
Text Summarizer

A Python-based text summarization tool that uses GloVe word embeddings and the PageRank algorithm to generate extractive summaries of documents.

Features

  • Extractive Summarization: Uses sentence similarity and PageRank to identify the most important sentences
  • GloVe Embeddings: Leverages pre-trained GloVe word vectors for semantic similarity calculation
  • Multiple Input Methods: Support for single documents, CSV files, or interactive creation
  • GUI Interface: User-friendly Tkinter-based graphical interface
  • Command Line Interface: Scriptable command-line tool for automation
  • Batch Processing: Process multiple documents at once

Installation

Prerequisites

  • Python 3.8 or higher
  • Required packages (automatically installed): pandas, numpy, nltk, scikit-learn, networkx

Install from PyPI

pip install text-summarizer-aweebtaku

Install from Source

  1. Clone the repository:
git clone https://github.com/AWeebTaku/Summarizer.git
cd Summarizer
  2. Install the package:
pip install -e .

Download GloVe Embeddings

The tool requires GloVe word embeddings. Download the 100d version:

wget http://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip

Place the glove.6B.100d.txt file in the project root or specify the path.
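GloVe ships as plain text: one word per line, followed by its space-separated vector components. A minimal loader, sketched here against a two-line toy sample rather than the real 100-dimensional file (the parsing logic is the same either way), could look like:

```python
import io
import numpy as np

def load_glove(file_obj):
    """Parse GloVe's plain-text format into a word -> vector dict."""
    embeddings = {}
    for line in file_obj:
        parts = line.split()
        # First token is the word, the rest are vector components
        embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return embeddings

# Toy two-line sample in the same format as glove.6B.100d.txt
sample = io.StringIO("the 0.1 0.2 0.3\ncat 0.4 0.5 0.6\n")
vectors = load_glove(sample)
print(vectors["cat"])  # → [0.4 0.5 0.6]
```

For the real file, open it with `encoding="utf-8"` and pass the file handle to the same function.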

Usage

Command Line Interface

# Summarize a CSV file
text-summarizer-aweebtaku --csv-file data/tennis.csv --article-id 1

# Interactive mode
text-summarizer-aweebtaku

Graphical User Interface

# Launch GUI (easiest way)
text-summarizer-aweebtaku --gui

# Or use the dedicated GUI command
text-summarizer-gui

Python API

from text_summarizer import TextSummarizer
import pandas as pd

# Initialize summarizer
summarizer = TextSummarizer(glove_path='glove.6B.100d.txt')

# Load data
df = pd.DataFrame([{'article_id': 1, 'article_text': 'Your text here...'}])

# Run summarization
scored_sentences = summarizer.run_summarization(df)

# Get summary for article ID 1
article_text, summary = summarizer.summarize_article(scored_sentences, 1, df)
print(summary)

Data Format

Input data should be in CSV format with the following columns:

  • article_id: Unique identifier for each document
  • article_text: The full text of the document

Example:

article_id,article_text
1,"This is the first article. It contains multiple sentences..."
2,"This is the second article. It also has several sentences..."
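Since pandas is already a dependency, a quick schema check before handing a CSV to the summarizer can catch missing columns early. This is an illustrative snippet, not part of the package API; it loads the example above from an in-memory string:

```python
import io
import pandas as pd

csv_text = '''article_id,article_text
1,"This is the first article. It contains multiple sentences."
2,"This is the second article. It also has several sentences."
'''

df = pd.read_csv(io.StringIO(csv_text))

# The summarizer expects exactly these two columns
assert {"article_id", "article_text"}.issubset(df.columns)
print(df.shape)  # (2, 2)
```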

Algorithm

The summarization process follows these steps:

  1. Sentence Tokenization: Split documents into individual sentences
  2. Text Cleaning: Remove punctuation, convert to lowercase, remove stopwords
  3. Sentence Vectorization: Convert sentences to vectors using GloVe embeddings
  4. Similarity Calculation: Compute cosine similarity between all sentence pairs
  5. PageRank Scoring: Apply PageRank algorithm to identify important sentences
  6. Summary Extraction: Select top-ranked sentences in original order
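The six steps above can be sketched end-to-end in a few lines. This is an illustrative reimplementation, not the package's actual code: deterministic pseudo-random vectors stand in for GloVe embeddings so the sketch runs without `glove.6B.100d.txt`, a regex replaces NLTK's sentence tokenizer, and stopword removal is omitted for brevity.

```python
import re
import numpy as np
import networkx as nx

# Pseudo-random stand-ins for GloVe vectors (illustrative only)
rng = np.random.default_rng(0)
_cache = {}

def vec(word, dim=50):
    if word not in _cache:
        _cache[word] = rng.standard_normal(dim)
    return _cache[word]

def summarize(text, num_sentences=2):
    # 1. Sentence tokenization (the real tool uses nltk.sent_tokenize)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # 2-3. Clean and vectorize: lowercase, keep letters only,
    #      then average the word vectors of each sentence
    vectors = [
        np.mean([vec(w) for w in re.findall(r"[a-z]+", s.lower())], axis=0)
        for s in sentences
    ]
    # 4. Cosine similarity between every sentence pair
    n = len(sentences)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                cos = vectors[i] @ vectors[j] / (
                    np.linalg.norm(vectors[i]) * np.linalg.norm(vectors[j]))
                sim[i, j] = max(0.0, cos)  # PageRank expects non-negative weights
    # 5. PageRank over the similarity graph
    scores = nx.pagerank(nx.from_numpy_array(sim))
    # 6. Keep the top-ranked sentences, restored to original order
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:num_sentences])
    return " ".join(sentences[i] for i in top)

text = "One sentence here. Another one follows. A third closes it."
print(summarize(text, num_sentences=1))
```

The key design point is step 6: sentences are ranked by PageRank score but emitted in their original document order, which keeps the extracted summary readable.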

Configuration

  • glove_path: Path to GloVe embeddings file (default: 'glove.6B.100d.txt/glove.6B.100d.txt')
  • num_sentences: Number of sentences in summary (default: 5)

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Citation

If you use this tool in your research, please cite:

@software{text_summarizer,
  title = {Text Summarizer},
  author = {Your Name},
  url = {https://github.com/AWeebTaku/Summarizer},
  year = {2024}
}
