NarrativeMapper is a text analysis pipeline that uncovers the dominant narratives and emotional tones within online communities.

NarrativeMapper

Overview

Whether you're coding in Python or simply running a single command in your terminal, NarrativeMapper gives you instant insight into the dominant stories behind the noise.

Ever wonder what stories are dominating Reddit, Twitter, or any corner of the internet? NarrativeMapper clusters similar online discussions and uses OpenAI’s GPT to summarize the dominant narratives, tone, and sentiment. Built for researchers, journalists, analysts, and anyone trying to make sense of the chaos.

  • Extracts dominant narratives from messy text data

  • Clusters similar posts using embeddings + UMAP + HDBSCAN

  • Summarizes each cluster with GPT

  • Analyzes sentiment per narrative

  • Plug-and-play pipeline: CLI, class-based, or functional

Installation and Setup

Installation:

Install via PyPI:

pip install NarrativeMapper

Setup:

  1. Create a .env file in your root directory (the same folder your script runs from).

  2. Inside the .env file, add your OpenAI API key:

OPENAI_API_KEY=your-api-key-here

  3. Before importing narrative_mapper, load your .env:

from dotenv import load_dotenv
load_dotenv()

from narrative_mapper import *

(Make sure to keep your .env file private and add it to your .gitignore if you're using Git.)

How to Use

Option 1: CLI (zero code)

Run NarrativeMapper directly from the terminal:

narrativemapper path/to/your.csv online_group_name --flag-options

This will:

  • Load the CSV

  • Automatically embed, cluster, and summarize the comments (with pretty progress bars if using --verbose)

  • Write a formatted results file (output_summary.txt) to the current directory when --file-output is set

  • Print the summarized narratives and sentiment to the terminal

Example output (run on a Reddit r/space dataset):

Run Timestamp: 2025-04-10 20:42:45
Online Group Name: reddit_space_subreddit

Summary: The cluster addresses concerns regarding the reliability of SpaceX and Boeing in space missions, the implications of space debris on safety, and the need for corporate accountability in aerospace within the context of human exploration and technological advancement in space.
Sentiment: NEGATIVE
Text Samples: 244
---
Summary: The cluster focuses on personal experiences and emotions tied to witnessing solar eclipses, encompassing travel efforts, photography techniques, and the profound awe these celestial events evoke.
Sentiment: NEUTRAL
Text Samples: 139
---

Flag Options:

  --verbose             Print detailed parameter-scaling info and show progress bars.
  --file-output         Write summaries to a text file in the working directory.
  --max-samples         Maximum number of text samples per cluster used in summarization. Default is 500.
  --random-state        Sets the random state for UMAP and PCA. Default is None.
  --no-pca              Skip PCA and go straight to UMAP.
  --dim-pca             Change the PCA dimension. Default is 100.

Note: Make sure you're running the CLI from the same directory where your .env file is located (Unless you have set OPENAI_API_KEY globally in your environment).

Option 2: Class-Based Interface

from dotenv import load_dotenv
load_dotenv()

from narrative_mapper import *
import pandas as pd

file_df = pd.read_csv("file-path")

#initialize NarrativeMapper object
mapper = NarrativeMapper(file_df, "r/antiwork", verbose=True)

#embeds semantic vectors
mapper.load_embeddings()

#clustering: default UMAP and HDBSCAN parameters are set, but kwargs allow further customization.
umap_kwargs = {'n_components': 10, 'min_dist': 0.0}
mapper.cluster(umap_kwargs=umap_kwargs, use_pca=False)

#summarize each cluster's topic and sentiment
mapper.summarize(max_sample_size=500)

#export in your preferred format
summary_dict = mapper.format_to_dict()
text_df = mapper.format_by_text()
cluster_df = mapper.format_by_cluster()

#saving DataFrames to csv
text_df.to_csv("comments_by_cluster.csv", index=False)
cluster_df.to_csv("cluster_summary.csv", index=False)

Option 3: Functional Interface

from dotenv import load_dotenv
load_dotenv()

from narrative_mapper import *
import pandas as pd

file_df = pd.read_csv("file-path")

#manual control over each step:
embeddings = get_embeddings(file_df)
cluster_df = cluster_embeddings(embeddings)
summary_df = summarize_clusters(cluster_df)

#export/format options
summary_dict = format_to_dict(summary_df, online_group_name="r/antiwork")
text_df = format_by_text(summary_df, online_group_name="r/antiwork")
cluster_df = format_by_cluster(summary_df, online_group_name="r/antiwork")

Output Formats

The examples below are based on a Reddit r/antiwork dataset.

The three formatter functions return the following:

format_to_dict() returns a dict with the following structure:

{
    'online_group_name': 'r/antiwork',
    'clusters': [
        {
            'cluster': 2,
            'cluster_summary': 'The cluster focuses on the exploitation of workers under capitalism, highlighting the growing wealth disparity driven by corporate greed, the manipulation of housing markets, and the urgent need for systemic reforms to improve living conditions, wages, and labor rights.',
            'sentiment': 'NEGATIVE',
            'text_count': 483
        },
        {
            'cluster': 4,
            'cluster_summary': 'The conversation cluster centers on critiques of remote work policies, reflections on privilege and inequality, and humorous observations about daily frustrations and absurdities.',
            'sentiment': 'NEGATIVE',
            'text_count': 80
        },
        {
            'cluster': 5,
            'cluster_summary': 'This cluster highlights the frustrations and absurdities of modern job application processes, focusing on discriminatory hiring practices, excessive interview demands, and the dehumanizing effects of AI and psychometric testing on candidates.',
            'sentiment': 'NEGATIVE',
            'text_count': 76
        },
        {
            'cluster': 7,
            'cluster_summary': 'The conversation focuses on the low wages and poor treatment of fast food workers, emphasizing the urgent need for improved compensation and benefits in relation to living costs.',
            'sentiment': 'NEGATIVE',
            'text_count': 58
        },
        {
            'cluster': 8,
            'cluster_summary': 'The conversation cluster highlights pervasive issues of employee dissatisfaction stemming from wage theft, workplace exploitation, toxic environments, harassment, and inadequate labor rights, alongside the struggle for work-life balance and the necessity for legal recourse in employment disputes.',
            'sentiment': 'NEGATIVE',
            'text_count': 392
        }
    ]
}
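Because format_to_dict() returns a plain dict, it can be serialized directly, e.g. to JSON for dashboards or archiving. A sketch using a trimmed-down version of the example above:

```python
import json

# A trimmed-down dict in the shape format_to_dict() returns (example data).
summary_dict = {
    "online_group_name": "r/antiwork",
    "clusters": [
        {"cluster": 2, "cluster_summary": "Worker exploitation and wealth disparity.",
         "sentiment": "NEGATIVE", "text_count": 483},
        {"cluster": 7, "cluster_summary": "Low wages for fast food workers.",
         "sentiment": "NEGATIVE", "text_count": 58},
    ],
}

# Serialize for downstream use, and compute a simple roll-up statistic.
as_json = json.dumps(summary_dict, indent=2)
total_texts = sum(c["text_count"] for c in summary_dict["clusters"])  # 541
```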

format_by_cluster() returns a pandas DataFrame with columns:

  • online_group_name: online group name

  • cluster: numeric cluster number

  • cluster_summary: summary of the cluster

  • text_count: number of sampled text messages in the cluster

  • aggregated_sentiment: net sentiment for the cluster, one of 'NEGATIVE', 'POSITIVE', or 'NEUTRAL'

  • text: the list of textual messages that are part of the cluster

  • all_sentiments: a list with one dict per message, of the form {'label': 'NEGATIVE', 'score': 0.9896971583366394} (sentiment calculated by distilbert-base-uncased-finetuned-sst-2-english).
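The exact rule NarrativeMapper uses to reduce all_sentiments to aggregated_sentiment isn't documented here; a plausible majority-style reduction (the function name, signed-score scheme, and neutral_band threshold are all illustrative assumptions, not the library's API) could look like:

```python
# Hypothetical aggregation of per-message sentiment dicts into one label.
# NarrativeMapper's actual rule may differ; this is an illustrative sketch.
def aggregate_sentiment(all_sentiments, neutral_band=0.3):
    # Score POSITIVE messages as +score and NEGATIVE as -score, then average.
    signed = [
        s["score"] if s["label"] == "POSITIVE" else -s["score"]
        for s in all_sentiments
    ]
    mean = sum(signed) / len(signed)
    if mean > neutral_band:
        return "POSITIVE"
    if mean < -neutral_band:
        return "NEGATIVE"
    return "NEUTRAL"

label = aggregate_sentiment([
    {"label": "NEGATIVE", "score": 0.99},
    {"label": "NEGATIVE", "score": 0.97},
    {"label": "POSITIVE", "score": 0.60},
])
# Two strong negatives outweigh one weak positive -> "NEGATIVE"
```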

format_by_text() returns a pandas DataFrame with columns:

  • online_group_name: online group name

  • cluster: numeric cluster number

  • cluster_summary: summary of the cluster

  • text: a single sampled text message (one row per message)

  • sentiment: the message's sentiment dict, of the form {'label': 'NEGATIVE', 'score': 0.9896971583366394} (sentiment calculated by distilbert-base-uncased-finetuned-sst-2-english).


Pipeline Architecture & API Overview

Pipeline:

CSV Text Data → Embeddings → Clustering → Summarization → Formatting

Functions:

#Converts each message into a 1536-dimensional vector using OpenAI's text-embedding-3-small.
get_embeddings(file_df, verbose=...)

#Clusters the embeddings using UMAP (for reduction) and HDBSCAN (for density-based clustering).
cluster_embeddings(
    embeddings, 
    verbose=..., 
    use_pca=...,
    pca_kwargs=..., 
    umap_kwargs=..., 
    hdbscan_kwargs=...
    )

#Uses GPT (via Chat Completions) for cluster summaries and Hugging Face's distilbert for sentiment analysis.
summarize_clusters(clustered_df, max_sample_size=..., verbose=...)

#Returns structured output as a dictionary (ideal for JSON export).
format_to_dict(summary_df)

#Returns a DataFrame where each row summarizes a cluster.
format_by_cluster(summary_df)

#Returns a DataFrame where each row is an individual comment with its sentiment and cluster label.
format_by_text(summary_df)

NarrativeMapper Class

Instance Attributes:

class NarrativeMapper:
    def __init__(self, df, online_group_name: str, verbose=False):
        self.verbose               # Verbose for all parts of the pipeline
        self.file_df               # DataFrame of csv file
        self.online_group_name     # Name of the online community or data source
        self.embeddings_df         # DataFrame after embedding
        self.cluster_df            # DataFrame after clustering
        self.summary_df            # DataFrame after summarization

Methods:

load_embeddings()
cluster(
    use_pca=...,
    pca_kwargs=..., 
    umap_kwargs=..., 
    hdbscan_kwargs=...
    )
summarize(max_sample_size=...)
format_by_text()
format_by_cluster()
format_to_dict()

Parameter Reference

  • verbose: Print detailed parameter-scaling info and show progress bars.

  • use_pca: Whether to run PCA before UMAP (default is True; PCA reduces UMAP's RAM usage).

  • umap_kwargs: Parameters passed to UMAP.

  • hdbscan_kwargs: Parameters passed to HDBSCAN.

  • pca_kwargs: Parameters passed to PCA.
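These kwargs dicts are presumably forwarded to the underlying sklearn PCA, umap-learn, and hdbscan constructors, so keys should follow those libraries' parameter names. A sketch (the specific values are illustrative, not tuned recommendations):

```python
# Illustrative kwargs; keys follow the sklearn.decomposition.PCA,
# umap.UMAP, and hdbscan.HDBSCAN constructor signatures.
pca_kwargs = {"n_components": 100}
umap_kwargs = {"n_components": 10, "n_neighbors": 30, "min_dist": 0.0}
hdbscan_kwargs = {"min_cluster_size": 30, "min_samples": 10}

# These would then be passed through, e.g.:
# mapper.cluster(pca_kwargs=pca_kwargs, umap_kwargs=umap_kwargs,
#                hdbscan_kwargs=hdbscan_kwargs)
```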

Estimated Cost (OpenAI Pricing)

Estimated cost: $0.02 to $0.17 per 1 million tokens.

Example: a CSV containing 1,000 Reddit comments, each longer than one sentence, costs approximately $0.01 to process.


The OpenAI text-embedding-3-small model costs approximately $0.02 per 1 million input tokens, determined by the total token count of your input messages.

The Chat Completions model used for summarization (gpt-4o-mini) costs $0.15 per 1 million input tokens. The max_sample_size parameter reduces costs by limiting how many comments are passed into gpt-4o-mini for each cluster, which can significantly reduce Chat Completions token usage.

The gpt-4o-mini input prompt (excluding the text) and output summary (for both stages) are very short (<1000 tokens), so their cost contribution is negligible.
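As a back-of-envelope check of the figures above (the average comment length is a rough assumption, not a measurement):

```python
# Rough cost estimate for 1,000 comments averaging ~50 tokens each.
EMBED_PRICE = 0.02 / 1_000_000   # $/token, text-embedding-3-small
CHAT_PRICE = 0.15 / 1_000_000    # $/input token, gpt-4o-mini

n_comments = 1_000
avg_tokens = 50                  # assumed average comment length

embed_cost = n_comments * avg_tokens * EMBED_PRICE
# Worst case: every comment is also sampled into a summarization prompt.
chat_cost = n_comments * avg_tokens * CHAT_PRICE
total = embed_cost + chat_cost   # ~= $0.0085, consistent with the ~$0.01 example
```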


