
A generalized implementation of a dictionary-based content coder.


ContentCoder


ContentCoder is a Python-based text analysis tool that enables users to process and analyze text using custom linguistic dictionaries. It is inspired by tools like LIWC (Linguistic Inquiry and Word Count) and provides robust methods for tokenization, text analysis, and frequency calculations. As noted in a much older version of the README.MD, this is a stripped-down, feature-incomplete version of several tools used in past projects.

Note that like 98% of this readme was generated by ChatGPT — it may not be entirely accurate, but at a quick glance, it looks pretty spot-on 😅🤞

🔥 Features

  • Custom Dictionary-Based Analysis
  • Support for LIWC-style dictionaries (2007 & 2022 formats)
  • Efficient text tokenization
  • Wildcard and abbreviation handling
  • Punctuation and big word analysis
  • Dictionary export in multiple formats (JSON, CSV, Poster format, etc.)
  • High-performance wildcard matching with memory optimization

🚀 Installation

Make sure you have Python 3.9+ installed (although it'll probably work with older versions as well). This package is pretty much entirely native Python, so it doesn't have any dependencies for installation. Well, none that I can recall, anyways 😄

pip install contentcoder

📁 Folder Structure

src/contentcoder/
│── __init__.py
│── ContentCoder.py
│── ContentCodingDictionary.py
│── happiestfuntokenizing.py
│── create_export_dir.py

📌 Quick Start

1. Import the ContentCoder class

from contentcoder.ContentCoder import ContentCoder

2. Initialize the Analyzer

cc = ContentCoder(dicFilename='path/to/dictionary.dic', fileEncoding='utf-8-sig')

3. Analyze a Text Sample

text = "Libraries are crucial to our society."
results = cc.Analyze(text, relativeFreq=True, dropPunct=True, retainCaptures=True, returnTokens=False, wildcardMem=True)
print(results)

Expected output:

{
  "WC": 6,
  "Dic": 4.5,
  "BigWords": 2.0,
  "Numbers": 0.0,
  "AllPunct": 0.0,
  "Period": 0.0,
  "Comma": 0.0,
  "QMark": 0.0,
  "Exclam": 0.0,
  "Apostro": 0.0,
  "Libraries": 1.0,
  "crucial": 1.0,
  "society": 1.0
}

📖 Main Functions & Usage

1️⃣ Analyze(text, **options)

Analyzes a given text and returns a dictionary of results.

Parameters:

  • inputText (str): The text to analyze.
  • relativeFreq (bool): If True, returns relative frequencies. Otherwise, raw frequencies.
  • dropPunct (bool): If True, punctuation is removed before processing.
  • retainCaptures (bool): If True, captures and stores wildcard-matched words.
  • returnTokens (bool): If True, returns tokenized text.
  • wildcardMem (bool): If True, speeds up wildcard processing by storing past matches.

Example Usage:

result = cc.Analyze("Hello world! This is a test sentence.", returnTokens=True, relativeFreq=True)
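
For intuition, `relativeFreq=True` reports each category as a percentage of the total word count rather than a raw count. A minimal sketch of that arithmetic (`to_relative` is a hypothetical helper for illustration, not part of the package):

```python
def to_relative(raw_counts, word_count):
    # Convert raw category counts to percentages of the total word count
    return {cat: round(n / word_count * 100, 2) for cat, n in raw_counts.items()}

print(to_relative({"posemo": 3, "negemo": 1}, 20))
# {'posemo': 15.0, 'negemo': 5.0}
```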

2️⃣ GetResultsHeader()

Returns a list of all available output categories.

Example Usage:

print(cc.GetResultsHeader())

Expected output:

["WC", "Dic", "BigWords", "Numbers", "AllPunct", "Period", "Comma", "QMark", "Exclam", "Apostro"]

3️⃣ GetResultsArray(resultsDICT, rounding=4)

Formats the results of Analyze() into a CSV-friendly list.

Example Usage:

text = "The government plays an important role."
result = cc.Analyze(text)
csv_row = cc.GetResultsArray(result)
print(csv_row)

Expected output:

[6, 4.3, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
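
GetResultsHeader() and GetResultsArray() are designed to pair up when writing CSV output: one header row, then one row per analyzed text. A sketch of that pattern with stand-in values (in practice, `header` would come from `cc.GetResultsHeader()` and `row` from `cc.GetResultsArray(result)`):

```python
import csv
import io

# Stand-in values; in practice these come from cc.GetResultsHeader()
# and cc.GetResultsArray(result)
header = ["WC", "Dic", "BigWords"]
row = [6, 4.3, 2.0]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["text_id"] + header)  # one header row
writer.writerow(["doc_001"] + row)     # one row per analyzed text
print(buf.getvalue())
```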

4️⃣ ExportCaptures(filename, fileEncoding='utf-8-sig', wildcardsOnly=False, fullset=True)

Exports wildcard-captured words and their frequencies to a CSV file.

Example Usage:

cc.ExportCaptures("captured_words.csv")

5️⃣ ExportDict2007Format(dicOutFilename, fileEncoding, separateDicts=False, separateDictsFolder=None)

Exports the loaded dictionary in LIWC-2007 format.

Example Usage:

cc.dict.ExportDict2007Format("dictionary_2007.dic")

6️⃣ ExportDict2022Format(dicOutFilename, fileEncoding, **options)

Exports the loaded dictionary in LIWC-22 format.

Example Usage:

cc.dict.ExportDict2022Format("dictionary_2022.dicx")

7️⃣ ExportDictJSON(filename, fileEncoding, indent=4)

Exports the dictionary mapping to a JSON file.

Example Usage:

cc.dict.ExportDictJSON("dictionary.json")

8️⃣ UpdateCategories(dicTerm, newCategories)

Updates the categories associated with a dictionary term.

Example Usage:

cc.dict.UpdateCategories(dicTerm="happiness", newCategories={"positive_emotion": 1.0, "joy": 0.5})

🔄 Example: Processing a Large CSV File with tqdm

This script reads a large CSV file and processes each text in the "body" column.

import csv
from tqdm import tqdm
from contentcoder.ContentCoder import ContentCoder

cc = ContentCoder(dicFilename='dictionary.dic', fileEncoding='utf-8-sig')

# Count data rows up front so tqdm can display a proper progress bar
with open("Comments.csv", "r", encoding="utf-8-sig") as csvfile:
    total_lines = sum(1 for _ in csvfile) - 1  # subtract the header row

with open("Comments.csv", "r", encoding="utf-8-sig") as csvfile:
    reader = csv.DictReader(csvfile)
    for row in tqdm(reader, total=total_lines, desc="Processing", unit=" comments"):
        text = row["body"]
        result = cc.Analyze(text)

⚡ Performance Optimizations

  • Uses wildcard caching to speed up regex evaluations.
  • Tokenization is optimized for handling social media text.
  • Processes large datasets efficiently using streaming CSV reads.
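
The wildcard-caching idea is simple: once a token has been resolved against the pattern list, the result is memoized so repeated tokens skip the pattern scan entirely. A conceptual sketch of that technique (not the library's actual implementation), using `fnmatch` for LIWC-style trailing-asterisk patterns:

```python
from fnmatch import fnmatch

def match_with_cache(token, patterns, cache):
    # Return the first wildcard pattern matching `token`, memoizing
    # the result so each distinct token is scanned at most once
    if token in cache:
        return cache[token]
    for pat in patterns:
        if fnmatch(token, pat):
            cache[token] = pat
            return pat
    cache[token] = None
    return None

cache = {}
patterns = ["happi*", "joy*"]
print(match_with_cache("happiness", patterns, cache))  # -> 'happi*'
print(match_with_cache("happiness", patterns, cache))  # cache hit, no scan
```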

📜 Dictionary Formats Supported

  • LIWC-2007 (.dic)
  • LIWC-22 (.dicx, .csv)
  • JSON Exports
  • Custom Hierarchical Category Mapping
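
For reference, a minimal LIWC-2007 .dic file looks roughly like this: category IDs and names between `%` delimiters, followed by tab-separated term-to-category mappings, with a trailing `*` marking a wildcard entry:

```
%
1	posemo
2	negemo
%
happy	1
happi*	1
sad	2
```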

🤝 Contributing

Pull requests are welcome! If you find bugs or have feature requests, open an issue.


📄 License

MIT License © 2021


📝 Acknowledgments

Developed by Ryan L. Boyd, Ph.D.
For academic and research purposes. Or, you know, whatever.
