Generate artificial structured advice micro-texts from narrative, emotional, and world-context templates.
Project description
Narremgen
Narremgen is an experimental Python package for structured narrative text generation. It combines narrative schemas (SN) and emotional dynamics (DE) to produce coherent short texts, assembled into full booklets of advice or answers on a topic or question, with optional chapters.
Based on the methodology described in Priam, R. (2025), Narrative and Emotional Structures for Generation of Short Texts for Advice, it provides a reproducible multi-batch pipeline for controlled text generation with LLMs using combined narrative and emotional structures. It is a partial implementation of the SN/DE/K method for the generation, modeling, and analysis of texts.
Main modules of narremgen
- pipeline: Entry point for batch generation, variants, stats, and exports per topic run.
- llmcore: Unified LLM router (role→model mapping, retries, multi-provider support).
- data: Input preparation and CSV handling for topic–advice–prompt-based generation.
- narratives: Text post-processing, style control, and SN/DE-aware narrative realization.
- variants: Planning and batch rewriting into alternative styles (direct, formal, etc.) with stats.
- themes: LLM-based theme discovery and assignment for advice corpora, producing themes and assignments.
- chapters: Build chaptered corpora (CSV/JSON) from themes or manual grouping, for book-like exports.
- export: Plain-text and LaTeX exporters (merged.txt and book_*.tex from the neutral and variant corpora).
- analyzestats: Length, lexical, emotion, and SN/DE distribution analysis, with CSV summaries and plots.
- utils: Shared helpers for workdirs, filenames, CSV repair, backups, and neutral corpus construction.
- gui: Optional Tkinter GUI for generation, for reading aligned/selected texts, or for segmentation.
- main: Optional command-line module for generation with input arguments.
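As a quick check after installation, the submodules listed above can be enumerated with the standard library; this is a generic introspection sketch that does not rely on any narremgen API:

# Standard-library introspection only: list the installed submodules so they
# can be compared with the module overview above.
import pkgutil
import narremgen

for module_info in pkgutil.iter_modules(narremgen.__path__):
    print(module_info.name)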
Key features
- Generation of a corpus of stories (with varying, controlled structures) and formal texts of advice from a topic given as a full sentence.
- Multi-batch narrative pipeline using a configurable LLM router (llmcore) across several providers, with a command-line interface.
- Automatic topic and advice mapping, SN/DE-structured neutral generation, and aligned variant rewriting (direct, formal, and other styles).
- Robust CSV workflow: filtering, renumbering, safe merging of advice/sentence/context/mapping, consistent filenames, variant workdirs.
- LLM-driven theme extraction and assignment, plus chapter construction for organizing texts into coherent sections (classes of texts).
- Plain-text and TeX export of neutral and variant corpora (merged narrative files and full chaptered books for text reading/selection).
- Integrated corpus analysis: lexical richness, length, emotion profiles, and SN/DE distributions, including neutral vs. variant comparison.
- Textual and emotion statistics, computed with specialized language models from the literature, for evaluating generated texts or corpora.
- Ready-to-use structure for reproducible experiments in emotion-aware text generation, for character and educational content synthesis.
- Graphical user interface for generation with API key checks, creation of variants, and reading/selection of aligned texts for a topic.
- Connections available to OpenAI, OpenRouter, Google GenAI, Mistral, and other providers for text generation (see the Python code and the interface for a dry run).
- No length limit on the topic string, and a command for adding a long text (file or string) as context for the advice or generation stages of the pipeline.
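Provider credentials are needed before generation. The documented route is the --api-key-file option shown in the CLI examples below; whether llmcore also honours the usual provider environment variables is an assumption, so the following is only a hedged sketch:

# Assumption: standard provider environment variables may be read by llmcore;
# the documented alternative is the --api-key-file option (see CLI examples below).
import os

os.environ.setdefault("OPENAI_API_KEY", "sk-...")         # placeholder value
os.environ.setdefault("OPENROUTER_API_KEY", "sk-or-...")  # placeholder value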
Usage
Installation
pip install narremgen
Generation from Python using the package
from narremgen import pipeline

pipeline.run_pipeline(
topic="Walking_in_the_city",
output_dir="./outputs",
assets_dir="./narremgen/settings",
n_batches=2,
n_per_batch=20,
output_format="txt",
verbose=False
)
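After a run, the export step writes merged plain-text files (merged.txt, see the export module above) under the output directory. The snippet below is a package-independent sketch for locating and inspecting them, assuming the default txt output; the exact subdirectory layout may vary between versions.

from pathlib import Path

# Locate exported merged text files under the output directory chosen above
# and report a rough word count for each.
for merged in Path("./outputs").rglob("merged*.txt"):
    words = len(merged.read_text(encoding="utf-8").split())
    print(merged, "-", words, "words")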
With the command-line interface in the terminal
# Pipeline + variants (default styles, not user-defined ones)
python -m narremgen.main \
--topic "Walking_in_the_city" \
--output-dir "./outputs" \
--batches 2 \
--per-batch 20 \
--output-format txt \
--verbose
# Pipeline without variants (neutral only)
python -m narremgen.main \
--topic "Walking_in_the_city" \
--output-dir "./outputs" \
--batches 2 \
--per-batch 20 \
--output-format txt \
--skip-variants \
--verbose
# Diagnostic dry run, without running the generation pipeline
python -m narremgen.main \
--diagnostic-dry-run \
--verbose
Launching the GUI from the terminal
# Interface for generation, reading, and saving
python -m narremgen.gui
Other example calls (check the exact model names, and see the documentation for details)
OpenAI everywhere as simple default + export TeX booklet
narremgen --topic "Small habits, big effects" --output-dir "./out" --default-model "openai\gpt-4o-mini" --export-book-tex
Ollama local (offline) + TeX output + skip theme analysis
narremgen --topic "Home organisation and walking" --output-dir "./out" --default-model "ollama\gemma3:4b" --batches 3 --per-batch 30 --output-format tex --export-book-tex
OpenRouter mix: DeepSeek for mapping, Llama for narrative, GPT-4o-mini for the rest + multiple variants
narremgen --topic "Walk habits in the city" --output-dir "./out" --api-key-file "./llmkeys.txt" --model-advice "openrouter\openai/gpt-4o-mini" --model-mapping "openrouter\deepseek/deepseek-reasoner" --model-context "openrouter\openai/gpt-4o-mini" --model-narrative "openrouter\meta-llama/llama-3.1-70b-instruct" --model-variants-generation "openrouter\openai/gpt-4o-mini"
Mistral direct (OpenAI-compatible) + themes enabled with custom range and batch size
narremgen --topic "Healthy routines for a walk everyday" --output-dir "./out" --api-key-file "./llmkeys.txt" --default-model "mistral\mistral-large-latest" --themes-min 7 --themes-max 12 --themes-batch-size 30
Grok default + bypass variants generation to local Phi-4 (Ollama) with larger token budget
narremgen --topic "Walking around in a small town" --output-dir "./out" --api-key-file "./llmkeys.txt" --default-model "grok\grok-2-latest" --model-variants-generation "ollama\phi4:14b" --variant-batch-size 40 --variant-max-tokens 2500
Quick connectivity check (no files generated): diagnostic dry-run with longer timeout
narremgen --diagnostic-dry-run --model-advice "openrouter\deepseek/deepseek-chat" --request-timeout 90
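To run several topics in a row, the CLI can be driven from a small script. This sketch uses only the flags documented in the examples above; the output paths are illustrative.

# Hedged sketch: call the narremgen CLI for several topics in sequence,
# using only the flags shown in the examples above.
import subprocess

topics = ["Walking_in_the_city", "Small habits, big effects"]
for topic in topics:
    subprocess.run(
        ["narremgen", "--topic", topic,
         "--output-dir", f"./out/{topic.replace(' ', '_')}",
         "--batches", "2", "--per-batch", "20", "--output-format", "txt"],
        check=True,
    )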
Warnings
- Practical note: very large generations (e.g. a high --batches × --per-batch product) may take a long time, may produce repeated advice, and can fail due to rate limits, timeouts, or oversized intermediate files (these issues are not handled automatically). Start small (e.g. --batches 2 --per-batch 30) and scale up while monitoring API usage.
- Output integrity: in some cases an LLM response can be malformed (wrong format) or truncated, which may lead to missing or unusable advice entries. If this happens, rerun the affected batch; a minimal external check is sketched after this list.
- Current usability: only informed users or trainers should use this system in practice. Some advice may be missing or mistaken due to AI or programming issues. In future versions, automatic checks may be implemented for end users.
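The following is a minimal external sanity check, not part of the package: it scans generated CSV files for rows with empty fields, which can indicate a truncated or malformed LLM reply whose batch should be regenerated.

# External sanity check (not a narremgen API): flag CSV rows with empty cells,
# which may correspond to missing or unusable advice entries.
import csv
from pathlib import Path

for csv_path in Path("./outputs").rglob("*.csv"):
    with csv_path.open(newline="", encoding="utf-8") as handle:
        bad_rows = [i for i, row in enumerate(csv.reader(handle), start=1)
                    if row and any(not cell.strip() for cell in row)]
    if bad_rows:
        print(f"{csv_path}: rows with empty fields -> {bad_rows[:10]}")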
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file narremgen-0.9.5.tar.gz.
File metadata
- Download URL: narremgen-0.9.5.tar.gz
- Upload date:
- Size: 175.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 737127a0783ef449e47e6e3f767ab523529aa7fddf3672e3bcbf771ddce199ac |
| MD5 | 82e78d37c49d240a4ad5d31c94308f08 |
| BLAKE2b-256 | 585b2459815f444ea4bdc1cde739c2fa4aeedbc57a406d89d27ab7e97a281ad9 |
File details
Details for the file narremgen-0.9.5-py3-none-any.whl.
File metadata
- Download URL: narremgen-0.9.5-py3-none-any.whl
- Upload date:
- Size: 187.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 00b567e46ce2172e349fe010c17b0068610610691cb8831f2563d83f8070057d |
| MD5 | dfbb6f04f63f38fc2d4042f678e37ca9 |
| BLAKE2b-256 | dc25966a3892ee11953b0fdc59a0a9318ecab4cc9509580b9f5bcee34678353e |