Project description
fiction-vs-fact
A Python package to help analyze and differentiate nuanced narratives or statements by identifying which parts are factual and which are imaginative or speculative. Ideal for literary analysis, scriptwriting, or clarifying misleading information by clearly distinguishing between objective reality and creative interpretation.
Installation
Install the package via pip:
pip install fiction_vs_fact
Usage
Here's an example of how to use the package in your Python code:
from fiction_vs_fact import fiction_vs_fact
# Example user input
text_input = "In the story, John claims he saw a dragon flying over the city, but eyewitnesses report no such event."
# Process the input to separate facts from fictional elements
results = fiction_vs_fact(user_input=text_input)
print(results)
Function Parameters
- user_input (str): The text containing statements or narratives to analyze.
- llm (Optional[BaseChatModel]): Optional. A custom language model instance to use. Defaults to ChatLLM7 if not provided.
- api_key (Optional[str]): Optional. API key for the LLM service. If not provided, the function attempts to read the LLM7_API_KEY environment variable.
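The precedence between the api_key argument and the LLM7_API_KEY environment variable can be illustrated with a small sketch. Note that resolve_api_key is a hypothetical helper written to mirror the documented behavior; it is not part of the package's public API:

```python
import os

def resolve_api_key(api_key=None):
    # Hypothetical helper mirroring the documented fallback:
    # an explicitly passed key wins; otherwise the LLM7_API_KEY
    # environment variable is consulted.
    return api_key if api_key is not None else os.environ.get("LLM7_API_KEY")

os.environ["LLM7_API_KEY"] = "env-key"
print(resolve_api_key())            # -> env-key (falls back to the environment)
print(resolve_api_key("explicit"))  # -> explicit (argument takes precedence)
```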
Underlying LLM
The package utilizes ChatLLM7 from langchain_llm7 by default.
Custom LLM Usage
You can pass your own language model instance to the function. For example:
from langchain_openai import ChatOpenAI
from fiction_vs_fact import fiction_vs_fact
llm = ChatOpenAI()
response = fiction_vs_fact(
user_input="Some intriguing story...",
llm=llm
)
Similarly, for other providers:
from langchain_anthropic import ChatAnthropic
from fiction_vs_fact import fiction_vs_fact
llm = ChatAnthropic()
response = fiction_vs_fact(
user_input="Some intriguing story...",
llm=llm
)
Rate Limits and API Keys
The package defaults to the free tier limits of LLM7. For higher limits, obtain an API key at LLM7 Token Service and set it via environment variable LLM7_API_KEY or pass it directly:
response = fiction_vs_fact(
user_input="Some input...",
api_key="your_api_key"
)
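If you prefer the environment-variable route, you can also set the key from Python before calling the function. The sketch below only demonstrates the environment mechanics (the variable is visible to the current process and any child process, which is how the documented fallback would find it); the key value is a placeholder:

```python
import os
import subprocess
import sys

# Assumption based on the docs above: the package reads LLM7_API_KEY
# from os.environ at call time, so setting it before the call suffices.
os.environ["LLM7_API_KEY"] = "your_api_key"

# The variable is also inherited by child processes.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['LLM7_API_KEY'])"],
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # -> your_api_key
```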
Resources
- GitHub issues
- Author: Eugene Evstafev
- Email: hi@evstafev.plus
- GitHub: chigwell
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file fiction_vs_fact-2025.12.21083650.tar.gz.
File metadata
- Download URL: fiction_vs_fact-2025.12.21083650.tar.gz
- Upload date:
- Size: 4.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a260346cf0a9adfc458bfaac3b323d6685b097259f06d27f6674c7abcc7501cf |
| MD5 | ae91fd149bb84eb61bce1ed3f2b3b288 |
| BLAKE2b-256 | 261e9d7acd02aa0d77e2d443b47b14b0b3bab1892c5f8e9afbc37d5705852b4f |
File details
Details for the file fiction_vs_fact-2025.12.21083650-py3-none-any.whl.
File metadata
- Download URL: fiction_vs_fact-2025.12.21083650-py3-none-any.whl
- Upload date:
- Size: 5.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 00abd79cf9651900c42ba146a531483a139b94ca3b964f8eb1158c9c3e08cc57 |
| MD5 | 3bdbf3656f8c2dfec164195248778ebe |
| BLAKE2b-256 | f630cb4dbcf78d9de236f3973eb524e0a9a4587a06e5604628e14d8dbca97f8d |