
Project description

fiction-vs-fact

License: MIT

A Python package to help analyze and differentiate nuanced narratives or statements by identifying which parts are factual and which are imaginative or speculative. Ideal for literary analysis, scriptwriting, or clarifying misleading information by clearly distinguishing between objective reality and creative interpretation.

Installation

Install the package via pip:

pip install fiction_vs_fact

Usage

Here's an example of how to use the package in your Python code:

from fiction_vs_fact import fiction_vs_fact

# Example user input
text_input = "In the story, John claims he saw a dragon flying over the city, but eyewitnesses report no such event."

# Process the input to separate facts from fictional elements
results = fiction_vs_fact(user_input=text_input)

print(results)

Function Parameters

  • user_input (str): The text containing the statements or narratives to analyze.
  • llm (Optional[BaseChatModel]): A custom LangChain chat model instance. Defaults to ChatLLM7 if not provided.
  • api_key (Optional[str]): API key for the LLM service. If not provided, the package falls back to the LLM7_API_KEY environment variable.
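The lookup order for the key can be sketched as below. Note that `resolve_api_key` is a hypothetical helper written for illustration, not part of the package's public API; it simply mirrors the documented behavior, where an explicit `api_key` argument wins over the `LLM7_API_KEY` environment variable:

```python
import os

def resolve_api_key(api_key=None):
    # Hypothetical helper illustrating the documented lookup order;
    # not part of the fiction_vs_fact public API.
    if api_key is not None:
        return api_key  # an explicit api_key argument takes precedence
    return os.environ.get("LLM7_API_KEY")  # otherwise fall back to the environment

os.environ["LLM7_API_KEY"] = "key-from-env"
print(resolve_api_key())            # key-from-env
print(resolve_api_key("explicit"))  # explicit
```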

Underlying LLM

The package uses ChatLLM7 from langchain_llm7 by default.

Custom LLM Usage

You can pass your own language model instance to the function. For example:

from langchain_openai import ChatOpenAI
from fiction_vs_fact import fiction_vs_fact

llm = ChatOpenAI()  # reads OPENAI_API_KEY from the environment

response = fiction_vs_fact(
    user_input="Some intriguing story...",
    llm=llm
)

Similarly, for other providers:

from langchain_anthropic import ChatAnthropic
from fiction_vs_fact import fiction_vs_fact

llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")  # ChatAnthropic requires a model name

response = fiction_vs_fact(
    user_input="Some intriguing story...",
    llm=llm
)

Rate Limits and API Keys

The package defaults to the free-tier limits of LLM7. For higher limits, obtain an API key from the LLM7 Token Service and set it via the LLM7_API_KEY environment variable, or pass it directly:

response = fiction_vs_fact(
    user_input="Some input...",
    api_key="your_api_key"
)
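Alternatively, the key can be exported in the shell before launching Python; the variable name below is the one documented above, and the placeholder value is illustrative:

```shell
# Export the key for the current shell session
export LLM7_API_KEY="your_api_key"

# Any Python process started from this shell now sees the variable
python -c 'import os; print(os.environ["LLM7_API_KEY"])'
```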

Download files

Source Distribution

fiction_vs_fact-2025.12.21083650.tar.gz (4.7 kB)

  • SHA256: a260346cf0a9adfc458bfaac3b323d6685b097259f06d27f6674c7abcc7501cf
  • MD5: ae91fd149bb84eb61bce1ed3f2b3b288
  • BLAKE2b-256: 261e9d7acd02aa0d77e2d443b47b14b0b3bab1892c5f8e9afbc37d5705852b4f

Built Distribution

fiction_vs_fact-2025.12.21083650-py3-none-any.whl (5.3 kB)

  • SHA256: 00abd79cf9651900c42ba146a531483a139b94ca3b964f8eb1158c9c3e08cc57
  • MD5: 3bdbf3656f8c2dfec164195248778ebe
  • BLAKE2b-256: f630cb4dbcf78d9de236f3973eb524e0a9a4587a06e5604628e14d8dbca97f8d
