


promptify-summary

License: MIT

promptify_summary is a lightweight Python package that turns arbitrary text (e.g. video titles, headlines, or any user-supplied string) into concise, structured summaries using a large language model.
The package applies pattern matching to the model's raw reply, so the output stays consistent and predictable regardless of the LLM provider. It works with the default ChatLLM7 backend out of the box, while also letting you plug in any LangChain-compatible LLM.

Quick Start

pip install promptify_summary

# Basic usage with the default LLM7 backend
from promptify_summary import promptify_summary

user_input = "Learn how to deploy a Docker container in 5 minutes!"
summary = promptify_summary(user_input)

print(summary)
# >>> ['Deploy Docker Container', '5 minutes', ...]  # Example output

Custom LLM Support

You can swap the default ChatLLM7 for any LangChain LLM.
Below are examples with OpenAI, Anthropic, and Google Generative AI.

OpenAI

from langchain_openai import ChatOpenAI
from promptify_summary import promptify_summary

llm = ChatOpenAI()          # Reads OPENAI_API_KEY from the environment
user_input = "How to tune a PostgreSQL database?"
summary = promptify_summary(user_input, llm=llm)

Anthropic

from langchain_anthropic import ChatAnthropic
from promptify_summary import promptify_summary

# ChatAnthropic requires a model name; it reads ANTHROPIC_API_KEY from the environment
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
summary = promptify_summary("Explain quantum entanglement.", llm=llm)

Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from promptify_summary import promptify_summary

# ChatGoogleGenerativeAI requires a model name; it reads GOOGLE_API_KEY from the environment
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
summary = promptify_summary("What is Python 3.11?", llm=llm)

Function Signature

promptify_summary(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None
) -> List[str]
| Parameter | Type | Description |
| --- | --- | --- |
| `user_input` | `str` | Text to summarize. |
| `api_key` | `Optional[str]` | LLM7 API key. If omitted, the package looks for the `LLM7_API_KEY` environment variable, falling back to a placeholder key. |
| `llm` | `Optional[BaseChatModel]` | Custom LangChain LLM instance. If `None`, `ChatLLM7` is used. |

The function returns a list of strings extracted from the LLM’s response that match the internal regular‑expression pattern, ensuring output consistency.
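The exact pattern is internal to the package and not part of its public API. As a rough illustration only, extracting list items from a model reply of the kind shown in the Quick Start might look like this (the pattern and function name here are hypothetical, not the package's actual implementation):

```python
import re

def extract_items(llm_response: str) -> list[str]:
    # Hypothetical sketch: pull single-quoted items out of a Python-style
    # list in the model's reply. The real package's pattern may differ.
    return re.findall(r"'([^']*)'", llm_response)

print(extract_items("['Deploy Docker Container', '5 minutes']"))
# ['Deploy Docker Container', '5 minutes']
```

Filtering the raw reply through a fixed pattern like this is what makes the return value a predictable `List[str]` even when different providers format their answers differently.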

Default LLM7 Configuration

The default ChatLLM7 backend is provided by the langchain_llm7 package. To use a different API key (for higher rate limits or a different account), provide the key directly or set the environment variable:

export LLM7_API_KEY="your_free_or_paid_api_key"

Or pass it programmatically:

summary = promptify_summary("Sample text", api_key="your_api_key")

You can obtain a free key by registering at https://token.llm7.io/.
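The key-resolution order described above (explicit argument, then the environment variable, then a placeholder) can be sketched as follows; the function name and placeholder value here are hypothetical, not the package's internals:

```python
import os
from typing import Optional

PLACEHOLDER_KEY = "unused"  # hypothetical; the package's actual fallback value is internal

def resolve_llm7_key(api_key: Optional[str] = None) -> str:
    # 1. An explicit argument wins,
    # 2. then the LLM7_API_KEY environment variable,
    # 3. then the placeholder fallback.
    return api_key or os.environ.get("LLM7_API_KEY", PLACEHOLDER_KEY)
```

This is why calling `promptify_summary("Sample text")` with no key still works: the placeholder key is enough for the free tier.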

Rate Limits

The LLM7 free tier rate limits are sufficient for most casual or small‑scale projects. For more intensive workloads, consider upgrading your LLM7 account to increase limits or supply your own LLM.
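If you stay on the free tier and occasionally bump into the limit, a simple client-side backoff wrapper (not part of this package, just a common pattern) can smooth over transient rate-limit errors:

```python
import time

def with_backoff(fn, retries: int = 3, base_delay: float = 1.0):
    # Call fn(); on failure, wait base_delay * 2**attempt seconds and retry.
    # Re-raises the last exception once all retries are exhausted.
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example: summary = with_backoff(lambda: promptify_summary("Sample text"))
```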

Author & Support

For issues, feature requests, or questions, open an issue on our GitHub repository.

Enjoy automating structured content generation with promptify-summary!
