Enables structured, reliable textual interactions in global forums through pattern matching and retries.
Project description
forum-guard
A Python package for structured and reliable textual interactions in a global forum environment. It processes user inputs (comments, questions, feedback) with pattern matching and retries to extract key information, topics, or sentiment, so that moderators or automated tools can identify and categorize submissions consistently, based solely on the provided text.
Installation
pip install forum_guard
Usage
Here's an example of how to use the forum_guard function in Python:
from forum_guard import forum_guard
user_input = "Your user input text here."
response = forum_guard(user_input)
print(response)
Input Parameters
- `user_input` (str): The user input text to process.
- `llm` (Optional[BaseChatModel]): An instance of a LangChain core chat model to use. If not provided, the default `ChatLLM7` is used.
- `api_key` (Optional[str]): The API key for `ChatLLM7`. If not provided, it is read from the environment variable `LLM7_API_KEY`.
Custom LLM Usage
You can pass your own language model instance to forum_guard. Supported models include, but are not limited to:
OpenAI:

from langchain_openai import ChatOpenAI
from forum_guard import forum_guard

llm = ChatOpenAI()
response = forum_guard(user_input, llm=llm)

Anthropic:

from langchain_anthropic import ChatAnthropic
from forum_guard import forum_guard

llm = ChatAnthropic()
response = forum_guard(user_input, llm=llm)

Google Gemini:

from langchain_google_genai import ChatGoogleGenerativeAI
from forum_guard import forum_guard

llm = ChatGoogleGenerativeAI()
response = forum_guard(user_input, llm=llm)
Notes
- The default `ChatLLM7` is based on the `langchain_llm7` package. You can install it via:

pip install langchain-llm7

- To increase rate limits, set your own `LLM7_API_KEY` as an environment variable or pass it directly:
response = forum_guard(user_input, api_key="your_api_key")
- You can obtain a free API key at https://token.llm7.io/
Support
For issues and feature requests, visit the GitHub repository: https://github.com/yourusername/forum-guard/issues
Author
Eugene Evstafev (chigwell)
Email: hi@eugene.plus
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file forum_guard-2025.12.22075453.tar.gz.
File metadata
- Download URL: forum_guard-2025.12.22075453.tar.gz
- Upload date:
- Size: 6.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a0c06461860c6ec5c9615c10679bf8b54e9027a652a5efe2e59bcc015bb8fb0f |
| MD5 | 4c0d358a2cbe383d4b145a7493f6eaea |
| BLAKE2b-256 | 8584c3665ae91a664fa2b91d752e663c786adaa9d77b9e5f3d038a9cf57e0afb |
File details
Details for the file forum_guard-2025.12.22075453-py3-none-any.whl.
File metadata
- Download URL: forum_guard-2025.12.22075453-py3-none-any.whl
- Upload date:
- Size: 7.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3f1d287f5e02cca35426c67daf6198e948f0e8b326b00df0b7839cfeace7e389 |
| MD5 | 40ee5971c35da743c5b5ce666a1e606b |
| BLAKE2b-256 | a65fc88aedc7e2bbf336307a709d67054ad8420e705248c41159742d6d5b7c97 |