Project description
moral-keeper-ai
moral-keeper-ai is an open-source Python program that uses AI to evaluate input text from the following perspectives and output suggestions for text revision:
- Preventing the user's posted text from being offensive to the reader
- Avoiding potential public backlash against the poster
- Reducing the extra customer-support workload caused by ambiguous opinion posts
This helps maintain a positive and respectful online presence.
Technology Used
- OpenAI API
Supported API Servers
- Azure OpenAI Service
Recommended Models
- GPT-4o mini
- GPT-4o
- GPT-3.5 Turbo
Main Features
- Determine if a given sentence is appropriate for posting
- Suggest more appropriate expressions for problematic posts
- Can be called from Python code
- Usable as an HTTP API server
Quick Start
- Installation
pip install moral-keeper-ai
- Configuration
Add various settings in .env or environment variables (see Environment Variables and Settings).
- Example Usage
import moral_keeper_ai
judgment, details = moral_keeper_ai.check('The sentence you want to check')
suggested_message = moral_keeper_ai.suggest('The sentence you want to make appropriate for posting')
moral_keeper_ai.check()
Parameters
- content: string: Text to be censored
Return value: Tuple
- judgment: bool: True (No problem), False (Problematic)
- details: list: A list of items that were flagged as problematic if any issues were found
Overview: This prompt censors the received text as a company's PR manager would. The text is evaluated against internally defined criteria; if any criterion fails, the sentence is judged as undesirable.
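A minimal usage sketch (the sample text and printed messages are illustrative):
import moral_keeper_ai

# check() returns a (bool, list) tuple as described above
judgment, details = moral_keeper_ai.check('Thank you for the quick fix. Could you also update the docs?')
if judgment:
    print('OK to post')
else:
    print('Flagged:', details)  # criteria the text failed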
moral_keeper_ai.suggest()
Parameters
- content: string: Text before expression change
Return value: String
Overview: This prompt softens the expression of the received text. It returns the softened string.
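A minimal usage sketch (the input text is illustrative):
import moral_keeper_ai

# suggest() returns the softened string described above
softened = moral_keeper_ai.suggest('Fix this bug immediately, this is unacceptable.')
print(softened)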
- As an API server via HTTP
moral-keeper-ai-server --port 3000 &
curl -X POST -H "Content-Type: application/json" -d '{"content": "The sentence you want to check"}' http://localhost:3000/check
curl -X POST -H "Content-Type: application/json" -d '{"content": "The sentence you want to make appropriate for posting"}' http://localhost:3000/suggest
POST /check
Submit a text string to be judged for appropriateness.
Request:
{
"content": "The sentence you want to check."
}
Response:
{
"judgement": false,
"ng_reasons" : ["Compliance with company policies", "Use appropriate expressions for public communication"],
"status": "success"
}
judgement
: A boolean value indicating whether the submitted text is judged acceptable (true) or unacceptable (false).
ng_reasons
: An array of strings explaining why the text was deemed unacceptable. Each string corresponds to a specific issue identified in the text.
status
: A string indicating the result of the API execution. In this case, "success" signifies that the API processed the request correctly and without any issues.
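Besides curl, the endpoint can be called from any HTTP client. A sketch using the third-party requests library (an assumption; any HTTP client works), assuming the server started in the Quick Start is listening on localhost:3000:
import requests

# Assumes moral-keeper-ai-server is running on localhost:3000
res = requests.post(
    'http://localhost:3000/check',
    json={'content': 'The sentence you want to check.'},
)
body = res.json()
if body['status'] == 'success' and not body['judgement']:
    print('Rejected:', body['ng_reasons'])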
POST /suggest
Submit a text string to have its expression made softer or more polite. The response includes the softened version of the submitted text.
Request:
{
"content": "The sentence you want to make appropriate for posting."
}
Response:
{
"softened": "The softened sentence the api made.",
"status": "success"
}
softened
: A string containing the softened version of the text submitted in the request. The text is adjusted to be more polite, gentle, or less direct while retaining the original meaning.
status
: A string indicating the result of the API execution. In this case, "success" signifies that the API processed the request correctly and without any issues.
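The same pattern works for /suggest (again a sketch using requests, with the server from the Quick Start assumed to be running on localhost:3000):
import requests

res = requests.post(
    'http://localhost:3000/suggest',
    json={'content': 'The sentence you want to make appropriate for posting.'},
)
print(res.json()['softened'])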
Environment Variables and Settings
API Key
export AZURE_OPENAI_API_KEY='API Key'
Endpoint
export AZURE_OPENAI_ENDPOINT='Endpoint URL'
Model to Use
export AZURE_OPENAI_DEPLOY_NAME='Model name/Deployment name'
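Alternatively, the same settings can be placed in a .env file, as mentioned in the Quick Start. The values below are placeholders; the endpoint format and deployment name are illustrative examples, not defaults:
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOY_NAME=gpt-4o-mini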
For Developers
Setup Environment
- Clone the moral-keeper-ai repository from GitHub to your local environment and navigate to the cloned directory.
git clone https://github.com/c-3lab/moral-keeper-ai.git
cd moral-keeper-ai
- Install poetry if it's not installed yet.
pip install poetry
- Set up the linters and formatters.
poetry install
poetry run pre-commit install
- From now on, every time you run git commit, isort, black, and pflake8 will automatically run on the staged files. If any of these tools modifies a file, the commit is aborted.
- If you want to manually run isort, black, and pflake8, you can do so with the following command:
poetry run pre-commit
Running Tests
- Run the following command to execute the tests:
poetry run pytest --cov-report=xml:/tmp/coverage.xml --cov=moral_keeper_ai --cov-branch --disable-warnings --cov-report=term-missing
Directory Structure
.
├── moral_keeper_ai: Main module
├── tests: pytest resources
├── docs: Documentation
└── benchmark: Program for benchmark verification
    ├── evaluate: check function
    │   └── data: Test comment files
    └── mitigation: suggest function
        └── data: Test comment files
LICENSE
Copyright
Copyright (c) 2024 C3Lab
File details
Details for the file moral_keeper_ai-0.5.0.tar.gz.
File metadata
- Download URL: moral_keeper_ai-0.5.0.tar.gz
- Upload date:
- Size: 9.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 801d6dbcf49812e76822b939618a138d32f7d208809ab619de59ee9a335791f3
MD5 | 5ec84835760ad21d15310a57ad27f282
BLAKE2b-256 | 1c3e405bf563a1eb7bd30730ada69bb94353bfbeed09bbe97c32ef3ec1805e4a
File details
Details for the file moral_keeper_ai-0.5.0-py3-none-any.whl.
File metadata
- Download URL: moral_keeper_ai-0.5.0-py3-none-any.whl
- Upload date:
- Size: 12.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 19a6f558c5f4186875ae356d351eb04565158d341a46ff3579b60732f9f42bda
MD5 | 4558c9aa97b01b5fe2bdb3df2b531db0
BLAKE2b-256 | 1439e96dd70e92095257290c7e8b7ac124051cd5f8482540330dbda816fba940