DataRobot Monitoring and Moderation framework
Project description
DataRobot Moderations library
This library enforces interventions on prompt and response texts according to the guard configuration set by the user.
The library accepts the guard configuration in YAML format along with the input prompts, and outputs a dataframe with details such as the following (an illustrative sketch appears after this list):
- whether the prompt should be blocked
- whether the completion should be blocked
- metric values obtained from the model guards
- whether the prompt or response was modified according to the modifier guard configuration
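For illustration only, a result dataframe built with pandas might look like this; the column names are assumptions for this sketch, not the library's actual schema:

import pandas as pd

# Hypothetical column names; the real schema depends on the guard configuration.
result = pd.DataFrame({
    "prompt": ["What is the refund policy?"],
    "blocked_prompt": [False],        # should the prompt be blocked
    "blocked_completion": [False],    # should the completion be blocked
    "toxicity_score": [0.02],         # metric value from a model guard
    "response_modified": [False],     # was the response changed by a modifier guard
})
print(result)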
Architecture
The library is architected to wrap around a typical LLM prediction method. It first runs the pre-score guards, which evaluate prompts and enforce moderation if necessary. All prompts that were not moderated by the library are forwarded to the actual LLM to get their respective completions. The library then evaluates these completions using the post-score guards and enforces intervention on them.
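A minimal sketch of that flow in Python (the function and guard names here are illustrative assumptions, not the library's actual API):

from typing import Callable, List

def moderated_predict(
    prompts: List[str],
    llm_predict: Callable[[str], str],
    pre_guards: List[Callable[[str], bool]],
    post_guards: List[Callable[[str], bool]],
) -> List[str]:
    """Illustrative wrapper: pre-score guards, then the LLM, then post-score guards."""
    results = []
    for prompt in prompts:
        # Pre-score guards evaluate the prompt before it reaches the LLM.
        if any(guard(prompt) for guard in pre_guards):
            results.append("This prompt was blocked by moderation.")
            continue
        # Prompts that were not moderated are forwarded to the actual LLM.
        completion = llm_predict(prompt)
        # Post-score guards evaluate the completion and intervene if necessary.
        if any(guard(completion) for guard in post_guards):
            completion = "This response was blocked by moderation."
        results.append(completion)
    return results

# Example with trivial guards and a stand-in LLM:
print(moderated_predict(
    ["hello", "tell me a secret"],
    llm_predict=lambda p: "echo: " + p,
    pre_guards=[lambda p: "secret" in p],
    post_guards=[],
))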
How to build it?
The repository uses Poetry to manage the build process, and a wheel can be built using:
make clean
make
How to use it?
A generated or downloaded wheel file can be installed with pip, which will pull in its dependencies as well.
pip3 install datarobot-moderations
With DRUM
As described above, the library wraps DRUM's score method with pre- and post-score guards. So with DRUM, the user simply runs their custom model using drum score and gets the moderation library's features.
pip3 install datarobot-drum
drum score --verbose --logging-level info --code-dir ./ --input ./input.csv --target-type textgeneration --runtime-params-file values.yaml
Please refer to the DRUM documentation on defining a custom inference model, which walks you through everything from assembling a custom inference model to testing it locally with the drum score method.
Standalone use
However, the moderation library is not tightly coupled with DRUM, and we are actively working towards supporting non-DRUM use cases. run.py is an example of how to use this library in a standalone way. This example uses the Azure OpenAI service to get LLM completions.
export AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
python run.py --config ./moderation_config.yaml --input ./input.csv --azure-openai-api-base <azure-openai-base-url> --score
This will output the response dataframe with information indicating which prompts and responses were blocked or reported, why they were blocked or reported, and so on.
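As an example, if you save that dataframe to a CSV you can inspect the blocked rows with ordinary pandas operations; the file and column names below are assumptions for this sketch, not the actual output schema:

import pandas as pd

# Assumed file and column names, for illustration only.
results = pd.read_csv("moderation_results.csv")
blocked = results[results["blocked_prompt"] | results["blocked_completion"]]
print(blocked[["prompt", "blocked_prompt", "blocked_completion"]])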
run.py also includes an example of how to use this library to moderate a chat interface. It likewise uses the Azure OpenAI service to get chat completions:
export AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
python run.py --config ./moderation_config.yaml --input ./input_chat.csv --azure-openai-api-base <azure-openai-base-url> --chat
It will output the conversation with the LLM line by line.
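For context, the kind of Azure OpenAI chat completion call that such an example wraps looks roughly like this (a sketch using the openai Python package; the endpoint, API version, and deployment name are placeholders, and this is not the moderation library's own API):

import os
from openai import AzureOpenAI

# Placeholders: substitute your own endpoint, API version, and deployment name.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)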
Project details
Download files
Built Distribution
File details
Details for the file datarobot_moderations-11.2.6-py3-none-any.whl.
File metadata
- Download URL: datarobot_moderations-11.2.6-py3-none-any.whl
- Upload date:
- Size: 68.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.11.13 Linux/5.10.240-238.966.amzn2.x86_64
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f7158d6fc3a97b114f3a28a4dde99f0bfa267fe0133e0db43b23f3504c5a581e |
| MD5 | e054d1e6d7906b96cc7812885f3cde72 |
| BLAKE2b-256 | 75c9974252e9fd38d22e412e5abda2e2bee9d3479105b164a4d7654564d6c50b |