Monitor LLMs with custom metrics to scale with confidence
Guardrail ML
Guardrail ML is an open-source toolkit for fine-tuning and deploying powerful, safe, and customized large language models.
It accelerates the time-to-production of custom LLMs by transforming unstructured data into .json files
for fine-tuning, and by capturing responsible AI metrics on prompts and outputs to mitigate risk and improve performance.
Quickstart
Get started with the tasks below in minutes via a free Colab instance:
- Evaluate LLM outputs/prompts for Text Quality, Toxicity, Bias, Relevance, Sentiment, Prompt Injection, and more
- Generate a JSON question & answer dataset from a PDF, leveraging LLMs
- Log evaluation metrics to improve performance and support auditing
Installation 💻
To install guardrail-ml, use the Python Package Index (PyPI) as follows:
pip install guardrail-ml
Features
Guardrail ML supports and logs the following metrics:
- Toxicity & Bias
- Text Quality
- Text Relevance
- Privacy
- Sentiment
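Guardrail ML computes these metrics for you; as a rough, hand-rolled illustration of what one text-quality signal looks like, here is the standard Flesch reading-ease formula with simplified sentence and syllable heuristics (this is a sketch, not the library's implementation):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Naive Flesch reading-ease score: higher means easier to read."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0

    def syllables(word: str) -> int:
        # Count runs of vowels as a rough syllable estimate.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))

score = flesch_reading_ease("Guardrail ML logs metrics. The logs are easy to audit.")
```

Scores around 60-80 indicate plain, readable English; toxicity, bias, and sentiment metrics typically come from classifier models rather than closed-form formulas like this one.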
Guardrail ML can transform your data from:
- PDFs into .json question & answer pairs
- Uses dolly-v2 by default to generate pairs
- Can leverage your own Hugging Face models to generate pairs
View logs in a Streamlit dashboard:
- Locally deployed dashboard to view metrics
- Can be used for auditing and benchmarking experiments
Usage
import sqlite3

import pandas as pd

from guardrail.client import run_metrics
from guardrail.client import run_simple_metrics
from guardrail.client import create_dataset
# Output/Prompt Metrics
run_metrics(output="Guardrail is an open-source toolkit for building domain-specific language models with confidence. From domain-specific dataset creation and custom evaluations to safeguarding and redteaming aligned with policies, our tools accelerate your LLM workflows to systematically derisk deployment.",
prompt="What is guardrail-ml?",
model_uri="dolly-v2-0.01")
# View Logs
con = sqlite3.connect("logs.db")
df = pd.read_sql_query("SELECT * from logs", con)
df.tail(20)
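Because the logs live in an ordinary SQLite table, you can aggregate them with pandas. The column names below ("metric_name", "metric_value") are illustrative assumptions, not the documented guardrail-ml schema; inspect df.columns on your own logs.db to see what your version actually writes:

```python
import sqlite3

import pandas as pd

# Build a tiny in-memory stand-in for the logs.db that guardrail-ml writes;
# the table layout here is assumed for illustration only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (metric_name TEXT, metric_value REAL)")
con.executemany(
    "INSERT INTO logs VALUES (?, ?)",
    [("toxicity", 0.02), ("toxicity", 0.10), ("sentiment", 0.85)],
)

df = pd.read_sql_query("SELECT * FROM logs", con)
con.close()

# Summarize each metric across runs for auditing.
summary = df.groupby("metric_name")["metric_value"].agg(["mean", "count"])
print(summary)
```

The same groupby summary works unchanged on the real logs.db once you substitute its actual column names.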
# Generate Dataset from PDF
create_dataset(model="databricks/dolly-v2-2-8b",
tokenizer="databricks/dolly-v2-2-8b",
file_path="example-docs/Medicare Appeals Paper FINAL.pdf",
output_path="./output.json")
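The exact schema of the generated output.json is not documented here; assuming it is a list of {"question", "answer"} records (an illustrative assumption), a short post-processing step can convert the pairs into the prompt/completion JSONL layout many fine-tuning pipelines expect:

```python
import json

# Stand-in for pairs loaded from output.json; the real file's schema may differ.
pairs = [
    {"question": "What is guardrail-ml?",
     "answer": "An open-source toolkit for evaluating and fine-tuning LLMs."},
    {"question": "What does run_metrics do?",
     "answer": "It scores an output/prompt pair on safety and quality metrics."},
]

# Reshape each Q&A record into a prompt/completion pair.
records = [
    {"prompt": f"Question: {p['question']}\nAnswer:",
     "completion": " " + p["answer"]}
    for p in pairs
]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl.splitlines()[0])
```

In practice you would replace the hard-coded list with json.load(open("output.json")) and write the JSONL string to disk.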
More Colab Notebooks
Hashes for guardrail_ml-0.0.11-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 80ff2600be34810bd21fca75e0d1b8035fba7d9b8936a75c3befaa0c84636cd0
MD5 | 638039b63dd7d86ccf5e2aa186b6aea5
BLAKE2b-256 | e546e6195df0f5b27f07ad80402c7e5bdf67f4ad12d83ea813a7e5d0fe33b978