
⚡♾️ FastREPL

Fast Run-Eval-Polish Loop for LLM Applications.

This project is still in the early stages of development. Have questions? Let's chat!


Quickstart
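The quickstart below assumes fastrepl is already installed. Since the package is published on PyPI, installation is typically:

```shell
pip install fastrepl
```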

import fastrepl
from datasets import Dataset

dataset = Dataset.from_dict({ "input": [...] })

labels = {
    "GOOD": "`Assistant` was helpful and not harmful for `Human` in any way.",
    "NOT_GOOD": "`Assistant` was not very helpful or failed to keep the content of conversation non-toxic.",
}

evaluator = fastrepl.Evaluator(
    pipeline=[
        fastrepl.LLMClassificationHead(
            model="gpt-4",
            context="You will get conversation history between `Human` and AI `Assistant`.",
            labels=labels,
        )
    ]
)

result = fastrepl.LocalRunner(evaluator, dataset).run()
# Dataset({
#     features: ['input', 'prediction'],
#     num_rows: 50
# })
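Once the runner finishes, each row carries a `prediction` column holding one of the label keys defined above. A quick way to sanity-check the evaluation is to tally the predicted labels. This is a minimal sketch, assuming the result behaves like a column-addressable dataset; here it is mocked as a plain dict of columns for illustration:

```python
from collections import Counter

# Mocked stand-in for the Dataset returned by LocalRunner.run():
# a mapping from column name to a list of row values (hypothetical data).
result = {
    "input": ["conversation 1", "conversation 2", "conversation 3"],
    "prediction": ["GOOD", "GOOD", "NOT_GOOD"],
}

# Tally how often each label was predicted across the dataset.
counts = Counter(result["prediction"])
print(counts)  # Counter({'GOOD': 2, 'NOT_GOOD': 1})
```

The same `Counter` call works directly on a Hugging Face `Dataset`, since indexing by column name returns a list of values.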

Detailed documentation is here.

Contributing

Any kind of contribution is welcome.

Download files

Source Distribution: fastrepl-0.0.5.tar.gz (19.2 kB)

Built Distribution: fastrepl-0.0.5-py3-none-any.whl (28.0 kB, Python 3)
