Fast Run-Eval-Polish Loop for LLM Applications
Project description
⚡♾️ FastREPL
Fast Run-Eval-Polish Loop for LLM Applications.
This project is still in the early development stage. Have questions? Let's chat!
Quickstart
Let's say we have this existing system:
```python
import openai

context = """
The first step is to decide what to work on. The work you choose needs to have three qualities: it has to be something you have a natural aptitude for, that you have a deep interest in, and that offers scope to do great work.
In practice you don't have to worry much about the third criterion. Ambitious people are if anything already too conservative about it. So all you need to do is find something you have an aptitude for and great interest in.
"""

def run_qa(question: str) -> str:
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Answer in less than 30 words. Use the following context if needed: {context}",
            },
            {"role": "user", "content": question},
        ],
    )["choices"][0]["message"]["content"]
```
We already have a fixed context. Now, let's ask some questions. `local_runner` is used here to run the function locally with threads and progress tracking. We will also have a `remote_runner` to run the same workload in the cloud.
```python
import fastrepl

# https://huggingface.co/datasets/repllabs/questions_how_to_do_great_work
questions = [
    "how to do great work?.",
    "How can curiosity be nurtured and utilized to drive great work?",
    "How does the author suggest finding something to work on?",
    "How did Van Dyck's painting differ from Daniel Mytens' version and what message did it convey?",
]
contexts = [[context]] * len(questions)

runner = fastrepl.local_runner(fn=run_qa)
ds = runner.run(args_list=[(q,) for q in questions], output_feature="answer")

ds = ds.add_column("question", questions)
ds = ds.add_column("contexts", contexts)
# fastrepl.Dataset({
#     features: ['answer', 'question', 'contexts'],
#     num_rows: 4
# })
```
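Since every question shares the same passage, `contexts` is simply the same one-element list repeated once per question. The check below is only an illustration of that shape:

```python
# Each row carries a one-element list holding the shared context.
assert len(contexts) == len(questions)
assert all(c == [context] for c in contexts)
```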
Now, let's use one of our evaluators to evaluate the dataset. Note that we run it 5 times so we can check how consistent the results are.
```python
evaluator = fastrepl.RAGEvaluator(node=fastrepl.RAGAS(metric="Faithfulness"))

ds = fastrepl.local_runner(evaluator=evaluator, dataset=ds).run(num=5)
# ds["result"]
# [[0.25, 0.0, 0.25, 0.25, 0.5],
#  [0.5, 0.5, 0.5, 0.75, 0.875],
#  [0.66, 0.66, 0.66, 0.66, 0.66],
#  [1.0, 1.0, 1.0, 1.0, 1.0]]
```
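Each row now holds five faithfulness scores. To reduce them to a single number per question, the per-row lists can be averaged with plain Python; this sketch assumes `ds["result"]` behaves like the nested list shown in the comment above:

```python
# Average the repeated evaluations to get one faithfulness score per question.
results = ds["result"]  # e.g. [[0.25, 0.0, 0.25, 0.25, 0.5], ...]
mean_scores = [sum(scores) / len(scores) for scores in results]
print(mean_scores)  # e.g. [0.25, 0.625, 0.66, 1.0]
```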
It seems we are getting quite good results. If we increase the number of samples a bit, we can obtain a reliable evaluation of the entire system. We will keep working on providing better evaluations.
Detailed documentation is here.
Contributing
Any kind of contribution is welcome.
- Development: Please read CONTRIBUTING.md and tests.
- Bug reports: Use GitHub Issues.
- Feature requests and questions: Use GitHub Discussions.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file fastrepl-0.0.17.tar.gz.
File metadata
- Download URL: fastrepl-0.0.17.tar.gz
- Upload date:
- Size: 24.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Linux/6.2.0-1012-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 99fd0b34b69500fe91b94c415e51089591e987387f0281478df1c8bad7514125
MD5 | ce17751b03c64a727ea48f40345993fb
BLAKE2b-256 | 8ad35729d015d06b206e2408f59a70adf37f1ebad7e52a6492f5669fced22699
File details
Details for the file fastrepl-0.0.17-py3-none-any.whl.
File metadata
- Download URL: fastrepl-0.0.17-py3-none-any.whl
- Upload date:
- Size: 34.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Linux/6.2.0-1012-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | e53c99b3144849f8f5e23848f7badc447c41fda34513fa33ae9fb7996b1ae057
MD5 | f99e807fd136631ed898096ec6be2a56
BLAKE2b-256 | 0aeaa545996846d27c1a6132404c6574bf01e9ce327569c2d267664735b9c21f