llama-index callbacks argilla integration
Project description
✨🦙 Argilla's LlamaIndex Integration
Argilla integration into the LlamaIndex workflow
[!TIP] To discuss, get support, or give feedback, join Argilla's Slack Community, where you can engage with our amazing community and with the core developers of `argilla` and `distilabel`.
This integration allows the user to include the feedback loop that Argilla offers into the LlamaIndex ecosystem. It's based on a callback handler to be run within the LlamaIndex workflow.
Don't hesitate to check out both LlamaIndex and Argilla.
Getting Started
You first need to install llama-index-callbacks-argilla as follows:

```bash
pip install llama-index-callbacks-argilla
```
You will need an Argilla Server running to monitor the LLM. You can either install the server locally or run it on Hugging Face Spaces. For a complete guide on how to install and initialize the server, you can refer to the Quickstart Guide.
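If you already have a server up, a quick way to confirm that Python can reach it is a minimal sketch like the one below, assuming the `argilla` SDK (2.x) is installed; the URL, API key, and the `client.me` call are illustrative and should be adapted to your deployment.

```python
import argilla as rg

# Point the client at your Argilla server (placeholder values shown).
client = rg.Argilla(
    api_url="http://localhost:6900",  # or your Hugging Face Space URL
    api_key="YOUR_API_KEY",
)

# If the server is reachable and the key is valid, this returns the current user.
print(client.me)
```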
Usage
It takes just one step to log your data into Argilla within your LlamaIndex workflow: set the global handler before running your LLM.
We will use GPT-3.5 from OpenAI as our LLM. For this, you will need a valid OpenAI API key. You can find more info and obtain one via this link.
After you get your API key, the easiest way to provide it is through an environment variable or via getpass().

```python
import os
from getpass import getpass

openai_api_key = os.getenv("OPENAI_API_KEY", None) or getpass(
    "Enter OpenAI API key:"
)
```
Let's now write all the necessary imports:

```python
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    set_global_handler,
)
from llama_index.llms.openai import OpenAI
```
We need to set Argilla as the global handler, as shown below. Within the handler, we need to provide the dataset name that we will use. If the dataset does not exist, it will be created with the given name. You can also set the API key, API URL, and the workspace name. You can learn more about the variables that control Argilla initialization here.
[!TIP] Remember that the default Argilla workspace name is `admin`. If you want to use a custom workspace, you'll need to create it and grant access to the desired users. The link above also explains how to do that.
```python
set_global_handler("argilla", dataset_name="query_model")
```
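If your Argilla instance does not run with the default settings, the connection details can also be passed to the handler directly. The keyword names below (`api_url`, `api_key`, `workspace_name`) are assumptions based on the options described in the link above; check the documentation for your installed version.

```python
# Sketch with explicit connection settings (parameter names assumed; see the
# Argilla initialization docs linked above for the exact options).
set_global_handler(
    "argilla",
    dataset_name="query_model",
    api_url="http://localhost:6900",  # your Argilla server URL
    api_key="YOUR_API_KEY",           # your Argilla API key
    workspace_name="admin",           # workspace where the dataset will live
)
```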
Let's now create the LLM instance, using GPT-3.5 from OpenAI.
```python
llm = OpenAI(
    model="gpt-3.5-turbo", temperature=0.8, openai_api_key=openai_api_key
)
```
With the code snippet below, you can create a basic workflow with LlamaIndex. You will also need a .txt file as the data source inside a folder named "data". For a sample data file and more info regarding the use of LlamaIndex, you can refer to the LlamaIndex documentation.
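If you don't have a text file at hand yet, the sketch below creates the expected "data" folder with a small placeholder document (the file name and contents are made up purely for illustration).

```python
from pathlib import Path

# Create the "data" folder expected by SimpleDirectoryReader and write a tiny
# placeholder document into it. Replace this with your own .txt file(s).
Path("data").mkdir(exist_ok=True)
Path("data/example.txt").write_text(
    "Growing up, the author wrote short stories and experimented with "
    "programming on an IBM 1401 before building a microcomputer."
)
```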
```python
docs = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine()
```
Now, let's run the `query_engine` to get a response from the model.
```python
response = query_engine.query("What did the author do growing up?")
response
```
The author worked on two main things outside of school before college: writing and programming. They wrote short stories and tried writing programs on an IBM 1401. They later got a microcomputer, built it themselves, and started programming on it.
The prompt given and the response obtained will be logged into the Argilla server.
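Besides browsing the dataset in the Argilla UI, you can pull the logged records back with the Argilla SDK. The snippet below is a sketch assuming the 2.x `client` created earlier; accessor names may differ across SDK versions.

```python
# Fetch the dataset created by the handler and print the logged records
# (sketch; assumes the `client` from the server check above and Argilla SDK 2.x).
dataset = client.datasets(name="query_model")
for record in dataset.records:
    print(record.fields)
```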
File details
Details for the file llama_index_callbacks_argilla-0.3.0.tar.gz.
File metadata
- Download URL: llama_index_callbacks_argilla-0.3.0.tar.gz
- Upload date:
- Size: 3.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes

Algorithm | Hash digest
---|---
SHA256 | 84b16424e752df9df4734bdcfdfdc88cc7a7b8f139bef6c22dc0aafe2857e2fd
MD5 | 74998258a1f83e4e7beabdf15e2b2c31
BLAKE2b-256 | 5ff54d2a2e47e434451d290ecb39e01633e1bfb10b0dd8fc859d9df0f2f95017
File details
Details for the file llama_index_callbacks_argilla-0.3.0-py3-none-any.whl.
File metadata
- Download URL: llama_index_callbacks_argilla-0.3.0-py3-none-any.whl
- Upload date:
- Size: 3.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes

Algorithm | Hash digest
---|---
SHA256 | e938952c93367567f59b5093e9c6109d0d32e18ad8ed6175fb63c9e263b5cff6
MD5 | 4b423528b77ee31a6212425474d3b01d
BLAKE2b-256 | c0cdca1185dbe3967ccf8bcea43517bcfc54a5dfe6524473681701858a460490