# Llama-index with InterSystems IRIS

Interface between LLMs and your data: LlamaIndex with support for InterSystems IRIS.
## Install

```shell
pip install llama-iris
```
## Example

```python
import os
import textwrap

from dotenv import load_dotenv
from llama_index import SimpleDirectoryReader, StorageContext
from llama_index.indices.vector_store import VectorStoreIndex

from llama_iris import IRISVectorStore

# Load OPENAI_API_KEY (and any other settings) from a .env file
load_dotenv(override=True)

# Example connection string; adjust host, port, namespace, and credentials
# for your IRIS instance
CONNECTION_STRING = os.environ.get(
    "IRIS_CONNECTION_STRING", "iris://demo:demo@localhost:1972/USER"
)

# Load the documents to index
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
print("Document ID:", documents[0].doc_id)

# Create the IRIS-backed vector store
vector_store = IRISVectorStore.from_params(
    connection_string=CONNECTION_STRING,
    table_name="paul_graham_essay",
    embed_dim=1536,  # OpenAI embedding dimension
)

# Build the index on top of the vector store
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    show_progress=True,
)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do?")
print(textwrap.fill(str(response), 100))
```
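Once the documents have been ingested, the embeddings persist in the IRIS table, so later runs can reconnect instead of re-indexing. A minimal sketch, assuming the same connection string and table name as above (the connection string here is a placeholder for your IRIS instance), using LlamaIndex's generic `VectorStoreIndex.from_vector_store`:

```python
from llama_index.indices.vector_store import VectorStoreIndex

from llama_iris import IRISVectorStore

# Placeholder connection string; adjust for your IRIS instance
CONNECTION_STRING = "iris://demo:demo@localhost:1972/USER"

# Point at the table populated in the example above
vector_store = IRISVectorStore.from_params(
    connection_string=CONNECTION_STRING,
    table_name="paul_graham_essay",
    embed_dim=1536,  # OpenAI embedding dimension
)

# Rebuild the index from the stored vectors instead of re-ingesting documents
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```

This avoids re-embedding the corpus (and the associated OpenAI API cost) on every run.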