
LlamaIndex Embeddings Integration: OctoAI

Using the OctoAI Embeddings Integration is as simple as:

from os import environ

from llama_index.embeddings.octoai import OctoAIEmbedding

# Read the API key from the environment and construct the embedding model.
OCTOAI_API_KEY = environ["OCTOAI_API_KEY"]
embed_model = OctoAIEmbedding(api_key=OCTOAI_API_KEY)

# The default OctoAI embedding model returns 1024-dimensional vectors.
embeddings = embed_model.get_text_embedding("How do I sail to the moon?")
assert len(embeddings) == 1024
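
Query-side embeddings can be produced with the same model instance. The sketch below assumes the standard get_query_embedding method that LlamaIndex embedding classes inherit from their shared base class; it is not an OctoAI-specific API.

# Embed a query string; the base embedding interface distinguishes
# query embeddings from document embeddings.
query_embedding = embed_model.get_query_embedding("How do I sail to the moon?")
assert len(query_embedding) == 1024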

One can also request a batch of embeddings via:

texts = [
    "How do I sail to the moon?",
    "What is the best way to cook a steak?",
    "How do I apply for a job?",
]

embeddings = embed_model.get_text_embedding_batch(texts)
assert len(embeddings) == 3
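
Beyond standalone embedding calls, the model can be handed to other LlamaIndex components. The snippet below is a minimal sketch, assuming the VectorStoreIndex and Document classes from llama_index.core; the document texts are invented for illustration.

from llama_index.core import Document, VectorStoreIndex

# Hypothetical documents, for illustration only.
documents = [
    Document(text="OctoAI serves hosted embedding models."),
    Document(text="LlamaIndex builds indices over your documents."),
]

# Build an in-memory vector index that uses the OctoAI embedding model,
# then retrieve the most similar document for a query.
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
retriever = index.as_retriever(similarity_top_k=1)
results = retriever.retrieve("Who hosts the embedding models?")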

API Access

To use the integration you need an OctoAI API key; see the OctoAI documentation for instructions on how to create one.
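
Once a key has been created, one common approach is to expose it through the OCTOAI_API_KEY environment variable that the examples above read. The snippet below is just a sketch with a placeholder value.

import os

# Placeholder value; substitute the key generated from your OctoAI account.
os.environ["OCTOAI_API_KEY"] = "<your-octoai-api-key>"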

Contributing

Follow the good practices of all Poetry-based projects.

When working in VS Code, you may want to manually select the Python interpreter, especially to run the example IPython notebook. To do so, press Ctrl+Shift+P, then type or select: Python: Select Interpreter
