
LlamaIndex Embeddings Integration: IBM

This package integrates the LlamaIndex embeddings API with the IBM watsonx.ai Foundation Models API by leveraging the ibm-watsonx-ai SDK. With this integration, you can use any of the embedding models available in IBM watsonx.ai to embed a single string or a list of strings.

Installation

pip install llama-index-embeddings-ibm

Usage

Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

  1. Obtain an API Key: For more details on how to create and manage an API key, refer to Managing user API keys.
  2. Set the API Key as an Environment Variable: For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:
import os
from getpass import getpass

# Prompt for the API key without echoing it, then expose it to the SDK
watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Alternatively, you can set the environment variable in your terminal.

  • Linux/macOS: Open your terminal and execute the following command:

    export WATSONX_APIKEY='your_ibm_api_key'
    

    To make this environment variable persistent across terminal sessions, add the above line to your ~/.bashrc, ~/.bash_profile, or ~/.zshrc file.

  • Windows: For Command Prompt, use:

    set WATSONX_APIKEY=your_ibm_api_key
    

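Whichever method you use, you can confirm the variable is visible to your Python process before initializing the client. This is just an illustrative sanity check, not part of the package:

```python
import os

# Illustrative check: an empty result means WATSONX_APIKEY is not set
# in this process's environment.
api_key = os.environ.get("WATSONX_APIKEY", "")
print("WATSONX_APIKEY set:", bool(api_key))
```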
Load the model

You might need to adjust embedding parameters for different tasks. For example, truncate_input_tokens controls how many tokens of each input are kept; longer inputs are truncated before embedding.

truncate_input_tokens = 3
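For intuition, truncation keeps only the first N tokens of each input. A rough sketch using naive whitespace tokens (the service tokenizes with the model's own tokenizer, so actual counts will differ):

```python
def truncate_tokens(text: str, max_tokens: int) -> str:
    """Keep only the first max_tokens whitespace-delimited tokens (illustrative only)."""
    return " ".join(text.split()[:max_tokens])

print(truncate_tokens("one two three four five", 3))  # → "one two three"
```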

Initialize the WatsonxEmbeddings class with the previously set parameters.

Note:

In this example, we'll use the project_id and the Dallas (us-south) URL.

You need to specify the model_id of the model that will be used for inference.

from llama_index.embeddings.ibm import WatsonxEmbeddings

watsonx_embedding = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE_YOUR_PROJECT_ID_HERE",
    truncate_input_tokens=truncate_input_tokens,
)

Alternatively, you can use Cloud Pak for Data credentials. For details, see watsonx.ai software setup.

watsonx_embedding = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr-v2",
    url="PASTE_YOUR_URL_HERE",
    username="PASTE_YOUR_USERNAME_HERE",
    password="PASTE_YOUR_PASSWORD_HERE",
    instance_id="openshift",
    version="5.2",
    project_id="PASTE_YOUR_PROJECT_ID_HERE",
    truncate_input_tokens=truncate_input_tokens,
)

Usage

Embed query

query = "Example query."

query_result = watsonx_embedding.get_query_embedding(query)
print(query_result[:5])

Embed list of texts

texts = ["This is the content of one document", "This is another document"]

doc_result = watsonx_embedding.get_text_embedding_batch(texts)
print(doc_result[0][:5])
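A common next step is ranking documents by cosine similarity between the query embedding and each document embedding. The tiny vectors below are synthetic stand-ins for query_result and doc_result (real embeddings require watsonx.ai credentials), so the helper itself is the point of the sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Synthetic stand-ins for query_result and entries of doc_result.
query_vec = [0.1, 0.3, 0.5]
doc_vecs = [[0.1, 0.29, 0.52], [0.9, -0.2, 0.1]]

scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
best = max(range(len(scores)), key=scores.__getitem__)
print("best match:", best)  # → best match: 0
```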
