llama-index llms IBM watsonx.ai integration

LlamaIndex LLMs Integration: IBM

This package integrates the LlamaIndex LLMs API with the IBM watsonx.ai Foundation Models API by leveraging the ibm-watsonx-ai SDK. With this integration, you can use any of the models available in IBM watsonx.ai to perform model inference.

Installation

pip install llama-index-llms-ibm

Usage

Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

  1. Obtain an API Key: For more details on how to create and manage an API key, refer to Managing user API keys.
  2. Set the API Key as an Environment Variable: For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:
import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Alternatively, you can set the environment variable in your terminal.

  • Linux/macOS: Open your terminal and execute the following command:

    export WATSONX_APIKEY='your_ibm_api_key'
    

    To make this environment variable persistent across terminal sessions, add the above line to your ~/.bashrc, ~/.bash_profile, or ~/.zshrc file.

  • Windows: For Command Prompt, use:

    set WATSONX_APIKEY=your_ibm_api_key
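Whichever route you choose, you can verify that the variable is visible to Python before constructing the client. This is only a convenience check using the standard library, not part of the package's API:

```python
import os

# os.environ.get returns None when the variable is absent, so this never raises.
if os.environ.get("WATSONX_APIKEY"):
    print("WATSONX_APIKEY is set.")
else:
    print("WATSONX_APIKEY is missing; export it or use getpass() as shown above.")
```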
    

Loading the model

You might need to adjust model parameters for different models or tasks. For more details on parameters, see Available MetaNames.

temperature = 0.5
max_new_tokens = 50
additional_params = {
    "min_new_tokens": 1,
    "top_k": 50,
}
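Conceptually, the keyword arguments and additional_params together describe one flat set of generation parameters. The sketch below only illustrates that combined set; it is not the SDK's internal merging logic:

```python
temperature = 0.5
max_new_tokens = 50
additional_params = {
    "min_new_tokens": 1,
    "top_k": 50,
}

# Illustration only: the combined generation parameters for a request.
effective_params = {
    "temperature": temperature,
    "max_new_tokens": max_new_tokens,
    **additional_params,
}
print(effective_params)
```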

Initialize the WatsonxLLM class with the previously set parameters.

from llama_index.llms.ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    project_id="PASTE YOUR PROJECT_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)

Note:

  • To provide context for the API call, you must pass the project_id or space_id. To get your project or space ID, open your project or space, go to the Manage tab, and click General. For more information, see Project documentation or Deployment space documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs listed in watsonx.ai API Authentication.
  • You need to specify the model you want to use for inferencing through model_id. You can find the list of available models in Supported foundation models.

Alternatively, you can use Cloud Pak for Data credentials. For more details, refer to watsonx.ai software setup.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)

Create a Completion

Below is an example that shows how to call the model directly with a string prompt:

response = watsonx_llm.complete("What is a Generative AI?")
print(response)

Calling chat with a list of messages

To create chat completions by providing a list of messages, use the following code:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are an AI assistant"),
    ChatMessage(role="user", content="Who are you?"),
]
response = watsonx_llm.chat(messages)
print(response)
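The returned chat response carries the reply on a message attribute. The stand-in classes below are hypothetical stand-ins used only to illustrate the access pattern (no API call); the real objects are LlamaIndex's ChatResponse and ChatMessage:

```python
from dataclasses import dataclass

# Hypothetical stand-ins illustrating how a chat reply is typically accessed.
@dataclass
class FakeMessage:
    role: str
    content: str

@dataclass
class FakeChatResponse:
    message: FakeMessage

response = FakeChatResponse(
    message=FakeMessage(role="assistant", content="I am an AI assistant.")
)
print(response.message.role)
print(response.message.content)
```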

Streaming the model output

To stream the model output, use the following code:

for chunk in watsonx_llm.stream_complete(
    "Describe your favorite city and why it is your favorite."
):
    print(chunk.delta, end="")
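Each chunk's delta holds only the newly generated text, so concatenating deltas reconstructs the full response. A toy sketch with a stand-in generator in place of stream_complete (no API call):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    delta: str

# Stand-in for watsonx_llm.stream_complete(...): yields incremental deltas.
def fake_stream():
    for piece in ["My favorite ", "city is ", "Lisbon."]:
        yield Chunk(delta=piece)

full = ""
for chunk in fake_stream():
    print(chunk.delta, end="")
    full += chunk.delta
print()
```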

Similarly, to stream the chat completions, use the following code:

messages = [
    ChatMessage(role="system", content="You are an AI assistant"),
    ChatMessage(role="user", content="Who are you?"),
]

for chunk in watsonx_llm.stream_chat(messages):
    print(chunk.delta, end="")
