
LlamaIndex LLMs Integration: IBM

This package integrates the LlamaIndex LLMs API with the IBM watsonx.ai Foundation Models API by leveraging the ibm-watsonx-ai SDK. With this integration, you can use any of the models available in IBM watsonx.ai to perform model inference.

Installation

pip install llama-index-llms-ibm

Usage

Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

  1. Obtain an API Key: For more details on how to create and manage an API key, refer to Managing user API keys.
  2. Set the API Key as an Environment Variable: For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:
import os
from getpass import getpass

# Prompt for the API key without echoing it, then expose it to the
# ibm-watsonx-ai SDK via the WATSONX_APIKEY environment variable
watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Alternatively, you can set the environment variable in your terminal.

  • Linux/macOS: Open your terminal and execute the following command:

    export WATSONX_APIKEY='your_ibm_api_key'
    

    To make this environment variable persistent across terminal sessions, add the above line to your ~/.bashrc, ~/.bash_profile, or ~/.zshrc file.

  • Windows: For Command Prompt, use:

    set WATSONX_APIKEY=your_ibm_api_key
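
    In PowerShell, use:

    $env:WATSONX_APIKEY = 'your_ibm_api_key'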
    

Loading the model

You might need to adjust model parameters for different models or tasks. For more details on parameters, see Available MetaNames.

# Generation parameters passed to the watsonx.ai inference endpoint
temperature = 0.5
max_new_tokens = 50
additional_params = {
    "min_new_tokens": 1,
    "top_k": 50,
}

Initialize the WatsonxLLM class with the previously set parameters.

from llama_index.llms.ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    project_id="PASTE YOUR PROJECT_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)

Note:

  • To provide context for the API call, you must pass the project_id or space_id (a space_id sketch follows these notes). To get your project or space ID, open your project or space, go to the Manage tab, and click General. For more information, see Project documentation or Deployment space documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs listed in watsonx.ai API Authentication.
  • You need to specify the model you want to use for inferencing through model_id. You can find the list of available models in Supported foundation models.
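
For example, if your assets live in a deployment space rather than a project, initialization is the same except that you pass space_id in place of project_id. A minimal sketch, assuming the constructor accepts space_id symmetrically to project_id and that WATSONX_APIKEY is already set as shown above:

watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    # space_id identifies a deployment space instead of a project
    space_id="PASTE YOUR SPACE_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)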

Alternatively, you can use Cloud Pak for Data credentials. For more details, refer to watsonx.ai software setup.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    additional_params=additional_params,
)

Create a Completion

Below is an example that shows how to call the model directly using a string prompt:

response = watsonx_llm.complete("What is a Generative AI?")
print(response)
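
LlamaIndex LLMs also define asynchronous counterparts of the synchronous methods. Assuming WatsonxLLM inherits the standard LlamaIndex LLM interface, an async completion looks like this (a minimal sketch):

import asyncio


async def main():
    # acomplete is the async counterpart of complete in the
    # standard LlamaIndex LLM interface
    response = await watsonx_llm.acomplete("What is a Generative AI?")
    print(response)


asyncio.run(main())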

Calling chat with a list of messages

To create chat completions by providing a list of messages, use the following code:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are an AI assistant"),
    ChatMessage(role="user", content="Who are you?"),
]
response = watsonx_llm.chat(messages)
print(response)
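
chat returns a ChatResponse object from llama_index.core; printing it shows the whole response, while the assistant's text alone is available on the wrapped message:

# Print only the assistant's text rather than the full response object
print(response.message.content)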

Streaming the model output

To stream the model output, use the following code:

for chunk in watsonx_llm.stream_complete(
    "Describe your favorite city and why it is your favorite."
):
    print(chunk.delta, end="")

Similarly, to stream the chat completions, use the following code:

messages = [
    ChatMessage(role="system", content="You are an AI assistant"),
    ChatMessage(role="user", content="Who are you?"),
]

for chunk in watsonx_llm.stream_chat(messages):
    print(chunk.delta, end="")
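
Beyond direct calls, the initialized model can serve as the default LLM for an entire LlamaIndex application. A minimal sketch using the core Settings singleton (nothing watsonx-specific is assumed beyond the watsonx_llm instance created above):

from llama_index.core import Settings

# Components created after this assignment (query engines, chat engines,
# agents) use watsonx for their LLM calls by default.
Settings.llm = watsonx_llm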
