
An integration package connecting IBM watsonx.ai and GigaChain

Project description

langchain-ibm

This package provides the integration between LangChain and IBM watsonx.ai through the ibm-watsonx-ai SDK.

Installation

To use the package, install it with pip:

pip install gigachain-ibm

Usage

Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

  1. Obtain an API Key: For more details on how to create and manage an API key, refer to IBM's documentation.
  2. Set the API Key as an Environment Variable: For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. The following code prompts for the API key and stores it in an environment variable:
import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Alternatively, you can set the environment variable directly in your terminal.

  • Linux/macOS: Open your terminal and execute the following command:

    export WATSONX_APIKEY='your_ibm_api_key'
    

    To make this environment variable persistent across terminal sessions, add the above line to your ~/.bashrc, ~/.bash_profile, or ~/.zshrc file.

  • Windows: For Command Prompt, use:

    set WATSONX_APIKEY=your_ibm_api_key
    
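The two approaches above can be combined in a small helper that prompts for the key only when the environment variable is missing. This is a minimal sketch; the helper name is our own and not part of the package:

```python
import os
from getpass import getpass

def ensure_api_key(var: str = "WATSONX_APIKEY") -> str:
    """Return the API key, prompting for it only if the variable is unset."""
    if not os.environ.get(var):
        os.environ[var] = getpass(f"Enter {var}: ")
    return os.environ[var]
```

Calling `ensure_api_key()` once at startup makes scripts work both in interactive sessions and in environments where the variable is already exported.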

Loading the model

You might need to adjust model parameters for different models or tasks. For more details on the parameters, refer to IBM's documentation.

parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
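The parameters are a plain dictionary, so task-specific variants are easy to derive. For example, watsonx.ai also supports a deterministic "greedy" decoding method, under which the sampling knobs have no effect. A sketch (the helper is illustrative, not part of the SDK):

```python
def greedy_variant(base_params: dict) -> dict:
    """Derive a deterministic variant of a sampling parameter set."""
    params = dict(base_params)  # copy, so the original stays untouched
    params["decoding_method"] = "greedy"
    # Sampling-only knobs are irrelevant under greedy decoding.
    for key in ("temperature", "top_k", "top_p"):
        params.pop(key, None)
    return params
```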

Initialize the WatsonxLLM class with the previously set parameters.

from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Note:

  • You must provide a project_id or space_id. For more information, refer to IBM's documentation.
  • Depending on the region of your provisioned service instance, use one of the urls described here.
  • You need to specify the model you want to use for inferencing through model_id. You can find the list of available models here.

Alternatively, you can use Cloud Pak for Data credentials. For more details, refer to IBM's documentation.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Create a Chain

Create a PromptTemplate object that will be responsible for generating a random question.

from langchain.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "
prompt = PromptTemplate.from_template(template)
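The template fills the {topic} placeholder much like Python's built-in str.format, which you can use to preview the final prompt (stdlib sketch):

```python
template = "Generate a random question about {topic}: Question: "
print(template.format(topic="dog"))
# → Generate a random question about dog: Question:
```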

Provide a topic and run the LLMChain.

from langchain.chains import LLMChain

llm_chain = LLMChain(prompt=prompt, llm=watsonx_llm)
response = llm_chain.invoke("dog")
print(response)
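Conceptually, the chain just formats the prompt and passes the result to the model. A stdlib sketch with a stand-in for the LLM (the fake model below is purely illustrative):

```python
def run_chain(llm, template: str, **inputs) -> str:
    """Format the prompt template, then call the model with the result."""
    return llm(template.format(**inputs))

fake_llm = lambda prompt: f"<model output for: {prompt!r}>"
print(run_chain(fake_llm, "Generate a random question about {topic}: Question: ", topic="dog"))
```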

Calling the Model Directly

To obtain completions, you can call the model directly using a string prompt.

# Calling a single prompt
response = watsonx_llm.invoke("Who is man's best friend?")
print(response)

# Calling multiple prompts
response = watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
print(response)

Streaming the Model Output

You can stream the model output.

for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
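stream() yields the completion incrementally as text chunks, so printing each chunk with end="" reassembles the full response as it arrives. The mechanics can be sketched with a plain generator (the chunking scheme below is illustrative only):

```python
def fake_stream(text: str, size: int = 8):
    """Yield text in fixed-size chunks, as a streaming API would."""
    for i in range(0, len(text), size):
        yield text[i : i + size]

# Joining the chunks reproduces the complete response.
assembled = "".join(fake_stream("The Border Collie is prized for its intelligence."))
print(assembled)
```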

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gigachain_ibm-0.1.8.tar.gz (16.8 kB view details)

Uploaded Source

Built Distribution

gigachain_ibm-0.1.8-py3-none-any.whl (18.0 kB view details)

Uploaded Python 3

File details

Details for the file gigachain_ibm-0.1.8.tar.gz.

File metadata

  • Download URL: gigachain_ibm-0.1.8.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.6 Darwin/23.5.0

File hashes

Hashes for gigachain_ibm-0.1.8.tar.gz
  • SHA256: 864175faeecaca241d0000c81c2959e1dad28d5ba3b26a87ec340b483060ba51
  • MD5: 20a649c53ab875318878c1e35970c5c6
  • BLAKE2b-256: 4c3b503776a0f9856f07a7a03eea3965419e05ec9d33caccc7dde41773cb7d19

See more details on using hashes here.

File details

Details for the file gigachain_ibm-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: gigachain_ibm-0.1.8-py3-none-any.whl
  • Upload date:
  • Size: 18.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.6 Darwin/23.5.0

File hashes

Hashes for gigachain_ibm-0.1.8-py3-none-any.whl
  • SHA256: e7c85f4d0988c1e3a54cc99af2326256e6d1f0f274c632e69245e884950af82f
  • MD5: aa7f719380a344770b913c7700664cc5
  • BLAKE2b-256: 2ff3af875713e94a34e8162a4df67464a1b04f44fda0c1f6c7bcbcf0ceb50e4a

See more details on using hashes here.
