LlamaIndex LLMs Integration: Oracle Cloud Infrastructure (OCI) Data Science Service

Oracle Cloud Infrastructure (OCI) Data Science is a fully managed, serverless platform for data science teams to build, train, and manage machine learning models in Oracle Cloud Infrastructure.

It offers AI Quick Actions, which can be used to deploy, evaluate, and fine-tune foundation models in OCI Data Science. AI Quick Actions target users who want to quickly leverage AI capabilities and aim to expand the reach of foundation models to a broader set of users by providing a streamlined, code-free, and efficient environment for working with them. AI Quick Actions can be accessed from the Data Science Notebook.

Detailed documentation on how to deploy LLM models in OCI Data Science using AI Quick Actions is available in the OCI Data Science documentation.

Installation

Install the required packages:

pip install oracle-ads llama-index llama-index-llms-oci-data-science

The oracle-ads package is required to simplify authentication within OCI Data Science.

Authentication

The authentication methods supported for LlamaIndex are equivalent to those used with other OCI services and follow the standard SDK authentication methods: API key, session token, instance principal, and resource principal. More details can be found in the oracle-ads authentication documentation. Make sure you have the required policies to access the OCI Data Science Model Deployment endpoint.
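
For example, a minimal sketch of selecting an authentication method with oracle-ads (only one of these calls is needed; profile names are placeholders):

import ads

# API key from ~/.oci/config (the default method)
ads.set_auth(auth="api_key", profile="DEFAULT")

# Session (security) token, e.g. created with `oci session authenticate`
ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

# Principal-based methods for code running inside OCI
ads.set_auth(auth="instance_principal")  # on an OCI compute instance
ads.set_auth(auth="resource_principal")  # in a notebook session or job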

Basic Usage

Using LLMs offered by OCI Data Science AI with LlamaIndex only requires you to initialize the OCIDataScience interface with your Data Science Model Deployment endpoint and model ID. By default, all models deployed in AI Quick Actions get the odsc-model ID; however, this ID can be changed during deployment.

Call complete with a prompt

import ads
from llama_index.llms.oci_data_science import OCIDataScience

# Set up authentication through oracle-ads, here with a session (security) token
ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",  # your Model Deployment endpoint
)
response = llm.complete("Tell me a joke")

print(response)
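
The same instance can also be registered as the default LLM for other LlamaIndex components through the global Settings object, a standard LlamaIndex pattern:

from llama_index.core import Settings

# Use the OCI Data Science deployment as the default LLM for
# query engines, agents, and other LlamaIndex components.
Settings.llm = llm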

Call chat with a list of messages

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)
response = llm.chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

print(response)
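
chat returns a ChatResponse (from llama_index.core), so beyond printing the whole object you can read its structured fields:

# The reply is available as a ChatMessage on the response
print(response.message.role)
print(response.message.content)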

Streaming

Using stream_complete endpoint

import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

for chunk in llm.stream_complete("Tell me a joke"):
    print(chunk.delta, end="")

Using stream_chat endpoint

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)
response = llm.stream_chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

for chunk in response:
    print(chunk.delta, end="")

Async

Call acomplete with a prompt

import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)
response = await llm.acomplete("Tell me a joke")

print(response)
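
Top-level await works in a notebook; in a plain Python script, run the coroutine in an event loop instead, for example:

import asyncio


async def main():
    response = await llm.acomplete("Tell me a joke")
    print(response)


asyncio.run(main())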

Call achat with a list of messages

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)
response = await llm.achat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

print(response)

Streaming

Using astream_complete endpoint

import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)

async for chunk in await llm.astream_complete("Tell me a joke"):
    print(chunk.delta, end="")

Using astream_chat endpoint

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.base.llms.types import ChatMessage

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
)
response = await llm.astream_chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
        ChatMessage(
            role="assistant", content="Why did the chicken cross the road?"
        ),
        ChatMessage(role="user", content="I don't know, why?"),
    ]
)

async for chunk in response:
    print(chunk.delta, end="")

Configure Model

import ads
from llama_index.llms.oci_data_science import OCIDataScience

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)
response = llm.chat(
    [
        ChatMessage(role="user", content="Tell me a joke"),
    ]
)
print(response)
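
Because the example requests logprobs through additional_kwargs, the provider payload can also be inspected on the response; the exact shape depends on the deployed model, so treat this as a sketch:

# Raw payload returned by the deployment (OpenAI-compatible for
# AI Quick Actions service containers); shape varies by model.
print(response.raw)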

Function Calling

AI Quick Actions offers prebuilt service containers that make deploying and serving a large language model easy. The service container hosts the model with either vLLM (a high-throughput and memory-efficient inference and serving engine for LLMs) or TGI (a high-performance text generation server for popular open-source LLMs), and the endpoint it creates supports the OpenAI API protocol. This allows the model deployment to be used as a drop-in replacement for applications that use the OpenAI API. If the deployed model supports function calling, integration with LlamaIndex tools through the predict_and_call function on the LLM lets you attach any tools and let the LLM decide which tools to call (if any).

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.tools import FunctionTool

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)


def multiply(a: float, b: float) -> float:
    print(f"---> {a} * {b}")
    return a * b


def add(a: float, b: float) -> float:
    print(f"---> {a} + {b}")
    return a + b


def subtract(a: float, b: float) -> float:
    print(f"---> {a} - {b}")
    return a - b


def divide(a: float, b: float) -> float:
    print(f"---> {a} / {b}")
    return a / b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
sub_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

response = llm.predict_and_call(
    [multiply_tool, add_tool, sub_tool, divide_tool],
    user_msg="Calculate the result of `8 + 2 - 6`.",
    verbose=True,
)

print(response)
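
predict_and_call returns an agent-style response; assuming the standard AgentChatResponse shape from llama_index.core, the individual tool invocations can also be inspected:

# Which tools were called and what they returned
for tool_output in response.sources:
    print(tool_output.tool_name, "->", tool_output.raw_output)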

Using FunctionAgent

import ads
from llama_index.llms.oci_data_science import OCIDataScience
from llama_index.core.tools import FunctionTool
from llama_index.core.agent.workflow import FunctionAgent

ads.set_auth(auth="security_token", profile="<replace-with-your-profile>")

llm = OCIDataScience(
    model="odsc-llm",
    endpoint="https://<MD_OCID>/predict",
    temperature=0.2,
    max_tokens=500,
    timeout=120,
    context_window=2500,
    additional_kwargs={
        "top_p": 0.75,
        "logprobs": True,
        "top_logprobs": 3,
    },
)


def multiply(a: float, b: float) -> float:
    print(f"---> {a} * {b}")
    return a * b


def add(a: float, b: float) -> float:
    print(f"---> {a} + {b}")
    return a + b


def subtract(a: float, b: float) -> float:
    print(f"---> {a} - {b}")
    return a - b


def divide(a: float, b: float) -> float:
    print(f"---> {a} / {b}")
    return a / b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
sub_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

agent = FunctionAgent(
    tools=[multiply_tool, add_tool, sub_tool, divide_tool],
    llm=llm,
)
response = await agent.run(
    "Calculate the result of `8 + 2 - 6`. Use tools. Return the calculated result."
)

print(response)

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/oci_data_science/
