LlamaIndex LLMs Integration: Azure OpenAI
Installation
%pip install llama-index-llms-azure-openai
!pip install llama-index
(The % and ! prefixes are Jupyter notebook magics; in a regular shell, run the pip commands directly.)
Prerequisites
Follow this guide to set up your Azure account: Setup Azure account
Set the environment variables
OPENAI_API_VERSION = "2023-07-01-preview"
AZURE_OPENAI_ENDPOINT = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
OPENAI_API_KEY = "<your-api-key>"
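Equivalently, these variables can be exported in the shell before launching Python (the values below are placeholders; substitute your own resource name and key):

```shell
export OPENAI_API_KEY="<your-api-key>"
export AZURE_OPENAI_ENDPOINT="https://YOUR_RESOURCE_NAME.openai.azure.com/"
export OPENAI_API_VERSION="2023-07-01-preview"
```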
import os
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ[
"AZURE_OPENAI_ENDPOINT"
] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
# Use your LLM
from llama_index.llms.azure_openai import AzureOpenAI
# Unlike the regular OpenAI class, you must pass an engine argument in addition to model.
# The engine is the name of the model deployment you selected in Azure OpenAI Studio.
llm = AzureOpenAI(
engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0
)
# Alternatively, you can skip setting environment variables and pass the parameters directly via the constructor.
llm = AzureOpenAI(
engine="my-custom-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
azure_endpoint="https://<your-resource-name>.openai.azure.com/",
api_key="<your-api-key>",
api_version="2023-07-01-preview",
)
# Use the complete endpoint for text completion
response = llm.complete("The sky is a beautiful blue and")
print(response)
# Expected Output:
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
Streaming completion
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
print(r.delta, end="")
# Expected Output (Stream):
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
# Use the chat endpoint for conversation
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality."
),
ChatMessage(role="user", content="Hello"),
]
response = llm.chat(messages)
print(response)
# Expected Output:
# assistant: Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
Streaming chat
response = llm.stream_chat(messages)
for r in response:
print(r.delta, end="")
# Expected Output (Stream):
# Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
# Rather than adding the same parameters to each chat or completion call,
# you can set them at a per-instance level with additional_kwargs.
llm = AzureOpenAI(
engine="simon-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
additional_kwargs={"user": "your_user_id"},
)
LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/
Hashes for llama_index_llms_azure_openai-0.2.2.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 717bc3bf858e800d66e4f2ddec85a2e7dd503006d55981053d08e98771ec3abc
MD5 | 574b53808e9a897eee8155cbe97fe82c
BLAKE2b-256 | cef46659a0b4e4cf3c47f6ebfe8e7dcbc035d046cacf8050d0b340d0e116ddf6
Hashes for llama_index_llms_azure_openai-0.2.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | c8a7d04a111ceff0b4335dc9273fbdb37fdb5095b6234190ca727736f6466d7b
MD5 | a408ade2a800e6396ce6617728632623
BLAKE2b-256 | 589144a6d7c546e8b23be76743768b815a36f27770434108a69b1d08f6884abc