# Databricks Generative AI Inference SDK (Beta)
The Databricks Generative AI Inference Python library provides a user-friendly Python interface to the Databricks Foundation Model API. It includes a predefined set of API classes (`Embedding`, `Completion`, `ChatCompletion`) with convenient functions for making API requests and parsing contents from the raw JSON response.

We also offer a high-level `ChatSession` object for easy management of multi-round chat completions, which is especially useful for chatbot development.

You can find more usage details in our SDK onboarding doc.
> [!IMPORTANT]
> We're preparing to release version 1.0 of the Databricks Generative AI Inference Python library.
## Installation

```sh
pip install databricks-genai-inference
```
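Requests are authenticated against your Databricks workspace. The snippet below is a minimal sketch of one common setup and assumes the client reads the standard `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables; see the SDK onboarding doc for the authoritative configuration options:

```python
import os

# Assumption: the SDK picks up standard Databricks credentials from the
# environment. Replace the placeholders with your workspace URL and token.
os.environ["DATABRICKS_HOST"] = "https://<your-workspace>.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = "<your-personal-access-token>"
```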
## Usage

### Embedding

```python
from databricks_genai_inference import Embedding
```
#### Text embedding

```python
response = Embedding.create(
    model="bge-large-en",
    input="3D ActionSLAM: wearable person tracking in multi-floor environments")
print(f'embeddings: {response.embeddings[0]}')
```
> [!TIP]
> You may want to reuse the HTTP connection to improve request latency for large-scale workloads. Code example:
>
> ```python
> import requests
>
> # texts: an iterable of input strings to embed
> with requests.Session() as client:
>     for i, text in enumerate(texts):
>         response = Embedding.create(
>             client=client,
>             model="bge-large-en",
>             input=text
>         )
> ```
#### Text embedding (async)

```python
import httpx

async with httpx.AsyncClient() as client:
    response = await Embedding.acreate(
        client=client,
        model="bge-large-en",
        input="3D ActionSLAM: wearable person tracking in multi-floor environments")
print(f'embeddings: {response.embeddings[0]}')
```
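The async client pays off when you fan out many requests concurrently. Below is a minimal sketch under that assumption; the `embed_all` helper and the example texts are illustrative, not part of the SDK:

```python
import asyncio

import httpx

async def embed_all(texts):
    # Share one AsyncClient across all concurrent embedding requests.
    async with httpx.AsyncClient() as client:
        tasks = [
            Embedding.acreate(client=client, model="bge-large-en", input=text)
            for text in texts
        ]
        return await asyncio.gather(*tasks)

responses = asyncio.run(embed_all(["first text", "second text"]))
print(responses[0].embeddings[0])
```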
#### Text embedding with instruction

```python
response = Embedding.create(
    model="bge-large-en",
    instruction="Represent this sentence for searching relevant passages:",
    input="3D ActionSLAM: wearable person tracking in multi-floor environments")
print(f'embeddings: {response.embeddings[0]}')
```
#### Text embedding (batching)

> [!IMPORTANT]
> Supports a max batch size of 150

```python
response = Embedding.create(
    model="bge-large-en",
    input=[
        "3D ActionSLAM: wearable person tracking in multi-floor environments",
        "3D ActionSLAM: wearable person tracking in multi-floor environments"])
print(f'response.embeddings[0]: {response.embeddings[0]}\n')
print(f'response.embeddings[1]: {response.embeddings[1]}')
```
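If your corpus exceeds the 150-item limit, split it into chunks yourself. A minimal sketch; the `embed_corpus` helper is illustrative and assumes `response.embeddings` preserves input order, as the example above suggests:

```python
def embed_corpus(texts, batch_size=150):
    """Embed a corpus of any size by chunking it into batches of <= 150."""
    embeddings = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        response = Embedding.create(model="bge-large-en", input=batch)
        embeddings.extend(response.embeddings)
    return embeddings
```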
#### Text embedding with instruction (batching)

> [!IMPORTANT]
> Supports one instruction per batch; batch size limits are the same as for text embedding (batching) above.

```python
response = Embedding.create(
    model="bge-large-en",
    instruction="Represent this sentence for searching relevant passages:",
    input=[
        "3D ActionSLAM: wearable person tracking in multi-floor environments",
        "3D ActionSLAM: wearable person tracking in multi-floor environments"])
print(f'response.embeddings[0]: {response.embeddings[0]}\n')
print(f'response.embeddings[1]: {response.embeddings[1]}')
```
### Text completion

```python
from databricks_genai_inference import Completion
```

#### Text completion

```python
response = Completion.create(
    model="mpt-7b-instruct",
    prompt="Represent the Science title:")
print(f'response.text: {response.text}')
```
#### Text completion (async)

```python
import httpx

async with httpx.AsyncClient() as client:
    response = await Completion.acreate(
        client=client,
        model="mpt-7b-instruct",
        prompt="Represent the Science title:")
print(f'response.text: {response.text}')
```
#### Text completion (streaming)

> [!IMPORTANT]
> Only batch size = 1 is supported in streaming mode

```python
response = Completion.create(
    model="mpt-7b-instruct",
    prompt="Count from 1 to 100:",
    stream=True)
print('response.text:')
for chunk in response:
    print(f'{chunk.text}', end="")
```
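If you also need the final completion as one string, accumulate the chunks while streaming. A minimal sketch, reusing `chunk.text` from the example above:

```python
parts = []
response = Completion.create(
    model="mpt-7b-instruct",
    prompt="Count from 1 to 10:",
    stream=True)
for chunk in response:
    print(chunk.text, end="")  # live output
    parts.append(chunk.text)   # keep each chunk
full_text = "".join(parts)     # complete text once the stream ends
```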
#### Text completion (streaming + async)

```python
import httpx

async with httpx.AsyncClient() as client:
    response = await Completion.acreate(
        client=client,
        model="mpt-7b-instruct",
        prompt="Count from 1 to 10:",
        stream=True)
    print('response.text:')
    async for chunk in response:
        print(f'{chunk.text}', end="")
```
#### Text completion (batching)

> [!IMPORTANT]
> Supports a max batch size of 16

```python
response = Completion.create(
    model="mpt-7b-instruct",
    prompt=[
        "Represent the Science title:",
        "Represent the Science title:"])
print(f'response.text[0]: {response.text[0]}')
print(f'response.text[1]: {response.text[1]}')
```
### Chat completion

```python
from databricks_genai_inference import ChatCompletion
```

> [!IMPORTANT]
> Batching is not supported for `ChatCompletion`

#### Chat completion

```python
response = ChatCompletion.create(
    model="llama-2-70b-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."}])
print(f'response.message: {response.message}')
```
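Each `ChatCompletion.create` call is stateless, so a multi-turn conversation is carried in the `messages` list you pass in. A minimal sketch, assuming the API accepts prior model turns under the standard `"assistant"` role (the `ChatSession` helper below automates this bookkeeping):

```python
messages = [{"role": "system", "content": "You are a helpful assistant."}]

for user_turn in ["Knock knock.", "Take a guess!"]:
    messages.append({"role": "user", "content": user_turn})
    response = ChatCompletion.create(model="llama-2-70b-chat", messages=messages)
    # Feed the reply back into the history so the next turn has context.
    messages.append({"role": "assistant", "content": response.message})
    print(response.message)
```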
#### Chat completion (async)

```python
import httpx

async with httpx.AsyncClient() as client:
    response = await ChatCompletion.acreate(
        client=client,
        model="llama-2-70b-chat",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Knock knock."}],
    )
print(f'response.message: {response.message}')
```
#### Chat completion (streaming)

```python
response = ChatCompletion.create(
    model="llama-2-70b-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count from 1 to 30, add one emoji after each number"}],
    stream=True)
for chunk in response:
    print(f'{chunk.message}', end="")
```
#### Chat completion (streaming + async)

```python
import httpx

async with httpx.AsyncClient() as client:
    response = await ChatCompletion.acreate(
        client=client,
        model="llama-2-70b-chat",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Count from 1 to 30, add one emoji after each number"}],
        stream=True,
    )
    async for chunk in response:
        print(f'{chunk.message}', end="")
```
### Chat session

```python
from databricks_genai_inference import ChatSession
```

> [!IMPORTANT]
> Streaming mode is not supported for `ChatSession`

```python
chat = ChatSession(model="llama-2-70b-chat")
chat.reply("Knock, knock!")
print(f'chat.last: {chat.last}')
chat.reply("Take a guess!")
print(f'chat.last: {chat.last}')
print(f'chat.history: {chat.history}')
print(f'chat.count: {chat.count}')
```
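Because `ChatSession` tracks the conversation state for you, wiring it into an interactive loop is straightforward. A minimal sketch; the loop itself is illustrative, not part of the SDK:

```python
chat = ChatSession(model="llama-2-70b-chat")

# Keep chatting until the user types "exit".
while (user_input := input("You: ")) != "exit":
    chat.reply(user_input)
    print(f"Assistant: {chat.last}")
```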