
llama-index llms deepinfra integration

Project description

LlamaIndex LLMs Integration: DeepInfra

Installation

First, install the necessary package:

pip install llama-index-llms-deepinfra

Initialization

Set up the DeepInfraLLM class with your API key and desired parameters:

from llama_index.llms.deepinfra import DeepInfraLLM
import asyncio

llm = DeepInfraLLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # Default model name
    api_key="your-deepinfra-api-key",  # Replace with your DeepInfra API key
    temperature=0.5,
    max_tokens=50,
    additional_kwargs={"top_p": 0.9},
)
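
If you prefer not to hard-code the key, you can read it from an environment variable instead. A minimal sketch (the variable name DEEPINFRA_API_KEY below is just an example; use whatever name your deployment exports):

import os

from llama_index.llms.deepinfra import DeepInfraLLM

# DEEPINFRA_API_KEY is an example name; export it in your shell first,
# e.g. `export DEEPINFRA_API_KEY=...`
llm = DeepInfraLLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    api_key=os.environ["DEEPINFRA_API_KEY"],
    temperature=0.5,
    max_tokens=50,
)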

Synchronous Complete

Generate a text completion synchronously using the complete method:

response = llm.complete("Hello World!")
print(response.text)

Synchronous Stream Complete

Generate a streaming text completion synchronously using the stream_complete method:

content = ""
for completion in llm.stream_complete("Once upon a time"):
    content += completion.delta
    print(completion.delta, end="")

Synchronous Chat

Generate a chat response synchronously using the chat method:

from llama_index.core.base.llms.types import ChatMessage

messages = [
    ChatMessage(role="user", content="Tell me a joke."),
]
chat_response = llm.chat(messages)
print(chat_response.message.content)
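
Because chat is stateless, you can carry on a conversation by appending the assistant's reply and your next question to the same message list before calling chat again. A minimal sketch (the follow-up prompt is illustrative):

# Continue the conversation: keep the assistant's reply, then ask a follow-up.
messages.append(chat_response.message)
messages.append(ChatMessage(role="user", content="Explain why that joke is funny."))
follow_up = llm.chat(messages)
print(follow_up.message.content)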

Synchronous Stream Chat

Generate a streaming chat response synchronously using the stream_chat method:

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Tell me a story."),
]
content = ""
for chat_response in llm.stream_chat(messages):
    content += chat_response.delta
    print(chat_response.delta, end="")

Asynchronous Complete

Generate a text completion asynchronously using the acomplete method:

async def async_complete():
    response = await llm.acomplete("Hello Async World!")
    print(response.text)


asyncio.run(async_complete())

Asynchronous Stream Complete

Generate a streaming text completion asynchronously using the astream_complete method:

async def async_stream_complete():
    content = ""
    response = await llm.astream_complete("Once upon an async time")
    async for completion in response:
        content += completion.delta
        print(completion.delta, end="")


asyncio.run(async_stream_complete())

Asynchronous Chat

Generate a chat response asynchronously using the achat method:

async def async_chat():
    messages = [
        ChatMessage(role="user", content="Tell me an async joke."),
    ]
    chat_response = await llm.achat(messages)
    print(chat_response.message.content)


asyncio.run(async_chat())

Asynchronous Stream Chat

Generate a streaming chat response asynchronously using the astream_chat method:

async def async_stream_chat():
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="Tell me an async story."),
    ]
    content = ""
    response = await llm.astream_chat(messages)
    async for chat_response in response:
        content += chat_response.delta
        print(chat_response.delta, end="")


asyncio.run(async_stream_chat())
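
The async methods pay off when several requests run at once. A minimal sketch that fans out a few acomplete calls with asyncio.gather (the prompts are illustrative):

async def concurrent_completions():
    prompts = ["Hello World!", "Tell me a joke.", "Summarize LlamaIndex in one line."]
    # Fire all requests concurrently and wait for every response.
    responses = await asyncio.gather(*(llm.acomplete(p) for p in prompts))
    for prompt, response in zip(prompts, responses):
        print(f"{prompt} -> {response.text}")


asyncio.run(concurrent_completions())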

For any questions or feedback, please contact us at feedback@deepinfra.com.

Download files

Download the file for your platform.

Source Distribution

llama_index_llms_deepinfra-0.6.1.tar.gz (9.4 kB, Source)

Built Distribution

llama_index_llms_deepinfra-0.6.1-py3-none-any.whl (10.7 kB, Python 3)

File details

Details for the file llama_index_llms_deepinfra-0.6.1.tar.gz.

File metadata

  • Download URL: llama_index_llms_deepinfra-0.6.1.tar.gz
  • Size: 9.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.9 (Ubuntu 24.04, CI)

File hashes

Hashes for llama_index_llms_deepinfra-0.6.1.tar.gz

  • SHA256: 1d7bb93fc1d7a470e4c624f9c6ce95cf3a172c1186f3a2915cd29b574a8ee106
  • MD5: 5c7733d73bf1701e1fe00b849c1001b4
  • BLAKE2b-256: ea710c0d685fcba25673b2fd0e073a5b1cbab64e93121c03f23214bc9adc14a5


File details

Details for the file llama_index_llms_deepinfra-0.6.1-py3-none-any.whl.

File metadata

  • Download URL: llama_index_llms_deepinfra-0.6.1-py3-none-any.whl
  • Size: 10.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.9 (Ubuntu 24.04, CI)

File hashes

Hashes for llama_index_llms_deepinfra-0.6.1-py3-none-any.whl

  • SHA256: 312a92b4b4b51b2265a9d901791b92d7c83d9fa7175320d02f00aa56ca10f56d
  • MD5: 88b0a0e18a39dea7f81b17e2ad87676d
  • BLAKE2b-256: e0e8d075f5e140dea7de7413900ff39e6534c652eaa88abd8896a960e57ccf64

