
LlamaIndex Llms Integration: Reka

This package provides integration between the Reka language models and LlamaIndex, allowing you to use Reka's powerful language models in your LlamaIndex applications.

Installation

To use this integration, install the llama-index-llms-reka package:

pip install llama-index-llms-reka

To obtain an API key, please visit https://platform.reka.ai/. Our baseline models, always available for public access, are:

  • reka-edge
  • reka-flash
  • reka-core

Other models may be available. The Get Models API allows you to list the models available to you. Using the Python SDK, it can be accessed as follows:

from reka.client import Reka

client = Reka()
print(client.models.get())

Here are some examples of how to use the Reka LLM integration with LlamaIndex:

Initialize the Reka LLM client

import os
from llama_index.llms.reka import RekaLLM

api_key = os.getenv("REKA_API_KEY")
reka_llm = RekaLLM(model="reka-flash", api_key=api_key)

Chat completion

from llama_index.core.base.llms.types import ChatMessage, MessageRole

messages = [
    ChatMessage(
        role=MessageRole.SYSTEM, content="You are a helpful assistant."
    ),
    ChatMessage(
        role=MessageRole.USER, content="What is the capital of France?"
    ),
]
response = reka_llm.chat(messages)
print(response.message.content)

Text completion

prompt = "The capital of France is"
response = reka_llm.complete(prompt)
print(response.text)

Streaming Responses

Streaming chat completion

messages = [
    ChatMessage(
        role=MessageRole.SYSTEM, content="You are a helpful assistant."
    ),
    ChatMessage(
        role=MessageRole.USER,
        content="List the first 5 planets in the solar system.",
    ),
]
for chunk in reka_llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)

Streaming text completion

prompt = "List the first 5 planets in the solar system:"
for chunk in reka_llm.stream_complete(prompt):
    print(chunk.delta, end="", flush=True)

Asynchronous Usage

import asyncio

async def main():
    # Async chat completion
    messages = [
        ChatMessage(role=MessageRole.SYSTEM, content="You are a helpful assistant."),
        ChatMessage(role=MessageRole.USER, content="What is the largest planet in our solar system?"),
    ]
    response = await reka_llm.achat(messages)
    print(response.message.content)

    # Async text completion
    prompt = "The largest planet in our solar system is"
    response = await reka_llm.acomplete(prompt)
    print(response.text)

    # Async streaming chat completion
    messages = [
        ChatMessage(role=MessageRole.SYSTEM, content="You are a helpful assistant."),
        ChatMessage(role=MessageRole.USER, content="Name the first 5 elements in the periodic table."),
    ]
    async for chunk in await reka_llm.astream_chat(messages):
        print(chunk.delta, end="", flush=True)

    # Async streaming text completion
    prompt = "List the first 5 elements in the periodic table:"
    async for chunk in await reka_llm.astream_complete(prompt):
        print(chunk.delta, end="", flush=True)

asyncio.run(main())
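
Beyond direct chat and completion calls, the Reka LLM can also be set as the default LLM for other LlamaIndex components. The following is a minimal sketch assuming the standard Settings API from llama-index-core; query engines, chat engines, and agents created afterwards will then use Reka for response synthesis (embedding models are configured separately):

from llama_index.core import Settings
from llama_index.llms.reka import RekaLLM

# Make Reka the default LLM for LlamaIndex components
# (query engines, chat engines, agents, etc.)
Settings.llm = RekaLLM(model="reka-flash", api_key=api_key)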

Running Tests

To run the tests for this integration, you'll need to have pytest and pytest-asyncio installed. You can install them using pip:

pip install pytest pytest-asyncio

Then, set your Reka API key as an environment variable:

export REKA_API_KEY=your_api_key_here

Now you can run the tests using pytest:

pytest tests/test_reka_llm.py -v

To run only the mock integration tests, without making remote connections:

pytest tests/test_reka_llm.py -v -k "mock"

Note: The test file should be named test_reka_llm.py and placed in the appropriate directory.
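
For reference, a minimal integration-style test could look like the sketch below. This is an illustrative example, not the actual test file shipped with the package; it skips itself when REKA_API_KEY is not set:

import os

import pytest

from llama_index.llms.reka import RekaLLM


@pytest.mark.skipif(
    "REKA_API_KEY" not in os.environ, reason="REKA_API_KEY not set"
)
def test_complete_returns_text():
    # Illustrative live test: calls the Reka API through the integration
    llm = RekaLLM(model="reka-flash", api_key=os.environ["REKA_API_KEY"])
    response = llm.complete("The capital of France is")
    assert isinstance(response.text, str)
    assert response.text.strip()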

Contributing

Contributions to improve this integration are welcome. Please ensure that you add or update tests as necessary when making changes. When adding new features or modifying existing ones, please update this README to reflect those changes.
