
LlamaIndex LLM Integration: Aleph Alpha

This README describes how to integrate Aleph Alpha's Large Language Models (LLMs) with LlamaIndex. Through Aleph Alpha's API, users can generate text completions, build question-answering workflows, and perform other natural language processing tasks directly within the LlamaIndex framework.

Features

  • Text Completion: Use Aleph Alpha LLMs to generate text completions for prompts.
  • Model Selection: Access the latest Aleph Alpha models, including the Luminous model family, to generate responses.
  • Advanced Sampling Controls: Customize the response generation with parameters like temperature, top_k, top_p, presence_penalty, and more, to fine-tune the creativity and relevance of the generated text.
  • Control Parameters: Apply attention control parameters for advanced use cases, affecting how the model focuses on different parts of the input.

Installation

```bash
pip install llama-index-llms-alephalpha
```

Usage

```python
from llama_index.llms.alephalpha import AlephAlpha
```

The class accepts the request parameters listed below; a runnable sketch follows the list.

  1. Request Parameters:

    • model: Specify the model name (e.g., luminous-base-control). The latest model version is always used.
    • prompt: The text prompt for the model to complete.
    • maximum_tokens: The maximum number of tokens to generate.
    • temperature: Adjusts the randomness of the completions.
    • top_k: Limits the sampled tokens to the top k probabilities.
    • top_p: Limits the sampled tokens to the cumulative probability of the top tokens.
    • log_probs: Set to true to return the log probabilities of the tokens.
    • echo: Set to true to return the input prompt along with the completion.
    • penalty_exceptions: A list of tokens that should not be penalized.
    • n: Number of completions to generate.
  2. Advanced Sampling Parameters: (Optional)

    • presence_penalty & frequency_penalty: Adjust to discourage repetition.
    • sequence_penalty: Reduces likelihood of repeating token sequences.
    • hosting: Option to process the request in Aleph Alpha's own datacenters for enhanced data privacy.
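
Below is a minimal, hedged sketch putting the above together: it instantiates `AlephAlpha` and requests a completion through LlamaIndex's standard `complete()` method. The keyword arguments mirror the request parameters listed above, but the exact constructor names (for example `token` for the Aleph Alpha API key and `max_tokens` for `maximum_tokens`) are assumptions; confirm them against the class signature in your installed version.

```python
# Minimal sketch: create the Aleph Alpha LLM and generate a text completion.
# Constructor keywords mirror the request parameters above; `token` and
# `max_tokens` are assumed names -- check your installed integration.
from llama_index.llms.alephalpha import AlephAlpha

llm = AlephAlpha(
    model="luminous-base-control",  # latest version of the named model is used
    token="<YOUR_ALEPH_ALPHA_API_TOKEN>",
    max_tokens=64,      # corresponds to maximum_tokens in the request
    temperature=0.3,
    top_k=0,
    top_p=0.9,
)

# complete() is LlamaIndex's standard text-completion entry point.
response = llm.complete("Summarize what a large language model is in one sentence.")
print(response.text)
```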

Response Structure

* `model_version`: The name and version of the model used.
* `completions`: A list containing the generated text completion(s) and optional metadata:
    * `completion`: The generated text completion.
    * `log_probs`: Log probabilities of the tokens in the completion.
    * `raw_completion`: The raw completion without any post-processing.
    * `completion_tokens`: Completion split into tokens.
    * `finish_reason`: Reason for completion termination.
* `num_tokens_prompt_total`: Total number of tokens in the input prompt.
* `num_tokens_generated`: Number of tokens generated in the completion.
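
The sketch below reads these fields from the LlamaIndex response object. `response.text` is part of LlamaIndex's standard `CompletionResponse`; whether the Aleph Alpha metadata above is exposed through `additional_kwargs` or through `response.raw` may depend on the integration version, so treat the lookup below as an assumption and inspect both attributes if a key is missing.

```python
# Hedged sketch: read the completion text and (assumed) token-count metadata.
response = llm.complete("Explain retrieval-augmented generation in one sentence.")

print(response.text)  # the generated completion

# Assumption: the fields from the response structure above are surfaced in
# additional_kwargs; if not, inspect response.raw in your installed version.
meta = response.additional_kwargs or {}
for key in ("model_version", "num_tokens_prompt_total", "num_tokens_generated"):
    if key in meta:
        print(f"{key}: {meta[key]}")
```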

Example

Refer to the example notebook for a comprehensive guide on generating text completions with Aleph Alpha models in LlamaIndex.

API Documentation

For further details on the API and available models, please consult Aleph Alpha's API Documentation.
