

LlamaIndex LLMs Integration: MyMagic

Installation

To install the required package, run:

%pip install llama-index-llms-mymagic
!pip install llama-index

Setup

Before you begin, set up your cloud storage bucket and grant MyMagic API secure access. For detailed instructions, visit the MyMagic documentation.

Initialize MyMagicAI

Create an instance of MyMagicAI by providing your API key and storage configuration:

from llama_index.llms.mymagic import MyMagicAI

llm = MyMagicAI(
    api_key="your-api-key",
    storage_provider="s3",  # Options: 's3' or 'gcs'
    bucket_name="your-bucket-name",
    session="your-session-name",  # Directory for batch inference
    role_arn="your-role-arn",
    system_prompt="your-system-prompt",
    region="your-bucket-region",
    return_output=False,  # Set to True to return output JSON
    input_json_file=None,  # Input file stored on the bucket
    list_inputs=None,  # List of inputs for small batch
    structured_output=None,  # JSON schema of the output
)

Note: If return_output is set to True, max_tokens should be at least 100.
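The structured_output parameter accepts a JSON schema describing the shape you want the model's output to take. As a hedged sketch (the exact schema dialect MyMagic accepts is an assumption here; check the MyMagic documentation), you might build one like this:

```python
# A hedged sketch of a JSON schema you might pass via structured_output to
# constrain the model's response. The schema dialect MyMagic accepts is an
# assumption; consult the MyMagic documentation for the supported form.
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
    },
    "required": ["summary"],
}

# Hypothetical usage: pass it alongside the other settings shown above.
# llm = MyMagicAI(api_key="your-api-key", ..., structured_output=schema)
print(sorted(schema["properties"]))
```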

Generate Completions

To generate a text completion for a question, use the complete method:

resp = llm.complete(
    question="your-question",
    model="choose-model",  # Supported models: mistral7b, llama7b, mixtral8x7b, codellama70b, llama70b, etc.
    max_tokens=5,  # Number of tokens to generate (default is 10)
)

# The response indicates whether the final output was stored in your bucket;
# the call raises an exception if the job failed.
print(resp)
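For larger batches, the setup above mentions an input_json_file stored on your bucket. As a hedged sketch of preparing such a file locally before uploading it (the exact schema MyMagic expects, and the file name batch_input.json, are assumptions for illustration):

```python
import json
from pathlib import Path

# A hedged sketch of preparing a small batch-input file to upload to your
# bucket and reference via the input_json_file parameter. The structure and
# file name here are assumptions, not a documented MyMagic format.
questions = [
    "Summarize the quarterly report.",
    "List the key risks it mentions.",
]
payload = {"questions": questions}
Path("batch_input.json").write_text(json.dumps(payload, indent=2))
print(json.dumps(payload))
```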

Asynchronous Requests

For asynchronous operations, use the acomplete method:

import asyncio


async def main():
    response = await llm.acomplete(
        question="your-question",
        model="choose-model",  # Supported models are listed in the documentation
        max_tokens=5,  # Number of tokens to generate (default is 10)
    )
    print("Async completion response:", response)


# In a script, run the coroutine with asyncio.run(main()); in a notebook,
# where an event loop is already running, use `await main()` instead.
asyncio.run(main())
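One benefit of the async interface is issuing several requests concurrently with asyncio.gather. The sketch below uses a local stand-in coroutine in place of llm.acomplete so it runs without credentials; the commented-out gather call shows the assumed real usage, mirroring the acomplete signature above.

```python
import asyncio


# Local stand-in for llm.acomplete so the sketch runs without credentials.
async def fake_acomplete(question: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to: {question}"


async def main():
    questions = ["What is batch inference?", "Which models are supported?"]
    # Real call (an assumption, mirroring the acomplete signature above):
    # answers = await asyncio.gather(
    #     *(llm.acomplete(question=q, model="choose-model") for q in questions)
    # )
    answers = await asyncio.gather(*(fake_acomplete(q) for q in questions))
    for q, a in zip(questions, answers):
        print(q, "->", a)
    return answers


results = asyncio.run(main())
```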

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/mymagic/

