
Ship RAG-based LLM web APIs in seconds.

Project description

AutoLLM

Ship retrieval-augmented generation (RAG) based Large Language Model web APIs in seconds

Python 3.10 Version 0.0.1 GNU AGPL 3.0

🤔 Why AutoLLM?

Simplify. Unify. Amplify. Integrate any Large Language Model (LLM) or Vector Database with just one line of code.

| Feature | AutoLLM | LangChain | LlamaIndex | LiteLLM |
| --- | --- | --- | --- | --- |
| 80+ LLMs | ✔️ | ✔️ | ✔️ | ✔️ |
| Unified API | ✔️ | | | ✔️ |
| 20+ Vector Databases | ✔️ | ✔️ | ✔️ | |
| Cost Calculation (80+ LLMs) | ✔️ | | | ✔️ |
| 1-Line FastAPI | ✔️ | | | |
| 1-Line RAG LLM Engine | ✔️ | | | |

📦 Installation

Easily install AutoLLM with pip in a Python>=3.8 environment.

pip install autollm

🌟 Features

📚 AutoLLM (Supports 80+ LLMs)

  • Microsoft Azure - OpenAI example:
from autollm import AutoLLM
import os

# set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# Dynamically initialize a llama_index llm instance with the same AutoLLM API
llm = AutoLLM(model="azure/<your_deployment_name>")
  • Google - VertexAI example:
from autollm import AutoLLM
import os

# set ENV variables
os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
os.environ["VERTEXAI_LOCATION"] = "us-central1"  # Your Location

# Dynamically initialize a llama_index llm instance with the same AutoLLM API
llm = AutoLLM(model="text-bison@001")
  • AWS Bedrock - Claude v2 example:
from autollm import AutoLLM
import os

# set ENV variables
os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""

# Dynamically initialize a llama_index llm instance with the same AutoLLM API
llm = AutoLLM(model="anthropic.claude-v2")

📈 AutoVectorStoreIndex (Supports 20+ VectorDBs)

🌟 Pro Tip: AutoLLM defaults to LanceDB if no vector store is specified.

from autollm import AutoVectorStoreIndex

vector_store_index = AutoVectorStoreIndex.from_defaults()

🎯 AutoQueryEngine (Creates a query engine pipeline in a single line of code)

Create robust query engine pipelines with automatic cost logging. Supports fine-grained control for advanced use-cases.

Basic Usage:

from autollm import AutoQueryEngine

query_engine = AutoQueryEngine.from_parameters()

response = query_engine.query("Why is SafeVideo AI open sourcing this project?")

print(response.response)
>> Because they are cool!
Advanced Usage:
from autollm import AutoQueryEngine

import qdrant_client

# Initialize the query engine with explicit parameters
# Initialize the query engine with explicit parameters
query_engine = AutoQueryEngine.from_parameters(
    system_prompt="You are an expert QA assistant. Provide accurate and detailed answers to queries.",
    query_wrapper_prompt=(
        "The document information is the following: {context_str} | "
        "Using the document information and mostly relying on it, "
        "answer the query. | Query {query_str} | Answer:"
    ),
    enable_cost_calculator=True,
    llm_params={"model": "gpt-3.5-turbo"},
    vector_store_params={
        "vector_store_type": "QdrantVectorStore",
        "client": qdrant_client.QdrantClient(
            url="http://<host>:<port>",
            api_key="<qdrant-api-key>",
        ),
        "collection_name": "quickstart",
    },
    service_context_params={"chunk_size": 1024},
    query_engine_params={"similarity_top_k": 10},
)

response = query_engine.query("Why is SafeVideo AI awesome?")

print(response.response)
>> Because they redefine the movie experience by AI!
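The query_wrapper_prompt above is a plain template: at query time, the retrieved document context and the user's question are substituted into the {context_str} and {query_str} placeholders before the prompt is sent to the LLM. A minimal illustration of that substitution with str.format (this sketches the idea only, not AutoLLM's internal mechanics; the context string is an invented example):

```python
# The query wrapper template from the example above.
template = (
    "The document information is the following: {context_str} | "
    "Using the document information and mostly relying on it, "
    "answer the query. | Query {query_str} | Answer:"
)

# Fill the placeholders the way a query engine would before calling the LLM.
prompt = template.format(
    context_str="SafeVideo AI open sourced the AutoLLM project.",
    query_str="Why is SafeVideo AI awesome?",
)
print(prompt)
```

This is why the placeholders must appear verbatim in any custom query_wrapper_prompt you pass in: without them, the engine has nowhere to inject the retrieved context or the query.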

💰 Automated Cost Calculation (Supports 80+ LLMs)

Keep track of your LLM token usage and costs in real-time.

from autollm import AutoServiceContext

service_context = AutoServiceContext(enable_cost_calculation=True)

# Example calculation verbose output
"""
Embedding Token Usage: 7
LLM Prompt Token Usage: 1482
LLM Completion Token Usage: 47
LLM Total Token Cost: $0.002317
"""

📚 Document Reading

🌐 Create documents for a VectorDB from a GitHub repo in one line!

from autollm.utils.document_reading import read_github_repo_as_documents
from pathlib import Path

git_repo_url = "https://github.com/safevideo.git"
relative_folder_path = Path("docs/")

documents = read_github_repo_as_documents(git_repo_url, relative_folder_path)

📂 Add Local Files into VectorDB in One Line (Supports 10+ File Types)

from autollm.utils.document_reading import read_local_files_as_documents

documents = read_local_files_as_documents(input_dir="tmp/docs")

🚀 Create FastAPI App in 1-Line

from autollm import create_web_app

app = create_web_app(config_path, env_path)

Here, config_path and env_path should point to your configuration file and your environment (.env) file, respectively.

Run Your Application

After creating your FastAPI app, run the following command in your terminal to get it up and running:

uvicorn main:app
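For uvicorn to find app, it must be a module-level variable in a main.py file. A minimal main.py matching the snippet above (the config.yaml and .env file names are illustrative assumptions; use your own paths):

```python
# main.py -- entry point loaded by `uvicorn main:app`
from autollm import create_web_app

# Paths to your own configuration and environment files (names assumed here).
app = create_web_app("config.yaml", ".env")
```

You can then pass standard uvicorn flags, e.g. `uvicorn main:app --host 0.0.0.0 --port 8000`, to control where the API is served.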
______________________________________________________________________

🔄 Smooth Migration from LlamaIndex

Switching from LlamaIndex? We've got you covered.

Easy Migration
from autollm import AutoQueryEngine
from llama_index import StorageContext, ServiceContext, VectorStoreIndex
from llama_index.vectorstores import LanceDBVectorStore

vector_store = LanceDBVectorStore(uri="/tmp/lancedb")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents=documents, storage_context=storage_context)
service_context = ServiceContext.from_defaults()

query_engine = AutoQueryEngine.from_instance(index, service_context)

❓ FAQ

Q: Can I use this for commercial projects?

A: Yes, AutoLLM is licensed under GNU Affero General Public License (AGPL 3.0), which allows for commercial use under certain conditions. Contact us for more information.


Roadmap

Our roadmap outlines upcoming features and integrations to make AutoLLM the most extensible and powerful base package for large language model applications.

  • Budget-based email notification feature

  • Evaluation metrics for LLMs

  • Unit tests for online vector database integrations

  • Example code snippet in the README showing how to integrate llama-hub readers


📜 License

AutoLLM is available under the GNU Affero General Public License (AGPL 3.0).


📞 Contact

For more information, support, or questions, please contact:


🌟 Contributing

Love AutoLLM? Star the repo or contribute and help us make it even better! See our contributing guidelines for more information.

Project details


Download files

Download the file for your platform.

Source Distribution

autollm-0.0.8.tar.gz (31.5 kB)

Uploaded Source

Built Distribution


autollm-0.0.8-py3-none-any.whl (33.9 kB)

Uploaded Python 3

File details

Details for the file autollm-0.0.8.tar.gz.

File metadata

  • Download URL: autollm-0.0.8.tar.gz
  • Upload date:
  • Size: 31.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for autollm-0.0.8.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ee7a7fc6a30a9d67d882e913ca76036b019fb8db50d067b6170a947cdade0fc7 |
| MD5 | 2dffd56ee600a8caa6f81ef59036fffa |
| BLAKE2b-256 | 3111f61cceddb957d4499811467347635abd72565f059f95fc0ce2a815ebe1d2 |


File details

Details for the file autollm-0.0.8-py3-none-any.whl.

File metadata

  • Download URL: autollm-0.0.8-py3-none-any.whl
  • Upload date:
  • Size: 33.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for autollm-0.0.8-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 4abcbb09a7700b5f965d0c61ac79346fbeff06d70b47411865622e90dc9b7637 |
| MD5 | b2a0f935cb21a060f3c81199af8cf7f7 |
| BLAKE2b-256 | cfc4032fe4c6c457645e456c45059ee684b573fb843d58663471701f173a572e |

