llmware

An enterprise-grade LLM-based development framework, tools, and fine-tuned models

llmware is a unified, open, extensible framework for LLM-based application patterns including Retrieval Augmented Generation (RAG). This project provides a comprehensive set of tools that anyone can use – from beginner to the most sophisticated AI developer – to rapidly build industrial-grade enterprise LLM-based applications. Key differentiators include: source citation for Q&A scenarios, fact checking, and other guardrails against model hallucination.

With llmware, our goal is to help catalyze an open community around the combination of open, extensible technologies being assembled to accomplish fact-based generative workflows.

🎯 Key features

llmware is an integrated framework composed of four major components:

Retrieval: Assemble fact-sets
  • A comprehensive set of querying methods: semantic, text, and hybrid retrieval with integrated metadata (see the query sketch after this feature list).
  • Ranking and filtering strategies to enable semantic search and rapid retrieval of information.
  • Web scrapers, Wikipedia integration, and Yahoo Finance API integration as additional tools to assemble fact-sets for generation.
Prompt: Tools for sophisticated generative scenarios
  • Connect Models: Open interface designed to support AI21, Ai Bloks READ-GPT, Anthropic, Cohere, HuggingFace generative models, llmware BLING and DRAGON models, and OpenAI.
  • Prepare Sources: Tools for packaging and tracking a wide range of materials into model context window sizes. Sources include files, websites, audio, AWS Transcribe transcripts, Wikipedia and Yahoo Finance.
  • Prompt Catalog: Dynamically configurable prompts to experiment with multiple models without any change in the code.
  • Post Processing: a full set of metadata and tools for evidence verification, classification of a response, and fact-checking.
  • Human in the Loop: Ability to enable user ratings, feedback, and corrections of AI responses.
  • Auditability: A flexible state mechanism to capture, track, analyze, and audit the LLM prompt lifecycle.
Vector Embeddings: swappable embedding models and vector databases
  • Custom-trained sentence transformer embedding models and support for embedding models from Cohere, Google, HuggingFace, and OpenAI.
  • Mix-and-match among multiple options to find the right solution for any particular application.
  • Out-of-the-box support for vector databases: Milvus, FAISS, Pinecone, and MongoDB Atlas Vector Search.
Parsing and Text Chunking: Prepare your data for RAG
  • Parsers for: PDF, PowerPoint, Word, Excel, HTML, Text, WAV, AWS Transcribe transcripts.
  • A complete set of text-chunking tools to separate information and associated metadata into a consistent block format.
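
To make the retrieval component concrete, below is a minimal sketch that parses a folder of documents into a library and runs a basic text query against it. It reuses the Library and Query classes from the quick-start example further down; the text_query call, the folder path, and the result keys are illustrative assumptions, so check the llmware examples for the exact retrieval methods available.

from llmware.library import Library
from llmware.retrieval import Query

# Create a library and parse a folder of documents into indexed text blocks
# ("/path/to/my_docs" is a placeholder -- point it at your own files)
library = Library().create_new_library("my_library")
library.add_files("/path/to/my_docs")

# Run a basic text query; result_count caps the number of blocks returned
results = Query(library).text_query("termination", result_count=10)

for result in results:
    # Result dictionaries carry the block text plus integrated metadata
    # (the exact keys shown here are assumptions)
    print(result["file_source"], "-", result["text"])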

📚 Explore additional llmware capabilities and 🎬 check out these videos on how to quickly get started with RAG.

🌱 Getting Started

1. Install llmware:

pip install llmware

or

python3 -m pip install llmware

See Working with llmware for other options to get up and running.

2. MongoDB and Milvus

MongoDB and Milvus are optional and used to provide production-grade database and vector embedding capabilities. The fastest way to get started is to use the provided Docker Compose file which takes care of running them both:

curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose.yaml

and then run the containers:

docker compose up -d

Not ready to install MongoDB or Milvus? Check out what you can do without them in our examples section.
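
Here is a minimal no-database sketch along those lines: it attaches a single document directly to a prompt, with no library, MongoDB, or Milvus involved. The add_source_document helper, model name, and file paths are assumptions based on the llmware examples rather than a definitive recipe.

from llmware.prompts import Prompt

# Load a model and attach one document directly as the source --
# no MongoDB or Milvus required (model name and paths are placeholders)
prompter = Prompt().load_model("gpt-3.5-turbo", api_key="my-openai-key")
prompter.add_source_document("/path/to/contracts", "agreement.pdf", query="termination")

# Prompt the model against the packaged source
responses = prompter.prompt_with_source("Summarize the termination provisions")
for response in responses:
    print(response["llm_response"])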

See Running MongoDB and Milvus for other options to get up and running with these optional dependencies.

3. 🔥 Start coding - Quick Start For RAG 🔥

# This example demonstrates Retrieval Augmented Generation (RAG):
import os
from llmware.library import Library
from llmware.retrieval import Query
from llmware.prompts import Prompt
from llmware.setup import Setup

# Update this value with your own API Key, either by setting the env var or editing it directly here:
openai_api_key = os.environ["OPENAI_API_KEY"]

# A self-contained end-to-end example of RAG
def end_to_end_rag():
    
    # Create a library called "Agreements", and load it with llmware sample files
    print (f"\n > Creating library 'Agreements'...")
    library = Library().create_new_library("Agreements")
    sample_files_path = Setup().load_sample_files()
    library.add_files(os.path.join(sample_files_path,"Agreements"))

    # Create vector embeddings for the library using the "industry-bert-contracts" model and store them in Milvus
    print (f"\n > Generating vector embeddings using embedding model: 'industry-bert-contracts'...")
    library.install_new_embedding(embedding_model_name="industry-bert-contracts", vector_db="milvus")

    # Perform a semantic search against our library.  This will gather evidence to be used in the LLM prompt
    print (f"\n > Performing a semantic query...")
    os.environ["TOKENIZERS_PARALLELISM"] = "false" # Avoid a HuggingFace tokenizer warning
    query_results = Query(library).semantic_query("Termination", result_count=20)

    # Create a new prompter using GPT-4 and add the query_results captured above
    prompt_text = "Summarize the termination provisions"
    print (f"\n > Prompting LLM with '{prompt_text}'")
    prompter = Prompt().load_model("gpt-4", api_key=openai_api_key)
    sources = prompter.add_source_query_results(query_results)

    # Prompt the LLM with the sources and a query string
    responses = prompter.prompt_with_source(prompt_text, prompt_name="summarize_with_bullets")
    for response in responses:
        print ("\n > LLM response\n" + response["llm_response"])
    
    # Finally, generate a CSV report that can be shared
    print (f"\n > Generating CSV report...")
    report_data = prompter.send_to_human_for_review()
    print ("File: " + report_data["report_fp"] + "\n")

end_to_end_rag()

Response from end-to-end RAG example

> python examples/rag_with_openai.py

 > Creating library 'Agreements'...

 > Generating vector embeddings using embedding model: 'industry-bert-contracts'...

 > Performing a semantic query...

 > Prompting LLM with 'Summarize the termination provisions'

 > LLM response
- Employment period ends on the first occurrence of either the 6th anniversary of the effective date or a company sale.
- Early termination possible as outlined in sections 3.1 through 3.4.
- Employer can terminate executive's employment under section 3.1 anytime without cause, with at least 30 days' prior written notice.
- If notice is given, the executive is allowed to seek other employment during the notice period.

 > Generating CSV report...
File: /Users/llmware/llmware_data/prompt_history/interaction_report_Fri Sep 29 12:07:42 2023.csv

📚 See 20+ llmware examples for more RAG examples and other code samples and ideas.

4. Accessing LLMs and setting up API keys & secrets

To get started with a proprietary model, you need to provide your own API Keys. If you don't yet have one, more information can be found at: AI21, Ai Bloks, Anthropic, Cohere, Google, OpenAI.

API keys and secrets for models, AWS, and Pinecone can be set up as environment variables or managed however you prefer.
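
For example, a key kept in an environment variable can be read at runtime and passed directly into load_model (direct api_key passing is noted in the v0.1.7 release notes below); this mirrors the quick-start example above:

import os
from llmware.prompts import Prompt

# Read the key from the environment rather than hard-coding it
openai_api_key = os.environ.get("OPENAI_API_KEY", "")

# The api_key parameter can be passed directly at model load time
prompter = Prompt().load_model("gpt-4", api_key=openai_api_key)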

You can also access the llmware public model repository, which includes out-of-the-box custom-trained sentence transformer embedding models fine-tuned for the following industries: Insurance, Contracts, Asset Management, SEC. These domain-specific models, along with llmware's generative BLING model series ("Best Little Instruction-following No-GPU-required") and DRAGON model series ("Delivering RAG on ..."), are available at llmware on Hugging Face. Explore using the model repository and the llmware Hugging Face integration in llmware examples.
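
As a sketch of swapping in one of these open models in place of a proprietary API, the following loads a BLING model by its Hugging Face name and runs a simple inference. Per the v0.1.7 release notes below, BLING models are registered in the catalog and can be instantiated like any other model; the specific model name and the prompt_main call here are illustrative, so check the llmware Hugging Face page and examples for current names.

from llmware.prompts import Prompt

# Load a small llmware BLING model -- designed to run without a GPU and
# with no API key required ("llmware/bling-1b-0.1" is one published name;
# see the llmware Hugging Face page for the current list)
prompter = Prompt().load_model("llmware/bling-1b-0.1")

# Run a simple inference; prompt_main returns an output dictionary
# (see "evidence_metadata" in the v0.1.7 release notes)
response = prompter.prompt_main("What is a termination provision?")
print(response["llm_response"])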

🔹 Alternate options for running MongoDB and Milvus

There are several options for getting MongoDB running:

🐳 A. Run mongo container with docker
docker run -d -p 27017:27017  -v mongodb-volume:/data/db --name=mongodb mongo:latest
🐳 B. Run container with docker compose

Create a docker-compose.yaml file with the content:

version: "3"

services:
  mongodb:
    container_name: mongodb
    image: 'mongo:latest'
    volumes:
      - mongodb-volume:/data/db
    ports:
      - '27017:27017'

volumes:
  mongodb-volume:
    driver: local

and then run:

docker compose up
📖 C. Install MongoDB natively

See the Official MongoDB Installation Guide

🔗 D. Connect to an existing MongoDB deployment

You can connect to an existing MongoDB deployment by setting the environment variable COLLECTION_DB_URI to its connection string. See the example script, Using Mongo Atlas, for detailed information on how to use Mongo Atlas as the NoSQL and/or vector database for llmware.
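
For example (the URI below is a placeholder, not a real endpoint):

import os

# Point llmware at an existing MongoDB deployment, such as a Mongo Atlas
# cluster, by setting the connection string before using llmware
os.environ["COLLECTION_DB_URI"] = "mongodb+srv://user:password@cluster.example.mongodb.net"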

Additional information on finding and formatting connection strings can be found in the MongoDB Connection Strings Documentation.

✍️ Working with the llmware Github repository

The llmware repo can be pulled locally to get access to all the examples, or to work directly with the latest version of the llmware code.

Pull the repo locally

git clone git@github.com:llmware-ai/llmware.git

or download/extract a zip of the llmware repository

Run llmware natively

Update the local copy of the repository:

git pull

Download the shared llmware native libraries and dependencies by running the load_native_libraries.sh script. This pulls the right wheel for your platform and extracts the llmware native libraries and dependencies into the proper place in the local repository.

./scripts/dev/load_native_libraries.sh

At the top level of the llmware repository run the following command:

pip install .

✨ Getting help or sharing your ideas with the community

Questions and discussions are welcome in our GitHub Discussions.

Interested in contributing to llmware? We welcome involvement from the community to extend and enhance the framework!

  • 💡 What's your favorite model, or is there one you'd like to check out in your experiments?
  • 💡 Have you had success with a different embedding database?
  • 💡 Is there a prompt that shines in a RAG workflow?

Information on ways to participate can be found in our Contributors Guide. As with all aspects of this project, contributing is governed by our Code of Conduct.

📣 Release notes and Change Log

Supported Operating Systems:

  • MacOS
  • Linux
  • Windows

Supported Vector Databases:

  • Milvus
  • FAISS
  • Pinecone
  • MongoDB Atlas Vector Search

Known issues:

  • A segmentation fault can occur when parsing if the native package for mongo-c-driver is 1.25 or above. To address this issue, install the latest version of llmware or downgrade mongo-c-driver to v1.24.4.
🚧 Change Log
  • 30 Nov 2023: llmware v0.1.10

    • Windows added as a supported operating system.
    • Further enhancements to native code for stack management.
    • Minor defect fixes.
  • 24 Nov 2023: llmware v0.1.9

    • Markdown (.md) files are now parsed and treated as text files.
    • PDF and Office parser stack optimizations which should avoid the need to set ulimit -s.
    • New llmware_models_fast_start.py example that allows discovery and selection of all llmware HuggingFace models.
    • Native dependencies (shared libraries and dependencies) now included in the repo to facilitate local development.
    • Updates to the Status class to support PDF and Office document parsing status updates.
    • Minor defect fixes including image block handling in library exports.
  • 17 Nov 2023: llmware v0.1.8

    • Enhanced generation performance by allowing each model to specify the trailing space parameter.
    • Improved handling for eos_token_id for llama2 and mistral.
    • Improved support for Hugging Face dynamic loading.
    • New examples with the new llmware DRAGON models.
  • 14 Nov 2023: llmware v0.1.7

    • Moved to Python wheel package format for PyPI distribution to provide seamless installation of native dependencies on all supported platforms.
    • ModelCatalog enhancements:
      • OpenAI update to include newly announced ‘turbo’ 4 and 3.5 models.
      • Cohere embedding v3 update to include new Cohere embedding models.
      • BLING models as out-of-the-box registered options in the catalog. They can be instantiated like any other model, even without the “hf=True” flag.
      • Ability to register new model names, within existing model classes, with the register method in ModelCatalog.
    • Prompt enhancements:
      • "evidence_metadata" added to prompt_main output dictionaries, allowing prompt_main responses to be plugged into the evidence and fact-checking steps without modification.
      • API key can now be passed directly in prompt.load_model(model_name, api_key="[my-api-key]").
    • LLMWare Inference Server - initial delivery:
      • New class, LLMWareModel, which is a wrapper on a custom HF-style API-based model.
      • LLMWareInferenceServer is a new class that can be instantiated on a remote (GPU) server to create a testing API-server that can be integrated into any Prompt workflow.
  • 03 Nov 2023: llmware v0.1.6

    • Updated packaging to require mongo-c-driver 1.24.4 to temporarily workaround segmentation fault with mongo-c-driver 1.25.
    • Updates in python code needed in anticipation of future Windows support.
  • 27 Oct 2023: llmware v0.1.5

    • Four new example scripts focused on RAG workflows with small, fine-tuned instruct models that run on a laptop (llmware BLING models).
    • Expanded options for setting temperature inside a prompt class.
    • Improvement in post processing of Hugging Face model generation.
    • Streamlined loading of Hugging Face generative models into prompts.
    • Initial delivery of a central status class: read/write of embedding status with a consistent interface for callers.
    • Enhanced in-memory dictionary search support for multi-key queries.
    • Removed trailing space in human-bot wrapping to improve generation quality in some fine-tuned models.
    • Minor defect fixes, updated test scripts, and version update for Werkzeug to address dependency security alert.
  • 20 Oct 2023: llmware v0.1.4

    • GPU support for Hugging Face models.
    • Defect fixes and additional test scripts.
  • 13 Oct 2023: llmware v0.1.3

    • MongoDB Atlas Vector Search support.
    • Support for authentication using a MongoDB connection string.
    • Document summarization methods.
    • Improvements in capturing the model context window automatically and passing changes in the expected output length.
    • Dataset card and description with lookup by name.
    • Processing time added to model inference usage dictionary.
    • Additional test scripts, examples, and defect fixes.
  • 06 Oct 2023: llmware v0.1.1

    • Added test scripts to the github repository for regression testing.
    • Minor defect fixes and version update of Pillow to address dependency security alert.
  • 02 Oct 2023: llmware v0.1.0 🔥 Initial release of llmware to open source!! 🔥

Download files

Source Distributions

No source distribution files are available for this release.

Built Distribution

llmware-0.1.10-py3-none-any.whl (19.9 MB)

File details

Details for the file llmware-0.1.10-py3-none-any.whl.

File metadata

  • Download URL: llmware-0.1.10-py3-none-any.whl
  • Upload date:
  • Size: 19.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.10

File hashes

Hashes for llmware-0.1.10-py3-none-any.whl:

  • SHA256: 5bb31a6e5797062b5f2eaa5f1c9eebba1ba18cea0d84a2b1e21d50e911f8d9c3
  • MD5: 39f6970534fcd9333295b5811170a34a
  • BLAKE2b-256: c75f4978c21bd1e51d791b51947db1b73f6241441d86ddddcb01ad83ace9ec78
