An enterprise-grade LLM-based development framework, tools, and fine-tuned models

llmware


llmware is a unified, open, extensible framework for LLM-based application patterns including Retrieval Augmented Generation (RAG). This project provides a comprehensive set of tools that anyone can use – from beginner to the most sophisticated AI developer – to rapidly build industrial-grade enterprise LLM-based applications.

With llmware, our goal is to contribute to and help catalyze an open community around the combination of open, extensible technologies being assembled to accomplish fact-based generative workflows.

🎯 Key features

llmware is an integrated framework composed of four major components:

Retrieval: Assemble fact-sets
  • A comprehensive set of querying methods: semantic, text, and hybrid retrieval with integrated metadata.
  • Ranking and filtering strategies to enable semantic search and rapid retrieval of information.
  • Web scrapers, Wikipedia integration, and Yahoo Finance API integration as additional tools to assemble fact-sets for generation.
Prompt: Tools for sophisticated generative scenarios
  • Connect Models: Open interface designed to support AI21, Ai Bloks READ-GPT, Anthropic, Cohere, HuggingFace generative models, and OpenAI.
  • Prepare Sources: Tools for packaging and tracking a wide range of materials into model context window sizes. Sources include files, websites, audio, AWS Transcribe transcripts, Wikipedia and Yahoo Finance.
  • Prompt Catalog: Dynamically configurable prompts to experiment with multiple models without any change in the code.
  • Post Processing: A full set of metadata and tools for evidence verification, classification of a response, and fact-checking.
  • Human in the Loop: Ability to enable user ratings, feedback, and corrections of AI responses.
  • Auditability: A flexible state mechanism to capture, track, analyze, and audit the LLM prompt lifecycle.
Vector Embeddings: Swappable embedding models and vector databases
  • Custom-trained sentence transformer embedding models and support for embedding models from Cohere, Google, HuggingFace, and OpenAI.
  • Mix-and-match among multiple options to find the right solution for any particular application.
  • Out-of-the-box support for three vector databases: Milvus, FAISS, and Pinecone.
Parsing and Text Chunking: Prepare your data for RAG
  • Parsers for: PDF, PowerPoint, Word, Excel, HTML, Text, WAV, AWS Transcribe transcripts.
  • A complete set of text-chunking tools to separate information and associated metadata to a consistent block format.
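
To see how these pieces fit together in code, here is a minimal sketch that parses the sample files into a library and runs a plain text query against it. The library and setup calls mirror the Quick Start below; the text_query method and the result fields shown are assumptions based on the retrieval methods described above:

# Minimal sketch: parse documents into a library, then retrieve matching text blocks
import os
from llmware.library import Library
from llmware.retrieval import Query
from llmware.setup import Setup

library = Library().create_new_library("feature_demo")
sample_files_path = Setup().load_sample_files()
library.add_files(os.path.join(sample_files_path, "Agreements"))

# Plain text retrieval; result fields such as "text" and "file_source" are assumed
# to follow the consistent block format produced by the parsers
results = Query(library).text_query("termination", result_count=10)
for result in results:
    print(result["file_source"], "-", result["text"][:80])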

Explore additional llmware capabilities

🌱 Getting Started

1. Install llmware:

pip install llmware

or

python3 -m pip install llmware

See Working with llmware for other options to get up and running.
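
A quick way to confirm the install is to import the core modules used in the Quick Start below:

# Sanity check after pip install llmware
from llmware.library import Library
from llmware.retrieval import Query
from llmware.prompts import Prompt

print("llmware imported successfully")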

2. MongoDB and Milvus

MongoDB and Milvus are optional and used to provide production-grade database and vector embedding capabilities. The fastest way to get started is to use the provided Docker Compose file which takes care of running them both:

curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose.yaml

and then run the containers:

docker compose up -d

Not ready to install MongoDB or Milvus? Check out what you can do without them in our examples section.

See Running MongoDB and Milvus for other options to get up and running with these optional dependencies.
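
Once the containers are up, a quick connectivity check can be run with the pymongo and pymilvus client libraries directly (a sketch assuming the default ports from the compose setup: 27017 for MongoDB and 19530 for Milvus):

# Sketch: verify MongoDB and Milvus are reachable on their default ports
from pymongo import MongoClient
from pymilvus import connections

mongo = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=3000)
print("MongoDB:", mongo.admin.command("ping"))

connections.connect(alias="default", host="localhost", port="19530")
print("Milvus: connected")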

3. 🔥 Start coding - Quick Start For RAG 🔥

# This example demonstrates Retrieval Augmented Generation (RAG):
import os
from llmware.library import Library
from llmware.retrieval import Query
from llmware.prompts import Prompt
from llmware.setup import Setup

# Update this value with your own API Key, either by setting the env var or editing it directly here:
openai_api_key = os.environ["OPENAI_API_KEY"]

# A self-contained end-to-end example of RAG
def end_to_end_rag():
    
    # Create a library called "Agreements", and load it with llmware sample files
    print (f"\n > Creating library 'Agreements'...")
    library = Library().create_new_library("Agreements")
    sample_files_path = Setup().load_sample_files()
    library.add_files(os.path.join(sample_files_path,"Agreements"))

    # Create vector embeddings for the library using the "industry-bert-contracts" model and store them in Milvus
    print (f"\n > Generating vector embeddings using embedding model: 'industry-bert-contracts'...")
    library.install_new_embedding(embedding_model_name="industry-bert-contracts", vector_db="milvus")

    # Perform a semantic search against our library.  This will gather evidence to be used in the LLM prompt
    print (f"\n > Performing a semantic query...")
    os.environ["TOKENIZERS_PARALLELISM"] = "false" # Avoid a HuggingFace tokenizer warning
    query_results = Query(library).semantic_query("Termination", result_count=20)

    # Create a new prompter using GPT-4 and add the query_results captured above
    prompt_text = "Summarize the termination provisions"
    print (f"\n > Prompting LLM with '{prompt_text}'")
    prompter = Prompt().load_model("gpt-4", api_key=openai_api_key)
    sources = prompter.add_source_query_results(query_results)

    # Prompt the LLM with the sources and a query string
    responses = prompter.prompt_with_source(prompt_text, prompt_name="summarize_with_bullets")
    for response in responses:
        print ("\n > LLM response\n" + response["llm_response"])
    
    # Finally, generate a CSV report that can be shared
    print (f"\n > Generating CSV report...")
    report_data = prompter.send_to_human_for_review()
    print ("File: " + report_data["report_fp"] + "\n")

end_to_end_rag()

Response from end-to-end RAG example

> python examples/rag.py

 > Creating library 'Agreements'...

 > Generating vector embeddings using embedding model: 'industry-bert-contracts'...

 > Performing a semantic query...

 > Prompting LLM with 'Summarize the termination provisions'

 > LLM response
- Employment period ends on the first occurrence of either the 6th anniversary of the effective date or a company sale.
- Early termination possible as outlined in sections 3.1 through 3.4.
- Employer can terminate executive's employment under section 3.1 anytime without cause, with at least 30 days' prior written notice.
- If notice is given, the executive is allowed to seek other employment during the notice period.

 > Generating CSV report...
File: /Users/llmware/llmware_data/prompt_history/interaction_report_Fri Sep 29 12:07:42 2023.csv

See additional llmware examples for more code samples and ideas.

4. Accessing LLMs and setting up API keys & secrets

To get started with a proprietary model, you need to provide your own API Keys. If you don't yet have one, more information can be found at: AI21, Ai Bloks, Anthropic, Cohere, Google, OpenAI.

API keys and secrets for models, AWS, and Pinecone can be set up as environment variables or managed however you prefer.
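
For example, keys can be set from Python (or exported in the shell) before loading a model; the AWS and Pinecone variable names below are the standard ones for those services, and keys can also be passed directly via api_key as shown in the Quick Start:

import os

# Values shown are placeholders
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"
os.environ["ANTHROPIC_API_KEY"] = "<your-anthropic-key>"
os.environ["AWS_ACCESS_KEY_ID"] = "<your-aws-access-key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-aws-secret-key>"
os.environ["PINECONE_API_KEY"] = "<your-pinecone-key>"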

You can also access the llmware public model repository, which includes out-of-the-box custom-trained sentence transformer embedding models fine-tuned for the following industries: Insurance, Contracts, Asset Management, and SEC. These domain-specific models, along with llmware's generative BLING model series ("Best Little Instruction-following No-GPU-required"), are available at llmware on HuggingFace. Explore using the model repository and the llmware HuggingFace integration in llmware examples.
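
As one illustration of pulling an llmware model straight from HuggingFace, here is a sketch that uses the transformers library directly rather than the llmware integration; the model id is assumed for illustration, so check the llmware organization page on HuggingFace for current model names:

# Sketch: load a BLING-series model from the HuggingFace Hub with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmware/bling-1b-0.1"  # illustrative id - confirm on the llmware HuggingFace page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the termination provisions in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))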

🔹 Alternate options for running MongoDB and Milvus

There are several options for getting MongoDB running:

🐳 A. Run mongo container with docker
docker run -d -p 27017:27017  -v mongodb-volume:/data/db --name=mongodb mongo:latest
🐳 B. Run container with docker compose

Create a docker-compose.yaml file with the content:

version: "3"

services:
  mongodb:
    container_name: mongodb
    image: 'mongo:latest'
    volumes:
      - mongodb-volume:/data/db
    ports:
      - '27017:27017'

volumes:
    mongodb-volume:
      driver: local

and then run:

docker compose up
📖 C. Install MongoDB natively

See the Official MongoDB Installation Guide

✍️ Working with the llmware Github repository

The llmware repo can be pulled locally to get access to all the examples, or to work directly with the llmware code.

Pull the repo locally

git clone git@github.com:llmware-ai/llmware.git

or download/extract a zip of the llmware repository

Other options for running llmware

Run llmware in a container
TODO insert command for pulling the container here
Run llmware natively

At the top level of the llmware repository run the following command:

pip install .

✨ Getting help or sharing your ideas with the community

Questions and discussions are welcome in our GitHub Discussions.

Interested in contributing to llmware? We welcome involvement from the community to extend and enhance the framework!

  • 💡 What's your favorite model, or is there one you'd like to try in your experiments?
  • 💡 Have you had success with a different embedding database?
  • 💡 Is there a prompt that shines in a RAG workflow?

Information on ways to participate can be found in our Contributors Guide. As with all aspects of this project, contributing is governed by our Code of Conduct.

📣 Release notes and Change Log

Supported operating systems:

  • MacOS
  • Linux
  • (Windows is a roadmap item)

Supported Vector Databases:

  • Milvus
  • FAISS
  • Pinecone

Prereqs:

  • All platforms: Python 3.9 - 3.10
  • Mac: Homebrew is used to install the native dependencies
  • Linux:
    1. The pip package attempts to install the native dependencies. If pip is run without root permission, or if your system uses a package manager other than apt, you will need to manually install the following native packages: apt install -y libxml2 libpng-dev libmongoc-dev libzip4 tesseract-ocr poppler-utils
    2. The llmware parsers optimize for speed by using large stack frames. If you receive a "Segmentation Fault" during a parsing operation, update the system's 'stack size' resource limit: ulimit -s 32768000

Optional:

Change Log
  • Oct 2, 2023: 🔥 Initial release of llmware to open source!! 🔥
