llmware

An enterprise-grade LLM-based development framework, tools, and fine-tuned models

llmware is a unified, open, extensible framework for LLM-based application patterns including Retrieval Augmented Generation (RAG). This project provides a comprehensive set of tools that anyone can use – from beginner to the most sophisticated AI developer – to rapidly build industrial-grade enterprise LLM-based applications. Key differentiators include: source citation for Q & A scenarios, fact checking, and other guardrails for model hallucination.

With llmware, our goal is to contribute to and help catalyze an open community around the new combination of open, extensible technologies being assembled to accomplish fact-based generative workflows.

🎯 Key features

llmware is an integrated framework composed of four major components:

Retrieval: Assemble fact-sets
  • A comprehensive set of querying methods: semantic, text, and hybrid retrieval with integrated metadata.
  • Ranking and filtering strategies to enable semantic search and rapid retrieval of information.
  • Web scrapers, Wikipedia integration, and Yahoo Finance API integration as additional tools to assemble fact-sets for generation.
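As an illustration of how a hybrid ranking strategy can work in principle, the sketch below blends keyword and semantic scores into a single ranking. This is a hypothetical example, not the llmware API; the function name and scoring scheme are invented for clarity:

```python
# Hypothetical illustration of hybrid retrieval (not the llmware API):
# blend normalized keyword and semantic scores, then rank documents.
def hybrid_rank(docs, keyword_scores, semantic_scores, alpha=0.5):
    """Return (doc, fused_score) pairs, best first; alpha weights keyword scores."""
    def normalize(scores):
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
        return [(s - lo) / span for s in scores]
    kw = normalize(keyword_scores)
    sem = normalize(semantic_scores)
    fused = [alpha * k + (1 - alpha) * s for k, s in zip(kw, sem)]
    return sorted(zip(docs, fused), key=lambda pair: pair[1], reverse=True)

ranked = hybrid_rank(["doc_a", "doc_b", "doc_c"],
                     keyword_scores=[0.0, 1.0, 0.4],
                     semantic_scores=[1.0, 0.5, 0.9],
                     alpha=0.6)
```

Weighting `alpha` toward 1.0 favors exact keyword matches; toward 0.0, semantic similarity.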
Prompt: Tools for sophisticated generative scenarios
  • Connect Models: Open interface designed to support AI21, Ai Bloks READ-GPT, Anthropic, Cohere, HuggingFace generative models, and OpenAI.
  • Prepare Sources: Tools for packaging and tracking a wide range of materials into model context window sizes. Sources include files, websites, audio, AWS Transcribe transcripts, Wikipedia and Yahoo Finance.
  • Prompt Catalog: Dynamically configurable prompts to experiment with multiple models without any change in the code.
  • Post Processing: a full set of metadata and tools for evidence verification, classification of a response, and fact-checking.
  • Human in the Loop: Ability to enable user ratings, feedback, and corrections of AI responses.
  • Auditability: A flexible state mechanism to capture, track, analyze and audit the LLM prompt lifecycle
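To make the "Prepare Sources" idea concrete, here is a sketch of packing text chunks into context-window-sized batches. It is not the llmware implementation; the function and its rough 4-characters-per-token heuristic are assumptions for illustration:

```python
# Hypothetical sketch (not the llmware API): pack retrieved text chunks into
# batches that each fit within a model's context window, using a rough
# 4-characters-per-token estimate.
def pack_sources(chunks, context_window_tokens=4096, chars_per_token=4):
    budget = context_window_tokens * chars_per_token  # budget in characters
    batches, current, used = [], [], 0
    for chunk in chunks:
        # start a new batch when the next chunk would overflow the budget
        if current and used + len(chunk) > budget:
            batches.append(current)
            current, used = [], 0
        current.append(chunk)
        used += len(chunk)
    if current:
        batches.append(current)
    return batches

batches = pack_sources(["x" * 3000, "y" * 3000, "z" * 3000],
                       context_window_tokens=1000)
```

With a 1000-token (4000-character) budget, each 3000-character chunk lands in its own batch; smaller chunks would be combined.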
Vector Embeddings: swappable embedding models and vector databases
  • Custom trained sentence transformer embedding models and support for embedding models from Cohere, Google, HuggingFace Embedding models, and OpenAI.
  • Mix-and-match among multiple options to find the right solution for any particular application.
  • Out-of-the-box support for 3 vector databases - Milvus, FAISS, and Pinecone.
Parsing and Text Chunking: Prepare your data for RAG
  • Parsers for: PDF, PowerPoint, Word, Excel, HTML, Text, WAV, AWS Transcribe transcripts.
  • A complete set of text-chunking tools to separate information and associated metadata to a consistent block format.
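As a rough sketch of what chunking text into a consistent block format with metadata can look like (a simplified illustration, not the llmware parser; the field names are invented):

```python
# Hypothetical sketch (not the llmware parser): split a document into
# fixed-size, overlapping text blocks, each carried in a consistent dict
# format alongside its metadata. Overlap must be smaller than block_size.
def chunk_text(text, doc_name, block_size=400, overlap=50):
    blocks, start, block_id = [], 0, 0
    while start < len(text):
        end = min(start + block_size, len(text))
        blocks.append({
            "block_id": block_id,
            "doc_name": doc_name,
            "start_char": start,
            "text": text[start:end],
        })
        if end == len(text):
            break
        start = end - overlap  # overlap preserves context across block boundaries
        block_id += 1
    return blocks

blocks = chunk_text("A" * 1000, "agreement.txt", block_size=400, overlap=50)
```

A consistent block shape like this is what lets retrieval, embedding, and prompting stages all consume the same parsed output.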

📚 Explore additional llmware capabilities, and 🎬 check out these videos on how to quickly get started with RAG.

🌱 Getting Started

1. Install llmware:

pip install llmware

or

python3 -m pip install llmware

See Working with llmware for other options to get up and running.

2. MongoDB and Milvus

MongoDB and Milvus are optional and used to provide production-grade database and vector embedding capabilities. The fastest way to get started is to use the provided Docker Compose file which takes care of running them both:

curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose.yaml

and then run the containers:

docker compose up -d

Not ready to install MongoDB or Milvus? Check out what you can do without them in our examples section.

See Running MongoDB and Milvus for other options to get up and running with these optional dependencies.

3. 🔥 Start coding - Quick Start For RAG 🔥

# This example demonstrates Retrieval Augmented Generation (RAG):
import os
from llmware.library import Library
from llmware.retrieval import Query
from llmware.prompts import Prompt
from llmware.setup import Setup

# Update this value with your own API key, either by setting the env var or editing it directly here:
openai_api_key = os.environ["OPENAI_API_KEY"]

# A self-contained end-to-end example of RAG
def end_to_end_rag():

    # Create a library called "Agreements", and load it with llmware sample files
    print("\n > Creating library 'Agreements'...")
    library = Library().create_new_library("Agreements")
    sample_files_path = Setup().load_sample_files()
    library.add_files(os.path.join(sample_files_path, "Agreements"))

    # Create vector embeddings for the library using the "industry-bert-contracts" model and store them in Milvus
    print("\n > Generating vector embeddings using embedding model: 'industry-bert-contracts'...")
    library.install_new_embedding(embedding_model_name="industry-bert-contracts", vector_db="milvus")

    # Perform a semantic search against our library. This will gather evidence to be used in the LLM prompt
    print("\n > Performing a semantic query...")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"  # Avoid a HuggingFace tokenizer warning
    query_results = Query(library).semantic_query("Termination", result_count=20)

    # Create a new prompter using GPT-4 and add the query_results captured above
    prompt_text = "Summarize the termination provisions"
    print(f"\n > Prompting LLM with '{prompt_text}'")
    prompter = Prompt().load_model("gpt-4", api_key=openai_api_key)
    sources = prompter.add_source_query_results(query_results)

    # Prompt the LLM with the sources and a query string
    responses = prompter.prompt_with_source(prompt_text, prompt_name="summarize_with_bullets")
    for response in responses:
        print("\n > LLM response\n" + response["llm_response"])

    # Finally, generate a CSV report that can be shared
    print("\n > Generating CSV report...")
    report_data = prompter.send_to_human_for_review()
    print("File: " + report_data["report_fp"] + "\n")

end_to_end_rag()

Response from end-to-end RAG example

> python examples/rag.py

 > Creating library 'Agreements'...

 > Generating vector embeddings using embedding model: 'industry-bert-contracts'...

 > Performing a semantic query...

 > Prompting LLM with 'Summarize the termination provisions'

 > LLM response
- Employment period ends on the first occurrence of either the 6th anniversary of the effective date or a company sale.
- Early termination possible as outlined in sections 3.1 through 3.4.
- Employer can terminate executive's employment under section 3.1 anytime without cause, with at least 30 days' prior written notice.
- If notice is given, the executive is allowed to seek other employment during the notice period.

 > Generating CSV report...
File: /Users/llmware/llmware_data/prompt_history/interaction_report_Fri Sep 29 12:07:42 2023.csv

📚 See 20+ llmware examples for more RAG examples and other code samples and ideas.

4. Accessing LLMs and setting up API keys & secrets

To get started with a proprietary model, you need to provide your own API Keys. If you don't yet have one, more information can be found at: AI21, Ai Bloks, Anthropic, Cohere, Google, OpenAI.

API keys and secrets for models, AWS, and Pinecone can be set up as environment variables or managed however you prefer.
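For example, a common pattern for managing keys via environment variables might look like this (an illustration, not an llmware requirement; the helper name is invented):

```python
import os

# Illustrative helper (not part of llmware): read an API key from the
# environment, with an optional fallback and a clear error if it is missing.
def get_api_key(env_var, default=None):
    key = os.environ.get(env_var, default)
    if key is None:
        raise RuntimeError(f"Set the {env_var} environment variable")
    return key

# e.g. export OPENAI_API_KEY="sk-..." in your shell, then:
# openai_api_key = get_api_key("OPENAI_API_KEY")
```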

You can also access the llmware public model repository, which includes out-of-the-box custom-trained sentence transformer embedding models fine-tuned for the following industries: Insurance, Contracts, Asset Management, SEC. These domain-specific models, along with llmware's generative BLING model series ("Best Little Instruction-following No-GPU-required"), are available at llmware on Huggingface. Explore using the model repository and the llmware Huggingface integration in llmware examples.

🔹 Alternate options for running MongoDB and Milvus

There are several options for getting MongoDB running:

🐳 A. Run mongo container with docker
docker run -d -p 27017:27017  -v mongodb-volume:/data/db --name=mongodb mongo:latest
🐳 B. Run container with docker compose

Create a docker-compose.yaml file with the content:

version: "3"

services:
  mongodb:
    container_name: mongodb
    image: 'mongo:latest'
    volumes:
      - mongodb-volume:/data/db
    ports:
      - '27017:27017'

volumes:
    mongodb-volume:
      driver: local

and then run:

docker compose up
📖 C. Install MongoDB natively

See the Official MongoDB Installation Guide

🔗 D. Connect to an existing MongoDB deployment

You can connect to an existing MongoDB deployment by setting the connection string to the environment variable, COLLECTION_DB_URI. See the example script, Using Mongo Atlas, for detailed information on how to use Mongo Atlas as the NoSQL and/or Vector Database for llmware.
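For example, the environment variable can be set in Python before using llmware (the URI below is a placeholder; substitute your own deployment's host, port, and credentials):

```python
import os

# Point llmware at an existing MongoDB deployment. The URI is a placeholder;
# set this before using llmware so the framework picks up the value.
os.environ["COLLECTION_DB_URI"] = "mongodb://user:password@localhost:27017"
```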

Additional information on finding and formatting connection strings can be found in the MongoDB Connection Strings Documentation.

✍️ Working with the llmware Github repository

The llmware repo can be pulled locally to get access to all the examples, or to work directly with the llmware code.

Pull the repo locally

git clone git@github.com:llmware-ai/llmware.git

or download/extract a zip of the llmware repository

Other options for running llmware

Run llmware in a container
TODO insert command for pulling the container here
Run llmware natively

At the top level of the llmware repository run the following command:

pip install .

✨ Getting help or sharing your ideas with the community

Questions and discussions are welcome in our GitHub Discussions.

Interested in contributing to llmware? We welcome involvement from the community to extend and enhance the framework!

  • 💡 What's your favorite model or is there one you'd like to check out in your experiments?
  • 💡 Have you had success with a different embedding database?
  • 💡 Is there a prompt that shines in a RAG workflow?

Information on ways to participate can be found in our Contributors Guide. As with all aspects of this project, contributing is governed by our Code of Conduct.

📣 Release notes and Change Log

Supported Operating Systems:

  • MacOS
  • Linux
  • (Windows is a roadmap item)

Supported Vector Databases:

  • Milvus
  • FAISS
  • Pinecone
  • MongoDB Atlas Vector Search

Known issues:

  • A segmentation fault can occur when parsing if the native package for mongo-c-driver is 1.25 or above. To address this issue, install llmware v0.1.6 and above or downgrade mongo-c-driver to v1.24.4.
  • The llmware parsers optimize for speed by using large stack frames. If you receive a "Segmentation Fault" during a parsing operation, increase the system's stack size resource limit: ulimit -s 160000. If running in a Linux container on a Mac, we've found this has to be set significantly higher and must be set by the host with a command like the following: docker run --ulimit stack=32768000:32768000 ...
  • For llmware versions <= v0.1.6, the pip package attempts to install the native dependencies. If it is run without root permission, or if a package manager other than Apt is used, you will need to manually install the following native packages: apt install -y libxml2 libpng-dev libmongoc-dev libzip4 tesseract-ocr poppler-utils. Note: libmongoc-dev <= v1.24.4 is required.
🚧 Change Log
  • 14 Nov 2023: llmware v0.1.7

    • Moved to Python Wheel package format for PyPi distribution to provide seamless installation of native dependencies on all supported platforms.
    • ModelCatalog enhancements:
      • OpenAI update to include the newly announced 'turbo' GPT-4 and GPT-3.5 models.
      • Cohere embedding v3 update to include new Cohere embedding models.
      • BLING models as out-of-the-box registered options in the catalog. They can be instantiated like any other model, even without the "hf=True" flag.
      • Ability to register new model names, within existing model classes, with the register method in ModelCatalog.
    • Prompt enhancements:
      • "evidence_metadata" added to prompt_main output dictionaries, allowing prompt_main responses to be plugged into the evidence and fact-checking steps without modification.
      • API key can now be passed directly in prompt.load_model(model_name, api_key="[my-api-key]").
    • LLMWareInference Server - Initial delivery:
      • New Class for LLMWareModel which is a wrapper on a custom HF-style API-based model.
      • LLMWareInferenceServer is a new class that can be instantiated on a remote (GPU) server to create a testing API-server that can be integrated into any Prompt workflow.
  • 03 Nov 2023: llmware v0.1.6

    • Updated packaging to require mongo-c-driver 1.24.4 to temporarily workaround segmentation fault with mongo-c-driver 1.25.
    • Updates in python code needed in anticipation of future Windows support.
  • 27 Oct 2023: llmware v0.1.5

    • Four new example scripts focused on RAG workflows with small, fine-tuned instruct models that run on a laptop (llmware BLING models).
    • Expanded options for setting temperature inside a prompt class.
    • Improvement in post processing of Hugging Face model generation.
    • Streamlined loading of Hugging Face generative models into prompts.
    • Initial delivery of a central status class: read/write of embedding status with a consistent interface for callers.
    • Enhanced in-memory dictionary search support for multi-key queries.
    • Removed trailing space in human-bot wrapping to improve generation quality in some fine-tuned models.
    • Minor defect fixes, updated test scripts, and version update for Werkzeug to address dependency security alert.
  • 20 Oct 2023: llmware v0.1.4

    • GPU support for Hugging Face models.
    • Defect fixes and additional test scripts.
  • 13 Oct 2023: llmware v0.1.3

    • MongoDB Atlas Vector Search support.
    • Support for authentication using a MongoDB connection string.
    • Document summarization methods.
    • Improvements in capturing the model context window automatically and passing changes in the expected output length.
    • Dataset card and description with lookup by name.
    • Processing time added to model inference usage dictionary.
    • Additional test scripts, examples, and defect fixes.
  • 06 Oct 2023: llmware v0.1.1

    • Added test scripts to the github repository for regression testing.
    • Minor defect fixes and version update of Pillow to address dependency security alert.
  • 02 Oct 2023: llmware v0.1.0 🔥 Initial release of llmware to open source!! 🔥
