
Chat with your documents locally.

Project description

localrag

localrag is a Python package that lets you "chat" with your documents using a local Retrieval-Augmented Generation (RAG) approach, with no external Large Language Model (LLM) provider required.

It enables quick, easy, local interaction with your text data: relevant passages are retrieved from an index and answers are generated from their content.
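localrag's internals are not shown on this page, but the core RAG idea it implements can be sketched in a few lines of plain Python: index document chunks, retrieve the chunk most relevant to a question, and hand that chunk to an LLM as context. The toy scoring below (word overlap) is only a stand-in for the embedding similarity a real system uses.

```python
import re

def tokenize(text):
    """Lowercase a string and return its set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(chunks, question):
    """Return the chunk sharing the most words with the question.

    A real RAG system would compare embedding vectors instead of
    raw word overlap, but the retrieval shape is the same.
    """
    q = tokenize(question)
    return max(chunks, key=lambda c: len(q & tokenize(c)))

chunks = [
    "I have a dog",
    "The weather is sunny today",
]
context = retrieve(chunks, "What type of pet do I have?")
print(context)  # -> I have a dog
# A RAG system would now prompt the LLM with this context plus the question.
```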

Features

  • Local Processing: Runs entirely on your local machine - no need to send data externally.
  • Customizable: Easy to set up with default models or specify your own.
  • Versatile: Use it for a variety of applications, from automated Q&A systems to data mining. You can add files, folders, or websites to the index.

Prerequisites

Before you install and start using localrag, make sure you meet the following requirements:

Ollama for Local Inference

localrag uses Ollama for local inference, which is particularly convenient on macOS. Ollama makes it easy to serve and query models locally. Install it from https://ollama.ai and make sure the server is running before you start localrag.
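A typical setup might look like the following (the llama2 model matches the default shown in the config example further down; pick any model from https://ollama.ai/library):

```shell
# Install Ollama (macOS via Homebrew; Linux users can use the
# install script from https://ollama.ai)
brew install ollama

# Start the Ollama server (the desktop app does this automatically)
ollama serve &

# Pull the model used in the examples on this page
ollama pull llama2
```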

Installation

To install localrag, simply use pip:

pip install localrag

Quick Start

Here's a quick example of how you can use localrag to chat with your documents:

Here is the content of test.txt in the docs folder:

I have a dog

import localrag
my_local_rag = localrag.init()
# Add docs
my_local_rag.add_to_index("./docs")
# Chat with docs
response = my_local_rag.chat("What type of pet do I have?")
print(response.answer)
print(response.context)
# Based on the context you provided, I can determine that you have a dog. Therefore, the type of pet you have is "dog."
# [Document(page_content='I have a dog', metadata={'source': 'docs/test.txt'})]

Website Example

import localrag
my_local_rag = localrag.init()
my_local_rag.add_to_index("https://github.com/banjtheman/localrag")
response = my_local_rag.chat("What is localrag?")
print(response.answer)
# Based on the context provided in the GitHub repository page for "banjtheman/localrag", localrag is a chat application that allows users to communicate with their documents locally...

More examples in the tests folder.

localrag config options

Here is how you can configure localrag:

import localrag
my_local_rag = localrag.init(
    llm_model="llama2", # Any Ollama model: https://ollama.ai/library
    embedding_model="BAAI/bge-small-en-v1.5", # Any compatible embedding model, e.g. variants of https://huggingface.co/BAAI/bge-large-en-v1.5; see the leaderboard at https://huggingface.co/spaces/mteb/leaderboard
    device="mps", # "mps", "cpu", or "cuda:X"
    index_location="localrag_index", # Location of the vectorstore
    system_prompt="You are Duck. Start each response with Quack.", # Custom system prompt
)
my_local_rag.add_to_index("./docs")

# can change the URL of the ollama server with my_local_rag.llm.base_url = "http://ollama:11434"

# Can use openai models with
# my_local_rag.setup_openai_llm(model="gpt-3.5-turbo",temp=0)
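If Ollama runs somewhere other than localhost (for example, in a container), point localrag at it via my_local_rag.llm.base_url as noted above. One way to run the server in Docker, using the official ollama/ollama image and its default port 11434, is:

```shell
# Run the Ollama server in Docker, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the container
docker exec -it ollama ollama pull llama2
```

With the container running, set my_local_rag.llm.base_url = "http://localhost:11434" (or the container's hostname on your network).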

License

This library is licensed under the Apache 2.0 License. See the LICENSE file.

Project details


Download files

Download the file for your platform.

Source Distribution

localrag-0.1.42.tar.gz (10.5 kB)

Uploaded Source

Built Distribution

localrag-0.1.42-py3-none-any.whl (10.9 kB)

Uploaded Python 3

File details

Details for the file localrag-0.1.42.tar.gz.

File metadata

  • Download URL: localrag-0.1.42.tar.gz
  • Size: 10.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for localrag-0.1.42.tar.gz:

  • SHA256: 99fddb3a15398f5873ec5c31f17b1e4d4b563418dc434a9c50f700f0af73e2ce
  • MD5: 32b0f0d0604a1b24f8337c59df2fa908
  • BLAKE2b-256: dd93eda7610b05d4792e4c93b8b9a30c7c6c2e98a6ca40580bb1d64e44acce74
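You can check a downloaded file against the published SHA256 digest with Python's standard-library hashlib; the filename below is just the sdist from this page.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the SHA256 listed above:
# print(sha256_of("localrag-0.1.42.tar.gz"))
```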


File details

Details for the file localrag-0.1.42-py3-none-any.whl.

File metadata

  • Download URL: localrag-0.1.42-py3-none-any.whl
  • Size: 10.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for localrag-0.1.42-py3-none-any.whl:

  • SHA256: 62c0bc16ec06fa5a55f1cd7bf9d6323a629428f00858a8f63b374e7d66821a34
  • MD5: 33f0204b26a82bb3b1167f25fdc268c0
  • BLAKE2b-256: 4b83f3b1905adabaaa58f4756741d2c6ddfd6a45469d99cbbeb86beae6e6f9fe

