
ChatDocs

Chat with your documents offline using AI. No data leaves your system; an internet connection is required only to install the tool and download the AI models. ChatDocs is based on PrivateGPT but has more features.

Web UI

Features

  • Supports GGML models via C Transformers
  • Supports 🤗 Transformers models
  • Supports GPTQ models
  • Web UI
  • GPU support
  • Highly configurable via chatdocs.yml
Supported document types

Extension     Format
.csv          CSV
.docx, .doc   Word Document
.enex         Evernote
.eml          Email
.epub         EPUB
.html         HTML
.md           Markdown
.msg          Outlook Message
.odt          Open Document Text
.pdf          Portable Document Format (PDF)
.pptx, .ppt   PowerPoint Document
.txt          Text file (UTF-8)
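When scripting around chatdocs, it can be handy to filter files down to the supported types before adding them. A small stdlib-only sketch based on the table above (illustrative only; chatdocs does its own loader dispatch internally):

```python
from pathlib import Path

# Supported extensions from the table above, mapped to their format names.
SUPPORTED = {
    ".csv": "CSV",
    ".docx": "Word Document", ".doc": "Word Document",
    ".enex": "Evernote",
    ".eml": "Email",
    ".epub": "EPUB",
    ".html": "HTML",
    ".md": "Markdown",
    ".msg": "Outlook Message",
    ".odt": "Open Document Text",
    ".pdf": "Portable Document Format (PDF)",
    ".pptx": "PowerPoint Document", ".ppt": "PowerPoint Document",
    ".txt": "Text file (UTF-8)",
}

def is_supported(path: str) -> bool:
    """True if the file extension is one chatdocs can ingest."""
    return Path(path).suffix.lower() in SUPPORTED

print(is_supported("report.pdf"))   # -> True
print(is_supported("archive.zip"))  # -> False
```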

Installation

Install the tool using:

pip install chatdocs

Download the AI models using:

chatdocs download

The tool can now be used offline, without an internet connection.

Usage

Add a directory containing documents to chat with using:

chatdocs add /path/to/documents

The processed documents will be stored in the db directory by default.

Chat with your documents using:

chatdocs ui

Open http://localhost:5000 in your browser to access the web UI.

It also has a nice command-line interface:

chatdocs chat

Configuration

All the configuration options can be changed using the chatdocs.yml config file. Create a chatdocs.yml file in some directory and run all commands from that directory. For reference, see the default chatdocs.yml file.

You don't have to copy the entire file; just add the config options you want to change, as they will be merged with the default config. For example, see tests/fixtures/chatdocs.yml, which changes only some of the config options.
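The merge behaves like a recursive dictionary update: nested sections combine, and any scalar value you set replaces the default. A minimal sketch of the idea (not chatdocs' actual implementation), using plain dicts to stand in for the parsed YAML and a hypothetical default config:

```python
def merge(default: dict, override: dict) -> dict:
    """Recursively overlay override onto default: nested dicts merge, other values replace."""
    result = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical defaults; your chatdocs.yml only needs the keys you change.
default = {"embeddings": {"model": "default-embeddings-model"}, "llm": "ctransformers"}
user = {"embeddings": {"model": "hkunlp/instructor-large"}}
print(merge(default, user))
# -> {'embeddings': {'model': 'hkunlp/instructor-large'}, 'llm': 'ctransformers'}
```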

Embeddings

To change the embeddings model, add and change the following in your chatdocs.yml:

embeddings:
  model: hkunlp/instructor-large

Note: When you change the embeddings model, delete the db directory and add documents again.
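The note above can be scripted. This sketch assumes the default db location and a placeholder documents path:

```shell
# After switching embeddings models, rebuild the vector store:
rm -rf db                        # remove embeddings made with the old model
chatdocs add /path/to/documents  # re-embed the documents with the new model
```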

C Transformers

To change the C Transformers GGML model, add and change the following in your chatdocs.yml:

ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin
  model_type: llama

Note: When you add a new model for the first time, run chatdocs download to download the model before using it.

You can also use an existing local model file:

ctransformers:
  model: /path/to/ggml-model.bin
  model_type: llama

🤗 Transformers

To use 🤗 Transformers models, add the following to your chatdocs.yml:

llm: huggingface

To change the 🤗 Transformers model, add and change the following in your chatdocs.yml:

huggingface:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-HF

Note: When you add a new model for the first time, run chatdocs download to download the model before using it.

GPTQ

To use GPTQ models, install the auto-gptq package using:

pip install chatdocs[gptq]

and add the following to your chatdocs.yml:

llm: gptq

To change the GPTQ model, add and change the following in your chatdocs.yml:

gptq:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ
  model_file: Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.no-act-order.safetensors

Note: When you add a new model for the first time, run chatdocs download to download the model before using it.

GPU

Embeddings

To enable GPU (CUDA) support for the embeddings model, add the following to your chatdocs.yml:

embeddings:
  model_kwargs:
    device: cuda

You may have to reinstall PyTorch with CUDA enabled by following the official PyTorch installation instructions.
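To confirm that your PyTorch build can actually see a GPU before editing the config, here is a small check (not part of chatdocs; it degrades gracefully when torch is missing). `torch.cuda.is_available()` is PyTorch's standard CUDA probe:

```python
import importlib.util

def cuda_status() -> str:
    """Report whether PyTorch is installed and whether CUDA is usable."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    return "cuda available" if torch.cuda.is_available() else "cuda NOT available"

print(cuda_status())
```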

C Transformers

Note: Currently only LLaMA GGML models have GPU support.

To enable GPU (CUDA) support for the C Transformers GGML model, add the following to your chatdocs.yml:

ctransformers:
  config:
    gpu_layers: 50

You should also reinstall the ctransformers package with CUDA enabled:

pip uninstall ctransformers --yes
CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers

On Windows PowerShell run:

$env:CT_CUBLAS=1
pip uninstall ctransformers --yes
pip install ctransformers --no-binary ctransformers

On Windows Command Prompt run:

set CT_CUBLAS=1
pip uninstall ctransformers --yes
pip install ctransformers --no-binary ctransformers

🤗 Transformers

To enable GPU (CUDA) support for the 🤗 Transformers model, add the following to your chatdocs.yml:

huggingface:
  device: 0

You may have to reinstall PyTorch with CUDA enabled by following the official PyTorch installation instructions.

GPTQ

To enable GPU (CUDA) support for the GPTQ model, add the following to your chatdocs.yml:

gptq:
  device: 0

You may have to reinstall PyTorch with CUDA enabled by following the official PyTorch installation instructions.

After installing PyTorch with CUDA enabled, you should also reinstall the auto-gptq package:

pip uninstall auto-gptq --yes
pip install chatdocs[gptq]

License

MIT



Download files

Download the file for your platform.

Source Distribution

chatdocs-0.2.4.tar.gz (13.8 kB)

Built Distribution

chatdocs-0.2.4-py3-none-any.whl (14.7 kB)

File details

Details for the file chatdocs-0.2.4.tar.gz.

File metadata

  • Download URL: chatdocs-0.2.4.tar.gz
  • Size: 13.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.10

File hashes

Hashes for chatdocs-0.2.4.tar.gz

Algorithm     Hash digest
SHA256        4e1181fb465da99c9f2d73369cf3cad14c0f21106a63ed8b49fce8a387e57b56
MD5           06d06b4eba91280acd0ae3305aae89ab
BLAKE2b-256   3968bd451a54f40c565e03d067d6f9c094d09322d83e9a8da860a1ecdb174e60

File details

Details for the file chatdocs-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: chatdocs-0.2.4-py3-none-any.whl
  • Size: 14.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.10

File hashes

Hashes for chatdocs-0.2.4-py3-none-any.whl

Algorithm     Hash digest
SHA256        f2bfa9c021ae21cbcac946793fcfd7b9b951cc7d2f98a87ecda203254a0a8e46
MD5           b4fdd5806cf376710d62bc459ec2db5a
BLAKE2b-256   02df770be75bde63aaf532f06cedef7022d778022f8f13d953fb37d93355e729
