

llm-gpt4all


Plugin for LLM adding support for the GPT4All collection of models.

Installation

Install this plugin in the same environment as LLM.

llm install llm-gpt4all

After installing the plugin you can see a new list of available models like this:

llm models list

The output will include something like this:

gpt4all: orca-mini-3b - Orca (Small), 1.80GB download, needs 4GB RAM (installed)
gpt4all: ggml-gpt4all-j-v1 - Groovy, 3.53GB download, needs 8GB RAM (installed)
gpt4all: nous-hermes-13b - Hermes, 7.58GB download, needs 16GB RAM (installed)
gpt4all: orca-mini-7b - Orca, 3.53GB download, needs 8GB RAM
gpt4all: ggml-model-gpt4all-falcon-q4_0 - GPT4All Falcon, 3.78GB download, needs 8GB RAM
gpt4all: ggml-vicuna-7b-1 - Vicuna, 3.92GB download, needs 8GB RAM
gpt4all: ggml-wizardLM-7B - Wizard, 3.92GB download, needs 8GB RAM
gpt4all: ggml-mpt-7b-base - MPT Base, 4.52GB download, needs 8GB RAM
gpt4all: ggml-mpt-7b-instruct - MPT Instruct, 4.52GB download, needs 8GB RAM
gpt4all: ggml-mpt-7b-chat - MPT Chat, 4.52GB download, needs 8GB RAM
gpt4all: ggml-replit-code-v1-3b - Replit, 4.84GB download, needs 4GB RAM
gpt4all: orca-mini-13b - Orca (Large), 6.82GB download, needs 16GB RAM
gpt4all: GPT4All-13B-snoozy - Snoozy, 7.58GB download, needs 16GB RAM
gpt4all: ggml-vicuna-13b-1 - Vicuna (large), 7.58GB download, needs 16GB RAM
gpt4all: ggml-nous-gpt4-vicuna-13b - Nous Vicuna, 7.58GB download, needs 16GB RAM
gpt4all: ggml-stable-vicuna-13B - Stable Vicuna, 7.58GB download, needs 16GB RAM
gpt4all: wizardLM-13B-Uncensored - Wizard Uncensored, 7.58GB download, needs 16GB RAM

Further details on these models can be found in this Observable notebook.

Usage

You can execute a model using the name displayed in the llm models list output. The model file will be downloaded the first time you attempt to run it.

llm -m orca-mini-7b '3 names for a pet cow'

The first time you run this you will see a progress bar:

 31%|█████████▋                        | 1.16G/3.79G [00:26<01:02, 42.0MiB/s]

On subsequent uses the model output will be displayed immediately.
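
These models can also be used from Python through LLM's Python API. The snippet below is a minimal sketch assuming LLM's documented llm.get_model() and prompt() interface, reusing the same model ID as the CLI example above:

import llm

# Look up the plugin-provided model by the ID shown in `llm models list`
model = llm.get_model("orca-mini-7b")

# The model file is downloaded on first use, just as with the CLI
response = model.prompt("3 names for a pet cow")
print(response.text())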

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-gpt4all
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
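
As a starting point, a test might check that the plugin registers its models with LLM's CLI. The following is an illustrative sketch rather than a test taken from the project's suite (the test name and assertion are assumptions), driving LLM's Click command group with Click's CliRunner:

from click.testing import CliRunner
from llm.cli import cli

def test_gpt4all_models_are_listed():
    # Run `llm models list` in-process and check that gpt4all models appear
    runner = CliRunner()
    result = runner.invoke(cli, ["models", "list"])
    assert result.exit_code == 0
    assert "gpt4all:" in result.output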

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-gpt4all-0.1.1.tar.gz (9.5 kB)


Built Distribution

llm_gpt4all-0.1.1-py3-none-any.whl (9.3 kB)


File details

Details for the file llm-gpt4all-0.1.1.tar.gz.

File metadata

  • Download URL: llm-gpt4all-0.1.1.tar.gz
  • Upload date:
  • Size: 9.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llm-gpt4all-0.1.1.tar.gz

  • SHA256: 9cc1f03c03c28430ff160eb80587c7aed0cc814ed2fb0abd21d0308925b6a55f
  • MD5: 6c69696066bc8987d023ade3fbca7623
  • BLAKE2b-256: 08ec7ea99a58eeae46f7925bfd6ab13fe95de6103b71370fbf2e44cf44cf844c

See more details on using hashes here.
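
For example, a downloaded source distribution can be checked against the SHA256 listed above using Python's standard hashlib module; the local filename here is assumed to match the released archive:

import hashlib

# Assumed local path to the downloaded source distribution
path = "llm-gpt4all-0.1.1.tar.gz"
expected = "9cc1f03c03c28430ff160eb80587c7aed0cc814ed2fb0abd21d0308925b6a55f"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH")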

File details

Details for the file llm_gpt4all-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: llm_gpt4all-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 9.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llm_gpt4all-0.1.1-py3-none-any.whl

  • SHA256: 08162207729d9677254e564734548bfba046f4176cff17a05d05fc9c95b25a61
  • MD5: b5099bc99673c40c6eed744b645786f0
  • BLAKE2b-256: aa927bd7de8237dd8908b6831d856212154c3e9cded742d40f91a52ee7e5e30d

See more details on using hashes here.
