
GGUF connector(s) with GUI

Project description

GGUF connector

GGUF (GPT-Generated Unified Format), released on August 21, 2023, is the successor to GGML (GPT-Generated Model Language); GPT itself stands for Generative Pre-trained Transformer.


This package is a simple graphical user interface (GUI) application that uses ctransformers or llama.cpp to interact with a chat model and generate responses.
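For reference, here is roughly what that means in code: a local GGUF chat model is loaded with one of those backends and queried with your prompt. The snippet below is only an illustrative sketch using ctransformers directly; the file name and model_type are placeholders, not anything fixed by the connector.

from ctransformers import AutoModelForCausalLM

# Load any GGUF chat model from the current directory (placeholder file name).
llm = AutoModelForCausalLM.from_pretrained("model.gguf", model_type="llama")
# Generate a response for a prompt, much like the GUI chat does.
print(llm("Q: What is GGUF?\nA:", max_new_tokens=64))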

Install the connector via pip (once only):

pip install gguf-connector

Update the connector (if a previous version is installed) by:

pip install gguf-connector --upgrade

With this version, you can interact directly with the GGUF file(s) in the current directory using a simple command.

Graphical User Interface (GUI)

Select model(s) with ctransformers:

ggc c

Select model(s) with the llama.cpp connector (optional: requires llama-cpp-python; get it here or a nightly build here):

ggc cpp
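If you prefer to see what the llama.cpp route looks like in code, a minimal hedged sketch using llama-cpp-python directly (file name and context size are placeholders):

from llama_cpp import Llama

# Load a GGUF model with llama.cpp via llama-cpp-python.
llm = Llama(model_path="model.gguf", n_ctx=2048)
out = llm("Q: What is GGUF?\nA:", max_tokens=64)
print(out["choices"][0]["text"])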

Command Line Interface (CLI)

Select model(s) with ctransformers:

ggc g

Select model(s) with the llama.cpp connector:

ggc gpp

Select model(s) with the vision connector:

ggc v

Pick a CLIP handler, then pick a vision model; prompt with your picture link to process it (see example here).
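The connector's own clip-handler flow is not shown here, but a comparable multimodal setup with llama-cpp-python looks roughly like this (model file, projector file, and image URL are placeholders):

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Pair a LLaVA-style vision GGUF model with its CLIP projector (placeholder files).
chat_handler = Llava15ChatHandler(clip_model_path="mmproj.gguf")
llm = Llama(model_path="llava.gguf", chat_handler=chat_handler, n_ctx=2048)
result = llm.create_chat_completion(messages=[{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/picture.jpg"}},
        {"type": "text", "text": "Describe this picture."},
    ],
}])
print(result["choices"][0]["message"]["content"])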

Metadata reader (CLI only)

Select model(s) with the metadata reader:

ggc r

Select model(s) with the fast metadata reader:

ggc r2

Select model(s) with the tensor reader (optional: requires torch; pip install torch):

ggc r3
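These readers surface the key/value metadata and the tensor listing stored inside a GGUF file. A comparable sketch with the reference gguf package (pip install gguf), not the connector's own code; the file name is a placeholder:

from gguf import GGUFReader

reader = GGUFReader("model.gguf")
for key in reader.fields:             # key/value metadata entries
    print(key)
for tensor in reader.tensors:         # tensor names, shapes, and types
    print(tensor.name, tensor.shape, tensor.tensor_type)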

PDF analyzer (beta feature, currently CLI only)

Load PDF(s) into a model with ctransformers:

ggc cp

Load PDF(s) into a model with the llama.cpp connector:

ggc pp
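Conceptually, the PDF path extracts the document text and folds it into the chat prompt. A hedged, simplified sketch of that idea (pypdf for extraction, ctransformers for the model; file names are placeholders):

from pypdf import PdfReader
from ctransformers import AutoModelForCausalLM

# Pull the text out of the PDF.
text = "\n".join(page.extract_text() or "" for page in PdfReader("paper.pdf").pages)
# Ask a GGUF chat model about it; truncation keeps the prompt inside the context window.
llm = AutoModelForCausalLM.from_pretrained("model.gguf", model_type="llama")
print(llm("Summarize the following document:\n" + text[:4000], max_new_tokens=128))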

Speech recognizer (beta feature; currently accepts WAV format)

Prompt WAV(s) into a model with ctransformers:

ggc cs

Prompt WAV(s) into a model with the llama.cpp connector:

ggc ps

Speech recognizer (via the Google API; online)

Prompt WAV(s) into a model with ctransformers:

ggc cg

Prompt WAV(s) into a model with the llama.cpp connector:

ggc pg
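The online route relies on Google's speech service to turn the WAV file into text before it reaches the model. A rough equivalent with the SpeechRecognition package (the WAV file name is a placeholder):

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("question.wav") as source:
    audio = recognizer.record(source)
prompt = recognizer.recognize_google(audio)   # online Google Web Speech API
print(prompt)                                 # feed this text to your chat model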

Container

Launch the page/container:

ggc w

Divider

Divide a GGUF file into part(s) at a cutoff point (size):

ggc d2
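The idea is byte-level splitting at a chosen size. The sketch below shows the general approach only; it is not the connector's exact scheme, and the part naming is made up:

def split_file(path, chunk_bytes):
    """Write path out as consecutive raw chunks of at most chunk_bytes each."""
    with open(path, "rb") as src:
        part = 0
        while True:
            data = src.read(chunk_bytes)
            if not data:
                break
            part += 1
            with open(f"{path}.part{part}", "wb") as dst:
                dst.write(data)

split_file("model.gguf", 2 * 1024**3)  # e.g. cut every 2 GB (placeholder values)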

Merger

Merge all GGUF parts into one:

ggc m2
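Merging is the inverse: the parts are stitched back together in order. An illustrative sketch, matching the made-up naming from the split example above:

import glob

parts = sorted(glob.glob("model.gguf.part*"),
               key=lambda p: int(p.rsplit("part", 1)[1]))
with open("model.gguf", "wb") as dst:
    for part in parts:                 # concatenate parts in numeric order
        with open(part, "rb") as src:
            dst.write(src.read())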

Merger (safetensors)

Merge all safetensors files into one (optional: requires torch; pip install torch):

ggc ma
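For sharded safetensors checkpoints, a merge amounts to collecting every tensor from every shard into one file. A hedged sketch with the safetensors API (the shard pattern and output name are placeholders; it assumes the shards carry disjoint tensor names):

import glob
from safetensors.torch import load_file, save_file

merged = {}
for shard in sorted(glob.glob("model-*.safetensors")):
    merged.update(load_file(shard))      # tensor name -> torch.Tensor
save_file(merged, "model.safetensors")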

Quantizer

Quantize safetensors to FP8 (downscale; optional: requires torch; pip install torch):

ggc q

Quantize safetensors to FP32 (upscale; optional: requires torch; pip install torch):

ggc q2
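Both commands amount to re-casting the tensors in a safetensors file to another precision. A simplified sketch of that idea (not necessarily what ggc q / ggc q2 do internally; torch.float8_e4m3fn needs a recent torch build, and file names are placeholders):

import torch
from safetensors.torch import load_file, save_file

tensors = load_file("model.safetensors")

# Downscale: cast floating-point tensors to FP8.
fp8 = {k: v.to(torch.float8_e4m3fn) if v.is_floating_point() else v
       for k, v in tensors.items()}
save_file(fp8, "model-fp8.safetensors")

# Upscale: cast floating-point tensors to FP32.
fp32 = {k: v.to(torch.float32) if v.is_floating_point() else v
        for k, v in tensors.items()}
save_file(fp32, "model-fp32.safetensors")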

Converter

Convert safetensors to GGUF (auto; optional: requires torch; pip install torch):

ggc t
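At its core, conversion means writing the checkpoint's tensors into the GGUF container. A bare-bones illustration with the reference gguf package (the architecture string and file names are placeholders; a real conversion also maps metadata and tensor names):

from gguf import GGUFWriter
from safetensors.torch import load_file

writer = GGUFWriter("model.gguf", arch="llama")
for name, tensor in load_file("model.safetensors").items():
    writer.add_tensor(name, tensor.float().numpy())   # store as FP32
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()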

Converter (alpha)

Convert safetensors to GGUF (meta; optional: requires torch; pip install torch):

ggc t1

Converter (beta)

Convert safetensors to GGUF (unlimited; optional: requires torch; pip install torch):

ggc t2

Converter (gamma)

Convert GGUF to safetensors (reversible; optional: requires torch; pip install torch):

ggc t3

Cutter

Get the cutter for BF16/F16 to Q2-Q8 quantization (see user guide here) by:

ggc u

Comfy

Download comfy pack (see user guide here) by:

ggc y

Node

Clone node (see user/setup guide here) by:

ggc n

Pack

Take pack (see user guide here) by:

ggc p

PackPack

Take packpack (see user guide here) by:

ggc p2

FramePack

Take framepack (portable packpack) by:

ggc p1

Menu

Enter the main menu to select a connector or get pre-trained trial model(s):

ggc m

Import as a module

Include the connector selection menu in your code by:

from gguf_connector import menu

For the standalone version, please refer to the repository in the reference list below.

References

model selector (standalone version: installable package)

cgg (cmd-based tool)

Resources

ctransformers

llama.cpp

Article

understanding gguf and the gguf-connector

Website

gguf.org (you can download the frontend from GitHub and host it locally; the backend is the Ethereum blockchain)

gguf.io (a mirror of gguf.us; note: this web3 domain might be restricted in some regions or by some service providers; if so, visit the site below instead, which is exactly the same)

gguf.us


