
GGUF connector(s) with GUI

Project description

GGUF connector

GGUF (GPT-Generated Unified Format) is the successor of GGML (GPT-Generated Model Language); it was released on August 21, 2023. By the way, GPT stands for Generative Pre-trained Transformer.


This package is a simple graphical user interface (GUI) application that uses ctransformers or llama.cpp to interact with a chat model and generate responses.

Install the connector via pip (once only):

pip install gguf-connector

Update the connector (if a previous version is installed) by:

pip install gguf-connector --upgrade

With this version, you can interact directly with the GGUF file(s) in the current directory by a simple command.

Graphical User Interface (GUI)

Select model(s) with ctransformers (optional: need ctransformers to work; pip install ctransformers):

ggc c

Select model(s) with llama.cpp connector (optional: need llama-cpp-python to work; get it here or nightly here):

ggc cpp
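
For reference, here is a minimal sketch of what these two backends do under the hood when pointed at a local GGUF file; the connector itself handles file discovery and the chat loop, and "model.gguf" is a placeholder name:

from ctransformers import AutoModelForCausalLM  # pip install ctransformers

llm = AutoModelForCausalLM.from_pretrained("model.gguf", model_type="llama")
print(llm("Q: What is GGUF?\nA:", max_new_tokens=64))

# Roughly equivalent with llama-cpp-python (pip install llama-cpp-python):
# from llama_cpp import Llama
# llm = Llama(model_path="model.gguf")
# print(llm("Q: What is GGUF?\nA:", max_tokens=64)["choices"][0]["text"])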

Command Line Interface (CLI)

Select model(s) with ctransformers:

ggc g

Select model(s) with llama.cpp connector:

ggc gpp

Select model(s) with vision connector:

ggc v

Opt a clip handler, then opt a vision model; prompt your picture link to process it (see example here)

Metadata reader (CLI only)

Select model(s) with metadata reader:

ggc r

Select model(s) with metadata fast reader:

ggc r2

Select model(s) with tensor reader (optional: need torch to work; pip install torch):

ggc r3
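
For orientation, this is a minimal sketch of the kind of information a GGUF metadata reader starts from, based on the published GGUF header layout (little-endian); "model.gguf" is a placeholder file name:

import struct

with open("model.gguf", "rb") as f:
    magic = f.read(4)                              # b"GGUF"
    version, = struct.unpack("<I", f.read(4))      # format version (uint32)
    n_tensors, = struct.unpack("<Q", f.read(8))    # tensor count (uint64, GGUF v2+)
    n_kv, = struct.unpack("<Q", f.read(8))         # metadata key/value count (uint64)

print(magic, version, n_tensors, n_kv)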

PDF analyzor (beta feature, recently added to the CLI)

Load PDF(s) into a model with ctransformers:

ggc cp

Load PDF(s) into a model with llama.cpp connector:

ggc pp

optional: need pypdf; pip install pypdf
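
Under the hood this step relies on plain text extraction of the kind pypdf offers; a minimal sketch (the file name is a placeholder, and the extracted text is what gets handed to the chat model as context):

from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])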

Speech recognizor (beta feature; currently accepts WAV format)

Prompt WAV(s) into a model with ctransformers:

ggc cs

Prompt WAV(s) into a model with llama.cpp connector:

ggc ps

optional: need speechrecognition and pocketsphinx; pip install speechrecognition pocketsphinx
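
As a rough illustration of the transcription step (not the connector's exact code), the speechrecognition package can turn a WAV file into text offline with pocketsphinx, or online with the Google API used by the next section's commands; "sample.wav" is a placeholder:

import speech_recognition as sr  # pip install speechrecognition pocketsphinx

recognizer = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)        # read the whole WAV file

print(recognizer.recognize_sphinx(audio))    # offline (cf. ggc cs / ggc ps)
# print(recognizer.recognize_google(audio))  # online Google API (cf. ggc cg / ggc pg)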

Speech recognizor (via Google API; online)

Prompt WAV(s) into a model with ctransformers:

ggc cg

Prompt WAV(s) into a model with llama.cpp connector:

ggc pg

Container

Launch the page/container:

ggc w

Divider

Divide gguf into different part(s) with a cutoff point (size):

ggc d2

Merger

Merge all gguf into one:

ggc m2
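
Conceptually, the divider and merger work on byte ranges; here is a minimal, generic illustration of splitting a file at a size cutoff and joining the parts back (a sketch of the idea, not the connector's actual implementation):

from pathlib import Path

def split(path, cutoff_bytes):
    data = Path(path).read_bytes()
    for n, start in enumerate(range(0, len(data), cutoff_bytes), 1):
        Path(f"{path}.part{n}").write_bytes(data[start:start + cutoff_bytes])

def merge(part_paths, out_path):
    with open(out_path, "wb") as out:
        for p in part_paths:
            out.write(Path(p).read_bytes())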

Merger (safetensors)

Merge all safetensors into one (optional: need torch to work; pip install torch):

ggc ma

Splitter (checkpoint)

Split checkpoint into components (optional: need torch to work; pip install torch):

ggc s

Quantizor

Quantize safetensors to fp8 (downscale; optional: need torch to work; pip install torch):

ggc q

Quantize safetensors to fp32 (upscale; optional: need torch to work; pip install torch):

ggc q2
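
A minimal sketch of the downscale/upscale idea, assuming torch >= 2.1 (for the float8_e4m3fn dtype) and the safetensors package; file names are placeholders and the actual commands may choose dtypes differently:

import torch
from safetensors.torch import load_file, save_file

tensors = load_file("model.safetensors")
fp8 = {k: v.to(torch.float8_e4m3fn) if v.is_floating_point() else v
       for k, v in tensors.items()}
save_file(fp8, "model-fp8.safetensors")
# Upscaling (cf. ggc q2) is the same pattern with torch.float32 as the target dtype.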

Convertor

Convert safetensors to gguf (auto; optional: need torch to work; pip install torch):

ggc t

Convertor (alpha)

Convert safetensors to gguf (meta; optional: need torch to work; pip install torch):

ggc t1

Convertor (beta)

Convert safetensors to gguf (unlimited; optional: need torch to work; pip install torch):

ggc t2

Convertor (gamma)

Convert gguf to safetensors (reversible; optional: need torch to work; pip install torch):

ggc t3
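
As a rough sketch of the safetensors-to-GGUF direction, here is an illustration using the gguf package from the llama.cpp project (not the connector's own converter); the architecture string and file names are assumptions:

import numpy as np
from gguf import GGUFWriter              # pip install gguf
from safetensors.numpy import load_file  # pip install safetensors

tensors = load_file("model.safetensors")
writer = GGUFWriter("model.gguf", arch="llama")
for name, data in tensors.items():
    writer.add_tensor(name, data.astype(np.float32))
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()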

Swapper (lora)

Rename lora tensor (base/unet swappable; optional: need torch to work; pip install torch):

ggc la
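
A minimal sketch of the renaming idea, assuming torch and safetensors are installed; the base/unet prefix swap shown is purely illustrative and the key prefixes are assumptions:

from safetensors.torch import load_file, save_file

tensors = load_file("lora.safetensors")
renamed = {k.replace("base_model.", "unet."): v for k, v in tensors.items()}
save_file(renamed, "lora-renamed.safetensors")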

Remover

Tensor remover:

ggc rm

Renamer

Tensor renamer:

ggc rn

Extractor

Tensor extractor:

ggc ex

Cutter

Get cutter for bf/f16 to q2-q8 quantization (see user guide here) by:

ggc u

Comfy

Download comfy pack (see user guide here) by:

ggc y

Node

Clone node (see user/setup guide here) by:

ggc n

Pack

Take pack (see user guide here) by:

ggc p

PackPack

Take packpack (see user guide here) by:

ggc p2

FramePack (1-click long video generation)

Take framepack (portable packpack) by:

ggc p1

Run framepack (ggc edition; optional: need framepack to work; pip install framepack) by:

ggc f2

Video 1 (image to video)

Activate backend and frontend (optional: need torch and diffusers to work; pip install torch diffusers) by:

ggc v1

Video 2 (text to video)

Activate backend and frontend (optional: need torch and diffusers to work; pip install torch diffusers) by:

ggc v2

Image 2 (text to image)

Activate backend and frontend (optional: need torch and diffusers to work; pip install torch diffusers) by:

ggc i2

Kontext 2 (image editor)

Activate backend and frontend (optional: need torch and diffusers to work; pip install torch diffusers) by:

ggc k2

With lora selection:

ggc k1
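
For context, recent diffusers releases can load GGUF-quantized transformers directly, which is the kind of backend these video/image/kontext commands drive; a minimal sketch (the Flux model class and file name are assumptions for illustration):

import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "flux1-dev-Q4_K_S.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)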

Note 2 (OCR)

Activate backend and frontend (optional: need transformers to work; pip install transformers) by:

ggc n2

Speech 2 (text to speech)

Activate backend and frontend (optional: need diao to work; pip install diao) by:

ggc s2

Bagel 2 (any to any)

Activate backend and frontend (optional: need bagel2 to work; pip install bagel2) by:

ggc b2

Opt a vae then opt a model file (see example here)

Voice 2 (text to voice)

Activate backend and frontend (optional: need chichat to work; pip install chichat) by:

ggc c2

Opt a vae, a clip and a model file (see example here)

Audio 2 (text to audio)

Activate backend and frontend (optional: need fishaudio to work; pip install fishaudio) by:

ggc o2

Opt a codec then opt a model file (see example here)

Gudio 2 (text to speech)

Activate backend and frontend (optional: need gudio to work; pip install gudio) by:

ggc g2

Opt a model then opt a clip (see example here)

Menu

Enter the main menu for selecting a connector or getting pre-trained trial model(s).

ggc m

Import as a module

Include the connector selection menu in your code by:

from gguf_connector import menu

For the standalone version, please refer to the repository in the reference list (below).

References

model selector (standalone version: installable package)

cgg (cmd-based tool)

Resources

ctransformers

llama.cpp

Article

understanding gguf and the gguf-connector

Website

gguf.org (you can download the frontend from GitHub and host it locally; the backend is the Ethereum blockchain)

gguf.io (i/o is a mirror of us; note: this web3 domain might be restricted in some regions or by some service providers; if so, visit the one below instead, which is exactly the same)

gguf.us

Project details


Release history

This version

2.1.8

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gguf_connector-2.1.8.tar.gz (132.4 kB)

Uploaded: Source

Built Distribution

gguf_connector-2.1.8-py2.py3-none-any.whl (196.2 kB)

Uploaded: Python 2, Python 3

File details

Details for the file gguf_connector-2.1.8.tar.gz.

File metadata

  • Download URL: gguf_connector-2.1.8.tar.gz
  • Upload date:
  • Size: 132.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for gguf_connector-2.1.8.tar.gz
Algorithm Hash digest
SHA256 fc0b429f4f6e5aa738811b09356dc4ea3ecc1e5316bdb683e42aac42c6b9ec48
MD5 5c3f1faf166aa96624e7d8fe86f1dbe3
BLAKE2b-256 1899431b42656e1307968bdc009fc20586e1eafb73bdf0c11ca3139e45b0f2bb

See more details on using hashes here.
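
For example, the SHA256 digest above can be checked against a downloaded copy of the sdist with nothing more than the standard library (assuming the file sits in the current directory):

import hashlib

with open("gguf_connector-2.1.8.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == "fc0b429f4f6e5aa738811b09356dc4ea3ecc1e5316bdb683e42aac42c6b9ec48")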

File details

Details for the file gguf_connector-2.1.8-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for gguf_connector-2.1.8-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 6fcdafc0cd8af055b2c28fff09c52d8fd6cb24b032d1d5d826b9c1d4e166467d
MD5 1857c73ac925eb770636913c11c76238
BLAKE2b-256 38a71c940a1bc0f5c598d2986e4ff3527ed5b8f49e2b7c74378fa580eff481bd

See more details on using hashes here.
