
GGUF connector(s) with GUI


GGUF connector

GGUF (GPT-Generated Unified Format) is the successor to GGML (GPT-Generated Model Language); it was released on August 21, 2023. By the way, GPT stands for Generative Pre-trained Transformer.


This package is a simple graphical user interface (GUI) application that uses ctransformers or llama.cpp to interact with a chat model and generate responses.

Install the connector via pip (once only):

pip install gguf-connector

Update the connector (if a previous version is installed) with:

pip install gguf-connector --upgrade

With this version, you can interact directly with the GGUF file(s) in the current directory using a single command.
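For example, a typical first run might look like the following (a minimal sketch; the folder path and model file name are hypothetical, and ggc cpp assumes llama-cpp-python is installed as described below):

pip install gguf-connector
cd path/to/your/models   # hypothetical folder containing one or more .gguf files
ggc cpp                  # pick a listed model and start chatting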

Graphical User Interface (GUI)

Select model(s) with ctransformers (optional: need ctransformers to work; pip install ctransformers):

ggc c

Select model(s) with llama.cpp connector (optional: need llama-cpp-python to work; get it here or nightly here):

ggc cpp
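If you would rather install the llama.cpp bindings from PyPI than use the prebuilt wheels linked above, the usual package name is llama-cpp-python (note: building it from source may require a local compiler toolchain):

pip install llama-cpp-python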

Command Line Interface (CLI)

Select model(s) with ctransformers:

ggc g

Select model(s) with llama.cpp connector:

ggc gpp

Select model(s) with vision connector:

ggc v

Opt a clip handler, then opt a vision model; prompt your picture link to process it (see example here)

Metadata reader (CLI only)

Select model(s) with metadata reader:

ggc r

Select model(s) with metadata fast reader:

ggc r2

Select model(s) with tensor reader (optional: need torch to work; pip install torch):

ggc r3

PDF analyzer (recently added beta feature; CLI only)

Load PDF(s) into a model with ctransformers:

ggc cp

Load PDF(s) into a model with llama.cpp connector:

ggc pp

optional: need pypdf; pip install pypdf
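A minimal sketch of the PDF workflow, assuming the PDF(s) sit in the same working directory as the model file:

pip install pypdf        # one-time, for PDF parsing
cd path/to/your/files    # hypothetical folder containing the .gguf model and the PDF(s)
ggc pp                   # pick a model, load the PDF(s), then ask questions about them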

Speech recognizer (beta feature; currently accepts WAV format)

Prompt WAV(s) into a model with ctransformers:

ggc cs

Prompt WAV(s) into a model with llama.cpp connector:

ggc ps

optional: need speechrecognition and pocketsphinx to work; pip install speechrecognition pocketsphinx

Speech recognizer (via Google API; online)

Prompt WAV(s) into a model with ctransformers:

ggc cg

Prompt WAV(s) into a model with llama.cpp connector:

ggc pg

Container

Launch the page/container:

ggc w

Divider

Divide a gguf into part(s) at a given cutoff point (size):

ggc d2

Merger

Merge all gguf into one:

ggc m2
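A hedged round-trip sketch (the exact prompts are interactive; part naming and the cutoff size are handled by the tool):

ggc d2   # pick a gguf and enter a cutoff size; parts are written to the current directory (assumed behavior)
ggc m2   # later, merge all gguf parts in the current directory back into a single file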

Merger (safetensors)

Merge all safetensors into one (optional: need torch to work; pip install torch):

ggc ma

Splitter (checkpoint)

Split checkpoint into components (optional: need torch to work; pip install torch):

ggc s

Quantizor

Quantize safetensors to fp8 (downscale; optional: need torch to work; pip install torch):

ggc q

Quantize safetensors to fp32 (upscale; optional: need torch to work; pip install torch):

ggc q2

Convertor

Convert safetensors to gguf (auto; optional: need torch to work; pip install torch):

ggc t

Convertor (alpha)

Convert safetensors to gguf (meta; optional: need torch to work; pip install torch):

ggc t1

Convertor (beta)

Convert safetensors to gguf (unlimited; optional: need torch to work; pip install torch):

ggc t2

Convertor (gamma)

Convert gguf to safetensors (reversible; optional: need torch to work; pip install torch):

ggc t3

Swapper (lora)

Rename lora tensor (base/unet swappable; optional: need torch to work; pip install torch):

ggc la

Remover

Tensor remover:

ggc rm

Renamer

Tensor renamer:

ggc rn

Extractor

Tensor extractor:

ggc ex

Cutter

Get cutter for bf/f16 to q2-q8 quantization (see user guide here) by:

ggc u

Comfy

Download comfy pack (see user guide here) by:

ggc y

Node

Clone node (see user/setup guide here) by:

ggc n

Pack

Take pack (see user guide here) by:

ggc p

PackPack

Take packpack (see user guide here) by:

ggc p2

FramePack (1-click long video generation)

Take framepack (portable packpack) by:

ggc p1

Run framepack (ggc edition) by (optional: need framepack to work; pip install framepack):

ggc f2

Smart contract generator (solidity)

Activate backend and frontend by (optional: need transformers to work; pip install transformers):

ggc g1

Video 1 (image to video)

Activate backend and frontend by (optional: need torch and diffusers to work; pip install torch diffusers):

ggc v1

Video 2 (text to video)

Activate backend and frontend by (optional: need torch and diffusers to work; pip install torch diffusers):

ggc v2

Image 2 (text to image)

Activate backend and frontend by (optional: need torch and diffusers to work; pip install torch diffusers):

ggc i2

Kontext 2 (image editor)

Activate backend and frontend by (optional: need torch and diffusers to work; pip install torch diffusers):

ggc k2

With lora selection:

ggc k1

With memory economy mode (low/no VRAM, or without a GPU):

ggc k3

Krea 4 (image generator)

Activate backend and frontend by (optional: need torch and diffusers to work; pip install torch diffusers):

ggc k4

Note 2 (OCR)

Activate backend and frontend by (optional: need transformers to work; pip install transformers):

ggc n2

Image descriptor (image to text)

Activate backend and frontend by (optional: need torch to work; pip install torch):

ggc f5

Real-time live captioning:

ggc f7

Connector mode, opt a gguf to interact with (see example here):

ggc f6

Activate accurate/precise mode by (optional: need vtoo to work; pip install vtoo):

ggc h3

Opt a model file to interact with (see example here)

Speech 2 (text to speech)

Activate backend and frontend by (optional: need diao to work; pip install diao):

ggc s2

Higgs 2 (text to audio)

Activate backend and frontend by (optional: need higgs to work; pip install higgs):

ggc h2

Multilingual support, e.g., Spanish, German, Korean, etc.

Bagel 2 (any to any)

Activate backend and frontend by (optional: need bagel2 to work; pip install bagel2):

ggc b2

Opt a vae then opt a model file (see example here)

Voice 2 (text to voice)

Activate backend and frontend by (optional: need chichat to work; pip install chichat):

ggc c2

Opt a vae, a clip (t3-cfg) and a model file (see example here)

Voice 3 (text to voice)

Multilingual (optional: need chichat to work; pip install chichat):

ggc c3

Opt a vae, a clip (t3-23lang) and a model file (see example here)

Audio 2 (text to audio)

Activate backend and frontend by (optional: need fishaudio to work; pip install fishaudio):

ggc o2

Opt a codec then opt a model file (see example here)

Krea 7 (image generator)

Activate backend and frontend by (optional: need dequantor to work; pip install dequantor):

ggc k7

Opt a model file in the current directory (see example here)

Kontext 8 (image editor)

Activate backend and frontend by (optional: need dequantor to work; pip install dequantor):

ggc k8

Opt a model file in the current directory (see example here)

Flux connector (all-in-one)

Select flux image model(s) with k connector:

ggc k

Qwen image connector

Select qwen image model(s) with q5 connector:

ggc q5

Opt a model file to interact with (see example here)

Qwen image edit connector

Select image edit model(s) with q6 connector:

ggc q6

Opt a model file to interact with (see example here)

Qwen image edit plus connector

Select image edit plus model(s) with q7 connector:

ggc q7

Qwen image edit plus connector - multiple image input

Select image edit plus model(s) with q8 connector:

ggc q8

Opt a model file to interact with (see example here)

Qwen image edit ++ connector - multiple image input

Select image edit plus/lite model(s) with q9 connector (need nunchaku to work; get it here):

ggc q9

Opt a scaled 4-bit safetensors model file to interact with

Qwen image lite connector - multiple image input

Select image edit lite model(s) with q0 connector:

ggc q0

Qwen image lite2 connector - multiple image input

Select image edit lite2 model(s) with p0 connector:

ggc p0

Qwen image lite3 connector - multiple image input

Select image edit lite3 model(s) with p9 connector:

ggc p9

Z image connector

Select z image model(s) with z1 connector:

ggc z1

Lumina image connector

Select lumina image model(s) with l2 connector:

ggc l2

Wan video connector

Select wan video model(s) with w2 connector:

ggc w2

Ltxv connector

Select ltxv model(s) with x2 connector:

ggc x2

Mochi connector

Select mochi model(s) with m1 connector:

ggc m1

Kx-lite connector

Select kontext model(s) with k0 connector:

ggc k0

Opt a model file to interact with (see example here)

SD-lite connector

Select sd3.5 model(s) with s3 connector:

ggc s3

Opt a model file to interact with (see example here)

Higgs audio connector

Select higgs model(s) with h6 connector:

ggc h6

Opt a model file to interact with (see example here)

Dia connector (text to speech)

Select dia model(s) with s6 connector:

ggc s6

Opt a model file to interact with (see example here)

FastVLM connector (image-text to text)

Select fastvlm model(s) with f9 connector:

ggc f9

Opt a model file to interact with (see example here)

VibeVoice connector (text/voice to speech)

Select vibevoice model(s) with v6 connector (optional: need yvoice to work; pip install yvoice):

ggc v6

Opt a model file to interact with (see example here)

Docling connector (image/document to text)

Select docling model(s) with n3 connector:

ggc n3

Opt a model file to interact with (see example here)

Gudio (text to speech)

Activate backend and frontend by (optional: need gudio to work; pip install gudio):

ggc g2

Opt a model then opt a clip (see example here)

Sketch (draw something awesome)

Sketch gguf connector (optional: need dequantor to work; pip install dequantor):

ggc s8

Sketch safetensors connector (optional: need nunchaku to work; get it here):

ggc s9

Opt a model file to interact with (see example here)

Studio

Studio connector:

ggc w9

Opt a recognizer, a generator, and a transformer to interact with (see example here)

api (self-hosted backend)

Fast sd3.5 connector:

ggc w8

Fast lumina connector:

ggc w7

Fast flux connector:

ggc w6

Frontend: test.gguf.org or localhost (open a new console/terminal and run: ggc b), as sketched below.
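For example, a hypothetical local setup using the fast flux backend:

ggc w6   # terminal 1: start the self-hosted backend
ggc b    # terminal 2: launch the local frontend (assumed to serve it on localhost)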

api (self-hosted backend) - currently an exclusive trial for 🐷 holders

Fast image-to-video connector:

ggc e5

Fast video connector:

ggc e6

Fast scan connector:

ggc e7

Fast edit connector:

ggc e8

Fast plus connector:

ggc e9

Frontend: gguf.org or localhost (open a new console/terminal and run: ggc a)

chatpig - GPT-like frontend and backend

Frontend chatpig launcher:

ggc b4

Backend chatpig connector (optional: need llama-cpp-python to work):

ggc e4

Opt any gguf model to load (text-based generation)

cli chatbot

Launcher:

ggc l

computer use 🤖🕹️

Computer use agent (optional: need gguf-cua to work; pip install gguf-cua):

ggc cu

Get the model file(s) here for the backend

vibe code 👾🎮

Vibe code agent (optional: need coder to work; npm install -g @gguf/coder):

ggc vc

Menu

Enter the main menu for selecting a connector or getting pre-trained trial model(s).

ggc m

Import as a module

Include the connector selection menu in your code with:

from gguf_connector import menu
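A minimal sketch (the script name is hypothetical; importing the module is assumed to start the interactive selection menu, listing the GGUF file(s) found in the current working directory):

# launcher.py (hypothetical)
# importing the menu module brings up the connector selection menu
from gguf_connector import menu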

For the standalone version, please refer to the repository in the reference list below.

References

model selector (standalone version: installable package)

cgg (cmd-based tool)

Resources

ctransformers

llama.cpp

Article

understanding gguf and the gguf-connector

Website

gguf.org (you can download the frontend from GitHub and host it locally; the backend is the Ethereum blockchain)

gguf.io (.io is a mirror of .us; note: access to this web3 domain might be restricted in some regions or by some service providers; if so, visit the site above or below instead, it is exactly the same)

gguf.us
