gguf connector core built on llama.cpp

Project description

llama-core

This is also a standalone llama connector; it is able to work independently.

install via (pip/pip3):

pip install llama-core

run it by (python/python3):

python -m llama_core

The command above brings up a user-interface selection menu; once an option is chosen, any GGUF file(s) in the current directory will be searched for and detected, as shown below.
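The detection step can be sketched as follows; this is a minimal illustration using only the standard library, not the package's actual implementation:

```python
import glob
import os

def find_gguf_files(directory: str = ".") -> list[str]:
    """Scan a directory for GGUF model files, much as the menu does."""
    return sorted(glob.glob(os.path.join(directory, "*.gguf")))

# list any GGUF files in the current directory
for path in find_gguf_files():
    print(path)
```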

include interface selector to your code by adding:

from llama_core import menu

include gguf reader to your code by adding:

from llama_core import reader
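To illustrate the kind of data a GGUF reader handles, the sketch below parses just the fixed header of a GGUF file (magic, version, tensor count, metadata count) per the GGUF file format; it is a standalone example, not the package's reader API:

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the fixed-size GGUF header: magic, version, and counts (little-endian)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }
```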

include gguf writer to your code by adding:

from llama_core import writer
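Conversely, a writer emits those same header fields. The sketch below writes only the fixed GGUF header to a new file, again as a standalone illustration rather than the package's writer API:

```python
import struct

def write_gguf_header(path: str, version: int = 3,
                      tensor_count: int = 0, metadata_kv_count: int = 0) -> None:
    """Write the fixed GGUF header (magic, version, counts) to a new file."""
    with open(path, "wb") as f:
        f.write(b"GGUF")
        f.write(struct.pack("<IQQ", version, tensor_count, metadata_kv_count))
```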

remark(s)

Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2) supported settings, please specify CMAKE_ARGS following Abetlen's repo below. If you want to install from a source file (under releases), you should opt for the .tar.gz file (then build your machine-customized installable package) with the appropriate cmake tag(s), rather than the .whl (wheel; a pre-built binary package).
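For example, following llama-cpp-python's build documentation, the hardware-specific settings might look like the fragment below; the flag names come from Abetlen's repo and may change between versions, so check it before building:

```shell
# CUDA (Nvidia GPU)
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama_core-(version).tar.gz

# Metal (Apple M1/M2)
CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama_core-(version).tar.gz
```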

references

  • repo: llama-cpp-python
  • repo: llama.cpp
  • page: gguf.us

build from llama_core-(version).tar.gz (examples below are for CPU)

According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you could opt for w64devkit or similar as the source/location of your gcc and g++ compilers.

for windows user(s):

$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz

On Mac, the Xcode command line tools are recommended by Apple for handling all coding-related issues; you can also bypass them according to your own preference.

for mac user(s):

pip3 install llama_core-(version).tar.gz

Make sure your gcc and g++ are >= 11; you can check this with gcc --version and g++ --version. Other requirements include cmake >= 3.21, etc. However, if you opt to install via the pre-built wheel (.whl) file, you don't need to worry about any of that.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_core-0.3.3.tar.gz (11.1 MB)

Uploaded Source

Built Distributions

llama_core-0.3.3-cp312-cp312-macosx_14_0_arm64.whl (2.8 MB)

Uploaded CPython 3.12 macOS 14.0+ ARM64

llama_core-0.3.3-cp312-cp312-macosx_11_0_x86_64.whl (3.3 MB)

Uploaded CPython 3.12 macOS 11.0+ x86-64

llama_core-0.3.3-cp311-cp311-win_amd64.whl (3.7 MB)

Uploaded CPython 3.11 Windows x86-64

File details

Details for the file llama_core-0.3.3.tar.gz.

File metadata

  • Download URL: llama_core-0.3.3.tar.gz
  • Upload date:
  • Size: 11.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llama_core-0.3.3.tar.gz
Algorithm Hash digest
SHA256 d29dbd9dd9396787b7806b8ede47aae191242b174e82e5a80100eb3ae436a5a1
MD5 501e9267055eb96180299aea2a28d1a7
BLAKE2b-256 0e3272a0ccda2470cafc3b90678df505d043ee05817339bf47a7ce81f39c40ba

See more details on using hashes here.

File details

Details for the file llama_core-0.3.3-cp312-cp312-macosx_14_0_arm64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.3-cp312-cp312-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 5ead8b6cf882528db930d81ddbe9ecd2ee18f3bec12c5e77aad741e9abcec1d4
MD5 5846f0e5b11150fb2eb996855bce0ea6
BLAKE2b-256 22f55d78fc4a7b9beead099ae892308215b3636d9e81415322e18f23a5914555


File details

Details for the file llama_core-0.3.3-cp312-cp312-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.3-cp312-cp312-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 1900997ccf668c3fcdc0fa2b1a30f1dad078cfa98011ea259c7ecc840297827d
MD5 eebe0631406063255e155c9b248d1124
BLAKE2b-256 a12fb7cf2576d5616a3724198c02fd7d3c39920ed9bb8fefa06dccb28c5cde7e


File details

Details for the file llama_core-0.3.3-cp311-cp311-win_amd64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.3-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 ad9d4be7d66ff1f9e7ea97cb6b57c9618688b2a5ca5713b0a923393e51c2d535
MD5 fb26dd6725adbe46ac0c5e0c2af2ecce
BLAKE2b-256 d5c753b07c92620e215d9cbed41aa69a531506de56f5f3d462a2a5bf1bfbb6ed

