gguf connector core built on llama.cpp

Project description

llama-core

This is also a standalone llama connector, able to work independently.

install via (pip/pip3):

pip install llama-core

run it by (python/python3):

python -m llama_core

Running the command above prompts a user-interface selection menu; once an option is chosen, GGUF file(s) in the current directory will be searched for and detected (if any), as below.
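
That detection step amounts to a directory scan; here is a minimal sketch using only the standard library (find_gguf_files is a hypothetical helper, not part of the llama_core API):

```python
from pathlib import Path

def find_gguf_files(directory: str = ".") -> list[str]:
    """Return the sorted names of GGUF files in the given directory."""
    return sorted(p.name for p in Path(directory).glob("*.gguf"))

# Example: list GGUF models in the current directory
print(find_gguf_files("."))
```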

include the interface selector in your code by adding:

from llama_core import menu

include the gguf reader in your code by adding:

from llama_core import reader

include the gguf writer in your code by adding:

from llama_core import writer

remark(s)

Other functions are the same as in llama-cpp-python. For CUDA (NVIDIA GPU) and Metal (Apple M1/M2/M3) supported settings, please specify CMAKE_ARGS following Abetlen's repo below. If you want to install it from a source file (under releases), you should opt for the .tar.gz file (then build your machine-customized installable package) rather than the .whl (wheel; a pre-built binary package), with the appropriate cmake tag(s).

references

  • llama-cpp-python (repo)
  • llama.cpp
  • gguf.us (page)

build from llama_core-(version).tar.gz (examples for CPU setup below)

According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you can opt for w64devkit, etc., as the source/location of your gcc and g++ compilers.

for windows user(s):

$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz

On Mac, the Xcode command line tools are recommended by Apple for dealing with all coding-related issue(s); or you can bypass them according to your own preference.

for mac user(s):

pip3 install llama_core-(version).tar.gz

for those seeking (slightly) higher performance:

example setup for Metal (Apple M1/M2/M3) - faster

CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama_core-(version).tar.gz

example setup for CUDA (NVIDIA GPU) - up to 2x faster; depends on your card model (how rich you are)

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama_core-(version).tar.gz

Make sure your gcc and g++ are >= 11; you can check with gcc --version and g++ --version. Other requirement(s) include cmake >= 3.21, etc. However, if you opt to install from the pre-built wheel (.whl) file, you don't need to worry about any of that.
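
That version check can also be scripted; the sketch below assumes gcc prints its usual one-line banner, and both helper names are hypothetical:

```python
import re
import subprocess

def gcc_major_version(banner: str) -> int:
    """Extract the major version from a `gcc --version` banner line."""
    match = re.search(r"\b(\d+)\.\d+\.\d+\b", banner)
    if match is None:
        raise ValueError("could not find a version number in the banner")
    return int(match.group(1))

def check_gcc(minimum: int = 11) -> bool:
    """Run `gcc --version` and verify the major version meets the minimum."""
    banner = subprocess.run(
        ["gcc", "--version"], capture_output=True, text=True, check=True
    ).stdout.splitlines()[0]
    return gcc_major_version(banner) >= minimum
```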

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_core-0.3.9.tar.gz (67.5 MB)

Uploaded Source

Built Distributions

llama_core-0.3.9-cp313-cp313-win_amd64.whl (3.7 MB)

Uploaded CPython 3.13 Windows x86-64

llama_core-0.3.9-cp312-cp312-win_amd64.whl (3.7 MB)

Uploaded CPython 3.12 Windows x86-64

llama_core-0.3.9-cp312-cp312-macosx_14_0_arm64.whl (3.5 MB)

Uploaded CPython 3.12 macOS 14.0+ ARM64

llama_core-0.3.9-cp312-cp312-macosx_11_0_x86_64.whl (3.3 MB)

Uploaded CPython 3.12 macOS 11.0+ x86-64

llama_core-0.3.9-cp311-cp311-win_amd64.whl (3.7 MB)

Uploaded CPython 3.11 Windows x86-64

llama_core-0.3.9-cp310-cp310-win_amd64.whl (3.7 MB)

Uploaded CPython 3.10 Windows x86-64
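
The wheel filenames listed above encode compatibility tags per the PEP 427 naming convention, {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl. A minimal parser sketch (build-tagged filenames with six fields are not handled here):

```python
def parse_wheel_filename(filename: str) -> dict:
    """Split a five-field PEP 427 wheel filename into its components."""
    if not filename.endswith(".whl"):
        raise ValueError("not a wheel filename")
    stem = filename[: -len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "name": name,
        "version": version,
        "python_tag": python_tag,
        "abi_tag": abi_tag,
        "platform_tag": platform_tag,
    }
```

For example, cp310-cp310-win_amd64 means CPython 3.10 with the CPython 3.10 ABI on 64-bit Windows.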

File details

Details for the file llama_core-0.3.9.tar.gz.

File metadata

  • Download URL: llama_core-0.3.9.tar.gz
  • Upload date:
  • Size: 67.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0rc1

File hashes

Hashes for llama_core-0.3.9.tar.gz

  • SHA256: 821126500808f2eb96784a9de423285b53ae01eacb1433b63431891e4b8a9e86
  • MD5: 9f57a95891bb4af99543b852acb7202e
  • BLAKE2b-256: 26b8287c1abcca4ce340f4b4941b820bfa68e908678dae19672204384b05f23c

See more details on using hashes here.
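
To verify a download against the digests above, the standard library is enough; sha256_of is a sketch, not a helper shipped with the package:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage, with the digest listed above:
# assert sha256_of("llama_core-0.3.9.tar.gz") == (
#     "821126500808f2eb96784a9de423285b53ae01eacb1433b63431891e4b8a9e86"
# )
```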

File details

Details for the file llama_core-0.3.9-cp313-cp313-win_amd64.whl.

File hashes

Hashes for llama_core-0.3.9-cp313-cp313-win_amd64.whl

  • SHA256: d7ee3cdcd7482ee9242252b3f70e2942096cb0795e45e51145a5405d932b800a
  • MD5: 5c0fa7d11435e4ca6d4a9f6d6204ec64
  • BLAKE2b-256: 5e60a44ddcc8f8c83b4281f7ad4c381e84f4bd26e45028747343463d630ae145

File details

Details for the file llama_core-0.3.9-cp312-cp312-win_amd64.whl.

File hashes

Hashes for llama_core-0.3.9-cp312-cp312-win_amd64.whl

  • SHA256: c916c6b9160e798bba0c36f2a669777d24f953510ab9a1568fc9bf002c0d1c91
  • MD5: 1345b1af845c682ca5035418143d6436
  • BLAKE2b-256: d3ef0b15e4394321e68810eb619ecab908b256a2f53c727f2981d3817e6acfc1

File details

Details for the file llama_core-0.3.9-cp312-cp312-macosx_14_0_arm64.whl.

File hashes

Hashes for llama_core-0.3.9-cp312-cp312-macosx_14_0_arm64.whl

  • SHA256: f39d7617f8753782e8d40e3d6d5b07bb33c1a0ba1041d66b2a276fd3466dc14f
  • MD5: c7a834dfdb64b9196e145bffee8ffbe3
  • BLAKE2b-256: af63558159610018c4eef7ef16f3427acf6544f441f129c231ff59228a77f8ca

File details

Details for the file llama_core-0.3.9-cp312-cp312-macosx_11_0_x86_64.whl.

File hashes

Hashes for llama_core-0.3.9-cp312-cp312-macosx_11_0_x86_64.whl

  • SHA256: bfaab10426c438139f20ad02a902e025c6f92ec5d4631ad023a14a843d313e70
  • MD5: 46e3b638472ef296033b10434d114e66
  • BLAKE2b-256: 3a8f25d53541bbaab083788eb8ffc60ac170f6b08f56858eb0b420275b7121f2

File details

Details for the file llama_core-0.3.9-cp311-cp311-win_amd64.whl.

File hashes

Hashes for llama_core-0.3.9-cp311-cp311-win_amd64.whl

  • SHA256: 0bbf79d72ade10b467066221bc7e22f90136d4a4cdbcc4e46e278f307694b026
  • MD5: 6ca2af2ee38134e04e75c96227c3b37a
  • BLAKE2b-256: 0b845eb05a6f35c5b40299c8420d569a539ad86da3175ab487ef95001e67a4ef

File details

Details for the file llama_core-0.3.9-cp310-cp310-win_amd64.whl.

File hashes

Hashes for llama_core-0.3.9-cp310-cp310-win_amd64.whl

  • SHA256: 2a809e298cddcc18f65b12bfff1f1949254b5ea8323d1d6227721a165014d0b0
  • MD5: 460c2516d4eface9d5805583f45a41cc
  • BLAKE2b-256: 451e8b6e2491b4738796d0b61e533e16179ec4b9b4a29d238d52f6677d91bcbd
