gguf connector core built on llama.cpp

Project description

llama-core

This is also a standalone llama connector; it is able to work independently.

install via (pip/pip3):

pip install llama-core

run it with (python/python3):

python -m llama_core

This prompts the user-interface selection menu; once an option is chosen, the current directory is searched and any GGUF file(s) found there are detected.
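
Conceptually, that detection step is just a directory scan; a minimal equivalent in plain Python (the menu's actual internals are not shown on this page):

from pathlib import Path

# look for GGUF files in the current working directory, as the menu does
gguf_files = sorted(Path(".").glob("*.gguf"))
print(gguf_files if gguf_files else "no GGUF file found")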

include the interface selector in your code by adding:

from llama_core import menu

include the gguf reader in your code by adding:

from llama_core import reader

include the gguf writer in your code by adding:

from llama_core import writer
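
As a quick illustration, the reader can be pointed at a local GGUF file to list its metadata keys; this is a minimal sketch assuming the reader module mirrors the upstream gguf package's GGUFReader class (an assumption, not confirmed on this page), with a placeholder model path:

# hypothetical sketch: GGUFReader and .fields follow the upstream gguf
# package's API; verify the names against your installed llama_core version
from llama_core import reader

gguf_file = reader.GGUFReader("model.gguf")  # placeholder path
for field in gguf_file.fields.values():      # one entry per metadata key
    print(field.name)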

remark(s)

Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2/M3) support, specify CMAKE_ARGS as described in Abetlen's repo below. If you want to install from a source file (under releases), opt for the .tar.gz file with the appropriate cmake flag(s) and build a machine-customized installable package, rather than the .whl (wheel; a pre-built binary package).
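
Since the callable API matches llama-cpp-python, a basic completion would look like the sketch below; it assumes llama_core re-exports the Llama class the way llama-cpp-python exposes it (an assumption, not confirmed on this page), and model.gguf is a placeholder path:

# sketch only: assumes llama_core mirrors llama-cpp-python's Llama class
from llama_core import Llama

llm = Llama(model_path="model.gguf")  # placeholder path to a local GGUF model
output = llm("Q: What is GGUF? A: ", max_tokens=48, stop=["Q:"])  # plain completion call
print(output["choices"][0]["text"])  # response follows the OpenAI-style dict layout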

references

  • llama-cpp-python (repo)
  • llama.cpp
  • gguf.us (page)

build from llama_core-(version).tar.gz (examples for CPU setup below)

According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you can opt for w64devkit or similar as the source/location of your gcc and g++ compilers.

for windows user(s):

$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz

On Mac, the Xcode command line tools are recommended by Apple for handling all coding-related issues; alternatively, you can bypass them according to your own preference.

for mac user(s):

pip3 install llama_core-(version).tar.gz

for seekers of higher (just a little bit better) performance:

example setup for Metal (Apple M1/M2/M3) - faster

CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama_core-(version).tar.gz

example setup for CUDA (Nvidia GPU) - up to 2x faster, depending on your card (how rich you are)

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama_core-(version).tar.gz

Make sure your gcc and g++ are >= 11; you can check with gcc --version and g++ --version. Other requirements include cmake >= 3.21, etc. However, if you opt to install from the pre-built wheel (.whl) file, you don't need to worry about any of this.
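
To script those checks instead of running the commands by hand, a small plain-Python helper (no llama_core required) could print the first version line of each tool:

import shutil
import subprocess

# report the versions of gcc, g++, and cmake, or flag any tool missing from PATH
for tool in ("gcc", "g++", "cmake"):
    if shutil.which(tool) is None:
        print(f"{tool}: not found on PATH")
    else:
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        print(result.stdout.splitlines()[0])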

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • llama_core-0.4.0.tar.gz (64.0 MB) - Source

Built Distributions

  • llama_core-0.4.0-cp313-cp313-win_amd64.whl (3.7 MB) - CPython 3.13, Windows x86-64
  • llama_core-0.4.0-cp312-cp312-win_amd64.whl (3.7 MB) - CPython 3.12, Windows x86-64
  • llama_core-0.4.0-cp312-cp312-macosx_14_0_arm64.whl (3.5 MB) - CPython 3.12, macOS 14.0+ ARM64
  • llama_core-0.4.0-cp312-cp312-macosx_11_0_x86_64.whl (3.3 MB) - CPython 3.12, macOS 11.0+ x86-64
  • llama_core-0.4.0-cp311-cp311-win_amd64.whl (3.8 MB) - CPython 3.11, Windows x86-64
  • llama_core-0.4.0-cp310-cp310-win_amd64.whl (3.7 MB) - CPython 3.10, Windows x86-64
  • llama_core-0.4.0-cp39-cp39-win_amd64.whl (3.7 MB) - CPython 3.9, Windows x86-64
  • llama_core-0.4.0-cp38-cp38-win_amd64.whl (3.7 MB) - CPython 3.8, Windows x86-64

File details

Details for the file llama_core-0.4.0.tar.gz.

File metadata

  • Download URL: llama_core-0.4.0.tar.gz
  • Upload date:
  • Size: 64.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0rc1

File hashes

Hashes for llama_core-0.4.0.tar.gz
Algorithm Hash digest
SHA256 26cf357b1acfb742812e30eadc72c7eaf122fd5c46c6e3f89d16fdad5beb6edc
MD5 fca7bf7d957b3bcc2c50cdf87e1e201e
BLAKE2b-256 769b6dc38a99a85409096915b0cb89bee00756a75ccdb4879802a1ba2947ad0e

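To verify a downloaded file against the digests listed here, recompute the SHA256 locally; for example, for the source distribution above:

import hashlib

# recompute the archive's SHA256 and compare it with the digest listed above
expected = "26cf357b1acfb742812e30eadc72c7eaf122fd5c46c6e3f89d16fdad5beb6edc"
with open("llama_core-0.4.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("hash OK" if actual == expected else "hash MISMATCH")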

File details

Details for the file llama_core-0.4.0-cp313-cp313-win_amd64.whl.

File hashes

Hashes for llama_core-0.4.0-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 5ebfcb2fc1c6c39a5745f71243e29fce3511392f8a97eef478f8b85765941e5f
MD5 6e72886ad3cec82b78ad3fc3bf7e6a46
BLAKE2b-256 a4abc1a22363dcb731a625174df2105f40ef8a04b9ec675ebbaf059e24c5e7b1

File details

Details for the file llama_core-0.4.0-cp312-cp312-win_amd64.whl.

File hashes

Hashes for llama_core-0.4.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 30433cea35ab93eab6cd2c891a302556ae24af330d6f30c81b041c5ac7e5ece4
MD5 ace9ffa1d2d8eacbbefd3762a388bb45
BLAKE2b-256 5e6c3fdc4fa386d476e6a8dccf980169123355f4b87c4056075c6de6f7b3d566

File details

Details for the file llama_core-0.4.0-cp312-cp312-macosx_14_0_arm64.whl.

File hashes

Hashes for llama_core-0.4.0-cp312-cp312-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 74c2d8bd2e9abb4e5e4ec57501511ef29821234d47940882b70cca15e20e772f
MD5 6434762f70c258b02d28ec3213e44e1c
BLAKE2b-256 49d7b8ff130f94c3586a7324c8fc2fcef8b5caab9de60edd2728fe0ac9c92f2b

File details

Details for the file llama_core-0.4.0-cp312-cp312-macosx_11_0_x86_64.whl.

File hashes

Hashes for llama_core-0.4.0-cp312-cp312-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 0add75f40dbd1c6749a3edb72fe597b0ca8f4032f7383662fc9379a15a5476bb
MD5 761fef39f785ab5b7f362e3b2cd97f70
BLAKE2b-256 bacae94f0564d1796fed1863f66f567f1f29262c4b71a2a6eb8d80a361bf6e68

File details

Details for the file llama_core-0.4.0-cp311-cp311-win_amd64.whl.

File hashes

Hashes for llama_core-0.4.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 3278e7029d7f3e7b546b4a657865e86179591a2df8596829fb4566ff30b757b4
MD5 591f91549b8d18c771704e0f93142ee8
BLAKE2b-256 c31f575b9f949a92bb4c262c78b96c3e874e5377442571a38f2885ff933ad688

File details

Details for the file llama_core-0.4.0-cp310-cp310-win_amd64.whl.

File hashes

Hashes for llama_core-0.4.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 b54205502ec23defc04a5966d04b17c44be90014484d511f001b3aa6b883fea2
MD5 e4c97520567b0417b468077ec7f6e10f
BLAKE2b-256 252b8115fef8470f93084ad6b5f7284ec0fe74eb65ccb9571d529607e14f1cc0

File details

Details for the file llama_core-0.4.0-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: llama_core-0.4.0-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 3.7 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0rc1

File hashes

Hashes for llama_core-0.4.0-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 db1eb4a61677d0eeccb40ace7f4c869e712339f90e12e7d35cbc4b377d4a0d22
MD5 1e3bf74c01d5d52da2e0d4f11a1143c3
BLAKE2b-256 d28e4f3a3b00564cdf443c9ecedd2c7d822635604aaad0e8e87030c395a9cf83

File details

Details for the file llama_core-0.4.0-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: llama_core-0.4.0-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 3.7 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0rc1

File hashes

Hashes for llama_core-0.4.0-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 24a6df1ecb8aa68b927ba6ba6f8d521ed7c3968f30e4517878b50296a2207e4e
MD5 c40824618e4d553235d7011a40ec13c5
BLAKE2b-256 61fd945c7abf93364792eaf694e8d2cd19314c140d32b876e68510784358fd85
