
gguf connector core built on llama.cpp

Project description

llama-core


This is also a standalone (solo) llama connector; it is able to work independently.

install via (pip/pip3):

pip install llama-core

run it by (python/python3):

python -m llama_core

The command above prompts the user-interface selection menu; once an option is chosen, GGUF file(s) in the current directory will be searched and detected (if any), as below.
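The detection step above can be pictured as a simple directory scan. The sketch below is an illustration only (the function name is made up), not llama-core's actual menu logic:

```python
from pathlib import Path

def find_gguf_files(directory: str = ".") -> list[str]:
    """Return the names of GGUF files in `directory`, sorted.

    A sketch of the detection step described above; the package's
    real menu implementation may differ.
    """
    return sorted(p.name for p in Path(directory).glob("*.gguf"))
```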

include the interface selector in your code by adding:

from llama_core import menu

include the gguf reader in your code by adding:

from llama_core import reader

include the gguf writer in your code by adding:

from llama_core import writer
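For orientation, every GGUF file begins with a small fixed header (magic bytes, format version, tensor count, metadata key/value count, all little-endian per the GGUF spec). A minimal sketch of parsing just that header — not llama-core's bundled reader, which does far more — looks like:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header per the GGUF spec.

    Layout: 4-byte magic b"GGUF", uint32 version, uint64 tensor count,
    uint64 metadata key/value count; all fields little-endian.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```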

remark(s)

Other functions are the same as in llama-cpp-python. For CUDA (GPU, Nvidia) and Metal (M1/M2/M3, Apple) supported settings, please specify CMAKE_ARGS following Abetlen's repo below. If you want to install from a source file (under releases), you should opt for the .tar.gz file (then build a machine-customized installable package) rather than the .whl (wheel; a pre-built binary package), with the appropriate cmake tag(s).

references

llama-cpp-python (repo)
llama.cpp
gguf.us (page)

build from llama_core-(version).tar.gz (examples for CPU setup below)

According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you can opt for w64devkit, etc., as the source/location of your gcc and g++ compilers.

for windows user(s):

$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz

On Mac, the Xcode Command Line Tools are recommended by Apple for handling all coding-related issue(s); or you can bypass them according to your own preference.

for mac user(s):

pip3 install llama_core-(version).tar.gz

for high (just a little bit better) performance seeker(s):

example setup for metal (M1/M2/M3 - Apple) - faster

CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama_core-(version).tar.gz

example setup for cuda (GPU - Nvidia) - faster x2; depends on your GPU model (how rich you are)

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama_core-(version).tar.gz

make sure your gcc and g++ are >=11; you can check this with gcc --version and g++ --version. Other requirements include cmake>=3.21, etc. However, if you opt to install from the pre-built wheel (.whl) file, you don't need to worry about any of that.
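The version check above can be automated. A small sketch (the helper name is mine) that pulls the major version out of a compiler's `--version` banner, whose exact format varies by distribution:

```python
import re

def toolchain_major(version_output: str) -> int:
    """Extract the major version from `gcc --version` / `g++ --version`
    output, e.g. "gcc (GCC) 13.2.0" -> 13.

    A sketch: it takes the first dotted version number in the banner.
    """
    match = re.search(r"\b(\d+)\.\d+(?:\.\d+)?\b", version_output)
    if match is None:
        raise ValueError("no version number found in output")
    return int(match.group(1))
```

You would feed it the captured output of `subprocess.run(["gcc", "--version"], capture_output=True, text=True).stdout` and assert the result is >= 11.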

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_core-0.4.2.tar.gz (67.6 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

llama_core-0.4.2-cp313-cp313-macosx_15_0_arm64.whl (5.7 MB)

Uploaded: CPython 3.13, macOS 15.0+ ARM64

llama_core-0.4.2-cp312-cp312-macosx_15_0_arm64.whl (5.7 MB)

Uploaded: CPython 3.12, macOS 15.0+ ARM64

llama_core-0.4.2-cp312-cp312-macosx_11_0_x86_64.whl (6.5 MB)

Uploaded: CPython 3.12, macOS 11.0+ x86-64

llama_core-0.4.2-cp311-cp311-win_amd64.whl (3.8 MB)

Uploaded: CPython 3.11, Windows x86-64

llama_core-0.4.2-cp311-cp311-macosx_15_0_arm64.whl (5.7 MB)

Uploaded: CPython 3.11, macOS 15.0+ ARM64

File details

Details for the file llama_core-0.4.2.tar.gz.

File metadata

  • Download URL: llama_core-0.4.2.tar.gz
  • Upload date:
  • Size: 67.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llama_core-0.4.2.tar.gz
SHA256: 1b2c987fabea2f9c88287d06677343882686e1fb9e1df2248e015a504d0cddd3
MD5: d6adb2850d61369c881801af0f5ce206
BLAKE2b-256: c4e7de1b1e9fc3ecb3eaa134b2283112c0d38e3a82ce7f1986cc014e919b672e

See more details on using hashes here.
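To check a downloaded archive against the digests listed here, a small standard-library-only sketch (reading in chunks so the ~67 MB sdist need not fit in memory at once):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA256 hex digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Compare the returned hex string against the SHA256 value published on this page before installing.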

File details

Details for the file llama_core-0.4.2-cp313-cp313-macosx_15_0_arm64.whl.

File metadata

File hashes

Hashes for llama_core-0.4.2-cp313-cp313-macosx_15_0_arm64.whl
SHA256: 5a7e1185a52f7a9018da66eaf11439994955440c1cce643ef7ed36f18545a8bc
MD5: 7c3d7c32fb8afe82b320c4aaceb6dcef
BLAKE2b-256: dc14c5395e8425af4c4c6ff8ae8e23554ff19d629457e65f530e4012f65797b1


File details

Details for the file llama_core-0.4.2-cp312-cp312-macosx_15_0_arm64.whl.

File metadata

File hashes

Hashes for llama_core-0.4.2-cp312-cp312-macosx_15_0_arm64.whl
SHA256: 01d110883f0f64f73721dfb18f74be20eebb87672381ec5bf33a9766d9a5c206
MD5: 076584950fc6ff5902d906364392b24d
BLAKE2b-256: da4cd9fe8792a21d53ca0470efd8f050019c2dd453ca1fe4d9746df5a4cbfed7


File details

Details for the file llama_core-0.4.2-cp312-cp312-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for llama_core-0.4.2-cp312-cp312-macosx_11_0_x86_64.whl
SHA256: 7f14345426ff9e752d91916abff6d4672963c5637b5fcd42e6372f114601c156
MD5: 0efebcc282e6507820da21f98abc6105
BLAKE2b-256: 6bb3eb326fb2308f34d13ae9d5a5b680c065a1b5187a6271ad52df8998be10a0


File details

Details for the file llama_core-0.4.2-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: llama_core-0.4.2-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 3.8 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llama_core-0.4.2-cp311-cp311-win_amd64.whl
SHA256: 4e9dd3bd1fd3445a4fb6f2b3325a483bba6e6a4eb6107d8e372b1abd3c2d8ae8
MD5: 17d5af06c7bb14e6ff081fddff4b7363
BLAKE2b-256: 2b401e7c5214caf61e68a67574eb451bffcad33008d2bc319d361786225e7580


File details

Details for the file llama_core-0.4.2-cp311-cp311-macosx_15_0_arm64.whl.

File metadata

File hashes

Hashes for llama_core-0.4.2-cp311-cp311-macosx_15_0_arm64.whl
SHA256: f062bf952fdc70ee9716144403c0368fce257d6c38a06f84a2b37ec7f242e1ec
MD5: be34be5e120cb71069507a0082f6561c
BLAKE2b-256: c7f23523d835262c71b20deb48a719ad40efcb63fd126e91ddeac6cf58ea3d8e

