
gguf connector core built on llama.cpp

Project description

llama-core


This is a standalone llama connector; it is able to work independently.

install via (pip/pip3):

pip install llama-core

run it by (python/python3):

python -m llama_core

This prompts the user-interface selection menu; once an option is chosen, any GGUF file(s) in the current directory will be searched for and detected, as below.

include the interface selector in your code by adding:

from llama_core import menu
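The GGUF detection step described above amounts to a directory scan; a hypothetical sketch of it (illustrative only, not the menu's actual implementation):

```python
# Hypothetical sketch of how GGUF files in the current directory
# could be detected; not llama_core's actual menu code.
from pathlib import Path

def find_gguf_files(directory="."):
    """Return the sorted names of *.gguf files found in `directory`."""
    return sorted(p.name for p in Path(directory).glob("*.gguf"))
```

The menu would then present any returned filenames for selection.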

include the gguf reader in your code by adding:

from llama_core import reader
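For context, a GGUF file starts with a fixed header defined by the GGUF specification: the magic bytes "GGUF", a uint32 version, then uint64 tensor and metadata-KV counts, all little-endian. A minimal header check in that spirit (illustrative only, not the library's reader API):

```python
# Minimal GGUF header check based on the published GGUF spec
# (magic "GGUF", uint32 version, uint64 tensor count, uint64 KV count,
# all little-endian). Illustrative; not llama_core's actual reader.
import struct

def read_gguf_header(path):
    """Return (version, tensor_count, kv_count) or raise ValueError."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return version, tensor_count, kv_count
```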

include the gguf writer in your code by adding:

from llama_core import writer
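Conversely, writing a GGUF stream begins with that same spec-defined header, and the counts must match the metadata and tensor records that follow. A bare-bones sketch (not llama_core's actual writer API):

```python
# Emit only the fixed GGUF header fields (magic, version, tensor count,
# metadata-KV count); a real writer must follow with matching records.
import struct

def write_gguf_header(path, version=3, tensor_count=0, kv_count=0):
    with open(path, "wb") as f:
        f.write(b"GGUF")
        f.write(struct.pack("<IQQ", version, tensor_count, kv_count))
```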

remark(s)

Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2) support, specify CMAKE_ARGS following Abetlen's repo below. If you want to install from a source file (under releases), opt for the .tar.gz file (and build a machine-customized installable package) rather than the .whl (wheel; a pre-built binary package), with the appropriate cmake tag(s).
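For example, accelerated builds are typically selected via CMAKE_ARGS like the following. The flag names here follow llama-cpp-python's README and have changed across versions (older releases used -DLLAMA_CUBLAS=on for CUDA), so verify them against the current repo:

```shell
# CUDA (Nvidia GPU) build from the source tarball:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama_core-(version).tar.gz

# Metal (Apple M1/M2) build:
CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama_core-(version).tar.gz
```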

references

llama-cpp-python (repo)
llama.cpp (repo)
gguf.us (page)

build from llama_core-(version).tar.gz (examples below are for CPU)

According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you could opt for w64devkit (or similar) as the source of your gcc and g++ compilers.

for windows user(s):

$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz

On Mac, the Xcode command line tools are recommended by Apple for handling coding-related issues; you can skip them if you prefer another toolchain.

for mac user(s):

pip3 install llama_core-(version).tar.gz

Make sure your gcc and g++ are version 11 or newer; you can check with gcc --version and g++ --version. Other requirements include cmake>=3.21, etc. However, if you opt to install the pre-built wheel (.whl) file, you don't need to worry about any of that.
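The version requirements above can also be checked programmatically; a small illustrative helper that compares dotted version strings (as printed by gcc, g++, or cmake) against a minimum:

```python
# Compare a dotted version string against a minimum requirement,
# e.g. gcc >= 11 and cmake >= 3.21. The parsing logic is illustrative.

def version_tuple(text):
    """Parse '11.4.0' -> (11, 4, 0)."""
    return tuple(int(p) for p in text.strip().split(".") if p.isdigit())

def meets_minimum(version, minimum):
    """True if `version` (string) is at least `minimum` (tuple)."""
    return version_tuple(version) >= tuple(minimum)

# e.g. feed it the output of `gcc -dumpfullversion`:
# meets_minimum("11.4.0", (11,))
```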

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_core-0.3.6.tar.gz (64.0 MB)

Uploaded Source

Built Distributions

llama_core-0.3.6-cp312-cp312-macosx_14_0_arm64.whl (3.5 MB)

Uploaded CPython 3.12 macOS 14.0+ ARM64

llama_core-0.3.6-cp312-cp312-macosx_11_0_x86_64.whl (3.9 MB)

Uploaded CPython 3.12 macOS 11.0+ x86-64

llama_core-0.3.6-cp311-cp311-win_amd64.whl (3.8 MB)

Uploaded CPython 3.11 Windows x86-64

File details

Details for the file llama_core-0.3.6.tar.gz.

File metadata

  • Download URL: llama_core-0.3.6.tar.gz
  • Upload date:
  • Size: 64.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for llama_core-0.3.6.tar.gz
Algorithm Hash digest
SHA256 8dfcb396804f4b01716b01d13ea3dc678aab845ff8e059a333069005b8d6712b
MD5 da06a98b14cce640878be958fc5387de
BLAKE2b-256 6587f2d46985751576b6409955ab7ef8379bc83fa3234041d4899a6d4e0cbb4a


File details

Details for the file llama_core-0.3.6-cp312-cp312-macosx_14_0_arm64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.6-cp312-cp312-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 4859ef0366e809cad4291030d8a18244356ee218e091042f8fd24b4d498e9c8b
MD5 cb2ed104dc683e84b3d04a7ca56016d6
BLAKE2b-256 f6acc72a447e21853d00bca32823fa6e63ed4a1ef5bf7a8fab673cb46c4cd5ad


File details

Details for the file llama_core-0.3.6-cp312-cp312-macosx_11_0_x86_64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.6-cp312-cp312-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 4ee63f5857a32d3df6fd5b1a549ebf361693d8d1c12f54b48e5e08545a852d67
MD5 b427d142167b071965e9af6f34de3caa
BLAKE2b-256 8a6f6c883d52012350ea9e7add5c16f43801ab1bb5b8da6a4a913bf4759061e5


File details

Details for the file llama_core-0.3.6-cp311-cp311-win_amd64.whl.

File metadata

File hashes

Hashes for llama_core-0.3.6-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 72a85b7da2c79cbd0fd3422141040c2e632b5a7626eb8abf517ac8f490cb4cdb
MD5 bf8f11f29442988ad52c4839ca65ad81
BLAKE2b-256 b492212d376e964f44692ac5cece94d777e792f7cf5e6892bd22820d91937441

