gguf connector core built on llama.cpp
Project description
llama-core
This is also a standalone llama connector; it is able to work independently.
install via (pip/pip3):
pip install llama-core
run it by (python/python3):
python -m llama_core
Running it prompts the user-interface selection menu above; once an option is chosen, GGUF file(s) in the current directory will be searched for and detected (if any) as below.
include the interface selector in your code by adding:
from llama_core import menu
include the gguf reader in your code by adding:
from llama_core import reader
include the gguf writer in your code by adding:
from llama_core import writer
remark(s)
Other functions are the same as in llama-cpp-python; for CUDA (GPU, Nvidia) and Metal (M1/M2, Apple) supported settings, please specify CMAKE_ARGS
following Abetlen's repo below. If you want to install from a source file (under releases), you should opt for the .tar.gz file (then build your machine-customized installable package) rather than the .whl (wheel; a pre-built binary package), with the appropriate cmake tag(s).
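As a sketch of what those settings look like, the examples below set CMAKE_ARGS before installing; note that the exact backend flag names depend on the llama.cpp version bundled in the release, so check Abetlen's repo for the ones matching your version (older releases used -DLLAMA_CUBLAS=on / -DLLAMA_METAL=on):

```shell
# CUDA (Nvidia GPU) build — flag name varies by bundled llama.cpp version
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-core

# Metal (M1/M2, Apple) build
CMAKE_ARGS="-DGGML_METAL=on" pip3 install llama-core
```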
references
repo llama-cpp-python llama.cpp page gguf.us
build from llama_core-(version).tar.gz (examples below are for CPU)
According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you can opt for w64devkit, etc., as the source of your gcc and g++ compilers.
for windows user(s):
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz
On Mac, Xcode command line tools are recommended by Apple for handling all coding-related issue(s); or you can bypass them according to your own preference.
for mac user(s):
pip3 install llama_core-(version).tar.gz
Make sure your gcc and g++ versions are >=11; you can check them with: gcc --version and g++ --version. Other requirements include: typing-extensions>=4.5.0, numpy>=1.20.0, diskcache>=5.6.1, jinja2>=2.11.3, MarkupSafe>=2.0, cmake>=3.21, etc.; however, if you opt to install from the pre-built wheel (.whl) file, you don't need to worry about these.
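The version floors above can be checked with a small helper; a minimal sketch using plain numeric tuple comparison (no pre-release handling — `packaging.version` is more robust for real use):

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Return True if a dotted version string meets a minimum requirement."""
    def parts(v: str) -> list[int]:
        return [int(p) for p in v.split(".")]
    a, b = parts(installed), parts(required)
    # pad the shorter list with zeros so that e.g. "1.20" == "1.20.0"
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a >= b

# minimum requirements listed above
requirements = {
    "typing-extensions": "4.5.0",
    "numpy": "1.20.0",
    "diskcache": "5.6.1",
    "jinja2": "2.11.3",
    "MarkupSafe": "2.0",
    "cmake": "3.21",
}
```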
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distributions
Hashes for llama_core-0.1.4-cp312-cp312-macosx_11_0_x86_64.whl
Algorithm | Hash digest
---|---
SHA256 | 60306a3eaf78c0cb360a83b468457e23a0ea151c77764fd969adfd155a4e38d8
MD5 | 90ce0e1ec10a9679c48c495e0e3f99d8
BLAKE2b-256 | 338390d648b606c82c8d45cd83ad4994d01b56fbcdbf44050f0ae3983cf109d4
Hashes for llama_core-0.1.4-cp311-cp311-win_amd64.whl
Algorithm | Hash digest
---|---
SHA256 | 1e9a019914803f1535bc7162387d8dc1ffb323f5175be5c003b78527d11ed5b8
MD5 | 9c4c37eb4b986a63c2829e8b23225f5f
BLAKE2b-256 | 5bbd904d54b26231d7531a98496bd0506a10085a97e6d157740c8ac4a2e1aa51