gguf connector core built on llama.cpp
Project description
llama-core
This is also a standalone llama connector, able to work independently.
install via pip (or pip3):
pip install llama-core
run it with python (or python3):
python -m llama_core
This launches a user-interface selection menu; once an option is chosen, any GGUF file(s) in the current directory will be searched for and listed.
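The directory scan behind the menu can be sketched roughly as follows (a minimal illustration using the standard library, not the package's actual implementation):

```python
from pathlib import Path

def find_gguf_files(directory: str = ".") -> list[str]:
    # Collect the names of all GGUF files in the given directory
    return sorted(p.name for p in Path(directory).glob("*.gguf"))

# Example: list GGUF models in the current working directory
print(find_gguf_files())
```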
include the interface selector in your code by adding:
from llama_core import menu
include the gguf reader in your code by adding:
from llama_core import reader
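The reader's API is not documented here, but every GGUF file begins with a fixed preamble: the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A minimal header check (a hypothetical helper, not this package's API) looks like:

```python
import struct

def read_gguf_header(path: str) -> tuple[bytes, int]:
    # A GGUF file starts with the 4-byte magic b"GGUF",
    # followed by a little-endian uint32 format version.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: {magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return magic, version
```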
include the gguf writer in your code by adding:
from llama_core import writer
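As a mirror of the header check above, emitting a minimal GGUF preamble can be sketched like this (an illustrative helper assuming GGUF v3's layout of magic, version, and two uint64 counts; not this package's API):

```python
import struct

def write_gguf_header(path: str, version: int = 3) -> None:
    # Emit the minimal GGUF v3 preamble: 4-byte magic, uint32 version,
    # and zeroed tensor/metadata counts (two little-endian uint64 fields).
    with open(path, "wb") as f:
        f.write(b"GGUF")
        f.write(struct.pack("<I", version))
        f.write(struct.pack("<QQ", 0, 0))  # tensor count, metadata KV count
```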
remark(s)
Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2) supported settings, please specify CMAKE_ARGS following Abetlen's repo below. If you want to install from a source file (under releases), you should opt for the .tar.gz file (then build a machine-customized installable package) rather than the .whl (wheel; a pre-built binary package), with the appropriate cmake tag(s).
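As a sketch, GPU builds in the llama-cpp-python style pass backend flags through CMAKE_ARGS; the exact flag names vary by llama.cpp version (older builds used `-DLLAMA_CUBLAS=on` / `-DLLAMA_METAL=on`), so check the upstream repo before relying on these:

```shell
# CUDA (Nvidia GPU) build — flag name assumes a recent llama.cpp
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-core

# Metal (Apple Silicon) build
CMAKE_ARGS="-DGGML_METAL=on" pip install llama-core
```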
references
- llama-cpp-python (repo)
- llama.cpp (repo)
- gguf.us (page)
build from llama_core-(version).tar.gz (examples below are for CPU)
According to the latest note inside VS Code, msys64 is recommended by Microsoft; alternatively, you can opt for w64devkit or similar as the source/location of your gcc and g++ compilers.
for windows user(s):
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe"
pip install llama_core-(version).tar.gz
On Mac, the Xcode command line tools are recommended by Apple for handling all coding-related issues; you can bypass them according to your own preference.
for mac user(s):
pip3 install llama_core-(version).tar.gz
Make sure your gcc and g++ are >= 11; you can check with gcc --version and g++ --version. Other requirements include cmake >= 3.21, etc. However, if you opt to install from the pre-built wheel (.whl) file, you don't need to worry about any of this.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file llama_core-0.3.4.tar.gz
File metadata
- Download URL: llama_core-0.3.4.tar.gz
- Upload date:
- Size: 64.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4fa10a094ff4be21ae64e7190d51e224b43ad41ba209e3f76cd115c7073d2753
MD5 | b1fb4159e6b9d802b95268a41abb53c5
BLAKE2b-256 | 189abe2018dbfaf282ac52689efb831b4848cec0a02f5c190fbbf5f8946ea8ac
File details
Details for the file llama_core-0.3.4-cp312-cp312-macosx_14_0_arm64.whl
File metadata
- Download URL: llama_core-0.3.4-cp312-cp312-macosx_14_0_arm64.whl
- Upload date:
- Size: 3.5 MB
- Tags: CPython 3.12, macOS 14.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | f48807cb928b56f75ff56b3e0c79c610ce3d3b36f09a6d1e0fb338deb4d73de3
MD5 | c254c4bcdb43c39f890c0133e9df73a7
BLAKE2b-256 | 5b9305423730c9c8005fa6e5c86f27645f69e36b0f9ba7cbdbdc481c3072e9cd