llama-core
Connector core built on llama.cpp
This is a standalone llama connector; it can work independently of other packages.
Install via pip/pip3:
pip install llama-core
Run it with python/python3:
python -m llama_core
Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2) support, specify CMAKE_ARGS, following Abetlen's repo below. If you install from a source file (under Releases), you should use the .tar.gz file (which builds an installable package customized for your machine) together with the appropriate cmake tag(s), rather than the .whl file (wheel, a pre-built binary package).
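As a minimal sketch, a hardware-accelerated build might look like the commands below. Note the CMake flag names (LLAMA_CUBLAS, LLAMA_METAL) are an assumption carried over from llama-cpp-python's convention; verify the current flags against Abetlen's repo before using them.

```shell
# Hypothetical example: flag names follow llama-cpp-python's convention
# and may differ for llama-core; check the upstream repo.

# CUDA (Nvidia GPU) build, forcing a from-source rebuild:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-core --no-cache-dir --force-reinstall

# Metal (Apple M1/M2) build:
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-core --no-cache-dir --force-reinstall
```

Using --no-cache-dir and --force-reinstall ensures pip compiles a fresh wheel with the given CMake arguments instead of reusing a cached pre-built binary.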
References
Download files
Source Distribution

llama_core-0.0.3.tar.gz (10.6 MB)
Built Distributions
Hashes for llama_core-0.0.3-cp312-cp312-macosx_11_0_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | 8a94beb7f8cc715e0bd19272d1ee0633828bd001e6568a4336d2822da797eb33
MD5 | 0e3c428a471d3b61be3d6620d821e7ee
BLAKE2b-256 | 5b409058f432251845d67ea827068977a063ab628a9f20770c6d993705bef6ab
Hashes for llama_core-0.0.3-cp311-cp311-win_amd64.whl

Algorithm | Hash digest
---|---
SHA256 | 384f4be34216a20d355258fe4a1514f4f4ca820ecdd1dfb820afa9b44a8394da
MD5 | f8aa360d8cb976b7cf8c6ca56bac04da
BLAKE2b-256 | 168f8bdffd743f5b585d172348cb0b4898599d502dcc4ca9776d1d59ddf04b0f