connector core built on llama.cpp

Project description

llama-core

This is a standalone llama connector that is able to work independently.

Install via pip (or pip3):

pip install llama-core

Run it with python (or python3):

python -m llama_core

Other functions are the same as in llama-cpp-python. For CUDA (Nvidia GPU) and Metal (Apple M1/M2) support, specify CMAKE_ARGS following Abetlen's repo, referenced below. If you want to install from a source file (under Releases), opt for the .tar.gz archive, which builds an installable package customized for your machine, rather than the .whl (wheel; a pre-built binary package), and pass the appropriate CMake flag(s).
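For example, to enable CUDA at install time, a minimal sketch (the flag name is an assumption carried over from llama-cpp-python's build options and may differ across versions):

CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-core

Likewise for Metal, also an assumed flag from the same repo:

CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-core

Because the functions match llama-cpp-python, basic usage from Python should look like the following sketch (the Llama class and its parameters are assumptions borrowed from that API, and model.gguf is a placeholder path to a local GGUF model):

from llama_core import Llama  # assumed to mirror llama_cpp.Llama

# Load a local model (placeholder path) and run a short completion.
llm = Llama(model_path="model.gguf")
output = llm("Q: What is llama.cpp? A:", max_tokens=32)
print(output["choices"][0]["text"])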

References

llama-cpp-python

llama.cpp

Download files

Download the file for your platform.

Source Distribution

llama_core-0.0.3.tar.gz (10.6 MB): source

Built Distributions

llama_core-0.0.3-cp312-cp312-macosx_11_0_x86_64.whl (2.6 MB): CPython 3.12, macOS 11.0+, x86-64

llama_core-0.0.3-cp311-cp311-win_amd64.whl (3.1 MB): CPython 3.11, Windows, x86-64
