

Project description

llama-cpp-server-py-core


Included command-line tools

Convert a Hugging Face model to a GGUF model

rye run hf2gguf /opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct --outfile /opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-f16.gguf
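If the conversion needs to be scripted, for example over a directory of models, the invocation above can be assembled and launched from Python. This is a minimal sketch, not part of the package's API; the `build_hf2gguf_cmd` helper is hypothetical and assumes `rye` is on the PATH:

```python
import subprocess

def build_hf2gguf_cmd(model_dir: str, outfile: str) -> list[str]:
    # Mirrors the command line shown above:
    #   rye run hf2gguf <model_dir> --outfile <outfile>
    return ["rye", "run", "hf2gguf", model_dir, "--outfile", outfile]

cmd = build_hf2gguf_cmd(
    "/opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct",
    "/opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-f16.gguf",
)
# subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
```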

Quantize a GGUF model

rye run quantize /opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-f16.gguf /opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-Q4_k_m.gguf Q4_k_m
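The last argument is the quantization type (`Q4_k_m` above; llama.cpp documents it in uppercase as `Q4_K_M`, and its quantize tool matches type names case-insensitively). Since a bad type name can waste a long-running job, it can be validated up front. A hedged sketch: the helper and the non-exhaustive type list are illustrative, not part of this package:

```python
# Common llama.cpp quantization type names (non-exhaustive, assumed list).
KNOWN_QTYPES = {
    "Q4_0", "Q4_1", "Q5_0", "Q5_1", "Q8_0",
    "Q4_K_S", "Q4_K_M", "Q5_K_S", "Q5_K_M", "Q6_K", "F16",
}

def build_quantize_cmd(infile: str, outfile: str, qtype: str) -> list[str]:
    # Mirrors the command line shown above:
    #   rye run quantize <infile> <outfile> <qtype>
    if qtype.upper() not in KNOWN_QTYPES:
        raise ValueError(f"unrecognized quantization type: {qtype!r}")
    return ["rye", "run", "quantize", infile, outfile, qtype]

cmd = build_quantize_cmd(
    "/opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-f16.gguf",
    "/opt/models/llm/qwen/Qwen2.5-Coder-14B-Instruct-Q4_k_m.gguf",
    "Q4_k_m",
)
```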

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_cpp_server_py_core-0.1.4.tar.gz (34.4 MB)


File details

Details for the file llama_cpp_server_py_core-0.1.4.tar.gz.

File metadata

  • Download URL: llama_cpp_server_py_core-0.1.4.tar.gz
  • Upload date:
  • Size: 34.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.17

File hashes

Hashes for llama_cpp_server_py_core-0.1.4.tar.gz
  • SHA256: a38c3a0e0b2756e542618369a0105b82c2ffa7c538463221f6a784928b6ff995
  • MD5: 5bb53f1719cb8148de4f58f3449c1b51
  • BLAKE2b-256: 7e0136bdad22c1a35ec447abd3fbf1468a46a09a66a386e71ccaaeff3a8c5c65

See the PyPI documentation for more details on using file hashes.
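After downloading the sdist, the published SHA256 digest can be checked locally. A minimal sketch using Python's standard `hashlib`; the file path is illustrative:

```python
import hashlib

EXPECTED_SHA256 = "a38c3a0e0b2756e542618369a0105b82c2ffa7c538463221f6a784928b6ff995"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file through SHA-256 so a 34 MB archive is not read into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# digest = sha256_of("llama_cpp_server_py_core-0.1.4.tar.gz")
# assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"
```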
