thought

Thought is a light wrapper around llama.cpp: a building block for implementing and managing an efficient LLM integration.

Installation

pip install thought

Get started quickly

Imports

The following imports cover all the functionality for loading, embedding, and generating:

# loader
from thought.model_loader import load_model
from thought.model_loader import generate
from thought.model_loader import embed

# downloader
from thought.model_downloader import get_models
from thought.model_downloader import download_model
from thought.model_downloader import add_model

Downloading a model to play with

The following code shows how to download models. This is especially useful when hot-deploying to Docker containers, where your code needs to fetch the models at runtime.

# gets a bunch of models; set use_verified=True to download them all - not recommended!
get_models(use_verified=False)
# add a specific model - right-click and copy the link of a GGUF file on Hugging Face
add_model("https://huggingface.co/mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-i1-GGUF/resolve/main/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored.i1-Q4_K_M.gguf")
add_model("https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5.Q4_K_M.gguf")
# download the added models
download_model("DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored", "i1-Q4_K_M")
download_model("nomic-embed-text-v1.5", "Q4_K_M")

Embedding text with a model

The embedding=True flag must be set!

llm = load_model("./model_db/nomic-embed-text-v1.5.Q4_K_M.gguf", verbose=True, embedding=True)
text = "I think therefore I am"

vectors = embed(llm, text)
# or
vectors = llm.thought_embed(text)


print(vectors)
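
A typical next step is comparing two embeddings, for example for semantic search. The sketch below assumes embed returns a flat list of floats for a single input string; if your build returns one vector per input, index into it first:

import math

def cosine_similarity(a, b):
    # cosine similarity between two equal-length float vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = embed(llm, "I think therefore I am")
v2 = embed(llm, "Cogito, ergo sum")
print(cosine_similarity(v1, v2))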

Generating text from the LLM

Please note we do not do any prompt templating or black-boxing of your input prompt. You need to check the model's documentation and include the correct special tokens in your prompt yourself.

llm = load_model("./model_db/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-Q4_K_S-imat.gguf", verbose=True, embedding=False)

text = "I think therefore I am"

response = generate(llm, "Explain this text<|eot_id|>User:"+text+"<|eot_id|>AI:", stop=["\n"], seed=123456, call_back=token_stream)
#or
response = llm.thought_generate("Explain this text<|eot_id|>User:"+text+"<|eot_id|>AI:", stop=["\n"], seed=123456, call_back=token_stream)

print(response)
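
Since Thought does no templating for you, it can help to keep the token plumbing in one small helper so prompts stay consistent. The sketch below simply wraps the ad-hoc format from the example above; it is not an official chat template, so adjust the special tokens to whatever your model expects:

def build_prompt(instruction, user_text):
    # reproduce the ad-hoc format used above: instruction, user turn, then the AI turn to complete
    return f"{instruction}<|eot_id|>User:{user_text}<|eot_id|>AI:"

prompt = build_prompt("Explain this text", "I think therefore I am")
response = generate(llm, prompt, stop=["\n"], seed=123456, call_back=token_stream)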

Developing

First create a virtual environment with Python 3.11 or higher.

Windows

python -m venv .venv
.\.venv\Scripts\activate
pip install poetry
poetry install

GPU support (CUDA)

llama-cpp-python does not seem to build with CUDA support by default on Windows or Linux. These are the commands we end up running each time we install.

Windows

Replace the CUDA version in the path below with the version installed on your machine.

$env:CMAKE_ARGS="-DGGML_CUDA=on"   
$env:CUDACXX="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin\nvcc.exe"
poetry run pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade --verbose

Linux

CUDACXX=/usr/local/cuda-12/bin/nvcc CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=all-major" poetry run pip install llama-cpp-python --no-cache --force-reinstall --upgrade --verbose
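
To confirm the reinstall produced a CUDA build, one simple check is to load any model with verbose=True and watch the llama.cpp log for CUDA device and offload lines while it loads; the exact wording of those log lines varies between llama.cpp versions, so treat them as indicative only:

from thought.model_loader import load_model

# with a working CUDA build, the verbose llama.cpp log should mention your GPU
# (for example a "found ... CUDA devices" line) while the model loads
llm = load_model("./model_db/nomic-embed-text-v1.5.Q4_K_M.gguf", verbose=True, embedding=True)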

Build

poetry build
python -m twine upload --repository pypi dist/*

