# NuCoreAI Platform

A lightweight library for command, control, optimization, and automation of smart home devices using LLMs.

## Goal

This library's goal is to convert a user query written in natural language into commands, queries, and programs on any NuCore-enabled platform (currently eisy).
## Quick start

Installation:

```shell
git clone https://github.com/NuCoreAI/nucore-ai.git
```
### Using Frontier LLMs

- Create a directory called `secrets` in the root of this project.
- Create the following two files in this directory:

```shell
mkdir secrets
touch secrets/__init__.py
touch secrets/keys.py
```

- In `keys.py`, put your API keys in this format:

```python
OPENAI_API_KEY = "sk-proj-xxxx-your-api-key"   # for OpenAI
XAI_API_KEY_SAMPLES = "xai-xxxx-your-api-key"  # for xAI
CLAUDE_API_KEY = "sk-ant-xxxx-your-api-key"    # for Claude
```
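Once `secrets/keys.py` exists, the keys are just module attributes. A minimal sketch of how application code could load them (the temporary directory and loading-by-path here are for illustration only; in the project itself a plain `from secrets import keys` from the repo root does the same thing):

```python
import importlib.util
import pathlib
import tempfile

# Build a throwaway project root containing the `secrets` package
# laid out exactly as described above.
root = pathlib.Path(tempfile.mkdtemp())
secrets_dir = root / "secrets"
secrets_dir.mkdir()
(secrets_dir / "__init__.py").write_text("")
(secrets_dir / "keys.py").write_text('OPENAI_API_KEY = "sk-proj-xxxx-your-api-key"\n')

# Load secrets/keys.py the way consuming code could read it.
spec = importlib.util.spec_from_file_location("secrets.keys", secrets_dir / "keys.py")
keys = importlib.util.module_from_spec(spec)
spec.loader.exec_module(keys)

print(keys.OPENAI_API_KEY)  # the key is now a plain module attribute
```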
### Using local (edge) LLMs

#### llama.cpp: compile and build

- Download llama.cpp and install the prerequisites:

```shell
sudo apt install build-essential cmake clang libomp-dev libcurl4-openssl-dev
```

- Go to the llama.cpp directory and follow one of the options below.
##### No GPU

```shell
cmake -B build.blis -DGGML_BLAS=on -DGGML_BLAS_VENDOR=FLAME
```

followed by

```shell
cmake --build build.blis --config release
```

This installs the llama.cpp binaries in the `build.blis` directory, local to your llama.cpp checkout. We use a dedicated `build.blis` directory so that you can also experiment with the GPU build alongside it.
##### Nvidia GPU

On Ubuntu:

```shell
sudo ubuntu-drivers install
sudo apt install nvidia-utils-{latest version}
sudo apt install nvidia-cuda-toolkit
sudo apt install nvidia-prime   # for Intel integrated graphics
```

Now you are ready to build:

```shell
cmake -B build.cuda -DGGML_CUDA=on
```

followed by

```shell
cmake --build build.cuda --config release
```
If you have X (a graphical session) running, you may want to have it release GPU resources. First, use the `nvidia-smi` utility to see what is running and how much GPU memory other processes are using:

```shell
sudo nvidia-smi
```

If anything is running and using memory:

- Make the prime display point to the integrated GPU (say, Intel):

```shell
sudo prime-select intel
```

- Then switch it to on-demand:

```shell
sudo prime-select on-demand
```

- Make sure your system sees the GPU:

```shell
nvidia-smi
```
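If you want to check GPU memory usage programmatically, `nvidia-smi` can emit machine-readable per-process output via `--query-compute-apps=pid,process_name,used_memory --format=csv,noheader,nounits`. A small sketch that parses that format; the sample text below is made up for illustration (in practice you would capture real output with `subprocess.run`):

```python
# Hypothetical sample of the CSV output described above.
sample = """\
1234, /usr/lib/xorg/Xorg, 180
5678, build.cuda/bin/llama-server, 2650
"""

def gpu_processes(report: str):
    """Return (pid, name, MiB) tuples for every process using GPU memory."""
    rows = []
    for line in report.strip().splitlines():
        pid, name, mem = (field.strip() for field in line.split(","))
        rows.append((int(pid), name, int(mem)))
    return rows

procs = gpu_processes(sample)
total_mib = sum(mem for _, _, mem in procs)
print(procs)
print(f"total GPU memory in use: {total_mib} MiB")
```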
#### The model

`Qwen3-Instruct-4b-Q4M.gguf`: choose the Q4M quantization.

#### Command

```shell
build.cuda/bin/llama-server -m /home/michel/workspace/nucore/models/qwen3-instruct-4b.q4.gguf -c 64000 --port 8013 --host 0.0.0.0 -t 15 --n-gpu-layers 50 --batch-size 8192
```
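llama-server exposes an OpenAI-compatible chat endpoint, so you can sanity-check it with any HTTP client. A sketch of the request body it expects at `/v1/chat/completions`; the host/port mirror the command above, and the `model` field and message contents are illustrative (a single-model server largely ignores the model name):

```python
import json

# OpenAI-style chat-completions payload for the local llama-server.
url = "http://0.0.0.0:8013/v1/chat/completions"
payload = {
    "model": "qwen3-instruct-4b",
    "messages": [
        {"role": "system", "content": "You control a NuCore smart home."},
        {"role": "user", "content": "Turn off the kitchen lights."},
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)
print(body[:60])
```

Send `body` with `urllib.request.urlopen` (or any HTTP client) using a `Content-Type: application/json` header once the server is up.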
## Testing

- For now, you will need [eisy hardware](https://www.universal-devices.com/product/eisy-home-r2/).
- Clone this repo anywhere.
- There are three assistant types that use the same codebase:
  - `src/assistant/generic_assistant.py` -> uses a local/edge LLM (Qwen)
  - `src/assistant/openai_assistant.py` -> uses OpenAI (you need an API key)
  - `src/assistant/claude_assistant.py` -> uses Claude (you need an API key)
All have the same parameters:

- `--url`: the URL to fetch nodes and profiles from the NuCore platform.
- `--username`: the username to authenticate with the NuCore platform.
- `--password`: the password to authenticate with the NuCore platform.
- `--collection_path`: the path to the embedding collection db. If not provided, defaults to `~/.nucore_db`.
- `--model_url`: the URL of the remote model. If provided, this should be a valid URL that responds to OpenAI's API requests. If frontier, use `openai`, `claude`, or `xai`.
- `--model_auth_token`: optional authentication token for the remote model API (if required by the remote model) to be used in the Authorization header. You are responsible for refreshing the token if needed. This is for the case where you are hosting your own model on AWS, RunPod, etc.
- `--embedder_url`: the embedder to use. If nothing is provided, the default local embedder will be used. If a model name is provided, it will be used as the local embedder model, downloaded at runtime from Hugging Face. If a URL is provided, it should be a valid URL that responds to OpenAI's API requests.
- `--reranker_url`: the URL of the reranker service. If provided, this should be a valid URL that responds to OpenAI's API requests.
- `--prompt_type`: the type of prompt to use (e.g., `per-device`, `shared-features`, etc.).
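The parameter list above can be sketched as an `argparse` parser. This is illustrative only; the actual assistants define their own argument handling, so the defaults and help strings here are assumptions:

```python
import argparse

# Sketch of a parser matching the options documented above.
parser = argparse.ArgumentParser(description="NuCore assistant (sketch)")
parser.add_argument("--url", required=True, help="NuCore platform URL")
parser.add_argument("--username", required=True)
parser.add_argument("--password", required=True)
parser.add_argument("--collection_path", default="~/.nucore_db")
parser.add_argument("--model_url", help="remote model URL, or openai/claude/xai")
parser.add_argument("--model_auth_token", help="optional Authorization token")
parser.add_argument("--embedder_url", help="embedder model name or URL")
parser.add_argument("--reranker_url", help="reranker service URL")
parser.add_argument("--prompt_type", default="per-device")

# Parse a sample command line like the examples below.
args = parser.parse_args([
    "--url=http://192.168.6.126:8443",
    "--username=admin",
    "--password=admin",
    "--model_url=openai",
])
print(args.model_url, args.collection_path)
```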
Examples:

- Local/edge:

```shell
python3 src/assistant/generic_assistant.py \
    --url=http://192.168.6.126:8443 \
    --username=admin \
    --password=admin \
    --model_url=http://192.168.6.113:8013/v1/chat/completions \
    --prompt_type=per-device
```

- OpenAI:

```shell
python3 src/assistant/openai_assistant.py \
    --url=http://192.168.6.126:8443 \
    --username=admin \
    --password=admin \
    --model_url=openai \
    --prompt_type=per-device
```
## Documentation

The code is well documented, but we have not yet written official documentation.
## File details

Details for the file `nucore_ai-1.2.0.tar.gz`:

- Download URL: nucore_ai-1.2.0.tar.gz
- Size: 66.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8904d837cd426f73d5d3e787c2fd76a5492e7159a28d257837d73095f673f30f |
| MD5 | 8ff6e3663995bc6889bfedcd30add66b |
| BLAKE2b-256 | e5e1f7852e0f02076f45074360c163d0ac4a601d127a588722ecd978160798e2 |
### Provenance

The following attestation bundle was made for `nucore_ai-1.2.0.tar.gz`:

- Publisher: python-publish.yml on NuCoreAI/nucore-ai
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nucore_ai-1.2.0.tar.gz
- Subject digest: 8904d837cd426f73d5d3e787c2fd76a5492e7159a28d257837d73095f673f30f
- Sigstore transparency entry: 942779168
- Permalink: NuCoreAI/nucore-ai@9c58b0c91e8e38c2b4f98cc65f8d5520ee2b9a80
- Branch / Tag: refs/tags/v.1.2.0
- Owner: https://github.com/NuCoreAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@9c58b0c91e8e38c2b4f98cc65f8d5520ee2b9a80
- Trigger Event: release
## File details

Details for the file `nucore_ai-1.2.0-py3-none-any.whl`:

- Download URL: nucore_ai-1.2.0-py3-none-any.whl
- Size: 79.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | a3c168b0285a943117c1f2f5033e3e5792e5266c6057d0debc7ed07931b5ac6d |
| MD5 | ae306762120c5779fb2b487470b5db16 |
| BLAKE2b-256 | eb40a9e37cb1908b876ca297bf2fff5928fccf6128b14a50ee105fa8f99bf812 |
### Provenance

The following attestation bundle was made for `nucore_ai-1.2.0-py3-none-any.whl`:

- Publisher: python-publish.yml on NuCoreAI/nucore-ai
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nucore_ai-1.2.0-py3-none-any.whl
- Subject digest: a3c168b0285a943117c1f2f5033e3e5792e5266c6057d0debc7ed07931b5ac6d
- Sigstore transparency entry: 942779185
- Permalink: NuCoreAI/nucore-ai@9c58b0c91e8e38c2b4f98cc65f8d5520ee2b9a80
- Branch / Tag: refs/tags/v.1.2.0
- Owner: https://github.com/NuCoreAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@9c58b0c91e8e38c2b4f98cc65f8d5520ee2b9a80
- Trigger Event: release