# codeqai
Search your codebase semantically or chat with it from the CLI. 100% local support without any data leaks.

Built with LangChain, tree-sitter, sentence-transformers, instructor-embeddings, FAISS, llama.cpp and Ollama.
## ✨ Features

- 🔎 Semantic code search
- 💬 GPT-like chat with your codebase
- 💻 100% local embeddings and LLMs
  - sentence-transformers, instructor-embeddings, llama.cpp, Ollama
- 🌐 OpenAI and Azure OpenAI support
> [!NOTE]
> Results will be better if the code is well documented. You might consider doc-comments-ai for generating code documentation.
## 🚀 Usage

Start semantic search:

```bash
codeqai search
```

Start a chat dialog:

```bash
codeqai chat
```
## 📋 Requirements

- Python >= 3.9
## 🔧 Installation

```bash
pipx install codeqai
```

On first usage you will be asked whether to install faiss-cpu or faiss-gpu. faiss-gpu is recommended if your hardware supports CUDA 7.5+. If local embeddings and LLMs are used, you will later also be asked to install sentence-transformers, instructor or llama.cpp.
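If you want to double-check the FAISS installation afterwards, a minimal import test is enough. This is just a sketch, assuming faiss-cpu or faiss-gpu was installed; the dimension 384 is an arbitrary example:

```python
# Sanity check that FAISS is importable and can create an index.
import faiss

index = faiss.IndexFlatL2(384)  # 384 is an example embedding dimension
print(index.ntotal)             # 0 - no vectors stored yet
```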
## ⚙️ Configuration

On first usage, or by running

```bash
codeqai configure
```

the configuration process is initiated, where the embeddings and LLMs can be chosen.
## 🌐 Remote models

If remote models are preferred over local ones, some environment variables need to be specified in advance.
### OpenAI

```bash
export OPENAI_API_KEY="your OpenAI api key"
```
### Azure OpenAI

```bash
export OPENAI_API_TYPE="azure"
export OPENAI_API_BASE="https://<your-endpoint>.openai.azure.com/"
export OPENAI_API_KEY="your Azure OpenAI api key"
export OPENAI_API_VERSION="2023-05-15"
```
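To confirm the variables are visible to the process before starting codeqai, a small check like the following can help. This is a sketch, not part of codeqai; for plain OpenAI only OPENAI_API_KEY is relevant:

```python
# Print which of the expected variables are set in the current environment.
import os

for var in ("OPENAI_API_TYPE", "OPENAI_API_BASE", "OPENAI_API_KEY", "OPENAI_API_VERSION"):
    print(f"{var}: {'set' if os.environ.get(var) else 'missing'}")
```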
## 💡 How it works

The entire git repository is parsed with tree-sitter to extract all methods together with their documentation. These are embedded with either sentence-transformers, instructor-embeddings or OpenAI's text-embedding-ada-002 and saved to a local FAISS vector database. The vector database is written to a file on your system and loaded again on later usage. Afterwards, semantic search can be performed on the codebase based on the chosen embeddings model.
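The indexing and search pattern looks roughly like the following LangChain-based sketch. It is illustrative only: the snippet, store path and model name are assumptions, and codeqai's actual internals may differ.

```python
# Sketch: embed code snippets and persist them in a local FAISS store.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

snippets = [
    'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b',
]
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # example model
db = FAISS.from_texts(snippets, embeddings)
db.save_local("vector_store")  # persisted to disk, reloaded on later runs

# Reload the store and run a semantic search against it.
db = FAISS.load_local(
    "vector_store",
    embeddings,
    allow_dangerous_deserialization=True,  # required by recent LangChain versions
)
hits = db.similarity_search("function that adds two numbers", k=1)
print(hits[0].page_content)
```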
To chat with the codebase locally, llama.cpp or Ollama is used with a model of your choice. With llama.cpp, the specified model needs to be available on the system in advance. With Ollama, the Ollama container with the desired model needs to be running locally on port 11434 in advance. OpenAI or Azure OpenAI can also be used as remote chat models.
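Conceptually, the chat side feeds retrieved snippets to the chosen model. A minimal sketch with a local Ollama model, assuming the server is running on port 11434 and a model named "llama2" has been pulled (both are placeholders):

```python
# Sketch: answer a question about a retrieved code snippet with a local Ollama model.
from langchain_community.llms import Ollama

# In practice the context would come from the FAISS similarity search above.
context = 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b'

llm = Ollama(model="llama2")  # any model already pulled into the Ollama server
prompt = f"Answer using this code:\n\n{context}\n\nQuestion: What does add() do?"
print(llm.invoke(prompt))
```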
## 📚 Supported Languages

- Python
- TypeScript
- JavaScript
- Java
- Rust
- Kotlin
- Go
- C++
- C
- C#
## FAQ

### Where do I get models for llama.cpp?

Install the huggingface-cli and download your desired model from the model hub. For example,

```bash
huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.Q5_K_M.gguf
```

will download the codellama-13b-python.Q5_K_M model. After the download has finished, the absolute path of the model's .gguf file is printed to the console.
> [!IMPORTANT]
> llama.cpp compatible models must be in the `.gguf` format.
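Once downloaded, such a model can be used through any llama.cpp binding. A minimal sketch using LangChain's LlamaCpp wrapper; the model path is a placeholder for the absolute path printed by huggingface-cli:

```python
# Sketch: load a downloaded .gguf model via LangChain's llama.cpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(model_path="/absolute/path/to/codellama-13b-python.Q5_K_M.gguf")
print(llm.invoke("Write a docstring for a function that adds two numbers."))
```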