A basic AI assistant embedded in Python.
# Cortexa
Cortexa is a fast, local-first AI assistant for Python. It lets you use a real large language model (LLM) directly from Python with a clean, minimal API.
- ✅ Runs locally (no API keys required)
- ✅ Uses Ollama as the AI engine
- ✅ Streams responses (fast, ChatGPT-style typing)
- ✅ Concise, readable terminal output
- ✅ Simple `import cortexa` usage
## What Cortexa Is (and Is Not)
Cortexa is:
- A Python library that talks to a local AI model
- Free and offline after setup
- Ideal for learning, projects, and experimentation
Cortexa is not:
- A hosted cloud service
- A replacement for installing an AI runtime
Cortexa uses Ollama to run models locally. Ollama is required.
## Requirements
- Python 3.8+
- Ollama installed (one-time)
Download Ollama from: https://ollama.com
## Installation

### 1. Install Ollama
After installing Ollama, restart your computer.
Pull a model:
```bash
ollama pull llama3
```
Make sure Ollama is running:
```bash
ollama serve
```
(Or just open the Ollama app.)
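To confirm the server is reachable from Python (a quick standalone check, not part of Cortexa's API):

```python
import requests

# Ollama's HTTP server listens on localhost:11434 by default;
# a plain GET to the root returns a short status string.
print(requests.get("http://localhost:11434").text)  # "Ollama is running"
```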
### 2. Install Cortexa

From PyPI:

```bash
pip install cortexa
```

For local development, install the `requests` dependency:

```bash
pip install requests
```
## Basic Usage

```python
import cortexa

ai = cortexa.Cortexa()

ai.chat("Explain recursion simply")
ai.chat("Write a Python function to reverse a list")
```
Responses stream live in the terminal.
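Under the hood, this kind of streaming is typically implemented by reading Ollama's line-delimited JSON stream. The sketch below is illustrative only, not Cortexa's actual internals; it talks to Ollama's documented `/api/chat` endpoint directly:

```python
import json

import requests

# Illustrative sketch: stream a chat reply from a local Ollama server.
# Each line of the response body is a JSON object carrying one text chunk.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Explain recursion simply"}],
        "stream": True,
    },
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    if chunk.get("done"):
        break
    # Print each piece as it arrives for the "live typing" effect.
    print(chunk["message"]["content"], end="", flush=True)
print()
```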
## Features
- Streaming output (fast, responsive)
- Concise answers by default
- Terminal-friendly formatting
- Conversation memory (limited for speed; see the sketch after this list)
- Offline & private
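A minimal sketch of how bounded conversation memory can work, assuming `max_history` simply caps how many recent turns are resent to the model (an illustration, not Cortexa's actual `memory.py`):

```python
from collections import deque

# Hypothetical sketch: keep only the most recent turns so prompts stay
# short and responses stay fast. A deque with maxlen drops the oldest
# entry automatically once the limit is reached.
history = deque(maxlen=4)  # mirrors max_history=4

def remember(role: str, content: str) -> None:
    history.append({"role": role, "content": content})

remember("user", "Explain recursion simply")
remember("assistant", "Recursion is a function that calls itself...")
print(list(history))  # never more than 4 turns
```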
## Configuration

```python
ai = cortexa.Cortexa(
    model="llama3",   # Any Ollama model
    max_history=4,    # Context window size
    width=80,         # Terminal wrap width
)
```
Smaller models = faster responses.
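For example, to trade quality for speed (assuming you have pulled a small model such as `phi3` with `ollama pull phi3`; the model name here is an example, not a Cortexa default):

```python
import cortexa

# Smaller model plus shorter history = faster replies.
ai = cortexa.Cortexa(model="phi3", max_history=2, width=72)
ai.chat("Summarize recursion in one sentence")
```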
## Troubleshooting

### "Ollama already running" error

This is normal:

```
Only one usage of each socket address is permitted
```

It means Ollama is already running.
### Slow responses

- Use a smaller model
- Reduce `max_history`
- Make sure no other heavy apps are running
## Project Structure

```
cortexa/
│
├── cortexa/
│   ├── __init__.py
│   ├── core.py
│   ├── llm.py
│   ├── memory.py
│   └── prompts.py
│
└── test.py
```
## License
MIT License
## Disclaimer
Cortexa runs AI models locally using Ollama. Model quality, speed, and hardware usage depend on your system and chosen model.
## Roadmap (Planned)

- CLI interface (`cortexa chat`)
- Hybrid local / API mode
- Tool usage (files, calculator)
- Persistent memory
## Author
Samarth Ankit Chugh
## Download files
### File details: cortexa-1.0.1.tar.gz (source distribution)

File metadata:
- Download URL: cortexa-1.0.1.tar.gz
- Upload date:
- Size: 4.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.13
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f148de97f22128210f07428efa68ea23c5929dbfec5dd629e55cee2d3d662ca9` |
| MD5 | `f0fa6c2aeadc736e0f54de04282ae899` |
| BLAKE2b-256 | `880c79fbec690af840c4897967d8645c9ffadd04b129b9a314a02624846fb882` |
### File details: cortexa-1.0.1-py3-none-any.whl (built distribution)

File metadata:
- Download URL: cortexa-1.0.1-py3-none-any.whl
- Upload date:
- Size: 5.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.13
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2265ff5da2228f114401997b6c566af9cc84152207f1d7c24d5ecf93648b5deb` |
| MD5 | `d6c436772fdc4fc49efd3845409113df` |
| BLAKE2b-256 | `d91dcd215fb66e0f40bddb76e0dd1ae4e13b1330431b0a9d8d039b1d0079c0b7` |
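To verify a downloaded file against the digests above, you can recompute the SHA256 locally (a generic integrity check, not specific to Cortexa):

```python
import hashlib

# Compare a local download against the published SHA256 digest.
expected = "2265ff5da2228f114401997b6c566af9cc84152207f1d7c24d5ecf93648b5deb"
with open("cortexa-1.0.1-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")
```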