
A basic AI model embedded in Python

Project description

Cortexa

Cortexa is a fast, local-first AI assistant for Python. It lets you use a real large language model (LLM) directly from Python with a clean, minimal API.

  • ✅ Runs locally (no API keys required)
  • ✅ Uses Ollama as the AI engine
  • ✅ Streams responses (fast, ChatGPT-style typing)
  • ✅ Concise, readable terminal output
  • ✅ Simple import cortexa usage

What Cortexa Is (and Is Not)

Cortexa is:

  • A Python library that talks to a local AI model
  • Free and offline after setup
  • Ideal for learning, projects, and experimentation

Cortexa is not:

  • A hosted cloud service
  • A replacement for installing an AI runtime

Cortexa uses Ollama to run models locally. Ollama is required.


Requirements

  • Python 3.8+
  • Ollama installed (one-time)

Download Ollama from: https://ollama.com


Installation

1. Install Ollama

After installing Ollama, restart your computer.

Pull a model:

ollama pull llama3

Make sure Ollama is running:

ollama serve

(Or just open the Ollama app.)
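If you want to check from Python whether the local server is up before chatting, a quick standard-library sketch is enough. Port 11434 is Ollama's default; `ollama_running` is an illustrative helper, not part of Cortexa's API:

```python
import socket

def ollama_running(host="localhost", port=11434, timeout=1.0):
    """Return True if something is listening on Ollama's default port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama reachable:", ollama_running())
```

If this prints `False`, start the server with `ollama serve` (or open the Ollama app) before using Cortexa.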


2. Install Cortexa

Once published to PyPI:

pip install cortexa

For local development, install the requests dependency:

pip install requests

Basic Usage

import cortexa

ai = cortexa.Cortexa()

ai.chat("Explain recursion simply")
ai.chat("Write a Python function to reverse a list")

Responses stream live in the terminal.
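Under the hood, Ollama streams newline-delimited JSON: each line carries a small `content` chunk, and a final line sets `done` to true. A minimal sketch of how such a stream can be consumed (`stream_text` is an illustrative helper, not Cortexa's actual implementation; the sample lines mimic Ollama's chat response format):

```python
import json

def stream_text(lines):
    """Yield text chunks from Ollama-style NDJSON chat lines."""
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        # Each line looks like {"message": {"content": "..."}, "done": false}
        yield chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break

sample = [
    '{"message": {"content": "Hel"}, "done": false}',
    '{"message": {"content": "lo"}, "done": true}',
]
print("".join(stream_text(sample)))
```

Printing each chunk as it arrives, instead of joining at the end, is what produces the ChatGPT-style typing effect.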


Features

  • Streaming output (fast, responsive)
  • Concise answers by default
  • Terminal-friendly formatting
  • Conversation memory (limited for speed)
  • Offline & private

Configuration

ai = cortexa.Cortexa(
    model="llama3",     # Any Ollama model
    max_history=4,      # Context window size
    width=80            # Terminal wrap width
)

Smaller models = faster responses.
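One common way a bounded history like `max_history` can be implemented is with a fixed-length deque that silently drops the oldest turns. This is a hypothetical sketch under that assumption, not Cortexa's actual memory.py:

```python
from collections import deque

class Memory:
    """Keep only the most recent `max_history` turns (illustrative sketch)."""

    def __init__(self, max_history=4):
        # deque with maxlen discards the oldest entry automatically
        self.turns = deque(maxlen=max_history)

    def add(self, role, text):
        self.turns.append({"role": role, "content": text})

    def context(self):
        return list(self.turns)

mem = Memory(max_history=2)
for i in range(4):
    mem.add("user", f"message {i}")
print(len(mem.context()))
```

A smaller `max_history` means a shorter prompt sent to the model, which is why reducing it speeds up responses.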


Troubleshooting

"Ollama already running" error

This is normal:

Only one usage of each socket address is permitted

It means Ollama is already running.
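The message is the OS refusing a second bind on a port that is already taken. The same class of error can be reproduced with two plain sockets (illustration only, unrelated to Ollama itself):

```python
import socket

s1 = socket.socket()
s1.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = s1.getsockname()[1]

s2 = socket.socket()
conflict = False
try:
    s2.bind(("127.0.0.1", port))  # second bind on the same port
except OSError:
    conflict = True               # "address already in use"
finally:
    s1.close()
    s2.close()

print("conflict:", conflict)
```

So the fix is simply to do nothing: the existing Ollama server will handle Cortexa's requests.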

Slow responses

  • Use a smaller model
  • Reduce max_history
  • Make sure no other heavy apps are running

Project Structure

cortexa/
│
├── cortexa/
│   ├── __init__.py
│   ├── core.py
│   ├── llm.py
│   ├── memory.py
│   └── prompts.py
│
└── test.py

License

MIT License


Disclaimer

Cortexa runs AI models locally using Ollama. Model quality, speed, and hardware usage depend on your system and chosen model.


Roadmap (Planned)

  • CLI interface (cortexa chat)
  • Hybrid local / API mode
  • Tool usage (files, calculator)
  • Persistent memory

Author

Samarth Ankit Chugh



Download files

Download the file for your platform.

Source Distribution

cortexa-1.0.0.tar.gz (4.7 kB)

Built Distribution


cortexa-1.0.0-py3-none-any.whl (5.3 kB)

File details

Details for the file cortexa-1.0.0.tar.gz.

File metadata

  • Download URL: cortexa-1.0.0.tar.gz
  • Upload date:
  • Size: 4.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for cortexa-1.0.0.tar.gz:

  • SHA256: 16d31f8361d84defb6647fd2b0e10cca1bfb0843acd387cb5d11b217f0e5a94b
  • MD5: 10e48abf90ecbd86dc0f4b448240b093
  • BLAKE2b-256: ada36c95b7e092096e2d6581591304f2ba34da689a0e89665832f273e3497e36


File details

Details for the file cortexa-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: cortexa-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 5.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for cortexa-1.0.0-py3-none-any.whl:

  • SHA256: 6c0b1f5ade0041ae272d6255fdddf97072486ad68291fd68f523e25b1f8afba2
  • MD5: dca351407e549d50b00fa5508f0f4aba
  • BLAKE2b-256: 3c8caf4533796821f82ebc8f9ea4e951e93db8cdcb9d52afd3524d86652f1489
