
🧠 AI Cortex

A Python toolkit for free access to cloud and local language models.
Zero API keys. Zero signup. Completely free.



AI Cortex gives you a single, clean Python interface to hundreds of language models — Llama, Mistral, Gemma, DeepSeek, Qwen, and more — running on community-hosted cloud servers or your own local Ollama instance. No accounts. No credit cards. No rate limits.

```python
from aicortex import chat

response = chat("Explain neural networks like I'm five.")
print(response)
```

✨ Why AI Cortex?

| Feature | What it means for you |
|---------|------------------------|
| 🆓 100% Free | No API keys, no billing, no subscriptions — ever |
| 🤖 Any Model | Llama, Mistral, Gemma, DeepSeek, Qwen, and more |
| 🌐 Cloud or Local | Community-hosted cloud endpoints or your own Ollama server |
| ⚡ Streaming | Real-time token streaming for responsive UIs |
| 🔌 OpenAI-Compatible | Drop-in replacement for OpenAI client apps |
| 🛡️ Type-Safe | Full type hints, stubs, and IDE autocomplete |
| 🔧 Production Ready | Automatic failover, multi-server routing, error handling |
| 📦 Lightweight | One dependency (`ollama`) for the core package |
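Automatic failover can be pictured as trying servers in order until one responds. A minimal pure-Python sketch; the server list and health probe here are hypothetical stand-ins, not aicortex internals, which issue real HTTP requests:

```python
# Minimal client-side failover sketch: walk a server list and return
# the first one that passes the health probe.

def first_healthy(servers, is_healthy):
    """Return the first server URL that passes the health probe."""
    for url in servers:
        if is_healthy(url):
            return url
    raise RuntimeError("no healthy server available")

# Stand-in data: pretend only the second server is up.
servers = ["http://a.example:11434", "http://b.example:11434"]
healthy = {"http://b.example:11434"}

print(first_healthy(servers, lambda u: u in healthy))
# → http://b.example:11434
```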

🚀 Installation

```bash
# Core package
pip install aicortex-core

# With OpenAI-compatible server support (quoted so shells like zsh
# don't expand the brackets)
pip install "aicortex-core[server]"
```

💬 Chat

```python
from aicortex import chat

# Simple response
response = chat("What is the speed of light?")
print(response)

# Custom model and parameters
response = chat(
    "Write a Python function to reverse a string.",
    model="llama3.2:3b",
    temperature=0.2,
    max_tokens=200,
)
print(response)
```

⚡ Streaming

```python
from aicortex import chat

stream = chat("Write a haiku about AI.", stream=True)

for event in stream:
    if event.type == "token":
        print(event.content, end="", flush=True)
```
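To assemble the streamed tokens into one string, collect the token events as they arrive. A self-contained sketch using a stand-in stream (the real one comes from `chat(..., stream=True)`; the `type`/`content` fields follow the example above):

```python
from types import SimpleNamespace

# Stand-in for the event objects yielded by a streaming chat call.
stream = [
    SimpleNamespace(type="token", content=w)
    for w in ["Silent ", "circuits ", "dream"]
]

# Accumulate token events into the full response text.
parts = []
for event in stream:
    if event.type == "token":
        parts.append(event.content)

full_text = "".join(parts)
print(full_text)  # → Silent circuits dream
```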

🤖 Model Discovery

```python
from aicortex import families, models, get_model_info

# Available families
print(families())   # ['llama', 'mistral', 'gemma', 'deepseek', 'qwen']

# Models in a family
print(models("mistral"))

# Full metadata for a model
info = get_model_info("llama3.2:3b")
print(info["parameter_size"], info["quantization_level"])
```

🌐 Server Discovery

```python
from aicortex import list_model_servers, get_server_info, get_llm_params

# All servers hosting a model — cloud and local
servers = list_model_servers("llama3.2:3b")
for s in servers:
    print(f"{s['url']}: {s['location']['city']}, {s['location']['country']}")

# Ready-to-use params for LangChain's OllamaLLM
params = get_llm_params("mistral:7b")
# → {'model': 'mistral:7b', 'base_url': 'http://...'}
```
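The params dict above can be unpacked straight into LangChain. A minimal sketch with an illustrative `base_url` (the real value comes from `get_llm_params`); the `OllamaLLM` import requires the separate `langchain-ollama` package, so it is left commented:

```python
# Illustrative params dict; in practice use get_llm_params("mistral:7b").
params = {"model": "mistral:7b", "base_url": "http://localhost:11434"}

# Unpack into LangChain's Ollama integration (pip install langchain-ollama):
# from langchain_ollama import OllamaLLM
# llm = OllamaLLM(**params)
# print(llm.invoke("Hello!"))
```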

🖥️ OpenAI-Compatible Server

Run a local proxy that speaks OpenAI's API — drop-in compatible with any OpenAI client:

```python
from aicortex.tools import run_server

run_server(host="127.0.0.1", port=8000, default_model="llama3.2:3b")
```

Use it with curl:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:3b", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Or with the openai Python SDK — just change the `base_url`:

```python
from openai import OpenAI

client = OpenAI(api_key="none", base_url="http://localhost:8000/v1")
response = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

🔧 Model Management Tools

Keep the bundled model database fresh with the four-step pipeline:

```python
from pathlib import Path

from aicortex.tools import (
    find_valid_endpoints,     # Step 1: ping all known IPs
    fetch_models,             # Step 2: pull model lists
    resolve_models,           # Step 3: merge with IP metadata
    apply_valid_models,       # Step 4: write into family JSONs
)

json_dir = Path("aicortex/models")
valid_urls = find_valid_endpoints(json_dir)                             # Step 1
fetch_models(Path("valid.txt"), Path("fetched.json"))                   # Step 2
resolve_models(Path("fetched.json"), json_dir, Path("resolved.json"))   # Step 3
apply_valid_models(Path("resolved.json"), json_dir, backup=True)        # Step 4
```
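Conceptually, the resolve step (Step 3) joins each fetched model record with the metadata of the server it was found on. A hypothetical sketch of that join; the field names and data are illustrative, not aicortex's real schema:

```python
# Hypothetical step-3 join: attach server metadata (keyed by URL)
# to each fetched model record.

fetched = [{"model": "llama3.2:3b", "server": "http://a.example:11434"}]
server_meta = {"http://a.example:11434": {"city": "Berlin", "country": "DE"}}

resolved = [
    {**entry, "location": server_meta.get(entry["server"], {})}
    for entry in fetched
]
print(resolved[0]["location"]["city"])  # → Berlin
```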

📚 Full Documentation

aicortex.readthedocs.io


🤝 Contributing

Contributions are welcome! See CONTRIBUTING.md and the Development Guide.

📄 License

GNU Lesser General Public License v3.0 — free for open-source and commercial use.
