CoreInsight CLI

Local-first AI performance profiler that mathematically verifies optimizations for Python, C++, and CUDA

CoreInsight is a local-first, hardware-aware AI performance profiler. It shifts performance engineering "left" by parsing your Python, C++, and CUDA code, identifying hardware bottlenecks (like CPU cache thrashing or CUDA warp divergence), and mathematically verifying AI-generated optimizations inside secure Docker sandboxes.

Prerequisites

  • Python 3.9+
  • Docker Desktop / Docker Engine (must be running for sandbox verification)
  • Ollama (optional, for local models) or API keys for cloud models

Installation & Usage

Required: Install Docker: https://docs.docker.com/engine/install/

Optional (suggested): Set up OpenAI/Anthropic/Google API keys to load their models

1. Install locally: Clone this repository and install it in editable mode:

pip install -e .

2. Configure CoreInsight CLI: Set up your preferred AI provider (Ollama, local vLLM, OpenAI, Anthropic, or Gemini):

coreinsight configure
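If you use a cloud provider, the official OpenAI, Anthropic, and Gemini SDKs conventionally read their keys from the environment variables below. Whether coreinsight configure picks these up automatically is an assumption, so treat this as a sketch:

export OPENAI_API_KEY="sk-..."         # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."  # Anthropic
export GOOGLE_API_KEY="..."            # Gemini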

3. Build Global Context (Recommended for multiple files): Index your repository so the AI understands your custom structs, classes, and dependencies across files:

coreinsight index

4. Test on a file: Analyze a specific file. The CLI will extract hot loops, process them in parallel, verify optimizations in Docker, and output a live Markdown report.

coreinsight analyze <file_name>
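For example, a file like the following (hot_loops.py is a hypothetical name; the code is only illustrative of the nested, allocation-heavy loops the analyzer targets):

# hot_loops.py -- illustrative example, not part of CoreInsight
def pairwise_distances(points):
    # O(n^2) nested loop with a per-iteration allocation: the kind
    # of hotspot the analyzer is designed to surface and optimize
    dists = []
    for i in range(len(points)):
        row = []
        for j in range(len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            row.append((dx * dx + dy * dy) ** 0.5)
        dists.append(row)
    return dists

coreinsight analyze hot_loops.py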

5. Project-Wide Hotspot Scanning: Instead of guessing which files are slow, scan your entire repository. CoreInsight will use static AST analysis to rank the most complex, deeply nested loops in your project.

coreinsight scan
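To give a rough idea of what static AST ranking means, here is a minimal sketch of scoring code by loop-nesting depth with Python's ast module. It illustrates the general technique only and is not CoreInsight's actual implementation:

import ast

def max_loop_depth(source: str) -> int:
    # Parse the source and walk the tree, counting how deeply
    # for/while loops are nested inside one another.
    tree = ast.parse(source)

    def depth(node, current=0):
        bump = 1 if isinstance(node, (ast.For, ast.While)) else 0
        children = [depth(c, current + bump) for c in ast.iter_child_nodes(node)]
        return max([current + bump] + children)

    return depth(tree)

# A doubly nested loop scores 2:
print(max_loop_depth("for i in range(3):\n    for j in range(3):\n        pass"))

Files whose functions score highest would be the first candidates for closer analysis.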

Build the Project

Install the build package:

pip install build

Run the build command to generate the wheel file:

python -m build --wheel

To install the project elsewhere from the wheel file:

pip install dist/coreinsight_cli-*.whl

To install the project from source:

pip install -e .

Architecture Notes

CoreInsight runs 100% locally. Your code is transmitted only to the AI provider you configure. If you use Ollama or a local server, your proprietary code never leaves your machine.
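For a fully local setup, one option (a sketch; the model name is only an example) is to pull a model with Ollama and then point the configuration at it:

ollama pull llama3.1
coreinsight configure   # select Ollama as the provider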

In-progress and planned features:

  • Improve AST parsing
  • Extract and integrate hardware information for the LLM
  • Improve CLI interface
  • Parallel execution
  • Improve RAG
