A unified interface for interacting with various LLM and embedding providers, with observability tools.
AiCore Project
This project provides a framework for integrating various language models and embedding providers. It supports both synchronous and asynchronous operations for generating text completions and embeddings.
AiCore also includes native support for augmenting traditional LLMs with reasoning capabilities: it supplies them with the thinking steps generated by an open-source reasoning-capable model, allowing them to produce answers in a reasoning-augmented way.
This can be useful in multiple scenarios, such as:
- ensuring your agentic systems still work with the prompts you have crafted for your favourite LLMs while augmenting them with reasoning steps
- direct control over how long your reasoner reasons (via the max_tokens param) and how creative it can be (reasoning temperature is decoupled from generation temperature), without compromising generation settings
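The idea can be sketched in plain Python (a minimal illustration with stub models, not AiCore's actual API): the reasoner is sampled with its own temperature and max_tokens budget, and its thinking steps are prepended to the untouched original prompt before the main model generates.

```python
# Minimal sketch of reasoning augmentation with stub models (not AiCore's API).
def reasoner(prompt: str, temperature: float, max_tokens: int) -> str:
    # Stand-in for a reasoning-capable model returning its thinking steps
    return f"<think>steps for: {prompt}</think>"

def generator(prompt: str, temperature: float) -> str:
    # Stand-in for the main completion model
    return f"answer to: {prompt}"

def reasoning_augmented_complete(prompt: str) -> str:
    # Reasoner settings are decoupled from generation settings
    thinking = reasoner(prompt, temperature=0.5, max_tokens=1024)
    # The original prompt stays intact; thinking steps are simply prepended
    return generator(f"{thinking}\n{prompt}", temperature=0.6)

print(reasoning_augmented_complete("Hello"))
```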
New Feature: Observability Module
AiCore now includes a comprehensive observability module that helps you track, analyze, and visualize your LLM operations:
- Data Collection: Automatically captures detailed information about each LLM completion operation, including arguments, responses, token usage, and latency metrics.
- Interactive Dashboard: A Dash/Plotly-based dashboard for visualizing operation history, performance trends, and usage patterns.
- Efficient Storage: Uses Polars dataframes for high-performance data processing and storage in JSON format.
- Complete Integration: Seamlessly integrated with the existing LLM provider system.
To use the observability dashboard:

```python
from aicore.observability import ObservabilityDashboard

dashboard = ObservabilityDashboard(storage=storage)

# Run the dashboard server
dashboard.run_server(debug=True, port=8050)
```
Built with AiCore
Reasoner4All: a Hugging Face Space where you can chat with multiple reasoning-augmented models.
CodeGraph: a graph representation of your codebase for effective retrieval at the file/object level (coming soon).
Quickstart
```
pip install git+https://github.com/BrunoV21/AiCore@0.1.9
```
Features
LLM Providers:
- Anthropic
- OpenAI
- Mistral
- Groq
- Gemini
- Nvidia
- OpenRouter
Embedding Providers:
- OpenAI
- Mistral
- Groq
- Gemini
- Nvidia
Observability Tools:
- Operation tracking and metrics collection
- Interactive dashboard for visualization
- Token usage and latency monitoring
To configure the application for testing, you need to set up a config.yml file with the necessary API keys and model names for each provider you intend to use. The CONFIG_PATH environment variable should point to the location of this file. Here's an example of how to set up the config.yml file:
```yaml
# config.yml
embeddings:
  provider: "openai" # or "mistral", "groq", "gemini", "nvidia"
  api_key: "your_openai_api_key"
  model: "your_openai_embedding_model" # Optional

llm:
  provider: "openai" # or "mistral", "groq", "gemini", "nvidia"
  api_key: "your_openai_api_key"
  model: "gpt-4o" # Optional
  temperature: 0.1
  max_tokens: 1028
```
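As a quick sanity check, the YAML above can be parsed with PyYAML to confirm both top-level sections are present (a sketch only; AiCore's own loader is the Config class shown further down):

```python
import yaml  # PyYAML

# The config.yml above, inlined for illustration
raw = """
embeddings:
  provider: "openai"
  api_key: "your_openai_api_key"
llm:
  provider: "openai"
  api_key: "your_openai_api_key"
  model: "gpt-4o"
  temperature: 0.1
  max_tokens: 1028
"""
config = yaml.safe_load(raw)
# Both top-level sections must be present to configure providers
assert {"embeddings", "llm"} <= config.keys()
print(config["llm"]["model"])
```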
Reasoner Augmented Config
To leverage reasoning augmentation, just add one of the supported LLM configs to the reasoner field and AiCore handles the rest:
```yaml
# config.yml
embeddings:
  provider: "openai" # or "mistral", "groq", "gemini", "nvidia"
  api_key: "your_openai_api_key"
  model: "your_openai_embedding_model" # Optional

llm:
  provider: "mistral" # or "openai", "groq", "gemini", "nvidia"
  api_key: "your_mistral_api_key"
  model: "mistral-small-latest" # Optional
  temperature: 0.6
  max_tokens: 2048
  reasoner:
    provider: "groq" # or "openrouter" or "nvidia"
    api_key: "your_groq_api_key"
    model: "deepseek-r1-distill-llama-70b" # or "deepseek/deepseek-r1:free" or "deepseek/deepseek-r1"
    temperature: 0.5
    max_tokens: 1024
```
Usage
Language Models
You can use the language models to generate text completions. Below is an example of how to use the MistralLlm provider:
```python
from aicore.llm.config import LlmConfig
from aicore.llm.providers import MistralLlm

config = LlmConfig(
    api_key="your_api_key",
    model="your_model_name",
    temperature=0.7,
    max_tokens=100
)
mistral_llm = MistralLlm.from_config(config)
response = mistral_llm.complete(prompt="Hello, how are you?")
print(response)
```
Embeddings
You can use the embeddings module to generate text embeddings. Below is an example of how to use the OpenAiEmbeddings provider:
```python
from aicore.embeddings.config import EmbeddingsConfig
from aicore.embeddings import Embeddings

config = EmbeddingsConfig(
    provider="openai",
    api_key="your_api_key",
    model="your_model_name"
)
embeddings = Embeddings.from_config(config)
vectors = embeddings.generate(["Hello, how are you?"])
print(vectors)
```
For asynchronous usage:
```python
import asyncio

from aicore.embeddings.config import EmbeddingsConfig
from aicore.embeddings import Embeddings

async def main():
    config = EmbeddingsConfig(
        provider="openai",
        api_key="your_api_key",
        model="your_model_name"
    )
    embeddings = Embeddings.from_config(config)
    vectors = await embeddings.agenerate(["Hello, how are you?"])
    print(vectors)

asyncio.run(main())
```
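The async variant pays off when embedding several batches concurrently. A stdlib-only sketch, with a stub coroutine standing in for agenerate (the delay and return values are hypothetical):

```python
import asyncio

async def agenerate(texts: list[str]) -> list[list[float]]:
    # Stub standing in for Embeddings.agenerate: pretend each call does I/O
    await asyncio.sleep(0.01)
    return [[float(len(t))] for t in texts]

async def main() -> list[list[list[float]]]:
    batches = [["Hello"], ["how are you?"], ["goodbye"]]
    # gather runs the three embedding calls concurrently instead of one by one
    return await asyncio.gather(*(agenerate(b) for b in batches))

results = asyncio.run(main())
print(results)
```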
Loading from a Config File
To load configurations from a YAML file, set the CONFIG_PATH environment variable and use the Config class to load the configurations. Here is an example:
```python
import os

from aicore.config import Config
from aicore.llm import Llm

if __name__ == "__main__":
    os.environ["CONFIG_PATH"] = "./config/config.yml"
    config = Config.from_yaml()
    llm = Llm.from_config(config.llm)
    llm.complete("Once upon a time, there was a")
```
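Setting CONFIG_PATH from the shell before launching your app works just as well (the entrypoint name below is a placeholder):

```shell
# Point AiCore at the YAML config before launching your app
export CONFIG_PATH=./config/config.yml
# python app.py  # placeholder for your entrypoint
```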
Make sure your config.yml file is properly set up with the necessary configurations.
License
This project is licensed under the Apache 2.0 License.
File details
Details for the file core_for_ai-0.1.88.tar.gz.
File metadata
- Download URL: core_for_ai-0.1.88.tar.gz
- Upload date:
- Size: 58.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.21
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `07164c9b92d165b8b3af37a761df095fdeb6c0b4a25b800431e940bebcdbdda1` |
| MD5 | `423de1b8b87741292d33cbd81f585c5b` |
| BLAKE2b-256 | `b514daab85b25f02235e723c7f9a4d03400c394ac0d90fb078ab447c266cdc2a` |
File details
Details for the file core_for_ai-0.1.88-py3-none-any.whl.
File metadata
- Download URL: core_for_ai-0.1.88-py3-none-any.whl
- Upload date:
- Size: 58.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.21
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `0d7013cafe588895cdcd983221330606159a8e91284767650b2bd2e3c62d8a00` |
| MD5 | `e75603a085be80df04a880472df97ac1` |
| BLAKE2b-256 | `2176a386655bf09c24d78094edc4dcd0d76a5cda313ea7188d69a8330be3e583` |