# AI MicroCore: A Minimalistic Foundation for AI Applications

MicroCore is a collection of Python adapters for Large Language Models and Semantic Search APIs that lets you communicate with these services in a convenient way, makes them easily switchable, and separates business logic from implementation details.

It defines interfaces for the features typically used in AI applications, which allows you to keep your application as simple as possible and to try various models and services without changing your application code.

You can even switch between text completion and chat completion models through configuration alone.

A basic usage example:

```python
from microcore import llm

while user_msg := input('Enter message: '):
    print('AI: ' + llm(user_msg))
```


## 💻 Installation

Install as a PyPI package:

```bash
pip install ai-microcore
```

Alternatively, you may just copy the `microcore` folder to your project's source root:

```bash
git clone git@github.com:Nayjest/ai-microcore.git && mv ai-microcore/microcore ./ && rm -rf ai-microcore
```

## 📋 Requirements

Python 3.10 / 3.11 / 3.12

Both v0.28+ and v1.X OpenAI package versions are supported.

## ⚙️ Configuring

### Minimal Configuration

Having `OPENAI_API_KEY` in the OS environment variables is enough for basic usage.

Similarity search features will work out of the box if you have the `chromadb` pip package installed.
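For instance, once `chromadb` is installed, storing and searching texts needs no extra setup. A small sketch (collection name and sample texts are arbitrary; the `texts` API is documented under Core Functions below):

```python
from microcore import texts

# Store a few texts in an embeddings collection
texts.save('notes', 'The Eiffel Tower is in Paris')
texts.save('notes', 'Photosynthesis converts light into chemical energy')

# Retrieve the most similar text for a query
print(texts.search('notes', 'Where is the Eiffel Tower located?', n_results=1))
```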

### Configuration Methods

MicroCore can be configured in a few ways: via OS environment variables, via a configuration file (`.env` by default), or by passing options to `microcore.configure()` (see Priority of Configuration Sources below).

For the full list of available configuration options, you may also check `microcore/config.py`.
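A minimal sketch of programmatic configuration, assuming these options (both mentioned elsewhere in this document) can be passed directly to `configure()`:

```python
from microcore import configure

configure(
    USE_DOT_ENV=False,                     # do not read .env configuration files
    PROMPT_TEMPLATES_PATH='my_templates',  # folder used by tpl(), 'tpl' by default
)
```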

### Installing vendor-specific packages

For models that do not work through the OpenAI API, you may need to install additional packages:

#### Anthropic Claude 3

```bash
pip install anthropic
```

#### Google Gemini via AI Studio

```bash
pip install google-generativeai
```

#### Google Gemini via Vertex AI

```bash
pip install vertexai
```

📌 Additionally, to work through Vertex AI you need to install the Google Cloud CLI and configure authorization.

#### Local language models via Hugging Face Transformers

You will need to install `transformers` and a deep learning library of your choice (PyTorch, TensorFlow, Flax, etc.).

See transformers installation.

### Priority of Configuration Sources

  1. Configuration options passed as arguments to `microcore.configure()` have the highest priority.
  2. Configuration file options (`.env` by default, or the file named by `DOT_ENV_FILE`) have higher priority than OS environment variables.
     💡 Setting `USE_DOT_ENV` to `false` disables reading configuration files.
  3. OS environment variables have the lowest priority.
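To illustrate the precedence, a sketch assuming option names map directly to environment variable names, as the rules above suggest:

```python
import os

# Lowest priority: OS environment variable
os.environ['PROMPT_TEMPLATES_PATH'] = 'templates_from_env'

# Middle priority: the same option in .env, e.g.
# PROMPT_TEMPLATES_PATH=templates_from_dotenv

from microcore import configure

# Highest priority: an explicit argument; this value wins
configure(PROMPT_TEMPLATES_PATH='templates_from_code')
```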

## 🌟 Core Functions

### `llm(prompt: str, **kwargs) → str`

Performs a request to a large language model (LLM).

Asynchronous variant: `allm(prompt: str, **kwargs)`

```python
from microcore import *

# Will print all requests and responses to console
use_logging()

# Basic usage
ai_response = llm('What is your model name?')

# You may also pass a list of strings as the prompt
# - For chat completion models, elements are treated as separate messages
# - For text completion models, elements are treated as text lines
llm(['1+2', '='])
llm('1+2=', model='gpt-4')

# To specify a message role, you can use a dictionary or message classes
llm(dict(role='system', content='1+2='))
# equivalent
llm(SysMsg('1+2='))

# The returned value is a string
assert '7' == llm([
    SysMsg('You are a calculator'),
    UserMsg('1+2='),
    AssistantMsg('3'),
    UserMsg('3+4='),
]).strip()

# But it contains all fields of the LLM response in additional attributes
for i in llm('1+2=?', n=3, temperature=2).choices:
    print('RESPONSE:', i.message.content)

# To use response streaming, you may specify a callback function:
llm('Hi there', callback=lambda x: print(x, end=''))

# Or multiple callbacks:
output = []
llm('Hi there', callbacks=[
    lambda x: print(x, end=''),
    lambda x: output.append(x),
])
```
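And a minimal sketch of the asynchronous variant (prompt text is arbitrary):

```python
import asyncio
from microcore import allm

async def main():
    print(await allm('What is your model name?'))

asyncio.run(main())
```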

### `tpl(file_path, **params) → str`

Renders a prompt template with the given parameters.

Full-featured Jinja2 templates are used by default.

Related configuration options:

```python
from microcore import configure

configure(
    # 'tpl' folder in current working directory by default
    PROMPT_TEMPLATES_PATH='my_templates_folder',
)
```
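As a hypothetical illustration, suppose `my_templates_folder/greeting.j2` contains `Hello, {{ name }}!` (the file name and contents are invented for this example):

```python
from microcore import tpl, llm

prompt = tpl('greeting.j2', name='Bob')  # renders to 'Hello, Bob!'
response = llm(prompt)
```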

### `texts.search(collection: str, query: str | list, n_results: int = 5, where: dict = None, **kwargs) → list[str]`

Performs a similarity search.

### `texts.find_one(collection: str, query: str | list) → str | None`

Finds the most similar text.

### `texts.get_all(collection: str) → list[str]`

Returns all texts in a collection.

### `texts.save(collection: str, text: str, metadata: dict = None)`

Stores a text and related metadata in the embeddings database.

### `texts.save_many(collection: str, items: list[tuple[str, dict] | str])`

Stores multiple texts and related metadata in the embeddings database.

### `texts.clear(collection: str)`

Clears a collection.
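Putting the `texts` API together in one short sketch (collection name, texts, and metadata are arbitrary):

```python
from microcore import texts

texts.save_many('articles', [
    ('Solar panels convert sunlight into electricity', {'topic': 'energy'}),
    'Wind turbines generate power from moving air',  # plain string, no metadata
])
print(texts.find_one('articles', 'renewable energy'))  # most similar text, or None
print(texts.get_all('articles'))                       # all stored texts
texts.clear('articles')
```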

## API providers and models support

MicroCore supports all models and API providers that offer an OpenAI-compatible API.

API providers and models tested with MicroCore:

| API Provider | Models |
|---|---|
| OpenAI | All GPT-4 and GPT-3.5-Turbo models, all text completion models (davinci, gpt-3.5-turbo-instruct, etc.) |
| Microsoft Azure | All OpenAI models, Mistral Large |
| Anthropic | Claude 3 models |
| MistralAI | All Mistral models |
| Google AI Studio | Google Gemini models |
| Google Vertex AI | Gemini Pro & other models |
| Deep Infra | deepinfra/airoboros-70b, jondurbin/airoboros-l2-70b-gpt4-1.4.1, meta-llama/Llama-2-70b-chat-hf, and other models with an OpenAI-compatible API |
| Anyscale | meta-llama/Llama-2-70b-chat-hf, meta-llama/Llama-2-13b-chat-hf, meta-llama/Llama-7b-chat-hf |
| Groq | LLaMA2 70b, Mixtral 8x7b, Gemma 7b |
| Fireworks | Over 50 open-source language models |

Supported local language model APIs:

- HuggingFace Transformers (see configuration examples here).
- Custom local models, by providing your own function for chat / text completion with sync / async inference (see the sketch below).
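A hedged sketch of the custom-model hook; the `LLM_API_TYPE` value and the `INFERENCE_FUNC` option name below are assumptions made for illustration, so verify the actual hook names in `microcore/config.py`:

```python
from microcore import configure, llm

# Toy stand-in for a real local inference function
def my_inference(prompt, **kwargs) -> str:
    return f'echo: {prompt}'

configure(
    LLM_API_TYPE='function',      # assumed value, not confirmed by this document
    INFERENCE_FUNC=my_inference,  # assumed option name, not confirmed
)

print(llm('Hello'))  # -> 'echo: Hello'
```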

## 🖼️ Examples

### Code review tool

Performs an LLM code review of changes in git .patch files, in any programming language.

### Image analysis (Google Colab)

Determines the number of petals and the color of a flower from a photo (gpt-4-turbo).

### Benchmark LLMs on math problems (Kaggle Notebook)

Benchmarks the accuracy of 20+ state-of-the-art models on solving olympiad math problems; includes inference of local language models via HuggingFace Transformers and parallel inference.

### Other examples

### Python functions as AI tools

@TODO

## 🤖 AI Modules

This is an experimental feature.

It tweaks the Python import system to provide automatic setup of the MicroCore environment based on metadata in module docstrings.

Usage:

```python
import microcore.ai_modules
```

Features:

- Automatically registers template folders of AI modules in the Jinja2 environment

## 🛠️ Contributing

Please see CONTRIBUTING for details.

## 📝 License

Licensed under the MIT License © 2023 Vitalii Stepanenko
