
AI MicroCore: A Minimalistic Foundation for AI Applications

This package is a collection of wrappers around Large Language Model and Semantic Search APIs that lets you communicate with these services in a convenient way, makes them easily switchable, and separates business logic from implementation details.

It defines interfaces for the features typically used in AI applications, which allows you to keep your application as simple as possible and to try various models and services without changing your application code.

You can even switch between text completion and chat completion models using configuration alone, as the sketch below illustrates.
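
A minimal sketch of that idea follows. The MODEL option name is an assumption for illustration (only configure() itself appears later in this README); check the package source for the exact configuration keys.

from microcore import configure, llm

# Assumed option name: point MicroCore at a text completion model...
configure(MODEL='gpt-3.5-turbo-instruct')
# ...or at a chat completion model; the application code below stays identical.
# configure(MODEL='gpt-4')

print(llm('1+2='))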

A basic usage example is as follows:

from microcore import llm
while user_msg := input('Enter message: '):
    print('AI: ' + llm(user_msg))

Installation

pip install ai-microcore

Core Functions

llm(prompt: str, **kwargs) → str

Performs a request to a large language model (LLM).

from microcore import *

# Print all requests and responses to the console
use_logging()

# Basic usage
ai_response = llm('What is your model name?')

# You may also pass a list of strings as the prompt
# - For chat completion models, elements are treated as separate messages
# - For text completion LLMs, elements are treated as text lines
llm(['1+2', '='])
llm('1+2=', model='gpt-4')

# To specify a message role, you can use a dictionary or the message classes
llm(dict(role='system', content='1+2='))
# equivalent:
llm(SysMsg('1+2='))

# The returned value is a string
assert '7' == llm([
    SysMsg('You are a calculator'),
    UserMsg('1+2='),
    AssistantMsg('3'),
    UserMsg('3+4='),
]).strip()

# But it also exposes all fields of the LLM response as additional attributes
for i in llm('1+2=?', n=3, temperature=2).choices:
    print('RESPONSE:', i.message.content)

# To stream the response, you may specify a callback function:
llm('Hi there', callback=lambda x: print(x, end=''))

# Or multiple callbacks:
output = []
llm('Hi there', callbacks=[
    lambda x: print(x, end=''),
    lambda x: output.append(x),
])

tpl(file_path, **params) → str

Renders a prompt template with the given params.

Full-featured Jinja2 templates are used by default.

Related configuration options:

from microcore import configure
configure(
    # 'tpl' folder in current working directory by default
    PROMPT_TEMPLATES_PATH = 'my_templates_folder'
)
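
For example, here is a sketch of rendering a template and sending it to the LLM. The summarize.j2 file name and its contents are hypothetical:

from microcore import tpl, llm

# Assuming ./tpl/summarize.j2 contains:
#   Summarize the following text in one sentence:
#   {{ text }}
prompt = tpl('summarize.j2', text='Jinja2 is a fast, expressive template engine.')
print(llm(prompt))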

*store(collection: str | None, **kwargs)

Stores data in the embeddings database of your choice.

@TODO

*search(collection: str | None, **kwargs)

Performs a semantic / similarity search over the embeddings database.

@TODO

API providers and models support

MicroCore supports all models and API providers that offer an OpenAI-compatible API (see the configuration sketch after the table below).

API providers and models tested with MicroCore:

API Provider     Models
OpenAI           All GPT-4 and GPT-3.5-Turbo models;
                 all text completion models (davinci, gpt-3.5-turbo-instruct, etc.)
Microsoft Azure  All OpenAI models
deepinfra.com    deepinfra/airoboros-70b,
                 jondurbin/airoboros-l2-70b-gpt4-1.4.1,
                 meta-llama/Llama-2-70b-chat-hf,
                 and other models with an OpenAI-compatible API
Anyscale         meta-llama/Llama-2-70b-chat-hf,
                 meta-llama/Llama-2-13b-chat-hf,
                 meta-llama/Llama-2-7b-chat-hf
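
As a hedged sketch, connecting to a third-party OpenAI-compatible endpoint might look as follows. The LLM_API_BASE, LLM_API_KEY, and MODEL option names, as well as the endpoint URL, are assumptions for illustration; consult the package documentation for the exact configuration keys:

from microcore import configure, llm

configure(
    LLM_API_BASE='https://api.deepinfra.com/v1/openai',  # assumed endpoint URL
    LLM_API_KEY='YOUR_API_KEY',
    MODEL='meta-llama/Llama-2-70b-chat-hf',
)

print(llm('Hello!'))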

Examples

code-review-tool example

Performs an LLM-based code review of changes in git .patch files, in any programming language.

Other examples

Python functions as AI tools

@TODO

AI Modules

This is an experimental feature.

It tweaks the Python import system to provide automatic setup of the MicroCore environment based on metadata in module docstrings.

Usage:

import microcore.ai_modules

Features:

  • Automatically registers template folders of AI modules in Jinja2 environment

License

© 2023—∞ Vitalii Stepanenko

Licensed under the MIT License.
