
Intelligence/AI services for the Lifsys Enterprise

Project description

Intelisys

Intelisys is a Python library that provides a unified interface for interacting with multiple AI models and services. It integrates with OpenAI, Anthropic, OpenRouter, and Groq, giving AI-powered applications a single, consistent API across providers.

Changelog

[0.4.3] - 2024-08-14

  • Updated PyPI package configuration
  • Improved documentation and README
  • Enhanced error handling and logging capabilities
  • Updated default models for various providers
  • Minor bug fixes and performance improvements

[0.4.2] - 2024-08-13

  • Updated PyPI package configuration
  • Improved documentation and README
  • Minor bug fixes and performance enhancements

[0.4.1] - 2024-08-12

  • Fixed breaking changes introduced in version 0.4.0
  • Improved error handling in template_chat method
  • Updated documentation to reflect recent changes

[0.4.0] - 2024-08-11

  • Major refactoring of the Intelisys class
  • Removed deprecated methods and attributes
  • Updated default models for various providers
  • Improved async support
  • Enhanced error handling and logging

Installation

Install Intelisys using pip:

pip install intelisys

For the latest development version:

pip install git+https://github.com/lifsys/intelisys.git

Requirements

  • Python 3.7 or higher
  • A 1Password Connect server (for API key management)
  • Environment variables:
    • OP_CONNECT_TOKEN: Your 1Password Connect token
    • OP_CONNECT_HOST: The URL of your 1Password Connect server

Note: The library requires a local 1Password Connect server for API key retrieval.
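
For example, both variables can be set before the first Intelisys call. The values below are placeholders, and this sketch assumes the variables are read when the client is created:

import os

# Placeholder values; point these at your own 1Password Connect deployment.
os.environ["OP_CONNECT_TOKEN"] = "<your-1password-connect-token>"
os.environ["OP_CONNECT_HOST"] = "https://op-connect.example.com"

from intelisys import Intelisys

# Provider API keys are then retrieved through 1Password Connect.
intelisys = Intelisys(name="SecureAssistant", provider="openai", model="gpt-4")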

Key Features

  • Multi-provider support (OpenAI, Anthropic, OpenRouter, Groq)
  • Secure API key management with 1Password Connect
  • Asynchronous and synchronous chat interfaces
  • Template-based API calls for flexible prompts
  • JSON mode support for structured responses
  • Lazy loading of attributes for improved performance
  • Comprehensive error handling and logging
  • Retry mechanism for API calls

Quick Start

from intelisys import Intelisys

# Using Intelisys class
intelisys = Intelisys(name="MyAssistant", provider="openai", model="gpt-4")
response = intelisys.chat("Explain quantum computing").get_response()
print(response)

# Using JSON mode
intelisys_json = Intelisys(name="JSONAssistant", provider="openai", model="gpt-4", json_mode=True)
response = intelisys_json.chat("List 3 quantum computing concepts").get_response()
print(response)  # This will be a Python dictionary

# Image OCR example
intelisys = Intelisys(provider="openai", model="gpt-4o-mini")
result = (
    intelisys
    .chat("Please provide all the text in the following image(s).")
    .image("http://www.mattmahoney.net/ocr/stock_gs200.jpg")
    .image("/Users/lifsys/Documents/devhub/testingZone/_Archive/screen_small-2.png")
    .get_response()
)
print(result)

Advanced Usage

from intelisys import Intelisys
import asyncio

# Template-based API call
intelisys = Intelisys(name="TemplateAssistant", provider="anthropic", model="claude-3-5-sonnet-20240620")
render_data = {"topic": "artificial intelligence"}
template = "Explain {{topic}} in simple terms."
response = intelisys.template_chat(render_data, template).get_response()
print(response)

# Asynchronous chat
async def async_chat():
    intelisys = Intelisys(name="AsyncAssistant", provider="anthropic", model="claude-3-5-sonnet-20240620")
    response = await intelisys.chat_async("What are the implications of AGI?")
    print(await response.get_response())

asyncio.run(async_chat())

# Using context manager for temporary template and persona changes
intelisys = Intelisys(name="ContextAssistant", provider="openai", model="gpt-4")
with intelisys.template_context(template="Summarize {{topic}} in one sentence.", persona="You are a concise summarizer."):
    response = intelisys.template_chat({"topic": "quantum entanglement"}).get_response()
    print(response)

# Using retry mechanism
intelisys = Intelisys(name="RetryAssistant", provider="openai", model="gpt-4", max_retry=5)
response = intelisys.chat("This might fail, but we'll retry").get_response()
print(response)

Supported Providers and Models

Intelisys supports a wide range of AI providers and models:

  • OpenAI: Various GPT models including gpt-4
  • Anthropic: Claude models including claude-3-5-sonnet-20240620
  • OpenRouter: Access to multiple AI models through a single API
  • Groq: Fast inference models

For a complete list of supported models, please refer to the DEFAULT_MODELS dictionary in the Intelisys class.
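
If DEFAULT_MODELS is exposed as a class attribute (an assumption based on the description above; check the source code if it is stored elsewhere), the mapping can be inspected directly:

from intelisys import Intelisys

# Print the default model used for each supported provider.
for provider, model in Intelisys.DEFAULT_MODELS.items():
    print(f"{provider}: {model}")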

Error Handling

Intelisys now includes improved error handling and a retry mechanism for API calls. If an API call fails, the library will automatically retry the call up to the specified max_retry times (default is 10). This helps to handle temporary network issues or API rate limits.
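
The sketch below shows one way to handle the case where every retry fails; the exact exception class raised by Intelisys is not documented here, so it catches broadly:

from intelisys import Intelisys

# Allow up to 3 retries instead of the default 10.
intelisys = Intelisys(name="RobustAssistant", provider="openai", model="gpt-4", max_retry=3)

try:
    response = intelisys.chat("Summarize the retry behavior").get_response()
    print(response)
except Exception as exc:  # the specific exception type is not documented, so catch broadly
    print(f"Request failed after all retries: {exc}")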

JSON Parsing

For JSON responses, Intelisys now uses a more robust parsing method. If the standard json.loads() fails, it falls back to safe_json_loads() from the utilisys library, which can handle some common JSON parsing errors.
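
Conceptually, the fallback looks like the sketch below. The exact call order inside Intelisys is an assumption, but safe_json_loads can also be used directly on malformed model output:

import json
from utilisys import safe_json_loads

# Example of output that standard json.loads rejects (trailing comma);
# whether safe_json_loads repairs this particular case is an assumption.
raw = '{"concepts": ["superposition", "entanglement", "interference"],}'

try:
    data = json.loads(raw)
except json.JSONDecodeError:
    data = safe_json_loads(raw)  # fall back to the more forgiving parser

print(data)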

API Reference

For detailed information on available methods and their usage, please refer to the docstrings in the source code or our API documentation.

Contributing

We welcome contributions! Please see our Contributing Guidelines for more details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Changelog

For a detailed list of changes and version history, please refer to the CHANGELOG.md file.

About Lifsys, Inc

Lifsys, Inc is an innovative AI company dedicated to developing cutting-edge solutions for the future. Visit www.lifsys.com to learn more about our mission and projects.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

intelisys-0.4.3.tar.gz (16.8 kB)


Built Distribution

intelisys-0.4.3-py3-none-any.whl (19.2 kB)


File details

Details for the file intelisys-0.4.3.tar.gz.

File metadata

  • Download URL: intelisys-0.4.3.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.0

File hashes

Hashes for intelisys-0.4.3.tar.gz

  • SHA256: bf79aa273e3983afdc284506dfd63373269c3330cb3ed855637af10332464c81
  • MD5: 101895190a51a7efb29e8b043045f35e
  • BLAKE2b-256: 807f42363bb53839762cecbdd7bfb3ed1655ef20c07fface55259ac8af8d8df9


File details

Details for the file intelisys-0.4.3-py3-none-any.whl.

File metadata

  • Download URL: intelisys-0.4.3-py3-none-any.whl
  • Upload date:
  • Size: 19.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.0

File hashes

Hashes for intelisys-0.4.3-py3-none-any.whl

  • SHA256: fda5fc0b239e66f474f89dbe98e7128bc84fbdd7a4d1a00d3d0feaff006cf440
  • MD5: 25c619e50c5da4e08ea44e4cf3c0e5aa
  • BLAKE2b-256: 3552591df50cd6e7e41df1ac55e06231430f57814fcc9b1abe649fe603914f19

