
Build AI agents and MCPs with IntelliNode.

Project description

Intelli

A framework for creating chatbots and AI agent workflows. It enables seamless integration with multiple AI models, including OpenAI, LLaMA, DeepSeek, Stable Diffusion, and Mistral, through a unified access layer. Intelli also supports the Model Context Protocol (MCP) for standardized interaction with AI models.

Features

  • Unified API for multiple AI providers.
  • Async flow-based agent orchestration.
  • Multi-modal support (text, images, speech).
  • Model Context Protocol (MCP) integration for standardized model interactions.

Installation

pip install intelli[mcp]

Latest changes

  • Add speech services (Speechmatics and more).
  • Update the OpenAI integration to use GPT-5 by default.
  • Support MCP capabilities (docs).
  • Improve the multi-model collaboration docs.
  • Support llama.cpp and GGUF models for fast inference (docs).
  • Add DeepSeek and Llama 3 integration.
  • Add offline speech-to-text with Whisper (docs).
  • Add the latest Anthropic Claude models.

For detailed instructions, refer to the Intelli documentation.

Code Examples

Create Chatbot

Switch between multiple chatbot providers without changing your code.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

def call_chatbot(provider, model=None, api_key=None, options=None):
    # prepare the shared chat input (avoid shadowing the built-in `input`)
    chat_input = ChatModelInput("You are a helpful assistant.", model)
    chat_input.add_user_message("What is the capital of France?")

    # create the chatbot instance and send the request
    chatbot = Chatbot(api_key, provider, options=options)
    response = chatbot.chat(chat_input)

    return response

# call ChatGPT (GPT-5 is the default when no model is specified)
call_chatbot(ChatProvider.OPENAI)

# call GPT-4o explicitly
call_chatbot(ChatProvider.OPENAI, "gpt-4o")

# call Claude 3.7 Sonnet
call_chatbot(ChatProvider.ANTHROPIC, "claude-3-7-sonnet-20250219")

# call Google Gemini
call_chatbot(ChatProvider.GEMINI)

# Call NVIDIA Deepseek
call_chatbot(ChatProvider.NVIDIA, "deepseek-ai/deepseek-r1")

# Call vLLM (self-hosted)
call_chatbot(ChatProvider.VLLM, "meta-llama/Llama-3.1-8B-Instruct", options={"baseUrl": "http://localhost:8000"})
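Under the hood, the one-word provider switch comes down to a dispatch pattern: each provider sits behind a callable with the same signature, so the calling code never changes. A framework-free sketch of that idea (the provider functions and response strings below are illustrative stand-ins, not Intelli's internals):

```python
# Minimal sketch of the unified-access pattern: one entry point,
# provider-specific callables behind a dispatch table.

def call_openai(messages, model):
    # stand-in for a real OpenAI request
    return f"[openai:{model}] " + messages[-1]

def call_anthropic(messages, model):
    # stand-in for a real Anthropic request
    return f"[anthropic:{model}] " + messages[-1]

PROVIDERS = {
    "openai": call_openai,
    "anthropic": call_anthropic,
}

def chat(provider, messages, model="default"):
    # the caller only ever changes the provider string
    return PROVIDERS[provider](messages, model)

print(chat("openai", ["What is the capital of France?"], "gpt-4o"))
```

Adding a provider is then a one-line registration in the table rather than a change at every call site.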

Chat With Docs

Chat with your docs using multiple LLMs. To connect your data, visit the IntelliNode App, start a project using the Document option, upload your documents or images, and copy the generated One Key. This key will be used to connect the chatbot to your uploaded data.

# create the chatbot with your IntelliNode One Key
bot = Chatbot(YOUR_OPENAI_API_KEY, "openai", options={"one_key": YOUR_ONE_KEY})

chat_input = ChatModelInput("You are a helpful assistant.")  # GPT-5 by default
chat_input.add_user_message("What is the procedure for requesting a refund according to the user manual?")

response = bot.chat(chat_input)

Generate Images

Use the image controller to generate art from multiple models with minimal code changes:

from intelli.controller.remote_image_model import RemoteImageModel
from intelli.model.input.image_input import ImageModelInput

# model details - change only two words to switch
provider = "openai"
model_name = "dall-e-3"

# prepare the input details
prompt = "cartoonishly-styled solitary snake logo, looping elegantly to form both the body of the python and an abstract play on data nodes."
image_input = ImageModelInput(prompt=prompt, width=1024, height=1024, model=model_name)

# call the selected provider (openai or stability)
wrapper = RemoteImageModel(your_api_key, provider)
results = wrapper.generate_images(image_input)

Create AI Flows

You can create a flow of tasks executed by different AI models. Here's an example of creating a blog post flow:

  • A ChatGPT agent to write the post.
  • A Google Gemini agent to write the image description.
  • A Stable Diffusion agent to generate the image.

from intelli.flow.agents.agent import Agent
from intelli.flow.tasks.task import Task
from intelli.flow.sequence_flow import SequenceFlow
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.processors.basic_processor import TextProcessor

# define agents
blog_agent = Agent(
    agent_type='text', provider='openai', mission='write blog posts',
    model_params={'key': YOUR_OPENAI_API_KEY, 'model': 'gpt-4'},
)
copy_agent = Agent(
    agent_type='text', provider='gemini', mission='generate description',
    model_params={'key': YOUR_GEMINI_API_KEY, 'model': 'gemini'},
)
artist_agent = Agent(
    agent_type='image', provider='stability', mission='generate image',
    model_params={'key': YOUR_STABILITY_API_KEY},
)

# define tasks
task1 = Task(TextTaskInput('blog post about electric cars'), blog_agent, log=True)
task2 = Task(TextTaskInput('Generate short image description for image model'), copy_agent, pre_process=TextProcessor.text_head, log=True)
task3 = Task(TextTaskInput('Generate cartoon style image'), artist_agent, log=True)

# start sequence flow
flow = SequenceFlow([task1, task2, task3], log=True)
final_result = flow.start()

To build async AI flows with multiple paths, refer to the flow tutorial.
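The multi-path idea reduces to concurrent fan-out followed by fan-in. A framework-free sketch with plain asyncio (the agent bodies are placeholders, not Intelli's Flow API):

```python
import asyncio

# Two "agents" run concurrently (fan-out); a downstream step consumes
# both results (fan-in). Sleeps stand in for real model calls.

async def write_post(topic):
    await asyncio.sleep(0)  # placeholder for a model call
    return f"post about {topic}"

async def describe_image(topic):
    await asyncio.sleep(0)  # placeholder for a model call
    return f"image description for {topic}"

async def flow(topic):
    # both branches execute concurrently
    post, desc = await asyncio.gather(write_post(topic), describe_image(topic))
    # a downstream task sees both upstream outputs
    return {"post": post, "image_desc": desc}

result = asyncio.run(flow("electric cars"))
print(result["post"])
```

With real model calls in place of the sleeps, the two branches overlap their network latency instead of running back to back.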

Pillars

  • The wrapper layer provides low-level access to the latest AI models.
  • The controller layer offers a unified input to any AI model by handling the differences.
  • The function layer provides abstract functionality that extends based on the app's use cases.
  • The flow layer connects AI agents into a pipeline that works toward the user's task.
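The layering can be illustrated with a toy stack; the class and method names below are invented for the example, not Intelli's real classes:

```python
# Illustrative-only sketch of the wrapper/controller/function layering.

class FakeProviderWrapper:
    """Wrapper layer: raw, provider-specific request/response shape."""
    def complete(self, payload):
        return {"choices": [{"text": f"echo: {payload['prompt']}"}]}

class ChatController:
    """Controller layer: one input shape, provider differences hidden."""
    def __init__(self, wrapper):
        self.wrapper = wrapper
    def chat(self, prompt):
        raw = self.wrapper.complete({"prompt": prompt})
        return raw["choices"][0]["text"]  # normalize the raw response

class SummarizeFunction:
    """Function layer: app-level behavior built on the controller."""
    def __init__(self, controller):
        self.controller = controller
    def run(self, text):
        return self.controller.chat(f"Summarize: {text}")

fn = SummarizeFunction(ChatController(FakeProviderWrapper()))
print(fn.run("Electric cars are gaining market share."))
```

Swapping the wrapper swaps the provider; the controller and function layers above it stay untouched.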

Project details


Release history

This version

1.3.1

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

intelli-1.3.1.tar.gz (161.8 kB)


Built Distribution


intelli-1.3.1-py3-none-any.whl (213.7 kB)


File details

Details for the file intelli-1.3.1.tar.gz.

File metadata

  • Download URL: intelli-1.3.1.tar.gz
  • Upload date:
  • Size: 161.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for intelli-1.3.1.tar.gz
Algorithm Hash digest
SHA256 04d7c928bf6dba09a66bc7deac6c36fadfcaffcb23115a5acecaad8a64b8bdee
MD5 a5f41f84f10d58aaaa19bea9ae6c64d0
BLAKE2b-256 a7f3307b0e6216cfabb0ab6af3ecba6a6e5a06d46f20cc78a37df7f2cf2a4e93

See more details on using hashes here.
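To check a downloaded file against the published SHA256 digest, the Python standard library is enough (`sha256_of` below is a helper written for this example):

```python
import hashlib

def sha256_of(path):
    # hash the file in chunks so large artifacts don't load into memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# compare against the digest published above for this release:
# expected = "04d7c928bf6dba09a66bc7deac6c36fadfcaffcb23115a5acecaad8a64b8bdee"
# assert sha256_of("intelli-1.3.1.tar.gz") == expected
```

A mismatch means the download is corrupted or not the file PyPI published.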

File details

Details for the file intelli-1.3.1-py3-none-any.whl.

File metadata

  • Download URL: intelli-1.3.1-py3-none-any.whl
  • Upload date:
  • Size: 213.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for intelli-1.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 454b8b7d50aceb47a17913206d2143bacc7a244b1b4e659de09b5dcb1357c9af
MD5 044b026ef159e1b7b7ec87b952f00f96
BLAKE2b-256 b307b27fc5d7152479c7f2092a5a05fe30dc9a64b2f733158e3d29f18f792de7

