Create your chatbot or AI agent using IntelliNode. We make any model smarter.

Project description

Intelli

Create chatbots and AI agent workflows. The IntelliNode Python module connects your data with multiple AI models, such as OpenAI, Gemini, Anthropic, Stable Diffusion, or Mistral, through a unified access layer.

pip install intelli

Latest changes

  • Add Anthropic Claude 3 as a chatbot provider.
  • Add KerasAgent to load open-source models offline.

For detailed instructions, refer to the intelli documentation.

Code Examples

Create Chatbot

Switch between multiple chatbot providers without changing your code.

from intelli.function.chatbot import Chatbot
from intelli.model.input.chatbot_input import ChatModelInput

def call_chatbot(provider, model=None):
    # prepare the common input
    input = ChatModelInput("You are a helpful assistant.", model)
    input.add_user_message("What is the capital of France?")

    # create the chatbot instance for the selected provider
    bot = Chatbot(YOUR_API_KEY, provider)
    response = bot.chat(input)

    return response

# call openai
call_chatbot("openai", "gpt-4")

# call mistralai
call_chatbot("mistral", "mistral-medium")

# call claude 3
call_chatbot("anthropic", "claude-3-sonnet-20240229")

# call google gemini
call_chatbot("gemini")

Chat With Docs

Chat with your docs using multiple LLMs. To connect your data, visit the IntelliNode App, start a project using the Document option, upload your documents or images, and copy the generated One Key. This key will be used to connect the chatbot to your uploaded data.

# create a chatbot with the IntelliNode one key
bot = Chatbot(YOUR_OPENAI_API_KEY, "openai", {"one_key": YOUR_ONE_KEY})

input = ChatModelInput("You are a helpful assistant.", "gpt-3.5-turbo")
input.add_user_message("What is the procedure for requesting a refund according to the user manual?")

response = bot.chat(input)
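
Because the One Key is passed in the options dict rather than in the message itself, you can point a different provider at the same documents. A minimal sketch, assuming the same options shape works across providers as described above (variable names and the model choice are illustrative):

# same one key, different provider (adjust the model name to your account)
gemini_bot = Chatbot(YOUR_GEMINI_API_KEY, "gemini", {"one_key": YOUR_ONE_KEY})

gemini_input = ChatModelInput("You are a helpful assistant.", "gemini")
gemini_input.add_user_message("Summarize the refund policy from the user manual.")

gemini_response = gemini_bot.chat(gemini_input)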

Generate Images

Use the image controller to generate art from multiple models with minimal code changes:

from intelli.controller.remote_image_model import RemoteImageModel
from intelli.model.input.image_input import ImageModelInput

# model details - change only two words to switch
provider = "openai"
model_name = "dall-e-3"

# prepare the input details
prompt = "cartoonishly-styled solitary snake logo, looping elegantly to form both the body of the python and an abstract play on data nodes."
image_input = ImageModelInput(prompt=prompt, width=1024, height=1024, model=model_name)

# call the model openai/stability
wrapper = RemoteImageModel(YOUR_API_KEY, provider)
results = wrapper.generate_images(image_input)

Create AI Flows

You can create a flow of tasks executed by different AI models. Here's an example of creating a blog post flow:

  • ChatGPT agent to write a post.
  • Google Gemini agent to write an image description.
  • Stable Diffusion agent to generate images.

from intelli.flow.agents.agent import Agent
from intelli.flow.tasks.task import Task
from intelli.flow.sequence_flow import SequenceFlow
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.processors.basic_processor import TextProcessor

# define agents
blog_agent = Agent(agent_type='text', provider='openai', mission='write blog posts', model_params={'key': YOUR_OPENAI_API_KEY, 'model': 'gpt-4'})
copy_agent = Agent(agent_type='text', provider='gemini', mission='generate description', model_params={'key': YOUR_GEMINI_API_KEY, 'model': 'gemini'})
artist_agent = Agent(agent_type='image', provider='stability', mission='generate image', model_params={'key': YOUR_STABILITY_API_KEY})

# define tasks
task1 = Task(TextTaskInput('blog post about electric cars'), blog_agent, log=True)
task2 = Task(TextTaskInput('Generate short image description for image model'), copy_agent, pre_process=TextProcessor.text_head, log=True)
task3 = Task(TextTaskInput('Generate cartoon style image'), artist_agent, log=True)

# start sequence flow
flow = SequenceFlow([task1, task2, task3], log=True)
final_result = flow.start()

To build async AI flows with multiple paths, refer to the flow tutorial.
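
As rough orientation before the tutorial, the sketch below wires the same three agents into a branching flow. The Flow import path and its tasks/map_paths arguments are assumptions here, so treat the snippet as illustrative and follow the flow tutorial for the exact signatures.

import asyncio

from intelli.flow.flow import Flow  # assumed import path; confirm in the flow tutorial

# reuse blog_agent, copy_agent, and artist_agent from the sequence example above
blog_task = Task(TextTaskInput('blog post about electric cars'), blog_agent, log=True)
desc_task = Task(TextTaskInput('write a short image description'), copy_agent, log=True)
image_task = Task(TextTaskInput('generate a cartoon style image'), artist_agent, log=True)

# assumed constructor: named tasks plus a map describing which task feeds which
flow = Flow(
    tasks={'blog': blog_task, 'desc': desc_task, 'image': image_task},
    map_paths={'blog': ['desc', 'image'], 'desc': ['image'], 'image': []},
)

final_result = asyncio.run(flow.start())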

Pillars

  • The wrapper layer provides low-level access to the latest AI models.
  • The controller layer offers a unified input to any AI model by handling provider differences.
  • The function layer provides abstract functionality that extends based on the app's use cases.
  • The flow layer lets you compose AI agents into flows working toward user tasks, as sketched below.
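
Purely as orientation, the pillars map onto the imports already used on this page; the wrapper layer sits below these and has no example above, so it is omitted here.

# controller layer: one input shape across providers (see the image example above)
from intelli.controller.remote_image_model import RemoteImageModel

# function layer: use-case helpers such as the chatbot
from intelli.function.chatbot import Chatbot

# flow layer: agents and tasks composed into flows
from intelli.flow.agents.agent import Agent
from intelli.flow.sequence_flow import SequenceFlow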

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

intelli-0.4.6.tar.gz (43.3 kB)


Built Distribution

intelli-0.4.6-py3-none-any.whl (61.8 kB)


File details

Details for the file intelli-0.4.6.tar.gz.

File metadata

  • Download URL: intelli-0.4.6.tar.gz
  • Upload date:
  • Size: 43.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for intelli-0.4.6.tar.gz

  • SHA256: 01269e0125bb22d46739880ee5a0326593c54d78cd29dce2a9d05d180e3da7f7
  • MD5: d696267832b8229c93770d619308e7f0
  • BLAKE2b-256: 001cf9c50991c2649b87d98766ab4458b8834f412a4d0f12f2897dc969ac78dd

See more details on using hashes here.
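
For example, you can verify the downloaded archive locally before installing; this sketch uses only the standard library and the SHA256 value listed above.

import hashlib

EXPECTED_SHA256 = "01269e0125bb22d46739880ee5a0326593c54d78cd29dce2a9d05d180e3da7f7"

# hash the downloaded source distribution and compare against the published digest
with open("intelli-0.4.6.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, "hash mismatch: do not install this file"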

File details

Details for the file intelli-0.4.6-py3-none-any.whl.

File metadata

  • Download URL: intelli-0.4.6-py3-none-any.whl
  • Upload date:
  • Size: 61.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for intelli-0.4.6-py3-none-any.whl

  • SHA256: c49e5f279ff5c80205ded3b36f2cb82650226a20b7eae7f86cffbcb91848579d
  • MD5: 27dff8e8204a6db3796ffb26b5b79a0d
  • BLAKE2b-256: 6a65c7336fa72a05ba3542cc2d9e3869feee2eb41cabaf3a861b0e303ce33a2b

See more details on using hashes here.
