
Multi-Agent Reasoning Problem Solver

Project description

Multi-Agent Reasoning Problem Solver (MAR-PS)

MAR-PS is a multi-agent reasoning problem solver. You build teams and they work together to solve the problems you give them.

You can work with them as a member of their team.

Install

It can be installed via pip with the following command:

pip install mar-ps

Backends

Currently, MAR-PS supports both Ollama and OpenAI as backends. Via the OpenAI backend, you can also use LM Studio models when the LM Studio server is running; many other systems support the OpenAI API format as well.
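For example, here is a sketch pointing the OpenAI client at LM Studio's default local address (the api_key value is an arbitrary placeholder, since LM Studio does not check it):

from mar_ps import OpenAIClient

# LM Studio's OpenAI-compatible server listens on http://localhost:1234 by default.
lm_studio_client = OpenAIClient(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # placeholder; any non-empty value works
)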

We plan to add support for other backends, such as MLX or transformers in the future.

Usage

Here is an example using the Ollama backend. First, let's set up the Ollama client and use it to create a model.

from mar_ps import (
    OllamaClient,  # Ollama API client
    MAR,           # Multi-Agent Reasoning system class
    Model,         # the model class
    system,        # the system entity, for giving the initial system prompt
    Message,       # the message class
)

ollama_client = OllamaClient()

model = Model("llama3.1", ollama_client)

Note that you can mix models from different backends. For example, you could have Claude 3.5 Sonnet as a coding expert and a local model for creativity (Claude runs through the OpenAI client).
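A sketch of such a mixed setup (the second client, the model IDs, and the API key are illustrative placeholders, not part of the example script):

from mar_ps import OpenAIClient

openai_client = OpenAIClient(api_key="YOUR_API_KEY")
coding_model = Model("gpt-4o-mini", openai_client)  # hosted model for coding
creative_model = Model("llama3.1", ollama_client)   # local model for creativity

Next, let's create the MAR and add some entities.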

mar = MAR(model) # This sets the default model.

logic_expert = mar.Entity(
    "Logic Expert",
    "an expert in logic and reasoning", # lowercase first letter and no end punctuation. See the system prompt to understand why.
)
math_expert = mar.Entity(
    "Math Expert",
    "an expert in math and solving problems",
)

In practice, you will likely want to use different models for different entities to play to their strengths (for example, model=coding_model from the sketch above). Now, make sure to add a user entity.

user = mar.Entity(
    "User",
    "the one who gives problems and instructions",
    "",
    is_user=True,
    pin_to_all_models=True, # all messages sent by this user will be pinned for all models to see.
)

By setting is_user=True, whenever a message is sent to the user, you will be prompted to respond.

Now let's give each entity a system prompt. Make sure to tell them who is on their team and who they are; if you don't, the entities won't know whom to address, and the system won't work. I have found the following system prompt to work best.

for entity in mar.entities:
    # Expressions inside f-strings may not contain backslashes (before
    # Python 3.12) or reuse the outer quote character, so build these first.
    teammates = "\n".join(
        f"{e.id}: {e.introduction[0].upper() + e.introduction[1:]}."
        for e in mar.entities if e != entity
    )
    personal = f"{entity.personal_prompt} " if entity.personal_prompt else ""
    entity.message_stack.append(
        Message(
            system,
            entity,
            f"This is the messaging application. Your team includes: {teammates} "
            "You may address messages to any of them and receive messages from any of them. "
            "You may not send messages to anyone outside of your team. "
            "Your messages are private; only the sender and receiver can see them. "
            "Thus, you will need to share information with your teammates. "
            "There can only be one recipient per message; the messaging application "
            "does not support sending messages to multiple recipients at once. "
            f"You are {entity.id}, {entity.introduction}. {personal}"
            "Messages sent by you are started with To: and messages sent to you "
            "are started with From:.",
        )
    )

And finally start the chat by sending a message.

mar.start(logic_expert.send(input("You: "), user, print_all_messages=True))

Setting print_all_messages to True lets us see every message that is sent. Otherwise, we would only see the messages sent to the user.

See simple_example.py for the full code.

API Reference

Client

Client base class from which OpenAIClient and OllamaClient inherit.

async Client.get_chat_completion(self, messages: list[MessageDict], model_id: str, options={}) -> str

Gets a chat completion from the client.
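Clients can also be called directly. A minimal sketch, assuming MessageDict follows the usual role/content chat format and llama3.1 is available locally:

import asyncio

from mar_ps import OllamaClient

client = OllamaClient()
reply = asyncio.run(
    client.get_chat_completion(
        [{"role": "user", "content": "Say hello."}],  # assumed MessageDict shape
        model_id="llama3.1",
    )
)
print(reply)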

OpenAIClient(Client)

OpenAIClient.__init__(self, base_url: Optional[str] = None, api_key: Optional[str] = None, **kwargs)

Initializes the OpenAI client. kwargs are passed to openai.OpenAI.__init__.

async OpenAIClient.get_chat_completion(self, messages, model_id: str = "gpt-4o-mini", options={}) -> str

Gets a chat completion from the client. Default model is gpt-4o-mini.

OpenAIClient.openai

The openai.OpenAI instance.

OllamaClient(Client)

OllamaClient.__init__(self)

Initializes the Ollama client.

async OllamaClient.get_chat_completion(self, messages, model_id: str = "gpt-4o-mini", options={}) -> str

Gets a chat completion from the client.

Model

Model.__init__(self, id: str, client: Client)

Initializes the model.

async Model.generate(self, messages: list[Message], options={})

Generates a response from the model.

Model.id

The model ID.

Model.client

The model client, a Client object.

EntityName

An entity name.

EntityName.__init__(self, id: str, pin_to_all_models: bool = False)

Initializes the entity name.

EntityName.id

The name of the entity

EntityName.pin_to_all_models

If true, all messages made by this entity will be given to all models, not just the recipient.

System(EntityName)

The system entity name is an instance of this class and is used for system prompts.

system.id

The system name, always equal to "system".

MAR

The MAR class.

MAR.__init__(self, global_default_model: Optional[Model] = None)

Initializes the MAR. The global default model is used for all entities in this MAR that don't have a model assigned.

MAR.Entity(self, id: str, introduction: str, personal_prompt: str = "", model: Optional[Model] = None, temperature: float = 0.5, options: dict = {}, is_user: bool = False, pin_to_all_models: bool = False)

Creates an entity with the given arguments.

  • id: The ID/name of the entity.
  • introduction: The introduction of the entity. This is generally a single sentence with no end punctuation or starting capitalization. See the example system prompt for why.
  • personal_prompt: A personal prompt telling the entity how to act and respond.
  • model: The model to be used in generation. If not specified, the MAR's global default model is used.
  • temperature: The temperature to be used in generation. Defaults to 0.5.
  • options: A dict of other generation options. These are client-specific.
  • is_user: If true, the user will be prompted to respond via stdin instead of generating with the model.
  • pin_to_all_models: If true, all messages this entity sends will be pinned to the context for all models, but only the model the message was sent to gets a chance to respond.
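For example, a sketch that exercises several of the optional parameters (the entity itself is illustrative):

devils_advocate = mar.Entity(
    "Devil's Advocate",
    "a skeptic who stress-tests proposed solutions",
    personal_prompt="Challenge every claim and ask for supporting reasoning.",
    temperature=0.9,  # more varied pushback than the 0.5 default
)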

MAR.start(self, func)

Starts the MAR. func is meant to be an Entity.send() call.

MAR.entities

The list of entities in this MAR.

MAR.global_default_model

The global default model for this MAR.

Entity(EntityName)

A class derived from EntityName that represents an entity and includes methods for generating responses and sending messages.

Entity.__init__(self, mar: MAR, id: str, introduction: str, personal_prompt: str = "", model: Optional[Model] = None, temperature: float = 0.5, options: dict = {}, is_user: bool = False, pin_to_all_models: bool = False)

Initializes the entity. Please use MAR.Entity() instead. See reference there for information on parameters.

Entity.generate(self, stream: bool = False)

Generates a response from the entity. Streaming is not yet supported; the parameter exists, but setting it to true raises a NotImplementedError.

Entity.send(self, message: Message | str | None = None, sender: Optional[EntityName] = None, print_all_messages: bool = False)

Sends a message to the entity.

  • message: The message to send. May be a Message object (which carries its own sender and recipient) or a string. If it is a string, the sender parameter is required.
  • sender: The sender of the message. Only used if message is a string.
  • print_all_messages: If true, all messages sent in this and subsequent generations will be printed. Defaults to false. Useful if you want to see what the entities are talking about behind the scenes.
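For example, either call below starts the same conversation (a sketch reusing the entities from the usage example):

# String form: the sender parameter is required.
mar.start(math_expert.send("Is 91 prime?", user))

# Message form: the Message already carries sender and recipient.
mar.start(math_expert.send(Message(user, math_expert, "Is 91 prime?")))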

Entity.mar

The MAR that this entity belongs to.

Entity.model

The model that this entity uses.

Entity.introduction

A short, single sentence introduction of this entity. It should have no ending punctuation or starting capitalization.

Entity.personal_prompt

A personal prompt for this entity. Tell the entity how to act and respond.

Entity.temperature

The temperature that the entity uses in generation.

Entity.options

The options for generation. This is model and client-specific.

Entity.is_user

If true, you will be prompted to respond via stdin rather than having the model generate a response.

Entity.pin_to_all_models

If true, all messages this entity sends will be pinned to the context for all models, but only the model the message was sent to gets a chance to respond.

Entity.id

The ID/Name of the entity.

Entity.message_stack

The message stack of the entity.

Message

Message.__init__(self, sender: EntityName, recipient: EntityName, content: str)

Initializes the message.

Message.format(self, format_for: Optional[EntityName] = None)

Formats the message into dictionary form to be used as context for the format_for entity.

Message.clone(self, sender: Optional[EntityName] = None, recipient: Optional[EntityName] = None, content: Optional[str] = None)

Clones the message with the applied differences.
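For example (a sketch reusing the entities from the usage example):

note = Message(user, math_expert, "Please double-check the last step.")
# Same content, redirected to the logic expert; unspecified fields are kept.
forwarded = note.clone(recipient=logic_expert)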

Message.sender

The sender of the message. An EntityName object.

Message.recipient

The recipient of the message. An EntityName object.

Message.content

The content of the message. A string.

get_element(lst: list, index: int, default: Any = None)

Returns the element at the given index in the list. If the index is out of range, returns the default value.
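For example:

get_element(["a", "b", "c"], 1)        # returns "b"
get_element(["a", "b", "c"], 7, "?")   # index out of range, returns "?"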

extract_name_and_content(message: str)

Extracts the name and content from the generated message.

TODO

Features to add

  • TODO: add tool support

  • TODO: add streaming support

Backends to add

  • TODO: add MLX support

  • TODO: add transformers support

Hard ones to add

  • TODO: add support for multi-recipient messages

  • TODO: add support for multi-message responses

NOTE: These will be VERY difficult to implement because every time an entity receives a message, it tries to reply. If you send a message to many entities, they will all try to reply.



Download files

Download the file for your platform.

Source Distribution

mar_ps-0.1.0.tar.gz (19.7 kB)


Built Distribution

mar_ps-0.1.0-py3-none-any.whl (19.8 kB)


File details

Details for the file mar_ps-0.1.0.tar.gz.

File metadata

  • Download URL: mar_ps-0.1.0.tar.gz
  • Upload date:
  • Size: 19.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0

File hashes

Hashes for mar_ps-0.1.0.tar.gz

  • SHA256: 4e7fea12ca15cc77469d17e416fe27d3a1aee8006b07c24c7830258a1e79f22c
  • MD5: 63ee8a49553fdc90754500870eb359a6
  • BLAKE2b-256: c70c12c39328bf6c7807971da47b8a78c462de5f75332453c2aa128861ed41f0


File details

Details for the file mar_ps-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: mar_ps-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 19.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.13.0

File hashes

Hashes for mar_ps-0.1.0-py3-none-any.whl

  • SHA256: e1d196ab684e654cea6e8e6493ab753a52bc7d7903e2ea8e1a622a36d9006e83
  • MD5: 88268ffdd6cb43b49eb0a8930289e9ab
  • BLAKE2b-256: 53522c6944e8fde78b6f9538039bc9e3f7a2723ac79070ba55e6b55519976c79

