Toolset for Vision Agent

Project description

🔍🤖 Vision Agent

Vision Agent is a library that helps you use agent frameworks to generate code that solves your vision tasks. Many current vision problems can easily take hours or days to solve: you need to find the right model, figure out how to use it, and program it to accomplish the task you want. Vision Agent aims to provide an in-seconds experience by letting users describe their problem in text and having the agent framework generate code to solve the task for them. Check out our Discord for updates and roadmaps!

Web Application

Try Vision Agent live at va.landing.ai (note this may not be running the most up-to-date version).

Documentation

Vision Agent Library Docs

Getting Started

Installation

To get started, you can install the library using pip:

pip install vision-agent

Ensure you have an OpenAI API key and set it as an environment variable (if you are using Azure OpenAI, please see the Azure Setup section):

export OPENAI_API_KEY="your-api-key"
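
If you are working in a notebook where exporting shell variables is inconvenient, you can also set the key from Python before using the library (this is just standard environment handling, not a vision-agent API):

import os

# Set the key for the current process only; prefer a proper secret store in production.
os.environ["OPENAI_API_KEY"] = "your-api-key"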

Vision Agent

There are two agents that you can use. VisionAgent is a conversational agent that has access to tools that allow it to write and navigate Python code and file systems. It can converse with the user in natural language. VisionAgentCoder is an agent that can write code for vision tasks, such as counting people in an image; however, it cannot converse and can only respond with code. VisionAgent can call VisionAgentCoder to write vision code.

Basic Usage

To run the streamlit app locally and chat with Vision Agent, run the following commands:

pip install -r examples/chat/requirements.txt
export WORKSPACE=/path/to/your/workspace
export ZMQ_PORT=5555
streamlit run examples/chat/app.py

You can find more details about the streamlit app here.

Basic Programmatic Usage

>>> from vision_agent.agent import VisionAgent
>>> agent = VisionAgent()
>>> resp = agent("Hello")
>>> print(resp)
[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
>>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
>>> resp = agent(resp)
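
The assistant content in the example output above is a dict-style string with 'thoughts', 'response', and 'let_user_respond' fields. If you want to drive the conversation loop programmatically, a minimal sketch looks like this; it treats that string format as an observed output rather than a documented contract:

import ast

# Inspect the last assistant turn of the conversation returned above.
last_turn = resp[-1]
if last_turn["role"] == "assistant":
    # The content string looks like a Python dict literal in the example output,
    # so ast.literal_eval is used here; adjust if the format differs.
    payload = ast.literal_eval(last_turn["content"])
    print(payload["response"])
    if payload.get("let_user_respond"):
        resp.append({"role": "user", "content": "Thanks, that is all for now."})
        resp = agent(resp)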

Vision Agent Coder

Basic Usage

You can interact with the agent as you would with any LLM or LMM:

>>> from vision_agent.agent import VisionAgentCoder
>>> agent = VisionAgentCoder()
>>> code = agent("What percentage of the area of the jar is filled with coffee beans?", media="jar.jpg")

Which produces the following code:

from vision_agent.tools import load_image, grounding_sam

def calculate_filled_percentage(image_path: str) -> float:
    # Step 1: Load the image
    image = load_image(image_path)

    # Step 2: Segment the jar
    jar_segments = grounding_sam(prompt="jar", image=image)

    # Step 3: Segment the coffee beans
    coffee_beans_segments = grounding_sam(prompt="coffee beans", image=image)

    # Step 4: Calculate the area of the segmented jar
    jar_area = 0
    for segment in jar_segments:
        jar_area += segment['mask'].sum()

    # Step 5: Calculate the area of the segmented coffee beans
    coffee_beans_area = 0
    for segment in coffee_beans_segments:
        coffee_beans_area += segment['mask'].sum()

    # Step 6: Compute the percentage of the jar area that is filled with coffee beans
    if jar_area == 0:
        return 0.0  # To avoid division by zero
    filled_percentage = (coffee_beans_area / jar_area) * 100

    # Step 7: Return the computed percentage
    return filled_percentage
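
Assuming the generated function above is available in your session (for example, pasted into the same script), you can call it directly; the exact value will vary with the image and the underlying models:

# Call the generated function on the same image that was given to the agent.
percentage = calculate_filled_percentage("jar.jpg")
print(f"Approximately {percentage:.1f}% of the jar area is covered by coffee beans.")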

To better understand how the model came up with its answer, you can run it in debug mode by passing in the verbosity argument:

>>> agent = VisionAgentCoder(verbosity=2)

Detailed Usage

You can also have it return more information by calling chat_with_workflow. The format of the input is a list of dictionaries with the keys role, content, and media:

>>> results = agent.chat_with_workflow([{"role": "user", "content": "What percentage of the area of the jar is filled with coffee beans?", "media": ["jar.jpg"]}])
>>> print(results)
{
    "code": "from vision_agent.tools import ..."
    "test": "calculate_filled_percentage('jar.jpg')",
    "test_result": "...",
    "plan": [{"code": "...", "test": "...", "plan": "..."}, ...],
    "working_memory": ...,
}

With this you can examine more detailed information, such as the testing code, testing results, plan, or working memory it used to complete the task.
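
Because code and test come back as plain strings, one simple option is to write them out to a file for later inspection or re-runs. This is a minimal sketch using only the keys shown above; the file name is arbitrary:

from pathlib import Path

# Persist the generated code and its test so they can be inspected or re-run later.
Path("generated_solution.py").write_text(results["code"] + "\n\n" + results["test"])
print(results["test_result"])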

Multi-turn conversations

You can have multi-turn conversations with vision-agent as well, giving it feedback on the code and having it update the code accordingly. You just need to add the generated code as a response from the assistant:

import vision_agent as va

agent = va.agent.VisionAgentCoder(verbosity=2)
conv = [
    {
        "role": "user",
        "content": "Are these workers wearing safety gear? Output only a True or False value.",
        "media": ["workers.png"],
    }
]
result = agent.chat_with_workflow(conv)
code = result["code"]
conv.append({"role": "assistant", "content": code})
conv.append(
    {
        "role": "user",
        "content": "Can you also return the number of workers wearing safety gear?",
    }
)
result = agent.chat_with_workflow(conv)

Tools

There are a variety of tools for the model or the user to use. Some are executed locally while others are hosted for you. You can also ask an LMM directly to build a tool for you. For example:

>>> import vision_agent as va
>>> lmm = va.lmm.OpenAILMM()
>>> detector = lmm.generate_detector("Can you build a jar detector for me?")
>>> detector(va.tools.load_image("jar.jpg"))
[{"labels": ["jar",],
  "scores": [0.99],
  "bboxes": [
    [0.58, 0.2, 0.72, 0.45],
  ]
}]
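
The bounding boxes in the sample output look like normalized [x1, y1, x2, y2] fractions. Under that assumption (and assuming load_image returns a NumPy-style array with a (height, width, channels) shape), a rough sketch for converting them to pixel coordinates might look like this:

import vision_agent as va

# Convert normalized boxes from the detector output above into pixel coordinates.
# Both the output format and the normalization are inferred from the sample
# output, not documented guarantees.
image = va.tools.load_image("jar.jpg")
height, width = image.shape[:2]
for detection in detector(image):
    for label, score, (x1, y1, x2, y2) in zip(
        detection["labels"], detection["scores"], detection["bboxes"]
    ):
        box_px = (int(x1 * width), int(y1 * height), int(x2 * width), int(y2 * height))
        print(f"{label}: score={score:.2f}, bbox={box_px}")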

You can also add custom tools to the agent:

import vision_agent as va
import numpy as np

@va.tools.register_tool(imports=["import numpy as np"])
def custom_tool(image_path: str) -> np.ndarray:
    """My custom tool documentation.

    Parameters:
        image_path (str): The path to the image.

    Returns:
        np.ndarray: The result of the tool.

    Example
    -------
    >>> custom_tool("image.jpg")
    """

    return np.zeros((10, 10))

You need to ensure you call @va.tools.register_tool with any imports the tool uses. Global variables will not be captured by register_tool, so you need to include them in the function body. Make sure the documentation is in the same format as above, with a description, Parameters:, Returns:, and Example\n------- section. You can find an example use case here, as this is what the agent uses to pick and use the tool.
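
As a concrete illustration of those rules, here is a hedged sketch of a second tool that keeps its constant inside the function body and declares every import it needs. The tool name, threshold, and logic are made up for illustration, and load_image is assumed to return a NumPy-compatible array:

import numpy as np

import vision_agent as va
from vision_agent.tools import load_image

@va.tools.register_tool(imports=["import numpy as np", "from vision_agent.tools import load_image"])
def bright_pixel_ratio(image_path: str) -> float:
    """Returns the fraction of pixels in the image brighter than a fixed threshold.

    Parameters:
        image_path (str): The path to the image.

    Returns:
        float: The ratio of bright pixels to all pixels.

    Example
    -------
    >>> bright_pixel_ratio("image.jpg")
    """
    # Keep constants inside the function body: globals are not captured by register_tool.
    threshold = 200
    image = np.asarray(load_image(image_path))
    # Average the channels to get a rough per-pixel brightness value.
    return float((image.mean(axis=-1) > threshold).mean())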

Azure Setup

If you want to use Azure OpenAI models, you need to have two OpenAI model deployments:

  1. OpenAI GPT-4o model
  2. OpenAI text embedding model

Then you can set the following environment variables:

export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="your-endpoint"
# The deployment name of your Azure OpenAI chat model
export AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME="your_gpt4o_model_deployment_name"
# The deployment name of your Azure OpenAI text embedding model
export AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME="your_embedding_model_deployment_name"

NOTE: make sure your Azure model deployments have enough quota (tokens per minute) to support your usage. The default value of 8,000 TPM is not enough.

You can then run Vision Agent using the Azure OpenAI models:

import vision_agent as va
agent = va.agent.AzureVisionAgentCoder()
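
If you prefer to fail fast when one of the variables above is missing, a small pre-flight check (not part of the library, just standard environment handling) could look like this:

import os

import vision_agent as va

# Fail early if any of the Azure OpenAI variables from the section above is missing.
required = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME",
    "AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME",
]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing Azure OpenAI environment variables: {missing}")

agent = va.agent.AzureVisionAgentCoder()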

Q&A

How to get started with OpenAI API credits

  1. Visit the OpenAI API platform to sign up for an API key.
  2. Follow the instructions to purchase and manage your API credits.
  3. Ensure your API key is correctly configured in your project settings.

Failure to have sufficient API credits may result in limited or no functionality for the features that rely on the OpenAI API.

For more details on managing your API usage and credits, please refer to the OpenAI API documentation.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vision_agent-0.2.111.tar.gz (1.0 MB)

Uploaded Source

Built Distribution

vision_agent-0.2.111-py3-none-any.whl (1.0 MB)

Uploaded Python 3

File details

Details for the file vision_agent-0.2.111.tar.gz.

File metadata

  • Download URL: vision_agent-0.2.111.tar.gz
  • Upload date:
  • Size: 1.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.10.11 Linux/6.5.0-1025-azure

File hashes

Hashes for vision_agent-0.2.111.tar.gz

  • SHA256: cdbad5ad8a3d34c5e847b7ebe4a89adbe2255b5b898c48b8740ea8219316eac2
  • MD5: 4b8cf8be8fd8daae3c2a983ca8553565
  • BLAKE2b-256: 6fc8e2f1d1ac6873c89f03c88ab86f75631aa59ee92a241446a53ca1ea55269a

See more details on using hashes here.

File details

Details for the file vision_agent-0.2.111-py3-none-any.whl.

File metadata

  • Download URL: vision_agent-0.2.111-py3-none-any.whl
  • Upload date:
  • Size: 1.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.10.11 Linux/6.5.0-1025-azure

File hashes

Hashes for vision_agent-0.2.111-py3-none-any.whl

  • SHA256: 29113b4e82fdbf0ab093cb36a0abbc91a9f0ea5d8c0aed79f28d06de62ff8298
  • MD5: b45067f5d8f1462aaa7905357053facb
  • BLAKE2b-256: fdb3bd19b53db67a84f9e063c9e510b728baac4ea60d486099a6a250604e322c

See more details on using hashes here.
