Toolset for Vision Agent
Vision Agent is a library that helps you utilize agent frameworks to generate code to solve your vision task. Many current vision problems can easily take hours or days to solve: you need to find the right model, figure out how to use it, and program it to accomplish the task you want. Vision Agent aims to provide an in-seconds experience by allowing users to describe their problem in text and having the agent framework generate code to solve the task for them. Check out our discord for updates and roadmaps!
Web Application
Try Vision Agent live at va.landing.ai (note this may not be running the most up-to-date version).
Documentation
Getting Started
Installation
To get started, you can install the library using pip:
pip install vision-agent
Ensure you have both an Anthropic API key and an OpenAI API key set in your environment variables (if you are using Azure OpenAI, please see the Azure setup section):
export ANTHROPIC_API_KEY="your-api-key"
export OPENAI_API_KEY="your-api-key"
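If you prefer to verify the configuration from Python (for example in a notebook), here is a minimal sketch that checks the keys are set before constructing an agent; the variable names match the exports above, adjust them to your setup:

import os

# Fail early if a required key is missing rather than inside the agent call.
for key in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
    if not os.environ.get(key):
        raise RuntimeError(f"{key} is not set; export it before using Vision Agent.")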
Vision Agent
There are two agents that you can use. VisionAgent is a conversational agent that has access to tools that allow it to write and navigate Python code and file systems. It can converse with the user in natural language. VisionAgentCoder is an agent specifically for writing code for vision tasks, such as counting people in an image. However, it cannot chat with you and can only respond with code. VisionAgent can call VisionAgentCoder to write vision code.
Basic Usage
To run the streamlit app locally to chat with VisionAgent, you can run the following command:
pip install -r examples/chat/requirements.txt
export WORKSPACE=/path/to/your/workspace
export ZMQ_PORT=5555
streamlit run examples/chat/app.py
You can find more details about the streamlit app here.
Basic Programmatic Usage
>>> from vision_agent.agent import VisionAgent
>>> agent = VisionAgent()
>>> resp = agent("Hello")
>>> print(resp)
[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
>>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
>>> resp = agent(resp)
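The return value is the running conversation (a list of role/content dictionaries, as shown above). Here is a minimal sketch of reading the assistant's structured reply out of it, assuming the content is serialized as the Python-style dictionary string shown in the example output; the exact serialization may vary:

import ast

# The last message in the conversation is the assistant's reply.
last = resp[-1]
if last["role"] == "assistant":
    # In the example above, the content is a dict rendered as a string, so
    # ast.literal_eval is an illustrative way to parse it back into a dict.
    reply = ast.literal_eval(last["content"])
    print(reply["response"])          # the natural-language answer
    print(reply["let_user_respond"])  # whether the agent is waiting on the user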
VisionAgent currently utilizes Claude-3.5 as its default LMM and uses OpenAI embeddings for tool searching.
Vision Agent Coder
Basic Usage
You can interact with the agent as you would with any LLM or LMM:
>>> from vision_agent.agent import VisionAgentCoder
>>> agent = VisionAgentCoder()
>>> code = agent("What percentage of the area of the jar is filled with coffee beans?", media="jar.jpg")
Which produces the following code:
from vision_agent.tools import load_image, grounding_sam

def calculate_filled_percentage(image_path: str) -> float:
    # Step 1: Load the image
    image = load_image(image_path)
    # Step 2: Segment the jar
    jar_segments = grounding_sam(prompt="jar", image=image)
    # Step 3: Segment the coffee beans
    coffee_beans_segments = grounding_sam(prompt="coffee beans", image=image)
    # Step 4: Calculate the area of the segmented jar
    jar_area = 0
    for segment in jar_segments:
        jar_area += segment['mask'].sum()
    # Step 5: Calculate the area of the segmented coffee beans
    coffee_beans_area = 0
    for segment in coffee_beans_segments:
        coffee_beans_area += segment['mask'].sum()
    # Step 6: Compute the percentage of the jar area that is filled with coffee beans
    if jar_area == 0:
        return 0.0  # To avoid division by zero
    filled_percentage = (coffee_beans_area / jar_area) * 100
    # Step 7: Return the computed percentage
    return filled_percentage
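The agent returns the generated code as a plain string, so one simple way to use it (a sketch of one possible workflow, not the only one) is to save it and call the generated function yourself:

# Save the generated code and run the generated function on the original image.
with open("generated_solution.py", "w") as f:
    f.write(code)

namespace = {}
exec(code, namespace)  # quick experimentation only; review generated code before running it
print(namespace["calculate_filled_percentage"]("jar.jpg"))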
To better understand how the model came up with its answer, you can run it in debug mode by passing in the verbosity argument:
>>> agent = VisionAgentCoder(verbosity=2)
Detailed Usage
You can also have it return more information by calling chat_with_workflow. The format of the input is a list of dictionaries with the keys role, content, and media:
>>> results = agent.chat_with_workflow([{"role": "user", "content": "What percentage of the area of the jar is filled with coffee beans?", "media": ["jar.jpg"]}])
>>> print(results)
{
    "code": "from vision_agent.tools import ...",
    "test": "calculate_filled_percentage('jar.jpg')",
    "test_result": "...",
    "plans": {"plan1": {"thoughts": "..."}, ...},
    "plan_thoughts": "...",
    "working_memory": ...,
}
With this you can examine more detailed information such as the testing code, testing results, plans, or working memory it used to complete the task.
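For example, here is a short sketch of pulling a few of those fields out of the returned dictionary; the keys follow the structure shown above:

# Inspect what the agent produced and how it verified it.
print(results["code"])         # the final generated code
print(results["test"])         # the test the agent wrote
print(results["test_result"])  # the output of running that test
for name, plan in results["plans"].items():
    print(name, plan["thoughts"])  # the reasoning behind each candidate plan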
Multi-turn conversations
You can have multi-turn conversations with vision-agent as well, giving it feedback on the code and having it update its solution. You just need to add the code as a response from the assistant:
import vision_agent as va

agent = va.agent.VisionAgentCoder(verbosity=2)
conv = [
    {
        "role": "user",
        "content": "Are these workers wearing safety gear? Output only a True or False value.",
        "media": ["workers.png"],
    }
]
result = agent.chat_with_workflow(conv)
code = result["code"]
conv.append({"role": "assistant", "content": code})
conv.append(
    {
        "role": "user",
        "content": "Can you also return the number of workers wearing safety gear?",
    }
)
result = agent.chat_with_workflow(conv)
Tools
There are a variety of tools for the model or the user to use. Some are executed locally while others are hosted for you. You can easily access them yourself; for example, if you want to run owl_v2_image and visualize the output you can run:
import vision_agent.tools as T
import matplotlib.pyplot as plt
image = T.load_image("dogs.jpg")
dets = T.owl_v2_image("dogs", image)
viz = T.overlay_bounding_boxes(image, dets)
plt.imshow(viz)
plt.show()
You can find all available tools in vision_agent/tools/tools.py; however, VisionAgentCoder only utilizes a subset of tools that have been tested and provide the best performance. Those can be found in the same file under the TOOLS variable.
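If you want to see which tools that subset contains, here is a minimal sketch that prints their names (assuming, as the file suggests, that TOOLS is a list of the tool functions):

from vision_agent.tools.tools import TOOLS

# Print the name and first docstring line of each tool the coder can use.
for tool in TOOLS:
    doc = (tool.__doc__ or "").strip().splitlines()
    print(tool.__name__, "-", doc[0] if doc else "")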
If you can't find the tool you are looking for, you can also add custom tools to the agent:
import vision_agent as va
import numpy as np

@va.tools.register_tool(imports=["import numpy as np"])
def custom_tool(image_path: str) -> np.ndarray:
    """My custom tool documentation.

    Parameters:
        image_path (str): The path to the image.

    Returns:
        np.ndarray: The result of the tool.

    Example
    -------
    >>> custom_tool("image.jpg")
    """
    return np.zeros((10, 10))
You need to ensure you call @va.tools.register_tool with any imports the tool uses. Global variables will not be captured by register_tool, so you need to include them in the function. Make sure the documentation is in the same format as above, with a description, Parameters:, Returns:, and Example\n-------. You can find an example use case here, as this is what the agent uses to pick and use the tool.
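As a concrete illustration of the point about globals, here is a sketch of a custom tool that keeps its constant and its imports inside the function body so that register_tool can capture everything it needs; bright_pixel_ratio and THRESHOLD are made-up examples, not part of the library:

import vision_agent as va

@va.tools.register_tool(imports=["import numpy as np", "from PIL import Image"])
def bright_pixel_ratio(image_path: str) -> float:
    """Compute the fraction of bright pixels in an image.

    Parameters:
        image_path (str): The path to the image.

    Returns:
        float: The fraction of pixels above the brightness threshold.

    Example
    -------
    >>> bright_pixel_ratio("image.jpg")
    """
    # Imports and constants live inside the function because globals are not
    # captured by register_tool.
    import numpy as np
    from PIL import Image

    THRESHOLD = 200  # illustrative value, not tuned for any particular task
    image = np.asarray(Image.open(image_path).convert("L"))
    return float((image > THRESHOLD).mean())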
Can't find the tool you need and want to add it to VisionAgent? Check out our vision-agent-tools repository, where we add the source code for all the tools used in VisionAgent.
Additional Backends
Anthropic
AnthropicVisionAgentCoder uses Anthropic. To get started you just need to get an Anthropic API key and set it in your environment variables:
export ANTHROPIC_API_KEY="your-api-key"
Because Anthropic does not support embedding models, the default embedding model used is the OpenAI model, so you will also need to set your OpenAI API key:
export OPENAI_API_KEY="your-api-key"
Usage is the same as VisionAgentCoder:
>>> import vision_agent as va
>>> agent = va.agent.AnthropicVisionAgentCoder()
>>> agent("Count the apples in the image", media="apples.jpg")
OpenAI
OpenAIVisionAgentCoder uses OpenAI. To get started you just need to get an OpenAI API key and set it in your environment variables:
export OPENAI_API_KEY="your-api-key"
Usage is the same as VisionAgentCoder:
>>> import vision_agent as va
>>> agent = va.agent.OpenAIVisionAgentCoder()
>>> agent("Count the apples in the image", media="apples.jpg")
Ollama
OllamaVisionAgentCoder uses Ollama. To get started you must download a few models:
ollama pull llama3.1
ollama pull mxbai-embed-large
llama3.1 is used for the OllamaLMM for OllamaVisionAgentCoder. Normally we would use an actual LMM such as llava, but llava cannot handle the long context lengths required by the agent. Since llama3.1 cannot handle images, you may see some performance degradation. mxbai-embed-large is the embedding model used to look up tools. You can use it just like you would use VisionAgentCoder:
>>> import vision_agent as va
>>> agent = va.agent.OllamaVisionAgentCoder()
>>> agent("Count the apples in the image", media="apples.jpg")
WARNING: VisionAgent doesn't work well unless the underlying LMM is sufficiently powerful. Do not expect good results or even working code with smaller models like Llama 3.1 8B.
Azure OpenAI
AzureVisionAgentCoder uses Azure OpenAI models. To get started, follow the Azure Setup section below. You can use it just like you would use VisionAgentCoder:
>>> import vision_agent as va
>>> agent = va.agent.AzureVisionAgentCoder()
>>> agent("Count the apples in the image", media="apples.jpg")
Azure Setup
If you want to use Azure OpenAI models, you need to have two OpenAI model deployments:
- OpenAI GPT-4o model
- OpenAI text embedding model
Then you can set the following environment variables:
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="your-endpoint"
# The deployment name of your Azure OpenAI chat model
export AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME="your_gpt4o_model_deployment_name"
# The deployment name of your Azure OpenAI text embedding model
export AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME="your_embedding_model_deployment_name"
NOTE: make sure your Azure model deployments have enough quota (tokens per minute) to support them. The default value of 8,000 TPM is not enough.
You can then run Vision Agent using the Azure OpenAI models:
import vision_agent as va
agent = va.agent.AzureVisionAgentCoder()
Q&A
How to get started with OpenAI API credits
- Visit the OpenAI API platform to sign up for an API key.
- Follow the instructions to purchase and manage your API credits.
- Ensure your API key is correctly configured in your project settings.
Failure to have sufficient API credits may result in limited or no functionality for the features that rely on the OpenAI API. For more details on managing your API usage and credits, please refer to the OpenAI API documentation.