Toolset for Vision Agent

Project description


🔍🤖 Vision Agent


Vision Agent is a library that helps you use agent frameworks for your vision tasks. Many current vision problems can easily take hours or days to solve: you need to find the right model, figure out how to use it, possibly write programming logic around it to accomplish your task, or, even more expensive, train your own model. Vision Agent aims to provide an in-seconds experience by letting users describe their problem in text and using agent frameworks to solve the task for them. Check out our Discord for updates and roadmaps!

Documentation

Getting Started

Installation

To get started, you can install the library using pip:

pip install vision-agent

Ensure you have an OpenAI API key and set it as an environment variable (if you are using Azure OpenAI, please see the Additional Setup section below):

export OPENAI_API_KEY="your-api-key"

Vision Agents

You can interact with the agents as you would with any LLM or LMM:

>>> from vision_agent.agent import VisionAgent
>>> agent = VisionAgent()
>>> agent("What percentage of the area of this jar is filled with coffee beans?", image="jar.jpg")
"The percentage of area of the jar filled with coffee beans is 25%."

To better understand how the model came up with its answer, you can also run it in debug mode by passing in the verbose argument:

>>> agent = VisionAgent(verbose=True)

You can also have it return the workflow it used to complete the task, along with the individual steps and tools used to get the answer:

>>> resp, workflow = agent.chat_with_workflow([{"role": "user", "content": "What percentage of the area of this jar is filled with coffee beans?"}], image="jar.jpg")
>>> print(workflow)
[{"task": "Segment the jar using 'grounding_sam_'.",
  "tool": "grounding_sam_",
  "parameters": {"prompt": "jar", "image": "jar.jpg"},
  "call_results": [[
    {
      "labels": ["jar"],
      "scores": [0.99],
      "bboxes": [
        [0.58, 0.2, 0.72, 0.45],
      ],
      "masks": "mask.png"
    }
  ]],
  "answer": "The jar is located at [0.58, 0.2, 0.72, 0.45].",
},
{"visualize_output": "final_output.png"}]

Tools

There are a variety of tools for the model or the user to use. Some are executed locally while others are hosted for you. You can also ask an LLM directly to build a tool for you. For example:

>>> import vision_agent as va
>>> llm = va.llm.OpenAILLM()
>>> detector = llm.generate_detector("Can you build a jar detector for me?")
>>> detector("jar.jpg")
[{"labels": ["jar",],
  "scores": [0.99],
  "bboxes": [
    [0.58, 0.2, 0.72, 0.45],
  ]
}]
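
The bounding boxes in these results appear to be normalized to [0, 1] in (xmin, ymin, xmax, ymax) order, matching the workflow example above. A quick way to sanity-check a detection is to scale the boxes back to pixel coordinates and draw them; the sketch below does this with Pillow and only assumes the output format shown here:

from PIL import Image, ImageDraw

image = Image.open("jar.jpg")
draw = ImageDraw.Draw(image)
width, height = image.size

# Assumes the detector output shown above: a list of dicts with parallel
# "labels", "scores" and normalized "bboxes" entries.
for result in detector("jar.jpg"):
    for label, score, (x1, y1, x2, y2) in zip(
        result["labels"], result["scores"], result["bboxes"]
    ):
        box = (x1 * width, y1 * height, x2 * width, y2 * height)
        draw.rectangle(box, outline="red", width=3)
        draw.text((box[0], box[1]), f"{label} {score:.2f}", fill="red")

image.save("jar_detections.png")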
The current set of tools includes:

  • CLIP: classifies or tags any image given a set of input classes or tags.
  • GroundingDINO: detects arbitrary objects with inputs such as category names or referring expressions.
  • GroundingSAM: detects and segments arbitrary objects with inputs such as category names or referring expressions.
  • Counter: detects and counts the number of objects in an image given an input such as a category name or referring expression.
  • Crop: crops an image given a bounding box and returns the file name of the cropped image.
  • BboxArea: returns the area of a bounding box in pixels, normalized to 2 decimal places.
  • SegArea: returns the area of a segmentation mask in pixels, normalized to 2 decimal places.
  • BboxIoU: returns the intersection over union of two bounding boxes, normalized to 2 decimal places.
  • SegIoU: returns the intersection over union of two segmentation masks, normalized to 2 decimal places.
  • ExtractFrames: extracts frames with motion from a video.
  • ZeroShotCounting: returns the total number of objects belonging to a single class in a given image.
  • VisualPromptCounting: returns the total number of objects belonging to a single class given an image and a visual prompt.
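
These tools can also be invoked directly rather than through the agent. The snippet below is a sketch only: it assumes the tools are exposed as callable classes under vision_agent.tools and that they accept the same prompt/image parameters seen in the workflow example above; check the package for the exact names and signatures.

import vision_agent as va

# Sketch: call a detection tool directly instead of going through the agent.
# The class name and keyword arguments below are assumptions based on the
# workflow example above, not a documented API.
grounding_dino = va.tools.GroundingDINO()
detections = grounding_dino(prompt="jar", image="jar.jpg")
print(detections)  # expected: labels, scores and normalized bboxes, as in the sample outputs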

Vision Agent also has a basic set of calculation tools such as add, subtract, multiply, and divide.

Additional Setup

If you want to use Azure OpenAI models, set the following environment variables:

export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="your-endpoint"

You can then run Vision Agent using the Azure OpenAI models:

>>> import vision_agent as va
>>> agent = va.agent.VisionAgent(
>>>     task_model=va.llm.AzureOpenAILLM(),
>>>     answer_model=va.lmm.AzureOpenAILMM(),
>>>     reflection_model=va.lmm.AzureOpenAILMM(),
>>> )


Download files

Download the file for your platform.

Source Distribution

vision_agent-0.1.6.tar.gz (1.0 MB)

Uploaded Source

Built Distribution

vision_agent-0.1.6-py3-none-any.whl (1.0 MB)

Uploaded Python 3

File details

Details for the file vision_agent-0.1.6.tar.gz.

File metadata

  • Download URL: vision_agent-0.1.6.tar.gz
  • Upload date:
  • Size: 1.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.10.11 Linux/6.5.0-1018-azure

File hashes

Hashes for vision_agent-0.1.6.tar.gz

  • SHA256: 94d5da63a086cfabd8734c9f24017cf0a17ed6f4eb3257547a8f446cd49387d4
  • MD5: 8bde4af63673c6607697d8878997377e
  • BLAKE2b-256: 30b8e8a3053fa1645ec1e852c65622f04e3ae07b56f49257e11fe3dfd308b0d7


File details

Details for the file vision_agent-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: vision_agent-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 1.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.10.11 Linux/6.5.0-1018-azure

File hashes

Hashes for vision_agent-0.1.6-py3-none-any.whl

  • SHA256: 45871addffb919b8347a192a5d06ccd4b57b2aaa7d0a9c399ffbcde5deb0446c
  • MD5: 8b8a30e529cbc5115343355b2a2377da
  • BLAKE2b-256: 93b1404f9023e77000a2c30553c52f79f41fe0707c869892d680390b4d44d072

