🥚 Overeasy
Create powerful zero-shot vision models!
Overeasy allows you to chain zero-shot vision models to create custom end-to-end pipelines for tasks like:
- 📦 Bounding Box Detection
- 🏷️ Classification
- 🖌️ Segmentation (Coming Soon!)
All of this can be achieved without needing to collect and annotate large training datasets.
Overeasy makes it simple to combine pre-trained zero-shot models to build powerful custom computer vision solutions.
Installation
It's as easy as
pip install overeasy
For installing extras, refer to our Docs.
Key Features
- 🤖 Agents: Specialized tools that perform specific image processing tasks.
- 🧩 Workflows: Define a sequence of Agents to process images in a structured manner.
- 🔗 Execution Graphs: Manage and visualize the image processing pipeline.
- 🔎 Detections: Represent bounding boxes, segmentations, and classifications.
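As a minimal sketch of how these pieces fit together (assuming the same API as the full example below; the classes and image path here are hypothetical):
from overeasy import *
from PIL import Image
# A Workflow is an ordered list of Agents applied to the input image
workflow = Workflow([
    # Zero-shot classification of the whole image (the example below uses CLIP)
    ClassificationAgent(classes=["construction site", "office"]),
])
image = Image.open("./example.jpg")      # hypothetical local image
result, graph = workflow.execute(image)  # final result plus an execution graph
workflow.visualize(graph)                # inspect each step of the pipeline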
Documentation
For more details on types, library structure, and available models, please refer to our Docs.
Example Usage
Note: If you don't have a local GPU, you can run our examples by making a copy of this Colab notebook.
Download example image
!wget https://github.com/overeasy-sh/overeasy/blob/73adbaeba51f532a7023243266da826ed1ced6ec/examples/construction.jpg?raw=true -O construction.jpg
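If you're not working in a notebook, the same file can be fetched from plain Python instead of wget; this is just a standard-library sketch using the URL from the command above:
import urllib.request
# Download the example construction-site image used in the workflow below
url = ("https://github.com/overeasy-sh/overeasy/blob/"
       "73adbaeba51f532a7023243266da826ed1ced6ec/examples/construction.jpg?raw=true")
urllib.request.urlretrieve(url, "construction.jpg")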
Example workflow to identify whether a person is wearing PPE on a work site:
from overeasy import *
from overeasy.models import OwlV2
from PIL import Image
workflow = Workflow([
    # Detect each head in the input image
    BoundingBoxSelectAgent(classes=["person's head"], model=OwlV2()),
    # Apply Non-Maximum Suppression to remove overlapping bounding boxes
    NMSAgent(iou_threshold=0.5, score_threshold=0),
    # Split the input image into one image per detected head
    SplitAgent(),
    # Classify each split image using CLIP
    ClassificationAgent(classes=["hard hat", "no hard hat"]),
    # Map the returned class names
    ClassMapAgent({"hard hat": "has ppe", "no hard hat": "no ppe"}),
    # Combine the results back into a BoundingBox Detection
    JoinAgent()
])
image = Image.open("./construction.jpg")
result, graph = workflow.execute(image)
workflow.visualize(graph)
Diagram
Here's a diagram of this workflow; each layer in the graph represents a step in the workflow. The image and data attributes in each node are used together to visualize the current state of the workflow. Calling the visualize function on the workflow will spawn a Gradio instance for stepping through those states.
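If you'd rather inspect the pipeline programmatically, a rough sketch is below. It assumes the nodes returned by execute expose the image and data attributes described above; the exact return types are covered in the Docs.
# Assumption: `result` is an iterable of nodes, each with `data` and `image`
for node in result:
    print(node.data)   # detections or class labels produced at this step
    # node.image is the PIL image associated with this node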
Support
If you have any questions or need assistance, please open an issue or reach out to us at help@overeasy.sh.
Let's build amazing vision models together 🍳!