🥚 Overeasy
Create powerful zero-shot vision models!
Overeasy allows you to chain zero-shot vision models to create custom end-to-end pipelines for tasks like:
- 📦 Bounding Box Detection
- 🏷️ Classification
- 🖌️ Segmentation (Coming Soon!)
All of this can be achieved without needing to collect and annotate large training datasets.
Overeasy makes it simple to combine pre-trained zero-shot models to build powerful custom computer vision solutions.
Installation
It's as easy as
pip install overeasy
To install extras, refer to our Docs.
Key Features
- 🤖 Agents: Specialized tools that perform specific image processing tasks.
- 🧩 Workflows: Define a sequence of Agents to process images in a structured manner.
- 🔗 Execution Graphs: Manage and visualize the image processing pipeline.
- 🔎 Detections: Represent bounding boxes, segmentation, and classifications.
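In practice these pieces compose in one pattern: build a Workflow from a list of Agents, call execute on an image, and get back the detections plus the execution graph. Here is a minimal sketch of that pattern (the agent, model, and calls are the same ones used in the full example below; the image path is a placeholder):

from overeasy import *
from overeasy.models import OwlV2
from PIL import Image

# A Workflow is an ordered list of Agents.
workflow = Workflow([
    BoundingBoxSelectAgent(classes=["person's head"], model=OwlV2()),
])

# Executing the Workflow returns the detections and the execution graph.
result, graph = workflow.execute(Image.open("./your_image.jpg"))
workflow.visualize(graph)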
Documentation
For more details on types, library structure, and available models please refer to our Docs.
Example Usage
Note: If you don't have a local GPU, you can run our examples by making a copy of this Colab notebook.
Download example image
!wget https://github.com/overeasy-sh/overeasy/blob/73adbaeba51f532a7023243266da826ed1ced6ec/examples/construction.jpg?raw=true -O construction.jpg
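If you'd rather fetch the image from Python instead of a shell command (for example outside a notebook), the standard library works too; the URL and filename are the same as above:

import urllib.request

url = ("https://github.com/overeasy-sh/overeasy/blob/"
       "73adbaeba51f532a7023243266da826ed1ced6ec/examples/construction.jpg?raw=true")
urllib.request.urlretrieve(url, "construction.jpg")  # saves the example image locally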
Example workflow to identify whether a person is wearing PPE (personal protective equipment) on a work site:
from overeasy import *
from overeasy.models import OwlV2
from PIL import Image
workflow = Workflow([
    # Detect each head in the input image
    BoundingBoxSelectAgent(classes=["person's head"], model=OwlV2()),
    # Apply Non-Maximum Suppression to remove overlapping bounding boxes
    NMSAgent(iou_threshold=0.5, score_threshold=0),
    # Split the input image into images of each detected head
    SplitAgent(),
    # Classify the split images using CLIP
    ClassificationAgent(classes=["hard hat", "no hard hat"]),
    # Map the returned class names
    ClassMapAgent({"hard hat": "has ppe", "no hard hat": "no ppe"}),
    # Combine results back into a BoundingBox Detection
    JoinAgent()
])
image = Image.open("./construction.jpg")
result, graph = workflow.execute(image)
workflow.visualize(graph)
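The NMSAgent step removes duplicate detections: when two boxes overlap by more than iou_threshold (0.5 here), standard Non-Maximum Suppression keeps only the higher-scoring one. Overlap is measured as intersection over union (IoU); the helper below is a standalone illustration of that metric, not Overeasy's internal implementation:

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two heavily overlapping head detections: IoU ≈ 0.68 > 0.5, so NMS keeps only one.
print(iou((10, 10, 60, 60), (15, 15, 65, 65)))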
Diagram
Here's a diagram of this workflow, where each layer in the graph represents a step in the workflow. The image and data attributes in each node are used together to visualize the current state of the workflow. Calling the visualize function on the workflow will spawn a Gradio instance that presents this graph.
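The same Workflow object can also be reused across many images, for instance to scan a whole folder without opening the Gradio UI; a sketch assuming a local ./images directory of .jpg/.png files:

import os
from PIL import Image

results = {}
for name in sorted(os.listdir("./images")):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    image = Image.open(os.path.join("./images", name))
    result, graph = workflow.execute(image)  # same workflow defined above
    results[name] = result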
Support
If you have any questions or need assistance, please open an issue or reach out to us at help@overeasy.sh.
Let's build amazing vision models together 🍳!