
Project description

gradio_image_annotation_redaction


A Gradio component that can be used to annotate images with bounding boxes. Adapted for use with the Document Redaction app. Forked from original work by edgarGarcia and icyray.

Installation

pip install gradio_image_annotation_redaction

Usage

import gradio as gr
from gradio_image_annotation_redaction import image_annotator
import numpy as np

example_annotation = {
    "image":   "https://raw.githubusercontent.com/seanpedrick-case/document_redaction_examples/refs/heads/main/example_complaint_letter.jpg",
    "boxes": [
        {
            "xmin": 125,
            "ymin": 239,
            "xmax": 230,
            "ymax": 266,
            "label": "Name",
            "color": (255, 0, 0),
            "text": "Mark Smith",
            "page": 1,
        },
        {
            "xmin": 125,
            "ymin": 288,
            "xmax": 301,
            "ymax": 375,
            "label": "Address",
            "color": (0, 255, 0),
            "text": "Sofa Showroom, 555 Broadway, Cityland, KS 66214",
            "page": 1,
        }
    ]
}

examples_crop = [
    {
        "image": "https://raw.githubusercontent.com/gradio-app/gradio/main/guides/assets/logo.png",
        "boxes": [
            {
                "xmin": 30,
                "ymin": 70,
                "xmax": 530,
                "ymax": 500,
                "color": (100, 200, 255),
            }
        ],
    },
    {
        "image": "https://raw.githubusercontent.com/seanpedrick-case/document_redaction_examples/refs/heads/main/example_complaint_letter.jpg",
        "boxes": [
            {
                "xmin": 636,
                "ymin": 575,
                "xmax": 801,
                "ymax": 697,
                "color": (255, 0, 0),
            },
        ],
    },
]


def crop(annotations: dict):
    # Undo any rotation applied in the frontend before cropping
    if angle := annotations.get("orientation"):
        annotations["image"] = np.rot90(annotations["image"], k=-angle)
    if annotations["boxes"]:
        box = annotations["boxes"][0]
        return annotations["image"][
            box["ymin"]:box["ymax"],
            box["xmin"]:box["xmax"]
        ]
    return None

def _image_size(image):
    """Derive (width, height) from image when numpy or PIL."""
    if image is None:
        return None, None
    if isinstance(image, np.ndarray):
        if image.ndim >= 2:
            return int(image.shape[1]), int(image.shape[0])  # width, height
        return None, None
    if hasattr(image, "size"):  # PIL Image
        w, h = image.size
        return int(w), int(h)
    return None, None


def get_boxes_json(annotations):
    if annotations is None:
        return None
    image = annotations.get("image")
    image_path = image if isinstance(image, str) else None
    # Use frontend-provided dimensions when present, else derive from image data (numpy/PIL)
    image_width = annotations.get("image_width")
    image_height = annotations.get("image_height")
    if image_width is None or image_height is None:
        w, h = _image_size(image)
        if image_width is None:
            image_width = w
        if image_height is None:
            image_height = h
    return {
        "boxes": annotations.get("boxes", []),
        "orientation": annotations.get("orientation"),
        "image_width": image_width,
        "image_height": image_height,
        "image_path": image_path,
    }


with gr.Blocks() as demo:
    with gr.Tab("Object annotation", id="tab_object_annotation"):
        
        annotator = image_annotator(
            example_annotation,
            label_list=["Person", "Vehicle"],
            label_colors=[(0, 255, 0), (255, 0, 0)],
            image_type="filepath",
        )
        button_get = gr.Button("Get bounding boxes")
        json_boxes = gr.JSON()
        button_get.click(get_boxes_json, annotator, json_boxes)

    with gr.Tab("Crop", id="tab_crop"):
        with gr.Row():
            annotator_crop = image_annotator(
                examples_crop[0],
                image_type="numpy",
                disable_edit_boxes=True,
                single_box=True,
            )
            image_crop = gr.Image()
        button_crop = gr.Button("Crop")
        button_crop.click(crop, annotator_crop, image_crop)

        gr.Examples(examples_crop, annotator_crop)
    
    with gr.Accordion("Keyboard Shortcuts"):
        gr.Markdown("""
        - ``C``: Create mode
        - ``D``: Drag mode
        - ``E``: Edit selected box (same as double-click a box)
        - ``Delete``: Remove selected box
        - ``Space``: Reset view (zoom/pan)
        - ``Enter``: Confirm modal dialog
        - ``Escape``: Cancel/close modal dialog
        """)

if __name__ == "__main__":
    demo.launch()
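
The `orientation` value counts clockwise 90-degree rotations applied in the frontend, and the `crop` function above undoes it for the image data with `np.rot90`. If you also need box coordinates expressed in the un-rotated frame, the mapping can be sketched as below (the helper name `box_to_original` is hypothetical, not part of the component):

```python
def box_to_original(box, width, height, orientation):
    """Map a box drawn on the rotated view (of size width x height)
    back to coordinates in the un-rotated image.

    `orientation` is the number of clockwise 90-degree turns the
    frontend applied; we undo them one quarter-turn at a time.
    """
    xmin, ymin, xmax, ymax = box
    w, h = width, height
    for _ in range(orientation % 4):
        # Undo one clockwise quarter-turn: (x, y) -> (y, w - x),
        # then swap the image dimensions.
        xmin, ymin, xmax, ymax = ymin, w - xmax, ymax, w - xmin
        w, h = h, w
    return xmin, ymin, xmax, ymax
```

For example, with a 100x50 rotated view and `orientation=1`, the box `(10, 20, 30, 40)` maps to `(20, 70, 40, 90)` in the original 50x100 image.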

image_annotator

Initialization

name type default description
value
dict| None
value = None A dict or None. The dict must contain a key 'image' with either a URL to an image, a numpy array, or a PIL image. Optionally it may contain a key 'boxes' with a list of boxes. Each box must be a dict with the keys 'xmin', 'ymin', 'xmax' and 'ymax' giving the absolute image coordinates of the box. Each box can optionally include the keys 'label' and 'color' describing the label and color of the box; color must be a tuple of RGB values (e.g. `(255, 255, 255)`). The dict can optionally include the key 'orientation' with an integer between 0 and 3, describing the number of clockwise 90-degree rotations applied to the image in the frontend.
boxes_alpha
float| None
value = None Opacity of the bounding boxes, between 0 and 1.
label_list
list[str]| None
value = None List of valid labels.
label_colors
list[str]| None
value = None Optional list of colors for each label when `label_list` is used. Colors must be a tuple of RGB values (e.g. `(255,255,255)`).
box_min_size
int| None
value = None Minimum valid bounding box size.
handle_size
int| None
value = None Size of the bounding box resize handles.
box_thickness
int| None
value = None Thickness of the bounding box outline.
box_selected_thickness
int| None
value = None Thickness of the bounding box outline when it is selected.
disable_edit_boxes
bool| None
value = None Disables the ability to set and edit the label and color of the boxes.
single_box
bool
value = False If True, at most one box can be drawn.
height
int| str| None
value = None The height of the displayed image, specified in pixels if a number is passed, or in CSS units if a string is passed.
width
int| str| None
value = None The width of the displayed image, specified in pixels if a number is passed, or in CSS units if a string is passed.
image_mode
"1"| "L"| "P"| "RGB"| "RGBA"| "CMYK"| "YCbCr"| "LAB"| "HSV"| "I"| "F"
value = "RGB" "RGB" if color, or "L" if black and white. See https://pillow.readthedocs.io/en/stable/handbook/concepts.html for other supported image modes and their meaning.
sources
list["upload"| "webcam"| "clipboard"]| None
value = ['upload', 'webcam', 'clipboard'] List of sources for the image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "clipboard" allows users to paste an image from the clipboard. If None, defaults to ["upload", "webcam", "clipboard"].
image_type
"numpy"| "pil"| "filepath"
value = "numpy" The format the image is converted to before being passed into the prediction function. "numpy" converts the image to a numpy array with shape (height, width, 3) and values from 0 to 255, "pil" converts the image to a PIL image object, "filepath" passes a str path to a temporary file containing the image. If the image is an SVG, `image_type` is ignored and the filepath of the SVG is returned.
label
str| None
value = None The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
container
bool
value = True If True, will place the component in a container - providing some extra padding around the border.
scale
int| None
value = None relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.
min_width
int
value = 160 minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
interactive
bool| None
value = True if True, will allow users to upload and annotate an image; if False, can only be used to display annotated images.
visible
bool
value = True If False, component will be hidden.
elem_id
str| None
value = None An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
elem_classes
list[str]| str| None
value = None An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
render
bool
value = True If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
show_label
bool| None
value = None if True, will display label.
show_download_button
bool
value = True If True, will show a button to download the image.
show_share_button
bool| None
value = None If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
show_clear_button
bool| None
value = True If True, will show a button to clear the current image.
show_remove_button
bool| None
value = None If True, will show a button to remove the selected bounding box.
handles_cursor
bool| None
value = True If True, the cursor will change when hovering over box handles in drag mode. Can be CPU-intensive.
use_default_label
bool
value = False If True, the first item in label_list will be used as the default label when creating boxes.
enable_keyboard_shortcuts
bool
value = True If True, the component will respond to keyboard events.
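
`box_min_size` above is enforced in the frontend when boxes are drawn. If you want the same guard on the backend (for example, before writing redactions to a document), a mirror check might look like this; `filter_small_boxes` is an illustrative helper, not part of the component's API:

```python
def filter_small_boxes(boxes, min_size):
    """Drop boxes whose width or height is below `min_size` pixels.

    Backend-side mirror of the frontend `box_min_size` check;
    expects boxes in the dict format used by the component.
    """
    return [
        b for b in boxes
        if (b["xmax"] - b["xmin"]) >= min_size
        and (b["ymax"] - b["ymin"]) >= min_size
    ]
```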

Events

name description
clear This listener is triggered when the user clears the image_annotator using the clear button for the component.
change Triggered when the value of the image_annotator changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See .input() for a listener that is only triggered by user input.
upload This listener is triggered when the user uploads a file into the image_annotator.

User function

The impact on the users predict function varies depending on whether the component is used as an input or output for an event (or both).

  • When used as an Input, the component only impacts the input signature of the user function.
  • When used as an output, the component only impacts the return signature of the user function.

The code snippet below is accurate in cases where the component is used as both an input and an output.

  • As output: Is passed, a dict with the image and boxes or None.
  • As input: Should return, a dict with an image and an optional list of boxes or None.
def predict(
    value: AnnotatedImageValue| None
) -> AnnotatedImageValue| None:
    return value

AnnotatedImageValue

class AnnotatedImageValue(TypedDict):
    image: Optional[np.ndarray | PIL.Image.Image | str]
    boxes: Optional[List[dict]]
    orientation: Optional[int]
    image_width: Optional[int]
    image_height: Optional[int]
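
When constructing a value in backend code, a small builder can validate boxes before handing them to the component. The `make_annotation` helper below is a hypothetical convenience based on the dict shape documented above, not part of the package:

```python
REQUIRED_KEYS = {"xmin", "ymin", "xmax", "ymax"}

def make_annotation(image, boxes=None, orientation=None):
    """Build a dict in the AnnotatedImageValue shape, checking that
    every box carries the required coordinate keys."""
    boxes = boxes or []
    for i, box in enumerate(boxes):
        missing = REQUIRED_KEYS - box.keys()
        if missing:
            raise ValueError(f"box {i} is missing keys: {sorted(missing)}")
    return {"image": image, "boxes": boxes, "orientation": orientation}
```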
