🙈 Vision utilities for web interaction agents 🙈
🔗 Main site • 🐦 Twitter • 📢 Discord
Tarsier
If you've tried using an LLM to automate web interactions, you've probably run into questions like:
- How should you feed the webpage to an LLM? (e.g. HTML, Accessibility Tree, Screenshot)
- How do you map LLM responses back to web elements?
- How can you inform a text-only LLM about the page's visual structure?
At Reworkd, we iterated on all these problems across tens of thousands of real web tasks to build a powerful perception system for web agents... Tarsier! In the video below, we use Tarsier to provide webpage perception for a minimalistic GPT-4 LangChain web agent.
https://github.com/reworkd/tarsier/assets/50181239/af12beda-89b5-4add-b888-d780b353304b
How does it work?
Tarsier visually tags interactable elements on a page via brackets and an ID, e.g. `[23]`. In doing this, we provide a mapping between elements and IDs for an LLM to take actions upon (e.g. `CLICK [23]`). We define interactable elements as buttons, links, or input fields that are visible on the page; Tarsier can also tag all textual elements if you pass `tag_text_elements=True`.
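For a feel of the output, the tagged text for a simple page header might look roughly like this (a hypothetical fragment, not real Tarsier output; the different tag symbols are explained further below):

```
[@0] Hacker News    [@1] new    [@2] past    [@3] comments
[#4] Search
[$5] login
```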
Furthermore, we've developed an OCR algorithm to convert a page screenshot into a whitespace-structured string (almost like ASCII art) that even an LLM without vision can understand. This is critical because current vision-language models still lack the fine-grained representations needed for web interaction tasks. On our internal benchmarks, unimodal GPT-4 + Tarsier-Text beats GPT-4V + Tarsier-Screenshot by 10-20%!
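As a rough illustration of the idea, here is a minimal sketch (not Tarsier's actual OCR algorithm; the cell-size constants and the `boxes_to_text` helper are assumptions for illustration) of how OCR word boxes can be snapped onto a character grid so that horizontal and vertical position survive as whitespace:

```python
from collections import defaultdict

# Assumed average size of one character cell, in pixels (arbitrary choice).
CHAR_W, CHAR_H = 8, 18

def boxes_to_text(words):
    """words: iterable of (text, x, y), with (x, y) the top-left pixel of each word."""
    rows = defaultdict(dict)
    for text, x, y in words:
        row, col = y // CHAR_H, x // CHAR_W
        for i, ch in enumerate(text):
            rows[row][col + i] = ch  # place each character in its grid cell
    if not rows:
        return ""
    lines = []
    for r in range(max(rows) + 1):
        cells = rows.get(r, {})
        lines.append("".join(cells.get(c, " ") for c in range(max(cells, default=-1) + 1)))
    return "\n".join(lines)

# Two words on one visual line, one word below them:
print(boxes_to_text([("Home", 16, 20), ("Login", 400, 20), ("Welcome", 16, 60)]))
```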
*Example output: a tagged screenshot alongside its tagged text representation.*
Installation
```bash
pip install tarsier
```
Usage
Visit our cookbook for agent examples using Tarsier.
We currently support two OCR engines: Google Vision and Microsoft Azure. To create service account credentials for Google, follow the instructions in this Stack Overflow answer: https://stackoverflow.com/a/46290808/1780891
The credentials for Microsoft Azure are stored as a simple JSON consisting of an API key and an endpoint:

```json
{
  "key": "<enter_your_api_key>",
  "endpoint": "<enter_your_api_endpoint>"
}
```
These values can be found in the Keys and Endpoint section of your Computer Vision resource; see the instructions at https://learn.microsoft.com/en-us/answers/questions/854952/dont-find-your-key-and-your-endpoint
Otherwise, basic Tarsier usage might look like the following:
```python
import asyncio
import json

from playwright.async_api import async_playwright
from tarsier import Tarsier, GoogleVisionOCRService, MicrosoftAzureOCRService


def load_ocr_credentials(json_file_path):
    with open(json_file_path) as f:
        credentials = json.load(f)
    return credentials


async def main():
    # To create the service account key, follow the instructions in this SO answer:
    # https://stackoverflow.com/a/46290808/1780891
    google_cloud_credentials = load_ocr_credentials('./google_service_acc_key.json')
    # microsoft_azure_credentials = load_ocr_credentials('./microsoft_azure_credentials.json')

    ocr_service = GoogleVisionOCRService(google_cloud_credentials)
    # ocr_service = MicrosoftAzureOCRService(microsoft_azure_credentials)

    tarsier = Tarsier(ocr_service)

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        page = await browser.new_page()
        await page.goto("https://news.ycombinator.com")

        page_text, tag_to_xpath = await tarsier.page_to_text(page)
        print(tag_to_xpath)  # Mapping of tags to XPaths
        print(page_text)     # Text representation of the page


if __name__ == '__main__':
    asyncio.run(main())
```
Keep in mind that Tarsier tags different types of elements differently to help your LLM identify what actions are performable on each element. Specifically:
- `[#ID]`: text-insertable fields (e.g. `textarea`, `input` with textual type)
- `[@ID]`: hyperlinks (`<a>` tags)
- `[$ID]`: other interactable elements (e.g. `button`, `select`)
- `[ID]`: plain text (if you pass `tag_text_elements=True`)
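To close the loop, here is hypothetical glue code (not part of Tarsier's API) showing how an LLM action string like `CLICK [23]` can be parsed and resolved back to a live element via the `tag_to_xpath` mapping returned by `page_to_text`. The action grammar, `ACTION_RE`, and `execute_action` are illustrative assumptions:

```python
import re

# Hypothetical action grammar: CLICK [23], or TYPE [5] "some text".
ACTION_RE = re.compile(r'(CLICK|TYPE)\s+\[[#@$]?(\d+)\](?:\s+"(.*)")?')

async def execute_action(page, tag_to_xpath, llm_output: str):
    match = ACTION_RE.search(llm_output)
    if match is None:
        raise ValueError(f"Unrecognized action: {llm_output!r}")
    action, tag_id, text = match.group(1), int(match.group(2)), match.group(3)
    # tag_to_xpath is the mapping returned by tarsier.page_to_text(page),
    # assumed here to be keyed by integer tag IDs.
    locator = page.locator(f"xpath={tag_to_xpath[tag_id]}")
    if action == "CLICK":
        await locator.click()
    else:  # TYPE
        await locator.fill(text or "")

# Usage (inside an async context with a live Playwright page):
#     await execute_action(page, tag_to_xpath, "CLICK [23]")
```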
Local Development
Setup
We have provided a handy setup script to get you up and running with Tarsier development.
```bash
./script/setup.sh
```
If you modify any TypeScript files used by Tarsier, you'll need to execute the following command. This compiles the TypeScript into JavaScript, which can then be utilized in the Python package.
```bash
npm run build
```
Testing
We use pytest for testing. To run the tests, simply run:
```bash
poetry run pytest .
```
Linting
Prior to submitting a potential PR, please run the following to format your code:
```bash
./script/format.sh
```
Supported OCR Services
- Google Cloud Vision
- Amazon Textract (Coming Soon)
- Microsoft Azure Computer Vision
Roadmap
- Add documentation and examples
- Clean up interfaces and add unit tests
- Launch
- Improve OCR text performance
- Add options to customize tagging styling
- Add support for other browser drivers as necessary
Citations
```bibtex
@misc{reworkd2023tarsier,
  title        = {Tarsier},
  author       = {Rohan Pandey and Adam Watkins and Asim Shrestha and Srijan Subedi},
  year         = {2023},
  howpublished = {GitHub},
  url          = {https://github.com/reworkd/tarsier}
}
```