
🔍 Yandex Reverse Image Search Tool

A professional reverse image search and crawling tool that uses Yandex's image search engine to find and download similar images.

  • Advanced reverse image search with Yandex's AI engine
  • Support for local images, URLs, and directories
  • Multi-process parallel processing
  • Automatic resume for interrupted operations

Note: Yandex may restrict access if too many requests are made concurrently.

Yandex Images: https://yandex.ru/images/

Installation

  1. Install the Python package (requires Python 3.8 or higher): pip install yandex-ris
  2. Install Google Chrome and ChromeDriver (matching your Chrome version)

Check your Chrome and ChromeDriver versions:
$ which chromedriver
/usr/local/bin/chromedriver
$ which google-chrome
/usr/bin/google-chrome
$ google-chrome --version
Google Chrome 137.0.7151.68
$ chromedriver --version
ChromeDriver 137.0.7151.68 (2989ffee9373ea8b8623bd98b3cb350a8e95cadc-refs/branch-heads/7151@{#1873})
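Chrome and ChromeDriver must share the same major version. As a quick sanity check, a small helper can compare the two `--version` strings (`major_version` is a hypothetical utility for illustration, not part of yandex-ris):

```python
import re

def major_version(version_output: str) -> int:
    """Extract the major version from a `--version` line such as
    "Google Chrome 137.0.7151.68" or "ChromeDriver 137.0.7151.68 (...)"."""
    match = re.search(r"\b(\d+)\.\d+\.\d+", version_output)
    if match is None:
        raise ValueError(f"no version number found in {version_output!r}")
    return int(match.group(1))

chrome = major_version("Google Chrome 137.0.7151.68")
driver = major_version("ChromeDriver 137.0.7151.68 (2989ffee9373ea8b8623bd98b3cb350a8e95cadc-refs/branch-heads/7151@{#1873})")
assert chrome == driver  # mismatched major versions usually break Selenium sessions
```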

Usage

💻 Command Line Interface

# Basic usage with a single image
yandex-ris -i image.jpg -o output_dir

# Process all images in a directory
yandex-ris -i images_folder -o output_dir

# Process images from a URL
yandex-ris -i https://example.com/image.jpg -o output_dir

# Process images from a list file
yandex-ris -i image_list.txt -o output_dir

# With custom options
yandex-ris -i input_path -o output_dir -m 200 -w 4 -t 8
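For the list-file form above, the file is plain text with one path or URL per line (see Input Types). A minimal sketch that builds such a file from a folder, using only the standard library (`write_image_list` is a hypothetical helper, not part of the package):

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

def write_image_list(folder: str, list_file: str) -> int:
    """Write every image found under `folder` to `list_file`,
    one path per line; returns the number of paths written."""
    paths = sorted(
        str(p)
        for p in Path(folder).rglob("*")
        if p.suffix.lower() in IMAGE_EXTENSIONS
    )
    Path(list_file).write_text("\n".join(paths) + "\n")
    return len(paths)
```

The resulting file can then be passed with `-i image_list.txt` as in the example above.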

Configuration

  • --input, -i: Input path (file, directory, or URL) (required)
  • --output, -o: Output root directory (default: downloaded_images)
  • --max-images, -m: Maximum images to download per source (default: 100)
  • --pause-time, -p: Page load wait time in seconds (default: 7)
  • --workers, -w: Number of parallel processing workers (default: 2)
  • --download-threads, -t: Download threads per process (default: 4)
  • --log-level: Logging level (default: INFO)

📝 Input Types

The tool supports multiple input types:

  1. Single image file (e.g., image.jpg)
  2. Directory containing images (will recursively scan for images)
  3. Image URL (e.g., https://example.com/image.jpg)
  4. Text file containing a list of paths/URLs (one per line)
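How the tool dispatches on these four types is internal to the package, but the logic can be sketched roughly (`classify_input` is a hypothetical helper, assuming URLs are detected by scheme and list files by a `.txt` suffix):

```python
from pathlib import Path

def classify_input(value: str) -> str:
    """Map an input string to one of the four supported input types."""
    if value.startswith(("http://", "https://")):
        return "url"
    path = Path(value)
    if path.is_dir():
        return "directory"
    if path.suffix.lower() == ".txt":
        return "list_file"
    return "image_file"
```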

🐍 Python API

from yandex_ris import YandexImageCrawler

crawler = YandexImageCrawler(
    output_dir="downloaded_images",
    max_images=100,
    workers=4,
    download_threads=4
)

# Process a single image
crawler.process_images(["path/to/image.jpg"])

# Process multiple images
crawler.process_images([
    "path/to/image1.jpg",
    "path/to/image2.jpg",
    "https://example.com/image.jpg"
])
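Because the same source can appear more than once in a hand-built list, it can help to clean the list before handing it to `process_images`. A small pre-processing sketch (plain Python, not part of the yandex_ris API):

```python
def prepare_inputs(raw_paths):
    """Strip whitespace, drop blank entries, and deduplicate while
    preserving order, so each source is crawled only once."""
    seen = set()
    cleaned = []
    for item in raw_paths:
        item = item.strip()
        if item and item not in seen:
            seen.add(item)
            cleaned.append(item)
    return cleaned

# e.g. crawler.process_images(prepare_inputs(open("image_list.txt").read().splitlines()))
```

Since the crawler runs multiple worker processes, it is also safest to invoke it under an `if __name__ == "__main__":` guard on platforms where Python spawns subprocesses.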

TODO

  • Support downloading original/full-size images instead of thumbnails
  • Add proxy support to bypass regional restrictions and reduce the risk of IP bans
  • Improve robustness against Yandex's anti-scraping mechanisms

Disclaimer

This tool is provided solely for research and educational purposes. By using this tool, you agree to abide by the following terms and conditions:

  1. Permitted Use: This tool is intended strictly for non-commercial, research, and educational use.
  2. Prohibited Use: Any use of this tool for unlawful, malicious, or unauthorized purposes is strictly prohibited.
  3. User Responsibility: Users are fully responsible for ensuring that their use of this tool complies with all applicable local, national, and international laws and regulations.
  4. Liability Disclaimer: The developers, contributors, and maintainers of this tool shall not be held liable for any direct, indirect, incidental, or consequential damages arising from the use or misuse of this tool.
  5. No Warranty: This tool is provided "as is", without any warranties, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement.
  6. Consequences of Misuse: Any misuse of this tool, particularly for malicious or illegal activities, may result in the violation of laws and could lead to civil or criminal penalties.

By using this tool, you acknowledge that you have read, understood, and agreed to be bound by these terms.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Download files

Download the file for your platform.

Source Distribution

yandex_ris-0.1.1.tar.gz (12.9 kB)

Built Distribution


yandex_ris-0.1.1-py3-none-any.whl (11.6 kB)

File details

Details for the file yandex_ris-0.1.1.tar.gz.

File metadata

  • Download URL: yandex_ris-0.1.1.tar.gz
  • Size: 12.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.17

File hashes

Hashes for yandex_ris-0.1.1.tar.gz:

  • SHA256: 19f46b55964130febc7f05e1bd4eeab2e1dbb5613e72154aff002c9df4ed3d84
  • MD5: a35bc52da6ee630584885d0f2dee4add
  • BLAKE2b-256: d4400153f88f950db917949569375c5c934944ca8adbf54647e356ff1d92d720

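To verify a downloaded archive against the SHA256 digest listed above, the standard library's `hashlib` is enough:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA256 digest of a file, read in chunks so large archives
    need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# sha256_of("yandex_ris-0.1.1.tar.gz") should equal the SHA256 value listed above.
```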

File details

Details for the file yandex_ris-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: yandex_ris-0.1.1-py3-none-any.whl
  • Size: 11.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.17

File hashes

Hashes for yandex_ris-0.1.1-py3-none-any.whl:

  • SHA256: 07eab9411c8d386f9bd75feb91c624ea6e04903f6edde1e731628414d5931fe3
  • MD5: f61b0c20222d667dc7f9de5685cb56f4
  • BLAKE2b-256: 4714115581affc9f16aca46c777efe7a12c1bdaed42fedc8cb2072215c0c2f12

