
google-maps-scraper

Python 3.10+ | License: MIT

Scrape Google Maps place details — rating, review count, address, phone, hours, coordinates, and more — without an API key.

Built with Playwright (Firefox) for reliable rendering and asyncio for high-throughput batch processing.

Features

  • 🔍 Scrape place details from any Google Maps URL or search query
  • Extract 20+ fields — rating, review count, address, phone, website, hours, coordinates, category, and more
  • 🚀 Async batch processing — configurable concurrency for scraping thousands of URLs
  • 💾 Crash recovery — auto-save with resume support; pick up where you left off
  • 🌍 Multi-language — supports any Google Maps locale (en, ja, zh-TW, ko, ...)
  • 🔎 Smart search handling — auto-clicks the first search result when a query returns multiple matches
  • 🤖 Headless-ready — runs perfectly in CI/CD and headless environments
  • 📦 CLI + Python API — use from the command line or import as a library

Installation

pip install google-maps-scraper
playwright install firefox

Note: If running on a server without a GUI, use playwright install firefox --with-deps to also install the system-level browser dependencies.

Optional: Stealth Mode

For better anti-detection, install playwright-stealth:

pip install google-maps-scraper[stealth]

Quick Start

CLI

# Scrape a single place
gmaps-scraper scrape "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower"

# Scrape with language setting
gmaps-scraper scrape "https://www.google.com/maps/search/?api=1&query=東京タワー" --lang ja

# Batch scrape from CSV
gmaps-scraper batch urls.csv -o results.json --concurrency 5

# Batch scrape to CSV
gmaps-scraper batch urls.csv -o results.csv --lang zh-TW --concurrency 3

Python API (Async)

import asyncio
from gmaps_scraper import GoogleMapsScraper, ScrapeConfig

async def main():
    config = ScrapeConfig(language="en", headless=True)
    async with GoogleMapsScraper(config) as scraper:
        result = await scraper.scrape(
            "https://www.google.com/maps/search/?api=1&query=Machu+Picchu"
        )
        if result.success:
            print(f"Name:    {result.place.name}")
            print(f"Rating:  {result.place.rating}")
            print(f"Reviews: {result.place.review_count}")
            print(f"Address: {result.place.address}")

asyncio.run(main())

Python API (Sync)

from gmaps_scraper import scrape_place

result = scrape_place("https://www.google.com/maps/search/?api=1&query=Colosseum")
print(result.place.name, result.place.rating)

Batch Processing

import asyncio
from gmaps_scraper import scrape_batch, ScrapeConfig

async def main():
    with open("urls.txt") as f:
        urls = f.read().splitlines()

    config = ScrapeConfig(
        concurrency=5,
        delay_min=1.0,
        delay_max=3.0,
        headless=True,
        save_interval=50,
    )

    results = await scrape_batch(
        urls=urls,
        config=config,
        output_path="results.json",
        resume=True,  # Skip already-scraped URLs on restart
    )

    success = sum(1 for r in results if r.success)
    print(f"Done: {success}/{len(results)} succeeded")

asyncio.run(main())

CLI Reference

gmaps-scraper scrape <url>

Scrape a single Google Maps URL and output JSON.

| Option | Default | Description |
|---|---|---|
| --lang | | Language code (e.g., en, ja, zh-TW) |
| --no-headless | | Show the browser window (for debugging) |
| -v, --verbose | | Enable debug logging |

gmaps-scraper batch <input> -o <output>

Batch scrape URLs from a file. Output format is inferred from file extension (.json or .csv).

| Option | Default | Description |
|---|---|---|
| -o, --output | required | Output file path (.json or .csv) |
| --concurrency | 5 | Parallel browser tabs |
| --lang | | Language code |
| --proxy | | Proxy server URL (e.g., http://proxy:8080) |
| --delay-min | 2.0 | Min delay between requests (seconds) |
| --delay-max | 5.0 | Max delay between requests (seconds) |
| --no-resume | | Start fresh, don't resume from existing output |
| --save-interval | 50 | Auto-save every N results |

Input File Format

CSV — the scraper looks for a column named url, URL, or link:

url,name
https://www.google.com/maps/search/?api=1&query=Eiffel+Tower,Eiffel Tower
https://www.google.com/maps/search/?api=1&query=Colosseum,Colosseum

Text — one URL per line:

https://www.google.com/maps/search/?api=1&query=Eiffel+Tower
https://www.google.com/maps/search/?api=1&query=Colosseum
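If you only have plain place names, you can build search URLs in the api=1&query= form shown above with the standard library. This is an illustrative helper, not part of the package; it simply percent-encodes each query with urllib.parse.quote_plus and writes a urls.csv the batch command can read:

```python
import csv
import urllib.parse

# Place names to turn into Google Maps search URLs.
queries = ["Eiffel Tower", "Colosseum", "東京タワー"]

BASE = "https://www.google.com/maps/search/?api=1&query="

rows = []
for q in queries:
    # quote_plus encodes spaces as "+" and non-ASCII as percent-escapes.
    rows.append({"url": BASE + urllib.parse.quote_plus(q), "name": q})

# Write a CSV with the "url" column the scraper looks for.
with open("urls.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "name"])
    writer.writeheader()
    writer.writerows(rows)
```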

Output Format

JSON

[
  {
    "input_url": "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower",
    "success": true,
    "place": {
      "name": "Eiffel Tower",
      "rating": 4.7,
      "review_count": 344856,
      "address": "Av. Gustave Eiffel, 75007 Paris, France",
      "phone": "+33 8 92 70 12 39",
      "website": "https://www.toureiffel.paris/",
      "category": "Historical landmark",
      "latitude": 48.8583701,
      "longitude": 2.2944813,
      "hours": ["Monday 09:30–23:45", "..."],
      "google_maps_url": "https://www.google.com/maps/place/...",
      "image_url": "https://lh3.googleusercontent.com/gps-cs-s/...",
      "permanently_closed": false
    },
    "scraped_at": "2025-03-06T12:00:00"
  }
]

CSV

Flat structure with all place fields as columns, one row per place. Ideal for spreadsheets and data analysis.
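If you scraped to JSON but later want the flat CSV shape, the nested place object can be merged into each row with the standard library alone. A minimal sketch, assuming the JSON layout shown above (input_url, success, and a nested place dict):

```python
import csv
import json

# Sample in the same shape as the JSON output above (normally: json.load(open("results.json"))).
results = json.loads("""[
  {"input_url": "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower",
   "success": true,
   "place": {"name": "Eiffel Tower", "rating": 4.7, "review_count": 344856}}
]""")

# Flatten: lift the nested "place" fields up next to the top-level fields.
rows = [
    {"input_url": r["input_url"], **r["place"]}
    for r in results
    if r["success"]
]

with open("results_flat.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```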

Extracted Fields

| Field | Type | Description |
|---|---|---|
| name | str | Place name |
| rating | float | Star rating (1.0–5.0) |
| review_count | int | Total number of reviews |
| address | str | Full address |
| phone | str | Phone number |
| website | str | Website URL |
| category | str | Place category (e.g., "Restaurant") |
| hours | list[str] | Opening hours per day |
| latitude | float | Latitude coordinate |
| longitude | float | Longitude coordinate |
| plus_code | str | Google Plus Code |
| place_id | str | Google Maps Place ID |
| url | str | Canonical Google Maps URL |
| google_maps_url | str | Direct Google Maps link |
| price_level | str | Price level indicator |
| image_url | str | Main image URL |
| description | str | Place description |
| photos_count | int | Number of photos |
| permanently_closed | bool | Whether permanently closed |
| temporarily_closed | bool | Whether temporarily closed |

Performance Guide

| Concurrency | Est. Throughput | Time for 10K URLs | Notes |
|---|---|---|---|
| 3 | ~1,200/hr | ~8.3 hrs | Conservative, stable |
| 5 | ~2,000/hr | ~5.0 hrs | Default |
| 10 | ~4,000/hr | ~2.5 hrs | Recommended with proxy |
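The table's numbers follow from simple arithmetic: throughput scales with concurrency, divided by the average seconds each tab spends per place. A back-of-envelope estimator (the ~9 s/place figure is an assumption that reproduces the table, not a measured constant):

```python
def estimate_hours(n_urls: int, concurrency: int, secs_per_place: float = 9.0) -> float:
    """Rough wall-clock estimate: page load + parse + polite delay per place."""
    throughput_per_hour = concurrency * 3600 / secs_per_place
    return n_urls / throughput_per_hour

# 10K URLs at the default concurrency of 5:
print(round(estimate_hours(10_000, 5), 1))  # → 5.0 (hours)
```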

Tips:

  • Use --proxy with rotating proxies for higher concurrency
  • The scraper auto-saves progress; if interrupted, just re-run and it will resume
  • For large batches in CI (e.g., GitHub Actions with 6-hour limit), split into chunks
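Splitting for CI can be done with a few lines of standard-library Python. A sketch (the chunk size and file naming are illustrative, not a package feature): each chunk file then becomes the input of its own gmaps-scraper batch job.

```python
def split_chunks(urls, chunk_size=10_000):
    """Yield successive chunks of at most chunk_size URLs."""
    for i in range(0, len(urls), chunk_size):
        yield urls[i : i + chunk_size]

# Example: write each chunk to its own input file for a separate CI job.
urls = [f"https://www.google.com/maps/search/?api=1&query=place{n}" for n in range(25)]
for idx, chunk in enumerate(split_chunks(urls, chunk_size=10)):
    with open(f"urls_chunk_{idx:03d}.txt", "w", encoding="utf-8") as out:
        out.write("\n".join(chunk) + "\n")
```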

Development

git clone https://github.com/noworneverev/google-maps-scraper.git
cd google-maps-scraper
pip install -e ".[dev]"
playwright install firefox
pytest tests/ -v

⚠️ Disclaimer

This tool is provided for educational and research purposes only. By using this software, you acknowledge and agree that:

  • Google Maps Terms of Service: Web scraping may violate Google Maps' Terms of Service. You are solely responsible for ensuring your use complies with all applicable terms, laws, and regulations.
  • No Warranty: This software is provided "as is", without warranty of any kind. The authors are not responsible for any consequences arising from the use of this tool.
  • Rate Limiting: Excessive or aggressive scraping may result in your IP being temporarily or permanently blocked by Google. Use appropriate delays and concurrency settings.
  • Data Privacy: Respect the privacy of individuals whose reviews or information may be collected. Handle all scraped data in accordance with applicable privacy laws (e.g., GDPR, CCPA).
  • Personal Responsibility: The user assumes all responsibility for how the tool is used and the data it collects.

The authors and contributors of this project do not endorse or encourage any misuse of this software.

License

MIT © Yan-Ying Liao
