# google-maps-scraper
Scrape Google Maps place details — rating, review count, address, phone, hours, coordinates, and more — without an API key.
Built with Playwright (Firefox) for reliable rendering and asyncio for high-throughput batch processing.
## Features
- 🔍 Scrape place details from any Google Maps URL or search query
- ⭐ Extract 20+ fields — rating, review count, address, phone, website, hours, coordinates, category, and more
- 📝 Review scraping — extract individual user reviews with ratings and text
- 🚀 Async batch processing — configurable concurrency for scraping thousands of URLs
- 💾 Crash recovery — auto-save with resume support; pick up where you left off
- 🌍 Multi-language — supports any Google Maps locale (`en`, `ja`, `zh-TW`, `ko`, ...)
- 🔎 Smart search handling — auto-clicks the first search result when a query returns multiple matches
- 🤖 Headless-ready — runs perfectly in CI/CD and headless environments
- 📦 CLI + Python API — use from the command line or import as a library
## Installation

```bash
pip install google-maps-scraper
playwright install firefox
```
Note: If running on a server without a GUI, use `playwright install firefox --with-deps` to install browser dependencies.
### Optional: Stealth Mode

For better anti-detection, install `playwright-stealth`:

```bash
pip install google-maps-scraper[stealth]
```
## Quick Start

### CLI

```bash
# Scrape a single place
gmaps-scraper scrape "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower"

# Scrape with language setting
gmaps-scraper scrape "https://www.google.com/maps/search/?api=1&query=東京タワー" --lang ja

# Batch scrape from CSV
gmaps-scraper batch urls.csv -o results.json --concurrency 5

# Batch scrape to CSV
gmaps-scraper batch urls.csv -o results.csv --lang zh-TW --concurrency 3
```
### Python API (Async)

```python
import asyncio
from gmaps_scraper import GoogleMapsScraper, ScrapeConfig

async def main():
    config = ScrapeConfig(language="en", headless=True)
    async with GoogleMapsScraper(config) as scraper:
        result = await scraper.scrape(
            "https://www.google.com/maps/search/?api=1&query=Machu+Picchu"
        )
        if result.success:
            print(f"Name: {result.place.name}")
            print(f"Rating: {result.place.rating}")
            print(f"Reviews: {result.place.review_count}")
            print(f"Address: {result.place.address}")

asyncio.run(main())
```
### Python API (Sync)

```python
from gmaps_scraper import scrape_place

result = scrape_place("https://www.google.com/maps/search/?api=1&query=Colosseum")
print(result.place.name, result.place.rating)
```
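The sync helper presumably just drives the async scraper to completion on a fresh event loop via `asyncio.run`; a minimal sketch of that wrapper pattern — with an illustrative stand-in coroutine, not the library's real internals — looks like this:

```python
import asyncio

async def _scrape_async(url: str) -> dict:
    # Stand-in for the real async scrape coroutine; returns a result-like dict.
    return {"input_url": url, "success": True}

def scrape_place_blocking(url: str) -> dict:
    # asyncio.run creates an event loop, runs the coroutine, and closes the
    # loop, so callers never have to touch asyncio themselves.
    return asyncio.run(_scrape_async(url))

result = scrape_place_blocking("https://www.google.com/maps/search/?api=1&query=Colosseum")
print(result["success"])  # True
```

The trade-off of this design is that each sync call pays the cost of a new event loop, which is why batch work should go through the async API instead.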
## Batch Processing

```python
import asyncio
from gmaps_scraper import scrape_batch, ScrapeConfig

async def main():
    urls = open("urls.txt").read().splitlines()
    config = ScrapeConfig(
        concurrency=5,
        delay_min=1.0,
        delay_max=3.0,
        headless=True,
        save_interval=50,
    )
    results = await scrape_batch(
        urls=urls,
        config=config,
        output_path="results.json",
        resume=True,  # Skip already-scraped URLs on restart
    )
    success = sum(1 for r in results if r.success)
    print(f"Done: {success}/{len(results)} succeeded")

asyncio.run(main())
```
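The `resume=True` behavior can be approximated by hand: read whatever is already in the output file and drop those inputs from the work list. A rough sketch of that filtering step, assuming each saved record carries the `input_url` and `success` fields from the JSON output (the `maps.example` URLs are placeholders):

```python
import json
from pathlib import Path

def pending_urls(urls: list[str], output_path: str) -> list[str]:
    # Collect input_urls already recorded as successful in a previous run.
    done: set[str] = set()
    path = Path(output_path)
    if path.exists():
        for record in json.loads(path.read_text()):
            if record.get("success"):
                done.add(record["input_url"])
    # Keep only URLs that still need scraping; failures are retried.
    return [u for u in urls if u not in done]

# Example: one of three URLs already succeeded in a previous run.
Path("results.json").write_text(json.dumps([
    {"input_url": "https://maps.example/a", "success": True},
    {"input_url": "https://maps.example/b", "success": False},
]))
todo = pending_urls(
    ["https://maps.example/a", "https://maps.example/b", "https://maps.example/c"],
    "results.json",
)
print(todo)  # ['https://maps.example/b', 'https://maps.example/c']
```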
## CLI Reference

### `gmaps-scraper scrape <url>`

Scrape a single Google Maps URL and output JSON.

| Option | Default | Description |
|---|---|---|
| `--lang` | — | Language code (e.g., `en`, `ja`, `zh-TW`) |
| `--no-headless` | — | Show the browser window (for debugging) |
| `--reviews` | — | Also scrape individual reviews |
| `--max-reviews` | `20` | Max reviews to extract |
| `-v, --verbose` | — | Enable debug logging |
### `gmaps-scraper batch <input> -o <output>`

Batch scrape URLs from a file. Output format is inferred from the file extension (`.json` or `.csv`).

| Option | Default | Description |
|---|---|---|
| `-o, --output` | required | Output file path (`.json` or `.csv`) |
| `--concurrency` | `5` | Parallel browser tabs |
| `--lang` | — | Language code |
| `--proxy` | — | Proxy server URL (e.g., `http://proxy:8080`) |
| `--delay-min` | `2.0` | Min delay between requests (seconds) |
| `--delay-max` | `5.0` | Max delay between requests (seconds) |
| `--no-resume` | — | Start fresh; don't resume from existing output |
| `--reviews` | — | Also scrape individual reviews |
| `--max-reviews` | `20` | Max reviews per place |
| `--save-interval` | `50` | Auto-save every N results |
## Input File Format

**CSV** — the scraper looks for a column named `url`, `URL`, or `link`:

```csv
url,name
https://www.google.com/maps/search/?api=1&query=Eiffel+Tower,Eiffel Tower
https://www.google.com/maps/search/?api=1&query=Colosseum,Colosseum
```

**Text** — one URL per line:

```
https://www.google.com/maps/search/?api=1&query=Eiffel+Tower
https://www.google.com/maps/search/?api=1&query=Colosseum
```
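The search URLs in these examples all follow Google's universal `maps/search/?api=1&query=` pattern, so an input file can be generated from plain place names with a few lines of standard-library Python:

```python
from urllib.parse import quote_plus

def search_url(place_name: str) -> str:
    # Percent-encode the name (spaces become '+') and append it as the query.
    return "https://www.google.com/maps/search/?api=1&query=" + quote_plus(place_name)

names = ["Eiffel Tower", "Colosseum"]
with open("urls.txt", "w") as f:
    for name in names:
        f.write(search_url(name) + "\n")

print(search_url("Eiffel Tower"))
# https://www.google.com/maps/search/?api=1&query=Eiffel+Tower
```

`quote_plus` also handles non-ASCII names (e.g., 東京タワー), which keeps the generated file compatible with the `--lang` examples above.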
## Output Format

### JSON

```json
[
  {
    "input_url": "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower",
    "success": true,
    "place": {
      "name": "Eiffel Tower",
      "rating": 4.7,
      "review_count": 344856,
      "address": "Av. Gustave Eiffel, 75007 Paris, France",
      "phone": "+33 8 92 70 12 39",
      "website": "https://www.toureiffel.paris/",
      "category": "Historical landmark",
      "latitude": 48.8583701,
      "longitude": 2.2944813,
      "hours": ["Monday 09:30–23:45", "..."],
      "google_maps_url": "https://www.google.com/maps/place/...",
      "permanently_closed": false
    },
    "reviews": [],
    "scraped_at": "2025-03-06T12:00:00"
  }
]
```
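Because the output is plain JSON, post-processing needs no library support; for instance, filtering successful records and projecting out a few fields (using a trimmed-down record in the same shape):

```python
import json

raw = """
[
  {"input_url": "https://www.google.com/maps/search/?api=1&query=Eiffel+Tower",
   "success": true,
   "place": {"name": "Eiffel Tower", "rating": 4.7, "review_count": 344856}},
  {"input_url": "https://www.google.com/maps/search/?api=1&query=Nowhere",
   "success": false,
   "place": null}
]
"""

results = json.loads(raw)
# Keep only successful scrapes and pull out the fields we care about;
# failed records have place = null, so the success check must come first.
places = [
    (r["place"]["name"], r["place"]["rating"])
    for r in results
    if r["success"]
]
print(places)  # [('Eiffel Tower', 4.7)]
```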
### CSV
Flat structure with all place fields as columns. Ideal for data analysis.
## Extracted Fields

| Field | Type | Description |
|---|---|---|
| `name` | `str` | Place name |
| `rating` | `float` | Star rating (1.0–5.0) |
| `review_count` | `int` | Total number of reviews |
| `address` | `str` | Full address |
| `phone` | `str` | Phone number |
| `website` | `str` | Website URL |
| `category` | `str` | Place category (e.g., "Restaurant") |
| `hours` | `list[str]` | Opening hours per day |
| `latitude` | `float` | Latitude coordinate |
| `longitude` | `float` | Longitude coordinate |
| `plus_code` | `str` | Google Plus Code |
| `place_id` | `str` | Google Maps Place ID |
| `url` | `str` | Canonical Google Maps URL |
| `google_maps_url` | `str` | Direct Google Maps link |
| `price_level` | `str` | Price level indicator |
| `image_url` | `str` | Main image URL |
| `description` | `str` | Place description |
| `photos_count` | `int` | Number of photos |
| `permanently_closed` | `bool` | Whether permanently closed |
| `temporarily_closed` | `bool` | Whether temporarily closed |
## Performance Guide
| Concurrency | Est. Throughput | Time for 10K URLs | Notes |
|---|---|---|---|
| 3 | ~1,200/hr | ~8.3 hrs | Conservative, stable |
| 5 | ~2,000/hr | ~5.0 hrs | Default |
| 10 | ~4,000/hr | ~2.5 hrs | Recommended with proxy |
Tips:

- Use `--proxy` with rotating proxies for higher concurrency
- The scraper auto-saves progress; if interrupted, just re-run and it will resume
- For large batches in CI (e.g., GitHub Actions with a 6-hour limit), split into chunks
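Splitting a large URL list into CI-sized chunks is a few lines of slicing; this sketch writes `urls_000.txt`, `urls_001.txt`, ... from one big input file (the file names and `maps.example` URLs are arbitrary choices for illustration):

```python
from pathlib import Path

def split_into_chunks(input_path: str, chunk_size: int = 5000) -> list[str]:
    # Read all URLs, then write consecutive slices to numbered files.
    urls = Path(input_path).read_text().splitlines()
    out_files = []
    for i in range(0, len(urls), chunk_size):
        name = f"urls_{i // chunk_size:03d}.txt"
        Path(name).write_text("\n".join(urls[i:i + chunk_size]) + "\n")
        out_files.append(name)
    return out_files

# Example: 12 URLs split into chunks of 5 -> 3 files.
Path("urls.txt").write_text("\n".join(f"https://maps.example/{i}" for i in range(12)) + "\n")
print(split_into_chunks("urls.txt", chunk_size=5))
# ['urls_000.txt', 'urls_001.txt', 'urls_002.txt']
```

Each chunk file can then be fed to a separate `gmaps-scraper batch` job.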
## Development

```bash
git clone https://github.com/noworneverev/google-maps-scraper.git
cd google-maps-scraper
pip install -e ".[dev]"
playwright install firefox
pytest tests/ -v
```
## ⚠️ Disclaimer
This tool is provided for educational and research purposes only. By using this software, you acknowledge and agree that:
- **Google Maps Terms of Service**: Web scraping may violate Google Maps' Terms of Service. You are solely responsible for ensuring your use complies with all applicable terms, laws, and regulations.
- **No Warranty**: This software is provided "as is", without warranty of any kind. The authors are not responsible for any consequences arising from the use of this tool.
- **Rate Limiting**: Excessive or aggressive scraping may result in your IP being temporarily or permanently blocked by Google. Use appropriate delays and concurrency settings.
- **Data Privacy**: Respect the privacy of individuals whose reviews or information may be collected. Handle all scraped data in accordance with applicable privacy laws (e.g., GDPR, CCPA).
- **Personal Responsibility**: The user assumes all responsibility for how the tool is used and the data it collects.
The authors and contributors of this project do not endorse or encourage any misuse of this software.
## License