Python SDK for Agent Berlin - AI-powered SEO and AEO automation
Agent Berlin Python SDK
Agent Berlin Python SDK provides AI-powered SEO automation.
Installation
pip install agentberlin
Quick Start
from agentberlin import AgentBerlin
client = AgentBerlin()
Available APIs
- ga4: query() - Query GA4 analytics with custom metrics, dimensions, and filters
- pages: list(), search(), get() - List pages, search pages, and get detailed page info
- keywords: list(), search(), list_clusters(), get_cluster_keywords() - List/search keywords, explore clusters
- brand: get_profile(), update_profile(), get_stats() - Brand profile management and aggregate stats
- google_search: web(), news(), images(), videos(), shopping(), ai_mode() - Google Search with all result types
- bing_search: web(), news(), images(), videos(), shopping(), copilot() - Bing Search with all result types
- google_maps: search(), reviews() - Google Maps places and reviews
- google_trends: interest_over_time(), related_queries() - Google Trends data
- llm: search(), complete() - AI-powered web search and text completion
- files: upload() - Upload files to cloud storage
- gsc: query(), get_site(), list_sitemaps(), get_sitemap(), inspect_url() - Search analytics, sitemaps, URL inspection
- reddit: get_subreddit_posts(), search(), get_post(), get_post_comments(), get_subreddit_info() - Posts, comments, and subreddit info
- bing_webmaster: get_query_stats(), get_page_stats(), get_traffic_stats(), get_crawl_stats(), get_site() - Bing Webmaster Tools analytics
- backlink_marketplace: list_domains() - Find guest posting and link building opportunities
- amplitude: get(), list_events(), list_cohorts(), get_funnel(), get_retention() - Product analytics (sessions, users, events, funnels, retention)
- sheets: create(), write(), read(), append() - Create, read, write, and append data to Google Sheets
- nano_banana: generate() - AI-powered image generation using Gemini
- pagespeed: analyze() - Google PageSpeed Insights Lighthouse audit (performance score + Core Web Vitals)
API Reference
ga4
Direct access to Google Analytics 4 Data API for flexible reporting with custom metrics, dimensions, and filters.
client.ga4.query()
Query GA4 analytics data using the GA4 Data API.
This method maps directly to Google Analytics 4's runReport API,
providing full flexibility for analytics queries.
Signature:
result = client.ga4.query(
metrics=["totalUsers", "sessions"], # Required: list of metric names
start_date="2024-01-01", # Required: YYYY-MM-DD format
end_date="2024-01-31", # Required: YYYY-MM-DD format
dimensions=["country", "date"], # Optional: breakdown dimensions
dimension_filter={ # Optional: filter results
"field_name": "sessionDefaultChannelGroup",
"string_filter": {"match_type": "EXACT", "value": "Organic Search"}
},
order_bys=[{"metric": "totalUsers", "desc": True}], # Optional: sorting
limit=100, # Optional: max rows
offset=0, # Optional: pagination offset
date_ranges=[ # Optional: date comparison
{"start_date": "2024-01-01", "end_date": "2024-01-31", "name": "current"},
{"start_date": "2023-12-01", "end_date": "2023-12-31", "name": "previous"}
],
currency_code="USD" # Optional: for monetary metrics
)
Args:
- metrics: List of metric names to retrieve (required). Examples: "totalUsers", "sessions", "ecommercePurchases", "screenPageViews"
- start_date: Start date in YYYY-MM-DD format (required)
- end_date: End date in YYYY-MM-DD format (required)
- dimensions: Optional list of dimension names. Examples: "country", "sessionDefaultChannelGroup", "date", "deviceCategory"
- dimension_filter: Optional filter to restrict results:
- Simple filter: {"field_name": "country", "string_filter": {"match_type": "EXACT", "value": "US"}}
- In-list filter: {"field_name": "country", "in_list_filter": {"values": ["US", "CA"]}}
- AND group: {"and_group": {"expressions": [filter1, filter2]}}
- OR group: {"or_group": {"expressions": [filter1, filter2]}}
- NOT: {"not_expression": filter}
- order_bys: Optional list of ordering. Example: [{"metric": "totalUsers", "desc": True}]
- limit: Maximum number of rows to return
- offset: Number of rows to skip (for pagination)
- date_ranges: Optional list of date ranges for comparison
- currency_code: Optional currency code for monetary metrics (e.g., "USD")
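The nested filter shapes above compose mechanically, so small builder functions can keep complex filters readable. A minimal sketch (the helper names are illustrative, not part of the SDK; they only build the documented dictionary shapes):

```python
# Illustrative helpers for composing GA4 dimension_filter dictionaries.
# Not part of the SDK - they just assemble the nested dict shapes above.

def string_filter(field, value, match_type="EXACT"):
    """Simple filter: match one dimension against a single value."""
    return {"field_name": field,
            "string_filter": {"match_type": match_type, "value": value}}

def in_list(field, values):
    """In-list filter: match one dimension against any of several values."""
    return {"field_name": field, "in_list_filter": {"values": list(values)}}

def and_group(*filters):
    """AND group: every sub-filter must match."""
    return {"and_group": {"expressions": list(filters)}}

# Organic Search sessions from the US or Canada:
organic_north_america = and_group(
    string_filter("sessionDefaultChannelGroup", "Organic Search"),
    in_list("country", ["US", "CA"]),
)
```

The resulting dictionary can be passed directly as dimension_filter=organic_north_america.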
Returns GA4QueryResponse with:
- rows - List of GA4Row objects
- row_count - Total number of rows returned
- totals - Aggregate totals for all metrics
Each GA4Row has:
- dimensions - Dict mapping dimension names to values
- metrics - Dict mapping metric names to values
Note: Item-scoped dimensions (e.g., itemName, itemId, itemBrand) cannot be combined with event-scoped metrics (e.g., totalUsers, sessions). Use item-scoped metrics (itemsViewed, itemRevenue) instead.
Example:
# Get organic traffic by country
result = client.ga4.query(
metrics=["totalUsers", "sessions"],
dimensions=["country"],
start_date="2024-01-01",
end_date="2024-01-31",
dimension_filter={
"field_name": "sessionDefaultChannelGroup",
"string_filter": {"match_type": "EXACT", "value": "Organic Search"}
},
order_bys=[{"metric": "totalUsers", "desc": True}],
limit=10
)
for row in result.rows:
print(f"{row.dimensions['country']}: {row.metrics['totalUsers']} users")
pages
A powerful search engine backed by your own and your competitors' crawled pages.
client.pages.list()
List pages with pagination and optional filtering.
Signature:
result = client.pages.list(
limit=100, # Max pages (default: 100, max: 1000)
offset=0, # Pagination offset (default: 0)
domain="example.com", # Optional: filter by domain
own_only=True # Optional: exclude competitor pages
)
Args:
- limit: Maximum pages to return (default: 100, max: 1000)
- offset: Pagination offset (default: 0)
- domain: Filter by domain (e.g., "competitor.com")
- own_only: If True, only return your own pages
Returns PageListResponse with:
- pages - List of PageListItem objects
- total - Number of pages in response
- limit - Limit used
- offset - Offset used
Each PageListItem has:
- url - Page URL
- title - Page title
- meta_description - Meta description (optional)
- h1s - List of H1 headings (empty if none)
- domain - Domain name (optional)
client.pages.search()
Search for similar pages with multiple input modes.
This unified method supports three input modes (exactly one required):
- query: Simple text query for semantic search across all pages
- url: Find similar pages to an existing indexed page
- content: Find similar pages to provided raw content
Additionally, contains can filter by URL or content pattern (regex supported):
- Standalone: Returns all pages matching the pattern, sorted by URL
- Combined: Filters semantic search results by the pattern
Note: For chunk-level analysis, use pages.get(url, detailed=True) instead.
Signature:
# Query mode - semantic search
results = client.pages.search(
query="SEO best practices", # Text query for semantic search
count=10, # Max results (default: 10)
offset=0, # Pagination offset (default: 0)
similarity_threshold=0.60 # Min similarity 0-1 (default: 0.60)
)
# URL mode - find similar pages to an indexed page
results = client.pages.search(
url="https://example.com/blog/guide", # URL of indexed page
count=10
)
# Content mode - find similar pages to raw content
results = client.pages.search(
content="This is my page content about SEO..."
)
# Query mode with filters
results = client.pages.search(
query="SEO guide",
domain="competitor.com", # Filter by domain
status_code="200", # Filter by HTTP status
topic="SEO", # Filter by topic name
page_type="pillar" # Filter by page type
)
# Contains-only mode - filter by URL pattern
results = client.pages.search(
contains="/blog/.*" # Regex pattern for URL or content
)
# Combine semantic search with contains filter
results = client.pages.search(
query="pricing",
contains="enterprise" # Also filter by pattern in URL/content
)
Args:
- query: Text query for semantic search (mutually exclusive with url/content)
- url: URL of an indexed page (mutually exclusive with query/content)
- content: Raw content string (mutually exclusive with query/url)
- contains: Filter by URL or content pattern (regex supported, case-insensitive). Matches if page URL or content contains the pattern. Can be used alone or with query/url/content.
- count: Max recommendations (default: 10, max: 50)
- offset: Pagination offset (default: 0)
- similarity_threshold: Min similarity 0-1 (default: 0.60)
- domain: Filter by domain (e.g., "competitor.com")
- own_only: If True, filter to only your own pages (excludes competitors)
- status_code: Filter by HTTP status code ("200", "404", "error", "redirect", "success")
- topic: Filter by topic name
- page_type: Filter by page type ("pillar" or "landing")
Returns PageSearchResponse with:
- source_page_url - URL of source page (only in URL mode)
- source_page_type - "pillar", "landing", or None
- source_assigned_topic - Topic if page is assigned as pillar/landing
- similar_pages - List of SimilarPage objects (always includes evidence and topic_info)
Each SimilarPage has:
- target_page_url - URL of recommended page
- similarity_score - Similarity score 0-1
- match_type - "page_to_page"
- page_type - "pillar", "landing", or None
- assigned_topic - Topic if target is assigned as pillar/landing
- title - Page title
- h1s - List of H1 headings (empty if none)
- meta_description - Meta description
- evidence - List of Evidence objects (h_path, text) for top chunks
- domain - Page domain
- status_code - HTTP status code
- topic_info - TopicInfo object (topics, topic_scores, page_type, assigned_topic)
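A common way to digest similar_pages results is to group them by assigned topic. A sketch over plain dicts mirroring the SimilarPage fields above (the SDK returns objects, so attribute access would replace these dict lookups; the sample data is made up):

```python
from collections import defaultdict

def group_by_topic(similar_pages):
    """Bucket similar pages by assigned topic (unassigned pages together)."""
    buckets = defaultdict(list)
    for page in similar_pages:
        topic = page.get("assigned_topic") or "unassigned"
        buckets[topic].append((page["target_page_url"], page["similarity_score"]))
    return dict(buckets)

pages = [
    {"target_page_url": "https://example.com/a", "similarity_score": 0.91,
     "assigned_topic": "SEO"},
    {"target_page_url": "https://example.com/b", "similarity_score": 0.74,
     "assigned_topic": None},
]
print(group_by_topic(pages))
```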
client.pages.get()
Get detailed page information.
Signature:
# Basic page details (metadata only)
page = client.pages.get(
url="https://example.com/blog/article"
)
# Include page content (truncated to 500 chars)
page = client.pages.get(
url="https://example.com/blog/article",
content_length=500
)
# With similarity analysis (includes similar_pages and similar_chunks)
page = client.pages.get(
url="https://example.com/blog/article",
detailed=True # Compute similar pages and chunks (expensive, 1-10s)
)
Args:
- url: The page URL to look up
- content_length: Max characters of content to return (default: 0, no content). There is no upper limit - pass a large value (e.g., 999999) to get full content. Note: pages can have large content (thousands of words), so large values will significantly increase response size.
- detailed: If True, compute and return similar_pages and similar_chunks (expensive)
Returns PageDetailResponse with:
- url - Page URL
- title - Page title
- meta_description - Meta description
- h1s - List of H1 headings (empty if none)
- domain - Domain name
- links.inlinks - List of incoming links (PageLink objects)
- links.outlinks - List of outgoing links (PageLink objects)
- topic_info.topics - List of topic names
- topic_info.topic_scores - List of topic relevance scores
- topic_info.page_type - "pillar" or "landing"
- topic_info.assigned_topic - Primary assigned topic
- content - Page content truncated to content_length (only returned if content_length > 0)
- content_length - Total content length in characters
- similar_pages - List of SimilarPage objects (only when detailed=True)
- similar_chunks - List of SimilarChunkSet objects (only when detailed=True)
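One use of the links fields is spotting pages with no incoming internal links. A sketch that flags such orphans, over plain dicts standing in for PageDetailResponse objects (an assumption for illustration; with the SDK you would read page.links.inlinks instead):

```python
def orphan_pages(pages):
    """Return URLs of pages that have no incoming internal links."""
    return [p["url"] for p in pages if not p["links"]["inlinks"]]

details = [
    {"url": "https://example.com/a",
     "links": {"inlinks": [], "outlinks": ["https://example.com/b"]}},
    {"url": "https://example.com/b",
     "links": {"inlinks": ["https://example.com/a"], "outlinks": []}},
]
print(orphan_pages(details))  # ['https://example.com/a']
```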
keywords
Search for keywords and explore keyword clusters (powered by SEMrush & DataForSEO).
client.keywords.list()
List all keywords with pagination.
Signature:
result = client.keywords.list(
limit=100, # Max keywords (default: 100, max: 1000)
offset=0 # Pagination offset (default: 0)
)
Returns KeywordListResponse with:
- keywords - List of KeywordResult objects
- total - Total number of keywords
Each KeywordResult has:
- keyword - The keyword text
- intent - Search intent (optional)
- locations - List of LocationMetrics objects with location-specific data
Each LocationMetrics has:
- code - Location code (e.g., "us", "uk", "de")
- volume - Monthly search volume (optional)
- difficulty - Difficulty score 0-100 (optional)
- cpc - Cost per click (optional)
- position - SERP position for your domain (optional)
Example:
# Get first 100 keywords
result = client.keywords.list()
for kw in result.keywords:
# Access location-specific metrics
for loc in kw.locations:
print(f"{kw.keyword} ({loc.code}): volume={loc.volume}, difficulty={loc.difficulty}")
# Paginate through all keywords
all_keywords = []
offset = 0
while True:
result = client.keywords.list(limit=1000, offset=offset)
all_keywords.extend(result.keywords)
if len(result.keywords) < 1000:
break
offset += 1000
client.keywords.search()
Search for keywords.
Signature:
keywords = client.keywords.search(
query="digital marketing", # Search query
limit=10, # Max results (default: 10)
match_type="semantic" # "semantic", "exact", or "contains" (default: "semantic")
)
Parameters:
- query - Search query string (required)
- limit - Maximum results to return (default: 10, max: 50)
- match_type - Search mode (default: "semantic"):
- "semantic" - Hybrid AI search using embeddings and BM25
- "exact" - Case-insensitive exact string match
- "contains" - Case-insensitive substring/contains match
Returns KeywordSearchResponse with keywords list and total count. Each keyword has:
- keyword - The keyword text
- intent - Search intent: "informational", "commercial", "transactional", "navigational"
- locations - List of LocationMetrics with location-specific data:
- code - Location code (e.g., "us", "uk")
- volume - Monthly search volume
- difficulty - Difficulty score (0-100)
- cpc - Cost per click
- position - SERP position (optional)
Examples:
# Semantic search (default) - finds related keywords
results = client.keywords.search(query="running shoes")
# Exact match - finds only "running shoes" exactly
results = client.keywords.search(query="running shoes", match_type="exact")
# Substring match - finds keywords containing "running"
results = client.keywords.search(query="running", match_type="contains")
client.keywords.list_clusters()
List all keyword clusters with representative keywords.
Keywords are grouped into clusters using HDBSCAN clustering based on semantic similarity. This method returns a summary of all clusters for topic exploration.
Signature:
clusters = client.keywords.list_clusters(
representative_count=3 # Keywords per cluster (default: 3, max: 10)
)
Returns ClusterListResponse with:
- clusters - List of ClusterSummary objects
- noise_count - Number of unclustered keywords (cluster_id=-1)
- total_keywords - Total keywords in the index
- cluster_count - Number of clusters (excluding noise)
Each ClusterSummary has:
- cluster_id - Unique cluster identifier
- size - Number of keywords in cluster
- representative_keywords - Top keywords by volume
Example:
clusters = client.keywords.list_clusters()
for cluster in clusters.clusters:
print(f"Cluster {cluster.cluster_id}: {cluster.size} keywords")
print(f" Topics: {', '.join(cluster.representative_keywords)}")
client.keywords.get_cluster_keywords()
Get keywords belonging to a specific cluster.
Use this to explore keywords within a cluster. Supports pagination for large clusters. Use cluster_id=-1 to get noise (unclustered) keywords.
Signature:
result = client.keywords.get_cluster_keywords(
cluster_id=0, # Required: cluster ID (-1 for noise)
limit=50, # Max keywords (default: 50, max: 1000)
offset=0 # Pagination offset (default: 0)
)
Returns ClusterKeywordsResponse with:
- keywords - List of ClusterKeywordResult objects
- total - Total keywords in this cluster
- cluster_id - The requested cluster ID
- limit - Limit used
- offset - Offset used
Each ClusterKeywordResult has:
- keyword - The keyword text
- cluster_id - Cluster ID
- intent - Search intent (optional)
- locations - List of LocationMetrics with location-specific data:
- code - Location code (e.g., "us", "uk")
- volume - Monthly search volume (optional)
- difficulty - Difficulty score 0-100 (optional)
- cpc - Cost per click (optional)
- position - SERP position (optional)
Example:
# Get top keywords from cluster 0
result = client.keywords.get_cluster_keywords(0, limit=20)
for kw in result.keywords:
for loc in kw.locations:
print(f"{kw.keyword} ({loc.code}): volume={loc.volume}")
# Get noise keywords (unclustered)
noise = client.keywords.get_cluster_keywords(-1, limit=100)
brand
Your brand profile including company info, competitors, industries, and target markets.
client.brand.get_profile()
Get the complete brand profile including all company information, competitors, target industries, business models, and geographic scope.
Signature:
profile = client.brand.get_profile()
Returns BrandProfileResponse with:
- domain - Domain name
- name - Brand name
- context - Brand context/description
- search_analysis_context - Search analysis context
- domain_authority - Domain authority score
- competitors - List of competitor domains
- industries - List of industries
- business_models - List of business models
- company_size - Company size
- target_customer_segments - Target segments
- geographies - Target geographies
- topics - List of Topic objects (value, pillar_page_url, landing_page_url)
- sitemaps - Sitemap URLs
- profile_urls - Profile URLs
Example:
profile = client.brand.get_profile()
print(f"Domain: {profile.domain}")
print(f"Topics: {[(t.value, t.pillar_page_url) for t in profile.topics]}")
client.brand.update_profile()
Update brand profile fields.
Important behavior:
- Only provided fields are updated
- Fields set to None/undefined are IGNORED (not cleared)
- To clear a list field, pass an empty list []
- To clear a string field, pass an empty string ""
Signature:
result = client.brand.update_profile(
name="Project Name", # Optional: Project name
context="Business description...", # Optional: Business context
competitors=["competitor1.com"], # Optional: Competitor domains
industries=["SaaS", "Technology"], # Optional: Industries
business_models=["B2B", "Enterprise"], # Optional: Business models
company_size="startup", # Optional: solo, early_startup, startup, smb, mid_market, enterprise
target_customer_segments=["Enterprise"], # Optional: Target segments
geographies=["US", "EU"], # Optional: Geographic regions
topics=["SEO", "Content Marketing"] # Optional: Topic values
)
Returns BrandProfileUpdateResponse with:
- success - Boolean indicating success
- profile - Updated BrandProfileResponse with all fields
Examples:
# Update multiple fields at once
result = client.brand.update_profile(
context="We are a B2B SaaS company...",
competitors=["competitor1.com", "competitor2.com"],
topics=["SEO", "Content Marketing", "Analytics"]
)
# Only update one field (others unchanged)
result = client.brand.update_profile(industries=["SaaS", "MarTech"])
# Clear a list field
result = client.brand.update_profile(competitors=[])
client.brand.get_stats()
Get precomputed aggregate statistics for the project's brand.
These stats are generated at build time from the crawl (ScreamingFrog) and keyword (SEMrush / DataForSEO) data, and served from disk — no worker machine is provisioned for the call, making it cheap to poll for dashboards.
Signature:
stats = client.brand.get_stats()
Returns BrandStatsResponse. Every top-level section is optional — when the underlying source data was missing at build time, that section is absent.
Top-level fields:
- brand_name - Brand identifier
- generated_at - Unix timestamp when the stats were produced
- pages - PagesStats (crawl health, content structure, URL shape, redirects, performance)
- links - LinksStats (in/out link totals, orphan/dead-end pages, top-linked pages, broken outlinks, classification & type counts)
- keywords - KeywordsStats (volume & CPC rollups, difficulty buckets, SERP ranking buckets, intent distribution)
- cross - CrossStats (keyword-to-page matches, top landing pages, topic coverage)
Example:
stats = client.brand.get_stats()
if stats.pages and stats.pages.crawl_health:
sc = stats.pages.crawl_health.status_class_counts
print(f"4xx pages: {sc.four_xx}, 5xx pages: {sc.five_xx}")
if stats.keywords and stats.keywords.ranking:
print(f"Top-3 ranking keywords: {stats.keywords.ranking.top_3_count}")
if stats.cross:
for lp in stats.cross.top_landing_pages[:5]:
print(f"{lp.url}: {lp.keyword_count} kws, {lp.total_volume} vol")
google_search
Search the web using Google - includes web results, news, images, videos, shopping, and AI features.
web() surfaces classic SERP features. In addition to organic results (with sitelinks),
the web response exposes — when Google actually returns them for the query — the featured
snippet (answer_box), People Also Ask (related_questions), knowledge panel
(knowledge_graph), local pack (local_results), inline images / videos, top stories,
inline AI Overview (ai_overview), and related_searches. Absent surfaces are omitted.
ai_mode() vs inline ai_overview. web() returns the inline AI Overview block when
Google includes one on the normal SERP. ai_mode() is a separate endpoint that queries
Google AI Mode directly and always returns an answer when AI Mode is available for the
query. Use ai_mode() when you want an AI answer as the primary signal; use web().ai_overview
when you want to know whether organic searchers see an AI block above the results.
Vertical methods (news(), images(), videos(), shopping()) surface only the
result list for their own surface — they do not carry SERP features.
client.google_search.web()
Search the web using Google. Returns organic results plus classic SERP surfaces when Google includes them for the query.
Signature:
results = client.google_search.web(
query="best seo tools",
max_results=10, # Max results (1-100, default: 10)
country="us", # Optional: ISO 3166-1 alpha-2 country code
language="en" # Optional: ISO 639-1 language code
)
Returns GoogleSearchWebResponse with:
- query - The search query
- results - List of GoogleSearchResult objects (organic results)
- total - Organic results count
- answer_box - GoogleSearchAnswerBox or None (featured snippet)
- related_questions - List of GoogleSearchRelatedQuestion (People Also Ask)
- knowledge_graph - GoogleSearchKnowledgeGraph or None (knowledge panel)
- local_results - List of GoogleSearchLocalResult (local pack)
- inline_images - List of GoogleSearchInlineImage (inline carousel)
- inline_videos - List of GoogleSearchInlineVideo (inline carousel)
- top_stories - List of GoogleSearchTopStory (top stories carousel)
- ai_overview - GoogleSearchAIOverview or None (inline AI Overview)
- related_searches - List of GoogleSearchRelatedSearch
- search_metadata - Optional metadata (id, status, total_time_taken)
Each organic result has: title, url, snippet, displayed_link, date, thumbnail, sitelinks (list of {title, link, snippet}).
All SERP-feature fields default to empty / None when Google did not include that
surface for the query. Detect AI Overview with if results.ai_overview:, PAA with
if results.related_questions:, etc.
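Those presence checks can be rolled up into a single summary of which surfaces appeared for a query. A sketch over a response-like dict (a stand-in for GoogleSearchWebResponse so the example is self-contained; with the SDK you would test the attributes directly):

```python
def serp_surfaces(resp):
    """List which SERP surfaces are present in a web()-style response dict."""
    flags = {
        "featured_snippet": bool(resp.get("answer_box")),
        "people_also_ask": bool(resp.get("related_questions")),
        "ai_overview": bool(resp.get("ai_overview")),
        "local_pack": bool(resp.get("local_results")),
    }
    return [name for name, present in flags.items() if present]

resp = {"answer_box": None,
        "related_questions": [{"question": "What is SEO?"}],
        "ai_overview": {"text": "..."},
        "local_results": []}
print(serp_surfaces(resp))  # ['people_also_ask', 'ai_overview']
```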
client.google_search.news()
Search Google News.
Signature:
results = client.google_search.news(
query="AI technology",
max_results=10, # Max results (1-100, default: 10)
country="us", # Optional: ISO 3166-1 alpha-2 country code
language="en" # Optional: ISO 639-1 language code
)
Returns GoogleSearchNewsResponse with:
- query - The search query
- results - List of GoogleSearchNewsResult objects
- total - Total results count
Each result has: title, url, snippet, source, date, thumbnail
client.google_search.images()
Search Google Images.
Signature:
results = client.google_search.images(
query="modern website design",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns GoogleSearchImagesResponse with:
- query - The search query
- results - List of GoogleSearchImageResult objects
- total - Total results count
Each result has: title, url, original, thumbnail, source, width, height
client.google_search.videos()
Search Google Videos.
Signature:
results = client.google_search.videos(
query="python tutorial",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns GoogleSearchVideosResponse with:
- query - The search query
- results - List of GoogleSearchVideoResult objects
- total - Total results count
Each result has: title, url, snippet, displayed_link, thumbnail, duration, platform
client.google_search.shopping()
Search Google Shopping.
Signature:
results = client.google_search.shopping(
query="wireless headphones",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns GoogleSearchShoppingResponse with:
- query - The search query
- results - List of GoogleSearchShoppingResult objects
- total - Total results count
Each result has: title, url, price, source, thumbnail, rating, reviews
client.google_search.ai_mode()
Use Google AI Mode for conversational search.
Signature:
results = client.google_search.ai_mode(
query="What is SEO?",
country="us", # Optional: ISO 3166-1 alpha-2 country code
language="en" # Optional: ISO 639-1 language code
)
Returns GoogleSearchAIModeResponse with:
- query - The search query
- answer - AI-generated answer
- sources - List of GoogleSearchResult objects (source citations)
- search_metadata - Optional metadata
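For AEO monitoring you often just want to know which domains the AI answer cites. A sketch over a sources list shaped like the fields above (plain dicts stand in for GoogleSearchResult objects; the URLs are made up):

```python
from urllib.parse import urlparse

def cited_domains(sources):
    """Unique domains cited in an AI answer, in first-seen order."""
    seen, out = set(), []
    for source in sources:
        domain = urlparse(source["url"]).netloc
        if domain and domain not in seen:
            seen.add(domain)
            out.append(domain)
    return out

sources = [{"url": "https://moz.com/learn/seo"},
           {"url": "https://moz.com/blog"},
           {"url": "https://ahrefs.com/seo"}]
print(cited_domains(sources))  # ['moz.com', 'ahrefs.com']
```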
bing_search
Search the web using Microsoft Bing - includes web results, news, images, videos, shopping, and Copilot.
web() surfaces classic SERP features. In addition to organic results (with sitelinks),
the web response exposes — when Bing actually returns them for the query — the answer box
(answer_box), People Also Ask (related_questions), and related_searches. Absent
surfaces are omitted.
copilot() is a separate endpoint that invokes Bing Copilot for a conversational AI
answer; it is independent of web() and does not overlap with web().answer_box.
Vertical methods (news(), images(), videos(), shopping()) surface only the
result list for their own surface — they do not carry SERP features.
client.bing_search.web()
Search the web using Bing. Returns organic results plus classic SERP surfaces when Bing includes them for the query.
Signature:
results = client.bing_search.web(
query="best seo tools",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns BingSearchWebResponse with:
- query - The search query
- results - List of BingSearchResult objects (organic results)
- total - Organic results count
- answer_box - BingSearchAnswerBox or None
- related_questions - List of BingSearchRelatedQuestion (People Also Ask)
- related_searches - List of BingSearchRelatedSearch
- search_metadata - Optional metadata
Each organic result has: title, url, snippet, displayed_link, date, sitelinks (list of {title, link, snippet}).
All SERP-feature fields default to empty / None when Bing did not include that surface for the query.
client.bing_search.news()
Search Bing News.
Signature:
results = client.bing_search.news(
query="AI technology",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns BingSearchNewsResponse with:
- query - The search query
- results - List of BingSearchNewsResult objects
- total - Total results count
Each result has: title, url, snippet, source, date, thumbnail
client.bing_search.images()
Search Bing Images.
Signature:
results = client.bing_search.images(
query="modern website design",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns BingSearchImagesResponse with:
- query - The search query
- results - List of BingSearchImageResult objects
- total - Total results count
Each result has: title, url, content_url, thumbnail_url, width, height
client.bing_search.videos()
Search Bing Videos.
Signature:
results = client.bing_search.videos(
query="python tutorial",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns BingSearchVideosResponse with:
- query - The search query
- results - List of BingSearchVideoResult objects
- total - Total results count
Each result has: title, url, description, thumbnail_url, duration, publisher
client.bing_search.shopping()
Search Bing Shopping.
Signature:
results = client.bing_search.shopping(
query="wireless headphones",
max_results=10, # Max results (1-100, default: 10)
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns BingSearchShoppingResponse with:
- query - The search query
- results - List of BingSearchShoppingResult objects
- total - Total results count
Each result has: title, url, price, seller, thumbnail_url
client.bing_search.copilot()
Use Bing Copilot for AI-powered answers.
Signature:
results = client.bing_search.copilot(
query="What is SEO?"
)
Returns BingSearchCopilotResponse with:
- query - The search query
- answer - AI-generated answer
- sources - List of BingSearchResult objects (source citations)
- search_metadata - Optional metadata
google_maps
Search for places and businesses on Google Maps, and retrieve reviews.
client.google_maps.search()
Search for places and businesses on Google Maps.
Signature:
results = client.google_maps.search(
query="coffee shops",
location="New York, NY", # Optional: text location (mutually exclusive with ll)
ll="@40.7128,-74.0060,15z", # Optional: GPS coords (mutually exclusive with location)
type="cafe", # Optional: place type filter
language="en", # Optional: ISO 639-1 language code
country="us" # Optional: ISO 3166-1 alpha-2 country code
)
Returns GoogleMapsSearchResponse with:
- query - The search query
- results - List of GoogleMapsPlace objects
- total - Total results count
- search_metadata - Optional metadata
Each GoogleMapsPlace has:
- title - Place name
- place_id - Google place ID
- data_id - Google data ID (use this for reviews)
- address - Full address
- phone - Phone number
- website - Website URL
- rating - Average rating (1-5)
- reviews - Number of reviews
- type - Place type
- thumbnail - Thumbnail image URL
- gps_coordinates - GPS coordinates dict
- hours - Operating hours
- price_level - Price level indicator
client.google_maps.reviews()
Get reviews for a Google Maps place.
Signature:
reviews = client.google_maps.reviews(
data_id="0x89c259a...", # Preferred: data_id from search result
place_id="ChIJ...", # Alternative: Google place_id
sort_by="most_relevant", # Optional: most_relevant, newest, highest_rating, lowest_rating
num=10, # Optional: number of reviews (default: 10)
language="en" # Optional: ISO 639-1 language code
)
Note: You must provide either data_id or place_id.
Returns GoogleMapsReviewsResponse with:
- place_id - Google place ID
- data_id - Google data ID
- title - Place name
- address - Full address
- rating - Average rating
- total_reviews - Total review count
- reviews - List of GoogleMapsReview objects
- search_metadata - Optional metadata
Each GoogleMapsReview has:
- rating - Review rating (1-5)
- text - Review text
- author - Reviewer name
- date - Review date
- likes - Number of likes
- response - Owner response (if any)
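A typical flow is search(), then pick a place, then reviews(data_id=...). The picking step can be a small pure function, sketched here over dicts mirroring the GoogleMapsPlace fields (an illustration, not SDK code; the sample places are made up):

```python
def best_place(places, min_reviews=10):
    """Pick the highest-rated place that has at least min_reviews reviews."""
    eligible = [p for p in places if (p.get("reviews") or 0) >= min_reviews]
    if not eligible:
        return None
    return max(eligible, key=lambda p: p.get("rating") or 0)

places = [
    {"title": "Cafe A", "data_id": "0xaaa", "rating": 4.9, "reviews": 3},
    {"title": "Cafe B", "data_id": "0xbbb", "rating": 4.6, "reviews": 120},
]
choice = best_place(places)
print(choice["data_id"])  # 0xbbb
```

choice["data_id"] would then be passed to client.google_maps.reviews(data_id=...).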
google_trends
Explore search interest over time and discover related queries using Google Trends.
client.google_trends.interest_over_time()
Get search interest over time for a query.
Signature:
interest = client.google_trends.interest_over_time(
query="artificial intelligence",
geo="US", # Optional: region code (empty for worldwide)
timeframe="today 12-m" # Optional: time range
)
Timeframe options:
- 'now 1-H': Past hour
- 'now 4-H': Past 4 hours
- 'now 1-d': Past day
- 'now 7-d': Past 7 days
- 'today 1-m': Past month
- 'today 3-m': Past 3 months
- 'today 12-m': Past year
- 'today 5-y': Past 5 years
- 'all': Since 2004
- '2020-01-01 2020-12-31': Custom date range
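A quick guard against typos in timeframe is validating input against the forms listed above. A sketch (the helper is illustrative; the SDK does not ship it):

```python
import re

# Preset timeframes listed in the options above
PRESET_TIMEFRAMES = {
    "now 1-H", "now 4-H", "now 1-d", "now 7-d",
    "today 1-m", "today 3-m", "today 12-m", "today 5-y", "all",
}
# Custom range: 'YYYY-MM-DD YYYY-MM-DD'
CUSTOM_RANGE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{4}-\d{2}-\d{2}$")

def valid_timeframe(tf):
    """True for a preset timeframe or a 'YYYY-MM-DD YYYY-MM-DD' custom range."""
    return tf in PRESET_TIMEFRAMES or bool(CUSTOM_RANGE.match(tf))

print(valid_timeframe("today 12-m"))             # True
print(valid_timeframe("2020-01-01 2020-12-31"))  # True
print(valid_timeframe("last week"))              # False
```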
Returns GoogleTrendsInterestOverTimeResponse with:
- query - The search query
- geo - Geographic region
- timeframe - Time range used
- timeline - List of GoogleTrendsTimelinePoint objects
- average_interest - Average interest score
- search_metadata - Optional metadata
Each GoogleTrendsTimelinePoint has:
- date - Date string
- value - Interest value (0-100, relative)
- formatted_value - Formatted display value
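A small illustrative helper (not part of the SDK) can pick the peak out of the timeline, using only the documented `date` and `value` fields:

```python
from types import SimpleNamespace  # only for the illustrative data below

def peak_interest(timeline):
    """Return (date, value) for the highest-interest point in a timeline."""
    top = max(timeline, key=lambda p: p.value)
    return top.date, top.value

# With a configured client:
# interest = client.google_trends.interest_over_time(
#     query="artificial intelligence", geo="US", timeframe="today 12-m")
# print(peak_interest(interest.timeline))

# Illustrative timeline points with the documented fields:
points = [SimpleNamespace(date="Jan 2024", value=40),
          SimpleNamespace(date="Jun 2024", value=100)]
print(peak_interest(points))  # ('Jun 2024', 100)
```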
client.google_trends.related_queries()
Get related queries for a search term.
Signature:
related = client.google_trends.related_queries(
query="machine learning",
geo="US" # Optional: region code (empty for worldwide)
)
Returns GoogleTrendsRelatedQueriesResponse with:
- query - The search query
- geo - Geographic region
- top - List of top related queries (by volume)
- rising - List of rising queries (fastest growing)
- search_metadata - Optional metadata
Each GoogleTrendsRelatedQuery has:
- query - The related query text
- value - Interest value or percentage
- link - Link to explore the query
- extracted_value - Extracted numeric value
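To rank the rising list, you might sort on the documented `extracted_value` field; the `top_rising` helper below is a sketch, not an SDK method:

```python
from types import SimpleNamespace  # only for the illustrative data below

def top_rising(related, n=5):
    """Rank rising queries by their extracted numeric growth value."""
    ranked = sorted(related, key=lambda q: q.extracted_value or 0, reverse=True)
    return [q.query for q in ranked[:n]]

# With a configured client:
# related = client.google_trends.related_queries(query="machine learning", geo="US")
# print(top_rising(related.rising))

rising = [SimpleNamespace(query="ml ops", extracted_value=250),
          SimpleNamespace(query="ml course", extracted_value=900)]
print(top_rising(rising))  # ['ml course', 'ml ops']
```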
llm
AI-powered web search and text completion using Claude, GPT, Gemini, or Perplexity.
IMPORTANT - Usage Restriction: Only use these methods when the user has explicitly requested LLM-based search or completion. Do NOT use these methods proactively. Almost always, what you're trying to achieve can be done using sub-agents instead, which is the preferred approach.
client.llm.search()
Perform an LLM-powered web search with citations.
IMPORTANT: Only use this method when the user explicitly requests LLM-powered search. For most tasks, prefer using sub-agents instead of calling this API directly.
Signature:
result = client.llm.search(
model="sonar", # Required: sonar, sonar-pro, gpt-5-mini, gemini-3-flash, etc.
user_query="What is SEO?", # Required: search query
system_prompt="Be concise.", # Optional: system prompt
max_tokens=1000 # Optional: max response length
)
Supported models:
- Perplexity: sonar, sonar-pro
- OpenAI: gpt-5-mini, gpt-5.4
- Gemini: gemini-3-flash, gemini-3-pro-preview
Returns LLMSearchResponse with:
- id - Response identifier
- model - Model used
- content - Generated response text
- citations - List of LLMSearchCitation(url, title, text)
- search_results - List of LLMSearchResult(title, url, snippet) - Perplexity only
- usage.input_tokens, usage.output_tokens, usage.total_tokens
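If you do need to render a search answer with its sources, a formatting helper like the one below (illustrative, not part of the SDK) uses only the documented `content` and citation fields:

```python
from types import SimpleNamespace  # only for the illustrative data below

def format_with_sources(content, citations):
    """Render an LLM answer followed by a numbered source list."""
    lines = [content, "", "Sources:"]
    for i, c in enumerate(citations, 1):
        lines.append(f"[{i}] {c.title} - {c.url}")
    return "\n".join(lines)

# With a configured client (only when LLM search was explicitly requested):
# result = client.llm.search(model="sonar", user_query="What is SEO?")
# print(format_with_sources(result.content, result.citations))

cites = [SimpleNamespace(title="Example", url="https://example.com")]
print(format_with_sources("SEO is...", cites))
```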
client.llm.complete()
Complete a conversation using Claude, GPT, or Gemini models.
IMPORTANT: Only use this method when the user explicitly requests LLM completion. For most tasks, prefer using sub-agents instead of calling this API directly.
This method calls the LLM API directly (not through a proxy) for text completion tasks like chat, code generation, and analysis.
Signature:
result = client.llm.complete(
model="claude-sonnet-4-6", # Required: model name
messages=[ # Required: conversation messages
{"role": "user", "content": "What is Python?"}
],
system_prompt="Be concise.", # Optional: system prompt
max_tokens=1024 # Optional: max response length (required for Claude)
)
Supported models:
- Claude: claude-sonnet-4-6, claude-opus-4-6, claude-haiku-4-5
- GPT: gpt-5-mini, gpt-5-nano, gpt-5.4, gpt-5.4-pro
- Gemini: gemini-3-flash, gemini-3-pro-preview
Returns LLMCompleteResponse with:
- id - Response identifier
- model - Model used
- content - Generated response text
- stop_reason - Why generation stopped (e.g., "end_turn", "max_tokens")
- usage.input_tokens, usage.output_tokens, usage.total_tokens
Example:
# Multi-turn conversation
result = client.llm.complete(
model="claude-sonnet-4-6",
messages=[
{"role": "user", "content": "What is Python?"},
{"role": "assistant", "content": "Python is a programming language."},
{"role": "user", "content": "What are its main features?"}
],
max_tokens=500
)
print(result.content)
files
Upload files to cloud storage. Files are auto-deleted after 365 days.
client.files.upload()
Upload files to cloud storage.
Signature:
# Upload from string content (must encode to bytes)
csv_content = "url,title\nhttps://example.com,Example"
result = client.files.upload(
file_data=csv_content.encode(), # Must be bytes, not string
filename="output.csv"
)
# Upload from file path
result = client.files.upload(file_path="/path/to/file.csv")
# Upload with explicit content type
result = client.files.upload(
file_data=b'{"key": "value"}',
filename="data.json",
content_type="application/json"
)
Allowed content types: text/plain, text/csv, text/markdown, text/html, text/css, text/javascript, application/json, application/xml
Returns FileUploadResponse with:
- file_id - Unique file identifier
- filename - The filename
- content_type - MIME type
- size - File size in bytes
- url - Download URL
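A common pattern is building a CSV in memory and uploading it as bytes. The `rows_to_csv_bytes` helper below is a sketch using the standard library, not an SDK method:

```python
import csv
import io

def rows_to_csv_bytes(rows):
    """Serialize rows of values to CSV bytes, ready for files.upload()."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().encode()

data = rows_to_csv_bytes([["url", "title"], ["https://example.com", "Example"]])

# With a configured client:
# result = client.files.upload(file_data=data, filename="pages.csv",
#                              content_type="text/csv")
# print(result.url)
```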
gsc
Access your Google Search Console data including search analytics, sitemaps, and URL inspection.
client.gsc.query()
Query search analytics data.
Signature:
result = client.gsc.query(
start_date="2024-01-01",
end_date="2024-01-31",
dimensions=["query", "page"], # Optional: "query", "page", "country", "device", "searchAppearance", "date"
search_type="web", # Optional: "web", "image", "video", "news", "discover", "googleNews"
row_limit=100, # Optional: max rows (default 1000, max 25000)
start_row=0, # Optional: pagination offset
aggregation_type="auto", # Optional: "auto", "byPage", "byProperty"
data_state="final" # Optional: "final", "all"
)
Returns SearchAnalyticsResponse with:
- rows - List of SearchAnalyticsRow objects
- response_aggregation_type - How data was aggregated
Each row has: keys (list), clicks, impressions, ctr, position
Pagination Example
The Search Console API returns max 25,000 rows per request. For large datasets:
def get_all_search_analytics(start_date: str, end_date: str, dimensions: list):
    """Fetch all search analytics data with automatic pagination."""
    all_rows = []
    start_row = 0
    row_limit = 25000  # Maximum allowed by the API
    while True:
        result = client.gsc.query(
            start_date=start_date,
            end_date=end_date,
            dimensions=dimensions,
            row_limit=row_limit,
            start_row=start_row,
        )
        all_rows.extend(result.rows)
        # If we got fewer rows than requested, we've reached the end
        if len(result.rows) < row_limit:
            break
        start_row += row_limit
    return all_rows
Note: The API has daily quota limits. For large datasets, consider reducing the date range or using fewer dimensions.
client.gsc.get_site()
Get site information.
Signature:
site = client.gsc.get_site()
Returns SiteInfo with:
- site_url - The site URL
- permission_level - Permission level (e.g., "siteOwner")
client.gsc.list_sitemaps()
List all sitemaps.
Signature:
sitemaps = client.gsc.list_sitemaps()
Returns SitemapListResponse with:
- sitemap - List of Sitemap objects
Each sitemap has: path, last_submitted, is_pending, is_sitemaps_index, last_downloaded, warnings, errors, contents
client.gsc.get_sitemap()
Get specific sitemap details.
Signature:
sitemap = client.gsc.get_sitemap("https://example.com/sitemap.xml")
Returns Sitemap with full details.
client.gsc.inspect_url()
Inspect a URL's index status.
Signature:
inspection = client.gsc.inspect_url(
url="https://example.com/page",
language_code="en-US" # Optional: for localized results
)
Returns UrlInspectionResponse with:
- inspection_result.inspection_result_link - Link to GSC
- inspection_result.index_status_result - Index status details
- inspection_result.mobile_usability_result - Mobile usability
- inspection_result.rich_results_result - Rich results info
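To inspect several pages in one pass, you might expand relative paths against the site root first; `build_urls` is an illustrative helper, and only the documented `inspection_result_link` field is printed:

```python
from urllib.parse import urljoin

def build_urls(base, paths):
    """Expand relative paths against a site root before inspection."""
    return [urljoin(base, p) for p in paths]

urls = build_urls("https://example.com/", ["pricing", "blog/seo-guide"])

# With a configured client, inspect each URL and print the GSC link:
# for url in urls:
#     inspection = client.gsc.inspect_url(url=url)
#     print(url, inspection.inspection_result.inspection_result_link)
print(urls)
```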
reddit
Access live Reddit data (posts, comments, and subreddit info).
client.reddit.get_subreddit_posts()
Fetch posts from a subreddit.
Signature:
posts = client.reddit.get_subreddit_posts(
subreddit="programming", # Required: subreddit name (without /r/)
sort="hot", # Optional: "hot", "new", "top", "rising" (default: "hot")
time_filter="week", # Optional: "hour", "day", "week", "month", "year", "all" (for "top" sort)
limit=25, # Optional: max posts 1-100 (default: 25)
after="t3_abc123" # Optional: pagination cursor from previous response
)
Returns SubredditPostsResponse with:
- posts - List of RedditPost objects
- after - Pagination cursor for next page (None if no more results)
Each RedditPost has:
- id - Post ID
- title - Post title
- author - Author username
- subreddit - Subreddit name
- score - Net upvotes
- upvote_ratio - Ratio of upvotes (0-1)
- num_comments - Comment count
- created - Creation datetime
- url - Link URL (or permalink for self posts)
- permalink - Reddit permalink
- is_self - True if text post
- selftext - Post body (for self posts)
- domain - Link domain
- nsfw - True if NSFW
- spoiler - True if marked spoiler
- locked - True if comments locked
- stickied - True if pinned
client.reddit.search()
Search Reddit posts.
Signature:
results = client.reddit.search(
query="python async", # Required: search query
subreddit="learnprogramming", # Optional: restrict to subreddit
sort="relevance", # Optional: "relevance", "hot", "top", "new", "comments" (default: "relevance")
time_filter="month", # Optional: "hour", "day", "week", "month", "year", "all"
limit=25, # Optional: max results 1-100 (default: 25)
after="t3_abc123" # Optional: pagination cursor
)
Returns RedditSearchResponse with:
- query - The search query
- posts - List of RedditPost objects
- after - Pagination cursor for next page
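The `after` cursor can drive a generic drain loop. The `collect_all` helper below is a sketch (not an SDK method) that works with any endpoint returning `.posts` and `.after`, demonstrated here with fake pages:

```python
from types import SimpleNamespace  # only for the fake pages below

def collect_all(fetch, max_pages=10):
    """Drain a cursor-paginated endpoint.

    fetch(after) must return an object with .posts and .after,
    matching the Reddit responses documented above.
    """
    items, after = [], None
    for _ in range(max_pages):
        page = fetch(after)
        items.extend(page.posts)
        after = page.after
        if after is None:
            break
    return items

# With a configured client:
# posts = collect_all(lambda cur: client.reddit.search(
#     query="python async", limit=100, after=cur))

# Fake two-page sequence to show the flow:
pages = {None: SimpleNamespace(posts=[1, 2], after="t3_x"),
         "t3_x": SimpleNamespace(posts=[3], after=None)}
print(collect_all(lambda cur: pages[cur]))  # [1, 2, 3]
```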
client.reddit.get_post()
Get a single post by ID.
Signature:
post = client.reddit.get_post("abc123") # Post ID with or without t3_ prefix
Returns RedditPost with full post details.
client.reddit.get_post_comments()
Get comments on a post.
Signature:
response = client.reddit.get_post_comments(
post_id="abc123", # Required: post ID (with or without t3_ prefix)
sort="best", # Optional: "best", "top", "new", "controversial", "old" (default: "best")
limit=50 # Optional: max comments 1-500 (default: 50)
)
Returns PostCommentsResponse with:
- post - RedditPost object with full post details
- comments - List of RedditComment objects (nested with replies)
Each RedditComment has:
- id - Comment ID
- author - Author username
- body - Comment text
- score - Net upvotes
- created - Creation datetime
- permalink - Reddit permalink
- depth - Nesting depth (0 = top-level)
- is_op - True if author is the post author
- replies - List of nested RedditComment objects
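Since comments nest via `replies`, a recursive flattener is often useful; `flatten_comments` is an illustrative helper, not part of the SDK:

```python
from types import SimpleNamespace  # only for the illustrative tree below

def flatten_comments(comments):
    """Depth-first flatten of a nested RedditComment tree."""
    flat = []
    for c in comments:
        flat.append(c)
        flat.extend(flatten_comments(c.replies))
    return flat

# With a configured client:
# response = client.reddit.get_post_comments(post_id="abc123", limit=500)
# for c in flatten_comments(response.comments):
#     print("  " * c.depth + c.body[:80])

leaf = SimpleNamespace(body="reply", replies=[])
tree = [SimpleNamespace(body="top", replies=[leaf])]
print([c.body for c in flatten_comments(tree)])  # ['top', 'reply']
```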
client.reddit.get_subreddit_info()
Get subreddit metadata.
Signature:
info = client.reddit.get_subreddit_info("programming")
Returns SubredditInfo with:
- name - Subreddit name (display_name)
- title - Subreddit title
- description - Full description (markdown)
- public_description - Short public description
- subscribers - Subscriber count
- active_users - Currently active users
- created - Creation datetime
- nsfw - True if NSFW
- icon_url - Subreddit icon URL (optional)
- banner_url - Banner image URL (optional)
bing_webmaster
Access Bing Webmaster Tools data including search analytics, page performance, traffic stats, and crawl information.
client.bing_webmaster.get_query_stats()
Get search query statistics.
Returns search query analytics including impressions, clicks, average position, and CTR for queries that triggered your site.
Signature:
result = client.bing_webmaster.get_query_stats()
Returns QueryStatsResponse with:
- rows - List of QueryStatsRow objects
Each QueryStatsRow has:
- query - The search query
- impressions - Number of impressions
- clicks - Number of clicks
- avg_position - Average position in search results
- avg_ctr - Average click-through rate
- date - Date of the data (optional)
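A typical follow-up is ranking rows by clicks. The `top_by_clicks` helper is a sketch over the documented row fields, not an SDK method:

```python
from types import SimpleNamespace  # only for the illustrative rows below

def top_by_clicks(rows, n=10):
    """Sort query rows by clicks, descending."""
    return sorted(rows, key=lambda r: r.clicks, reverse=True)[:n]

# With a configured client:
# stats = client.bing_webmaster.get_query_stats()
# for row in top_by_clicks(stats.rows):
#     print(f"{row.query}: {row.clicks} clicks, CTR {row.avg_ctr:.1%}")

rows = [SimpleNamespace(query="a", clicks=5), SimpleNamespace(query="b", clicks=9)]
print([r.query for r in top_by_clicks(rows)])  # ['b', 'a']
```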
client.bing_webmaster.get_page_stats()
Get page-level statistics.
Returns page-level analytics including impressions, clicks, average position, and CTR for your pages.
Signature:
result = client.bing_webmaster.get_page_stats()
Returns PageStatsResponse with:
- rows - List of PageStatsRow objects
Each PageStatsRow has:
- url - The page URL
- impressions - Number of impressions
- clicks - Number of clicks
- avg_position - Average position in search results
- avg_ctr - Average click-through rate
client.bing_webmaster.get_traffic_stats()
Get overall traffic statistics.
Returns aggregate traffic metrics for your site.
Signature:
traffic = client.bing_webmaster.get_traffic_stats()
Returns TrafficStats with:
- date - Date of the data (optional)
- impressions - Total impressions
- clicks - Total clicks
- avg_ctr - Average click-through rate
- avg_imp_rank - Average impression rank
- avg_click_position - Average click position
client.bing_webmaster.get_crawl_stats()
Get crawl statistics.
Returns crawl metrics including pages crawled, errors, indexing status, and various crawl issues.
Signature:
crawl = client.bing_webmaster.get_crawl_stats()
Returns CrawlStats with:
- date - Date of the data (optional)
- crawled_pages - Number of pages crawled
- crawl_errors - Number of crawl errors
- in_index - Number of pages in the index
- in_links - Number of inbound links
- blocked_by_robots_txt - Pages blocked by robots.txt
- contains_malware - Pages flagged for malware
- http_code_error - Pages with HTTP errors
client.bing_webmaster.get_site()
Get site information.
Returns the site URL and verification status for the site associated with the current project.
Signature:
site = client.bing_webmaster.get_site()
Returns SiteInfo with:
- site_url - The site URL
- is_verified - Whether the site is verified
- is_in_index - Whether the site is in the Bing index
backlink_marketplace
Find guest posting and link building opportunities.
IMPORTANT - Credit Cost Warning: This resource charges 2 credits per domain returned.
Cost estimates:
- 100 domains = 200 credits
- 500 domains = 1,000 credits
client.backlink_marketplace.list_domains()
List domains from the backlink marketplace.
Credit Cost: 2 credits per domain returned.
Find guest posting and link building opportunities by filtering domains based on SEO metrics, pricing, and content categories.
Signature:
result = client.backlink_marketplace.list_domains(
# Pagination
limit=100, # Number of results (default: 100, max: 500)
offset=0, # Pagination offset
# Domain metrics filters
min_dr=30, # Minimum Domain Rating (0-100)
max_dr=80, # Maximum Domain Rating (0-100)
min_da=20, # Minimum Domain Authority (0-100)
max_da=90, # Maximum Domain Authority (0-100)
# Traffic filters
min_traffic=1000, # Minimum monthly organic traffic
max_traffic=1000000, # Maximum monthly organic traffic
# Price filters
min_price=50, # Minimum price in USD
max_price=500, # Maximum price in USD
# Content filters
categories=["technology"], # Filter by categories/niches
languages=["en"], # Filter by language codes
countries=["us", "uk"], # Filter by country codes
# Link attributes
link_types=["dofollow"], # e.g., "dofollow", "nofollow"
# Sorting
sort_by="dr", # Field to sort by (e.g., "dr", "price", "traffic")
sort_order="desc" # "asc" or "desc"
)
Returns BacklinkMarketplaceListDomainsResponse with:
- domains - List of BacklinkMarketplaceDomain objects
- total - Total number of matching domains
- limit - Limit used
- offset - Offset used
- has_more - Whether more results are available
Each BacklinkMarketplaceDomain has:
- domain - Domain name
- dr - Domain Rating (Ahrefs, 0-100)
- da - Domain Authority (Moz, 0-100)
- traffic - Monthly organic traffic
- rd - Referring domains count
- categories - List of niche categories
- language - Primary language
- country - Primary country
- link_type - "dofollow" or "nofollow"
- marketplaces - List of marketplace entries with prices
- min_price - Minimum price across marketplaces
- max_price - Maximum price across marketplaces
- spam_score - Spam score (0-100)
- trust_flow - Trust flow (Majestic)
- citation_flow - Citation flow (Majestic)
Example:
# Find high-quality domains with dofollow links
result = client.backlink_marketplace.list_domains(
min_dr=40,
min_traffic=5000,
link_types=["dofollow"],
max_price=200,
sort_by="dr",
sort_order="desc",
limit=100,
)
# Cost: 100 domains * 2 credits = 200 credits
for domain in result.domains:
    print(f"{domain.domain}: DR={domain.dr}, ${domain.min_price}")
amplitude
Product analytics from Amplitude including sessions, users, events, funnels, and retention.
client.amplitude.get()
Get core analytics data (sessions and users).
Signature:
analytics = client.amplitude.get(
start_date="20240101", # Optional: YYYYMMDD format (default: 30 days ago)
end_date="20240131" # Optional: YYYYMMDD format (default: today)
)
Returns AmplitudeResponse with:
- sessions.total - Total sessions
- sessions.average_length - Average session length in seconds
- sessions.per_user - Sessions per user
- users.active - Active users count
- users.new - New users count
Example:
analytics = client.amplitude.get()
print(f"Active users: {analytics.users.active}")
print(f"New users: {analytics.users.new}")
print(f"Avg session length: {analytics.sessions.average_length}s")
client.amplitude.list_events()
List all trackable events.
Signature:
events = client.amplitude.list_events()
Returns EventsListResponse with:
- events - List of AmplitudeEvent objects
Each AmplitudeEvent has:
- name - Event name
- totals - Weekly total count
Example:
events = client.amplitude.list_events()
for event in events.events:
    print(f"{event.name}: {event.totals} weekly")
client.amplitude.list_cohorts()
List all behavioral cohorts.
Signature:
cohorts = client.amplitude.list_cohorts()
Returns CohortsListResponse with:
- cohorts - List of AmplitudeCohort objects
Each AmplitudeCohort has:
- id - Cohort ID
- name - Cohort name
- size - Number of users in cohort
- description - Cohort description (optional)
Example:
cohorts = client.amplitude.list_cohorts()
for cohort in cohorts.cohorts:
    print(f"{cohort.name}: {cohort.size} users")
client.amplitude.get_funnel()
Analyze funnel conversion for a sequence of events.
Signature:
funnel = client.amplitude.get_funnel(
events=["Sign Up", "Complete Profile", "Make Purchase"], # Required: min 2 events
start_date="20240101", # Optional: YYYYMMDD format
end_date="20240131" # Optional: YYYYMMDD format
)
Returns FunnelResponse with:
- steps - List of FunnelStep objects
- overall_conversion - Overall funnel conversion rate (percentage)
Each FunnelStep has:
- event - Event name
- users_entered - Users who reached this step
- users_completed - Users who completed this step
- conversion_rate - Conversion rate (percentage)
- drop_off_rate - Drop-off rate (percentage)
Example:
funnel = client.amplitude.get_funnel(
events=["Sign Up", "Complete Profile", "Make Purchase"]
)
print(f"Overall conversion: {funnel.overall_conversion}%")
for step in funnel.steps:
    print(f"{step.event}: {step.conversion_rate}% conversion")
client.amplitude.get_retention()
Get retention analysis.
Signature:
retention = client.amplitude.get_retention(
start_date="20240101", # Optional: YYYYMMDD format
end_date="20240131" # Optional: YYYYMMDD format
)
Returns RetentionResponse with:
- cohort_size - Size of the cohort
- retention - List of RetentionDay objects
Each RetentionDay has:
- day - Day number (0, 1, 7, 14, 30)
- retained - Number of users retained
- retention_rate - Retention rate (percentage)
Example:
retention = client.amplitude.get_retention()
print(f"Cohort size: {retention.cohort_size}")
for day in retention.retention:
    print(f"Day {day.day}: {day.retention_rate}% retained")
sheets
Create, read, write, and append data to Google Sheets in the user's account.
client.sheets.create()
Create a new spreadsheet.
Signature:
sheet = client.sheets.create(title="My Report")
Args:
- title: Title for the new spreadsheet (required)
Returns CreateSpreadsheetResponse with:
- spreadsheet_id - Unique identifier for the spreadsheet
- spreadsheet_url - URL to access the spreadsheet
- title - Title of the created spreadsheet
Example:
sheet = client.sheets.create(title="Weekly SEO Report")
print(f"Created: {sheet.spreadsheet_url}")
client.sheets.write()
Write data to a spreadsheet range.
Signature:
result = client.sheets.write(
spreadsheet_id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
range="Sheet1!A1:B2",
values=[["Name", "Age"], ["Alice", 30]],
input_option="USER_ENTERED" # Optional: "RAW" or "USER_ENTERED" (default)
)
Args:
- spreadsheet_id: The ID of the spreadsheet (required)
- range: A1 notation of the range to write (required)
- values: 2D array of values to write (required)
- input_option: How input data should be interpreted (optional)
- "RAW" - Values are stored exactly as provided
- "USER_ENTERED" - Values are parsed as if typed in the UI (default)
Returns WriteResponse with:
- spreadsheet_id - ID of the spreadsheet
- updated_range - The range that was updated
- updated_rows - Number of rows updated
- updated_columns - Number of columns updated
- updated_cells - Total number of cells updated
Example:
result = client.sheets.write(
spreadsheet_id=sheet.spreadsheet_id,
range="Sheet1!A1:C3",
values=[
["URL", "Title", "Traffic"],
["https://example.com", "Home", 1000],
["https://example.com/blog", "Blog", 500]
]
)
print(f"Updated {result.updated_cells} cells")
client.sheets.read()
Read data from a spreadsheet range.
Signature:
result = client.sheets.read(
spreadsheet_id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
range="Sheet1!A1:B2"
)
Args:
- spreadsheet_id: The ID of the spreadsheet (required)
- range: A1 notation of the range to read (required)
Returns ReadResponse with:
- range - The range that was read
- values - 2D array of cell values
Example:
data = client.sheets.read(
spreadsheet_id=sheet.spreadsheet_id,
range="Sheet1!A1:C10"
)
for row in data.values:
    print(row)
client.sheets.append()
Append rows to a spreadsheet.
Signature:
result = client.sheets.append(
spreadsheet_id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
range="Sheet1!A1",
values=[["New Row 1", "Data"], ["New Row 2", "More Data"]],
input_option="USER_ENTERED" # Optional
)
Args:
- spreadsheet_id: The ID of the spreadsheet (required)
- range: A1 notation of a range to search for a table (required). Values are appended after the last row of the table.
- values: 2D array of values to append (required)
- input_option: How input data should be interpreted (optional)
- "RAW" - Values are stored exactly as provided
- "USER_ENTERED" - Values are parsed as if typed in the UI (default)
Returns AppendResponse with:
- spreadsheet_id - ID of the spreadsheet
- table_range - The range of the table to which data was appended
- updated_rows - Number of rows appended
- updated_cells - Total number of cells appended
Example:
# Add new pages to an existing output
result = client.sheets.append(
spreadsheet_id=sheet.spreadsheet_id,
range="Sheet1!A1",
values=[
["https://example.com/new-page", "New Page", 250]
]
)
print(f"Appended {result.updated_rows} rows")
nano_banana
AI-powered image generation using Gemini's image generation model.
client.nano_banana.generate()
Generate an image using Gemini's image generation model.
Signature:
result = client.nano_banana.generate(
prompt="A serene mountain landscape at sunset",
aspect_ratio="16:9", # Optional: 1:1, 16:9, 9:16, 4:3, 3:4, etc.
size="2K", # Optional: 1K (1024px), 2K (2048px), 4K (4096px)
reference_images=["input.png"] # Optional: up to 14 reference images
)
Args:
- prompt: Text prompt describing the image to generate (required)
- aspect_ratio: Optional aspect ratio. Supported: 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3, 4:5, 5:4, 21:9. Defaults to "1:1".
- size: Optional image size. Supported: 1K (1024px), 2K (2048px), 4K (4096px). Defaults to "1K".
- reference_images: Optional list of reference images for editing/style transfer. Can be file paths (str or Path) or bytes. Maximum 14 images.
Returns NanoBananaGenerateResponse with:
- image_url - URL to the generated image (hosted on CDN)
- mime_type - MIME type of the image (e.g., "image/png")
- text - Optional text commentary from the model
Example:
# Generate an image from text
result = client.nano_banana.generate(prompt="A sunset over mountains")
print(f"Image URL: {result.image_url}")
# Download and save the generated image
import requests
response = requests.get(result.image_url)
with open("output.png", "wb") as f:
    f.write(response.content)
# Edit an existing image
result = client.nano_banana.generate(
prompt="Add a rainbow to this image",
reference_images=["landscape.jpg"]
)
pagespeed
Google PageSpeed Insights Lighthouse analysis — returns performance score, Core Web Vitals (lab + real-user CrUX data), actionable opportunities with estimated savings, diagnostics, and third-party cost breakdown. System-wide; no user integration required.
client.pagespeed.analyze()
Run a Lighthouse audit on a URL via Google PageSpeed Insights and get back a workflow-friendly summary: performance score, lab + real-user (CrUX) Core Web Vitals, ranked opportunities with estimated LCP/FCP savings, diagnostics, and a third-party resource rollup.
Signature:
result = client.pagespeed.analyze(
url="https://example.com",
strategy="mobile", # Optional: "mobile" (default) or "desktop"
locale="en-US" # Optional: BCP 47 locale (default "en-US")
)
Returns PageSpeedAnalyzeResponse with:
- url - The analyzed URL (echoed from the request)
- strategy - Form factor used ("mobile" or "desktop")
- analysis_utc_timestamp - When Google ran the analysis
- lighthouse_version - Lighthouse version that produced the report
- performance_score - Overall Lighthouse performance score (0.0–1.0, lab-based). None if not computed.
- core_web_vitals - PageSpeedCoreWebVitals (lab metrics) with:
- largest_contentful_paint_ms (LCP)
- first_contentful_paint_ms (FCP)
- cumulative_layout_shift (CLS, unitless)
- total_blocking_time_ms (TBT)
- speed_index_ms
- time_to_interactive_ms (TTI)
- server_response_time_ms (TTFB proxy)
- field_data - PageSpeedFieldData (real-user CrUX data, None when site lacks traffic) with:
- url_metrics - CrUX for this exact URL (PageSpeedCruxMetrics)
- origin_metrics - CrUX aggregated across the origin (PageSpeedCruxMetrics)
Each PageSpeedCruxMetrics has overall_category ("FAST" | "AVERAGE" | "SLOW") plus per-metric PageSpeedCruxMetric (percentile + category) for LCP, FCP, CLS, INP, TTFB. Google ranks sites on CrUX data, not lab data, so this is the most important signal for SEO.
- opportunities - List of PageSpeedOpportunity, sorted by estimated LCP savings desc. Each has:
- id, title, description
- score (0.0–1.0, lower = worse)
- display_value (human-readable savings summary)
- estimated_savings_ms.{lcp, fcp} - predicted improvement if fixed
- offenders - up to 10 contributing URLs with wasted_bytes / wasted_ms
- diagnostics - List of PageSpeedDiagnostic for issues without a precise savings number (e.g. "DOM too large: 811 elements"). Each has id, title, numeric_value, numeric_unit.
- third_parties - List of PageSpeedThirdParty (entity, transfer_size_bytes, main_thread_time_ms), sorted by transfer size desc. Useful for identifying heavy third-party scripts.
Important notes:
- PSI is synchronous: each call typically blocks for 10-30s, longer for heavy sites. The backend enforces a 90s timeout.
- Quota is 50 analyses/day per org by default (shared Google API key cap of 25,000/day divided across brands). Exceeding returns HTTP 429.
- Mobile and desktop are separate calls: PSI returns CrUX and lab data only for the form factor you request. Call analyze() twice (once per strategy) if you want both; each call consumes one quota unit.
- If analysis times out, the response is a 504 with a message that the target site is too slow. Retrying won't help: Lighthouse runs from scratch and will likely reach the same outcome.
- If Lighthouse itself fails (unreachable target, bad response), the response is a 502 with a Lighthouse error code (e.g. ERRORED_DOCUMENT).
- Response payload is typically 5–15 KB after trimming (screenshots + internal Lighthouse UI fields are dropped).
Example — top-line performance + CrUX check:
result = client.pagespeed.analyze(url="https://example.com", strategy="mobile")
if result.performance_score is not None:
    print(f"Lab score: {result.performance_score * 100:.0f}/100")
if result.field_data and result.field_data.url_metrics:
    lcp = result.field_data.url_metrics.largest_contentful_paint_ms
    if lcp:
        print(f"Real-user LCP p75: {lcp.percentile:.0f}ms — {lcp.category}")
Example — surface top opportunities for an SEO writeup:
for op in result.opportunities[:5]:
    savings = op.estimated_savings_ms
    print(f"- {op.title}: save {savings.lcp:.0f}ms LCP")
    for offender in op.offenders[:3]:
        print(f" * {offender.url} (wasted {offender.wasted_bytes:.0f} bytes)")
Example — check third-party weight:
for tp in result.third_parties[:5]:
    kb = tp.transfer_size_bytes / 1024
    print(f"- {tp.entity}: {kb:.0f} KB, {tp.main_thread_time_ms:.0f}ms main-thread")