analytics-data

Provides an API wrapper around third-party analytics services for easy data retrieval

Main Features

  • Webtrekk JSON/RPC v1.1
  • Quintly REST v0.9
  • Credential manager

Quintly API

To use the Quintly API through analytics-data, use the QuintlyAPI class.

# Import the class and datetime for building the date range
from analytics.quintly import QuintlyAPI
import datetime

# Instantiate the class with your client ID and secret
quintly = QuintlyAPI('client_id', 'client_secret')

# Available profiles are loaded after instantiation
profiles = quintly.get_profiles()

# Available groups are loaded after instantiation
groups = quintly.get_groups()

# To run a query, profile IDs are required. They can be retrieved either
# through a group or by providing a list of profile names.

profile_ids_from_group = quintly.get_profile_ids_from_group_name('group_name')
profile_ids_from_profile_names = quintly.get_profile_ids_from_names(['profile_name_1', 'profile_name_2'])


# A query must specify the profiles for which the metrics should be retrieved...
profile_ids = profile_ids_from_group

#... the table from which the metrics should be retrieved...
table = 'facebookInsights'

#... the fields which are of interest...
fields = ['profileId', 'time', 'page_impressions_unique']

# ... a start and end date
start_date = datetime.date(2019, 2, 1)
end_date = datetime.date(2019, 2, 11)


# Run the query with the run_query method; it returns a pandas DataFrame.
df = quintly.run_query(profile_ids, table, fields, start_date, end_date)

# The default interval is daily but can be changed using the interval parameter
df = quintly.run_query(profile_ids, table, fields, start_date, end_date,
        interval='monthly')

# If the query is too big because it covers too many profiles, it can be
# split into subqueries by setting split_profiles to True: one subquery
# is run per profile.

df = quintly.run_query(profile_ids, table, fields, start_date, end_date,
        split_profiles=True)

# If the query is too big because the time range is too large, it can be
# split into subqueries by setting split_days to a number of days: one
# subquery is run per chunk of that many days.

df = quintly.run_query(profile_ids, table, fields, start_date, end_date,
        split_days=28)
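The date chunking that split_days implies can be illustrated standalone. This is only a sketch of the idea under assumed chunk boundaries; the helper below is hypothetical, not the library's internal implementation:

```python
import datetime

def chunk_date_range(start_date, end_date, max_days):
    """Yield (chunk_start, chunk_end) pairs covering the range,
    each spanning at most max_days days (both ends inclusive)."""
    current = start_date
    while current <= end_date:
        chunk_end = min(current + datetime.timedelta(days=max_days - 1),
                        end_date)
        yield current, chunk_end
        current = chunk_end + datetime.timedelta(days=1)

chunks = list(chunk_date_range(datetime.date(2019, 2, 1),
                               datetime.date(2019, 4, 30), 28))
# The first chunk runs from 2019-02-01 to 2019-02-28 (28 days);
# the last chunk ends exactly on 2019-04-30.
```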

# split_profiles and split_days can also be combined.
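Once the DataFrame is back, ordinary pandas operations apply. A minimal sketch, using a hand-built stand-in for the result (the row layout of profileId/time/metric is assumed from the fields requested above):

```python
import pandas as pd

# Stand-in for the DataFrame returned by run_query: one row per
# profile and day, with the requested fields as columns (assumed shape)
df = pd.DataFrame({
    'profileId': [101, 101, 102, 102],
    'time': pd.to_datetime(['2019-02-01', '2019-02-02',
                            '2019-02-01', '2019-02-02']),
    'page_impressions_unique': [120, 150, 80, 95],
})

# Sum the metric per profile over the whole range
totals = df.groupby('profileId')['page_impressions_unique'].sum()
# profileId 101 -> 270, profileId 102 -> 175
```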

Download files

Files for analytics-data, version 0.7.0:

  • analytics_data-0.7.0-py2.py3-none-any.whl (9.6 kB, wheel, py2/py3)
