
Treasure Data extension for pyspark


Getting Started: td-pyspark

Treasure Data extension for using pyspark.


You can install from PyPI by using pip as follows:

# For Spark 2.4.x
$ pip install td-pyspark

# For Spark 3.0.x
$ pip install td-pyspark-ea

If you also want to install PySpark itself via PyPI, use the spark extra:

$ pip install td-pyspark[spark]


The td-spark feature is disabled by default. First contact Treasure Data support to have it enabled for your account.

td-pyspark is a library to enable Python to access tables in Treasure Data. The features of td_pyspark include:

  • Reading tables in Treasure Data as DataFrame
  • Writing DataFrames to Treasure Data
  • Submitting Presto queries and reading the query results as DataFrames

For more details, see also td-spark FAQs.

Quick Start with Docker

You can try td_pyspark using Docker without installing Spark or Python.

First, create a td-spark.conf file and set your TD API key and site (us, jp, eu01, ap02):

td-spark.conf:
spark.td.apikey (Your TD API KEY)
spark.td.site (Your site: us, jp, eu01, ap02)
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.sql.execution.arrow.enabled true

Launch the td-pyspark Docker image, which has the td_pyspark library pre-installed:

$ docker run -it -e TD_SPARK_CONF=td-spark.conf -v $(pwd):/opt/spark/work armtd/td-spark-pyspark:latest_spark2.4.5
Python 3.6.9 (default, Oct 17 2019, 11:10:22) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
20/06/03 06:16:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.5

Using Python version 3.6.9 (default, Oct 17 2019 11:10:22)
SparkSession available as 'spark'.
2020-06-03 06:16:09.338Z debug [spark] Loading com.treasuredata.spark package - (package.scala:23)
2020-06-03 06:16:09.388Z  info [spark] td-spark version:20.6.0, revision:703654f, build_time:2020-06-03T05:22:17.574+0000 - (package.scala:24)
2020-06-03 06:16:11.405Z  info [TDServiceConfig] td-spark site: us - (TDServiceConfig.scala:36)

Try reading a sample table by specifying a time range:

>>> df = td.table("sample_datasets.www_access").within("+2d/2014-10-04").df()
2019-06-13 19:48:51.605Z  info [TDRelation] Fetching the partition list of sample_datasets.www_access within time range:[2014-10-04 00:00:00Z,2014-10-06 00:00:00Z) - (TDRelation.scala:170)
2019-06-13 19:48:51.950Z  info [TDRelation] Retrieved 2 partition entries - (TDRelation.scala:176)
|user|           host|                path|             referer|code|               agent|size|method|      time|
|null||  /category/software|                   -| 200|Mozilla/5.0 (Maci...| 117|   GET|1412382292|
|null||  /category/software|                   -| 200|Mozilla/5.0 (comp...|  53|   GET|1412382284|
|null||/category/electro...| /category/computers| 200|Mozilla/5.0 (Wind...| 106|   GET|1412382275|
|null||   /item/garden/2832|      /item/toys/230| 200|Mozilla/5.0 (Maci...| 122|   GET|1412382267|
|null||/category/electro...|    /item/games/2532| 200|Mozilla/5.0 (comp...|  73|   GET|1412382259|
|null||   /category/cameras|/category/cameras...| 200|Mozilla/5.0 (Wind...| 117|   GET|1412382251|
|null||  /category/software|/search/?c=Electr...| 200|Mozilla/5.0 (Maci...|  52|   GET|1412382243|
|null||/category/electro...|                   -| 200|Mozilla/5.0 (iPad...| 120|   GET|1412382234|
|null||   /category/jewelry|   /item/office/3462| 200|Mozilla/5.0 (Wind...|  59|   GET|1412382226|
|null||    /category/office|     /category/music| 200|Mozilla/4.0 (comp...|  46|   GET|1412382218|
|null||     /category/games|                   -| 200|Mozilla/5.0 (Wind...|  40|   GET|1412382210|
|null|| /category/computers|                   -| 200|Mozilla/5.0 (Wind...|  95|   GET|1412382201|
|null||/item/giftcards/4684|    /item/books/1031| 200|Mozilla/5.0 (Wind...|  65|   GET|1412382193|
|null||     /item/toys/1085|   /category/cameras| 200|Mozilla/5.0 (Wind...|  65|   GET|1412382185|
|null||/item/electronics...|  /category/software| 200|Mozilla/5.0 (comp...| 121|   GET|1412382177|
|null||/category/cameras...|                   -| 200|Mozilla/5.0 (Maci...|  54|   GET|1412382168|
|null|| /item/software/4343|  /category/software| 200|Mozilla/4.0 (comp...| 139|   GET|1412382160|
|null||  /category/software|                   -| 200|Mozilla/4.0 (comp...|  92|   GET|1412382152|
|null||     /category/music|   /category/jewelry| 200|Mozilla/5.0 (Wind...| 119|   GET|1412382144|
|null|| /item/software/4783|/category/electro...| 200|Mozilla/5.0 (Wind...| 137|   GET|1412382135|
only showing top 20 rows


TDSparkContext is an entry point to access td_pyspark's functionalities. To create TDSparkContext, pass your SparkSession (spark) to TDSparkContext:

from td_pyspark import TDSparkContext

td = TDSparkContext(spark)

Reading Tables as DataFrames

To read a table, use td.table("(database name).(table name)"):

df = td.table("sample_datasets.www_access").df()

To change the context database, use td.use(database_name):

td.use("sample_datasets")
# Now accesses sample_datasets.www_access
df = td.table("www_access").df()

Calling .df() reads your table data as a Spark DataFrame. The DataFrame usage is the same as in PySpark. See also the PySpark DataFrame documentation.

Specifying Time Ranges

Treasure Data is a time series database, so reading recent data by specifying a time range is important for reducing the amount of data to be processed. The .within(...) function specifies a target time range in a concise syntax, accepting the same interval strings as the TD_INTERVAL function in Presto.

For example, to read the last 1 hour of data, use within("-1h"):

df = td.table("sample_datasets.www_access").within("-1h").df()
To read the last day's data, use within("-1d"):

df = td.table("sample_datasets.www_access").within("-1d").df()
You can also specify an offset for the relative time range. This example reads one day's data beginning from 7 days ago:

df = td.table("sample_datasets.www_access").within("-1d/-7d").df()
If you know an exact time range, within("(start time)/(end time)") is useful:

>>> df = td.table("sample_datasets.www_access").within("2014-10-04/2014-10-05").df()
2019-06-13 20:12:01.400Z  info [TDRelation] Fetching the partition list of sample_datasets.www_access within time range:[2014-10-04 00:00:00Z,2014-10-05 00:00:00Z) - (TDRelation.scala:170)

See the td-spark documentation for more examples of interval strings.

Submitting Presto Queries

If your Spark cluster is small, reading all of the data as an in-memory DataFrame might be difficult. In this case, you can utilize Presto, a distributed SQL query engine, to reduce the amount of data processed in PySpark:

>>> q = td.presto("select code, count(*) cnt from sample_datasets.www_access group by 1")
2019-06-13 20:09:13.245Z  info [TDPrestoJDBCRDD]  - (TDPrestoRelation.scala:106)
Submit Presto query:
select code, count(*) cnt from sample_datasets.www_access group by 1
|code| cnt|
| 200|4981|
| 500|   2|
| 404|  17|

The query result is represented as a DataFrame.
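Since the result is a regular DataFrame, it can be post-processed with PySpark, or converted to pandas for local analysis. A sketch (assumes a configured TDSparkContext td as above, and that pandas/pyarrow are installed for toPandas()):

```python
# Sketch: post-process a Presto query result with PySpark / pandas.
# Assumes `td` is a configured TDSparkContext (see above).
q = td.presto("select code, count(*) cnt from sample_datasets.www_access group by 1")

# Regular DataFrame operations apply
q.filter(q.code == 200).show()

# Or convert to a local pandas DataFrame for further analysis
pdf = q.toPandas()
print(pdf.head())
```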

To run non-query statements (e.g., INSERT INTO, CREATE TABLE), use execute_presto(sql):

td.execute_presto("CREATE TABLE IF NOT EXISTS A(time bigint, id varchar)")

Using SparkSQL

To use tables in Treasure Data inside Spark SQL, create a view with df.createOrReplaceTempView(...):

# Read TD table as a DataFrame
df = td.table("mydb.test1").df()
# Register the DataFrame as a view
df.createOrReplaceTempView("test1")
spark.sql("SELECT * FROM test1").show()

Create or Drop Databases and Tables

Create a new table or database:

td.create_database_if_not_exists("mydb")
td.create_table_if_not_exists("mydb.test1")
Delete unnecessary tables:

td.drop_table_if_exists("mydb.test1")
td.drop_database_if_exists("mydb")
You can also check the presence of a table:

td.table("mydb.test1").exists() # True if the table exists
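The check is handy for guarding setup steps, for example (a sketch; assumes a configured TDSparkContext td, and the create_table_if_not_exists helper from td_pyspark):

```python
# Sketch: only create the table when it is missing.
# Assumes `td` is a configured TDSparkContext; the helper name
# create_table_if_not_exists is taken from the td_pyspark API.
if not td.table("mydb.test1").exists():
    td.create_table_if_not_exists("mydb.test1")
```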

Create User-Defined Partition Tables

User-defined partitioning (UDP) is useful if you know a column in the table that has unique identifiers (e.g., IDs, category values).

You can create a UDP table partitioned by id (string type column) as follows:

td.create_udp_s("mydb.user_list", "id")

To create a UDP table partitioned by a long (bigint) type column, use td.create_udp_l:

td.create_udp_l("mydb.departments", "dept_id")

Swapping Table Contents

You can replace the contents of two tables. The input tables must be in the same database:

# Swap the contents of two tables
td.swap_tables("mydb.tbl1", "mydb.tbl2")

# Another way to swap tables
td.table("mydb.tbl1").swap_table_with("tbl2")
Uploading DataFrames to Treasure Data

To save your local DataFrames as a table, td.insert_into(df, table) and td.create_or_replace(df, table) can be used:

# Insert the records in the input DataFrame to the target table:
td.insert_into(df, "mydb.tbl1")

# Create or replace the target table with the content of the input DataFrame:
td.create_or_replace(df, "mydb.tbl2")

Using multiple TD accounts

To use an API key other than the one configured in td-spark.conf, use td.with_apikey(apikey):

# Returns a new TDSparkContext with the specified key
td2 = td.with_apikey("key2")

For reading tables or uploading DataFrames with the new key, use td2:

# Read a table with key2
df = td2.table("sample_datasets.www_access").df()
# Insert the records with key2
td2.insert_into(df, "mydb.tbl1")

Running PySpark jobs with spark-submit

To submit your PySpark script to a Spark cluster, you will need the following files:

  • td-spark.conf file that describes your TD API key and site (see above).
    • Check the file location using pip show -f td-pyspark, and copy it to a convenient location
  • td-spark-assembly-latest_xxxx.jar
    • Get the latest version from the Download page.
  • Pre-built Spark
    • Download Spark 2.4.x with Hadoop 2.7.x (built for Scala 2.11)
    • Extract the downloaded archive. This folder location will be your $SPARK_HOME.

Here is an example PySpark application code:

import td_pyspark
from pyspark.sql import SparkSession

# Create a new SparkSession
spark = SparkSession\
    .builder\
    .appName("td-pyspark-app")\
    .getOrCreate()
# Create TDSparkContext
td = td_pyspark.TDSparkContext(spark)

# Read the table data within -1d (yesterday) range as DataFrame
df = td.table("sample_datasets.www_access").within("-1d").df()

To run the application, use spark-submit with the files mentioned above:

# Launching PySpark in local mode
$ ${SPARK_HOME}/bin/spark-submit --master "local[4]"\
  --driver-class-path td-spark-assembly.jar\
  --properties-file=td-spark.conf\
  your_app.py
local[4] means running a Spark cluster locally using 4 threads.

To use a remote Spark cluster, specify master address, e.g., --master=spark://(master node IP address):7077.

Using the td-spark assembly included in the PyPI package

The package contains a pre-built td-spark assembly so that you can add it to the classpath by default. TDSparkContextBuilder.default_jar_path() returns the path to the bundled td-spark-assembly.jar file. Passing this path to the jars method of TDSparkContextBuilder will build a SparkSession that includes the default jar.

import td_pyspark
from pyspark.sql import SparkSession

builder = SparkSession\
    .builder\
    .appName("td-pyspark-app")

td = td_pyspark.TDSparkContextBuilder(builder)\
    .apikey("XXXXXXXXXXXXXX")\
    .jars(td_pyspark.TDSparkContextBuilder.default_jar_path())\
    .build()
