
Explore data files with pyspark


Spark File Explorer

When developing Spark applications I found myself dealing with a growing number of data files that I create.

CSVs are fine, but what about JSON and complex Parquet files?

To open and explore a file I used Excel for CSVs and text editors with plugins for JSON, but there was nothing handy for Parquet. Even formatted JSON was not always readable. And what about viewing schemas?

Each time I had to fire up Spark and write a simple app, which was not a problem in itself but was tedious and boring.
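
For illustration, here is a minimal sketch of the kind of one-off script this meant (the file path is just a made-up example):

# one-off inspection script I kept rewriting (hypothetical path)
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("explore").getOrCreate()

df = spark.read.parquet("file:///home/myuser/datafiles/base_path/events")
df.printSchema()             # inspect the (possibly nested) schema
df.show(20, truncate=False)  # peek at the first rows

spark.stop()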

Why not a database?

Well, for tabular data this problem is already solved - just use your preferred database. Quite often you can load text files or even Parquet files directly into the database.

So what's the big deal?

Hierarchical data sets

Unfortunately, the files I often deal with have a hierarchical structure. They cannot simply be visualized as tables; rather, some fields contain tables of other structures. Each of these structures is a table in itself, but how do you load and explore such embedded tables in a database?
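
To make this concrete, here is a hedged sketch with made-up field names - each device row embeds a whole table of measurements:

# A made-up nested schema: each row carries an embedded "table"
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("nested-demo").getOrCreate()

df = spark.createDataFrame(
    [("dev-1", [(1, 0.5), (2, 0.7)])],
    "device string, measurements array<struct<ts:long, value:double>>",
)
df.printSchema()
# root
#  |-- device: string (nullable = true)
#  |-- measurements: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- ts: long (nullable = true)
#  |    |    |-- value: double (nullable = true)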

For Spark files use... Spark!

Hold on - since I generate these files with Apache Spark, why not use it to explore them? Spark handles complex structures and file types out of the box. So all I needed was to build a user interface to display directories, files and their contents.
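
As a hedged sketch (reusing the made-up schema above), Spark's built-in explode turns such an embedded table into plain rows:

# Flatten the hypothetical nested column with built-in functions
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("nested-explore").getOrCreate()

df = spark.createDataFrame(
    [("dev-1", [(1, 0.5), (2, 0.7)])],
    "device string, measurements array<struct<ts:long, value:double>>",
)
df.select("device", explode("measurements").alias("m")) \
  .select("device", col("m.ts"), col("m.value")) \
  .show()
# +------+---+-----+
# |device| ts|value|
# +------+---+-----+
# | dev-1|  1|  0.5|
# | dev-1|  2|  0.7|
# +------+---+-----+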

Why console?

I use Kubernetes in production and develop Spark applications locally or in a VM. In all these environments I would like to have one tool to rule them all.

I like console tools a lot; they enforce a certain simplicity. They can run locally or over an SSH connection on a remote cluster. Sounds perfect. All I needed was a console UI library, so I wouldn't have to reinvent the wheel.

Textual

What a great project Textual is!

Years ago I used curses, but Textual is vastly superior to what I used back then. It packs so many features into a friendly set of simple-to-use components. Highly recommended.

Usage

Install the package with pip:

pip install pyspark-explorer

Run:

pyspark-explorer

You may wish to provide a base path upfront. It can be changed at any time (press o for Options).

For local files that could be, for example:

# Linux
pyspark-explorer file:///home/myuser/datafiles/base_path
# Windows
pyspark-explorer file:/c:/datafiles/base_path

For a remote location:

# Remote hdfs cluster
pyspark-explorer hdfs://somecluster/datafiles/base_path

The default path is /, which represents the local root filesystem and works fine even on Windows thanks to Spark's path handling.

Configuration files are saved in your home directory (in the .pyspark-explorer subdirectory). They are JSON files, so you are free to edit them.
