
pandas readers on steroids (remote files, glob patterns, cache, etc.)

Project description


Pea Kina aka 'Giant Panda'

Wrapper around the pandas library that detects the separator, encoding and type of a file. It can read a group of files matching a pattern (Python regex or glob), both local and remote (HTTP/HTTPS, FTP/FTPS/SFTP or S3/S3N/S3A).

The supported file types are CSV, Excel, JSON, Parquet and XML.

:information_source: If the file type you need is not yet supported, feel free to open an issue or to directly open a PR with the code!

Please read the documentation for more information.

Installation

pip install peakina

Usage

Given a file file.csv:

a;b
0;0
0;1

just type:

>>> import peakina as pk
>>> pk.read_pandas('file.csv')
   a  b
0  0  0
1  0  1

Or, for files on an FTPS server:

  • my_data_2015.csv
  • my_data_2016.csv
  • my_data_2017.csv
  • my_data_2018.csv

you can just type:

>>> pk.read_pandas('ftps://<path>/my_data_\\d{4}\\.csv$', match='regex', dtype={'a': 'str'})
    a   b     __filename__
0  '0'  0  'my_data_2015.csv'
1  '0'  1  'my_data_2015.csv'
2  '1'  0  'my_data_2016.csv'
3  '1'  1  'my_data_2016.csv'
4  '3'  0  'my_data_2017.csv'
5  '3'  1  'my_data_2017.csv'
6  '4'  0  'my_data_2018.csv'
7  '4'  1  'my_data_2018.csv'

Using cache

You may want to keep the last result in cache, to avoid downloading and extracting the file if it didn't change:

>>> from peakina.cache import Cache
>>> cache = Cache.get_cache('memory')  # in-memory cache
>>> df = pk.read_pandas('file.csv', expire=3600, cache=cache)

In this example, the resulting dataframe is fetched from the cache, unless the modification time of file.csv has changed on disk, or the cached entry is more than 1 hour old.

For persistent caching, use: cache = Cache.get_cache('hdf', cache_dir='/tmp')

Use only downloading feature

If you just want to download a file, without converting it to a pandas dataframe:

>>> uri = 'https://i.imgur.com/V9x88.jpg'
>>> f = pk.fetch(uri)
>>> f.get_str_mtime()
'2012-11-04T17:27:14Z'
>>> with f.open() as stream:
...     print('Image size:', len(stream.read()), 'bytes')
...
Image size: 60284 bytes
