
pandas readers on steroids (remote files, glob patterns, cache, etc.)

Project description


Pea Kina aka 'Giant Panda'

A wrapper around the pandas library that detects the separator, encoding and type of a file. It lets you read a group of files matching a pattern (Python regex or glob), and it can read both local and remote files (HTTP/HTTPS, FTP/FTPS/SFTP or S3/S3N/S3A).

The supported file types are csv, excel, json, parquet and xml.

:information_source: If the type you need is not yet supported, feel free to open an issue or to directly open a PR with the code!

Please read the documentation for more information.
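
As a quick illustration, remote URIs go through the same call as local paths; the scheme and the file type are detected from the URI. The bucket and host below are made up:

>>> import peakina as pk
>>> # hypothetical URIs: the scheme (s3, https, ...) and the file type
>>> # (parquet, excel, ...) are detected automatically
>>> df_parquet = pk.read_pandas('s3://my-bucket/data.parquet')
>>> df_excel = pk.read_pandas('https://example.com/data.xlsx')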

Installation

pip install peakina

Usage

Consider a file file.csv:

a;b
0;0
0;1

Just type:

>>> import peakina as pk
>>> pk.read_pandas('file.csv')
   a  b
0  0  0
1  0  1
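
Extra keyword arguments such as dtype are passed on to the underlying pandas reader (the FTPS example below relies on this too). A minimal sketch on the same file.csv:

>>> # dtype is forwarded to pandas.read_csv, so column 'a' is kept as a string
>>> df = pk.read_pandas('file.csv', dtype={'a': 'str'})
>>> df.dtypes
a    object
b     int64
dtype: object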

Or, for files on an FTPS server:

  • my_data_2015.csv
  • my_data_2016.csv
  • my_data_2017.csv
  • my_data_2018.csv

you can just type:

>>> pk.read_pandas('ftps://<path>/my_data_\\d{4}\\.csv$', match='regex', dtype={'a': 'str'})
    a   b     __filename__
0  '0'  0  'my_data_2015.csv'
1  '0'  1  'my_data_2015.csv'
2  '1'  0  'my_data_2016.csv'
3  '1'  1  'my_data_2016.csv'
4  '3'  0  'my_data_2017.csv'
5  '3'  1  'my_data_2017.csv'
6  '4'  0  'my_data_2018.csv'
7  '4'  1  'my_data_2018.csv'
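
The match parameter also accepts glob patterns; this sketch assumes match='glob' is the value to use and targets the same files as the regex above:

>>> pk.read_pandas('ftps://<path>/my_data_*.csv', match='glob', dtype={'a': 'str'})

The result is the same dataframe, with one __filename__ value per matched file.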

Using cache

You may want to keep the last result in a cache, to avoid downloading and extracting the file again if it hasn't changed:

>>> from peakina.cache import Cache
>>> cache = Cache.get_cache('memory')  # in-memory cache
>>> df = pk.read_pandas('file.csv', expire=3600, cache=cache)

In this example, the resulting dataframe will be fetched from the cache, unless the modification time of file.csv has changed on disk, or unless the cached entry is older than 1 hour (expire is in seconds).

For persistent caching, use: cache = Cache.get_cache('hdf', cache_dir='/tmp')
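
Putting the two lines together, a sketch of persistent caching (assuming /tmp is writable):

>>> from peakina.cache import Cache
>>> cache = Cache.get_cache('hdf', cache_dir='/tmp')  # on-disk cache, survives across runs
>>> df = pk.read_pandas('file.csv', expire=3600, cache=cache)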

Use only downloading feature

If you just want to download a file without converting it to a pandas dataframe:

>>> uri = 'https://i.imgur.com/V9x88.jpg'
>>> f = pk.fetch(uri)
>>> f.get_str_mtime()
'2012-11-04T17:27:14Z'
>>> with f.open() as stream:
...     print('Image size:', len(stream.read()), 'bytes')
...
Image size: 60284 bytes
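
To keep the downloaded file, write the stream to disk with the standard library (building on the example above; the local file name is an arbitrary choice):

>>> from pathlib import Path
>>> with f.open() as stream:
...     print('Wrote', Path('V9x88.jpg').write_bytes(stream.read()), 'bytes')
...
Wrote 60284 bytes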

Download files


Source Distribution

  • peakina-0.19.4.tar.gz (27.9 kB)

Built Distribution


  • peakina-0.19.4-py3-none-any.whl (27.1 kB)

File details

Details for the file peakina-0.19.4.tar.gz.

File metadata

  • Download URL: peakina-0.19.4.tar.gz
  • Upload date:
  • Size: 27.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.12

File hashes

Hashes for peakina-0.19.4.tar.gz

  • SHA256: c4b6f388499e7035b627b9c42874ad362af0831fa9cd9062864fff864bb53e3d
  • MD5: dd010b4926600274b037f588b9673c74
  • BLAKE2b-256: e5b70aaf84807cb6e4444510290d0df320a8d1e5bc9959feba605a1c2edb4046
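
To check a downloaded archive against the SHA256 digest above, the standard hashlib module is enough (this assumes the archive sits in the current directory):

>>> import hashlib
>>> sha256 = hashlib.sha256(open('peakina-0.19.4.tar.gz', 'rb').read()).hexdigest()
>>> sha256 == 'c4b6f388499e7035b627b9c42874ad362af0831fa9cd9062864fff864bb53e3d'
True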


File details

Details for the file peakina-0.19.4-py3-none-any.whl.

File metadata

  • Download URL: peakina-0.19.4-py3-none-any.whl
  • Upload date:
  • Size: 27.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.12

File hashes

Hashes for peakina-0.19.4-py3-none-any.whl

  • SHA256: d7bcfe1acb43a338dbb8996e618d4c27d605c97fd5a33514e7cdf7f644058914
  • MD5: 62d60629b1418b5931abafcc78ab7cc6
  • BLAKE2b-256: bb71725482b05b0d6094f6a992a0e4553fff0aa20515d724174ff73d1c0b4916

