Kedro-Datasets is where you can find all of Kedro's data connectors.
# Kedro-Datasets

Welcome to `kedro_datasets`, the home of Kedro's data connectors. Here you will find `AbstractDataset` implementations powering Kedro's `DataCatalog`, created by QuantumBlack and external contributors.
## Installation

`kedro-datasets` is a Python plugin. To install it, run:

```bash
pip install kedro-datasets
```
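To confirm the plugin is available, a minimal sketch (assuming the package exposes a `__version__` attribute, as most Kedro plugins do):

```python
import kedro_datasets

# Print the installed kedro-datasets version
print(kedro_datasets.__version__)
```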
### Install dependencies at a group level

Datasets are organised into groups, e.g. `pandas`, `spark` and `pickle`. Each group contains a collection of datasets, e.g. `pandas.CSVDataset`, `pandas.ParquetDataset` and more. You can install the dependencies for an entire group of datasets as follows:

```bash
pip install "kedro-datasets[<group>]"
```

This installs Kedro-Datasets and the dependencies related to that dataset group. For example, if your workflow depends on the data types in `pandas`, run:

```bash
pip install "kedro-datasets[pandas]"
```

to install Kedro-Datasets and the dependencies for the datasets in the `pandas` group.
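Once the group dependencies are installed, the datasets in that group can be imported and used directly. A minimal sketch, assuming a local CSV file exists at the illustrative path below:

```python
from kedro_datasets.pandas import CSVDataset

# Load a CSV file into a pandas DataFrame (the file path is illustrative)
dataset = CSVDataset(filepath="data/companies.csv")
df = dataset.load()

# Save a subset back to a new file
CSVDataset(filepath="data/companies_sample.csv").save(df.head(10))
```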
### Install dependencies at a type level

To limit installation to the dependencies specific to a single dataset:

```bash
pip install "kedro-datasets[<group>-<dataset>]"
```

For example, if your workflow requires `pandas.ExcelDataset`, run:

```bash
pip install "kedro-datasets[pandas-exceldataset]"
```

to install its dependencies.
From `kedro-datasets` version 3.0.0 onwards, the names of the optional dataset-level dependencies have been normalised to follow [PEP 685](https://peps.python.org/pep-0685/). The '.' character has been replaced with a '-' character and the names are in lowercase. For example, if you had `kedro-datasets[pandas.ExcelDataset]` in your requirements file, it would have to be changed to `kedro-datasets[pandas-exceldataset]`.
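With the dataset-level dependencies installed, the dataset can be used on its own. A minimal sketch; the file path and sheet name are illustrative:

```python
from kedro_datasets.pandas import ExcelDataset

# Read a single worksheet into a pandas DataFrame
# (the file path and sheet name are illustrative)
dataset = ExcelDataset(
    filepath="data/orders.xlsx",
    load_args={"sheet_name": "orders"},
)
df = dataset.load()
```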
## What `AbstractDataset` implementations are supported?

We support a range of data connectors, including CSV, Excel, Parquet, Feather, HDF5, JSON, Pickle, SQL Tables, SQL Queries, Spark DataFrames and more. We also support working with images.

These data connectors are implemented with the APIs of `pandas`, `spark`, `networkx`, `matplotlib`, `yaml` and more.

The Data Catalog allows you to work with a range of file formats on local file systems, network file systems, cloud object stores, and Hadoop.

Here is a full list of supported data connectors and APIs.
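As an illustration of the Data Catalog working across storage backends, the sketch below registers a dataset programmatically against an S3 path. The catalog entry name, bucket and file names are assumptions for the example, and reading from S3 also requires the `s3fs` dependency:

```python
from kedro.io import DataCatalog
from kedro_datasets.pandas import ParquetDataset

# Register a dataset stored in an object store (bucket and file names are
# illustrative; reading from S3 requires the s3fs optional dependency)
catalog = DataCatalog(
    {
        "shuttles": ParquetDataset(filepath="s3://my-bucket/shuttles.parquet"),
    }
)

df = catalog.load("shuttles")
```

The same dataset types can equally be declared in a project's `catalog.yml`; the programmatic form is used here only to keep the example self-contained.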
## How can I create my own `AbstractDataset` implementation?

Take a look at our instructions on how to create your own `AbstractDataset` implementation.
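As a rough idea of what an implementation involves, here is a minimal sketch of a custom dataset that reads and writes a local CSV file with pandas. The class name and file handling are illustrative; the official instructions also cover filesystem abstraction, versioning and credentials:

```python
from pathlib import Path

import pandas as pd
from kedro.io import AbstractDataset


class SimpleCSVDataset(AbstractDataset[pd.DataFrame, pd.DataFrame]):
    """Illustrative dataset that loads and saves a local CSV file with pandas."""

    def __init__(self, filepath: str):
        self._filepath = Path(filepath)

    def _load(self) -> pd.DataFrame:
        # Read the CSV file into a DataFrame
        return pd.read_csv(self._filepath)

    def _save(self, data: pd.DataFrame) -> None:
        # Ensure the parent directory exists, then write the DataFrame
        self._filepath.parent.mkdir(parents=True, exist_ok=True)
        data.to_csv(self._filepath, index=False)

    def _describe(self) -> dict:
        # Describe the dataset for logging and catalog printing
        return {"filepath": str(self._filepath)}
```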
## Can I contribute?
Yes! Want to help build Kedro-Datasets? Check out our guide to contributing.
## What licence do you use?
Kedro-Datasets is licensed under the Apache 2.0 License.
## Python version support policy
- The Kedro-Datasets package follows the NEP 29 Python version support policy.
## Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
- Source Distribution: `kedro_datasets-4.0.0.tar.gz`
- Built Distribution: `kedro_datasets-4.0.0-py3-none-any.whl`
## File details

Details for the file `kedro_datasets-4.0.0.tar.gz`.
File metadata
- Download URL: kedro_datasets-4.0.0.tar.gz
- Upload date:
- Size: 99.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | c83cda68f0c84b0d706d1cf333e9b220cb6dc20f64cb83d5defb33ca25df3386
MD5 | 801070a20d5f1724b94de2c8f3761c6c
BLAKE2b-256 | 8a190bbd054f837ec2a693c804f052df925089ee5dcd7d659f59069ab05fea5d
## File details

Details for the file `kedro_datasets-4.0.0-py3-none-any.whl`.
File metadata
- Download URL: kedro_datasets-4.0.0-py3-none-any.whl
- Upload date:
- Size: 174.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | e6aed0adf1534ddfaea328baacd9c6531682712881e5377ca5aa36cfc0738ee9
MD5 | 0ba9d8373772be9ea627ef9b21a26e7c
BLAKE2b-256 | 3fd535d28037372415349ea451dca4bf4b6b547d6e35cffb12d5c01913ecd466