
A Python library of data structures optimized for machine learning tasks


py4ai core



A Python library defining data structures optimized for machine learning pipelines

What is it?

py4ai-core is a modularly designed Python package that provides powerful abstractions for building data ingestion pipelines and running end-to-end machine learning pipelines. The library offers a lightweight object-oriented interface to MongoDB, as well as Pandas-based data structures. Its aim is to provide extensive support for developing machine learning applications, with a focus on clean code and modular design.

Features

Some cool features that we are proud to mention are:

Data layers

  1. Archiver: Offers an object-oriented design to perform ETL on MongoDB collections as well as Pandas DataFrames.
  2. DAO: Data Access Object that lets archivers serialize domain objects into the proper persistence-layer support object (e.g., for MongoDB, a DAO serializes a domain object into a MongoDB document) and parse objects retrieved from the given persistence layer into the correct representation in our framework (e.g., a text will be parsed into a Document, while tabular data will be parsed into a pandas DataFrame).
  3. Database: Object representing a relational database.
  4. Table: Object representing a table of a relational database.
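The Archiver/DAO split above can be sketched with a minimal round trip. The class names below mirror the concepts but are illustrative stand-ins, not the actual py4ai-core API:

```python
from uuid import uuid4

# Hypothetical domain object and DAO, sketching the Archiver/DAO split:
# the DAO owns the translation between domain objects and the
# persistence-layer representation.
class Document:
    def __init__(self, uuid, data):
        self.uuid = uuid
        self.data = data

class MongoDocumentDAO:
    """Translates between a domain Document and a MongoDB-style dict."""

    def get(self, doc):
        # domain object -> persistence-layer representation
        return {"_id": doc.uuid, **doc.data}

    def parse(self, record):
        # persistence-layer representation -> domain object
        data = {k: v for k, v in record.items() if k != "_id"}
        return Document(record["_id"], data)

dao = MongoDocumentDAO()
doc = Document(str(uuid4()), {"text": "hello world"})
record = dao.get(doc)         # dict ready for a Mongo collection insert
restored = dao.parse(record)  # parsed back into a domain object
assert restored.uuid == doc.uuid and restored.data == doc.data
```

Keeping this translation in one object is what lets an archiver stay agnostic about the storage backend.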

Data Model

Offers the following data structures:

  1. Document: Data structure specifically designed for NLP applications, parsing a JSON-like document into a pair of a UUID and a dictionary of information.
  2. Sample: Data structure representing a single observation (a.k.a. sample), as used in machine learning applications.
  3. MultiFeatureSample: Data structure representing an observation defined by a nested list of arrays.
  4. Dataset: Data structure representing a collection of samples, designed specifically for machine learning applications.
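The sample/dataset relationship above can be sketched in a few lines. These dataclasses are hypothetical stand-ins, not the py4ai-core Sample/Dataset classes:

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

# Illustrative stand-ins: an observation with features and an optional
# label, and a dataset that aggregates observations column-wise.
@dataclass
class Sample:
    features: List[float]
    label: Optional[Any] = None

@dataclass
class Dataset:
    samples: List[Sample] = field(default_factory=list)

    @property
    def features(self):
        return [s.features for s in self.samples]

    @property
    def labels(self):
        return [s.label for s in self.samples]

ds = Dataset([Sample([1.0, 2.0], label=0), Sample([3.0, 4.0], label=1)])
print(ds.features)  # [[1.0, 2.0], [3.0, 4.0]]
print(ds.labels)    # [0, 1]
```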

Installation

From PyPI

pip install py4ai-core

From source

git clone https://github.com/py4ai/py4ai-core
cd py4ai-core
make install

Tests

make tests

Checks

To run predefined checks (unit tests, linting checks, formatting checks and static typing checks):

make checks

Examples

Data Layers

Creating a Database of Table objects

import pandas as pd
from py4ai.core.data.layer.pandas.databases import Database

# sample df
df1 = pd.DataFrame([[1, 2, 3], [6, 5, 4]], columns=['a', 'b', 'c'])

# creating a database 
db = Database('/path/to/db')
table1 = db.table('df1')

# write table to path
table1.write(df1)
# get path  
print(table1.filename)

# convert to pandas dataframe 
table1.to_df()

# get table from database
db['df1']

Using an Archiver with Dao objects

from py4ai.core.data.layer.pandas.archivers import CsvArchiver
from py4ai.core.data.layer.pandas.dao import DataFrameDAO

# create a dao object 
dao = DataFrameDAO()

# create a csv archiver 
arch = CsvArchiver('/path/to/csvfile.csv', dao)

# get pandas dataframe 
print(arch.data.head())

# retrieve a single document object 
doc = next(arch.retrieve())
# retrieve a list of document objects 
docs = list(arch.retrieve())
# retrieve a document by its id
arch.retrieveById(doc.uuid)

# archive a single document 
doc = next(arch.retrieve())
# update column_name field of the document with the given value
doc.data.update({'column_name': 'VALUE'})
# archive the document 
arch.archiveOne(doc)
# archive list of documents
arch.archiveMany([doc, doc])

# get a document object as a pandas series 
arch.dao.get(doc)

Data Model

Creating a PandasDataset object

import pandas as pd
import numpy as np
from py4ai.core.data.model.ml import PandasDataset

dataset = PandasDataset(features=pd.concat([pd.Series([1, np.nan, 2, 3], name="feat1"),
                                            pd.Series([1, 2, 3, 4], name="feat2")], axis=1),
                        labels=pd.Series([0, 0, 0, 1], name="Label"))

# access features as a pandas dataframe 
print(dataset.features.head())
# access labels as a pandas dataframe
print(dataset.labels.head())
# access features as a python dictionary 
dataset.getFeaturesAs('dict')
# access features as numpy array 
dataset.getFeaturesAs('array')

# indexing operations 
# access features and labels at the given index as a pandas dataframe  
print(dataset.loc([2]).features.head())
print(dataset.loc([2]).labels.head())

Creating a PandasTimeIndexedDataset object

import pandas as pd
import numpy as np
from py4ai.core.data.model.ml import PandasTimeIndexedDataset

dateStr = [str(x) for x in pd.date_range('2010-01-01', '2010-01-04')]
dataset = PandasTimeIndexedDataset(
    features=pd.concat([
        pd.Series([1, np.nan, 2, 3], index=dateStr, name="feat1"),
        pd.Series([1, 2, 3, 4], index=dateStr, name="feat2")
    ], axis=1))
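Because the string dates are parsed into a pandas DatetimeIndex, standard pandas time slicing applies to the features. A plain-pandas sketch of the same frame, independent of py4ai-core:

```python
import numpy as np
import pandas as pd

# Build the same time-indexed frame directly with pandas to show the
# label-based time slicing a DatetimeIndex enables.
idx = pd.to_datetime(pd.date_range('2010-01-01', '2010-01-04'))
features = pd.concat([
    pd.Series([1, np.nan, 2, 3], index=idx, name="feat1"),
    pd.Series([1, 2, 3, 4], index=idx, name="feat2"),
], axis=1)

# select a date window by label
window = features.loc['2010-01-02':'2010-01-03']
print(window.shape)  # (2, 2)
```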

How to contribute?

We welcome any kind of contribution: bug reports, bug fixes, contributions to the existing codebase, or improvements to the documentation.

Where to start?

Please look at the GitHub issues tab to start working on open issues.

Contributing to py4ai-core

Please make sure the general guidelines for contributing to the codebase are respected:

  1. Fork the py4ai-core repository.
  2. Create/choose an issue to work on in the GitHub issues page.
  3. Create a new branch to work on the issue.
  4. Commit your changes and run the tests to make sure the changes do not break any test.
  5. Open a Pull Request on GitHub referencing the issue.
  6. Once the PR is approved, the maintainers will merge it into the main branch.

Project details


Download files

Download the file for your platform.

Source Distribution

py4ai-core-0.0.1.tar.gz (69.1 kB)

Uploaded Source

Built Distribution

py4ai_core-0.0.1-py3-none-any.whl (59.8 kB)

Uploaded Python 3

File details

Details for the file py4ai-core-0.0.1.tar.gz.

File metadata

  • Download URL: py4ai-core-0.0.1.tar.gz
  • Upload date:
  • Size: 69.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.14

File hashes

Hashes for py4ai-core-0.0.1.tar.gz

  • SHA256: 4054af38a74005b3c5b125f06d55c9fe2c112517d9dade1ea3c08b609ac57c98
  • MD5: 5e3a2e825b28e96c28031c6c8eda679c
  • BLAKE2b-256: 4f93e73ab01e77ed07f1a7ec84c6135e81dbf93a02103b8546855a59230f1416


File details

Details for the file py4ai_core-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: py4ai_core-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 59.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.14

File hashes

Hashes for py4ai_core-0.0.1-py3-none-any.whl

  • SHA256: e487f9ba57ac2cd7b197287c495afd034e596be7cd5c7adb66a97e7d5a2d4d38
  • MD5: a670b34fb4fabd865348b80a04b44896
  • BLAKE2b-256: 725b50f33ddadc660401b312d8d7d6542fcf0131cca61eab5f7dece9bf5769f9

