keras-pandas
Easy and rapid deep learning
tl;dr: keras-pandas allows users to rapidly build and iterate on deep learning models.
Getting data formatted and into Keras can be tedious, time-consuming, and difficult, whether you're a veteran or new to Keras. keras-pandas overcomes these issues by automatically providing:
- A cleaned, transformed, and correctly formatted X and y (good for Keras, sklearn, or any other ML platform)
- An 'input nub', without the hassle of worrying about input shapes or data types
- An output layer, correctly formatted for the kind of response variable provided
With these resources, it's possible to rapidly build and iterate on deep learning models, and focus on the parts of modeling that you enjoy!
For more info, check out the quick start below.
Quick Start
Let's build a model with the Lending Club data set. This data set is particularly fun because it contains a mix of text, categorical, and numerical data types, and features a lot of null values.
from keras import Model
from keras_pandas import lib
from keras_pandas.Automater import Automater
from sklearn.model_selection import train_test_split
# Load data
observations = lib.load_lending_club()
# Train /test split
train_observations, test_observations = train_test_split(observations)
train_observations = train_observations.copy()
test_observations = test_observations.copy()
# List out variable types
data_type_dict = {'numerical': ['loan_amnt', 'annual_inc', 'open_acc', 'dti', 'delinq_2yrs',
'inq_last_6mths', 'mths_since_last_delinq', 'pub_rec', 'revol_bal',
'revol_util',
'total_acc', 'pub_rec_bankruptcies'],
'categorical': ['term', 'grade', 'emp_length', 'home_ownership', 'loan_status', 'addr_state',
'application_type', 'disbursement_method'],
'text': ['desc', 'purpose', 'title']}
output_var = 'loan_status'
# Create and fit Automater
auto = Automater(data_type_dict=data_type_dict, output_var=output_var)
auto.fit(train_observations)
# Transform data
train_X, train_y = auto.transform(train_observations)
test_X, test_y = auto.transform(test_observations)
# Create and compile keras (deep learning) model
x = auto.input_nub
x = auto.output_nub(x)
model = Model(inputs=auto.input_layers, outputs=x)
model.compile(optimizer='adam', loss=auto.suggest_loss())
And that's it! In a couple of lines, we've created a model that accepts a few dozen variables, and laid the groundwork for a world-class deep learning model.
Usage
Installation
You can install keras-pandas with pip:
pip install -U keras-pandas
Creating an Automater
The Automater object is the central object in keras-pandas. It accepts a dictionary of the format {'datatype': ['var1', 'var2']}.
For example, we could create an Automater using the built-in numerical, categorical, and text datatypes by calling:
# List out variable types
data_type_dict = {'numerical': ['loan_amnt', 'annual_inc', 'open_acc', 'dti', 'delinq_2yrs',
'inq_last_6mths', 'mths_since_last_delinq', 'pub_rec', 'revol_bal',
'revol_util',
'total_acc', 'pub_rec_bankruptcies'],
'categorical': ['term', 'grade', 'emp_length', 'home_ownership', 'loan_status', 'addr_state',
'application_type', 'disbursement_method'],
'text': ['desc', 'purpose', 'title']}
output_var = 'loan_status'
# Create and fit Automater
auto = Automater(data_type_dict=data_type_dict, output_var=output_var)
As a side note, the response variable must be in one of the variable type lists (e.g. loan_status is in the categorical list).
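This constraint is easy to check up front. Here's a minimal sketch; this helper is hypothetical and not part of keras-pandas:

```python
# Hypothetical helper (not part of keras-pandas) illustrating the rule that
# output_var must appear in one of the variable type lists.
def check_output_var(data_type_dict, output_var):
    all_vars = [var for var_list in data_type_dict.values() for var in var_list]
    if output_var is not None and output_var not in all_vars:
        raise ValueError(
            'output_var {} is not in any variable type list'.format(output_var))

# loan_status is listed under 'categorical', so this passes silently
check_output_var({'categorical': ['term', 'loan_status']}, 'loan_status')
```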
One variable type
If you only have one variable type, only use one variable type!
# List out variable types
data_type_dict = {'categorical': ['term', 'grade', 'emp_length', 'home_ownership', 'loan_status', 'addr_state',
'application_type', 'disbursement_method']}
output_var = 'loan_status'
# Create and fit Automater
auto = Automater(data_type_dict=data_type_dict, output_var=output_var)
Multiple variable types
If you have multiple variable types, feel free to use all of them! Built-in datatypes are listed in Automater.datatype_handlers.
# List out variable types
data_type_dict = {'numerical': ['loan_amnt', 'annual_inc', 'open_acc', 'dti', 'delinq_2yrs',
'inq_last_6mths', 'mths_since_last_delinq', 'pub_rec', 'revol_bal',
'revol_util',
'total_acc', 'pub_rec_bankruptcies'],
'categorical': ['term', 'grade', 'emp_length', 'home_ownership', 'loan_status', 'addr_state',
'application_type', 'disbursement_method'],
'text': ['desc', 'purpose', 'title']}
output_var = 'loan_status'
# Create and fit Automater
auto = Automater(data_type_dict=data_type_dict, output_var=output_var)
Custom datatypes
If there's a specific datatype you'd like to use that's not built in (such as images, videos, or geospatial), you can include it by using Automater's datatype_handlers parameter.
A template datatype can be found in keras_pandas/data_types/Abstract.py. Filling out this template will yield a new datatype handler. If you're happy with your work and want to share your new datatype handler, create a PR (and check out contributing.md).
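As a rough sketch of what filling out that template involves (the method names and attributes below are illustrative assumptions; the real interface lives in keras_pandas/data_types/Abstract.py):

```python
# Hypothetical skeleton of a custom datatype handler. The method names and
# attributes here are assumptions for illustration; follow
# keras_pandas/data_types/Abstract.py for the actual interface.
class GeospatialDataType:
    def __init__(self):
        # Assumption: a flag for whether this type can serve as an output_var
        self.supports_output = False

    def fit(self, values):
        # Learn any transformation parameters (e.g. scaling) from training data
        return self

    def transform(self, values):
        # Return data formatted for a Keras input layer; identity in this sketch
        return values
```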
No output_var
If your model doesn't need a response variable, or your use case doesn't use keras-pandas's output functionality, you can skip the output_var by setting it to None:
# List out variable types
data_type_dict = {'categorical': ['term', 'grade', 'emp_length', 'home_ownership', 'loan_status', 'addr_state',
'application_type', 'disbursement_method']}
output_var = None
# Create and fit Automater
auto = Automater(data_type_dict=data_type_dict, output_var=output_var)
Fitting the Automater
Before use, the Automater must be fit. The fit() method accepts a pandas DataFrame, which must contain all of the columns listed during initialization.
auto.fit(observations)
Transforming data
Now, we can use our Automater
to transform the dataset, from a pandas DataFrame to numpy objects properly formatted
for Keras's input and output layers.
X, y = auto.transform(observations, df_out=False)
This will return two objects:
- X: A list containing one numpy object for each Keras input. There is generally one Keras input for each user input variable.
- y: A numpy object, containing the response variable (if one was provided)
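As a toy illustration of how these containers are laid out (plain Python lists standing in for numpy objects, with made-up values):

```python
# Toy sketch of the layout returned by auto.transform(): X holds one entry
# per input variable, y holds the response. All values here are made up.
observations = [
    {'loan_amnt': 1000.0, 'term': '36 months', 'loan_status': 'Fully Paid'},
    {'loan_amnt': 2500.0, 'term': '60 months', 'loan_status': 'Charged Off'},
]
input_vars = ['loan_amnt', 'term']

X = [[row[var] for row in observations] for var in input_vars]  # one entry per input
y = [row['loan_status'] for row in observations]                # response variable
```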
Using input / output nubs
Setting up correctly formatted, heuristically 'good' input and output layers is often
- Tedious
- Time consuming
- Difficult for those new to Keras
With this in mind, keras-pandas
provides correctly formatted input and output 'nubs'.
The input nub is correctly formatted to accept the output from auto.transform(). It contains one Keras Input layer for each generated input, may contain additional layers, and has all input pipelines joined with a Concatenate layer.
The output layer is correctly formatted to accept the response variable numpy object.
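Conceptually, the input nub joins the per-variable pipelines into a single representation. Here's a Keras-free stdlib sketch of that concatenation step (the variable names and feature values are made up):

```python
# Conceptual sketch of the Concatenate step inside the input nub: each
# variable's pipeline emits some features, which are joined end to end.
per_variable_features = {
    'loan_amnt': [0.12],   # e.g. a scaled numerical value
    'term': [1.0, 0.0],    # e.g. a one-hot encoded categorical
}
joined = [feature
          for name in sorted(per_variable_features)
          for feature in per_variable_features[name]]
```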
Contact
Hey, I'm Brendan Herger, available at https://www.hergertarian.com/. Please feel free to reach out to me at 13herger <at> gmail <dot> com.
I enjoy bridging the gap between data science and engineering, to build and deploy data products. I'm not currently pursuing contract work.
I've enjoyed building a unique combination of machine learning, deep learning, and software engineering skills. In my previous work at Capital One and startups, I've built authorization fraud, insider threat, and legal discovery automation platforms. In each of these cases I've led a team of data scientists and data engineers to enable and elevate our clients' business workflows (and capture some amazing data).
When I'm not knee deep in a code base, I can be found traveling, sharing my collection of Japanese teas, and playing board games with my partner in Seattle.
Changelog
- PR title (#PR number, or #Issue if no PR)
- There's nothing here! (yet)
Development
- There's nothing here! (yet)
3.1.0
- Add boolean datatype (#104)
- Added Contributing.md section for new datatypes (#101)
- Added datatypes to docs in index.rst (#101)
- Modified documentation to automatically generate API docs (#101)
3.0.1
- Changing CI to Circleci (#100)
- Adding datatypes to CONTRIBUTING.md, adding CONTRIBUTING.md to docs (#96)
- Adding docs badge (#95)
- Adding support for unusual variable names / format keras names to be valid in name scope (#92)
- Adding examples (#93)
- Upgraded the requests library to requests==2.20.1, based on a security concern (#94)
3.0.0
Brand new release, with:
Added
- New Datatype interface, with easier-to-understand pipelines for each datatype
  - All existing datatypes (Numerical, Categorical, Text & TimeSeries) re-implemented in this new format
- Support for custom data types generated by users
- Duck-typing helper method (keras_pandas/lib.check_valid_datatype()) to confirm that a datatype has a valid signature
- New testing, streamlined and standardized
- Support for transforming unseen categorical levels, via the UNK token (experimental)
Modified
- Updated Automater interface, which accepts a dictionary of data types
- Heavily updated README
- More consistent logging and data formatting for sample data sets
Removed
- Removed examples, will be re-implemented in a future release
- All existing unittests
- Bulk of the new datatypes material in contributing.md, will be re-added in a future release
2.2.0
- Add timeseries support (#78)
- Add timeseries examples (#79)
2.1.0
- Boolean support deprecated. Boolean (bool) data type can be treated as a special case of categorical data types
2.0.2
- Remove a lot of the unnecessary dependencies (#75)
- Update dependencies to contemporary versions (#74)
2.0.1
- Fix issue w/ PyPi conflict
2.0.0
- Adding CI/CD and PyPi links, and updating contact section w/ about the author (#70)
- Major rewrite / update of examples (#72)
- Fixed a bug in the embedding transformer. Embeddings will now be at least length 1.
- Added functionality to check if resp_var is in the list of user-provided variables
- Added better null filling w/ CategoricalImputer
- Added filling unseen values w/ CategoricalImputer
- Converted default transformer pipeline to use copy.deepcopy instead of copy.copy. This was a hotfix for a previously unknown issue.
- Standardized setting logging level, only in test base class and examples (when __main__)
1.3.5
- Adding regression example w/ inverse_transformation (#64)
- Fixing issue where web socket connections were being opened needlessly (#65)
1.3.4
- Adding Manifest.in, including files referenced in setup.py (#54)
1.3.2
- Fixed poorly written text embedding index unit test (#52)
- Added license (#49)
Earlier
- Lots of things happened. Break things and move fast