
Project description

cheutils

A set of basic reusable utilities and tools to facilitate quickly getting up and going on any machine learning project.

Features

  • model_options: methods such as get_estimator to get a handle on a configured estimator with a specified parameter dictionary or get_default_grid to get the configured hyperparameter grid
  • model_builder: methods for building and executing ML pipeline steps, e.g., params_optimization.
  • project_tree: methods for accessing the project tree - e.g., get_data_dir() for accessing the configured data folder and get_output_dir() for the output folder - as well as loading and saving Excel and CSV files.
  • common_utils: methods to support common programming tasks, such as labeling (e.g., label(file_name, label='some_label')) or tagging and date-stamping files (e.g., datestamp(file_name, fmt='%Y-%m-%d')); see the sketch after this list.
  • propertiesutil: utility for managing properties files or project configuration, based on jproperties. The application configuration is expected to be available in a file named app-config.properties, which can be placed in the project root or any subfolder thereof.
  • decorator_debug, decorator_timer, and decorator_singleton: decorators for enabling debug logging and method timing, as well as a singleton decorator.
  • datasource_utils: utility for managing the datasource configuration or properties file (ds-config.properties), offering a set of generic datasource access methods.
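
For instance, a minimal sketch of the common_utils file-naming helpers follows; it assumes label() and datestamp() are importable from the top-level cheutils namespace like the other utilities in this guide, and the return values suggested in the comments are illustrative guesses rather than documented behavior:

import cheutils

# Assumed top-level imports; the return values in the comments below are illustrative only.
labeled_file = cheutils.label('results.csv', label='baseline')    # e.g., something like 'results_baseline.csv'
stamped_file = cheutils.datestamp('results.csv', fmt='%Y-%m-%d')  # e.g., something like 'results_2024-11-01.csv'
print(labeled_file, stamped_file)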

Usage

You import the cheutils module as usual:

import cheutils

The following provides access to the properties file, usually expected to be named "app-config.properties" and typically found in the project data folder, although it can be placed anywhere in the project root or any other subfolder:

APP_PROPS = cheutils.AppProperties() # to load the app-config.properties file

Thereafter, you can read any properties using various methods such as:

DATA_DIR = APP_PROPS.get('project.data.dir')

You can also retrieve the path to the data folder, which is under the project root as follows:

cheutils.get_data_dir()  # returns the path to the project data folder, which is always interpreted relative to the project root

You can retrieve other properties as follows:

VALUES_LIST = APP_PROPS.get_list('some.configured.list') # e.g., some.configured.list=[1, 2, 3] or ['1', '2', '3']
VALUES_DIC = APP_PROPS.get_dic_properties('some.configured.dict') # e.g., some.configured.dict={'val1': 10, 'val2': 'value'}
BOL_VAL = APP_PROPS.get_bol('some.configured.bol') # e.g., some.configured.bol=True
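
For reference, the property entries used in the snippets above could sit together in app-config.properties roughly as follows (the values, including the data directory, are illustrative only):

project.data.dir=./data
some.configured.list=[1, 2, 3]
some.configured.dict={'val1': 10, 'val2': 'value'}
some.configured.bol=True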

You also have access to the LOGGER - you can simply call LOGGER.debug() in a similar way to how you would with loguru or standard logging. Calling set_prefix() on the LOGGER instance ensures that log messages are scoped to that context thereafter, which can be helpful when reviewing the generated log file (app-log.log); the default prefix is "app-log".

You can get a handle to an application logger as follows:

LOGGER = cheutils.LOGGER.get_logger()

You can set the logger prefix as follows:

LOGGER.set_prefix(prefix='my_project')
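
Thereafter, logging calls behave as you would expect; for example:

LOGGER.debug('Starting data preparation') # the message is written to app-log.log, scoped by the configured prefix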

The model_options module currently supports the following estimators: Lasso, LinearRegression, Ridge, GradientBoostingRegressor, XGBRegressor, LGBMRegressor, DecisionTreeRegressor, and RandomForestRegressor. You can configure any of these models for your project with an entry in app-config.properties as follows:

model.active.model_option=xgb_boost # with default parameters

You can get a handle to the corresponding estimator as follows:

estimator = cheutils.get_estimator(model_option='xgb_boost')

You can also configure a hyperparameter grid for the chosen estimator, for example:

model.param_grids.xgb_boost={'learning_rate': {'type': float, 'start': 0.0, 'end': 1.0, 'num': 10}, 'subsample': {'type': float, 'start': 0.0, 'end': 1.0, 'num': 10}, 'min_child_weight': {'type': float, 'start': 0.1, 'end': 1.0, 'num': 10}, 'n_estimators': {'type': int, 'start': 10, 'end': 400, 'num': 10}, 'max_depth': {'type': int, 'start': 3, 'end': 17, 'num': 5}, 'colsample_bytree': {'type': float, 'start': 0.0, 'end': 1.0, 'num': 5}, 'gamma': {'type': float, 'start': 0.0, 'end': 1.0, 'num': 5}, 'reg_alpha': {'type': float, 'start': 0.0, 'end': 1.0, 'num': 5}, }
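
Each entry in the grid appears to describe a range of candidate values via 'start', 'end', and 'num'. Purely as an illustration (this expansion is an assumption, not necessarily how the library builds its grid internally), such a spec could expand to evenly spaced candidates along these lines:

import numpy as np

# Hypothetical expansion of the 'learning_rate' entry above, assuming evenly spaced candidates.
spec = {'type': float, 'start': 0.0, 'end': 1.0, 'num': 10}
candidates = np.linspace(spec['start'], spec['end'], spec['num'])
print(candidates)  # 10 candidate values between 0.0 and 1.0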

Thereafter, you can do the following:

estimator = cheutils.get_estimator(**get_params(model_option='xgb_boost'))

Thereafter, you can simply fit the model as usual:

estimator.fit(X_train, y_train)

Given a default model parameter configuration (usually in the properties file), you can generate a promising parameter grid using a random search, as in the line below. Note that the pipeline can be either an sklearn pipeline or an estimator. The general idea is that, rather than trying to figure out the optimal set of hyperparameter values for a given estimator by hand, you can do it automatically by adopting a two-step coarse-to-fine search: first configure a broad hyperparameter space or grid based on the estimator's most important or impactful hyperparameters, then use a random search to find a set of promising hyperparameters, which can serve as the basis for a finer hyperparameter search using other algorithms such as Bayesian optimization (e.g., hyperopt or Scikit-Optimize).

promising_grid = cheutils.promising_params_grid(pipeline, X_train, y_train, grid_resolution=3, prefix='model_prefix')

You can run hyperparameter optimization or tuning as follows, assuming you have enabled cross-validation in your configuration or app-config.properties (e.g., with an entry such as model.cross_val.num_folds=3) and are using hyperopt; if you are running MLflow experiments and logging, you can also pass an optional mlflow_log=True in the optimization call:

best_estimator, best_score, best_params, cv_results = cheutils.params_optimization(pipeline, X_train, y_train, promising_params_grid=promising_grid, with_narrower_grid=True, fine_search='hyperoptcv', prefix='model_prefix')
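
Putting the two steps together, here is a rough end-to-end sketch of the coarse-to-fine flow described above (the training data is synthetic, the prefix is a placeholder, and an appropriately configured app-config.properties with an xgb_boost grid is assumed):

import numpy as np
import cheutils

# Placeholder training data for illustration only.
X_train = np.random.rand(200, 5)
y_train = np.random.rand(200)

estimator = cheutils.get_estimator(model_option='xgb_boost')
# Step 1: coarse random search over the broad configured grid.
promising_grid = cheutils.promising_params_grid(estimator, X_train, y_train, grid_resolution=3, prefix='model_prefix')
# Step 2: finer search (here via hyperopt) around the promising grid.
best_estimator, best_score, best_params, cv_results = cheutils.params_optimization(estimator, X_train, y_train, promising_params_grid=promising_grid, with_narrower_grid=True, fine_search='hyperoptcv', prefix='model_prefix')
# The tuned estimator can then be refit on the full training data as usual.
best_estimator.fit(X_train, y_train)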

You can get a handle to the datasource wrapper as follows:

ds = DSWrapper() # it is a singleton

You can then read a large CSV file, leveraging dask as follows:

data_df = ds.read_large_csv(path_to_data_file=os.path.join(get_data_dir(), 'some_file.csv'))

Assuming you previously defined a datasource configuration in ds-config.properties, containing something like:

project.ds.supported={'mysql_local': {'db_driver': 'MySQL ODBC 8.1 ANSI Driver', 'drivername': 'mysql+pyodbc', 'db_server': 'localhost', 'db_port': 3306, 'db_name': 'test_db', 'username': 'test_user', 'password': 'test_password', 'direct_conn': 0, 'timeout': 0, 'verbose': True}, }

you could read from a configured datasource as follows:

ds_config = {'db_key': 'mysql_local', 'ds_namespace': 'test', 'db_table': 'some_table', 'data_file': None}
data_df = ds.read_from_datasource(ds_config=ds_config, chunksize=5000)

Note that if you call read_from_datasource() with data_file set in the ds_config to either an Excel or CSV file, it is equivalent to reading that CSV or Excel file directly.
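
As a rough sketch of that case (whether data_file is resolved relative to the data folder, and how db_table is treated when reading from a file, are assumptions here):

ds_config = {'db_key': 'mysql_local', 'ds_namespace': 'test', 'db_table': None, 'data_file': 'some_file.csv'} # data_file assumed to name a CSV in the data folder
data_df = ds.read_from_datasource(ds_config=ds_config)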

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cheutils-2.2.47.tar.gz (45.1 kB)

Uploaded Source

Built Distribution

cheutils-2.2.47-py3-none-any.whl (48.9 kB)

Uploaded Python 3

File details

Details for the file cheutils-2.2.47.tar.gz.

File metadata

  • Download URL: cheutils-2.2.47.tar.gz
  • Upload date:
  • Size: 45.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.7

File hashes

Hashes for cheutils-2.2.47.tar.gz
  • SHA256: 9f9bde5bcfc3e580f097163430bf1ab4faea025ed6eebbcc8e074f2c1a88bcb5
  • MD5: 5dbe215140bc4ba974a87b87820edc2b
  • BLAKE2b-256: 0c368270af5919a6ac70cb511e7ff58572b6c1f6c3f3163fec0ab67f9eb49aad

See more details on using hashes here.

File details

Details for the file cheutils-2.2.47-py3-none-any.whl.

File metadata

  • Download URL: cheutils-2.2.47-py3-none-any.whl
  • Upload date:
  • Size: 48.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.7

File hashes

Hashes for cheutils-2.2.47-py3-none-any.whl
  • SHA256: 2b87d7a9512b0a70e2e237698cfdff8bc694267de1afc4c239e63ecd5831217c
  • MD5: 692367f498f38511137c35428e5cd021
  • BLAKE2b-256: 6b635aa8c5f1fa3d68c32df3176eaa19c5b8bfc5b1c7254f821919e2d75e24c6

See more details on using hashes here.
