
A Python library for industrializing AI projects: access and clean data, build pipelines, run experiments, and publish models as microservices

Project description


DXC Industrialized AI Starter

DXC Industrialized AI Starter makes it easy for you to deploy your AI algorithms (Industrialize). If you are a data scientist working on an algorithm that you would like to deploy across the enterprise, DXC's Industrialized AI Starter makes it easier for you to:

  • Access, clean, and explore raw data
  • Build data pipelines
  • Run AI experiments
  • Publish microservices

Installation

To install and use the DXC AI Starter library, first install the package:

pip install DXC-Industrialized-AI-Starter

Then import the library in Python:

from dxc import ai

Getting Started

Access, Clean, and Explore Raw Data

Use the library to access, clean, and explore your raw data.

#Access raw data
df = ai.read_data_frame_from_remote_json(json_url)
df = ai.read_data_frame_from_remote_csv(csv_url)
df = ai.read_data_frame_from_local_json()
df = ai.read_data_frame_from_local_csv()
df = ai.read_data_frame_from_local_excel_file()

#Clean data: Imputes missing data, removes empty rows and columns, anonymizes text.
raw_data = ai.clean_dataframe(df)

#Explore complete data as an interactive HTML report
report = ai.explore_complete_data(df)
report.to_notebook_iframe()

#Explore raw data:
ai.visualize_missing_data(raw_data) #creates a visual display of missing data.
ai.explore_features(raw_data) #visualizes relationships between all features in data.
ai.plot_distributions(raw_data) #creates a distribution graph for each column.
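
Putting these steps together, the following is a minimal end-to-end sketch; the CSV URL is a hypothetical placeholder for your own data.

from dxc import ai

#Hypothetical URL: substitute the location of your own CSV file
csv_url = "https://example.com/data/fleet_requests.csv"

#Access, clean, and explore in sequence
df = ai.read_data_frame_from_remote_csv(csv_url)
raw_data = ai.clean_dataframe(df)
report = ai.explore_complete_data(raw_data)
report.to_notebook_iframe()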

Click here for details about accessing, cleaning, and exploring raw data.

Build Data Pipelines

Pipelines are a standard way to process your data toward modeling and interpretation. By default, the DXC AI Starter library uses the free tier of MongoDB Atlas to store raw data and execute pipelines. To get started, you first need a MongoDB Atlas account (you can sign up for free), a database, and its connection string; specify those details in the data_layer dictionary below. The following code connects to MongoDB and stores raw data for processing.

#Insert data into MongoDB:
data_layer = {
    "connection_string": "<your connection_string>",
    "collection_name": "<your collection_name>",
    "database_name": "<your database_name>",
    "data_source":"<Source of your datset>",
    "cleaner":"<whether applied cleaner yes/no >"
}
wrt_raw_data = ai.write_raw_data(data_layer, raw_data, date_fields = [])
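
For illustration, a filled-in data_layer might look like the sketch below. All values are hypothetical; the connection string follows the standard MongoDB Atlas SRV format.

#All values below are hypothetical examples
data_layer = {
    "connection_string": "mongodb+srv://user:password@cluster0.abcde.mongodb.net/",
    "collection_name": "fleet_requests_raw",
    "database_name": "ai_starter",
    "data_source": "city fleet open data portal",
    "cleaner": "yes"
}
wrt_raw_data = ai.write_raw_data(data_layer, raw_data, date_fields = [])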

Once raw data is stored, you can run pipelines to transform the data. This code instructs the data store on how to refine the raw data into something that can be used to train a machine-learning model. Please refer to the MongoDB aggregation pipeline syntax for details on how to write a pipeline. Below is an example of creating and executing a pipeline.

pipeline = [
        {
            '$group':{
                '_id': {
                    "funding_source":"$funding_source",
                    "request_type":"$request_type",
                    "department_name":"$department_name",
                    "replacement_body_style":"$replacement_body_style",
                    "equipment_class":"$equipment_class",
                    "replacement_make":"$replacement_make",
                    "replacement_model":"$replacement_model",
                    "procurement_plan":"$procurement_plan"
                    },
                "avg_est_unit_cost":{"$avg":"$est_unit_cost"},
                "avg_est_unit_cost_error":{"$avg":{ "$subtract": [ "$est_unit_cost", "$actual_unit_cost" ] }}
            }
        }
]

df = ai.access_data_from_pipeline(wrt_raw_data, pipeline) #refined data is returned as a pandas dataframe.
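
Pipelines are not limited to $group stages. As a further illustration, the hypothetical pipeline below uses standard MongoDB $match and $project stages to filter documents and keep only selected fields; the field names are assumed from the example above.

#Hypothetical pipeline: filter to replacement requests and keep two fields
pipeline = [
    {"$match": {"request_type": "Replacement"}},
    {"$project": {"department_name": 1, "est_unit_cost": 1}}
]
df = ai.access_data_from_pipeline(wrt_raw_data, pipeline)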

Click here for details about building data pipelines.

Run AI Experiments

Use the DXC AI Starter to build and test algorithms. This code executes an experiment by running run_experiment() on an experiment design.

experiment_design = {
    #model options include ['tpot_regression()', 'tpot_classification()', 'timeseries']
    "model": ai.tpot_regression(),
    "labels": df.avg_est_unit_cost_error,
    "data": df,
    #Tell the model which column is 'output'
    #Also note columns that aren't purely numerical
    #Examples include ['nlp', 'date', 'categorical', 'ignore']
    "meta_data": {
      "avg_est_unit_cost_error": "output",
      "_id.funding_source": "categorical",
      "_id.department_name": "categorical",
      "_id.replacement_body_style": "categorical",
      "_id.replacement_make": "categorical",
      "_id.replacement_model": "categorical",
      "_id.procurement_plan": "categorical"
  }
}

trained_model = ai.run_experiment(
    experiment_design,
    verbose = False,
    max_time_mins = 5,
    max_eval_time_mins = 0.04,
    config_dict = None,
    warm_start = False,
    export_pipeline = True,
    scoring = None
)
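
Once the experiment finishes, trained_model can be used for predictions. The sketch below assumes the returned object exposes a scikit-learn-style predict() method (as TPOT pipelines do); check the library documentation for the exact interface.

#Assumption: trained_model behaves like a fitted scikit-learn estimator
features = df.drop(columns = ["avg_est_unit_cost_error"])
predictions = trained_model.predict(features)
print(predictions[:5])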

Click here for details about running AI experiments.

Publish Microservice

The DXC AI Starter library makes it easy to publish your models as working microservices. By default, it uses the free tier of Algorithmia to publish models as microservices. You must create an Algorithmia account to use this feature. Below is an example of publishing a microservice.

#trained_model is the output of run_experiment() function
microservice_design = {
    "microservice_name": "<Name of your microservice>",
    "microservice_description": "<Brief description about your microservice>",
    "execution_environment_username": "<Algorithmia username>",
    "api_key": "<your api_key>",
    "api_namespace": "<your api namespace>",   
    "model_path":"<your model_path>"
}

#publish the microservice and display the url of the api
api_url = ai.publish_microservice(microservice_design, trained_model)
print("api url: " + api_url)

Click here for details about publishing a microservice.

Docs

For detailed and complete documentation, please click here

Example notebooks

Here are example notebooks for individual models. These sample notebooks show how to use each function, which parameters each function expects, and what output each function produces.

Contributing Guide

To learn more about contributing and the guidelines, please click here

Reporting Issues

If you find any issues, feel free to report them here with a clear description of your issue. You can use the existing templates for creating issues.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

DXC-RL-1.0.3.5.tar.gz (3.2 MB)

Uploaded Source

Built Distribution

DXC_RL-1.0.3.5-py3-none-any.whl (3.3 MB)

Uploaded Python 3

File details

Details for the file DXC-RL-1.0.3.5.tar.gz.

File metadata

  • Download URL: DXC-RL-1.0.3.5.tar.gz
  • Upload date:
  • Size: 3.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.22.0 setuptools/42.0.2 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.7.4

File hashes

Hashes for DXC-RL-1.0.3.5.tar.gz

  • SHA256: 8af3ab5c05b64909e9970171591d8c16f4f7f74a354d5d804d1af15b5c762c21
  • MD5: 6f5764f8d0d2e79f26c8ea2046cf9346
  • BLAKE2b-256: e24d173ff38ffe1d0cc4e07605f9fc2bcfd43819551ecfe18dc187326b68e369

See more details on using hashes here.

File details

Details for the file DXC_RL-1.0.3.5-py3-none-any.whl.

File metadata

  • Download URL: DXC_RL-1.0.3.5-py3-none-any.whl
  • Upload date:
  • Size: 3.3 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.22.0 setuptools/42.0.2 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.7.4

File hashes

Hashes for DXC_RL-1.0.3.5-py3-none-any.whl

  • SHA256: bcc6907877f3e6feec76c321d33023c15dd904b7d29f437d9a42f56c5a946aae
  • MD5: aa847dab4dd2d46d9152540a9bbc2867
  • BLAKE2b-256: 8e221c2d8f25d2ab8186f0eeb54f2c0efb24274110641e9e0bca5cfd501d3694

See more details on using hashes here.
