MLflow: A Platform for ML Development and Productionization
Install and run
This version includes a patch to send CDC (change data capture) events. Install and run it as follows:
pip install mlflow-devlibx
export CDC_KAFKA=localhost:9092
export CDC_TOPIC=some_topic
mlflow server --backend-store-uri mysql+pymysql://<user>:<password>@localhost/mlflow_tracking_database --default-artifact-root=<some dir>
See the documentation at https://github.com/devlibx/python-flask-cdc.git for how to enable CDC.
Possible error you may see:
If installation fails on azureml-sdk==1.2.0, comment it out in extra-ml-requirements.txt.
For development, the process is as follows:
One-time setup
cd mlflow/server/js
npm install
npm run build
Uninstall the existing mlflow and install this new code
pip uninstall -y mlflow; pip install . --use-feature=in-tree-build;
Run MLflow (change the user/password as needed)
export CDC_KAFKA=localhost:9092
export CDC_TOPIC=some_topic
mlflow server --backend-store-uri mysql+pymysql://root:root@localhost/mlflow_tracking_database --default-artifact-root=<some dir>
MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc.), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud). MLflow's current components are:
MLflow Tracking: An API to log parameters, code, and results in machine learning experiments and compare them using an interactive UI.
MLflow Projects: A code packaging format for reproducible runs using Conda and Docker, so you can share your ML code with others.
MLflow Models: A model packaging format and tools that let you easily deploy the same model (from any ML library) to batch and real-time scoring on platforms such as Docker, Apache Spark, Azure ML and AWS SageMaker.
MLflow Model Registry: A centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of MLflow Models.
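To give a feel for the Model Registry, the minimal sketch below registers a model that was already logged in a run. It assumes the tracking server uses a database-backed store (which the registry requires); the run ID and the model name "my-model" are placeholders, not values from this document.

import mlflow

# Assumes MLFLOW_TRACKING_URI points at a server with a database-backed store;
# "<run-id>" and "my-model" are placeholders for this sketch.
result = mlflow.register_model("runs:/<run-id>/model", "my-model")
print(result.name, result.version)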
Installing
Install MLflow from PyPI via pip install mlflow
MLflow requires conda to be on the PATH for the projects feature.
Nightly snapshots of MLflow master are also available here.
Install a lower-dependency subset of MLflow from PyPI via pip install mlflow-skinny. Extra dependencies can be added per desired scenario. For example, pip install mlflow-skinny pandas numpy allows for mlflow.pyfunc.log_model support.
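As a rough illustration of that scenario, the sketch below logs a toy custom pyfunc model; the AddN class and its parameter n are invented for this example and are not part of MLflow's API.

import mlflow
import mlflow.pyfunc

class AddN(mlflow.pyfunc.PythonModel):
    """Toy model that adds a constant to every value in the input DataFrame."""
    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        # model_input arrives as a pandas DataFrame
        return model_input.apply(lambda column: column + self.n)

with mlflow.start_run():
    mlflow.pyfunc.log_model(artifact_path="add_n_model", python_model=AddN(n=5))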
Documentation
Official documentation for MLflow can be found at https://mlflow.org/docs/latest/index.html.
Roadmap
The current MLflow Roadmap is available at https://github.com/mlflow/mlflow/milestone/3. We are seeking contributions to all of our roadmap items with the help wanted label. Please see the Contributing section for more information.
Community
For help or questions about MLflow usage (e.g. “how do I do X?”) see the docs or Stack Overflow.
To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.
For release announcements and other discussions, please subscribe to our mailing list (mlflow-users@googlegroups.com) or join us on Slack.
Running a Sample App With the Tracking API
The programs in the examples directory use the MLflow Tracking API. For instance, run:
python examples/quickstart/mlflow_tracking.py
This program uses the MLflow Tracking API, which logs tracking data in ./mlruns. The logged data can then be viewed with the Tracking UI.
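A script along these lines is all it takes to produce data under ./mlruns; the parameter and metric names below are only illustrative and are not the exact contents of the example.

import os
import mlflow

# Log a parameter, a metric, and a small artifact directory; names are illustrative.
with mlflow.start_run():
    mlflow.log_param("param1", 5)
    mlflow.log_metric("foo", 1.0)
    os.makedirs("outputs", exist_ok=True)
    with open("outputs/test.txt", "w") as f:
        f.write("hello world")
    mlflow.log_artifacts("outputs")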
Launching the Tracking UI
The MLflow Tracking UI will show runs logged in ./mlruns at http://localhost:5000. Start it with:
mlflow ui
Note: Running mlflow ui from within a clone of MLflow is not recommended - doing so will run the dev UI from source. We recommend running the UI from a different working directory, specifying a backend store via the --backend-store-uri option. Alternatively, see instructions for running the dev UI in the contributor guide.
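For example, from a separate working directory you could point the UI at a local SQLite store (the file name mlflow.db is arbitrary):

mlflow ui --backend-store-uri sqlite:///mlflow.db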
Running a Project from a URI
The mlflow run command lets you run a project packaged with an MLproject file from a local path or a Git URI:
mlflow run examples/sklearn_elasticnet_wine -P alpha=0.4
mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.4
See examples/sklearn_elasticnet_wine for a sample project with an MLproject file.
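Projects can also be launched from Python; a rough equivalent of the first command above (assuming a local checkout of the example project) is:

import mlflow.projects

# Runs the local example project with the same alpha parameter as the CLI call.
# Requires conda on the PATH unless use_conda=False is passed.
mlflow.projects.run("examples/sklearn_elasticnet_wine", parameters={"alpha": 0.4})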
Saving and Serving Models
To illustrate managing models, the mlflow.sklearn package can log scikit-learn models as MLflow artifacts and then load them again for serving. There is an example training application in examples/sklearn_logistic_regression/train.py that you can run as follows:
$ python examples/sklearn_logistic_regression/train.py
Score: 0.666
Model saved in run <run-id>

$ mlflow models serve --model-uri runs:/<run-id>/model

$ curl -d '{"columns":[0],"index":[0,1],"data":[[1],[-1]]}' -H 'Content-Type: application/json' localhost:5000/invocations
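The same save-and-load round trip can be done in a few lines of Python; the sketch below trains a throwaway model (the data and model choice are arbitrary and not taken from the example script).

import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

X = [[-2.0], [-1.0], [1.0], [2.0]]
y = [0, 0, 1, 1]

with mlflow.start_run() as run:
    model = LogisticRegression().fit(X, y)
    # Save the fitted model as a run artifact under the "model" path
    mlflow.sklearn.log_model(model, artifact_path="model")
    print("Model saved in run", run.info.run_id)

# Reload the model for local scoring using the runs:/ URI scheme
loaded = mlflow.sklearn.load_model("runs:/{}/model".format(run.info.run_id))
print(loaded.predict([[0.5]]))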
Contributing
We happily welcome contributions to MLflow. We are also seeking contributions to items on the MLflow Roadmap. Please see our contribution guide to learn more about contributing to MLflow.