
Project description

CM interface to run MLPerf inference benchmarks

Install the CM automation framework as described in its installation documentation.
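
For reference, a minimal installation might look like the following (a sketch based on the public CM documentation; the PyPI package name cmind and the repository name mlcommons@ck are taken from those docs and may change between releases):

    # Install the CM (Collective Mind) framework from PyPI
    pip install cmind

    # Pull the MLCommons repository with the CM automation scripts
    cm pull repo mlcommons@ck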

Then follow the MLPerf inference documentation to run the benchmarks using the CM interface; a sketch of a typical invocation is shown below.
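
As an illustration, a test run of the reference ResNet50 benchmark might look like this (a hedged sketch: the tags and flags follow the public MLPerf inference automation docs and can differ between CM and benchmark versions, so verify them against the documentation for the version you install):

    # Run the reference ResNet50 implementation in the Offline scenario on CPU
    cm run script --tags=run-mlperf,inference \
       --model=resnet50 \
       --implementation=reference \
       --framework=onnxruntime \
       --device=cpu \
       --scenario=Offline \
       --execution_mode=test \
       --quiet

CM resolves the benchmark dependencies (model, dataset, framework) automatically and caches them for subsequent runs.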

Acknowledgments

This project is sponsored by MLCommons, the cTuning Foundation, and cKnowledge.

You can cite this automation project using the following article:

@misc{fursin2024enabling,
      title={Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments}, 
      author={Grigori Fursin},
      year={2024},
      eprint={2406.16791},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

You can learn more about the MLPerf inference benchmark on the MLCommons website.

Download files

Download the file for your platform.

Source Distribution

cm-mlperf-0.9.1.tar.gz (7.2 kB)
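
Since the package is published on PyPI, the source distribution above can also be installed directly with pip (standard pip usage, matching the distribution name shown above):

    pip install cm-mlperf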

