Collective Knowledge - a lightweight knowledge manager to organize, cross-link, share and reuse artifacts and workflows based on FAIR principles
Note that the 1st generation of the CK framework was discontinued in summer 2022 after the 2nd generation of this framework (CM) was released by the open taskforce on education and reproducibility at MLCommons.
Collective Knowledge framework (CK)
2022 July 17: We have pre-released CK2-based MLOps and DevOps automation scripts at https://github.com/mlcommons/ck/tree/master/cm-mlops/script
2022 May: We started developing the 2nd generation of the CK framework (aka CM): https://github.com/mlcommons/ck/tree/master/cm
2022 April 3: We presented the CK concept to bridge the growing gap between ML Systems research and production at the HPCA'22 workshop on benchmarking deep learning systems.
2022 March: We presented the CK concept to enable collaborative and reproducible ML Systems R&D at the SIAM'22 workshop on "Research Challenges and Opportunities within Software Productivity, Sustainability, and Reproducibility"
2022 March: We released the first prototype of the Collective Mind toolkit (CK2) based on your feedback and our practical experience reproducing 150+ ML and Systems papers and validating them in the real world.
While Machine Learning is becoming increasingly important in everyday life, designing efficient ML Systems and deploying them in the real world is becoming increasingly challenging, time-consuming and costly. Researchers and engineers must keep pace with rapidly evolving software stacks and a Cambrian explosion of hardware platforms from the cloud to the edge. Such platforms have their own specific libraries, frameworks, APIs and specifications and often require repetitive, tedious and ad-hoc optimization of the whole model/software/hardware stack to trade off accuracy, latency, throughput, power consumption, size and costs depending on user requirements and constraints.
The CK framework
The Collective Knowledge framework (CK) is our attempt to develop a common plug&play infrastructure that the community can use, similar to Wikipedia, to tackle the above challenges and make it easier to co-design, benchmark, optimize and deploy Machine Learning Systems in the real world across continuously evolving software, hardware and data sets (see our ACM TechTalk for more details):
CK aims at providing a simple playground with minimal software dependencies to help researchers and practitioners share their knowledge in the form of reusable automation recipes with a unified Python API, CLI and meta description:
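Every CK automation action follows the same convention: a single entry point takes a JSON-compatible dictionary and returns one, with `return` set to 0 on success or a non-zero code plus an `error` string on failure. The sketch below illustrates that convention with a self-contained mock; the `access` function and the `version` payload are illustrative stand-ins, not the real `ck.kernel` implementation:

```python
# Illustrative mock of CK's unified JSON I/O convention:
# every action takes a dict and returns a dict with a 'return' code.
def access(request):
    """Dispatch a CK-style request such as {'action': ..., 'module_uoa': ...}."""
    action = request.get('action')
    if action == 'version':
        return {'return': 0, 'version': '2.x.y'}  # hypothetical payload
    return {'return': 1, 'error': f'action "{action}" not found'}

r = access({'action': 'version'})
if r['return'] > 0:
    raise RuntimeError(r['error'])
print(r['version'])
```

Because every action shares this request/response shape, the same call can be issued from Python, the CLI, or a web service without changing the underlying module.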
CK helps to organize software projects and Git repositories as a database of the above automation recipes and related artifacts based on FAIR principles, as described in our journal article (shorter pre-print). See examples of CK-compatible GitHub repositories:
We collaborated with the community to reproduce 150+ ML and Systems papers and implement the following reusable automation recipes in the CK format:
Portable meta package manager to automatically detect, install or rebuild various ML artifacts (ML models, data sets, frameworks, libraries, etc) across different platforms and operating systems including Linux, Windows, MacOS and Android:
Portable manager for Python virtual environments: CK repo.
Portable workflows to support collaborative, reproducible and cross-platform benchmarking:
Portable workflows to automate the MLPerf™ benchmark:
- End-to-end submission suite used by multiple organizations to automate the submission of MLPerf inference benchmark
- Reproducibility studies for MLPerf inference benchmark v1.1 automated by CK
- Design space exploration of ML/SW/HW stacks and customizable visualization
Please contact Grigori Fursin if you are interested in joining this community effort!
- CK automations for unified benchmarking
- CK-based MLPerf inference benchmark automation example
- CK basics
We are developing the 2nd generation of the CK framework (aka CM) based on your feedback:
The latest version of the CK automation suite supported by MLCommons™:
- Automating the MLPerf™ inference benchmark and packaging ML models, data sets and frameworks as CK components with a unified API and meta description
- Developing customizable dashboards for MLPerf™ to help end-users select ML/SW/HW stacks on a Pareto frontier: aggregated MLPerf™ results
- Providing a common format to share artifacts at ML, systems and other conferences: video, Artifact Evaluation
- Redesigning CK together with the community based on user feedback: incubator
- Other real-world use cases from MLPerf™, Qualcomm, Arm, General Motors, IBM, the Raspberry Pi Foundation, ACM and other great partners.
Follow this guide to install the CK framework on your platform.
CK supports the following platforms:
| Platform | As a host platform | As a target platform |
|---|---|---|
| Bare-metal (edge devices) | - | ± |
Portable CK workflow (native environment without Docker)
Here we show how to pull a GitHub repo in the CK format and use a unified CK interface to compile and run any program (image corner detection in our case) with any compatible data set on any compatible platform:
```
python3 -m pip install ck
ck pull repo:mlcommons@ck-mlops
ck ls program:*susan*
ck search dataset --tags=jpeg
ck pull repo:ctuning-datasets-min
ck search dataset --tags=jpeg
ck detect soft:compiler.gcc
ck detect soft:compiler.llvm
ck show env --tags=compiler
ck compile program:image-corner-detection --speed
ck run program:image-corner-detection --repeat=1 --env.MY_ENV=123 --env.TEST=xyz
```
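Each CLI invocation above is translated into a JSON request for the corresponding CK module, following the `action`/`module_uoa`/`data_uoa` naming convention. The parser below is a simplified illustration of that mapping written for this README, not the real CK command-line front end, which handles considerably more syntax:

```python
def cli_to_request(argv):
    """Convert a CK-style command line into a JSON request dict (simplified sketch)."""
    request = {'action': argv[0]}
    for arg in argv[1:]:
        if arg.startswith('--'):            # --key or --key=value flags
            key, _, value = arg[2:].partition('=')
            request[key] = value
        elif ':' in arg:                    # module:data positional form
            module, _, data = arg.partition(':')
            request['module_uoa'] = module
            request['data_uoa'] = data
    return request

print(cli_to_request(['compile', 'program:image-corner-detection', '--speed']))
# {'action': 'compile', 'module_uoa': 'program', 'data_uoa': 'image-corner-detection', 'speed': ''}
```

This is why the same recipe can be invoked identically from the CLI and from Python: both paths converge on one dictionary-based request.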
You can check output of this program in the following directory:
```
cd `ck find program:image-corner-detection`/tmp
ls processed-image.pgm
```
You can now view this image with detected corners.
Check CK docs for further details.
MLPerf™ benchmark workflows
Portable CK workflows inside containers
We have prepared adaptive CK containers to demonstrate MLOps capabilities:
You can run them as follows:
```
ck pull repo:mlcommons@ck-mlops
ck build docker:ck-template-mlperf --tag=ubuntu-20.04
ck run docker:ck-template-mlperf --tag=ubuntu-20.04
```
Portable workflow example with virtual CK environments
You can create multiple virtual CK environments with templates to automatically install different CK packages and workflows, for example for MLPerf™ inference:
```
ck pull repo:mlcommons@ck-venv
ck create venv:test --template=mlperf-inference-main
ck ls venv
ck activate venv:test
ck pull repo:mlcommons@ck-mlops
ck install package --ask --tags=dataset,coco,val,2017,full
ck show env
```
Integration with web services and CI platforms
All CK modules, automation actions and workflows are accessible as micro-services with a unified JSON I/O API, which makes it easier to integrate them with web services and CI platforms as described here.
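A sketch of what such a micro-service endpoint looks like from the outside: a request arrives as a JSON body, is dispatched on its `action` key, and the resulting dictionary is serialized back. The handler and the `ls` payload below are illustrative mocks written for this README, not CK's actual server code:

```python
import json

def handle(raw_body):
    """Mock endpoint: parse a JSON request, dispatch it, return a JSON reply."""
    try:
        request = json.loads(raw_body)
    except json.JSONDecodeError:
        return json.dumps({'return': 1, 'error': 'invalid JSON'})
    # Dispatch on the 'action' key, as CK modules do (mock action only).
    if request.get('action') == 'ls':
        reply = {'return': 0, 'lst': ['image-corner-detection']}  # hypothetical listing
    else:
        reply = {'return': 1, 'error': 'unknown action'}
    return json.dumps(reply)

print(handle('{"action": "ls", "module_uoa": "program"}'))
# {"return": 0, "lst": ["image-corner-detection"]}
```

Because requests and replies are plain JSON, the same endpoint can be called from a browser, a CI job, or `curl` without any CK-specific client library.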
Other use cases
We have developed the cKnowledge.io portal to help the community organize and find all the CK workflows and components similar to PyPI:
- Search CK components
- Browse CK components
- Find reproduced results from papers
- Test CK workflows to benchmark and optimize ML Systems
Containers to test CK automation recipes and workflows
The community provides Docker containers to test CK workflows and components using different ML/SW/HW stacks (DSE).
- A set of Docker containers to test the basic CK functionality using some MLPerf inference benchmark workflows: https://github.com/mlcommons/ck-mlops/tree/main/docker/test-ck
Note that we plan to redesign the CK framework to be more pythonic (we wrote the first prototype without object-oriented code so that it could be ported to bare-metal devices in C, but we eventually dropped this idea).
Please contact Grigori Fursin to join this community effort.