The cTuning Foundation joins forces with MLCommons to develop the Collective Knowledge Playground for collaborative optimization challenges

The non-profit cTuning Foundation, cKnowledge Ltd and the open MLCommons taskforce on automation and reproducibility are thrilled to announce the release of the Collective Knowledge Playground (CK).

CK is a free, open-source, technology-agnostic platform that makes it easier for everyone to benchmark, optimize, compare and discuss AI and ML systems across rapidly evolving software, hardware, models and data sets in a fully automated and reproducible way. It does so via open optimization challenges, as successfully demonstrated in the latest MLPerf inference v3.0 community submission.

This platform is powered by the portable, technology-agnostic MLCommons CK/CM automation framework. Its reusable, community-developed automation recipes tackle "AI/ML dependency hell" by transparently and non-intrusively connecting diverse, continuously changing models, software, hardware, data sets, best practices and optimization techniques into end-to-end applications.
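
To make this concrete, here is a minimal sketch of how a reusable CM automation recipe can be invoked from Python. It assumes the `cmind` package is installed (`pip install cmind`) and that a repository of community recipes such as `mlcommons@ck` has been pulled via `cm pull repo mlcommons@ck`; the recipe tags shown below are illustrative examples rather than a definitive workflow.

```python
# Minimal sketch, assuming `pip install cmind` and `cm pull repo mlcommons@ck`
# have been run; the recipe tags below are illustrative examples.
import cmind

# Ask CM to run a portable automation recipe selected by its tags;
# 'out': 'con' streams the recipe's output to the console.
result = cmind.access({
    'action': 'run',         # CM action to perform
    'automation': 'script',  # CM automation type (portable script)
    'tags': 'detect,os',     # tags selecting the reusable recipe
    'out': 'con'
})

# CM calls return a dictionary; a non-zero 'return' code signals an error.
if result['return'] > 0:
    print(f"CM error: {result.get('error', 'unknown error')}")
```

The same recipes can also be invoked through the `cm` command-line front end, so workflows can be scripted or embedded in other tools in the same unified way.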

We are very pleased to announce the successful outcome of the 1st community challenge to run, reproduce and optimize MLPerf inference v3.0 benchmarks: our open-source CK technology helped to automate, unify and reproduce more than 80% of all submission results, including 98% of power results, with very diverse technology and benchmark implementations from Neural Magic (acquired by Red Hat), Qualcomm, cKnowledge Ltd, KRAI, cTuning, Dell Technologies, Hewlett Packard Enterprise, Lenovo, Hugging Face, NVIDIA, Intel and Apple. These results span diverse CPUs, GPUs and DSPs with PyTorch, ONNX, QAIC, TF/TFLite, TVM and TensorRT, using popular cloud providers (GCP, AWS, Azure) as well as individual servers and edge devices provided by our volunteers.

You can now see and compare all MLPerf inference v3.0, v2.1 and v2.0 results online, together with reproducibility reports, including one for the MLPerf BERT model from the Hugging Face Zoo on the NVIDIA Jetson Orin platform. You can read more about our project in articles from Forbes and ZDNet.

Additional thanks to Michael Goin from Neural Magic (acquired by Red Hat), our international students, including Himanshu Dutta, Aditya Kumar Shaw, Sachin Mudaliyar and Thomas Zhu, and all CK/CM users and contributors for helping us validate, use and improve this open-source technology to automate benchmarking and optimization of AI/ML systems in terms of performance, accuracy, power and cost!

Following your feedback, we have started preparing new benchmarking, optimization and reproducibility challenges for summer 2023. We invite you to join our open MLCommons taskforce on automation and reproducibility, led by Grigori Fursin and Arjun Suresh, via this public Discord server, or to share your suggestions via GitHub.

We look forward to discussing various open challenges and tournaments with you; collaborating with you to improve and adapt our open-source CK platform to your needs, use cases and technology; helping you automate, optimize and reproduce end-to-end AI/ML applications and future MLPerf submissions; and helping you collaborate with MLCommons organizations and ACM!

Our ultimate goal is to use Collective Knowledge to let anyone automatically generate the most efficient, reproducible and deployable AI/ML application, with the most suitable software/hardware stack at any given time (model, framework, inference engine and any other related dependency), based on their requirements and constraints, including cost, throughput, latency, power consumption, accuracy, target devices (cloud/edge/mobile/tiny), environment and data, while slashing their benchmarking, optimization and operational costs!

Looking forward to collaborating with you in 2023 and beyond!
