New MLPerf Training and HPC Benchmark Results Showcase 49X Performance Gains in 5 Years
https://mlcommons.org/2023/11/mlperf-training-v3-1-hpc-v3-0-results/ | November 8, 2023
New benchmarks, new submitters, performance gains, and new hardware add scale to latest MLCommons MLPerf results

Today, MLCommons® announced new results from two industry-standard MLPerf™ benchmark suites:

  • The MLPerf Training v3.1 suite, which measures the performance of training machine learning models.
  • The MLPerf HPC (High Performance Computing) v3.0 benchmark suite, which is targeted at supercomputers and measures the performance of training machine learning models for scientific applications and data.

MLPerf Training v3.1
The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications. The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy-efficiency for the entire industry.

MLPerf Training v3.1 includes over 200 performance results from 19 submitting organizations: Ailiverse, ASUSTek, Azure, Azure+NVIDIA, Clemson University Research Computing and Data, CTuning, Dell, Fujitsu, GigaComputing, Google, Intel+Habana Labs, Krai, Lenovo, NVIDIA, NVIDIA+CoreWeave, Quanta Cloud Technology, Supermicro, Supermicro+Red Hat, and xFusion. MLCommons would like to especially congratulate first-time MLPerf Training submitters Ailiverse, Clemson University Research Computing and Data, CTuning Foundation, and Red Hat.

The results demonstrate broad industry participation and highlight performance gains of up to 2.8X compared to just 5 months ago and 49X over the first results, reflecting the tremendous rate of innovation in systems for machine learning. 

A highlight of this round is the largest system ever submitted to MLPerf Training. Comprising over 10,000 accelerators, it demonstrates the extraordinary progress the machine learning community has made in scaling system size to advance the training of neural networks.

MLPerf Training v3.1 introduces the new Stable Diffusion generative AI benchmark model to the suite. Based on Stability AI’s Stable Diffusion v2 latent diffusion model, Stable Diffusion takes text prompts as inputs and generates photorealistic images as output. It is the core technology behind an emerging and exciting class of tools and applications such as Midjourney and Lensa.
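
For readers less familiar with the underlying task, the sketch below shows what a single text-to-image call looks like with the open-source Hugging Face diffusers library. This is only an illustration of the latent diffusion task: the MLPerf benchmark measures training of the model, not this inference path, and the checkpoint name below is an assumption rather than the MLPerf reference model.

```python
# Minimal text-to-image sketch using Hugging Face diffusers (illustrative only;
# not the MLPerf Training reference implementation).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # assumed checkpoint; any SD v2 model works similarly
    torch_dtype=torch.float16,
).to("cuda")

# A text prompt goes in, a photorealistic image comes out.
image = pipe("a photorealistic photo of a supercomputer machine hall at dusk").images[0]
image.save("sample.png")
```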

“Adding Stable Diffusion to the benchmark suite is timely, given how image generation has exploded in popularity,” said Eric Han, MLPerf Training co-chair. “This is a critical new area – extending Generative AI to the visual domain.”

MLCommons added the GPT-3 large language model (LLM) benchmark to MLPerf Training v3.0 last June. In just five months, the benchmark has shown performance gains of over 2.8X. Eleven submissions in this round include the LLM benchmark using the GPT-3 reference model, reflecting the tremendous popularity of generative AI.

“GPT-3 is among the fastest growing benchmarks we’ve launched,” said David Kanter, Executive Director, MLCommons. “It’s one of our goals to ensure that our benchmarks are representative of real-world workloads and it’s exciting to see 2.8X better performance in mere months.”

MLPerf HPC v3.0 Benchmarks
The MLPerf HPC benchmark is similar to MLPerf Training, but is specifically intended for high-performance computing systems that are commonly employed in leading-edge scientific research. It emphasizes training machine learning models for scientific applications and data, such as quantum molecular dynamics, and also incorporates an optional throughput metric for large systems that commonly support multiple users.

MLCommons added a new protein-folding benchmark in the HPC v3.0 benchmark suite: the OpenFold generative AI model, which predicts the 3D structure of a protein given a 1D amino acid sequence. Developed by Columbia University, OpenFold is an open-source reproduction of the AlphaFold 2 foundation model and has been the cornerstone of a large number of research projects since its creation. 
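
To make the task concrete, the sketch below shows the shape of the problem OpenFold addresses: a 1D amino acid sequence in, per-residue 3D coordinates out. The function and output layout are hypothetical placeholders for illustration, not the OpenFold API or the MLPerf reference code.

```python
# Illustrative shape of the protein structure prediction task
# (hypothetical names; not the OpenFold API).
import numpy as np

def predict_structure(sequence: str) -> np.ndarray:
    """Placeholder: a real model returns predicted 3D coordinates per residue."""
    return np.zeros((len(sequence), 3))         # one (x, y, z) position per residue

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # 1D amino acid sequence (input)
coords = predict_structure(sequence)            # 3D structure (output)
print(coords.shape)                             # (33, 3)
```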

MLPerf HPC v3.0 includes over 30 results – a 50% increase in participation over last year – with submissions from 8 organizations running some of the world’s largest supercomputers: Clemson University Research Computing and Data, Dell, Fujitsu+RIKEN, HPE+Lawrence Berkeley National Laboratory, NVIDIA, and Texas Advanced Computing Center. MLCommons congratulates first-time MLPerf HPC submitters Clemson University Research Computing and Data and HPE+Lawrence Berkeley National Laboratory.

The new OpenFold benchmark includes submissions from 5 organizations: Clemson University Research Computing and Data, HPE+Lawrence Berkeley National Laboratory, NVIDIA, and Texas Advanced Computing Center.

HPC v3.0 Performance Gains
The MLPerf HPC benchmark suite demonstrates considerable progress in AI for science that will help unlock new discoveries. For example, the DeepCAM weather modeling benchmark is 14X faster than when it debuted, illustrating how rapid innovations in machine learning systems can empower scientists with better tools to address critical research areas and advance our understanding of the world. 

“The addition of OpenFold follows the spirit of the MLPerf HPC benchmark suite: accelerating workloads with potential for global-scale contribution. We are excited for the new addition as well as the increased participation in the latest submission round,” said Andreas Prodromou, MLCommons HPC co-chair.

View the Results
To view the results for MLPerf Training v3.1 and MLPerf HPC v3.0 and find additional information about the benchmarks, please visit the Training and HPC benchmark pages.

About MLCommons
MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make machine learning better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a member or affiliate, please visit MLCommons.org or contact participation@mlcommons.org.

Latest MLPerf Results Display Gains for All
https://mlcommons.org/2022/11/latest-mlperf-results-display-gains-for-all/ | November 9, 2022
MLCommons’ benchmark suites demonstrate performance gains up to 5X for systems from microwatts to megawatts, advancing the frontiers of AI

Today, MLCommons®, an open engineering consortium, announced new results from the industry-standard MLPerf™ Training, HPC, and Tiny benchmark suites. Collectively, these benchmark suites scale from ultra-low-power devices that draw just a few microwatts for inference all the way up to the most powerful multi-megawatt data center training platforms and supercomputers. The latest MLPerf results demonstrate up to a 5X improvement in performance, helping deliver faster insights and deploy more intelligent capabilities in systems at all scales and power levels.

The MLPerf benchmark suites are comprehensive system tests that stress machine learning models along with the underlying software and hardware, and in some cases optionally measure energy usage. The open-source and peer-reviewed benchmark suites create a level playing field for competition, which fosters innovation and benefits society at large through better performance and energy efficiency for AI and ML applications.

The MLPerf Training benchmark suite measures the performance for training machine learning models that are used in commercial applications such as recommending movies, speech-to-text, autonomous vehicles, and medical imaging. MLPerf Training v2.1 includes nearly 200 results from 18 different submitters spanning all the way from small workstations up to large scale data center systems with thousands of processors.

The MLPerf HPC benchmark suite is targeted at supercomputers and measures the time it takes to train machine learning models for scientific applications and also incorporates an optional throughput metric for large systems that commonly support multiple users. The scientific workloads include weather modeling, cosmological simulation, and predicting chemical reactions based on quantum mechanics. MLPerf HPC 2.0 includes over 20 results from 5 organizations with time-to-train and throughput for all models and submissions from some of the world’s largest supercomputers.

The MLPerf Tiny benchmark suite is intended for the lowest-power devices and smallest form factors, such as deeply embedded, intelligent sensing, and internet-of-things applications. It measures inference performance – how quickly a trained neural network can process new data – and includes an optional energy measurement component. MLPerf Tiny 1.0 encompasses submissions from 8 different organizations, including 59 performance results, 39 of which (just over 66%) include energy measurements – an all-time record.

“We are pleased to see the growth in the machine learning community and especially excited to see the first submissions from xFusion for MLPerf Training, Dell in MLPerf HPC and GreenWaves Technologies, OctoML, and Qualcomm in MLPerf Tiny,” said MLCommons Executive Director David Kanter. “The increasing adoption of energy measurement is particularly exciting, as a demonstration of the industry’s outstanding commitment to efficiency.”

To view the results and find additional information about the benchmarks, please visit https://mlcommons.org/en/training-normal-21/, https://mlcommons.org/en/training-hpc-20/, and https://www.mlcommons.org/en/inference-tiny-10/.

About MLCommons

MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org/ and contact participation@mlcommons.org.

Press Contact:
David Kanter
press@mlcommons.org

MLPerf HPC v1.0 results
https://mlcommons.org/2021/11/mlperf-hpc-v1-0-results/ | November 17, 2021
Introducing a new machine learning metric for supercomputers and a graph neural network benchmark for molecular modeling

Today, MLCommons, an open engineering consortium, released new results for MLPerf™ HPC v1.0, the organization’s machine learning training performance benchmark suite for high-performance computing (HPC). The MLPerf HPC suite measures the time it takes to train emerging scientific machine learning models to standard quality targets. The latest round introduces a novel metric for aggregate machine learning training throughput for supercomputers, which is a realistic representation of HPC system usage. All the benchmarks in the suite use large scientific simulations to generate training data.

MLPerf HPC is a full system benchmark, testing machine learning models, software, and hardware. MLPerf is a fair and consistent way to track ML performance over time, encouraging competition and innovation to improve performance for the community. Compared to the last submission round, the best benchmark results improved by 4-7X, showing substantial improvement in hardware, software, and system scale.

Similar to MLPerf HPC v0.7 results, the submissions consist of two divisions: closed and open. Closed submissions use the same reference model to ensure a level playing field across systems, while participants in the open division are permitted to submit modified models. Submissions are additionally classified by availability within each division, including systems commercially available, in preview, and RDI (research, development, and internal).

New Benchmark and Metric to Measure Supercomputer Capabilities

MLPerf HPC v1.0 is a significant update and includes a new benchmark as well as a new performance metric. The OpenCatalyst benchmark predicts the quantum mechanical properties of catalyst systems to discover and evaluate new catalyst materials for energy storage applications. This benchmark uses the OC20 dataset from the Open Catalyst Project, the largest and most diverse publicly available dataset of its kind, with the task of predicting energy and the per-atom forces. The reference model for OpenCatalyst is DimeNet++, a graph neural network (GNN) designed for atomic systems that can model the interactions between pairs of atoms as well as angular relations between triplets of atoms.

MLPerf HPC v1.0 also features a novel weak-scaling performance metric that is designed to measure the aggregate machine learning capabilities for leading supercomputers. Most large supercomputers run multiple jobs in parallel, for example training multiple ML models. The new benchmark trains multiple instances of a model across a supercomputer to capture the impact on shared resources such as the storage system and interconnect. The benchmark reports both the time-to-train for all the model instances and the aggregate throughput of an HPC system, i.e., number of models trained per minute. Using the new weak-scaling metric, the MLPerf HPC benchmarks can measure the ML capabilities for supercomputers of any size, from just a handful of nodes to the world’s largest systems.
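
As a rough illustration of how the weak-scaling metric works, the sketch below computes throughput from a set of concurrently trained model instances. This is not the MLPerf reference code, and the instance count and training times are made-up numbers chosen only to show the arithmetic.

```python
# Illustrative sketch of the weak-scaling throughput metric (not MLPerf reference code):
# N model instances train concurrently across the machine, and the system reports
# both the time for all instances to finish and the aggregate models-per-minute rate.
instance_train_minutes = [41.2, 42.8, 40.9, 43.5]   # hypothetical per-instance training times

num_instances = len(instance_train_minutes)
# All instances run concurrently, so the run ends when the slowest instance finishes.
time_to_train_all = max(instance_train_minutes)

throughput_models_per_minute = num_instances / time_to_train_all
print(f"time-to-train (all instances): {time_to_train_all:.1f} min")
print(f"aggregate throughput: {throughput_models_per_minute:.3f} models/min")
```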

MLPerf HPC v1.0 results further MLCommons’ goal to provide benchmarks and metrics that level the industry playing field through the comparison of ML systems, software, and solutions. The latest benchmark round received submissions from 8 leading supercomputing organizations and released over 30 results, including 8 using the new weak-scaling metric. Submissions this round included the following organizations: Argonne National Laboratory, the Swiss National Supercomputing Centre, Fujitsu and Japan’s Institute of Physical and Chemical Research (RIKEN), Helmholtz AI (a collaboration of the Jülich Supercomputing Centre at Forschungszentrum Jülich and the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology), Lawrence Berkeley National Laboratory, the National Center for Supercomputing Applications, NVIDIA, and the Texas Advanced Computing Center. In particular, MLCommons would like to congratulate new submitters Argonne National Laboratory, Helmholtz AI, and NVIDIA. To view the results, please visit https://mlcommons.org/en/training-hpc-10/.

“We are excited by the advances in the MLPerf HPC suite and community,” said Steven Farrell, Co-Chair of the MLPerf HPC Working Group. “It’s fantastic to measure such a significant improvement in performance, and we are particularly happy to see a new benchmark and the success of new submitting teams.”

“These benchmarks are aimed at measuring the full capabilities of modern supercomputers,” said Murali Emani, Co-Chair of the MLPerf HPC Working Group. “This iteration of MLPerf HPC will help guide upcoming Exascale systems for emerging machine learning workloads such as AI for science applications.”

Additional information about the HPC v1.0 benchmarks will be available at https://mlcommons.org/en/training-hpc-10/.

About MLCommons

MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats, and increase its positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org/ or contact participation@mlcommons.org.

Press Contact:
Liz Bazini, Bazini Hopp
press@mlcommons.org

MLPerf HPC v0.7 results
https://mlcommons.org/2020/11/mlperf-hpc-v0-7-results/ | November 18, 2020
MLPerf Releases Inaugural Results for Leading High-Performance ML Training Systems

Today the MLPerf™ consortium released results for MLPerf HPC Training v0.7, the first round of results from their machine learning training performance benchmark suite for high-performance computing (HPC). MLPerf is a consortium of over 70 companies and researchers from leading universities, and the MLPerf HPC benchmark suite is establishing an industry standard for measuring machine learning performance on large-scale high performance computing systems.

The MLPerf HPC benchmark suite measures the time it takes to train emerging scientific machine learning models to a standard quality target in tasks relevant to climate analytics and cosmology. Both benchmarks make use of large scientific simulations to generate training data.

The first version of MLPerf HPC includes two new benchmarks:

  • CosmoFlow: A 3D convolutional architecture trained on N-body cosmological simulation data to predict four cosmological parameter targets.
  • DeepCAM: A convolutional encoder-decoder segmentation architecture trained on CAM5+TECA climate simulation data to identify extreme weather phenomena such as atmospheric rivers and tropical cyclones.

The MLPerf HPC benchmark suite was created to capture characteristics of emerging machine learning workloads on HPC systems, such as large-scale model training on scientific datasets. The models and data used by the HPC suite differ from the canonical MLPerf Training benchmarks in significant ways. For instance, CosmoFlow is trained on volumetric (3D) data, rather than the 2D data commonly employed in training image classifiers. Similarly, DeepCAM is trained on images with 768 x 1152 pixels and 16 channels, which is substantially larger than standard vision datasets like ImageNet. Both benchmarks have massive datasets – 8.8 TB in the case of DeepCAM and 5.1 TB for CosmoFlow – introducing significant I/O challenges that expose storage and interconnect performance. The rules for MLPerf HPC v0.7 closely follow the MLPerf Training v0.7 rules, with only a couple of adjustments. For instance, to capture the complexity of the large-scale data movement experienced on HPC systems, all data staging from parallel file systems into accelerated and/or on-node storage systems must be included in the measured runtime.
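
As a back-of-the-envelope illustration of why these datasets stress I/O, the sketch below estimates the size of a single DeepCAM sample and the time to stage the full dataset. Only the image dimensions and dataset size come from the text above; the storage precision and file-system bandwidth are assumptions made for illustration.

```python
# Rough illustration of the I/O pressure from the MLPerf HPC datasets.
deepcam_sample_values = 768 * 1152 * 16        # height x width x channels per sample
bytes_per_value = 4                            # assuming fp32 storage
sample_mb = deepcam_sample_values * bytes_per_value / 1e6
print(f"one DeepCAM sample is roughly {sample_mb:.0f} MB")          # ~57 MB

dataset_tb = 8.8                               # DeepCAM dataset size quoted above
staging_bandwidth_gbs = 10                     # hypothetical parallel file system bandwidth
staging_minutes = dataset_tb * 1e3 / staging_bandwidth_gbs / 60
print(f"staging {dataset_tb} TB at {staging_bandwidth_gbs} GB/s takes about {staging_minutes:.0f} minutes")
```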

“Our first set of results were submitted by organizations from around the world with a diverse set of HPC systems, demonstrating the enthusiasm in the HPC communities for supporting these emerging machine learning workloads,” said Steven Farrell (NERSC) of the latest release. “They also showcase the state-of-the-art capabilities of supercomputers for training large scale scientific problems, utilizing data-parallel and model-parallel training techniques on thousands to tens of thousands of processors.”

To see the results, go to mlcommons.org/en/training-hpc-07/.

The initial round saw submissions from the following organizations:

  • Swiss National Supercomputing Centre (CSCS) – Led by Lukas Drescher and Andreas Fink
  • Fujitsu – Led by Koichi Shirahata and Tsuguchika Tabaru at Fujitsu Laboratories
  • Lawrence Berkeley National Laboratory (LBNL) – Led by Steven Farrell
  • National Center for Supercomputing Applications (NCSA) – Led by Dawei Mu
  • Japan’s Institute of Physical and Chemical Research (RIKEN) – Led by Aleksandr Drozd and Kento Sato
  • Texas Advanced Computing Center (TACC) – Led by Amit Ruhela

MLPerf is committed to providing benchmarks that reflect the needs of machine learning customers at national labs and compute centers, and is pioneering the construction of benchmarks relevant to large scale data-driven machine learning for science. Jacob Balma (HPE) concluded, “These are future-oriented benchmarks aimed at measuring capabilities of modern supercomputers for these emerging workloads. This important step makes it possible to engineer future systems optimized for the next generation of machine learning algorithms.”

Additional information about the HPC Training v0.7 benchmarks will be available at mlcommons.org/en/training-hpc-07/.

MLCommons Launches
https://mlcommons.org/2020/05/mlcommons-launches-2/
Uniting 50+ Global Technology and Academic Leaders to Accelerate Innovation in ML

MLCommons Launches and Unites 50+ Global Technology and Academic Leaders in AI and Machine Learning to Accelerate Innovation in ML

Engineering consortium to deliver industry-wide benchmarks, best practices and datasets to speed computer vision, natural language processing, and speech recognition development for all

SAN FRANCISCO – December 3, 2020 – Today, MLCommons®, an open engineering consortium, launches its industry-academic partnership to accelerate machine learning innovation and broaden access to this critical technology for the public good. The non-profit organization, initially formed as MLPerf, now boasts a founding board that includes representatives from Alibaba, Facebook AI, Google, Intel, and NVIDIA, as well as Professor Vijay Janapa Reddi of Harvard University, and a broad range of more than 50 founding members. The founding membership includes over 15 startups and small companies that focus on semiconductors, systems, and software from across the globe, as well as researchers from universities such as U.C. Berkeley, Stanford, and the University of Toronto.

MLCommons will advance the development of, and access to, the latest AI and machine learning datasets and models, best practices, benchmarks, and metrics. The intent is to make machine learning capabilities such as computer vision, natural language processing, and speech recognition accessible to as many people as possible, as quickly as possible.

“MLCommons has a clear mission – accelerate Machine Learning innovation to ‘raise all boats’ and increase positive impact on society,” said Peter Mattson, President of MLCommons. “We are excited to build on MLPerf and extend its scope and already impressive impact, by bringing together our global partners across industry and academia to develop technologies that benefit everyone.”

“Machine Learning is a young field that needs industry-wide shared infrastructure and understanding,” said David Kanter, Executive Director of MLCommons. “With our members, MLCommons is the first organization that focuses on collective engineering to build that infrastructure. We are thrilled to launch the organization today to establish measurements, datasets, and development practices that will be essential for fairness and transparency across the community.”

Today’s launch of MLCommons in partnership with its founding members will promote global collaboration to build and share best practices – across industry and academia, software and hardware, from nascent startups to the largest companies. For example, MLCube enables researchers and developers to easily share machine learning models to ensure portability and reproducibility across a wide range of infrastructure, so that innovations can be easily adopted and fuel the next wave of technology.

MLCommons will focus on:

  • Benchmarks and Metrics – that deliver transparency and a level playing field for comparing ML systems, software, and solutions, e.g. MLPerf™, the industry-standard for machine learning training and inference performance.
  • Datasets and Models – that are publicly available and can form the foundation for new capabilities and AI applications, e.g. People’s Speech, the world’s largest public speech-to-text dataset.
  • Best Practices – e.g. MLCube™, a set of common conventions that enables open and frictionless sharing of ML models across different infrastructure and between researchers and developers around the globe.

Benchmarks and Best Practices Align Industry and Research to Drive AI Forward

The opportunities to apply Machine Learning to benefit everyone are endless: from communication, to healthcare, to making driving safer. To foster the ongoing development, implementation, and sharing of Machine Learning and AI technologies, and to measure progress on quality, speed, and reliability, the industry requires a universally agreed-upon set of best practices and metrics.

MLCommons is focused on building these tools for the entire ML community. A cornerstone asset within MLCommons is MLPerf, the industry-standard ML benchmark suite that measures full system performance for real applications. With MLPerf, MLCommons is promoting industry-wide transparency and making like-for-like comparisons possible.

Public Datasets that Accelerate Innovation and Accessibility

Machine Learning and AI require high quality datasets, as they are foundational to the performance of new capabilities. To accelerate innovation in ML, MLCommons is committed to the creation of large-scale, high-quality public datasets that are shared and made accessible to all.

An early example of such an initiative for MLCommons is People’s Speech, the world’s largest public speech-to-text dataset in multiple languages that will enable better speech-based assistance. MLCommons has collected more than 80,000 hours of speech with the goal of democratizing speech technology. With People’s Speech, MLCommons will create opportunities to extend the reach of advanced speech technologies to many more languages and help to offer the benefits of speech assistance to the entire world population rather than confining it to speakers of the most common languages.

About MLCommons

MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats, and increase its positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding member partners (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

The MLCommons founding members are from leading companies, including Advanced Micro Devices, Inc., Alibaba Co., Ltd., Arm Limited & Its Subsidiaries, Baidu Inc., Cerebras Systems, Centaur Technology, Inc., Cisco Systems, Inc., Ctuning Foundation, Dell Technologies, d-Matrix Corp., Facebook AI, Fujitsu Ltd, FuriosaAI, Inc., Gigabyte Technology Co., LTD., Google LLC, Grai Matter Labs, Graphcore Limited, Groq Inc., Hewlett Packard Enterprise, Horizon Robotics Inc., Inspur, Intel Corporation, Kalray, Landing AI, MediaTek, Microsoft, Myrtle.ai, Neuchips Corporation, Nettrix Information Industry Co., Ltd., Nvidia Corporation, Qualcomm Technologies, Inc., Red Hat, Inc., SambaNova Systems, Samsung Electronics Co., Ltd, Shanghai Enflame Technology Co., Ltd, Syntiant Corp., Tenstorrent Inc., VerifAI Inc., VMind Technologies, Inc., Xilinx, Guangdong Oppo Mobile Telecommunications Corp., Ltd (Zeku Technology (Shanghai) Corp. Ltd.) and researchers from the following institutions: Harvard University, Indiana University, Stanford University, University of California, Berkeley, University of Toronto, and University of York. Additional MLCommons membership at launch includes LSDTech.

For additional information on MLCommons and details on becoming a member of the organization, please visit http://mlcommons.org/ or contact membership@mlcommons.org.

Press Contact:
press@mlcommons.org
