Benchmark Suite Results

MLPerf Automotive

The MLPerf Automotive benchmark suite measures the performance of computers intended for automotive use, covering both Advanced Driving Assistance System/Autonomous Driving (ADAS/AD) and In-Vehicle Infotainment (IVI) embedded systems. The main KPI is latency, because automotive systems are real-time and often functionally safe.

The MLPerf Automotive benchmark suite is a collaboration between MLCommons and the Autonomous Vehicle Compute Consortium (AVCC). The suite is based on Technical Reports (TR003, TR004, TR007) published by AVCC and was developed by MLCommons.


Results

MLCommons results are shown in an interactive table so that you can explore them. Apply filters to see just the information you want, and click the tabs across the top to view the results visually. To see all result details, expand the columns by clicking the “+” icon that appears when you hover over “System Name” and subsequent columns.

Published results are sometimes modified or invalidated for various reasons. The change log contains information about changes made to any results after their initial publication. 


Scenarios & Metrics

To enable representative testing of a wide variety of automotive platforms and use cases, MLPerf has defined two different scenarios as described below. A given scenario is evaluated by a standard load generator (LoadGen) that issues inference queries in a defined pattern and measures a specific latency metric, as summarized in the following table.

Scenario        | Query Generation                                                              | Duration        | Samples/query | Latency Constraint | Tail Latency | Performance Metric
Single Stream   | LoadGen sends the next query as soon as the SUT completes the previous query | 6,636 queries   | 1             | None               | 99.9%        | 99.9%-ile measured latency
Constant Stream | LoadGen sends a new query at a constant rate                                  | 100,000 queries | 1             | Benchmark specific | 99.9%        | 99.9%-ile measured latency
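
Each scenario is configured through LoadGen's test settings. Below is a minimal sketch of a Single Stream performance run using the mlperf_loadgen Python bindings; it completes every query with an empty response where a real harness would run the model, and the Constant Stream scenario is selected analogously. The exact harness wiring lives in the official reference implementations, so treat this as illustrative.

    import mlperf_loadgen as lg

    def load_samples(sample_indices):
        # Load the listed dataset samples into memory (no-op in this sketch).
        pass

    def unload_samples(sample_indices):
        # Release the listed samples (no-op in this sketch).
        pass

    def issue_queries(query_samples):
        # LoadGen calls this with one sample per query (samples/query = 1).
        # A real harness would run the model here and report its output buffer;
        # this skeleton completes each query immediately with an empty response.
        lg.QuerySamplesComplete(
            [lg.QuerySampleResponse(s.id, 0, 0) for s in query_samples])

    def flush_queries():
        pass

    settings = lg.TestSettings()
    settings.scenario = lg.TestScenario.SingleStream   # back-to-back queries
    settings.mode = lg.TestMode.PerformanceOnly
    settings.min_query_count = 6636                    # duration from the table above

    sut = lg.ConstructSUT(issue_queries, flush_queries)
    qsl = lg.ConstructQSL(128, 128, load_samples, unload_samples)  # QSL size per the Benchmarks table below
    lg.StartTest(sut, qsl, settings)
    lg.DestroyQSL(qsl)
    lg.DestroySUT(sut)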

Benchmarks

Each benchmark is defined by a Dataset and Quality Target. The following table summarizes the benchmarks in this version of the suite (the rules remain the official source of truth): 

Area       | Task                     | Model            | Dataset  | QSL Size | Quality       | Latest Version Available
Perception | 2D object detection      | SSD              | Cognata  | 128      | 99.9% of FP32 | v0.5
Perception | 2D semantic segmentation | DeepLabv3+       | Cognata  | 128      | 99.9% of FP32 | v0.5
Perception | 3D object detection      | BEVFormer (tiny) | nuScenes | 256      | 99% of FP32   | v0.5
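
A quality target such as “99.9% of FP32” means the submission's accuracy must reach that fraction of the FP32 reference model's accuracy. Below is a minimal sketch of that check, assuming accuracy is a single scalar such as mAP; the baseline score in the example is made up for illustration, not an official reference value.

    def meets_quality_target(measured_accuracy, fp32_accuracy, target_fraction):
        # True if the submission reaches the required fraction of the FP32
        # reference accuracy, e.g. target_fraction = 0.999 for "99.9% of FP32".
        return measured_accuracy >= target_fraction * fp32_accuracy

    # Hypothetical example: a 2D object detection run scoring 0.4496 mAP against
    # a made-up FP32 baseline of 0.4500 must clear 0.999 * 0.4500 = 0.44955.
    print(meets_quality_target(0.4496, 0.4500, 0.999))  # True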

Each benchmark requires the following scenarios:

Area       | Task                     | Required Scenarios
Perception | 2D object detection      | Single Stream, Constant Stream
Perception | 2D semantic segmentation | Single Stream, Constant Stream
Perception | 3D object detection      | Single Stream, Constant Stream

Divisions

MLPerf aims to encourage innovation in software as well as hardware by allowing submitters to reimplement the reference implementations. There are two Divisions that allow different levels of flexibility during reimplementation:

  • The Closed division is intended to compare hardware platforms or software frameworks “apples-to-apples” and requires using the same model as the reference implementation.
  • The Open division is intended to foster innovation and allows using a different model or retraining. 

Category

MLPerf divides benchmark results into Categories based on availability. 

  • Available systems contain only components that are available for purchase or for rent in the cloud. 
  • Preview systems must be submittable as Available in the next submission round.
  • Research, Development, or Internal (RDI) systems contain experimental, in-development, or internal-use hardware or software.

The categories in automotive focus on the maturity of the automotive system running the benchmark suite.

Category           | Explanation                                                                 | Auditable
Hardened System    | Known as an ECU; a general term for a computer inside a production vehicle | Yes
Development System | Known as an EVM (EValuation Model); generally what SiPs provide            | Yes
Engineering Sample | Very early silicon; internal SoC R&D                                        | No

Submission Information

Each row in the results table is a set of results produced by a single submitter using the same software stack and hardware platform. Each Closed and Open division row contains the following information:

  • Submitter

    The organization that submitted the results.

  • Software

    The ML framework and primary ML hardware library used.

  • System

    General system description.

  • Benchmark Results

    Results for each benchmark as described above.

  • Processor

    The type and number of CPUs used, if CPUs perform the majority of ML compute.

  • Accelerator and Count

    The type and number of accelerators used, if accelerators perform the majority of ML compute.

  • Details

    Link to metadata for submission.

  • Code

    Link to code for submission.
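
For downstream processing (filtering, plotting), a row can be modeled as a simple record. The dataclass below is purely illustrative and not an official MLCommons schema; its fields mirror the list above.

    from dataclasses import dataclass, field

    @dataclass
    class ResultRow:
        # Illustrative mirror of the result-table columns; not an official schema.
        submitter: str          # organization that submitted the results
        software: str           # ML framework and primary ML hardware library
        system: str             # general system description
        processor: str          # CPU type and count, if CPUs do most ML compute
        accelerator: str        # accelerator type and count
        details_url: str        # link to submission metadata
        code_url: str           # link to submission code
        benchmark_results: dict = field(default_factory=dict)  # benchmark -> latency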
