Benchmarks

Here you can add and run benchmarks. For each optimization you can add multiple benchmarks with different parameters. Click a row to see the details of that benchmark, and click "Run" to execute it. Once a benchmark has finished, tick its checkbox in the "Show on chart" column and click "Compare selected" to see a comparison of the results.

The benchmark table shows one row per benchmark, with the following columns:

- Name: the benchmark's name
- Model: the model's name
- Mode: the benchmark mode
- Precision: the model's precision
- Dataset: the dataset's name
- Batch size
- Cores per instance
- Number of instances
- Accuracy: shown as a percentage once results are available (a value of 0 is still displayed)
- Throughput: shown in FPS once results are available
- Status
- Action: contains the "Run" button
- Show on chart: the checkbox used by "Compare selected"
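The column bindings in the original template suggest the shape of the row model behind this table. Below is a minimal TypeScript sketch inferred from those bindings; the interface names and optional markers are assumptions, and only the field names come from the template:

interface BenchmarkResult {
  accuracy?: number;    // percent; 0 is a valid result and is still displayed
  performance?: number; // throughput in FPS
  duration?: number;    // seconds
}

interface Benchmark {
  name: string;
  mode: string;
  model: { name: string; precision: { name: string } };
  dataset: { name: string };
  batch_size: number;
  cores_per_instance: number;
  number_of_instance: number; // field name as it appears in the template
  result?: BenchmarkResult;   // absent until the benchmark has been run
}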

Comparing selected benchmarks produces two charts: Throughput comparison and Accuracy comparison.
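As a rough sketch of what "Compare selected" does with the checked rows, the following TypeScript collects the selected benchmarks and builds one series per chart. The showOnChart flag and function names are hypothetical, reusing the Benchmark interface sketched above; the actual component code may differ:

interface ChartPoint { label: string; value: number; }

// Hypothetical helper: build the two comparison series from the rows
// whose "Show on chart" checkbox is ticked and that already have results.
function compareSelected(
  benchmarks: Array<Benchmark & { showOnChart?: boolean }>,
): { throughput: ChartPoint[]; accuracy: ChartPoint[] } {
  const selected = benchmarks.filter((b) => b.showOnChart && b.result);
  return {
    throughput: selected
      .filter((b) => typeof b.result?.performance === 'number')
      .map((b) => ({ label: b.name, value: b.result!.performance! })),
    accuracy: selected
      .filter((b) => typeof b.result?.accuracy === 'number')
      .map((b) => ({ label: b.name, value: b.result!.accuracy! })),
  };
}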

Results

The Results panel shows three values for the selected benchmark:

- Accuracy, as a percentage
- Performance, as throughput in FPS rounded to one decimal place
- Duration, in seconds

A dash ("-") is shown for any value that is not yet available.

Run the benchmark to get accuracy and performance results.
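The template expressions behind these values encode a few small formatting rules: accuracy gets a "%" suffix, performance is rounded to one decimal place and suffixed with " FPS" (mirroring Angular's number: '1.1-1' pipe), and a dash stands in for anything missing. A TypeScript sketch of equivalent helpers; the function names are illustrative, not the actual source:

// Accuracy: the table column's expression explicitly treats 0 as a
// valid result, so only null/undefined fall back to the dash.
function formatAccuracy(accuracy?: number | null): string {
  return accuracy || accuracy === 0 ? `${accuracy}%` : '-';
}

// Performance: one decimal place plus the FPS unit.
function formatPerformance(performance?: number | null): string {
  return performance ? `${performance.toFixed(1)} FPS` : '-';
}

// Duration: reported in seconds.
function formatDuration(duration?: number | null): string {
  return duration ? `${duration} s` : '-';
}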

Dataset

The Dataset section lists the dataset parameters as key-value pairs, including:

- batch size
- iterations
- sampling size

Model

The Model section lists the model's parameters as key-value pairs; object values are shown by their name when they have one, otherwise as JSON.

Other

The Other section shows:

- created at: when the benchmark was created
- last run at: when it was last executed
- config path: a "Show config" link to the configuration file
- log path: a "Show output" link to the benchmark log
- execution command: the command used to run the benchmark
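Taken together, the detail fields referenced above suggest a superset of the row model. A hedged sketch, again in TypeScript; the property names for the two paths are assumptions, while the rest appear verbatim in the template:

// Hypothetical detail model combining the Dataset, Model and Other fields.
interface BenchmarkDetails extends Benchmark {
  iterations: number;
  sampling_size: number;
  created_at: string;        // timestamp; exact format assumed
  last_run_at: string;
  config_path: string;       // opened via the "Show config" link (name assumed)
  log_path: string;          // opened via the "Show output" link (name assumed)
  execution_command: string; // the command used to run the benchmark
}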