
Give names to configs in e2e test framework #12649

Merged: 5 commits merged into iree-org:main on Mar 17, 2023

Conversation

@pzread (Contributor) commented Mar 16, 2023

This change gives names to the config objects in the e2e test framework (especially the ones with unique IDs).

The names are required to generate benchmark names in the new benchmark suites, and will later be used and shared across the benchmark tools and uploaded to the perf dashboard. Currently we generate the benchmark names in the benchmark tools, which is not an ideal place.

They can also be used in the CMake comments/names of the module generation rules to give friendly information about where the rules come from (see "Traceability in E2E Test Artifacts" in #12215 (comment)).

Right now the names are generated from config fields such as architectures and tags. Ideally I would like users to manually assign a more concise name to each config (we already do that when naming the unique ID constants, so those might be reusable). However, the perf dashboard currently relies heavily on the benchmark names to filter benchmarks, so the tags need to be part of the name. We need the dashboard to support filtering on metadata before we can move to a more concise naming schema.
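To illustrate why the tags have to stay in the name for now, here is a minimal sketch of the kind of name-string filtering the dashboard currently relies on. The filter logic below is an illustrative assumption, not the dashboard's actual code; the name strings are taken from the examples in this PR.

```python
# Illustrative only: because the dashboard filters on the benchmark name
# string, tags such as "experimental-flags" must appear in the name itself
# rather than being carried as separate metadata.
benchmark_names = [
    "BertLargeTF(tf_v1) [x86_64-cascadelake-linux_gnu-llvm_cpu]"
    "[experimental-flags,fuse-padding] local_task(embedded_elf)"
    "[8-thread,full-inference,default-flags] with zeros @ c2-standard-16[cpu]",
    "BertForMaskedLMTF(tf_v2) [x86_64-cascadelake-linux_gnu-llvm_cpu]"
    "[default-flags] local_sync(embedded_elf)"
    "[full-inference,default-flags] with zeros @ c2-standard-16[cpu]",
]

# Keep only the benchmarks whose name contains the "experimental-flags" tag.
experimental = [name for name in benchmark_names if "experimental-flags" in name]
for name in experimental:
    print(name)
```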

Let me know what you think about the naming schema.

Name format:

ImportedModel:
<model_name>(<import_config_name>)

CompileConfig:
[<compile_target_arch>,...][<compile_config_tag>,...]

ModuleExecutionConfig:
<runtime_driver>(<runtime_loader>)[<run_config_tag>,...]

DeviceSpec:
<device_name>[<device_tag>,...]

ModuleGenerationConfig:
<model_name>(<import_config_name>) [<compile_target_arch>,...][<compile_config_tag>,...] 

E2EModelRunConfig:
<model_name>(<import_config_name>) [<compile_target_arch>,...][<compile_config_tag>,...] <runtime_driver>(<runtime_loader>)[<run_config_tag>,...] with <input_data_name> @ <device_name>[<device_tag>,...]
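To make the schema concrete, here is a minimal sketch of how such names could be composed from config fields. The class and field names are illustrative assumptions, not the framework's actual definitions.

```python
from dataclasses import dataclass
from typing import List

# Illustrative stand-ins for the framework's config classes; the real
# definitions in the e2e test framework differ in structure and detail.

@dataclass(frozen=True)
class ImportedModel:
    model_name: str
    import_config_name: str

    @property
    def name(self) -> str:
        # <model_name>(<import_config_name>)
        return f"{self.model_name}({self.import_config_name})"


@dataclass(frozen=True)
class CompileConfig:
    target_archs: List[str]
    tags: List[str]

    @property
    def name(self) -> str:
        # [<compile_target_arch>,...][<compile_config_tag>,...]
        return f"[{','.join(self.target_archs)}][{','.join(self.tags)}]"


@dataclass(frozen=True)
class ModuleGenerationConfig:
    imported_model: ImportedModel
    compile_config: CompileConfig

    @property
    def name(self) -> str:
        # <model_name>(<import_config_name>) [<arch>,...][<tag>,...]
        return f"{self.imported_model.name} {self.compile_config.name}"


gen = ModuleGenerationConfig(
    imported_model=ImportedModel("EfficientNet_int8", "tflite"),
    compile_config=CompileConfig(
        target_archs=["x86_64-cascadelake-linux_gnu-llvm_cpu"],
        tags=["experimental-flags", "fuse-padding", "compile-stats"],
    ),
)
print(gen.name)
# EfficientNet_int8(tflite) [x86_64-cascadelake-linux_gnu-llvm_cpu][experimental-flags,fuse-padding,compile-stats]
```

The printed name matches the EfficientNet_int8 compilation benchmark example listed further down.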

Execution benchmark names:

BertForMaskedLMTF(tf_v2) [x86_64-cascadelake-linux_gnu-llvm_cpu][default-flags] local_sync(embedded_elf)[full-inference,default-flags] with zeros @ c2-standard-16[cpu]

BertLargeTF(tf_v1) [x86_64-cascadelake-linux_gnu-llvm_cpu][experimental-flags,fuse-padding] local_task(embedded_elf)[8-thread,full-inference,default-flags] with zeros @ c2-standard-16[cpu]

MobileBertSquad_int8(tflite) [armv8.2-a-generic-linux_android29-llvm_cpu][experimental-flags,mmt4d,dotprod] local_task(embedded_elf)[4-thread,full-inference,default-flags] with zeros @ Pixel-6-Pro[big-core]

# The current longest name (218 characters)

MobileBertSquad_fp16(tflite) [valhall-mali-vulkan_android31-vulkan_spirv][experimental-flags,fuse-padding,repeated-kernel,demote-f32-to-f16] vulkan(none)[full-inference,experimental-flags] with zeros @ Pixel-6-Pro[gpu]

Compilation benchmark names:

EfficientNet_int8(tflite) [x86_64-cascadelake-linux_gnu-llvm_cpu][experimental-flags,fuse-padding,compile-stats]

MobileBertSquad_fp16(tflite) [valhall-mali-vulkan_android31-vulkan_spirv][experimental-flags,fuse-padding,repeated-kernel,demote-f32-to-f16,compile-stats]

@pzread changed the title from "Assign names for all configs in e2e test framework" to "Assign names to configs in e2e test framework" on Mar 16, 2023
@pzread force-pushed the bench-name branch 2 times, most recently from d3b7f8c to 21a0481 on March 16, 2023 17:05
@pzread changed the title from "Assign names to configs in e2e test framework" to "Give names to configs in e2e test framework" on Mar 16, 2023
@pzread marked this pull request as ready for review on March 16, 2023 17:06
@pzread added the benchmarks:cuda (Run default CUDA benchmarks), benchmarks:x86_64 (Run default x86_64 benchmarks), and benchmarks:comp-stats (Run default compilation statistics benchmarks) labels on Mar 16, 2023
@GMNGeoffrey (Contributor) left a comment

Let's just make sure that these aren't being used as keys. We can change the names without modifying the server, correct?

@pzread added the benchmarks:comp-stats (Run default compilation statistics benchmarks) label and removed the benchmarks:x86_64 (Run default x86_64 benchmarks) and benchmarks:comp-stats (Run default compilation statistics benchmarks) labels on Mar 16, 2023
@github-actions (bot) commented Mar 16, 2023

Abbreviated Benchmark Summary

@ commit dd9191c6f8fff98337b7e441496592ce480b706b (vs. base 713f9851eda694abe664103bcafafcf846390546)

No improved or regressed benchmarks 🏖️

No improved or regressed compilation metrics 🏖️

For more information:

Source Workflow Run

@pzread (Contributor, Author) commented Mar 17, 2023

> Let's just make sure that these aren't being used as keys. We can change the names without modifying the server, correct?

That's correct.

pzread and others added 3 commits on March 17, 2023 00:56:

…itions.py (Co-authored-by: Geoffrey Martin-Noble <gcmn@google.com>)
…itions.py (Co-authored-by: Geoffrey Martin-Noble <gcmn@google.com>)
@pzread enabled auto-merge (squash) on March 17, 2023 05:07
@pzread merged commit 16dabe4 into iree-org:main on Mar 17, 2023
qedawkins pushed a commit to qedawkins/iree that referenced this pull request on Apr 2, 2023
@jpienaar mentioned this pull request on Apr 3, 2023
NatashaKnk pushed a commit to NatashaKnk/iree that referenced this pull request on Jul 6, 2023