[pt-vulkan] Enable Python code blocks in shader templates and upgrade shader template generation#115948
SS-JIA wants to merge 1 commit into `pytorch:main`.
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/115948

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (5 unrelated failures) As of commit 49cfbb9 with merge base d85314c:

* BROKEN TRUNK: the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
* UNSTABLE: the following jobs failed, but likely due to flakiness present on trunk, and have been marked as unstable.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D52087084
… shader template generation (pytorch#115948)

Summary:
Pull Request resolved: pytorch#115948

This change makes two major improvements to PyTorch Vulkan's shader authoring workflow.

## Review Guide

There are a lot of changed files because every GLSL shader had to be touched. The majority of the changes consist of replacing

```
#define PRECISION $precision
#define FORMAT $format
```

with

```
#define PRECISION ${PRECISION}
#define FORMAT ${FORMAT}
```

due to changes in how shader templates are processed.

For reviewers, the primary functional changes to review are:

* `gen_vulkan_spv.py`
  * The majority of functional changes are in this file, which controls how shader templates are processed.
* `shader_params.yaml`
  * Controls how shader variants are generated.

## Python Codeblocks in Shader Templates

From now on, every compute shader (i.e. `.glsl` file) is treated as a shader template. To this effect, the `templates/` folder has been removed and there is now a global `shader_params.yaml` file that describes the shader variants to generate for all shader templates. **Taking inspiration from XNNPACK's [`xngen` tool](https://github.com/google/XNNPACK/blob/master/tools/xngen.py), shader templates can now use Python codeblocks.**
One example is:

```
$if not INPLACE:
  layout(set = 0, binding = 0, FORMAT) uniform PRECISION restrict writeonly image3D uOutput;
  layout(set = 0, binding = 1) uniform PRECISION sampler3D uInput;
  layout(set = 0, binding = 2) uniform PRECISION sampler3D uOther;
  layout(set = 0, binding = 3) uniform PRECISION restrict Block {
    ivec4 output_sizes;
    ivec4 input_sizes;
    ivec4 other_sizes;
    float alpha;
  } uArgs;
$else:
  layout(set = 0, binding = 0, FORMAT) uniform PRECISION restrict image3D uOutput;
  layout(set = 0, binding = 1) uniform PRECISION sampler3D uOther;
  layout(set = 0, binding = 2) uniform PRECISION restrict Block {
    ivec4 output_sizes;
    ivec4 other_sizes;
    float alpha;
  } uArgs;
```

Another is:

```
// PYTHON CODEBLOCK
$if not IS_DIV:
  const int c_index = (pos.z % ((uArgs.output_sizes.z + 3) / 4)) * 4;
  if (uArgs.other_sizes.z != 1 && c_index + 3 >= uArgs.output_sizes.z) {
    ivec4 c_ind = ivec4(c_index) + ivec4(0, 1, 2, 3);
    vec4 mask = vec4(lessThan(c_ind, ivec4(uArgs.output_sizes.z)));
    other_texel = other_texel * mask + vec4(1, 1, 1, 1) - mask;
  }

// PYTHON CODEBLOCK
$if not INPLACE:
  ivec3 input_pos = map_output_pos_to_input_pos(pos, uArgs.output_sizes, uArgs.input_sizes);
  const vec4 in_texel = load_texel(input_pos, uArgs.output_sizes, uArgs.input_sizes, uInput);
  imageStore(uOutput, pos, OP(in_texel, other_texel, uArgs.alpha));
$else:
  const vec4 in_texel = imageLoad(uOutput, pos);
  imageStore(uOutput, pos, OP(in_texel, other_texel, uArgs.alpha));
```

In addition to making shader templates easier and clearer to write, this allows shaders that previously could not be consolidated, such as the non-inplace and inplace variants of the same shader, to be expressed with a single template.

## `generate_variant_forall` in shader variant YAML configuration

YAML files that describe how shader variants should be generated can now use a `generate_variant_forall` field to iterate over various settings for a specific parameter for each variant defined.
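To make the codeblock semantics concrete, here is a heavily simplified, hypothetical sketch of what a preprocessor of this style does. The real logic lives in `gen_vulkan_spv.py` (adapted from `xngen`); `preprocess` and its exact rules below are invented for illustration. It substitutes `${VAR}` occurrences and keeps or drops indented `$if`/`$else` bodies by evaluating the condition against the variant's parameters.

```python
import re

def preprocess(template: str, env: dict) -> str:
    # Simplified sketch: handles ${VAR} substitution plus $if/$else blocks
    # whose bodies are indented more deeply than the $if line itself.
    lines = template.splitlines()
    out = []
    i = 0

    def in_body(j, indent):
        # A line belongs to a block body if it is blank or indented past `indent`.
        return j < len(lines) and (not lines[j].strip() or lines[j].startswith(indent + " "))

    while i < len(lines):
        m = re.match(r"(\s*)\$if (.+):\s*$", lines[i])
        if m:
            indent, cond = m.groups()
            take = bool(eval(cond, {}, env))  # condition is a Python expression
            i += 1
            while in_body(i, indent):  # the $if body
                if take and lines[i].strip():
                    out.append(lines[i])
                i += 1
            if i < len(lines) and lines[i].strip() == "$else:" and lines[i].startswith(indent):
                i += 1
                while in_body(i, indent):  # the $else body
                    if not take and lines[i].strip():
                        out.append(lines[i])
                    i += 1
        else:
            # Plain line: substitute ${VAR} placeholders from the variant's parameters.
            out.append(re.sub(r"\$\{(\w+)\}", lambda m: str(env[m.group(1)]), lines[i]))
            i += 1
    return "\n".join(out)
```

Running this over a tiny template with `{"PRECISION": "highp", "INPLACE": 0}` keeps only the non-inplace branch and expands `${PRECISION}`; with `INPLACE: 1` the `$else:` branch is emitted instead.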
Example:

```
unary_op:
  parameter_names_with_default_values:
    OPERATOR: exp(X)
    INPLACE: 0
  generate_variant_forall:
    INPLACE:
      - VALUE: 0
        SUFFIX: ""
      - VALUE: 1
        SUFFIX: "inplace"
  shader_variants:
    - NAME: exp
      OPERATOR: exp(X)
    - NAME: sqrt
      OPERATOR: sqrt(X)
    - NAME: log
      OPERATOR: log(X)
```

Previously, the `inplace` variants needed separate `shader_variants` entries. If multiple variables need to be iterated across, all possible combinations will be generated. Would be good to take a look to see how the new YAML configuration works.

Test Plan:
There is no functional change in this diff; we only need to make sure that the generated shaders are still correct. Therefore, it suffices to run `vulkan_api_test`:

```
# On Mac laptop
buck run --target-platforms ovr_config//platform/macos:arm64-fbsource //xplat/caffe2:pt_vulkan_api_test_binAppleMac\#macosx-arm64 -c pt.vulkan_full_precision=1 -- --gtest_filter="*"
```

Reviewed By: digantdesai, manuelcandales

Differential Revision: D52087084

fbshipit-source-id: 9190e1236bc56d929acd74c17543c72457a5d287
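The "all possible combinations" behavior can be sketched with `itertools.product`. This is a hedged illustration, not the actual `gen_vulkan_spv.py` logic: `expand_variants` and the exact name-suffixing scheme are assumptions, though the field names follow the YAML above.

```python
import itertools

def expand_variants(defaults, forall, shader_variants):
    # For every shader variant, emit one concrete shader per combination of
    # the iterated parameter values, appending each non-empty SUFFIX to the name.
    axes = [
        [(param, opt["VALUE"], opt["SUFFIX"]) for opt in options]
        for param, options in forall.items()
    ]
    results = []
    for variant in shader_variants:
        for combo in itertools.product(*axes):
            params = dict(defaults)
            params.update({k: v for k, v in variant.items() if k != "NAME"})
            name = variant["NAME"]
            for param, value, suffix in combo:
                params[param] = value
                if suffix:
                    name += "_" + suffix
            results.append((name, params))
    return results
```

With the YAML above, the `exp` entry alone would yield an `exp` and an `exp_inplace` variant, and adding a second `generate_variant_forall` parameter would multiply the count accordingly.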
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since the Phabricator diff has merged; using force because this PR might not pass merge_rules.json but landed internally)
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
… shader template generation (pytorch#115948)

Reviewed By: digantdesai

Differential Revision: D52087084

Pull Request resolved: pytorch#115948

Approved by: https://github.com/manuelcandales
…hapes

## Context

pytorch/pytorch#121598 introduces the ability to support dynamic shapes through tensor metadata updates. The idea is fairly simple. Instead of shaders accepting a single UBO with size data for all arguments:

```
layout(set = 0, binding = 2) uniform PRECISION restrict Block {
  ivec4 output_sizes;
  ivec4 other_sizes;
  float alpha;
}
```

shaders accept separate UBOs for each piece of tensor metadata:

```
layout(set = 0, binding = 3) uniform PRECISION restrict OutSizes {
  ivec4 data;
} out_sizes;

layout(set = 0, binding = 4) uniform PRECISION restrict InSizes {
  ivec4 data;
} in_sizes;

layout(set = 0, binding = 5) uniform PRECISION restrict OtherSizes {
  ivec4 data;
} other_sizes;

layout(set = 0, binding = 6) uniform PRECISION restrict Alpha {
  float data;
} alpha;
```

Each UBO is owned and maintained by the corresponding `vTensor` instance. To support resizing a graph input, every tensor in the graph only needs to update its metadata UBOs via the `tensor.virtual_resize(new_sizes)` call. Shader dispatches in subsequent command buffer submissions will then see the updated metadata and execute as if the tensor had the updated sizes. This changeset introduces a new shader library for the Vulkan graph runtime that enables dynamic shapes through this technique, in favor of relying on the shader library from PyTorch Vulkan.

## Considerations

Technically, the UBO update technique could be applied to the shaders from PyTorch Vulkan as well. If that's the case, why introduce a new shader library for the graph runtime? The primary motivation is code quality. First, having `vTensor` supply UBOs for its own metadata greatly reduces the need for operator-specific ad-hoc `Params` structs to organize arguments to write into an `api::UniformParamsBuffer`.
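The resize mechanism can be sketched in Python pseudocode. The class and method names below mirror the description above, but this is an invented model for illustration, not the runtime's actual C++ API. The key point is that a recorded dispatch binds the metadata buffer itself, not a snapshot of the sizes, so rewriting the buffer is enough.

```python
class MetadataUBO:
    """Stands in for a GPU uniform buffer holding one piece of tensor metadata."""
    def __init__(self, data):
        self.data = tuple(data)

    def update(self, data):
        # In a real runtime this would write into mapped GPU memory, so
        # already-recorded dispatches read the new values on next submission.
        self.data = tuple(data)


class VTensor:
    """Each tensor owns the UBO(s) describing its own metadata."""
    def __init__(self, sizes):
        self.sizes_ubo = MetadataUBO(sizes)

    def virtual_resize(self, new_sizes):
        # Only the metadata buffer is rewritten; no command buffer
        # re-recording is needed for the new sizes to take effect.
        self.sizes_ubo.update(new_sizes)


# A recorded dispatch references the UBO object, not a copy of the sizes:
t = VTensor((1, 3, 8, 8))
bound_ubo = t.sizes_ubo          # what a command buffer would bind
t.virtual_resize((1, 3, 16, 16))
```

After `virtual_resize`, the previously bound buffer already holds the new sizes, which is exactly why subsequent submissions execute as if the tensor had the updated shape.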
Constructing an `ExecuteNode` for binary operators is now

```
graph.execute_nodes().emplace_back(new ExecuteNode(
    graph,
    api::shader_registry().get_shader_info(kernel_name.str()),
    global_size,
    local_size,
    {{out, api::MemoryAccessType::WRITE},
     {{arg1, arg2}, api::MemoryAccessType::READ}},
    {t_out.gpu_sizes_ubo(),
     t_in1.gpu_sizes_ubo(),
     t_in2.gpu_sizes_ubo(),
     graph.create_params_buffer(alpha_val)}));
```

instead of

```
ArithmeticParams block{
    get_size_as_ivec4(t_out),
    get_size_as_ivec4(t_in1),
    get_size_as_ivec4(t_in2),
    alpha_val,
};
api::UniformParamsBuffer params(graph.context(), block);

graph.execute_nodes().emplace_back(new ExecuteNode(
    graph,
    shader,
    global_size,
    local_size,
    {{out, api::MemoryAccessType::WRITE},
     {{arg1, arg2}, api::MemoryAccessType::READ}},
    std::move(params)));
```

Another consideration is that pytorch/pytorch#115948, which landed fairly recently, enables much more expressive shader templates through the use of Python code blocks in the GLSL template. This makes it possible to write shader templates that easily express variants for different data types, packing structures, etc. Introducing a new shader library provides the opportunity to rewrite the shaders in PyTorch Vulkan in a more generic and extensible way.

Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/)

[ghstack-poisoned]
…rary that enables dynamic shapes" ## Context pytorch/pytorch#121598 introduces the ability to support dynamic shapes through tensor metadata updates. The idea is fairly simple. Instead of shaders accepting a UBO with size data for all arguments: ``` layout(set = 0, binding = 2) uniform PRECISION restrict Block { ivec4 output_sizes; ivec4 other_sizes; float alpha; } ``` Shaders will accept separate UBOs for each piece of tensor metadata: ``` layout(set = 0, binding = 3) uniform PRECISION restrict OutSizes { ivec4 data; } out_sizes; layout(set = 0, binding = 4) uniform PRECISION restrict InSizes { ivec4 data; } in_sizes; layout(set = 0, binding = 5) uniform PRECISION restrict OtherSizes { ivec4 data; } other_sizes; layout(set = 0, binding = 6) uniform PRECISION restrict Alpha { float data; } alpha; ``` Each UBO will be owned and maintained by the corresponding `vTensor` instance. To support a graph input resize, every tensor in the graph only needs to update their metadata UBOs via the `tensor.virtual_resize(new_sizes)` call. Shader dispatches in subsequent command buffer submissions will then see the updated metadata and execute as if the tensor were the updated sizes. This changeset introduces a new shader library for the Vulkan graph runtime that enables dynamic shapes through this technique in favor of relying on the shader library from PyTorch Vulkan. ## Considerations Technically, the UBO update technique can be applied to the shaders from PyTorch Vulkan as well. If that's the case, why introduce a new shader library for the graph runtime? The primary motivation is code quality. First, having `vTensor` supply UBOs for their own metadata greatly reduces the need to have operator specifc ad-hoc `Params` structs to organize arguments to write into a `api::UniformParamsBuffer`. 
Constructing an `ExecuteNode` for binary operators is now ``` graph.execute_nodes().emplace_back(new ExecuteNode( graph, api::shader_registry().get_shader_info(kernel_name.str()), global_size, local_size, {{out, api::MemoryAccessType::WRITE}, {{arg1, arg2}, api::MemoryAccessType::READ}}, {t_out.gpu_sizes_ubo(), t_in1.gpu_sizes_ubo(), t_in2.gpu_sizes_ubo(), graph.create_params_buffer(alpha_val)})) ``` instead of ``` ArithmeticParams block{ get_size_as_ivec4(t_out), get_size_as_ivec4(t_in1), get_size_as_ivec4(t_in2), alpha_val, }; api::UniformParamsBuffer params(graph.context(), block); graph.execute_nodes().emplace_back(new ExecuteNode( graph, shader, global_size, local_size, {{out, api::MemoryAccessType::WRITE}, {{arg1, arg2}, api::MemoryAccessType::READ}}, std::move(params))); ``` Another consideration is that pytorch/pytorch#115948 which was landed fairly recently enables much more expressive shader templates through the use of Python code blocks in the GLSL template. This enables shader templates that can easily express variants for different data types, packing structures, etc. Introducing a new shader library provides the opportunity to rewrite the shaders in PyTorch Vulkan in a more generic and extensible way. Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/) [ghstack-poisoned]
…s dynamic shapes" ## Context pytorch/pytorch#121598 introduces the ability to support dynamic shapes through tensor metadata updates. The idea is fairly simple. Instead of shaders accepting a UBO with size data for all arguments: ``` layout(set = 0, binding = 2) uniform PRECISION restrict Block { ivec4 output_sizes; ivec4 other_sizes; float alpha; } ``` Shaders will accept separate UBOs for each piece of tensor metadata: ``` layout(set = 0, binding = 3) uniform PRECISION restrict OutSizes { ivec4 data; } out_sizes; layout(set = 0, binding = 4) uniform PRECISION restrict InSizes { ivec4 data; } in_sizes; layout(set = 0, binding = 5) uniform PRECISION restrict OtherSizes { ivec4 data; } other_sizes; layout(set = 0, binding = 6) uniform PRECISION restrict Alpha { float data; } alpha; ``` Each UBO will be owned and maintained by the corresponding `vTensor` instance. To support a graph input resize, every tensor in the graph only needs to update their metadata UBOs via the `tensor.virtual_resize(new_sizes)` call. Shader dispatches in subsequent command buffer submissions will then see the updated metadata and execute as if the tensor were the updated sizes. This changeset introduces a new shader library for the Vulkan graph runtime that enables dynamic shapes through this technique in favor of relying on the shader library from PyTorch Vulkan. ## Considerations Technically, the UBO update technique can be applied to the shaders from PyTorch Vulkan as well. If that's the case, why introduce a new shader library for the graph runtime? The primary motivation is code quality. First, having `vTensor` supply UBOs for their own metadata greatly reduces the need to have operator specifc ad-hoc `Params` structs to organize arguments to write into a `api::UniformParamsBuffer`. 
Constructing an `ExecuteNode` for binary operators is now ``` graph.execute_nodes().emplace_back(new ExecuteNode( graph, api::shader_registry().get_shader_info(kernel_name.str()), global_size, local_size, {{out, api::MemoryAccessType::WRITE}, {{arg1, arg2}, api::MemoryAccessType::READ}}, {t_out.gpu_sizes_ubo(), t_in1.gpu_sizes_ubo(), t_in2.gpu_sizes_ubo(), graph.create_params_buffer(alpha_val)})) ``` instead of ``` ArithmeticParams block{ get_size_as_ivec4(t_out), get_size_as_ivec4(t_in1), get_size_as_ivec4(t_in2), alpha_val, }; api::UniformParamsBuffer params(graph.context(), block); graph.execute_nodes().emplace_back(new ExecuteNode( graph, shader, global_size, local_size, {{out, api::MemoryAccessType::WRITE}, {{arg1, arg2}, api::MemoryAccessType::READ}}, std::move(params))); ``` Another consideration is that pytorch/pytorch#115948 which was landed fairly recently enables much more expressive shader templates through the use of Python code blocks in the GLSL template. This enables shader templates that can easily express variants for different data types, packing structures, etc. Introducing a new shader library provides the opportunity to rewrite the shaders in PyTorch Vulkan in a more generic and extensible way. Differential Revision: [D54754545](https://our.internmc.facebook.com/intern/diff/D54754545/) [ghstack-poisoned]
Summary:
This change makes two major improvements to PyTorch Vulkan's shader authoring workflow.
## Review Guide
There are a lot of changed files because every GLSL shader had to be touched. The majority of the changes consist of changing

```
#define PRECISION $precision
#define FORMAT $format
```

to

```
#define PRECISION ${PRECISION}
#define FORMAT ${FORMAT}
```

due to changes in how shader templates are processed.
For reviewers, the primary functional changes to review are:

* `gen_vulkan_spv.py`
  * Majority of functional changes are in this file, which controls how shader templates are processed.
* `shader_params.yaml`
  * Controls how shader variants are generated.

## Python Codeblocks in Shader Templates
From now on, every compute shader (i.e. `.glsl` file) is treated as a shader template. To this effect, the `templates/` folder has been removed, and there is now a global `shader_params.yaml` file that describes the shader variants to generate for all shader templates.

**Taking inspiration from XNNPACK's [`xngen` tool](https://github.com/google/XNNPACK/blob/master/tools/xngen.py), shader templates can now use Python codeblocks.**

In addition to making shader templates easier and clearer to write, this allows shaders that previously could not be consolidated, such as the non-inplace and inplace variants of the same shader, to be represented by a single template.
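To make the idea concrete, here is a minimal sketch of xngen-style template expansion. The `$if`/`$end` and `${NAME}` syntax below is an illustrative assumption modeled on the description above, not the exact `gen_vulkan_spv.py` implementation:

```python
import re

def expand(template: str, env: dict) -> str:
    """Expand a tiny xngen-style template: `${NAME}` placeholders are
    substituted from `env`, and a `$if <expr>:` line guards the lines up
    to the next `$end`, emitting them only when the expression is true."""
    out = []
    emit = True  # whether lines in the current region are emitted
    for line in template.splitlines():
        stripped = line.strip()
        if stripped.startswith("$if "):
            emit = bool(eval(stripped[4:].rstrip(":"), {}, env))
            continue
        if stripped == "$end":
            emit = True
            continue
        if emit:
            out.append(re.sub(r"\$\{(\w+)\}", lambda m: str(env[m.group(1)]), line))
    return "\n".join(out)

TEMPLATE = """\
#define PRECISION ${PRECISION}
$if not INPLACE:
layout(set = 0, binding = 1) uniform PRECISION sampler3D image_in;
$end
void main() {}"""

# Both the non-inplace and inplace variants come from one template.
regular = expand(TEMPLATE, {"PRECISION": "highp", "INPLACE": False})
inplace = expand(TEMPLATE, {"PRECISION": "highp", "INPLACE": True})
```

The inplace variant simply omits the separate input image binding, which is the kind of structural difference that plain `$variable` substitution could not express.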
## `generate_variant_forall` in shader variant YAML configuration

YAML files that describe how shader variants should be generated can now use a `generate_variant_forall` field to iterate over various settings of a specific parameter for each variant defined.

Previously, the `inplace` variants would need separate `shader_variants` entries. If multiple variables need to be iterated across, then all possible combinations will be generated. It would be good to take a look at how the new YAML configuration works.
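The "all possible combinations" behavior amounts to a cartesian product over the iterated variables. A rough Python sketch of that expansion follows; the field and variable names here are illustrative assumptions, not the exact `shader_params.yaml` schema:

```python
from itertools import product

# Hypothetical settings that a `generate_variant_forall` section might
# iterate over (names are illustrative only).
generate_variant_forall = {
    "DTYPE": ["float", "half"],
    "INPLACE": [False, True],
}

shader_variants = [{"NAME": "binary_op"}]

generated = []
for base in shader_variants:
    # Each base variant is expanded once per combination of the iterated
    # values: the cartesian product of every listed setting.
    for combo in product(*generate_variant_forall.values()):
        variant = dict(base)
        variant.update(zip(generate_variant_forall.keys(), combo))
        generated.append(variant)

# 1 base variant x 2 dtypes x 2 INPLACE settings -> 4 generated variants
```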
Test Plan:

There is no functional change in this diff; we only need to make sure that the generated shaders are still correct. Therefore, it is sufficient to run `vulkan_api_test`.

Reviewed By: digantdesai
Differential Revision: D52087084