The decision on `minimumBufferBindingSize` in #678 has opened a can of questions about how we decide whether validation happens at initialization time versus draw time. There is a class of `GPUBindGroupLayoutEntry` properties that aren't actually needed by the native APIs, and that we only have for the purpose of early validation:

- `textureComponentType` (`GPUTextureBindingLayout.sampleType`)
- `storageTextureFormat` (`GPUStorageTextureBindingLayout.format`)
- `minimumBufferBindingSize` (`GPUBufferBindingLayout.minBindingSize`)
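For concreteness, here is a sketch of a single bind group layout descriptor that exercises all three of these properties, written with the current spec spellings. These are plain objects (no `GPUDevice` needed), so treat this as an illustration rather than a complete, validated layout:

```javascript
// Sketch only: a GPUBindGroupLayout descriptor touching all three
// early-validation properties, using the current spec names.
const layoutDescriptor = {
  entries: [
    // textureComponentType -> GPUTextureBindingLayout.sampleType
    { binding: 0, visibility: 2 /* GPUShaderStage.FRAGMENT */,
      texture: { sampleType: 'float' } },
    // storageTextureFormat -> GPUStorageTextureBindingLayout.format
    { binding: 1, visibility: 4 /* GPUShaderStage.COMPUTE */,
      storageTexture: { access: 'write-only', format: 'rgba8unorm' } },
    // minimumBufferBindingSize -> GPUBufferBindingLayout.minBindingSize
    // (optional): nonzero means the size check happens at bind group
    // creation; 0/omitted defers it to draw/dispatch time.
    { binding: 2, visibility: 4 /* GPUShaderStage.COMPUTE */,
      buffer: { type: 'storage', minBindingSize: 16 } },
  ],
};
```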
Currently, the first two are mandatory, while the last one is optional. Technically, we could validate all three at draw time, so it's important to understand where and why we draw this line.
For example, there could be an application that wants to compute mipmaps in the shader. They would have separate shader variants for int/uint/float textures, but in all of the native APIs these shaders would be linked to pipelines of the same layout. In WebGPU today, that would require separate GPUBindGroupLayout objects (since textureComponentType is different), and separate GPUPipelineLayout.
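The mipmap scenario can be sketched like so. `mipmapLayoutFor` is a hypothetical helper name (not part of any API), and the descriptors are plain objects so no GPU device is required:

```javascript
// Hypothetical helper: builds one bind group layout descriptor per
// texture component type for a compute-shader mipmap generator.
function mipmapLayoutFor(sampleType) {
  return {
    entries: [
      { binding: 0, visibility: 4 /* GPUShaderStage.COMPUTE */,
        texture: { sampleType } },  // the only field that varies
      { binding: 1, visibility: 4,
        storageTexture: { access: 'write-only', format: 'rgba8unorm' } },
    ],
  };
}

// Three layouts that differ only in sampleType; in the native APIs all
// three shader variants could link against a single shared layout.
const variants = ['float', 'sint', 'uint'].map(mipmapLayoutFor);
```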
Another example is a shader that zeroes out a storage texture. It could have different entry points for different formats of the texture. But all of them would use the same layout in the native APIs. In WebGPU today, that would require a separate bind group layout for every format/shader variation.
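Similarly, the storage-texture zeroing example, again with a hypothetical helper where only `format` varies between otherwise identical layouts:

```javascript
// Hypothetical helper: one layout descriptor per storage texture format,
// even though every entry point could share a single layout natively.
function zeroingLayoutFor(format) {
  return {
    entries: [
      { binding: 0, visibility: 4 /* GPUShaderStage.COMPUTE */,
        storageTexture: { access: 'write-only', format } },
    ],
  };
}

const layouts = ['rgba8unorm', 'rgba16float', 'r32uint'].map(zeroingLayoutFor);
```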
Personally, I see great value in surfacing the errors as early as possible, making them more explicit. I think it would be great to just make all of these properties required, and then listen to ISV feedback to see if this needs to be relaxed.
Raising this issue for the awareness of the group.
EDIT (2023-09-01 @kainino0x): some related things:

- Texture filterability / sampler filtering validation. Currently baked into a pipeline layout (and therefore pipeline), but could be validated at draw/dispatch time. Somewhat expensive because it requires iterating over all of the texture-sampler pairs and checking them. (discussion)