vulkan: sort graph to allow more parallel execution #15850
taronaeo merged 2 commits into ggml-org:master
Conversation
Force-pushed from e71329b to 2246c33
Add a backend proc to allow the backend to modify the graph. The vulkan implementation looks at which nodes depend on each other and greedily reorders them to group together nodes that don't depend on each other. It only reorders the nodes, doesn't change the contents of any of them. With ggml-org#15489, this reduces the number of synchronizations needed.
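The greedy reordering described above can be modeled roughly as follows. This is a minimal standalone sketch with a hypothetical `Node` type, not the actual ggml-vulkan code: nodes whose dependencies were all satisfied in earlier "waves" are emitted together, so nodes that don't depend on each other end up adjacent.

```cpp
#include <cassert>
#include <set>
#include <vector>

struct Node {
    int id;                 // position in the original graph
    std::vector<int> deps;  // ids of nodes this node reads from
};

// Greedily emit "waves" of mutually independent nodes: a node joins the
// current wave once all of its dependencies finished in earlier waves.
// Concatenating the waves yields an order where independent nodes are
// adjacent and need no synchronization between them. Assumes an acyclic graph.
std::vector<int> greedy_reorder(const std::vector<Node>& graph) {
    std::vector<int> order;
    std::set<int> done;  // nodes emitted in previous waves
    while (order.size() < graph.size()) {
        std::vector<int> wave;
        for (const Node& n : graph) {
            if (done.count(n.id)) continue;
            bool ready = true;
            for (int d : n.deps) {
                if (!done.count(d)) { ready = false; break; }
            }
            if (ready) wave.push_back(n.id);
        }
        for (int id : wave) {
            done.insert(id);
            order.push_back(id);
        }
    }
    return order;
}
```

Since only the order changes and node contents are untouched, the result is still a valid topological order of the same graph.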
Force-pushed from 2246c33 to 977ab65
taronaeo left a comment
LGTM for zDNN. If I'm not wrong, this is something we at IBM Research are looking to do :). Looking forward to this!
@taronaeo Friendly reminder: when the author of the PR is a collaborator, we usually let them merge it themselves.

I would have waited for @0cc4m to merge after he reviews the ggml-vulkan changes (he often does perf tests on HW I don't have available).

My apologies, I jumped the gun on this. I'll take note in the future.
/* .graph_compute  = */ ggml_backend_metal_graph_compute,
/* .event_record   = */ NULL,
/* .event_wait     = */ NULL,
/* .optimize_graph = */ NULL,
.graph_optimize would have been a more consistent name.
I'll fix this when I get back to work, if nobody beats me to it
Btw, I'm wondering what the benefit is of delegating this optimization step to the scheduler. It seems the same effect could be achieved by creating an array of indices with the order in which we want to traverse the graph. This could be done inside .graph_compute and could even be interleaved with the GPU if needed.
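A minimal sketch of this alternative, using hypothetical `Graph`/`Node` types rather than the real ggml API: the traversal order lives in a separate index array, and the graph itself is never mutated.

```cpp
#include <cassert>
#include <vector>

struct Node  { int id; };
struct Graph { std::vector<Node> nodes; };

// Visit the nodes through a separate permutation of indices; the graph
// itself is untouched, only the traversal order changes.
template <typename Fn>
void traverse(const Graph& g, const std::vector<int>& perm, Fn&& visit) {
    for (int i : perm) visit(g.nodes[i]);
}
```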
ggml-alloc aggressively reuses memory, which interferes with concurrency. I prototyped a version of this where I did it entirely in the backend, and I had to basically ignore the real allocations and use temporary allocations for all tensors I wanted to reorder.
Ok got it.
Another question though - isn't the original concern from #15489 (comment) now valid again? Without the actual address ranges, you might miss a dependency between the nodes that is not represented by the graph. Back there you solved it by using the actual address ranges, but here this logic is not present.
I'm not completely sure, but I did consider this case. Something like set_rows still operates on (views of) tensors, and I included a check that treats two operations viewing the same tensor as an implicit dependency.
There aren't any actual allocations at this point, so it all has to be done in terms of tensors; I think this works out.
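The implicit-dependency check described here might look roughly like this (a hypothetical `Tensor` struct, not the actual ggml type, though ggml tensors do carry a `view_src` pointer): resolve each view chain down to its base tensor and treat two ops as dependent whenever they share one.

```cpp
#include <cassert>
#include <cstddef>

struct Tensor {
    const Tensor* view_src; // parent tensor if this is a view, else nullptr
};

// Resolve a (possibly chained) view down to the tensor that owns the data.
const Tensor* base_of(const Tensor* t) {
    while (t->view_src) t = t->view_src;
    return t;
}

// Conservatively treat two tensors as dependent when they alias the same
// underlying tensor, even if no graph edge connects their operations.
bool implicit_dep(const Tensor* a, const Tensor* b) {
    return base_of(a) == base_of(b);
}
```

The check is conservative: it may forbid a reorder that would actually be safe, but it never misses an aliasing hazard expressible through views.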
For the Metal backend I implemented a backend-agnostic graph optimization that reorders the nodes for improved concurrency, preserves the order of fusable ops, and does not reorder problematic operators such as GGML_OP_CPY and GGML_OP_SET_ROWS. I think it is generic and fast enough to be used by all backends, but I am currently testing it only with the Metal backend - it seems to work well so far. If you are interested in trying it out, you can quite easily plug it into the Vulkan backend - the implementation is self-contained in the ggml-metal-common.cpp source:
llama.cpp/ggml/src/ggml-metal/ggml-metal-common.cpp
Lines 377 to 387 in 4b8560a
I don't think it's possible to generate the most efficient ordering without knowing what is actually (not just theoretically) fusable by the backend. For example, if you have two matmul+adds:
t0 = matmul ...
t1 = add t0, ...
t2 = matmul ...
t3 = add t2, ...
If the backend fuses matmul+add, then t0,t1,t2,t3 is the correct order: the two fused matmul+adds can run concurrently. But if the backend does not fuse matmul+add, then the better order is t0,t2,t1,t3, so the two matmuls can run concurrently and the two adds can run concurrently.
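This ordering difference can be reproduced with a toy depth-grouping model (purely illustrative, hypothetical names): grouping nodes by dependency depth puts the two unfused matmuls together and the two adds together, whereas fused matmul+add units would each be a single depth-0 node and stay adjacent.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Op {
    std::string name;
    int dep; // index of the input node, or -1 for none (inputs precede users)
};

// Emit nodes grouped by dependency depth: roots first, then nodes one
// edge away, and so on. Nodes at the same depth are independent of each
// other and can run concurrently.
std::vector<std::string> depth_order(const std::vector<Op>& ops) {
    std::vector<int> depth(ops.size());
    int max_d = 0;
    for (size_t i = 0; i < ops.size(); i++) {
        depth[i] = ops[i].dep < 0 ? 0 : depth[ops[i].dep] + 1;
        if (depth[i] > max_d) max_d = depth[i];
    }
    std::vector<std::string> out;
    for (int d = 0; d <= max_d; d++)
        for (size_t i = 0; i < ops.size(); i++)
            if (depth[i] == d) out.push_back(ops[i].name);
    return out;
}
```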
Implement ggml_backend_cann_graph_optimize function for CANN backend, ported from Vulkan backend (PR ggml-org#15489 and ggml-org#15850). Key changes:
- Add graph optimization to reorder nodes based on dependency analysis
- Group non-dependent nodes together for potential parallel execution
- Preserve fusion patterns (RMS_NORM+MUL, MUL_MAT+ADD, ADD+RMS_NORM)
- Add GGML_CANN_DISABLE_GRAPH_OPTIMIZE env var to disable optimization

This is the first step toward multi-stream parallel execution on Ascend NPU.
* vulkan: sort graph to allow more parallel execution
* call optimize_graph per-split
Performance on 5090: