[MLIR] Make OneShotModuleBufferize use OpInterface #110322
matthias-springer merged 17 commits into llvm:main
Conversation
… use FunctionOpInterface
…ze.cpp Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
…ze.cpp Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
…ze.cpp Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
…orm/ChH/full.mlir
erick-xanadu left a comment:
Hi @tzunghanjuang, I left a couple of suggestions. The reason for this is to avoid hardcoding `func::CallIndirectOp`. But wait for the other maintainers to chime in, as I am not sure whether this algorithm requires all functions to be statically known.
Thanks!
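The suggestion above (resolving callees through an interface instead of matching concrete call ops) can be sketched roughly as follows. This is a hedged sketch against the MLIR C++ API, not code quoted from the review; the helper name is an assumption:

```cpp
// Sketch: resolve a callee generically instead of matching func::CallOp /
// func::CallIndirectOp by concrete type. Any op implementing
// CallOpInterface is handled; callees that are not statically known
// (e.g. indirect calls) resolve to null and can be skipped by the caller.
static FunctionOpInterface getCalledFunction(CallOpInterface callOp) {
  SymbolRefAttr sym =
      llvm::dyn_cast_if_present<SymbolRefAttr>(callOp.getCallableForCallee());
  if (!sym)
    return nullptr;
  return dyn_cast_or_null<FunctionOpInterface>(
      SymbolTable::lookupNearestSymbolFrom(callOp, sym));
}
```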
mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp (two resolved review threads)
…ze.cpp Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
Should I merge this now?

Hi @matthias-springer. Yes, it is ready to merge. Thank you.
**Description:** This PR replaces part of `FuncOp` and `CallOp` with `FunctionOpInterface` and `CallOpInterface` in `OneShotModuleBufferize`. It also fixes the error from an integration test in a previous PR attempt (llvm#107295). The fixes below skip `CallOpInterface` ops so that the assertions are not triggered.
https://github.com/llvm/llvm-project/blob/8d780007625108a7f34e40efb8604b858e04c60c/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp#L254-L259
https://github.com/llvm/llvm-project/blob/8d780007625108a7f34e40efb8604b858e04c60c/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp#L311-L315
**Related Discord Discussion:** [Link](https://discord.com/channels/636084430946959380/642426447167881246/1280556809911799900)
---------
Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
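The skip described in the linked lines can be sketched roughly like this (a sketch against the MLIR C++ API; the traversal shape and comments are assumptions, not quoted from the patch):

```cpp
// Sketch (assumed surrounding loop): resolve each call through
// CallOpInterface and skip callers whose callee is not a statically known
// FunctionOpInterface, so the later assertions never fire on an
// unresolved callee.
moduleOp.walk([&](CallOpInterface callOp) {
  auto callee =
      dyn_cast_or_null<FunctionOpInterface>(callOp.resolveCallable());
  if (!callee)
    return; // e.g. an indirect call: nothing to analyze here
  // ... record the caller -> callee edge for the bufferization analysis ...
});
```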
This change breaks bufferization of ops that implement `FunctionOpInterface` but are not `func.func` ops. This produces the following error: `cannot bufferize a FuncOp with tensors and without a unique ReturnOp`. Also note that the error message still hardcodes `FuncOp`. Finally, there might be other cases where people didn't expect their `FunctionOpInterface` ops to be bufferized.
I think the fix is as easy as adding the …
I have just created #111408, but I am having second thoughts: AFAIU, …
@tzunghanjuang Can you check how we are using the …
In that case, we could also make the …
@matthias-springer Yes, …
Hi, @matthias-springer. I just want to make sure if I update the
llvm-project/mlir/include/mlir/Interfaces/FunctionInterfaces.td
Lines 80 to 87 in 67160c5
Linaro's bots have been very behind, but we finally noticed that this has caused SME test failures on AArch64. Last good build: https://lab.llvm.org/staging/#/builders/125/builds/388

This bot is running SME tests using qemu for emulation. Check the cmake stage to see how to enable that. You will need a recent version of qemu-aarch64 installed.

@banach-space I'll leave it to you to handle; the linked PR may already solve the issue.
Thanks for flagging this @DavidSpickett. This can be reproduced without requiring an emulator; plain `mlir-opt` is enough:

bin/mlir-opt ../../mlir/test/Integration/Dialect/Linalg/CPU/ArmSME/matmul.mlir -transform-interpreter -test-transform-dialect-erase-schedule
../../mlir/test/Integration/Dialect/Linalg/CPU/ArmSME/matmul.mlir:56:3: error: cannot bufferize a FuncOp with tensors and without a unique ReturnOp
transform.named_sequence @__transform_main(%module : !transform.any_op {transform.consumed}) {
^
../../mlir/test/Integration/Dialect/Linalg/CPU/ArmSME/matmul.mlir:56:3: note: see current operation:
"transform.named_sequence"() <{arg_attrs = [{transform.consumed}], function_type = (!transform.any_op) -> (), sym_name = "__transform_main"}> ({
^bb0(%arg0: !transform.any_op):
%0 = "transform.structured.match"(%arg0) <{ops = ["linalg.matmul"]}> : (!transform.any_op) -> !transform.any_op
%1:4 = "transform.structured.tile_using_for"(%0) <{scalable_sizes = array<i1: true, true, false>, static_sizes = array<i64: 4, 4, 1>}> : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op, !transform.any_op)
"transform.structured.vectorize"(%1#0) <{scalable_sizes = array<i1: true, true, false>, static_vector_sizes = array<i64: 4, 4, 1>}> : (!transform.any_op) -> ()
%2 = "transform.bufferization.one_shot_bufferize"(%arg0) <{allow_return_allocs_from_loops = false, allow_unknown_ops = false, bufferize_function_boundaries = true, check_parallel_regions = true, dump_alias_sets = false, memcpy_op = "memref.copy", print_conflicts = false, test_analysis_only = false}> : (!transform.any_op) -> !transform.any_op
%3 = "transform.structured.match"(%2) <{ops = ["func.func"]}> : (!transform.any_op) -> !transform.any_op
"transform.apply_patterns"(%3) <{max_iterations = -1 : i64, max_num_rewrites = -1 : i64}> ({
"transform.apply_patterns.vector.lower_masked_transfers"() : () -> ()
"transform.apply_patterns.vector.transfer_permutation_patterns"() : () -> ()
"transform.apply_patterns.vector.reduction_to_contract"() : () -> ()
}) : (!transform.any_op) -> ()
"transform.apply_patterns"(%3) <{max_iterations = -1 : i64, max_num_rewrites = -1 : i64}> ({
"transform.apply_patterns.vector.lower_contraction"() <{lowering_strategy = 2 : i32}> : () -> ()
"transform.apply_patterns.vector.lower_masks"() : () -> ()
"transform.apply_patterns.vector.rank_reducing_subview_patterns"() : () -> ()
"transform.apply_patterns.canonicalization"() : () -> ()
}) : (!transform.any_op) -> ()
%4 = "transform.structured.hoist_redundant_vector_transfers"(%3) : (!transform.any_op) -> !transform.any_op
%5 = "transform.structured.match"(%2) <{interface = 2 : i32}> : (!transform.any_op) -> !transform.any_op
"transform.apply_licm"(%5) : (!transform.any_op) -> ()
"transform.loop.hoist_loop_invariant_subsets"(%5) : (!transform.any_op) -> ()
"transform.yield"() : () -> ()
}) : () -> ()
../../mlir/test/Integration/Dialect/Linalg/CPU/ArmSME/matmul.mlir:71:18: error: bufferization failed
%bufferize = transform.bufferization.one_shot_bufferize %module
^

@tzunghanjuang or @matthias-springer, could you help us with this? This is not really my area of expertise 😅

Link to the test:
@banach-space, if you want to have your test pass while the PR above is discussed: you can wrap …
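The suggested workaround is truncated above; one possible shape of it (an assumption, not quoted from the comment) would be to keep the transform script in its own nested module so that one-shot-bufferize applied to the payload never visits `transform.named_sequence`, which implements `FunctionOpInterface`:

```mlir
// Assumed sketch of the workaround: nest the transform script so the
// bufferization of the payload module does not see the named sequence.
module {
  // ... payload funcs with tensor semantics ...
  module attributes {transform.with_named_sequence} {
    transform.named_sequence @__transform_main(
        %arg0: !transform.any_op {transform.consumed}) {
      transform.yield
    }
  }
}
```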
@matthias-springer would it be desirable to have bufferization skip nested modules that do not contain any tensor values in their bodies? Does the builtin module implement …
These tests have been broken since:
* llvm#110322

First failing buildbot:
* llvm#110322
…110322)" This reverts commit 2026501. Failing bot: * https://lab.llvm.org/staging/#/builders/125/builds/389
Thanks for these hints. I noticed that you have also rewritten the pass pipeline to use …

Unfortunately, the steps outlined above weren't sufficient for the SME e2e tests to pass (#113117). I am running out of time and both Matthias and I are travelling this week, hence I am suggesting a revert:
…)" (#113124) This reverts commit 2026501. Failing bot: * https://lab.llvm.org/staging/#/builders/125/builds/389