
Kernel IR refactoring: part 4#274

Merged
csarofeen merged 55 commits into 20_7_6_devel from code_ir_part4 on Aug 10, 2020

Conversation

@tlemo (Collaborator) commented Aug 6, 2020

This iteration completes the definitions of all the nodes present in the Kernel IR. In this PR they are kept in sync with the original definitions as much as possible, but the goal is to specialize the Kernel IR nodes to fit their lowered roles, for example:

  • kir::TensorView -> kir::Array ?
  • kir::IterDomain -> ??? range
  • kir::TensorDomain -> ??? dimension

These are not the plan of record; they just illustrate that we'll have the opportunity to define simpler and more specific abstractions in the Kernel IR.

Note that the Fusion IR / Kernel IR split is still not complete. Some Kernel IR nodes may still point to Fusion IR nodes, although this is now limited to a few cases, which will be the subject of the next iteration.

tlemo added 30 commits July 20, 2020 17:55
This reverts commit fc09c1b5a7240701da093406753908eba6f41e1d.
@tlemo tlemo requested review from csarofeen and naoyam August 7, 2020 00:00
const std::vector<bool> contiguity_;
};

class TORCH_CUDA_API TensorView : public Val {
Collaborator Author
Contrast the lowered kir::TensorView with the fuser::TensorView definition.

Comment thread torch/csrc/jit/codegen/cuda/index_compute.cpp Outdated
Comment on lines +115 to +118
const auto in1 = lowerOperand(top->in1(), top->out());
const auto in2 = lowerOperand(top->in2(), top->out());
const auto in3 = lowerOperand(top->in3(), top->out());
const auto out = lowerOutput(top);
Collaborator

nice cleanup!

Comment on lines -83 to -103
std::vector<Statement*> inds;
for (auto* ind : ti->indices())
inds.push_back(mutateAsVal(ind));

bool changed = false;
for (decltype(inds.size()) i{0}; i < inds.size(); i++) {
TORCH_INTERNAL_ASSERT(inds[i]->isVal() && inds[i]->asVal()->isAnInt());
if (!inds[i]->sameAs(ti->index(i)))
changed = true;
}

if (!changed)
return ti;

std::vector<Val*> valInds(inds.size(), nullptr);
for (decltype(inds.size()) i{0}; i < inds.size(); i++)
valInds[i] = inds[i]->asVal();

Val* mutated_val = new kir::TensorIndex(ti->view(), valInds);
registerMutation(ti, mutated_val);
return mutated_val;
Collaborator

Why can this be removed?

Collaborator Author

It doesn't seem to be needed. We may be able to eliminate the need for OptOutMutator completely after we refactor the lowering code.

Collaborator

Why are only the bodies of the overloads for the kernel IR nodes removed? Wouldn't the class become inconsistent? Wouldn't it make more sense to remove the class entirely if it's not needed?

Collaborator Author

That would be the end goal, although I believe that removing the class completely would require additional changes.

I've been chipping away slowly to get to the current state, where we can have some confidence that it can be removed (when I started, this was not as clear).

I opened #276 to track this.

Collaborator

Sounds good to me.

@csarofeen (Owner) left a comment

Looks like a good step in the right direction.

@csarofeen csarofeen merged commit 1874e11 into 20_7_6_devel Aug 10, 2020
@tlemo tlemo deleted the code_ir_part4 branch August 10, 2020 20:03

// Open a new inner most for loop
kir::ForLoop* openFor(Expr* scope, IterDomain* id) {
const auto kir_id = new kir::IterDomain(id);
Owner

Dangerous?

if (dim->isReduction())
continue;
ids.push_back(dim);
ids.push_back(new kir::IterDomain(dim));
Owner

Dangerous?


3 participants