
[External Tasks] Move task build into method call for external kernel support#282

Merged
yaoyaoding merged 2 commits intohidet-org:mainfrom
xinli-git:hidet_external_ops
Jun 19, 2023
Conversation

@xinli-git
Contributor

This adds support for tasks defined outside the hidet library that use external build logic.

Users can directly override the build function to insert custom logic when introducing operators into a hidet graph.
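The pattern this change enables can be sketched as follows. This is a minimal illustration, not hidet's actual API: the `Task`, `ExternalTask`, and `run_task` names and signatures below are hypothetical, and stand in for the idea that once building is a method call, an external task can supply its own build logic by overriding it.

```python
# Hypothetical sketch: with build as a method on the task object,
# an externally defined task can override it with custom logic.
# Class and method names are illustrative, not hidet's real API.

class Task:
    """Base task: build() runs the library's own compilation pipeline."""

    def __init__(self, name: str):
        self.name = name

    def build(self, target: str) -> str:
        # Default path: lower and compile via the library's pipeline.
        return f"hidet-compiled kernel for {self.name} on {target}"


class ExternalTask(Task):
    """A task defined outside the library, with its own build logic."""

    def build(self, target: str) -> str:
        # Custom path: e.g. link a hand-written or vendor-provided kernel.
        return f"externally built kernel for {self.name} on {target}"


def run_task(task: Task, target: str = "cuda") -> str:
    # Because build is a method call, dynamic dispatch picks the
    # external build logic automatically for ExternalTask instances.
    return task.build(target)
```

The key design point is that callers invoke `task.build(...)` rather than a free function hard-wired to the internal pipeline, so subclasses control how their kernels are produced.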

@yaoyaoding
Member

Thanks @xinli-git !

@yaoyaoding yaoyaoding merged commit a795526 into hidet-org:main Jun 19, 2023
@xinli-git xinli-git deleted the hidet_external_ops branch August 21, 2023 02:49
vadiklyutiy pushed a commit that referenced this pull request Jul 22, 2024
Closes #282 

In the interest of time, only supported enough functionality to resolve
the `NotImplementedError` in the issue.

Also registered the `torch.Tensor.cumsum` method, which was encountered
in the same model immediately after extending `einsum`.

Additionally, added a default value `False` to the `inplace` parameter
of the `relu` function defined in `register_functions.py` to match the
behaviour in PyTorch.
vadiklyutiy pushed a commit that referenced this pull request Jul 23, 2024
vadiklyutiy pushed a commit that referenced this pull request Dec 26, 2024