[Integration] Sync to main#353

Merged
xinli-git merged 5 commits intoauto-parallelfrom
main
Aug 24, 2023
Conversation

@xinli-git
Contributor

No description provided.

yaoyaoding and others added 5 commits August 5, 2023 00:07
…del support (#347)

1. Enhance support for `__setitem__` and `__getitem__` of Tensor; add
SetStridedSlice Op, Roll Op.
2. Add/Update torch mapping for adaptive_avg_pool3d, eq, pad, roll,
matmul, new_zeros, batch_norm, MultiHeadAttention.
3. Update torch Linear mapping to optionally accept transposed weights.
4. Fix a bug where an empty graph would output a zero tensor instead of
the input/weight.
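The indexing patterns covered by item 1 can be illustrated with plain PyTorch (the hidet-side operator names SetStridedSlice and Roll come from the commit message; the calls below are standard torch):

```python
import torch

x = torch.zeros(4, 6)
x[1:3, ::2] = 1.0                    # strided-slice write (__setitem__)
y = x[::2, 1:4]                      # strided-slice read (__getitem__)
z = torch.roll(x, shifts=1, dims=0)  # roll, one of the newly mapped ops
```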
…#345)

Encountered a few minor issues when compiling a transformer-based model
with torch.compile at very large batch sizes; submitting the fixes here.
This is a continuation of #347.

1. Add LP normalization task (ToDo: schedule template)
2. Add torch mappings for normalize, clone, zero_, exp, chunk
3. Add ceil_mode=True support for pool2d
4. Fix dtype issue in resize
5. Fix other bugs in pad, conv2d_pattern
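For item 3, a small plain-PyTorch check of what `ceil_mode=True` changes in pool2d output sizing (shown for illustration, not hidet code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 6, 6)
# floor mode (default): output side = floor((6 - 3) / 2) + 1 = 2
a = F.max_pool2d(x, kernel_size=3, stride=2)
# ceil mode: output side = ceil((6 - 3) / 2) + 1 = 3
b = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
```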
Add an ad-hoc implementation of einsum based on pattern matching. Only
supports batched matmul.
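A minimal sketch of the pattern-matching idea, using a hypothetical helper name and plain torch (the actual hidet implementation may differ): only the batched-matmul equation is recognized, and anything else is rejected.

```python
import torch

def einsum_bmm(equation: str, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Match the single supported pattern: batched matrix multiplication.
    if equation.replace(' ', '') == 'bij,bjk->bik':
        return torch.bmm(a, b)
    raise NotImplementedError('only batched matmul is supported')
```

On matching inputs this agrees with `torch.einsum('bij,bjk->bik', a, b)`; any other equation raises.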
@xinli-git xinli-git merged commit a696890 into auto-parallel Aug 24, 2023
vadiklyutiy pushed a commit that referenced this pull request Jul 27, 2024
added torch.t for mobilebert-uncased model

---------

Co-authored-by: Zhumakhan <nazirzhumakhan@gmail.com>
vadiklyutiy pushed a commit that referenced this pull request Dec 26, 2024
added torch.t for mobilebert-uncased model

---------

Co-authored-by: Zhumakhan <nazirzhumakhan@gmail.com>