[Fixbug] Add _stacklevel to pytorch softmax mapping#178

Merged
yaoyaoding merged 1 commit into hidet-org:main from yaoyaoding:fix-177
Apr 17, 2023
Conversation


@yaoyaoding yaoyaoding commented Apr 17, 2023

Fix #177.
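For context on the fix: `torch.nn.functional.softmax` passes a private `_stacklevel` keyword argument (used internally to set the stack depth of deprecation warnings), so any frontend function that stands in for it must accept that keyword even if it ignores it. The following is a minimal pure-Python sketch of this pattern, not hidet's actual mapping code; the numerically stable softmax body is illustrative.

```python
import math

def softmax(x, dim=-1, _stacklevel=3, dtype=None):
    # `_stacklevel` is accepted purely for signature compatibility with
    # torch.nn.functional.softmax; in PyTorch it only controls warning
    # stack depth and has no effect on the computed values, so a mapping
    # can safely ignore it. `dim` and `dtype` are likewise accepted for
    # compatibility; this 1-D sketch does not use them.
    m = max(x)
    exps = [math.exp(v - m) for v in x]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]
```

Without the `_stacklevel` parameter in the signature, a call forwarded from PyTorch raises exactly the error reported in #177: `softmax() got an unexpected keyword argument '_stacklevel'`.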

@yaoyaoding yaoyaoding merged commit 3e7d959 into hidet-org:main Apr 17, 2023
@yaoyaoding yaoyaoding deleted the fix-177 branch April 17, 2023 19:46
vadiklyutiy added a commit that referenced this pull request Jul 22, 2024
I noticed that we spend significant time on process creation in `parallel_imap`.

Add a `chunksize` argument to `pool.imap` to reduce this overhead.

**Results.**
`time python bench_op.py matmul_f16 --params 1x4096x4096,1x4096x4096 --dtype float16`
`time python bench_op.py batch_matmul --params 1x4096x4096,1x4096x4096 --dtype float16`

| Test | Before | After | Improvement |
|--------|--------|--------|--------|
| matmul_f16 | 42.768 s | 42.138 s | 1.5% |
| batch_matmul | 34m29.1s | 34m10.1s | 0.9% |
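The `chunksize` idea can be sketched with the standard library directly. By default, `Pool.imap` dispatches one item per inter-process round trip; a larger `chunksize` batches items so each worker pickup amortizes the dispatch cost over many tasks. The helper below is a minimal illustration, not hidet's `parallel_imap` itself; the function names and the pool size are assumptions.

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for the per-item compilation work done inside parallel_imap.
    return x * x

def parallel_squares(n, chunksize=100):
    # chunksize=100 hands each worker 100 items per pickup instead of 1,
    # cutting the per-task IPC/dispatch overhead that imap otherwise pays.
    with Pool(4) as pool:
        return list(pool.imap(square, range(n), chunksize=chunksize))

if __name__ == "__main__":
    print(parallel_squares(1000)[:5])
```

The trade-off is granularity: a very large `chunksize` can leave workers idle near the end of the iterable, so the value is best tuned against the typical workload size, which is what the benchmarks above measure.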
vadiklyutiy added a commit that referenced this pull request Jul 23, 2024
vadiklyutiy added a commit that referenced this pull request Dec 26, 2024


Successfully merging this pull request may close these issues.

[Bug] softmax() got an unexpected keyword argument '_stacklevel'
