
[Options] Add option for controlling parallel build with number of jobs or memory reserved for each job#230

Merged
xinli-git merged 10 commits into hidet-org:main from xinli-git:parallel_build
May 24, 2023

Conversation

@xinli-git
Contributor

Add a new option for controlling parallel tuning. When unset, the original behavior is preserved. Two settings are exposed:

  • num_jobs: the maximum number of parallel jobs the user wishes to launch.
  • mem_gb_per_job: the amount of memory (in GB) reserved for each job. This takes precedence when the memory budget allows fewer jobs than num_jobs.
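The interaction between the two settings can be sketched as follows. This is an illustrative standalone model of the precedence rule described above, not hidet's actual implementation; the function name and signature are hypothetical.

```python
# Hypothetical sketch of the job-count logic: the effective number of
# parallel jobs is capped both by num_jobs and by how many jobs fit in
# memory at mem_gb_per_job each. The memory cap takes precedence when it
# allows fewer jobs than num_jobs. (Names are illustrative only.)
def effective_num_jobs(num_jobs: int, mem_gb_per_job: float, total_mem_gb: float) -> int:
    mem_limited = int(total_mem_gb // mem_gb_per_job)  # jobs that fit in memory
    return max(1, min(num_jobs, mem_limited))          # always launch at least one
```

For example, with 16 GB available and 4 GB reserved per job, requesting 8 jobs would launch only 4.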

@xinli-git xinli-git changed the title from "Parallel build" to "[Options] Add option for controlling parallel build with number of jobs or memory reserved for each job" on May 16, 2023
Member

@yaoyaoding yaoyaoding left a comment


Hi @xinli-git, thanks for the PR.

I prefer the name max_parallel_jobs or max_num_workers to num_jobs. I have also left some minor suggestions.

@yaoyaoding
Member

Thanks @xinli-git ! Looks good to me. Feel free to merge after passing the CI.

@xinli-git xinli-git merged commit 6152bc9 into hidet-org:main May 24, 2023
@xinli-git xinli-git deleted the parallel_build branch May 24, 2023 00:57
vadiklyutiy pushed a commit that referenced this pull request Jul 22, 2024
- Add tile-level operations such as `copy`, `mask`, `partition_src`, and
`partition_dst`.
- Add a pass to lower the tile-level operations to Hidet IR.
- Enhance the infrastructure to facilitate the lowering.

`copy`: Copy a tensor to another tensor.
`mask`: Create a mask tensor for the copy operation. Typically, this
operation is used when the tile size cannot evenly divide the matrix shape.
`partition_src`, `partition_dst`: These two operations partition a
tensor held by the entire thread block into subtensors held by a single
thread. These operations allow us to move expressions related to
`threadIdx` and `blockIdx` outside the loop.

---------

Co-authored-by: Xiao Zhang <xiao.zhang@centml.ai>
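The role of the `mask` operation in the commit above can be sketched with a small standalone model. This is not Hidet's actual IR; the function below is a hypothetical illustration of why a mask is needed when the tile size does not evenly divide the matrix shape.

```python
# Illustrative sketch (not Hidet's actual IR): when a matrix of shape
# (m, n) is processed in tiles of shape (tm, tn), the last tile in each
# dimension may extend past the matrix edge. A boolean mask marks the
# in-bounds elements so a copy can skip out-of-bounds accesses.
def tile_mask(tile_row, tile_col, tm, tn, m, n):
    """Return a tm x tn boolean mask for tile (tile_row, tile_col)."""
    return [
        [tile_row * tm + i < m and tile_col * tn + j < n for j in range(tn)]
        for i in range(tm)
    ]
```

For a 5x5 matrix tiled 4x4, tile (1, 1) covers only one valid element, so its mask is True only at position (0, 0).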
vadiklyutiy pushed a commit that referenced this pull request Jul 23, 2024
vadiklyutiy pushed a commit that referenced this pull request Dec 26, 2024