Channel Dropout Resolved review comments #541

Merged

r-abishek merged 29 commits into r-abishek:ar/dropout_channel from RooseweltMcW:apr/channel_dropout
Dec 17, 2025
Conversation

@RooseweltMcW

No description provided.

arvindcheru and others added 20 commits October 29, 2025 03:06
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
* Solarize HIP and HOST implementation

* cleanup the code and fix pkd3-pkd3 performance

* Add golden output and doxygen comments

* Add checks for Threshold param

* modified case num for solarize

* minor fix

* fix load/store calls

* Address review comments

* minor fix

---------

Co-authored-by: HazarathKumarM <hazarathkumar@multicorewareinc.com>
Co-authored-by: Maddisetty <hmaddise@ctr2-alola-login-01.amd.com>
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
…ocs/sphinx (ROCm#622)

Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.24.1 to 1.25.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.25.0/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v1.24.1...v1.25.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core[api_reference]
  dependency-version: 1.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Initial hip memory fixes

* Further hip warning fixes

* Default build warning fixes - group III

---------

Co-authored-by: Srihari-mcw <srihari@multicorewareinc.com>
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
…ocs/sphinx (ROCm#623)

Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.25.0 to 1.26.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.26.0/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v1.25.0...v1.26.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core[api_reference]
  dependency-version: 1.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Initial changes for tensor sum for PLN1 and PKD3 variants U8

* Make changes for rest of the bit depths and layouts

* Remove unused pack functions

* Adjust comments

* Initial cleanup

* Further cleanup

* Remove unused functions

* Address copilot comments

* Further standardize comments

* Standardize comments spacing in functions affected

---------

Co-authored-by: Srihari-mcw <srihari@multicorewareinc.com>
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
* removed numbers for bitdepths and output toggle

* Add layout enum and replicate changes to HOST testsuite

* Fix copilot review comments

* resolve review comments

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* minor bugfix

* minor bugfix

* minor fix

---------

Co-authored-by: HazarathKumarM <hazarathkumar@multicorewareinc.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
…ocs/sphinx (ROCm#632)

Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.26.0 to 1.27.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v1.26.0...v1.27.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core[api_reference]
  dependency-version: 1.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Copilot AI left a comment

Pull request overview

This PR refactors the channel dropout implementation to resolve review comments by moving mask generation logic from kernel/executor functions to the top-level API functions. The key change is replacing the randomSeed boolean parameter with a pre-generated maskPtr, making the implementation more efficient and separating concerns between mask generation and application.

Key Changes:

  • Moved channel dropout mask generation from kernel/executor functions to the top-level rppt_channel_dropout_host and rppt_channel_dropout_gpu functions
  • Changed function signatures to pass pre-generated masks instead of a random seed boolean
  • Added error code capture in test files for better error handling

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated no comments.

Summary per file:

  • utilities/test_suite/HOST/Tensor_image_host.cpp: Added error code capture for channel dropout host function call
  • utilities/test_suite/HIP/Tensor_image_hip.cpp: Added error code capture for channel dropout GPU function call
  • src/modules/tensor/rppt_tensor_effects_augmentations.cpp: Moved mask generation logic to top-level functions; updated function calls to pass mask pointers instead of random seed
  • src/modules/tensor/hip/kernel/channel_dropout.cpp: Removed mask generation code from kernel executor; updated to receive pre-generated masks
  • src/modules/tensor/cpu/kernel/channel_dropout.cpp: Removed mask generation from kernel function; renamed local variable for clarity; updated to use pre-generated masks
  • src/include/tensor/host_tensor_executors.hpp: Updated function signature to accept mask pointer instead of random seed boolean
  • src/include/tensor/hip_tensor_executors.hpp: Updated function signature to accept device mask pointer instead of random seed boolean


@r-abishek r-abishek added the minor fix Minor implementation fixes label Dec 3, 2025
Srihari-mcw pushed a commit to Srihari-mcw/rpp that referenced this pull request Dec 8, 2025
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>
* \param [in] dstDescPtr destination tensor descriptor (Restrictions - numDims = 4, offsetInBytes >= 0, dataType = U8/F16/F32/I8, layout = NCHW/NHWC, c = same as that of srcDescPtr)
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] channelTensor channel dropout tensor (1D Rpp32f tensor in HOST memory, of size batchSize * channels Values must be 0.0f (Drop) or 1.0f (Keep) for each channel of each image in batch)
Collaborator

change this variable to dropoutTensor, applicable to all instances

Author

Done

* \param [in] dstDescPtr destination tensor descriptor (Restrictions - numDims = 4, offsetInBytes >= 0, dataType = U8/F16/F32/I8, layout = NCHW/NHWC, c = same as that of srcDescPtr)
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] dropoutTensor channel dropout tensor (1D Rpp32f tensor in HOST memory, of size batchSize * channels Values must be 0.0f (Drop) or 1.0f (Keep) for each channel of each image in batch)
Collaborator

check this comment, it shouldn't be HOST memory

Collaborator

Also should be Rpp8u instead of Rpp32f

Author

Done

* \param [in] dstDescPtr destination tensor descriptor (Restrictions - numDims = 4, offsetInBytes >= 0, dataType = U8/F16/F32/I8, layout = NCHW/NHWC, c = same as that of srcDescPtr)
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] dropoutTensor channel dropout tensor (1D Rpp32f tensor in HOST memory, of size batchSize * channels Values must be 0.0f (Drop) or 1.0f (Keep) for each channel of each image in batch)
Collaborator

Rpp8u*

Author

Done

RpptDescPtr dstDescPtr,
Rpp32f *dropoutProbability,
bool randomSeed,
Rpp8u *d_dropoutTensor,
Collaborator

Is d_dropoutTensor the naming convention here? I doubt it, pls check

Author

Used a naming convention similar to the one in the fog kernel.

Collaborator

d_ will be used only if the memory allocated to the tensor is completely device-only memory. Please verify and modify this if needed

Author

Done

Rpp8u *dropoutTensor = nullptr;
if(testCase == CHANNEL_DROPOUT)
CHECK_RETURN_STATUS(hipHostMalloc(&dropoutProbability, batchSize * sizeof(Rpp32f)));
CHECK_RETURN_STATUS(hipHostMalloc(&dropoutTensor, batchSize * 3 * sizeof(Rpp32f)));
Collaborator

Maybe this should be batchSize * srcDescPtr->c. I have a doubt: is channel dropout specific to color images? Is there any condition for this somewhere?

Author

Yes, but PLN1 is also handled in such a way that it runs and keeps the one channel, and if the user wants, based on the probability, the result can now also be a black image.

Collaborator

in that case one value is enough, right? Why create a tensor with 3 values?

Collaborator

this needs to be checked

Collaborator

@Srihari-mcw Srihari-mcw left a comment

Pls make the requested changes or reply why the current code is valid

* \param [in] dstDescPtr destination tensor descriptor (Restrictions - numDims = 4, offsetInBytes >= 0, dataType = U8/F16/F32/I8, layout = NCHW/NHWC, c = same as that of srcDescPtr)
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] dropoutTensor channel dropout tensor (1D Rpp8u tensor in HOST memory, of size batchSize * channels Values must be 0.0f (Drop) or 1.0f (Keep) for each channel of each image in batch)
Collaborator

In the comment it is said it is an Rpp8u tensor, but you are mentioning float values like 0.0f and 1.0f. Please change them to 0 or 1

Author

Done

RpptDescPtr dstDescPtr,
Rpp32f *dropoutProbability,
bool randomSeed,
Rpp8u *d_dropoutTensor,
Collaborator

d_ will be used only if the memory allocated to the tensor is completely device-only memory. Please verify and modify this if needed


if (srcDescPtr->dataType != dstDescPtr->dataType) return RPP_ERROR_INVALID_SRC_OR_DST_DATATYPE;

Rpp8u *d_dropoutTensor = reinterpret_cast<Rpp8u *>(rpp::deref(rppHandle).GetInitHandle()->mem.mgpu.scratchBufferHip.floatmem);
Collaborator

any reason why we are not passing pinned memory / HIP memory itself from the test suite?

this will not work if we need a unified API for this kernel

Author

Modified, Done.

* \param [in] dstDescPtr destination tensor descriptor (Restrictions - numDims = 4, offsetInBytes >= 0, dataType = U8/F16/F32/I8, layout = NCHW/NHWC, c = same as that of srcDescPtr)
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] dropoutTensor channel dropout tensor (1D Rpp8u tensor in HOST memory, of size batchSize * channels Values must be 0.0f (Drop) or 1.0f (Keep) for each channel of each image in batch)
Collaborator

same comment as HOST

Collaborator

still the description says it is a host buffer, please modify it

Author

Done

Rpp32f seed = qaFlag ? DROPOUT_FIXED_SEED : std::random_device{}();
for (i = 0; i < batchSize; i++)
dropoutProbability[i] = 0.4f;
Rpp8u dropoutTensor[batchSize * 3];
Collaborator

here also remove the hardcoded 3, use channels instead

Author

Done

@HazarathKumarM HazarathKumarM changed the base branch from ar/dropout_channel to master December 12, 2025 11:23
@HazarathKumarM HazarathKumarM changed the base branch from master to ar/dropout_channel December 12, 2025 11:24
* \param [in] dropoutProbability dropout probability for channel dropout calculation (1D Rpp32f tensor in HOST memory, of size batchSize with 0 <= dropProb[i] <= 1 for each image in batch)
* \param [in] randomSeed randomSeed single bool to control the random number generator's seed ( 0 - Fixed seed for QA , 1 - random seed generated for randomness)
* \param [in] dropoutTensor channel dropout tensor (1D Rpp8u tensor in HIP memory, of size batchSize * channels Values must be 0 (Drop) or 1 (Keep) for each channel of each image in batch)
* \param [in] roiTensorPtrSrc ROI data in HIP memory, for each image in source tensor (2D tensor of size batchSize * 4, in either format - XYWH(xy.x, xy.y, roiWidth, roiHeight) or LTRB(lt.x, lt.y, rb.x, rb.y))
Collaborator

is it pinned or HIP?

Author

Modified, Done

Rpp8u *dropoutTensor = nullptr;
if(testCase == CHANNEL_DROPOUT)
CHECK_RETURN_STATUS(hipHostMalloc(&dropoutProbability, batchSize * sizeof(Rpp32f)));
CHECK_RETURN_STATUS(hipHostMalloc(&dropoutTensor, batchSize * 3 * sizeof(Rpp32f)));
Collaborator

this needs to be checked

@r-abishek r-abishek changed the base branch from ar/dropout_channel to develop December 16, 2025 04:11
@HazarathKumarM HazarathKumarM changed the base branch from develop to master December 16, 2025 04:12
@r-abishek r-abishek changed the base branch from master to ar/dropout_channel December 16, 2025 04:12
@RooseweltMcW
Author

@r-abishek resolved all merge conflicts and review comments.

T *dstPtr,
uint2 dstStridesNH,
uint8_t *channelMask,
uint8_t *dropoutTensor,
Collaborator

uint8_t is supposed to be the datatype. I understand it's equivalent to Rpp8u, but is it following the convention from other kernel files?

Author

Took reference from posterize kernel and modified the datatype to Rpp8u, Done.

RpptDescPtr dstDescPtr,
Rpp32f *dropoutProbability,
bool randomSeed,
Rpp8u *dropoutTensor,
Owner

@Srihari-mcw or @HazarathKumarM Can't it be a bool array?

Owner

@r-abishek r-abishek left a comment

Pls only check the comment on using bool

@r-abishek r-abishek merged commit 8a1f90e into r-abishek:ar/dropout_channel Dec 17, 2025
ManasaDattaT pushed a commit to ManasaDattaT/rpp that referenced this pull request Dec 19, 2025
Co-authored-by: Kiriti Gowda <kiritigowda@gmail.com>

Labels

minor fix Minor implementation fixes


7 participants