Add GLPNImageProcessorFast #41725
Conversation
molbap left a comment:
Hey, thanks for starting this! Left some initial comments :)
```python
if return_tensors:
    # Detect heterogeneous shapes
    shapes = {tuple(img.shape) for img in reordered}
    if len(shapes) == 1:
        # all images same shape -> safe to stack
        processed = torch.stack(reordered, dim=0)
        tensor_type = return_tensors
    else:
        # mimic slow processor: leave as list so BatchFeature won't tensorize
        processed = [img.cpu().numpy() for img in reordered]
        tensor_type = None
else:
    processed = reordered
    tensor_type = None

return BatchFeature(data={"pixel_values": processed}, tensor_type=tensor_type)
```
this part isn't "fast": it converts to numpy when shapes differ, which is why test_slow_fast_equivalence_batched fails; when shapes differ, tensor_type is set to None
hey, I'm pretty confident test_slow_fast_equivalence_batched will fail with this setup currently. Also, looking at the slow test, what would cause the shapes to become heterogeneous, if not the lack of resizing? In that case let's pad the batch and return it as a tensor IMO
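For what it's worth, a minimal sketch of that padding idea, assuming `reordered` is a list of CHW tensors as in the snippet above and that zero padding on the bottom/right is acceptable for GLPN:

```python
import torch
import torch.nn.functional as F

# Pad every image up to the batch-wide max height/width, then stack.
# Zero padding and bottom/right placement are assumptions here.
max_h = max(img.shape[-2] for img in reordered)
max_w = max(img.shape[-1] for img in reordered)
padded = [
    # F.pad pads the last dims first: (left, right, top, bottom)
    F.pad(img, (0, max_w - img.shape[-1], 0, max_h - img.shape[-2]))
    for img in reordered
]
processed = torch.stack(padded, dim=0)
```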
Got it.
- Simplified to_dict() method
- Keep tensors as torch instead of converting to numpy for heterogeneous shapes
- Removed unnecessary shape guards in post_process_depth_estimation
- Improved variable names (tgt -> target_size, d -> resized)
- Removed unnecessary GLPNImageProcessorKwargs class
Thanks a lot for reviewing Pablo! I've made the changes.
molbap left a comment:
Thanks for iterating! Did a second review 🤗
```python
stacked_images = self.rescale(stacked_images, rescale_factor)
if do_normalize:
    stacked_images = self.normalize(stacked_images, image_mean, image_std)
```
We can fuse the rescale and normalize ops with rescale_and_normalize
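For reference, a sketch of what the fused call might look like; the exact signature of `rescale_and_normalize` should be checked against the current `BaseImageProcessorFast`:

```python
# One pass instead of two: the helper applies rescaling and normalization
# together, skipping whichever of do_rescale / do_normalize is False.
stacked_images = self.rescale_and_normalize(
    stacked_images, do_rescale, rescale_factor, do_normalize, image_mean, image_std
)
```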
```python
# avoid validation error: inject dummy size/resample for validate_preprocess_arguments
if size is None:
    size = {"height": 480, "width": 640}
```
that should not be needed, let's define defaults in the __init__ rather
```python
do_normalize = False
resample = PILImageResampling.BILINEAR
size_divisor = 32
# Don't persist an explicit `size` for GLPN (slow doesn't)
```
it's fine to persist here
```python
image_std = IMAGENET_STANDARD_STD
size = {"height": 480, "width": 640}  # only for validation; we still crop, not resize
interpolation = F.InterpolationMode.BILINEAR
# valid_kwargs = GLPNImageProcessorKwargs
```
Suggested change:
```diff
- # valid_kwargs = GLPNImageProcessorKwargs
+ valid_kwargs = GLPNImageProcessorKwargs
```
and import that from slow
I defined the kwargs in the slow processor and imported it.
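For illustration, the pattern could look roughly like this (the `ImagesKwargs` base class and the exact field set are assumptions; the real definition lives in the slow GLPN module):

```python
# In image_processing_glpn.py (slow): declare the model-specific kwargs once
from typing import Optional
from transformers.processing_utils import ImagesKwargs

class GLPNImageProcessorKwargs(ImagesKwargs, total=False):
    size_divisor: Optional[int]  # GLPN rounds image dims down to multiples of this

# In image_processing_glpn_fast.py: reuse the same definition
# from .image_processing_glpn import GLPNImageProcessorKwargs
```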
```python
# Don't persist an explicit `size` for GLPN (slow doesn't)
image_mean = IMAGENET_STANDARD_MEAN
image_std = IMAGENET_STANDARD_STD
size = {"height": 480, "width": 640}  # only for validation; we still crop, not resize
```
ah but size is actually defined here - no need to re-define it after!
```python
# ensure only slow keys are serialized
def to_dict(self):
    d = super().to_dict()

    # Keep only these keys with their values (everything else gets set to None)
    keys_to_keep = {
        "image_processor_type",
        "_processor_class",  # Identity metadata
        "do_resize",
        "size_divisor",
        "resample",
        "do_rescale",  # Core GLPN params
        "default_to_square",
        "data_format",  # Fast processor params
    }

    # Set all other keys to None (don't persist their values)
    for key in list(d.keys()):
        if key not in keys_to_keep:
            d[key] = None

    return d
```
no single-letter variables, please
Ahh my bad! Sorry.
```python
        return d

    @torch.no_grad()
```
Suggested change (delete the decorator):
```diff
- @torch.no_grad()
```
```python
self.assertTrue(tuple(encoded_images.shape) == (1, *expected_output_image_shape))
self.image_processing_class.num_channels = 3

def test_equivalence_slow_fast(self):
```
Naming should align with the rest of the lib:
Suggested change:
```diff
- def test_equivalence_slow_fast(self):
+ def test_slow_fast_equivalence(self):
```
and another test should be added: test_slow_fast_equivalence_batched (a rough sketch below)
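A rough shape for that batched test, assuming torch is imported and the test mixin exposes `image_processing_class`, `fast_image_processing_class`, and an `image_processor_dict` as in other fast-processor tests:

```python
def test_slow_fast_equivalence_batched(self):
    # Compare slow vs fast outputs on a batch of images rather than a single one.
    dummy_images = [torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8) for _ in range(4)]
    image_processor_slow = self.image_processing_class(**self.image_processor_dict)
    image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict)

    encoding_slow = image_processor_slow(dummy_images, return_tensors="pt")
    encoding_fast = image_processor_fast(dummy_images, return_tensors="pt")

    # Allow small numerical differences between the numpy and torch paths.
    torch.testing.assert_close(
        encoding_fast.pixel_values, encoding_slow.pixel_values, atol=1e-1, rtol=1e-3
    )
```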
Address all review feedback and add explicit batched test:
- Simplified to_dict() with descriptive variable names (d -> output_dict)
- Fixed resize operation: changed from crop to proper resize with interpolation
- Added padding for heterogeneous batch shapes in both slow and fast processors
- Fused rescale and normalize operations for efficiency
- Improved all variable names (tgt -> target_size, d -> depth_4d -> resized)
- Added GLPNImageProcessorKwargs class in slow processor and imported it in fast
- Renamed test_equivalence_slow_fast to test_slow_fast_equivalence
- Added explicit test_slow_fast_equivalence_batched test
- All 20 tests passing
Thank you! I've made the changes.
Hi! Is there any further review required, or anything I should change in the implementation? Please let me know. Thank you!
molbap left a comment:
I left additional comments because I'm not 100% convinced by the padding logic. Let's make sure it's needed, and if it is, let's use existing methods!
```python
# If BaseImageProcessorFast supports it, this makes persistence explicit:
try:
    config_keys = {"do_resize", "size_divisor", "resample", "do_rescale"}
except Exception:
    pass
```
I'm not sure why we want to persist these keys? Might be a misunderstanding on my end
Removed them.
```python
# Pad each image to max dimensions
padded_images = []
for img in images:
    h, w = img.shape[-2:]
    if h < max_height or w < max_width:
        # Create padded array with zeros
        padded = np.zeros((*img.shape[:-2], max_height, max_width), dtype=img.dtype)
        padded[..., :h, :w] = img
        padded_images.append(padded)
    else:
        padded_images.append(img)
images = padded_images
```
Let's use np.pad in the slow path
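With np.pad the loop above condenses to something like this (constant zero padding on the bottom/right, matching the original behavior):

```python
import numpy as np

padded_images = [
    # Pad only the trailing (height, width) axes up to the batch max;
    # any leading axes (e.g. channels) are left untouched.
    np.pad(
        img,
        [(0, 0)] * (img.ndim - 2)
        + [(0, max_height - img.shape[-2]), (0, max_width - img.shape[-1])],
    )
    for img in images
]
images = padded_images
```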
```python
reordered = reorder_images(processed_groups, grouped_index)

if return_tensors:
    # Detect heterogeneous shapes
```
are there heterogeneous shapes or not? Else a pattern like
```python
processed_images = torch.stack(processed_images, dim=0) if return_tensors else processed_images
return BatchFeature(data={"pixel_values": processed_images}, tensor_type=return_tensors)
```
would be much preferred. Else let's at least extract the padding logic to a function; look in image processing utils fast, there's a padding method already. Why not use it?
Yes, it's producing heterogeneous shapes. I used the pad function from utils.
Thanks a lot for reviewing! Appreciate your help.
yonigozlan left a comment:
Hey @Aravind-11, thanks a lot for working on this! I made some final changes to get this merged. Mostly removed the padding logic so as not to break BC as it wasn't in the original image processor.
I'll merge when the CI passes!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thank you so much @yonigozlan for the necessary commits and review! Does the failing 'tests_non_model' arise from the PR?
I don't think so, I'm seeing it in other PRs...
[For maintainers] Suggested jobs to run (before merge): run-slow: auto, glpn
* Add GLPNImageProcessorFast for torch backend
* Address review feedback
  - Simplified to_dict() method
  - Keep tensors as torch instead of converting to numpy for heterogeneous shapes
  - Removed unnecessary shape guards in post_process_depth_estimation
  - Improved variable names (tgt -> target_size, d -> resized)
  - Removed unnecessary GLPNImageProcessorKwargs class
* commits after 2nd review
* Address all review feedback and add explicit batched test
  - Simplified to_dict() with descriptive variable names (d -> output_dict)
  - Fixed resize operation: changed from crop to proper resize with interpolation
  - Added padding for heterogeneous batch shapes in both slow and fast processors
  - Fused rescale and normalize operations for efficiency
  - Improved all variable names (tgt -> target_size, d -> depth_4d -> resized)
  - Added GLPNImageProcessorKwargs class in slow processor and imported in fast
  - Renamed test_equivalence_slow_fast to test_slow_fast_equivalence
  - Added explicit test_slow_fast_equivalence_batched test
  - All 20 tests passing
* using padding from utils
* simplify glpn image processor fast
* fix docstring

Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
What does this PR do?
This PR adds a fast image processor for the GLPN model, implemented as GLPNImageProcessorFast.

Fixes # (issue)
Before submitting
- Implemented GLPNImageProcessorFast using BaseImageProcessorFast.

🧪 Testing
The batched slow/fast equivalence test (test_slow_fast_equivalence_batched) currently fails. I would like some help here.

📄 Files updated
- src/transformers/models/glpn/image_processing_glpn_fast.py
- src/transformers/models/glpn/__init__.py
- src/transformers/models/auto/image_processing_auto.py
- tests/models/glpn/test_image_processing_glpn.py
- docs/source/en/model_doc/glpn.md

Before submitting
- Ran make style and make quality.

Who can review?
@yonigozlan @molbap