Integrate colqwen2.5 using colqwen2 modelling code #40600
Conversation
Nice job! 👍️ Please see my remarks below.
CI/CD tests are breaking because double quotes are used inside f-strings that are themselves delimited with double quotes. But that error will disappear anyway, because we should avoid using f-strings inside loggers in general.
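For context, a minimal illustration of the recommended lazy %-formatting (the variable name here is made up, not from the PR):

```python
import logging

logger = logging.getLogger(__name__)
checkpoint = "vidore/colqwen2.5-v0.2"  # made-up example value

# Discouraged: the f-string is formatted eagerly even when the log level
# filters the record out, and double quotes nested inside a double-quoted
# f-string are a syntax error before Python 3.12.
logger.info(f"Loading weights from {checkpoint}")

# Preferred: %-style arguments are formatted only if the record is emitted.
logger.info("Loading weights from %s", checkpoint)
```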
Three review threads on src/transformers/models/colqwen2/convert_colqwen2_weights_to_hf.py (outdated, resolved).
Seems like document retrieval / VLM so cc @zucchini-nlp and maybe @tomaarsen ?
yonigozlan left a comment
Hello @sahil-kabir, thanks for contributing! Very nice if we can support colqwen2.5 without needing to add a whole new model 🤗.
Could you add a mention of colqwen2.5 support in the documentation as well? In transformers/docs/source/en/model_doc/colqwen2.md.
It would be great to add an integration test for colqwen2.5 like we have for colqwen2 in transformers/tests/models/colqwen2/test_modeling_colqwen2.py.
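For readers following along, a rough sketch of what such a test could check, mirroring the existing ColQwen2 integration tests. The repo id is the endpoint discussed later in this PR, and the merged test compares against hardcoded scores from the original implementation rather than this simple shape check:

```python
import torch
from transformers import ColQwen2ForRetrieval, ColQwen2Processor

model_id = "Sahil-Kabir/colqwen2.5-v0.2-hf"  # endpoint created for this PR
model = ColQwen2ForRetrieval.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = ColQwen2Processor.from_pretrained(model_id)

inputs = processor.process_queries(["Is attention really all you need?"])
with torch.no_grad():
    embeddings = model(**inputs).embeddings

# ColQwen2 produces multi-vector embeddings: one embedding_dim-sized vector
# per token (embedding_dim is 128 for these checkpoints).
assert embeddings.shape[-1] == model.config.embedding_dim
```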
```python
dtype, device = self._get_dtype_device()
pixel_values = pixel_values.to(dtype=dtype, device=device)
```
Not a big fan of this; it'd be great if we can avoid having `use_qwen2_5` in the config. Let's just use the dtype and device of `inputs_embeds`:
```diff
- dtype, device = self._get_dtype_device()
- pixel_values = pixel_values.to(dtype=dtype, device=device)
+ pixel_values = pixel_values.to(inputs_embeds.device, inputs_embeds.dtype)
```
BTW, in the Qwen2 vision tower we cast pixels to the correct dtype manually, so this is not needed. Also, the LM and vision tower might be loaded with different dtypes and devices in specific cases :)
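For readers unfamiliar with that code path: the Qwen2-VL patch embedding casts incoming pixels itself. A paraphrased, illustrative version of that pattern (not the verbatim source):

```python
import torch
from torch import nn

class PatchEmbed(nn.Module):
    """Paraphrased from the Qwen2-VL vision tower for illustration."""

    def __init__(self, in_channels=3, patch_size=14, temporal_patch_size=2, embed_dim=1280):
        super().__init__()
        self.in_channels = in_channels
        self.patch_size = patch_size
        self.temporal_patch_size = temporal_patch_size
        self.embed_dim = embed_dim
        kernel = (temporal_patch_size, patch_size, patch_size)
        self.proj = nn.Conv3d(in_channels, embed_dim, kernel_size=kernel, stride=kernel, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The cast to the projection weight's dtype happens here, inside the
        # vision tower, so a manual pixel_values cast upstream is redundant.
        target_dtype = self.proj.weight.dtype
        hidden_states = hidden_states.view(
            -1, self.in_channels, self.temporal_patch_size, self.patch_size, self.patch_size
        ).to(dtype=target_dtype)
        return self.proj(hidden_states).view(-1, self.embed_dim)
```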
@zucchini-nlp Did I understand correctly that this line could be removed entirely and it should work anyway?
@sahil-kabir Maybe worth a quick try. 😉
yep, correct
```python
def _get_dtype_device(self) -> tuple[str, str]:
    if self.config.use_qwen2_5:
        parameters = next(self.vlm.visual.parameters())
    else:
        parameters = next(self.parameters())
    dtype, device = parameters.dtype, parameters.device
    return dtype, device
```
No need for that then
```diff
- def _get_dtype_device(self) -> tuple[str, str]:
-     if self.config.use_qwen2_5:
-         parameters = next(self.vlm.visual.parameters())
-     else:
-         parameters = next(self.parameters())
-     dtype, device = parameters.dtype, parameters.device
-     return dtype, device
```
```python
config = ColQwen2Config(
    vlm_config=original_config,
    embedding_dim=128,  # hardcoded in the original model
    use_qwen2_5=use_qwen2_5,
```
Instantiate a qwen2_5 config instead
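A minimal sketch of what that suggestion could look like in the conversion script, assuming the script's existing `use_qwen2_5` flag and `original_config` dict (not necessarily the code that was merged):

```python
from transformers import ColQwen2Config, Qwen2VLConfig, Qwen2_5_VLConfig

use_qwen2_5 = True    # CLI flag in the conversion script (assumed)
original_config = {}  # config dict loaded from the original checkpoint (assumed)

# Select the matching VLM config class up front instead of carrying a
# use_qwen2_5 flag inside ColQwen2Config.
vlm_config_cls = Qwen2_5_VLConfig if use_qwen2_5 else Qwen2VLConfig
config = ColQwen2Config(
    vlm_config=vlm_config_cls(**original_config),
    embedding_dim=128,  # hardcoded in the original model
)
```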
…en2_modelling_code
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
…en2_modelling_code
Hey @yonigozlan, integration tests and documentation are updated. I have details on what I did to make this work in the PR description above. Going to undraft this now 👍
yonigozlan left a comment
Nice!
Very last thing to do would be to add an integration test directly against the original implementation. Other than that, LGTM!
@tonywu71 Do you think we could add the converted checkpoint to the vidore org and here?
Now waiting on @ArthurZucker or @Cyrilvallez final approval to get this merged!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@yonigozlan Should we also add the code and clear instructions for merging adapter weights either to
run-slow: colqwen2
This comment contains run-slow, running the specified jobs: models: ['models/colqwen2']
Cyrilvallez left a comment
LGTM! Thanks a lot, no diff to the original model 🤗
@yonigozlan, feel free to merge once everyone is happy and the tests are green! 🤗
Yes absolutely! Users should be able to convert the original colqwen2.5 weights to the Transformers weights using
@antonioloison and @QuentinJGMace have the admin rights on the repository and are carrying the initial work on ColPali, can you check with them please? 🙏🏼
@yonigozlan confirmed the endpoint matches the original implementation and tests are added. I don't have access to a GPU with CUDA support (I have cpu and mps) so I set
…5_using_colqwen2_modelling_code
Hey @sahil-kabir ! Sorry for the delay, I just updated the integration tests with the values I get when running on our CI hardware.
@bot /style
Style bot fixed some files and pushed the changes.
[For maintainers] Suggested jobs to run (before merge): run-slow: colqwen2
yonigozlan left a comment
Thanks for adding the conversion script @sahil-kabir ! Waiting for the CI to pass and I'll merge :)
* adding option for 2.5
* minor - arg in conversion script
* getting started on modelling.py
* minor - shouldve been using modular
* adressing comments + fixing datatype/device _get method
* minor
* commiting suggestion

Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>

* docs + first test
* ruff fix
* minor fix
* ruff fix
* model fix
* model fix
* fine-grained check, with a hardcoded score from the original Hf implementation.
* minor ruff
* update tests values with CI hardware
* adding 2.5 to conversion script
* Apply style fixes

---------

Co-authored-by: Sahil Kabir <sahilkabir@Sahils-MacBook-Pro.local>
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Main contributor: @sahil-kabir
Guided by: @PavloFesenko
What does this PR do?
Fixes #39549 ("Is there a plan to integrate ColQwen2.5 into Transformers?")
An earlier PR attempted this but was closed because colqwen2.5 could be integrated without adding a whole new class for it. This PR allows weights for colqwen2.5 to be loaded using the existing colqwen2 class.
Edit:
Looks like passing in weights for colqwen2.5 and the config for Qwen2.5 works on its own. All I'm adding now is an integration test and documentation.
I made an endpoint with colqwen2.5 weights by merging the adapter weights from `vidore/colqwen2.5-v0.2` into `Qwen/Qwen2.5-VL-3B-Instruct` (as per the documentation on the model card for `vidore/colqwen2.5-v0.2`) and pushing those weights up to `Sahil-Kabir/colqwen2.5-v0.2-hf`. I also moved some extra config files from `vidore/colqwen2.5-v0.2` over to `Sahil-Kabir/colqwen2.5-v0.2`. I've added an integration test that uses my endpoint and shows that it works on the basic tests. The test I wrote is passing.
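For anyone reproducing that merge, a minimal sketch of one way to do it with PEFT, assuming `vidore/colqwen2.5-v0.2` is a standard LoRA adapter checkpoint (the model card's own instructions take precedence over this):

```python
import torch
from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration

# Load the base VLM, attach the ColQwen2.5 LoRA adapter, and fold the adapter
# weights into the base model.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "vidore/colqwen2.5-v0.2").merge_and_unload()
merged.save_pretrained("./colqwen2.5-v0.2-merged")  # then push to the Hub
```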
Who can review?
Hey @yonigozlan, with minimal code changes, it looks like colqwen2.5 can be imported using the colqwen2 class. By running

```
!python transformers/src/transformers/models/colqwen2/convert_colqwen2_weights_to_hf.py \
    --model_id cjkasbdkjnlakb/colqwen2.5-v0.2-merged \
    --original_vlm_name_or_path Qwen/Qwen2.5-VL-3B-Instruct \
    --output_dir ./colqwen2_5 \
    --using_qwen2_5
```

I'm able to get a model that seems to have similar outputs to the 2.5 in the colpali-engine repo. @PavloFesenko and I are working on figuring out where one should import the colqwen2.5 adapter weights from, tests, and documentation for now.
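The converted checkpoint in `--output_dir` can then be loaded with the existing ColQwen2 classes; a minimal usage sketch:

```python
from transformers import ColQwen2ForRetrieval, ColQwen2Processor

model = ColQwen2ForRetrieval.from_pretrained("./colqwen2_5")
processor = ColQwen2Processor.from_pretrained("./colqwen2_5")
```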