
Add Aria#34157

Merged
aymeric-roucher merged 141 commits into main from add-aria
Dec 6, 2024

Conversation

@aymeric-roucher
Contributor

What does this PR do?

Add rhymes-ai/Aria to transformers!

@ArthurZucker

@aymeric-roucher
Contributor Author

@ArthurZucker I'm hitting the error we discussed when running python utils/check_modular_conversion.py:

Traceback (most recent call last):
  File "/home/ubuntu/transformers/utils/check_modular_conversion.py", line 73, in <module>
    non_matching_files += compare_files(modular_file_path, args.fix_and_overwrite)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/transformers/utils/check_modular_conversion.py", line 53, in compare_files
    generated_modeling_content = convert_modular_file(modular_file_path)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/transformers/utils/modular_model_converter.py", line 1138, in convert_modular_file
    wrapper.visit(cst_transformers)
  File "/home/ubuntu/venv/aria/lib/python3.12/site-packages/libcst/metadata/wrapper.py", line 204, in visit
    return self.module.visit(visitor)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/aria/lib/python3.12/site-packages/libcst/_nodes/module.py", line 89, in visit
    result = super(Module, self).visit(visitor)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/aria/lib/python3.12/site-packages/libcst/_nodes/base.py", line 236, in visit
    leave_result = visitor.on_leave(self, with_updated_children)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/aria/lib/python3.12/site-packages/libcst/_visitors.py", line 71, in on_leave
    updated_node = leave_func(original_node, updated_node)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/transformers/utils/modular_model_converter.py", line 1111, in leave_Module
    self._recursively_add_all_new_needed_functions_in_files()
  File "/home/ubuntu/transformers/utils/modular_model_converter.py", line 1097, in _recursively_add_all_new_needed_functions_in_files
    dependency, body, self.all_definitions[dependency], parent=parent
                      ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'select_best_resolution'

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ArthurZucker
Collaborator

cc @Cyrilvallez as well!

Comment thread utils/modular_model_converter.py Outdated
Contributor Author


@ArthurZucker I added the second part of the check here to pass the test. It doesn't seem to cause any missed imports, so I left it in, but it may have unwanted consequences.

Collaborator


Nice, thanks then!

Member


Ha nice @aymeric-roucher, indeed I forgot about built-ins when adding this! Thanks for correcting it! If you only check for if dependency in self.all_definitions, that should be enough though, no? Built-ins are never added to self.all_definitions during the visit, so it would avoid having the large hard-coded list.
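A minimal sketch of the membership guard being discussed here (function and variable names are illustrative, not the converter's actual internals):

```python
# Illustrative sketch of the guard discussed above (names are hypothetical,
# not the real modular_model_converter.py internals).
def add_needed_dependencies(dependencies, all_definitions, add_definition):
    """Add each dependency whose definition was collected during the visit.

    Built-ins such as len or super are never recorded in all_definitions,
    so a plain membership check filters them out without maintaining a
    hard-coded list of built-in names.
    """
    for dependency in dependencies:
        if dependency in all_definitions:
            add_definition(dependency, all_definitions[dependency])
```

With this guard, a dependency like select_best_resolution is only added when its definition was actually found, and built-ins are silently skipped instead of raising KeyError.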

Contributor Author


It's done! ✅

@aymeric-roucher force-pushed the add-aria branch 4 times, most recently from fea2ddf to e5038aa on October 15, 2024 17:08
@aymeric-roucher marked this pull request as ready for review on October 15, 2024 17:44
@aymeric-roucher force-pushed the add-aria branch 2 times, most recently from 8908df0 to 7922635 on October 17, 2024 03:02
@aymeric-roucher force-pushed the add-aria branch 2 times, most recently from 68996e3 to c0f5774 on October 17, 2024 03:35
Comment thread src/transformers/models/aria/modular_aria.py Outdated
Comment thread src/transformers/models/aria/modular_aria.py Outdated
Comment thread src/transformers/models/aria/modular_aria.py
Comment thread src/transformers/models/aria/modular_aria.py Outdated
Collaborator

@ArthurZucker left a comment


LGTM, just [image: screenshot of a suggested change] to make it more readable / simpler.

Comment thread docs/source/en/index.md
Comment thread src/transformers/models/aria/modular_aria.py Outdated
@DarkLight1337

DarkLight1337 commented Jan 19, 2025

I'm unable to run the processor of this model in vLLM:

  File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/aria/processing_aria.py", line 129, in __call__
    sample = sample.replace(self.tokenizer.image_token, self.tokenizer.image_token * num_crops)
  File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1108, in __getattr__
    raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
AttributeError: CachedLlamaTokenizerFast has no attribute image_token

Perhaps we need to add an image_token attribute to this processor, similar to the other VLMs?
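A rough sketch of that idea, assuming an illustrative processor class and placeholder token string (neither is the actual Aria implementation):

```python
# Hypothetical sketch: store image_token on the processor itself instead of
# reaching through the tokenizer, which may not define the attribute
# (CachedLlamaTokenizerFast above raises AttributeError for it).
class AriaProcessorSketch:
    def __init__(self, tokenizer, image_token="<|img|>"):  # placeholder token
        self.tokenizer = tokenizer
        # Fall back to the tokenizer's token only when it actually has one.
        self.image_token = getattr(tokenizer, "image_token", None) or image_token

    def expand_image_tokens(self, sample: str, num_crops: int) -> str:
        # Mirrors the failing line from processing_aria.py: repeat the
        # placeholder once per crop so the text matches the image patches.
        return sample.replace(self.image_token, self.image_token * num_crops)
```

Because the token lives on the processor, the __call__ path no longer depends on which tokenizer class is paired with it.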

@DarkLight1337

DarkLight1337 commented Jan 19, 2025

This also happens with vanilla Transformers. Let me open a new issue on this.

Edit: Opened #35768



8 participants