Prepend all PyTorch QNNPACK symbols with pytorch_ to avoid symbol collision.#26238

Closed
AshkanAliabadi wants to merge 1 commit into pytorch:master from AshkanAliabadi:qnnpack_prefix
Conversation

Contributor

@AshkanAliabadi AshkanAliabadi commented Sep 14, 2019

This probably doesn't catch absolutely everything but it's a good starting point.

@supriyar
Contributor

Did we test this using unit tests?
Can we also run the Python tests with your changes from #25844 stacked on top of this, to make sure we're using the correct symbols in PyTorch as well?

Contributor

@ljk53 ljk53 left a comment

Looks like these are simply adding a prefix to symbols under aten/src/ATen/native/quantized/cpu/qnnpack. Stamping it, as it won't affect anything outside this directory.

@dzhulgakov
Collaborator

Looks good. For testing, you can probably run objdump/nm on the resulting static library and see whether anything is left.

@AshkanAliabadi
Contributor Author

Merged into #25844.

Labels

oncall: quantization Quantization support in PyTorch

5 participants