🌟 New model addition
Model description
Hi,
I just found this really interesting upcoming ICLR 2021 paper: "Rethinking Embedding Coupling in Pre-trained Language Models":
> We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
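
To illustrate the core idea of decoupled embeddings, here is a minimal sketch (not the authors' code or the eventual Transformers implementation): the input embedding is kept small and projected up to the hidden size, while a separate, larger output embedding is used only for the pre-training head. All class names and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DecoupledEmbeddingLM(nn.Module):
    """Sketch of a masked-LM encoder with untied input/output embeddings."""

    def __init__(self, vocab_size=250_000, input_dim=256, hidden_dim=768, output_dim=1024):
        super().__init__()
        # Small input embedding, projected up to the Transformer hidden size.
        self.input_embeddings = nn.Embedding(vocab_size, input_dim)
        self.input_projection = nn.Linear(input_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)
        # Larger, independent output embedding; in the paper this head is
        # discarded after pre-training, so fine-tuned models stay small.
        self.output_projection = nn.Linear(hidden_dim, output_dim)
        self.output_embeddings = nn.Linear(output_dim, vocab_size, bias=False)  # not tied to the input embedding

    def forward(self, input_ids):
        hidden = self.input_projection(self.input_embeddings(input_ids))
        hidden = self.encoder(hidden)
        return self.output_embeddings(self.output_projection(hidden))
```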
The paper can be found here.
Based on these findings, the authors propose a Rebalanced mBERT (RemBERT) model that outperforms XLM-R. An integration into Transformers would be awesome!
I would really like to help with the integration into Transformers as soon as the model is out!
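
Once integrated, usage could presumably follow the standard Transformers pattern. This is only a hypothetical sketch; the checkpoint identifier and model classes are placeholders, since nothing has been released yet.

```python
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint name; the actual identifier will depend on the release.
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModel.from_pretrained("google/rembert")

inputs = tokenizer("RemBERT decouples input and output embeddings.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```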
Open source status
- the model implementation is available: the authors plan to release the model implementation
- the model weights are available: the authors plan to release the model checkpoint
- who are the authors: @hwchung27, @Iwontbecreative, Henry Tsai, Melvin Johnson, and @sebastianruder