Hi, I noticed that there is something called Attention Mask in the model.
In the docstring of class `BertForQuestionAnswering`:
> `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences.
And it is used in class `BertSelfAttention`, in the `forward` function:
```python
# Apply the attention mask is (precomputed for all layers in BertModel forward() function)
attention_scores = attention_scores + attention_mask
```
It seems the `attention_mask` is used to add 1 to the scores for positions that are taken up by real tokens, and add 0 to the positions outside the current sequence.
Then, why not set the scores to `-inf` at the positions outside the current sequence? After passing the scores through a softmax layer, those scores would become 0, as we want.
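To make the proposal concrete, here is a minimal sketch (not the repo's actual code; the tensor values and shapes are made up for illustration) of masking the padded positions with `-inf` before the softmax, so their attention weights come out as exactly 0:

```python
import torch
import torch.nn.functional as F

# Toy example: one sequence of 5 positions, where only the first 3
# are real tokens and the last 2 are padding.
attention_scores = torch.tensor([2.0, 1.0, 0.5, 0.3, 0.1])
attention_mask = torch.tensor([1, 1, 1, 0, 0])  # 1 = real token, 0 = padding

# Set the padded positions to -inf before the softmax,
# so their attention weights become exactly 0.
masked_scores = attention_scores.masked_fill(attention_mask == 0, float("-inf"))
weights = F.softmax(masked_scores, dim=-1)
print(weights)  # padded positions get weight 0.0, real tokens are renormalized
```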