Hello, I encountered the same issue, and I resolved it by changing the version of transformers to 4.27.3!
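If it helps, here is a minimal sketch of pinning that version (assuming a pip-based notebook environment; adjust for your setup):

```
!pip install -q transformers==4.27.3
```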
Option 1: downgrading transformers will help you:

```
!pip install -q --upgrade transformers==4.25.1
```
Option 2: another fix, without having to downgrade, is to reimplement what the removed `_build_causal_attention_mask` function used to do (taken from an older version of the transformers source):
```python
import torch

def build_causal_attention_mask(bsz, seq_len, dtype):
    # lazily create causal attention mask, with full attention between the vision tokens
    # pytorch uses additive attention mask; fill with -inf
    mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
    mask.fill_(torch.tensor(torch.finfo(dtype).min))
    mask.triu_(1)  # zero out the lower diagonal
    mask = mask.unsqueeze(1)  # expand mask
    return mask
```
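As a usage sketch: wherever your code previously called the removed `_build_causal_attention_mask` method on the CLIP text model, call this standalone helper instead. The `text_encoder.text_model` attribute path and the 77-token sequence length below are assumptions based on typical CLIP text-encoder code, not something stated in this thread.

```python
import torch

# Hypothetical replacement for code that used to do:
#   causal_mask = text_encoder.text_model._build_causal_attention_mask(bsz, seq_len, dtype)
# Call the standalone helper defined above instead:
bsz, seq_len = 2, 77  # 77 is CLIP's usual max text length (assumption)
causal_mask = build_causal_attention_mask(bsz, seq_len, torch.float32)
print(causal_mask.shape)  # torch.Size([2, 1, 77, 77])
```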
Thanks! Hope this helps.