
'CLIPTextTransformer' object has no attribute '_build_causal_attention_mask' #52

Open
Aukture opened this issue Mar 19, 2024 · 2 comments

Comments


Aukture commented Mar 19, 2024

[screenshot of the traceback: AttributeError: 'CLIPTextTransformer' object has no attribute '_build_causal_attention_mask']

@xyhanHIT

Hello, I encountered the same issue, and I resolved it by changing the version of transformers to 4.27.3!

@ankan8145

Option 1: downgrading transformers will fix it: !pip install -q --upgrade transformers==4.25.1

Option 2: another fix, without having to downgrade, is to reuse the body that _build_causal_attention_mask used to have (from here):

import torch

def build_causal_attention_mask(bsz, seq_len, dtype):
    # lazily create causal attention mask, with full attention between the vision tokens
    # pytorch uses additive attention mask; fill with -inf
    mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
    mask.fill_(torch.tensor(torch.finfo(dtype).min))
    mask.triu_(1)  # zero out the lower diagonal
    mask = mask.unsqueeze(1)  # expand mask to (bsz, 1, seq_len, seq_len)
    return mask

Thanks! Hope this helps
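If you go with option 2, here is a minimal sanity check of the standalone mask builder (a sketch assuming torch is installed; the function name build_causal_attention_mask is the module-level replacement from the comment above, not a transformers API):

```python
import torch

def build_causal_attention_mask(bsz, seq_len, dtype):
    # additive causal mask: strictly-future positions get -inf, the rest 0
    mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
    mask.fill_(torch.finfo(dtype).min)
    mask.triu_(1)  # keep -inf strictly above the diagonal, zero elsewhere
    return mask.unsqueeze(1)  # shape: (bsz, 1, seq_len, seq_len)

mask = build_causal_attention_mask(2, 4, torch.float32)
# each token may attend to itself and earlier tokens (mask value 0),
# while future tokens are blocked (mask value = dtype minimum)
assert mask.shape == (2, 1, 4, 4)
assert mask[0, 0, 1, 0].item() == 0.0                             # past: visible
assert mask[0, 0, 0, 1].item() == torch.finfo(torch.float32).min  # future: blocked
```

You would then call this function wherever your code previously called the model's _build_causal_attention_mask method, passing the same bsz, seq_len, and dtype arguments.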
