Add assertion message
Co-authored-by: Madeesh Kannan <[email protected]>
danieldk and shadeMe committed Feb 8, 2024
Parent: 130df32 · Commit: eefe900
Showing 1 changed file with 1 addition and 1 deletion.
curated_transformers/layers/attention.py (1 addition, 1 deletion)

@@ -764,7 +764,7 @@ def forward(
             #
             # Doing this properly requires a redesign of our AttentionMask
             # class.
-            assert attention_mask.bool_mask.size(-2) == 1
+            assert attention_mask.bool_mask.size(-2) == 1, "Torch SDP does not support attention masks with non-broadcastable query length yet"
             return torch.where(
                 attention_mask.bool_mask.transpose(-1, -2), attn_values, 0.0
             )