
make test_eager_matches_sdpa_inference less flaky #34512

Merged: 13 commits merged into main on Oct 31, 2024

Conversation

ydshieh (Collaborator) commented on Oct 30, 2024:

What does this PR do?

With torch.bfloat16, numerical differences/instability occur quite often, especially with multiple hidden layers.

This PR first changes test_eager_matches_sdpa_inference to create models with only 1 hidden layer.

Number of failures per 500 runs:

| model    | main | n_layer=1 |
|----------|------|-----------|
| llama    | 16   | 2         |
| idefics2 | 391  | 15        |
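
Roughly, the single-hidden-layer setup could look like the sketch below (attribute names follow the usual ModelTesterMixin pattern and are illustrative, not the exact test code):

```python
# Hypothetical sketch: build the tiny test model with a single hidden layer so
# bfloat16 rounding errors have fewer layers to accumulate through.
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.num_hidden_layers = 1
model = model_class(config).to(torch_device).eval()
```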

Then it relaxes the condition a bit: only 80% of the sequences are checked, and if the results match on those 80%, the test passes.

This makes the test much less flaky: over 500 runs, it passes for llama, mistral, idefics2 and Llava.
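
The relaxed check could be approximated by a helper like the one below (a minimal sketch, not the exact implementation in test_modeling_common.py; the helper name and comparison details are assumptions):

```python
import torch

def assert_mostly_close(eager_out, sdpa_out, atol, rtol, required_ratio=0.8):
    """Hypothetical helper: pass if at least `required_ratio` of the sequences
    in the batch match between eager and SDPA attention within tolerance."""
    close = torch.isclose(eager_out, sdpa_out, atol=atol, rtol=rtol)
    # One boolean per sequence: True if every element of that sequence matches.
    per_sequence_ok = close.reshape(close.shape[0], -1).all(dim=-1)
    assert per_sequence_ok.float().mean() >= required_ratio
```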

Finally, it changes the image size of Llava and VipLlava from 30 to 8 so the sequence length is much smaller, which avoids numerical issues.
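
For context, the number of vision tokens grows quadratically with the image size, so this is a large reduction in the compared sequence length (the patch size below is illustrative, not the tester's actual value):

```python
# Hypothetical illustration of why shrinking the test image helps:
patch_size = 2                           # illustrative value
tokens_before = (30 // patch_size) ** 2  # 225 vision tokens
tokens_after = (8 // patch_size) ** 2    # 16 vision tokens
```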

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@gante (Member) left a comment:

LGTM 👍 Thank you for fixing

Extra note: L4170 (model_sdpa = model_class.from_pretrained(tmpdirname, torch_dtype=torch_dtype)) should also have attn_implementation="sdpa", in case we update the default.
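
The suggested change would look roughly like this (a sketch of the reviewer's note, not a committed diff):

```python
# Pin the attention implementation explicitly so the comparison keeps using
# SDPA even if the library default ever changes.
model_sdpa = model_class.from_pretrained(
    tmpdirname,
    torch_dtype=torch_dtype,
    attn_implementation="sdpa",
)
```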

Resolved review threads:
- tests/generation/test_utils.py
- tests/test_modeling_common.py (two threads)
@ArthurZucker (Collaborator) left a comment:

Thanks!

```
@@ -1263,6 +1263,9 @@ def test_dola_decoding_sample(self):

        if model.get_output_embeddings() is None:
            self.skipTest("DoLa is not supported for models that don't have output embeddings")

        logits_processor_kwargs = self._get_logits_processor_kwargs(do_sample=True, config=model.config)
```
Review comment (Collaborator):

do_sample is random, no?

@ydshieh (Collaborator, Author) replied:

I am using the same value as in generation_kwargs = {...} a few lines below.

Yes, it is random, but this method is test_...._sample, so it makes sense.

ydshieh merged commit 114dd81 into main on Oct 31, 2024 (25 of 27 checks passed).
ydshieh deleted the less_flaky branch on October 31, 2024 at 17:34.
2015aroras pushed a commit to 2015aroras/transformers that referenced this pull request on Nov 15, 2024.
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request on Dec 5, 2024.
BernardZach pushed a commit to innovationcore/transformers that referenced this pull request on Dec 6, 2024.