Fix _can_use_flash_attention. #21512


Merged
merged 1 commit into keras-team:master on Jul 25, 2025

Conversation

james77777778 (Contributor)

Fix #21507

This PR addresses the issue by passing None for any keyword arguments that are not explicitly specified in _can_use_flash_attention.

Hopefully, this change will make the implementation more robust.

@gemini-code-assist bot left a comment

Summary of Changes

Hello @james77777778, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness of the _can_use_flash_attention function within the JAX backend. It achieves this by implementing a more flexible approach to passing keyword arguments to the check_layout utility, ensuring that the function can adapt to changes in check_layout's signature without requiring manual updates.

Highlights

  • Dynamic Keyword Argument Handling: The _can_use_flash_attention function now dynamically determines and passes None for any unspecified keyword arguments to the check_layout function. This change makes the call more robust and resilient to future modifications in the check_layout function's signature, addressing potential issues where new parameters might be added.
  • New Module Import: The inspect module has been imported into keras/src/backend/jax/nn.py. This module is crucial for enabling the dynamic introspection of function signatures, which is a core part of the new keyword argument handling logic.
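The mechanism described above can be sketched as follows. This is a minimal illustration of the technique (introspecting a callee's signature and passing None for unsupplied parameters), not the actual Keras code; `check_layout` here is a hypothetical stand-in for JAX's internal utility, whose real signature may differ across versions.

```python
import inspect


def call_with_none_for_missing(fn, provided):
    """Invoke `fn`, supplying None for any parameter absent from `provided`.

    Instead of hardcoding the callee's keyword arguments, introspect its
    signature so the call site survives upstream signature changes.
    """
    param_names = inspect.signature(fn).parameters
    kwargs = {name: provided.get(name) for name in param_names}  # missing -> None
    return fn(**kwargs)


# Hypothetical stand-in for JAX's check_layout; the real signature may differ.
def check_layout(query, key, value, layout=None):
    return (query, key, value, layout)


result = call_with_none_for_missing(
    check_layout, {"query": "q", "key": "k", "value": "v"}
)
print(result)  # ('q', 'k', 'v', None)
```

If a new optional parameter is later added to `check_layout`, the dictionary comprehension automatically covers it with None rather than raising a TypeError for a missing argument.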

@gemini-code-assist bot left a comment

Code Review

This pull request improves the robustness of _can_use_flash_attention by dynamically determining the parameters for check_layout using inspect.signature. This is a good change that avoids hardcoding parameter names and makes the code more resilient to upstream changes in JAX.

My review includes a suggestion to cache the result of inspect.signature to mitigate potential performance overhead, as this function may be on a hot path. This ensures we get the robustness benefits without a performance penalty on subsequent calls.

codecov-commenter commented Jul 25, 2025

Codecov Report

Attention: Patch coverage is 20.00000% with 4 lines in your changes missing coverage. Please review.

Project coverage is 82.71%. Comparing base (c9383e2) to head (5ecad9e).

Files with missing lines       Patch %   Lines
keras/src/backend/jax/nn.py    20.00%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21512      +/-   ##
==========================================
- Coverage   82.72%   82.71%   -0.01%     
==========================================
  Files         567      567              
  Lines       56214    56219       +5     
  Branches     8786     8787       +1     
==========================================
+ Hits        46501    46502       +1     
- Misses       7556     7560       +4     
  Partials     2157     2157              
Flag               Coverage Δ
keras              82.52% <20.00%> (-0.01%) ⬇️
keras-jax          63.91% <20.00%> (-0.01%) ⬇️
keras-numpy        58.41% <20.00%> (-0.01%) ⬇️
keras-openvino     34.56% <20.00%> (-0.01%) ⬇️
keras-tensorflow   64.33% <20.00%> (-0.01%) ⬇️
keras-torch        63.97% <20.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@google-ml-butler google-ml-butler bot added kokoro:force-run ready to pull Ready to be merged into the codebase labels Jul 25, 2025
@fchollet fchollet merged commit 90c8da6 into keras-team:master Jul 25, 2025
12 checks passed
Successfully merging this pull request may close these issues.

Bug for _can_use_flash_attention
6 participants