
[Feature Branch] Quant modifier UX #2263

Merged · 8 commits · May 22, 2024

Commits on May 20, 2024

  1. Split WandaPruningModifier and SparseGPTModifier

    Make SparseGPT not inherit from the Wanda modifier
    Decouple SparseGPTModifierPyTorch from WandaPruningModifier
    Fix docstrings
    (a class sketch of the split follows this commit list)
    rahul-tuli committed May 20, 2024 · d4d85ff
  2. Split SparseGPT and GPTQ modifiers (#2272)

    * Update OBCQ
    
    * Extract GPTQ Modifier
    rahul-tuli committed May 20, 2024 · 5dd9985
  3. [GPTQ Modifier UX] Update tests to use GPTQModifier for obcq style quantization (#2294)
    
    * Update OBCQ
    
    * Extract GPTQ Modifier
    
    * Update test recipes
    rahul-tuli committed May 20, 2024 · c695567
  4. GPTQ UX config groups support (#2273)

    * Update OBCQ
    
    * Extract GPTQ Modifier
    
    * Update test recipes
    
    * Add config_groups support to GPTQModifier (a recipe sketch follows this commit list)
    
    * mask_structure preservation test (#2284)
    
    * test
    
    * Preserve weight sparsity if greater than threshold
    
    * Add argument to preserve sparsity mask in SparseGPT
    
    * Fix case when mask is None
    
    * Add test to check mask_structure:
      the initial mask structure should be preserved
      between consecutive runs; added a test to check this
    
    * Update tensor_follows_mask_structure to check for at least n zeros
      (a sketch of this check follows the commit list)
    
    * PR comments
    
    Co-authored-by: Sara Adkins <sara@neuralmagic.com>
    rahul-tuli and Satrat committed May 20, 2024 · 93300b1
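
A note on commit 1: the split is easiest to picture as two sibling classes rather than a subclass chain. A minimal sketch, assuming hypothetical field names and defaults; the real modifiers carry more state:

    from dataclasses import dataclass

    @dataclass
    class WandaPruningModifier:
        # Prunes with the Wanda criterion (weight magnitude x input activation norm)
        sparsity: float = 0.5
        mask_structure: str = "0:0"  # "n:m" semi-structured patterns; "0:0" = unstructured

    @dataclass
    class SparseGPTModifier:
        # Stands alone after the split; no longer a subclass of WandaPruningModifier
        sparsity: float = 0.5
        mask_structure: str = "0:0"
        block_size: int = 128         # assumed: columns handled per solver block
        dampening_frac: float = 0.01  # assumed: Hessian dampening for stability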
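
Commits 2 through 4 extract GPTQModifier out of the SparseGPT path and let a recipe configure it through config_groups. A sketch of what such a recipe might look like; the keys below are assumptions, not text taken from this PR:

    # Hypothetical recipe: quantize Linear weights to 4 bits via GPTQModifier
    recipe = """
    test_stage:
      quant_modifiers:
        GPTQModifier:
          block_size: 128
          config_groups:
            group_0:
              targets: ["Linear"]
              weights:
                num_bits: 4
                symmetric: true
                strategy: "channel"
    """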
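
The mask_structure test in commit 4 hinges on tensor_follows_mask_structure accepting at least n zeros in every group of m values. A hypothetical re-implementation of that check; the function name comes from the commit message, the body is an assumption:

    import torch

    def tensor_follows_mask_structure(tensor: torch.Tensor, mask_structure: str = "2:4") -> bool:
        # True if every group of m consecutive values holds at least n zeros
        n, m = (int(x) for x in mask_structure.split(":"))
        if tensor.numel() % m != 0:
            return False
        groups = tensor.reshape(-1, m)              # one row per n:m group
        zeros_per_group = (groups == 0).sum(dim=1)  # count zeros in each group
        return bool((zeros_per_group >= n).all())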

Commits on May 21, 2024

  1. Fix default case

    rahul-tuli committed May 21, 2024 · 227cf8e
  2. 876e5ae
  3. Style

    rahul-tuli committed May 21, 2024 · a7f4eef

Commits on May 22, 2024

  1. c062a6b