PEFT v0.17.0: MiSS replaces Bone, SHiRA adds sparse adapters; LoRA targets nn.Parameter
Action Required
Workflows that rely on Bone must migrate to MiSS before Bone is removed in PEFT v0.19.0; after removal, loading or training with Bone checkpoints will fail.
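The release notes mention a script for converting Bone checkpoints to MiSS. In spirit, such a migration amounts to renaming the adapter keys in a checkpoint's state dict. A minimal sketch of that idea, where the `bone_`/`miss_` key prefixes and the function name are hypothetical placeholders (the real conversion script and key layout may differ):

```python
def convert_bone_to_miss(state_dict, old_prefix="bone_", new_prefix="miss_"):
    """Rename Bone adapter keys to MiSS adapter keys.

    The prefixes are illustrative assumptions, not PEFT's actual key
    layout; consult the official conversion script before migrating.
    """
    # str.replace(..., 1) only touches the first occurrence, leaving
    # base-model keys without the prefix unchanged.
    return {k.replace(old_prefix, new_prefix, 1): v
            for k, v in state_dict.items()}


# Example: adapter keys are renamed, base-model keys pass through untouched.
checkpoint = {
    "base.layers.0.bone_block": [0.1, 0.2],
    "base.layers.0.weight": [1.0, 2.0],
}
converted = convert_bone_to_miss(checkpoint)
```

In practice, prefer the conversion script shipped with the release over hand-rolled key renaming, since the two methods may also differ in tensor shapes, not just key names.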
AI Impact Summary
SHiRA introduces Sparse High Rank Adapters that train only 1-2% of a model's weights, potentially reducing adapter-switching costs and lowering concept loss relative to LoRA. MiSS is introduced as the successor to Bone, promising better performance and memory efficiency; Bone is slated for removal in PEFT v0.19.0, and a script is provided to convert Bone checkpoints to MiSS. LoRA now supports targeting nn.Parameter directly via target_parameters, which is useful for MoE models such as Llama4, though the feature is experimental and may increase memory usage; adapters can also be injected with inject_adapter_in_model or from a state_dict when the adapter config is unknown. The release also includes fixes for saving prompt-learning modules and for MoE weight handling, along with compatibility considerations for large MoE parameter layouts.
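The SHiRA idea of training only 1-2% of weights can be illustrated with a fixed sparse mask: a small, randomly chosen set of weight positions receives gradient updates while everything else stays frozen. This is a self-contained conceptual sketch, not PEFT's SHiRA API; the function names, the masking strategy, and the 2% density are assumptions for illustration only:

```python
import random


def make_sparse_mask(shape, density=0.02, seed=0):
    """Pick a fixed random subset (~2%) of weight positions to train,
    mimicking the sparse-adapter idea behind SHiRA. The random
    selection here is illustrative; real SHiRA masks may be chosen
    differently (e.g., by weight magnitude)."""
    rows, cols = shape
    total = rows * cols
    k = max(1, int(total * density))
    rng = random.Random(seed)
    chosen = rng.sample(range(total), k)
    return {(i // cols, i % cols) for i in chosen}


def apply_sparse_update(weights, mask, grads, lr=0.1):
    """Apply a gradient step only at masked positions; all other
    weights remain frozen, so the trainable footprint stays tiny."""
    for (i, j) in mask:
        weights[i][j] -= lr * grads[i][j]
    return weights
```

Because the mask is fixed and tiny, switching between such adapters only requires swapping the small set of masked values, which is the intuition behind the reduced adapter-switching cost.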
Affected Systems
- Date: not specified
- Change type: deprecation
- Severity: high