PEFT v0.16.0 adds LoRA-FA, RandLoRA, C³A; memory-efficient training and INC/GPTQ support
AI Impact Summary
PEFT v0.16.0 introduces LoRA-FA, RandLoRA, and C³A, enabling memory-efficient LoRA training with the potential for higher effective ranks. It expands quantization workflows with INC support, DoRA for Conv1d, orthogonal initialization, and QLoRA-style improvements, plus broader Conv2d group support. A major refactor of Orthogonal Finetuning (OFT) and Transformers compatibility changes may break old checkpoints or prompt-learning setups, so plan the upgrade or pin versions; expect migration work for VLM prompt handling and attention-mask changes across models. Overall, pipelines relying on GPTQ/INC workflows should revalidate compatibility after upgrading PEFT and Transformers.
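Because the OFT refactor can invalidate checkpoints saved under earlier releases, a pre-load version guard is one way to catch the mismatch early. This is a minimal sketch, not a PEFT API: the 0.16.0 boundary comes from this release, but the helper names and the checkpoint-metadata convention are assumptions you would adapt to your own pipeline.

```python
from importlib.metadata import PackageNotFoundError, version


def parse_version(v: str) -> tuple[int, ...]:
    # Parse a simple dotted version string ("0.16.0") into a comparable tuple.
    return tuple(int(part) for part in v.split(".")[:3])


def oft_checkpoint_may_break(installed: str, saved_with: str) -> bool:
    # Hypothetical guard: flag OFT checkpoints saved before the v0.16.0
    # refactor when they are about to be loaded under v0.16.0 or later.
    boundary = parse_version("0.16.0")
    return parse_version(saved_with) < boundary <= parse_version(installed)


try:
    # Look up the installed PEFT version if the package is present.
    installed_peft = version("peft")
except PackageNotFoundError:
    installed_peft = None

if installed_peft and oft_checkpoint_may_break(installed_peft, "0.15.2"):
    print("Warning: OFT checkpoint predates the v0.16.0 refactor; revalidate before loading.")
```

The same pattern applies to the Transformers-side prompt-learning changes: record the library versions a checkpoint was trained with, and compare against the installed versions before loading.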
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info