Hugging Face integrates Protect AI Guardian to auto-scan public models on the Hub for pickle and Keras Lambda exploit risks
AI Impact Summary
Hugging Face has partnered with Protect AI to integrate Guardian into the Hub's security tooling. Guardian, alongside Hugging Face's own picklescan, scans every public model repository for unsafe serialization formats (notably pickle) and other exploitable patterns such as Keras Lambda layers, which can trigger arbitrary code execution when a model is loaded. All public repositories pushed to the Hugging Face Hub are scanned automatically, and frontend UI updates surface pickle-related scan results. This strengthens model-sharing safety across the community, though initial coverage may lag given the more than one million repositories to process.
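The pickle risk being scanned for can be illustrated with a minimal sketch: a class whose `__reduce__` smuggles a shell command into the serialized bytes, and a toy static scanner in the spirit of picklescan that walks the pickle opcode stream without ever unpickling. The `Malicious` class, `scan` helper, and tiny deny-list below are illustrative assumptions, not Guardian's or picklescan's actual implementation:

```python
import os
import pickle
import pickletools

class Malicious:
    # __reduce__ tells pickle to call an arbitrary callable on load;
    # unpickling this object would run a shell command.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

# Protocol 2 encodes imported callables with the GLOBAL opcode,
# which keeps this toy scanner simple.
payload = pickle.dumps(Malicious(), protocol=2)

# Tiny illustrative deny-list of callables that should never appear
# in a model file (pickletools reports GLOBAL args as "module name").
DANGEROUS = {"os system", "posix system", "nt system",
             "builtins eval", "builtins exec"}

def scan(data: bytes) -> list[str]:
    """Statically walk the pickle opcode stream (never executing it)
    and flag GLOBAL imports of known-dangerous callables."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg in DANGEROUS:
            findings.append(arg)
    return findings

print(scan(payload))  # flags the smuggled system() call, nothing runs
```

This is the key property such scanners rely on: the opcode stream can be inspected statically, so a malicious model is flagged before anyone loads it. Production tools additionally handle STACK_GLOBAL (protocol 4+) and maintain far larger deny-lists.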
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info