Hugging Face and Protect AI Guardian add four threat-detection modules to scan Hugging Face Hub
AI Impact Summary
Protect AI’s Guardian expanded its collaboration with Hugging Face Hub by adding four threat-detection modules (PAIT-ARV-100, PAIT-JOBLIB-101, PAIT-TF-200, PAIT-LMAFL-300) that detect archive-slip (path traversal) vulnerabilities, suspicious Joblib code, backdoors in TensorFlow SavedModel files, and malicious Llamafile behavior at inference time. Guardian now covers more model formats and obfuscation techniques, surfacing inline alerts on Hugging Face model pages and providing detailed vulnerability reports in Insights DB, including CVE-2025-1550 in Keras. The Huntr bug bounty program complements the tooling, and large-scale scanning activity (millions of model versions scanned and hundreds of thousands of issues identified) reinforces zero-trust security across the ecosystem.
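The archive-slip pattern targeted by the first module is the classic path traversal trick: an archive member named with `../` segments so that extraction writes outside the destination directory. A minimal sketch of such a check, purely illustrative and not Protect AI's actual detection logic:

```python
import zipfile
from pathlib import Path

def has_archive_slip(zip_path: str, extract_dir: str) -> bool:
    """Return True if any archive member would escape extract_dir on extraction.

    Hypothetical illustration of the 'archive slip' (path traversal)
    pattern; real scanners cover more formats and edge cases.
    """
    base = Path(extract_dir).resolve()
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # Resolve the member path against the extraction root;
            # a safe member stays inside it.
            target = (base / name).resolve()
            if not target.is_relative_to(base):
                return True
    return False
```

A scanner applying this kind of check can flag a model archive before any file is written to disk, which is why such findings can be surfaced as inline alerts rather than post-extraction cleanup.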
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info