OVHcloud added as Inference Provider on Hugging Face Hub — two routing modes and EU data residency
AI Impact Summary
OVHcloud is now a supported Inference Provider on the Hugging Face Hub, enabling direct routing of model calls to OVHcloud AI Endpoints from model pages and client SDKs. The rollout supports two modes: a "Custom key" mode, in which requests authenticate directly with OVHcloud credentials, and a "Routed by HF" mode, in which Hugging Face proxies and bills the requests; both affect credential-management and billing workflows. The release highlights open-weight models such as gpt-oss, Qwen3, DeepSeek R1, and Llama, EU data-center hosting for data sovereignty, sub-200ms first-token latency, and Python/JS SDK integrations through huggingface_hub and @huggingface/inference.
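As a rough sketch of the Python integration path described above: the `huggingface_hub` `InferenceClient` accepts a `provider` argument to select the routing target. This example assumes a recent `huggingface_hub` release with OVHcloud provider support, an `HF_TOKEN` environment variable, and uses an illustrative model ID (not confirmed by the announcement).

```python
import os

from huggingface_hub import InferenceClient


def chat_via_ovhcloud(prompt: str) -> str:
    """Send a chat completion routed to OVHcloud AI Endpoints.

    This uses "Routed by HF" mode: authenticate with a Hugging Face
    token and Hugging Face bills the request. For "Custom key" mode,
    you would instead pass your own OVHcloud API key as api_key.
    """
    client = InferenceClient(
        provider="ovhcloud",             # route calls to OVHcloud AI Endpoints
        api_key=os.environ["HF_TOKEN"],  # HF token (or an OVHcloud key)
    )
    completion = client.chat.completions.create(
        model="Qwen/Qwen3-32B",  # illustrative open-weight model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

The same pattern applies in JavaScript via `@huggingface/inference`, where the client likewise takes a provider selector alongside the access token.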
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info