Granite 4.1 LLMs: 3B/8B/30B dense models with 512K context and five-phase pre-training
Date not specified
Change type: capability. Severity: info
Groq becomes Inference Provider on Hugging Face Hub with Llama-4 and QwQ-32B support
11 May 2026
LoRA Fine-Tuning FLUX.1-dev on Consumer GPUs with QLoRA and 4-bit Quantization
11 May 2026
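The 4-bit quantization mentioned in the LoRA fine-tuning item can be illustrated with a minimal, dependency-free sketch. This is only the basic idea (symmetric absmax rounding to a 4-bit range); the actual QLoRA method in bitsandbytes uses a non-uniform NF4 code book, and all names below are illustrative.

```python
# Minimal sketch of symmetric 4-bit (absmax) weight quantization --
# the idea behind QLoRA's memory savings, not the real NF4 scheme.

def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-7, 7] plus one scale factor."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # guard against all-zero input
    q = [round(w / scale) for w in weights]          # each value now fits in 4 bits
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

w = [0.12, -0.98, 0.45, 0.003]
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
# Reconstruction error per weight is bounded by scale / 2.
```

In QLoRA the frozen base weights are stored in this compressed form while the small LoRA adapter matrices stay in higher precision, which is what makes fine-tuning a model like FLUX.1-dev feasible on consumer GPUs.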
SGLang adds Hugging Face transformers backend for high-throughput LLM inference
11 May 2026
Gemma 3n now available in open-source ecosystem for on-device multimodal inference
11 May 2026
SignalBreak monitors Hugging Face and 27 other AI providers across 150+ endpoints. Sign up free to get notified when things change.