Google's Gemma 3 brings open multimodal LLMs with 128k context and 140+ language support
AI Impact Summary
Gemma 3 provides open-weight LLMs (1B, 4B, 12B, and 27B parameters) with context windows up to 128k tokens and multimodal input via a SigLIP vision encoder, expanding capabilities beyond Gemma 2. The models are integrated with the Hugging Face ecosystem and support 140+ languages, enabling in-house deployment of image+text reasoning without vendor-locked weights. The longer context and vision support raise memory and compute requirements, and migrations from Gemma 2 or earlier Gemma variants need attention to tokenizer behavior (SentencePiece) and modality handling.
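The memory impact of the longer context is dominated by the KV cache, which grows linearly with sequence length. The sketch below estimates that cost; the configuration numbers are purely illustrative assumptions, not the official Gemma 3 architecture parameters, so substitute the values from the model's published config when planning capacity.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Estimate KV-cache size: K and V tensors are each
    (num_layers x num_kv_heads x head_dim x seq_len) elements."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

# Illustrative (assumed) mid-size config, fp16 cache, full 128k context:
size = kv_cache_bytes(num_layers=48, num_kv_heads=8, head_dim=128,
                      seq_len=128_000)
print(f"{size / 2**30:.1f} GiB")  # roughly 23.4 GiB for this assumed config
```

Even at these assumed dimensions, a single 128k-token sequence needs tens of GiB for the cache alone, which is why grouped-query attention (fewer KV heads than query heads) and quantized caches matter at this context length.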
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium