Keras now supports Llama-3.2 from Hugging Face checkpoints via keras_hub
AI Impact Summary
Llama-3.2 can be loaded in Keras directly from Hugging Face checkpoints, with on-the-fly weight conversion, using keras_hub presets such as Llama3CausalLM.from_preset('hf://meta-llama/Llama-3.2-1B-Instruct', dtype='bfloat16'). The stack provides end-to-end support for Llama3CausalLM, Llama3Tokenizer, and Llama3Backbone across the JAX, PyTorch, and TensorFlow backends, including model-parallelism and distributed-training workflows. This reduces integration friction for teams that want to deploy or fine-tune Llama-3.2 within Keras without duplicating tooling, enabling faster experimentation and scalable inference and training on large LLMs.
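As a minimal sketch of the workflow above (assuming keras_hub is installed and you have Hugging Face access to the gated meta-llama checkpoint; the load_llama32 helper name is our own, not part of the library):

```python
import os

# Choose a backend before Keras is imported; "jax", "torch",
# and "tensorflow" are all supported by keras_hub models.
os.environ.setdefault("KERAS_BACKEND", "jax")


def load_llama32(preset: str = "hf://meta-llama/Llama-3.2-1B-Instruct"):
    """Load Llama-3.2 from a Hugging Face checkpoint.

    The hf:// scheme makes keras_hub download the Hugging Face
    checkpoint and convert its weights to Keras on the fly.
    """
    import keras_hub  # lazy import: only needed when actually loading

    return keras_hub.models.Llama3CausalLM.from_preset(preset, dtype="bfloat16")


if __name__ == "__main__":
    # Note: downloading this gated checkpoint requires a
    # Hugging Face access token with granted model access.
    lm = load_llama32()
    print(lm.generate("Keras is", max_length=30))
```

The same loaded model object works for generation, fine-tuning via model.fit, or saving back out as a Keras preset, since it is an ordinary Keras model.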
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium