LM Studio 0.3.4 ships with Apple MLX support
Action Required
Users on Apple Silicon Macs can now use Apple's MLX framework for faster, more efficient on-device LLM inference in LM Studio.
AI Impact Summary
LM Studio has released version 0.3.4, which adds support for Apple's MLX framework on Apple Silicon Macs. This lets users run on-device LLM inference with significantly improved performance and efficiency compared to the existing llama.cpp/GGUF engine. The release leverages MLX's hardware-optimized acceleration, making it a compelling option for developers deploying LLMs locally on Apple Silicon devices. The MLX engine is open source and implemented as a Python module.
Affected Systems
- Date: not specified
- Change type: capability
- Severity: high