Together Fine-Tuning Platform Adds Preference Optimization and Continued Training
Action Required
Businesses can now continuously improve their language models, yielding more accurate and relevant AI applications and reducing the risk of models becoming outdated.
AI Impact Summary
Together AI has enhanced its Fine-Tuning Platform with Preference Optimization and Continued Training capabilities. Businesses can now adapt their language models to user preferences and evolving data rather than relying on static model deployments. The introduction of Direct Preference Optimization (DPO) and the ability to resume training from checkpoints simplifies the fine-tuning process and enables continuous model improvement, a key factor for applications like Protege AI’s personalized compliance solutions.
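To make the DPO mechanism concrete, here is a minimal sketch of the standard DPO loss for a single preference pair. This is an illustration of the published DPO formulation, not Together's platform code; the function name and arguments are ours, and in practice the log-probabilities would come from the policy model being fine-tuned and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and
    rejected responses under the policy being trained and under a
    frozen reference model. beta controls how strongly the policy
    is pushed away from the reference.
    """
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(margin)), written stably as log1p(exp(-margin))
    return math.log1p(math.exp(-margin))

# When the policy still agrees with the reference, the loss starts
# at ln(2); it falls as the policy favors the chosen response.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

The appeal of DPO, and why it simplifies fine-tuning relative to RLHF, is visible here: the loss is an ordinary supervised objective over preference pairs, so no separate reward model or reinforcement-learning loop is needed.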
Affected Systems
- Date: not specified
- Change type: capability
- Severity: high