LLMs gain the capability to express uncertainty in words
AI Impact Summary
This CAPABILITY update enables models to express uncertainty in natural language, so responses can include explicit qualifiers about confidence. For technical teams, this implies changes to prompt design, response logging, and evaluation pipelines: uncertainty signals must be captured and surfaced, and calibration between stated confidence and actual accuracy must be tracked. When adopted, it can improve trust and safety in automated decisions, but downstream applications must be prepared to interpret and act on uncertainty rather than treating every answer as definitive.
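As a sketch of what capturing and tracking these signals might look like, the snippet below extracts a stated-confidence level from response text and compares per-level accuracy against the stated level. The qualifier phrases, the numeric levels, and the function names are illustrative assumptions, not part of this update.

```python
# Sketch: map stated-confidence qualifiers in model text to numeric levels,
# then track calibration (per-level accuracy vs. stated confidence).
# Qualifier list and levels are illustrative assumptions.
from collections import defaultdict

QUALIFIERS = {
    "certain": 0.95, "confident": 0.9, "likely": 0.7,
    "possibly": 0.5, "unsure": 0.3, "i don't know": 0.1,
}

def stated_confidence(text: str) -> float:
    """Return the level implied by the first matching qualifier,
    or a neutral 0.5 when no qualifier is present."""
    lowered = text.lower()
    for phrase, level in QUALIFIERS.items():
        if phrase in lowered:
            return level
    return 0.5

def calibration_gaps(records):
    """records: iterable of (response_text, was_correct) pairs.
    Returns {stated_level: (accuracy, gap)}, gap = accuracy - stated."""
    tally = defaultdict(lambda: [0, 0])  # level -> [correct, total]
    for text, correct in records:
        level = stated_confidence(text)
        tally[level][0] += int(correct)
        tally[level][1] += 1
    return {lvl: (c / n, c / n - lvl) for lvl, (c, n) in tally.items()}

if __name__ == "__main__":
    logs = [
        ("I am confident the answer is 4.", True),
        ("Possibly it is Paris.", True),
        ("Possibly it is Lyon.", False),
        ("I'm unsure, maybe 7?", False),
    ]
    print(calibration_gaps(logs))
```

A positive gap means the model is underconfident at that stated level, a negative gap overconfident; a production pipeline would log these per release and alert on drift.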
Business Impact
Improved transparency in model outputs will support safer automation and better user trust, but downstream systems must interpret and act on uncertainty signals.
- Date: not specified
- Change type: capability
- Severity: medium