ChatGPT Introduces Trusted Contact for Self-Harm Detection
AI Impact Summary
ChatGPT now offers a 'Trusted Contact' feature, allowing users to designate a trusted individual to be notified if the system detects concerning language related to self-harm. This represents a significant shift in ChatGPT's approach to safety, moving beyond simple content filtering to proactive intervention. Developers should evaluate how this feature integrates with existing monitoring and alerting systems, and consider the implications for user privacy and consent.
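For developers reasoning about how such an alert path might plug into their own monitoring stack, here is a minimal sketch. Everything in it is an assumption for illustration: the `TrustedContact` record, the consent flag, and the `route_alert` function are hypothetical names, not part of any ChatGPT or OpenAI API; the detection score stands in for whatever classifier output a real system would produce.

```python
# Hypothetical sketch: routing a self-harm detection event to a user's
# designated trusted contact. All names here (TrustedContact, route_alert,
# consent_given) are illustrative assumptions, not an actual ChatGPT API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TrustedContact:
    name: str
    channel: str          # e.g. "sms" or "email"
    address: str
    consent_given: bool   # notify only if the user granted prior consent

def route_alert(
    detection_score: float,
    threshold: float,
    contact: Optional[TrustedContact],
    send: Callable[[str, str], None],
) -> bool:
    """Notify the trusted contact when a detection score crosses the
    threshold, respecting the consent flag. Returns True if a message
    was dispatched via the injected `send` callback."""
    if contact is None or not contact.consent_given:
        return False  # no contact configured, or consent withheld
    if detection_score < threshold:
        return False  # below the alerting threshold
    send(contact.address, "A conversation triggered a self-harm safety alert.")
    return True
```

Injecting the `send` callback keeps the routing logic testable and independent of any particular delivery channel, which also makes the consent and threshold checks easy to audit in isolation.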
Affected Systems
Business Impact
The Trusted Contact feature adds a new channel for intervening in potentially dangerous user conversations, which may reduce the risk of harm and improve user safety.
- Date: not specified
- Change type: capability
- Severity: medium