Capabilities, limitations, and societal impact of large language models (LLMs)
AI Impact Summary
This capability change signals a shift toward evaluating not just what LLMs can do, but how they should be used in production and governed. Technical teams should align on evaluation criteria that balance capability with risk, implement guardrails, and design architectures that support safe deployment (e.g., retrieval-augmented generation and prompt containment). Expect governance, security, and compliance workstreams to become central to product roadmaps as organizations explore use cases in customer-facing and regulated domains.
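The guardrails mentioned above can take many forms; as a minimal sketch (all patterns, names, and policy rules here are illustrative assumptions, not a production rule set), one might screen prompts before they reach a model and redact obvious PII from responses:

```python
import re

# Hypothetical, minimal guardrail layer: a prompt policy check plus
# output redaction. A real deployment would use vetted classifiers
# and a maintained PII detection service, not hard-coded patterns.

BLOCKED_TOPICS = ("credit card number", "social security number")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) topic policy."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def redact_response(text: str) -> str:
    """Mask email addresses in model output to limit data leakage."""
    return EMAIL_RE.sub("[REDACTED-EMAIL]", text)
```

Keeping these checks outside the model (rather than relying on prompt instructions alone) makes the policy auditable, which matters for the compliance workstreams noted above.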
Business Impact
Growing LLM capabilities require governance, safety, and compliance controls to prevent data leakage, bias, and regulatory risk in product features.
Source text
- Date: not specified
- Change type: capability
- Severity: medium