Text and code embeddings released with contrastive pre-training
AI Impact Summary
A new embedding capability uses contrastive pre-training to produce a single representation space shared by text and code. This can improve text-to-code similarity matching and boost performance in code search, documentation retrieval, and developer-focused analytics. To realize the value, teams should benchmark the new embeddings against their existing vectors, update client applications to request the new model, and reindex vector stores so that stored vectors reflect the improved representations.
Business Impact
This enables more accurate text/code semantic search and downstream analytics, but will require reindexing vector stores and updating clients to use the new embedding model.
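As a rough illustration of what the reindexing workflow looks like, the sketch below embeds both natural-language text and code with one model and scores them by cosine similarity. The `embed_v2` function here is a hypothetical placeholder (the source does not name a client API); a real deployment would call the provider's embeddings endpoint and write the resulting vectors back into the vector store.

```python
import numpy as np

def embed_v2(content: str) -> np.ndarray:
    """Placeholder for the new contrastive text/code embedding model.
    Hypothetical: a real client would call the provider's embeddings
    endpoint here and return the model's vector instead."""
    rng = np.random.default_rng(abs(hash(content)) % (2**32))
    vec = rng.standard_normal(64)
    return vec / np.linalg.norm(vec)  # unit-normalise for cosine scoring

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors are unit-normalised, so the dot product is cosine similarity.
    return float(np.dot(a, b))

# Because text and code share one representation space, a plain-text
# query can be scored directly against code snippets after reindexing.
query = embed_v2("sort a list in descending order")
snippet = embed_v2("sorted(values, reverse=True)")
score = cosine(query, snippet)
```

In practice the reindex is a bulk version of this loop: re-embed every stored document and code file with the new model, replace the old vectors, and make sure query-time clients request the same model, since mixing old and new vectors in one index produces meaningless similarity scores.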
Risk domains
Source text
- Date: not specified
- Change type: capability
- Severity: medium