Anthropic Consolidates Claude Platform as xAI Launches Grok API: Week of 12 January 2026
AI Provider Intelligence: Week of 12 January 2026
Anthropic dominated the week with a major platform consolidation move, whilst xAI finally delivered on its API promises with a feature-rich Grok launch. Meanwhile, Together AI users faced a critical service disruption that highlights the risks of provider dependency.
Anthropic Forces Claude Console Migration to Unified Platform
Anthropic has completed its long-anticipated platform consolidation: effective 12 January, all traffic to the legacy console at console.anthropic.com is automatically redirected to the unified platform at platform.claude.com.
This isn't merely a cosmetic rebrand. The move signals Anthropic's strategic shift towards a centralised platform experience, consolidating API management, model access, and developer tools under a single domain. For organisations with hardcoded URLs in deployment scripts, monitoring dashboards, or internal documentation, the change demands prompt updates to bookmarks, links, and references.
The timing suggests Anthropic is preparing for broader platform changes. With the console migration complete, expect further consolidation of Claude's ecosystem components. Development teams should audit their systems for any console.anthropic.com references and update them promptly. Whilst the redirect ensures continuity for now, relying on redirects introduces unnecessary latency and potential failure points in production environments.
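One practical way to run that audit is a short script that scans a repository for the legacy hostname and rewrites it. A minimal sketch, assuming the hostnames from the announcement; the file-extension filter and the separate find/rewrite steps are illustrative choices, not Anthropic guidance:

```python
from pathlib import Path

LEGACY_HOST = "console.anthropic.com"
NEW_HOST = "platform.claude.com"

def find_legacy_references(root, suffixes=(".py", ".sh", ".md", ".yml", ".yaml", ".json")):
    """Return (path, line_number, line) for every line mentioning the legacy host."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if LEGACY_HOST in line:
                hits.append((path, lineno, line.strip()))
    return hits

def rewrite_host(line: str) -> str:
    """Swap the legacy hostname for the new one, leaving the URL path intact."""
    return line.replace(LEGACY_HOST, NEW_HOST)
```

Running `find_legacy_references` first and reviewing the hits before applying `rewrite_host` avoids blind search-and-replace across a large codebase.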
xAI Delivers Comprehensive Grok API with Advanced Capabilities
xAI has launched its production Grok API, introducing tool use, image analysis, OCR, and function calling capabilities that position it as a serious competitor to established providers. The release marks xAI's transition from experimental chat interface to enterprise-ready API platform.
The new Grok API deprecates the legacy Chat Completions endpoint, forcing existing integrations to migrate. The deprecation carries high severity: applications built during Grok's beta phase will break without changes, so developers must move their implementations to the new API structure to maintain functionality.
What's particularly noteworthy is the breadth of capabilities launched simultaneously. The inclusion of image analysis and OCR suggests xAI is targeting multimodal use cases from day one, rather than following the typical pattern of text-first, vision-later rollouts. The tool use functionality enables agentic workflows, positioning Grok as a viable option for developers seeking alternatives to OpenAI's function calling or Anthropic's tool use features.
Together AI Suffers Critical Service Disruption
Together AI experienced a critical incident this week that underscores the operational risks of depending on smaller AI providers. The incident was marked as critical severity, yet the company offered no detailed public communication about its scope, duration, or root cause.
This outage highlights a crucial consideration for enterprise AI deployments: provider reliability and incident response maturity. Whilst Together AI offers competitive pricing and model variety, their incident handling suggests less mature operational practices compared to hyperscale cloud providers. Organisations running production workloads should evaluate their disaster recovery plans and consider multi-provider strategies to mitigate single points of failure.
The lack of transparent incident communication also raises questions about Together AI's commitment to enterprise-grade service level agreements. For mission-critical applications, this incident serves as a reminder that cost savings from smaller providers may come with operational trade-offs.
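The multi-provider strategy suggested above can be sketched as a simple ordered failover: try each provider in priority order and return the first successful completion. The provider callables here are placeholders for real SDK calls; the pattern, not the names, is the point:

```python
class AllProvidersFailed(RuntimeError):
    """Raised only when every configured provider has failed."""

def complete_with_failover(prompt: str, providers: list) -> str:
    """Try providers in priority order; collect errors rather than failing fast."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # production code would narrow this to transport/5xx errors
            errors.append(exc)
    raise AllProvidersFailed(f"all {len(providers)} providers failed: {errors}")
```

Real deployments would add timeouts, retry budgets, and per-provider health checks, but even this bare version removes the single point of failure that this week's incident exposed.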
Worth Watching: Platform Updates and Pricing Changes
AWS Bedrock Introduces Reserved Tier Pricing
Amazon announced Reserved Tier pricing for Claude Opus 4.5 and Haiku 4.5 on Bedrock, effective 16 January. This pricing model allows organisations to commit to specific usage levels in exchange for discounted rates. High-volume users should analyse their consumption patterns to determine potential cost savings, particularly for predictable workloads where reserved capacity makes financial sense.
Google Enhances Vertex AI Workbench with Gemini CLI
Google's Vertex AI Workbench v2 introduces direct Gemini CLI access within notebook environments, effective 16 January. The update includes migration to Debian 12 and Python 3.12, requiring users to update their instances. The Gemini CLI integration streamlines development workflows by eliminating context switching between tools, particularly valuable for data scientists working within the Google ecosystem.
Hugging Face Updates Open Responses Policy
Hugging Face modified its open responses policy on 15 January, introducing new guidelines for model interactions. The policy changes could affect how organisations deploy and interact with open-source models, particularly around acceptable use cases and response handling. Teams using Hugging Face for production deployments should review the updated terms to ensure continued compliance.
Vertex AI Addresses Critical Daemon Process Issues
Google released Vertex AI Workbench v2 M138 on 14 January, fixing persistent daemon process problems that could cause data loss. The update includes OS upgrades and framework updates that may impact existing configurations. Users should prioritise this upgrade to prevent potential stability issues and data loss scenarios.
Amazon Lex Improves English Speech Recognition
AWS enhanced Lex's English speech recognition models on 13 January, potentially improving accuracy for voice-driven applications. Whilst not a breaking change, the improvements could reduce post-processing requirements and enhance user experiences for conversational AI implementations.
Quick Hits
- Veo 3.1 Preview: Google launched enhanced video generation with 9:16 aspect ratio and 4K upsampling support
- Replicate Cog Updates: New API prediction source filtering and Go-based runtime improvements
- AWS GovCloud: Bedrock API keys now available in government cloud regions
- ChatGPT Incident: OpenAI experienced elevated error rates with mitigation efforts applied
- Cursor Partnership: Together AI collaboration on NVIDIA Blackwell for real-time inference
The Week Ahead: Key Dates and Migrations
Watch for potential follow-up communications from Together AI regarding their critical incident and recovery measures. The lack of detailed incident reporting suggests either ongoing investigation or communication gaps that may require clarification.
Organisations using deprecated xAI Chat Completions endpoints should prioritise migration planning, as the high severity rating suggests limited grace periods. The comprehensive nature of the new Grok API capabilities may require more extensive integration work than simple endpoint swaps.
AWS Bedrock's Reserved Tier pricing becomes available 16 January, providing immediate cost optimisation opportunities for high-volume Claude users. Teams should prepare usage analysis to evaluate potential savings before the pricing model launches.
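That usage analysis reduces to simple arithmetic. Assuming the committed volume is billed at a discount off the on-demand rate and overflow is billed at the full rate (the rates and discount here are placeholders, not AWS's published figures), the break-even point falls out of the per-token rate entirely:

```python
def reserved_break_even(commit_mtok: float, discount: float) -> float:
    """Monthly volume (million tokens) above which the reserved tier is cheaper.

    With committed volume billed at (1 - discount) times the on-demand rate
    and overflow at the full rate, the rate cancels: break-even is simply
    the commitment scaled by (1 - discount).
    """
    return commit_mtok * (1.0 - discount)

def monthly_cost(volume_mtok: float, rate_per_mtok: float,
                 commit_mtok: float = 0.0, discount: float = 0.0) -> float:
    """Cost under a reserved commitment; commit_mtok=0 gives plain on-demand."""
    committed = commit_mtok * rate_per_mtok * (1.0 - discount)
    overflow = max(0.0, volume_mtok - commit_mtok) * rate_per_mtok
    return committed + overflow
```

For example, a hypothetical 100 MTok/month commitment at a 20% discount breaks even at 80 MTok/month: run consistently above that and the reserved tier wins, below it and on-demand is cheaper.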
Google's Vertex AI updates require active migration planning, particularly for teams relying on older Workbench versions. The combination of OS upgrades, framework updates, and new CLI capabilities suggests significant changes that warrant testing in non-production environments first.