xAI Grok Suffers Major Outage as Google Forces Vertex AI Migration
This week delivered a stark reminder that even the biggest AI providers aren't immune to catastrophic failures. xAI's Grok service went completely offline across all platforms, leaving users stranded whilst Google simultaneously announced sweeping changes to Vertex AI that will force widespread migrations by June 2026.
The Big Moves
xAI's Double Disaster: Complete Service Blackout
xAI experienced what can only be described as a perfect storm this week. Not only did the entire Grok AI service go dark across iOS, Android, and web platforms, but their Single Sign-On infrastructure collapsed simultaneously. This wasn't a partial degradation or regional hiccup—users simply couldn't access any xAI services whatsoever.
The timing couldn't be worse for xAI, which has been positioning itself as a serious challenger to OpenAI and Anthropic. Complete service outages are the kind of reliability nightmare that enterprise customers remember for years. Whilst the technical details remain murky, the simultaneous failure of both core services and authentication suggests either a catastrophic infrastructure failure or a deployment gone badly wrong.
For organisations evaluating xAI as an alternative to established providers, this incident raises serious questions about operational maturity. The lack of graceful degradation or fallback mechanisms indicates that xAI's infrastructure may not yet be ready for mission-critical workloads. Companies with Grok integrations should be reviewing their disaster recovery plans and considering backup providers.
Google's Vertex AI Shake-Up: Major Migration Deadline Looms
Google dropped a bombshell with the release of Vertex AI v1, introducing GLM 5 whilst simultaneously deprecating multiple image and video generation endpoints. The deadline is unforgiving: migrate by 30 June 2026 or face service disruption.
This isn't a gentle nudge towards newer models—it's a forced march. Applications currently using the deprecated endpoints will simply stop working after the sunset date. The migration path involves moving to newer models like Gemini 2.5 and Veo 3.1, which may require substantial code changes and retraining of workflows.
The introduction of GLM 5 for complex systems engineering and long-horizon agentic tasks suggests Google is pushing hard into enterprise automation territory. However, the aggressive deprecation timeline feels punitive, particularly for organisations that built substantial integrations around the older endpoints. Development teams should start migration planning immediately—June will arrive faster than expected.
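A sensible first step in that migration planning is an inventory pass: find every model identifier in the codebase that sits on the deprecation list and pair it with its suggested replacement. Below is a minimal sketch of such an audit helper; the model IDs in the mapping are hypothetical placeholders, since the authoritative list of deprecated endpoints and their Gemini 2.5 / Veo 3.1 successors comes from Google's deprecation notice.

```python
# Hypothetical mapping of deprecated Vertex AI model endpoints to their
# suggested replacements. The real entries should be copied from
# Google's deprecation notice before running an actual audit.
DEPRECATED_MODELS = {
    "imagegeneration@006": "gemini-2.5-flash-image",      # placeholder IDs
    "veo-2.0-generate-001": "veo-3.1-generate-preview",   # placeholder IDs
}

def audit_model_ids(model_ids):
    """Return (deprecated_id, suggested_replacement) pairs for every
    model identifier in model_ids that appears on the deprecation list."""
    return [
        (model_id, DEPRECATED_MODELS[model_id])
        for model_id in model_ids
        if model_id in DEPRECATED_MODELS
    ]
```

Even a trivial helper like this turns a vague "we might be affected" into a concrete work list that can be sized against the June 2026 deadline.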
Anthropic's $30 Billion War Chest Changes the Game
Anthropic's Series G funding round raised eyebrows across the industry, not just for the staggering $30 billion amount but for the $380 billion valuation it implies. This positions Anthropic as a genuine rival to OpenAI in terms of financial firepower, with particular strength in enterprise markets.
The funding appears to be driven largely by Claude Code's rapid adoption and the broader enterprise appetite for agentic coding capabilities. Unlike consumer-focused AI applications, enterprise coding tools represent recurring, high-value revenue streams that justify massive valuations.
This capital injection will likely accelerate Anthropic's infrastructure expansion and model development. Competitors should expect more aggressive pricing and faster feature rollouts as Anthropic leverages its new resources. For enterprise buyers, this funding round validates Anthropic as a stable, long-term partner—assuming they can execute on their ambitious roadmap.
Worth Watching
Transformers.js v4 Brings WebGPU to the Browser
The release of Transformers.js v4 marks a significant leap forward for client-side AI applications. The adoption of WebGPU enables hardware acceleration directly in browsers, whilst the migration to esbuild dramatically reduces build times. The new modular architecture supports state-space models and Mixture of Experts architectures, expanding what's possible in browser-based AI applications. This could be the catalyst for a new wave of privacy-preserving AI tools that run entirely on user devices.
Together AI's Container Inference Promises 2.6x Speed Boost
Together AI's new Dedicated Container Inference service targets a specific pain point: deploying custom generative media models without building your own infrastructure. The claimed 2.6x performance improvement over existing solutions could be significant for teams working with video generation and avatar synthesis. The focus on non-LLM workloads suggests Together AI is carving out a niche in the increasingly crowded inference market.
Amazon Connect Gets AI-Powered Task Assistance
Amazon's introduction of AI-powered task assistance and critical alert notifications within Connect represents a broader trend towards embedding intelligence directly into business applications. Rather than requiring separate AI tools, agents get contextual recommendations and proactive alerts within their existing workflow. This approach could become the template for AI integration across enterprise software.
Groq Forces API Parameter Migration
Groq is deprecating several key API parameters including function_call, functions, and max_tokens. Applications using these deprecated parameters will simply stop working after the cutoff date. Development teams should audit their Groq integrations immediately and plan migration to the supported alternatives.
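For teams doing that audit, the migration is mostly mechanical, since Groq's API follows the OpenAI-compatible chat-completions shape. The sketch below assumes the commonly documented mappings (max_tokens to max_completion_tokens, functions to tools, function_call to tool_choice); confirm the exact replacements against Groq's own deprecation notice before relying on it.

```python
def migrate_groq_params(payload: dict) -> dict:
    """Rewrite a chat-completion request body that uses deprecated Groq
    parameters into their assumed OpenAI-compatible replacements.

    Assumed mappings (verify against Groq's deprecation notice):
      max_tokens    -> max_completion_tokens
      functions     -> tools (each entry wrapped as a function tool)
      function_call -> tool_choice
    """
    out = dict(payload)  # shallow copy; original request left untouched
    if "max_tokens" in out:
        out["max_completion_tokens"] = out.pop("max_tokens")
    if "functions" in out:
        out["tools"] = [
            {"type": "function", "function": fn}
            for fn in out.pop("functions")
        ]
    if "function_call" in out:
        fc = out.pop("function_call")
        # String values ("auto"/"none") carry over directly; a named
        # function becomes a structured tool_choice object.
        out["tool_choice"] = (
            fc if isinstance(fc, str)
            else {"type": "function", "function": {"name": fc["name"]}}
        )
    return out
```

Running a shim like this over stored request templates is a quick way to flag which integrations still send the deprecated parameters.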
Quick Hits
- Pinecone Python client experiencing data insertion failures when recreating indexes with identical names
- Qdrant hit by potential deadlock in transfer handling that could cause service disruption
- OpenAI reporting degraded performance across services
- AWS Bedrock expanding open-weight model support to Sydney region
- Vertex AI Agent Builder billing delayed until 11 February 2026
- Weaviate v1.36.0-rc.0 adds server-side batching and TTL capabilities
The Week Ahead
11 February marks the start of Vertex AI Agent Builder billing for sessions, memory bank, and code execution—budget accordingly. The Groq API parameter deprecations don't have a firm deadline yet, but teams should begin migration planning now.
Watch for updates on the xAI outage investigation and any service reliability commitments from the company. Google's Vertex AI migration timeline means organisations have roughly four months to complete their endpoint transitions—not long in enterprise development cycles.
The Anthropic funding announcement will likely trigger competitive responses from other providers. Expect pricing adjustments and capability announcements as the market reacts to the new funding landscape.