AWS Bedrock
cloud_wrapper
502 signals tracked
Anthropic Claude 3.7 Sonnet moving to legacy status — migration to Claude Sonnet 4.5 required by April 2026
The Anthropic Claude 3.7 Sonnet model is now in legacy status on AWS Bedrock and will be discontinued. Users must migrate to Claude Sonnet 4.5 before the April 28, 2026 deadline to avoid service interruption.
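One low-risk migration pattern is a thin indirection over the Bedrock model ID, so the cutover is a single mapping change rather than edits scattered across call sites. A minimal sketch: the Claude 3.7 Sonnet ID below is the published Bedrock identifier, while the Claude Sonnet 4.5 ID is an assumption and should be confirmed in the Bedrock console before use.

```python
# Map legacy Bedrock model IDs to their replacements.
# NOTE: the Claude Sonnet 4.5 ID is an assumption -- verify it in the
# Bedrock console or the Bedrock model IDs documentation.
LEGACY_MODEL_MAP = {
    "anthropic.claude-3-7-sonnet-20250219-v1:0":
        "anthropic.claude-sonnet-4-5-20250929-v1:0",
}

def resolve_model_id(model_id):
    """Return the replacement model ID if the requested one is legacy."""
    return LEGACY_MODEL_MAP.get(model_id, model_id)

print(resolve_model_id("anthropic.claude-3-7-sonnet-20250219-v1:0"))
```

Routing every invocation through a helper like this also gives you one place to log remaining legacy traffic before the deadline.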
28 Apr 2026
Critical · Deprecation

AWS Deadline Cloud Launches AI-Powered Troubleshooting Assistant for Render Jobs
AWS Deadline Cloud introduces an AI-powered troubleshooting assistant for render jobs, helping diagnose and resolve rendering issues automatically.
17 Apr 2026
High · Capability

SageMaker JumpStart optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments for foundation models, enabling faster and more cost-effective model deployment for AI/ML workloads.
17 Apr 2026
Medium · Capability

Amazon SageMaker HyperPod supports flexible instance groups
Amazon SageMaker HyperPod now supports flexible instance groups, enabling customers to specify multiple instance types and multiple subnets within a single instance group. Customers running training and inference workloads on HyperPod often need to span multiple instance types and availability zones for capacity resilience, cost optimization, and subnet utilization, but previously had to create and manage a separate instance group for every instance type and availability zone combination, resulting in operational overhead across cluster configuration, scaling, patching, and monitoring.

With flexible instance groups, you can define an ordered list of instance types using the new InstanceRequirements parameter and provide multiple subnets across availability zones in a single instance group. HyperPod provisions instances using the highest-priority type first and automatically falls back to lower-priority types when capacity is unavailable, eliminating the need for customers to manually retry across individual instance groups.

Training customers benefit from multi-subnet distribution within an availability zone to avoid subnet exhaustion. Inference customers scaling manually get automatic priority-based fallback across instance types without needing to retry each instance group individually, while those using Karpenter autoscaling can reference a single flexible instance group. Karpenter automatically detects supported instance types from the flexible instance group and provisions the optimal type and availability zone based on pod requirements.

You can create flexible instance groups using the CreateCluster and UpdateCluster APIs, the AWS CLI, or the AWS Management Console. Flexible instance groups are available for SageMaker HyperPod clusters using the EKS orchestrator in all AWS Regions where SageMaker HyperPod is supported. To learn more, see Flexible instance groups.
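The priority-based fallback described above can be sketched as a simple loop over the ordered type list. This is an illustrative simulation of the behavior, not HyperPod's implementation; the instance type names and the `available_capacity` stand-in are assumptions for the example.

```python
# Sketch of HyperPod's priority-based fallback for a flexible instance
# group: try the highest-priority instance type first, then fall back
# down the ordered list when capacity is unavailable. `available_capacity`
# is a stand-in for the capacity check HyperPod performs for you.
def pick_instance_type(ordered_types, available_capacity):
    for instance_type in ordered_types:  # highest priority first
        if available_capacity.get(instance_type, 0) > 0:
            return instance_type
    raise RuntimeError("no capacity in any configured instance type")

group = ["ml.p5.48xlarge", "ml.p4d.24xlarge", "ml.g5.48xlarge"]
# The p5 pool is exhausted, so the group falls back to p4d.
print(pick_instance_type(group, {"ml.p5.48xlarge": 0, "ml.p4d.24xlarge": 4}))
```

The point of the feature is that this retry logic no longer lives in your tooling: HyperPod applies it across the single group's ordered types and subnets automatically.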
17 Apr 2026
High · Capability

Amazon Managed Grafana supports Grafana 12.4 workspaces
Amazon Managed Grafana now supports creating new workspaces with Grafana version 12.4. This release includes features that were launched as part of open source Grafana versions 11.0 to 12.4, including Drilldown apps, Scenes-powered dashboards, variables in transformations, visualization enhancements, and new features with the Amazon CloudWatch plugin.

Queryless Drilldown apps enable customers to perform point-and-click exploration of Prometheus metrics, Loki logs, Tempo traces, and Pyroscope profiles. The Scenes-powered rendering engine boosts dashboard performance. Amazon CloudWatch Logs adds support for PPL and SQL queries, cross-account Metrics Insights, and log anomaly detection. The rebuilt table visualization improves performance with CSS cell styling and interactive Actions buttons, while trendline transformations and navigation bookmarks enhance data exploration.

Grafana 12.4 is supported in all AWS Regions where Amazon Managed Grafana is generally available. You can create a new Amazon Managed Grafana workspace from the AWS Console, SDK, or CLI. To explore the complete list of new features, please refer to the user documentation. Follow the instructions here to create workspaces with version 12.4. To learn more about Amazon Managed Grafana features and pricing, visit the product page and pricing page.
17 Apr 2026
Medium · Capability

Amazon EC2 U7i High Memory Instances Available in Singapore
Amazon EC2 High Memory U7i-8TB instances (u7i-8tb.112xlarge) and U7i-12TB instances (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Singapore) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-8tb instances offer 8TiB of DDR5 memory, and U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-8tb instances deliver 448 vCPUs; U7i-12tb instances deliver 896 vCPUs. Both instances support up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
17 Apr 2026
Medium · Capability

Amazon ECR Pull Through Cache Now Supports Referrer Discovery and Sync
Amazon Elastic Container Registry (Amazon ECR) now automatically discovers and syncs OCI referrers, such as image signatures, SBOMs, and attestations, from upstream registries into your Amazon ECR private repositories with its pull through cache feature. Previously, when you listed referrers on a repository with a matching pull through cache rule, Amazon ECR would not return or sync referrers from the upstream repository. This meant that you had to manually list and fetch the upstream referrers. With today's launch, Amazon ECR's pull through cache will now reach upstream during referrers API requests and automatically cache related referrer artifacts in your private repository. This enables end-to-end image signature verification, SBOM discovery, and attestation retrieval workflows to work seamlessly with pull through cache repositories without requiring any client-side workarounds. This feature is available today in all AWS Regions where Amazon ECR pull through cache is supported. To learn more, visit the Amazon ECR documentation .
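The referrers that ECR now discovers and syncs are served through the OCI Distribution Specification's referrers API (`GET /v2/<name>/referrers/<digest>`). A minimal sketch of the request path a client, or ECR's pull through cache acting on your behalf, would hit; the registry hostname, repository, and digest values are illustrative.

```python
# Build the OCI Distribution Spec v1.1 referrers endpoint URL:
#   GET /v2/<name>/referrers/<digest>
# This is what "listing referrers" resolves to on the wire; ECR's pull
# through cache now reaches upstream during these requests and caches
# the referrer artifacts (signatures, SBOMs, attestations) it finds.
def referrers_url(registry, repository, digest):
    return f"https://{registry}/v2/{repository}/referrers/{digest}"

url = referrers_url(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com",  # illustrative account/region
    "my-app",
    "sha256:" + "a" * 64,  # illustrative image digest
)
print(url)
```

Clients such as cosign and oras issue this call for you; the change here is that pointing them at a pull through cache repository now returns the upstream referrers without client-side workarounds.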
17 Apr 2026
Medium · Capability

SageMaker JumpStart adds optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments, enabling customers to deploy foundation models with pre-configured settings tailored to specific use cases and performance constraints. SageMaker JumpStart optimized deployments simplify model deployment by offering task-aware configurations that optimize for cost, throughput, or latency based on your workload requirements, whether content generation, summarization, or Q&A. This launch includes support for 30+ popular models from Meta, Microsoft, Mistral AI, Qwen, Google, and TII, with visibility into key performance metrics like P50 latency, time-to-first-token (TTFT), and throughput before deployment.

With SageMaker JumpStart optimized deployments, customers can select from use case-specific configurations (such as generative writing or chat-style interactions) and choose optimization targets including cost-optimized, throughput-optimized, latency-optimized, or balanced performance. Models deploy to SageMaker AI Managed Inference endpoints or SageMaker HyperPod clusters with pre-set configurations that eliminate guesswork while maintaining full visibility into deployment details.

Available models include Meta Llama 3.1 and 3.2 variants, Microsoft Phi-3, Mistral AI models including the new Mistral-Small-24B-Instruct-2501, Qwen 2 and 3 series including multimodal Qwen2-VL, Google Gemma, and TII Falcon3. All deployments leverage SageMaker's VPC deployment capabilities, ensuring data control and production-ready infrastructure with enterprise-grade security.

The feature is available in all AWS Regions where SageMaker JumpStart is currently supported. To get started with optimized deployments, navigate to Models in SageMaker Studio, select your desired foundation model in the JumpStart Models tab, choose "Deploy," and select your use case and performance optimization target. For details, visit the SageMaker JumpStart documentation. AWS is actively expanding support to include additional models.
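The "choose an optimization target" flow above amounts to picking among pre-benchmarked configurations by a single metric. A minimal sketch of that selection logic, assuming hypothetical configuration records; the metric values below are illustrative placeholders, not JumpStart benchmarks.

```python
# Illustrative pre-benchmarked deployment configurations. Field names and
# numbers are assumptions for the sketch; JumpStart surfaces real metrics
# (P50 latency, TTFT, throughput) in the console before deployment.
CONFIGS = [
    {"name": "cost-optimized",       "hourly_cost": 1.0, "p50_latency_ms": 900, "tokens_per_s": 400},
    {"name": "latency-optimized",    "hourly_cost": 4.0, "p50_latency_ms": 250, "tokens_per_s": 900},
    {"name": "throughput-optimized", "hourly_cost": 3.0, "p50_latency_ms": 600, "tokens_per_s": 1500},
]

def pick_config(target):
    """Select the configuration that best matches the optimization target."""
    key = {
        "cost": lambda c: c["hourly_cost"],
        "latency": lambda c: c["p50_latency_ms"],
        "throughput": lambda c: -c["tokens_per_s"],  # maximize throughput
    }[target]
    return min(CONFIGS, key=key)

print(pick_config("latency")["name"])  # -> latency-optimized
```

The value of the managed feature is that the benchmark numbers feeding this choice are measured for you per model and use case, rather than estimated.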
17 Apr 2026
Medium · Capability

AWS Deadline Cloud adds AI-powered troubleshooting assistant for render jobs
Today, AWS Deadline Cloud announces an AI-powered troubleshooting assistant that helps you quickly diagnose and resolve render job failures. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. Render job failures from missing assets, software errors, configuration mismatches, and resource constraints can stall production pipelines and waste compute resources. Previously, diagnosing these issues required specialized technical staff to manually parse logs and identify root causes — a process that is time-consuming, difficult to scale, and often unavailable to smaller studios. The new Deadline Cloud assistant investigates failed jobs you identify, analyzes logs and metrics, detects common issues, and provides troubleshooting recommendations based on industry best practices and a pre-trained knowledge base covering Deadline Cloud, common render farm issues, and popular digital content creation applications including Autodesk Maya, 3ds Max, VRED, Blender, SideFX Houdini, Maxon Cinema 4D, Foundry Nuke, and Adobe After Effects. The assistant runs within your AWS account using Amazon Bedrock, keeping all data and analysis within your control. The Deadline Cloud assistant is available today in all AWS Regions where AWS Deadline Cloud is supported. Watch a demo on YouTube to see it in action, or visit the AWS Deadline Cloud documentation to learn more.
17 Apr 2026
High · Capability

AWS launches EC2 C8in and C8ib instances with Intel Xeon Scalable processors
AWS is announcing the general availability of Amazon EC2 C8in and C8ib instances powered by custom, sixth generation Intel Xeon Scalable processors, available only on AWS. These instances feature the latest sixth generation AWS Nitro cards. C8in and C8ib instances deliver up to 43% higher performance compared to previous generation C6in instances. C8in and C8ib instances deliver larger sizes and scale up to 384 vCPUs. C8in instances deliver 600 Gbps network bandwidth—the highest among enhanced networking EC2 instances—making them ideal for network-intensive workloads like distributed compute and large-scale data analytics. C8ib instances deliver up to 300 Gbps EBS bandwidth, the highest among non-accelerated compute instances, making them ideal for high-performance commercial databases and file systems. C8in instances are available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Spain) Regions. C8ib instances are available in US East (N. Virginia) and US West (Oregon). Both C8in and C8ib instances are available via Savings Plans, On-Demand, and Spot instances. For more information, visit the Amazon EC2 C8i instance page.
16 Apr 2026
High · Capability

Amazon FSx for Lustre Persistent-2 available in Asia Pacific, Europe, and South America
You can now create Amazon FSx for Lustre Persistent-2 file systems in four additional AWS Regions: Asia Pacific (Hyderabad, Jakarta), Europe (Zurich), and South America (São Paulo). Amazon FSx for Lustre Persistent-2 file systems are built on AWS Graviton processors and provide higher throughput per terabyte (up to 1 GB/s per terabyte) and lower cost of throughput compared to previous generation FSx for Lustre file systems. Using FSx for Lustre Persistent-2 file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulations workloads while reducing your cost of storage. To get started with Amazon FSx for Lustre Persistent-2 in these new regions, create a file system through the AWS Management Console. To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.
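The "up to 1 GB/s per terabyte" figure makes capacity sizing a straight multiplication. A back-of-envelope sketch at that announced ceiling; actual throughput tiers, minimum sizes, and quotas should be confirmed in the FSx for Lustre documentation.

```python
# Back-of-envelope: Persistent-2 throughput scales with storage capacity,
# at up to 1 GB/s per terabyte per the announcement. Real deployments
# choose from discrete throughput tiers, so treat this as an upper bound.
def max_throughput_gb_per_s(storage_tb, per_tb_gb_s=1.0):
    return storage_tb * per_tb_gb_s

print(max_throughput_gb_per_s(48))  # a 48 TB file system -> up to 48.0 GB/s
```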
16 Apr 2026
Medium · Capability

Amazon CloudWatch: Cross-Region Telemetry Auditing & Enablement Rules
Amazon CloudWatch now supports auditing telemetry configuration and enabling telemetry from AWS services such as Amazon EC2, Amazon VPC, and AWS CloudTrail across multiple AWS Regions from a single region. Customers can enable the telemetry auditing feature for their account or organization across all supported regions at once and create enablement rules that automatically apply to selected regions or all available regions. With today's launch, customers can scope enablement rules to specific regions or all supported regions. For example, a central security team can create a single organization-wide enablement rule for VPC Flow Logs that applies across all regions, ensuring consistent telemetry collection for every VPC across every account. Rules configured for all regions automatically expand to include new regions as they become available. CloudWatch's cross-region telemetry configuration and enablement rules are available in all AWS commercial Regions. Standard CloudWatch pricing applies for telemetry ingestion. To learn more, visit the Amazon CloudWatch documentation.
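The key semantic above is the difference between an "all regions" scope and an explicit region list: only the former automatically covers regions added later. A small sketch of that resolution logic; the rule shape is illustrative, not the actual CloudWatch API schema.

```python
# Illustrative resolution of an enablement rule's effective regions.
# "ALL_REGIONS" scope tracks the current set of available regions, so new
# regions are covered automatically; an explicit region list stays fixed.
def applicable_regions(rule, available_regions):
    if rule["scope"] == "ALL_REGIONS":
        return list(available_regions)
    return [r for r in rule["regions"] if r in available_regions]

rule = {"telemetry": "vpc-flow-logs", "scope": "ALL_REGIONS"}
regions = ["us-east-1", "eu-west-1"]
print(applicable_regions(rule, regions + ["ap-southeast-5"]))  # new region included
```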
16 Apr 2026
Medium · Capability

Amazon CloudWatch RUM now available in AWS European Sovereign Cloud
Amazon CloudWatch RUM (Real User Monitoring) is a feature of Amazon CloudWatch that enables developers and operations teams to collect, view, and analyze client-side performance data from real end-user sessions in web and mobile applications. With its expansion to the AWS European Sovereign Cloud, customers operating under strict European data residency and sovereignty requirements can now monitor their web application performance without data leaving the sovereign boundary. This capability is designed for enterprises, public sector organizations, and regulated industries in Europe that require full control over where their data is stored and processed. CloudWatch RUM helps teams proactively identify and resolve performance bottlenecks across both web and mobile applications by surfacing real-time metrics such as page load times, JavaScript errors, HTTP failures, and mobile-specific signals like crash rates and network latency — enabling faster root cause analysis and improved end-user experience. For example, a European public sector organization can use CloudWatch RUM within the AWS European Sovereign Cloud to monitor citizen-facing web portals and mobile apps while maintaining full data sovereignty compliance. CloudWatch RUM in the AWS European Sovereign Cloud is available today in the EU Sovereign (eusc-de-east-1) region — to get started, visit the Amazon CloudWatch RUM documentation .
16 Apr 2026
Medium · Capability

Amazon WorkSpaces Personal and Core now available in US East (Ohio) and Asia Pacific (Malaysia)
Amazon WorkSpaces Personal and Amazon WorkSpaces Core are now available in US East (Ohio) and Asia Pacific (Malaysia) AWS Regions. You can now provision WorkSpaces closer to your users, helping to provide in-country data residency and a more responsive experience. In US East (Ohio), organizations can also now implement disaster recovery solutions, meet local data residency compliance mandates, and support regional workforces with consistent, low-latency access to their virtual desktop environments across varying network conditions. Amazon WorkSpaces Personal provides users with instant access to their desktops from anywhere. It allows users to stream desktops from AWS to their devices, and WorkSpaces Personal manages the AWS resources required to host and run your desktops, scales automatically, and provides access to your users on demand. Amazon WorkSpaces Core provides cloud-based, fully managed virtual desktop infrastructure (VDI) accessible to third-party VDI management solutions via API. To get started with Amazon WorkSpaces Personal or Amazon WorkSpaces Core, sign into the WorkSpaces management console and select the AWS Region of your choice. To learn more about Amazon WorkSpaces offerings, visit the product page and technical documentation .
16 Apr 2026
Medium · Capability