<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Throughput on StorageNews</title><link>https://storagenews.top/tags/throughput/</link><description>Recent content in Throughput on StorageNews</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 17 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://storagenews.top/tags/throughput/index.xml" rel="self" type="application/rss+xml"/><item><title>NetApp storage cuts AI training bottlenecks fast</title><link>https://storagenews.top/posts/netapp-storage-cuts-ai-training-bottlenecks-fast/</link><pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate><guid>https://storagenews.top/posts/netapp-storage-cuts-ai-training-bottlenecks-fast/</guid><description>
&lt;p class="std-text">The new NetApp EF-Series delivers 100GBps read throughput, a massive leap designed to eliminate GPU idle time. This release proves that &lt;strong>extreme block storage&lt;/strong> is now the primary bottleneck for scaling &lt;strong>AI model training&lt;/strong> and &lt;strong>HPC simulations&lt;/strong>.&lt;/p></description></item><item><title>S3 Vectors cut vector DB costs by removing extra layers</title><link>https://storagenews.top/posts/s3-vectors-cut-vector-db-costs-by-removing-extra-layers/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://storagenews.top/posts/s3-vectors-cut-vector-db-costs-by-removing-extra-layers/</guid><description>&lt;meta charset="utf-8">
&lt;p class="std-text">Amazon S3&amp;#039;s 20th anniversary on Pi Day 2026 arrives as &lt;strong>S3 Vectors&lt;/strong> fundamentally changes object storage for AI workloads. The central thesis is that AWS Storage has evolved from simple durability into the active data foundation required for generative AI and agentic systems. Readers will learn how &lt;strong>S3 Tables&lt;/strong> now support Intelligent-Tiering and Replication to slash analytics costs, alongside concrete strategies for executing complex NAS migrations without downtime.&lt;/p></description></item><item><title>Whitelabel storage cuts GPU farm build time by 18 months</title><link>https://storagenews.top/posts/whitelabel-storage-cuts-gpu-farm-build-time-by-18-months/</link><pubDate>Mon, 23 Feb 2026 00:00:00 +0000</pubDate><guid>https://storagenews.top/posts/whitelabel-storage-cuts-gpu-farm-build-time-by-18-months/</guid><description>&lt;meta charset="utf-8">
&lt;p class="std-text">With the neocloud market hitting $35.22 billion in 2026, Backblaze now offers a white-label storage backend to stop GPU farms from stalling. &lt;strong>B2 Neo&lt;/strong> eliminates the need for emerging providers like CoreWeave and Lambda to waste years building their own object stores. Readers will discover how &lt;strong>white-label object storage&lt;/strong> allows neoclouds to bypass an 18-to-24-month engineering distraction, according to Backblaze CEO Gleb Budman. Instead of diverting resources from their core GPU roadmap, providers can integrate a fully &lt;strong>S3-compatible&lt;/strong> layer in weeks. We examine the operational risks of moving massive datasets without integrated storage, where latency directly corrodes expensive GPU utilization rates.&lt;/p></description></item></channel></rss>