What if you could train massive machine learning models in half the time without compromising performance? For researchers and developers tackling the ever-growing complexity of AI, this isn’t just a ...
ConnectX-9 (1.6 Tb/s bandwidth), BlueField-4 DPU (offloads storage/security), NVLink 6 switch (scales 72 GPUs as one), ...
Microsoft EVP of Cloud + AI Scott Guthrie says the goal is to increase AI training capability 10x every 18-24 months, which likely includes maintaining the 2-4x pace of chip improvement. Satya Nadella gave ...
On October 6, 2025, AMD and OpenAI announced one of the largest compute partnerships in modern Artificial Intelligence (AI). Under this deal, OpenAI plans to use up to six gigawatts of AMD Instinct ...
In Atlanta, Microsoft has flipped the switch on a new class of datacenter – one that doesn’t stand alone but joins a dedicated network of sites functioning as an AI superfactory to accelerate AI ...
AMD and OpenAI have announced a massive multi-year deal for AI GPUs. Exactly how many GPUs AMD will sell to OpenAI isn't clear, but the announcement says that if the arrangement comes fully to ...
A new technical paper titled “Power Stabilization for AI Training Datacenters” was published by researchers at Microsoft, OpenAI, and NVIDIA. “Large Artificial Intelligence (AI) training workloads ...
With so much focus on inference processing, it is easy to overlook the AI training market, which continues to drive gigawatts of AI computing capacity. The latest benchmarks show that the training of ...