A Chinese AI company's more frugal approach to training large language models could point toward a less energy-intensive—and more climate-friendly—future for AI, according to some energy analysts. "It ...
A new technical paper titled “Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention” was published by DeepSeek, Peking University, and the University of Washington.
Chinese AI startup MiniMax, perhaps best known in the West for its hit realistic AI video model Hailuo, has released its latest large language model, MiniMax-M1 — and in great news for enterprises and ...
In my previous article, I discussed the role of data management innovation in improving data center efficiency. I concluded with words of caution and optimism regarding the growing use of larger, ...