All the major computing companies — Microsoft, Intel, AMD, Apple and Qualcomm — are talking big about AI these days, but Nvidia is already laughing all the way to the bank. The keynote focused predominantly on the company's large-scale hardware for cloud-based AI and the (admittedly impressive) nitty-gritty of Nvidia's work on making AI training feasible and scalable: currently the Blackwell platform, with Blackwell Ultra next year followed by Rubin, plus how costs are dropping as a result.
But even setting aside its data center business, the company first shipped consumer GPUs with AI-accelerating Tensor cores almost six years ago. Now it joins the branding herd with “RTX AI PCs,” though at the front of it: any system, desktop or laptop, with an RTX 2000-series GPU (or its workstation equivalent) or later qualifies, since those GPUs have Tensor cores. Windows Copilot Plus PC branding? Yes, those systems can carry that too, as long as they also have a shiny new CPU with a qualifying NPU.
Nvidia also announced it’s collaborating with Microsoft to bring developers tools later this year that let them use RTX GPUs to accelerate their applications via the Copilot Plus programming interface. That’s key, because generative AI tasks for image and video need far more power to run on-device than the integrated GPUs in the most recent CPUs can supply (and forget the NPU, which is designed to draw as little battery power as possible, not to handle anything beyond text-output generative AI).
- Nvidia leads the tech industry in AI
- New products launched to accelerate AI applications
- Collaboration with Microsoft on tools to accelerate applications via GPUs